Setup guide

Last Updated: Mar 15, 2018

This document details how to set up the Loghub Shipper Service so you can easily convert log data of a log group in Log Service to structured data, and then store the structured data in Table Store.

For more information, see Concept and configuration information.

Environment preparations

Activate services

Log Service

  1. Activate Log Service.

  2. Apply for a project and log store, or reuse existing ones. The transfer service does not modify your log data in Log Service, so you can safely use an existing project and log store.

In the following examples, assume that the project is lhshipper-test, the log store is test-store, and the region is China East 1 (hangzhou).
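
If you prefer to script this step instead of using the console, a minimal sketch with the aliyun-log-python-sdk (the SDK choice and the placeholder AccessKey pair are assumptions; any tool that creates a project and log store works) might look as follows.

    # pip install aliyun-log-python-sdk
    from aliyun.log import LogClient

    # Endpoint of the China East 1 (hangzhou) region used in this example.
    client = LogClient('cn-hangzhou.log.aliyuncs.com',
                       '<your-access-key-id>', '<your-access-key-secret>')

    # Create the example project and log store. Skip this if you reuse an
    # existing project and log store.
    client.create_project('lhshipper-test', 'Loghub Shipper test project')
    client.create_logstore('lhshipper-test', 'test-store',
                           ttl=30, shard_count=2)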

Table Store

  1. Activate Table Store.

  2. Create a data table.

    A data table stores log data from Log Service. In the following example, assume that this table has three Primary Key columns as follows:

    • rename: The type is STRING.
    • trans-pkey: The type is INTEGER.
    • keep: The type is STRING.
  3. Create a transfer service status table.

    A transfer service status table stores information about the synchronization progress of log data, per Log Service project and shard. Transfer services for multiple projects and log stores can share the same status table. To help reduce usage costs, we recommend that you set Time To Live (TTL) to one or two days. The Primary Key of the status table consists of four columns configured as follows (a sketch that creates both tables appears after this list):

    • project_logstore: The type is STRING.
    • shard: The type is INTEGER.
    • target_table: The type is STRING.
    • timestamp: The type is INTEGER.
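
Both tables can be created in the console or through the Table Store SDK. The following minimal Python sketch assumes the endpoint and instance name from this document's examples and uses the table names loghub_target and loghub_status that appear later in this guide; the status table TTL is set to two days, as recommended above.

    # pip install tablestore
    from tablestore import (OTSClient, TableMeta, TableOptions,
                            ReservedThroughput, CapacityUnit)

    client = OTSClient('https://lhshipper-test.cn-hangzhou.ots.aliyuncs.com',
                       '<your-access-key-id>', '<your-access-key-secret>',
                       'lhshipper-test')

    # Data table: three Primary Key columns; logs are kept indefinitely (TTL -1).
    data_meta = TableMeta('loghub_target', [('rename', 'STRING'),
                                            ('trans-pkey', 'INTEGER'),
                                            ('keep', 'STRING')])
    client.create_table(data_meta,
                        TableOptions(time_to_live=-1, max_version=1),
                        ReservedThroughput(CapacityUnit(0, 0)))

    # Status table: four Primary Key columns; a two-day TTL reduces cost.
    status_meta = TableMeta('loghub_status', [('project_logstore', 'STRING'),
                                              ('shard', 'INTEGER'),
                                              ('target_table', 'STRING'),
                                              ('timestamp', 'INTEGER')])
    client.create_table(status_meta,
                        TableOptions(time_to_live=2 * 24 * 3600, max_version=1),
                        ReservedThroughput(CapacityUnit(0, 0)))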

Resource Access Management (RAM)

For RAM, you must obtain your AccessKeyId and AccessKeySecret from Resource Access Management.

To guarantee security, we recommend that you use a RAM user account to set up the transfer service in production. Grant this RAM user the AliyunLogReadOnlyAccess permission to read data from Log Service and the AliyunTableStoreWriteOnlyAccess permission to write data to Table Store.

Elastic Compute Service (ECS) and Container Service (CS)

For ECS and CS, you must activate both Elastic Compute Service and Container Service.

The following example uses a Pay-As-You-Go ECS instance. Note that your account may need a small balance before these services can be activated.

Set up service

  1. Log on to the Container Service console.

  2. On the left side, select Clusters.

  3. In the upper right corner, click Create cluster.

  4. Set the following cluster information according to your requirements.

    • The region must be the same as the region where your Log Service and Table Store are located. This lets the transfer service use private IP addresses, which avoids public downstream traffic fees and public network latency.

    • You do not need to select or create a Server Load Balancer instance, because the transfer service does not use HTTP.

    • For this example, use the Add node option to create nodes automatically.

      Note: If you have already purchased an ECS instance, you can add it to the specified cluster. For more information, see Add an existing ECS instance. The transfer service supports dynamic, horizontal scaling, so you can select multiple ECS instances.

    • Generally, 1 core and 1 GB of memory are the recommended specifications for the transfer service.

  5. On the right side, click Create cluster.

Create an application

  1. Log on to the Container Service console.

  2. On the left side, select Applications.

  3. On the right side, click Create Application.

  4. Set the basic information of the application (this example uses the name loghub-shipper) according to your requirements.

    • For the Cluster option, select the cluster on which the application is created.

    • For easy version upgrades, we recommend that you select Pull Docker image.

  5. Click Create with image. The application configuration page is displayed.

    Note: This example creates the application from a single image to explain the fundamental procedure. In actual scenarios, an application may consist of multiple services; in that case, you can select Create with application template to manage the application more easily.

  6. Configure the application.

    1. Click Select image next to the application.

    2. To locate the example image, enter loghub-shipper in the search box and click Global search. Select the image in the search results and click OK. You then return to the application configuration page.

    3. Configure the following additional parameters. A sketch that sanity-checks these values appears after this procedure.

      • access_key_id: The AccessKeyId obtained in the RAM section.

      • access_key_secret: The AccessKeySecret obtained in the RAM section.

      • loghub: Specifies the Log Service connection information. The value is a JSON string consisting of endpoint, logstore, and consumer_group. The consumer_group value can be any string; multiple container instances within a single transfer service share one consumer_group, but multiple transfer services cannot use the same consumer_group. An example of this variable is shown as follows.

        1. {"endpoint": "https://lhshipper-test.cn-hangzhou.log.aliyuncs.com",
        2. "logstore": "test-store",
        3. "consumer_group": "defaultcg"}
      • tablestore: Specifies the Table Store connection information. The value is a JSON string consisting of endpoint (the domain name used to access Table Store), instance (the instance name), target_table (the data table name), and status_table (the status table name). An example of this variable is shown as follows.

        1. {"endpoint": "https://lhshipper-test.cn-hangzhou.ots.aliyuncs.com",
        2. "instance": "lhshipper-test",
        3. "target_table": "loghub_target",
        4. "status_table": "loghub_status"}
      • exclusive_columns: Specifies the log fields that are not imported to Table Store. The value is a JSON array. An example of this variable is shown as follows.

        1. ["__source__","time"]
      • transform: Specifies format conversions applied to log data (all values in log data are strings), such as renaming an attribute or converting its type. The value is a JSON string. An example of this variable is shown as follows.

        1. {"rename": "original",
        2. "trans-pkey": "(->int 10 original)"}

        In this example, the value of the original field in the log data is written to the rename column of the data table, and is also converted to a base-10 integer and written to the trans-pkey column. For example, a log entry with original="123" produces a row with rename="123" and trans-pkey=123. For more definitions of type conversion, see Concept and configuration information.

        Note: You do not need to specify the Primary Key information of the data table during configuration; the transfer service automatically reads the schema of the data table. However, the log data or the transform configuration must provide all Primary Key fields; otherwise, the log entry is discarded.

  7. In the lower right corner, click Create.
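
Before you click Create, you can sanity-check the variable values offline. The following Python sketch is illustrative only (the sample log fields are an assumption): it assembles the five variables, verifies that the JSON-valued ones parse, and applies the rule from the Note above, namely that every Primary Key column of the data table must come either from a log field of the same name or from transform.

    import json

    # The five container environment variables from step 6, as strings.
    env = {
        'access_key_id': '<your-access-key-id>',
        'access_key_secret': '<your-access-key-secret>',
        'loghub': json.dumps({
            'endpoint': 'https://lhshipper-test.cn-hangzhou.log.aliyuncs.com',
            'logstore': 'test-store',
            'consumer_group': 'defaultcg'}),
        'tablestore': json.dumps({
            'endpoint': 'https://lhshipper-test.cn-hangzhou.ots.aliyuncs.com',
            'instance': 'lhshipper-test',
            'target_table': 'loghub_target',
            'status_table': 'loghub_status'}),
        'exclusive_columns': json.dumps(['__source__', 'time']),
        'transform': json.dumps({'rename': 'original',
                                 'trans-pkey': '(->int 10 original)'}),
    }

    # The JSON-valued variables must parse.
    for key in ('loghub', 'tablestore', 'exclusive_columns', 'transform'):
        json.loads(env[key])  # raises ValueError if malformed

    # Every Primary Key column of the data table must be produced either by a
    # log field of the same name or by a transform rule; otherwise the
    # transfer service discards the log entry.
    primary_keys = {'rename', 'trans-pkey', 'keep'}
    sample_log_fields = {'original', 'keep', '__source__', 'time'}  # assumed
    produced = sample_log_fields | set(json.loads(env['transform']))
    missing = primary_keys - produced
    assert not missing, 'log entries would be discarded; missing: %s' % missing

    for name, value in env.items():
        print('%s=%s' % (name, value))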
