Table Store Trigger

Last Updated: Jul 01, 2019

Alibaba Cloud Table Store is a distributed NoSQL data storage service built on Alibaba Cloud's Apsara distributed system. It uses data sharding and load balancing techniques to scale seamlessly with data size and access concurrency, providing storage of, and real-time access to, massive amounts of structured data. Table Store is now integrated with Function Compute as an event source. When data in Table Store changes, a Table Store trigger encodes the changed data as event parameters and triggers the execution of the associated function in Function Compute.

[Figure fc-ots: data flows from Table A in Table Store through a Function Compute function into Table B]

The preceding figure shows a typical use case. The raw data is stored in Table A. When data in Table A changes, Function Compute cleanses the changed data and stores the result in Table B. You can then read the cleansed data directly from Table B for display. In this way, you build an elastic, scalable serverless web application.

If you are using Table Store triggers for the first time, read Region restrictions and Precautions first.

Region restrictions

Regions that currently support Table Store triggers

China North 2 (Beijing), China East 1 (Hangzhou), China East 2 (Shanghai), China South 1 (Shenzhen), Hong Kong, Asia-Pacific Northeast 1 (Tokyo), Asia-Pacific Southeast 1 (Singapore), Asia-Pacific Southeast 2 (Sydney), and Central Europe 1 (Frankfurt)

Precautions

Avoid recursive calls

Avoid the following logic when you write a function: Table Store Table A triggers Function B, and Function B in turn updates the data in Table A. Otherwise, the function is called in an infinite recursive loop.

Grant Function Compute access to your VPC resources

Use the VPC endpoint of Table Store, instead of its intranet endpoint, in your function code.

For more information, see VPC feature.

Event definition for Table Store triggers

Data format

Table Store triggers use the CBOR format to encode incremental data into an event that triggers the execution of the associated function in Function Compute. The incremental data has the following format:

```
{
    "Version": "string",
    "Records": [
        {
            "Type": "string",
            "Info": {
                "Timestamp": int64
            },
            "PrimaryKey": [
                {
                    "ColumnName": "string",
                    "Value": formatted_value
                }
            ],
            "Columns": [
                {
                    "Type": "string",
                    "ColumnName": "string",
                    "Value": formatted_value,
                    "Timestamp": int64
                }
            ]
        }
    ]
}
```
  • Parameter definition
    • Version
      • Description: The payload version number, which is currently Sync-v1.
      • Type: string
    • Records
      • Description: The array of incremental data rows in a data table.
      • Contains:
        • Type
          • Description: The type of a data row, which can be PutRow, UpdateRow, or DeleteRow.
          • Type: string
        • Info
          • Description: The basic information about a data row.
          • Contains:
            • Timestamp
              • Description: The last modification time of the row, in UTC format.
              • Type: int64
        • PrimaryKey
          • Description: The array of primary key columns.
          • Contains:
            • ColumnName
              • Description: The name of a primary key column.
              • Type: string
            • Value
              • Description: The content of a primary key column.
              • Type: formatted_value, which can be integer, string, or blob
        • Columns
          • Description: The array of attribute columns.
          • Contains:
            • Type
              • Description: The type of an attribute column, which can be Put, DeleteOneVersion, or DeleteAllVersions.
              • Type: string
            • ColumnName
              • Description: The name of an attribute column.
              • Type: string
            • Value
              • Description: The content of an attribute column.
              • Type: formatted_value, which can be integer, boolean, double, string, or blob
            • Timestamp
              • Description: The last modification time of the attribute column, in UTC format.
              • Type: int64
  • Data example

```json
{
    "Version": "Sync-v1",
    "Records": [
        {
            "Type": "PutRow",
            "Info": {
                "Timestamp": 1506416585740836
            },
            "PrimaryKey": [
                {
                    "ColumnName": "pk_0",
                    "Value": 1506416585881590900
                },
                {
                    "ColumnName": "pk_1",
                    "Value": "2017-09-26 17:03:05.8815909 +0800 CST"
                },
                {
                    "ColumnName": "pk_2",
                    "Value": 1506416585741000
                }
            ],
            "Columns": [
                {
                    "Type": "Put",
                    "ColumnName": "attr_0",
                    "Value": "hello_table_store",
                    "Timestamp": 1506416585741
                },
                {
                    "Type": "Put",
                    "ColumnName": "attr_1",
                    "Value": 1506416585881590900,
                    "Timestamp": 1506416585741
                }
            ]
        }
    ]
}
```
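As a concrete illustration of the data-cleansing flow described earlier in this topic, the following hedged sketch (the helper name and column names are illustrative, taken from the data example above) turns one record from a trigger event into a primary key and attribute columns ready to be written to a second table:

```python
def cleanse_record(record):
    """Turn one trigger record into (primary_key, columns) for a target table.

    Keeps only Put changes and drops delete markers. This is an
    illustrative sketch, not an official API.
    """
    primary_key = [(pk["ColumnName"], pk["Value"])
                   for pk in record["PrimaryKey"]]
    columns = [(c["ColumnName"], c["Value"])
               for c in record.get("Columns", [])
               if c.get("Type") == "Put"]
    return primary_key, columns
```

For the example record above, this would yield the primary key columns pk_0, pk_1, and pk_2 plus the two Put attribute columns attr_0 and attr_1.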

Table Store trigger example

Trigger example: tablestore_trigger.yml

```yaml
triggerConfig:
```

Trigger parameter description: None. A Table Store trigger requires no additional configuration parameters.

    Examples of configuring a Table Store trigger

    The following examples describe how to use the Function Compute console, command-line tool fcli, or SDK to configure a Table Store trigger for a function.

    The following function examples are compiled with the Python runtime. If you want to use other runtime environments, see How to Configure Table Store Triggers in Node.js, PHP, and Java Runtime Environments.

    Example 1: Create a Table Store trigger in the console

    This example demonstrates how to configure a Table Store trigger in the console. You can either configure a trigger when you are creating a function or after you have created the function. For more information about triggers and how to create a trigger, see Introduction and Create a trigger.

    Log on to the Function Compute console and select the required region and service. If no services are available, create a service. For more information, see Create a service.

    Configure a trigger when creating a function

1. Click Create Function. In the Function Template step, select the Empty Function template and click Next.

2. In the Configure Triggers step, select TableStore Trigger for Trigger Type, set other parameters as required, and click Next.

3. In the Configure Function Settings step, configure the function information. Select In-line Edit, paste the following Python runtime sample code, and click Next.

```python
import logging
import json

import cbor  # CBOR codec available in the Function Compute Python runtime


def get_attribute_value(record, column):
    """Return the value of the named attribute column, or None if absent."""
    attrs = record[u'Columns']
    for x in attrs:
        if x[u'ColumnName'] == column:
            return x['Value']


def get_pk_value(record, column):
    """Return the value of the named primary key column, or None if absent."""
    attrs = record[u'PrimaryKey']
    for x in attrs:
        if x['ColumnName'] == column:
            return x['Value']


def handler(event, context):
    logger = logging.getLogger()
    logger.info("Begin to handle event")
    # Use cbor.loads(event) for real trigger events; json.loads(event) is
    # convenient for online debugging with a custom JSON event.
    # records = cbor.loads(event)
    records = json.loads(event)
    for record in records['Records']:
        logger.info("Handle record: %s", record)
        pk_0 = get_pk_value(record, "pk_0")
        attr_0 = get_attribute_value(record, "attr_0")
    return 'OK'
```
4. In the Configure Function Permissions step, configure permissions as required (this step is optional), and click Next. In the Verify Configurations step, verify that all settings are correct and click Create.

5. Debug the Table Store trigger online. Because the events that Table Store uses to call functions are encoded in CBOR, a JSON-like binary format, you can perform online debugging as follows:

  • Import both cbor and json in the code.
  • Click the trigger event, select Custom, paste the JSON from the preceding data example into the edit box, modify it as needed, and save it.
  • On the Code page, use "records = json.loads(event)" to process the custom trigger event, and click Run to test the code.
  • After the test with "records = json.loads(event)" succeeds, change the code to "records = cbor.loads(event)" and save it. When data is subsequently written to Table Store, the related function logic is triggered.
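Instead of editing the loads call back and forth between debugging and production, one possible alternative (a sketch, not part of the official sample) is to try CBOR first and fall back to JSON:

```python
import json

try:
    import cbor  # present in the Function Compute runtime; optional elsewhere
except ImportError:
    cbor = None


def parse_event(event):
    """Decode a trigger event: CBOR for real events, JSON for console tests."""
    if cbor is not None:
        try:
            return cbor.loads(event)
        except Exception:
            pass  # not valid CBOR; fall through to JSON
    return json.loads(event)
```

With this helper, the same handler code works for both a custom JSON debugging event and a real CBOR-encoded trigger event.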

    Configure a trigger after you have created the function

1. Select the function you have created, click the Triggers tab, and click Create Trigger.
    2. On the Create Trigger page that appears, select TableStore Trigger for Trigger Type, set other parameters as required, and click OK.

    Note: If you do not require finer-grained authorization, use Quick Authorize.

    Example 2: Create a Table Store trigger by using fcli

First, create a .yaml file that contains triggerConfig. The file content is as follows:

```yaml
triggerConfig:
```

    Create a trigger in the directory of the corresponding function.

```shell
mkt triggerName serviceName/functionName -t tablestore -c TriggerConfig.yaml -r acs:ram::$accountId:role/aliyuntablestorestreamnotificationrole -s acs:ots:$region:$accountId:instance/$instanceName/table/$tableName
```
    • -r: specifies the role used to trigger the function. The value is a string.
    • -s: specifies the Alibaba Cloud Resource Name (ARN) of the event source, in the format acs:ots:$region:$accountId:instance/$instanceName/table/$tableName. The value is a string.
    • -c: specifies the configuration file for the trigger. The value is a string.
    • -t: specifies the trigger type. The value is a string.

    For more information about fcli, see fcli.

    Example 3: Create a Table Store trigger by using SDKs

    The following example uses fc-python-sdk to create a Table Store trigger. Function Compute also provides fc-nodejs-sdk, fc-java-sdk, and fc-php-sdk.

    Code for creating a trigger

```python
import fc2  # Function Compute Python SDK

client = fc2.Client(
    endpoint='<Your Endpoint>',
    accessKeyID='<Your AccessKeyID>',
    accessKeySecret='<Your AccessKeySecret>')

service_name = 'serviceName'
function_name = 'functionName'
trigger_name = 'triggerName'
trigger_type = 'tablestore'
source_arn = 'acs:ots:region:<Your Account ID>:instance/<Your instanceName>/table/<Your tableName>'
invocation_role = 'acs:ram::<Your Account ID>:role/<Your Invocation Role>'
# A Table Store trigger takes no extra configuration parameters.
trigger_config = {}

client.create_trigger(service_name, function_name, trigger_name,
                      trigger_type, trigger_config, source_arn,
                      invocation_role)
```
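The source_arn string follows a fixed pattern, so a small helper (hypothetical, not part of the SDK) can assemble it and reduce typos:

```python
def build_ots_source_arn(region, account_id, instance_name, table_name):
    """Assemble the Table Store event source ARN passed to create_trigger.

    All four parameters are placeholders for your own values.
    """
    return "acs:ots:%s:%s:instance/%s/table/%s" % (
        region, account_id, instance_name, table_name)
```

For example, build_ots_source_arn("cn-hangzhou", "12345", "myInstance", "myTable") yields "acs:ots:cn-hangzhou:12345:instance/myInstance/table/myTable".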

    Function code

```python
import logging
import json

import cbor  # CBOR codec available in the Function Compute Python runtime


def get_attribute_value(record, column):
    """Return the value of the named attribute column, or None if absent."""
    attrs = record[u'Columns']
    for x in attrs:
        if x[u'ColumnName'] == column:
            return x['Value']


def get_pk_value(record, column):
    """Return the value of the named primary key column, or None if absent."""
    attrs = record[u'PrimaryKey']
    for x in attrs:
        if x['ColumnName'] == column:
            return x['Value']


def handler(event, context):
    logger = logging.getLogger()
    logger.info("Begin to handle event")
    # Use cbor.loads(event) for real trigger events; json.loads(event) is
    # convenient for online debugging with a custom JSON event.
    # records = cbor.loads(event)
    records = json.loads(event)
    for record in records['Records']:
        logger.info("Handle record: %s", record)
        pk_0 = get_pk_value(record, "pk_0")
        attr_0 = get_attribute_value(record, "attr_0")
    return 'OK'
```

    References

    Introduction to Function Compute

    Use Function Compute for data cleansing

    How to Configure Table Store Triggers in Node.js, PHP, and Java Runtime Environments

    If you have any questions, leave a comment or join the Function Compute customer support group (DingTalk group number: 11721331).