Function Compute:Tablestore triggers

Last Updated:Feb 19, 2024

Tablestore is a distributed NoSQL data storage service that is built on the Apsara system. You can create a Tablestore trigger to connect Tablestore as an event source of Function Compute. A function in Function Compute is automatically triggered to process Tablestore data when specified events occur.

Scenarios

The following figure shows a typical scenario of Tablestore triggers.

(Figure: Typical Tablestore trigger scenario)

A data source stores data to Table A. The data update triggers a function to cleanse the data and store the cleansed data to Table B for direct reads. The entire process is an elastic and scalable serverless web application.
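
The cleansing step in this scenario is ordinary function code. The following minimal sketch, which assumes the Tablestore Python SDK (tablestore package), illustrates the pattern: parse the records from the trigger event, transform each row, and write the result to Table B. The endpoint, credential environment variables, the table_b name, and the clean_row helper are illustrative placeholders, not values defined in this topic.

    # Minimal sketch of the cleanse-and-copy pattern, assuming the Tablestore
    # Python SDK. Endpoint, credentials, table names, and clean_row are
    # illustrative placeholders.
    import json
    import os

    from tablestore import OTSClient, Row, Condition, RowExistenceExpectation

    client = OTSClient(
        os.environ["OTS_ENDPOINT"],        # for example, a VPC endpoint of the instance
        os.environ["ACCESS_KEY_ID"],
        os.environ["ACCESS_KEY_SECRET"],
        os.environ["OTS_INSTANCE"],
    )

    def clean_row(record):
        # Hypothetical transformation: keep the primary key and only the
        # attribute columns written by Put operations.
        primary_key = [(pk["ColumnName"], pk["Value"]) for pk in record["PrimaryKey"]]
        attributes = [(col["ColumnName"], col["Value"]) for col in record["Columns"]
                      if col["Type"] == "Put"]
        return primary_key, attributes

    def handler(event, context):
        records = json.loads(event)  # the real trigger payload is CBOR-encoded; see Step 3
        for record in records["Records"]:
            primary_key, attributes = clean_row(record)
            # Write the cleansed row to Table B.
            client.put_row(
                "table_b",
                Row(primary_key, attributes),
                Condition(RowExistenceExpectation.IGNORE),
            )
        return "OK"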

Prerequisites

  • Function Compute

    • A function is created. For more information, see the Create a function section of the "Manage functions" topic.

  • Tablestore

    • A Tablestore instance and a data table are created.

Limits

  • Tablestore triggers are supported in the following regions: China (Beijing), China (Hangzhou), China (Shanghai), China (Shenzhen), Japan (Tokyo), Singapore, Australia (Sydney), Germany (Frankfurt), and China (Hong Kong).

  • The Tablestore table must reside in the same region as the associated service in Function Compute.

  • If the function that is associated with a Tablestore trigger needs to access Tablestore over an internal network, use a VPC endpoint of Tablestore, which is in the following format: {instance}.{region}.vpc.tablestore.aliyuncs.com. Do not use a Tablestore internal endpoint in this case. A minimal client sketch follows this list.

  • The execution duration of a function that is invoked by a trigger cannot exceed one minute.
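
For the internal-network case described above, the following sketch shows how a Tablestore client can be constructed with a VPC endpoint. It assumes the Tablestore Python SDK; the instance name, region ID, and credential environment variables are illustrative placeholders.

    # Sketch: connect to Tablestore over a VPC endpoint from function code.
    # Assumes the Tablestore Python SDK; the values below are placeholders.
    import os

    from tablestore import OTSClient

    instance = "my-instance"     # placeholder instance name
    region = "cn-hangzhou"       # placeholder region ID
    vpc_endpoint = f"https://{instance}.{region}.vpc.tablestore.aliyuncs.com"

    client = OTSClient(
        vpc_endpoint,
        os.environ["ACCESS_KEY_ID"],
        os.environ["ACCESS_KEY_SECRET"],
        instance,
    )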

Precautions

  • Avoid invocation loops when you write function code. For example, the following logic causes an invocation loop: an update to Table A triggers Function B, Function B updates data in Table A, and that update triggers Function B again. A sketch of one way to guard against such loops follows this list.

  • If an error occurs during the execution of a function, the function keeps retrying until the log data in Tablestore expires.

    Note
    • A function execution exception occurs in one of the following scenarios:

      • A function instance is started but function code does not run as expected. In this case, fees are generated for the instance.

      • A function instance fails to start due to reasons such as startup command errors. In this case, fees are not generated for the instance.

    • If a function execution exception occurs, you can disable the Stream feature for the data table to prevent the function from being retried for an indefinite number of times. Before you disable the Stream feature, make sure that no other triggers are using the data table. Otherwise, these triggers may not work as expected.
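
For the invocation loop described in the first precaution, one common safeguard is to make the function skip rows that it wrote itself, for example by tagging them with a marker attribute column. The following minimal sketch illustrates the idea; the cleansed_by column name and its value are a hypothetical convention, not part of the trigger feature.

    # Sketch: skip records that this function wrote itself to avoid an
    # invocation loop when the function writes back to the triggering table.
    # The "cleansed_by" marker column is a hypothetical convention.
    import json

    MARKER_COLUMN = "cleansed_by"
    MARKER_VALUE = "my-cleansing-function"

    def was_written_by_this_function(record):
        return any(
            col["ColumnName"] == MARKER_COLUMN and col["Value"] == MARKER_VALUE
            for col in record.get("Columns", [])
        )

    def handler(event, context):
        records = json.loads(event)
        for record in records["Records"]:
            if was_written_by_this_function(record):
                continue  # ignore rows produced by this function's own writes
            # ... process the record and, when writing back, include the
            # marker column so that the next invocation skips the row ...
        return "OK"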

Step 1: Enable the Stream feature for the data table

Before you create a trigger, you must enable the Stream feature for the data table in the Tablestore console to allow the function to process incremental data that is written to the table.

  1. Log on to the Tablestore console.

  2. In the top navigation bar, select a region.

  3. On the Overview page, click the name of the instance that you want to manage or click Manage Instance in the Actions column.

  4. In the Tables section of the Instance Details tab, click the name of the data table that you want to manage and then click the Tunnels tab. Alternatively, you can click the icon next to the table name and then click Tunnels.

  5. On the Tunnels tab, click Enable in the Stream Information section.

  6. In the Enable Stream dialog box, configure the Log Expiration Time parameter and click Enable.

    The value of the Log Expiration Time parameter must be a non-zero integer and cannot be changed after it is specified. Unit: hours. Maximum value: 168.

    Important

    The Log Expiration Time parameter cannot be modified after it is set. Proceed with caution.

Step 2: Create a Tablestore trigger

  1. Log on to the Function Compute console. In the left-side navigation pane, click Functions.

  2. In the top navigation bar, select a region. On the Functions page, click the function that you want to manage.

  3. On the function details page, click the Configurations tab. In the left-side navigation pane, click Triggers. Then, click Create Trigger.

  4. In the Create Trigger panel, configure the parameters and click OK.

    The following list describes the parameters:

    • Trigger Type: the type of the trigger. Select Tablestore. Example: Tablestore.

    • Name: the name of the trigger. Example: Tablestore-trigger.

    • Version or Alias: the version or alias of the function for which you want to create the trigger. Default value: LATEST. If you want to create the trigger for another version or alias, select the version or alias from the Version or Alias drop-down list on the function details page. For more information about versions and aliases, see Manage versions and Manage aliases. Example: LATEST.

    • Instance: the name of the existing Tablestore instance. Example: d00dd8xm****.

    • Table: the name of the existing data table. Example: mytable.

    • Role Name: select AliyunTableStoreStreamNotificationRole. Example: AliyunTableStoreStreamNotificationRole.

      Note

      After you configure the preceding parameters, click OK. If you create a trigger of this type for the first time, click Authorize Now in the dialog box that appears.

    After the trigger is created, it is displayed on the Triggers tab. To modify or delete a trigger, see Manage triggers.

Step 3: Configure the input parameters of the function

  1. On the Code tab of the function details page, click the icon next to Test Function and select Configure Test Parameters from the drop-down list.

  2. In the Configure Test Parameters panel, click the Create New Test Event or Modify Existing Test Event tab, enter the event name and event content, and then click OK.

    A Tablestore trigger encodes incremental data in the Concise Binary Object Representation (CBOR) format to construct the event that invokes the function in Function Compute. The following sample code provides an example of the format of the event content. A sketch that simulates this CBOR encoding locally follows the field descriptions below.

    {
        "Version": "Sync-v1",
        "Records": [
            {
                "Type": "PutRow",
                "Info": {
                    "Timestamp": 1506416585740836
                },
                "PrimaryKey": [
                    {
                        "ColumnName": "pk_0",
                        "Value": 1506416585881590900
                    },
                    {
                        "ColumnName": "pk_1",
                        "Value": "2017-09-26 17:03:05.8815909 +0800 CST"
                    },
                    {
                        "ColumnName": "pk_2",
                        "Value": 1506416585741000
                    }
                ],
                "Columns": [
                    {
                        "Type": "Put",
                        "ColumnName": "attr_0",
                        "Value": "hello_table_store",
                        "Timestamp": 1506416585741
                    },
                    {
                        "Type": "Put",
                        "ColumnName": "attr_1",
                        "Value": 1506416585881590900,
                        "Timestamp": 1506416585741
                    }
                ]
            }
        ]
    }

    The following list describes the fields in the event.

    • Version: the version of the payload. Example: Sync-v1. The value is a string.

    • Records: the array that stores the rows of incremental data in the table. Each element contains the following fields:

      • Type: the type of the operation that is performed on the row. Valid values: PutRow, UpdateRow, and DeleteRow. The value is a string.

      • Info: the information about the row, including the Timestamp field, which specifies the time when the row was last modified. The time must be in UTC. The value is of the INT64 type.

      • PrimaryKey: the array that stores the primary key columns. Each element contains the following fields:

        • ColumnName: the name of the primary key column. The value is a string.

        • Value: the value of the primary key column. The value is of the formatted_value type, which can be INTEGER, STRING, or BLOB.

      • Columns: the array that stores the attribute columns. Each element contains the following fields:

        • Type: the type of the operation that is performed on the attribute column. Valid values: Put, DeleteOneVersion, and DeleteAllVersions. The value is a string.

        • ColumnName: the name of the attribute column. The value is a string.

        • Value: the value of the attribute column. The value is of the formatted_value type, which can be INTEGER, BOOLEAN, DOUBLE, STRING, or BLOB.

        • Timestamp: the time when the attribute column was last modified. The time must be in UTC. The value is of the INT64 type.
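
    Because the event that a Tablestore trigger actually delivers is CBOR-encoded, while the JSON above is only the console test payload, you can simulate the real encoding locally. The following sketch assumes the cbor package that the Step 4 sample code imports; the sample_event value is an abridged version of the payload shown above.

    # Sketch: round-trip an abridged sample payload through CBOR to mimic what
    # a Tablestore trigger actually delivers. Assumes the "cbor" package that
    # the Step 4 sample code imports.
    import cbor

    sample_event = {
        "Version": "Sync-v1",
        "Records": [
            {
                "Type": "PutRow",
                "Info": {"Timestamp": 1506416585740836},
                "PrimaryKey": [{"ColumnName": "pk_0", "Value": 1506416585881590900}],
                "Columns": [
                    {
                        "Type": "Put",
                        "ColumnName": "attr_0",
                        "Value": "hello_table_store",
                        "Timestamp": 1506416585741,
                    }
                ],
            }
        ],
    }

    encoded = cbor.dumps(sample_event)   # bytes, similar to what the function receives
    decoded = cbor.loads(encoded)        # equivalent to cbor.loads(event) in the handler
    assert decoded["Records"][0]["Type"] == "PutRow"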

Step 4: Write and test function code

After you create the Tablestore trigger, you can write function code and test the function code to verify whether the code is valid. The function is automatically invoked when the data in Tablestore is updated.

  1. On the function details page, click the Code tab, enter function code in the code editor, and then click Deploy.

    In this example, the function code is written in Python. For more information about how to write function code in other runtime environments, see Use Tablestore to trigger Function Compute in Node.js, PHP, Java, and C# runtimes.

    # -*- coding: utf-8 -*-
    import json
    import logging

    import cbor


    def get_attribute_value(record, column):
        # Return the value of the attribute column with the specified name.
        for attr in record['Columns']:
            if attr['ColumnName'] == column:
                return attr['Value']


    def get_pk_value(record, column):
        # Return the value of the primary key column with the specified name.
        for pk in record['PrimaryKey']:
            if pk['ColumnName'] == column:
                return pk['Value']


    def handler(event, context):
        logger = logging.getLogger()
        logger.info("Begin to handle event")
        # Events delivered by a Tablestore trigger are CBOR-encoded. The console
        # test feature passes the JSON event configured in Step 3, so json.loads
        # is used here. For the real trigger payload, use cbor.loads instead.
        # records = cbor.loads(event)
        records = json.loads(event)
        for record in records['Records']:
            logger.info("Handle record: %s", record)
            pk_0 = get_pk_value(record, "pk_0")
            attr_0 = get_attribute_value(record, "attr_0")
            logger.info("pk_0: %s, attr_0: %s", pk_0, attr_0)
        return 'OK'
  2. Click Test Function.

    After the function is executed, you can view the results on the Code tab.
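
If you want to check the handler logic before you deploy the code, a minimal local test can call the handler directly with the JSON test event. The sketch below assumes that the sample code above is saved as index.py next to the test script; the file name and the context=None shortcut are assumptions for local testing only.

    # Sketch: invoke the sample handler locally with the JSON test event.
    # Assumes the Step 4 code is saved as index.py next to this script.
    import json

    from index import handler

    test_event = {
        "Version": "Sync-v1",
        "Records": [
            {
                "Type": "PutRow",
                "Info": {"Timestamp": 1506416585740836},
                "PrimaryKey": [{"ColumnName": "pk_0", "Value": 1506416585881590900}],
                "Columns": [
                    {
                        "Type": "Put",
                        "ColumnName": "attr_0",
                        "Value": "hello_table_store",
                        "Timestamp": 1506416585741,
                    }
                ],
            }
        ],
    }

    print(handler(json.dumps(test_event), None))  # expected output: OK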

FAQ

  • If you fail to create a Tablestore trigger in a specific region, check whether the region supports Tablestore triggers. For more information, see Limits.

  • If you cannot find a created Tablestore table when you create a Tablestore trigger, check whether the table resides in the same region as the associated service in Function Compute.

  • In most cases, if an error that indicates that the client canceled the invocation is repeatedly reported when you use a Tablestore trigger, the timeout period that is configured for function invocation on the client is shorter than the actual function execution duration. In this case, we recommend that you increase the client timeout period. For more information, see What do I do if the client is disconnected and the message "Invocation canceled by client" is reported?

  • If data is added to a Tablestore table but the associated Tablestore trigger does not invoke the function, you can troubleshoot the issue by performing the following steps. For more information about how to troubleshoot trigger failures, see What do I do if a trigger cannot trigger function execution?

    • Check whether the Stream feature is enabled for the table. For more information, see Step 1: Enable the Stream feature for the data table.

    • Check whether the correct role was configured when you created the trigger. You can use the default trigger role AliyunTableStoreStreamNotificationRole. For more information, see Step 2: Create a Tablestore trigger.

    • Check the function run logs to see whether the function failed to be executed. If a function fails to be executed, the function is retried until the log data in Tablestore expires.