You can trigger the execution of a workflow from the CloudFlow console, by invoking an SDK, or from an event source. To use an event source, you create workflow scheduling in a workflow. Workflow scheduling describes a set of rules; when an event matches these rules, the event source triggers the associated workflow. This topic describes workflow scheduling.
What is workflow scheduling?
Workflow scheduling (also known as a trigger) is a method of triggering a workflow. In an event-driven computing model, event sources are event producers and workflows are event handlers. Workflow scheduling provides a centralized, unified way to manage different event sources. When an event that meets the rules defined by a workflow scheduling occurs in an event source, the event source automatically invokes the workflow associated with that scheduling.
Sample scenarios
Example 1: The status change of an image in an Object Storage Service (OSS) bucket triggers the execution of the workflow.
Suppose you invoke a workflow to download an image for processing and store the result in OSS or another service. If OSS can watch for newly uploaded images and automatically invoke the associated workflow, you no longer need to invoke the workflow yourself, which simplifies development and use.
OSS workflow scheduling listens for these events and invokes the associated workflow. After you configure OSS workflow scheduling, it automatically triggers the workflow to download and process an image whenever a new image is uploaded.
Example 2: A log update in Simple Log Service (SLS) triggers the execution of the workflow.
Suppose you invoke a workflow to query and analyze incremental logs. If Simple Log Service can watch for log updates and automatically invoke the associated workflow, you no longer need to invoke the workflow yourself.
Simple Log Service workflow scheduling listens for these events and invokes the associated workflow. After you configure Simple Log Service workflow scheduling, it automatically triggers the associated workflow to consume incremental logs whenever logs are updated.
Example 3: The workflow is triggered at the specified time.
For example, an application needs to collect data every hour. You can invoke the associated workflow to collect and process the data each hour. If the workflow can be executed automatically every hour, you no longer need to track the schedule yourself.
Time-based scheduling listens for time events and invokes the associated workflow. After you configure time-based scheduling, it automatically triggers the associated workflow to collect and process data at the specified time.
Scheduling type
Workflows support the following scheduling types, classified by integration method:
Two-way integration scheduling: You configure scheduling in both the workflow and the event source.
Event scheduling of cloud services: You can configure scheduling in the workflow and create workflow trigger rules in EventBridge. You do not need to configure scheduling in the event source.
Based on the workflow invocation method, workflows support synchronous invocation scheduling and asynchronous invocation scheduling. The two invocation methods differ as follows:
Synchronous invocation: The result is returned only after the workflow has processed the event.
Asynchronous invocation: The result is returned as soon as the event is written to the internal queue of the workflow. The workflow system ensures that the queued messages are processed.
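The difference can be sketched with a toy model in Python. The function and queue names here are illustrative, not part of the CloudFlow SDK; `run_workflow` stands in for one workflow execution.

```python
import queue

def run_workflow(event):
    # Stand-in for a workflow execution that processes one event.
    return {"processed": event["id"]}

def invoke_sync(event):
    # Synchronous invocation: the result is returned only after the
    # workflow has finished processing the event.
    return run_workflow(event)

pending = queue.Queue()

def invoke_async(event):
    # Asynchronous invocation: the call returns as soon as the event is
    # written to an internal queue; the system processes it later and
    # ensures the queued message is not lost.
    pending.put(event)
    return {"accepted": True}
```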
Two-way integration scheduling
Scheduling name | Workflow invocation method in standard mode | Workflow invocation method in express mode |
Time-based scheduling | Asynchronous invocation | Synchronous invocation |
Simple Log Service workflow scheduling | Asynchronous invocation | Synchronous invocation |
MNS workflow scheduling | Asynchronous invocation | Synchronous invocation |
Kafka workflow scheduling | Asynchronous invocation | Synchronous invocation |
RocketMQ workflow scheduling | Asynchronous invocation | Synchronous invocation |
RabbitMQ workflow scheduling | Asynchronous invocation | Synchronous invocation |
HTTP workflow scheduling | Asynchronous invocation | Synchronous invocation |
Event scheduling of cloud services
Scheduling name | Workflow invocation method in standard mode | Workflow invocation method in express mode |
Alibaba Cloud service event scheduling | Asynchronous invocation | Synchronous invocation |
Event syntax of workflow scheduling
The event syntax that is passed to the workflow interface varies based on the scheduling name. The following sections describe the event syntaxes of different scheduling names.
When an event triggers workflow scheduling, you must use an expression to extract the Event parameter that is passed to the workflow; you can then use the extracted parameter in the workflow. For more information, see Inputs and outputs.
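As a sketch of what such an extraction does, the following Python snippet parses a trimmed, illustrative event and pulls out only the payload the workflow needs, analogous to an input-mapping expression such as $.data.UserData:

```python
import json

# A trimmed, illustrative event as delivered to a workflow; the
# top-level fields follow the CloudEvents specification.
raw_event = ('{"specversion": "1.0",'
             ' "type": "eventbridge:Events:ScheduledEvent",'
             ' "data": {"UserData": {"key": "value"}}}')

event = json.loads(raw_event)
# Extract only the part of the event the workflow needs.
user_data = event["data"]["UserData"]
```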
Time-based scheduling
{
"datacontenttype":"application/json;charset=utf-8",
"aliyunaccountid":"143998900779****",
"aliyunpublishtime":"2022-09-21T05:00:00.035Z",
"data":{
"TimeZone":"GMT+0:00",
"Schedule":"0/30 * * * * ?",
"UserData":{
"key":"value"
}
},
"specversion":"1.0",
"aliyuneventbusname":"Housekeeping-Bus",
"id":"d100262d-90c7-4caf-a3b5-813f3526a1f7-****",
"source":"housekeeping.scheduledevent",
"time":"2022-09-21T05:00:00Z",
"aliyunregionid":"cn-beijing",
"type":"eventbridge:Events:ScheduledEvent"
}
The following table describes the parameters contained in data. For information about parameters that are defined in the CloudEvents specification, see Overview.
Parameter | Type | Example | Description |
TimeZone | String | GMT+8:00 | The time zone. |
Schedule | String | 0 */10 * * * * | The cron expression that is used when you select Fixed Period for Trigger Period. |
UserData | Object | {"key":"value"} | The custom parameters in the format of a JSON object. |
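A handler for a time-based event might read the schedule metadata and custom parameters as sketched below; the handler name is illustrative, not part of any SDK:

```python
def handle_scheduled_event(event: dict) -> dict:
    """Pull the scheduling details out of a time-based event."""
    data = event["data"]
    return {
        "time_zone": data["TimeZone"],   # e.g. "GMT+0:00"
        "cron": data["Schedule"],        # e.g. "0/30 * * * * ?"
        "user_data": data.get("UserData", {}),
    }

# An event trimmed to the data section shown above.
event = {
    "type": "eventbridge:Events:ScheduledEvent",
    "data": {
        "TimeZone": "GMT+0:00",
        "Schedule": "0/30 * * * * ?",
        "UserData": {"key": "value"},
    },
}
```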
HTTP workflow scheduling
{
"datacontenttype": "application/json",
"aliyunaccountid": "164901546557****",
"data": {
"headers": {
"content-length": "0",
"Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.7",
"Host": "164901546557****.eventbridge.cn-hangzhou.aliyuncs.com",
"Accept-Encoding": "gzip, deflate",
"X-Forwarded-Port": "80",
"Upgrade-Insecure-Requests": "1",
"X-Forwarded-For": "183.247.0.***",
"Accept-Language": "zh-CN,zh;q=0.9",
"X-Real-IP": "183.247.0.***",
"X-Scheme": "http"
},
"path": "/webhook/putEvents",
"body": "",
"httpMethod": "GET",
"queryString": {}
},
"subject": "acs:eventbridge:cn-hangzhou:164901546557****:eventbus/eventbus-created-by-fnf-466ccc7e-418a-403f-8d96-2d73a8e****/eventsource/httpschedule",
"aliyunoriginalaccountid": "164901546557****",
"source": "httpschedule",
"type": "eventbridge:Events:HTTPEvent",
"aliyunpublishtime": "2023-08-06T18:37:01.666Z",
"specversion": "1.0",
"aliyuneventbusname": "eventbus-created-by-fnf-466ccc7e-418a-403f-8d96-2d73a8e****",
"id": "6751261d-e496-4b36-a707-3c087bf3****",
"time": "2023-08-07T02:37:01.666+08:00",
"aliyunregionid": "cn-hangzhou",
"aliyunpublishaddr": "183.247.0.***"
}
The following table describes the parameters contained in data. For information about parameters that are defined in the CloudEvents specification, see Overview.
Parameter | Type | Example | Description |
headers | Object | { "Accept": "application/json" } | The HTTP request header. |
path | String | /webhook/putEvents | The path of the request. |
body | Object | { "filePath": "/tmp/uploader" } | The HTTP request body. |
httpMethod | String | GET | The HTTP invocation method. |
queryString | String | username=leo | The query string of the HTTP request. |
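A workflow triggered by HTTP events typically dispatches on the method and path carried in data. A minimal sketch (the handler name and return values are illustrative):

```python
def handle_http_event(event: dict) -> str:
    """Dispatch on the HTTP method and path carried in the event."""
    data = event["data"]
    method, path = data["httpMethod"], data["path"]
    if method == "GET" and path == "/webhook/putEvents":
        return "accepted"
    return "ignored"

# An event trimmed to the relevant parts of the data section above.
event = {"data": {"httpMethod": "GET", "path": "/webhook/putEvents",
                  "body": "", "queryString": {}, "headers": {}}}
```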
Event scheduling of cloud services
{
"id":"c2g71017-6f65-fhcf-a814-a396fc8d****",
"source":"OSS-FunctionFlow-osstrigger",
"specversion":"1.0",
"type":"oss:PutObject",
"datacontenttype":"application/json; charset=utf-8",
"subject":"acs:mns:cn-hangzhou:164901546557****:queues/zeus",
"time":"2021-04-08T06:28:17.093Z",
"aliyunaccountid":"1649015465574023",
"aliyunpublishtime":"2021-10-15T07:06:34.028Z",
"aliyunoriginalaccountid":"164901546557****",
"aliyuneventbusname":"OSS-FunctionFlow-osstrigger",
"aliyunregionid":"cn-chengdu",
"aliyunpublishaddr":"42.120.XX.XX",
"data":{
*** This content varies based on the event source. ***
}
}
The parameters contained in the data section vary based on the type of event source of the Alibaba Cloud service. For more information about the types and formats of cloud service event sources, see Alibaba Cloud service event sources.
Simple Log Service workflow scheduling
[
{
"datacontenttype": "application/json;charset=utf-8",
"aliyunaccountid": "164901546557****",
"data": {
"key1": "value1",
"key2": "value2",
"__topic__": "test_topic",
"__source__": "test_source",
"__client_ip__": "122.231.XX.XX",
"__receive_time__": "1663487595",
"__pack_id__": "59b662b2257796****"
},
"subject": "acs:log:cn-qingdao:164901546557****:project/qiingdaoproject/logstore/qingdao-logstore-1",
"aliyunoriginalaccountid": "164901546557****",
"source": "SLS-FunctionFlow-slstrigger",
"type": "sls:connector",
"aliyunpublishtime": "2022-09-18T07:53:15.387Z",
"specversion": "1.0",
"aliyuneventbusname": "SLS-FunctionFlow-slstrigger",
"id": "qiingdaoproject-qingdao-logstore-1-1-MTY2MzExODM5ODY4NjAxOTQyMw****",
"time": "2022-09-18T07:53:12Z",
"aliyunregionid": "cn-qingdao",
"aliyunpublishaddr": "10.50.XX.XX"
}
]
The following table describes the parameters contained in data. The fields that start and end with __ are the system fields of Simple Log Service. For more information, see Reserved fields. For information about parameters that are defined in the CloudEvents specification, see Overview.
Parameter | Type | Example | Description |
key1 | String | testKey | A custom field in the user's log. |
__topic__ | String | testTopic | The log topic. |
__source__ | String | testSource | The device from which the log is collected. |
__client_ip__ | String | 122.231.XX.XX | The IP address of the host from which the log was collected. |
__receive_time__ | String | 1663487595 | The time when the log arrived at the server. |
__pack_id__ | String | 59b662b2257796**** | The unique ID of the log group to which the log belongs. |
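Because the system fields share the log's data section with user fields, a handler often separates the two before processing. A sketch of that split:

```python
def split_log_fields(data: dict):
    """Separate user log fields from Simple Log Service system fields,
    which start and end with double underscores."""
    system = {k: v for k, v in data.items()
              if k.startswith("__") and k.endswith("__")}
    user = {k: v for k, v in data.items() if k not in system}
    return user, system

# A data section trimmed from the sample event above.
data = {
    "key1": "value1",
    "__topic__": "test_topic",
    "__receive_time__": "1663487595",
}
user, system = split_log_fields(data)
```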
MNS workflow scheduling
[
{
"id":"c2g71017-6f65-fhcf-a814-a396fc8d****",
"source":"MNS-FunctionFlow-mnstrigger",
"specversion":"1.0",
"type":"mns:Queue:SendMessage",
"datacontenttype":"application/json; charset=utf-8",
"subject":"acs:mns:cn-hangzhou:164901546557****:queues/zeus",
"time":"2023-04-08T06:28:17.093Z",
"aliyunaccountid":"1649015465574023",
"aliyunpublishtime":"2023-10-15T07:06:34.028Z",
"aliyunoriginalaccountid":"164901546557****",
"aliyuneventbusname":"MNS-Function-mnstrigger",
"aliyunregionid":"cn-chengdu",
"aliyunpublishaddr":"42.120.XX.XX",
"data":{
"requestId":"606EA3074344430D4C81****",
"messageId":"C6DB60D1574661357FA227277445****",
"messageBody":"TEST"
}
}
]
The following table describes the parameters contained in data. For information about parameters that are defined in the CloudEvents specification, see Overview.
Parameter | Type | Example | Description |
requestId | String | 606EA3074344430D4C81**** | The request ID. The ID of each request is unique. |
messageId | String | C6DB60D1574661357FA227277445**** | The message ID. The ID of each message is unique. |
messageBody | String | TEST | The message body. |
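Note that messageBody is delivered as a plain string, so if producers send JSON payloads, the workflow must decode them itself. A hedged sketch (the function name is illustrative):

```python
import json

def extract_mns_message(event: dict):
    """Read the MNS message body; JSON payloads are decoded,
    other strings are returned as-is."""
    body = event["data"]["messageBody"]
    try:
        return json.loads(body)
    except json.JSONDecodeError:
        return body

# An event trimmed to the data section shown above.
event = {"data": {"requestId": "606EA3074344430D4C81****",
                  "messageId": "C6DB60D1574661357FA227277445****",
                  "messageBody": "TEST"}}
```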
RocketMQ workflow scheduling
[
{
"id":"94ebc15f-f0db-4bbe-acce-56fb72fb****",
"source":"RocketMQ-rocketmq-schedule",
"specversion":"1.0",
"type":"mq:Topic:SendMessage",
"datacontenttype":"application/json; charset=utf-8",
"subject":"acs:mq:cn-hangzhou:164901546557****:MQ_INST_164901546557****_BXhFHryi%TopicName",
"time":"2023-04-08T06:01:20.766Z",
"aliyunaccountid":"164901546557****",
"aliyunpublishtime":"2023-10-15T02:05:16.791Z",
"aliyunoriginalaccountid":"164901546557****",
"aliyuneventbusname":"RocketMQ-Function-rocketmq-trigger",
"aliyunregionid":"cn-chengdu",
"aliyunpublishaddr":"42.120.XX.XX",
"data":{
"topic":"TopicName",
"systemProperties":{
"MIN_OFFSET":"0",
"TRACE_ON":"true",
"MAX_OFFSET":"8",
"MSG_REGION":"cn-hangzhou",
"KEYS":"systemProperties.KEYS",
"CONSUME_START_TIME":1628577790396,
"TAGS":"systemProperties.TAGS",
"INSTANCE_ID":"MQ_INST_164901546557****_BXhFHryi"
},
"userProperties":{
},
"body":"TEST"
}
}
]
The following table describes the parameters contained in data. For information about parameters that are defined in the CloudEvents specification, see Overview.
Parameter | Type | Example | Description |
topic | String | TopicName | The topic name. |
systemProperties | Map | None | The system properties. |
MIN_OFFSET | Int | 0 | The earliest offset. |
TRACE_ON | Boolean | true | Specifies whether a message trace is included in the system properties. Valid values: true and false. |
MAX_OFFSET | Int | 8 | The latest offset. |
MSG_REGION | String | cn-hangzhou | The region from which the message was sent. |
KEYS | String | systemProperties.KEYS | The keys that are used to filter the message. |
CONSUME_START_TIME | Long | 1628577790396 | The start time of message consumption. This value is a UNIX timestamp representing the number of milliseconds that have elapsed since January 1, 1970, 00:00:00 UTC. |
UNIQ_KEY | String | AC14C305069E1B28CDFA3181CDA2**** | The unique key of the message. |
TAGS | String | systemProperties.TAGS | The tags that are used to filter the message. |
INSTANCE_ID | String | MQ_INST_123456789098****_BXhFHryi | The ID of the ApsaraMQ for RocketMQ instance. |
userProperties | Map | None | The user properties. |
body | String | TEST | The message body. |
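Note that the event arrives as a JSON array, so a handler typically iterates over the batch and pulls the body and filter metadata from each message. A sketch (the function name is illustrative):

```python
def handle_rocketmq_events(events: list) -> list:
    """Extract the body and filter metadata from each message in the batch."""
    out = []
    for evt in events:
        data = evt["data"]
        props = data.get("systemProperties", {})
        out.append({
            "topic": data["topic"],
            "tags": props.get("TAGS"),
            "body": data["body"],
        })
    return out

# A batch trimmed to the fields used above.
batch = [{"data": {"topic": "TopicName",
                   "systemProperties": {"TAGS": "tagA"},
                   "body": "TEST"}}]
```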
RabbitMQ workflow scheduling
[
{
"id":"bj694332-4cj1-389e-9d8c-b137h30b****",
"source":"RabbitMQ-Function-rabbitmq-trigger",
"specversion":"1.0",
"type":"amqp:Queue:SendMessage",
"datacontenttype":"application/json;charset=utf-8",
"subject":"acs:amqp:cn-hangzhou:164901546557****:/instances/amqp-cn-tl32e756****/vhosts/eb-connect/queues/housekeeping",
"time":"2023-08-12T06:56:40.709Z",
"aliyunaccountid":"164901546557****",
"aliyunpublishtime":"2023-10-15T08:58:55.140Z",
"aliyunoriginalaccountid":"164901546557****",
"aliyuneventbusname":"RabbitMQ-Function-rabbitmq-trigger",
"aliyunregionid":"cn-chengdu",
"aliyunpublishaddr":"42.120.XX.XX",
"data":{
"envelope":{
"deliveryTag":98,
"exchange":"",
"redeliver":false,
"routingKey":"housekeeping"
},
"body":{
"Hello":"RabbitMQ"
},
"props":{
"contentEncoding":"UTF-8",
"messageId":"f7622d51-e198-41de-a072-77c1ead7****"
}
}
}
]
The following table describes the parameters contained in data. For information about parameters that are defined in the CloudEvents specification, see Overview.
Parameter | Type | Example | Description |
body | Map | None | The message body. |
Hello | String | EventBridge | The user data. |
props | Map | None | The properties of the message. |
contentEncoding | String | utf-8 | The format in which the message body is encoded. |
messageId | String | f7622d51-e198-41de-a072-77c1ead7**** | The message ID. The ID of each message is unique. |
envelope | Map | None | The information about the message envelope. |
deliveryTag | Int | 98 | The message tag. |
exchange | String | None | The name of the exchange that sends the message. |
redeliver | Boolean | false | Specifies whether the message can be redelivered. Valid values: true and false. |
routingKey | String | housekeeping | The rule that is used to route the message. |
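One common use of the envelope is to skip messages flagged as redelivered so that duplicates are not handled twice. A minimal sketch (the function name is illustrative):

```python
def should_process(event: dict) -> bool:
    """Skip messages flagged as redelivered to avoid handling duplicates."""
    return not event["data"]["envelope"]["redeliver"]

# An event trimmed to the data section shown above.
event = {"data": {"envelope": {"deliveryTag": 98, "exchange": "",
                               "redeliver": False,
                               "routingKey": "housekeeping"},
                  "body": {"Hello": "RabbitMQ"}}}
```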
Kafka workflow scheduling
[
{
"specversion":"1.0",
"id":"8e215af8-ca18-4249-8645-f96c1026****",
"source":"acs:alikafka",
"type":"alikafka:Topic:Message",
"subject":"acs:alikafka_pre-cn-i7m2t7t1****:topic:mytopic",
"datacontenttype":"application/json; charset=utf-8",
"time":"2023-06-23T02:49:51.589Z",
"aliyunaccountid":"164901546557****",
"data":{
"topic":"****",
"partition":7,
"offset":25,
"timestamp":1655952591589,
"headers":{
"headers":[
],
"isReadOnly":false
},
"key":"keytest",
"value":"hello kafka msg"
}
}
]
The following table describes the parameters contained in data. For information about parameters that are defined in the CloudEvents specification, see Overview.
Parameter | Type | Example | Description |
topic | String | TopicName | The topic name. |
partition | Int | 1 | The information about partitions on the ApsaraMQ for Kafka instance. |
offset | Int | 0 | The message offset of the ApsaraMQ for Kafka instance. |
timestamp | String | 1655952591589 | The timestamp that indicates when message consumption started. |
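A Kafka batch handler usually collects the key/value pairs and tracks the highest offset seen per partition, for example for checkpointing. A sketch (the function name is illustrative):

```python
def handle_kafka_events(events: list) -> dict:
    """Collect message key/value pairs and track the highest offset
    seen for each partition in the batch."""
    values, max_offsets = [], {}
    for evt in events:
        data = evt["data"]
        values.append((data.get("key"), data["value"]))
        p = data["partition"]
        max_offsets[p] = max(max_offsets.get(p, -1), data["offset"])
    return {"values": values, "max_offsets": max_offsets}

# A batch trimmed to the fields used above.
batch = [{"data": {"topic": "mytopic", "partition": 7, "offset": 25,
                   "key": "keytest", "value": "hello kafka msg"}}]
```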
Data Transmission Service (DTS) workflow scheduling
{
"data": {
"id": 321****,
"topicPartition": {
"hash": 0,
"partition": 0,
"topic": "cn_hangzhou_rm_1234****_test_version2"
},
"offset": 3218099,
"sourceTimestamp": 1654847757,
"operationType": "UPDATE",
"schema": {
"recordFields": [
{
"fieldName": "id",
"rawDataTypeNum": 8,
"isPrimaryKey": true,
"isUniqueKey": false,
"fieldPosition": 0
},
{
"fieldName": "topic",
"rawDataTypeNum": 253,
"isPrimaryKey": false,
"isUniqueKey": false,
"fieldPosition": 1
}
],
"nameIndex": {
"id": {
"fieldName": "id",
"rawDataTypeNum": 8,
"isPrimaryKey": true,
"isUniqueKey": false,
"fieldPosition": 0
},
"topic": {
"fieldName": "topic",
"rawDataTypeNum": 253,
"isPrimaryKey": false,
"isUniqueKey": false,
"fieldPosition": 1
}
},
"schemaId": "(hangzhou-test-db,hangzhou-test-db,message_info)",
"databaseName": "hangzhou--test-db",
"tableName": "message_info",
"primaryIndexInfo": {
"indexType": "PrimaryKey",
"indexFields": [
{
"fieldName": "id",
"rawDataTypeNum": 8,
"isPrimaryKey": true,
"isUniqueKey": false,
"fieldPosition": 0
}
],
"cardinality": 0,
"nullable": true,
"isFirstUniqueIndex": false
},
"uniqueIndexInfo": [],
"foreignIndexInfo": [],
"normalIndexInfo": [],
"databaseInfo": {
"databaseType": "MySQL",
"version": "5.7.35-log"
},
"totalRows": 0
},
"beforeImage": {
"recordSchema": {
"recordFields": [
{
"fieldName": "id",
"rawDataTypeNum": 8,
"isPrimaryKey": true,
"isUniqueKey": false,
"fieldPosition": 0
},
{
"fieldName": "topic",
"rawDataTypeNum": 253,
"isPrimaryKey": false,
"isUniqueKey": false,
"fieldPosition": 1
}
],
"nameIndex": {
"id": {
"fieldName": "id",
"rawDataTypeNum": 8,
"isPrimaryKey": true,
"isUniqueKey": false,
"fieldPosition": 0
},
"topic": {
"fieldName": "topic",
"rawDataTypeNum": 253,
"isPrimaryKey": false,
"isUniqueKey": false,
"fieldPosition": 1
}
},
"schemaId": "(hangzhou-test-db,hangzhou-test-db,message_info)",
"databaseName": "hangzhou-test-db",
"tableName": "message_info",
"primaryIndexInfo": {
"indexType": "PrimaryKey",
"indexFields": [
{
"fieldName": "id",
"rawDataTypeNum": 8,
"isPrimaryKey": true,
"isUniqueKey": false,
"fieldPosition": 0
}
],
"cardinality": 0,
"nullable": true,
"isFirstUniqueIndex": false
},
"uniqueIndexInfo": [],
"foreignIndexInfo": [],
"normalIndexInfo": [],
"databaseInfo": {
"databaseType": "MySQL",
"version": "5.7.35-log"
},
"totalRows": 0
},
"values": [
{
"data": 115
},
{
"data": {
"hb": [
104,
101,
108,
108,
111
],
"offset": 0,
"isReadOnly": false,
"bigEndian": true,
"nativeByteOrder": false,
"mark": -1,
"position": 0,
"limit": 9,
"capacity": 9,
"address": 0
},
"charset": "utf8mb4"
}
],
"size": 45
},
"afterImage": {
"recordSchema": {
"recordFields": [
{
"fieldName": "id",
"rawDataTypeNum": 8,
"isPrimaryKey": true,
"isUniqueKey": false,
"fieldPosition": 0
},
{
"fieldName": "topic",
"rawDataTypeNum": 253,
"isPrimaryKey": false,
"isUniqueKey": false,
"fieldPosition": 1
}
],
"nameIndex": {
"id": {
"fieldName": "id",
"rawDataTypeNum": 8,
"isPrimaryKey": true,
"isUniqueKey": false,
"fieldPosition": 0
},
"topic": {
"fieldName": "topic",
"rawDataTypeNum": 253,
"isPrimaryKey": false,
"isUniqueKey": false,
"fieldPosition": 1
}
},
"schemaId": "(hangzhou-test-db,hangzhou-test-db,message_info)",
"databaseName": "hangzhou-test-db",
"tableName": "message_info",
"primaryIndexInfo": {
"indexType": "PrimaryKey",
"indexFields": [
{
"fieldName": "id",
"rawDataTypeNum": 8,
"isPrimaryKey": true,
"isUniqueKey": false,
"fieldPosition": 0
}
],
"cardinality": 0,
"nullable": true,
"isFirstUniqueIndex": false
},
"uniqueIndexInfo": [],
"foreignIndexInfo": [],
"normalIndexInfo": [],
"databaseInfo": {
"databaseType": "MySQL",
"version": "5.7.35-log"
},
"totalRows": 0
},
"values": [
{
"data": 115
},
{
"data": {
"hb": [
98,
121,
101
],
"offset": 0,
"isReadOnly": false,
"bigEndian": true,
"nativeByteOrder": false,
"mark": -1,
"position": 0,
"limit": 11,
"capacity": 11,
"address": 0
},
"charset": "utf8mb4"
}
],
"size": 47
}
},
"id": "12f701a43741d404fa9a7be89d9acae0-321****",
"source": "DTSstreamDemo",
"specversion": "1.0",
"type": "dts:ConsumeMessage",
"datacontenttype": "application/json; charset=utf-8",
"time": "2022-06-10T07:55:57Z",
"subject": "acs:dts:cn-hangzhou:12345****:kk123abc60g782/dtsabcdet1ro"
}
The following table describes the parameters contained in data. For information about parameters that are defined in the CloudEvents specification, see Overview.
Parameter | Type | Example | Description |
id | String | 321**** | The ID of the DTS data entry. |
topicPartition | Array | None | The partition information about the topic to which the event is pushed. |
hash | String | 0 | The underlying storage parameter of DTS. |
partition | String | 0 | The partition. |
topic | String | cn_hangzhou_rm_1234****_test_version2 | The topic name. |
offset | Int | 3218099 | The offset of the DTS data entry. |
sourceTimestamp | Int | 1654847757 | The timestamp that indicates when the DTS data entry was generated. |
operationType | String | UPDATE | The type of the operation on the DTS data entry. |
schema | Array | None | The schema information about the database. |
recordFields | Array | None | The details of fields. |
fieldName | String | id | The name of the field. |
rawDataTypeNum | Int | 8 | The mapped value of the field type. |
isPrimaryKey | Boolean | true | Specifies whether the field is a primary key field. |
isUniqueKey | Boolean | false | Specifies whether the field has a unique key. |
fieldPosition | String | 0 | The position of the field. |
nameIndex | Array | None | The indexing information about the fields based on field names. |
schemaId | String | (hangzhou-test-db,hangzhou-test-db,message_info) | The ID of the database schema. |
databaseName | String | hangzhou--test-db | The database name. |
tableName | String | message_info | The table name. |
primaryIndexInfo | Array | None | The primary key indexes. |
indexType | String | PrimaryKey | The index type. |
indexFields | Array | None | The fields on which the indexes are created. |
cardinality | String | 0 | The cardinality of the primary keys. |
nullable | Boolean | true | Specifies whether the primary keys can be null. |
isFirstUniqueIndex | Boolean | false | Specifies whether the index is the first unique index. |
uniqueIndexInfo | String | [] | The unique indexes. |
foreignIndexInfo | String | [] | The indexes for foreign keys. |
normalIndexInfo | String | [] | The regular indexes. |
databaseInfo | Array | None | The information about the database. |
databaseType | String | MySQL | The type of the database engine. |
version | String | 5.7.35-log | The version of the database engine. |
totalRows | Int | 0 | The total number of rows in the table. |
beforeImage | String | None | The image that records field values before the operation is performed. |
values | String | None | The field values recorded. |
size | Int | 47 | The size of the fields recorded. |
afterImage | String | None | The image that records field values after the operation is performed. |
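In the before and after images, string field values arrive as a byte buffer: the hb array holds the raw bytes and the sibling charset field names the encoding, while numeric values arrive as plain numbers. A sketch of decoding such values, assuming the charset name maps to a Python codec (MySQL's utf8mb4 is treated as UTF-8):

```python
def decode_dts_value(value: dict):
    """Decode one entry of a DTS image's values list: plain numbers
    pass through; byte buffers (hb + charset) are decoded to text."""
    data = value["data"]
    if isinstance(data, dict) and "hb" in data:
        charset = value.get("charset", "utf-8")
        if charset == "utf8mb4":  # MySQL's utf8mb4 corresponds to UTF-8
            charset = "utf-8"
        return bytes(data["hb"]).decode(charset)
    return data

# Values trimmed from the sample beforeImage above.
values = [
    {"data": 115},
    {"data": {"hb": [104, 101, 108, 108, 111]}, "charset": "utf8mb4"},
]
decoded = [decode_dts_value(v) for v in values]
```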
MQTT workflow scheduling
[
{
"specversion":"1.0",
"id":"AC1EC0C950650816F27D46F7D7CA****",
"source":"acs:mqtt",
"type":"mqtt:Topic:SendMessage",
"subject":"acs:mq:cn-hangzhou:143998900779****:topic/mqtt-cn-2r42qam****/housekee****",
"datacontenttype":"application/json; charset\u003dutf-8",
"time":"2022-06-22T03:53:47.959Z",
"aliyunaccountid":"143998900779****",
"data":{
"props":{
"firstTopic":"housekee****",
"secondTopic":"/testMq4****",
"clientId":"GID_****"
},
"body":"TEST"
}
}
]
The following table describes the parameters contained in data. For information about parameters that are defined in the CloudEvents specification, see Overview.
Parameter | Type | Example | Description |
props | Map | None | The properties of the message. |
firstTopic | String | housekee**** | The parent topic that is used to send and receive messages. |
secondTopic | String | /testMq4**** | The child topic. |
clientId | String | GID_**** | The client ID. |
body | String | TEST | The message body. |
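Because the topic is split into a parent and child part, a handler that needs the full MQTT topic path can join the two, as sketched below (the function name is illustrative):

```python
def mqtt_topic(event: dict) -> str:
    """Join the parent and child topics into the full MQTT topic path."""
    props = event["data"]["props"]
    return props["firstTopic"] + props.get("secondTopic", "")

# An event trimmed to the data section shown above.
event = {"data": {"props": {"firstTopic": "housekee****",
                            "secondTopic": "/testMq4****",
                            "clientId": "GID_****"},
                  "body": "TEST"}}
```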