After you configure the mapping for an imported table, you must run a scheduling task, which executes based on the computing source and performs the ID Mapping calculation. After scheduling is complete, the calculation results are stored in the analysis source.
Quick Audience supports three methods to initiate scheduling:
Manual scheduling: You manually initiate scheduling after the bottom table is updated.
Periodic scheduling: Scheduling is automatically initiated on a daily or hourly basis. This method is applicable when the bottom table is updated periodically.
Triggered scheduling: You call an API operation to send a scheduling request. This method is applicable when you want to trigger scheduling through an API call after the processing task of the bottom table is complete.
Note: For data tables imported through the API, a scheduling task is automatically created during API synchronization. The scheduling method of such a task is periodic scheduling or manual scheduling. You can view its execution records and manually initiate scheduling, but you cannot edit or remove the task.
Create a scheduling task
Each time a task is scheduled, the IDs of the users in all data tables must be recalculated. To avoid frequent consumption of computing resources, we recommend that you create a single task to schedule all data tables instead of creating a separate task for each table.
Create a manual scheduling task
Procedure
In the left-side navigation pane, choose Configuration Management > Data Import > Scheduling Tasks.

In the upper-right corner, click Create Scheduling Task.
In the dialog box that appears, enter a task name and select the data tables to be scheduled for import. You can select multiple or all data tables.

Set the Scheduling Frequency parameter to Manual Scheduling.
Click Save.
The scheduling task is added to the task list. When you want to schedule the task, click the start icon to manually initiate scheduling.
Create a periodic scheduling task
Procedure
In the left-side navigation pane, choose Configuration Management > Data Import > Scheduling Tasks.

In the upper-right corner, click Create Scheduling Task.
In the dialog box that appears, enter a task name and select the data tables to be scheduled for import. You can select multiple or all data tables.

Set Scheduling Frequency to Daily Scheduling or Hourly Scheduling, and set the execution cycle.
Click Save.
The scheduling task is added to the task list and is automatically scheduled at the specified time in each cycle.
Create a triggered scheduling task
Procedure
In the left-side navigation pane, choose Configuration Management > Data Import > Scheduling Tasks.

In the upper-right corner, click Create Scheduling Task.
In the dialog box that appears, enter a task name and select the data tables to be scheduled for import. You can select multiple or all data tables.

Set the Scheduling Frequency parameter to Triggered Scheduling. A URL that contains a token is generated for each selected table. Click Copy to save each URL.
Click Save.
The scheduling task is added to the task list. When you need to schedule the task, use these URLs to initiate scheduling. The rules are as follows:
When you use Dataphin, DataWorks, or another ETL tool to process the bottom tables, you can write code or use webhooks to call this operation after processing is complete.
For more information about sample scripts for Dataphin and DataWorks, see Appendix: Sample scripts for triggering scheduling.
If a scheduling task contains multiple tables, each table must be triggered once through its corresponding URL. The task starts scheduling only after all of its tables are triggered. A minimal command-line sketch is shown after this list.
In the task list, click Edit to view the trigger status of each table. A table that has been triggered through its URL is displayed as Triggered. If a table is displayed as Not Triggered, you must send a scheduling request through the URL that corresponds to the table. When all tables are displayed as Triggered, scheduling starts.
While a task is being scheduled, any additional scheduling requests that it receives are ignored.
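For reference, the following is a minimal sketch of triggering every table in a task from the command line. The two URLs are placeholders for the trigger URLs that you copied when you created the task:
#!/bin/bash
# Placeholder trigger URLs; replace with the URLs copied from the scheduling task.
TRIGGER_URLS=(
    "{Trigger URL of table 1}"
    "{Trigger URL of table 2}"
)
# Request each URL once. The task starts scheduling only after every table has been triggered.
for url in "${TRIGGER_URLS[@]}"; do
    response=$(curl -k -s "$url")
    echo "Triggered $url, response: $response"
done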
Modify a scheduling task
Click Edit to modify a scheduling task. The changes take effect from the next scheduling run.
Manual scheduling
In addition to manual scheduling tasks, you can also manually initiate scheduling for periodic scheduling tasks and triggered scheduling tasks.
Click Manual Scheduling to manually start scheduling.
View results
After the scheduling task has been executed at least once, the task list displays the status of the last execution (Execution Succeeded or Execution Failed).
If the execution fails, move the pointer over Execution Failed. The failure cause is displayed on the page to help you troubleshoot the issue.

Click Execution Records to view the execution records and status of each execution of the task, as shown in the following figure.
Click the expand icon to show the execution status of each table included in a scheduling run.
Remove a scheduling task
In the task list, choose the more icon > Remove. After you confirm the deletion, the scheduling task is deleted, but the data that the scheduling task has obtained is retained.
Appendix: Sample scripts for triggering scheduling
The following sample scripts are used to trigger an import scheduling task in Dataphin and DataWorks.
Dataphin
In Dataphin, perform the following steps to trigger an import scheduling task by using a Shell periodic task:
Add a sandbox whitelist: add all the URLs that are generated when you create a triggered scheduling task as the addresses that you want to access. For more information, see Add project members.
Create a Shell task and set the scheduling type to Periodic Task. For more information, see Create shell task.
If the same scheduling task contains multiple tables, multiple trigger URLs are generated, and multiple Shell periodic tasks must be created accordingly.
An example of the Shell script is as follows:
#!/bin/bash
# Replace {Trigger URL} with the trigger URL that you copied.
QA_TRIGGER_URL="{Trigger URL}"
echo $(date "+%Y-%m-%d %H:%M:%S") "QuickAudience Trigger scheduling Start."
echo $(date "+%Y-%m-%d %H:%M:%S") "QuickAudience Trigger scheduling Url:" $QA_TRIGGER_URL
# Send the trigger request; an empty response is treated as a failure.
result=$(curl -k -s ${QA_TRIGGER_URL})
if [ ! -n "$result" ]; then
    echo $(date "+%Y-%m-%d %H:%M:%S") "QuickAudience Trigger Failed" $result
else
    echo $(date "+%Y-%m-%d %H:%M:%S") "QuickAudience Trigger scheduling Response:" $result
    echo $(date "+%Y-%m-%d %H:%M:%S") "QuickAudience Trigger scheduling End."
fi
Optional: Click Execute Shell Task to test whether the import scheduling task can be triggered, and check whether the output log contains the following success information:
QuickAudience Trigger scheduling Response: {"data":"true","errorCode":null,"errorDesc":null,"exStack":null,"opers":[],"solution":null,"success":true,"traceId":"0bc1409e16667784903588235e2ef1"}
Configure the scheduling of the Shell task. For more information, see Configure basic task information.
Click Create Upstream Dependency and associate the upstream node that produces data for the current table. When the upstream node has produced data and the configured scheduling time is reached, the Shell task is scheduled.
When all the Shell tasks that correspond to the trigger URLs of the import scheduling task have run, the import scheduling task is triggered.
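The sample script above treats any non-empty response as success. If you also want the Dataphin Shell task itself to fail when the trigger request is rejected, the following is a minimal variation (a sketch, not part of the official sample) that checks the response body for "success":true and exits with a non-zero code otherwise:
#!/bin/bash
# {Trigger URL} is a placeholder; replace it with the trigger URL that you copied.
QA_TRIGGER_URL="{Trigger URL}"
result=$(curl -k -s "${QA_TRIGGER_URL}")
echo "QuickAudience Trigger scheduling Response:" "$result"
# Fail the task unless the response reports success, so that a rejected
# trigger surfaces as a failed task run instead of only a log line.
if ! echo "$result" | grep -q '"success":true'; then
    echo "QuickAudience Trigger Failed"
    exit 1
fi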
DataWorks
DataWorks allows you to trigger an import scheduling task by using a Shell node or a PyODPS 3 node.
Create a Shell node
To trigger an import scheduling task by using a Shell node in DataWorks, perform the following steps:
Shell nodes must use an exclusive resource group for scheduling. For more information, see Create and use an exclusive resource group for scheduling. Otherwise, the Shell node fails to be scheduled and the following error message is displayed:
curl: (1) Protocol https not supported or disabled in libcurl
Create a Shell node. For more information, see Create a Shell node.
If the same scheduling task contains multiple tables, multiple trigger URLs are generated and multiple Shell nodes must be created accordingly.
An example of the Shell script is as follows:
#!/bin/bash
# Replace {Trigger URL} with the trigger URL that you copied.
QA_TRIGGER_URL="{Trigger URL}"
echo $(date "+%Y-%m-%d %H:%M:%S") "QuickAudience Trigger scheduling Start."
echo $(date "+%Y-%m-%d %H:%M:%S") "QuickAudience Trigger scheduling Url:" $QA_TRIGGER_URL
# Send the trigger request; an empty response is treated as a failure.
result=$(curl -k -s ${QA_TRIGGER_URL})
if [ ! -n "$result" ]; then
    echo $(date "+%Y-%m-%d %H:%M:%S") "QuickAudience Trigger Failed" $result
else
    echo $(date "+%Y-%m-%d %H:%M:%S") "QuickAudience Trigger scheduling Response:" $result
    echo $(date "+%Y-%m-%d %H:%M:%S") "QuickAudience Trigger scheduling End."
fi
Optional: Click Execute Shell to test whether the import scheduling task can be triggered, and check whether the output log contains the following success information:
QuickAudience Trigger scheduling Response: {"data":"true","errorCode":null,"errorDesc":null,"exStack":null,"opers":[],"solution":null,"success":true,"traceId":"0bc1409e16667784903588235e2ef1"}
Configure scheduling for the Shell node. For more information, see Configure scheduling dependencies.
As shown in the following figure, create an upstream dependency and associate it with the upstream node that produces data in the current data table. When the upstream node has produced data and the configured scheduling time is reached, the Shell node is scheduled.

When all the Shell nodes that correspond to the trigger URLs of the import scheduling task have run, the import scheduling task is triggered.
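Because a scheduling request that arrives while the task is already being scheduled is ignored, it is generally safe to retry a trigger request that returned an empty response. The following is a minimal retry sketch (an addition for illustration, not part of the official sample), using the same {Trigger URL} placeholder:
#!/bin/bash
# {Trigger URL} is a placeholder; replace it with the trigger URL that you copied.
QA_TRIGGER_URL="{Trigger URL}"
# Retry up to 3 times when the response is empty, for example after a transient network error.
for attempt in 1 2 3; do
    result=$(curl -k -s "${QA_TRIGGER_URL}")
    if [ -n "$result" ]; then
        echo "Attempt ${attempt} succeeded, response:" "$result"
        break
    fi
    echo "Attempt ${attempt} returned an empty response, retrying..."
    sleep 5
done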
PyODPS 3 node
To trigger an import scheduling task by using a PyODPS 3 node in DataWorks, perform the following steps:
PyODPS 3 nodes can be scheduled by using the public resource group or an exclusive resource group for scheduling.
The public resource group is automatically activated. For more information, see Public resource groups.
For more information about exclusive resource groups for scheduling, see Exclusive Resources for Scheduling Group.
Create a PyODPS 3 node. For more information, see PyODPS 3 node.
If the same scheduling task contains multiple tables, multiple trigger URLs are generated and multiple PyODPS 3 nodes must be created accordingly.
An example of the Python 3 script is as follows:
import requests
from datetime import datetime

# Replace {Address that triggers scheduling} with the trigger URL that you copied.
QA_TRIGGER_URL = "{Address that triggers scheduling}"
print(datetime.now().strftime("%Y-%m-%d %H:%M:%S") + " QuickAudience Trigger scheduling Start.")
print(datetime.now().strftime("%Y-%m-%d %H:%M:%S") + " QuickAudience Trigger scheduling Url:" + QA_TRIGGER_URL)
# Send the trigger request and print the response body.
response = requests.get(QA_TRIGGER_URL)
print(datetime.now().strftime("%Y-%m-%d %H:%M:%S") + " QuickAudience Trigger scheduling Response:" + response.text)
print(datetime.now().strftime("%Y-%m-%d %H:%M:%S") + " QuickAudience Trigger scheduling End.")
Optional: Click Run PyODPS 3 to test whether the import scheduling task can be triggered, and check whether the output log contains the following success information:
QuickAudience Trigger scheduling Response: {"data":"true","errorCode":null,"errorDesc":null,"exStack":null,"opers":[],"solution":null,"success":true,"traceId":"0bc1409e16667784903588235e2ef1"}
Configure scheduling for the PyODPS 3 node. For more information, see Configure scheduling dependencies.
As shown in the following figure, create an upstream dependency and associate it with the upstream node that produces data in the current data table. When the upstream node has produced data and the configured scheduling time is reached, the PyODPS 3 node is scheduled.

When all the PyODPS 3 nodes that correspond to the trigger URLs of the import scheduling task have run, the import scheduling task is triggered.