
DataWorks:Create and use AnalyticDB for PostgreSQL nodes

Last Updated: Mar 27, 2026

AnalyticDB for PostgreSQL nodes let you write, schedule, and run AnalyticDB for PostgreSQL tasks in DataStudio, and integrate them with other node types in a workflow.

Prerequisites

Before you begin, make sure that the required prerequisites for your workspace are met.

Background information

AnalyticDB for PostgreSQL nodes connect to Alibaba Cloud AnalyticDB for PostgreSQL. For more information, see AnalyticDB for PostgreSQL.

Step 1: Create an AnalyticDB for PostgreSQL node

  1. Go to the DataStudio page. Log on to the DataWorks console. In the top navigation bar, select a region. In the left-side navigation pane, choose Data Development and O&M > Data Development. Select a workspace from the drop-down list, and then click Go to Data Development.

  2. In the Scheduled Workflow pane, right-click the target workflow and choose Create Node > AnalyticDB for PostgreSQL > ADB for PostgreSQL.

  3. In the Create Node dialog box, enter a name in the Name field and click Confirm. The node is created. You can develop and configure a task based on the node.

Step 2: Develop an AnalyticDB for PostgreSQL task

(Optional) Select a computing resource

If your workspace has multiple AnalyticDB for PostgreSQL computing resources, select one on the configuration tab of the node. If only one computing resource is configured, it is used by default.

Write SQL code

In the code editor of the node, write SQL statements using the syntax supported by AnalyticDB for PostgreSQL.
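For example, the following statements sketch a minimal task. The table and column names are illustrative only, not part of the product; the DISTRIBUTED BY clause is part of AnalyticDB for PostgreSQL table-creation syntax.

```sql
-- Hypothetical example table; replace with your own schema.
CREATE TABLE IF NOT EXISTS sales_daily (
    sale_date   date,
    region      varchar(32),
    amount      numeric(18, 2)
) DISTRIBUTED BY (sale_date);

-- Aggregate one day's data into a hypothetical summary table.
INSERT INTO sales_summary
SELECT sale_date, region, SUM(amount)
FROM sales_daily
WHERE sale_date = DATE '2026-03-01'
GROUP BY sale_date, region;
```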

Step 3: Configure scheduling properties

If you want to periodically run the AnalyticDB for PostgreSQL task on the created node, click Properties in the right-side navigation pane of the node's configuration tab and configure the scheduling properties for the task. For details, see Scheduling properties overview.

Important

Configure the Rerun and Parent Nodes parameters before you commit the task.
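For example, if the task should process the previous day's data, you can reference a scheduling parameter in the node code and assign it a value expression in the Properties panel. The parameter name bizdate below is illustrative; the `$[yyyymmdd-1]` expression is a common DataWorks assignment that resolves to the day before the data timestamp.

```sql
-- In the node code, reference the scheduling parameter.
-- At run time, ${bizdate} is replaced with the assigned value,
-- for example 20260326 when bizdate=$[yyyymmdd-1] is set in Properties.
SELECT region, SUM(amount)
FROM sales_daily
WHERE sale_date = to_date('${bizdate}', 'YYYYMMDD')
GROUP BY region;
```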

Step 4: Debug the task

  1. (Optional) Select a resource group and assign values to scheduling parameters. Click the Advanced Run icon in the top toolbar. In the Parameters dialog box, select a resource group for scheduling. If your task code uses scheduling parameters, assign values to those parameters here for debugging. For the value assignment logic, see Debugging procedure.

  2. Save and run the SQL statements. Click the Save icon in the top toolbar to save, then click the Run icon to run the SQL statements.

  3. (Optional) Perform smoke testing. Run smoke testing in the development environment when committing the task or after committing it. For details, see Perform smoke testing.

Step 5: Commit and deploy the task

You can commit the task only after you configure the Rerun and Parent Nodes parameters.
  1. Click the Save icon in the top toolbar to save the task.

  2. Click the Submit icon to commit the task. In the Submit dialog box, fill in the Change description field. Then decide whether to enable code review for the committed code. If code review is enabled, the committed code can be deployed only after it passes review. For details, see Code review.

  3. If your workspace is in standard mode, deploy the task to the production environment after committing it. Click Deploy in the upper-right corner of the configuration tab. For details, see Deploy nodes.

What's next

After the task is committed and deployed, it runs on the schedule you configured. To monitor execution, click Operation Center in the upper-right corner of the configuration tab. For details, see View and manage auto triggered nodes.