
DataWorks: SAP HANA

Last Updated: Mar 27, 2026

Use the SAP HANA node in DataWorks to write SQL, configure recurring schedules, and integrate SAP HANA tasks with other jobs in a Business Flow. SAP HANA is a high-performance in-memory database and application platform that combines database, data processing, and application platform features to provide enterprise-level in-memory computing capabilities. For more information, see SAP HANA.

Prerequisites

Before you begin, make sure you have the following:

  • A Business Flow created in DataStudio. DataStudio organizes development by Business Flows, and every node must belong to one. See Create a workflow.

  • An SAP HANA data source added to your workspace, created using a Java Database Connectivity (JDBC) connection string. Other connection types are not supported. See Data Source Management and SAP HANA data source.

  • Network connectivity between the data source and the resource group you plan to use. See Network connection solutions.

  • (Required for RAM users) The RAM user added to your workspace with the Develop or Workspace Administrator role. Assign the Workspace Administrator role with caution due to its elevated privileges. See Add members to a workspace.
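When you add the SAP HANA data source, the JDBC connection string follows the standard SAP HANA JDBC driver URL format. A hedged sketch (the host, port, and database name below are placeholders, not values from this document; check your own instance details):

```
jdbc:sap://hana.example.com:39015/?databaseName=MYDB
```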

Supported regions

China (Hangzhou), China (Shanghai), China (Beijing), China (Shenzhen), China (Chengdu), China (Hong Kong), Singapore, Malaysia (Kuala Lumpur), Germany (Frankfurt), US (Silicon Valley), and US (Virginia).

Step 1: Create an SAP HANA node

  1. Go to the DataStudio page. Log on to the DataWorks console. In the top navigation bar, select a region. In the left-side navigation pane, choose Data Development and O&M > Data Development. Select the target workspace from the drop-down list and click Go to Data Development.

  2. Right-click the target Business Flow and choose Create Node > Database > Saphana.

  3. In the Create Node dialog box, enter a Name for the node and click Confirm. The node editing page opens. You can now develop and configure the task.

Step 2: Develop the SAP HANA task

Select a data source (optional)

If multiple SAP HANA data sources exist in your workspace, select the appropriate one on the node editing page. If only one is configured, it is used by default.

Note

SAP HANA nodes support only data sources created using a JDBC connection string.

Write SQL code

In the code editor, write the SQL for your task. The following example runs a simple query:

SELECT * FROM usertablename;
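Any SQL that the data source account is authorized to run can go in the editor. A minimal sketch of a fuller task, assuming illustrative table and column names that are not from this document:

```sql
-- Create a small column table, load a row, and query it back
CREATE COLUMN TABLE sales_demo (id INTEGER, amount DECIMAL(10,2));
INSERT INTO sales_demo VALUES (1, 99.50);
SELECT id, amount FROM sales_demo;
```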

Use scheduling parameters

DataWorks supports scheduling parameters that pass dynamic values into your code at runtime. Define variables using the ${variable_name} format, then assign values in the right-side navigation pane under Schedule > Scheduling Parameters.

The following example uses a scheduling parameter in a query:

SELECT '${var}';

For supported formats and configuration details, see Supported formats of scheduling parameters and Configure and use scheduling parameters.
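A common pattern is to filter a date column by the task's business date. A sketch, assuming a variable named bizdate is assigned a date-format value (for example, $[yyyymmdd-1]) under the node's Scheduling Parameters, and an illustrative orders table:

```sql
-- ${bizdate} is replaced with the assigned scheduling parameter value at runtime
SELECT order_id, amount
FROM orders
WHERE ds = '${bizdate}';
```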

Step 3: Configure task scheduling

Click Scheduling Configuration on the right side of the node editing page and set the scheduling properties. For a full reference, see Overview.

Important

Configure Rerun Property and Upstream Dependent Node before you submit the task.

Step 4: Debug the task

  1. (Optional) Select a debugging resource group and assign parameter values.

    • Click the Advanced Run icon in the toolbar. In the Parameters dialog box, select a resource group.

    • Assign values to any scheduling parameters used in the code. For parameter assignment logic, see Task debugging process.

  2. Save and run the task. Click the Save icon to save, then click the Run icon to run.

  3. (Optional) Run a smoke test during or after submission to verify execution in the development environment. See Perform smoke testing.

Step 5: Submit and publish the task

  1. Click the Save icon to save the node.

  2. Click the Submit icon to submit the node. In the Submit dialog box, enter a Change Description and select code review options.

    Note
    • Configure Rerun Property and Upstream Dependent Node before submitting.

    • If code review is enabled, a reviewer must approve the code before it can be published. See Code review.

  3. In standard mode workspaces, click Publish in the upper-right corner to deploy the task to production. See Publish tasks.

What's next

After the task is published, it runs on a recurring schedule based on the node configuration. To monitor its status, click O&M in the upper-right corner of the node configuration tab to open Operation Center. See Manage recurring tasks.