Dataphin: Create a GoldenDB data source

Last Updated: May 28, 2025

By creating a GoldenDB data source, you can enable Dataphin to read business data from GoldenDB or write data to GoldenDB. This topic describes how to create a GoldenDB data source.

Permission requirements

Only users who have the Create Data Source permission in a custom global role, as well as users who have the super administrator, data source administrator, domain architect, or project administrator role, can create data sources.

Procedure

  1. On the Dataphin homepage, click Management Hub > Datasource Management in the top navigation bar.

  2. On the Datasource page, click +Create Data Source.

  3. On the Create Data Source page, select GoldenDB in the Relational Database section.

    If you have recently used GoldenDB, you can also select it in the Recently Used section, or enter a keyword in the search box to quickly find GoldenDB.

  4. On the Create GoldenDB Data Source page, configure the parameters for connecting to the data source.

    1. Configure the basic information of the data source.

      Data Source Name

      The name must meet the following requirements:

      • It can contain only Chinese characters, letters, digits, underscores (_), and hyphens (-).

      • It cannot exceed 64 characters in length.

      Datasource Code

      After you configure the data source code, you can reference tables in this data source in Flink SQL tasks in the data source code.table name or data source code.schema.table name format. To automatically access the data source that corresponds to the current environment, use the variable format ${data source code}.table or ${data source code}.schema.table. For more information, see Development method for Dataphin data source tables.

      Important
      • The data source code cannot be modified after it is configured successfully.

      • After the data source code is configured successfully, you can preview data on the object details page in the asset directory and asset inventory.

      • Currently, only MySQL, Hologres, MaxCompute, Oracle, StarRocks, Hive, and SelectDB data sources can be referenced this way in Flink SQL.

      Version

      You can select GoldenDB v5.

      Data Source Description

      The description of the GoldenDB data source. It cannot exceed 128 characters in length.

      Data Source Configuration

      Based on whether the business data source distinguishes between production and development data sources:

      • If the business data source distinguishes between production and development data sources, select Production + Development Data Source.

      • If the business data source does not distinguish between production and development data sources, select Production Data Source.

      Tag

      You can categorize and tag data sources based on tags. For information about how to create tags, see Manage data source tags.

    2. Configure the connection parameters between the data source and Dataphin.

      If you select Production + Development Data Source for Data Source Configuration, you must configure the connection information for both the production and the development data source. If you select Production Data Source, you only need to configure the connection information for the production data source.

      Note

      In general, production and development data sources should be configured as different data sources to achieve environment isolation between them and reduce the impact of development data sources on production data sources. However, Dataphin also supports configuring them as the same data source with identical parameter values.

      JDBC URL

      The connection URL is in the format jdbc:mysql://host:port/dbname. Example: jdbc:mysql://192.168.*.212:3309/dataphin. A client-side connection sketch follows this parameter list.

      Username and Password

      The username and password used to log on to the database.
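
      Below is a minimal client-side connectivity sketch in Java. It assumes that GoldenDB is reached through a MySQL-compatible JDBC endpoint (as the jdbc:mysql:// URL format suggests) and that a MySQL-compatible JDBC driver such as MySQL Connector/J is on the classpath; the host, port, database name, and credentials are placeholders rather than values from this topic.

      ```java
      // Client-side connectivity sketch for a MySQL-compatible GoldenDB endpoint.
      // Replace the placeholders with the values you enter on the Create GoldenDB
      // Data Source page. Requires a MySQL-compatible JDBC driver (for example,
      // MySQL Connector/J) on the classpath.
      import java.sql.Connection;
      import java.sql.DriverManager;

      public class GoldenDbConnectionCheck {
          public static void main(String[] args) throws Exception {
              String url = "jdbc:mysql://<host>:<port>/<dbname>"; // same format as the JDBC URL parameter
              String user = "<username>";                         // database logon account
              String password = "<password>";

              // A successful connection prints the server version reported by the database.
              try (Connection conn = DriverManager.getConnection(url, user, password)) {
                  System.out.println("Connected: " + conn.getMetaData().getDatabaseProductVersion());
              }
          }
      }
      ```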

    3. Configure advanced settings for the data source.

      connectTimeout

      The connection timeout (connectTimeout) of the database, in milliseconds. The default value is 900,000 milliseconds (15 minutes).

      Note
      • If you have configured connectTimeout in the JDBC URL, the connectTimeout value is the timeout period configured in the JDBC URL.

      • For data sources created before Dataphin V3.11, the default connectTimeout value is -1, which indicates no timeout limit.

      socketTimeout

      The socket timeout (socketTimeout) of the database, in milliseconds. The default value is 1,800,000 milliseconds (30 minutes).

      Note
      • If you have configured socketTimeout in the JDBC URL, the socketTimeout value is the timeout period configured in the JDBC URL.

      • For data sources created before Dataphin V3.11, the default socketTimeout value is -1, which indicates no timeout limit.

      Connection Retries

      If the database connection times out, the system automatically retries the connection until the specified number of retries is reached. If the connection still fails after the last retry, the connection attempt fails. A client-side sketch of how the timeouts and retries interact follows this parameter list.

      Note
      • The default number of retries is 1. You can set a value from 0 to 10.

      • The number of connection retries is applied by default to offline integration tasks and global quality (requires the asset quality feature to be enabled). You can separately configure the number of retries at the task level in offline integration tasks.
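
      For illustration, the following Java sketch (same placeholders and driver assumption as the earlier connection sketch) passes connectTimeout and socketTimeout as JDBC URL parameters, in which case they take precedence over the values configured on this page, and retries a failed connection a bounded number of times.

      ```java
      // Sketch of how connectTimeout, socketTimeout, and a bounded retry count play
      // out on the client side. connectTimeout and socketTimeout are standard MySQL
      // Connector/J URL options in milliseconds; the values below mirror the
      // Dataphin defaults (15 and 30 minutes). Placeholders as in the earlier sketch.
      import java.sql.Connection;
      import java.sql.DriverManager;
      import java.sql.SQLException;

      public class GoldenDbRetrySketch {
          public static void main(String[] args) {
              // Timeouts set in the JDBC URL take precedence over the values configured on this page.
              String url = "jdbc:mysql://<host>:<port>/<dbname>"
                      + "?connectTimeout=900000"   // 900,000 ms = 15 minutes
                      + "&socketTimeout=1800000";  // 1,800,000 ms = 30 minutes
              int retries = 1;                     // Connection Retries: retried once after the first failed attempt

              for (int attempt = 0; attempt <= retries; attempt++) {
                  try (Connection conn = DriverManager.getConnection(url, "<username>", "<password>")) {
                      System.out.println("Connected on attempt " + (attempt + 1));
                      return;
                  } catch (SQLException e) {
                      System.err.println("Attempt " + (attempt + 1) + " failed: " + e.getMessage());
                  }
              }
              System.err.println("Connection still failing after " + retries + " retry(ies).");
          }
      }
      ```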

  5. Select a Default Resource Group that is used to run tasks related to the current data source, including database SQL, offline database migration, and data preview.

  6. Click Test Connection or directly click OK to save and complete the creation of the GoldenDB data source.

    When you click Test Connection, the system tests whether the data source can connect to Dataphin. If you directly click OK, the system automatically tests the connectivity of all selected clusters. However, even if all of these connectivity tests fail, the data source can still be created.