
Dataphin: Details of physical tables and fields in Transwarp ArgoDB/TDH

Last Updated: Nov 18, 2025

This topic describes how to view the details of physical tables and fields for the Transwarp ArgoDB and Transwarp Data Hub (TDH) compute engines.

Access the physical table details page

  1. On the Dataphin homepage, choose Administration > Asset Checklist from the top menu bar.

  2. Click the Table tab. You can filter physical tables by table type.

  3. In the list of physical tables, click the name of the target physical table or the image icon in the Actions column to open the object details page.

Physical table details

The details pages for physical tables in Transwarp ArgoDB and TDH are largely the same. This topic uses a Transwarp TDH physical table as an example.


Overview

Displays information such as the table type, environment, name, tags, and description. You can also perform the following operations:

  • Search for other asset objects: Quickly search for and switch to the details of other assets.

  • View Asset Details: If the current object is listed in the Asset Directory, you can go to the directory details page to view the listing information.

  • View Production/Development Object: Quickly switch to the object details in the corresponding production or development environment.

  • Tag: Displays the tag values configured for the current asset. To modify tags, click Edit.

    • Each tag value can be up to 128 characters long.

    • You can configure a maximum of 20 tag values for each asset object.

    • Super administrators can modify asset tags for all table types. The current table owner can modify asset tags for tables they own. Project administrators can modify asset tags for physical tables in the projects they manage.

  • Favorite: Click to add the asset to or remove it from your favorites. After you add an asset to your favorites, you can view the 30 most recently added assets in the Asset Checklist under My Footprints for quick access. You can also view all your favorite assets in the Personal Data Center. For more information, see View and manage my favorite assets.

  • Go Analysis: Click to go to the Notebook page and automatically create a Notebook. For more information, see Create a Notebook.

  • Request Permission: Click to go to the permission request page for the current table. For more information, see Request, renew, and return table permissions.

  • Feedback Quality Issues: Use this feature to report quality issues, such as unstable data output or inaccurate data content, to the current quality owner. This notifies the relevant personnel to make timely corrections, which helps improve asset availability and health. For more information about the configuration, see Add and manage the issue checklist.

    You must activate the Data Quality module to use the feedback feature. You can then view the handling process and results of the feedback in the issue checklist of the Data Quality module.

  • Generate Select Statement: Click to generate a SELECT statement for the current table. You can choose whether to add escape characters. Copy the statement to the ad hoc query or analysis page to query data.


  • View DDL Statement: In the upper-right corner, click More and select View DDL Statement to view the Data Definition Language (DDL) statement for the current data table in the tenant's compute engine. You can also select a Data Source Type and click Generate DDL Statement. The system generates a DDL statement to create a table with the same structure as the current table in the specified data source type. If you select Automatically Add Escape Characters, the generated DDL statement automatically includes the appropriate escape characters for the selected source type. This reduces issues such as the misinterpretation of system keywords.


  • Edit Table: In the upper-right corner, click More and select Edit Table. You are redirected to the Development > Table Management page to edit the DDL information of the table. For more information, see Create an offline physical table.

  • Export Fields: In the upper-right corner, click More and select Export Fields. This exports the field information from the table in CSV format, which allows other developers or business personnel to quickly analyze and use the data.

  • View Transfer Records: In the upper-right corner, click More and select View Transfer Records to view the 100 most recent owner transfer records for the current data table.

  • View Permission List: In the upper-right corner, click More and select View Permission List to view the data table's permission information.

  • Refresh Metadata: In the upper-right corner, click More and select Refresh Metadata. If a data table was not created on the Dataphin platform, or if a query for a new table returns no results because of a metadata retrieval delay, this feature retrieves the latest metadata and refreshes the table's metadata in the Dataphin system.

Note

The Go Analysis, Request Permission, Feedback Quality Issues, Edit Table, View Transfer Records, and View Permission List operations are not supported for analysis platform tables.
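To make the escape-character option above concrete, the following is a minimal sketch, not Dataphin's implementation: the dialect names, quote characters, and helper functions are illustrative. It shows how a generated SELECT statement can quote identifiers so that column names that collide with reserved keywords, such as `order`, still parse in the target engine. The same quoting logic applies when a DDL statement is generated for a selected data source type.

```python
# Hypothetical sketch of "Automatically Add Escape Characters":
# wrap each identifier in the quote character of the target dialect.
QUOTE_CHARS = {
    "mysql": "`",
    "hive": "`",
    "postgresql": '"',
}

def escape_identifier(name: str, dialect: str) -> str:
    q = QUOTE_CHARS[dialect]
    # Double any embedded quote character, then wrap the identifier.
    return q + name.replace(q, q * 2) + q

def generate_select(table: str, columns: list[str], dialect: str = "hive") -> str:
    cols = ", ".join(escape_identifier(c, dialect) for c in columns)
    return f"SELECT {cols} FROM {escape_identifier(table, dialect)};"

print(generate_select("dim_user", ["id", "order", "name"]))
# "order" is a SQL keyword; escaping keeps the statement valid:
# SELECT `id`, `order`, `name` FROM `dim_user`;
```

Without escaping, copying such a statement into an ad hoc query could fail with a keyword parse error, which is the misinterpretation issue the feature is designed to avoid.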

Details

Displays table, field, and partition information.

  • Detail: Displays the table's properties, including data category, subject area, project, highest sensitivity level (requires purchasing the Data Security feature), whether it is a partitioned table, whether it is a manually created table on the analysis platform, whether it is a lakehouse table, lakehouse table format, table storage mode, storage class, storage format, and storage size.

    • Project: The project to which the current table belongs. Click the project name to go to the asset details page of that project.

    • Highest sensitivity level: The highest sensitivity level among the fields in the current table. This helps you quickly understand the data confidentiality of the table. Data classification levels range from L1 (Public) to L4 (Top Secret), and you can also use custom data classifications.

    • Table storage mode: If the data table is a lakehouse table and its format is Hudi, this information is collected from the table's compute source.

    • Storage class: The storage class of the current table, which can be an internal table or a foreign table.

    • Storage size: The actual storage size of the current table. This is updated on a T+1 basis.

      Note

      When the compute engine is Transwarp TDH, you can view whether the table is a lakehouse table, its format, and its storage mode.

  • Field Information:

    • The field list includes field details, description, data type, associated standards (requires purchasing the Data Standard module), data classification (requires purchasing the Data Security module), sample data (displayed only when the data sampling feature is enabled), sensitivity level (requires purchasing the Data Security module), and heat information. You can also search for fields, filter them, and view their lineage.

      • View lineage: In the Actions column, click the lineage icon image to view the field lineage centered on the specified field.

      • Search and filter: You can search for fields by name or description. You can also filter fields by data classification and sensitivity level (requires activating Data Security).

    • Value partitions and range partitions are supported.

  • Partition Info: View the partition information of the data table. Multi-level partitions are displayed as a combination of each level, separated by a forward slash (/). For example: ds=20221001/pt1=a/pt2=b. Note: The partition record count and storage size are for reference only.


    Note
    • For a value partition, the partition name, record count, storage size, and creation time are displayed.

    • For a range partition, the partition name, partition filter expression, record count, storage size, and creation time are displayed.
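As a small illustration of the multi-level partition format described above (a sketch only, not Dataphin code), a spec such as `ds=20221001/pt1=a/pt2=b` can be split into ordered key/value pairs, one per partition level:

```python
# Parse a multi-level partition spec into ordered key/value pairs.
# Levels are separated by "/" and each level is "column=value".
def parse_partition_spec(spec: str) -> dict[str, str]:
    pairs = (level.split("=", 1) for level in spec.split("/"))
    return {key: value for key, value in pairs}

spec = "ds=20221001/pt1=a/pt2=b"
print(parse_partition_spec(spec))
# {'ds': '20221001', 'pt1': 'a', 'pt2': 'b'}
```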

Lineage & Impact

  • Lineage displays the data lineage between tables and fields.

    • Table-level lineage sources include sync tasks, SQL compute tasks, and logical table tasks for which the system can automatically parse lineage, custom lineage that is manually configured in compute tasks, and external lineage that is registered using OpenAPI. For more information, see Table-level lineage.

    • Field lineage sources include sync tasks (field-level lineage can be parsed for only some data sources), SQL compute tasks, and logical table tasks for which the system can automatically parse lineage, custom lineage that is manually configured in compute tasks, and external lineage that is registered using OpenAPI. For more information, see Field-level lineage.

      Note
      • For lineage that is automatically parsed by the system and lineage that is manually configured in compute tasks, the system parses the table and field lineage in the development environment when a task is submitted. When the task is published, the system parses the table and field lineage in the production environment. A single task submission or publication supports parsing of up to 100,000 lineage relationships. If this limit is exceeded, the relationships are not recorded and cannot be displayed in the asset checklist.

      • Deleting a task also deletes the lineage associated with the physical table. If you delete only the physical table but not the task associated with the lineage, the lineage relationship persists. In the lineage graph, the corresponding table node is displayed as uncollected or deleted.

      • For lineage registered using OpenAPI, the lineage relationship takes effect immediately after the system call is successful.

      • When a real-time sync task is submitted or unpublished, the system parses the table's lineage. After a real-time sync task starts running, the system cannot parse the lineage for tables that are added or deleted.

      • After a real-time sync task is unpublished, the lineage relationship is deleted. This affects only tasks that are unpublished on the real-time sync task page. Instances that are taken offline on the O&M page are not affected.

  • Impact is divided into data table impact and sync impact.

    • Data table impact: Displays the child tables that reference the current table, and the descendant tables that reference the child tables. You can export data and display only child tables.

      • Export Data: Export the data to an Excel file for business personnel to browse.

      • Show Only Child Tables: When selected, only the downstream tables directly affected by the current table are displayed. When cleared, all affected descendant tables are displayed, up to 15 levels. The child tables of the current table are considered the first level.

    • Sync impact: Displays the sync tasks where the current table and its descendant tables are used as source tables. You can export data and display only the impact of the current table.

      • Export Data: Export the data to an Excel file for business personnel to browse.

      • Show Only Current Table Impact: When selected, only the sync tasks where the current table is an input table are displayed. When cleared, the sync tasks where the current table and all its affected descendant tables are input tables are displayed, up to 15 levels. The child tables of the current table are considered the first level.
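The descendant-table traversal described above can be sketched as a breadth-first walk over a downstream-lineage graph, capped at 15 levels, where the child tables of the starting table form the first level. This is an illustrative model, not Dataphin's implementation; the graph structure and function names are hypothetical.

```python
from collections import deque

# Breadth-first traversal of downstream lineage, capped at max_depth levels.
# Child tables of `start` are level 1.
def downstream_tables(graph: dict[str, list[str]], start: str, max_depth: int = 15) -> set[str]:
    seen, queue = set(), deque([(start, 0)])
    while queue:
        table, depth = queue.popleft()
        if depth == max_depth:
            continue  # do not expand beyond the level cap
        for child in graph.get(table, []):
            if child not in seen:
                seen.add(child)
                queue.append((child, depth + 1))
    return seen

# Hypothetical lineage: a -> b -> c, and a -> d
graph = {"a": ["b", "d"], "b": ["c"]}
print(sorted(downstream_tables(graph, "a")))               # ['b', 'c', 'd']
print(sorted(downstream_tables(graph, "a", max_depth=1)))  # ['b', 'd']
```

Setting `max_depth=1` corresponds to the Show Only Child Tables option: only the directly affected downstream tables are returned.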

Quality overview

If you have activated the Data Quality feature, the system displays a rule validation overview and a list of quality monitoring rules for the current data table. Click View Report Details or View Rule Details to go to the corresponding page in the Data Quality module for more details.

Note

You cannot view the quality overview for analysis platform tables.

Data exploration

If you have activated the Data Quality feature, you can configure data exploration tasks for the data table. This helps you quickly understand the data profile and assess its availability and potential threats in advance. To enable automatic exploration, go to Administration > Metadata Center > Exploration and Analysis and enable the corresponding configuration. For more information, see Create a data exploration task.

Note

When the compute engine is Transwarp TDH, if the current table is a lakehouse table, the query engine defaults to Spark SQL. You must enable Spark SQL tasks in the compute source configuration where the current table is located to perform data exploration.

Data preview

If sample data exists for the data table, the sample data is displayed by default. You can also manually trigger a query to retrieve the latest data. If no sample data exists, a data preview query is automatically triggered.

  • Sample data: Displayed when the data sampling switch and the data preview switch in the usage configuration are both enabled. You can query only the sample data for fields for which you have column-level permissions and that do not require masking. The system stores and displays the sample data for each field independently, but does not guarantee the existence or correctness of row records.

  • Data preview: If you have permissions to query data in the current table, you can use the data preview feature. You can query only the results for fields on which you have SELECT permissions, including field-level and row-level permissions. You can preview the first 50 data entries. For more information about how to request permissions, see Request, renew, and return table permissions.

For the queried data, you can search or filter by field, view single-row details, automatically adjust column widths, and transpose rows and columns. You can also click the sorting icon next to a field to apply No Sorting, Ascending, or Descending order. Double-click a field value to copy it.

Note

When the compute engine is Transwarp TDH and the current table is a lakehouse table, the query engine defaults to Spark SQL. You must enable Spark tasks in the compute source configuration where the current table is located to perform a data preview.
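The permission filtering and row cap described above can be sketched as follows (illustrative only; the function, table, and field names are hypothetical, and Dataphin's actual query construction is not documented here): the preview query selects only the fields you are permitted to see and limits the result to the first 50 rows.

```python
# Build a preview query restricted to permitted fields, capped at 50 rows.
def preview_query(table: str, all_fields: list[str], permitted: set[str], limit: int = 50) -> str:
    cols = [f for f in all_fields if f in permitted]
    if not cols:
        raise PermissionError("no permitted fields to preview")
    return f"SELECT {', '.join(cols)} FROM {table} LIMIT {limit};"

# Hypothetical table with a masked field ("phone") the user cannot read:
print(preview_query("dwd_orders", ["id", "amount", "phone"], {"id", "amount"}))
# SELECT id, amount FROM dwd_orders LIMIT 50;
```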

Output information

Output tasks include data write tasks for the object, tasks where the current table is the output table due to automatic lineage parsing or custom configuration, and nodes where the output name is `Project name.Table name`.

The output task list is updated in near real-time. The output details are updated on a T+1 basis.


  • View Output Details: You can view output details for auto-triggered tasks only. For more information, see Output details.

  • Go To O&M: Click the Go To O&M button to go to the task list page in the Operation Center. The current task is automatically filtered so that you can view more information.

Usage instructions

You can add usage instructions for the data table to provide reference information for data browsers and consumers. Click Add Usage Instructions, then enter a title and content to add the instructions.

Asset information

Displays detailed information about the physical table, such as Basic Information, Change Information, and Usage Information.

  • Basic Information: Includes environment, table type, creation time, creator, owner, and output tasks.

    • Owner: The owner of the current table. You can change the owner of the current table to another user. In the Change Owner dialog box, you can choose whether to also change the owner of the table in the development and production environments. After you select a Recipient, click OK to transfer ownership immediately. We recommend that you notify the recipient promptly after the transfer. You can view transfer information on the transfer records page. For more information, see View transfer records.

      Note
      • Super administrators can change the owner of all table types. The current table owner can change the owner of tables they own.

      • Project administrators can change the owner of physical tables in the projects they manage.

    • Output Task: You can view the output tasks for the current table. These include data write tasks for the object, tasks where the current table is an output table due to lineage parsing or configuration, and nodes where the output name is `Project name.Table name`. Click the name of an output task to go to the O&M details page for that data table.

      Note

      You can view output details for auto-triggered tasks only.

  • Change Information: Includes data changes, recent access, and DDL changes.

    • Data Updated At: The time of the most recent table content change (corresponding to DML operations) as parsed by Dataphin from SQL. Changes triggered by external systems are not counted. This is updated in real time.

    • Last Accessed At: The time of the most recent SELECT operation (corresponding to DQL operations) as parsed by Dataphin from SQL. Access triggered by external systems is not counted. This is updated in real time.

    • Last DDL Time: The time of the most recent table schema change (corresponding to DDL operations) as parsed by Dataphin from SQL. Changes triggered by external systems are not counted. This is updated in real time.

  • Usage Information: Includes the number of favorites, page views, and visits.

    • Number Of Favorites: The number of users who have added the current table to their favorites. This is updated in real time.

    • Page Views: The number of page views (PV) for the current data table. Each refresh increases the count by one. This is updated in real time.

    • Visits: The number of times the table has been selected in a Dataphin task (corresponding to DQL operations), as parsed from SQL. Each selection is counted as one visit. This is updated on a T+1 basis and shows the total access count for the last 30 days.

Field details

The details of the data table that contains the current field. For more information, see Physical table details.