Offline physical tables help you centrally configure and manage the tables used in compute task development to improve development efficiency.
Limits
If you have not purchased the Data Standard module, you cannot set the Data Standard field for tables.
If you have not purchased the Asset Security module, you cannot set the Data Sensitivity Level and Data Classification fields for tables.
Only the MaxCompute, Hadoop, Lindorm, Databricks, data warehouse service (DWS), and AnalyticDB for PostgreSQL compute engines are supported.
Create an offline physical table
Step 1: Configure basic information
On the Dataphin home page, choose Develop > Data Development from the top menu bar.
In the top menu bar, select a Project. If you are in Dev-Prod mode, you must also select an Environment.
In the navigation pane on the left, choose Data Processing > Tables.
In the table list, click the icon and select Offline Physical Table.
In the Create Physical Table wizard, configure the following parameters. The required parameters vary based on the compute engine.
MaxCompute compute engine
Parameter
Description
Table Name
Enter a name for the offline physical table. The name can contain only letters, digits, and underscores (_). The name can be up to 128 characters in length.
Directory
Select the folder where the offline physical table is stored.
If no folder is created, create a new folder as follows:
Above the compute node list on the left, click the icon to open the Create Folder dialog box.
In the Create Folder dialog box, enter a folder Name, set Type to Offline, and set Select Directory as needed.
Click OK.
Data Domain (Optional)
Select the subject area to which the table belongs. If no subject areas are available, create one. For more information, see Create a subject area.
Description (Optional)
Enter a brief description. The description can be up to 1,000 characters in length.
Hadoop compute engine
Parameter
Description
Table Name
Enter a name for the offline physical table. The name can contain only letters, digits, and underscores (_). The name can be up to 128 characters in length.
Directory
Select the folder where the offline physical table is stored.
If no folder is created, create a new folder as follows:
Above the compute node list on the left, click the icon to open the Create Folder dialog box.
In the Create Folder dialog box, enter a folder Name, set Type to Offline, and set Select Directory as needed.
Click OK.
Data Domain (Optional)
Select the subject area to which the table belongs. If no subject areas are available, create one. For more information, see Create a subject area.
Description (Optional)
Enter a brief description. The description can be up to 1,000 characters in length.
Storage File Format
Select a storage format for the offline physical table. The following formats are supported:
hudi and delta (Delta Lake): These formats are available only if Spark SQL is enabled for the compute source of the project. If Spark SQL is not enabled for the compute source but the Default Storage Format in the table management settings of the development platform is set to hudi or delta (Delta Lake), the storage format defaults to Engine Default (can be specified in the CREATE TABLE statement). If you select hudi, the storage format clause in the CREATE TABLE statement is `using hudi`. If you select delta (Delta Lake), the clause is `using delta`.
paimon
iceberg
parquet: The storage format clause in the CREATE TABLE statement is `stored as parquet`.
avro: The clause is `stored as avro`.
rcfile: The clause is `stored as rcfile`.
orc: The clause is `stored as orc`.
textfile: The clause is `stored as textfile`.
sequencefile: The clause is `stored as sequencefile`.
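To make the mapping concrete, here is a minimal sketch of how the two clause styles appear in generated CREATE TABLE statements (the table and column names are hypothetical):

```sql
-- Lake formats (hudi, delta) use a USING clause:
CREATE TABLE t_orders_hudi (id BIGINT, ds STRING) USING hudi;

-- File formats (parquet, orc, and so on) use a STORED AS clause:
CREATE TABLE t_orders_orc (id BIGINT, ds STRING) STORED AS orc;
```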
Lindorm compute engine
Parameter
Description
Table Name
Enter a name for the offline physical table. The name can contain only letters, digits, and underscores (_). The name can be up to 128 characters in length.
Directory
Select the folder where the offline physical table is stored.
If no folder is created, create a new folder as follows:
Above the compute node list on the left, click the icon to open the Create Folder dialog box.
In the Create Folder dialog box, enter a folder Name, set Type to Offline, and set Select Directory as needed.
Click OK.
Data Domain (Optional)
Select the subject area to which the table belongs. If no subject areas are available, create one. For more information, see Create a subject area.
Description (Optional)
Enter a brief description. The description can be up to 1,000 characters in length.
Storage File Format
Select a storage format for the offline physical table. The default format is the same as the default storage format specified in the table management settings of the development platform. The following formats are supported.
Engine Default (can be specified in the CREATE TABLE statement): The CREATE TABLE statement does not contain a storage format setting clause such as `using` or `stored as`.
iceberg
parquet
avro
rcfile
orc
textfile
sequencefile
Databricks compute engine
Parameter
Description
Table Name
Enter a name for the offline physical table. The name can contain only letters, digits, and underscores (_). The name can be up to 128 characters in length.
Directory
Select the folder where the offline physical table is stored.
If no folder is created, create a new folder as follows:
Above the compute node list on the left, click the icon to open the Create Folder dialog box.
In the Create Folder dialog box, enter a folder Name, set Type to Offline, and set Select Directory as needed.
Click OK.
Data Domain (Optional)
Select the subject area to which the table belongs. If no subject areas are available, create one. For more information, see Create a subject area.
Description (Optional)
Enter a brief description. The description can be up to 1,000 characters in length.
Storage File Format
Select a storage format for the offline physical table. The default format is the same as the default storage format specified in the table management settings of the development platform. The following formats are supported.
Engine Default (can be specified in the CREATE TABLE statement): The CREATE TABLE statement does not contain a storage format setting clause such as `using` or `stored as`.
avro
binaryfile
csv
delta(Delta Lake)
json
orc
parquet
text
DWS/AnalyticDB for PostgreSQL compute engine
Parameter
Description
Table Name
Enter a name for the offline physical table. The name can contain only letters, digits, and underscores (_). The name can be up to 63 characters in length.
Directory
Select the folder where the offline physical table is stored.
If no folder is created, create a new folder as follows:
Above the compute node list on the left, click the icon to open the Create Folder dialog box.
In the Create Folder dialog box, enter a folder Name, set Type to Offline, and set Select Directory as needed.
Click OK.
Data Domain (Optional)
Select a subject area. This option is available only if the project is bound to a business segment.
Description (Optional)
Enter a brief description. The description can be up to 1,000 characters in length.
After you complete the configuration, click Next.
Step 2: Configure the field list
Different compute engines support different storage formats. For more information, see Table management settings. The supported configurations for the field list vary based on the Storage File Format, as shown in the following tables.
For offline physical tables that use the Databricks compute engine, if the storage format is set to Engine Default (can be specified in the CREATE TABLE statement), the supported configurations for the field list are the same as those for the delta (Delta Lake) storage format.
Storage format is hudi, delta (Delta Lake), iceberg, or paimon
On the Field List configuration page, configure the fields, data types, data classification, and other structural information for the physical table.

Area
Description
① Field list operations
Search: Search for a field by its name.
Edit DDL: Edit the Data Definition Language (DDL) statement for the current physical table.
Add Data Field: Click Add Data Field. In the new row, enter the Column Name, Data Type, and Remark.
Import from Create Table Statement: Use a CREATE TABLE statement to import new fields. Click Import From Create Table Statement. In the Import From Create Table Statement dialog box, enter the CREATE TABLE statement and click Parse SQL. From the parsed fields, select the fields that you want to import and click Add. A sample statement is shown after this table.
Import from Details Table: Click Import From Details Table. In the Import From Details Table dialog box, select the source table and the desired fields, and then click Add.
② Field list
The field list displays details about each field, including Serial Number, Column Name, Data Type, Remark, Recommend Standards, Data Class, and Data Sensitivity Level.
Serial Number: The ordinal number of the table field. The number increments for each new field.
Column Name: The name of the table field. You can enter the full name of a root word to search. The system automatically matches the root words that are configured in Administration > Data Standard > Root Word.
Data Type: Supported data types include string, bigint, double, timestamp, decimal, Text, Numeric, Datetime, and Other.
Text: varchar and char.
Numeric: int, smallint, tinyint, and float.
Datetime: date. The MaxCompute compute engine also supports datetime.
Note: The Hadoop compute engine does not support datetime.
Other: boolean and binary.
Remark: The description of the table field. The description can be up to 512 characters in length.
Recommend Standards: Select a data standard for the field. To create a standard, see Create and manage data standards.
Data Class: Select a data classification for the field. To create a data classification, see Create a data classification.
Data Sensitivity Level: After you select a data classification, the system automatically identifies the data sensitivity level.
You can also Delete a field in the Actions column.
Note: You cannot undo the deletion of a field.
③ Batch operations
You can select multiple table fields and perform the following batch operations.
Delete: Click the icon to delete the selected data fields in a batch.
Recommend Names: Click the icon. The system tokenizes the content in the Remark column, matches the tokens against existing root words, and recommends field names. In the Root Word Naming dialog box, you can replace the names of the selected fields with the recommended names.
Note: If a recommended field name does not meet your requirements, you can modify it in the Corrected Field Name box. Click Reset to change the Corrected Field Name back to the system-matched root word.
Recommend Standards: Click the icon. The system recommends data standards based on the field names. In the Data Standard dialog box, you can apply the recommended data standards to the fields.
After you add the fields, click Next.
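The Import From Create Table Statement dialog box expects a standard CREATE TABLE statement. As a hedged illustration (the table and column names are hypothetical), a statement like the following parses into three importable fields:

```sql
-- Hypothetical example; the COMMENT text is expected to map to the field Remark.
CREATE TABLE dwd_customer (
  customer_id   BIGINT    COMMENT 'customer ID',
  customer_name STRING    COMMENT 'customer name',
  gmt_create    TIMESTAMP COMMENT 'creation time'
) USING delta;
```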
Storage format is not hudi, delta (Delta Lake), iceberg, or paimon
On the Field List configuration page, configure the fields, data types, data classification, and other structural information for the physical table.

Area
Description
① Field list operations
Search: Search for a field by its name.
Edit DDL: Edit the DDL statement for the current physical table.
Add Field: Click Add Field and select a field type: Data Field, Partition Field, or Quick Add Date Partition. In the new row, enter the Column Name, Data Type, and Remark.
Data Field: Adds a data field row to the table fields.
Partition Field: Adds a partition field row to the table fields.
Quick Add Date Partition: Adds a date partition row to the table fields. The default partition field name is ds.
Import from Create Table Statement: Use a CREATE TABLE statement to import new fields. Click Import From Create Table Statement. In the Import From Create Table Statement dialog box, enter the CREATE TABLE statement and click Parse SQL. Select the fields that you want to import from the parsed fields and click Add to import them. A sample partitioned statement is shown after this table.
Import from Table: Click Import From Table, and in the Import From Table dialog box, select the source table that contains the fields to import, select the desired fields, and click Add to import them.
② Field list
The field list displays details about each field, including Serial Number, Column Name, Data Type, Remark, Recommend Standards, Data Class, and Data Sensitivity Level.
Serial Number: The ordinal number of the table field. The number auto-increments by 1 for each new field.
Column Name: The name of the table field. You can enter the full name of a root word to search. The system automatically matches the root words that are configured in Administration > Data Standard > Root Word.
Data Type: Supported data types include string, bigint, double, timestamp, decimal, Text, Numeric, Datetime, and Other.
Text: varchar and char.
Numeric: int, smallint, tinyint, and float.
Datetime: date. The MaxCompute compute engine also supports datetime.
Note: The Hadoop compute engine does not support datetime.
Other: boolean and binary.
Remark: The description of the table field. The description can be up to 512 characters in length.
Recommend Standards: Select a data standard for the field. To create a standard, see Create and manage data standards.
Data Class: Select a data classification for the field. To create a data classification, see Create a data classification.
Data Sensitivity Level: After you select a data classification, the system automatically identifies the data sensitivity level.
You can also Delete a field in the Actions column.
Note: You cannot undo the deletion of a field.
③ Batch operations
You can select multiple table fields and perform the following batch operations.
Delete: Click the icon to delete the selected data fields in a batch.
Recommend Names: Click the icon. The system tokenizes the content in the Remark column, matches the tokens against existing root words, and recommends field names. In the Root Word Naming dialog box, you can replace the names of the selected fields with the recommended names.
Note: If a recommended field name does not meet your requirements, you can modify it in the Corrected Field Name box. Click Reset to change the Corrected Field Name back to the system-matched root word.
Recommend Standards: Click the icon. The system recommends data standards based on the field names. In the Data Standard dialog box, you can apply the recommended data standards to the fields.
After you add the fields, click Next.
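For these storage formats, partition fields are declared in a separate PARTITIONED BY clause rather than in the column list. As a hedged sketch (hypothetical names) that includes the default ds date partition:

```sql
-- Hypothetical Hive-style table with the default ds date partition field.
CREATE TABLE ods_trade_log (
  trade_id BIGINT COMMENT 'trade ID',
  amount   DOUBLE COMMENT 'trade amount'
)
PARTITIONED BY (ds STRING COMMENT 'date partition')
STORED AS orc;
```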
DWS/AnalyticDB for PostgreSQL compute engine

Area
Description
① Field list operations
② Field list
The field list displays details about each field, including Serial Number, Column Name, Data Type (Length), Primary Key, Not Null, Remark, Recommend Standards, Data Class, and Data Sensitivity Level.
You can also Delete a field in the Actions column.
Note: You cannot undo the deletion of a field.
③ Batch operations
You can select multiple table fields and perform batch operations.
Step 3: Configure storage
The supported storage configurations for the MaxCompute and Hadoop compute engines vary based on the Storage File Format, as shown in the following tables.
Storage format is hudi
Parameter
Description
Data Update Type (Optional)
Select a type. You can select Cow (Copy On Write) or Mor (Merge On Read).
Primary Key And Merge Sort Key (Optional)
Data Distribution
Enable or disable Partition. This feature is disabled by default. If you enable Partition, you must also select one or more fields for Partitioned By from the field list.
Note: For Partitioned By, you cannot select all fields in the list as partition fields.
Storage Address (Optional)
Enter the Location (storage address). You can use global variables. The value can be up to 512 characters in length.
Note: The external keyword can be omitted for external tables. If the CREATE TABLE statement includes a location clause, the created table is an external table.
After you configure the parameters, click Submit. In the Submit dialog box, review the SQL statement for creating the table, and then click Confirm And Submit.
For information about Hudi DDL statements, see SQL DDL and Schema Evolution.
If Location (storage address) is empty, the CREATE TABLE statement does not include a location clause.
If the current compute engine is Hadoop, Spark SQL is used. If the current compute engine is Lindorm or Databricks, the default SQL of the compute source is used.
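As a hedged sketch of the Spark SQL that these settings produce (table name, fields, and path are hypothetical):

```sql
-- Hypothetical Hudi table: MOR update type, order_id as the primary key,
-- gmt_modified as the merge sort (pre-combine) field, partitioned by ds.
CREATE TABLE dwd_orders_hudi (
  order_id     BIGINT,
  amount       DECIMAL(16, 2),
  gmt_modified TIMESTAMP,
  ds           STRING
) USING hudi
TBLPROPERTIES (
  type = 'mor',
  primaryKey = 'order_id',
  preCombineField = 'gmt_modified'
)
PARTITIONED BY (ds)
LOCATION 'hdfs:///warehouse/dwd_orders_hudi';  -- omit for a managed table
```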
Storage format is delta (Delta Lake)
Parameter
Description
Data Distribution
Configure Liquid Clustering (with a Cluster By field) or Partition (with Partitioned By fields).
Storage Address (Optional)
Enter the Location (storage address). You can use global variables. The value can be up to 512 characters in length.
Note: The external keyword can be omitted for external tables. If the CREATE TABLE statement includes a location clause, the created table is an external table.
After you configure the parameters, click Submit. In the Submit dialog box, review the SQL statement for creating the table, and then click Confirm And Submit.
For information about Delta DDL statements, see ALTER TABLE.
If Location (storage address) is empty, the CREATE TABLE statement does not include a location clause.
If the current compute engine is Hadoop, Spark SQL is used. If the current compute engine is Lindorm or Databricks, the default SQL of the compute source is used.
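Similarly, a hedged Delta Lake sketch (hypothetical names; the LOCATION clause makes the table external):

```sql
-- Hypothetical Delta Lake table partitioned by ds.
CREATE TABLE dwd_orders_delta (
  order_id BIGINT,
  amount   DECIMAL(16, 2),
  ds       STRING
) USING delta
PARTITIONED BY (ds)
LOCATION 'hdfs:///warehouse/dwd_orders_delta';
```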
Storage format is iceberg
Parameter
Description
Partition
This feature is disabled by default. If you enable it, you must configure Partitioned By.
Partitioned By: Select one or more fields from the field list. You can also manually enter fields. Separate multiple fields with commas (,).
Location
Enter the Location (storage address). You can use global variables. The value can be up to 512 characters in length.
Note: The external keyword can be omitted for external tables. If the CREATE TABLE statement includes a location clause, the created table is an external table.
After you configure the parameters, click Submit. In the Submit dialog box, review the SQL statement for creating the table, and then click Confirm And Submit.
If Location (storage address) is empty, the CREATE TABLE statement does not include a location clause.
If the current compute engine is Hadoop, Spark SQL is used. If the current compute engine is Lindorm, the default SQL of the compute source is used.
Storage format is paimon
Parameter
Description
Primary Key
Select one or more primary keys. After you make a selection, you must also configure the Table Mode.
Table Mode
Select the Table Mode (data update type). The options are MOR, COW, and MOW.
Partition
This feature is disabled by default. If you enable it, you must configure Partitioned By.
Partitioned By: Select one or more fields from the field list.
Location
Enter the Location (storage address). You can use global variables. The value can be up to 512 characters in length.
Note: The external keyword can be omitted for external tables. If the CREATE TABLE statement includes a location clause, the created table is an external table.
After you configure the parameters, click Submit. In the Submit dialog box, review the SQL statement for creating the table, and then click Confirm And Submit.
If Location (storage address) is empty, the CREATE TABLE statement does not include a location clause.
If the current compute engine is Hadoop, Spark SQL is used.
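A hedged Paimon sketch (hypothetical names; the exact table properties that encode the MOR/COW/MOW table mode may differ from this minimal form):

```sql
-- Hypothetical Paimon table with a composite primary key, partitioned by ds.
-- Paimon requires partition fields to be part of the primary key.
CREATE TABLE dwd_orders_paimon (
  order_id BIGINT,
  amount   DECIMAL(16, 2),
  ds       STRING
) USING paimon
PARTITIONED BY (ds)
TBLPROPERTIES (
  'primary-key' = 'order_id,ds'
);
```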
Other storage formats
Parameter
Description
Storage Type
Select Managed Table or External Table. If you select External Table, you must also configure Location.
Location
Enter the Location (storage address). You can use global variables. The value can be up to 512 characters in length.
After you configure the parameters, click Submit. In the Submit dialog box, review the SQL statement for creating the table, and then click Confirm And Submit.
If the current compute engine is Hadoop and the storage format is kudu, Impala SQL is used.
MaxCompute compute engine
MaxCompute internal table
Parameter
Description
Storage Type
Select Managed Table.
Is Transactional Table
Select Yes or No. If you select Yes, you can also configure a Primary Key to create the table as a Delta table.
Primary Key (Optional)
Select one or more primary keys. You can select any field from the field list. If you select multiple primary keys, they are sorted in the order of selection.
Lifecycle (Optional)
The retention period of the table. You can enter a positive integer in days, or select 7, 14, 30, or 360 days.
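As a hedged sketch, an internal table created with Is Transactional Table set to Yes, a primary key, and a 30-day lifecycle might look like this (names are hypothetical):

```sql
-- Hypothetical MaxCompute transactional (Delta) internal table.
CREATE TABLE IF NOT EXISTS dwd_orders (
  order_id     BIGINT NOT NULL PRIMARY KEY,  -- primary key fields must be NOT NULL
  amount       DECIMAL(16, 2),
  gmt_modified TIMESTAMP
)
TBLPROPERTIES ("transactional" = "true")
LIFECYCLE 30;  -- retain data for 30 days
```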
MaxCompute external table
Parameter
Description
Storage Type
Select External Table.
Storage File Format
Select a storage format. The default format is the same as the default storage format for external tables specified in the table management settings of the development platform. You can select parquet, avro, rcfile, orc, textfile, or sequencefile.
Location
Enter the Location (storage address). You can use global variables. The value can be up to 512 characters in length.
After you configure the parameters, click Submit. In the Submit dialog box, review the SQL statement for creating the table, and then click Confirm And Submit.
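A hedged sketch of the corresponding external-table DDL (the OSS path and names are hypothetical):

```sql
-- Hypothetical MaxCompute external table stored as parquet on OSS.
CREATE EXTERNAL TABLE IF NOT EXISTS ods_log_ext (
  log_time STRING,
  content  STRING
)
STORED AS PARQUET
LOCATION 'oss://my-bucket/warehouse/ods_log_ext/';
```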
DWS/AnalyticDB for PostgreSQL compute engine
Table Constraint: Click Add Constraint to add a constraint. You must configure the Constraint Type, Constraint Settings, and Deferrable Policy.
Parameter
Description
Constraint Type
Select Primary Key, UNIQUE Constraint, or Check Constraint.
Constraint Settings
If Constraint Type is set to Primary Key: Select one or more fields with the data type of text, character varying (varchar), bigint (int8), smallint (int2), or integer (int/int4).
Note: A table can have only one PRIMARY KEY constraint. If a primary key is already selected in the field list, the system automatically adds a table-level PRIMARY KEY constraint on the fields selected in the field list.
If Constraint Type is set to UNIQUE constraint: Select one or more fields. All fields in the field list are available for selection.
If Constraint Type is set to Check constraint: Enter any characters. The value can be up to 512 characters in length.
Deferrable Policy
Select not deferrable, initially immediate, or initially deferred. The default value is not deferrable.
Note: This parameter is not supported if the constraint type is Check constraint.
Click the Delete icon to delete the corresponding constraint.
Distribution Method (Distributed By) (Optional):
AnalyticDB for PostgreSQL compute engine: Select RANDOMLY, BY(<columns>), or REPLICATED.
DWS compute engine: Select REPLICATION, ROUNDROBIN, or BY HASH(<columns>). If you select BY HASH(<columns>), you must also select distribution fields. All fields in the field list are available for selection.
Partition: Click Add Partition to add a partition field. Configure parameters based on the Partition Type. Click the Delete icon to delete the corresponding partition field, including all of its data partition information.
If Partition Type is LIST or RANGE:
Partition Field: Select a partition field or manually enter an expression.
The AnalyticDB for PostgreSQL compute engine (database version 6.x) supports only single-field partitioning. The AnalyticDB for PostgreSQL compute engine (database version 7.x) and the DWS compute engine support expressions.
Partition Name and Partition Value: Click the Add icon to add a data partition. Click the Delete icon to delete the corresponding data partition.
Note: If the compute engine is AnalyticDB for PostgreSQL (database version 6.x), the system automatically adds a default partition. The partition name is optional. If you do not specify a partition name, you must add at least one data partition.
If the compute engine is DWS, you must add at least one data partition.
If the compute engine is AnalyticDB for PostgreSQL (database version 7.x), you do not need to add any data partitions.
Partition Name: The name can contain only letters, digits, and underscores (_). The name can be up to 63 characters in length.
Note: A partition is itself a table. The default format for a full partition table name is <primary_table_name>_<partition_level#>_prt_<partition_name>.
Partition Value:
If Partition Type is LIST: Enter any characters. The value can be up to 512 characters in length. If the value is a text type, such as text, varchar, or char, enclose it in single quotation marks ('').
If Partition Type is RANGE: Configure START, END, and EVERY. You can enter any characters. The value can be up to 512 characters in length. If the value is a text type, such as text, varchar, or char, enclose it in single quotation marks ('').
For START and END, you must also select INCLUSIVE or EXCLUSIVE. The default value for START is INCLUSIVE. The default value for END is EXCLUSIVE.
Note: The validity of the partition value depends on the data type of the partition field.
If Partition Type is HASH: You only need to configure the partition field. You do not need to configure data partitions. The configuration of the partition field is the same as for the LIST and RANGE types.
Note: Tables with the REPLICATED distribution method do not support partitioning.
If the compute engine is DWS, multi-level partitioning is not supported.
The HASH partition type is supported only if the compute engine is AnalyticDB for PostgreSQL (database version 7.x).
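Putting these settings together, a hedged AnalyticDB for PostgreSQL-style sketch (hypothetical names) with a table-level primary key, hash distribution, and a RANGE partition:

```sql
-- Hypothetical table: the primary key covers the distribution and partition
-- columns, distribution is BY(order_id), and ds is RANGE-partitioned by month.
CREATE TABLE dwd_orders (
  order_id BIGINT  NOT NULL,
  amount   NUMERIC(16, 2),
  ds       DATE    NOT NULL,
  CONSTRAINT pk_dwd_orders PRIMARY KEY (order_id, ds)
)
DISTRIBUTED BY (order_id)
PARTITION BY RANGE (ds)
(
  START (DATE '2024-01-01') INCLUSIVE
  END   (DATE '2025-01-01') EXCLUSIVE
  EVERY (INTERVAL '1 month')
);
```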
Import data
After you submit the offline physical table, you can import data into it.
In the table list, click the destination offline physical table. You can search for the table by its name.
On the table details page, click Import Data to open the Import Data dialog box.
In the Import Data dialog box, upload the data file and configure the import parameters on the Basic Configuration step.
Parameter
Description
Upload File
Click Select File to upload the data file. Only .txt and .csv files are supported. The file size cannot exceed 10 MB.
Separator
The data separator. Supported separators include the comma (,), tab character (\t), vertical bar (|), and forward slash (/). You can also specify a custom separator.
Character Set Encoding
Select the character set encoding of the uploaded data file. Supported character sets include UTF-8 (without BOM), UTF-8 (with BOM), GBK, Big5, GB2312, ASCII, and UTF-16.
First Row As Header
Specify whether the first row of the uploaded data file is a header.
Target Partition
If the table is a partitioned table, enter the name of the Target Partition for the imported data.
Click Next.
On the Import Data step, configure the field mapping for the table.
Mapping:
Row Mapping: Maps fields by row number.
Name Mapping: Maps fields that have the same name.
Import File Data Column: You can set the value to a data column, NULL, or a static field.
Click Start Import to import the data into the table.
Edit an offline physical table
After you submit an offline physical table, you can edit it.
In the table list, click the destination offline physical table. You can search for the table by its name.
On the table details page, click Edit to open the Edit Physical Table page.
The editable parameters vary based on the storage format, as shown in the following tables.
Storage format is hudi
Tab
Description
Basic Information
You can edit only the Table Name and Description. The parameter requirements are the same as when creating an offline physical table.
Field List
You can edit field names and data types and delete fields, except for Hudi system fields. Hudi system fields include the following:
_hoodie_commit_time
_hoodie_commit_seqno
_hoodie_record_key
_hoodie_partition_path
_hoodie_file_name
Note: When you modify field names or data types, or add or delete fields, an error may be reported if the operation is not supported by the compute engine.
Storage Configuration
You can edit only type, primaryKey, preCombineField, and Location. You can edit Location only if the current data table is an external table.
After you finish editing, click Submit. The default SQL of the compute source is used for submission. In the Submit dialog box, review the SQL statement for the table, and then click Confirm And Submit.
Note: If an engine error occurs when you rename, delete, or modify the type of a field in a Hudi table, contact the engine service provider to check the engine settings.
Storage format is delta (Delta Lake)
Tab
Description
Basic Information
You can edit only the Table Name and Description. The parameter requirements are the same as when creating an offline physical table.
Field List
You can edit and delete existing fields, and add new fields. The parameter requirements are the same as when creating an offline physical table.
Storage Configuration
Data Distribution
If Liquid Clustering is enabled, you can disable it or select a different Cluster By field.
If Liquid Clustering and Partition are both disabled, you can enable Liquid Clustering and specify a Cluster By field.
If Liquid Clustering and Partition are both disabled, you cannot enable Partition.
If Partition is enabled, you cannot modify Liquid Clustering, Partition, or Partitioned By.
Storage Address: You can edit Location only if the current data table is an external table.
After you finish editing, click Submit. The default SQL of the compute source is used for submission. In the Submit dialog box, review the SQL statement for the table, and then click Confirm And Submit.
Storage format is iceberg
Tab
Description
Basic Information
You can edit only the Table Name and Description. The parameter requirements are the same as when creating an offline physical table.
Field List
You can edit field names and data types. You can also add and delete fields.
Note: When you modify field names or data types, or add or delete fields, an error may be reported if the operation is not supported by the compute engine.
Storage Configuration
You can edit Location only if the current data table is an external table.
After you finish editing, click Submit. If the current compute engine is Hadoop, Spark SQL is used. If the current compute engine is Lindorm, the default SQL of the compute source is used.
In the Submit dialog box, review the SQL statement for the table, and then click Confirm And Submit.
Storage format is paimon
Tab
Description
Basic Information
You can edit only the Table Name and Description. The parameter requirements are the same as when creating an offline physical table.
Field List
You can edit field names and data types. You can also add and delete fields.
Note: When you modify field names or data types, or add or delete fields, an error may be reported if the operation is not supported by the compute engine.
Storage Configuration
You can edit Location only if the current data table is an external table.
After you finish editing, click Submit. Spark SQL is used for submission. In the Submit dialog box, review the SQL statement for the table, and then click Confirm And Submit.
MaxCompute compute engine
MaxCompute internal table
Tab
Description
Basic Information
You can edit only the Table Name and Description. The parameter requirements are the same as when creating an offline physical table.
Field List
You can edit and delete existing fields, and add new fields. The parameter requirements are the same as when creating an offline physical table. A DDL statement is generated when you modify a data type.
You can clear the Not Null check box for a field.
If the current table is an internal transactional table, you cannot clear the Not Null check box for fields that are set as primary keys.
You can clear the Not Null check box only for fields for which it is already selected.
Storage Configuration
You can edit the Lifecycle. The requirements are the same as when creating an offline physical table.
MaxCompute external table
Tab
Description
Basic Information
You can edit only the Table Name and Description. The parameter requirements are the same as when creating an offline physical table.
Field List
You can edit and delete existing fields, and add new fields. The parameter requirements are the same as when creating an offline physical table. A DDL statement is generated when you modify a data type.
You can clear the Not Null check box only for fields for which it is already selected.
Storage Configuration
You can edit the Storage Address (Location). The parameter requirements are the same as when creating an offline physical table.
After you finish editing, click Submit. In the Submit dialog box, review the SQL statement for the table, and then click Confirm And Submit.
Note: If the changes include deleting fields or modifying data types, you must first run the following command to enable schema evolution at the project level in MaxCompute: `setproject odps.schema.evolution.enable=true;`
DWS/AnalyticDB for PostgreSQL compute engine
Tab
Description
Basic Information
You can edit only the Table Name, Data Domain, and Description. The parameter requirements are the same as when creating an offline physical table.
Field List
You can edit and delete existing fields, and add new fields. The parameter requirements are the same as when creating an offline physical table. A DDL statement is generated when you modify a data type.
You can modify the Not Null option for fields.
You cannot reselect a primary key. If needed, you can modify the primary key by editing the table-level constraints.
Storage Configuration
You can edit Table Constraint and Distribution Method. The requirements are the same as when creating an offline physical table. You can find partition information in Asset > Object Details > Fields.
What to do next
If your development mode is Dev-Prod, you must publish the offline physical table. For more information, see Manage publish tasks.
If your development mode is Basic, you can manage the submitted offline physical table in the asset directory.