Learn how to use commands to create and drop tables in MaxCompute.
Type | Description | Required permission |
CREATE TABLE | Creates a non-partitioned, partitioned, external, or clustered table. | Requires the CreateTable permission on the project. |
DROP TABLE | Drops a partitioned or non-partitioned table. | Requires the Drop permission on the table. |
Create a table
You can create non-partitioned, partitioned, external, or clustered tables.
Limitations
A partitioned table can have up to six partition levels. For example, if you use dates as partition columns, you can define levels such as year/month/week/day/hour/minute. The maximum number of partitions per table is configurable on a per-project basis, with a default of 60,000.
For more information about table limitations, see MaxCompute SQL limits.
Syntax
Internal table
Create an internal table (non-partitioned or partitioned)
CREATE [OR REPLACE] TABLE [IF NOT EXISTS] <table_name> (
<col_name> <data_type>, ... )
[COMMENT <table_comment>]
[PARTITIONED BY (<col_name> <data_type> [COMMENT <col_comment>], ...)]
[AUTO PARTITIONED
BY (<auto_partition_expression> [AS <auto_partition_column_name>])
[TBLPROPERTIES('ingestion_time_partition'='true')]
];
Clustered table
Create a clustered table
CREATE TABLE [IF NOT EXISTS] <table_name> (
<col_name> <data_type>, ... )
[CLUSTERED BY | RANGE CLUSTERED BY (<col_name> [, <col_name>, ...])
[SORTED BY (<col_name> [ASC | DESC] [, <col_name> [ASC | DESC] ...])]
INTO <number_of_buckets> BUCKETS];
External table
Create an external table
The following example shows how to create an OSS external table by using the built-in text data parser. For more information, see ORC external table.
CREATE EXTERNAL TABLE [IF NOT EXISTS] <mc_oss_extable_name> (
<col_name> <data_type>, ... )
STORED AS '<file_format>'
[WITH SERDEPROPERTIES (options)]
LOCATION '<oss_location>';
Transactional and Delta tables
Creates a transactional table. You can run UPDATE or DELETE operations on this type of table. However, transactional tables have limitations.
CREATE [EXTERNAL] TABLE [IF NOT EXISTS] <table_name> (
    <col_name> <data_type> [NOT NULL] [DEFAULT <default_value>] [COMMENT <col_comment>], ... )
[COMMENT <table_comment>]
[TBLPROPERTIES ("transactional"="true")];
Creates a Delta table. When combined with a primary key, you can perform operations such as upserts, incremental queries, and time travel.
CREATE [EXTERNAL] TABLE [IF NOT EXISTS] <table_name> (
    <col_name> <data_type> [NOT NULL] [DEFAULT <default_value>] [COMMENT <col_comment>], ...
    [PRIMARY KEY (<pk_col_name> [, <pk_col_name2>, ...])])
[COMMENT <table_comment>]
[CLUSTERED BY (<pk_col_name> [, <pk_col_name2>, ...])]
[TBLPROPERTIES ("transactional"="true" [, "write.bucket.num"="N", "acid.data.retain.hours"="hours", ...])]
[LIFECYCLE <days>];
A Delta table with a primary key lets you use a subset of the primary key columns as the hash cluster key.
In a Delta table with a primary key, the system distributes data using hash clustering by default. The default cluster key is the complete set of primary key columns. In some business scenarios, queries frequently filter data by using a subset of the primary key columns. For these scenarios, you can explicitly specify a subset of the primary key as the cluster key. This ensures that data is distributed and stored based on this subset, which improves filtering performance.
If you explicitly specify a cluster key, the system distributes the table data into hash buckets based on the specified columns. The cluster key must be a subset of the primary key columns.
If you do not explicitly specify a cluster key, the system uses the complete set of primary key columns as the default cluster key.
You cannot modify the cluster key after the table is created. You must specify it when you create the table.
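The rules above can be sketched as follows. This is a minimal, hypothetical example (the table and column names are illustrative): the primary key is composite, but queries usually filter by the first key column only, so that column alone is declared as the cluster key.

```sql
-- Hypothetical example: the primary key is (tenant_id, order_id), but queries
-- usually filter by tenant_id only, so tenant_id alone is used as the cluster key.
-- The cluster key must be a subset of the primary key columns and cannot be
-- changed after the table is created.
CREATE TABLE orders_delta (
    tenant_id BIGINT NOT NULL,
    order_id  BIGINT NOT NULL,
    amount    DOUBLE,
    PRIMARY KEY (tenant_id, order_id)
)
CLUSTERED BY (tenant_id)
TBLPROPERTIES ("transactional"="true");
```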
CTAS and LIKE clauses
Creates a new table based on an existing table and copies the data but does not copy partition properties. This clause applies to external tables and tables in external Lakehouse projects.
CREATE TABLE [IF NOT EXISTS] <table_name> [LIFECYCLE <days>] AS <select_statement>;
Creates a new table with the same structure as an existing table but does not copy the data. This clause applies to external tables and tables in external Lakehouse projects.
CREATE TABLE [IF NOT EXISTS] <table_name> [LIFECYCLE <days>] LIKE <existing_table_name>;
Parameters
General parameters
General parameters
Parameter | Required | Description | Remarks |
OR REPLACE | No | Replaces an existing table that has the same name. If no table with the name exists, a new table is created. | This is equivalent to dropping the existing table and then creating a new table with the same name. |
EXTERNAL | No | Creates an external table. | N/A |
IF NOT EXISTS | No | Creates a table only if one with the same name does not already exist. | If you do not specify the IF NOT EXISTS option and a table with the same name already exists, an error is reported. If you specify IF NOT EXISTS, the operation succeeds if a table with the same name already exists, even if the schema of the existing table does not match the schema of the table to be created. The metadata of the existing table is not changed. |
table_name | Yes | The name of the table. | The table name must be 128 bytes or less and contain only letters, digits, and underscores (_). It is case-insensitive. Starting the name with a letter is recommended. |
PRIMARY KEY(pk) | No | The primary key of the table. | You can define one or more columns as the primary key. This ensures the combination of values in these columns is unique within the table. The syntax follows standard SQL primary key syntax. The primary key columns must be set to NOT NULL and cannot be modified. Important This parameter applies only to a Delta Table. |
col_name | Yes | The name of the column. | N/A |
col_comment | No | The column comment. | Must be a string of 1,024 bytes or less. |
data_type | Yes | The data type of the column. | Supported data types include BIGINT, DOUBLE, BOOLEAN, DATETIME, DECIMAL, and STRING. For more information, see Data types. |
NOT NULL | No | Specifies that the column cannot contain NULL values. | For more information about how to modify the NOT NULL property, see Partition operations. |
default_value | No | The default value for the column. | If an `INSERT` statement does not specify a value for this column, the default value is written. Note that functions are not supported as default values. |
table_comment | No | The table comment. | Must be a string of 1,024 bytes or less. |
LIFECYCLE | No | The lifecycle of the table, in days. | Only positive integers are supported. |
Partitioned table parameters
Partitioned table parameters
MaxCompute supports two types of partitioned tables: standard and auto-partitioned. You can choose a table type based on how you want to generate partition columns for your use case. For more information, see Partitioned table overview.
Standard partitioned table parameters
Parameter | Required | Description | Remarks |
PARTITIONED BY | Yes | Specifies the partitions for a standard partitioned table. | You can specify partitions by using either PARTITIONED BY or AUTO PARTITIONED BY, but not both. |
col_name | Yes | The name of the partition column. | N/A |
data_type | Yes | The data type of the partition column. | MaxCompute V1.0 supports only the STRING type. MaxCompute V2.0 supports additional partition types, including TINYINT, SMALLINT, INT, BIGINT, VARCHAR, and STRING. For more information, see Data types. When you partition a table, operations such as adding partitions, updating data, and reading data do not require full table scans, which improves efficiency. |
col_comment | No | The comment for the partition column. | Must be a string of 1,024 bytes or less. |
A partition value must be 255 bytes or less. It cannot contain double-byte characters, such as Chinese characters. The value must start with a letter and contain only letters, digits, and the following characters: space, colon (:), underscore (_), dollar sign ($), number sign (#), period (.), exclamation point (!), and at sign (@). The behavior of other characters, such as the escape characters \t, \n, and /, is undefined.
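As a sketch of the value rules above, the following adds a partition whose value complies with the stated constraints. The table and partition names are hypothetical, and this assumes `sale_date` is a STRING partition column:

```sql
-- Hypothetical example: the partition value starts with a letter and uses
-- only letters and digits, which satisfies the constraints described above.
ALTER TABLE sale_detail ADD IF NOT EXISTS PARTITION (sale_date='d20240101');
```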
Auto-partitioned table parameters
Partition columns for an auto-partitioned table can be generated automatically. For more information about how to use auto-partitioned tables, see Partitioned table types.
Parameter | Required | Description | Remarks |
AUTO PARTITIONED BY | Yes | Specifies the partitions for an auto-partitioned table. | You can specify partitions by using either PARTITIONED BY or AUTO PARTITIONED BY, but not both. |
auto_partition_expression | Yes | The expression that defines how the partition column is calculated. Currently, only the TRUNC_TIME function can be used to generate the partition column. Only one partition column is supported. | The TRUNC_TIME function can truncate data from a time or date type column based on a specified time unit to generate a partition column. |
auto_partition_column_name | No | The name of the generated partition column. If a name is not specified, the system generates a default name. | Based on the calculation from the partition expression, a STRING-type partition column is generated. You can explicitly specify a name for this column, but you cannot directly modify its data type or value. |
TBLPROPERTIES('ingestion_time_partition'='true') | No | Specifies whether to generate partition columns based on the data ingestion time. | For more information about generating partitions based on data ingestion time, see Auto-partitioned tables based on data ingestion time. |
Clustered table parameters
Clustered table parameters
Clustered tables are categorized into hash-clustered tables and range-clustered tables.
Hash-clustered table
Parameter | Required | Description | Remarks |
CLUSTERED BY | Yes | Specifies the hash key. MaxCompute calculates hash values for the specified columns and distributes the data into hash buckets based on these values. | To prevent data skew and hotspots and achieve good parallel execution performance, select columns that have a large range of values and few duplicate keys as the `CLUSTERED BY` columns. |
SORTED BY | Yes | Specifies the sort order of columns within each hash bucket. | For optimal performance, use the same columns for SORTED BY and CLUSTERED BY. After you specify the SORTED BY clause, MaxCompute automatically creates an index and uses it to accelerate queries. |
number_of_buckets | Yes | Specifies the number of hash buckets. | This value is required, and the appropriate setting depends on the data volume. By default, MaxCompute supports a maximum of 1,111 reducers, which limits the number of hash buckets to 1,111. You can run the `set odps.stage.reducer.num=<concurrency>;` command to increase this limit. |
When you choose the number of hash buckets, follow these two principles:
Keep the hash bucket size moderate: The recommended size of each hash bucket is approximately 500 MB. For example, if the estimated partition size is 500 GB, set the number of buckets to 1,000 so that each hash bucket averages about 500 MB. For very large tables, you can exceed the 500 MB guideline; a size of 2 GB to 3 GB per bucket is suitable. You can also run the `set odps.stage.reducer.num=<concurrency>;` command to exceed the 1,111 hash bucket limit.
Optimize JOIN operations: Removing the shuffle and sort steps significantly improves JOIN performance. This requires the number of hash buckets in the two tables to be a multiple of each other, for example, 256 and 512. Setting the number of hash buckets to a power of 2, such as 512, 1024, 2048, or 4096, is recommended. This allows the system to automatically split or merge hash buckets, remove the shuffle and sort steps, and improve execution efficiency.
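The sizing guideline above can be sketched as follows. The table and column names are hypothetical; the point is that a power-of-2 bucket count close to the estimated partition size divided by 500 MB satisfies both principles:

```sql
-- Hypothetical example: ~500 GB per partition / 1024 buckets ≈ 500 MB per bucket.
-- 1024 is a power of 2, which also enables bucket split/merge for JOIN optimization.
CREATE TABLE sales_clustered (
    order_id BIGINT,
    user_id  BIGINT,
    amount   DOUBLE
)
PARTITIONED BY (dt STRING)
CLUSTERED BY (user_id) SORTED BY (user_id)
INTO 1024 BUCKETS;
```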
Range-clustered table
Parameter | Required | Description | Remarks |
RANGE CLUSTERED BY | Yes | Specifies the range-cluster columns. | MaxCompute performs bucketing operations on the specified columns and distributes the data into buckets based on the bucket numbers. |
SORTED BY | Yes | Specifies the sort order of columns within each bucket. | The usage is the same as for a hash-clustered table. |
number_of_buckets | No | Specifies the number of buckets. | For a range-clustered table, the power-of-2 best practice that applies to hash-clustered tables is not required. Any number of buckets is acceptable if the data is evenly distributed. This parameter is optional for range-clustered tables. If you omit it, the system automatically determines the optimal number of buckets based on the data volume. |
When a join or aggregation operation is performed on a range-clustered table, if the join key or group key is the range clustering key or its prefix, you can eliminate data redistribution (shuffle remove) to improve performance. You can run the set odps.optimizer.enable.range.partial.repartitioning=true/false; command to enable or disable this feature. It is disabled by default.
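The shuffle-removal behavior described above can be sketched as follows. The table names are hypothetical, and both tables are assumed to be range-clustered by column `c`:

```sql
-- Enable partial repartitioning so joins on the range-cluster key
-- (or its prefix) can skip the data redistribution step. Disabled by default.
SET odps.optimizer.enable.range.partial.repartitioning=true;

-- Both tables are range-clustered by (c); joining on c can avoid a shuffle.
SELECT a.*, b.val
FROM range_tbl_a a
JOIN range_tbl_b b ON a.c = b.c;
```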
Benefits of clustered tables:
Optimized bucket pruning
Optimized aggregations
Optimized storage
Limitations of clustered tables:
INSERT INTO is not supported. You can add data only by using INSERT OVERWRITE.
Directly uploading data to a range-clustered table by using Tunnel is not supported because Tunnel uploads data in an unordered manner.
Backup and recovery features are not supported.
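A minimal sketch of the INSERT OVERWRITE restriction (the table names are hypothetical; `t1` is assumed to be a clustered table):

```sql
-- INSERT INTO a clustered table fails; use INSERT OVERWRITE instead.
INSERT OVERWRITE TABLE t1
SELECT a, b, c FROM src_table;
```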
External table parameters
External table parameters
This section uses the parameters for creating an OSS external table as an example. For more information about the parameters for creating other types of external tables, see External tables.
Parameter | Required | Description |
STORED AS '<file_format>' | Yes | Specifies the file format of the external table data. |
WITH SERDEPROPERTIES | No | Specifies parameters related to authorization, compression, and character parsing for the external table. |
oss_location | Yes | The OSS storage location of the external table data. For more information, see OSS external tables. |
Transactional Table and Delta Table parameters
Transactional Table and Delta Table parameters
Delta Table parameters
A Delta Table is a table format that supports near-real-time reads and writes, incremental storage and access, and real-time updates. Currently, only primary key tables are supported.
Parameter | Required | Description | Remarks |
PRIMARY KEY(PK) | Yes | Defines the primary key for the Delta Table, which can include multiple columns. | The syntax follows the standard SQL primary key syntax. Primary key columns must be set to NOT NULL and cannot be modified. After the primary key is set, data is deduplicated based on the primary key columns. The unique constraint is enforced within a single partition or within a non-partitioned table. |
transactional | Yes | Required for creating a Delta Table. You must set this parameter to `true`. | Indicates that the table supports the transactional properties of MaxCompute ACID tables. The table uses the Multi-Version Concurrency Control (MVCC) model to ensure the snapshot isolation level. |
write.bucket.num | No | The default value is 16. | Specifies the number of buckets for each partition or for a non-partitioned table. This value also determines the number of concurrent nodes for data writes. You can modify this parameter for a partitioned table, and the new setting takes effect for new partitions. You cannot modify this parameter for a non-partitioned table. |
acid.data.retain.hours | No | The default value is 24. | Specifies the time range, in hours, for which you can query historical data states by using Time Travel. If you need a Time Travel history longer than 168 hours (7 days), contact MaxCompute technical support. |
acid.incremental.query.out.of.time.range.enabled | No | The default value is false. | If set to true, an incremental query can specify a time range that extends beyond the time of the latest data version. |
acid.write.precombine.field | No | Specifies a single column name. | If a column name is specified, the system uses this column in combination with the primary key columns to deduplicate data within the same commit. This ensures data uniqueness and consistency. Note If a single data commit exceeds 128 MB, multiple files are generated. This parameter does not apply across multiple files. |
acid.partial.fields.update.enable | No | If set to true, the table supports partial column updates, which lets you write values to only some of the columns in a row. | This parameter is set when you create the table. It cannot be modified after the table is created. |
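The Delta Table properties above can be combined as in the following sketch. The table and column names are hypothetical; `version` is used as the precombine field so that, within one commit, the row with the largest `version` value wins for each primary key:

```sql
-- Hypothetical example combining the Delta Table properties described above.
CREATE TABLE upsert_tbl (
    pk      BIGINT NOT NULL,
    val     STRING,
    version BIGINT,
    PRIMARY KEY (pk)
)
TBLPROPERTIES (
    "transactional"="true",              -- required for a Delta Table
    "write.bucket.num"="16",             -- buckets per partition
    "acid.data.retain.hours"="48",       -- Time Travel history of 48 hours
    "acid.write.precombine.field"="version"
);
```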
Other parameter requirements for Delta Tables:
LIFECYCLE: The table lifecycle must be greater than or equal to the time travel retention period, that is,
lifecycle >= acid.data.retain.hours / 24. A check is performed when the table is created, and an error is reported if this condition is not met.
Unsupported features:
CLUSTERED BY,EXTERNAL, andCREATE TABLE ASare not supported.
Other limitations:
Currently, other engines cannot directly operate on a Delta Table. Only MaxCompute SQL is supported.
You cannot convert a standard table to a Delta Table.
You cannot perform schema changes on the primary key columns of a Delta Table.
Transactional Table parameters
Parameter | Required | Description |
TBLPROPERTIES("transactional"="true") | Yes | Sets the table as a Transactional Table. You can then perform UPDATE and DELETE operations on the table. |
The following limitations apply to a Transactional Table:
You can set the
transactional property only when you create a table. You cannot use ALTER TABLE to modify this property for an existing table. The following statement returns an error:
ALTER TABLE not_txn_tbl SET TBLPROPERTIES("transactional"="true");
-- Error returned: FAILED: Catalog Service Failed, ErrorCode: 151, Error Message: Set transactional is not supported
You cannot set a clustered table or an external table as a Transactional Table.
You cannot convert a standard internal table, external table, or clustered table to a Transactional Table, or vice versa.
Automatic compaction of files in a Transactional Table is not supported. You must manually perform this operation. For more information, see Compact files in a Transactional Table.
The
merge partition operation is not supported.
Access to Transactional Tables from other systems is limited. For example, MaxCompute Graph does not support read or write operations. Spark and PAI support only read operations.
Before performing
update, delete, or insert overwrite operations on important data, manually back up the data to another table by using a SELECT ... INSERT statement.
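A minimal sketch of such a backup (the table and column names are hypothetical):

```sql
-- Create a standard backup table with the same columns, then copy the data
-- from the transactional table before running a destructive operation.
CREATE TABLE txn_tbl_backup (id BIGINT, val STRING);
INSERT OVERWRITE TABLE txn_tbl_backup
SELECT id, val FROM txn_tbl;
```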
Create table (from existing)
Create table (from existing)
You can use the
CREATE TABLE [IF NOT EXISTS] <table_name> [LIFECYCLE <days>] AS <select_statement>;
statement to create another table and copy data into the new table at the same time. This statement does not copy partition properties. The partition columns of the source table are treated as regular columns in the destination table. The lifecycle property of the source table is also not copied.
You can use the lifecycle parameter to specify a lifecycle for the new table. This statement also supports creating an internal table and copying data from an external table.
You can use the
CREATE TABLE [IF NOT EXISTS] <table_name> [LIFECYCLE <days>] LIKE <existing_table_name>;
statement to create a new table with the same schema as an existing table. This statement copies the schema but does not copy data or the lifecycle property from the source table.
You can use the lifecycle parameter to specify a lifecycle for the new table. This statement also supports creating an internal table that copies the schema of an external table.
Examples
Non-partitioned table
Create a non-partitioned table.
CREATE TABLE test1 (key STRING);
Create a non-partitioned table and specify default values for columns.
CREATE TABLE test_default(
    tinyint_name  TINYINT  NOT NULL DEFAULT 1Y,
    smallint_name SMALLINT NOT NULL DEFAULT 1S,
    int_name      INT      NOT NULL DEFAULT 1,
    bigint_name   BIGINT   NOT NULL DEFAULT 1,
    binary_name   BINARY,
    float_name    FLOAT,
    double_name   DOUBLE   NOT NULL DEFAULT 0.1,
    decimal_name  DECIMAL(2, 1) NOT NULL DEFAULT 0.0BD,
    varchar_name  VARCHAR(10),
    char_name     CHAR(2),
    string_name   STRING   NOT NULL DEFAULT 'N',
    boolean_name  BOOLEAN  NOT NULL DEFAULT TRUE
);
Partitioned table
Create an AUTO PARTITION partitioned table that generates partitions based on a time-based data column by using a time function.
-- The sale_date column is truncated by month to generate a partition column named sale_month. The table is then partitioned by this column.
CREATE TABLE IF NOT EXISTS auto_sale_detail(
    shop_name   STRING,
    customer_id STRING,
    total_price DOUBLE,
    sale_date   DATE
)
AUTO PARTITIONED BY (TRUNC_TIME(sale_date, 'month') AS sale_month);
Create an AUTO PARTITION partitioned table that generates partitions based on the data ingestion time. The system automatically retrieves the time when data is written to MaxCompute and generates partitions by using a time function.
-- After the table is created, when data is written, the system automatically captures the data ingestion time (_partitiontime), truncates it by day, generates a partition column named sale_date, and then partitions the table by this column.
CREATE TABLE IF NOT EXISTS auto_sale_detail2(
    shop_name      STRING,
    customer_id    STRING,
    total_price    DOUBLE,
    _partitiontime TIMESTAMP_NTZ
)
AUTO PARTITIONED BY (TRUNC_TIME(_partitiontime, 'day') AS sale_date)
TBLPROPERTIES('ingestion_time_partition'='true');
Hash or range clustered table
Create a hash-clustered non-partitioned table.
CREATE TABLE t1 (a STRING, b STRING, c BIGINT) CLUSTERED BY (c) SORTED BY (c) INTO 1024 BUCKETS;
Create a hash-clustered partitioned table.
CREATE TABLE t2 (a STRING, b STRING, c BIGINT) PARTITIONED BY (dt STRING) CLUSTERED BY (c) SORTED BY (c) INTO 1024 BUCKETS;
Create a range-clustered non-partitioned table.
CREATE TABLE t3 (a STRING, b STRING, c BIGINT) RANGE CLUSTERED BY (c) SORTED BY (c) INTO 1024 BUCKETS;
Create a range-clustered partitioned table.
CREATE TABLE t4 (a STRING, b STRING, c BIGINT) PARTITIONED BY (dt STRING) RANGE CLUSTERED BY (c) SORTED BY (c);
Transactional table
Create a non-partitioned transactional table.
CREATE TABLE t5(id BIGINT) TBLPROPERTIES ("transactional"="true");
Create a partitioned transactional table.
CREATE TABLE IF NOT EXISTS t6(id BIGINT) PARTITIONED BY (ds STRING) TBLPROPERTIES ("transactional"="true");
Internal table
Create an internal table by copying data from an external partitioned table. The internal table will not include partition properties.
Create an OSS external table, and then create a MaxCompute internal table from it by using CREATE TABLE AS.
-- Create an OSS external table and insert data into it.
CREATE EXTERNAL TABLE max_oss_test(a INT, b INT, c INT)
STORED AS TEXTFILE
LOCATION "oss://oss-cn-hangzhou-internal.aliyuncs.com/<bucket_name>";
INSERT INTO max_oss_test VALUES (101, 1, 20241108), (102, 2, 20241109), (103, 3, 20241110);
SELECT * FROM max_oss_test;
-- Result
a    b    c
101  1    20241108
102  2    20241109
103  3    20241110
-- Create an internal table by using CREATE TABLE AS.
CREATE TABLE from_exetbl_oss AS SELECT * FROM max_oss_test;
-- Query the new internal table.
SELECT * FROM from_exetbl_oss;
-- The result shows that all data is copied.
a    b    c
101  1    20241108
102  2    20241109
103  3    20241110
Run the DESC from_exetbl_oss; command to view the schema of the internal table. The command returns the following output.
+------------------------------------------------------------------------------------+
| Owner: ALIYUN$***********                                                          |
| Project: ***_*****_***                                                             |
| TableComment:                                                                      |
+------------------------------------------------------------------------------------+
| CreateTime: 2023-01-10 15:16:33                                                    |
| LastDDLTime: 2023-01-10 15:16:33                                                   |
| LastModifiedTime: 2023-01-10 15:16:33                                              |
+------------------------------------------------------------------------------------+
| InternalTable: YES | Size: 919                                                     |
+------------------------------------------------------------------------------------+
| Native Columns:                                                                    |
+------------------------------------------------------------------------------------+
| Field | Type   | Label | Comment                                                   |
+------------------------------------------------------------------------------------+
| a     | string |       |                                                           |
| b     | string |       |                                                           |
| c     | string |       |                                                           |
+------------------------------------------------------------------------------------+
Create an internal table by copying the schema from an external partitioned table. The internal table includes partition properties.
Create the internal table
from_exetbl_like by using CREATE TABLE LIKE.
-- Query the OSS external table from MaxCompute.
SELECT * FROM max_oss_test;
-- Result
a    b    c
101  1    20241108
102  2    20241109
103  3    20241110
-- Create an internal table by using CREATE TABLE LIKE.
CREATE TABLE from_exetbl_like LIKE max_oss_test;
-- Query the new internal table.
SELECT * FROM from_exetbl_like;
-- Result: Only the table schema is returned.
a    b    c
Run the DESC from_exetbl_like; command to view the schema of the internal table. The command returns the following output.
+------------------------------------------------------------------------------------+
| Owner: ALIYUN$************                                                         |
| Project: ***_*****_***                                                             |
| TableComment:                                                                      |
+------------------------------------------------------------------------------------+
| CreateTime: 2023-01-10 15:09:47                                                    |
| LastDDLTime: 2023-01-10 15:09:47                                                   |
| LastModifiedTime: 2023-01-10 15:09:47                                              |
+------------------------------------------------------------------------------------+
| InternalTable: YES | Size: 0                                                       |
+------------------------------------------------------------------------------------+
| Native Columns:                                                                    |
+------------------------------------------------------------------------------------+
| Field | Type   | Label | Comment                                                   |
+------------------------------------------------------------------------------------+
| a     | string |       |                                                           |
| b     | string |       |                                                           |
+------------------------------------------------------------------------------------+
| Partition Columns:                                                                 |
+------------------------------------------------------------------------------------+
| c     | string |       |                                                           |
+------------------------------------------------------------------------------------+
Delta table
Create a Delta table.
CREATE TABLE mf_tt (pk BIGINT NOT NULL PRIMARY KEY, val BIGINT) TBLPROPERTIES ("transactional"="true");
Create a Delta table and set key table properties.
CREATE TABLE mf_tt2 (
    pk   BIGINT NOT NULL,
    pk2  BIGINT NOT NULL,
    val  BIGINT,
    val2 BIGINT,
    PRIMARY KEY (pk, pk2)
)
TBLPROPERTIES (
    "transactional"="true",
    "write.bucket.num"="64",
    "acid.data.retain.hours"="120"
)
LIFECYCLE 7;
Other methods
Replace an existing table
Create the original table
my_table and insert data into it.
CREATE OR REPLACE TABLE my_table(a BIGINT);
INSERT INTO my_table(a) VALUES (1),(2),(3);
Use OR REPLACE to create a new table with the same name and modify its columns.
CREATE OR REPLACE TABLE my_table(b STRING);
Query the my_table table. The query returns the following result.
+------------+
| b          |
+------------+
+------------+
The following SQL statements are invalid:
CREATE OR REPLACE TABLE IF NOT EXISTS my_table(b STRING);
CREATE OR REPLACE TABLE my_table AS SELECT;
CREATE OR REPLACE TABLE my_table LIKE newtable;
Copy data and set a lifecycle
-- Create a new table named sale_detail_ctas1, copy the data from sale_detail to it, and set a lifecycle.
SET odps.sql.allow.fullscan=true;
CREATE TABLE sale_detail_ctas1 LIFECYCLE 10 AS SELECT * FROM sale_detail;
Run the DESC EXTENDED sale_detail_ctas1; command to view details such as the table's schema and lifecycle.
In this example, sale_detail is a partitioned table. When you use the CREATE TABLE ... AS select_statement ... statement to create the sale_detail_ctas1 table, the partition properties are not copied. The partition columns from the source table become regular columns in the destination table. Therefore, sale_detail_ctas1 is a non-partitioned table with five columns.
Use constants for column values
If you use constants as column values in the SELECT clause, specify column names. Otherwise, the fourth and fifth columns in the created table sale_detail_ctas3 receive default names such as _c4 and _c5.
Specify column names.
SET odps.sql.allow.fullscan=true;
CREATE TABLE sale_detail_ctas2 AS
SELECT shop_name, customer_id, total_price, '2013' AS sale_date, 'China' AS region
FROM sale_detail;
Do not specify column names.
SET odps.sql.allow.fullscan=true;
CREATE TABLE sale_detail_ctas3 AS
SELECT shop_name, customer_id, total_price, '2013', 'China'
FROM sale_detail;
Copy a schema and set a lifecycle
CREATE TABLE sale_detail_like LIKE sale_detail LIFECYCLE 10;
Run the DESC EXTENDED sale_detail_like; command to view details such as the table's schema and lifecycle.
The schema of sale_detail_like is identical to the schema of sale_detail. All properties, such as column names, column comments, and table comments, are copied, except for the lifecycle property. However, the data in sale_detail is not copied to the sale_detail_like table.
Copy a schema from an external table
-- Create a new table named mc_oss_extable_orc_like that has the same schema as the external table mc_oss_extable_orc.
CREATE TABLE mc_oss_extable_orc_like LIKE mc_oss_extable_orc;
Run the DESC mc_oss_extable_orc_like; command to view details such as the table's schema.
+------------------------------------------------------------------------------------+
| Owner: ALIYUN$****@***.aliyunid.com | Project: max_compute_7u************yoq |
| TableComment: |
+------------------------------------------------------------------------------------+
| CreateTime: 2022-08-11 11:10:47 |
| LastDDLTime: 2022-08-11 11:10:47 |
| LastModifiedTime: 2022-08-11 11:10:47 |
+------------------------------------------------------------------------------------+
| InternalTable: YES | Size: 0 |
+------------------------------------------------------------------------------------+
| Native Columns: |
+------------------------------------------------------------------------------------+
| Field | Type | Label | Comment |
+------------------------------------------------------------------------------------+
| id | string | | |
| name | string | | |
+------------------------------------------------------------------------------------+
New data types
SET odps.sql.type.system.odps2=true;
CREATE TABLE test_newtype (
c1 TINYINT,
c2 SMALLINT,
c3 INT,
c4 BIGINT,
c5 FLOAT,
c6 DOUBLE,
c7 DECIMAL,
c8 BINARY,
c9 TIMESTAMP,
c10 ARRAY<MAP<BIGINT,BIGINT>>,
c11 MAP<STRING,ARRAY<BIGINT>>,
c12 STRUCT<s1:STRING,s2:BIGINT>,
c13 VARCHAR(20))
LIFECYCLE 1;
DROP TABLE
Drops a non-partitioned table or a partitioned table.
Precautions
Use caution when dropping a table. You can restore a dropped table only if the backup and restore feature is enabled for the project and the table is within the configured data retention period. For more information, see local backup.
Dropping a table reduces the storage usage of the MaxCompute project.
Syntax
DROP TABLE [IF EXISTS] <table_name>;
Parameters
Parameter | Required | Description |
IF EXISTS | No | Without IF EXISTS, attempting to drop a non-existent table returns an exception. With IF EXISTS, the statement succeeds even if the table does not exist. |
table_name | Yes | The name of the table to drop. |
Examples
-- Drop the sale_detail table. The statement succeeds regardless of whether the table exists.
DROP TABLE IF EXISTS sale_detail;
References
To copy table data to another table, see CLONE TABLE.
To modify and view table information, see Modify and view tables.
To manage existing table partitions, see Partition operations.
To manage existing table columns, see Column operations.