PolarDB:CREATE TABLE

Last Updated: May 22, 2025

This topic describes the CREATE TABLE statement, which is used to create a new, initially empty table in the current database.

Overview

CREATE TABLE is used to create a new, initially empty table in the current database. The new table is owned by the user who issues the command.

To create a new table in a specific schema, specify the schema name, such as CREATE TABLE myschema.mytable .... If no schema is specified, the table will be created in the current schema. You cannot specify a schema when creating a temporary table, as it exists in a special schema. The name of a new table must be distinct from the name of any other table, sequence, index, view, or external table in the same schema.

CREATE TABLE automatically creates a data type to represent the composite type corresponding to a row of the table. Therefore, a table name cannot be the same as any existing data type in the schema.

The optional constraint clause imposes a constraint (test) that the new or updated row must satisfy for the insert or update operation to succeed. A constraint is an SQL object that helps define the set of valid values in a table in a variety of ways.

Constraints can be defined in two ways: as table constraints or as column constraints. A column constraint is defined as part of a column definition. A table constraint definition is not tied to a particular column, and it can encompass more than one column. Every column constraint can also be written as a table constraint; a column constraint is only a notational convenience for constraints that affect a single column.

To create a table, you must have the USAGE privilege on all column types or the type in the OF clause.

Syntax

    CREATE [ [ GLOBAL | LOCAL ] { TEMPORARY | TEMP } ] TABLE [ IF NOT EXISTS ] table_name ( [
      { column_name data_type [ COLLATE collation ] [ column_constraint [ ... ] ]
        | table_constraint
        | LIKE source_table [ like_option ... ] }
        [, ... ]
    ] )
    [ INHERITS ( parent_table [, ... ] ) ]
    [ PARTITION BY { RANGE | LIST | HASH } ( { column_name | ( expression ) } [ COLLATE collation ] [ opclass ] [, ... ] ) ]
    [ USING method ]
    [ WITH ( storage_parameter [= value] [, ... ] ) | WITHOUT OIDS ]
    [ ON COMMIT { PRESERVE ROWS | DELETE ROWS | DROP } ]
    [ TABLESPACE tablespace_name ]

    CREATE [ [ GLOBAL | LOCAL ] { TEMPORARY | TEMP } ] TABLE [ IF NOT EXISTS ] table_name
        OF type_name [ (
      { column_name [ WITH OPTIONS ] [ column_constraint [ ... ] ]
        | table_constraint }
        [, ... ]
    ) ]
    [ PARTITION BY { RANGE | LIST | HASH } ( { column_name | ( expression ) } [ COLLATE collation ] [ opclass ] [, ... ] ) ]
    [ USING method ]
    [ WITH ( storage_parameter [= value] [, ... ] ) | WITHOUT OIDS ]
    [ ON COMMIT { PRESERVE ROWS | DELETE ROWS | DROP } ]
    [ TABLESPACE tablespace_name ]

    CREATE [ [ GLOBAL | LOCAL ] { TEMPORARY | TEMP } ] TABLE [ IF NOT EXISTS ] table_name
        PARTITION OF parent_table [ (
      { column_name [ WITH OPTIONS ] [ column_constraint [ ... ] ]
        | table_constraint }
        [, ... ]
    ) ] { FOR VALUES partition_bound_spec | DEFAULT }
    [ PARTITION BY { RANGE | LIST | HASH } ( { column_name | ( expression ) } [ COLLATE collation ] [ opclass ] [, ... ] ) ]
    [ USING method ]
    [ WITH ( storage_parameter [= value] [, ... ] ) | WITHOUT OIDS ]
    [ ON COMMIT { PRESERVE ROWS | DELETE ROWS | DROP } ]
    [ TABLESPACE tablespace_name ]

    where column_constraint is:

    [ CONSTRAINT constraint_name ]
    { NOT NULL |
      NULL |
      CHECK ( expression ) [ NO INHERIT ] |
      DEFAULT default_expr |
      GENERATED ALWAYS AS ( generation_expr ) STORED |
      GENERATED { ALWAYS | BY DEFAULT } AS IDENTITY [ ( sequence_options ) ] |
      UNIQUE index_parameters |
      PRIMARY KEY index_parameters |
      REFERENCES reftable [ ( refcolumn ) ] [ MATCH FULL | MATCH PARTIAL | MATCH SIMPLE ]
        [ ON DELETE referential_action ] [ ON UPDATE referential_action ] }
    [ DEFERRABLE | NOT DEFERRABLE ] [ INITIALLY DEFERRED | INITIALLY IMMEDIATE ]

    and table_constraint is:

    [ CONSTRAINT constraint_name ]
    { CHECK ( expression ) [ NO INHERIT ] |
      UNIQUE ( column_name [, ... ] ) index_parameters |
      PRIMARY KEY ( column_name [, ... ] ) index_parameters |
      EXCLUDE [ USING index_method ] ( exclude_element WITH operator [, ... ] ) index_parameters [ WHERE ( predicate ) ] |
      FOREIGN KEY ( column_name [, ... ] ) REFERENCES reftable [ ( refcolumn [, ... ] ) ]
        [ MATCH FULL | MATCH PARTIAL | MATCH SIMPLE ] [ ON DELETE referential_action ] [ ON UPDATE referential_action ] }
    [ DEFERRABLE | NOT DEFERRABLE ] [ INITIALLY DEFERRED | INITIALLY IMMEDIATE ]

    like_option is:

    { INCLUDING | EXCLUDING } { COMMENTS | CONSTRAINTS | DEFAULTS | GENERATED | IDENTITY | INDEXES | STATISTICS | STORAGE | ALL }

    partition_bound_spec is:

    IN ( partition_bound_expr [, ...] ) |
    FROM ( { partition_bound_expr | MINVALUE | MAXVALUE } [, ...] )
      TO ( { partition_bound_expr | MINVALUE | MAXVALUE } [, ...] ) |
    WITH ( MODULUS numeric_literal, REMAINDER numeric_literal )

    index_parameters in UNIQUE, PRIMARY KEY, and EXCLUDE constraints are:

    [ INCLUDE ( column_name [, ... ] ) ]
    [ WITH ( storage_parameter [= value] [, ... ] ) ]
    [ USING INDEX TABLESPACE tablespace_name ]

    exclude_element in an EXCLUDE constraint is:

    { column_name | ( expression ) } [ opclass ] [ ASC | DESC ] [ NULLS { FIRST | LAST } ]

Parameters

TEMPORARY or TEMP

If specified, the table is created as a temporary table. Temporary tables are automatically deleted at the end of the session, or you can choose to delete them at the end of the current transaction (see ON COMMIT below). If a temporary table exists, an existing permanent table with the same name will not be visible to the current session, but can be referenced using a schema-qualified name. Any index created on a temporary table is automatically temporary.

The autovacuum daemon cannot access temporary tables and therefore cannot vacuum or analyze them. For this reason, appropriate vacuum and analyze operations should be performed via session SQL commands. For example, if a temporary table is going to be used in complex queries, it is wise to run ANALYZE on it after it is populated.
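
As a minimal sketch (the table and column names are illustrative), a session-scoped scratch table that is populated and then analyzed manually:

    -- A temporary table visible only to the current session; by default
    -- it persists until the session ends (ON COMMIT PRESERVE ROWS).
    CREATE TEMPORARY TABLE session_scratch (
        id      integer,
        payload text
    );

    -- Autovacuum cannot analyze temporary tables, so run ANALYZE
    -- manually after populating the table:
    INSERT INTO session_scratch
        SELECT g, g::text FROM generate_series(1, 1000) AS g;
    ANALYZE session_scratch;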

You can optionally write GLOBAL or LOCAL before TEMPORARY or TEMP; these keywords currently have no effect in PolarDB (see the Compatibility section of this topic).

IF NOT EXISTS

Do not throw an error if a relation with the same name already exists; a notice is issued instead. Note that this does not guarantee that the existing relation is anything like the table that would have been created.

table_name

The name of the table to be created (optionally qualified with a schema).

OF type_name

Creates a typed table, whose structure is taken from the specified composite type (the name can optionally be qualified with a schema). A typed table is tied to its type; for example, the table is dropped if the type is dropped (with DROP TYPE ... CASCADE).

When a typed table is created, the data types of the columns are determined by the underlying composite type and are not specified in the CREATE TABLE command. But CREATE TABLE commands can add default values and constraints to tables, and can specify storage parameters.

column_name

The name of a column to be created in the new table.

data_type

The data type of the column. This can include array specifiers.

COLLATE collation

The COLLATE clause assigns a collation to the column, which must be of a collatable data type. If not specified, the default collation for the column data type is used.

INHERITS (parent_table [, ... ] )

The optional INHERITS clause specifies a list of tables from which the new table automatically inherits all columns. Parent tables can be regular tables or foreign tables.

The use of INHERITS creates a persistent relationship between the new child table and its parent table. Schema changes to the parent table are usually also propagated to the child table, and by default the data of the child table is included in the scan of the parent table.

If columns with the same name exist in more than one parent table, an error is reported unless the data type of each such column in the parent table matches. If there are no conflicts, the duplicate columns are merged to form a single column in the new table. If the list of column names in the new table contains a column name that is also inherited, the data type must also match the inherited column, and the column definitions will be merged into one. If the new table explicitly specifies any default value for a column, this default value overrides the default value from the inherited declaration of that column. Otherwise, any parent table must specify the same default value for the column, or an error will be reported.

CHECK constraints are essentially merged in the same way as columns: If multiple parent tables or new table definitions contain identically named CHECK constraints, these constraints must all have the same check expression, otherwise an error will be reported. Constraints with the same name and expression are merged into one copy. Constraints in a parent table that are marked as NO INHERIT will not be considered. Note that an unnamed CHECK constraint in the new table will never be merged, because a unique name will always be chosen for it.

The STORAGE settings for the columns are also copied from parent tables.

If a column in the parent table is an identity column, that property is not inherited. A column in the child table can be declared an identity column if desired.
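
As an illustrative sketch (the table names are hypothetical), a child table that inherits all columns of its parent and adds one of its own:

    CREATE TABLE vehicles (
        id    integer NOT NULL,
        brand text
    );

    -- cars inherits id and brand from vehicles and adds doors;
    -- by default, rows of cars are included in scans of vehicles.
    CREATE TABLE cars (
        doors integer
    ) INHERITS (vehicles);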

PARTITION BY { RANGE | LIST | HASH } ( {column_name | (expression) } [opclass ] [, ...] )

The optional PARTITION BY clause specifies a strategy for partitioning the table. The table created is called a partitioned table. The parenthesized list of columns or expressions forms the partition key for the table. If you use range or hash partitioning, the partition key can contain multiple columns or expressions (up to 32, but this limit can be changed when building PolarDB). However, for list partitioning, the partition key must consist of a single column or expression.

Range and list partitioning require the btree operator class, while hash partitioning requires a hash operator class. If no operator class is explicitly specified, the default operator class of the corresponding type is used; if no default operator class exists, an error is reported.

The partitioned table is divided into child tables (called partitions), which are created using separate CREATE TABLE commands. The partitioned table itself is empty. Rows inserted into the table are routed to partitions based on the values of the columns or expressions in the partition key. If no existing partition matches the values in the new row, an error is reported.

Partitioned tables do not support EXCLUDE constraints; however, you can define these constraints on individual partitions.

PARTITION OF parent_table { FOR VALUES partition_bound_spec | DEFAULT }

Creates a table as a partition of a specified parent table. When the table is created, you can use FOR VALUES to create a partition for a specific value, or you can use a DEFAULT to create a default partition. Any indexes, constraints, and user-defined row-level triggers that exist in the parent table are cloned onto the new partition.

partition_bound_spec must correspond to the partitioning method and partitioning key of the parent table, and must not overlap any existing partition of the parent table. The form with IN is used for list partitioning, the form with FROM and TO is used for range partitioning, and the form with WITH is used for hash partitioning.

The partition_bound_expr is any variable-free expression (subqueries, window functions, aggregate functions, and set return functions are not allowed). Its data type must match the data type of the corresponding partition key column. The expression is evaluated only once at table creation, so it can even contain volatile expressions, such as CURRENT_TIMESTAMP.

When you create a list partition, you can specify NULL to indicate that the partition allows the partition key column to be null. However, a given parent table cannot have more than one such list partition. NULL cannot be specified for range partitions.

When you create a range partition, the lower bound specified with FROM is an inclusive bound while the upper bound specified with TO is an exclusive bound. In other words, the values specified in the FROM list are valid values for the corresponding partition key column for that partition, while the values in the TO list are not. Note that this statement must be understood according to the rules of row-wise comparison. For example, given PARTITION BY RANGE (x,y), a partition bound FROM (1, 2) TO (3, 4) allows x=1 with any y>=2, x=2 with any non-null y, and x=3 with any y<4.

You can use special values MINVALUE and MAXVALUE when you create range partitions to indicate that column values have no lower or upper bounds. For example, a partition defined using FROM (MINVALUE) TO (10) allows any value less than 10, and a partition defined using a FROM (10) TO (MAXVALUE) allows any value greater than or equal to 10.

When creating a range partition that involves multiple columns, it also makes sense to have the MAXVALUE as part of the lower bound and the MINVALUE as part of the upper bound. For example, a partition defined with FROM (0, MAXVALUE) TO (10, MAXVALUE) allows any row where the first partition key column is greater than 0 and less than or equal to 10. Similarly, a partition defined using FROM ('a', MINVALUE) TO ('b', MINVALUE) allows any row where the first partition key column starts with "a".

Note that if you define MINVALUE or MAXVALUE for one column of a partition bound, the same value must be used for all subsequent columns. For example, (10, MINVALUE, 0) is not a valid bound; you should write (10, MINVALUE, MINVALUE).

Also note that some element types, such as timestamp, have a concept of "infinity", which is another value that can be stored. This is different from MINVALUE and MAXVALUE, which are not actual values that can be stored, but rather ways to represent unbounded values. MAXVALUE may be thought of as a value greater than any other value (including "infinity"), and MINVALUE can be thought of as a value less than any other value (including "minus infinity"). Therefore, the range FROM ('infinity') TO (MAXVALUE) is not an empty range; it allows only one value to be stored, which is "infinity".

If DEFAULT is specified, the table is created as the default partition of the parent table. This option does not apply to the hash partitioned table. Partition key values that do not fit in any other partition of the given parent table are routed to the default partition.

When a table already has a DEFAULT partition and a new partition is added to it, the default partition must be scanned to verify that it does not contain any rows that belong in the new partition. If the default partition contains a large number of rows, the scan can be slow. The scan is skipped if the default partition is a foreign table or if it has a constraint that proves that it cannot contain rows that should be placed in the new partition.

When creating a hash partition, a modulus and remainder must be specified. The modulus must be a positive integer and the remainder must be a non-negative integer less than the modulus. Typically, when initially setting up the hash-partitioned table, you should choose a modulus equal to the number of partitions, and assign each table the same modulus and a different remainder (see the following example). However, it is not required that every partition has the same modulus, only that every modulus which occurs among the partitions of a hash-partitioned table is a factor of the next larger modulus. This allows the number of partitions to be incrementally increased without moving all the data at once. For example, suppose you have a hash-partitioned table with 8 partitions, each with a modulus of 8, but find it necessary to increase the number of partitions to 16. You can detach one of the modulus-8 partitions, create two new modulus-16 partitions covering the same portion of the key space (one with a remainder equal to the remainder of the detached partition, and the other with a remainder equal to that value plus 8), and repopulate them with data. You can then repeat this process for each modulus-8 partition until there are no more left. Although each of these steps may involve a large amount of data movement, it is still better than creating a whole new table and moving all the data at once.
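
As a hedged sketch of one step of the incremental split described above (assuming a hash-partitioned table orders with eight modulus-8 partitions, one of which, orders_p0, holds remainder 0), the detached portion of the key space is re-covered by two modulus-16 partitions with remainders 0 and 0 + 8:

    -- Detach the modulus-8 partition that holds remainder 0.
    ALTER TABLE orders DETACH PARTITION orders_p0;

    -- Re-cover the same portion of the key space with two modulus-16
    -- partitions (8 is a factor of 16, as required).
    CREATE TABLE orders_p0a PARTITION OF orders
        FOR VALUES WITH (MODULUS 16, REMAINDER 0);
    CREATE TABLE orders_p0b PARTITION OF orders
        FOR VALUES WITH (MODULUS 16, REMAINDER 8);

    -- Repopulate the new partitions from the detached table, then drop it.
    INSERT INTO orders SELECT * FROM orders_p0;
    DROP TABLE orders_p0;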

A partition must have the same column names and types as the partitioned table to which it belongs. Changes to the column names or types of a partitioned table are automatically propagated to all partitions. CHECK constraints are automatically inherited by each partition, but an individual partition can specify additional CHECK constraints; additional constraints with the same name and condition as those in the parent table are merged with the parent table's constraint. Defaults can be specified separately for each partition. Note, however, that a partition's default value is not applied when inserting a tuple through the partitioned table.

Records inserted into the partitioned table are automatically routed to the correct partition. If a suitable partition does not exist, an error occurs.

Operations like TRUNCATE typically affect a table and all its inheritance children. These operations will cascade to all partitions, but they may also be performed on a single partition. Note that using a DROP TABLE to drop a partition requires an ACCESS EXCLUSIVE lock on the parent table.

LIKE source_table [ like_option ... ]

Specifies a table from which the new table automatically copies all column names, their data types, and their not-null constraints.

Unlike INHERITS, the new table is completely decoupled from the original table after it is created. Changes to the original table will not be applied to the new table, and it is not possible to include the data of the new table in the scan of the original table.

Also unlike INHERITS, columns and constraints copied with LIKE are not merged with similarly named columns and constraints. An error is reported if the same name is explicitly specified or if the same name is specified in another LIKE clause.

The optional like_option clauses specify which additional properties of the original table to copy. Specify INCLUDING to copy the property and EXCLUDING to omit the property. EXCLUDING is the default. If multiple specifications are made for the same kind of object, the last one is used. The available options are:

INCLUDING COMMENTS

Comments for the copied columns, constraints, and indexes will be copied. The default behavior is to exclude comments, so that the copied columns and constraints in the new table have no comments.

INCLUDING CONSTRAINTS

CHECK constraints will be copied. No distinction is made between column constraints and table constraints. Not-null constraints are always copied to the new table.

INCLUDING DEFAULTS

Default expressions for the copied column definitions will be copied. Otherwise, default expressions are not copied, and the copied columns in the new table will have null defaults. Note that copying defaults that call database-modification functions, such as nextval, may create a functional linkage between the original and the new table.

INCLUDING GENERATED

Any generation expressions of copied column definitions will be copied. By default, the new columns will be regular base columns.

INCLUDING IDENTITY

Any identity specification of copied column definitions will be copied. A new sequence is created for each identity column of the new table, separate from the sequences associated with the old table.

INCLUDING INDEXES

Indexes, PRIMARY KEY, UNIQUE, and EXCLUDE constraints on the original table will be created on the new table. The names of the new indexes and constraints are chosen according to the default rules, regardless of the original naming. (This behavior avoids possible duplicate-name failures for the new indexes.)

INCLUDING STATISTICS

Extended statistics are copied to the new table.

INCLUDING STORAGE

STORAGE settings for the copied column definitions will be copied. The default behavior is to exclude STORAGE settings, resulting in the copied columns in the new table having type-specific default settings.

INCLUDING ALL

An abbreviated form selecting all the available individual options. (You can write individual EXCLUDING clauses after INCLUDING ALL to select all but some specific options.)

The LIKE clause can also be used to copy column definitions from views, foreign tables, or composite types. Inapplicable options, such as INCLUDING INDEXES from a view, are ignored.
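
For instance, a minimal sketch (films_copy is a hypothetical name) that copies every available property of the films table except its indexes:

    CREATE TABLE films_copy (
        LIKE films INCLUDING ALL EXCLUDING INDEXES
    );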

CONSTRAINT constraint_name

An optional name for a column or table constraint. If the constraint is violated, the constraint name is present in error messages, so constraint names like "col must be positive" can be used to communicate helpful constraint information to client applications. (Double quotes are needed to specify constraint names that contain spaces.) If a constraint name is not specified, the system generates a name.

NOT NULL

The column is not allowed to contain null values.

NULL

The column is allowed to contain null values. This is the default.

This clause is only provided for compatibility with non-standard SQL databases. Its use is discouraged in new applications.

CHECK (expression) [ NO INHERIT ]

The CHECK clause specifies an expression producing a Boolean result which new or updated rows must satisfy for an insert or update operation to succeed. Expressions evaluating to TRUE or UNKNOWN succeed. Should any row of an insert or update operation produce a FALSE result, an error exception is raised and the insert or update does not alter the database. A check constraint specified as a column constraint should reference that column's value only, while an expression appearing in a table constraint can reference multiple columns.

Currently, CHECK expressions cannot contain subqueries or refer to variables other than columns of the current row. The system column tableoid may be referenced, but not any other system column.

A constraint marked with NO INHERIT will not propagate to child tables.

When a table has multiple CHECK constraints, they will be tested for each row in alphabetical order by name, after checking NOT NULL constraints.

DEFAULT default_expr

The DEFAULT clause assigns a default data value for the column whose column definition it appears within. The value is any variable-free expression (in particular, cross-references to other columns in the current table are not allowed). Subqueries are not allowed either. The data type of the default expression must match the data type of the column.

The default expression will be used in any insert operation that does not specify a value for the column. If there is no default for a column, then the default is null.

GENERATED ALWAYS AS (generation_expr) STORED

This clause creates the column as a generated column. The column cannot be written to, and when read, the result of the specified expression will be returned.

The keyword STORED is required to signify that the column will be computed on write and will be stored on disk.

The generation expression can refer to other columns in the table, but not other generated columns. Any functions and operators used must be immutable. References to other tables are not allowed.
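
A minimal sketch (hypothetical table people) of a stored generated column computed from another column:

    CREATE TABLE people (
        height_cm numeric,
        -- Computed on write and stored on disk; cannot be written to directly.
        height_in numeric GENERATED ALWAYS AS (height_cm / 2.54) STORED
    );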

GENERATED { ALWAYS | BY DEFAULT } AS IDENTITY [ (sequence_options ) ]

This clause creates the column as an identity column. It will have an implicit sequence attached to it and the column in new rows will automatically have values from the sequence assigned to it.

The clauses ALWAYS and BY DEFAULT determine how explicitly user-specified values are handled in INSERT and UPDATE commands.

In an INSERT command, if ALWAYS is selected, a user-specified value is accepted only if the INSERT statement specifies OVERRIDING SYSTEM VALUE. If BY DEFAULT is selected, the user-specified value takes precedence. (In the COPY command, user-specified values are always used regardless of this setting.)

In an UPDATE command, if ALWAYS is selected, any update of the column to any value other than DEFAULT will be rejected. If BY DEFAULT is selected, the column can be updated normally. (There is no OVERRIDING clause for the UPDATE command.)

The optional sequence_options clause can be used to override the options for a sequence. For more information, see CREATE SEQUENCE.
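
For example, a sketch (hypothetical table tickets) of an ALWAYS identity column, and the OVERRIDING clause an INSERT needs in order to supply an explicit value:

    CREATE TABLE tickets (
        ticket_no integer GENERATED ALWAYS AS IDENTITY (START WITH 100),
        subject   text
    );

    -- The identity value is assigned from the implicit sequence:
    INSERT INTO tickets (subject) VALUES ('first ticket');

    -- An explicit value is rejected unless OVERRIDING SYSTEM VALUE is given:
    INSERT INTO tickets (ticket_no, subject)
        OVERRIDING SYSTEM VALUE VALUES (999, 'manual ticket');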

UNIQUE (column constraint)
UNIQUE ( column_name [, ... ] ) [ INCLUDE ( column_name [, ...]) ] (table constraint)

The UNIQUE constraint specifies that a group of one or more columns of a table can contain only unique values. The behavior of a unique table constraint is the same as that of a column constraint, with the additional capability to span multiple columns.

For the purpose of a unique constraint, null values are not considered equal.

Each unique constraint must name a set of columns that is different from the set of columns named by any other unique or primary key constraint defined for the table. (Otherwise, redundant unique constraints will be discarded.)

When establishing a unique constraint for a multi-level partition hierarchy, all the columns in the partition key of the target partitioned table, as well as those of all its descendant partitioned tables, must be included in the constraint definition.

Adding a unique constraint automatically creates a unique btree index on the column or group of columns used in the constraint. The optional INCLUDE clause adds one or more columns to that index, with no enforcement of uniqueness. Note that although the constraint is not enforced on included columns, it still depends on them. Consequently, some operations on such columns (e.g., DROP COLUMN) can cause cascaded constraint and index deletion.
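
A short sketch (hypothetical names) of a unique constraint whose index carries an extra non-key column:

    CREATE TABLE bookings (
        booking_ref char(6),
        guest_name  text,
        -- Uniqueness is enforced on booking_ref only; guest_name is
        -- stored in the index to enable index-only scans.
        UNIQUE (booking_ref) INCLUDE (guest_name)
    );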

PRIMARY KEY (column constraint)
PRIMARY KEY ( column_name [, ... ] ) [ INCLUDE ( column_name [, ...]) ] (table constraint)

The PRIMARY KEY constraint specifies that a column or columns of a table can contain only unique (non-duplicate), non-null values. Only one primary key can be specified for a table, whether as a column constraint or a table constraint.

The primary key constraint should name a set of columns that is different from the set of columns named by any unique constraint defined for the same table. (Otherwise, the unique constraint is redundant and will be discarded.)

PRIMARY KEY enforces the same data constraints as a combination of UNIQUE and NOT NULL. However, identifying a set of columns as the primary key also provides metadata about the design of the schema, since a primary key implies that other tables can rely on this set of columns as a unique identifier for rows.

When placed on a partitioned table, PRIMARY KEY constraints share the restrictions previously described for UNIQUE constraints.

Adding a PRIMARY KEY constraint automatically creates a unique btree index on the column or group of columns used in the constraint. The optional INCLUDE clause allows for specifying a list of columns to be included in the non-key portion of the index. Although uniqueness is not enforced on included columns, the constraint still depends on them. Consequently, some operations on such columns (e.g., DROP COLUMN) can cause cascaded constraint and index deletion.

EXCLUDE [ USING index_method] ( exclude_element WITH operator [, ... ] ) index_parameters [ WHERE (predicate) ]

The EXCLUDE clause defines an exclusion constraint, which guarantees that if any two rows are compared on the specified columns or expressions using specified operators, not all comparisons will return TRUE. If all of the specified operators test for equality, this is equivalent to a UNIQUE constraint, although an ordinary unique constraint will be faster. However, exclusion constraints can specify constraints that are more general than simple equality. For example, you can specify a constraint that no two rows in the table contain overlapping circles by using the && operator.

Exclusion constraints are implemented using an index, so each specified operator must be associated with an appropriate operator class for the index access method index_method. The operators are required to be commutative. Each exclude_element defines a column of the index, so it can optionally specify a collation, an operator class, operator class parameters, and/or ordering options. For more information, see CREATE INDEX.

The access method must support amgettuple, which currently means that GIN cannot be used. Although it is allowed, there is little point in using B-tree or hash indexes with an exclusion constraint, because this does no better than an ordinary unique constraint. So in practice the access method will always be GiST or SP-GiST.

predicate allows you to specify an exclusion constraint on a subset of the table; internally this creates a partial index. Note that parentheses are required around the predicate.
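
As a sketch (hypothetical names; assumes the btree_gist extension is installed so that integer equality can be used in a GiST index), an exclusion constraint that applies only to active rows:

    CREATE EXTENSION IF NOT EXISTS btree_gist;

    CREATE TABLE reservations (
        room   integer,
        during tsrange,
        active boolean,
        -- No two active reservations for the same room may overlap in
        -- time; internally this creates a partial index.
        EXCLUDE USING gist (room WITH =, during WITH &&) WHERE (active)
    );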

REFERENCES reftable [ ( refcolumn ) ] [ MATCH matchtype ] [ ON DELETE referential_action ] [ ON UPDATE referential_action ] (column constraint)
FOREIGN KEY ( column_name [, ... ] ) REFERENCES reftable [ ( refcolumn [, ... ] ) ] [ MATCH matchtype ] [ ON DELETE referential_action ] [ ON UPDATE referential_action ] (table constraint)

These clauses specify a foreign key constraint, which requires that a group of one or more columns of the new table must only contain values that match values in the referenced column(s) of some row of the referenced table. If the refcolumn list is omitted, the primary key of the reftable is used. The referenced column must be a non-deferrable unique constraint or primary key constraint in the referenced table. The user must have the REFERENCES permission on the referenced table (either the whole table, or the specific referenced columns). The addition of a foreign key constraint requires a SHARE ROW EXCLUSIVE lock on the referenced table. Note that foreign key constraints cannot be defined between temporary tables and permanent tables.

A value inserted into the referencing column(s) is matched against the values of the referenced table and referenced columns using the given match type. There are three match types: MATCH FULL, MATCH PARTIAL, and MATCH SIMPLE (which is the default). MATCH FULL will not allow one column of a multi-column foreign key to be null unless all foreign key columns are null; if they are all null, the row is not required to have a match in the referenced table. MATCH SIMPLE allows any of the foreign key columns to be null; if any of them are null, the row is not required to have a match in the referenced table. MATCH PARTIAL is not yet implemented. (Of course, NOT NULL constraints can be applied to the referencing columns to prevent these cases from arising).

In addition, when the data in the referenced columns is changed, certain actions are performed on the data in this table's columns. The ON DELETE clause specifies the action to perform when a referenced row in the referenced table is being deleted. Similarly, the ON UPDATE clause specifies the action to perform when a referenced column in the referenced table is updated to a new value. If the row is updated, but the referenced column is not actually changed, no action is taken. Referential actions other than the NO ACTION check cannot be deferred, even if the constraint is declared deferrable. There are the following possible actions for each clause:

NO ACTION

Produce an error indicating that the deletion or update would create a foreign key constraint violation. If the constraint is deferred, this error will be produced at constraint check time if there still exist any referencing rows. This is the default action.

RESTRICT

Produce an error indicating that the deletion or update would create a foreign key constraint violation. This is the same as NO ACTION except that the check is not deferrable.

CASCADE

Delete any rows referencing the deleted row, or update the values of the referencing column(s) to the new values of the referenced columns, respectively.

SET NULL

Set all of the referencing columns to null.

SET DEFAULT

Set all of the referencing columns to their default values. (There must be a row in the referenced table matching the default values, if they are not null, or the operation will fail.)

If the referenced column(s) are changed frequently, it might be wise to add an index to the referencing column(s) so that referential actions associated with the foreign key constraint can be performed more efficiently.
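
A sketch tying these clauses together (films2 is a hypothetical name; the distributors table with primary key did appears in the Examples section):

    CREATE TABLE films2 (
        code  char(5) PRIMARY KEY,
        title varchar(40),
        did   integer,
        -- Clear the reference when a distributor is deleted, and
        -- propagate changes to did automatically:
        FOREIGN KEY (did) REFERENCES distributors (did)
            ON DELETE SET NULL
            ON UPDATE CASCADE
    );

    -- Index the referencing column so the referential actions can be
    -- performed efficiently, as suggested above:
    CREATE INDEX ON films2 (did);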

DEFERRABLE
NOT DEFERRABLE

This clause controls whether the constraint can be deferred. A constraint that is not deferrable will be checked immediately after every command. Checking of constraints that are deferrable can be postponed until the end of the transaction (using the SET CONSTRAINTS command). NOT DEFERRABLE is the default. Currently, only UNIQUE, PRIMARY KEY, EXCLUDE, and REFERENCES (foreign key) constraints accept this clause. NOT NULL and CHECK constraints are not deferrable. Note that deferrable constraints cannot be used as conflict arbitrators in an INSERT statement that includes an ON CONFLICT DO UPDATE clause.

INITIALLY IMMEDIATE
INITIALLY DEFERRED

If a constraint is deferrable, this clause specifies the default time to check the constraint. If the constraint is INITIALLY IMMEDIATE, it is checked after each statement. This is the default. If the constraint is INITIALLY DEFERRED, it is checked only at the end of the transaction. The constraint check time can be modified with the SET CONSTRAINTS command.

USING method

This optional clause specifies the table access method used to store the contents of the new table; this method requires an access method of type TABLE. If this option is not specified, the default table access method is chosen for the new table.

WITH ( storage_parameter [= value] [, ... ] )

This clause specifies optional storage parameters for a table or index. For more information, see the Storage parameters section of this topic. For backward compatibility, the WITH clause for a table can also include OIDS=FALSE to specify that rows of the new table should not contain OIDs (object identifiers). OIDS=TRUE is no longer supported.

WITHOUT OIDS

This is a backward-compatible syntax for declaring a table WITHOUT OIDS. Creating a table WITH OIDS is not supported anymore.

ON COMMIT

The behavior of temporary tables at the end of a transaction block can be controlled using ON COMMIT. The three options are:

PRESERVE ROWS

No special action is taken at the ends of transactions. This is the default behavior.

DELETE ROWS

All rows in the temporary table will be deleted at the end of each transaction block. In essence, an automatic TRUNCATE is done at each commit. When used on a partitioned table, this is not cascaded to its partitions.

DROP

The temporary table will be dropped at the end of the current transaction block. When used on a partitioned table, this action drops its partitions, and when used on tables with inheritance children, it drops the dependent children.

TABLESPACE tablespace_name

tablespace_name is the name of the tablespace in which the new table is to be created. If not specified, default_tablespace is consulted, or temp_tablespaces if the table is temporary. For partitioned tables, since no storage is required for the table itself, the specified tablespace overrides default_tablespace as the default tablespace for any newly created partitions when no other tablespace is explicitly specified.

USING INDEX TABLESPACE tablespace_name

This clause allows selection of the tablespace in which the index associated with a UNIQUE, PRIMARY KEY, or EXCLUDE constraint will be created. If not specified, default_tablespace is consulted, or temp_tablespaces if the table is temporary.

Storage parameters

The WITH clause can specify storage parameters for a table, and for indexes associated with a UNIQUE, PRIMARY KEY, or EXCLUDE constraint. Storage parameters for indexes are documented in CREATE INDEX. The storage parameters currently available for tables are listed below. For many of these parameters, as shown, there is an additional parameter with the same name prefixed with toast., which controls the behavior of the table's secondary TOAST table, if any. If a table parameter value is set but the corresponding toast. parameter is not set, then the TOAST table will use the parameter value of the table. You cannot specify these parameters for a partitioned table, but you can specify them for individual leaf partitions.
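
For example, a minimal sketch (the table name and parameter choices are illustrative) that sets a table-level parameter together with a TOAST-level parameter:

    CREATE TABLE blobs (
        id   integer,
        body text
    ) WITH (
        fillfactor = 70,                  -- applies to the main table
        toast.autovacuum_enabled = false  -- applies only to the TOAST table
    );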

fillfactor (integer)

The fillfactor for a table is a percentage between 10 and 100. 100 (complete packing) is the default. When a smaller fillfactor is specified, the INSERT operations pack table pages only to the indicated percentage. The remaining space on each page is reserved for updating rows on that page. This gives UPDATE a chance to place the updated copy of a row on the same page as the original, which is more efficient than placing it on a different page. For a table whose entries are never updated, complete packing is the best choice, but in heavily updated tables smaller fillfactors are appropriate. This parameter cannot be set for TOAST tables.

toast_tuple_target (integer)

toast_tuple_target specifies the minimum tuple length required before we try to compress and/or move long column values into TOAST tables, and is also the target length we try to reduce the length below once toasting begins. This affects columns marked as External (for move), Main (for compression), or Extended (for both) and applies only to new tuples. There is no effect on existing rows. By default this parameter is set to allow at least 4 tuples per block, which with the default block size is 2040 bytes. Valid values are between 128 bytes and (block size - header), 8160 bytes by default. Changing this value may not be useful for very short or very long rows. Note that the default setting is often close to optimal, and it is possible that setting this parameter could have negative effects in some cases. This parameter cannot be set for TOAST tables.

parallel_workers (integer)

This parameter sets the number of workers used to assist a parallel scan of this table. If not set, the system will determine a value based on the relation size. The actual number of workers chosen by the planner or by utility statements that use parallel scans may be fewer, for example due to the setting of max_worker_processes.

autovacuum_enabled, toast.autovacuum_enabled (boolean)

Enables or disables the autovacuum daemon for a particular table. If true, the autovacuum daemon will perform automatic VACUUM and/or ANALYZE operations on this table. If false, this table will not be autovacuumed, except to prevent transaction ID wraparound. Note that the autovacuum daemon does not run at all (except to prevent transaction ID wraparound) if the autovacuum parameter is false; setting individual tables' storage parameters does not override that. Therefore, there is seldom much point in explicitly setting this storage parameter to true, only to false.

vacuum_index_cleanup, toast.vacuum_index_cleanup (boolean)

Forces or disables index cleanup when VACUUM is run on this table. The default value is true. Forcibly disabling all index cleanup can speed up VACUUM very significantly, but may also lead to severely bloated indexes if table modifications are frequent. The INDEX_CLEANUP parameter of VACUUM, if specified, overrides the value of this option.

vacuum_truncate, toast.vacuum_truncate (boolean)

Enables or disables vacuum to try to truncate any empty pages at the end of this table. The default value is true. If true, VACUUM and autovacuum will perform truncation and the disk space for the truncated pages will be returned to the operating system. Note that the truncation requires ACCESS EXCLUSIVE lock on the table. The TRUNCATE parameter of VACUUM, if specified, overrides the value of this option.

autovacuum_vacuum_threshold, toast.autovacuum_vacuum_threshold (integer)

Specifies the minimum number of updated or deleted tuples needed to trigger a VACUUM in any one table. The default value is 50. This parameter can only be set in the postgresql.conf file or on the server command line. However, the setting can be overridden for individual tables by changing table storage parameters.

autovacuum_vacuum_scale_factor, toast.autovacuum_vacuum_scale_factor (floating point)

Specifies a fraction of the table size, which is added to the autovacuum_vacuum_threshold when deciding whether to trigger VACUUM. The default is 0.2 (20% of table size). This parameter can only be set in the postgresql.conf file or on the server command line. However, the setting can be overridden for individual tables by changing table storage parameters.

autovacuum_vacuum_insert_threshold, toast.autovacuum_vacuum_insert_threshold (integer)

Specifies the number of inserted tuples needed to trigger a VACUUM operation in any one table. The default value is 1000 tuples. If -1 is specified, autovacuum will not trigger a VACUUM operation on any table based on the number of inserts. This parameter can only be set in the postgresql.conf file or on the server command line; however, the setting can be overridden for individual tables by changing table storage parameters.

autovacuum_vacuum_insert_scale_factor, toast.autovacuum_vacuum_insert_scale_factor (float4)

Specifies a fraction of the size to add to autovacuum_vacuum_insert_threshold when deciding whether to trigger a VACUUM operation. The default is 0.2 (20% of table size). This parameter can only be set in the postgresql.conf file or on the server command line; however, the setting can be overridden for individual tables by changing table storage parameters.

autovacuum_analyze_threshold (integer)

Specifies the minimum number of inserted, updated, or deleted tuples that can trigger an ANALYZE operation in a table. The default value is 50. This parameter can only be set in the postgresql.conf file or on the server command line. However, the setting can be overridden for individual tables by changing table storage parameters.

autovacuum_analyze_scale_factor (floating point)

Specifies a fraction of the table size to add to the autovacuum_analyze_threshold when deciding whether to trigger an ANALYZE operation. The default value is 0.1 (10% of the table size). This parameter can only be set in the postgresql.conf file or on the server command line. However, the setting can be overridden for individual tables by changing table storage parameters.

autovacuum_vacuum_cost_delay, toast.autovacuum_vacuum_cost_delay (floating point)

Specifies the cost delay value used in automatic VACUUM operations. If -1 is specified, the regular vacuum_cost_delay value will be used. If this value is specified without units, it is taken as milliseconds. The default value is 2 milliseconds. This parameter can only be set in the postgresql.conf file or on the server command line. However, the setting can be overridden for individual tables by changing table storage parameters.

autovacuum_vacuum_cost_limit, toast.autovacuum_vacuum_cost_limit (integer)

Specifies a cost limit value used in automatic VACUUM operations. If -1 (the default) is specified, the value of the vacuum_cost_limit parameter is used. If multiple autovacuum workers exist, this value is distributed proportionally among the running autovacuum workers. Therefore, the sum of the limits for each worker does not exceed the value of this parameter. This parameter can only be set in the postgresql.conf file or on the server command line. However, the setting can be overridden for individual tables by changing table storage parameters.

autovacuum_freeze_min_age, toast.autovacuum_freeze_min_age (integer)

Specifies the cutoff age (in transactions) that VACUUM uses to decide whether to trigger freezing of pages that have an older XID. The default value is 5 million transactions. Although users can set this value anywhere from zero to one billion, VACUUM will silently limit the effective value to half the value of autovacuum_freeze_max_age, so that there is not an unreasonably short time between forced autovacuums. Note that autovacuum ignores a per-table autovacuum_freeze_min_age that exceeds half the system-wide autovacuum_freeze_max_age setting.

autovacuum_freeze_max_age, toast.autovacuum_freeze_max_age (integer)

Specifies the maximum age (in transactions) that a table's pg_class.relfrozenxid field can attain before a VACUUM operation is forced to prevent transaction ID wraparound within the table. Note that even if autovacuum is disabled, the system will launch the automatic cleanup process to prevent wraparound.

Vacuum also allows removal of old files from the pg_xact subdirectory, which is why the default is a relatively low 200 million transactions. This parameter can only be set at server start, but the setting can be reduced for individual tables by changing table storage parameters.

autovacuum_freeze_table_age, toast.autovacuum_freeze_table_age (integer)

VACUUM performs an aggressive scan when the table's pg_class.relfrozenxid field reaches the age specified by this setting. An aggressive scan differs from a regular VACUUM in that it visits every page that might contain unfrozen XIDs or MXIDs, not just those that might contain dead tuples. The default is 150 million transactions. Although users can set this value anywhere from zero to two billion, VACUUM will silently limit the effective value to 95% of autovacuum_freeze_max_age, so that a periodic manual VACUUM has a chance to run before an anti-wraparound autovacuum is launched for the table.

autovacuum_multixact_freeze_min_age, toast.autovacuum_multixact_freeze_min_age (integer)

Specifies the cutoff age (in multixacts) that VACUUM should use to decide whether to trigger freezing of pages with an older multixact ID. The default value is 5 million multixacts. Although users can set this value anywhere from zero to one billion, VACUUM will silently limit the effective value to half the value of autovacuum_multixact_freeze_max_age, so that there is not an unreasonably short time between forced autovacuums. Note that autovacuum ignores a table-specific autovacuum_multixact_freeze_min_age that exceeds half the system-wide autovacuum_multixact_freeze_max_age setting.

autovacuum_multixact_freeze_max_age, toast.autovacuum_multixact_freeze_max_age (integer)

Specifies the maximum age (in multixacts) that a table's pg_class.relminmxid field can attain before a VACUUM operation is forced to prevent multixact ID wraparound within the table. Note that even if autovacuum is disabled, the system will launch the automatic cleanup process to prevent wraparound.

Vacuuming multixacts also allows removal of old files from the pg_multixact subdirectories, which is why the default is a relatively low 400 million multixacts. This parameter can only be set at server start, but the setting can be reduced for individual tables by changing table storage parameters. Note that autovacuum ignores a per-table autovacuum_multixact_freeze_max_age that is larger than the system-wide setting (it can only be set smaller).

autovacuum_multixact_freeze_table_age, toast.autovacuum_multixact_freeze_table_age (integer)

VACUUM will perform an aggressive scan if the table's pg_class.relminmxid field reaches the age specified by this setting. An aggressive scan differs from a regular scan in that it visits every page that might contain an unfrozen XID or MXID, rather than just those that might contain dead tuples. The default is 150 million multixacts. Although users can set this value anywhere from zero to two billion, VACUUM will silently limit the effective value to 95% of autovacuum_multixact_freeze_max_age, so that a periodic manual VACUUM has a chance to run before an anti-wraparound vacuum is launched for the table.

log_autovacuum_min_duration, toast.log_autovacuum_min_duration (integer)

Causes each action executed by autovacuum to be logged if it ran for at least the specified amount of time. Setting this parameter to 0 logs all autovacuum actions. -1 (the default) disables logging autovacuum actions. If this value is specified without units, it is taken as milliseconds. For example, if you set it to 250ms, all automatic vacuums and analyzes that run 250ms or longer will be logged. In addition, when this parameter is set to any value other than -1, a message will be logged if an autovacuum action is skipped due to a conflicting lock or a concurrently dropped relation. This parameter is useful in tracking autovacuum activity. This parameter can only be set in the postgresql.conf file or on the server command line. However, this setting can be overridden for individual tables by changing the table's storage parameters.

user_catalog_table (boolean)

Declares the table as an additional catalog table for purposes of logical replication. This parameter cannot be set for TOAST tables.

Notes

PolarDB creates an index for each unique constraint and primary key constraint to enforce uniqueness. Therefore, it is not necessary to explicitly create an index for primary key columns.

Unique constraints and primary keys are not inherited in the current implementation. This makes the combination of inheritance and unique constraints rather dysfunctional.

A table cannot have more than 1600 columns. (In practice, the effective limit is usually lower because of tuple-length constraints.)

Examples

Create table films and table distributors:

    CREATE TABLE films (
        code        char(5) CONSTRAINT firstkey PRIMARY KEY,
        title       varchar(40) NOT NULL,
        did         integer NOT NULL,
        date_prod   date,
        kind        varchar(10),
        len         interval hour to minute
    );

    CREATE TABLE distributors (
         did    integer PRIMARY KEY GENERATED BY DEFAULT AS IDENTITY,
         name   varchar(40) NOT NULL CHECK (name <> '')
    );

Create a table with a two-dimensional array:

    CREATE TABLE array_int (
        vector  int[][]
    );

Define a unique table constraint for table films. Unique table constraints can be defined on one or more columns of the table:

    CREATE TABLE films (
        code        char(5),
        title       varchar(40),
        did         integer,
        date_prod   date,
        kind        varchar(10),
        len         interval hour to minute,
        CONSTRAINT production UNIQUE(date_prod)
    );

Define a check column constraint:

    CREATE TABLE distributors (
        did     integer CHECK (did > 100),
        name    varchar(40)
    );

Define a check table constraint:

    CREATE TABLE distributors (
        did     integer,
        name    varchar(40),
        CONSTRAINT con1 CHECK (did > 100 AND name <> '')
    );

Define a primary key table constraint for the table films:

    CREATE TABLE films (
        code        char(5),
        title       varchar(40),
        did         integer,
        date_prod   date,
        kind        varchar(10),
        len         interval hour to minute,
        CONSTRAINT code_title PRIMARY KEY(code,title)
    );

Define a primary key constraint for the table distributors. The following two examples are equivalent, the first using the table constraint syntax and the second using the column constraint syntax:

    CREATE TABLE distributors (
        did     integer,
        name    varchar(40),
        PRIMARY KEY(did)
    );

    CREATE TABLE distributors (
        did     integer PRIMARY KEY,
        name    varchar(40)
    );

Assign a literal constant default value for the column name, arrange for the default value of column did to be generated by selecting the next value of a sequence object, and make the default value of modtime be the time at which the row is inserted:

    CREATE TABLE distributors (
        name      varchar(40) DEFAULT 'Luso Films',
        did       integer DEFAULT nextval('distributors_serial'),
        modtime   timestamp DEFAULT current_timestamp
    );

Define two NOT NULL column constraints on the table distributors, one of which is explicitly given a name:

    CREATE TABLE distributors (
        did     integer CONSTRAINT no_null NOT NULL,
        name    varchar(40) NOT NULL
    );

Define a unique constraint for the name column:

    CREATE TABLE distributors (
        did     integer,
        name    varchar(40) UNIQUE
    );

The same unique constraint is specified with a table constraint:

    CREATE TABLE distributors (
        did     integer,
        name    varchar(40),
        UNIQUE(name)
    );

Create the same table, specifying 70% fill factor for both the table and its unique index:

    CREATE TABLE distributors (
        did     integer,
        name    varchar(40),
        UNIQUE(name) WITH (fillfactor=70)
    )
    WITH (fillfactor=70);

Create a table circles with an exclusion constraint that prevents any two circles from overlapping:

    CREATE TABLE circles (
        c circle,
        EXCLUDE USING gist (c WITH &&)
    );

Create a table cinemas in the tablespace diskvol1:

    CREATE TABLE cinemas (
            id serial,
            name text,
            location text
    ) TABLESPACE diskvol1;

Create a composite type and a typed table:

    CREATE TYPE employee_type AS (name text, salary numeric);

    CREATE TABLE employees OF employee_type (
        PRIMARY KEY (name),
        salary WITH OPTIONS DEFAULT 1000
    );

Create a range partitioned table:

    CREATE TABLE measurement (
        logdate         date not null,
        peaktemp        int,
        unitsales       int
    ) PARTITION BY RANGE (logdate);

Create a range partitioned table with multiple columns in the partition key:

    CREATE TABLE measurement_year_month (
        logdate         date not null,
        peaktemp        int,
        unitsales       int
    ) PARTITION BY RANGE (EXTRACT(YEAR FROM logdate), EXTRACT(MONTH FROM logdate));

Create a list partitioned table:

    CREATE TABLE cities (
        city_id      bigserial not null,
        name         text not null,
        population   bigint
    ) PARTITION BY LIST (left(lower(name), 1));

Create a hash partitioned table:

    CREATE TABLE orders (
        order_id     bigint not null,
        cust_id      bigint not null,
        status       text
    ) PARTITION BY HASH (order_id);

Create partitions of a range partitioned table:

    CREATE TABLE measurement_y2016m07
        PARTITION OF measurement (
        unitsales DEFAULT 0
    ) FOR VALUES FROM ('2016-07-01') TO ('2016-08-01');

Create a few partitions of a range partitioned table with multiple columns in the partition key:

    CREATE TABLE measurement_ym_older
        PARTITION OF measurement_year_month
        FOR VALUES FROM (MINVALUE, MINVALUE) TO (2016, 11);

    CREATE TABLE measurement_ym_y2016m11
        PARTITION OF measurement_year_month
        FOR VALUES FROM (2016, 11) TO (2016, 12);

    CREATE TABLE measurement_ym_y2016m12
        PARTITION OF measurement_year_month
        FOR VALUES FROM (2016, 12) TO (2017, 01);

    CREATE TABLE measurement_ym_y2017m01
        PARTITION OF measurement_year_month
        FOR VALUES FROM (2017, 01) TO (2017, 02);

Create partitions of a list partitioned table:

    CREATE TABLE cities_ab
        PARTITION OF cities (
        CONSTRAINT city_id_nonzero CHECK (city_id != 0)
    ) FOR VALUES IN ('a', 'b');

Create partitions of a list partitioned table that is itself further partitioned and then add a partition to it:

    CREATE TABLE cities_ab
        PARTITION OF cities (
        CONSTRAINT city_id_nonzero CHECK (city_id != 0)
    ) FOR VALUES IN ('a', 'b') PARTITION BY RANGE (population);

    CREATE TABLE cities_ab_10000_to_100000
        PARTITION OF cities_ab FOR VALUES FROM (10000) TO (100000);

Create partitions of a hash partitioned table:

    CREATE TABLE orders_p1 PARTITION OF orders
        FOR VALUES WITH (MODULUS 4, REMAINDER 0);
    CREATE TABLE orders_p2 PARTITION OF orders
        FOR VALUES WITH (MODULUS 4, REMAINDER 1);
    CREATE TABLE orders_p3 PARTITION OF orders
        FOR VALUES WITH (MODULUS 4, REMAINDER 2);
    CREATE TABLE orders_p4 PARTITION OF orders
        FOR VALUES WITH (MODULUS 4, REMAINDER 3);

Create a default partition:

    CREATE TABLE cities_partdef
        PARTITION OF cities DEFAULT;

Compatibility

The CREATE TABLE command conforms to the SQL standard, with exceptions listed below.

Temporary tables

Although the syntax of the CREATE TEMPORARY TABLE resembles that of the SQL standard, the effect is not the same. In the standard, temporary tables are defined just once and automatically exist (starting with empty contents) in every session that needs them. PolarDB instead requires each session to issue its own CREATE TEMPORARY TABLE command for each temporary table to be used. This allows different sessions to use the same temporary table name for different purposes, whereas the standard's approach constrains all instances of a given temporary table name to have the same table structure.

The standard's definition of the behavior of temporary tables is widely ignored. PolarDB's behavior on this point is similar to that of several other SQL databases.

The SQL standard also distinguishes between global and local temporary tables, where a local temporary table has a separate set of contents for each SQL module within each session, though its definition is still shared across sessions. Since PolarDB does not support SQL modules, this distinction is not relevant in PolarDB.

For compatibility's sake, PolarDB will accept the GLOBAL and LOCAL keywords in a temporary table declaration, but they currently have no effect. Use of these keywords is discouraged, since future versions of PolarDB might adopt a more standard-compliant interpretation of their meaning.

The ON COMMIT clause for temporary tables also resembles the SQL standard, but has some differences. If the ON COMMIT clause is omitted, SQL specifies that the default behavior is ON COMMIT DELETE ROWS. However, the default behavior in PolarDB is ON COMMIT PRESERVE ROWS. The ON COMMIT DROP option does not exist in SQL.

Non-deferred uniqueness constraints

When a UNIQUE or PRIMARY KEY constraint is not deferrable, PolarDB checks for uniqueness immediately whenever a row is inserted or modified. The SQL standard says that uniqueness should be enforced only at the end of the statement; this makes a difference when, for example, a single command updates multiple key values. To obtain standard-compliant behavior, declare the constraint as DEFERRABLE but not deferred (i.e., INITIALLY IMMEDIATE). Be aware that this can be significantly slower than immediate uniqueness checking.
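
A sketch of this workaround (hypothetical table seats): with a DEFERRABLE INITIALLY IMMEDIATE unique constraint, the check runs at the end of the statement, so a single UPDATE can swap two key values:

    CREATE TABLE seats (
        seat_no integer UNIQUE DEFERRABLE INITIALLY IMMEDIATE
    );

    INSERT INTO seats VALUES (1), (2);

    -- With an ordinary non-deferrable constraint, this UPDATE could fail
    -- on the transient duplicate; here the check is standard-compliant:
    UPDATE seats SET seat_no = 3 - seat_no;  -- swaps 1 and 2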

Column check constraints

The SQL standard states that CHECK column constraints can only refer to the column they apply to; only CHECK table constraints can refer to multiple columns. PolarDB does not enforce this restriction; it treats column and table check constraints alike.

EXCLUDE constraint

The EXCLUDE constraint type is an extension of PolarDB.

NULL "constraint"

NULL "constraint" (actually a non-constraint) is a PolarDB extension to the SQL standard that is included for compatibility with some other database systems (and for symmetry with the NOT NULL constraint). Since it is the default for any column, its presence is simply noise.

Constraint naming

The SQL standard says that table and domain constraints must have names that are unique across the schema containing the table or domain. PolarDB is laxer: it only requires constraint names to be unique across the constraints attached to a particular table or domain. However, this extra freedom does not exist for index-based constraints (UNIQUE, PRIMARY KEY, and EXCLUDE constraints), because the associated index is named the same as the constraint, and index names must be unique across all relations within the same schema.

Currently, PolarDB does not record names for NOT NULL constraints at all, so they are not subject to the uniqueness restriction. This might change in a future release.

Inheritance

Multiple inheritance via the INHERITS clause is a PolarDB language extension.

Zero-column table

PolarDB allows a table of no columns to be created (for example, CREATE TABLE foo();). This is an extension from the SQL standard, which does not allow zero-column tables. Zero-column tables are not in themselves very useful, but disallowing them creates odd special cases for ALTER TABLE DROP COLUMN, so it seems cleaner to ignore this spec restriction.

Multiple identity columns

PolarDB allows a table to have more than one identity column. The standard specifies that a table can have at most one identity column. This is relaxed mainly to give more flexibility for doing schema changes or migrations. Note that the INSERT command supports only one override clause that applies to the entire statement, so having multiple identity columns with different behaviors is not well supported.

Generated columns

The option STORED is not standard but is also used by other SQL implementations. The SQL standard does not specify the storage of generated columns.

LIKE clause

While a LIKE clause exists in the SQL standard, many of the options that PolarDB accepts for it are not in the standard, and some of the standard's options are not implemented by PolarDB.

WITH clause

The WITH clause is a PolarDB extension; storage parameters are not in the standard.

Tablespaces

The TABLESPACE and USING INDEX TABLESPACE clauses are extensions.

Typed tables

Typed tables implement a subset of the SQL standard. According to the standard, a typed table has columns corresponding to the underlying composite type as well as one other column, the "self-referencing column". PolarDB does not explicitly support self-referencing columns.

PARTITION BY clause

The PARTITION BY clause is an extension.

PARTITION OF clause

The PARTITION OF clause is an extension.