
AnalyticDB: Export to OSS

Last Updated: Mar 30, 2026

When your analysis is complete and you need to share datasets, archive results, or feed a data lake downstream, you can export data from AnalyticDB for MySQL Data Warehouse Edition (V3.0) directly to Object Storage Service (OSS). The export uses OSS external tables as the target, and you write data with standard INSERT statements. Both CSV and Parquet formats are supported.

Prerequisites

Before you begin, make sure you have:

  • An OSS bucket in the same region as your AnalyticDB for MySQL cluster, with a directory for the exported data. To set one up, follow these steps (for details, see OSS directory creation):

    1. Activate OSS.

    2. Create a bucket in the OSS console.

    3. Create a directory in the bucket to store the exported data — for example, adb_data/.

  • An AnalyticDB for MySQL Data Warehouse Edition (V3.0) cluster with an account, a database, and a configured whitelist. See Create a cluster.

    If your cluster runs in elastic mode, log on to the AnalyticDB for MySQL console, go to the cluster details page, and turn on Elastic Network Interface (ENI) in the Network Information section. For details, see Enable the ENI network.

Export overview

The export workflow has three steps:

  1. Connect to your AnalyticDB for MySQL cluster.

  2. Create an OSS external table that maps to the target OSS directory.

  3. Run an INSERT statement to write data from a source table to the external table.

After the export completes, verify the output by logging on to the OSS console and checking the target directory, or by running a SELECT query against the external table.
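For example, a quick verification pass can query the external table directly. The table name below matches the examples later in this topic; adjust it to your own schema:

```sql
-- Confirm the export is readable and the row count matches the source
SELECT COUNT(*) FROM oss_table;

-- Spot-check a few exported rows
SELECT * FROM oss_table LIMIT 10;
```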

Export to a non-partitioned external table

Create the external table

Follow the steps in Create an OSS external table for a non-partitioned object to create an external table in the target database.
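As a rough sketch, a CSV external table definition has the following shape. The endpoint, bucket name, credentials, and exact TABLE_PROPERTIES keys here are placeholders, so treat the linked topic as the authoritative reference:

```sql
-- Sketch of a non-partitioned CSV external table (placeholder values)
CREATE TABLE oss_table (
    col1 INT,
    col2 VARCHAR(64),
    col3 VARCHAR(64)
) ENGINE = 'OSS'
TABLE_PROPERTIES = '{
    "endpoint": "oss-cn-hangzhou-internal.aliyuncs.com",
    "url": "oss://<bucket_name>/adb_data/",
    "accessid": "<your_access_key_id>",
    "accesskey": "<your_access_key_secret>",
    "delimiter": ","
}';
```

The url property must point to the OSS directory you created in the prerequisites; exported objects are written under that path.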

Choose a write statement

Select a write statement based on how you want to handle existing objects in the external table's OSS path.

  • INSERT INTO ... SELECT: appends data. Each run generates a new OSS object without affecting existing objects. Use for incremental exports, or when accumulating results across multiple runs.

  • INSERT OVERWRITE ... SELECT: deletes all existing objects in the external table's OSS path, then writes new data. Use for full refreshes, or when replacing the previous export entirely.

  • REPLACE INTO ... SELECT: behaves the same as INSERT INTO ... SELECT. OSS external tables do not support primary keys, so no deduplication occurs; each run appends data to a new OSS object.

Syntax:

-- Append data
INSERT INTO <external_table>
SELECT <columns> FROM <source_table>;

-- Replace data
INSERT OVERWRITE <external_table>
SELECT <columns> FROM <source_table>;

Example — export data from source_table in the adb_demo database to an OSS external table named oss_table:

-- Append
INSERT INTO oss_table
SELECT col1, col2, col3 FROM source_table;

-- Replace
INSERT OVERWRITE oss_table
SELECT col1, col2, col3 FROM source_table;
Important

All columns must be included in the SELECT list. Selecting a subset of columns is not supported.

Export to a single OSS object (CSV only, version 3.1.2 and later)

By default, data is written to multiple OSS objects. On AnalyticDB for MySQL version 3.1.2 and later, use the output_filename hint to export all data to a single named CSV object.

/*+output_filename=<filename>*/INSERT [OVERWRITE] <external_table>
SELECT <columns> FROM <source_table>;

Example:

/*+output_filename=adb.txt*/INSERT OVERWRITE oss_table
SELECT * FROM source_table;

The OVERWRITE keyword determines whether an existing object with the same name is overwritten. Other objects in the directory are not affected.

Single-object export requires CSV format and version 3.1.2 or later. Parquet is not supported. On earlier versions, the system always generates multiple objects and determines the count based on concurrent task execution.

Export to a partitioned external table

Partitioned external tables map to a hierarchical OSS directory structure. When data is written, partition key values become directory names — they are not stored as columns in the exported CSV or Parquet objects.

Example: a table with two partition keys (pcol1 and pcol2) and three data columns, exported to adb_data/. A row where pcol1=1 and pcol2=a is written to:

adb_data/pcol1=1/pcol2=a/

The CSV or Parquet object contains only the three data column values. The partition key values (1 and a) appear in the directory path, not in the file.

Important

Partitioned external table data cannot be exported to a single OSS object. The output_filename hint is not supported for partitioned tables.

Create the external table

Follow the steps in Use external tables to import data to Data Warehouse Edition to create a partitioned OSS external table.
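As a rough sketch, a partitioned external table declares the partition keys as ordinary columns and names them in TABLE_PROPERTIES. The property keys and all values below are placeholders; follow the linked topic for the authoritative syntax:

```sql
-- Sketch of a partitioned CSV external table (placeholder values);
-- pcol1 and pcol2 become directory names under adb_data/, as in
-- adb_data/pcol1=1/pcol2=a/
CREATE TABLE oss_table_par (
    col1 INT,
    col2 VARCHAR(64),
    col3 VARCHAR(64),
    pcol1 INT,
    pcol2 VARCHAR(16)
) ENGINE = 'OSS'
TABLE_PROPERTIES = '{
    "endpoint": "oss-cn-hangzhou-internal.aliyuncs.com",
    "url": "oss://<bucket_name>/adb_data/",
    "accessid": "<your_access_key_id>",
    "accesskey": "<your_access_key_secret>",
    "delimiter": ",",
    "partition_column": "pcol1, pcol2"
}';
```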

Choose a partition specification mode

Select the mode based on how much you know about the partition values at write time.

  • Fully static — PARTITION(par1=val1, par2=val2, ...): all partition values are specified in the statement. Data is appended to the matching partition; a new object is generated each time.

  • Static-dynamic — PARTITION(par1=val1, par2, ...): high-level partition values are specified; low-level values are derived from the source data.

  • Fully dynamic — no PARTITION clause: all partition values are derived from the source data.

Syntax:

-- Fully static partition
INSERT INTO <external_table> PARTITION(<par1>=<val1>, <par2>=<val2>, ...)
SELECT <columns> FROM <source_table>;

-- Static-dynamic partition
INSERT INTO <external_table> PARTITION(<par1>=<val1>, <par2>, ...)
SELECT <columns>, <par2_column> FROM <source_table>;

-- Fully dynamic partition
INSERT INTO <external_table>
SELECT <columns>, <par1_column>, <par2_column> FROM <source_table>;

Examples:

-- Fully static partition
INSERT INTO oss_table_par PARTITION(par1=val1, par2=val2)
SELECT col1, col2, col3 FROM source_table;

-- Static-dynamic partition
INSERT INTO oss_table_par PARTITION(par1=val1, par2)
SELECT col1, col2, col3, par2col FROM source_table;

-- Fully dynamic partition
INSERT INTO oss_table_par
SELECT col1, col2, col3, par1col, par2col FROM source_table;
Important

All non-partition columns must be included in the SELECT list. Selecting a subset of columns is not supported.

Overwrite a partition

INSERT OVERWRITE PARTITION SELECT works like INSERT INTO PARTITION SELECT, except that existing objects in the target partitions are deleted before new data is written. Partitions with no new data are not affected.

INSERT OVERWRITE <external_table> PARTITION(<par1>=<val1>, <par2>=<val2>, ...) [IF NOT EXISTS]
SELECT <columns> FROM <source_table>;

Example:

INSERT OVERWRITE oss_table_par PARTITION(par1=val1, par2=val2) IF NOT EXISTS
SELECT col1, col2, col3 FROM source_table;

Append to a partition

REPLACE INTO ... PARTITION ... SELECT behaves identically to INSERT INTO ... PARTITION ... SELECT. Partitioned external tables do not support primary keys, so no deduplication occurs.

Usage notes

Include column names in output

By default, exported objects do not include a header row. To include column names, add the following hint to the beginning of the statement:

/*+sql_output_oss_file_head_enable=true*/INSERT [OVERWRITE] <external_table>
SELECT <columns> FROM <source_table>;
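Applied to the earlier non-partitioned example, the hint looks like this:

```sql
-- Write a header row with column names into each exported object
/*+sql_output_oss_file_head_enable=true*/INSERT OVERWRITE oss_table
SELECT col1, col2, col3 FROM source_table;
```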

Run large exports asynchronously

INSERT INTO ... SELECT and INSERT OVERWRITE ... SELECT run synchronously by default. For exports of hundreds of gigabytes, the client connection may time out before the job completes. To avoid this, prefix the statement with SUBMIT JOB to run the export asynchronously:

SUBMIT JOB INSERT OVERWRITE <external_table> SELECT * FROM <source_table>;

After submitting, check the job status. See Asynchronously submit an import task for details.
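Applied to the earlier example, an asynchronous full export looks like this. The statement returns a job ID that you use to track progress, as described in the linked topic:

```sql
-- Submit the export as a background job; the client returns immediately
SUBMIT JOB INSERT OVERWRITE oss_table
SELECT col1, col2, col3 FROM source_table;
```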

Limitations

  • Only CSV and Parquet formats are supported.

  • All columns must be written to the external table. Selecting a subset of columns is not supported.

  • Row-level inserts (INSERT INTO ... VALUES and REPLACE INTO ... VALUES) are not supported for external tables.

  • Single-object export (via the output_filename hint) is only available for CSV and requires version 3.1.2 or later.

  • Single-object export is not supported for partitioned external tables.
