This page covers common import and export issues in AnalyticDB for MySQL, organized by data source. Unless noted otherwise, each issue applies to Data Warehouse Edition clusters.
Before investigating a specific error, check which import tasks are currently running:
SELECT * FROM INFORMATION_SCHEMA.kepler_meta_async_jobs WHERE status = 'RUNNING';
You can also view and cancel tasks in the AnalyticDB for MySQL console under Diagnostics and Optimization > Import/Export Jobs.
Quick reference
| Error or symptom | Section |
|---|---|
| cant submit job for job queue is full | MaxCompute — cant submit job for job queue is full |
| Query Exceeded Maximum Time Limit | MaxCompute — Query Exceeded Maximum Time Limit |
| Roll back this write and commit by writing one row at a time | MaxCompute — Roll back this write and commit by writing one row at a time |
| odps partition num: 191 > 192 | MaxCompute — Partition limit exceeded |
| ODPS Table XXX should be a partitioned table | MaxCompute — MAX_PT function error |
| ErrorCode=NoSuchPartition | MaxCompute — NoSuchPartition error |
| Receive error response with code 500 | MaxCompute — Error response 500 |
| ErrorCode=SlotExceeded | MaxCompute — SlotExceeded error |
| ODPS-0110044: Flow control triggered | MaxCompute — Flow control triggered |
| endpoint unreachable | Endpoint unreachable |
| endpoint should not contain special character | Special character in endpoint |
| Project not found | Project not found |
| DTS: 'id' is LONG type, Can't change column type | Data type change errors |
| DTS: Cannot parse "2013-05-00 00:00:00" | Invalid date values |
| DTS: DTS-077004: table not exist | Tables without primary keys |
| DTS: default value is too long | Excessively long default values |
| DTS: PacketTooBigException | Records larger than 16 MB |
| DTS: disk is over flow | Insufficient disk space |
| DTS: table not exist => t1 | Missing tables or fields |
| DTS: No value present | No value in merged fields |
| Import performance poor (JDBC) | JDBC import performance |
| Write TPS low (DataWorks) | Low write TPS |
| Data skew (DataWorks) | Data skew |
Asynchronous tasks
Stop an asynchronous import or export task
Find the task on the Import/Export Jobs tab of the Diagnostics and Optimization page. Note the Asynchronous job name, then run:
CANCEL JOB "${Asynchronous task name}";
For more information, see Submit asynchronous import tasks.
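As a hedged end-to-end example (the job name is a placeholder for the value returned by the first query):
-- List running asynchronous jobs to find the job name.
SELECT * FROM INFORMATION_SCHEMA.kepler_meta_async_jobs WHERE status = 'RUNNING';
-- Cancel the job using the name shown in the result set.
CANCEL JOB "2024061500000000_import_job_0001";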
JDBC import
JDBC import performance
Poor JDBC import performance usually comes from a bottleneck in data production, not the cluster itself. Check the following:
Source throughput: Verify that the upstream system or file produces data fast enough. If not, the database is waiting on the client.
Production/consumption balance: Confirm that data production and consumption are in sync so there is always data in the pipeline ready for import.
Client resources: Check CPU utilization and disk I/O on the client host. If resources are saturated, the client is the bottleneck.
OSS external tables
Internal vs. public endpoint
Always use the internal endpoint when creating OSS external tables. The backend nodes of AnalyticDB for MySQL access Object Storage Service (OSS) through the internal network, not the public internet. Using a public endpoint increases latency and may cause failures.
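For illustration, a hedged sketch of an OSS external table definition that uses the internal endpoint; the bucket, path, region, credentials, and columns are placeholders:
-- Note the "-internal" suffix: oss-cn-hangzhou-internal.aliyuncs.com rather than
-- the public oss-cn-hangzhou.aliyuncs.com.
CREATE TABLE oss_orders_ext (
  order_id BIGINT,
  amount DOUBLE
) ENGINE = 'OSS'
TABLE_PROPERTIES='{
  "endpoint": "oss-cn-hangzhou-internal.aliyuncs.com",
  "url": "oss://my-bucket/orders/",
  "accessid": "<AccessKey ID>",
  "accesskey": "<AccessKey Secret>",
  "delimiter": ","
}';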
Why does the error `ErrorCode=ServiceUnavailable, ErrorMessage=Service is temporarily unavailable, Please try again later` occur when I import data by creating a MaxCompute external table?
Cause: An error occurred on the MaxCompute service.
Solution: Submit a ticket to contact technical support for assistance.
Why does the error `ErrorCode=Local Error, ErrorMessage=connect timed out` occur when I import data by creating a MaxCompute external table?
Cause: An error occurred on the MaxCompute service.
Solution: Submit a ticket to contact technical support for assistance.
MaxCompute external tables
Endpoint unreachable
The endpoint specified in your CREATE TABLE statement is inaccessible. To fix this:
Enable the Elastic Network Interface (ENI) network on your cluster: go to Cluster Management > Cluster Information > Network Information and turn on the ENI switch.
Important: Enabling or disabling ENI interrupts all database connections for approximately 2 minutes. Plan for downtime before making this change.
Replace the endpoint in your CREATE TABLE statement with the VPC endpoint for your region, then run the CREATE TABLE statement again. A hedged example follows.
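As an illustration, a MaxCompute external table definition that uses a VPC endpoint; the region, project, table, credentials, and columns are placeholders:
CREATE TABLE odps_orders_ext (
  order_id BIGINT,
  amount DOUBLE,
  ds VARCHAR(32)
) ENGINE = 'ODPS'
TABLE_PROPERTIES='{
  "endpoint": "http://service.cn-hangzhou-vpc.maxcompute.aliyun-inc.com/api",
  "project_name": "my_project",
  "table_name": "orders",
  "accessid": "<AccessKey ID>",
  "accesskey": "<AccessKey Secret>",
  "partition_column": "ds"
}';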
Special character in endpoint
The endpoint in your CREATE TABLE statement is invalid. Replace it with the VPC endpoint for your region and rerun the statement.
Project not found
Two possible causes:
Typo or missing project: The project name in your CREATE TABLE statement does not exist in MaxCompute. Correct the project name and recreate the external table.
Region mismatch: The MaxCompute project exists but is in a different region from your AnalyticDB for MySQL cluster. Ensure that the AnalyticDB for MySQL cluster and the MaxCompute project are in the same region, then recreate the external table.
MaxCompute import errors
Roll back this write and commit by writing one row at a time
This error appears when using DataX to import data. It is caused by a limitation in the AnalyticDB for MySQL connection layer. Add rewriteBatchedStatements=false to your JDBC connection string and retry the import.
Query Exceeded Maximum Time Limit
The MaxCompute table is large, and the import exceeds the INSERT operation time limit. Increase the INSERT_SELECT_TIMEOUT parameter, then retry. For details, see Config and Hint configuration parameters.
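A hedged example, assuming the hint and SET adb_config forms used elsewhere on this page and a timeout value in milliseconds; target_table and odps_ext_table are placeholders:
-- Query-level hint, applies only to this statement (value shown is 10 hours).
/*INSERT_SELECT_TIMEOUT=36000000*/ INSERT INTO target_table SELECT * FROM odps_ext_table;
-- Cluster-level setting.
SET adb_config INSERT_SELECT_TIMEOUT = 36000000;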
cant submit job for job queue is full
The cluster has reached its limit for concurrent asynchronous tasks. You can submit a ticket to ask technical support for the current limit of your cluster. You then have two options:
Wait for running tasks to finish, then resubmit. To check which tasks are still running:
SELECT * FROM INFORMATION_SCHEMA.kepler_meta_async_jobs WHERE status = 'RUNNING';
Submit a ticket to increase the concurrent task limit for your cluster.
Partition limit exceeded
This error (odps partition num: 191 > 192) means the number of partitions being imported at once exceeds the cluster's limit. Add the following hint before your SQL statement, adjusting <value> to match your actual partition count:
/*RC_INSERT_ODPS_MAX_PARTITION_NUM=<value>*/ INSERT INTO ...
MAX_PT function error
ODPS Table XXX should be a partitioned table and has at least one partition in max_pt() function
The MAX_PT function requires the MaxCompute table to be a partitioned table with at least one partition. Add a partition to the MaxCompute table, recreate the external table, then run your query.
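For example, on the MaxCompute side you could add a partition along these lines; the project, table, partition column, and value are placeholders:
ALTER TABLE my_project.orders ADD IF NOT EXISTS PARTITION (ds = '20240101');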
NoSuchPartition error
ErrorCode=NoSuchPartition, ErrorMessage=The specified partition does not exist
The MaxCompute external table has no partitions. Add at least one partition, recreate the external table, then query again.
Error response 500
Receive error response with code 500
AnalyticDB for MySQL is running the query with the Native engine, which does not support MaxCompute external tables. Add the following hint to force the Java engine:
/*native_engine_task_enabled=false*/ SELECT ...
Query execution error
Four possible causes:
Permission issue: The AccessKey does not have read/write access to the MaxCompute table. Update the MaxCompute permissions and retry.
Schema mismatch: The table structure or column names in AnalyticDB for MySQL do not match MaxCompute. Recreate the external table with a schema that matches the MaxCompute table.
Missing partition: The partition referenced in your query does not exist in MaxCompute. Update the partition value in the query and retry.
Too many small files: Run the merge operation in MaxCompute, then query again:
ALTER TABLE tablename [PARTITION] MERGE SMALLFILES;
For more information, see Merge small files.
Importing array-type data
MaxCompute external tables do not support nested types such as array<string>. To work around this, export the data from MaxCompute to OSS in Parquet format, then read it from OSS into AnalyticDB for MySQL.
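On the AnalyticDB for MySQL side, a hedged sketch of the OSS external table that reads the exported Parquet files; the bucket, path, region, credentials, and columns are placeholders, and the column list must match the Parquet schema produced by the export:
CREATE TABLE oss_parquet_ext (
  id BIGINT,
  name VARCHAR(256)
) ENGINE = 'OSS'
TABLE_PROPERTIES='{
  "endpoint": "oss-cn-hangzhou-internal.aliyuncs.com",
  "url": "oss://my-bucket/exported_parquet/",
  "accessid": "<AccessKey ID>",
  "accesskey": "<AccessKey Secret>",
  "format": "parquet"
}';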
Optimize MaxCompute import speed
If data node load is low, increase the batch output size:
SET adb_config SQL_OUTPUT_BATCH_SIZE = 6000;
If the MaxCompute table has many partitions, disable multi-partition matching:
SET adb_config ENABLE_ODPS_MULTI_PARTITION_PART_MATCH = false;
If neither resolves the issue, submit a ticket for assistance.
MaxCompute export errors
INSERT OVERWRITE does not overwrite data
MaxCompute external tables do not support INSERT OVERWRITE. Data is not overwritten when using this statement.
SlotExceeded error
ErrorCode=SlotExceeded, ErrorMessage=Region: cn-hangzhou Project: XXX Slot Quota Exceeded
The write slot count exceeds MaxCompute's quota. Choose one of these approaches:
Use an exclusive resource group for data transmission. Exclusive groups have higher slot quotas than the public group.
Reduce concurrency by adding a hint before your statement. The default concurrency for INSERT INTO SELECT tasks is 16:
/*TASK_WRITER_COUNT=<value>*/ INSERT INTO SELECT ...
<value> must be a positive integer.
Flow control triggered
ODPS-0110044: Flow control triggered - Request rejected by catalog server throttling, threshold 8.00
This error occurs when INSERT INTO SELECT writes to multiple new partitions simultaneously. Each new partition triggers a createPartition call on the MaxCompute side, and high-frequency calls trigger flow control. Reduce concurrency with:
/*TASK_WRITER_COUNT=<value>*/ INSERT INTO SELECT ...
If you are not writing to multiple partitions simultaneously, submit a ticket for assistance.
Imported row count does not match MaxCompute
AnalyticDB for MySQL deduplicates rows with the same primary key on import. Check the MaxCompute source table for duplicate primary key values.
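To check, you can run a query along these lines on the MaxCompute source table; the table and key column names are placeholders:
-- Any key returned here is collapsed into a single row on import.
SELECT pk_col, COUNT(*) AS cnt
FROM my_project.source_table
GROUP BY pk_col
HAVING COUNT(*) > 1;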
DTS synchronization
Data Transmission Service (DTS) synchronization issues are listed below.
Unsupported data types
If the source database has columns with data types that AnalyticDB for MySQL does not support (for example, geographic location types), those columns are silently dropped during initial schema synchronization. See Basic data types and Complex data types for supported types.
Supported field type changes during synchronization
You can change field types in the source table during synchronization, but only within the same type family and only from a smaller range to a larger range:
Integer types: Tinyint → Smallint → Int → Bigint (ascending only)
Floating-point types: Float → Double (not Double → Float)
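For example, using standard MySQL DDL on the source table (table and column names are placeholders):
-- Supported: widening within the integer family (INT to BIGINT).
ALTER TABLE orders MODIFY COLUMN quantity BIGINT;
-- Not supported: narrowing (DOUBLE to FLOAT) or crossing type families
-- (for example BIGINT to DECIMAL), which leads to the errors described below.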
Data type change errors
'id' is LONG type, Can't change column type to DECIMAL
modify precision is not supported, col=id, type decimal, old=11, new=21
These errors occur when DTS encounters a type change that is not supported (see Supported field type changes during synchronization above).
Partial database synchronization: Remove the table from the sync objects, delete it in the destination database, then add it back. DTS performs a full re-sync and skips the unsupported DDL.
Full database synchronization: Create a new table in AnalyticDB for MySQL with a different name but the same schema as the source table. Use INSERT INTO SELECT to copy data from the source table to the new table. Delete the errored table, then use RENAME to rename the new table to the original name. Restart the DTS task. A sketch follows this list.
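A minimal sketch of the full-database workaround, assuming AnalyticDB for MySQL 3.0 DDL (including the DISTRIBUTED BY clause) and placeholder table, column, and distribution-key names:
-- 1. Create a new table whose schema matches the updated source table.
CREATE TABLE orders_new (
  id BIGINT NOT NULL,
  amount DECIMAL(21, 0),
  PRIMARY KEY (id)
) DISTRIBUTED BY HASH(id);
-- 2. Copy the existing data.
INSERT INTO orders_new SELECT * FROM orders;
-- 3. Drop the errored table and restore the original name.
DROP TABLE orders;
RENAME TABLE orders_new TO orders;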
Invalid date values
Cannot parse "2013-05-00 00:00:00": Value 0 for dayOfMonth must be in the range [1,31]
AnalyticDB for MySQL does not accept invalid date values.
Full initialization phase: Fix the invalid value in the source table (for example, change 2013-05-00 to 2013-05-01).
Incremental synchronization phase: Remove the table from sync objects, fix the value in the source, add the table back, and restart the sync task.
Incremental synchronization (full database): Submit a ticket to enable the invalid value writing switch. Once enabled, invalid date values are written as null.
Tables without primary keys
DTS-077004: Record Replicator error. cause by [[17003,2020051513063717201600100703453067067] table not exist => xxx_table]
AnalyticDB for MySQL does not support syncing tables without primary keys. For full database synchronization, identify the source table that lacks a primary key, manually create it in the destination database with a primary key, then restart the DTS task.
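A hedged sketch of creating the destination table manually, assuming AnalyticDB for MySQL 3.0 DDL; the table, columns, and the column chosen as the primary key are placeholders, and the chosen key must uniquely identify rows in the source data:
CREATE TABLE xxx_table (
  id BIGINT NOT NULL,
  gmt_created DATETIME,
  payload VARCHAR(1024),
  PRIMARY KEY (id)
) DISTRIBUTED BY HASH(id);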
Excessively long default values
default value is too long
Submit a ticket to upgrade your AnalyticDB for MySQL cluster to the latest version.
Records larger than 16 MB
com.mysql.jdbc.PacketTooBigException: Packet for query is too large (120468711 > 33554432).
Submit a ticket to upgrade your AnalyticDB for MySQL cluster to the latest version.
Insufficient disk space
disk is over flow
Free up disk space by deleting unused data, or contact Alibaba Cloud technical support to scale out your AnalyticDB for MySQL cluster. After resolving the disk issue, restart the DTS task.
Missing tables or fields
table not exist => t1
When configuring the DTS task, make sure DDL synchronization (such as CREATE TABLE statements) is enabled. If it was not selected, update the task configuration to include all DDL synchronization options.
No value in merged fields
No value present
In a multi-table merge scenario, a field was added to one source table, making the source schemas inconsistent. Submit a ticket to contact DTS support.
Hyphens in database, table, or column names
AnalyticDB for MySQL does not allow hyphens (-) in database, table, or column names. DTS automatically maps hyphens to underscores (_) during synchronization. If synchronization fails due to other unsupported characters (such as spaces or non-ASCII characters in names), contact Alibaba Cloud technical support.
For the full list of AnalyticDB for MySQL naming restrictions, see Limits.
Data latency
If DTS replication falls behind:
Upgrade the sync link: The default DTS synchronization link is medium mode. If source write volume is high, upgrade the DTS instance configuration to get higher throughput.
Hot row contention: Tables without primary keys may experience hot row updates, which are slow. Submit a ticket to AnalyticDB for MySQL support to address this.
Cluster bottleneck: If the AnalyticDB for MySQL cluster is the bottleneck, upgrade the cluster specification.
How do I resolve errors caused by DDL synchronization failures when DTS synchronizes data to an AnalyticDB MySQL cluster?
Cause: During incremental synchronization, a DDL operation was performed on a source table. AnalyticDB for MySQL does not support certain DDL statements, so the statement is ignored, and subsequent data synchronization for that table fails.
Solution: Use one of the following approaches:
Create a new data synchronization task.
Modify the original data synchronization task: remove the source table from the synchronization objects and restart the task, rename the table in the destination AnalyticDB for MySQL cluster, then add the source table back to the synchronization objects. The task automatically synchronizes the full and incremental data of the table.
DataWorks import
Low write TPS
If cluster CPU, disk I/O, and write response time are all low during DataWorks import, the client is not sending enough data. Increase batch insert count per import and raise Expected Maximum Concurrency Of The Task. Import performance scales linearly with import pressure.
Data skew
When data skew occurs, some cluster nodes are overloaded while overall CPU and disk I/O remain low — but write response time is high. Find the skewed table under Diagnostics and Optimization > Data Modeling Diagnostics, then redesign the table's distribution key. For guidance, see Schema design.
Local file import (adb-import.sh)
Check for client bottlenecks
If the client is the bottleneck, it cannot put enough pressure on the cluster to measure its true throughput. Check the client host with:
| Command | What it checks |
|---|---|
top | CPU utilization |
free | Memory usage |
vmstat 1 1000 | Comprehensive system load |
dstat -all --disk-util or iostat 1 1000 | Disk read bandwidth and utilization |
jstat -gc <pid> 1000 | JVM Garbage Collection (GC) frequency; if GC is frequent, increase the heap in jvmopts, for example -Xmx16G -Xms16G |
You can also check the Monitoring Information and Diagnostics and Optimization pages in the AnalyticDB for MySQL console.
Parameterize the import script
To import multiple tables with a single script, replace hardcoded tableName and dataPath values with positional parameters:
tableName=$1
dataPath=$2
Then call the script for each table:
sh adb-import.sh table_name001 /path/table_001
sh adb-import.sh table_name002 /path/table_002
sh adb-import.sh table_name003 /path/table_003
Run the import in the background
nohup sh adb-import.sh &
Check the log:
tail -f nohup.out
Verify the process is still running:
ps -ef | grep import
If the log contains an exception stack trace, the import has an error. Investigate based on the exception details.
Ignore error rows
Set ignoreErrors=true to skip batches that fail due to SQL execution errors. The tool prints the error details, starting line number, and the failed SQL for review.
This setting only ignores SQL execution errors. If a row has the wrong number of columns, the import stops immediately — those errors cannot be ignored. Fix the file and reimport.
Narrow down import failures
Use these settings to isolate error rows:
Print the full failing SQL: By default, SQL is truncated at 1,000 characters. Increase the limit:
printErrorSql=true
failureSqlPrintLengthLimit=1500
Reduce batch size: Smaller batches make it easier to pinpoint which row failed:
batchSize=10
Target a single file: If you know which file slice contains the error, import only that file:
dataPath=/path/to/specific/file007
Run the import on Windows
Windows does not provide batch processing scripts for .sh files. Run the JAR directly instead:
java -jar adb-import-tool.jar [-a <arg>] [-b <arg>] [-B <arg>] [-c <arg>]
[-D <arg>] [-d <arg>] [-E <arg>] [-f <arg>] [-h <arg>] [-I <arg>]
[-k <arg>] [-l <arg>] [-m <arg>] [-n <arg>] [-N <arg>] [-O <arg>]
[-o <arg>] [-p <arg>] [-P <arg>] [-Q <arg>] [-s <arg>] [-S <arg>]
[-t <arg>] [-T <arg>] [-u <arg>] [-w <arg>] [-x <arg>] [-y <arg>] [-z <arg>]
Parameters
| Parameter | Required | Description |
|---|---|---|
-h, --ip <arg> | Required | Connection address of the AnalyticDB for MySQL cluster |
-u, --username <arg> | Required | Database account |
-p, --password <arg> | Required | Password for the database account |
-P, --port <arg> | Required | Port number |
-D, --databaseName <arg> | Required | Database name |
-f, --dataFile <arg> | Required | Absolute path of the file or folder to import. Supports a single file, a single folder, or multiple files (comma-separated paths) |
-t, --tableName <arg> | Required | Target table name |
-a, --createEmptyFinishFilePath <arg> | Optional | Name of a flag file to create when the import completes. Default: empty (no flag file created) |
-b, --batchSize <arg> | Optional | Number of VALUES per INSERT INTO tablename VALUES (...),(...) batch. Default: 1. For best throughput, set between 1024 and 4096 |
-B, --encryptPassword <arg> | Optional | Encrypt the database password. Default: false |
-c, --printRowCount <arg> | Optional | Print the row count of the target table after each file is imported. Default: false |
-d, --skipHeader <arg> | Optional | Skip the header row. Default: false |
-E, --escapeSlashAndSingleQuote <arg> | Optional | Escape \ and ' in column values. Default: true. Set to false if no escape characters exist in the files (improves performance) |
-I, --ignoreErrors <arg> | Optional | Ignore failed batches on SQL execution errors. Default: false |
-k, --skipLineNum <arg> | Optional | Number of rows to skip at the start of each file. Default: 0 |
-l, --delimiter <arg> | Optional | Column delimiter. Default: \|. To use invisible characters, specify them in hexadecimal, for example hex0707 for \x07\x07 |
-m, --maxConcurrentNumOfFilesToImport <arg> | Optional | Number of files to read in parallel when dataFile is a folder. Default: Integer.MAX_VALUE (all files) |
-n, --nullAsQuotes <arg> | Optional | Treat || in files as '' instead of null. Default: false |
-N, --printErrorSql <arg> | Optional | Print the SQL that caused an error. Default: true |
-O, --connectionPoolSize <arg> | Optional | Database connection pool size. Default: 2 |
-o, --encoding <arg> | Optional | File encoding. Valid values: GBK, UTF-8. Default: UTF-8 |
-Q, --disableInsertOnlyPrintSql <arg> | Optional | Print INSERT SQL without executing it. Default: false |
-s, --lineSeparator <arg> | Optional | Row delimiter. Default: \\n. To use invisible characters, specify them in hexadecimal |
-S, --printErrorStackTrace <arg> | Optional | Print the stack trace when an error occurs (requires printErrorSql=true). Default: false |
-w, --windowSize <arg> | Optional | Buffer size for INSERT SQL, used for pipeline acceleration. Default: 128 |
-x, --insertWithColumnNames <arg> | Optional | Include column names in INSERT INTO statements. Default: true |
-y, --failureSqlPrintLengthLimit <arg> | Optional | Truncation length for error SQL output. Default: 1000 |
-z, --connectionUrlParam <arg> | Optional | Additional JDBC connection parameters. Default: ?characterEncoding=utf-8 |
Examples
Import a single file with default settings:
java -Xmx8G -Xms8G -jar adb-import-tool.jar \
-h yourhost.ads.aliyuncs.com \
-u adbuser -p password \
-P 3306 -D test \
--dataFile /data/lineitem.sample \
--tableName LINEITEM
Import all files in a folder with maximum throughput:
java -Xmx16G -Xms16G -jar adb-import-tool.jar \
-h yourhost.ads.aliyuncs.com \
-u adbuser -p password \
-P 3306 -D test \
--dataFile /data/tpch100g \
--tableName LINEITEM \
--concurrency 64 \
--batchSize 2048
APS (Data Lakehouse Edition)
Are there separate charges for the APS ingest endpoint?
No. The ingest endpoint itself has no additional charge. APS tasks run on your cluster's resource groups, which consume resources and incur the standard resource costs.