AnalyticDB for MySQL supports asynchronously submitting data import jobs.
Scenarios
When importing data using INSERT INTO or INSERT OVERWRITE SELECT, the process executes synchronously by default. If the data volume reaches hundreds of GB, the connection from the client to the AnalyticDB for MySQL server might break, which can cause data import to fail. Therefore, for large data volumes, you should submit data import jobs asynchronously.
Asynchronously Submit Jobs
Syntax
Basic syntax
SUBMIT JOB INSERT OVERWRITE table_name select_statement;
Enhanced syntax
You can specify an alias when submitting a job.
SUBMIT JOB <job_alias_name> AS <query_body>;
Uniqueness: The alias must be unique.
Format: String type. Do not enclose it in single or double quotation marks.
Character set: Only letters, digits, and underscores are supported.
Length: Maximum 128 characters.
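The alias constraints above can be checked on the client side before a job is submitted. A minimal sketch in Python; the function name `is_valid_job_alias` is our own illustration, not part of AnalyticDB for MySQL:

```python
import re

# Alias rules from the documentation: letters, digits, and underscores
# only; at most 128 characters; no surrounding quotation marks.
_ALIAS_PATTERN = re.compile(r"^[A-Za-z0-9_]{1,128}$")

def is_valid_job_alias(alias: str) -> bool:
    """Return True if the alias satisfies the documented constraints."""
    return _ALIAS_PATTERN.fullmatch(alias) is not None
```

Validating before submission avoids a round trip to the server for an alias that would be rejected anyway.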
Examples
Submit an asynchronous job
SUBMIT JOB INSERT OVERWRITE test SELECT * FROM test_external_table;
After you execute the statement, a job ID is returned. You can use this job ID to query the status of the asynchronous job.
+---------------------------------------+
| job_id                                |
+---------------------------------------+
| 2017112122202917203100908203303000715 |
+---------------------------------------+
Submit an asynchronous job and specify an alias
Correct example
SUBMIT JOB test_123 AS INSERT INTO test SELECT * FROM test_external_table;
Incorrect examples
Do not enclose the job alias in single or double quotation marks.
SUBMIT JOB 'test_123' AS INSERT INTO test SELECT * FROM test_external_table;
SUBMIT JOB "test_123" AS INSERT INTO test SELECT * FROM test_external_table;
Configure Priority-Based Scheduling
Clusters of version 3.1.3.6 and later support priority-based scheduling. You can add the /*+async_job_priority=n*/ hint before the SQL statement to specify the priority of an asynchronous job. The default priority is 1. A larger value indicates a higher priority, and higher-priority jobs are scheduled first.
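Prepending the hint can be wrapped in a small helper so the priority is always formatted consistently. A sketch; the helper name `with_priority` is our own, not an AnalyticDB API:

```python
def with_priority(sql: str, priority: int = 1) -> str:
    """Prefix an asynchronous-job statement with the
    /*+async_job_priority=n*/ scheduling hint.

    The default priority of 1 matches the documented default; larger
    values are scheduled first.
    """
    if priority < 1:
        raise ValueError("priority must be a positive integer")
    return f"/*+async_job_priority={priority}*/ {sql}"
```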
/*+async_job_priority=10*/ SUBMIT JOB INSERT OVERWRITE test SELECT * FROM test_external_table;
Query Asynchronous Job Status
Syntax
Query using a job ID
SHOW JOB STATUS WHERE job_id='job_id';
Query using a job alias
SHOW JOB STATUS WHERE job_name='job_alias_name';
Examples
SHOW JOB STATUS WHERE job_id='2017112122202917203100908203303000715';
The following information is returned:
+-------------------------------------------+-------------+-----------+--------------+-----------+----------+-------------------------+-------------------------+---------------------------------------------------+-------------------------------------------+------------+----------+------------+-------------+------------+------------------+------------------------+
| job_id | schema_name | status | process_rows | fail_rows | fail_msg | create_time | update_time | definition | exec_job_id | total_rows | progress | start_time | result_rows | result_url | query_priority | user_define_job_name |
+-------------------------------------------+-------------+-----------+--------------+-----------+----------+-------------------------+-------------------------+---------------------------------------------------+-------------------------------------------+------------+----------+------------+-------------+------------+------------------+------------------------+
| 2026022710213902101709304503151000743 | test_db | SUCCEEDED | 4 | 0 | NULL | 2026-02-27 10:21:39.0 | 2026-02-27 10:21:45.0 | INSERT INTO test SELECT * FROM test_external_table| 2026022710214402101706516803151005717 | 0 | NULL | NULL | 0 | NULL | NORMAL | test_123 |
+-------------------------------------------+-------------+-----------+--------------+-----------+----------+-------------------------+-------------------------+---------------------------------------------------+-------------------------------------------+------------+----------+------------+-------------+------------+------------------+------------------------+
The fields in the returned information vary based on the minor version of the cluster. You can determine the status of the asynchronous job from the status field in the returned information.
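A client can poll SHOW JOB STATUS until the status field reaches a terminal value. A hedged sketch; `run_query` is an assumed callable around a database cursor that returns rows as dicts, not an AnalyticDB API, and the retry interval is our own choice:

```python
import time

# Terminal statuses across both editions (see the mapping table below).
TERMINAL_STATES = {"FINISH", "SUCCEEDED", "FAILED", "CANCELLED"}

def wait_for_job(run_query, job_id, interval_s=5.0, timeout_s=3600.0):
    """Poll SHOW JOB STATUS until the job reaches a terminal state.

    run_query(sql) must return a list of dict rows (e.g. built from a
    DB-API cursor); it is an assumption for illustration.
    """
    deadline = time.monotonic() + timeout_s
    while True:
        rows = run_query(f"SHOW JOB STATUS WHERE job_id='{job_id}'")
        if rows and rows[0]["status"] in TERMINAL_STATES:
            return rows[0]
        if time.monotonic() >= deadline:
            raise TimeoutError(f"job {job_id} did not finish in {timeout_s}s")
        time.sleep(interval_s)
```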
Job Statuses and Mappings
Status in Data Warehouse Edition | Status in Data Lakehouse Edition | Description |
INIT | SUBMITTING | The asynchronous job is submitted. |
SUBMITTED or RUNNING | RUNNING | The asynchronous job is running. |
FINISH | SUCCEEDED | The asynchronous job succeeded. |
FAILED | FAILED | The asynchronous job failed. |
None | CANCELLED | The asynchronous job is canceled. |
CANCELLED is a new asynchronous job status and has no counterpart in Data Warehouse Edition (V3.0).
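Code that must handle clusters of both editions can normalize statuses through a lookup built from the mapping table above. A sketch; the dict and function names are our own illustration:

```python
# Data Warehouse Edition status -> Data Lakehouse Edition status,
# per the mapping table above. CANCELLED exists only in Lakehouse.
WAREHOUSE_TO_LAKEHOUSE = {
    "INIT": "SUBMITTING",
    "SUBMITTED": "RUNNING",
    "RUNNING": "RUNNING",
    "FINISH": "SUCCEEDED",
    "FAILED": "FAILED",
}

def normalize_status(status: str) -> str:
    """Map a Data Warehouse Edition status to its Lakehouse equivalent.

    Statuses with no mapping (e.g. CANCELLED) pass through unchanged.
    """
    return WAREHOUSE_TO_LAKEHOUSE.get(status, status)
```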
Terminate Jobs
Description
Unscheduled and completed (failed or successful) jobs are removed from the queue.
If a running job is terminated, the imported data may be rolled back. You must manually delete any residual data.
You cannot terminate a job directly by its alias (user_define_job_name). You must first query the job ID by the alias, and then terminate the job by the job ID.
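The two-step cancellation can be sketched as follows; `run_query` and `run_statement` are assumed helpers around a database connection (the first returns rows as dicts, the second executes a statement), not AnalyticDB APIs:

```python
def cancel_job_by_alias(run_query, run_statement, alias):
    """Look up the job ID by its alias, then cancel by job ID.

    CANCEL JOB accepts only a job ID, so the alias must first be
    resolved via SHOW JOB STATUS ... job_name.
    """
    rows = run_query(f"SHOW JOB STATUS WHERE job_name='{alias}'")
    if not rows:
        raise LookupError(f"no asynchronous job named {alias!r}")
    job_id = rows[0]["job_id"]
    run_statement(f"CANCEL JOB '{job_id}'")
    return job_id
```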
Syntax
CANCEL JOB 'job_id';
Examples
CANCEL JOB '2017112122202917203100908203303000715';