This topic describes common issues and solutions for importing and exporting data in AnalyticDB MySQL.
Unless a product series is specified, the issues in this topic apply only to AnalyticDB for MySQL Data Warehouse Edition.
Overview of common issues
How do I use SQL to view running import tasks in the current database?
Does the data ingestion feature (APS) of a Data Lakehouse Edition cluster incur separate link fees?
How do I resolve the endpoint unreachable error when creating a MaxCompute external table?
How do I resolve the Project not found - 'xxx' error when creating a MaxCompute external table?
How do I resolve the Query Exceeded Maximum Time Limit error when importing MaxCompute data?
How do I resolve the cant submit job for job queue is full error when importing MaxCompute data?
How do I resolve the Receive error response with code 500 error when querying MaxCompute data?
How do I resolve the Query execution error when querying MaxCompute data?
How do I import array-type data from MaxCompute to an AnalyticDB MySQL cluster?
How do I parameterize the import script when importing local data using adb-import.sh?
How do I run the import program in the background when importing local data using adb-import.sh?
How do I ignore error rows in the import program when importing local data using adb-import.sh?
How do I stop an asynchronous import or export task?
Log on to the AnalyticDB for MySQL console. On the Import/Export Jobs tab of the Diagnostics and Optimization page, find the target asynchronous task and note its Asynchronous Job Name. Then run the CANCEL JOB "${Asynchronous job name}" statement to cancel the task. For more information about asynchronous import and export tasks, see Submit asynchronous import tasks.
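For example, a minimal sketch that assumes a hypothetical asynchronous job name copied from the console:
CANCEL JOB "2023050812345678901234567890123456";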
How do I use SQL to view running import tasks in the current database?
Use the following SQL statement to query:
SELECT * FROM INFORMATION_SCHEMA.kepler_meta_async_jobs WHERE status = "RUNNING";
Why is the import performance poor when using JDBC to import data to an AnalyticDB for MySQL cluster?
Ensure that the data production speed of the data source is fast enough. If the data source is from another system or file, check whether the client has output bottlenecks.
Ensure the data processing speed. Check whether data production and consumption are synchronized to ensure that there is enough data waiting to be imported into AnalyticDB for MySQL.
Check the CPU utilization and disk I/O usage of the client host to determine whether system resources are sufficient, while keeping the other workloads on the host at an appropriate level.
Does the data ingestion feature (APS) of a Data Lakehouse Edition cluster incur separate link fees?
No, link fees are not charged. However, APS tasks run on resource groups of the cluster, which consumes resources and incurs resource fees.
Should I choose an internal network address or an Internet address when importing or exporting data through OSS external tables?
You should select an internal network address when creating OSS external tables because the backend nodes of AnalyticDB for MySQL access OSS through the internal network rather than the Internet.
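The following is a simplified sketch of an OSS external table definition that uses an internal endpoint. The bucket, path, credentials, columns, and delimiter are placeholders, and the exact TABLE_PROPERTIES fields may vary with your cluster version:
CREATE TABLE oss_import_test_external_table (
    id INT,
    name VARCHAR(1023)
) ENGINE='OSS'
TABLE_PROPERTIES='{
    "endpoint":"oss-cn-hangzhou-internal.aliyuncs.com",
    "url":"oss://<bucket-name>/<path>/data.csv",
    "accessid":"<your-AccessKey-ID>",
    "accesskey":"<your-AccessKey-secret>",
    "delimiter":","
}';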
How do I resolve the endpoint unreachable error when creating a MaxCompute external table?
Cause: When creating a MaxCompute external table, the specified endpoint is inaccessible, resulting in the endpoint unreachable error.
Solution: Enable the ENI network, then replace the endpoint in the table creation statement with the VPC network Endpoint corresponding to the region where the instance is located, and execute the table creation statement again.
Enabling or disabling the ENI network will cause database connections to be interrupted for about 2 minutes, during which reading and writing are not possible. Please carefully evaluate the impact before enabling or disabling the ENI network.
Log on to the AnalyticDB for MySQL console, find the Network Information section, and turn on the ENI switch.
How do I resolve the Odps external table endpoint should not contain special character error when creating a MaxCompute external table?
Cause: When creating a MaxCompute external table, the endpoint configuration is incorrect.
Solution: Replace the endpoint in the table creation statement with the VPC network Endpoint corresponding to the region where the instance is located, and execute the table creation statement again.
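For reference, a simplified sketch of a MaxCompute external table that uses the VPC endpoint of the China (Hangzhou) region. The project, table, credentials, and columns are placeholders; use the VPC endpoint of your own region, and note that the exact properties may vary with your cluster version:
CREATE TABLE odps_import_test_external_table (
    id INT,
    name VARCHAR(1023)
) ENGINE='ODPS'
TABLE_PROPERTIES='{
    "endpoint":"http://service.cn-hangzhou-vpc.maxcompute.aliyun-inc.com/api",
    "accessid":"<your-AccessKey-ID>",
    "accesskey":"<your-AccessKey-secret>",
    "project_name":"<your-MaxCompute-project>",
    "table_name":"<your-MaxCompute-table>"
}';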
How do I resolve the Project not found - 'xxx' error when creating a MaxCompute external table?
Cause 1: The project does not exist in MaxCompute, or the project name is misspelled.
Solution: Modify the project name in the table creation statement and create the external table again.
Cause 2: The project exists in MaxCompute but is not in the same region as the AnalyticDB for MySQL cluster.
Solution: Ensure that the AnalyticDB for MySQL cluster and the MaxCompute project are in the same region, and then create the external table again.
How do I resolve the "Roll back this write and commit by writing one row at a time" error when importing MaxCompute data?
Cause: Due to limitations in the AnalyticDB MySQL connection layer, this error may occur when using DataX to import data.
Solution: Modify the JDBC connection string by adding the rewriteBatchedStatements=false parameter and then import the data again.
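For example, a DataX JDBC URL with this parameter appended might look like the following; the host, port, and database name are placeholders:
jdbc:mysql://am-bp1xxxxxxxxxxxxx.ads.aliyuncs.com:3306/adb_demo?rewriteBatchedStatements=false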
How do I resolve the Query Exceeded Maximum Time Limit error when importing MaxCompute data?
Cause: The MaxCompute table is large, so the import takes a long time and exceeds the time limit for INSERT operations in AnalyticDB MySQL.
Solution: Modify the INSERT_SELECT_TIMEOUT parameter and then import the data again. For specific operations, see Config and Hint configuration parameters.
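For example, the following sketch raises the timeout, assuming the parameter is specified in milliseconds; adjust the value to the expected import duration:
set adb_config INSERT_SELECT_TIMEOUT = 3600000;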
How do I resolve the cant submit job for job queue is full error when importing MaxCompute data?
Cause: The number of asynchronous tasks running simultaneously in the AnalyticDB for MySQL cluster exceeds the limit. You can submit a ticket to contact technical support to query the limit on the number of asynchronous tasks that can run simultaneously in the cluster.
Solution:
You need to wait for the submitted asynchronous tasks to complete before submitting the asynchronous import task again. For information about how to query the status of asynchronous tasks, see Submit asynchronous import tasks.
You can submit a ticket to contact technical support to modify the limit on the number of asynchronous tasks that can run simultaneously in the cluster.
How do I resolve the Query execution error: odps partition num: 191 > 192, specific odps partitions error when importing MaxCompute data?
Cause: The number of partitions imported simultaneously by the AnalyticDB for MySQL cluster exceeds the limit.
Solution: Based on the specific number of partitions in the MaxCompute table, add the Hint /*RC_INSERT_ODPS_MAX_PARTITION_NUM=<value>*/ before the SQL statement to adjust the partition limit.
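For example, if the MaxCompute table has more partitions than the default limit allows, you can raise the limit as follows. The table names and the value 256 are placeholders, and the statement assumes an INSERT OVERWRITE INTO import; use the import statement that matches your workflow:
/*RC_INSERT_ODPS_MAX_PARTITION_NUM=256*/ INSERT OVERWRITE INTO adb_demo_table SELECT * FROM odps_import_test_external_table;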
How do I resolve the ODPS Table XXX should be a partitioned table and has at least one partition in max_pt() function error when querying MaxCompute data?
Cause: The MaxCompute external table is a non-partitioned external table, causing the MAX_PT function to report an error.
Solution: Ensure that the MaxCompute table is a partitioned table and has at least one partition. Then recreate the MaxCompute external table and use the MAX_PT function to query the external table data.
How do I resolve the ErrorCode=NoSuchPartition, ErrorMessage=The specified partition does not exist error when querying MaxCompute data?
Cause: The MaxCompute external table has no partitions.
Solution: Ensure that the MaxCompute table is a partitioned table and has at least one partition. Then recreate the MaxCompute external table and query the external table data.
How do I resolve the Receive error response with code 500 error when querying MaxCompute data?
Cause: AnalyticDB for MySQL uses the Native engine to execute SQL statements, but the Native engine does not support MaxCompute external tables.
Solution: Add the Hint /*native_engine_task_enabled=false*/ before the SQL statement to disable the Native engine and use the Java engine to execute the SQL statement.
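For example (the external table name is a placeholder):
/*native_engine_task_enabled=false*/ SELECT * FROM odps_import_test_external_table LIMIT 10;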
How do I resolve the Query execution error when querying MaxCompute data?
Cause 1: The MaxCompute permission configuration is incorrect, and the AccessKey cannot correctly read the MaxCompute table.
Solution: Modify the read and write permissions of MaxCompute and query the data again.
Cause 2: The table structure and column names in AnalyticDB for MySQL are inconsistent with those in MaxCompute.
Solution: Recreate an external table in AnalyticDB MySQL with a table structure and column names consistent with the MaxCompute table, and then query the data again.
Cause 3: The corresponding partition in MaxCompute does not exist.
Solution: Modify the MaxCompute partition specified in the data query statement and query the data again.
Cause 4: There are too many small files in MaxCompute.
Solution: Enable the small file merging feature in MaxCompute and query the data again. The example statement is as follows. For more information about merging small files in MaxCompute, see Merge small files.
ALTER TABLE tablename [PARTITION] MERGE SMALLFILES;
How do I import array-type data from MaxCompute to an AnalyticDB MySQL cluster?
Cause: MaxCompute external tables do not support nested types, so data of the array<string> type cannot be directly imported into AnalyticDB MySQL.
Solution: You can import the data from MaxCompute to OSS in Parquet format, and then AnalyticDB for MySQL can read the data stored in Parquet format in OSS.
How do I optimize the speed of importing MaxCompute data?
If the data node load is low, you can adjust the value of SQL_OUTPUT_BATCH_SIZE and then import the data again. The example statement is as follows:
set adb_config SQL_OUTPUT_BATCH_SIZE = 6000;
If there are too many MaxCompute partitions, you can change the value of ENABLE_ODPS_MULTI_PARTITION_PART_MATCH to false and then import the data again. The example statement is as follows:
set adb_config ENABLE_ODPS_MULTI_PARTITION_PART_MATCH=false;
If the issue persists, submit a ticket to contact Alibaba Cloud technical support.
Why is the data not overwritten when using the INSERT OVERWRITE statement to export data from an AnalyticDB MySQL cluster to a MaxCompute external table?
MaxCompute external tables do not support data overwriting.
Why do I receive the ErrorCode=SlotExceeded, ErrorMessage=Region: cn-hangzhou Project: XXX Slot Quota Exceeded error when writing data to a MaxCompute external table using the INSERT INTO SELECT statement?
Cause: When writing data to a MaxCompute external table, the write slot exceeds the MaxCompute limit.
Solution: Choose one of the following two methods:
You can use an exclusive data transmission service resource group for data transmission. Compared with the public data transmission service resource group, an exclusive data transmission service resource group has a larger slot quota.
The default concurrency for INSERT INTO SELECT import tasks is 16. You can add the hint /*TASK_WRITER_COUNT=<value>*/ before the SQL statement to reduce the concurrency, as shown in the example below. The value must be an integer greater than 0.
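For example, to lower the concurrency to 8 (the table names are placeholders):
/*TASK_WRITER_COUNT=8*/ INSERT INTO odps_import_test_external_table SELECT * FROM adb_demo_table;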
Why do I receive the Catalog Service Failed. ErrorCode:202. ErrorMessage:ODPS-0110044: Flow control triggered - Request rejected by catalog server throttling, threshold 8.00, fallback or retry later error when writing data to a MaxCompute external table using the INSERT INTO SELECT statement?
Cause: When writing data to a MaxCompute external table, if new partitions are involved, createPartition will be called on the MaxCompute side to create the partition. If the partition creation frequency is too high, it will trigger the flow control mechanism of MaxCompute, resulting in an error.
Solution: Check whether the INSERT INTO SELECT import task has multiple partitions being written simultaneously. If yes, add the Hint /*TASK_WRITER_COUNT=<value>*/ before the SQL statement to reduce the concurrency. The value range is an integer greater than 0. If not, please submit a ticket to contact technical support for assistance.
Why is the amount of imported MaxCompute data inconsistent with that in the AnalyticDB MySQL cluster?
Cause: AnalyticDB MySQL removes duplicate primary key data.
Solution: Please check whether there is duplicate primary key data in MaxCompute.
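A generic sketch for checking duplicate primary key values in the MaxCompute source table; the table and column names are placeholders:
SELECT pk_col, COUNT(*) AS cnt FROM your_maxcompute_table GROUP BY pk_col HAVING COUNT(*) > 1;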
Will an error occur when DTS synchronizes data to an AnalyticDB MySQL cluster if the source database contains data types that are not supported by the AnalyticDB MySQL cluster?
If the source database contains data types that are not supported by AnalyticDB for MySQL (such as geographic location data types), AnalyticDB for MySQL will discard columns with unsupported data types during initial schema synchronization.
For supported data types, see Basic data types and Complex data types.
Does DTS support modifying the field types in the source table when synchronizing data to an AnalyticDB MySQL cluster?
During data synchronization, you can modify the field types in the source table. Currently, only changes between integer data types and between floating-point data types are supported. You can only change from a data type with a smaller value range to a data type with a larger value range, or from a single-precision data type to a double-precision data type.
Integer data types: Changes from smaller types to larger types are supported among Tinyint, Smallint, Int, and Bigint. For example, changing from Tinyint to Bigint is supported, but changing from Bigint to Tinyint is not supported.
Floating-point data types: Changing from Float to Double is supported, but changing from Double to Float is not supported.
How do I resolve errors caused by modifying the data types of the source table when DTS synchronizes data to an AnalyticDB MySQL cluster?
Error:
'id' is LONG type, Can't change column type to DECIMAL
modify precision is not supported, col=id, type decimal, old=11, new=21
Cause: For detailed information, see Does DTS support modifying the field types in the source table when synchronizing data to an AnalyticDB MySQL cluster?
Solution:
For non-entire database synchronization: It is recommended to resynchronize this table (that is, first remove it from the synchronization objects, then delete the table in the destination database, and then add this table to the synchronization objects). DTS will perform full synchronization again, including schema retrieval, which will skip this type of DDL.
For entire database synchronization: Create a new table in AnalyticDB MySQL with a name different from the table that reported the error. The table structure needs to be consistent with the source table structure. Use INSERT INTO SELECT to write the data from the source table to the newly created table, delete the table that reported the error, and then use Rename to rename the new table to the name of the table that reported the error. Restart the DTS task.
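A simplified sketch of this workaround, assuming the errored table is named t_err, that the new table schema matches the source table, and that your cluster version supports RENAME TABLE:
CREATE TABLE t_err_new (id BIGINT, name VARCHAR(1024), PRIMARY KEY (id));
INSERT INTO t_err_new SELECT id, name FROM t_err;
DROP TABLE t_err;
RENAME TABLE t_err_new TO t_err;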
How do I resolve errors caused by invalid date values when DTS synchronizes data to an AnalyticDB MySQL cluster?
Error:
Cannot parse "2013-05-00 00:00:00": Value 0 for dayOfMonth must be in the range [1,31]]
Cause: AnalyticDB MySQL does not support writing invalid date values.
Solution:
If the task is in the full initialization phase, modify the value in the source table to a valid value (for example, change the error value above to 2013-05-01 00:00:00).
If the task is in the incremental synchronization phase, remove this table from the synchronization objects, modify the value in the source table to a valid value, add the table back to the synchronization objects, and then restart the synchronization task.
If the task is in the incremental synchronization phase of entire database synchronization, please contact Alibaba Cloud technical support to enable the invalid value writing switch. After the switch is enabled, all invalid value writes will be converted to null.
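For the phases in which you fix the source data yourself, a hypothetical statement against the source MySQL table might look like the following; the table and column names are placeholders:
UPDATE source_table SET gmt_create = '2013-05-01 00:00:00' WHERE gmt_create = '2013-05-00 00:00:00';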
How do I resolve errors caused by synchronizing tables without primary keys when DTS synchronizes data to an AnalyticDB MySQL cluster?
Error:
DTS-077004: Record Replicator error. cause by [[17003,2020051513063717201600100703453067067] table not exist => xxx_table]
Cause: Currently, AnalyticDB MySQL does not support synchronizing tables without primary keys.
Solution: This only occurs during entire database synchronization. You need to first determine whether the source database table has no primary key. If so, please manually create a table in the destination database and ensure that the newly created table has a primary key. After creating the table, restart the DTS task.
How do I resolve errors caused by excessively long default values for table fields when DTS synchronizes data to an AnalyticDB MySQL cluster?
Error:
default value is too long
Solution: Please submit a ticket to contact Alibaba Cloud technical support to upgrade your AnalyticDB MySQL cluster to the latest version.
How do I resolve errors caused by writing records larger than 16 MB when DTS synchronizes data to an AnalyticDB MySQL cluster?
Error:
com.mysql.jdbc.PacketTooBigException: Packet for query is too large (120468711 > 33554432). You can change this value on the server by setting the max_allowed_packet' variable.
Solution: Please submit a ticket to contact Alibaba Cloud technical support to upgrade your AnalyticDB MySQL cluster to the latest version.
How do I resolve errors caused by insufficient disk space when DTS synchronizes data to an AnalyticDB MySQL cluster?
Error:
disk is over flow
Solution: Delete some data to free up enough disk space, or contact Alibaba Cloud technical support to scale out your AnalyticDB MySQL cluster. After ensuring that there is enough disk space, restart the DTS task.
How do I resolve errors caused by missing tables or fields when DTS synchronizes data to an AnalyticDB MySQL cluster?
Error:
table not exist => t1
Solution: First, check whether you selected all DDL synchronization (such as table creation statements and other DDL statements) when configuring DTS. If not, select it.
How do I resolve the issue of not obtaining values from merged fields when DTS synchronizes data to an AnalyticDB MySQL cluster?
Error:
No value present
Cause: In a multi-table merging scenario, if a field in one of the source tables is changed (for example, a new field is added), the fields of the source tables become inconsistent, and the above error occurs when data is written to the destination table.
Solution: Please submit a ticket to contact DTS technical support for assistance.
Will an error occur if the database name, table name, or column name in the source instance contains hyphens (-) when DTS synchronizes data to an AnalyticDB MySQL cluster?
Because AnalyticDB for MySQL does not allow database names, table names, or column names to contain hyphens (-), to ensure successful data synchronization, the system will map hyphens (-) to underscores (_).
If you encounter other synchronization failures caused by database names, table names, or column names during data synchronization (such as table names containing spaces or Chinese characters), please contact Alibaba Cloud technical support.
For more AnalyticDB for MySQL limits, see Limits.
How do I troubleshoot data latency issues in an AnalyticDB MySQL cluster when DTS synchronizes data to the cluster?
The default DTS synchronization link specification is medium. If the source database has a large write volume and you want to reach the performance upper limit of the specification, you need to upgrade the DTS instance specification.
For tables without primary keys, the automatically selected primary key may cause hot row updates, which are very slow. You can submit a ticket to contact AnalyticDB for MySQL technical support to resolve this issue.
When the write performance of the AnalyticDB for MySQL cluster has reached a bottleneck, you need to upgrade the AnalyticDB for MySQL specification.
Why does the write TPS not meet expectations when DataWorks imports data to an AnalyticDB MySQL cluster?
When the client import pressure is insufficient, the cluster CPU utilization, disk IO utilization, and write response time will be at a low level. Although the database server can promptly consume the data sent by the client, the total amount sent is small, resulting in the write TPS not meeting expectations. You can increase the batch insert count for a single import and increase the Expected Maximum Concurrency Of The Task. The data import performance will increase linearly with the increase in import pressure.
Why does data skew occur in the target table when DataWorks imports data to an AnalyticDB MySQL cluster?
When data skew occurs in the destination table, some nodes of the cluster are overloaded, which affects import performance. In this case, the cluster CPU utilization and disk I/O utilization are low, but the write response time is high. You can also find the destination table in the skew diagnosis list on the Data Modeling Diagnostics tab of the Diagnostics and Optimization page. You can redesign the table schema and then import the data again. For more information, see Schema design.
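For example, a sketch of recreating the table with a more evenly distributed column as the distribution key; the table, columns, and key choice are placeholders:
CREATE TABLE orders_new (
    order_id BIGINT,
    seller_id BIGINT,
    order_time DATETIME,
    PRIMARY KEY (order_id)
) DISTRIBUTED BY HASH(order_id);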
How do I check whether the client or its server has load bottlenecks when importing local data using adb-import.sh?
If the client has a bottleneck, it cannot generate enough pressure on the database. You can check whether the client or the server it runs on has load bottlenecks in the following two ways:
Log on to the AnalyticDB for MySQL console and click Monitoring Information and Diagnostics and Optimization in the left-side navigation pane to check for load bottlenecks.
Run the following common commands to check whether the client or the server it runs on has load bottlenecks.
| Command | Description |
| --- | --- |
| top | View CPU utilization. |
| free | View memory usage. |
| vmstat 1 1000 | View the overall system load. |
| dstat -all --disk-util or iostat 1 1000 | View disk read bandwidth and utilization. |
| jstat -gc <pid> 1000 | View the garbage collection (GC) details of the Java process of the import tool. If GC is frequent, you can try to appropriately increase the heap memory size in the JVM parameters. |
How do I parameterize the import script when importing local data using adb-import.sh?
If the row and column delimiters are the same across the files to be imported, you can modify the tableName and dataPath parameters in the import script and pass different table names and file paths as arguments, so that a single script can import multiple tables.
Example:
tableName=$1
dataPath=$2
Execute the import using parameterization:
# sh adb-import.sh table_name001 /path/table_001
# sh adb-import.sh table_name002 /path/table_002
# sh adb-import.sh table_name003 /path/table_003
How do I run the import program in the background when importing local data using adb-import.sh?
You can execute the following command to run the import program in the background:
# nohup sh adb-import.sh &
After the import program starts running in the background, you can execute the following command to check the log. If an exception information stack is printed, it indicates that there is an error in the import, and you need to troubleshoot the problem based on the exception information. The command is as follows:
# tail -f nohup.out
You can also use the following command to check whether the import process is still executing normally:
# ps -ef|grep import
How do I ignore error rows in the import program when importing local data using adb-import.sh?
Error rows in the import program can be divided into the following two categories:
SQL execution errors.
For this type of error, you can ignore error rows by setting the ignoreErrors=true parameter. In this case, the execution result shows the detailed error files, the starting line numbers (because batchSize is set, the error rows are within batchSize lines after the starting line number), and the SQL statements that failed to execute.
The number of columns in the file does not meet expectations.
When the number of columns in the file does not meet expectations, the system will immediately stop importing the file and print an error message. However, because this error is caused by an invalid file, it will not be ignored, and you need to manually check the correctness of the file. This type of error will print the following error message:
[ERROR] 2021-03-22 00:46:40,444 [producer- /test2/data/lineitem.csv.split00.100-41] analyticdb.tool.ImportTool (ImportTool.java:591) -bad line found and stop import! 16, file = /test2/data/tpch100g/lineitem.csv.split00.100, rowCount = 7, current row = 3|123|179698|145|73200.15|0.06|0.00|R|F|1994-02-02|1994-01-04|1994-02-23|NONE|AIR|ongside of the furiously brave acco|
How do I narrow down the scope of troubleshooting for import failures when importing local data using adb-import.sh?
To help locate the cause of import failures more quickly, you can narrow down the scope of failure causes from the following aspects:
When an import fails, the AnalyticDB for MySQL import tool prints error logs and detailed error causes. By default, SQL statements are truncated to a maximum of 1,000 characters. To print more complete SQL statements, you can set the failureSqlPrintLengthLimit parameter to a larger, reasonable value (such as 1500):
printErrorSql=true failureSqlPrintLengthLimit=1500;
Because batchSize is set, SQL statements are usually executed in batches of thousands of rows, which makes it difficult to identify the error rows. You can reduce the batchSize parameter (for example, set it to 10) to help locate the error rows. The parameter is modified as follows:
batchSize=10;
If the files have been split and you know which file slice contains the error rows, you can modify the dataPath parameter to import only the single file that contains the error rows, reproduce the problem, and check the error message. The statement is as follows:
dataPath=/u01/this/is/the/directory/where/product_info/stores/file007;
How do I run the import program in a Windows environment when importing local data using adb-import.sh?
The Windows environment does not provide a batch script. You can call the JAR file directly to run the import:
usage: java -jar adb-import-tool.jar [-a <arg>] [-b <arg>] [-B <arg>] [-c <arg>]
       [-D <arg>] [-d <arg>] [-E <arg>] [-f <arg>] [-h <arg>] [-I <arg>]
       [-k <arg>] [-l <arg>] [-m <arg>] [-n <arg>] [-N <arg>] [-O <arg>]
       [-o <arg>] [-p <arg>] [-P <arg>] [-Q <arg>] [-s <arg>] [-S <arg>]
       [-t <arg>] [-T <arg>] [-u <arg>] [-w <arg>] [-x <arg>] [-y <arg>] [-z <arg>]
| Parameter | Required | Description |
| --- | --- | --- |
| -h | Required | The connection address of the AnalyticDB for MySQL cluster. |
| -u | Required | The database account of the AnalyticDB for MySQL cluster. |
| -p | Required | The password of the database account of the AnalyticDB for MySQL cluster. |
| -P | Required | The port number used by the AnalyticDB for MySQL cluster. |
| -D | Required | The name of the database in the AnalyticDB for MySQL cluster. |
| --dataFile | Required | The absolute path of the file or folder to be imported. You can import a single file or all files in a folder. |
| --tableName | Required | The name of the table into which data is imported. |
|  | Optional | Whether to generate a flag file after the import is complete. The default value is an empty string, which indicates that no flag file is generated. To generate a flag file, specify a file name. |
| --batchSize | Optional | The number of rows (VALUES) written in each batch INSERT statement. Note: To achieve better batch write performance, we recommend that you set this value to a number between 1024 and 4096. |
|  | Optional | Whether to encrypt the database password with an encryption algorithm. Default value: false, which indicates that the password is not encrypted. |
|  | Optional | Whether to print the actual number of rows in the destination table after each file is imported. Default value: false, which indicates that the number of rows is not printed. |
|  | Optional | Whether to skip the table header. Default value: false, which indicates that the table header is not skipped. |
|  | Optional | Whether to escape special characters. Note: Escaping has a performance impact on client-side string parsing. If you are sure that the files to be imported contain no characters that need to be escaped, you can set this parameter to false. |
|  | Optional | Whether to ignore failed batches when errors occur during the import. Default value: false, which indicates that failed batches are not ignored. |
|  | Optional | The number of rows to skip. |
|  | Optional | The column delimiter. |
| --concurrency | Optional | The number of files that are read in parallel when dataFile is set to a folder. |
|  | Optional | When there is |
|  | Optional | Whether to print the SQL statement that caused an error during the import. Default value: true, which indicates that the error SQL statement is printed. |
|  | Optional | The size of the connection pool for the AnalyticDB for MySQL database. Default value: 2. |
|  | Optional | The file encoding. Valid values: GBK and UTF-8 (default). |
|  | Optional | Whether to only print the INSERT SQL statements without executing them. Default value: false, which indicates that the INSERT statements are executed. |
|  | Optional | The row delimiter. |
|  | Optional | When encountering errors during data import and |
|  | Optional | The buffer size for INSERT SQL statements, which facilitates pipelining and the separation of I/O and computation when INSERT statements are sent to AnalyticDB for MySQL, thereby improving client performance. Default value: 128. |
|  | Optional | Whether to include column names in the executed INSERT statements. |
|  | Optional | The truncation length of the error SQL statement that is printed when an INSERT statement fails. Default value: 1000. |
|  | Optional | The database connection parameters. |
Examples:
Example 1: Import a single file using the default parameter configuration. The command is as follows:
java -Xmx8G -Xms8G -jar adb-import-tool.jar -hyourhost.ads.aliyuncs.com -uadbuser -ppassword -P3306 -Dtest --dataFile /data/lineitem.sample --tableName LINEITEM
Example 2: Modify relevant parameters to maximize throughput when importing all files in a folder. The command is as follows:
java -Xmx16G -Xms16G -jar adb-import-tool.jar -hyourhost.ads.aliyuncs.com -uadbuser -ppassword -P3306 -Dtest --dataFile /data/tpch100g --tableName LINEITEM --concurrency 64 --batchSize 2048