Apache Sqoop is a tool designed to efficiently transfer bulk data between Apache Hadoop and structured data stores, such as relational databases.

Import data from MySQL to HDFS

Run the following command on the master node of a cluster:
sqoop import --connect <dburi>/<dbname> --username <username> --password <password> --table <tablename> --target-dir <hdfs-dir> --split-by <split-column> --check-column <col> --incremental <mode> --last-value <value>
Parameters:
dburi: The access URL of the database. Example: jdbc:mysql://192.168.**.**:3306/.
dbname: The name of the database.
username: The username that is used to log on to the database.
password: The password that is used to log on to the database.
tablename: The name of the MySQL table.
hdfs-dir: The Hadoop Distributed File System (HDFS) directory to which the data is imported. Example: /user/hive/result.
split-column: Optional. The column that is used to split tasks. Default: the primary key column of the table.
col: Optional. The column that is used to determine the rows to be imported in incremental import scenarios.
mode: Optional. The incremental import mode. Valid values: append and lastmodified.
value: Optional. The maximum value of the check column from the previous incremental import.
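For example, the following command performs an incremental import in append mode of a hypothetical todo table from a database named testdb, splits tasks by the id column, and imports only the rows whose id is greater than 100. All database, table, user, and column names in this example are placeholders:
sqoop import --connect jdbc:mysql://192.168.**.**:3306/testdb --username sqoop_user --password ****** --table todo --target-dir /user/hive/result --split-by id --check-column id --incremental append --last-value 100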

For more information about the parameters, see the "7.2. Syntax" section of the Sqoop User Guide.

Import data from HDFS to MySQL

After you create MySQL tables that match the structure of the data in HDFS, run the following command on the master node of a cluster:
sqoop export --connect <dburi>/<dbname> --username <username> --password <password> --table <tablename> --export-dir <hdfs-dir>
Parameters:
dburi: The access URL of the database. Example: jdbc:mysql://192.168.**.**:3306/.
dbname: The name of the database.
username: The username that is used to log on to the database.
password: The password that is used to log on to the database.
tablename: The name of the MySQL table.
hdfs-dir: The HDFS directory from which the data is exported. Example: /user/hive/result.
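For example, the following command exports the files under /user/hive/result to a hypothetical MySQL table named result_table whose columns match the exported data. All names in this example are placeholders:
sqoop export --connect jdbc:mysql://192.168.**.**:3306/testdb --username sqoop_user --password ****** --table result_table --export-dir /user/hive/result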

For more information about the parameters, see the "7.2. Syntax" section of the Sqoop User Guide.

Import data from MySQL to Hive

Run the following command on the master node of a cluster:
sqoop import --connect <dburi>/<dbname> --username <username> --password <password> --table <tablename> --check-column <col> --incremental <mode> --last-value <value> --fields-terminated-by "\t" --lines-terminated-by "\n" --hive-import --target-dir <hdfs-dir> --hive-table <hive-tablename>
Parameters:
dburi: The access URL of the database. Example: jdbc:mysql://192.168.**.**:3306/.
dbname: The name of the database.
username: The username that is used to log on to the database.
password: The password that is used to log on to the database.
tablename: The name of the MySQL table.
col: Optional. The column that is used to determine the rows to be imported in incremental import scenarios.
mode: Optional. The incremental import mode. Valid values: append and lastmodified.
value: Optional. The maximum value of the check column from the previous incremental import.
hdfs-dir: The HDFS directory to which the data is imported. Example: /user/hive/result.
hive-tablename: The name of the Hive table.
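For example, the following command incrementally imports a hypothetical todo table into a Hive table named todo_hive, using tab-separated fields and newline-terminated records. All names in this example are placeholders:
sqoop import --connect jdbc:mysql://192.168.**.**:3306/testdb --username sqoop_user --password ****** --table todo --check-column id --incremental append --last-value 100 --fields-terminated-by "\t" --lines-terminated-by "\n" --hive-import --target-dir /user/hive/result --hive-table todo_hive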

For more information about the parameters, see the "7.2. Syntax" section of the Sqoop User Guide.

Import data from Hive to MySQL

You can use the same command that is used to import data from HDFS to MySQL. Set the export directory to the HDFS directory in which the data of the Hive tables is stored, as shown in the example below. For more information, see Import data from HDFS to MySQL.
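For example, if a Hive table named todo_hive stores its data in the default Hive warehouse directory and uses tab-separated fields, as in the import example above, the export command looks similar to the following. The warehouse path and all names in this example are assumptions; the actual location depends on your Hive configuration:
sqoop export --connect jdbc:mysql://192.168.**.**:3306/testdb --username sqoop_user --password ****** --table todo --export-dir /user/hive/warehouse/todo_hive --input-fields-terminated-by '\t'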

Import data from MySQL to OSS

Run the following command on the master node of a cluster:
sqoop import --connect <dburi>/<dbname> --username <username> --password <password> --table <tablename> --target-dir <oss-dir> --temporary-rootdir <oss-tmpdir> --check-column <col> --incremental <mode> --last-value <value>
Parameters:
dburi: The access URL of the database. Example: jdbc:mysql://192.168.**.**:3306/.
dbname: The name of the database.
username: The username that is used to log on to the database.
password: The password that is used to log on to the database.
tablename: The name of the MySQL table.
oss-dir: The Object Storage Service (OSS) directory to which the data is imported. Example: oss://<accessid>:<accesskey>@<bucketname>.oss-cn-hangzhou-internal.aliyuncs.com/result.
oss-tmpdir: The OSS directory to which the data is temporarily written. This parameter is required if you set the mode parameter to append. In append mode, Apache Sqoop imports the data to this temporary directory and then renames the files into the destination directory. If the destination directory already exists, Apache Sqoop does not import data to the directory or overwrite the data in the directory.
col: Optional. The column that is used to determine the rows to be imported in incremental import scenarios.
mode: Optional. The incremental import mode. Valid values: append and lastmodified.
value: Optional. The maximum value of the check column from the previous incremental import.
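For example, the following command incrementally imports a hypothetical todo table to an OSS directory in append mode. The table, user, and column names are placeholders; replace the OSS access credentials and bucket name with your own values:
sqoop import --connect jdbc:mysql://192.168.**.**:3306/testdb --username sqoop_user --password ****** --table todo --target-dir oss://<accessid>:<accesskey>@<bucketname>.oss-cn-hangzhou-internal.aliyuncs.com/result --temporary-rootdir oss://<accessid>:<accesskey>@<bucketname>.oss-cn-hangzhou-internal.aliyuncs.com/tmp --check-column id --incremental append --last-value 100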

For more information about the parameters, see the "7.2. Syntax" section of the Sqoop User Guide.

Import data from OSS to MySQL

After you create MySQL tables that match the structure of the data in OSS, run the following command on the master node of a cluster:
sqoop export --connect <dburi>/<dbname> --username <username> --password <password> --table <tablename> --export-dir <oss-dir>
Parameters:
dburi: The access URL of the database. Example: jdbc:mysql://192.168.**.**:3306/.
dbname: The name of the database.
username: The username that is used to log on to the database.
password: The password that is used to log on to the database.
tablename: The name of the MySQL table.
oss-dir: The OSS directory from which the data is exported. Example: oss://<accessid>:<accesskey>@<bucketname>.oss-cn-hangzhou-internal.aliyuncs.com/result.
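For example, the following command exports the files under an OSS directory to a hypothetical MySQL table named result_table. The table and user names are placeholders; replace the OSS access credentials and bucket name with your own values:
sqoop export --connect jdbc:mysql://192.168.**.**:3306/testdb --username sqoop_user --password ****** --table result_table --export-dir oss://<accessid>:<accesskey>@<bucketname>.oss-cn-hangzhou-internal.aliyuncs.com/result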

For more information about the parameters, see the "7.2. Syntax" section of the Sqoop User Guide.

Use an SQL statement to import data

Run the following command on the master node of a cluster:
sqoop import --connect <dburi>/<dbname> --username <username> --password <password> --query <query-sql> --split-by <sp-column> --hive-import --hive-table <hive-tablename> --target-dir <hdfs-dir>
Parameters:
dburi: The access URL of the database. Example: jdbc:mysql://192.168.**.**:3306/.
dbname: The name of the database.
username: The username that is used to log on to the database.
password: The password that is used to log on to the database.
query-sql: The SQL statement that selects the data to be imported. The statement must contain the $CONDITIONS token, which Sqoop replaces with split conditions at runtime. Example: SELECT * FROM profile WHERE id>1 AND \$CONDITIONS.
sp-column: The column that is used to split tasks. In most cases, this is the primary key column of the MySQL table.
hdfs-dir: The HDFS directory to which the data is imported. Example: /user/hive/result.
hive-tablename: The name of the Hive table.
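For example, the following command imports the rows of the profile table whose id is greater than 1 into a Hive table named profile_hive. All database, table, and user names in this example are placeholders:
sqoop import --connect jdbc:mysql://192.168.**.**:3306/testdb --username sqoop_user --password ****** --query "SELECT * FROM profile WHERE id>1 AND \$CONDITIONS" --split-by id --hive-import --hive-table profile_hive --target-dir /user/hive/result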

For more information about the parameters, see the "7.2. Syntax" section of the Sqoop User Guide.