This topic describes how to use Spark Streaming SQL to perform interactive development and data analysis on a JDBC data source.

CREATE TABLE syntax

CREATE TABLE tbName
USING jdbc2
OPTIONS(propertyName=propertyValue[,propertyName=propertyValue]*);

Table schema

When you create a JDBC table, you do not need to explicitly define its fields. The schema is obtained from the table in the source database. Example:
spark-sql> CREATE DATABASE IF NOT EXISTS default;
spark-sql> USE default;
spark-sql> DROP TABLE IF EXISTS rds_table_test;
spark-sql> CREATE TABLE rds_table_test
         > USING jdbc2
         > OPTIONS (
         > url="jdbc:mysql://rm-bp11*********i7w9.mysql.rds.aliyuncs.com:3306/default? useSSL=true",
         > driver="com.mysql.jdbc.Driver",
         > dbtable="test",
         > user="root",
         > password="thisisapassword",
         > batchsize="100",
         > isolationLevel="NONE");

spark-sql> DESC rds_table_test;
id  int NULL
name  string  NULL
Time taken: 0.413 seconds, Fetched 2 row(s)
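
The two fields that DESC returns are not declared in the CREATE TABLE statement; they are read from the schema of the table named test in the RDS database. For reference, the following is a hypothetical MySQL DDL that would produce this schema (the column types are assumptions, not taken from the example above):

-- Hypothetical DDL for the source table `test` in the RDS MySQL database.
-- The Spark table schema (id int, name string) shown by DESC is inferred from it.
CREATE TABLE `test` (
  `id`   INT,
  `name` VARCHAR(255)
);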

Parameters

| Parameter | Description | Required |
| url | The URL of the database. | Yes |
| driver | The class name of the JDBC driver that is used to connect to the database. Example: com.mysql.jdbc.Driver. | Yes |
| dbtable | The name of the table in the database. | Yes |
| user | The username of the account that is used to connect to the database. | Yes |
| password | The password of the account that is used to connect to the database. | Yes |
| batchsize | The number of data entries written in each batch. This parameter takes effect only when data is written to the database. | No |
| isolationLevel | The transaction isolation level. Default value: READ_UNCOMMITTED. | No |
The following table describes the transaction isolation levels and their support for read phenomena.
| Transaction isolation level | Dirty read | Non-repeatable read | Phantom read |
| READ_UNCOMMITTED | Supported | Supported | Supported |
| READ_COMMITTED | Not supported | Supported | Supported |
| REPEATABLE_READ | Not supported | Not supported | Supported |
| SERIALIZABLE | Not supported | Not supported | Not supported |
| NONE | Not supported | Not supported | Not supported |

Write data

If you want to write data to the database, set the SQL statement that is associated with the streaming query before you run the INSERT INTO statement:
spark-sql> SET streaming.query.${queryName}.sql=insert into `test` (`id`,`name`) values(?, ?);
spark-sql> SET ...
spark-sql> INSERT INTO rds_table_test SELECT ...
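
The following is a minimal end-to-end sketch of the write flow. It assumes that a streaming source table named kafka_source_test with id and name columns has already been created and that the streaming query is named test_query; both names are hypothetical placeholders. Other job-level settings are configured with additional SET statements in the same way and are omitted here.

spark-sql> SET streaming.query.test_query.sql=insert into `test` (`id`,`name`) values(?, ?);
spark-sql> INSERT INTO rds_table_test SELECT id, name FROM kafka_source_test;

The number of ? placeholders in the associated SQL statement is expected to match the number of columns produced by the SELECT clause. Each output row is bound to the placeholders and written to the RDS table, with batchsize controlling how many rows are written per batch.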