This topic describes how to run a Flink DataStream job to read data from Alibaba Cloud DataHub and write data to ApsaraDB for HBase.
Prerequisites
- Java Development Kit (JDK) 8 is installed on your machine.
- Maven 3.X is installed on your machine.
- An integrated development environment (IDE) for Java or Scala is installed on your machine. We recommend that you use IntelliJ IDEA. The JDK and Maven are configured.
- A topic is created in DataHub, and test data exists in the topic.
Note: The test data must contain three fields whose data types are BOOLEAN, STRING, and STRING, in that order.
- An ApsaraDB for HBase instance is created. The instance resides in the same region and the same virtual private cloud (VPC) as your Realtime Compute for Apache Flink cluster in exclusive mode. A table with several column families is created in the instance. For more information about how to use Shell to access ApsaraDB for HBase, see Use HBase Shell to access ApsaraDB for HBase.
- ApsaraDB for HBase Standard Edition is used in this topic.
- You must add the IP address of your Realtime Compute for Apache Flink cluster to a whitelist of ApsaraDB for HBase.
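The three-field test record described in the prerequisites (BOOLEAN, STRING, STRING) can be sketched as a plain Java class. This is only an illustration of the expected schema; the class and field names are hypothetical and not part of the demo project, and the mapping of the two STRING fields to the HBase row key and cell value is an assumption based on the verification step later in this topic.

```java
// Hypothetical model of one DataHub test record with the
// BOOLEAN, STRING, STRING schema required by this topic.
public class DemoRecord {
    private final boolean flag;   // field 1: BOOLEAN
    private final String rowKey;  // field 2: STRING (assumed to serve as the HBase row key)
    private final String value;   // field 3: STRING (assumed to be written as the cell value)

    public DemoRecord(boolean flag, String rowKey, String value) {
        this.flag = flag;
        this.rowKey = rowKey;
        this.value = value;
    }

    public boolean getFlag() { return flag; }
    public String getRowKey() { return rowKey; }
    public String getValue() { return value; }
}
```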
Develop a job
- Download and decompress the Hbase_Demo-master.zip package to your machine.
- Open IntelliJ IDEA and import the project. Select the decompressed Hbase_Demo-master folder and click OK.
- Double-click the HbaseDemo.java file in the \Hbase_Demo-master\src\main\java directory. Then, configure the parameters related to DataHub and ApsaraDB for HBase in the HbaseDemo.java file.
// Configure the parameters related to DataHub.
//private static String endPoint = "public endpoint";// Access DataHub over the Internet. If you enter an internal endpoint, you do not need to enter the public endpoint.
private static String endPoint = "inner endpoint";// Access DataHub over an internal network.
private static String projectName = "yourProject";
private static String topicSourceName = "yourTopic";
private static String accessId = "yourAK";
private static String accessKey = "yourAS";
private static Long datahubStartInMs = 0L;// Set the time that corresponds to the start offset.
// Configure the parameters related to ApsaraDB for HBase.
private static String zkQuorum = "yourZK";
private static String tableName = "yourTable";
private static String columnFamily = "yourcolumnFamily";
- Go to the directory where the pom.xml file is saved. Then, run the following command to package the file:
mvn package -Dcheckstyle.skip
Based on the artifactId that you set in the pom.xml file for your project, a JAR package named Hbase_Demo-1.0-SNAPSHOT-shaded.jar is generated in the target directory. This indicates that job development is complete.
Publish a job
-- Required. The full name of the main class.
blink.main.class=Hbase_Demo.HbaseDemo
-- The name of the job.
blink.job.name=datahub_demo
-- The resource name of the JAR package that contains the full name of the main class. If multiple JAR packages exist, you must specify this parameter.
--blink.main.jar=Hbase_Demo-1.0-snapshot.jar
-- The default state backend configuration. The configuration takes effect when the job code is not explicitly configured.
state.backend.type=niagara
state.backend.niagara.ttl.ms=129600000
-- The default checkpoint configuration. The configuration takes effect when the job code is not explicitly configured.
blink.checkpoint.interval.ms=180000
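As a quick sanity check on the millisecond values in the configuration above, the state TTL of 129600000 ms corresponds to 36 hours and the checkpoint interval of 180000 ms corresponds to 3 minutes. A minimal sketch of the conversion:

```java
// Convert the configured millisecond values into human-readable units.
long ttlMs = 129_600_000L;         // state.backend.niagara.ttl.ms
long checkpointMs = 180_000L;      // blink.checkpoint.interval.ms

long ttlHours = ttlMs / (60L * 60L * 1000L);          // 129600000 ms -> 36 hours
long checkpointMinutes = checkpointMs / (60L * 1000L); // 180000 ms -> 3 minutes

System.out.println("State TTL: " + ttlHours + " hours");
System.out.println("Checkpoint interval: " + checkpointMinutes + " minutes");
```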
Verify the test result
- Send test data to DataHub in the Realtime Compute console.
CREATE TABLE kafka_src (
  a BOOLEAN
) WITH (
  type = 'random'
);

CREATE TABLE event_logs (
  `a` BOOLEAN,
  b VARCHAR,
  `c` VARCHAR
) WITH (
  type = 'datahub',
  endPoint = '<yourEndpoint>',
  project = '<yourProject>',
  topic = '<yourTopic>',
  accessId = '<yourAccessId>',
  accessKey = '<yourAccessKey>'
);

INSERT INTO event_logs
SELECT a, 'rowkey3' AS b, '123' AS c
FROM kafka_src;
- Connect to the HBase cluster. For more information about how to connect to an HBase cluster, see Use HBase Shell to access ApsaraDB for HBase.
- Run the scan 'hbase_sink' command to query the data written to ApsaraDB for HBase. If information similar to that in the following figure appears, Realtime Compute for Apache Flink has written the DataHub data to ApsaraDB for HBase.
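Given the INSERT statement used in the test above, every row written by the job should carry the literal row key 'rowkey3' and value '123'. The following sketch assembles the cell coordinate you would expect such a row to have; the column-family name and the qualifier `c` are placeholders, not values defined by the demo project:

```java
// Assemble the HBase cell coordinate that the test INSERT should produce.
// 'yourcolumnFamily' and the qualifier 'c' are placeholders; substitute
// the column family and qualifier used by your own table.
String rowKey = "rowkey3";
String columnFamily = "yourcolumnFamily";
String qualifier = "c";
String value = "123";

String cell = rowKey + " column=" + columnFamily + ":" + qualifier + ", value=" + value;
System.out.println(cell);
```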
FAQ
Q: An error similar to the following one appears when a job is running, which indicates a JAR package conflict. What do I do?
A: Add a relocation rule for the conflicting packages to the maven-shade-plugin configuration in the pom.xml file. For example:
<relocations combine.self="override">
  <relocation>
    <pattern>org.glassfish.jersey</pattern>
    <shadedPattern>com.alibaba.blink.shaded.datahub.org.glassfish.jersey</shadedPattern>
  </relocation>
</relocations>