How to Connect Realtime Compute for Apache Flink with Your Own Environment
1. How to store the job JAR on OSS
There are two ways to upload job JARs on the VVP platform.
Method 1: Use the resource upload function provided by VVP. This function currently supports JAR packages of up to 200 MB. After uploading, you can select the JAR directly when creating a job. The steps are as follows:
● Enter the VVP platform, click the resource upload function on the left, click Upload Resources in the upper-right corner of the page that opens, select the JAR package to upload, and complete the upload;
● After the upload succeeds, click Create Deployment on the left and fill in the deployment name and other information. In the Jar URI field, select the JAR package you just uploaded from the drop-down list, click OK to create the deployment, and then start it.
Method 2: Upload the JAR directly in the OSS console and reference it by the link that OSS provides. This is also simple to use. The steps are as follows:
● Open the OSS console, select the bucket that was used when creating the VVP instance, select the target directory, and click Upload File; you can set the permission to public-read during the upload, then click Upload File to finish;
● Click Details to the right of the uploaded package in the OSS console to obtain the URL of the JAR package.
● When creating a job, fill in the URL of the JAR package in the Jar URI field, as shown in the following figure:
Note that the link shown on the OSS details page is a public-network link, and a newly activated VVP instance cannot access the public network directly. Therefore, when referencing the JAR over HTTPS while creating a job, you need to use the VPC endpoint so that the job can start normally.
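For illustration, assuming a bucket named my-bucket in the cn-hangzhou region (both names are placeholders), the two endpoint forms differ only in the "-internal" suffix that OSS uses for its VPC-internal domain:

```
# Public endpoint (shown on the OSS details page; not reachable from VVP by default):
https://my-bucket.oss-cn-hangzhou.aliyuncs.com/artifacts/my-job.jar

# VPC (internal) endpoint (reachable from within the same region's VPC):
https://my-bucket.oss-cn-hangzhou-internal.aliyuncs.com/artifacts/my-job.jar
```

Use the second form in the Jar URI field when VVP has no public-network access.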
What if you want to use a public-network HTTPS link? You can first open up VVP's access to the public network:
● First, create a NAT gateway. When creating it, select Purchase EIP in Combination, select the region, fill in the name and other information, and bind the elastic IP address (EIP) to complete the creation;
● Next, create an SNAT entry. After the NAT gateway is created, click Create SNAT Entry, select the vSwitch in the pop-up window, fill in the name, and complete the creation.
After these two steps, the VVP instance can reach the public network, and a JAR package accessible via a public HTTPS link can be used directly when creating a deployment.
2. How Flink interacts with typical data sources on the VVP platform
This part introduces how to read and write external data storage systems through SQL and connectors, using SLS and Kafka as example data sources.
(Practical demonstration) In the SQL editor, create a Datagen table, which generates random data, and click Run. Then create an SLS table, fill in the required parameters, and click Create.
After creation, write the SQL statement, such as INSERT INTO sls SELECT id, name FROM datagen, save it, and click Run to create the deployment and start it.
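As a sketch, the two tables and the insert statement from this demo might look like the following Flink SQL. The SLS endpoint, project, logstore, and credentials are placeholders to be replaced with your own values, and the SLS option names follow the VVP SLS connector documentation (they may vary slightly by connector version):

```sql
-- Source: randomly generated rows via the built-in datagen connector
CREATE TEMPORARY TABLE datagen (
  id   INT,
  name VARCHAR
) WITH (
  'connector' = 'datagen'
);

-- Sink: an SLS logstore (all values below are placeholders)
CREATE TEMPORARY TABLE sls (
  id   INT,
  name VARCHAR
) WITH (
  'connector' = 'sls',
  'endPoint'  = 'cn-hangzhou-intranet.log.aliyuncs.com',
  'project'   = 'my-project',
  'logStore'  = 'my-logstore',
  'accessId'  = '<your-access-key-id>',
  'accessKey' = '<your-access-key-secret>'
);

-- The statement saved and run as the deployment
INSERT INTO sls SELECT id, name FROM datagen;
```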
After the job runs successfully, query the data on SLS. As shown in the figure below, Datagen has generated data and successfully written it to SLS.
Similarly, we can follow the steps above to read data from SLS and write it to Kafka:
● Create a Kafka table on the SQL editor page of VVP;
● Use SQL to read data from SLS, write it to Kafka, and start the job;
● After the job runs successfully, it reads data from SLS and writes it to Kafka.
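The three steps above can be sketched in Flink SQL as follows. The Kafka broker address, topic name, and SLS parameters are placeholders; the Kafka options are the standard Flink Kafka SQL connector options, while the SLS option names follow the VVP connector documentation:

```sql
-- Source: read back from the SLS logstore (placeholder parameters)
CREATE TEMPORARY TABLE sls_source (
  id   INT,
  name VARCHAR
) WITH (
  'connector' = 'sls',
  'endPoint'  = 'cn-hangzhou-intranet.log.aliyuncs.com',
  'project'   = 'my-project',
  'logStore'  = 'my-logstore',
  'accessId'  = '<your-access-key-id>',
  'accessKey' = '<your-access-key-secret>'
);

-- Sink: a Kafka topic (placeholder broker and topic)
CREATE TEMPORARY TABLE kafka_sink (
  id   INT,
  name VARCHAR
) WITH (
  'connector' = 'kafka',
  'topic'     = 'my-topic',
  'properties.bootstrap.servers' = 'broker1:9092',
  'format'    = 'json'
);

INSERT INTO kafka_sink SELECT id, name FROM sls_source;
```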
3. How to export Flink metrics from the VVP platform to an external metrics system
Next, I will introduce how to export the metrics of running jobs to external systems for observation. VVP provides two methods:
Method 1: By default, VVP exports Flink job metrics to ARMS without additional configuration. After the job runs, you can view them via the Metrics button, as shown in the following figure:
Method 2: If you have your own metrics system and want to import Flink job metrics into it, there are two main points: first, ensure network connectivity between the jobs on VVP and your metrics system; second, configure the corresponding metrics reporter in the Flink configuration.
Example: use the Prometheus PushGateway approach, so the reporter class is org.apache.flink.metrics.prometheus.PrometheusPushGatewayReporter. Configure the host and port of the PushGateway as shown in the figure above, and the metrics reporter is ready. After the job starts successfully, check the metrics on the configured Grafana dashboard, as shown in the following example.
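A minimal sketch of the corresponding Flink configuration entries is shown below; the host, port, and job name are placeholders, and the option names follow the standard Flink PrometheusPushGatewayReporter:

```
metrics.reporter.promgateway.class: org.apache.flink.metrics.prometheus.PrometheusPushGatewayReporter
metrics.reporter.promgateway.host: <pushgateway-host>
metrics.reporter.promgateway.port: 9091
metrics.reporter.promgateway.jobName: my-flink-job
metrics.reporter.promgateway.randomJobNameSuffix: true
metrics.reporter.promgateway.deleteOnShutdown: false
```

The PushGateway host must be reachable from the VVP jobs' network (see the connectivity point above).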
4. How to export Flink job logs to an external system
If a job fails suddenly while running and we want to view its logs, we need to persist the Flink job logs. The VVP platform provides two solutions for this: writing logs to OSS or to SLS. In short, when creating a job, configure the relevant log parameters in the Log configuration item.
Method 1: Write logs to OSS. When creating a job, in the Log configuration under the advanced settings, choose the user-defined option, paste the template from the help document into the custom configuration, and replace the placeholders with the required OSS parameters.
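As an illustration only: the custom log configuration is a Log4j2 template, and the exact OSS appender type and property names must be copied from the template in the VVP help document. The fragment below is a hypothetical sketch with placeholder values, not the official template:

```xml
<!-- Hypothetical sketch: copy the real appender definition from the
     VVP help document and substitute your own OSS parameters. -->
<Appenders>
  <Appender name="OSS" type="OSS">
    <Property name="endpoint">oss-cn-hangzhou-internal.aliyuncs.com</Property>
    <Property name="bucket">my-log-bucket</Property>
    <Property name="accessKeyId">&lt;your-access-key-id&gt;</Property>
    <Property name="secretAccessKey">&lt;your-access-key-secret&gt;</Property>
  </Appender>
</Appenders>
<Loggers>
  <Root level="INFO">
    <AppenderRef ref="OSS"/>
  </Root>
</Loggers>
```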
When you need to view the logs, follow the guidance in the help document to locate the file where the logs are stored, and click Download to view it.
Method 2: Write logs to SLS. This is similar to Method 1, except that the log configuration items differ slightly; downloading and viewing the logs works the same way as in Method 1.