EventBridge Integrated Cloud Service Practice
Overview of EventBridge Integration
EventBridge is a serverless event bus launched by Alibaba Cloud. Its goals are to expand the event ecosystem, break down data silos between systems, and establish an event integration ecosystem. It provides unified, standardized access and management capabilities for events, improves integration pathways, and helps customers quickly implement core event-driven atomic functions. EventBridge can be quickly integrated with systems such as BPM, RPA, and CRM.
EventBridge expands its event ecosystem in three directions: event standardization, access standardization, and component standardization:
• Event standardization: embrace the CloudEvents 1.0 open-source community standard protocol, natively support the CloudEvents community SDKs and APIs, and fully embrace the open-source event standard ecosystem;
• Access standardization: provide a standard event push protocol, PutEvent, and support both Pull and Push event access models, which effectively lowers the difficulty of event access and provides a comprehensive, standardized process for event access on the cloud;
• Component standardization: encapsulate a standardized toolchain for downstream event components, including Schema registration, event analysis, event retrieval, and event dashboards, providing a comprehensive event toolchain ecosystem.
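As a concrete illustration of the event standard, a minimal CloudEvents 1.0 envelope carries a few required attributes (`specversion`, `id`, `source`, `type`) plus optional context and data. The sketch below is hypothetical: the `source`, `type`, and `subject` values are placeholders, not the exact strings EventBridge emits.

```json
{
  "specversion": "1.0",
  "id": "a1b2c3d4-0000-0000-0000-000000000000",
  "source": "acs:oss",
  "type": "oss:ObjectCreated:PutObject",
  "subject": "acs:oss:cn-hangzhou:123456789:my-bucket/zip/demo.zip",
  "time": "2022-01-01T12:00:00Z",
  "datacontenttype": "application/json",
  "data": {
    "oss": {
      "bucket": "my-bucket",
      "object": "zip/demo.zip"
    }
  }
}
```

Because every event source is normalized into this envelope, downstream rules and targets can filter and transform all events with the same mechanisms.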
In the field of integration, EventBridge focuses on two core scenarios: event integration and data integration. Both are described in detail below.
Event Integration
Currently, EventBridge has over 80 cloud product event sources and over 800 event types, and the event ecosystem is still being enriched.
So, how does EventBridge achieve event integration for cloud products?
First, on the EventBridge console, you can see an event bus named default, to which all cloud product events are posted.
Then click Create Rule and select the cloud products and events of interest for monitoring and delivery.
Let's look at EventBridge event integration through two examples.
OSS Event Integration
Taking OSS event sources as an example, let's explain how to integrate OSS events.
OSS events currently fall into four main categories: operation audit related, cloud monitoring related, configuration audit related, and cloud-product-level events such as PutObject file uploads. The event sources of other cloud products are similar and can basically be divided into these types.
Below is a demonstration of an event driven online file decompression service:
The OSS bucket has a zip directory for the files to be decompressed and an unzip directory for the extracted files.
When a file is uploaded to the OSS bucket, a file-upload event is triggered and delivered to EventBridge's dedicated cloud service event bus.
An event rule then filters events from the zip directory and posts them to the HTTP endpoint of the decompression service.
After receiving an event, the decompression service downloads the file from OSS according to the file path in the event, decompresses it, and uploads the extracted files to the unzip directory.
Meanwhile, another event rule listens for file-upload events in the unzip directory, transforms them, and pushes notifications to a DingTalk group.
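The core of the decompression step can be sketched in a few lines of Python. This is a simplified, in-memory version of the logic: in the real service the archive bytes would be downloaded from OSS and the resulting entries uploaded back, but the key mapping shown here (archive entry → `unzip/` object key) is the same. The function name and prefix are illustrative, not from the project's source.

```python
import io
import zipfile


def unzip_to_prefix(zip_bytes: bytes, target_prefix: str = "unzip/") -> dict:
    """Extract a zip archive held in memory and map each file entry to the
    OSS object key it should be uploaded under in the unzip/ directory."""
    result = {}
    with zipfile.ZipFile(io.BytesIO(zip_bytes)) as archive:
        for name in archive.namelist():
            if name.endswith("/"):  # skip directory entries
                continue
            result[target_prefix + name] = archive.read(name)
    return result
```

Uploading each returned key/value pair back to the bucket then triggers the second rule, which watches the unzip directory.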
Let's take a look at how it is implemented together:
Go to the link below to view the video:
https://www.bilibili.com/video/BV1s44y1g7dk/
1) First, create a bucket with a zip directory to store the uploaded compressed files and an unzip directory to store the extracted files.
2) Deploy the decompression service and expose a public network access address.
The source code address of the decompression service is:
https://github.com/AliyunContainerService/serverless-k8s-examples/tree/master/oss-unzip
You can also use ASK for direct deployment, and the yaml file address is:
https://github.com/AliyunContainerService/serverless-k8s-examples/blob/master/oss-unzip/hack/oss-unzip.yaml
3) Create an event rule that listens for file-upload events in the zip directory and posts them to the HTTP endpoint of the decompression service.
Here, use subject to match the zip directory.
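As an illustration, an event pattern using subject prefix matching might look like the sketch below. The `source`, `type`, region, and account values are hypothetical placeholders; check the console and the OSS event documentation for the exact names in your account.

```json
{
  "source": ["acs.oss"],
  "type": ["oss:ObjectCreated:PutObject"],
  "subject": [
    { "prefix": "acs:oss:cn-hangzhou:123456789:my-bucket/zip/" }
  ]
}
```

Only events whose subject starts with the zip directory prefix are forwarded to the decompression service.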
4) Create another event rule that listens for events in the unzip directory and pushes decompression-complete notifications to the DingTalk group.
Here, we also use subject to match the unzip directory.
EventBridge extracts parameters from the event through JSONPath, places the values in variables, and renders the final output to the event target through a template definition. The event format of OSS event sources is documented at:
https://help.aliyun.com/document_detail/205739.html#section-G8i-7p9-xpk
Define variables with JSONPath according to your actual business needs.
5) Finally, upload a file through the OSS console to verify.
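The extract-and-render mechanism described above can be approximated with a small, stdlib-only sketch. This is a deliberately simplified JSONPath that only handles dotted field access (`$.a.b.c`); the real service supports full JSONPath, and the variable and template names here are illustrative.

```python
from string import Template


def extract(event: dict, json_path: str):
    """Resolve a simple JSONPath of the form $.a.b.c against the event."""
    node = event
    for part in json_path.lstrip("$.").split("."):
        node = node[part]
    return node


def render(event: dict, variables: dict, template: str) -> str:
    """Bind JSONPath-extracted values to named variables, then render the
    output template that is sent to the event target."""
    values = {name: extract(event, path) for name, path in variables.items()}
    return Template(template).safe_substitute(values)
```

For example, binding `file` to `$.data.oss.object` and rendering a template like `Extracted ${file} finished` produces the DingTalk notification text.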
You can see that the eventbridge.zip just uploaded has been decompressed and re-uploaded, and a decompression-complete notification arrives in the DingTalk group. You can also inspect the content and trace of each delivered event on the event tracing page.
There are two upload events: one from the console upload, and one from uploading the decompressed files.
The traces show both events successfully delivered to the HTTP endpoint of the decompression service and to the DingTalk bot.
Integrating cloud products through custom event sources and cloud product event targets
The previous demo integrated a cloud service event source. Now let's look at how to integrate cloud products through a custom event source and a cloud product event target.
In this demo, EventBridge automatically cleans data and writes it to RDS. Each event is a JSON object with two fields, name and age; we want to keep only users older than 10 and store them in RDS.
The overall architecture is shown in the figure: an MNS Queue serves as the custom event source, and EventBridge filters and transforms the events before writing them directly to RDS.
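The filter-and-transform step EventBridge performs here can be sketched as a small Python function. This is a behavioral sketch of the rule, not EventBridge's implementation; the function name and threshold constant are illustrative.

```python
import json

AGE_THRESHOLD = 10


def filter_and_transform(message_body: str):
    """Parse an MNS message body and keep only users older than the threshold.
    Returns the (name, age) row to insert into RDS, or None if filtered out."""
    record = json.loads(message_body)
    if record["age"] <= AGE_THRESHOLD:
        return None
    return (record["name"], record["age"])
```

A message with age 12 yields a row for RDS, while one with age 8 is dropped, which is exactly the behavior verified at the end of this demo.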
1) First, an MNS Queue, an RDS instance, and a database table have been created in advance. The table structure is as follows:
2) Create a custom event bus, select MNS as the event provider, and choose the pre-created queue.
After creation, you can see the event source running here.
3) Next, create a rule that posts events to RDS.
The configured event pattern is as follows:
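As a rough sketch, a pattern that keeps only events whose age exceeds 10 might look like the following. The exact field names and operator syntax should be verified against the numeric-match documentation linked below; this shape is an assumption.

```json
{
  "data": {
    "age": [
      { "numeric": [">", 10] }
    ]
  }
}
```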
For numeric matching, refer to the official documentation:
https://help.aliyun.com/document_detail/181432.html#section-dgh-5cq-w6c
4) Click Next, select the database as the event target, fill in the database information, configure the transformation rules, and complete the creation.
5) Finally, send a message with an age greater than 10 to the MNS Queue.
You can see that this event is written to RDS.
Next, send a message with an age of 10 or less to the MNS Queue.
This event is filtered out and not written to RDS.
The events can also be inspected through event tracing:
One event was successfully delivered to RDS, while the other was filtered out and not delivered.
Data Integration
Event streaming is a lighter-weight, more real-time, end-to-end streaming channel that EventBridge provides for data integration. Its main goal is to synchronize events between two endpoints while providing filtering and transformation. It currently supports event streaming between Alibaba Cloud's various messaging products.
Unlike the event bus model, event streaming needs no event bus: its 1:1 model is more lightweight, and delivering directly to the target makes events more real-time. Through event streaming, we can achieve protocol conversion, data synchronization, and cross-region backup between different systems.
Below, we use an example to show how to route RocketMQ messages to an MNS Queue through event streaming, integrating the two products.
The overall structure is shown in the figure: messages in RocketMQ with Tag mns are routed through EventBridge to the MNS Queue.
Let's take a look at how to achieve this:
First, create an event stream, select the source RocketMQ instance, and set the Tag to mns.
Leave the event pattern blank to match all events.
Select MNS as the target, choose the target queue, and complete the creation.
After creation, click Start to launch the event stream task.
After the event stream starts, we can send messages to the source RocketMQ topic through the console or SDK. Messages whose Tag is mns are routed to the MNS Queue; messages with any other Tag are not.
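The routing behavior just described boils down to a simple tag filter, which can be sketched as follows. This simulates the event stream's decision only; the function name and message representation are illustrative, not part of any SDK.

```python
def route_messages(messages, target_tag: str = "mns"):
    """Simulate the event-stream filter: forward only the bodies of RocketMQ
    messages whose Tag matches the configured value to the MNS queue."""
    return [body for tag, body in messages if tag == target_tag]
```

Sending three messages where two carry Tag mns would result in exactly those two appearing in the MNS Queue.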