The message delivery feature lets you send Flink job startup logs, resource usage data, and job events to external message queues or storage systems in real time. This feature supports data persistence, historical log retention, flexible integration, and real-time analysis. It also enables ad-hoc queries on historical data for troubleshooting, performance optimization, and audit analysis. This topic explains how to configure message delivery and view delivered messages.
Background information
You can deliver messages across regions. The following table lists the supported message types and their delivery timing.
Category | Description | Delivery timing |
Job startup logs | Logs generated during the entire startup process—from Flink environment initialization to JobManager startup and Flink execution graph generation. | One delivery occurs when the job starts successfully or reaches its final state (failed or finished). |
Resource usage | Important: This data is for tracking resource capacity only. It does not support alerting. | For an active namespace, resource usage data is sent every 30 seconds. |
Job events | The startup status at each point in time during job startup. | Each job event triggers immediate delivery. |
Job resource consumption | Resource usage data for running streaming jobs only. Batch jobs and jobs running on session clusters are excluded. | While the job runs, resource consumption data is sent every 10 minutes. |
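A consumer of the delivered messages typically branches on the messageType field. The sketch below is a hypothetical dispatcher, not part of the product: the messageType values and field names come from the field descriptions later in this topic, and the handler bodies are illustrative assumptions.

```python
# Hypothetical consumer sketch: route delivered messages by their
# messageType field. Values and field names follow the field
# descriptions in this topic; the handlers themselves are examples.
import json

HANDLERS = {}

def handles(message_type):
    """Register a handler for one delivered message type."""
    def register(fn):
        HANDLERS[message_type] = fn
        return fn
    return register

@handles("JOB_START_LOG")
def on_start_log(msg):
    return f"start log for {msg['deploymentName']}"

@handles("JOB_EVENT")
def on_event(msg):
    return f"event {msg['eventName']} for {msg['deploymentName']}"

@handles("JOB_RESOURCE_USAGE")
def on_usage(msg):
    return f"{msg['deploymentName']} uses {msg['jobUsedCpu']} CUs"

def dispatch(raw):
    """Decode one delivered message and invoke the matching handler."""
    msg = json.loads(raw)
    handler = HANDLERS.get(msg["messageType"])
    if handler is None:
        return None  # message type without a registered handler
    return handler(msg)
```

Registering handlers per type keeps the consumer open to new message categories without touching the dispatch logic.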
Usage notes
Messages can be delivered only to Simple Log Service (SLS). You must create an SLS project and Logstore first. For details, see Collect and analyze ECS text logs using LoongCollector.
The message delivery feature itself is free. However, using SLS features—such as Logstore indexing—incurs traffic fees. For details, see Billing overview.
To query and analyze logs in SLS, enable indexing. Indexing generates index traffic and uses storage space. You decide whether to enable it. For pricing details, see Billing overview.
You can set server-side encryption for your Logstore. Delivered messages inherit this setting. For details, see Data encryption.
Message delivery supports only job startup logs to SLS. To output job runtime logs to OSS, SLS, or Kafka, use other methods. For configuration steps, see Configure job log output.
Changes to message delivery settings take up to 10 seconds to take effect.
Procedure
Step 1: Configure the message delivery channel
Go to the message delivery configuration page.
Log on to the Realtime Compute for Apache Flink management console.
Find your target workspace and click Console in the Actions column.
In the navigation pane on the left, choose .
Configure SLS message delivery parameters.
On the Message Delivery Configuration tab, turn on Deliver to SLS.
Configure SLS settings.
Parameter
Description
Authorization mode
STS Token: Use this mode to deliver messages only to Logstores in SLS projects located in the same region as your Flink workspace. You only need to specify the SLS project and Logstore.
AccessKey: Use this mode to deliver messages to Logstores in SLS projects located in any region. You must specify the endpoint, AccessKey ID, and AccessKey secret.
SLS project
The name of your SLS project.
SLS Logstore
The name of your SLS Logstore.
Endpoint
The endpoint URL for the SLS service.
When the authorization mode is STS Token, the system automatically sets the endpoint to the one corresponding to the region of your Flink workspace. When the authorization mode is AccessKey, you need to manually configure it.
Delivery scope
The specific message content. For details, see Field descriptions.
AccessKeyId
AccessKeySecret
The AccessKey ID and AccessKey secret of your Alibaba Cloud account.
Important: To prevent exposure of your AccessKey pair, manage it using variables. Click the drop-down arrow to select an existing variable, or click the icon on the right side of the field to create a new one. For more information about variable management and how to view your AccessKey ID and AccessKey secret, see Variable management and How do I view my AccessKey ID and AccessKey secret?
Click Save.
Step 2: View delivered messages
Click SLS project, then click Open the SLS console on the right.

View raw log details.

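If indexing is enabled on the Logstore, you can filter delivered messages by their fields in the SLS query box. The query below is a hypothetical example: it assumes the messageType and deploymentName fields are indexed and that a deployment named my-flink-job exists.

```
messageType: JOB_EVENT and deploymentName: my-flink-job
```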
Field descriptions
The fields of each of the four message types are described as follows.
Job startup logs (JOB_START_LOG)
Field | Description |
messageType | The message type. The value is fixed to JOB_START_LOG. |
deploymentId | The job deployment ID. |
deploymentName | The job deployment name. |
jobId | The job instance ID. |
tag | The job tag. This field is empty if no tag is configured. |
length | The total length of the log. |
offset | The starting position of this log entry when logs are sharded. |
content | The details of the job startup log. |
workspace | The workspace ID. |
namespace | The namespace name. |
messageId | The message ID. |
timestamp | The timestamp. |
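The length and offset fields above imply that a long startup log can arrive split across several JOB_START_LOG entries. The following is a minimal reassembly sketch under that assumption; the field names match the table, but the entry dicts themselves are illustrative.

```python
# Minimal sketch: stitch a sharded job startup log back together using
# offset (start position of this chunk) and length (total log length)
# as described in the JOB_START_LOG field table. Entry dicts are assumed.

def reassemble(entries):
    """Rebuild one startup log from its JOB_START_LOG chunks."""
    if not entries:
        return ""
    total = entries[0]["length"]          # total length of the full log
    buf = [" "] * total                   # placeholder for missing chunks
    for e in sorted(entries, key=lambda e: e["offset"]):
        chunk = e["content"]
        buf[e["offset"]:e["offset"] + len(chunk)] = chunk
    return "".join(buf)
```

Sorting by offset makes the result independent of delivery order, which matters because entries of one log are not guaranteed to arrive in sequence.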
Resource usage (JOB_RESOURCE_QUOTA)
Field | Description |
messageType | The message type. The value is fixed to JOB_RESOURCE_QUOTA. |
namespaceTotalCpuMemory | The total number of Compute Units (CUs) in the namespace. |
namespaceTotalCpu | The total number of CUs in the namespace. |
namespaceTotalMemory | The total amount of memory in the namespace. |
namespaceUsedCpuMemory | The number of consumed CUs in the namespace. |
namespaceUsedCpu | The number of consumed CUs in the namespace. |
namespaceUsedMemory | The amount of memory used in the namespace. |
resourceQueueName | The queue name. |
resourceQueueTotalCpuMemory | The total number of CUs in the queue. |
resourceQueueTotalCpu | The total number of CUs in the queue. |
resourceQueueTotalMemory | The total amount of memory in the queue. |
resourceQueueUsedCpuMemory | The number of consumed CUs in the queue. |
resourceQueueUsedCpu | The number of consumed CUs in the queue. |
resourceQueueUsedMemory | The amount of memory used in the queue. |
workspace | The workspace ID. |
namespace | The namespace name. |
messageId | The message ID. |
timestamp | The timestamp. |
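Because each quota message carries both total and used CU counts at namespace and queue level, utilization percentages fall out directly. A minimal sketch, using the field names from the table above on an assumed message dict:

```python
# Minimal sketch: derive CPU utilization percentages from the quota
# fields above, at namespace and queue level. Division by zero is
# guarded for namespaces/queues that report no capacity.

def cpu_utilization(msg):
    """Return (namespace %, queue %) CPU utilization for one message."""
    def pct(used, total):
        return round(100.0 * used / total, 1) if total else 0.0
    return (
        pct(msg["namespaceUsedCpu"], msg["namespaceTotalCpu"]),
        pct(msg["resourceQueueUsedCpu"], msg["resourceQueueTotalCpu"]),
    )
```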
Full job event delivery (JOB_EVENT)
Field | Description |
messageType | The message type. The value is fixed to JOB_EVENT. |
deploymentId | The job deployment ID. |
deploymentName | The job deployment name. |
jobId | The job instance ID. |
tag | The job tag. This field is empty if no tag is configured. |
eventId | The event ID. |
eventName | The event name. |
content | The details of the job event. |
workspace | The workspace ID. |
namespace | The namespace name. |
messageId | The message ID. |
timestamp | The timestamp. |
Job resource consumption (JOB_RESOURCE_USAGE)
Field | Description |
messageType | The message type. The value is fixed to JOB_RESOURCE_USAGE. |
deploymentId | The job deployment ID. |
deploymentName | The job deployment name. |
jobId | The job instance ID. |
tag | The job tag. This field is empty if no tag is configured. |
jobUsedCpu | The number of CUs used by the job. |
jobUsedMemory | The amount of memory used by the job. |
workspace | The workspace ID. |
namespace | The namespace name. |
messageId | The message ID. |
timestamp | The timestamp. |
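Since JOB_RESOURCE_USAGE messages arrive every 10 minutes for each running streaming job, a downstream consumer can use them to flag heavy jobs. This is a hypothetical sketch: the CU threshold and the message dicts are assumptions, while the field names come from the table above.

```python
# Minimal sketch: flag running jobs whose reported CU usage exceeds a
# threshold, based on the JOB_RESOURCE_USAGE fields described above.
# Threshold value and input messages are illustrative assumptions.

def heavy_jobs(messages, cu_threshold):
    """Return sorted deployment names whose jobUsedCpu exceeds cu_threshold."""
    return sorted(
        {m["deploymentName"] for m in messages
         if m["messageType"] == "JOB_RESOURCE_USAGE"
         and m["jobUsedCpu"] > cu_threshold}
    )
```

Deduplicating by deploymentName avoids flagging the same job once per 10-minute report.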
References
To configure logs for a single job, see Configure job log output.
To view logs in the development console, see View startup and operational logs, View job running events, View exception logs, and View historical job instance logs.