
Realtime Compute for Apache Flink: Configure log export

Last Updated: Mar 26, 2026

Realtime Compute for Apache Flink lets you route job logs to Object Storage Service (OSS), Simple Log Service (SLS), or Kafka, giving you a durable, queryable log store outside the Flink console.

Before you begin

  • After saving log export settings, restart the job to apply the changes.

  • If Allow Log Archives remains on, job logs continue to be stored in the OSS bucket associated with the workspace. Turn it off only if you no longer need to view logs in the console.


  • You can reference namespace variables anywhere in your log configuration by using the ${secret_values.xxxx} format, as shown in the example below.
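
    For example, if you created a namespace variable named accessKeySecret, you can reference it in an appender property like this (this line appears verbatim in the SLS XML template later in this topic):

      <Property name="accessKeySecret">${secret_values.accessKeySecret}</Property>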

Configure log export for a single job

Via the console

  1. Open the log settings for the deployment.

    1. Log on to the Realtime Compute for Apache Flink console.

    2. Find the target workspace and click Console in the Actions column.

    3. In the left-side navigation pane, choose O&M > Deployments, and select the target deployment.

    4. On the Configuration tab, find the Logging section and click Edit in the upper-right corner.

    5. Set Logging Profile to Custom Template.

  2. Click Apply with System Profiles > default.

  3. Click Add Appender and select a storage system.

  4. Fill in the log output settings for your chosen storage system. To send logs at different levels to different storage systems, see Configure log filter rules based on log levels.

    Export logs to SLS

    image

    Parameter Description
    name A custom name for this appender.
    type Fixed to SLS.
    pattern The log format. Default: %d{yyyy-MM-dd HH:mm:ss,SSS}{GMT+8} %-5p %-60c %x - %m%n, which produces entries like 2024-10-01 14:23:45,678{GMT+8} INFO com.example.MyClass - This is a test log message.
    flushIntervalSeconds How often (in seconds) logs are flushed to SLS.
    flushIntervalEventCount Number of log entries per flush batch. Logs are flushed when either flushIntervalEventCount or flushIntervalSeconds is reached, whichever comes first.
    authenticationMode AccessKey or SLS Token. SLS Token applies to intra-region SLS projects and requires Ververica Runtime (VVR) 11.5 or later.
    project Your SLS project name.
    logStore The target Logstore name.
    endpoint The private endpoint of your SLS project. See Endpoints.
    accessKeyId AccessKey ID used to access the SLS project. See Obtain an AccessKey pair. Important: store credentials as namespace variables instead of entering them in plaintext.
    accessKeySecret AccessKey secret corresponding to the AccessKey ID above.
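
    For reference, each conversion specifier in the default pattern maps to the sample entry above as follows (the class name and message are illustrative):

      %d{yyyy-MM-dd HH:mm:ss,SSS}{GMT+8} -> 2024-10-01 14:23:45,678{GMT+8} (timestamp, GMT+8 offset)
      %-5p                               -> INFO (log level, left-aligned to 5 characters)
      %-60c                              -> com.example.MyClass (logger name, left-aligned to 60 characters)
      %x                                 -> thread context (NDC), often empty
      %m%n                               -> the log message followed by a line break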

    Cross-account access: To export logs to an SLS project owned by a different Alibaba Cloud account, use that account to create a custom RAM policy and attach it to the RAM role used by the account managing Realtime Compute for Apache Flink resources. Choose the policy scope that fits your needs:

    • Access all SLS resources:

      {
        "Version": "1",
        "Statement": [
          {
            "Effect": "Allow",
            "Action": [
              "log:Get*",
              "log:PostLogStoreLogs"
            ],
            "Resource": "*"
          }
        ]
      }

    • Access a specific SLS resource:

      {
        "Version": "1",
        "Statement": [
          {
            "Effect": "Allow",
            "Action": [
              "log:PostLogStoreLogs",
              "log:GetLogStore"
            ],
            "Resource": "acs:log:ap-southeast-1:152940222687****:project/test-vvp-sls/logstore/test-ltest"
          }
        ]
      }

    Export logs to OSS


    Parameter Description
    name A custom name for this appender.
    type Fixed to OSS.
    pattern The log format. Default: %d{yyyy-MM-dd HH:mm:ss,SSS}{GMT+8} %-5p %-60c %x - %m%n.
    baseUri Your OSS bucket name.
    endpoint The region-specific internal endpoint of OSS. See Regions and endpoints. To find it in the OSS console: click your bucket name, go to Overview in the middle navigation pane, then copy the Endpoint value in the Access from ECS over the VPC (internal network) row.
    accessKeyId AccessKey ID for cross-account OSS access; skip if the OSS bucket is in the same account. See Obtain an AccessKey pair. Important: store credentials as namespace variables instead of entering them in plaintext.
    secretAccessKey AccessKey secret for cross-account access. Skip if the OSS bucket is in the same account.
    flushIntervalSeconds How often (in seconds) logs are written to OSS.
    flushIntervalEventCount Number of log entries per write batch. Logs are written when either flushIntervalEventCount or flushIntervalSeconds is reached, whichever comes first.
    rollingBytes Maximum size of a single log file. A new file starts when this limit is reached.

    Export logs to Kafka


    Kafka clusters with Kerberos authentication enabled are not supported.

    Prerequisite: Explicitly specify the KafkaAppender class path so that Realtime Compute for Apache Flink can load the logging plug-in. Add it as follows:

    1. On the deployment details page, select the Configuration tab.

    2. In the Parameters section, click Edit.

    3. Add the following to the Other Configuration field and save:

       plugin.classloader.parent-first-patterns.additional: com.ververica.platform.logging.appender

    After completing the prerequisite, configure the log output parameters:

    Parameter Description
    name A custom name for this appender.
    type Fixed to KafkaVVP.
    pattern The log format. Default: %d{yyyy-MM-dd HH:mm:ss,SSS}{GMT+8} %-5p %-60c %x - %m%n.
    bootstrap.servers Endpoints of the Kafka brokers.
    acks Number of partition replicas that must acknowledge a message before the producer considers it written. See acks.
    buffer.memory Producer buffer size, in bytes.
    retries Maximum number of retries after a message send failure.
    compression.type Compression algorithm. Valid values: none, gzip, snappy, lz4, zstd.
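
    For orientation, these settings correspond to Property elements in the XML template shown later in this topic. Here is a minimal sketch with illustrative values; the topic name, broker addresses, and all values are placeholders, not recommendations:

    <Appender type="KafkaVVP" name="KafkaVVPAppender" topic="flink-job-logs">
      <Layout type="PatternLayout" pattern="%d{yyyy-MM-dd HH:mm:ss,SSS}{GMT+8} %-5p %-60c %x - %m%n"/>
      <Property name="bootstrap.servers">broker1.example.com:9092,broker2.example.com:9092</Property>
      <!-- "all": wait for every in-sync replica to acknowledge each record -->
      <Property name="acks">all</Property>
      <!-- 32 MB producer buffer -->
      <Property name="buffer.memory">33554432</Property>
      <Property name="retries">3</Property>
      <Property name="compression.type">lz4</Property>
    </Appender>
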
  5. Click Save.

  6. Start the deployment.

Via XML

  1. Open the XML editor for the deployment.

    1. Log on to the Realtime Compute for Apache Flink console.

    2. Find the target workspace and click Console in the Actions column.

    3. In the left-side navigation pane, click O&M > Deployments, then click the target deployment name.

    4. On the Configuration tab, find the Logging section and click Edit in the upper-right corner.

    5. Set Logging Profile to Custom Template.

    6. Click Edit XML.

  2. Paste the XML template for your storage system and replace the placeholder values. To send logs at different levels to different storage systems, see Configure log filter rules based on log levels.

    Export logs to OSS

    <?xml version="1.0" encoding="UTF-8"?>
    <Configuration xmlns="http://logging.apache.org/log4j/2.0/config"
    strict="true" packages="com.ververica.platform.logging.appender" status="WARN">
      <Appenders>
        <Appender name="StdOut" type="Console">
          <Layout pattern="%d{yyyy-MM-dd HH:mm:ss,SSS}{GMT+8} %-5p %-60c %x - %m%n" type="PatternLayout" charset="UTF-8"/>
        </Appender>
        <Appender name="RollingFile" type="RollingFile" fileName="${sys:log.file}" filePattern="${sys:log.file}.%i">
          <Layout pattern="%d{yyyy-MM-dd HH:mm:ss,SSS}{GMT+8} %-5p %-60c %x - %m%n" type="PatternLayout" charset="UTF-8"/>
          <Policies>
            <SizeBasedTriggeringPolicy size="20 MB"/>
          </Policies>
          <DefaultRolloverStrategy max="4"/>
        </Appender>
        <Appender name="OSS" type="OSS">
          <Layout pattern="%d{yyyy-MM-dd HH:mm:ss,SSS}{GMT+8} %-5p %-60c %x - %m%n" type="PatternLayout" charset="UTF-8"/>
    
      <!-- The final effective log path is: ${baseUri}/logs/${namespace}/${deploymentId}/${jobId}/ -->
          <Property name="namespace">{{ namespace }}</Property> <!-- Do not modify this line -->
          <Property name="baseUri">oss://YOUR-BUCKET-NAME/</Property>
          <Property name="endpoint">https://YOUR-ENDPOINT</Property>
          <Property name="accessKeyId">${secret_values.accessKeyId}</Property>
          <Property name="secretAccessKey">${secret_values.accessKeySecret}</Property>
          <Property name="flushIntervalSeconds">10</Property>
          <Property name="flushIntervalEventCount">100</Property>
          <Property name="rollingBytes">10485760</Property>
        </Appender>
       <Appender name="StdOutErrConsoleAppender" type="Console">
         <Layout pattern="%m" type="PatternLayout" charset="UTF-8"/>
       </Appender>
       <Appender name="StdOutFileAppender" type="RollingFile" fileName="${sys:stdout.file}" filePattern="${sys:stdout.file}.%i">
         <Layout pattern="%m" type="PatternLayout" charset="UTF-8"/>
         <Policies>
         <SizeBasedTriggeringPolicy size="1 GB"/>
         </Policies>
         <DefaultRolloverStrategy max="2"/>
       </Appender>
       <Appender name="StdErrFileAppender" type="RollingFile" fileName="${sys:stderr.file}" filePattern="${sys:stderr.file}.%i">
         <Layout pattern="%m" type="PatternLayout" charset="UTF-8"/>
         <Policies>
         <SizeBasedTriggeringPolicy size="1 GB"/>
         </Policies>
         <DefaultRolloverStrategy max="2"/>
       </Appender>
      </Appenders>
      <Loggers>
        <Logger level="INFO" name="org.apache.hadoop"/>
        <Logger level="INFO" name="org.apache.kafka"/>
        <Logger level="INFO" name="org.apache.zookeeper"/>
        <Logger level="INFO" name="akka"/>
        <Logger level="ERROR" name="org.jboss.netty.channel.DefaultChannelPipeline"/>
        <Logger level="OFF" name="org.apache.flink.runtime.rest.handler.job.JobDetailsHandler"/>
        <Logger level="ERROR" name="org.apache.flink.fs.osshadoop.shaded.com.aliyun.oss"/>
      <Logger level="INFO" name="StdOutErrRedirector.StdOut" additivity="false">
        <AppenderRef ref="StdOutFileAppender"/>
        <AppenderRef ref="StdOutErrConsoleAppender"/>
      </Logger>
      <Logger level="INFO" name="StdOutErrRedirector.StdErr" additivity="false">
        <AppenderRef ref="StdErrFileAppender"/>
        <AppenderRef ref="StdOutErrConsoleAppender"/>
      </Logger>
        {%- for name, level in userConfiguredLoggers -%}
          <Logger level="{{ level }}" name="{{ name }}"/>
        {%- endfor -%}
        <Root level="{{ rootLoggerLogLevel }}">
          <AppenderRef ref="StdOut"/>
          <AppenderRef ref="RollingFile"/>
          <AppenderRef ref="OSS"/>
        </Root>
      </Loggers>
    </Configuration>
    Do not modify the Twig template variables (namespace, deploymentId, jobId, deploymentName). Changing them causes startup errors.
    Placeholder Description
    YOUR-BUCKET-NAME Your OSS bucket name.
    YOUR-ENDPOINT The region-specific internal endpoint of OSS. See Regions and endpoints. To find it in the OSS console: click your bucket, go to Overview, then copy the Endpoint value in the Access from ECS over the VPC (internal network) row.
    YOUR-OSS-ACCESSKEYID AccessKey ID for cross-account OSS access; skip if the OSS bucket is in the same account. See Obtain an AccessKey pair. Important: use namespace variables instead of plaintext credentials.
    YOUR-OSS-ACCESSKEYSECRET AccessKey secret for cross-account access. Skip if the OSS bucket is in the same account.
    flushIntervalSeconds How often (in seconds) logs are written to OSS. Default: 10.
    flushIntervalEventCount Number of log entries per write batch. Logs are written when either this value or flushIntervalSeconds is reached, whichever comes first. Default: 100.
    rollingBytes Maximum size (bytes) of a single log file. Default: 10485760 (10 MB).
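
    For example, with a hypothetical baseUri of oss://my-bucket/, a job's logs land under a path of the form noted in the template comment above:

      oss://my-bucket/logs/${namespace}/${deploymentId}/${jobId}/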

    Export logs to SLS

    <?xml version="1.0" encoding="UTF-8"?>
    <Configuration xmlns="http://logging.apache.org/log4j/2.0/config"
    strict="true" packages="com.ververica.platform.logging.appender" status="WARN">
      <Appenders>
        <Appender name="StdOut" type="Console">
          <Layout pattern="%d{yyyy-MM-dd HH:mm:ss,SSS}{GMT+8} %-5p %-60c %x - %m%n" type="PatternLayout" charset="UTF-8"/>
        </Appender>
        <Appender name="RollingFile" type="RollingFile" fileName="${sys:log.file}" filePattern="${sys:log.file}.%i">
          <Layout pattern="%d{yyyy-MM-dd HH:mm:ss,SSS}{GMT+8} %-5p %-60c %x - %m%n" type="PatternLayout" charset="UTF-8"/>
          <Policies>
            <SizeBasedTriggeringPolicy size="5 MB"/>
          </Policies>
          <DefaultRolloverStrategy max="1"/>
        </Appender>
        <Appender name="SLS" type="SLS">
          <Layout pattern="%d{yyyy-MM-dd HH:mm:ss,SSS}{GMT+8} %-5p %-60c %x - %m%n" type="PatternLayout" charset="UTF-8"/>
    
          <Property name="namespace">{{ namespace }}</Property> <!-- Do not modify this line -->
          <Property name="project">YOUR-SLS-PROJECT</Property>
          <Property name="logStore">YOUR-SLS-LOGSTORE</Property>
          <Property name="endpoint">YOUR-SLS-ENDPOINT</Property>
          <Property name="accessKeyId">${secret_values.accessKeyId}</Property>
          <Property name="accessKeySecret">${secret_values.accessKeySecret}</Property>
          <Property name="topic">{{ namespace }}:{{ deploymentId }}:{{ jobId }}</Property>
          <Property name="deploymentName">{{ deploymentName }}</Property>
          <Property name="flushIntervalSeconds">10</Property>
          <Property name="flushIntervalEventCount">100</Property>
        </Appender>
       <Appender name="StdOutErrConsoleAppender" type="Console">
         <Layout pattern="%m" type="PatternLayout" charset="UTF-8"/>
       </Appender>
       <Appender name="StdOutFileAppender" type="RollingFile" fileName="${sys:stdout.file}" filePattern="${sys:stdout.file}.%i">
         <Layout pattern="%m" type="PatternLayout" charset="UTF-8"/>
         <Policies>
         <SizeBasedTriggeringPolicy size="1 GB"/>
         </Policies>
         <DefaultRolloverStrategy max="2"/>
       </Appender>
       <Appender name="StdErrFileAppender" type="RollingFile" fileName="${sys:stderr.file}" filePattern="${sys:stderr.file}.%i">
         <Layout pattern="%m" type="PatternLayout" charset="UTF-8"/>
         <Policies>
         <SizeBasedTriggeringPolicy size="1 GB"/>
         </Policies>
         <DefaultRolloverStrategy max="2"/>
       </Appender>
      </Appenders>
      <Loggers>
        <Logger level="INFO" name="org.apache.hadoop"/>
        <Logger level="INFO" name="org.apache.kafka"/>
        <Logger level="INFO" name="org.apache.zookeeper"/>
        <Logger level="INFO" name="akka"/>
        <Logger level="ERROR" name="org.jboss.netty.channel.DefaultChannelPipeline"/>
        <Logger level="OFF" name="org.apache.flink.runtime.rest.handler.job.JobDetailsHandler"/>
        <Logger level="ERROR" name="org.apache.flink.fs.osshadoop.shaded.com.aliyun.oss"/>
      <Logger level="INFO" name="StdOutErrRedirector.StdOut" additivity="false">
        <AppenderRef ref="StdOutFileAppender"/>
        <AppenderRef ref="StdOutErrConsoleAppender"/>
      </Logger>
      <Logger level="INFO" name="StdOutErrRedirector.StdErr" additivity="false">
        <AppenderRef ref="StdErrFileAppender"/>
        <AppenderRef ref="StdOutErrConsoleAppender"/>
      </Logger>
        {%- for name, level in userConfiguredLoggers -%}
          <Logger level="{{ level }}" name="{{ name }}"/>
        {%- endfor -%}
        <Root level="{{ rootLoggerLogLevel }}">
          <AppenderRef ref="StdOut"/>
          <AppenderRef ref="RollingFile"/>
          <AppenderRef ref="SLS"/>
        </Root>
      </Loggers>
    </Configuration>
    Placeholder Description
    YOUR-SLS-PROJECT Your SLS project name.
    YOUR-SLS-LOGSTORE The target Logstore name.
    YOUR-SLS-ENDPOINT The private endpoint of your SLS project. See Endpoints.
    YOUR-SLS-ACCESSKEYID AccessKey ID for the SLS project. See Obtain an AccessKey pair. Important: use namespace variables instead of plaintext credentials. For cross-account access, create a custom RAM policy; see the cross-account access details in the console section above.
    YOUR-SLS-ACCESSKEYSECRET AccessKey secret corresponding to the ID above.
    flushIntervalSeconds How often (in seconds) logs are flushed to SLS. Default: 10.
    flushIntervalEventCount Number of log entries per flush batch. Logs are flushed when either this value or flushIntervalSeconds is reached, whichever comes first. Default: 100.
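
    For example, the topic property in the template renders from the Twig variables, so for a hypothetical namespace, deployment, and job it resolves to a value like:

      mynamespace:dep-1234:job-5678

    You can then filter entries in the Logstore by this value using the __topic__ reserved field.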

    Export logs to Kafka

    Kafka clusters with Kerberos authentication enabled are not supported.

    Prerequisite: Add the KafkaAppender class path to Other Configuration before applying this template. Do this once per namespace or per job:

    • For all jobs in the namespace: go to O&M > Configurations, paste the snippet below into Other Configuration, and save.

    • For a single job: on the deployment details page, go to Configuration > Parameters > Edit, add the snippet to Other Configuration, and save.

    plugin.classloader.parent-first-patterns.additional: com.ververica.platform.logging.appender

    Then paste the following XML and replace the placeholders:

    <?xml version="1.0" encoding="UTF-8"?>
    <Configuration xmlns="http://logging.apache.org/log4j/2.0/config"
    strict="true" packages="com.ververica.platform.logging.appender" status="WARN">
      <Appenders>
        <Appender name="StdOut" type="Console">
          <Layout pattern="%d{yyyy-MM-dd HH:mm:ss,SSS}{GMT+8} %-5p %-60c %x - %m%n" type="PatternLayout"/>
        </Appender>
        <Appender name="RollingFile" type="RollingFile" fileName="${sys:log.file}" filePattern="${sys:log.file}.%i">
          <Layout pattern="%d{yyyy-MM-dd HH:mm:ss,SSS}{GMT+8} %-5p %-60c %x - %m%n" type="PatternLayout"/>
          <Policies>
            <SizeBasedTriggeringPolicy size="20 MB"/>
          </Policies>
          <DefaultRolloverStrategy max="4"/>
        </Appender>
        <Appender type="KafkaVVP" name="KafkaVVPAppender" topic="YOUR-TOPIC-NAME">
            <Layout type="PatternLayout" pattern="%d{yyyy-MM-dd HH:mm:ss,SSS}{GMT+8} %-5p %-60c %x - %m%n"/>
            <Property name="bootstrap.servers">YOUR-KAFKA-BOOTSTRAP-SERVERS</Property>
             <Property name="acks">YOUR-ACKS-VALUE</Property>
             <Property name="buffer.memory">YOUR-BUFFER-MEMORY-SIZE</Property>
              <Property name="retries">YOUR-RETRIES-NUMBER</Property>
             <Property name="compression.type">YOUR-COMPRESSION-TYPE</Property>
        </Appender>
        <Appender type="Async" name="AsyncAppender">
            <AppenderRef ref="KafkaVVPAppender"/>
        </Appender>
      </Appenders>
      <Loggers>
        <Logger level="INFO" name="org.apache.hadoop"/>
        <Logger level="INFO" name="org.apache.kafka"/>
        <Logger level="INFO" name="org.apache.zookeeper"/>
        <Logger level="INFO" name="akka"/>
        <Logger level="ERROR" name="org.jboss.netty.channel.DefaultChannelPipeline"/>
        <Logger level="OFF" name="org.apache.flink.runtime.rest.handler.job.JobDetailsHandler"/>
        {%- for name, level in userConfiguredLoggers -%}
          <Logger level="{{ level }}" name="{{ name }}"/>
        {%- endfor -%}
        <Root level="{{ rootLoggerLogLevel }}">
          <AppenderRef ref="StdOut"/>
          <AppenderRef ref="RollingFile"/>
          <AppenderRef ref="AsyncAppender"/>
        </Root>
      </Loggers>
    </Configuration>
    Placeholder Description
    YOUR-TOPIC-NAME The target Kafka topic name.
    YOUR-KAFKA-BOOTSTRAP-SERVERS Endpoints of the Kafka brokers.
    YOUR-ACKS-VALUE Number of partition replicas that must acknowledge a message before the producer considers it written. See acks.
    YOUR-BUFFER-MEMORY-SIZE Producer buffer size, in bytes.
    YOUR-RETRIES-NUMBER Maximum number of retries after a message send failure.
    YOUR-COMPRESSION-TYPE Compression algorithm. Valid values: none, gzip, snappy, lz4, zstd.

    You can include any other Apache Kafka producer parameter in the configuration as an additional Property element; see the Apache Kafka documentation and the example below.
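
    For instance, to let the producer batch log records for up to 100 milliseconds before sending (linger.ms is a standard Kafka producer parameter; the value here is illustrative), add:

      <Property name="linger.ms">100</Property>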

  3. Click Save.

  4. Start the deployment.

Configure log export for all jobs

This applies log export settings to every job in a namespace.

Important

After saving, all jobs in the namespace send logs to the configured storage system (OSS, SLS, or Kafka). If you plan to use Kafka, add the KafkaAppender class path first:

  1. Go to O&M > Configurations.

  2. In the Other Configuration field, paste the following and save:

   plugin.classloader.parent-first-patterns.additional: com.ververica.platform.logging.appender

  1. Open the namespace configuration page.

    1. Log on to the Realtime Compute for Apache Flink console.

    2. Find the target workspace and click Console in the Actions column.

    3. In the top navigation bar, select the target namespace.

    4. In the left-side navigation pane, click O&M > Configurations.

    5. On the Deployment Defaults tab, select Defaults for Stream or Defaults for Batch.

    6. In the Logging section, set Logging Profile to Custom Template.

  2. Configure the log export settings using the XML templates from the Via XML section above.

  3. Click Save Changes.

Configure log filter rules based on log levels

Use ThresholdFilter from Log4j2 to route logs at different levels to different storage systems. For example, send INFO-and-above logs to the development console and ERROR-and-above logs to SLS. This approach:

  • Reduces noise: Only relevant log levels reach each storage system.

  • Lowers cost: Less data ingested into premium storage like SLS.

  • Simplifies management: Each system receives the log tier it is best suited for.

Procedure:

  1. In the Logging section, set Logging Profile to Custom Template.

  2. Paste and adapt the following XML. This example sends INFO+ logs to the console and ERROR+ logs to SLS:

    • StdOut appender: The ThresholdFilter accepts INFO+ logs and denies everything below.

    • SLS appender: The ThresholdFilter accepts ERROR+ logs only. Replace YOUR_NAMESPACE, YOUR_SLS_PROJECT, and other placeholders with your actual values. For the full list of SLS appender properties, see Export logs to SLS.

    • StdOutLogger and SLSLogger: Each logger routes to its dedicated appender based on the configured level.

    • Root logger: SLS is excluded to prevent duplicate log entries when a specific logger already routes to it.

    If you use a custom appender as the SLS appender and its type is not SLS, verify that the appender type is supported by Realtime Compute for Apache Flink and implements the connection logic for Simple Log Service.
    <?xml version="1.0" encoding="UTF-8"?>
    <Configuration xmlns="http://logging.apache.org/log4j/2.0/config" strict="true" status="WARN">
      <Appenders>
        <!-- Console appender: INFO level and above -->
        <Appender name="StdOut" type="Console">
          <ThresholdFilter level="INFO" onMatch="ACCEPT" onMismatch="DENY"/>
          <Layout type="PatternLayout" pattern="%d{yyyy-MM-dd HH:mm:ss,SSS}{GMT+8} %-5p %-60c %x - %m%n" charset="UTF-8"/>
        </Appender>
    
        <!-- RollingFile appender: logs all levels per Root logger level -->
        <Appender name="RollingFile" type="RollingFile">
          <!-- Configuration remains unchanged -->
          <!-- ... -->
        </Appender>
    
        <!-- SLS appender: ERROR level and above -->
        <Appender name="SLS" type="SLS">
          <ThresholdFilter level="ERROR" onMatch="ACCEPT" onMismatch="DENY"/>
          <Layout type="PatternLayout" pattern="%d{yyyy-MM-dd HH:mm:ss,SSS}{GMT+8} %-5p %-60c %x - %m%n" charset="UTF-8"/>
          <!-- SLS-specific properties -->
          <Property name="namespace">YOUR_NAMESPACE</Property>
          <Property name="project">YOUR_SLS_PROJECT</Property>
          <Property name="logStore">YOUR_SLS_LOGSTORE</Property>
          <Property name="endpoint">YOUR_SLS_ENDPOINT</Property>
          <!-- Access credentials and other properties -->
          <!-- ... -->
        </Appender>
    
        <!-- Other appenders remain unchanged -->
        <!-- ... -->
      </Appenders>
    
      <Loggers>
        <Logger name="StdOutLogger" level="INFO" additivity="false">
          <AppenderRef ref="StdOut"/>
        </Logger>
    
        <Logger name="SLSLogger" level="ERROR" additivity="false">
          <AppenderRef ref="SLS"/>
        </Logger>
    
        <!-- Other loggers with their specific configurations -->
        <!-- ... -->
    
        <!-- Root logger: omit SLS to avoid duplicate logging -->
        <Root level="INFO">
          <AppenderRef ref="StdOut"/>
          <AppenderRef ref="RollingFile"/>
        </Root>
      </Loggers>
    </Configuration>
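
    Because the ThresholdFilter sits on the SLS appender itself, it applies no matter which logger references that appender. If you instead want ERROR-and-above entries from all loggers (not only SLSLogger) to reach SLS, one possible variant is to reference SLS from the Root logger as well; the appender-level filter still keeps lower levels out:

    <!-- Variant (sketch): route ERROR+ from every logger to SLS.
         The ThresholdFilter on the SLS appender rejects anything below ERROR. -->
    <Root level="INFO">
      <AppenderRef ref="StdOut"/>
      <AppenderRef ref="RollingFile"/>
      <AppenderRef ref="SLS"/>
    </Root>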

    For more information about Log4j2 configuration, see Apache Log4j 2.
