
Simple Log Service: Ship logs to a SIEM system over Syslog

Last Updated: Aug 04, 2023

Syslog is a widely used logging standard that is applicable to most security information and event management (SIEM) systems, such as IBM QRadar and HP ArcSight. This topic describes how to ship logs from Simple Log Service to a SIEM system over Syslog.

Background information

  • Syslog is defined in RFC 5424 and RFC 3164. RFC 3164 was published in 2001, and RFC 5424, published in 2009, supersedes it. We recommend that you use RFC 5424 because it is backward compatible with RFC 3164 and resolves many of its shortcomings. For more information, see RFC 5424 and RFC 3164.

  • Syslog over TCP/TLS: Syslog defines the standard format of log messages and can be transported over UDP or TCP. TCP provides more reliable delivery. RFC 5425 defines the use of Transport Layer Security (TLS) for the secure transport of Syslog messages. If your SIEM system supports TCP or TLS, we recommend that you send Syslog messages over TCP or TLS. For more information, see RFC 5425.

  • Syslog facility: identifies the type of program that generates a message, as defined in early UNIX systems. You can use user as the default facility. For more information, see Program components.

  • Syslog severity: the severity level defined for Syslog messages. You can assign a higher severity to logs that contain specific content based on your business requirements. The default value is info. For more information, see Log levels. For how the facility and severity are encoded in a Syslog message, see the sketch after this list.
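
The facility and severity are not sent as separate fields. RFC 5424 encodes them in the PRI field at the start of each message, where PRI = facility * 8 + severity. The following is a minimal, self-contained sketch of this encoding; the build_rfc5424_message helper and its default values are illustrative only and are not part of the shipping program below.

from datetime import datetime, timezone

FAC_USER = 1   # the "user" facility
SEV_INFO = 6   # the "info" severity

def build_rfc5424_message(msg, facility=FAC_USER, severity=SEV_INFO,
                          hostname="myhost", app_name="sync_data"):
    # PRI encodes facility and severity; user.info gives <14>.
    pri = facility * 8 + severity
    timestamp = datetime.now(timezone.utc).isoformat()
    # VERSION is 1; PROCID, MSGID, and STRUCTURED-DATA are the nil value (-).
    return "<{0}>1 {1} {2} {3} - - - {4}".format(pri, timestamp, hostname, app_name, msg)

print(build_rfc5424_message("field1=value1||field2=value2"))
# Example: <14>1 2023-08-04T00:00:00+00:00 myhost sync_data - - - field1=value1||field2=value2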

Note

The code examples in this topic are for reference only. For the latest code examples, visit GitHub.

Shipping process

We recommend that you write a program based on consumer groups in Simple Log Service and use the program to ship logs as Syslog messages over TCP or TLS to the SIEM system.

Write a main program

The following code shows the control logic of a main program. The example imports ConsumerWorker from aliyun-log-python-sdk; get_option() and SyncData are defined in the following sections:

from aliyun.log.consumer import ConsumerWorker

def main():
    option, settings = get_option()

    logger.info("*** start to consume data...")
    worker = ConsumerWorker(SyncData, option, args=(settings,))
    worker.start(join=True)

if __name__ == '__main__':
    main()
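
The call worker.start(join=True) blocks until the process exits. If you want to stop the consumer gracefully, for example on Ctrl+C, you can use a variant such as the following sketch. It assumes the non-blocking start() and the shutdown() method of ConsumerWorker in aliyun-log-python-sdk:

import time

def main():
    option, settings = get_option()

    logger.info("*** start to consume data...")
    worker = ConsumerWorker(SyncData, option, args=(settings,))
    worker.start()  # non-blocking; consumers run in background threads
    try:
        while True:
            time.sleep(60)
    except KeyboardInterrupt:
        logger.info("*** shutting down...")
        worker.shutdown()  # stop all consumers and save checkpoints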

Configure the program

  • Configure the following information:

    • Log file of the program: used for subsequent testing and for diagnosing potential issues.

    • Basic options: include consumer group settings and connection settings of Simple Log Service.

    • Advanced options for consumer groups: used for performance tuning. We recommend that you do not change these settings.

    • Parameters and options for the Syslog server of the SIEM system

  • Code example

    Read the code comments in the following example and modify the parameters based on your business requirements.

    #encoding: utf8
    import os
    import logging
    from logging.handlers import RotatingFileHandler
    from multiprocessing import current_process
    
    # The consumer library of aliyun-log-python-sdk and the syslogclient module
    # from the GitHub code examples.
    from aliyun.log.consumer import LogHubConfig, CursorPosition
    import syslogclient
    
    root = logging.getLogger()
    handler = RotatingFileHandler("{0}_{1}.log".format(os.path.basename(__file__), current_process().pid), maxBytes=100*1024*1024, backupCount=5)
    handler.setFormatter(logging.Formatter(fmt='[%(asctime)s] - [%(threadName)s] - {%(module)s:%(funcName)s:%(lineno)d} %(levelname)s - %(message)s', datefmt='%Y-%m-%d %H:%M:%S'))
    root.setLevel(logging.INFO)
    root.addHandler(handler)
    root.addHandler(logging.StreamHandler())
    
    logger = logging.getLogger(__name__)
    
    def get_option():
        ##########################
        # Basic options
        ##########################
    
        # Obtain parameters and options for Log Service from environment variables. 
        endpoint = os.environ.get('SLS_ENDPOINT', '')
        accessKeyId = os.environ.get('SLS_AK_ID', '')
        accessKey = os.environ.get('SLS_AK_KEY', '')
        project = os.environ.get('SLS_PROJECT', '')
        logstore = os.environ.get('SLS_LOGSTORE', '')
        consumer_group = os.environ.get('SLS_CG', '')
    
        # The starting point of data consumption. The first time that you run the program, the starting point is specified by this parameter. The next time you run the program, the consumption starts from the last consumption checkpoint. 
        # You can set the parameter to begin, end, or a time in the ISO 8601 standard. 
        cursor_start_time = "2018-12-26 00:00:00"
    
        ##########################
        # Advanced options
        ##########################
    
        # We recommend that you do not modify the consumer name, especially when concurrent consumption is required. 
        consumer_name = "{0}-{1}".format(consumer_group, current_process().pid)
    
        # The heartbeat interval. If the server does not receive a heartbeat for a specific shard for two consecutive intervals, the consumer is considered disconnected. In this case, the server allocates the task to another consumer. 
        # If the network performance is poor, we recommend that you specify a larger interval. 
        heartbeat_interval = 20
    
        # The maximum interval between two data consumption processes. If data is generated at a fast speed, you do not need to adjust the parameter. 
        data_fetch_interval = 1
    
        # Create a consumer group that contains the consumer.
        option = LogHubConfig(endpoint, accessKeyId, accessKey, project, logstore, consumer_group, consumer_name,
                              cursor_position=CursorPosition.SPECIAL_TIMER_CURSOR,
                              cursor_start_time=cursor_start_time,
                              heartbeat_interval=heartbeat_interval,
                              data_fetch_interval=data_fetch_interval)
    
        # syslog options
        settings = {
            "host": "1.2.3.4",  # Required. The IP address or host name of the Syslog server. 
            "port": 514,        # Required. The port number. 
            "protocol": "tcp",  # Required. Valid values: tcp, udp, and tls. The tls value is only applicable to Python 3. 
            "sep": "||",        # Required. The separator that is used to separate key-value pairs. In this example, the separator is two consecutive vertical bars (||). 
            "cert_path": None,  # Optional. The path where the TLS certificate is stored. 
            "timeout": 120,     # Optional. The timeout period. Default value: 120. Unit: seconds. 
            "facility": syslogclient.FAC_USER,  # Optional. See the syslogclient.FAC_* constants for other facility values. 
            "severity": syslogclient.SEV_INFO,  # Optional. See the syslogclient.SEV_* constants for other severity values. 
            "hostname": None,   # Optional. The machine name. The default value is the name of the local machine. 
            "tag": None         # Optional. The tag. The default value is a hyphen (-). 
        }
    
        return option, settings
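
    Before you start the consumer, you can verify that the Syslog endpoint in settings is reachable. The following is a minimal sketch that uses only the Python standard library and the get_option() function above; it checks TCP connectivity only, because UDP is connectionless and cannot be verified this way.

    import socket

    option, settings = get_option()
    if settings["protocol"] in ("tcp", "tls"):
        # create_connection raises an exception if the server cannot be reached.
        with socket.create_connection((settings["host"], settings["port"]), timeout=settings["timeout"]):
            print("Syslog server {0}:{1} is reachable".format(settings["host"], settings["port"]))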

Consume and ship data

The following example shows how to collect data from Simple Log Service and ship the data to the Syslog server in the SIEM system. Read the code comments in the following example and modify the parameters based on your business requirements.

from datetime import datetime

import six

from aliyun.log.consumer import ConsumerProcessorBase
from aliyun.log import PullLogResponse
from syslogclient import SyslogClientRFC5424 as SyslogClient

class SyncData(ConsumerProcessorBase):
    """
    The consumer consumes data from Log Service and ships it to the Syslog server. 
    """
    def __init__(self, target_setting):
        """Initialize the Syslog client settings and test network connectivity."""
        super(SyncData, self).__init__()   # remember to call base's init

        assert target_setting, ValueError("You need to configure the settings of the remote target")
        assert isinstance(target_setting, dict), ValueError("The settings should be a dict that includes the necessary address and credentials.")

        self.option = target_setting
        self.protocol = self.option['protocol']
        self.timeout = int(self.option.get('timeout', 120))
        self.sep = self.option.get('sep', "||")
        self.host = self.option["host"]
        self.port = int(self.option.get('port', 514))
        self.cert_path=self.option.get('cert_path', None)

        # try connection
        with SyslogClient(self.host, self.port, proto=self.protocol, timeout=self.timeout, cert_path=self.cert_path) as client:
            pass

    def process(self, log_groups, check_point_tracker):
        logs = PullLogResponse.loggroups_to_flattern_list(log_groups, time_as_str=True, decode_bytes=True)
        logger.info("Get data from shard {0}, log count: {1}".format(self.shard_id, len(logs)))
        try:
            with SyslogClient(self.host, self.port, proto=self.protocol, timeout=self.timeout, cert_path=self.cert_path) as client:
                for log in logs:
                    # Put your sync code here to send to remote.
                    # the format of log is just a dict with example as below (Note, all strings are unicode):
                    #    Python2: {"__time__": "12312312", "__topic__": "topic", u"field1": u"value1", u"field2": u"value2"}
                    #    Python3: {"__time__": "12312312", "__topic__": "topic", "field1": "value1", "field2": "value2"}
                    # suppose we only care about audit log
                    timestamp = datetime.fromtimestamp(int(log[u'__time__']))
                    del log['__time__']

                    io = six.StringIO()
                    # Modify the formatted content based on your business requirements. The data is transmitted as key-value pairs that are separated by two consecutive vertical bars (||). 
                    for k, v in six.iteritems(log):
                        io.write("{0}{1}={2}".format(self.sep, k, v))

                    data = io.getvalue()

                    # Modify the facility and severity settings based on your business requirements. 
                    client.log(data, facility=self.option.get("facility", None), severity=self.option.get("severity", None), timestamp=timestamp, program=self.option.get("tag", None), hostname=self.option.get("hostname", None))

        except Exception as err:
            logger.error("Failed to send data to the remote Syslog server ({0}). Exception: {1}".format(self.option, err))

            # Add code to handle errors. For example, you can add the code to retry requests or report errors. 
            raise err

        logger.info("Completed sending data to the remote Syslog server")

        self.save_checkpoint(check_point_tracker)
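
The syslogclient module in the preceding examples comes from the GitHub code examples and is not part of the SDK. To illustrate what client.log(...) sends on the wire, the following is a simplified, TCP-only sketch with the same interface as used above; the real module also supports UDP, TLS, and additional options, so treat this as an assumption-laden stand-in rather than the actual implementation.

import socket
from datetime import datetime, timezone

FAC_USER = 1
SEV_INFO = 6

class SyslogClientRFC5424(object):
    """A simplified, TCP-only stand-in for the syslogclient module."""
    def __init__(self, server, port, proto='tcp', timeout=None, cert_path=None):
        self.socket = socket.create_connection((server, port), timeout=timeout)

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc_value, traceback):
        self.socket.close()

    def log(self, message, facility=None, severity=None, timestamp=None,
            hostname=None, program=None):
        # The PRI field encodes facility and severity: PRI = facility * 8 + severity.
        pri = (facility if facility is not None else FAC_USER) * 8 + \
              (severity if severity is not None else SEV_INFO)
        ts = (timestamp or datetime.now(timezone.utc)).isoformat()
        host = hostname or socket.gethostname()
        tag = program or "-"
        # RFC 5424 layout: <PRI>VERSION TIMESTAMP HOSTNAME APP-NAME PROCID MSGID SD MSG
        data = "<{0}>1 {1} {2} {3} - - - {4}\n".format(pri, ts, host, tag, message)
        self.socket.sendall(data.encode('utf-8'))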

Start the program

Run the following commands to configure the required environment variables and start a program named sync_data.py:

export SLS_ENDPOINT=<Endpoint of your region>
export SLS_AK_ID=<YOUR AK ID>
export SLS_AK_KEY=<YOUR AK KEY>
export SLS_PROJECT=<SLS Project Name>
export SLS_LOGSTORE=<SLS Logstore Name>
export SLS_CG=<Consumer group name, such as sync_data>

python3 sync_data.py

Limits

You can configure up to 30 consumer groups for each Logstore in Simple Log Service. If the system displays the ConsumerGroupQuotaExceed error message, we recommend that you log on to the Simple Log Service console and delete consumer groups that you no longer need.
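
You can also delete a consumer group by using the SDK. The following is a minimal sketch, assuming the delete_consumer_group method of LogClient in aliyun-log-python-sdk and the same environment variables as the shipping program; replace the placeholder with the name of the consumer group that you want to remove.

import os
from aliyun.log import LogClient

client = LogClient(os.environ['SLS_ENDPOINT'], os.environ['SLS_AK_ID'], os.environ['SLS_AK_KEY'])
# Deletes the consumer group that you no longer need.
client.delete_consumer_group(os.environ['SLS_PROJECT'], os.environ['SLS_LOGSTORE'],
                             '<consumer group to delete>')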

View and monitor data consumption

You can log on to the Simple Log Service console to view the data consumption status of a consumer group. For more information, see Step 2: View the status of a consumer group.

Concurrent consumption

To consume data concurrently, you can start the consumer group-based program multiple times. Each process runs as a separate consumer in the same consumer group.

nohup python3 sync_data.py &
nohup python3 sync_data.py &
nohup python3 sync_data.py &
...
Note

The name of each consumer must be unique within a consumer group. In this example, the consumer names are suffixed with process IDs, so each process joins the same consumer group as a separate consumer. Within a consumer group, the data of one shard can be consumed by only one consumer. Therefore, if a Logstore contains 10 shards, a maximum of 10 consumers in the consumer group can consume the data of all shards at the same time.

Throughput

Throughput was tested in the following scenario: Python 3 is used to run the program in the preceding example, the bandwidth and the receiving speed (for example, the receiving speed on Splunk) are not limited, and a single consumer consumes about 20% of the resources of a single CPU core. The test results indicate that a single consumer can consume raw logs at about 10 MB/s, which is approximately 0.9 TB of raw logs per day. If 10 consumers consume data at the same time, the aggregate consumption speed can reach about 100 MB/s while occupying about two CPU cores in total.

High availability

A consumer group stores checkpoints on the server. When one consumer stops consuming data, another consumer automatically takes over and continues consumption from the last checkpoint. You can start consumers on different machines so that if one machine stops or is damaged, a consumer on another machine takes over and resumes from the last checkpoint. To keep standby capacity, you can start more consumers than there are shards on different machines.