Use ELK in the container service

Posted: Oct 18, 2016 16:29
Logs are an important component of an IT system. They record system events and the time at which they occur, so we can troubleshoot faults and perform statistical analysis based on them. Logs are usually stored in local log files: to view them, you log in to the server and filter for keywords with grep or other tools. But when an application is deployed on multiple servers, this way of viewing logs becomes very inconvenient. To locate the log for a specific error, you would have to log in to every server and filter the files one after another. That is why centralized log storage came about: all the logs are collected in a log service, where you can view and search them in one place.
In a Docker environment, centralized log storage is even more important. Compared with the traditional O&M model, Docker usually relies on an orchestration system to manage containers. The mapping between containers and hosts is not fixed, and containers may be migrated between hosts at any time, so you can no longer view logs by logging in to a particular server; centralized logging becomes the only practical choice.
The container service integrates with Alibaba Cloud Log Service and can automatically collect container logs to Log Service through declarations. However, some users may prefer the combination of Elasticsearch, Logstash and Kibana, so in this article I will introduce how to use ELK in the container service.
Overall structure


We need to deploy an independent Logstash cluster. Logstash is heavy and resource-consuming, so it is not run on every server, let alone in every Docker container. To collect the container logs, we use syslog, Logspout and Filebeat. Of course, you may also use other collection methods.
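For example, Logspout can run as a single container on each application host, read the stdout of the other local containers through the Docker socket, and forward it to the Logstash TCP input. The command below is only a rough sketch; the address is a placeholder for the SLB that we create later in this article.

docker run -d --name logspout \
    -v /var/run/docker.sock:/var/run/docker.sock \
    gliderlabs/logspout \
    syslog+tcp://<LOGSTASH_ADDRESS>:5000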
To stay as close to a real scenario as possible, we create two clusters here: an ELK cluster where only ELK is deployed, and an application cluster for deploying applications.
Deploy ELK
Deploying ELK in the container service is very easy. You can use the following orchestration file to deploy it in one click.
Note: To enable other services to send logs to Logstash, we put an SLB (Server Load Balancer) in front of Logstash. First, create an SLB in the SLB console that listens on port 5000 and port 5044, with no backend servers added. Do not forget to replace the SLB ID in the orchestration file with your own.
version: '2'
services:
  elasticsearch:
    image: elasticsearch

  kibana:
    image: kibana
    environment:
      ELASTICSEARCH_URL: http://elasticsearch:9200/
    labels:
      aliyun.routing.port_5601: kibana
    links:
      - elasticsearch

  logstash:
    image: registry.cn-hangzhou.aliyuncs.com/acs-sample/logstash
    hostname: logstash
    ports:
      - 5044:5044
      - 5000:5000
    labels:
      aliyun.lb.port_5044: 'tcp://${SLB_ID}:5044' # create the SLB first
      aliyun.lb.port_5000: 'tcp://${SLB_ID}:5000'
    links:
      - elasticsearch


In this orchestration file, we use the official images for Elasticsearch and Kibana with no changes. Logstash needs a configuration file, so I built a custom image to hold it. The source code of the image is here: https://github.com/AliyunContainerService/demo-logstash. The Logstash configuration file is as follows.
input {
    beats {
        port => 5044
        type => beats
    }

    tcp {
        port => 5000
        type => syslog
    }

}

filter {
}

output {
    elasticsearch {
        hosts => ["elasticsearch:9200"]
    }

    stdout { codec => rubydebug }
}


This is a very simple Logstash configuration. We provide two inputs, Beats and syslog (TCP), listening on the external ports 5044 and 5000 respectively.
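Before connecting any application, you can push a test line into the TCP input by hand to verify the pipeline. This is only a sanity check; <SLB_IP> is a placeholder for the address of the SLB created earlier.

# Send a single test event through the SLB to the tcp input on port 5000
echo 'hello logstash' | nc <SLB_IP> 5000

The event should appear in Elasticsearch (and then in Kibana) a few seconds later, tagged with type syslog by the input above.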
Okay. Now we can try to access Kibana. The URL can be found in the route list of the application.


Kibana opens successfully, though with no logs yet. Next, let's figure out how to write some logs into Elasticsearch.
Collect logs
In Docker, standard logs are written to stdout, so we first demonstrate how to collect stdout logs into ELK. If your application writes to log files instead, you can use Filebeat directly (see the sketch at the end of this article). We use WordPress for the demonstration. The following is an orchestration template for WordPress; we create a wordpress application in the other (application) cluster.
version: '2'
services:
  mysql:
      image: mysql
      environment:
        - MYSQL_ROOT_PASSWORD=password

  wordpress:
      image: wordpress
      labels:
        aliyun.routing.port_80: wordpress
      links:
        - mysql:mysql
      environment:
        - WORDPRESS_DB_PASSWORD=password
      logging:
        driver: syslog
        options:
          syslog-address: 'tcp://${SLB_IP}:5000'
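
The logging section above uses Docker's syslog logging driver to send the container's stdout to the SLB address that fronts Logstash. Outside an orchestration template, the same effect can be achieved with a plain docker run; the address below is again a placeholder for your own SLB.

docker run -d \
    --log-driver syslog \
    --log-opt syslog-address=tcp://<SLB_IP>:5000 \
    wordpress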


After successful deployment, find the access address of wordpress and open the page.


Open the Kibana page and we can see some logs have been collected.
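As mentioned above, if your application writes logs to files instead of stdout, Filebeat can ship them to the Beats input we opened on port 5044. The following filebeat.yml is only a minimal sketch; the log path and the SLB address are assumptions that you need to adapt to your own application.

filebeat.prospectors:
- input_type: log
  paths:
    - /var/log/app/*.log        # assumed location of the application log files
output.logstash:
  hosts: ["<SLB_IP>:5044"]      # the Beats port exposed through the SLB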