This tutorial describes how to expose data through instrumentation points in ZooKeeper, capture the data by using Prometheus Monitoring of Application Real-Time Monitoring Service (ARMS), and display the data on the ARMS Prometheus Grafana dashboard, so that you can monitor ZooKeeper with ARMS Prometheus Monitoring.

Background information

The following figure shows the workflow.

Figure: How it works

Step 1: Start the JMX service

Enable the Java Management Extensions (JMX) service in ZooKeeper to acquire resource information.

  1. Add JMXPORT=8999 at line 44 of the /opt/zk/zookeeper-3.4.10/bin/zkServer.sh file, as shown in the following excerpt:
    if [ "x$JMXLOCALONLY" = "x" ]
    then
        JMXLOCALONLY=false
    fi
    
    JMXPORT=8999 ## Added here in line 44
    
    if [ "x$JMXDISABLE" = "x" ] || [ "$JMXDISABLE" = 'false' ]
    then
      echo "ZooKeeper JMX enabled by default $JMXPORT ..." >&2
      if [ "x$JMXPORT" = "x" ]
      then
  2. Restart ZooKeeper so that the change takes effect. If the service is already running, stop it first, and then start it:
    /opt/zk/zookeeper-3.4.10/bin/zkServer.sh start /opt/zk/zookeeper-3.4.10/conf/zoo_sample.cfg &
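
To confirm that the JMX service is listening, you can check the port from the ZooKeeper host. A minimal sketch, assuming the ss utility is available (netstat -lnt works equally well):

    # A LISTEN entry on TCP port 8999 indicates that the JMX service is up
    ss -lnt | grep 8999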

Step 2: Start jmx_exporter in ZooKeeper

Start jmx_exporter to allow access to JMX information through HTTP so that ARMS Prometheus Monitoring can capture data.

  1. Download zookeeper.yaml to the /opt/exporter_zookeeper/ directory.
  2. Add hostPort: localhost:8999 as the first line of the downloaded /opt/exporter_zookeeper/zookeeper.yaml file to point jmx_exporter to the port on which the JMX service runs. A sketch of the resulting file appears after this list.
  3. Download the executable file of jmx_exporter to the /opt/exporter_zookeeper/ directory.
  4. Start jmx_exporter.
    java -Dcom.sun.management.jmxremote.ssl=false \
         -Dcom.sun.management.jmxremote.authenticate=false \
         -Dcom.sun.management.jmxremote.port=8998 \
         -cp /opt/exporter_zookeeper/jmx_prometheus_httpserver-0.12.0-jar-with-dependencies.jar \
         io.prometheus.jmx.WebServer 8997 /opt/exporter_zookeeper/zookeeper.yaml &
    The configuration is complete. You can run the following command to check whether jmx_exporter is running properly:
    curl http://<IP address of the server where jmx_exporter is located>:8997/metrics
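
For reference, after step 2 the /opt/exporter_zookeeper/zookeeper.yaml file might look like the following minimal sketch. The catch-all rule is an assumption on top of the tutorial's instructions; it exports every MBean without renaming, which is sufficient for the dashboard used later:

    hostPort: localhost:8999   # JMX port enabled in Step 1
    rules:
      - pattern: ".*"          # assumed catch-all rule: export all MBeans as-is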

Step 3: Configure ARMS Prometheus Monitoring to capture the data of ZooKeeper

Configure ARMS Prometheus Monitoring in the ARMS console to capture the data of ZooKeeper.

  1. In the left-side navigation pane, click Prometheus Monitoring.
  2. At the top of the Prometheus Monitoring page, select the region where the Container Service Kubernetes cluster is located, and click the name of the target cluster.
  3. On the page that appears, click the Details tab and then click Edit prometheus.yaml.
  4. Paste the following code to the file.
    global:
      scrape_interval:     15s # Set the scrape interval to every 15 seconds. The default is every 1 minute.
      evaluation_interval: 15s # Evaluate rules every 15 seconds. The default is every 1 minute.
    scrape_configs:
      - job_name: 'zookeeper'
        static_configs:
          - targets: ['121.40.124.46:8997']
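
If you run a multi-node ZooKeeper ensemble, start a jmx_exporter next to each node and list every exporter endpoint as a scrape target. A minimal sketch, in which the second and third addresses are purely illustrative placeholders:

    scrape_configs:
      - job_name: 'zookeeper'
        static_configs:
          - targets: ['121.40.124.46:8997', '<node-2-IP>:8997', '<node-3-IP>:8997']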

Step 4: Display ZooKeeper data on the Grafana dashboard

Import the Grafana dashboard template in the ARMS console and specify the Container Service Kubernetes cluster where the Prometheus data source is located.

  1. Go to Host Dashboard.
  2. In the left-side navigation pane, choose + > Import, enter 10981 in the Grafana.com Dashboard field, and click Load.
  3. On the Import page, set the following information and click Import.
    1. Enter a custom dashboard name in the Name field.
    2. Select your Container Service Kubernetes cluster from the Folder drop-down list.
    3. Select your Container Service Kubernetes cluster from the drop-down list at the bottom.
    After the configuration is complete, the ARMS Prometheus Grafana ZooKeeper dashboard appears, as shown in the following figure.
    Figure: ARMS Prometheus Grafana ZooKeeper dashboard
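
If the dashboard panels stay empty, a quick sanity check is to confirm that ZooKeeper MBean metrics actually appear at the exporter endpoint configured in Step 3. A minimal sketch that reuses the sample target address from this tutorial:

    # Print the first few ZooKeeper-related metric lines exposed by jmx_exporter
    curl -s http://121.40.124.46:8997/metrics | grep -i zookeeper | head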

Step 5: Create an alert

To be notified when ZooKeeper metrics become abnormal, create an alert for the Prometheus Monitoring job in the ARMS console. For more information, see Create an alert.

What to do next

After the ARMS Prometheus Grafana ZooKeeper dashboard is configured, you can view Prometheus Monitoring metrics and customize the dashboard. For more information, see the ARMS Prometheus Monitoring documentation.