Installing Elasticsearch and Kibana with Elastic Cloud on Kubernetes (ECK): A Practical Tutorial - Alibaba Cloud Developer Community

Elastic Cloud on Kubernetes (ECK) is a Kubernetes-operator-based extension officially released by Elastic. It builds on the basic orchestration capabilities of Kubernetes so that you can easily install and manage Elasticsearch, Kibana, and APM Server clusters in Kubernetes. With ECK, you can simplify the following key operations:

  1. Manage and monitor multiple clusters
  2. Scale clusters up or down
  3. Change cluster configurations
  4. Schedule backups
  5. Secure clusters with TLS certificates
  6. Build a hot-warm-cold architecture with zone awareness


Deploy ECK in a Kubernetes cluster

This topic uses a native Kubernetes cluster as an example; the process for GKE and Amazon EKS is similar.

  1. Install the custom resource definitions, the operator, and its RBAC rules:
kubectl apply -f https://download.elastic.co/downloads/eck/1.2.1/all-in-one.yaml
  2. Monitor the operator logs:
kubectl -n elastic-system logs -f statefulset.apps/elastic-operator

In a private Kubernetes cluster, you may not be able to access the public network. In that case, download the YAML file to your local computer and modify the operator image address. all-in-one.yaml is a concatenation of multiple YAML documents: find the statefulset.yaml section, change image to the image address in your private repository, and change --container-registry to the private repository address.
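One way to repoint every image reference at once is a sed pass over the downloaded manifest. The sketch below simulates a single line of the upstream file; your.com is a placeholder for your private registry.

```shell
# In practice, first download the manifest where public access is available:
#   curl -sLO https://download.elastic.co/downloads/eck/1.2.1/all-in-one.yaml
# Simulate one image line as it appears in the upstream manifest:
echo 'image: "docker.elastic.co/eck/eck-operator:1.2.1"' > /tmp/sample.yaml

# Repoint the registry to the private repository (your.com is a placeholder):
sed -i 's#docker.elastic.co/eck#your.com#g' /tmp/sample.yaml

cat /tmp/sample.yaml
# image: "your.com/eck-operator:1.2.1"
```

The same substitution applied to the real all-in-one.yaml rewrites the operator image reference shown in the StatefulSet below.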

# Source: eck/templates/statefulset.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: elastic-operator
  namespace: elastic-system
  labels:
    control-plane: elastic-operator
spec:
  selector:
    matchLabels:
      control-plane: elastic-operator
  serviceName: elastic-operator
  template:
    metadata:
      annotations:
        # Rename the fields "error" to "error.message" and "source" to "event.source"
        # This is to avoid a conflict with the ECS "error" and "source" documents.
        "co.elastic.logs/raw": "[{\"type\":\"container\",\"json.keys_under_root\":true,\"paths\":[\"/var/log/containers/*${data.kubernetes.container.id}.log\"],\"processors\":[{\"convert\":{\"mode\":\"rename\",\"ignore_missing\":true,\"fields\":[{\"from\":\"error\",\"to\":\"_error\"}]}},{\"convert\":{\"mode\":\"rename\",\"ignore_missing\":true,\"fields\":[{\"from\":\"_error\",\"to\":\"error.message\"}]}},{\"convert\":{\"mode\":\"rename\",\"ignore_missing\":true,\"fields\":[{\"from\":\"source\",\"to\":\"_source\"}]}},{\"convert\":{\"mode\":\"rename\",\"ignore_missing\":true,\"fields\":[{\"from\":\"_source\",\"to\":\"event.source\"}]}}]}]"
      labels:
        control-plane: elastic-operator
    spec:
      terminationGracePeriodSeconds: 10
      serviceAccountName: elastic-operator
      containers:
      - image: "your.com/eck-operator:1.2.1"
        imagePullPolicy: IfNotPresent
        name: manager
        args:
          - "manager"
          - "--log-verbosity=0"
          - "--metrics-port=0"
          - "--container-registry=your.com"
          - "--max-concurrent-reconciles=3"
          - "--ca-cert-validity=8760h"
          - "--ca-cert-rotate-before=24h"
          - "--cert-validity=8760h"
          - "--cert-rotate-before=24h"
          - "--enable-webhook"
        env:
          - name: OPERATOR_NAMESPACE
            valueFrom:
              fieldRef:
                fieldPath: metadata.namespace
          - name: OPERATOR_IMAGE
            value: "your.com/eck-operator:1.2.1"
          - name: WEBHOOK_SECRET
            value: "elastic-webhook-server-cert"
        resources:
            limits:
              cpu: 1
              memory: 512Mi
            requests:
              cpu: 100m
              memory: 150Mi
        ports:
        - containerPort: 9443
          name: https-webhook
          protocol: TCP
        volumeMounts:
          - mountPath: /tmp/k8s-webhook-server/serving-certs
            name: cert
            readOnly: true
      volumes:
        - name: cert
          secret:
            defaultMode: 420
            secretName: "elastic-webhook-server-cert"
---

If an error occurs while applying all-in-one.yaml, you can split the file into its component documents and apply them one by one to isolate the error.
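One way to do the split is with csplit, cutting at each `---` document separator; the two-document file below is a stand-in for the real manifest.

```shell
# all-in-one.yaml is several YAML documents separated by "---"; make a
# two-document stand-in to demonstrate the split:
cat > /tmp/all-in-one.yaml <<'EOF'
kind: Namespace
---
kind: StatefulSet
EOF

# Split at each separator into /tmp/docs-00.yaml, /tmp/docs-01.yaml, ...
csplit -s -z -f /tmp/docs- -b '%02d.yaml' /tmp/all-in-one.yaml '/^---$/' '{*}'

# Each piece can now be applied (and debugged) individually with kubectl.
ls /tmp/docs-*.yaml
```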

After ECK is installed successfully, an elastic-system namespace is created, containing an elastic-operator Pod. This Pod monitors the status of the cluster in the background and responds to user instructions accordingly.

Deploy an Elasticsearch cluster

To be close to a real-world deployment, we create an Elasticsearch cluster with 3 master nodes backed by network block storage.

Create a PV

Take Ceph storage as an example and create 3 PVs, each with a capacity of 500Gi:

apiVersion: v1
kind: PersistentVolume
metadata:
   name: pv-es-data-00  ## PV name
spec:
   capacity:
     storage: 500Gi ## PV size; match the cloud disk size
   accessModes:
     - ReadWriteOnce ## PV access mode; use a mode the cloud disk supports
   mountOptions:
     - rw ## mount mode: read-only (ro) or read-write (rw); must be consistent with accessModes
   persistentVolumeReclaimPolicy: Retain ## Retain is recommended
   csi:
       driver: ckecsi ## fixed value
       volumeHandle: welkinbig.es-00-608521303445 ## corresponds to the instanceId field of the cloud disk instance list API
       fsType: xfs ## file system type, such as xfs or ext4
       volumeAttributes:
          monitors: 10.172.xx.xx:6789,10.172.xx.xx:6789,10.172.xx.xx:6789
          pool: welkinbig
          imageFormat: "2" ## fixed value
          imageFeatures: "layering" ## fixed value
          adminId: admin ## fixed value
          userId: '60852xxxxxxx' ## account ID
          volName: es-00-608521303445 ## corresponds to the imageName field of the cloud disk instance list API
          mounter: rbd
          608521xxxxxx: AQDcz0xf7s2SBhAAqGxxxxxxxxxxxxxxxxxx
          admin: AQB4kjxfPP1HLxAAXfixxxxxxxxxxxxxxxxxx
       controllerPublishSecretRef:
        name: xx-secret   ## secret name
        namespace: default
       nodeStageSecretRef:
        name: xx-secret
        namespace: default
       nodePublishSecretRef:
        name: xx-secret
        namespace: default
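The three *SecretRef fields above all reference xx-secret in the default namespace; that Secret must exist before the volume can be attached. Its exact keys depend on the CSI driver, so the manifest below is only an illustrative sketch with the key names assumed.

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: xx-secret       # must match the *SecretRef names in the PV
  namespace: default
type: Opaque
stringData:
  # Key names are driver-specific; these are illustrative placeholders.
  userID: "60852xxxxxxx"
  userKey: "AQDcz0xf7s2SBhAAqGxxxxxxxxxxxxxxxxxx"
```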

Deploy the Elasticsearch cluster

Apply the following YAML file with kubectl. The version field specifies the Elasticsearch version to install, and the image field specifies the private repository address of the Elasticsearch image. count: 3 means the node set contains three nodes. node.master: true makes each node master-eligible, and node.data: true makes each node a data node that can store data.

apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: es-cluster
spec:
  version: 7.9.0
  image: your.com/elasticsearch:7.9.0-ik-7.9.0
  nodeSets:
  - name: master-nodes
    count: 3
    config:
      node.master: true
      node.data: true
    volumeClaimTemplates:
    - metadata:
        name: elasticsearch-data
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 500Gi
    podTemplate:
      spec:
        initContainers:
        - name: sysctl
          securityContext:
            privileged: true
          command: ['sh', '-c', 'sysctl -w vm.max_map_count=262144']
        containers:
        - name: elasticsearch
          env:
          - name: ES_JAVA_OPTS
            value: -Xms4g -Xmx4g
          resources:
            requests:
              cpu: 4
              memory: 8Gi
            limits:
              cpu: 4
              memory: 8Gi

Monitor the health status and creation process of the cluster

You can run the following command to query the status of the Elasticsearch cluster, including its health, number of nodes, and version:

kubectl get elasticsearch
NAME          HEALTH    NODES     VERSION   PHASE         AGE
es-cluster    green     3         7.9.0     Ready         1m

When the cluster has just been created, the HEALTH and PHASE columns are blank. After some time, once the cluster is up, PHASE changes to Ready and HEALTH changes to green.

You can run the following command to view the Pod status:

kubectl get pods --selector='elasticsearch.k8s.elastic.co/cluster-name=es-cluster'
NAME                      READY   STATUS    RESTARTS   AGE
es-cluster-es-default-0   1/1     Running   0          79s

View the Pod logs:

kubectl logs -f es-cluster-es-default-0

Access the Elasticsearch cluster

ECK creates a ClusterIP Service to access the Elasticsearch cluster:

kubectl get service es-cluster-es-http
NAME                 TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
es-cluster-es-http   ClusterIP   10.15.251.145   <none>        9200/TCP   34m
  1. Obtain the access credential.

ECK automatically creates a default user named elastic, whose password is stored in a Kubernetes Secret:

kubectl get secret es-cluster-es-elastic-user -o go-template='{{.data.elastic | base64decode}}'
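The go-template above already applies base64decode, so kubectl prints the plaintext password. The sketch below shows what that decoding does and how you might capture the result in a variable for later curl calls; the encoded value is a made-up sample, since no cluster is needed to demonstrate it.

```shell
# Raw value as it would appear in the Secret's .data.elastic field
# (a made-up sample, not a real password):
ENCODED="czNjcjN0cGFzcw=="

# base64decode in the go-template is equivalent to:
PASSWORD="$(printf '%s' "$ENCODED" | base64 -d)"

echo "$PASSWORD"
# s3cr3tpass
```

Against a real cluster, you could assign PASSWORD directly from the kubectl command shown above.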
  2. Access from within the cluster.

Replace the password in the command with the one obtained in step 1. The -k flag makes curl ignore certificate errors.

curl -u "elastic:$PASSWORD" -k "https://es-cluster-es-http:9200"
{
  "name" : "es-cluster-es-default-0",
  "cluster_name" : "es-cluster",
  "cluster_uuid" : "XqWg0xIiRmmEBg4NMhnYPg",
  "version" : {...},
  "tagline" : "You Know, for Search"
}

JVM heap settings

Set the ES_JAVA_OPTS environment variable in podTemplate to change the JVM heap size of Elasticsearch. We strongly recommend setting requests and limits to the same value to ensure that the Pod can obtain sufficient resources in the Kubernetes cluster.
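A common rule of thumb (standard Elasticsearch guidance, not specific to ECK) is to give the JVM heap about half of the container memory limit, leaving the rest for off-heap memory and the filesystem cache; the arithmetic is trivial:

```shell
# ~50% of the container memory limit goes to the JVM heap (rule of thumb).
MEM_LIMIT_GI=4                       # container memory limit, in GiB
HEAP_GI=$((MEM_LIMIT_GI / 2))
echo "ES_JAVA_OPTS=-Xms${HEAP_GI}g -Xmx${HEAP_GI}g"
# ES_JAVA_OPTS=-Xms2g -Xmx2g
```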

podTemplate:
      spec:
        containers:
        - name: elasticsearch
          env:
          - name: ES_JAVA_OPTS
            value: -Xms2g -Xmx2g
          resources:
            requests:
              memory: 4Gi
              cpu: 0.5
            limits:
              memory: 4Gi
              cpu: 2

Node configuration

Any setting that can be defined in the elasticsearch.yml configuration file can also be defined in spec.nodeSets[?].config.

spec:
  nodeSets:
  - name: masters
    count: 3
    config:
      node.master: true
      node.data: false
      node.ingest: false
      node.ml: false
      xpack.ml.enabled: true
      node.remote_cluster_client: false
  - name: data
    count: 10
    config:
      node.master: false
      node.data: true
      node.ingest: true
      node.ml: true
      node.remote_cluster_client: false

Volume claim templates

To prevent data loss when a Pod is deleted, the operator creates a PersistentVolumeClaim with a capacity of 1Gi for each Pod in the cluster by default. In a production environment, you should define a volume claim template with an appropriate capacity, plus a storage class to associate persistent volumes. The claim name must be elasticsearch-data. If storage classes are not used in your Kubernetes cluster, you can omit the storageClassName field. Depending on the Kubernetes configuration and the underlying file system, some persistent volumes cannot be resized after they are created, so consider future storage requirements when defining volume claims to ensure sufficient space for business growth.

spec:
  nodeSets:
  - name: default
    count: 3
    volumeClaimTemplates:
    - metadata:
        name: elasticsearch-data
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 500Gi
        storageClassName: standard

Virtual memory

By default, Elasticsearch uses memory mapping (mmap) to access indexes efficiently. The default virtual address space limit on Linux is usually too small for Elasticsearch and can cause out-of-memory exceptions. In a production environment, we recommend setting the Linux kernel parameter vm.max_map_count to 262144 rather than disabling mmap with node.store.allow_mmap. The kernel parameter can be set directly on the host or through an init container. The following sample adds an init container that modifies the kernel parameter before the Elasticsearch Pod starts:

podTemplate:
      spec:
        initContainers:
        - name: sysctl
          securityContext:
            privileged: true
          command: ['sh', '-c', 'sysctl -w vm.max_map_count=262144']
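If privileged init containers are not allowed in your cluster (for example, under a restrictive pod security policy), the alternative is to disable memory mapping instead, at some cost to performance for large indexes; a minimal config fragment:

```yaml
spec:
  nodeSets:
  - name: default
    count: 3
    config:
      # Avoids the vm.max_map_count requirement; slower than mmap access.
      node.store.allow_mmap: false
```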

Customize configuration files and plug-ins

You can customize Elasticsearch configuration files and plug-ins in two ways:

  1. Create an Elasticsearch image with the configuration files and plug-ins already installed.
  2. Install the plug-ins or configuration files when the Pod starts.

The advantage of the first option is that the image can be verified before ECK deploys it, while the second offers maximum flexibility. However, with the second option, errors in the configuration file are only discovered at runtime, and plug-ins must be downloaded over the public network.

In a private cluster, you may not be able to access the public network from within the cluster, so we recommend installing plug-ins by building a custom image. The following example shows how to build an image with a plug-in installed.

  1. Create a Dockerfile that contains the following content:
FROM elasticsearch:7.9.0
COPY ./elasticsearch-analysis-ik-7.9.0.zip /home/
RUN sh -c '/bin/echo -e "y" | bin/elasticsearch-plugin install file:///home/elasticsearch-analysis-ik-7.9.0.zip'
  2. Build the image:
docker build --tag elasticsearch-ik:7.9.0 .

You can adapt the preceding Dockerfile to install other plug-ins. The following example shows how to provide a synonyms file to the synonym token filter in Elasticsearch; you can use the same method to mount any file into the Elasticsearch configuration directory.

spec:
  nodeSets:
  - name: default
    count: 3
    podTemplate:
      spec:
        containers:
        - name: elasticsearch 
          volumeMounts:
          - name: synonyms
            mountPath: /usr/share/elasticsearch/config/dictionaries
        volumes:
        - name: synonyms
          configMap:
            name: synonyms 

For the preceding code to work, you must create a ConfigMap containing the configuration file in the same namespace.
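A minimal ConfigMap manifest could look like the following; the synonyms.txt file name and its entries are illustrative. Equivalently, you could run kubectl create configmap synonyms --from-file=synonyms.txt.

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: synonyms        # must match the configMap name in the volume above
  namespace: default    # same namespace as the Elasticsearch cluster
data:
  synonyms.txt: |
    # illustrative synonym entries
    laptop, notebook
    tv, television
```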

Deploy Kibana

Deploying Kibana and connecting it to an Elasticsearch cluster managed by ECK is easy.

Create a Kibana instance and associate it with the Elasticsearch cluster:

apiVersion: kibana.k8s.elastic.co/v1
kind: Kibana
metadata:
  name: kibana
spec:
  version: 7.9.0
  image: your.com/kibana:7.9.0
  count: 1
  elasticsearchRef:
    name: es-cluster
    namespace: default

The namespace field is optional if the Elasticsearch cluster and Kibana run in the same namespace.

ECK automatically creates the Kibana configuration file and establishes a secure connection between Kibana and Elasticsearch.

Monitor Kibana health status and creation process

As with Elasticsearch, you can use kubectl to query the details of Kibana instances:

kubectl get kibana

View the Pods associated with the instance:

kubectl get pod --selector='kibana.k8s.elastic.co/name=kibana'

Connect to Kibana

ECK automatically creates a ClusterIP Service for Kibana:

kubectl get service kibana-kb-http

The Kibana username and password are the same as those of the Elasticsearch cluster:

curl -u "elastic:$PASSWORD" -k "https://kibana-kb-http:5601"

Summary

This topic describes how to use ECK to install Elasticsearch and Kibana in a Kubernetes cluster and how to set the key parameters. The examples are close to an actual production environment and should serve as a useful reference. Kubernetes has become the de facto standard for container orchestration, and having Kubernetes take over the operation and maintenance of databases is an ongoing trend. Unlike managing ordinary applications, the difficulty of managing databases lies in persisting data. Kubernetes provides two solutions: hostPath, which stores data on the local disk of the node, and network storage, including block storage and file storage. Compared with the first method, the second incurs some performance loss from network transmission, but it separates the database from its storage so that database Pods can be scheduled to any node, bringing greater flexibility and higher resource utilization. With network storage, data also enjoys higher safety.
