
Kubernetes Stateful Services on Alibaba Cloud

This article describes how to set up Cassandra with Kubernetes on Alibaba Cloud.

By Alex, Alibaba Cloud Community Blog author.

Moving forward with our Kubernetes series, which has covered containerization, application lifecycle management, multi-container deployments, microservices, and several related practices, this tutorial explores further concepts involving stateful services.

Stateful workloads such as database clusters are a poor fit for ReplicaSets and require a different approach. StatefulSets enable deploying, managing, and scaling such traditional workloads, and they power distributed systems that are both persistent and stateful by ensuring the following:

  • Indexing of pods with unique, stable identifiers
  • Ordered, systematic creation of pods
  • Persistent network identities

StatefulSets guarantee that pods are ordered and unique so that services run reliably. State is preserved across pod initialization and restarts, which keeps applications that depend on that state running smoothly.
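
To make these guarantees concrete, a StatefulSet named cassandra with three replicas produces pods with fixed ordinal names, each reachable at a stable DNS name of the form <pod-name>.<service-name>.<namespace>.svc.cluster.local. A sketch, assuming the default namespace:

cassandra-0 -> cassandra-0.cassandra.default.svc.cluster.local
cassandra-1 -> cassandra-1.cassandra.default.svc.cluster.local
cassandra-2 -> cassandra-2.cassandra.default.svc.cluster.local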

A practical example is saving data to persistent disk storage on the server, whether for a database or a key-value store, so that applications and clients can access that data. This tutorial explores the concept by deploying a Cassandra database application.

The requirements for this tutorial include the following:

  • A Kubernetes cluster on Alibaba Cloud
  • A Docker Hub account
  • A Cassandra database image from a repository

Objectives

This tutorial helps you understand key Kubernetes concepts by demonstrating the following:

  • How to create a Cassandra service from a Docker image
  • How to create pods to run in the services
  • How to apply the principles of StatefulSets, including validation, modification, and deletion

Getting Started

In the current series of articles, we have covered the basics of Pods, Services, and ReplicaSets, along with a detailed account of configuring and deploying kubeadm and kubectl in a three-node cluster.

In the previous tutorial, we configured a Kubernetes cluster using three servers. In this article, we employ a Cassandra SeedProvider that uses the StatefulSet to discover Cassandra nodes deployed in the cluster.

Use the sample files provided at the following links to make the deployment possible. First, make a directory to store the files, then download them:

cd ~
mkdir application
cd application
mkdir cassandra
cd cassandra
wget https://kubernetes.io/examples/application/cassandra/cassandra-service.yaml
wget https://kubernetes.io/examples/application/cassandra/cassandra-statefulset.yaml
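
To confirm that both files downloaded successfully, list the contents of the directory:

ls ~/application/cassandra

The two manifests, cassandra-service.yaml and cassandra-statefulset.yaml, should appear in the output.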

Cassandra Headless Service

Kubernetes supports both normal and headless services. The key difference between the two is that a normal service load-balances traffic to pods behind a single service IP and manages the DNS entries behind the scenes. A headless service, on the other hand, has no service IP and performs no load balancing; instead, its DNS records point directly at the individual pods. This suits Cassandra well, because Cassandra nodes connect directly with clients and do not require load balancing. Clients in turn connect to cassandra.data.svc.cluster.local. The container image used for the Cassandra nodes is referenced in the StatefulSet manifest later in this tutorial.
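
A quick way to observe the difference is to resolve the service name from inside the cluster. The sketch below assumes a throwaway busybox pod (the image choice is an assumption; any image with nslookup works); for a headless service, the lookup returns one record per pod rather than a single service IP:

kubectl run dns-test --rm -ti --image=busybox -- nslookup cassandra.data.svc.cluster.local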

The Cassandra headless service lists the pods hosting the Cassandra instances. The first step is to create a service that uses DNS to link Cassandra pods and clients in the cluster. Run the following command to inspect the downloaded service definition.

sudo nano application/cassandra/cassandra-service.yaml

The following snippet shows the output for the above command.

application/cassandra/cassandra-service.yaml 
apiVersion: v1
kind: Service
metadata:
  labels:
    app: cassandra
  name: cassandra
  namespace: data
spec:
  clusterIP: None
  ports:
  - port: 9042
  selector:
    app: cassandra

Once the file has downloaded, create a Cassandra service from it by executing the following command.

kubectl apply -f /home/flasky/application/cassandra/cassandra-service.yaml

The following error occurs if the cluster is not properly initialized.

error: unable to recognize "https://k8s.io/examples/application/cassandra/cassandra-service.yaml": Get http://localhost:8080/api?timeout=32s: dial tcp [::1]:8080: connect: connection refused

To fix the error, initialize the cluster with kubeadm, using the pod network CIDR that Calico expects, and point kubectl at the admin kubeconfig:

sudo kubeadm init --pod-network-cidr=192.168.0.0/16
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
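
Note that kubeadm init only brings up the control plane; the Calico pod network itself still needs to be installed. The following is a sketch only, as the manifest URL depends on the Calico version; verify it against the Calico documentation for your release:

kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml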

Run the following command again.

kubectl apply -f /home/flasky/application/cassandra/cassandra-service.yaml

Let's test and see how it works.

kubectl get svc cassandra

The following snippet shows the output for the above command.

NAME        TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)    AGE
cassandra   ClusterIP   None         <none>        9042/TCP   57s

Create a Cassandra Ring

Now let's create a Cassandra Ring comprising three pods. First, take a look at the second downloaded file by running the following command.

sudo nano application/cassandra/cassandra-statefulset.yaml

In the file, make sure the image field points to the Cassandra image you intend to use, as shown below.

application/cassandra/cassandra-statefulset.yaml  

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: cassandra
  labels:
    app: cassandra
spec:
  serviceName: cassandra
  replicas: 3
  selector:
    matchLabels:
      app: cassandra
  template:
    metadata:
      labels:
        app: cassandra
    spec:
      terminationGracePeriodSeconds: 1800
      containers:
      - name: cassandra
        image: gcr.io/google-samples/cassandra@sha256:7a3d20afa0a46ed073a5c587b4f37e21fa860e83c60b9c42fec1e1e739d64007
        imagePullPolicy: Always
        ports:
        - containerPort: 7000
          name: intra-node
        - containerPort: 7001
          name: tls-intra-node
        - containerPort: 7199
          name: jmx
        - containerPort: 9042
          name: cql
        resources:
          limits:
            cpu: "500m"
            memory: 1Gi
          requests:
            cpu: "500m"
            memory: 1Gi
        securityContext:
          capabilities:
            add:
              - IPC_LOCK
        lifecycle:
          preStop:
            exec:
              command: 
              - /bin/sh
              - -c
              - nodetool drain
        env:
          - name: MAX_HEAP_SIZE
            value: 512M
          - name: HEAP_NEWSIZE
            value: 100M
          - name: CASSANDRA_SEEDS
            value: "cassandra-0.cassandra.default.svc.cluster.local"
          - name: CASSANDRA_CLUSTER_NAME
            value: "K8Demo"
          - name: CASSANDRA_DC
            value: "DC1-K8Demo"
          - name: CASSANDRA_RACK
            value: "Rack1-K8Demo"
          - name: POD_IP
            valueFrom:
              fieldRef:
                fieldPath: status.podIP
        readinessProbe:
          exec:
            command:
            - /bin/bash
            - -c
            - /ready-probe.sh
          initialDelaySeconds: 15
          timeoutSeconds: 5
        volumeMounts:
        - name: cassandra-data
          mountPath: /cassandra_data
  volumeClaimTemplates:
  - metadata:
      name: cassandra-data
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: fast
      resources:
        requests:
          storage: 1Gi
---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: fast
provisioner: k8s.io/minikube-hostpath
parameters:
  type: pd-ssd
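
The StorageClass above uses the minikube hostpath provisioner carried over from the upstream Kubernetes example. On an Alibaba Cloud Kubernetes cluster, you would typically provision Alibaba Cloud disks instead. The following is a sketch only, assuming the Alibaba Cloud CSI disk driver is installed in the cluster; verify the provisioner name and parameters against the current Alibaba Cloud documentation:

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: fast
provisioner: diskplugin.csi.alibabacloud.com
parameters:
  type: cloud_ssd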

Next, run the following command to create the StatefulSet from the file. Note that the service manifest earlier placed the service in the data namespace, while this StatefulSet and its CASSANDRA_SEEDS address assume the default namespace; keep the two consistent by removing the namespace line from the service or by adjusting the seed address to match.

kubectl apply -f /home/flasky/application/cassandra/cassandra-statefulset.yaml

Note: Make sure to check the location of the downloaded file by running the command below.

pwd

Ideally, there should be no errors; if one occurs, refer back to the preceding section on creating the Cassandra headless service for the fix.

Validate the deployment using the command given below.

kubectl get statefulset cassandra

The following snippet shows the output for the above command.

NAME        DESIRED   CURRENT   AGE
cassandra   3         0         57s

Ensure that the three replicas specified while creating the StatefulSet are up and running. Run the following command to verify the same.

kubectl get pods -l="app=cassandra"

The following snippet shows the output for the above command.

NAME          READY     STATUS    RESTARTS   AGE
cassandra-0   1/1       Running   0          6m
cassandra-1   1/1       Running   0          3m
cassandra-2   1/1       Running   0          45s
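
Rather than polling manually, you can also watch the pods come up one at a time, which makes the ordered creation guaranteed by StatefulSets easy to observe:

kubectl get pods -l app=cassandra -w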

If all three pods do not appear at first, allow up to 10 minutes for the ring to form and run the command again. Once they are running, check the status of the application with the following command.

kubectl exec -it cassandra-0 -- nodetool status

The following snippet shows the output for the above command.

Datacenter: DC1
======================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address        Load       Tokens  Owns    Host ID  Rack
UN  192.168.0.4    98.77 KiB  32      74.0%   id       Rack1
UN  192.168.0.5    91.02 KiB  32      58.8%   id       Rack1
UN  192.168.0.6    97.56 KiB  32      67.1%   id       Rack1
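
With the ring up, clients connect directly to the nodes through their stable DNS names. As a quick check, the sketch below starts a throwaway client pod running cqlsh and points it at the first node; using the official cassandra image from Docker Hub for the client is an assumption, and any image that ships cqlsh works:

kubectl run cqlsh-client --rm -ti --image=cassandra -- cqlsh cassandra-0.cassandra.default.svc.cluster.local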

Test Scaling Using Kubectl

Now, test the scalability of the deployed StatefulSet. Run the command below to find the StatefulSet.

kubectl get statefulsets <stateful-set-name>

As mentioned earlier, this article's StatefulSet has three replicas. However, you can change the replica count with the following command.

kubectl scale statefulsets <stateful-set-name> --replicas=<number-of-replicas>
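
For the StatefulSet in this article, for example, scaling the ring up to five nodes and watching the new pods join looks like the following:

kubectl scale statefulsets cassandra --replicas=5
kubectl get pods -l app=cassandra -w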

Delete StatefulSets

Kubernetes allows deleting StatefulSets using the kubectl delete command. The command requires an argument specifying the file or name of the StatefulSet as shown below.

kubectl delete -f <stateful-set-file.yaml>
kubectl delete statefulsets <stateful-set-name>

Use the kubectl delete command to delete the headless service. Note that this form takes the service name, which for this article is cassandra, rather than the file name.

kubectl delete service <service-name>

After a StatefulSet is deleted, its replica count drops to zero and the pods are removed. However, to avoid complete data loss, it's important to delete a StatefulSet in a way that does not eliminate the associated persistent volumes; by design, the persistent volume claims created from volumeClaimTemplates are left behind.
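
To confirm that the persistent volume claims survive the deletion, list them by label:

kubectl get pvc -l app=cassandra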

Nevertheless, to completely delete a StatefulSet, including its persistent volume claims, run the following commands.

grace=$(kubectl get pods <stateful-set-pod> --template '{{.spec.terminationGracePeriodSeconds}}')
kubectl delete statefulset -l app=cassandra
sleep $grace
kubectl delete pvc -l app=cassandra

Make sure to replace the app label value with the one used by your application; for this article, it is cassandra.

Cascade Delete

By default, deleting a StatefulSet cascades: Kubernetes also deletes the pods it manages, which eliminates the need to remove them manually. Setting the cascade flag to false reverses this, deleting just the StatefulSet object while leaving its pods running, which is useful when the pods need to be handled separately. Include the flag in the delete command as shown below.

kubectl delete -f <cassandra-statefulset.yaml> --cascade=false

While the preceding command deletes the StatefulSet, the three replica pods remain intact and still carry the app=cassandra label. Run the following command to delete them as well.

kubectl delete pods -l app=cassandra

Making Changes to StatefulSet File

The kubectl edit command enables changing a live StatefulSet directly, without having to modify a local file and reapply it.

kubectl edit statefulset cassandra

Executing the above command opens the live StatefulSet definition in an editor. For instance, change the number of replicas to 10 as shown below.

apiVersion: apps/v1 
kind: StatefulSet
metadata:
  creationTimestamp: 2019-03-29T15:33:17Z
  generation: 1
  labels:
    app: cassandra
  name: cassandra
  namespace: default
  resourceVersion: "323"
  selfLink: /apis/apps/v1/namespaces/default/statefulsets/cassandra
  uid: 7a209352-0976-15t6-e907-437j940u864
spec:
  replicas: 10

Save the changes and exit the editor. Validate the changes by running the command below.

kubectl get statefulset cassandra

The following snippet reflects the output of the preceding command.

NAME        DESIRED   CURRENT   AGE
cassandra   10        4         3m

Evidently, the edit has taken effect: the desired count now reads 10, while the remaining pods are still being created in order.
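
The same change can also be made non-interactively. A brief sketch using kubectl patch, equivalent to the edit above:

kubectl patch statefulset cassandra -p '{"spec":{"replicas":10}}'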

Conclusion

This article helps you understand how to deploy a highly available Cassandra ring using a Kubernetes StatefulSet. It demonstrates how to scale services up and down and how to access StatefulSets from Kubernetes applications. Further, it sheds light on deploying highly available applications on Kubernetes with Alibaba Cloud.

Don't have an Alibaba Cloud account? Sign up for an account and try over 40 products for free worth up to $1200. Get Started with Alibaba Cloud to learn more.
