How to build a Zookeeper cluster on Alibaba Cloud ECS servers

This article introduces how to build a Zookeeper cluster on Alibaba Cloud ECS servers.

Preparation


To build a Zookeeper cluster, we need the following prerequisites:

1. Three Alibaba Cloud ECS servers (if you do not have three, the cluster can also be simulated on a single server).

2. JDK 8+ installed on each ECS server, because Zookeeper depends on the JDK.

3. The apache-zookeeper-3.8.0-bin.tar.gz installation package.

Precautions

1. A Zookeeper cluster needs at least three Zookeeper instances (usually an odd number). If you do not have enough servers, you can also simulate the cluster on a single server.

2. The security group of each Alibaba Cloud ECS server must open the corresponding ports in both the inbound and outbound directions (for personal development and testing, you can simply open all ports).

Zookeeper involves three ports:

2181: used by clients to connect to the server.
2888: used for communication between servers in the Zookeeper cluster.
3888: used by the Zookeeper cluster during leader election.

If these ports are not open, an error may be reported when Zookeeper starts.

3. Configure quorumListenOnAllIPs=true in the zoo.cfg configuration file, so that the server listens for quorum connections on all network interfaces.

4. Under the data directory of each Zookeeper server (the directory specified by dataDir), create a myid file. Its content is a single number that serves as the unique identifier of that server within the Zookeeper cluster.

We assume that the IP addresses of the three Alibaba Cloud ECS servers are: 192.168.11.1, 192.168.11.2, and 192.168.11.3.

Next, we will introduce the construction steps in detail.

Build steps

Perform the following steps on each server.

1. Upload the installation package
Upload the apache-zookeeper-3.8.0-bin.tar.gz installation package to the /opt/zookeeper directory, or to another directory of your choice.
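As an example, the package could be uploaded from a local machine with scp (the root login and the IP address below are placeholders; substitute your own):

```shell
# Create the target directory on the server, then upload the package.
ssh root@192.168.11.1 "mkdir -p /opt/zookeeper"
scp apache-zookeeper-3.8.0-bin.tar.gz root@192.168.11.1:/opt/zookeeper/
```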

2. Unzip the installation package
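Assuming the package was uploaded to /opt/zookeeper as in step 1, it can be unpacked with tar:

```shell
cd /opt/zookeeper
tar -zxvf apache-zookeeper-3.8.0-bin.tar.gz
```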

3. Modify the directory name after decompression
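To keep later paths shorter, the unpacked directory can be renamed so that it matches the /opt/zookeeper/apache-zookeeper-3.8.0 path used in the rest of this article:

```shell
cd /opt/zookeeper
mv apache-zookeeper-3.8.0-bin apache-zookeeper-3.8.0
```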

4. Delete the installation package
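Once the package has been unpacked, the tarball is no longer needed and can be removed to save disk space:

```shell
cd /opt/zookeeper
rm -f apache-zookeeper-3.8.0-bin.tar.gz
```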

5. Create Zookeeper data and log directories

mkdir -p /data/zookeeper/data
mkdir -p /data/zookeeper/log

The above commands create Zookeeper's data and log directories.

6. Create myid file

In the /data/zookeeper/data directory created in the previous step, create a file named myid whose content is 1 (use 2 and 3 on the other two servers).

Note:
The number in the myid file is the unique ID of the server within the Zookeeper cluster; within the same cluster, this number must not be repeated.
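On server one, for example, the file could be created like this (write 2 and 3 instead on servers two and three):

```shell
# Write this server's cluster ID into the myid file (server one gets 1).
mkdir -p /data/zookeeper/data
echo 1 > /data/zookeeper/data/myid
cat /data/zookeeper/data/myid
```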

7. Copy the configuration file
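Zookeeper ships a sample configuration in its conf directory; copying it creates the zoo.cfg file that is edited in the next step (the path assumes the directory layout from the earlier steps):

```shell
cd /opt/zookeeper/apache-zookeeper-3.8.0/conf
cp zoo_sample.cfg zoo.cfg
```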

8. Configure the zoo.cfg file
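A minimal zoo.cfg for this three-server cluster might look like the following; the tickTime, initLimit and syncLimit values are the defaults from zoo_sample.cfg, and the directory, port and IP settings come from the steps and precautions above:

```
# Basic timing settings (defaults from zoo_sample.cfg)
tickTime=2000
initLimit=10
syncLimit=5

# Directories created in step 5
dataDir=/data/zookeeper/data
dataLogDir=/data/zookeeper/log

# Client port and the listen-on-all-interfaces setting from the precautions
clientPort=2181
quorumListenOnAllIPs=true

# One line per cluster member: server.<myid>=<ip>:2888:3888
server.1=192.168.11.1:2888:3888
server.2=192.168.11.2:2888:3888
server.3=192.168.11.3:2888:3888
```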

9. Add Zookeeper to environment variables

10. Make environment variables take effect
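Assuming a bash shell and the install path used above, steps 9 and 10 might look like this (ZOOKEEPER_HOME is a conventional variable name, not something Zookeeper itself requires):

```shell
# Step 9: append these lines to /etc/profile (or your shell's profile)
export ZOOKEEPER_HOME=/opt/zookeeper/apache-zookeeper-3.8.0
export PATH=$PATH:$ZOOKEEPER_HOME/bin

# Step 10: reload the profile so the change takes effect in the current shell
source /etc/profile
```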

11. Start Zookeeper

It is recommended to start all three Zookeeper servers only after all of them have been configured.
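With the bin directory on the PATH, each server is started with the zkServer.sh script that ships with Zookeeper:

```shell
zkServer.sh start
```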

Because we added Zookeeper to the environment variables above, the startup command can be executed from any directory.

If you have not configured the environment variables, run the startup command from the /opt/zookeeper/apache-zookeeper-3.8.0/bin directory, or invoke it by its absolute path from any other directory.

12. View the status of Zookeeper
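The same zkServer.sh script reports each server's role; when the cluster is healthy, the Mode line should show leader on one server and follower on the other two:

```shell
zkServer.sh status
```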

13. Client connection test

Taking server one (192.168.11.1) as an example, let's test whether a client can connect to the Zookeeper server.
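The zkCli.sh client shipped in the bin directory can be used for the test:

```shell
zkCli.sh -server 192.168.11.1:2181
```

After connecting, a command such as ls / should list the root znodes (a fresh cluster typically shows [zookeeper]).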

Roles of Zookeeper servers


In the Zookeeper cluster, the server has three roles, namely leader, follower and observer.

Leader

The leader is the core of the Zookeeper cluster's work and the only scheduler and processor of transactional (write) requests. It guarantees the order of transaction processing in the cluster and is responsible for initiating and deciding votes and for updating the system state.

It also acts as the scheduler for every server in the cluster.

Follower

A follower handles the client's non-transactional (read) requests.

If it receives a transactional request from a client, it forwards the request to the leader for processing. Followers also participate in voting during leader election.

Observer

An observer can process non-transactional requests independently.

For transactional requests, like a follower, it forwards them to the leader server for processing.

An observer does not participate in any form of voting; it only provides non-transactional services and does not need to respond to the leader's proposals.

Observers are usually used to improve the cluster's non-transactional (read) capacity without affecting its transaction processing capacity, and they also reduce the complexity of leader election.
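As an illustration only (the cluster built in this article has no observer), an observer could be added with a hypothetical fourth server, say 192.168.11.4, by setting peerType=observer on that server and tagging its entry with :observer in every server's zoo.cfg:

```
# On the observer server itself:
peerType=observer

# In zoo.cfg on all servers, the observer's line carries the :observer suffix:
server.4=192.168.11.4:2888:3888:observer
```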

States of a Zookeeper server


The possible states of a Zookeeper server: LOOKING, FOLLOWING, OBSERVING, and LEADING.

LOOKING

The state of every server when the Zookeeper cluster has no leader, or when the original leader has gone down and a new one must be elected.

FOLLOWING

The state of a follower server: it synchronizes with the leader's state and participates in voting on proposals.

OBSERVING

The state of an observer server: it also synchronizes with the leader's state, but does not participate in voting on proposals.

LEADING

The state of the leader server: it initiates normal message proposals.
