Principle of MongoDB sharded cluster architecture - Alibaba Cloud Developer Forums: Cloud Discussion Forums

Assistant Engineer

Posted time: Oct 20, 2016 14:51
Why a sharded cluster?
MongoDB currently has three core strengths: a flexible schema, high availability, and scalability. The flexible schema comes from its JSON-style document model, high availability is ensured through replica sets, and scalability is delivered by sharded clusters.
Consider moving from a replica set to a sharded cluster when you encounter any of the following scenarios:
• The required storage capacity exceeds the disk capacity of a single machine.
• The active data set exceeds the memory of a single machine, so many requests must read from disk, degrading performance.
• Write IOPS exceed the write capacity of a single MongoDB node.
As shown in the figure above, a sharded cluster distributes a collection's data across multiple shards (each a replica set or a single mongod node) for storage, enabling MongoDB to scale out and broadening its application scenarios.
Sharded cluster architecture
A sharded cluster is composed of three components: shards, mongos, and config servers.

Mongos is the access entry point of the sharded cluster. It is strongly recommended to perform all management operations, reads, and writes through mongos, so that the cluster's components stay consistent.
Mongos itself does not persist data. All the sharded cluster's metadata is stored on the config server (detailed in a later section), and user data is distributed across the shards. On startup, mongos loads the metadata from the config server and begins serving, routing user requests to the appropriate shards.
Data distribution policy
A sharded cluster can distribute a single collection's data across multiple shards. Users specify a shard key, that is, a field of the documents in the collection, to control data distribution. Two distribution policies are currently supported: range-based sharding and hash-based sharding.
Range based sharding

As shown in the figure above, the collection is sharded on field x, whose values range over [minKey, maxKey] (x is an integer here; minKey and maxKey are the minimum and maximum integer values). This range is divided into multiple chunks, each holding one contiguous slice of the data (a chunk is usually configured to 64MB).
Chunk1 contains all documents with x in [minKey, -75), and Chunk2 contains all documents with x in [-75, 25). Each chunk's data is stored on a single shard, and each shard can hold multiple chunks. The mapping from chunks to shards is stored on the config server, and mongos also performs automatic load balancing according to the number of chunks on each shard.
Range-based sharding serves range queries well. For example, to find all documents with x in [-30, 10], mongos can route the request directly to Chunk2's shard and retrieve all matching documents.
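This chunk lookup can be sketched in plain JavaScript (an illustration only, not MongoDB's actual routing code; the chunk boundaries follow the figure's example, with minKey/maxKey modeled as infinities):

```javascript
// Simplified chunk table: each chunk owns the half-open range [min, max)
// of the shard key x.
const chunks = [
  { name: "chunk1", min: -Infinity, max: -75, shard: "shard-A" },
  { name: "chunk2", min: -75, max: 25, shard: "shard-A" },
  { name: "chunk3", min: 25, max: Infinity, shard: "shard-B" },
];

// Return the chunks whose range overlaps the query range [lo, hi].
function chunksForRange(lo, hi) {
  return chunks.filter(c => c.min <= hi && lo < c.max);
}

console.log(chunksForRange(-30, 10).map(c => c.name)); // ["chunk2"]
```

A query over [-30, 10] overlaps only chunk2, so only one shard needs to be contacted; a wider range would simply match more chunks.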
The drawback of range-based sharding is that if the shard key increases (or decreases) monotonically, newly inserted documents all land in the same chunk, so writes do not scale. For example, when _id is used as the shard key, the ObjectId values MongoDB generates automatically are timestamp-prefixed and increasing.
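A common workaround for monotonically increasing keys is a hashed shard key (supported since MongoDB 2.4), which leads into the next section; the namespace below is hypothetical:

```javascript
// Sharding on a hashed _id spreads monotonically increasing ObjectId
// values across chunks instead of piling them into one "hot" chunk.
sh.shardCollection("mydb.mycoll", { _id: "hashed" })
```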
Hash based sharding
Hash-based sharding first computes a hash value (a 64-bit integer) from the document's shard key, then distributes documents into chunks by ranges of that hash value, following the range-based policy.

Hash-based and range-based sharding are complementary. Hashing scatters documents across chunks in an essentially random manner, which maximizes write scalability and makes up for the weakness of range-based sharding. However, it cannot serve range queries efficiently: a range query must be broadcast to all shards to locate the matching documents.
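The idea can be sketched with a toy hash function (this is not MongoDB's real hash, which is a 64-bit hash of the BSON value; the chunk count and function here are illustrative only):

```javascript
// Toy 32-bit integer hash (MongoDB actually hashes to a 64-bit integer).
function hashKey(x) {
  let h = x | 0;
  h = Math.imul(h ^ (h >>> 16), 0x45d9f3b);
  h = Math.imul(h ^ (h >>> 13), 0x45d9f3b);
  return (h ^ (h >>> 16)) >>> 0;
}

const NUM_CHUNKS = 4;

// Range-partition the *hash space* into equal chunks, then route each
// document by the hash of its shard key, not by the key itself.
function chunkForKey(x) {
  return Math.floor(hashKey(x) / (0x100000000 / NUM_CHUNKS));
}

// Monotonically increasing keys no longer land in one chunk:
console.log([1, 2, 3, 4].map(chunkForKey));
```

Because routing depends on the hash, a range query over x says nothing about which hash chunks hold the answers, which is exactly why such queries must be broadcast.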
Rational choice of shard key
When selecting a shard key, weigh your workload against the advantages and disadvantages of range-based and hash-based sharding. Also make sure the shard key has enough distinct values; otherwise a single jumbo chunk may form that grows very large and cannot be split. For example, if a collection of user profiles is sharded on the age field, the number of distinct values is small, so individual chunks can become huge.
Mongos request routing
Mongos, as the access entry point of the sharded cluster, routes, distributes, and merges all requests. These actions are transparent to the client driver; applications can access mongos just as they would a standalone MongoDB.
Mongos routes each request to the corresponding shard according to the request type and the shard key.
Query request
• If a query request does not contain the shard key, it must be broadcast to all shards, and mongos merges the results before returning them to the client.
• If a query request contains the shard key, mongos computes the target chunk from the shard key and routes the request to the corresponding shard.
Write request
A write request must contain the shard key. Mongos computes which chunk the document belongs to from the shard key and sends the write to the shard holding that chunk.
Update/delete request
The query condition of an update or delete request must contain the shard key or _id. If the shard key is present, the request is routed directly to the chunk's shard; if only _id is present, the request must be sent to all shards.
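The routing rules above can be summarized as a small decision function (a simplified model, not mongos's actual code; the shard key is assumed to be field x):

```javascript
// Decide how mongos dispatches a request under the rules above.
// Returns "targeted" when the request can go to a single chunk's shard,
// "broadcast" when it must go to all shards.
function routeRequest(type, query) {
  const hasShardKey = "x" in query; // assumed shard key field
  const hasId = "_id" in query;
  switch (type) {
    case "find":
      return hasShardKey ? "targeted" : "broadcast";
    case "insert":
      if (!hasShardKey) throw new Error("insert must contain the shard key");
      return "targeted";
    case "update":
    case "delete":
      if (!hasShardKey && !hasId) {
        throw new Error("update/delete must contain the shard key or _id");
      }
      return hasShardKey ? "targeted" : "broadcast"; // _id-only: all shards
    default:
      throw new Error("unknown request type");
  }
}
```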
Other commands and requests
Commands other than basic create, read, update, and delete requests are handled according to their own logic. For example, the listDatabases command is forwarded to every shard and to the config server, and the results are merged before being returned.
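This is a scatter-gather pattern, sketched below in plain JavaScript (the shard responses are hypothetical; mongos's real merge also combines statistics, not just names):

```javascript
// Hypothetical per-shard listDatabases responses.
const shardResponses = [
  { shard: "shard-A", databases: ["test", "shtest"] },
  { shard: "shard-B", databases: ["shtest", "logs"] },
];

// Gather step: union the database names reported by each shard.
function mergeListDatabases(responses) {
  const names = new Set();
  for (const r of responses) r.databases.forEach(d => names.add(d));
  return [...names].sort();
}

console.log(mergeListDatabases(shardResponses)); // [ 'logs', 'shtest', 'test' ]
```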
Config Server
Config database
The config server stores all the metadata of the sharded cluster, held in the config database. Since version 3.2, the config server can be deployed as a replica set, which greatly simplifies operating the sharded cluster.
mongos> use config
switched to db config
mongos> db.getCollectionNames()

The config.shards collection stores the information of each shard. You can dynamically add or remove shards with the addShard and removeShard commands. As shown below, the cluster currently has two shards, both of which are replica sets.
mongos> db.addShard("mongo-9003/,,")
mongos> db.addShard("mongo-9004/,,")
mongos> db.shards.find()
{ "_id" : "mongo-9003", "host" : "mongo-9003/,," }
{ "_id" : "mongo-9004", "host" : "mongo-9004/,," }

The config.databases collection stores information about every database, including whether sharding is enabled and which shard is the database's primary shard. For collections without sharding enabled, all data is stored on the database's primary shard.
As shown below, the shtest database has sharding enabled (via the enableSharding command) and its primary shard is mongo-9003; the test database does not have sharding enabled, and its primary shard is also mongo-9003.
mongos> sh.enableSharding("shtest")

{ "ok" : 1 }
mongos> db.databases.find()
{ "_id" : "shtest", "primary" : "mongo-9003", "partitioned" : true }
{ "_id" : "test", "primary" : "mongo-9003", "partitioned" : false }
When the sharded cluster creates a database, it selects the shard currently storing the least data as the database's primary shard. You can also call the movePrimary command to change the primary shard for load-balancing purposes; once the primary shard changes, the unsharded data is automatically migrated to the new primary shard.
Data sharding is per collection. After enabling sharding on a database, you must call the shardCollection command to shard a specific collection in that database.
The following command shards the coll collection in the shtest database, using the x field as the shard key for range-based sharding.
mongos> sh.shardCollection("shtest.coll", {x: 1})
{ "collectionsharded" : "shtest.coll", "ok" : 1 }
mongos> db.collections.find()
{ "_id" : "shtest.coll", "lastmodEpoch" : ObjectId("57175142c34046c3b556d302"), "lastmod" : ISODate("1970-02-19T17:02:47.296Z"), "dropped" : false, "key" : { "x" : 1 }, "unique" : false }

When sharding is enabled on a collection, a single chunk is created by default, covering all shard key values in [minKey, maxKey]. With hash-based sharding, you can also create multiple chunks in advance to reduce later chunk migrations.
mongos> db.chunks.find({ns: "shtest.coll"})
{ "_id" : "shtest.coll-x_MinKey", "ns" : "shtest.coll", "min" : { "x" : { "$minKey" : 1 } }, "max" : { "x" : { "$maxKey" : 1 } }, "shard" : "mongo-9003", "lastmod" : Timestamp(1, 0), "lastmodEpoch" : ObjectId("5717530fc34046c3b556d361") }
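For a hashed shard key, the initial chunks can be pre-created at creation time via the numInitialChunks option of the shardCollection command (the namespace below is hypothetical):

```javascript
// Pre-split a hashed collection into 8 chunks up front, so early writes
// are spread across shards without triggering migrations.
mongos> db.adminCommand({
  shardCollection: "shtest.hashcoll",
  key: { x: "hashed" },
  numInitialChunks: 8
})
```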

When the data written to a chunk grows past a threshold, the chunk is split into multiple chunks. When the number of chunks across shards becomes unbalanced, chunks are migrated between shards. As shown below, a chunk of shtest.coll has been split into three chunks after data was written.
mongos> use shtest
mongos> for (var i = 0; i < 10000; i++) { db.coll.insert( {x: i} ); }
mongos> use config
mongos> db.chunks.find({ns: "shtest.coll"})
{ "_id" : "shtest.coll-x_MinKey", "lastmod" : Timestamp(5, 1), "lastmodEpoch" : ObjectId("5703a512a7f97d0799416e2b"), "ns" : "shtest.coll", "min" : { "x" : { "$minKey" : 1 } }, "max" : { "x" : 1 }, "shard" : "mongo-9003" }
{ "_id" : "shtest.coll-x_1.0", "lastmod" : Timestamp(4, 0), "lastmodEpoch" : ObjectId("5703a512a7f97d0799416e2b"), "ns" : "shtest.coll", "min" : { "x" : 1 }, "max" : { "x" : 31 }, "shard" : "mongo-9003" }
{ "_id" : "shtest.coll-x_31.0", "lastmod" : Timestamp(5, 0), "lastmodEpoch" : ObjectId("5703a512a7f97d0799416e2b"), "ns" : "shtest.coll", "min" : { "x" : 31 }, "max" : { "x" : { "$maxKey" : 1 } }, "shard" : "mongo-9004" }
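A toy simulation of chunk splitting (real mongod splits by chunk size, about 64MB by default, and chooses split points from the key distribution; this sketch splits by document count at the median, purely for illustration):

```javascript
// Split a chunk at its median key once it holds too many documents.
const SPLIT_THRESHOLD = 4;

function splitIfNeeded(chunk) {
  if (chunk.docs.length <= SPLIT_THRESHOLD) return [chunk];
  const sorted = [...chunk.docs].sort((a, b) => a - b);
  const mid = sorted[Math.floor(sorted.length / 2)]; // split point
  return [
    { min: chunk.min, max: mid, docs: sorted.filter(d => d < mid) },
    { min: mid, max: chunk.max, docs: sorted.filter(d => d >= mid) },
  ];
}

// One oversized chunk covering the whole key space splits in two;
// repeated splits produce the multi-chunk layout shown above.
const parts = splitIfNeeded({ min: -Infinity, max: Infinity, docs: [5, 1, 9, 3, 7, 2] });
console.log(parts.map(c => [c.min, c.max]));
```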

The config.settings collection stores configuration of the sharded cluster, such as the chunk size and whether the balancer is enabled.
mongos> db.settings.find()
{ "_id" : "chunksize", "value" : NumberLong(64) }
{ "_id" : "balancer", "stopped" : false }

Other collections
• The config.tags collection stores the sharded cluster's shard tags, used to achieve tag-aware chunk distribution.
• The config.changelog collection logs all change operations in the sharded cluster; for example, chunk migrations performed by the balancer are recorded here.
• The config.mongos collection stores information about all mongos instances in the current cluster.
• The config.locks collection stores distributed lock information. Before an operation on a collection, such as moveChunk, the lock must first be acquired, so that multiple mongos instances do not migrate chunks of the same collection at the same time.