
ApsaraDB for MongoDB: Create a multi-zone sharded cluster instance

Last Updated: Mar 28, 2026

ApsaraDB for MongoDB distributes the mongos, shard, and ConfigServer nodes of a sharded cluster instance across two or three zones in the same region. Nodes communicate over an internal network, so a single zone failure does not bring down the cluster.

Prerequisites

Before you begin, review the limitations and node deployment policies below.

Limitations

Disk type: Cloud disk
Supported zone configurations: Single zone, double zones, multiple zones (three zones)
Notes: Multi-zone deployment is available only in specific regions. See Cloud disk-based instances (three-zone deployment) and Cloud disk-based instances (double-zone deployment).

Disk type: Local disk
Supported zone configurations: Single zone only
Notes: Local disk sharded cluster instances do not support cross-zone deployment. Replica set instances can use the Zone parameter to span zones in the format Region Zones (1 + 2 + 3), for example, Shenzhen Zones (C + D + E). See Local disk-based instances.

    Node deployment policies

    The following table describes how nodes are distributed across zones and what fault protection each configuration provides.

    Zone configuration: Single zone
    Node distribution: All mongos, shard, and ConfigServer nodes are in one zone.
    Zone failure behavior: No cross-zone protection. A zone failure brings down the entire cluster.

    Zone configuration: Double zones
    Node distribution: A sharded cluster instance contains at least two mongos nodes, one in each zone; additional mongos nodes are deployed evenly across the two zones. Shard and ConfigServer nodes (primary, secondary, and hidden) are spread across both zones and may shift zones after a primary/secondary or HA switchover.
    Zone failure behavior: The cluster remains available based on the configured switchover mode. See Double-zone switchover modes.

    Zone configuration: Multiple zones (three zones)
    Node distribution: A sharded cluster instance contains at least two mongos nodes deployed across two zones; a third mongos node goes to the third zone by default, and additional mongos nodes are deployed across the three zones in turn. Shard nodes (primary, secondary, and hidden) are distributed across the three zones, though not in a fixed order, and may shift zones after a switchover. ConfigServer nodes (primary, secondary, and hidden) are deployed across the three zones.
    Zone failure behavior: The high-availability (HA) system automatically switches services to another zone. The cluster remains available.
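The round-robin distribution described above can be modeled as a short sketch. This is illustrative only, not ApsaraDB's actual placement logic (which can also move nodes after HA switchovers); the node and zone names are hypothetical.

```python
from itertools import cycle

def assign_nodes_to_zones(nodes, zones):
    """Assign each node to a zone in round-robin order.

    Illustrative model of the "deployed across the zones in turn"
    behavior; real zone membership can change after a switchover.
    """
    placement = {}
    zone_cycle = cycle(zones)
    for node in nodes:
        placement[node] = next(zone_cycle)
    return placement

# Three-zone example: the first two mongos nodes land in two zones,
# and the third mongos goes to the third zone.
mongos = ["mongos-1", "mongos-2", "mongos-3"]
print(assign_nodes_to_zones(mongos, ["zone-a", "zone-b", "zone-c"]))
# {'mongos-1': 'zone-a', 'mongos-2': 'zone-b', 'mongos-3': 'zone-c'}
```

With only two zones, the same model shows why the double-zone layout places additional nodes evenly: the cycle simply alternates between the two zones.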

    Double-zone switchover modes

    When a zone becomes unavailable, ApsaraDB for MongoDB handles recovery based on the switchover mode configured on the instance details page.

    Important

    The default switchover mode is Manual switchover. In manual mode, the cluster does not automatically fail over. Evaluate which mode fits your availability requirements before the instance goes live.

    Mode: Manual switchover (default)
    Behavior: The HA system does not switch over automatically. You must confirm the switchover, and accept potential data loss, before it proceeds to restore availability with the nodes in the surviving zone.
    Data loss risk: Data may be lost within the synchronization latency window.

    Mode: Automatic switchover
    Behavior: The HA system automatically starts the remaining nodes in the surviving zone to restore availability.
    Data loss risk: Data may be lost within the synchronization latency window.
    Note

    If the write concern of an instance is set to WriteConcern=majority, a write operation is not confirmed until a majority of nodes acknowledge it. In a double-zone setup, if the zone that holds two nodes fails, data written to the primary node in that zone but not yet synchronized to the other zone may be lost.
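The majority rule in the note above reduces to a simple counting check: a write is confirmed only once more than half of the nodes acknowledge it. The sketch below models that check; the node counts are hypothetical examples.

```python
def majority_acknowledged(total_nodes: int, acks: int) -> bool:
    """Return True if a write has been acknowledged by a majority
    of nodes, i.e. at least floor(n / 2) + 1 of them."""
    return acks >= total_nodes // 2 + 1

# Three-node shard (primary + secondary + hidden): majority is 2 of 3.
print(majority_acknowledged(3, 2))  # True: the write is confirmed
print(majority_acknowledged(3, 1))  # False: unconfirmed; if the zone
# holding the only acknowledging node fails now, that write can be lost.
```

In the double-zone scenario from the note, a write acknowledged only by the two nodes in the failing zone satisfies the majority check yet can still be lost if it never reached the surviving zone, which is the data loss window the table describes.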

    Network connectivity with ECS

    If your application runs on an Elastic Compute Service (ECS) instance, the ECS instance and the MongoDB instance must meet all of the following requirements:

    Requirement: Same region
    Reason: Instances in different regions cannot communicate over an internal network.

    Requirement: Same network type (VPC recommended)
    Reason: VPC provides higher security than the classic network.

    Requirement: Same VPC ID (when using VPC)
    Reason: Instances in different VPCs cannot communicate, even in the same region.

    Requirement: Same zone (recommended)
    Reason: Reduces network latency between your application and the database.

    If your ECS instance uses the classic network and you want to switch to VPC, see Migrate ECS instances from the classic network to a VPC. For details on finding your ECS instance's zone and network information, see View instance information.
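The co-location requirements above can be expressed as a pre-deployment check. This is a sketch under assumed inputs: the dictionary keys ('region', 'network', 'vpc_id') are illustrative stand-ins for the values shown in your ECS and MongoDB consoles, not an ApsaraDB API.

```python
def can_connect_internally(ecs: dict, mongo: dict) -> tuple[bool, list[str]]:
    """Check the internal-network connectivity requirements:
    same region, same network type, and (for VPC) same VPC ID."""
    problems = []
    if ecs["region"] != mongo["region"]:
        problems.append("different regions: no internal connectivity")
    if ecs["network"] != mongo["network"]:
        problems.append("different network types: switch both to VPC")
    elif ecs["network"] == "vpc" and ecs["vpc_id"] != mongo["vpc_id"]:
        problems.append("different VPCs: no connectivity even in the same region")
    return (not problems, problems)

# Hypothetical example values; read the real ones from the console.
ecs = {"region": "cn-shenzhen", "network": "vpc", "vpc_id": "vpc-123"}
mongo = {"region": "cn-shenzhen", "network": "vpc", "vpc_id": "vpc-123"}
print(can_connect_internally(ecs, mongo))  # (True, [])
```

The same-zone recommendation is deliberately left out of the hard checks: a zone mismatch only adds latency, whereas the three checks above each block internal connectivity entirely.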

    Create a multi-zone sharded cluster instance

    Follow the same steps as creating a standard sharded cluster instance. When prompted to select a zone, choose Double zones or Multiple zones. For the full procedure, see Create a sharded cluster instance.

    What's next

    Use the service availability feature to view the current node distribution across zones. You can also switch node roles so your applications connect to the nearest nodes. For more information, see Switch node roles.