
Object Storage Service:Data replication (OSS SDK for Python V1)

Last Updated: Mar 17, 2026

OSS replication automatically copies objects from a source bucket to a destination bucket. Use it to implement geo-disaster recovery, data residency compliance, or cross-environment data isolation.

Quick start

The minimal end-to-end flow:

  1. Set environment variables and initialize a bucket object — see Common setup.

  2. (Optional) Call get_bucket_replication_location to confirm your target region is supported.

  3. Call put_bucket_replication to create a replication rule.

  4. Call get_bucket_replication to retrieve the assigned rule ID.

  5. Call get_bucket_replication_progress to monitor replication.

  6. Call delete_bucket_replication when the rule is no longer needed.
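The six steps above can be condensed into one helper. This is an illustrative sketch, not an SDK API: `replication_lifecycle` is a hypothetical name, `bucket` is the initialized source-bucket object from Common setup, and `rule` is a prebuilt `ReplicationRule`.

```python
def replication_lifecycle(bucket, rule):
    """Minimal rule lifecycle: create, read back the ID, check progress, delete.

    `bucket` is an initialized source-bucket object (see Common setup) and
    `rule` is a prebuilt replication rule. Returns the OSS-assigned rule ID
    and the incremental replication marker.
    """
    # Step 3: create the replication rule on the source bucket.
    bucket.put_bucket_replication(rule)

    # Step 4: OSS assigns the rule ID at creation time; fetch it back.
    rule_id = bucket.get_bucket_replication().rule_list[0].rule_id

    # Step 5: check replication progress for that rule.
    progress = bucket.get_bucket_replication_progress(rule_id).progress
    marker = progress.new_object_progress

    # Step 6: remove the rule when it is no longer needed.
    bucket.delete_bucket_replication(rule_id)
    return rule_id, marker
```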

Choose a replication mode

| Use case | CRR | SRR |
| --- | --- | --- |
| Protect against regional outages with geo-disaster recovery | Yes | No |
| Keep data in a specific region for compliance (data residency) | Yes | No |
| Reduce read latency for users in remote regions | Yes | No |
| Aggregate logs from multiple buckets into one | No | Yes |
| Isolate production and test environments | No | Yes |
| Add account-level redundancy within a region | No | Yes |

After choosing a mode, set target_bucket_location when you create the replication rule: for CRR, specify a different Alibaba Cloud region from the source bucket; for SRR, specify the same region.
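As a rough sanity check, the mode follows directly from whether the two regions differ. This is a sketch; `replication_mode` is an illustrative helper, not part of the SDK:

```python
def replication_mode(source_region, target_region):
    """Classify a rule by region: same region is SRR, different regions CRR."""
    return "SRR" if source_region == target_region else "CRR"

print(replication_mode("cn-hangzhou", "cn-beijing"))   # CRR
print(replication_mode("cn-hangzhou", "cn-hangzhou"))  # SRR
```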

How it works

OSS replication works as follows:

  • Asynchronous replication — OSS replicates objects asynchronously after writes complete on the source bucket. There is no guaranteed replication latency; actual latency depends on object size and cross-region network distance.

  • Historical data — By default, OSS replicates existing objects in addition to new writes. Set is_enable_historical_object_replication to False to replicate only objects written after the rule is created.

  • Incremental data — OSS continuously replicates new writes after the rule is active.

  • Delete operations — Delete operations on the source bucket are replicated to the destination. If you delete an object in the source bucket, OSS removes the corresponding object in the destination bucket.

  • Encryption support — SSE-KMS-encrypted objects can be replicated with an authorized RAM role and a destination KMS key.

API overview

| Method | Description | Required permission |
| --- | --- | --- |
| put_bucket_replication | Create a replication rule | oss:PutBucketReplication |
| get_bucket_replication | Query replication rules | oss:GetBucketReplication |
| get_bucket_replication_location | Query available destination regions | oss:GetBucketReplicationLocation |
| get_bucket_replication_progress | Query replication progress | oss:GetBucketReplicationProgress |
| delete_bucket_replication | Delete a replication rule | oss:DeleteBucketReplication |

Prerequisites

Complete the following steps before using the SDK to manage replication rules.

Required permissions

Grant your RAM (Resource Access Management) user the permissions needed for each operation. For more information, see Attach a custom policy to a RAM user.

| Operation | Required permission |
| --- | --- |
| Create a replication rule | oss:PutBucketReplication |
| Query replication rules | oss:GetBucketReplication |
| Query available destination regions | oss:GetBucketReplicationLocation |
| Query replication progress | oss:GetBucketReplicationProgress |
| Delete a replication rule | oss:DeleteBucketReplication |

Required bucket configuration

Note

The source bucket and destination bucket must have the same versioning state. If the source bucket has versioning enabled, the destination bucket must also have versioning enabled. A mismatch in versioning state prevents the replication rule from functioning correctly.
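The check in the note above can be automated. The sketch below assumes the status values reported by the SDK's get_bucket_versioning() ('Enabled', 'Suspended', or None for a bucket that never had versioning); the helper name is illustrative, and a 'Suspended' bucket is treated as not versioning-enabled:

```python
def versioning_states_match(source_status, dest_status):
    """Return True when both buckets are versioning-enabled, or neither is.

    Each status is the value reported by get_bucket_versioning():
    'Enabled', 'Suspended', or None for a never-versioned bucket.
    """
    return (source_status == "Enabled") == (dest_status == "Enabled")

# Usage against live buckets (requires the objects from Common setup and a
# destination bucket object):
# ok = versioning_states_match(
#     bucket.get_bucket_versioning().status,
#     dest_bucket.get_bucket_versioning().status,
# )
```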

Common setup

Initialize the auth and bucket objects once. All subsequent code examples in this topic reuse these objects.

# -*- coding: utf-8 -*-
import oss2
from oss2.credentials import EnvironmentVariableCredentialsProvider
from oss2.models import ReplicationRule

auth = oss2.ProviderAuthV4(EnvironmentVariableCredentialsProvider())
endpoint = "https://oss-cn-hangzhou.aliyuncs.com"
region = "cn-hangzhou"
bucket = oss2.Bucket(auth, endpoint, "examplebucket", region=region)

Replace endpoint, region, and "examplebucket" with the values for your source bucket.

Query available destination regions

Before creating a replication rule, call get_bucket_replication_location to confirm that your target region is supported.

try:
    result = bucket.get_bucket_replication_location()

    print("Available destination regions:")
    for location in result.location_list:
        print(" -", location)
except oss2.exceptions.OssError as e:
    print(f"Failed to query destination regions: {e.status} {e.request_id}")
    raise

result.location_list contains region identifiers such as oss-cn-beijing.

Related API: GetBucketReplicationLocation.

Create a replication rule

Call put_bucket_replication to create a replication rule on the source bucket. OSS assigns a rule ID at creation time; retrieve it by calling get_bucket_replication after the rule is created.

Important

The source and destination buckets must have the same versioning state: both unversioned, or both with versioning enabled.

The following example creates a basic replication rule that replicates all objects to a destination bucket:

replica_config = ReplicationRule(
    target_bucket_name='destexamplebucket',
    target_bucket_location='yourTargetBucketLocation',
)

try:
    bucket.put_bucket_replication(replica_config)
    print("Replication rule created.")
except oss2.exceptions.OssError as e:
    print(f"Failed to create replication rule: {e.status} {e.request_id}")
    raise

ReplicationRule parameters

The ReplicationRule constructor accepts the following parameters:

ReplicationRule(
    target_bucket_name='string',                    # Required
    target_bucket_location='string',                # Required
    prefix_list=['prefix1', 'prefix2'],             # Optional
    action_list=[ReplicationRule.PUT],               # Optional
    is_enable_historical_object_replication=True,    # Optional (default: True)
    target_transfer_type='oss_acc',                  # Optional
    sync_role_name='string',                         # Optional
    sse_kms_encrypted_objects_status=ReplicationRule.ENABLED,  # Optional
    replica_kms_keyid='string'                       # Optional
)
| Parameter | Type | Required | Valid values | Default | Description |
| --- | --- | --- | --- | --- | --- |
| target_bucket_name | str | Yes | | | Name of the destination bucket. |
| target_bucket_location | str | Yes | | | Region of the destination bucket. |
| prefix_list | list | No | Any list of string prefixes | | Object name prefixes to replicate. Only objects whose names match a listed prefix are replicated. Omit to replicate all objects. Note: tag-based filtering is not supported; only prefix filtering. |
| action_list | list | No | ReplicationRule.PUT, ReplicationRule.DELETE | | Operations to replicate. ReplicationRule.PUT replicates object creates and updates; ReplicationRule.DELETE replicates delete operations. Omit to replicate all supported operations. |
| is_enable_historical_object_replication | bool | No | True, False | True | Whether to replicate pre-existing objects. Set to False to replicate only new writes after the rule is created. |
| target_transfer_type | str | No | 'oss_acc' | | Transfer type. Set to 'oss_acc' to use transfer acceleration for CRR. |
| sync_role_name | str | No | | | RAM role name authorized for data replication. Required when replicating SSE-KMS-encrypted objects. |
| sse_kms_encrypted_objects_status | str | No | ReplicationRule.ENABLED, ReplicationRule.DISABLED | | Set to ReplicationRule.ENABLED to replicate SSE-KMS-encrypted objects. |
| replica_kms_keyid | str | No | | | KMS key ID used to encrypt objects at the destination. Required when replicating SSE-KMS-encrypted objects. |
Important

The three SSE-KMS parameters — sync_role_name, sse_kms_encrypted_objects_status, and replica_kms_keyid — must all be set together to replicate SSE-KMS-encrypted objects.
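The all-or-nothing constraint can be checked before calling the SDK. This is an illustrative sketch; `sse_kms_params_consistent` is a hypothetical helper, not part of oss2:

```python
def sse_kms_params_consistent(sync_role_name, sse_kms_status, replica_kms_keyid):
    """The three SSE-KMS settings are all-or-nothing: either every one is
    provided (to replicate SSE-KMS-encrypted objects) or every one is omitted."""
    provided = [v is not None
                for v in (sync_role_name, sse_kms_status, replica_kms_keyid)]
    return all(provided) or not any(provided)
```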

Create a rule with filtering and encryption

The following example creates a rule that replicates only objects with specific prefixes and encrypts them at the destination using SSE-KMS:

# Replicate only objects with the specified prefixes.
prefix_list = ['prefix1', 'prefix2']

replica_config = ReplicationRule(
    prefix_list=prefix_list,
    # Replicate only creates and updates.
    action_list=[ReplicationRule.PUT],
    target_bucket_name='destexamplebucket',
    target_bucket_location='yourTargetBucketLocation',
    # Disable historical data replication.
    is_enable_historical_object_replication=False,
    # Use transfer acceleration.
    target_transfer_type='oss_acc',
    # Required for SSE-KMS encryption.
    sync_role_name='roleNameTest',
    sse_kms_encrypted_objects_status=ReplicationRule.ENABLED,
    replica_kms_keyid='9468da86-3509-4f8d-a61e-6eab1eac****',
)

try:
    bucket.put_bucket_replication(replica_config)
    print("Replication rule created with filtering and encryption.")
except oss2.exceptions.OssError as e:
    print(f"Failed to create replication rule: {e.status} {e.request_id}")
    raise

Verify replication

After creating a replication rule, upload a test object to the source bucket and confirm it appears in the destination bucket.

Replication is asynchronous. The following example waits 60 seconds and then checks whether the object has appeared in the destination bucket.

import time

# Upload a test object to the source bucket.
bucket.put_object('replication-test.txt', 'This is a replication test.')

# Wait for replication to complete. Replication is asynchronous and may take some time.
time.sleep(60)

# Connect to the destination bucket.
# Replace the endpoint and region with those of the destination bucket.
dest_bucket = oss2.Bucket(auth, '<destination-endpoint>', 'destexamplebucket', region='<destination-region>')

if dest_bucket.object_exists('replication-test.txt'):
    print("Replication verified: test object found in destination bucket.")
else:
    print("Object not yet replicated. Replication is asynchronous and may take some time.")

Related API: PutBucketReplication.

Query replication rules

Call get_bucket_replication to retrieve all replication rules configured on a bucket. Use this method to retrieve the rule ID that OSS assigns when you call put_bucket_replication.

try:
    result = bucket.get_bucket_replication()

    for rule in result.rule_list:
        print("Rule ID:", rule.rule_id)
        print("Destination bucket:", rule.target_bucket_name)
        print("Destination region:", rule.target_bucket_location)
except oss2.exceptions.OssError as e:
    print(f"Failed to query replication rules: {e.status} {e.request_id}")
    raise

Each rule in result.rule_list contains the following fields:

| Field | Type | Description |
| --- | --- | --- |
| rule_id | str | Unique identifier of the replication rule, assigned by OSS at creation time. |
| target_bucket_name | str | Name of the destination bucket. |
| target_bucket_location | str | Region of the destination bucket. |
| action_list | list | Operations being replicated (for example, [ReplicationRule.PUT]). Present only if set at creation time. |
| prefix_list | list | Object name prefixes being filtered. Present only if set at creation time. |
| status | str | Current status of the replication rule (for example, "starting" or "doing"). |
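When a bucket has multiple rules, a small helper can pick out the rule ID for a given destination. This is a sketch using the fields listed above; `find_rule_id` is an illustrative name, not an SDK method:

```python
def find_rule_id(result, target_bucket_name):
    """Return the ID of the rule replicating to `target_bucket_name`, or
    None if no rule in the get_bucket_replication() result matches."""
    for rule in result.rule_list:
        if rule.target_bucket_name == target_bucket_name:
            return rule.rule_id
    return None

# Usage: rule_id = find_rule_id(bucket.get_bucket_replication(), 'destexamplebucket')
```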

For the full response schema, see GetBucketReplication.

Related API: GetBucketReplication.

Query replication progress

OSS tracks replication progress in two ways:

  • Historical object replication — A percentage indicating how much of the pre-existing data has been replicated. Available only when is_enable_historical_object_replication is enabled.

  • Incremental replication — A point-in-time marker. All objects written to the source bucket before this timestamp have been replicated. For example, a value of "2024-01-15T12:00:00.000Z" means all objects created before that time have been replicated.
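The incremental marker can be turned into a lag estimate. This is a sketch: `replication_lag_seconds` is an illustrative helper, and it assumes the timestamp format shown in the example value above:

```python
from datetime import datetime, timezone

def replication_lag_seconds(new_object_progress, now=None):
    """Seconds between `now` and the incremental marker, e.g.
    "2024-01-15T12:00:00.000Z". Everything written before the marker has
    been replicated, so this value bounds the current replication lag."""
    marker = datetime.strptime(new_object_progress, "%Y-%m-%dT%H:%M:%S.%fZ")
    marker = marker.replace(tzinfo=timezone.utc)
    if now is None:
        now = datetime.now(timezone.utc)
    return (now - marker).total_seconds()
```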

Pass the rule ID to get_bucket_replication_progress to check progress for a specific rule. Retrieve the rule ID from get_bucket_replication().

try:
    # Replace 'test_replication_1' with the rule ID returned by get_bucket_replication().
    result = bucket.get_bucket_replication_progress('test_replication_1')

    print("Rule ID:", result.progress.rule_id)
    print("Historical replication enabled:", result.progress.is_enable_historical_object_replication)
    print("Historical replication progress:", result.progress.historical_object_progress)
    print("Incremental replication progress:", result.progress.new_object_progress)
except oss2.exceptions.OssError as e:
    print(f"Failed to query replication progress: {e.status} {e.request_id}")
    raise

The result.progress object contains the following fields:

| Field | Type | Description |
| --- | --- | --- |
| rule_id | str | Replication rule ID. |
| is_enable_historical_object_replication | str | Whether historical object replication is enabled: "true" or "false". |
| historical_object_progress | str | Percentage of historical objects replicated, for example "85%". Only meaningful when historical object replication is enabled. |
| new_object_progress | str | Point-in-time marker for incremental replication. All objects written to the source before this timestamp have been replicated. |

Related API: GetBucketReplicationProgress.

Delete a replication rule

Call delete_bucket_replication to delete a replication rule. Pass the rule ID to identify which rule to delete.

When you delete a replication rule:

  • Objects already replicated to the destination bucket are preserved.

  • Subsequent changes to the source bucket are no longer replicated.

try:
    # Replace 'test_replication_1' with the rule ID to delete.
    bucket.delete_bucket_replication('test_replication_1')
    print("Replication rule deleted successfully.")
except oss2.exceptions.OssError as e:
    print(f"Failed to delete replication rule: {e.status} {e.request_id}")
    raise

Related API: DeleteBucketReplication.
