Object Storage Service:FAQ about data replication

Last Updated: Mar 03, 2025

This topic provides answers to some commonly asked questions about cross-account and same-account replication in Object Storage Service (OSS), including cross-region replication (CRR) and same-region replication (SRR).

What can I do if I am unable to create a data replication rule?

  1. Check whether the required permissions are missing.

    • Permissions of the RAM user

      • When you try to create a data replication rule in the OSS console as a RAM user, you cannot click OK.

        • Cause: The oss:PutBucketReplication permission is missing.

        • Solution: Grant the oss:PutBucketReplication permission to the RAM user.

      • When you try to create a data replication rule in the OSS console as a RAM user, no custom RAM role names are displayed in the RAM Role Name list.

        • Cause: The ram:ListRoles permission is missing.

        • Solution: Grant the ram:ListRoles permission to the RAM user.

      For more information, see Attach a custom policy to a RAM user.

    • Permissions of the RAM role

      • Same-account replication

        For same-account replication, you need to grant the RAM role the permissions to perform replication operations on the source and destination buckets. For more information about how to create a RAM role and grant permissions to the RAM role, see Role types.

      • Cross-account replication

        For cross-account replication from Account A to Account B, you must use Account A to grant the RAM role the permissions to replicate data in the source bucket and use Account B to grant the RAM role the permissions to receive replicated objects in the destination bucket. For more information about how to create a RAM role and grant permissions to the RAM role, see Methods.

  2. Check whether the source and destination buckets have the same versioning status.

    The source bucket and destination bucket in a replication task must have the same versioning status: versioning-enabled or unversioned.

  3. Check whether the endpoint and AccessKey pair are correct.

    When you create a data replication rule by using OSS SDKs or ossutil, verify that the endpoint that you specify matches the region of the source bucket and that the AccessKey pair that you use is valid (see the sketch below).
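
For reference, the following is a minimal sketch that creates a cross-region replication rule by using the OSS Java SDK. The endpoint, bucket names, and target region are placeholders that you must replace with your own values.

import com.aliyun.oss.OSS;
import com.aliyun.oss.OSSClientBuilder;
import com.aliyun.oss.common.auth.CredentialsProviderFactory;
import com.aliyun.oss.common.auth.EnvironmentVariableCredentialsProvider;
import com.aliyun.oss.model.AddBucketReplicationRequest;
import com.aliyuncs.exceptions.ClientException;

public class CreateReplicationRuleDemo {
    public static void main(String[] args) throws ClientException {
        // Read the AccessKey pair from the OSS_ACCESS_KEY_ID and OSS_ACCESS_KEY_SECRET environment variables.
        EnvironmentVariableCredentialsProvider credentialsProvider =
                CredentialsProviderFactory.newEnvironmentVariableCredentialsProvider();
        // The endpoint must match the region of the source bucket.
        String endpoint = "https://oss-cn-hangzhou.aliyuncs.com";
        OSS ossClient = new OSSClientBuilder().build(endpoint, credentialsProvider);
        try {
            // Replicate objects from the source bucket to a destination bucket in another region.
            AddBucketReplicationRequest request = new AddBucketReplicationRequest("src-replication-bucket");
            request.setTargetBucketName("dest-replication-bucket");
            // The region of the destination bucket, for example oss-cn-beijing.
            request.setTargetBucketLocation("oss-cn-beijing");
            // Do not replicate historical objects in this sketch.
            request.setEnableHistoricalObjectReplication(false);
            ossClient.addBucketReplication(request);
        } finally {
            ossClient.shutdown();
        }
    }
}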

Why are objects in the source bucket not replicated to the destination bucket?

If objects are not replicated to the destination bucket after you configure a data replication rule for the source bucket, troubleshoot the issue based on the following possible causes:

  1. Check whether the configurations for the source bucket are correct.

    • Check whether the status of data replication is Enabled.

    • Check whether the intended name prefix is specified in the data replication rule.

      • To replicate only objects whose names start with a specific prefix from the source bucket to the destination bucket, the prefix must be specified in the data replication rule. For example, if only the log prefix is specified in the data replication rule, only objects whose names start with the log prefix, such as log/date1.txt and log/date2.txt, are replicated. Objects whose names do not start with the log prefix, such as date3.txt, are not replicated.

        Note

        The prefix specified in the data replication rule cannot include an asterisk (*) at the end of the name. For example, the log/* prefix is invalid.

      • To replicate all objects from the source bucket to the destination bucket, do not specify any prefix.

  2. Check whether the objects that failed to be replicated are historical objects. Historical objects will be replicated only if you enable replication of historical objects in the data replication rule.

  3. Check whether the objects in the source bucket are replicated from other buckets in the same region or across regions.

    If an object in a bucket is a replica that is created based on a data replication rule, the object cannot be replicated to a destination bucket based on another data replication rule. For example, you configured a data replication rule to replicate objects from Bucket A to Bucket B and another replication rule to replicate objects from Bucket B to Bucket C. In this case, objects in Bucket B that were originally replicated from Bucket A will not be replicated from Bucket B to Bucket C.

  4. Check whether the objects that failed to be replicated are encrypted by using Key Management Service (KMS). If the objects are encrypted by using KMS, you must enable replication of KMS-encrypted objects when you create the data replication rule.

    If KMS-based encryption is configured for objects in the source or destination bucket, you must set the Replicate Objects Encrypted based on KMS parameter to Yes and configure the following parameters:

    • CMK ID: The customer master key (CMK) that is used to encrypt objects in the destination bucket.

      If you want to use a CMK to encrypt objects that are replicated to the destination bucket, you must create a CMK in the same region as the destination bucket in the KMS console. For more information, see Create a CMK.

    • RAM Role Name: The RAM role that is authorized to perform KMS-based encryption on the destination objects.

      • New RAM Role: A RAM role is created to encrypt the destination objects by using CMKs. The RAM role is in the kms-replication-sourceBucketName-destinationBucketName format.

      • AliyunOSSRole: The AliyunOSSRole role is used to perform KMS-based encryption on the destination objects. If the AliyunOSSRole role does not exist, OSS automatically creates the AliyunOSSRole role when you select this option.

      Note

      If you create a RAM role or modify the permissions of an existing RAM role, make sure that you attach the AliyunOSSFullAccess policy to the role. Otherwise, data may fail to be replicated.

    You can call the HeadObject operation to query the encryption status of objects in the source bucket and the GetBucketEncryption operation to query the default encryption configuration of the destination bucket, as shown in the sketch after this list.

  5. Check whether the replication progress is 100%.

    Data is asynchronously replicated in near real time. The time required to replicate data from the source bucket to the destination bucket may range from several minutes to hours based on the size of the data. If the amount of data to be replicated is large, check whether the source data appears in the destination bucket after the progress is 100%.
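
The following is a minimal sketch of the encryption checks mentioned in step 4, based on the OSS Java SDK. The endpoints, bucket names, and object name are placeholders, and the GetBucketEncryption call may throw an exception if no default encryption rule is configured for the destination bucket.

import com.aliyun.oss.OSS;
import com.aliyun.oss.OSSClientBuilder;
import com.aliyun.oss.common.auth.CredentialsProviderFactory;
import com.aliyun.oss.common.auth.EnvironmentVariableCredentialsProvider;
import com.aliyun.oss.model.ObjectMetadata;
import com.aliyun.oss.model.ServerSideEncryptionConfiguration;
import com.aliyuncs.exceptions.ClientException;

public class CheckEncryptionDemo {
    public static void main(String[] args) throws ClientException {
        EnvironmentVariableCredentialsProvider credentialsProvider =
                CredentialsProviderFactory.newEnvironmentVariableCredentialsProvider();
        OSS srcClient = new OSSClientBuilder().build("https://oss-cn-hangzhou.aliyuncs.com", credentialsProvider);
        OSS destClient = new OSSClientBuilder().build("https://oss-cn-beijing.aliyuncs.com", credentialsProvider);
        try {
            // The x-oss-server-side-encryption value of a source object, for example KMS or AES256.
            ObjectMetadata meta = srcClient.getObjectMetadata("src-replication-bucket", "log/date1.txt");
            System.out.println("source object encryption: " + meta.getServerSideEncryption());

            // The default server-side encryption configuration of the destination bucket.
            ServerSideEncryptionConfiguration sse = destClient.getBucketEncryption("dest-replication-bucket");
            System.out.println("destination bucket SSE algorithm: "
                    + sse.getApplyServerSideEncryptionByDefault().getSSEAlgorithm());
            System.out.println("destination bucket KMS key ID: "
                    + sse.getApplyServerSideEncryptionByDefault().getKMSMasterKeyID());
        } finally {
            srcClient.shutdown();
            destClient.shutdown();
        }
    }
}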

Why is data not deleted from the destination bucket by a data replication task?

  • Cause 1: The destination bucket has a retention policy enabled.

    Before the specified data retention period elapses, no user, including the resource owner, can delete objects from the bucket.

  • Cause 2: The versioning status of the source bucket and the replication policy configurations also affect whether deletion of source objects triggers deletion of destination objects.

    • Source bucket versioning status: unversioned

      • Object deletion requests, replication policy Add/Change: Only the objects in the source bucket are deleted. The objects in the destination bucket are not deleted.

      • Object deletion requests, replication policy Add/Delete/Change: The objects deleted from the source bucket are also deleted from the destination bucket.

    • Source bucket versioning status: versioning-enabled

      • Object deletion requests with no version IDs specified, replication policy Add/Change or Add/Delete/Change: The objects are not deleted from the source or destination bucket. Delete markers are created for the objects in the source bucket and synchronized to the destination bucket.

      • Object deletion requests with version IDs specified, replication policy Add/Change: Only the objects in the source bucket are deleted. The objects in the destination bucket are not deleted.

      • Object deletion requests with version IDs specified, replication policy Add/Delete/Change: The objects deleted from the source bucket are also deleted from the destination bucket.
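
To illustrate the difference between the two types of deletion requests in a versioning-enabled source bucket, the following is a minimal sketch based on the OSS Java SDK. The endpoint, bucket name, object name, and version ID are placeholders.

import com.aliyun.oss.OSS;
import com.aliyun.oss.OSSClientBuilder;
import com.aliyun.oss.common.auth.CredentialsProviderFactory;
import com.aliyun.oss.common.auth.EnvironmentVariableCredentialsProvider;
import com.aliyuncs.exceptions.ClientException;

public class DeleteRequestDemo {
    public static void main(String[] args) throws ClientException {
        EnvironmentVariableCredentialsProvider credentialsProvider =
                CredentialsProviderFactory.newEnvironmentVariableCredentialsProvider();
        OSS ossClient = new OSSClientBuilder().build("https://oss-cn-hangzhou.aliyuncs.com", credentialsProvider);
        try {
            // A deletion request with no version ID specified: in a versioning-enabled bucket,
            // this adds a delete marker instead of removing any object version.
            ossClient.deleteObject("src-replication-bucket", "log/date1.txt");

            // A deletion request with a version ID specified: this permanently removes the specified version.
            ossClient.deleteVersion("src-replication-bucket", "log/date1.txt", "yourObjectVersionId");
        } finally {
            ossClient.shutdown();
        }
    }
}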

How do I verify data consistency between the source and destination buckets?

You can run the following code to verify data consistency between the destination bucket and the source bucket after the replication task is complete:

import com.aliyun.oss.OSSClient;
import com.aliyun.oss.common.auth.*;
import com.aliyun.oss.model.*;
import com.aliyun.oss.OSSException;
import com.aliyuncs.exceptions.ClientException;

public class Demo {
    public static void main(String[] args) throws ClientException {
        // Obtain access credentials from environment variables. Before you run the sample code, make sure that the OSS_ACCESS_KEY_ID and OSS_ACCESS_KEY_SECRET environment variables are configured. 
        EnvironmentVariableCredentialsProvider credentialsProvider = CredentialsProviderFactory.newEnvironmentVariableCredentialsProvider();
        // Set srcEndpoint to the endpoint of the region in which the source bucket is located. 
        String srcEndpoint = "https://oss-cn-hangzhou.aliyuncs.com";
        OSSClient srcClient = new OSSClient(srcEndpoint, credentialsProvider);
        // Specify the name of the source bucket. 
        String srcBucketName = "src-replication-bucket";

        // Set destEndpoint to the endpoint of the region in which the destination bucket is located. 
        String destEndpoint = "https://oss-cn-beijing.aliyuncs.com";
        OSSClient destClient = new OSSClient(destEndpoint, credentialsProvider);
        // Specify the name of the destination bucket. 
        String destBucketName = "dest-replication-bucket";
        // If the source and destination buckets are unversioned, call the listObjectsV2 operation to list the objects that are replicated from the source bucket. 
        // If versioning is enabled or suspended for the source and destination buckets, call the listVersions operation to list the objects that are replicated from the source bucket. 
        ListObjectsV2Result result;
        ListObjectsV2Request request = new ListObjectsV2Request(srcBucketName);
        do {
            result = srcClient.listObjectsV2(request);
            for (OSSObjectSummary summary : result.getObjectSummaries())
            {
                String objectName = summary.getKey();
                ObjectMetadata srcMeta;
                try {
                    // Query the metadata of the objects that are replicated from the source bucket. 
                    srcMeta = srcClient.headObject(srcBucketName, objectName);
                } catch (OSSException ossException) {
                    if (ossException.getErrorCode().equals("NoSuchKey")) {
                        continue;
                    } else {
                        System.out.println("head src-object failed: " + objectName);
                    }
                    continue;
                }

                ObjectMetadata destMeta;
                try {
                    // Query the metadata of the destination objects in the destination bucket. 
                    destMeta = destClient.headObject(destBucketName, objectName);
                } catch (OSSException ossException) {
                    if (ossException.getErrorCode().equals("NoSuchKey")) {
                        System.out.println("dest-object not exist: " + objectName);
                    } else {
                        System.out.println("head dest-object failed: " + objectName);
                    }
                    continue;
                }
                // Check whether the CRC-64 values of the source objects are the same as those of the destination objects. 
                Long srcCrc = srcMeta.getServerCRC();
                String srcMd5 = srcMeta.getContentMD5();
                if (srcCrc != null) {
                    if (destMeta.getServerCRC() != null) {
                        if (!destMeta.getServerCRC().equals(srcCrc)) {
                            System.out.println("crc not equal: " + objectName
                                    + " | srcCrc: " + srcCrc + " | destCrc: " + destMeta.getServerCRC());
                        }
                        continue;
                    }
                }
                // Check whether the MD5 values of the source objects are the same as those of the destination objects. 
                if (srcMd5 != null) {
                    if (destMeta.getContentMD5() != null) {
                        if (!destMeta.getContentMD5().equals(srcMd5)) {
                            System.out.println("md5 not equal: " + objectName
                                    + " | srcMd5: " + srcMd5 + " | destMd5: " + destMeta.getContentMD5());
                        }
                        continue;
                    }
                }
                // Check whether the ETag values of the source objects are the same as those of the destination objects. 
                if (srcMeta.getETag() == null || !srcMeta.getETag().equals(destMeta.getETag())) {
                    System.out.println("etag not equal: " + objectName
                            + " | srcEtag: " + srcMeta.getETag() + " | destEtag: " + destMeta.getETag());
                }
            }

            request.setContinuationToken(result.getNextContinuationToken());
            request.setStartAfter(result.getStartAfter());
        } while (result.isTruncated());
    }
}

Does OSS support chained replication?

No, OSS does not support chained replication. Assume that you configured a data replication rule to replicate data from Bucket A to Bucket B and another data replication rule to replicate data from Bucket B to Bucket C. In this case, data in Bucket A will be replicated only to Bucket B and will not be replicated to Bucket C.

If you want to replicate data from Bucket A to Bucket C, you must configure a separate data replication rule to replicate data from Bucket A to Bucket C.

An exception exists: if replication of historical data is enabled for Bucket A and Bucket B and the historical data replication is still in progress, data that is newly written to Bucket A may be detected by the in-progress historical data replication task and replicated to Bucket C.

Does two-way synchronization between two buckets cause circular replication?

No, two-way synchronization does not cause circular replication. For example, if you configure two-way synchronization between Bucket A and Bucket B, data (historical and incremental data) that is replicated from Bucket A to Bucket B will not be replicated from Bucket B back to Bucket A. Similarly, data (historical and incremental data) that is replicated from Bucket B to Bucket A will not be replicated from Bucket A back to Bucket B.

Does a data replication rule synchronize lifecycle rule-based object deletions from the source bucket to the destination bucket?

  • It depends on the Replication Policy setting of the data replication rule. If you set Replication Policy to Add/Change, OSS does not delete the copies of the objects from the destination bucket when the objects are deleted from the source bucket based on a lifecycle rule.

  • If you set Replication Policy to Add/Delete/Change, OSS deletes the copies of the objects from the destination bucket when the objects are deleted from the source bucket based on a lifecycle rule.

    Note

    In the destination bucket, you may still find objects that have the same names as objects deleted from the source bucket based on the lifecycle rule. This does not indicate that the Add/Delete/Change replication policy failed to take effect. The cause may be that objects with the same names as the deleted objects were manually written to the destination bucket.

Why does the replication progress of historical data remain at 0% for a long period of time?

  • Latency in replication progress update

    The replication progress of historical data is not updated in real time; it is updated only after all objects in the source bucket are scanned. If the source bucket stores a very large number of objects, such as hundreds of millions of objects, several hours may be required before the replication progress of historical data is updated. An unchanged progress value does not mean that historical data is not being replicated to the destination bucket.

    You can check whether historical data in the source bucket is replicated to the destination bucket by viewing the storage usage and the traffic usage, such as inbound and outbound traffic, of the destination bucket. For more information, see View the resource usage of a bucket.

  • Incorrect policy configurations for the source bucket

    OSS does not verify whether the source bucket specified in a data replication rule has the necessary permissions for replication. Consequently, even if the source bucket lacks the required permissions, the data replication rule can still be created, but it will not replicate any data. As a result, the data replication progress remains at 0%.

    In this case, you need to configure the required permissions. For more information, see Required permissions for data replication.
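
You can also query the replication progress programmatically. The following minimal sketch lists the replication rules of the source bucket and queries the progress of each rule by using the OSS Java SDK; the endpoint and bucket name are placeholders.

import com.aliyun.oss.OSS;
import com.aliyun.oss.OSSClientBuilder;
import com.aliyun.oss.common.auth.CredentialsProviderFactory;
import com.aliyun.oss.common.auth.EnvironmentVariableCredentialsProvider;
import com.aliyun.oss.model.BucketReplicationProgress;
import com.aliyun.oss.model.ReplicationRule;
import com.aliyuncs.exceptions.ClientException;

import java.util.List;

public class ReplicationProgressDemo {
    public static void main(String[] args) throws ClientException {
        EnvironmentVariableCredentialsProvider credentialsProvider =
                CredentialsProviderFactory.newEnvironmentVariableCredentialsProvider();
        OSS ossClient = new OSSClientBuilder().build("https://oss-cn-hangzhou.aliyuncs.com", credentialsProvider);
        try {
            // List the replication rules of the source bucket and query the progress of each rule.
            List<ReplicationRule> rules = ossClient.getBucketReplication("src-replication-bucket");
            for (ReplicationRule rule : rules) {
                BucketReplicationProgress progress =
                        ossClient.getBucketReplicationProgress("src-replication-bucket", rule.getReplicationRuleID());
                // Print the progress of historical data replication and the point in time
                // before which newly written data has been replicated.
                System.out.println("rule: " + rule.getReplicationRuleID()
                        + " | historical progress: " + progress.getHistoricalObjectProgress()
                        + " | new object progress: " + progress.getNewObjectProgress());
            }
        } finally {
            ossClient.shutdown();
        }
    }
}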

What can I do if data replication is slow?

  • Increase bandwidth

    A data replication task asynchronously replicates data in near real time. The period of time that is required to replicate data from the source bucket to the destination bucket may range from a few minutes to a few hours, depending on the data size. If a replication task takes a long period of time, we recommend that you check whether the replication task is slow due to bandwidth limits. If the replication is slow due to a bandwidth issue, we recommend that you contact technical support to increase the bandwidth to optimize the replication efficiency.

  • Enable RTC

    After Replication Time Control (RTC) is enabled, OSS replicates most of the objects that you uploaded to OSS within a few seconds and replicates 99.99% of the objects within 10 minutes. After you enable the RTC feature, you are charged for data replication traffic that is generated for RTC-enabled data replication tasks. For more information, see RTC.

How do I monitor operations related to data replication on the source and destination buckets?

You can configure event notifications for the ObjectReplication:ObjectCreated, ObjectReplication:ObjectRemoved, and ObjectReplication:ObjectModified event types to receive notifications about the changes that data replication makes to objects in the source and destination buckets. The changes include adding, modifying, deleting, and overwriting objects. For more information, see Use event notifications to monitor object changes in real time.

Do versioning-suspended buckets support data replication?

No, versioning-suspended buckets do not support data replication. You can configure data replication rules only between two unversioned buckets or between two versioning-enabled buckets.
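
If you are unsure of the versioning status of a bucket, the following minimal sketch queries it by using the OSS Java SDK; the endpoint and bucket name are placeholders.

import com.aliyun.oss.OSS;
import com.aliyun.oss.OSSClientBuilder;
import com.aliyun.oss.common.auth.CredentialsProviderFactory;
import com.aliyun.oss.common.auth.EnvironmentVariableCredentialsProvider;
import com.aliyun.oss.model.BucketVersioningConfiguration;
import com.aliyuncs.exceptions.ClientException;

public class VersioningStatusDemo {
    public static void main(String[] args) throws ClientException {
        EnvironmentVariableCredentialsProvider credentialsProvider =
                CredentialsProviderFactory.newEnvironmentVariableCredentialsProvider();
        OSS ossClient = new OSSClientBuilder().build("https://oss-cn-hangzhou.aliyuncs.com", credentialsProvider);
        try {
            // Expected values include Enabled and Suspended. A bucket for which versioning has
            // never been configured is unversioned. Replication rules can be configured only when
            // both buckets are unversioned or both are versioning-enabled.
            BucketVersioningConfiguration versioning = ossClient.getBucketVersioning("src-replication-bucket");
            System.out.println("source bucket versioning status: " + versioning.getStatus());
        } finally {
            ossClient.shutdown();
        }
    }
}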

If the destination bucket uses KMS to encrypt data, am I charged for calling API operations that are related to KMS encryption?

If the destination bucket uses KMS to encrypt data, you are charged for calling API operations that are related to KMS encryption. For more information about the fees, see Billing of KMS.

Can I disable a data replication rule?

Yes, you can disable a replication rule by clicking Disable Replication next to the data replication rule.

After you disable the data replication rule, data that has already been replicated remains in the destination bucket, but incremental data in the source bucket is no longer replicated to the destination bucket.
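
If you manage replication rules by using the OSS Java SDK instead of the console, the following minimal sketch removes a replication rule by its rule ID. The endpoint, bucket name, and rule ID are placeholders. Similar to disabling a rule in the console, removing the rule stops incremental replication, while data that has already been replicated remains in the destination bucket.

import com.aliyun.oss.OSS;
import com.aliyun.oss.OSSClientBuilder;
import com.aliyun.oss.common.auth.CredentialsProviderFactory;
import com.aliyun.oss.common.auth.EnvironmentVariableCredentialsProvider;
import com.aliyuncs.exceptions.ClientException;

public class DeleteReplicationRuleDemo {
    public static void main(String[] args) throws ClientException {
        EnvironmentVariableCredentialsProvider credentialsProvider =
                CredentialsProviderFactory.newEnvironmentVariableCredentialsProvider();
        OSS ossClient = new OSSClientBuilder().build("https://oss-cn-hangzhou.aliyuncs.com", credentialsProvider);
        try {
            // Remove the replication rule with the specified rule ID from the source bucket.
            // Objects that were already replicated stay in the destination bucket.
            ossClient.deleteBucketReplication("src-replication-bucket", "yourReplicationRuleId");
        } finally {
            ossClient.shutdown();
        }
    }
}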