ApsaraDB RDS:Migrate incremental backup data (for SQL Server 2008 R2 with cloud disks and SQL Server 2012 or later)

Last Updated:Mar 30, 2026

Use this migration method when your application requires minute-level downtime. You stage a full backup through Object Storage Service (OSS), restore it to ApsaraDB RDS for SQL Server, then apply log or differential backups to close the data gap before cutover. Because the migration uses physical backup files, the destination database is an exact copy of the source — including index fragmentation and statistical information that logical migration cannot preserve.

Important

If your application can tolerate up to 2 hours of downtime and the database is smaller than 100 GB, use full backup migration instead. This method migrates one database at a time.

How it works

The migration runs in two phases:

  1. Full data migration: Back up the source database, upload the full backup file to an OSS bucket, and restore it to the RDS instance via the ApsaraDB RDS console.

  2. Incremental phase: Repeatedly back up and upload log or differential backup files to narrow the data gap. When the last log backup is under 500 MB, stop writes to the source database, upload the final log backup, then open the destination database.

Important

The destination database remains in the In Recovery state (RDS High-availability Edition) or Restoring state (RDS Basic Edition) throughout the migration. It cannot be read or written until you open it in the final step.

Prerequisites

Self-managed SQL Server instance:

  • Uses the FULL recovery model. The SIMPLE recovery model does not support transaction log backups, which are required for incremental migration.

  • Run DBCC CHECKDB on the source database and confirm that no allocation errors or consistency errors are reported:

    CHECKDB found 0 allocation errors and 0 consistency errors in database 'xxx'.
    DBCC execution completed. If DBCC printed error messages, contact your system administrator.

RDS instance:

  • Runs SQL Server 2012 or later, or SQL Server 2008 R2 with cloud disks.

  • No existing database on the RDS instance shares the same name as the source database, and no unattached database files with the same name exist. A naming conflict causes the restore to fail.

  • Available storage is greater than the size of the data files to migrate. Expand the storage capacity before starting if needed.

Authorization:

The ApsaraDB RDS service account must have access to your OSS bucket. To verify and grant authorization:

  1. On the Backup and Restoration page of your RDS instance, click Migrate OSS Backup Data to RDS.

  2. In the Import Guide wizard, click Next twice to reach the 3. Import Data step.

  3. Check the lower-left corner of the page. If You have authorized RDS official service account to access your OSS is displayed, authorization is complete. If not, click the Authorization URL link to grant authorization.

If you use a Resource Access Management (RAM) user, additional permission requirements apply. See RAM user permission issues in the Troubleshooting section of this topic.

Usage notes

| Item | Details |
| --- | --- |
| Scope | One database per migration task. To migrate multiple databases, see Migrate data from a self-managed SQL Server instance to an ApsaraDB RDS for SQL Server instance. |
| Version compatibility | Migrating from a later SQL Server version to an earlier version is not supported. |
| RAM role | After authorization, a role named AliyunRDSImportRole is created in RAM. Do not modify or delete this role. If you do, re-authorize using the migration wizard. |
| Accounts | After migration, accounts from the self-managed instance are not carried over. Create new accounts in the ApsaraDB RDS console. |
| OSS files | Do not delete backup files from the OSS bucket before migration is complete. |
| Backup filenames | Cannot contain special characters such as !@#$%^&*()_+-=. |
| Backup file suffixes | .bak (full backup), .diff (differential backup), .trn or .log (log backup). Other file types are not recognized. A .bak file can contain a full, differential, or log backup; the suffix does not determine the backup type. |
| Converting `.lbak` files | If you downloaded a log backup from the ApsaraDB RDS console (format: .zip.log), rename it to .zip, decompress it, rename the resulting database_name.lbak file to .bak, then upload it as the incremental log backup. |
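
The suffix and special-character rules above can be sketched as a quick pre-upload check. This is an illustrative helper, not part of any Alibaba Cloud tooling; the suffix map and forbidden-character set come straight from the table.

```python
# Suffix-to-category mapping from the usage notes. A .bak file may still
# contain a full, differential, or log backup; the suffix only controls
# whether the console recognizes the file at all.
RECOGNIZED_SUFFIXES = {
    ".bak": "full backup",
    ".diff": "differential backup",
    ".trn": "log backup",
    ".log": "log backup",
}

# Characters the usage notes list as forbidden in backup filenames.
FORBIDDEN = set("!@#$%^&*()_+-=")

def check_backup_filename(filename: str):
    """Return (ok, reason) for a candidate backup filename."""
    stem, dot, suffix = filename.rpartition(".")
    if not dot or "." + suffix not in RECOGNIZED_SUFFIXES:
        return False, "unrecognized suffix; use .bak, .diff, .trn, or .log"
    bad = FORBIDDEN & set(stem)
    if bad:
        return False, "forbidden characters: " + "".join(sorted(bad))
    return True, RECOGNIZED_SUFFIXES["." + suffix]
```

Running the check before uploading avoids a failed restore caused by a file the console silently ignores.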

Migration timeline example

The following example shows how to complete the migration with less than 5 minutes of application downtime.

| Phase | Step | Time | Details |
| --- | --- | --- | --- |
| Full data migration | Preparations | Before 00:00 | Run DBCC CHECKDB, shut down the backup system for the source database, and set the recovery model to FULL. |
| Full data migration | Step 1 | 00:01 | Perform a full backup. Estimated duration: ~1 hour. |
| Full data migration | Step 2 | 02:00 | Upload the full backup file to OSS. Estimated duration: ~1 hour. |
| Full data migration | Step 3 | 03:00 | Restore from the full backup in the ApsaraDB RDS console. Estimated duration: ~19 hours. |
| Incremental phase | Step 4 | 22:00 | Perform a log backup on the source database. Estimated duration: ~20 minutes. |
| Incremental phase | Step 5 | 22:20 | Upload the log backup file to OSS. Estimated duration: ~10 minutes. |
| Incremental phase | Step 6 | 22:30 | Repeat Steps 4–5 until the last log backup is under 500 MB. Stop writes to the source database, perform the final log backup, and upload it. |
| Open the database | Step 7 | 22:34 | Final incremental upload completes (~4 minutes). |
| Open the database | Step 8 | 22:35 | Open the destination database. With Asynchronous DBCC mode, the database is online within 1 minute. |

Step 1: Back up the source database

  1. Download the backup script and open it in SQL Server Management Studio (SSMS).

  2. Set the following parameters:

    | Parameter | Description |
    | --- | --- |
    | @backup_databases_list | Name of the source database. Separate multiple database names with semicolons (;) or commas (,). |
    | @backup_type | Backup type: FULL (full backup), DIFF (differential backup), or LOG (log backup). |
    | @backup_folder | Directory on the self-managed instance to store backup files. The directory is created automatically if it does not exist. |
    | @is_run | 1 to perform the backup; 0 to run a check only. |
  3. Run the backup script. A .bak file is generated regardless of the backup type.
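
The script's parameters map onto standard T-SQL BACKUP statements. The sketch below is a hypothetical reconstruction of the statement each @backup_type value corresponds to; the actual script's internals and file-naming scheme may differ.

```python
def build_backup_sql(database: str, backup_type: str, backup_folder: str) -> str:
    """Build the T-SQL BACKUP statement corresponding to one database and
    one @backup_type value (illustrative; not the script's real code)."""
    templates = {
        # A full backup of the database.
        "FULL": "BACKUP DATABASE [{db}] TO DISK = N'{path}' WITH CHECKSUM",
        # A differential backup relative to the last full backup.
        "DIFF": "BACKUP DATABASE [{db}] TO DISK = N'{path}' WITH DIFFERENTIAL, CHECKSUM",
        # A transaction log backup (requires the FULL recovery model).
        "LOG": "BACKUP LOG [{db}] TO DISK = N'{path}' WITH CHECKSUM",
    }
    if backup_type not in templates:
        raise ValueError("backup_type must be FULL, DIFF, or LOG")
    # Per the note above, the output file uses the .bak suffix for every type.
    path = backup_folder.rstrip("\\/") + "\\" + database + "_" + backup_type.lower() + ".bak"
    return templates[backup_type].format(db=database, path=path)
```

For example, `build_backup_sql("testdb", "LOG", "D:\\backup")` produces a BACKUP LOG statement writing to D:\backup\testdb_log.bak.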

Step 2: Upload backup files to OSS

Backup files must be in an OSS bucket in the same region as your RDS instance. Same-region transfer uses the internal network, avoiding internet egress fees and improving upload speed.

Prepare an OSS bucket

If a bucket already exists, make sure it meets these requirements:

  • Storage class is Standard (Infrequent Access, Archive, Cold Archive, and Deep Cold Archive are not supported).

  • Data encryption is not enabled.

If no bucket exists, create one:

Important

This bucket is used only for migration. After migration is complete, delete the bucket to avoid unnecessary costs and potential data exposure. Do not enable data encryption.

  1. Log on to the OSS console, click Buckets, then click Create Bucket.

  2. Configure the key parameters:

    | Parameter | Description | Example |
    | --- | --- | --- |
    | Bucket name | Globally unique; 3–63 characters; lowercase letters, digits, and hyphens only; must start and end with a lowercase letter or digit. | migratetest |
    | Region | Must match the region of your RDS instance (and the Elastic Compute Service (ECS) instance if you upload over the internal network). | China (Hangzhou) |
    | Storage class | Select Standard. | Standard |
  3. Retain the default values for all other parameters and complete the creation.
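
If you script bucket creation, the naming rule from the table can be validated up front. A minimal local sketch (the regex encodes the 3–63 character, lowercase/digit/hyphen rule; it does not call any OSS API):

```python
import re

# OSS bucket naming rules: 3-63 characters; lowercase letters, digits,
# and hyphens; must start and end with a lowercase letter or digit.
BUCKET_NAME = re.compile(r"^[a-z0-9][a-z0-9-]{1,61}[a-z0-9]$")

def is_valid_bucket_name(name: str) -> bool:
    return BUCKET_NAME.fullmatch(name) is not None
```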

Upload the backup file

Choose the upload method based on file size:

Method 1: ossbrowser (recommended for most cases)

  1. Download ossbrowser.

  2. Decompress the downloaded package and launch the application (for example, oss-browser.exe on Windows x64).

  3. Select AK as the login method, enter your AccessKeyId and AccessKeySecret, and click Log On.

    Keep your AccessKey credentials secure. See Create an AccessKey pair.

  4. Click the target bucket to open it.

  5. Click the upload icon, select the backup file, then click Open to upload.

Method 2: OSS console (files under 5 GB)

  1. Log on to the OSS console.

  2. Click Buckets, then click the target bucket name.

  3. In the Objects section, click Upload Object.

  4. Drag and drop the backup file into the Files to Upload area, or click Select Files to browse.

  5. Click Upload Object at the bottom of the page.

Method 3: OSS API multipart upload (files over 5 GB)

For large backup files, use the OSS Java SDK with multipart upload. The following example reads credentials from environment variables. Set OSS_ACCESS_KEY_ID and OSS_ACCESS_KEY_SECRET before running. For full documentation, see Multipart upload.

import com.aliyun.oss.*;
import com.aliyun.oss.common.auth.*;
import com.aliyun.oss.common.comm.SignVersion;
import com.aliyun.oss.internal.Mimetypes;
import com.aliyun.oss.model.*;
import java.io.File;
import java.io.FileInputStream;
import java.io.InputStream;
import java.util.ArrayList;
import java.util.List;

public class Demo {

    public static void main(String[] args) throws Exception {
        // Replace with the endpoint for your bucket's region.
        String endpoint = "https://oss-cn-hangzhou.aliyuncs.com";
        // Credentials are read from environment variables OSS_ACCESS_KEY_ID and OSS_ACCESS_KEY_SECRET.
        EnvironmentVariableCredentialsProvider credentialsProvider = CredentialsProviderFactory.newEnvironmentVariableCredentialsProvider();
        String bucketName = "examplebucket";
        // Full object path within the bucket, excluding the bucket name.
        String objectName = "exampledir/exampleobject.txt";
        // Full local path of the backup file to upload.
        String filePath = "D:\\localpath\\examplefile.txt";
        // Region identifier for the bucket, for example "cn-hangzhou".
        String region = "cn-hangzhou";

        ClientBuilderConfiguration clientBuilderConfiguration = new ClientBuilderConfiguration();
        clientBuilderConfiguration.setSignatureVersion(SignVersion.V4);
        OSS ossClient = OSSClientBuilder.create()
                .endpoint(endpoint)
                .credentialsProvider(credentialsProvider)
                .clientConfiguration(clientBuilderConfiguration)
                .region(region)
                .build();

        try {
            // Initiate the multipart upload.
            InitiateMultipartUploadRequest request = new InitiateMultipartUploadRequest(bucketName, objectName);
            ObjectMetadata metadata = new ObjectMetadata();
            if (metadata.getContentType() == null) {
                metadata.setContentType(Mimetypes.getInstance().getMimetype(new File(filePath), objectName));
            }
            request.setObjectMetadata(metadata);
            InitiateMultipartUploadResult upresult = ossClient.initiateMultipartUpload(request);
            String uploadId = upresult.getUploadId();

            List<PartETag> partETags = new ArrayList<PartETag>();
            // Each part is 1 MB; adjust based on your file size and network conditions.
            final long partSize = 1 * 1024 * 1024L;

            final File sampleFile = new File(filePath);
            long fileLength = sampleFile.length();
            int partCount = (int) (fileLength / partSize);
            if (fileLength % partSize != 0) {
                partCount++;
            }

            // Upload each part sequentially.
            for (int i = 0; i < partCount; i++) {
                long startPos = i * partSize;
                long curPartSize = (i + 1 == partCount) ? (fileLength - startPos) : partSize;
                UploadPartRequest uploadPartRequest = new UploadPartRequest();
                uploadPartRequest.setBucketName(bucketName);
                uploadPartRequest.setKey(objectName);
                uploadPartRequest.setUploadId(uploadId);
                InputStream instream = new FileInputStream(sampleFile);
                instream.skip(startPos);
                uploadPartRequest.setInputStream(instream);
                // The last part can be smaller than 100 KB; all other parts must be at least 100 KB.
                uploadPartRequest.setPartSize(curPartSize);
                // Part numbers range from 1 to 10,000.
                uploadPartRequest.setPartNumber(i + 1);
                UploadPartResult uploadPartResult = ossClient.uploadPart(uploadPartRequest);
                partETags.add(uploadPartResult.getPartETag());
                instream.close();
            }

            // Complete the upload. OSS assembles all parts in order.
            CompleteMultipartUploadRequest completeMultipartUploadRequest =
                    new CompleteMultipartUploadRequest(bucketName, objectName, uploadId, partETags);
            CompleteMultipartUploadResult completeMultipartUploadResult = ossClient.completeMultipartUpload(completeMultipartUploadRequest);
            System.out.println("Upload successful, ETag: " + completeMultipartUploadResult.getETag());

        } catch (OSSException oe) {
            System.out.println("OSS rejected the request: " + oe.getErrorMessage()
                    + " (Code: " + oe.getErrorCode() + ", Request ID: " + oe.getRequestId() + ")");
        } catch (ClientException ce) {
            System.out.println("Client error communicating with OSS: " + ce.getMessage());
        } finally {
            if (ossClient != null) {
                ossClient.shutdown();
            }
        }
    }
}
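
One caveat about the fixed 1 MB part size in the example: OSS caps a multipart upload at 10,000 parts, so 1 MB parts only cover files up to roughly 9.7 GB. A small helper (illustrative, not part of the SDK) that grows the part size until the count fits:

```python
def plan_part_size(file_size: int,
                   preferred_part_size: int = 1 * 1024 * 1024,
                   min_part_size: int = 100 * 1024,
                   max_parts: int = 10_000) -> int:
    """Pick a part size that keeps the upload within OSS's 10,000-part limit.

    Starts from the preferred size (1 MB, as in the Java example) and
    doubles it until the part count fits; never goes below the 100 KB
    minimum that applies to all parts except the last.
    """
    part_size = max(preferred_part_size, min_part_size)
    # -(-a // b) is ceiling division: parts needed at the current size.
    while -(-file_size // part_size) > max_parts:
        part_size *= 2
    return part_size
```

For a 50 GB backup file this yields 8 MB parts, keeping the upload within the part-count limit.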

Step 3: Create a migration task

  1. Go to the Instances page. In the top navigation bar, select the region of your RDS instance, then click the instance ID.

  2. In the left navigation pane, click Backup and Restoration.

  3. At the top of the page, click Migrate OSS Backup Data to RDS.

  4. In the Import Guide wizard, click Next twice to reach the Import Data step.

    If this is your first time using the migration wizard, click Authorization and complete the authorization process. Without authorization, the OSS Bucket drop-down list remains empty.
  5. Configure the following parameters, then click OK.

    | Parameter | Description |
    | --- | --- |
    | Database name | Name of the destination database on your RDS instance. Must not conflict with any existing database or unattached database files on the instance. |
    | OSS bucket | The OSS bucket containing your full backup file. |
    | OSS file | Click the magnifying glass icon to search for the backup file by filename prefix. Results show the filename, size, and last modified time. |
    | Cloud migration method | Select Access Pending (Incremental Backup). This keeps the database in a restoring state so incremental backups can be applied (BackupMode = UPDF, IsOnlineDB = False). Do not select Immediate Access (Full Backup): that option opens the database immediately after the full backup and does not accept incremental backups (BackupMode = FULL, IsOnlineDB = True). |
  6. Click Refresh to monitor the task status. If the task fails, see Troubleshooting.

Step 4: Import log or differential backup files

After the full backup is restored, apply incremental backups to close the data gap.

  1. Go to the Instances page, select your region, and click the instance ID.

  2. In the left navigation pane, click Backup and Restoration, then click the Cloud Migration Records of Backup Data tab.

  3. Find the destination database and click Upload Incremental Files in the Task actions column. Select the incremental file and click OK.

Repeat this step for each log backup file, uploading them in chronological order.

Keep the last log or differential backup file under 500 MB. Stop all writes to the source database before generating the final backup to ensure data consistency.
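
The loop in this step reduces to a simple readiness check: stop writes and take the final backup once the newest log backup drops under the 500 MB threshold. A sketch with hypothetical helper names:

```python
CUTOVER_THRESHOLD = 500 * 1024 * 1024  # 500 MB, per the guidance above

def ready_for_cutover(last_log_backup_size: int) -> bool:
    """True once the newest log backup is small enough that the final
    stop-writes + backup + upload round fits in minute-level downtime."""
    return last_log_backup_size < CUTOVER_THRESHOLD

def rounds_until_cutover(log_backup_sizes):
    """Index of the first backup/upload round that permits cutover,
    given the size of each successive log backup; None if none qualify
    yet (keep repeating the backup/upload cycle)."""
    for n, size in enumerate(log_backup_sizes):
        if ready_for_cutover(size):
            return n
    return None
```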

Step 5: Open the database

After all backup files are imported, open the destination database to make it available for reads and writes.

  1. Go to the Instances page, select your region, and click the instance ID.

  2. In the left navigation pane, click Backup and Restoration, then click the Cloud Migration Records of Backup Data tab.

  3. Find the destination database and click Open Database in the Task actions column.

  4. Select a consistency check mode and click OK.

    | Mode | Behavior | Use when |
    | --- | --- | --- |
    | Asynchronous DBCC | Opens the database immediately, then runs DBCC CHECKDB in the background. Minimizes downtime. (CheckDBMode = AsyncExecuteDBCheck) | The application is sensitive to downtime but the consistency check result is not immediately critical. |
    | Synchronous DBCC | Runs DBCC CHECKDB before opening the database. Increases time to open. (CheckDBMode = SyncExecuteDBCheck) | Consistency check results are required before the database goes live. |

Step 6: View migration records

Go to Backup and Restoration > Cloud Migration Records of Backup Data to review all migration tasks. Click View File Details in the Task actions column to see the status and details of every backup file associated with a task.

After migration is complete, ApsaraDB RDS automatically backs up the database according to the automatic backup policy configured for your instance. To trigger an immediate backup, run a manual backup.

Troubleshooting

For errors that occur during the full data migration phase, see the troubleshooting section of the full backup migration topic.

The following errors are specific to incremental backup migration.

The destination database fails to open

*Is the source SQL Server edition higher than the target RDS edition?*

Error message: Failed to open database xxx.

The self-managed SQL Server instance uses features (such as data compression or table partitioning) not supported by the target RDS SQL Server edition. For example, if the source runs Enterprise Edition and the destination runs Web Edition, Enterprise-only features cause this error.

Solutions:

  • Query sys.dm_db_persisted_sku_features on the source database to identify edition-specific features in use, remove those features, then re-migrate.

  • Alternatively, migrate to an RDS instance whose SQL Server edition supports the features.

LSN mismatch in the backup chain

*Were incremental backups uploaded out of chronological order?*

Error message: The log in this backup set begins at LSN XXX, which is too recent to apply to the database. RESTORE LOG is terminating abnormally.

The log sequence numbers (LSNs) in the incremental backup file do not continue from the previous backup. This happens when incremental files are applied to the wrong base backup or uploaded out of order.

Solution: Upload incremental backup files in strict chronological order, matching the sequence in which they were created.
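
Why strict ordering matters: SQL Server log backups chain by LSN, and the FirstLSN of each log backup must equal the LastLSN of the one before it. A simplified sketch of the check (the backup tuples are hypothetical; in practice you can read FirstLSN and LastLSN from RESTORE HEADERONLY):

```python
def validate_log_chain(backups):
    """Check that log backups form a contiguous LSN chain.

    `backups` is a list of (name, first_lsn, last_lsn) tuples in upload
    order. A gap or overlap between consecutive backups is what triggers
    the 'too recent to apply' restore error.
    """
    problems = []
    for prev, cur in zip(backups, backups[1:]):
        if cur[1] != prev[2]:
            problems.append((prev[0], cur[0]))
    return problems
```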

Consistency errors found after Asynchronous DBCC

*Did DBCC CHECKDB report consistency errors on the destination database?*

Error message: asynchronously DBCC checkdb failed: CHECKDB found 0 allocation errors and 2 consistency errors in table 'XXX' (object ID XXX).

The background consistency check found errors in the destination database, indicating corruption in the source data.

Solutions:

  • Run the following on the destination database to repair errors (this may result in data loss):

    Important

    This statement may cause data loss. Use only if acceptable.

    DBCC CHECKDB (DBName, REPAIR_ALLOW_DATA_LOSS)
  • Alternatively, fix the source database first, then re-migrate:

    DBCC CHECKDB (DBName, REPAIR_ALLOW_DATA_LOSS)

A full backup file was selected when an incremental file is expected

*Did you select a full backup file in the Upload Incremental Files step?*

Error message: Backup set (xxx) is a Database FULL backup, we only accept transaction log or differential backup.

After the full backup is restored, only log or differential backup files are accepted in the Upload Incremental Files step.

Solution: Select a log or differential backup file.

Database count limit exceeded

*Has the RDS instance reached its maximum number of databases?*

Error message: The database (xxx) migration failed due to databases count limitation.

Solution: Migrate to another RDS instance, or delete databases that are no longer needed.

RAM user permission issues

*Is the OK button dimmed when configuring the migration task?*

The RAM user likely does not have the required permissions. Verify that AliyunOSSFullAccess, AliyunRDSFullAccess, and the custom AliyunRDSImportRole policy are all granted (see Prerequisites).

*Does the RAM user get a no permission error when granting the AliyunRDSImportRole permission?*

Granting the role requires RAM administrative permissions. Complete the authorization with the Alibaba Cloud account, or first attach a policy that allows the RAM user to manage RAM roles (for example, AliyunRAMFullAccess), then retry.

API reference

| API | Description |
| --- | --- |
| CreateMigrateTask | Creates a migration task that restores a backup file from OSS to an ApsaraDB RDS for SQL Server instance. |
| CreateOnlineDatabaseTask | Opens the destination database after all backup files are imported. |
| DescribeMigrateTasks | Lists migration tasks for an ApsaraDB RDS for SQL Server instance. |
| DescribeOssDownloads | Retrieves file details for a migration task. |