
ApsaraDB RDS: Migrate full backups to the cloud

Last Updated: Mar 30, 2026

Migrate a self-managed SQL Server database to ApsaraDB RDS for SQL Server by uploading a full backup file (.bak) to Object Storage Service (OSS), then importing it into your RDS instance through the console. This approach works for one-time migrations, disaster recovery, and cloud-based data backup.

How it works

  1. Back up your on-premises SQL Server database to a .bak file.

  2. Upload the backup file to an OSS bucket in the same region as your RDS instance.

  3. Trigger an import task from the RDS console — the system pulls the file from OSS and restores it.

Important

This solution supports database-level migration only. To migrate multiple or all databases at once, use the instance-level migration solution.

Limitations

Before you start, review these constraints to confirm this solution fits your use case:

  • Migration scope: Database-level only. One database per task.

  • Version compatibility: The source SQL Server version cannot be newer than the destination RDS instance version. For example, you cannot restore a SQL Server 2016 backup to a SQL Server 2012 instance.

  • Backup type: Full backup files (.bak) only. Differential backups and log backups are not accepted by the import process.

  • File name: Cannot contain special characters such as !@#$%^&*()_+-=.

  • File format: If the source is a backup file downloaded from ApsaraDB RDS for SQL Server (.zip format), decompress it to .bak before uploading.

  • Supported extensions: .bak (full backup), .diff (differential backup), .trn or .log (log backup). Files with other extensions are not recognized.

  • AliyunRDSImportRole: After granting OSS access to the RDS service account, a role named AliyunRDSImportRole is created in RAM. Do not modify or delete this role — doing so will cause migration tasks to fail. If you accidentally delete it, re-grant permissions through the migration wizard.

  • Post-migration accounts: After migration, the original database accounts from the source instance become unavailable. Create new accounts in the ApsaraDB RDS console.

  • OSS file retention: Do not delete the backup file from OSS until the migration task completes.
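
Several of the file-name rules above can be checked before uploading. The following Python sketch encodes them; the special-character set is an assumption narrowed from the list above, since hyphens and underscores appear in this document's own example file names:

```python
from pathlib import Path

# Extensions the import process recognizes (see Limitations above).
ALLOWED_EXTENSIONS = {".bak", ".diff", ".trn", ".log"}

# Subset of the special-character list above. Hyphen and underscore are
# omitted because the documentation's own example file names use them;
# treat this set as an assumption, not the service's exact rule.
FORBIDDEN_CHARS = set("!@#$%^&*()+=")

def check_backup_filename(name: str) -> list[str]:
    """Return a list of problems; an empty list means the name looks usable."""
    problems = []
    ext = Path(name).suffix.lower()
    if ext not in ALLOWED_EXTENSIONS:
        problems.append(f"unsupported extension: {ext or '(none)'}")
    bad = sorted(set(Path(name).stem) & FORBIDDEN_CHARS)
    if bad:
        problems.append("forbidden characters: " + " ".join(bad))
    return problems
```

For example, a .zip file downloaded from ApsaraDB RDS would be flagged with an unsupported extension, reminding you to decompress it to .bak first.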

Billing

| Scenario | Cost |
| --- | --- |
| Uploading a backup file to OSS | Free |
| Storing the backup file in OSS | OSS storage fees apply. See OSS pricing. |
| Importing over the internal network (same region) | Free |
| Importing over the internet | OSS outbound traffic fees apply. See OSS pricing. |

Prerequisites

Before you begin, make sure that:

  • The RDS instance has enough remaining storage space to hold the data file. If not, upgrade the instance storage.

  • For SQL Server 2012 or later, or SQL Server 2008 R2 with cloud disks: The RDS instance does not contain a database with the same name as the database to be migrated.

  • For SQL Server 2008 R2 with high-performance local disks: A database with the same name as the database to be migrated already exists on the RDS instance.

  • If using a Resource Access Management (RAM) user:

    • The RAM user has AliyunOSSFullAccess and AliyunRDSFullAccess permissions. See Manage OSS permissions using RAM and Manage ApsaraDB RDS permissions using RAM.

    • Your Alibaba Cloud account has granted the ApsaraDB RDS service account access to your OSS resources. See the authorization steps below.

    • Your Alibaba Cloud account has created a custom access policy and attached it to the RAM user. The policy must grant the ram:GetRole action on the AliyunRDSImportRole role:

      Policy content

      {
          "Version": "1",
          "Statement": [
              {
                  "Action": [
                      "ram:GetRole"
                  ],
                  "Resource": "acs:ram:*:*:role/AliyunRDSImportRole",
                  "Effect": "Allow"
              }
          ]
      }

Authorization instructions

Authorize RDS to access OSS

To check or grant authorization:

  1. Go to the Backup and Restoration page of the RDS instance and click Migrate OSS Backup Data to RDS.

  2. In the Import Guide, click Next twice to reach step 3. Import Data.

  3. If the message You have authorized RDS official service account to access your OSS appears in the lower-left corner, authorization is already in place. Otherwise, click Authorization URL to grant access.


Prepare the source database

In your on-premises SQL Server environment, run DBCC CHECKDB to verify the database has no errors before backing it up:

DBCC CHECKDB

A clean database returns:

CHECKDB found 0 allocation errors and 0 consistency errors in database 'xxx'.
DBCC execution completed. If DBCC printed error messages, contact your system administrator.

Do not proceed if the check reports errors. See DBCC CHECKDB failed for how to fix issues.
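
If you script this pre-check (for example, by capturing sqlcmd output), the summary line shown above can be parsed mechanically. A minimal sketch, assuming the English message format printed above:

```python
import re

# Matches the summary line DBCC CHECKDB prints, as shown above.
PATTERN = re.compile(
    r"CHECKDB found (\d+) allocation errors and (\d+) consistency errors"
)

def is_clean(dbcc_output: str) -> bool:
    """True only if the output reports 0 allocation and 0 consistency errors."""
    m = PATTERN.search(dbcc_output)
    if not m:
        return False  # no summary line found; treat as not verified
    return m.group(1) == "0" and m.group(2) == "0"
```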

Step 1: Back up the local database

Choose the procedure that matches your RDS instance type.

SQL Server 2012 or later, or SQL Server 2008 R2 with cloud disks

Important

Stop all write operations to the database before starting the backup. Data written during the backup process is not included in the backup file.

  1. Download the backup script and open it in SQL Server Management Studio (SSMS).

  2. In the script, set the parameters in the SELECT statement under YOU HAVE TO INIT PUBLIC VARIABLES HERE:

    | Parameter | Description |
    | --- | --- |
    | @backup_databases_list | Databases to back up. Separate multiple database names with semicolons (;) or commas (,). |
    | @backup_type | Backup type: FULL, DIFF, or LOG. |
    | @backup_folder | Local directory for the backup file. Created automatically if it does not exist. |
    | @is_run | 1 to run the backup; 0 to perform a dry run (check only). |
  3. Run the script.

SQL Server 2008 R2 with high-performance local disks

  1. Open SQL Server Management Studio (SSMS) and log in to the database you want to migrate.

  2. Check the current recovery model:

    USE master;
    GO
    SELECT name, CASE recovery_model
    WHEN 1 THEN 'FULL'
    WHEN 2 THEN 'BULK_LOGGED'
    WHEN 3 THEN 'SIMPLE' END model FROM sys.databases
    WHERE name NOT IN ('master','tempdb','model','msdb');
    GO
  3. If the model column is not FULL, set it to FULL:

    Important

    Setting the recovery model to FULL increases the volume of transaction log data. Make sure you have enough disk space before proceeding.

    ALTER DATABASE [dbname] SET RECOVERY FULL;
    GO
    ALTER DATABASE [dbname] SET AUTO_CLOSE OFF;
    GO
  4. Back up the database. The following example backs up dbtest to d:\backup\backup.bak:

    USE master;
    GO
    BACKUP DATABASE [dbtest] TO DISK = 'd:\backup\backup.bak' WITH COMPRESSION, INIT;
    GO
  5. Verify the backup file:

    USE master;
    GO
    RESTORE FILELISTONLY
      FROM DISK = N'D:\backup\backup.bak';

    If the command returns a result set, the file is valid. If it returns an error, redo the backup.

  6. (Optional) Restore the original recovery model if you changed it in step 3:

    ALTER DATABASE [dbname] SET RECOVERY SIMPLE;
    GO

    Skip this step if the recovery model was already FULL before you started.
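
When automating this procedure, the backup statement from step 4 can be composed from a database name and target path. A small helper following the bracket and N'' quoting shown in the examples above; actually executing the statement would additionally require a SQL Server driver such as pyodbc, which is not shown here:

```python
def build_backup_sql(database: str, disk_path: str) -> str:
    """Compose the BACKUP DATABASE statement used in step 4 above.

    Brackets around the database name and N'' around the path follow the
    examples in this section.
    """
    safe_db = database.replace("]", "]]")      # escape closing brackets in the name
    safe_path = disk_path.replace("'", "''")   # escape single quotes in the path
    return (
        f"BACKUP DATABASE [{safe_db}] "
        f"TO DISK = N'{safe_path}' WITH COMPRESSION, INIT;"
    )
```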

Step 2: Upload the backup file to OSS

Prepare an OSS bucket

You need an OSS bucket in the same region as your RDS instance. When the bucket and the instance are in the same region, the migration runs over the internal network — free of outbound traffic charges and faster than going over the internet.

If you already have a bucket, confirm that it meets these requirements:

  • Storage class: Standard. Infrequent Access, Archive, Cold Archive, and Deep Cold Archive are not supported.

  • Server-side encryption: disabled.

If you need to create a bucket, make sure you have activated OSS, then:

  1. Log in to the OSS console, click Buckets, and then click Create Bucket.

  2. Set the following parameters. Leave all other parameters at their default values.

    Important

    Do not enable server-side encryption when creating the bucket. Delete the bucket after migration to prevent data exposure and reduce costs.

    | Parameter | Description | Example |
    | --- | --- | --- |
    | Bucket name | Globally unique; cannot be changed after creation. Lowercase letters, digits, and hyphens only. Must start and end with a lowercase letter or digit. 3–63 characters. | migratetest |
    | Region | Must match the region of your RDS instance. | China (Hangzhou) |
    | Storage class | Select Standard. | Standard |
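
The bucket-name constraints above can be expressed as a single regular expression (global uniqueness, of course, can only be checked against the service). A sketch:

```python
import re

# Encodes the naming rules above: 3-63 characters, lowercase letters,
# digits, and hyphens, starting and ending with a letter or digit.
BUCKET_NAME_RE = re.compile(r"^[a-z0-9][a-z0-9-]{1,61}[a-z0-9]$")

def is_valid_bucket_name(name: str) -> bool:
    return bool(BUCKET_NAME_RE.fullmatch(name))
```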

Upload the backup file

Choose an upload method based on your file size.

Option 1: ossbrowser (recommended for most cases)

  1. Download ossbrowser.

  2. On Windows x64: decompress oss-browser-win32-x64.zip and run oss-browser.exe.

  3. Select Log On With AK, enter your AccessKeyId and AccessKeySecret, and click Log On.

    An AccessKey pair is used for identity verification. Keep your AccessKey confidential.


  4. Click the destination bucket.


  5. Click the upload icon, select the backup file, and click Open.

Option 2: OSS console (for files smaller than 5 GB)

  1. Log in to the OSS console.

  2. Click Buckets, then the name of your bucket.


  3. In the Objects list, click Upload Object.


  4. Drag the backup file to the Files to Upload area, or click Select Files to browse.


  5. Click Upload Object.

Option 3: OSS API multipart upload (for files larger than 5 GB; Python example)

Use the alibabacloud-oss-v2 Python SDK for multipart upload with resumable support. Install the dependency first:

pip install alibabacloud-oss-v2

Set the following environment variables before running the script:

| Variable | Description |
| --- | --- |
| OSS_ACCESS_KEY_ID | Your AccessKey ID |
| OSS_ACCESS_KEY_SECRET | Your AccessKey secret |
| OSS_SESSION_TOKEN | STS token (required only when using STS credentials) |

# -*- coding: utf-8 -*-
"""
Alibaba Cloud OSS Python SDK v2
Dependency: pip install alibabacloud-oss-v2
"""

import os
import sys
from pathlib import Path
import alibabacloud_oss_v2 as oss
from alibabacloud_oss_v2 import exceptions as oss_ex


def get_client_from_env(region: str, endpoint: str | None = None) -> oss.Client:
    """
    Create a v2 client from environment variables.
    - Prioritize using Region (recommended), but also support custom Endpoints (optional).
    - Compatible with both AK and STS:
        * AK: Requires OSS_ACCESS_KEY_ID / OSS_ACCESS_KEY_SECRET
        * STS: Also requires OSS_SESSION_TOKEN (compatible with the old variable OSS_SECURITY_TOKEN)
    """
    # Compatibility: If the user uses the old variable OSS_SECURITY_TOKEN, map it to the v2 expected OSS_SESSION_TOKEN
    sec_token_legacy = os.getenv("OSS_SECURITY_TOKEN")
    if sec_token_legacy and not os.getenv("OSS_SESSION_TOKEN"):
        os.environ["OSS_SESSION_TOKEN"] = sec_token_legacy

    ak = os.getenv("OSS_ACCESS_KEY_ID")
    sk = os.getenv("OSS_ACCESS_KEY_SECRET")
    st = os.getenv("OSS_SESSION_TOKEN")  # STS Token (optional)

    if not (ak and sk):
        raise ValueError("No valid AK found. Set the OSS_ACCESS_KEY_ID and OSS_ACCESS_KEY_SECRET environment variables. "
                         "If using STS, also set OSS_SESSION_TOKEN (or the old name OSS_SECURITY_TOKEN).")

    # Indicate the type of credential used
    if st:
        print("STS Token (OSS_SESSION_TOKEN) detected. Using STS credentials.")
    else:
        print("No STS Token detected. Using AccessKey (AK) credentials.")

    credentials_provider = oss.credentials.EnvironmentVariableCredentialsProvider()
    cfg = oss.config.load_default()
    cfg.credentials_provider = credentials_provider

    # Basic network configuration
    cfg.region = region  # Example: 'cn-hangzhou'
    if endpoint:
        # Optional: Custom Endpoint (e.g., internal network, accelerated, dedicated domain)
        cfg.endpoint = endpoint

    # You can also add other configurations here, such as: cfg.use_accelerate_endpoint = True
    return oss.Client(cfg)


def resumable_upload_file_v2(
    client: oss.Client,
    bucket_name: str,
    object_key: str,
    file_path: str,
    part_size: int = 1 * 1024 * 1024,
    parallel_num: int = 4,
    checkpoint_dir: str | None = None,
):
    """
    Implement concurrent multipart upload with resumable upload.

    :param client: Initialized oss.Client
    :param bucket_name: Destination bucket name
    :param object_key: Destination object key (without bucket name)
    :param file_path: Full path of the local file
    :param part_size: Part size in bytes, default is 1 MB
    :param parallel_num: Number of concurrent upload threads, default is 4
    :param checkpoint_dir: Directory to store breakpoint information; if None, resumable upload is disabled
    """
    file_path = str(file_path)
    if not Path(file_path).exists():
        raise FileNotFoundError(f"Error: Local file not found. Check the file_path configuration: {file_path}")

    # Construct the Uploader; enable resumable upload based on whether checkpoint_dir is provided
    if checkpoint_dir:
        uploader = client.uploader(
            enable_checkpoint=True,
            checkpoint_dir=checkpoint_dir,
            part_size=part_size,
            parallel_num=parallel_num,
        )
    else:
        uploader = client.uploader(
            part_size=part_size,
            parallel_num=parallel_num,
        )

    print(f"Starting to upload file: {file_path}")
    print(f"Destination Bucket: {bucket_name}")
    print(f"Destination Object: {object_key}")
    print(f"Part size: {part_size} bytes, Concurrency: {parallel_num}")
    if checkpoint_dir:
        print(f"Resumable upload: Enabled (checkpoint_dir={checkpoint_dir})")
    else:
        print("Resumable upload: Disabled (set checkpoint_dir to enable)")

    # Execute the upload (Uploader automatically chooses between multi/single part concurrent upload based on size)
    result = uploader.upload_file(
        oss.PutObjectRequest(bucket=bucket_name, key=object_key),
        filepath=file_path,
    )

    print("-" * 30)
    print("File uploaded successfully!")
    print(f"HTTP Status: {result.status_code}")
    print(f"ETag: {result.etag}")
    print(f"Request ID: {result.request_id}")
    # CRC-64 checksum; v2 enables data validation by default
    print(f"CRC64: {result.hash_crc64}")
    print("-" * 30)


def main():
    # Before running the code example, make sure you have set the corresponding environment variables.
    # macOS/Linux:
    #   AK method:
    #     export OSS_ACCESS_KEY_ID=YOUR_AK_ID
    #     export OSS_ACCESS_KEY_SECRET=YOUR_AK_SECRET
    #   STS method:
    #     export OSS_ACCESS_KEY_ID=YOUR_STS_ID
    #     export OSS_ACCESS_KEY_SECRET=YOUR_STS_SECRET
    #     export OSS_SECURITY_TOKEN=YOUR_STS_TOKEN
    #
    # Windows:
    #   Powershell: $env:OSS_ACCESS_KEY_ID="YOUR_AK_ID"
    #   cmd: set OSS_ACCESS_KEY_ID=YOUR_AK_ID

    # ===================== Parameters (modify as needed) =====================
    # Region example: 'cn-hangzhou'; we recommend using Region first
    region = "cn-hangzhou"

    # Optional: Custom Endpoint (for internal network, dedicated domain, accelerated domain name, etc.)
    # Example: 'https://oss-cn-hangzhou.aliyuncs.com'
    endpoint = 'https://oss-cn-hangzhou.aliyuncs.com'

    # Bucket and Object
    bucket_name = "examplebucket"
    object_key = "test.bak"

    # Full path of the local file to upload.
    # Windows example: r'D:\localpath\examplefile.txt'  (note the r at the beginning)
    # macOS/Linux example: '/Users/test/examplefile.txt'
    file_path = r"D:\oss\test.bak"

    # Sharding and concurrency
    part_size = 1 * 1024 * 1024  # Default is 1 MB; OSS requires a minimum part size of 100 KB
    parallel_num = 4

    # Resumable upload directory (pass None to disable; we recommend specifying a writable directory)
    checkpoint_dir = str(Path.cwd() / ".oss_checkpoints")
    # =================== End of parameters ===================

    print("Script execution starts...")
    try:
        client = get_client_from_env(region=region, endpoint=endpoint)
        # If resumable upload is enabled, make sure the directory exists
        if checkpoint_dir:
            Path(checkpoint_dir).mkdir(parents=True, exist_ok=True)

        resumable_upload_file_v2(
            client=client,
            bucket_name=bucket_name,
            object_key=object_key,
            file_path=file_path,
            part_size=part_size,
            parallel_num=parallel_num,
            checkpoint_dir=checkpoint_dir,
        )
    except FileNotFoundError as e:
        print(e)
    except oss_ex.ServiceError as e:
        # Error returned by the OSS server
        print("\nAn OSS server-side error occurred.")
        print(f"HTTP Status: {getattr(e, 'status_code', 'N/A')}")
        print(f"Error Code: {getattr(e, 'code', 'N/A')}")
        print(f"Message: {getattr(e, 'message', 'N/A')}")
        print(f"Request ID: {getattr(e, 'request_id', 'N/A')}")
        print(f"Endpoint: {getattr(e, 'request_target', 'N/A')}")
    except oss_ex.BaseError as e:
        # SDK local/serialization/deserialization/credential errors
        print("\nAn OSS SDK client-side error occurred.")
        print(str(e))
    except Exception as e:
        print(f"\nAn unknown error occurred: {e}")


if __name__ == "__main__":
    main()

Get the backup file URL (SQL Server 2008 R2 with high-performance local disks only)

After uploading the file, generate a temporary URL to use in the import step:

  1. Log in to the OSS console and click Buckets.

  2. Click the name of the destination bucket.

  3. In the left navigation pane, select File Management > Files.

  4. In the Actions column for the backup file, click Details. Set Expiration (Seconds) to 28800 (8 hours).

    Important

    The migration task uses this URL to download the file. If the URL expires before the task completes, the migration fails.

  5. Click Copy File URL.


  6. To migrate over the internal network, change the public endpoint in the URL to the internal endpoint. For example, change oss-cn-shanghai.aliyuncs.com to oss-cn-shanghai-internal.aliyuncs.com.

    The internal endpoint format varies by region. See Endpoints and data centers.
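
The endpoint rewrite in step 6 is mechanical for regions that follow the common oss-<region>.aliyuncs.com pattern. A hedged Python helper; since some regions use different endpoint formats, verify the result against the endpoint list:

```python
from urllib.parse import urlparse, urlunparse

def to_internal_endpoint(url: str) -> str:
    """Rewrite a public OSS URL to use the internal endpoint.

    Assumes the common pattern oss-<region>.aliyuncs.com ->
    oss-<region>-internal.aliyuncs.com; already-internal URLs pass
    through unchanged.
    """
    parts = urlparse(url)
    host = parts.netloc
    if host.endswith(".aliyuncs.com") and "-internal." not in host:
        # bucket-style hosts look like <bucket>.oss-cn-shanghai.aliyuncs.com
        host = host.replace(".aliyuncs.com", "-internal.aliyuncs.com")
    return urlunparse(parts._replace(netloc=host))
```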

Step 3: Import the OSS backup data

Choose the procedure that matches your RDS instance type.

SQL Server 2012 or later, or SQL Server 2008 R2 with cloud disks

  1. Go to the Instances page. In the top navigation bar, select the region of your RDS instance, then click the instance ID.

  2. In the left navigation pane, click Backup and Restoration.

  3. Click Migrate OSS Backup Data to RDS.

  4. In the Import Guide, click Next twice to reach the data import step.

    On first use, you must authorize ApsaraDB RDS to access OSS. Click Authorization URL and complete the authorization. Without this, the OSS Bucket list will be empty. If the file you uploaded is not visible, check that the file extension meets the requirements in Limitations and that the bucket and RDS instance are in the same region.
  5. Configure the import settings:

    | Parameter | Description |
    | --- | --- |
    | Database name | The name for the restored database on the RDS instance. Must follow SQL Server naming conventions and must not conflict with any existing database or unattached database file on the instance. If the backup set contains a database file with the same name as the target database, you can restore the database using that file; the database file name must match the target database name. |
    | OSS bucket | Select the bucket containing the backup file. |
    | OSS file | Click the search icon to find the file by prefix. The list shows file name, size, and update time. |
    | Cloud migration method | Immediate Access (Full Backup): full migration from a single full backup file; sets BackupMode=FULL and IsOnlineDB=True. Access Pending (Incremental Backup): migration using a full backup plus log or differential backups; sets BackupMode=UPDF and IsOnlineDB=False. |
    | Consistency check mode | Asynchronous DBCC: opens the database immediately and runs DBCC CHECKDB in the background, reducing downtime; sets CheckDBMode=AsyncExecuteDBCheck. Synchronous DBCC: runs DBCC CHECKDB before opening the database; takes longer but confirms consistency upfront; sets CheckDBMode=SyncExecuteDBCheck. |
  6. Click OK.

After the task completes, the RDS instance backs up at the next scheduled time per the automatic backup policy. The resulting backup set includes the migrated data and is available on the Backup and Restoration page. To generate a backup immediately, trigger a manual backup.
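
The console choices in step 5 map directly onto API parameter values (see the CreateMigrateTask entry in the API reference at the end of this topic). Restating that mapping as a lookup can be handy when scripting the import; the dictionary keys below mirror the console labels:

```python
# Maps each console choice to the API parameter values stated above.
MIGRATION_METHODS = {
    "Immediate Access (Full Backup)": {"BackupMode": "FULL", "IsOnlineDB": True},
    "Access Pending (Incremental Backup)": {"BackupMode": "UPDF", "IsOnlineDB": False},
}
CHECKDB_MODES = {
    "Asynchronous DBCC": "AsyncExecuteDBCheck",
    "Synchronous DBCC": "SyncExecuteDBCheck",
}

def import_task_params(method: str, checkdb: str) -> dict:
    """Assemble the parameter values the chosen console options correspond to."""
    params = dict(MIGRATION_METHODS[method])
    params["CheckDBMode"] = CHECKDB_MODES[checkdb]
    return params
```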

SQL Server 2008 R2 with high-performance local disks

  1. Go to the Instances page, select the region, and click the instance ID.

  2. In the left navigation pane, click Databases.

  3. In the Actions column for the destination database, click Migrate Backup Files from OSS.


  4. In the Import Guide, review the information and click Next.

  5. Review the OSS upload prompts and click Next.

  6. In the OSS URL of backup file field, enter the URL you copied in step 2, then click OK.

    This instance type supports one-time import of a full backup file only.


Step 4: Monitor the migration task

Choose the view that matches your RDS instance type.

SQL Server 2012 or later, or SQL Server 2008 R2 with cloud disks

Go to Backup and Restoration and click the Cloud Migration Records of Backup Data tab. The tab shows the task status, start time, and end time. By default, records from the past week are displayed — adjust the time range as needed.


If Task Status is Failed, check Task Description or click View File Details to identify the cause, then rerun the task after resolving the issue.

SQL Server 2008 R2 with high-performance local disks

On the Data Migration To Cloud page, find the migration task to view its progress.

If Task Status is Failed, check Task Description or click View File Details to identify the cause, then rerun the task after resolving the issue.

Common return messages

| Task type | Status | Task description | Meaning |
| --- | --- | --- | --- |
| One-time full backup import | Success | success | Migration completed successfully. |
| One-time full backup import | Failed | Failed to download backup file since OSS URL was expired. | The OSS download URL expired before the task finished. Regenerate the URL and retry. |
| One-time full backup import | Failed | Your backup is corrupted or newer than RDS, failed to verify. | The backup file is corrupted, or the source SQL Server version is newer than the RDS instance version. |
| One-time full backup import | Failed | DBCC checkdb failed | The source database has consistency errors. |
| One-time full backup import | Failed | autotest_2008r2_std_testmigrate_log.trn is a Transaction Log backup, we only accept a FULL Backup. | The file is a log backup. Provide a full backup file instead. |
| One-time full backup import | Failed | autotest_2008r2_std_testmigrate_diff.bak is a Database Differential backup, we only accept a FULL Backup. | The file is a differential backup. Provide a full backup file instead. |
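
When polling task status programmatically (for example, via DescribeMigrateTasks), the task descriptions above can be matched by keyword to suggest a next step. A simple classifier, assuming the English message texts shown in the table; real descriptions may differ, so unmatched messages fall through to "unknown":

```python
# Keyword rules derived from the return messages above.
RULES = [
    ("OSS URL was expired", "Regenerate the OSS URL and retry."),
    ("corrupted or newer than RDS", "Re-create the backup or use a matching RDS version."),
    ("DBCC checkdb failed", "Fix source database consistency errors first."),
    ("Transaction Log backup", "Provide a full backup file, not a log backup."),
    ("Differential backup", "Provide a full backup file, not a differential backup."),
]

def suggest_fix(task_description: str) -> str:
    """Return remediation advice for a known failure message, else 'unknown'."""
    lowered = task_description.lower()
    for needle, advice in RULES:
        if needle.lower() in lowered:
            return advice
    return "unknown"
```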

Troubleshooting

The database (xxx) already exists on RDS. Please back it up and drop it, then try again. or Database 'xxx' already exists. Choose a different database name.

ApsaraDB RDS for SQL Server does not allow migrating into an existing database. Back up the existing database, delete it, then rerun the migration task.

Backup set (xxx.bak) is a Database Differential backup, we only accept a FULL Backup.

The provided file is a differential backup. This migration method accepts full backup files only.

Backup set (xxx.trn) is a Transaction Log backup, we only accept a FULL Backup.

The provided file is a log backup. This migration method accepts full backup files only.

Failed to verify xxx.bak, backup file was corrupted or newer edition than RDS.

Two possible causes:

  • Corrupted file: Create a new full backup of the source database and start a new migration task.

  • Version mismatch: The source SQL Server version is newer than the destination RDS instance version. Use an RDS instance running the same version or newer. To upgrade an existing RDS instance, see Upgrade the database engine version.

DBCC checkdb failed

The source database has errors. Repair them with the following command (the database must be in SINGLE_USER mode to run a repair), then migrate again:

DBCC CHECKDB (DBName, REPAIR_ALLOW_DATA_LOSS) WITH NO_INFOMSGS, ALL_ERRORMSGS

Important

This command may cause data loss.

Not Enough Disk Space for restoring, space left (xxx MB) < needed (xxx MB). or Not Enough Disk Space, space left xxx MB < bak file xxx MB.

The RDS instance does not have enough storage. Upgrade the instance storage.

Cannot open database "xxx" requested by the login. The login failed.

The account used to connect to the RDS instance lacks permissions for the database. On the Account Management page, grant the required permissions. See Grant permissions to an account and Permissions supported by different account types.

Your RDS doesn't have any init account yet, please create one and grant permissions on RDS console to this migrated database (xxx).

The RDS instance has no privileged account. The backup was restored successfully, but no permissions could be granted. Create a privileged account.

The OK button is grayed out when configuring the migration task

The RAM user has insufficient permissions. Review the RAM user requirements in Prerequisites.

permission denied when granting permissions for AliyunRDSImportRole as a RAM user

Use your Alibaba Cloud account to temporarily add AliyunRAMFullAccess to the RAM user.

API reference

| API | Description |
| --- | --- |
| CreateMigrateTask | Creates a migration task that restores a backup file from OSS to an ApsaraDB RDS for SQL Server instance. |
| CreateOnlineDatabaseTask | Opens the database of a migration task. |
| DescribeMigrateTasks | Lists migration tasks for an ApsaraDB RDS for SQL Server instance. |
| DescribeOssDownloads | Queries file details of a migration task. |