ApsaraDB RDS:Migrate the incremental backup data of a self-managed SQL Server instance to an ApsaraDB RDS instance that runs SQL Server 2008 R2 with cloud disks or runs SQL Server 2012 or later

Last Updated: Jan 09, 2024

ApsaraDB RDS for SQL Server supports the migration of incremental backup data to the cloud. You can upload the full backup files of the source database on a self-managed SQL Server instance to an Object Storage Service (OSS) bucket and then restore the full backup data to the destination database on your ApsaraDB RDS for SQL Server instance in the ApsaraDB RDS console. After that, you can import the differential or log backup files to the destination database in the console to complete the migration. This migration method reduces downtime to minutes.

Scenarios

The migration method in this topic is suitable in the following scenarios:

  • Migrate data to the destination database on your RDS instance in physical mode rather than in logical mode.

    Note
    • Physical migration migrates data by using physical backup files. Logical migration replays the executed DML statements on the destination database on your RDS instance.

    • Physical migration ensures 100% data consistency between the source database and the destination database. Logical migration cannot ensure 100% data consistency. For example, index fragmentation and statistical information may change after the migration.

  • Migrate data with minute-level downtime.

    Note

    If your self-managed SQL Server instance has a data volume of less than 100 GB and does not provide time-sensitive services, we recommend that you migrate the data to your RDS instance by using full backup files. The migration may cause a downtime of 2 hours. For more information, see Migrate the full backup data of a self-managed SQL Server instance to an ApsaraDB RDS instance that runs SQL Server 2008 R2 with cloud disks or runs SQL Server 2012 or later.

Prerequisites

  • The RDS instance runs SQL Server 2008 R2 with cloud disks or runs SQL Server 2012 or later. The names of existing databases on your RDS instance are different from the name of the source database on the self-managed SQL Server instance. For more information about how to create an RDS instance, see Create an ApsaraDB RDS for SQL Server instance.

    Note

    RDS instances that run SQL Server 2008 R2 with cloud disks are no longer available for purchase. For more information, see [EOS/Discontinuation] ApsaraDB RDS instances running SQL Server 2008 R2 with cloud disks are no longer available for purchase from July 14, 2023.

  • The available storage of the RDS instance is sufficient. If the available storage is insufficient, you must expand the storage capacity of the RDS instance before you start the migration. For more information, see Change the specifications of an ApsaraDB RDS for SQL Server instance.

  • A privileged account is created for the RDS instance. For more information, see Create accounts and databases.

  • The self-managed SQL Server instance uses the FULL recovery model. A T-SQL sketch that shows how to check the recovery model and run the consistency check is provided after this list.

    Note
    • Transaction log backups are required when you migrate the incremental backup data of a self-managed SQL Server instance to an RDS instance. If the self-managed SQL Server instance uses the SIMPLE recovery model, transaction logs cannot be backed up.

    • If the size of differential backup files is large, the time that is required to migrate the incremental backup data of a self-managed SQL Server instance to an RDS instance may increase.

  • The output of the DBCC CHECKDB statement that is executed on the self-managed SQL Server instance indicates that no allocation errors or consistency errors occur. If no errors occur, the following execution result is returned:

    ...
    CHECKDB found 0 allocation errors and 0 consistency errors in database 'xxx'.
    DBCC execution completed. If DBCC printed error messages, contact your system administrator.
  • OSS is activated. For more information, see Activate OSS.

  • The OSS bucket and the RDS instance reside in the same region. For more information, see Step 2: Upload the generated full backup file to the OSS bucket.

  • If you use a RAM user, make sure that the following requirements are met:
    • The AliyunOSSFullAccess and AliyunRDSFullAccess policies are attached to the RAM user. For more information about how to grant permissions to RAM users, see Use RAM to manage OSS permissions and Use RAM to manage ApsaraDB RDS permissions.
    • The service account of ApsaraDB RDS is authorized by using your Alibaba Cloud account to access the OSS bucket.
    • A custom policy is manually created by using your Alibaba Cloud account and is attached to the RAM user. For more information about how to create a custom policy, see Create a custom policy on the JSON tab.
      You must use the following content for the custom policy:
      {
          "Version": "1",
          "Statement": [
              {
                  "Action": [
                      "ram:GetRole"
                  ],
                  "Resource": "acs:ram:*:*:role/AliyunRDSImportRole",
                  "Effect": "Allow"
              }
          ]
      }
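
The recovery model and consistency prerequisites can be checked directly in SQL Server Management Studio (SSMS). The following T-SQL is a minimal sketch of these checks. The database name TestDB is a placeholder: replace it with the name of your source database.

    -- Check the current recovery model of the source database.
    SELECT name, recovery_model_desc
    FROM sys.databases
    WHERE name = N'TestDB';

    -- Switch to the FULL recovery model if the database uses SIMPLE or BULK_LOGGED.
    ALTER DATABASE [TestDB] SET RECOVERY FULL;

    -- Verify that no allocation errors or consistency errors occur.
    DBCC CHECKDB (N'TestDB');

Note that after you switch a database from the SIMPLE recovery model to the FULL recovery model, the log backup chain starts only with the next full backup. Therefore, change the recovery model before you perform the full backup that is described in Step 1.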

Usage notes

  • The migration method that is described in this topic is at the database level. You can migrate the backup data only of one database on a self-managed SQL Server instance to your RDS instance at a time. If you want to migrate the backup data of multiple or all databases on a self-managed SQL Server instance at a time, we recommend that you use an instance-level migration method. For more information, see Migrate data from a self-managed SQL Server instance to an ApsaraDB RDS for SQL Server instance.

  • The migration from a later SQL Server version to an earlier SQL Server version is not supported. For example, if a self-managed SQL Server instance runs SQL Server 2016 and your RDS instance runs SQL Server 2012, you cannot migrate the backup data of the self-managed SQL Server instance to the RDS instance.

  • The names of the backup files cannot contain special characters, such as at signs (@) and vertical bars (|). If the names of the backup files contain special characters, the migration fails.

  • After you authorize the service account of ApsaraDB RDS to access the OSS bucket, a role named AliyunRDSImportRole is created in Resource Access Management (RAM). Do not modify or delete this role. Otherwise, the backup files cannot be downloaded from the OSS bucket, and you must re-authorize the service account by using the migration wizard.

  • The RDS instance does not carry over the accounts of the self-managed SQL Server instance. After the migration is complete, you must create accounts for the RDS instance in the ApsaraDB RDS console.

  • Before the migration is complete, do not delete the backup files from the OSS bucket. If you delete the backup files before the migration is complete, the migration fails.

  • The names of backup files must be suffixed with bak, diff, trn, or log. The following list describes the suffixes:

    • bak: indicates full backup files.

    • diff: indicates differential backup files.

    • trn or log: indicates log backup files of transactions.

    Note

    If the backup files are not generated by using the backup script that is provided in this topic or the backup files do not use the preceding suffixes, the system may fail to identify the types of the backup files. This affects subsequent operations.

Migration process

Migration phase: Full backup and restoration

  • Step 1 (before 00:00): Complete the following preparations:

    • Execute the DBCC CHECKDB statement on the source database of the self-managed SQL Server instance and verify that no allocation errors or consistency errors occur.

    • Shut down the backup system for the source database.

    • Change the recovery model of the source database to FULL.

  • Step 2 (00:01): Perform a full backup on the source database. Time required: about 1 hour.

  • Step 3 (02:00): Upload the full backup file to the OSS bucket. Time required: about 1 hour.

  • Step 4 (03:00): Restore data from the full backup file to your RDS instance in the ApsaraDB RDS console. Time required: about 19 hours.

Migration phase: Incremental backup and restoration

  • Step 5 (22:00): Perform a log backup on the source database. Time required: about 20 minutes.

  • Step 6 (22:20): Upload the log backup file to the OSS bucket. Time required: about 10 minutes.

  • Step 7 (22:30): Repeat Step 5 and Step 6 to perform a log backup on the source database, upload the log backup file to the OSS bucket, and then restore data from the log backup file to your RDS instance. Perform these operations until the size of the last log backup file is less than 500 MB. Then, stop data writes to the source database, perform the last log backup, and upload the last log backup file to the OSS bucket.

Migration phase: Database opening

  • Step 8 (22:34): Restore data from the last log backup file to your RDS instance. Time required: about 4 minutes.

  • Step 9 (22:35): Open the destination database on your RDS instance. If you execute the DBCC CHECKDB statement in asynchronous mode, the destination database can be opened in about 1 minute.

The preceding process provides an example of how to minimize downtime. Your application can continue to run, and you do not need to stop it until the last log backup is performed. In this example, the downtime of your application does not exceed 5 minutes.
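
If you do not use the backup script that is provided in Step 1, the full backup in Step 2 and the repeated log backups in Steps 5 to 7 can also be performed with plain BACKUP statements. The following T-SQL is a sketch: the database name TestDB, the folder C:\BACKUP, and the file names are placeholders, and the file name suffixes follow the conventions in the "Usage notes" section.

    -- Step 2: perform a full backup of the source database.
    BACKUP DATABASE [TestDB]
        TO DISK = N'C:\BACKUP\TestDB_FULL_20240101.bak'
        WITH CHECKSUM, STATS = 10;

    -- Steps 5 to 7: perform repeated log backups. Keep the last file smaller than 500 MB.
    BACKUP LOG [TestDB]
        TO DISK = N'C:\BACKUP\TestDB_LOG_01.trn'
        WITH CHECKSUM, STATS = 10;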

Step 1: Back up the source database

  1. Download the backup script file. Then, open the file by using SQL Server Management Studio (SSMS).

  2. Configure the following parameters.

    • @backup_databases_list: The name of the source database that you want to back up. If you specify multiple databases, separate the names of these databases with semicolons (;) or commas (,).

    • @backup_type: The backup type. Valid values:

      • FULL: full backup

      • DIFF: differential backup

      • LOG: log backup

    • @backup_folder: The directory that is used to store the backup files. If the specified directory does not exist, the system automatically creates one.

    • @is_run: Specifies whether to perform a backup or a check. Valid values:

      • 1: performs a backup.

      • 0: performs a check.

  3. Execute the backup script.
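
    For example, to perform a full backup of a database named TestDB to the C:\BACKUP folder, the parameters might be set as follows. This is a sketch: the exact variable declarations depend on the downloaded script, and TestDB is a placeholder name.

      SET @backup_databases_list = N'TestDB'; -- the source database to back up
      SET @backup_type = N'FULL';             -- FULL, DIFF, or LOG
      SET @backup_folder = N'C:\BACKUP';      -- created automatically if it does not exist
      SET @is_run = 1;                        -- 1: perform the backup; 0: perform a check only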

Step 2: Upload the generated full backup file to the OSS bucket

  1. Create an OSS bucket.
    1. Log on to the OSS console.
    2. In the left-side navigation pane, click Buckets. On the page that appears, click Create Bucket.
    3. Configure the following parameters. Retain the default values for other parameters.
      Note The created OSS bucket is used only for this data migration and is no longer needed after the data migration is complete. You need to configure only the key parameters. To prevent data leaks and excessive costs, we recommend that you delete the OSS bucket at the earliest opportunity after the data migration is complete.
      • Bucket Name: The name of the OSS bucket. The name must be globally unique and cannot be modified after it is configured. Example: migratetest
        Naming conventions:
        • The name can contain lowercase letters, digits, and hyphens (-).
        • The name must start and end with a lowercase letter or a digit.
        • The name must be 3 to 63 characters in length.
      • Region: The region of the OSS bucket. If you want to upload data to the OSS bucket from an Elastic Compute Service (ECS) instance over an internal network and then restore the data to the RDS instance over the internal network, make sure that the OSS bucket, the ECS instance, and the RDS instance reside in the same region. Example: China (Hangzhou)
  2. Upload backup files to the OSS bucket.

    After the full backup on the self-managed SQL Server instance is complete, you must use one of the following methods to upload the generated full backup file to the OSS bucket:

    Method 1: Use the ossbrowser tool (recommended)

    1. Download ossbrowser. For more information, see Install and log on to ossbrowser.
    2. In this example, a 64-bit Windows operating system is used. Decompress the downloaded oss-browser-win32-x64.zip package. Then, double-click oss-browser.exe to run the program.
    3. On the AK Login tab, configure the AccessKeyId and AccessKeySecret parameters, retain the default values for other parameters, and then click Login.
      Note An AccessKey pair is used to verify the identity of an Alibaba Cloud account and ensure data security. We recommend that you keep the AccessKey pair confidential. For more information about how to create and obtain an AccessKey pair, see Create an AccessKey pair.
    4. Click the name of the OSS bucket.
    5. Click the Upload icon, select the backup file that you want to upload, and then click Open to upload the backup file to the OSS bucket.

    Method 2: Use the OSS console

    Note If the size of the backup file is less than 5 GB, we recommend that you upload the backup file in the OSS console.
    1. Log on to the OSS console.
    2. In the left-side navigation pane, click Buckets. On the page that appears, click the name of the required bucket.
    3. On the Files page, click Upload.
    4. Drag the backup file to the Files to Upload section or click Select Files to select the backup file that you want to upload.
    5. In the lower part of the page, click Upload to upload the backup file to the OSS bucket.

    Method 3: Call the OSS API

    Note If the size of the backup file is larger than 5 GB, we recommend that you call the OSS API to upload the backup file to an OSS bucket by using multipart upload.

    In this example, a Java project is used to describe how to obtain access credentials from environment variables. You need to configure environment variables before you run the sample code. For more information about how to configure the access credentials, see Configure access credentials. For more information about sample code, see Multipart upload.

    import com.aliyun.oss.ClientException;
    import com.aliyun.oss.OSS;
    import com.aliyun.oss.common.auth.*;
    import com.aliyun.oss.OSSClientBuilder;
    import com.aliyun.oss.OSSException;
    import com.aliyun.oss.internal.Mimetypes;
    import com.aliyun.oss.model.*;
    import java.io.File;
    import java.io.FileInputStream;
    import java.io.InputStream;
    import java.util.ArrayList;
    import java.util.List;
    
    public class Demo {
    
        public static void main(String[] args) throws Exception {
            // In this example, the endpoint of the China (Hangzhou) region is used. Specify your actual endpoint. 
            String endpoint = "https://oss-cn-hangzhou.aliyuncs.com";
            // Obtain access credentials from environment variables. Before you run the sample code, make sure that the OSS_ACCESS_KEY_ID and OSS_ACCESS_KEY_SECRET environment variables are configured. 
            EnvironmentVariableCredentialsProvider credentialsProvider = CredentialsProviderFactory.newEnvironmentVariableCredentialsProvider();
            // Specify the name of the bucket. Example: examplebucket. 
            String bucketName = "examplebucket";
            // Specify the full path of the object. Example: exampledir/exampleobject.txt. Do not include the bucket name in the full path. 
            String objectName = "exampledir/exampleobject.txt";
            // Specify the path of the local file that you want to upload. 
            String filePath = "D:\\localpath\\examplefile.txt";
    
            // Create an OSSClient instance. 
            OSS ossClient = new OSSClientBuilder().build(endpoint, credentialsProvider);
            try {
                // Create an InitiateMultipartUploadRequest object. 
                InitiateMultipartUploadRequest request = new InitiateMultipartUploadRequest(bucketName, objectName);
    
                // The following code provides an example on how to specify the request headers when you initiate a multipart upload task: 
                ObjectMetadata metadata = new ObjectMetadata();
                // metadata.setHeader(OSSHeaders.OSS_STORAGE_CLASS, StorageClass.Standard.toString());
                // Specify the caching behavior of the web page for the object. 
                // metadata.setCacheControl("no-cache");
                // Specify the name of the downloaded object. 
                // metadata.setContentDisposition("attachment;filename=oss_MultipartUpload.txt");
                // Specify the encoding format for the content of the object. 
                // metadata.setContentEncoding(OSSConstants.DEFAULT_CHARSET_NAME);
                // Specify whether existing objects are overwritten by objects that have the same names when the multipart upload task is initiated. In this example, the x-oss-forbid-overwrite parameter is set to true. This value specifies that an existing object cannot be overwritten by the object that has the same name. 
                // metadata.setHeader("x-oss-forbid-overwrite", "true");
                // Specify the server-side encryption method that you want to use to encrypt each part of the object that you want to upload. 
                // metadata.setHeader(OSSHeaders.OSS_SERVER_SIDE_ENCRYPTION, ObjectMetadata.KMS_SERVER_SIDE_ENCRYPTION);
                // Specify the algorithm that you want to use to encrypt the object. If you do not configure this parameter, AES-256 is used to encrypt the object. 
                // metadata.setHeader(OSSHeaders.OSS_SERVER_SIDE_DATA_ENCRYPTION, ObjectMetadata.KMS_SERVER_SIDE_ENCRYPTION);
                // Specify the ID of the customer master key (CMK) that is managed by Key Management Service (KMS). 
                // metadata.setHeader(OSSHeaders.OSS_SERVER_SIDE_ENCRYPTION_KEY_ID, "9468da86-3509-4f8d-a61e-6eab1eac****");
                // Specify the storage class of the object. 
                // metadata.setHeader(OSSHeaders.OSS_STORAGE_CLASS, StorageClass.Standard);
                // Specify tags for the object. You can specify multiple tags for the object at a time. 
                // metadata.setHeader(OSSHeaders.OSS_TAGGING, "a:1");
                // request.setObjectMetadata(metadata);
    
                // Specify ContentType based on the object type. If you do not specify this parameter, the default value of the ContentType field is application/octet-stream.
                if (metadata.getContentType() == null) {
                    metadata.setContentType(Mimetypes.getInstance().getMimetype(new File(filePath), objectName));
                }
    
                // Initialize the multipart upload task. 
                InitiateMultipartUploadResult upresult = ossClient.initiateMultipartUpload(request);
                // Obtain the upload ID. 
                String uploadId = upresult.getUploadId();
                // Cancel the multipart upload task or list uploaded parts based on the upload ID. 
                // If you want to cancel a multipart upload task based on the upload ID, obtain the upload ID after you call the InitiateMultipartUpload operation to initiate the multipart upload task.  
                // If you want to list the uploaded parts in a multipart upload task based on the upload ID, obtain the upload ID after you call the InitiateMultipartUpload operation to initiate the multipart upload task but before you call the CompleteMultipartUpload operation to complete the multipart upload task. 
                // System.out.println(uploadId);
    
                // partETags is the set of PartETags. A PartETag consists of the part number and ETag of an uploaded part. 
                List<PartETag> partETags =  new ArrayList<PartETag>();
                // Specify the size of each part. The part size is used to calculate the number of parts of the object. Unit: bytes. 
                final long partSize = 1 * 1024 * 1024L; // Set the part size to 1 MB. 
    
                // Calculate the number of parts based on the size of the uploaded data. In the following code, a local file is used as an example to show how to use the File.length() method to obtain the size of the uploaded data. 
                final File sampleFile = new File(filePath);
                long fileLength = sampleFile.length();
                int partCount = (int) (fileLength / partSize);
                if (fileLength % partSize != 0) {
                    partCount++;
                }
                // Upload each part until all parts are uploaded. 
                for (int i = 0; i < partCount; i++) {
                    long startPos = i * partSize;
                    long curPartSize = (i + 1 == partCount) ? (fileLength - startPos) : partSize;
                    UploadPartRequest uploadPartRequest = new UploadPartRequest();
                    uploadPartRequest.setBucketName(bucketName);
                    uploadPartRequest.setKey(objectName);
                    uploadPartRequest.setUploadId(uploadId);
                    // Specify the input stream of the multipart upload task. 
                    // In the following code, a local file is used as an example to show how to create a FileInputStream and use the InputStream.skip() method to skip the specified data.
                    InputStream instream = new FileInputStream(sampleFile);
                    instream.skip(startPos);
                    uploadPartRequest.setInputStream(instream);
                    // Configure the size available for each part. Each part except the last part must be equal to or greater than 100 KB. 
                    uploadPartRequest.setPartSize(curPartSize);
                    // Specify part numbers. Each part has a part number that ranges from 1 to 10000. If the number that you specify does not fall within the range, OSS returns the InvalidArgument error code. 
                    uploadPartRequest.setPartNumber(i + 1);
                    // Parts are not uploaded in sequence. Parts can be uploaded from different OSS clients. OSS sorts the parts based on the part numbers, and then combines the parts to obtain a complete object. 
                    UploadPartResult uploadPartResult = ossClient.uploadPart(uploadPartRequest);
                    // When a part is uploaded, OSS returns a result that contains a PartETag. The PartETags are stored in partETags. 
                    partETags.add(uploadPartResult.getPartETag());
                }
    
    
                // Create a CompleteMultipartUploadRequest object. 
                // When you call the CompleteMultipartUpload operation, you must provide all valid partETags. After OSS receives the partETags, OSS verifies all parts one by one. After part verification is successful, OSS combines these parts into a complete object. 
                CompleteMultipartUploadRequest completeMultipartUploadRequest =
                        new CompleteMultipartUploadRequest(bucketName, objectName, uploadId, partETags);
    
                // The following code provides an example on how to configure the access control list (ACL) of the object when the multipart upload task is completed: 
                // completeMultipartUploadRequest.setObjectACL(CannedAccessControlList.Private);
                // Specify whether to list all parts that are uploaded by using the current upload ID. For OSS SDK for Java 3.14.0 and later, you can set partETags in CompleteMultipartUploadRequest to null only when you list all parts uploaded to the OSS server to combine the parts into a complete object. 
                // Map<String, String> headers = new HashMap<String, String>();
                // If you set x-oss-complete-all to yes in the request, OSS lists all parts that are uploaded by using the current upload ID, sorts the parts by part number, and then performs the CompleteMultipartUpload operation. 
                // If you set x-oss-complete-all to yes in the request, the request body cannot be specified. If you specify the request body, an error is reported. 
                // headers.put("x-oss-complete-all","yes");
                // completeMultipartUploadRequest.setHeaders(headers);
    
                // Complete the multipart upload task. 
                CompleteMultipartUploadResult completeMultipartUploadResult = ossClient.completeMultipartUpload(completeMultipartUploadRequest);
                System.out.println(completeMultipartUploadResult.getETag());
            } catch (OSSException oe) {
                System.out.println("Caught an OSSException, which means your request made it to OSS, "
                        + "but was rejected with an error response for some reason.");
                System.out.println("Error Message:" + oe.getErrorMessage());
                System.out.println("Error Code:" + oe.getErrorCode());
                System.out.println("Request ID:" + oe.getRequestId());
                System.out.println("Host ID:" + oe.getHostId());
            } catch (ClientException ce) {
                System.out.println("Caught an ClientException, which means the client encountered "
                        + "a serious internal problem while trying to communicate with OSS, "
                        + "such as not being able to access the network.");
                System.out.println("Error Message:" + ce.getMessage());
            } finally {
                if (ossClient != null) {
                    ossClient.shutdown();
                }
            }
        }
    }

Step 3: Create a cloud migration task

  1. Go to the Instances page. In the top navigation bar, select the region in which the RDS instance resides. Then, find the RDS instance and click the ID of the instance.
  2. In the left-side navigation pane, click Backup and Restoration.

  3. In the upper-right corner of the page, click Migrate OSS Backup Data to RDS.

  4. In the Import Guide Wizard, click Next twice to import data.

    Note

    If you use the OSS-based migration wizard for the first time, you must authorize the service account of ApsaraDB RDS to access the OSS bucket. In this case, you must click Authorize and complete the authorization. Otherwise, the OSS Bucket drop-down list in the Import Data step is empty.

  5. Configure the following parameters and click OK.

    • Database Name: The name of the destination database on your RDS instance. The destination database stores the data that is migrated from the source database on the self-managed SQL Server instance. The name of the destination database must be different from the name of the source database.

      Note

      The name of the destination database must meet the database naming requirements of SQL Server.

    • OSS Bucket: Select the OSS bucket that stores the backup file.

    • OSS File: Specify the backup file that you want to import. You can enter a prefix in the search box and click the search icon to search for the backup file by using fuzzy match. The name, size, and update time of each backup file whose name contains the prefix are displayed. Select the backup file that you want to migrate to the RDS instance.

    • Cloud Migration Method: Select Access Pending (Incremental Backup). Valid values:

      • Immediate Access (Full Backup): If you want to migrate only a full backup file, select this migration method. In this case, the following parameter settings take effect in the CreateMigrateTask operation: BackupMode = FULL and IsOnlineDB = True.

      • Access Pending (Incremental Backup): If you want to migrate a full backup file together with log or differential backup files, select this migration method. In this case, the following parameter settings take effect in the CreateMigrateTask operation: BackupMode = UPDF and IsOnlineDB = False.

Wait until the migration task is complete. You can click Refresh to view the latest status of the migration task.

Step 4: Import the log or differential backup file

After the full backup file of the source database on the self-managed SQL Server instance is imported into the destination database on your RDS instance, you must import the log or differential backup file.

  1. Go to the Instances page. In the top navigation bar, select the region in which the RDS instance resides. Then, find the RDS instance and click the ID of the instance.
  2. In the left-side navigation pane, click Backup and Restoration. On the page that appears, click the Cloud Migration Records of Backup Data tab.

  3. Find the destination database and click Upload Incremental Files in the Task Actions column. Select the log or differential backup file and click OK.

    Note
    • If you have multiple log or differential backup files, you must use the same method to upload the files one by one.

    • Make sure that the size of the last log or differential backup file does not exceed 500 MB. This minimizes the time that is required to complete the migration.

    • Before the last log or differential backup file is generated, you must stop data writes to the source database. This ensures data consistency between the source database and the destination database on your RDS instance.
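
    One way to stop data writes before the final backup is to disconnect all other sessions and then take the last log backup. The following T-SQL is a sketch that assumes a source database named TestDB; the database name, folder, and file name are placeholders.

      -- Disconnect other sessions so that no new writes occur.
      ALTER DATABASE [TestDB] SET SINGLE_USER WITH ROLLBACK IMMEDIATE;

      -- Perform the last log backup. Keep this file smaller than 500 MB.
      BACKUP LOG [TestDB]
          TO DISK = N'C:\BACKUP\TestDB_LOG_final.trn'
          WITH CHECKSUM;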

Step 5: Open the database

After you import all backup files into the destination database on your RDS instance, the destination database is in the In Recovery state if your RDS instance runs RDS High-availability Edition, or in the Restoring state if your RDS instance runs RDS Basic Edition. In these states, you cannot perform read or write operations on the destination database. You must open the destination database before you can read from or write to it.

  1. Go to the Instances page. In the top navigation bar, select the region in which the RDS instance resides. Then, find the RDS instance and click the ID of the instance.
  2. In the left-side navigation pane, click Backup and Restoration. On the page that appears, click the Cloud Migration Records of Backup Data tab.

  3. Find the destination database and click Open Database in the Task Actions column.

  4. Select a consistency check mode and click OK.

    Note

    ApsaraDB RDS provides the following consistency check modes:

    • Asynchronous DBCC: The DBCC CHECKDB statement is executed after the destination database is opened. This reduces the time that is required to open the destination database and minimizes the downtime of your application. If the destination database is large, a long period of time is required to execute the DBCC CHECKDB statement. If your application is sensitive to downtime but insensitive to the result of the DBCC CHECKDB statement, we recommend that you select this consistency check mode. In this case, the following parameter setting takes effect in the CreateMigrateTask operation: CheckDBMode = AsyncExecuteDBCheck.

    • Synchronous DBCC: The DBCC CHECKDB statement is executed at the same time when the destination database is opened. If you want to identify consistency errors between the source database and the destination database based on the result of the DBCC CHECKDB statement, we recommend that you select this consistency check mode. However, the time that is required to open the destination database increases. In this case, the following parameter setting takes effect in the CreateMigrateTask operation: CheckDBMode = SyncExecuteDBCheck.
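
    If you select Asynchronous DBCC and want to check the result yourself, you can run the consistency check manually after the destination database is opened. The following sketch assumes a destination database named TestDB.

      -- Run the consistency check on the opened destination database.
      DBCC CHECKDB (N'TestDB') WITH NO_INFOMSGS;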

Step 6: View details of the imported backup files

To view the details of the backup files that are imported by using a migration task, go to the Backup and Restoration page, click the Cloud Migration Records of Backup Data tab, find the required migration task, and then click View File Details in the Task Actions column.

Common errors

For more information about the common errors that may occur during the migration of full backup data, see Common errors.

During the migration of incremental backup data, you may encounter the following errors:

  • The destination database cannot be opened.

    • Error message: Failed to open database xxx.

    • Cause: Advanced features that are not supported by your RDS instance are enabled for the self-managed SQL Server instance. For example, if the self-managed SQL Server instance runs an Enterprise Edition of SQL Server with the data compression or partitioning feature enabled and your RDS instance runs a Web edition of SQL Server, this error is reported when you open the destination database on the RDS instance. A T-SQL sketch that shows how to list the edition-specific features that are in use is provided after this list.

    • Solution

      • Disable the advanced features for the self-managed SQL Server instance, back up data again, and then migrate the data by using OSS.

      • Purchase an RDS instance that runs the same SQL Server edition as the self-managed SQL Server instance. Then, migrate the data of the source database on the self-managed SQL Server instance to the new RDS instance.

  • The log sequence numbers (LSNs) in the backup chain are not consecutive.

    • Error message: The log in this backup set begins at LSN XXX, which is too recent to apply to the database. RESTORE LOG is terminating abnormally.

    • Cause: The log or differential backup file that you selected does not directly follow the backup file that was previously restored in the backup chain.

    • Solution: Select the log or differential backup file whose first LSN matches the last LSN of the backup file that was previously restored. A T-SQL sketch that shows how to inspect the LSNs of your backup files is provided after this list.

  • The DBCC CHECKDB statement cannot be executed in asynchronous mode.

    • Error message: asynchronously DBCC checkdb failed: CHECKDB found 0 allocation errors and 2 consistency errors in table 'XXX' (object ID XXX).

    • Cause: After data is restored to your RDS instance with the Asynchronous DBCC consistency check mode selected, ApsaraDB RDS executes the DBCC CHECKDB statement. If the destination database fails the consistency check, the source database already contains consistency errors.

    • Solution

      • Execute the following statement on the destination database:

        DBCC CHECKDB (DBName, REPAIR_ALLOW_DATA_LOSS)
        Note

        If you use this statement to fix the error, data may be lost. The database must be in single-user mode before the REPAIR_ALLOW_DATA_LOSS option can be used.

      • Execute the following statement on the source database to fix the error and then migrate data again:

        DBCC CHECKDB (DBName, REPAIR_ALLOW_DATA_LOSS)
  • The selected backup file is a full backup file.

    • Error message: Backup set (xxx) is a Database FULL backup, we only accept transaction log or differential backup.

    • Cause: After the data is restored to your RDS instance by using a full backup file, you can select only a log or differential backup file. If you select a full backup file again, this error is reported.

    • Solution: Select a log or differential backup file.

  • The number of specified source databases exceeds the upper limit.

    • Error message: The database (xxx) migration failed due to databases count limitation.

    • Cause: The number of databases on the RDS instance has reached the upper limit.

    • Solution: Migrate the data of the source databases to another RDS instance, or delete unnecessary databases from the current RDS instance.

  • The RAM user does not have the required permissions.

    • Issue: The parameters that are described in Step 3: Create a cloud migration task are correctly configured, but the OK button is dimmed.

    • Cause: You are using a RAM user that does not have the required permissions.

    • Solution: Make sure that the required permissions are granted to the RAM user based on the "Prerequisites" section of this topic.
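
Two of the preceding errors can be diagnosed on the self-managed SQL Server instance with a few queries. The following T-SQL is a sketch in which TestDB is a placeholder name: the first query lists the edition-specific features that are in use and that can prevent the destination database from being opened, and the second query shows the LSN range of each backup file so that you can select the file that continues the backup chain.

    -- List edition-specific features (for example, Compression or Partitioning)
    -- that are in use in the current database.
    USE [TestDB];
    SELECT feature_name FROM sys.dm_db_persisted_sku_features;

    -- Show the LSN range of each backup of the database. The first_lsn of the next
    -- file to restore must match the last_lsn of the previously restored file.
    SELECT backup_finish_date, type, first_lsn, last_lsn
    FROM msdb.dbo.backupset
    WHERE database_name = N'TestDB'
    ORDER BY backup_finish_date;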

Related operations

  • Create a migration task: Creates a data migration task.
  • Open the database to which backup data is migrated: Opens the database to which backup data is migrated on an ApsaraDB RDS for SQL Server instance.
  • Query backup data migration tasks: Queries the tasks that are created to migrate the backup data of an ApsaraDB RDS for SQL Server instance.
  • Query the backup file details of a backup data migration task: Queries the backup file details of a backup data migration task for an ApsaraDB RDS for SQL Server instance.