
ApsaraDB RDS:Migrate the full backup data of a self-managed SQL Server instance to an ApsaraDB RDS instance that runs SQL Server 2008 R2 with local disks

Last Updated:Jan 09, 2024

ApsaraDB RDS for SQL Server instances allow you to easily migrate data to the cloud. You only need to upload the full backup data of the source database on your self-managed SQL Server instance to an Object Storage Service (OSS) bucket, and then migrate the full backup data to the destination database on your ApsaraDB RDS for SQL Server instance in the ApsaraDB RDS console. ApsaraDB RDS for SQL Server is compatible with the native backup feature of Microsoft SQL Server. You can use this migration method in scenarios such as data backup, migration, and disaster recovery.

Prerequisites

  • The RDS instance runs SQL Server 2008 R2 with local disks.

    Note

    The RDS instances that run SQL Server 2008 R2 with cloud disks are discontinued.

  • A destination database is created on your RDS instance and has the same name as the source database on your self-managed SQL Server instance. For more information, see Create accounts and databases.

  • If you use a RAM user, make sure that the following requirements are met:

    • The AliyunOSSFullAccess and AliyunRDSFullAccess policies are attached to the RAM user. For more information about how to grant permissions to RAM users, see Use RAM to manage OSS permissions and Use RAM to manage ApsaraDB RDS permissions.

    • Your Alibaba Cloud account is used to authorize the service account of ApsaraDB RDS to access the OSS bucket.

    • A custom policy is manually created by using your Alibaba Cloud account and is attached to the RAM user. For more information about how to create a custom policy, see the Create a custom policy on the JSON tab section in Create custom policies.

      You must use the following content for the custom policy:

      {
          "Version": "1",
          "Statement": [
              {
                  "Action": [
                      "ram:GetRole"
                  ],
                  "Resource": "acs:ram:*:*:role/AliyunRDSImportRole",
                  "Effect": "Allow"
              }
          ]
      }

Usage notes

The migration method that is described in this topic is at the database level. You can migrate the backup data of only one database of a self-managed SQL Server instance to your RDS instance at a time. If you want to migrate the backup data of multiple or all databases in a self-managed SQL Server instance at a time, we recommend that you use an instance-level migration method. For more information, see Migrate data from a self-managed SQL Server instance to an ApsaraDB RDS for SQL Server instance.

Billing

If you use the method described in this topic to migrate data, you are charged only for the use of OSS buckets.


The following billing rules apply to each scenario:

  • Upload backup files to an OSS bucket: free of charge.

  • Store backup files in an OSS bucket: you are charged storage fees. For more information, visit the Pricing page of OSS.

  • Migrate backup files from an OSS bucket to your RDS instance: if you migrate the backup files over an internal network, no fees are generated. If you migrate the backup files over the Internet, you are charged for the outbound Internet traffic of the OSS bucket. For more information, visit the Pricing page of OSS.

Step 1: Back up the source database on the self-managed instance

  1. Start the Microsoft SQL Server Management Studio (SSMS) client.

  2. Log on to the source database.

  3. Execute the following statements to check the recovery model:

    USE master;
    GO
    SELECT name, CASE recovery_model
    WHEN 1 THEN 'FULL'
    WHEN 2 THEN 'BULK_LOGGED'
    WHEN 3 THEN 'SIMPLE' END model FROM sys.databases
    WHERE name NOT IN ('master','tempdb','model','msdb');
    GO
    • If the value of model is not FULL, perform Step 4.

    • If the value of model is FULL, perform Step 5.

  4. Execute the following statements to set the recovery model to FULL:

    ALTER DATABASE [dbname] SET RECOVERY FULL;
    GO
    ALTER DATABASE [dbname] SET AUTO_CLOSE OFF;
    GO
    Important

    After the recovery model is set to FULL, more logs are generated. Make sure that disk space is sufficient.

  5. Execute the following statements to back up the source database. In this example, the dbtest database is backed up to the backup.bak file.

    USE master;
    GO
    BACKUP DATABASE [dbtest] to disk ='d:\backup\backup.bak' WITH COMPRESSION,INIT;
    GO
  6. Execute the following statements to check the integrity of the backup file:

    USE master;
    GO
    RESTORE FILELISTONLY
        FROM DISK = N'D:\backup\backup.bak';
    Important
    • If a result set is returned, the backup file is valid.

    • If an error is returned, the backup file is invalid. In this case, you must back up the source database again.

  7. Optional. Execute the following statement to restore the recovery model of the database:

    Important

    If the original recovery model of your source database is FULL, skip this step.

    ALTER DATABASE [dbname] SET RECOVERY SIMPLE;
    GO

Step 2: Upload the backup file of the source database to an OSS bucket

Important

If an OSS bucket is created, check whether the bucket meets the following requirements:

  • The storage class of the OSS bucket is Standard. The storage class cannot be Infrequent Access (IA), Archive, Cold Archive, or Deep Cold Archive. For more information, see Overview.

  • Data encryption is not enabled for the OSS bucket. For more information, see Data encryption.

  1. Create an OSS bucket.

    1. Log on to the OSS console.

    2. In the left-side navigation pane, click Buckets. On the Buckets page, click Create Bucket.

    3. Configure the following parameters. Retain the default values for other parameters.

      Important
      • The created OSS bucket is used only for this data migration and is no longer needed after the migration is complete. You only need to configure the key parameters. To prevent data leaks and excessive costs, we recommend that you delete the OSS bucket at the earliest opportunity after the data migration is complete.

      • Do not enable data encryption when you create an OSS bucket. For more information, see Data encryption.

      • Bucket Name: the name of the OSS bucket. The name must be globally unique and cannot be modified after it is configured. Naming conventions:

        • The name can contain only lowercase letters, digits, and hyphens (-).

        • The name must start and end with a lowercase letter or a digit.

        • The name must be 3 to 63 characters in length.

        Example: migratetest.

      • Region: the region of the OSS bucket. If you want to upload data to the OSS bucket from an Elastic Compute Service (ECS) instance over an internal network and then restore the data to the RDS instance over the internal network, make sure that the OSS bucket, the ECS instance, and the RDS instance reside in the same region. Example: China (Hangzhou).

      • Storage Class: the storage class of the bucket. Select Standard. The cloud migration operations described in this topic cannot be performed in buckets of other storage classes. Example: Standard.

  2. Upload backup files to the OSS bucket.

    Note

    If the RDS instance and the OSS bucket reside in the same region, they can communicate with each other over an internal network. You can use the internal network to upload the backup data. The method is faster, and no fees are generated for Internet traffic. We recommend that you upload the backup file to an OSS bucket that is in the same region as the destination RDS instance.

    After the full backup on the self-managed SQL Server instance is complete, you must use one of the following methods to upload the generated full backup file to the OSS bucket:

    Method 1: Use the ossbrowser tool (recommended)

    1. Download ossbrowser. For more information, see Install and log on to ossbrowser.

    2. In this example, a 64-bit Windows operating system is used. Decompress the downloaded oss-browser-win32-x64.zip package, and then double-click oss-browser.exe to run the program.

    3. On the AK Login tab, configure the AccessKeyId and AccessKeySecret parameters, retain the default values for other parameters, and then click Login.

      Note

      An AccessKey pair is used to verify the identity of an Alibaba Cloud account and ensure data security. We recommend that you keep the AccessKey pair confidential. For more information about how to create and obtain an AccessKey pair, see Create an AccessKey pair.

    4. Click the name of the OSS bucket.

    5. Click the upload icon, select the backup file that you want to upload, and then click Open to upload the backup file to the OSS bucket.

    Method 2: Use the OSS console

    Note

    If the size of the backup file is less than 5 GB, we recommend that you upload the backup file in the OSS console.

    1. Log on to the OSS console.

    2. In the left-side navigation pane, click Buckets. On the Buckets page, click the name of the bucket to which you want to upload the backup file.

    3. On the Objects page, click Upload Object.

    4. Drag the backup file to the Files to Upload section or click Select Files to select the backup file that you want to upload.

    5. In the lower part of the page, click Upload Object to upload the backup file to the OSS bucket.

    Method 3: Call the OSS API

    Note

    If the size of the backup file is larger than 5 GB, we recommend that you call the OSS API to upload the backup file to an OSS bucket by using multipart upload.

    In this example, a Java project is used to describe how to obtain access credentials from environment variables. Before you run the sample code, make sure that the environment variables are configured. For more information about how to configure the access credentials, see Configure access credentials. For more information about sample code, see Multipart upload.

    import com.aliyun.oss.ClientException;
    import com.aliyun.oss.OSS;
    import com.aliyun.oss.common.auth.*;
    import com.aliyun.oss.OSSClientBuilder;
    import com.aliyun.oss.OSSException;
    import com.aliyun.oss.internal.Mimetypes;
    import com.aliyun.oss.model.*;
    import java.io.File;
    import java.io.FileInputStream;
    import java.io.InputStream;
    import java.util.ArrayList;
    import java.util.List;
    
    public class Demo {
    
        public static void main(String[] args) throws Exception {
            // In this example, the endpoint of the China (Hangzhou) region is used. Specify your actual endpoint. 
            String endpoint = "https://oss-cn-hangzhou.aliyuncs.com";
            // Obtain access credentials from environment variables. Before you run the sample code, make sure that the OSS_ACCESS_KEY_ID and OSS_ACCESS_KEY_SECRET environment variables are configured. 
            EnvironmentVariableCredentialsProvider credentialsProvider = CredentialsProviderFactory.newEnvironmentVariableCredentialsProvider();
            // Specify the name of the bucket. Example: examplebucket. 
            String bucketName = "examplebucket";
            // Specify the full path of the object. Example: exampledir/exampleobject.txt. Do not include the bucket name in the full path. 
            String objectName = "exampledir/exampleobject.txt";
            // Specify the path of the local file that you want to upload. 
            String filePath = "D:\\localpath\\examplefile.txt";
    
            // Create an OSSClient instance. 
            OSS ossClient = new OSSClientBuilder().build(endpoint, credentialsProvider);
            try {
                // Create an InitiateMultipartUploadRequest object. 
                InitiateMultipartUploadRequest request = new InitiateMultipartUploadRequest(bucketName, objectName);
    
                // The following code provides an example on how to specify the request headers when you initiate a multipart upload task: 
                 ObjectMetadata metadata = new ObjectMetadata();
                // metadata.setHeader(OSSHeaders.OSS_STORAGE_CLASS, StorageClass.Standard.toString());
                // Specify the caching behavior of the web page for the object. 
                // metadata.setCacheControl("no-cache");
                // Specify the name of the downloaded object. 
                // metadata.setContentDisposition("attachment;filename=oss_MultipartUpload.txt");
                // Specify the encoding format for the content of the object. 
                // metadata.setContentEncoding(OSSConstants.DEFAULT_CHARSET_NAME);
                // Specify whether existing objects are overwritten by objects that have the same names when the multipart upload task is initiated. In this example, the x-oss-forbid-overwrite parameter is set to true. This value specifies that an existing object cannot be overwritten by the object that has the same name. 
                // metadata.setHeader("x-oss-forbid-overwrite", "true");
                // Specify the server-side encryption method that you want to use to encrypt each part of the object that you want to upload. 
                // metadata.setHeader(OSSHeaders.OSS_SERVER_SIDE_ENCRYPTION, ObjectMetadata.KMS_SERVER_SIDE_ENCRYPTION);
                // Specify the algorithm that you want to use to encrypt the object. If you do not configure this parameter, AES-256 is used to encrypt the object. 
                // metadata.setHeader(OSSHeaders.OSS_SERVER_SIDE_DATA_ENCRYPTION, ObjectMetadata.KMS_SERVER_SIDE_ENCRYPTION);
                // Specify the ID of the customer master key (CMK) that is managed by Key Management Service (KMS). 
                // metadata.setHeader(OSSHeaders.OSS_SERVER_SIDE_ENCRYPTION_KEY_ID, "9468da86-3509-4f8d-a61e-6eab1eac****");
                // Specify the storage class of the object. 
                // metadata.setHeader(OSSHeaders.OSS_STORAGE_CLASS, StorageClass.Standard);
                // Specify tags for the object. You can specify multiple tags for the object at a time. 
                // metadata.setHeader(OSSHeaders.OSS_TAGGING, "a:1");
                // request.setObjectMetadata(metadata);
    
                // Specify ContentType based on the object type. If you do not specify this parameter, the default value of the ContentType field is application/octet-stream. 
                if (metadata.getContentType() == null) {
                    metadata.setContentType(Mimetypes.getInstance().getMimetype(new File(filePath), objectName));
                }
    
                // Initialize the multipart upload task. 
                InitiateMultipartUploadResult upresult = ossClient.initiateMultipartUpload(request);
                // Obtain the upload ID. 
                String uploadId = upresult.getUploadId();
                // Cancel the multipart upload task or list uploaded parts based on the upload ID. 
                // If you want to cancel a multipart upload task based on the upload ID, obtain the upload ID after you call the InitiateMultipartUpload operation to initiate the multipart upload task.  
                // If you want to list the uploaded parts in a multipart upload task based on the upload ID, obtain the upload ID after you call the InitiateMultipartUpload operation to initiate the multipart upload task but before you call the CompleteMultipartUpload operation to complete the multipart upload task. 
                // System.out.println(uploadId);
    
                // partETags is the set of PartETags. A PartETag consists of the part number and ETag of an uploaded part. 
                List<PartETag> partETags =  new ArrayList<PartETag>();
                // Specify the size of each part. The part size is used to calculate the number of parts of the object. Unit: bytes. 
                final long partSize = 1 * 1024 * 1024L; // Set the part size to 1 MB. 
    
                // Calculate the number of parts based on the size of the uploaded data. In the following code, a local file is used as an example to show how to use the File.length() method to obtain the size of the uploaded data. 
                final File sampleFile = new File(filePath);
                long fileLength = sampleFile.length();
                int partCount = (int) (fileLength / partSize);
                if (fileLength % partSize != 0) {
                    partCount++;
                }
                // Upload each part until all parts are uploaded. 
                for (int i = 0; i < partCount; i++) {
                    long startPos = i * partSize;
                    long curPartSize = (i + 1 == partCount) ? (fileLength - startPos) : partSize;
                    UploadPartRequest uploadPartRequest = new UploadPartRequest();
                    uploadPartRequest.setBucketName(bucketName);
                    uploadPartRequest.setKey(objectName);
                    uploadPartRequest.setUploadId(uploadId);
                    // Specify the input stream of the multipart upload task. 
                    // In the following code, a local file is used as an example to show how to create a FileInputStream and use the InputStream.skip() method to skip to the start position of the part. 
                    InputStream instream = new FileInputStream(sampleFile);
                    instream.skip(startPos);
                    uploadPartRequest.setInputStream(instream);
                    // Configure the size available for each part. Each part except the last part must be equal to or greater than 100 KB. 
                    uploadPartRequest.setPartSize(curPartSize);
                    // Specify part numbers. Each part has a part number that ranges from 1 to 10000. If the number that you specify does not fall within the range, OSS returns the InvalidArgument error code. 
                    uploadPartRequest.setPartNumber( i + 1);
                    // Parts are not uploaded in sequence. Parts can be uploaded from different OSS clients. OSS sorts the parts based on the part numbers, and then combines the parts to obtain a complete object. 
                    UploadPartResult uploadPartResult = ossClient.uploadPart(uploadPartRequest);
                    // When a part is uploaded, OSS returns a result that contains a PartETag. The PartETags are stored in partETags. 
                    partETags.add(uploadPartResult.getPartETag());
                }
    
    
                // Create a CompleteMultipartUploadRequest object. 
                // When you call the CompleteMultipartUpload operation, you must provide all valid partETags. After OSS receives the partETags, OSS verifies all parts one by one. After part verification is successful, OSS combines these parts into a complete object. 
                CompleteMultipartUploadRequest completeMultipartUploadRequest =
                        new CompleteMultipartUploadRequest(bucketName, objectName, uploadId, partETags);
    
                // The following code provides an example on how to configure the access control list (ACL) of the object when the multipart upload task is completed: 
                // completeMultipartUploadRequest.setObjectACL(CannedAccessControlList.Private);
                // Specify whether to list all parts that are uploaded by using the current upload ID. For OSS SDK for Java 3.14.0 and later, you can set partETags in CompleteMultipartUploadRequest to null only when you list all parts uploaded to the OSS server to combine the parts into a complete object. 
                // Map<String, String> headers = new HashMap<String, String>();
                // If you set x-oss-complete-all to yes in the request, OSS lists all parts that are uploaded by using the current upload ID, sorts the parts by part number, and then performs the CompleteMultipartUpload operation. 
                // If you set x-oss-complete-all to yes in the request, the request body cannot be specified. If you specify the request body, an error is reported. 
                // headers.put("x-oss-complete-all","yes");
                // completeMultipartUploadRequest.setHeaders(headers);
    
                // Complete the multipart upload task. 
                CompleteMultipartUploadResult completeMultipartUploadResult = ossClient.completeMultipartUpload(completeMultipartUploadRequest);
                System.out.println(completeMultipartUploadResult.getETag());
            } catch (OSSException oe) {
                System.out.println("Caught an OSSException, which means your request made it to OSS, "
                        + "but was rejected with an error response for some reason.");
                System.out.println("Error Message:" + oe.getErrorMessage());
                System.out.println("Error Code:" + oe.getErrorCode());
                System.out.println("Request ID:" + oe.getRequestId());
                System.out.println("Host ID:" + oe.getHostId());
            } catch (ClientException ce) {
                System.out.println("Caught an ClientException, which means the client encountered "
                        + "a serious internal problem while trying to communicate with OSS, "
                        + "such as not being able to access the network.");
                System.out.println("Error Message:" + ce.getMessage());
            } finally {
                if (ossClient != null) {
                    ossClient.shutdown();
                }
            }
        }
    }
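    The fixed 1 MB part size in the sample above, combined with the 1 to 10000 part-number range that OSS enforces, caps a single upload at roughly 10 GB. The following hypothetical helper (not part of the OSS SDK) sketches how you could scale the part size so that a larger backup file still fits within the part-number limit:

    ```java
    public class PartSizing {
        // OSS allows part numbers from 1 to 10000, so at most 10000 parts per upload.
        static final long MAX_PARTS = 10_000L;

        // Return a part size in bytes that keeps the part count within the limit.
        static long choosePartSize(long fileLength, long preferredPartSize) {
            // Smallest part size that still covers the file in MAX_PARTS parts (ceiling division).
            long minPartSize = (fileLength + MAX_PARTS - 1) / MAX_PARTS;
            return Math.max(preferredPartSize, minPartSize);
        }

        public static void main(String[] args) {
            long oneMB = 1024L * 1024L;
            // A 5 GB backup file fits in 1 MB parts (5120 parts), so the preferred size is kept.
            System.out.println(choosePartSize(5L * 1024 * oneMB, oneMB));  // 1048576
            // A 20 GB backup file needs larger parts to stay within 10000 parts.
            System.out.println(choosePartSize(20L * 1024 * oneMB, oneMB)); // 2147484
        }
    }
    ```

    If you adopt this approach, pass the computed value to uploadPartRequest.setPartSize() instead of the fixed partSize constant.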
  3. Configure the validity period and obtain the URL of the backup file.

    1. Log on to the OSS console.

    2. In the left-side navigation pane, click Buckets. Find the bucket to which you upload the backup file and click its name.

    3. In the left-side navigation pane, choose Object Management > Objects.

    4. Select the backup file.

    5. In the View Details panel, change the value of the Validity Period (Seconds) parameter to 28800, which is equivalent to 8 hours.

      Important

      The URL of the backup file is required when you migrate the file from the OSS bucket to your RDS instance. The data migration fails when the validity period of the URL expires.

    6. Click Copy Object URL.


    7. Modify the URL of the backup file that you obtained.

      By default, the URL contains the public endpoint of the file. If you migrate data over an internal network, you must replace the public endpoint with the internal endpoint in the URL.

      For example, if the URL of the backup file is http://rdstest.oss-cn-shanghai.aliyuncs.com/testmigraterds_20170906143807_FULL.bak?Expires=15141****&OSSAccessKeyId=TMP****, you need to replace the public endpoint oss-cn-shanghai.aliyuncs.com in the URL with the internal endpoint oss-cn-shanghai-internal.aliyuncs.com.

      Important

      The internal endpoint varies based on the network type and region. For more information, see Regions and endpoints.
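      The endpoint replacement described above is a plain string substitution on the host name of the URL. The following minimal Java sketch assumes the China (Shanghai) endpoint pair from the example; substitute the endpoint pair for your own region, and note that the signature query parameters of a real URL are left unchanged:

      ```java
      public class EndpointRewrite {
          public static void main(String[] args) {
              // Public-endpoint URL of the backup file (signature query parameters omitted).
              String publicUrl = "http://rdstest.oss-cn-shanghai.aliyuncs.com/"
                      + "testmigraterds_20170906143807_FULL.bak";
              // Swap the public endpoint for the region's internal endpoint;
              // everything else in the URL stays the same.
              String internalUrl = publicUrl.replace(
                      "oss-cn-shanghai.aliyuncs.com",
                      "oss-cn-shanghai-internal.aliyuncs.com");
              System.out.println(internalUrl);
          }
      }
      ```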

Step 3: Restore data to your RDS instance by using the backup file in the OSS bucket

  1. Go to the Instances page. In the top navigation bar, select the region in which the RDS instance resides. Then, find the RDS instance and click the ID of the instance.
  2. In the left-side navigation pane, click Databases.

  3. Find the destination database and click Migrate Backup Files from OSS in the Actions column.

  4. In the Import Guide wizard, read the on-screen instructions and click Next.

  5. Read the on-screen instructions and click Next.

  6. Enter the URL of the backup file in the OSS URL of the Backup File field and click OK.


    Note

    RDS instances that run SQL Server 2008 R2 with local disks support only one-time migration of full backup files.

Step 4: View the status of the migration task

In the left-side navigation pane, click Database Migration to Cloud. Then, find the migration task in the migration task list.

Important

If Task Status is Failed, you can click Task Description or View File Details in the Actions column to view the cause of the failure. Fix the problem and then repeat the preceding migration procedure.

Related operations

  • CreateMigrateTask: creates a data migration task.

  • CreateOnlineDatabaseTask: opens the database to which the backup data is migrated on an ApsaraDB RDS for SQL Server instance.

  • DescribeMigrateTasks: queries the tasks that are created to migrate the backup data of an ApsaraDB RDS for SQL Server instance.

  • DescribeOssDownloads: queries the backup file details of a backup data migration task for an ApsaraDB RDS for SQL Server instance.