MSSQL · best practices · migrating instance-level databases to RDS SQL Server - Alibaba Cloud Developer Community

Summary

So far, we have published eight monthly reports in the SQL Server backup and restore series: the three common database backup types, formulating backup policies, finding backup chains, the relationship between the three database recovery models and backups, using filegroups to implement a hot/cold data isolation backup scheme, monitoring backup and restore progress, a solution for automatic migration to Alibaba Cloud RDS SQL Server, and last month's topic of using the RDS SDK to migrate databases to Alibaba Cloud. This month's topic describes how to migrate all databases of a user-created SQL Server instance, whether offline or on an ECS instance, to Alibaba Cloud RDS SQL Server with one click.

Scenario

In last month's RDS SDK solution for migrating databases to Alibaba Cloud RDS SQL Server, we implemented automatic migration of a single user database from a user-created SQL Server instance, offline or on an ECS instance, to RDS SQL Server. In other words, it implemented database-level migration to the cloud: one database migrated at a time.

However, some users encounter a different scenario: dozens or even hundreds of offline SQL Server instances, each hosting dozens or hundreds of databases, with thousands of databases to migrate in total. For this scale, migrating at the database level is insufficient and inefficient. To handle large-scale migration to RDS SQL Server, simplify the migration procedure, and improve migration efficiency, we urgently need a solution for migrating instance-level databases to RDS SQL Server.

Implementation analysis

In last month's report on using the RDS SDK to migrate databases to Alibaba Cloud RDS SQL Server, we already implemented single-database migration to the cloud, so we can migrate an entire instance with the following solution:

**Upload the full backup files of all databases on your offline instance to a folder in OSS.

**Traverse all database backup files in the folder on OSS.

**Generate a migration task for each backup file.
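The three steps above can be sketched as follows. This is a minimal, hypothetical outline: `create_migrate_task` is a placeholder for the real OPENAPI call shown in the full script later in this article, and only the traversal/filter logic is illustrated.

```python
# -*- coding: utf-8 -*-
# A minimal sketch of the three-step solution (illustration only;
# `create_migrate_task` is a placeholder, not a real SDK call).

def list_backup_keys(object_keys, directory):
    """Step 2: traverse an OSS folder listing and keep only backup files."""
    backup_exts = ('.bak', '.trn', '.log', '.diff')
    return [key for key in object_keys
            if key.startswith(directory) and key != directory
            and key.lower().endswith(backup_exts)]

def create_migrate_task(file_key):
    """Step 3: one migration task per backup file (placeholder)."""
    return {'file_key': file_key, 'status': 'submitted'}

if __name__ == '__main__':
    # Step 1 (uploading full backups to OSS) is assumed done; mock listing here.
    keys = ['Migration/OPENAPIDemo/TestMigration_FULL_20180518153544.bak',
            'Migration/OPENAPIDemo/readme.txt']
    for key in list_backup_keys(keys, 'Migration/OPENAPIDemo/'):
        print(create_migrate_task(key)['file_key'])
```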

Input parameters

Based on the above analysis, our implementation requires the following six input parameters, described in the following table:

access_key_id		:   Alibaba Cloud user access key ID
access_key_secret	:   Alibaba Cloud user access key secret
rds_instance_id	:   RDS SQL Server instance ID
oss_endpoint		:   OSS endpoint address
oss_bucket 		:   OSS bucket name
directory		    :   Folder path of the user database backup files on OSS; pass "/" for the root directory

Specific implementation

Preparations

For more information, see the preparations section of last month's monthly report, MSSQL · best practices · using the RDS SDK to migrate databases to Alibaba Cloud RDS SQL Server.

Code implementation

In this article, we use the Python RDS SDK to migrate databases to RDS SQL Server. Of course, you can also use other language versions, such as C# and Java. The detailed code is as follows:

#!/usr/bin/python

# -*- coding: utf-8 -*-

"""***************************************************************************************
# Script name	: RDSSQLCreateMigrateTasksBatchly.py
# Author		: jianming.wjm@alibaba-inc.com
# Create Date   : 2018-05-17 19:27
# Language 		: Python 2.7.10
# Run platform  : Mac OS X 10.12.6

# Purpose		: 
					This script batch-migrates users' offline SQL Server databases to Alibaba Cloud RDS SQL Server.
					Users' FULL backup files must already be located in their own OSS Bucket folder.
					The script migrates all offline databases backed up under the OSS Bucket folder to RDS SQL Server.
					It does so by calling the Alibaba Cloud RDS OPENAPI.

# Limitation	:
	RDS Edition : Supports the RDS editions listed below
					'2008r2',
					'2012','2012_web','2012_std', '2012_ent', '2012_std_ha', '2012_ent_ha',
   					'2014_web','2014_std', '2014_ent', '2014_std_ha', '2014_ent_ha',
   					'2016_web','2016_std', '2016_ent', '2016_std_ha', '2016_ent_ha'

# Preparation	:
				  1. python 2.7.x installing (I'm using 2.7.10)
				  2. pip install aliyun-python-sdk-rds
				  3. pip install oss2

# Usage 		:
	Help 		: python RDSSQLCreateMigrateTasksBatchly.py -h
	Example 	: 
					python ~/Downloads/RDSSQLCreateMigrateTasksBatchly.py -k <access_key_id> -s <access_key_secret> -i <rds_instance_id> -e <oss_endpoint> -b <oss_bucket> -d <directory>
					
					variables description
					access_key_id		:	Alibaba Cloud user access key ID, e.g.: LTAIKeRvKPRwkaU3
					access_key_secret	:	Alibaba Cloud user access key secret, e.g.: BbZ7xhrertQ0dfgMqfAZPByhnp4G2k
					rds_instance_id		:   RDS SQL instance ID, e.g.: rm-2zesz4564ud8s7123
					oss_endpoint		:   OSS endpoint address, e.g.: oss-cn-beijing.aliyuncs.com
					oss_bucket 			:   OSS Bucket name, e.g.: atp-test-on-ecs
					directory 			:   Sub folder name under the OSS Bucket, e.g.: Migration/OPENAPIDemo


# Output 		: There are two sections of output: one shows the input variables, the other the migration requests and responses.
*********************Input variables*************************************

************************************************************************

*********************Migration requests**********************************

************************************************************************


# Modify Author : jianming.wjm@alibaba-inc.com
# Modify Date   : 2018-05-19 21:43
# Function:
#**************************************************************************************
"""

import json
import os
import sys, getopt
import re
import oss2
import time

from aliyunsdkcore.client import AcsClient
from aliyunsdkrds.request.v20140815 import DescribeMigrateTasksForSQLServerRequest
from aliyunsdkrds.request.v20140815 import CreateMigrateTaskRequest
from aliyunsdkrds.request.v20140815 import DescribeDBInstanceAttributeRequest


def main(argv):
	access_key_id =  access_key_secret =  rds_instance_id =  oss_endpoint =  oss_bucket =  directory = ''

	# usage help
	try:
		opts, args = getopt.getopt(argv,"hk:s:i:e:b:d:",["access_key_id=", "access_key_secret=", "rds_instance_id=", "oss_endpoint=", "oss_bucket=", "directory="])
	except getopt.GetoptError:
		print ('%s -k <access_key_id> -s <access_key_secret> -i <rds_instance_id> -e <oss_endpoint> -b <oss_bucket> -d <directory>' % (sys.argv[0]))
		sys.exit(2)

	for opt, arg in opts:
		if opt == '-h':
			print ('%s -k <access_key_id> -s <access_key_secret> -i <rds_instance_id> -e <oss_endpoint> -b <oss_bucket> -d <directory>' % (sys.argv[0]))
			sys.exit()
		elif opt in ("-k", "-K", "--access_key_id"):
			access_key_id = arg
		elif opt in ("-s", "-S", "--access_key_secret"):
			access_key_secret = arg
		elif opt in ("-i", "-I", "--rds_instance_id"):
			rds_instance_id = arg
		elif opt in ("-e", "-E", "--oss_endpoint"):
			oss_endpoint = arg
		elif opt in ("-b", "-B", "--oss_bucket"):
			oss_bucket = arg
		elif opt in ("-d", "-D", "--directory"):
			if arg.endswith("/"):
				directory = arg
			else:
				directory = str("%s/" % arg)

	# show the input parameters
   	print ("\n*********************Input variables*************************************\n" \
   		"access_key_id = %s\naccess_key_secret = %s\nrds_instance_id = %s\noss_endpoint = %s\noss_bucket = %s\ndirectory = %s\n" \
   		"************************************************************************"
   		% (access_key_id, access_key_secret, rds_instance_id, oss_endpoint, oss_bucket, directory))


   	# check RDS & OSS region to make sure they are located in the same region.
   	success, rds_details = rds_instnace_details(access_key_id, access_key_secret, rds_instance_id)
   	if not success:
   		print ("%s" % rds_details)
   		sys.exit()

   	rds_db_version, rds_engine, rds_region = rds_details["EngineVersion"], rds_details["Engine"], rds_details["RegionId"]

   	success, oss_details = oss_bucket_details(access_key_id, access_key_secret, oss_endpoint, oss_bucket)
   	if not success:
   		print ("%s" % oss_details)
   		sys.exit()

   	oss_region = oss_details.location
   	# support db version checking.


   	if rds_engine != 'SQLServer' \
   		or rds_db_version not in [	'2008r2', '2012','2012_web','2012_std', '2012_ent', '2012_std_ha', '2012_ent_ha',
   									'2014_web','2014_std', '2014_ent', '2014_std_ha', '2014_ent_ha',
   									'2016_web','2016_std', '2016_ent', '2016_std_ha', '2016_ent_ha']:
   		print("Unsupported RDS engine or engine version; this script only supports the RDS SQL Server engine.")
   		sys.exit()

   	# RDS & OSS Bucket are not under the same region.
   	if not oss_region.endswith(rds_region):
   		print("RDS & OSS Bucket are not located in the same region.")
   		sys.exit()

   	# RDS & OSS Bucket are in the same region.
   	print ("\n*********************Migration requests**********************************")
   	full_migrate(access_key_id, access_key_secret, rds_instance_id, oss_endpoint, oss_bucket, directory, rds_db_version)
   	print ("************************************************************************")


def rds_instnace_details(access_key_id, access_key_secret, rds_instance_id):
	request = DescribeDBInstanceAttributeRequest.DescribeDBInstanceAttributeRequest()
	request.set_DBInstanceId(rds_instance_id)
	success, response = _send_request(access_key_id, access_key_secret, request)

	if success:
		if response["Items"]["DBInstanceAttribute"]:
			# print response["Items"]["DBInstanceAttribute"][0]["EngineVersion"]
			# print response["Items"]["DBInstanceAttribute"][0]["RegionId"]
			return True, response["Items"]["DBInstanceAttribute"][0]
		else:
			return False, "Couldn't find the specified RDS instance [%s]." % rds_instance_id
	
	
	return False, response

def full_migrate(access_key_id, access_key_secret, rds_instance_id, oss_endpoint, oss_bucket, directory, rds_db_version):
	"""
	This supports migration of full backup files.
	"""

	# get all backup objects under sub_folder
	key_parts_list, do = oss_list_objects(access_key_id, access_key_secret, oss_endpoint, oss_bucket, directory), 0

	# foreach object
	for key_parts in key_parts_list:
		print ("\n--%s. [%s] migrate to your RDS: [%s] and the database name will be: [%s]." % (do, key_parts.file_key, rds_instance_id, key_parts.db_name))
		do += 1

		# print ("%s" % key_parts.sign_url)

		request = CreateMigrateTaskRequest.CreateMigrateTaskRequest()
		request.set_DBInstanceId(rds_instance_id)
		request.set_DBName(key_parts.db_name)
		request.set_BackupMode("FULL")
		request.set_IsOnlineDB(True)
		if rds_db_version == '2008r2':
			request.set_DBName(key_parts.db_name.lower())
			request.set_OSSUrls(key_parts.sign_url)
		else:
			request.set_OSSUrls("")
			request.set_OssObjectPositions("%s:%s:%s" % (oss_endpoint, oss_bucket, key_parts.file_key)) # OSSEndpoint:OSSBucket:OSSFileKey
			request.set_CheckDBMode("SyncExecuteDBCheck")

		success, response = _send_request(access_key_id, access_key_secret, request)

		if success:
			print response

			print ("--I'm sleeping for 2 seconds....")
			time.sleep(2)
		else:
			print ("OPENAPI Response Error !!!!! : %s" % response)

"""
send request to OPENAPI
and get the response details
"""
def _send_request(access_key_id, access_key_secret, request, region='cn-hangzhou'):
    request.set_accept_format('json')
    try:
    	# clt = AcsClient(access_key_id, access_key_secret, 'cn-hangzhou')
    	clt = AcsClient(access_key_id, access_key_secret, region)
        response_str = clt.do_action_with_exception(request)
        response_detail = json.loads(response_str)
        return True, response_detail
    except Exception as e:
        return False, e


class oss_key_parts(object):
    """
    if the whole object file key looks like below:
    Migration/OPENAPIDemo/TestMigration_FULL_20180518153544.bak
	
	then

	: param str file_key: OSS object file key.
	: param str sub_folder: OSS sub folder name, such as Migration/OPENAPIDemo
	: param str file_name: OSS object file name, such as TestMigration_FULL_20180518153544.bak
    : param str db_name: database name, such as 'TestMigration'
    : param str bak_type: backup type , such as 'FULL'
    : param str date: backup date time, such as '20180518153544'
    : param str ext: backup file extension, such as 'bak'

    """
    def __init__(self):
    	self.file_key = ''
    	self.sub_folder = ''
    	self.file_name = ''
        self.db_name = ''
        self.bak_type = ''
        self.date = ''
        self.ext = ''
        self.sign_url = ''

"""
parse the OSS file key string into oss key parts
and return oss_key_parts object.
"""
def oss_key_parse(file_key):

    key_parts = oss_key_parts()
    try:
        if file_key.find('/') >= 0:
            file_key_parts = file_key.rsplit('/', 1)
        else:
            file_key_parts = ['/', file_key]

        key_parts.file_key = file_key
        key_parts.sub_folder = file_key_parts[0]
        key_parts.file_name = file_key_parts[1]

        key_list = file_key_parts[1].rsplit('_', 2)

        key_parts.db_name, \
        key_parts.bak_type, \
        key_parts.date, \
        key_parts.ext = key_list[0], \
                        key_list[1], \
                        key_list[2].rsplit('.', 1)[0], \
                        key_list[2].rsplit('.', 1)[1]
    except Exception as e:
        pass

    return key_parts

def oss_list_objects(access_key_id, access_key_secret, oss_endpoint, oss_bucket, directory):
	"""
	list all OSS objects under the specified sub folder
	and return the objects list.
	"""
	bucket = oss2.Bucket(oss2.Auth(access_key_id, access_key_secret), oss_endpoint, oss_bucket)
	key_parts_list = []

	# OSS Bucket Root
	if directory == '/':
		for object_info in oss2.ObjectIterator(bucket, delimiter='/'):
			if not object_info.is_prefix():
				key_part = oss_key_parse(object_info.key)

				# get object sign_url
				key_part.sign_url = bucket.sign_url('GET', object_info.key, 24 * 3600)

				if key_part.ext in['bak', 'trn', 'log', 'diff']:
					key_parts_list.append(key_part)
				else:
					print ("Warning!!!!!, [%s] is not backup file, filtered." % (key_part.file_key))
	else:
		for i, object_info in enumerate(oss2.ObjectIterator(bucket, prefix=directory)):
			# only process backup files, not folders
			if not object_info.is_prefix():
				if object_info.key.startswith(directory) and object_info.key != directory:
					# print ("%s" % (object_info.key))
					key_part = oss_key_parse(object_info.key)
					
					# get object sign_url
					key_part.sign_url = bucket.sign_url('GET', object_info.key, 24 * 3600)

					if key_part.ext in['bak', 'trn', 'log', 'diff']:
						key_parts_list.append(key_part)
					else:
						print ("Warning!!!!!, [%s] is not a valid backup file, filtered." % (key_part.file_key))

	if not key_parts_list:
		print("There is no backup file in OSS Bucket [%s] under the [%s] folder; please check." % (oss_bucket, directory))

	return key_parts_list


def oss_bucket_details(access_key_id, access_key_secret, oss_endpoint, oss_bucket):
	try:
		bucket = oss2.Bucket(oss2.Auth(access_key_id, access_key_secret), oss_endpoint, oss_bucket)
		bucket_info = bucket.get_bucket_info()
		# print ("bucket name:%s, region: %s" % (bucket_info.name, bucket_info.location))
		return True, bucket_info
	except Exception as e:
		return False, e

if __name__ == '__main__':
    main(sys.argv[1:])

Of course, you can also download the Python script.
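Note that the script derives the target database name from the backup-file naming convention `<db_name>_<bak_type>_<timestamp>.<ext>`. The following standalone sketch mirrors the `oss_key_parse` logic above, for illustration only:

```python
# -*- coding: utf-8 -*-
# A small sketch of the backup-file naming convention the script relies on:
# <db_name>_<bak_type>_<timestamp>.<ext>  (mirrors oss_key_parse above).

def parse_backup_key(file_key):
    """Split an OSS object key into its naming-convention parts."""
    sub_folder, _, file_name = file_key.rpartition('/')
    # rsplit keeps underscores inside the database name intact
    db_name, bak_type, rest = file_name.rsplit('_', 2)
    date, _, ext = rest.rpartition('.')
    return {'sub_folder': sub_folder, 'file_name': file_name,
            'db_name': db_name, 'bak_type': bak_type,
            'date': date, 'ext': ext}

if __name__ == '__main__':
    parts = parse_backup_key('Migration/OPENAPIDemo/TestMigration_FULL_20180518153544.bak')
    print('%s %s %s' % (parts['db_name'], parts['bak_type'], parts['ext']))
    # prints: TestMigration FULL bak
```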

Usage

We briefly describe how to use one-click instance-level migration to the cloud from the following three aspects:

**View Help

**Example

**Output result

View Help

You only need to use the -h option to view how to use the script:

python ~/Downloads/RDSSQLCreateMigrateTasksBatchly.py -h
~/Downloads/RDSSQLCreateMigrateTasksBatchly.py -k <access_key_id> -s <access_key_secret> -i <rds_instance_id> -e <oss_endpoint> -b <oss_bucket> -d <directory>

Example

The following is a specific example:

python ~/Downloads/RDSSQLCreateMigrateTasksBatchly.py -k LTAIQazXKPRwwErT -s BMkIUhroubQOLpOMqfA09IKlqp4G2k -i rm-2zesz5774ud8s71i5 -e oss-cn-beijing.aliyuncs.com -b atp-test-on-ecs -d Migration/OPENAPIDemo

Output result

The output after running the preceding command is divided into two parts:

**Part 1, input parameters: displays all your input parameters so you can check for input errors.

**Part 2, migration requests: tells you which backup file will be migrated to which database on which instance. The following is example output:

*********************Input variables*************************************
access_key_id = LTAIQazXKPRwwErT
access_key_secret = BMkIUhroubQOLpOMqfA09IKlqp4G2k
rds_instance_id = rm-2zesz5774ud8s71i5
oss_endpoint = oss-cn-beijing.aliyuncs.com
oss_bucket = atp-test-on-ecs
directory = Migration/OPENAPIDemo/
************************************************************************

*********************Migration requests**********************************

--0. [Migration/OPENAPIDemo/TestCollation_FULL_20180523225534.bak] migrate to your RDS: [rm-2zesz5774ud8s71i5] and the database name will be: [TestCollation].
{u'DBInstanceId': u'rm-2zesz5774ud8s71i5', u'BackupMode': u'FULL', u'MigrateTaskId': u'106121', u'RequestId': u'67E0DD7F-7219-4F67-AAE7-B27273921303', u'TaskId': u'68244691', u'DBName': u'TestCollation'}
--I'm sleeping for 2 seconds....

--1. [Migration/OPENAPIDemo/TestMigration_FULL_20180523225534.bak] migrate to your RDS: [rm-2zesz5774ud8s71i5] and the database name will be: [TestMigration].
{u'DBInstanceId': u'rm-2zesz5774ud8s71i5', u'BackupMode': u'FULL', u'MigrateTaskId': u'106122', u'RequestId': u'0916CD14-861B-4BF7-A68A-409E3996B0D3', u'TaskId': u'68244695', u'DBName': u'TestMigration'}
--I'm sleeping for 2 seconds....

--2. [Migration/OPENAPIDemo/testdb_FULL_20180523225534.bak] migrate to your RDS: [rm-2zesz5774ud8s71i5] and the database name will be: [testdb].
{u'DBInstanceId': u'rm-2zesz5774ud8s71i5', u'BackupMode': u'FULL', u'MigrateTaskId': u'106123', u'RequestId': u'5835B154-2EE3-4059-BFC4-6F798CDCE546', u'TaskId': u'68244699', u'DBName': u'testdb'}
--I'm sleeping for 2 seconds....
************************************************************************

Final summary

This article shows how to easily migrate instance-level SQL Server databases, whether offline or on ECS instances, to the cloud with one click. This greatly improves migration efficiency, simplifies operations, and improves the overall migration experience.

Reference

Migrate SQL Server instance-level databases to the cloud
