This topic describes the parameters that you need to configure when you use Data Online Migration to migrate data.
Alibaba Cloud OSS
This section describes the parameters that you need to configure when you use Data Online Migration to migrate data to or from an Alibaba Cloud Object Storage Service (OSS) bucket.
Domain Name (OSS Endpoint in the console of the previous version)
The following list describes the valid formats of OSS endpoints.
1. http://oss-cn-hangzhou.aliyuncs.com: used to upload or download data over the Internet by using HTTP.
2. http://oss-cn-hangzhou-internal.aliyuncs.com: used to upload or download data over an internal network by using HTTP.
3. https://oss-cn-hangzhou.aliyuncs.com: used to upload or download data over the Internet by using HTTPS.
4. https://oss-cn-hangzhou-internal.aliyuncs.com: used to upload or download data over an internal network by using HTTPS.
If you migrate data from a third-party cloud service to OSS, OSS uploads data over an internal network. For more information about OSS endpoints, see Regions and endpoints.
Bucket (OSS Bucket in the console of the previous version)
The name of the OSS bucket. The prefix and the suffix of the bucket name cannot contain invalid characters such as spaces, line feeds, and tab characters.
Prefix (OSS Prefix in the console of the previous version)
The directory in which the files that you want to migrate are located. The value cannot be a file name, cannot start with a forward slash (/), and must end with a forward slash (/). For example: docs/.
AccessKeyId (Access Key Id in the console of the previous version) and SecretAccessKey (Access Key Secret in the console of the previous version)
The AccessKey pair that is used to access the OSS bucket. An AccessKey pair consists of an AccessKey ID and an AccessKey secret. You can use the AccessKey pair of an Alibaba Cloud account or a Resource Access Management (RAM) user. You cannot use the AccessKey pair of a temporary user. For more information, see Resource access management. You cannot grant permissions on bills to RAM users.
If you want to use a RAM user to migrate data from OSS to another Alibaba Cloud service, you can use the following policy to grant the required permissions to the RAM user.
If you want to use a RAM user to migrate data to OSS, you can use the following policy to grant the required permissions to the RAM user.
You can create a RAM user and use the AccessKey pair of the RAM user for a data migration job. After the migration job is complete, you can delete the AccessKey pair.
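The exact policy documents referenced above are not reproduced here. As an illustration only, a minimal RAM policy that grants read access to a source OSS bucket might look like the following sketch. The bucket name mybucket is a placeholder, and the action and resource names should be verified against the RAM policy documentation before use.

```json
{
  "Version": "1",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "oss:GetObject",
        "oss:ListObjects"
      ],
      "Resource": [
        "acs:oss:*:*:mybucket",
        "acs:oss:*:*:mybucket/*"
      ]
    }
  ]
}
```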
Data Size and File Count (available only in the console of the previous version)
The amount of data and the number of files that you want to migrate. You can log on to the OSS console to view the amount of data and the number of files in the specified bucket or directory.
AWS S3
This section describes the parameters that you need to configure when you use Data Online Migration to migrate data from an AWS Simple Storage Service (AWS S3) bucket.
If access to the AWS S3 bucket from which you want to migrate data is restricted, you cannot use Data Online Migration to migrate data from the bucket. Before you migrate data, disable the access restriction.
Domain Name (Endpoint in the console of the previous version)
Data Online Migration supports AWS S3 buckets that are deployed in the following AWS S3 regions. The specified AWS S3 endpoint can only be in one of the following formats. HTTPS and HTTP are supported.
Region name, region code: endpoints
- US East (Ohio), us-east-2: http://s3.us-east-2.amazonaws.com (recommended) or http://s3-us-east-2.amazonaws.com
- US East (N. Virginia), us-east-1: http://s3.us-east-1.amazonaws.com (recommended) or http://s3-us-east-1.amazonaws.com
- US West (N. California), us-west-1: http://s3.us-west-1.amazonaws.com (recommended) or http://s3-us-west-1.amazonaws.com
- US West (Oregon), us-west-2: http://s3.us-west-2.amazonaws.com (recommended) or http://s3-us-west-2.amazonaws.com
- Canada (Central), ca-central-1: http://s3.ca-central-1.amazonaws.com (recommended) or http://s3-ca-central-1.amazonaws.com
- Asia Pacific (Seoul), ap-northeast-2: http://s3.ap-northeast-2.amazonaws.com (recommended) or http://s3-ap-northeast-2.amazonaws.com
- Asia Pacific (Osaka-Local), ap-northeast-3: http://s3.ap-northeast-3.amazonaws.com (recommended) or http://s3-ap-northeast-3.amazonaws.com
- Asia Pacific (Singapore), ap-southeast-1: http://s3.ap-southeast-1.amazonaws.com (recommended) or http://s3-ap-southeast-1.amazonaws.com
- Asia Pacific (Sydney), ap-southeast-2: http://s3.ap-southeast-2.amazonaws.com (recommended) or http://s3-ap-southeast-2.amazonaws.com
- Asia Pacific (Tokyo), ap-northeast-1: http://s3.ap-northeast-1.amazonaws.com (recommended) or http://s3-ap-northeast-1.amazonaws.com
- China (Beijing), cn-north-1: http://s3.cn-north-1.amazonaws.com.cn
- China (Ningxia), cn-northwest-1: http://s3.cn-northwest-1.amazonaws.com.cn
- Europe (Frankfurt), eu-central-1: http://s3.eu-central-1.amazonaws.com (recommended) or http://s3-eu-central-1.amazonaws.com
- Europe (Ireland), eu-west-1: http://s3.eu-west-1.amazonaws.com (recommended) or http://s3-eu-west-1.amazonaws.com
- Europe (London), eu-west-2: http://s3.eu-west-2.amazonaws.com (recommended) or http://s3-eu-west-2.amazonaws.com
- Europe (Paris), eu-west-3: http://s3.eu-west-3.amazonaws.com (recommended) or http://s3-eu-west-3.amazonaws.com
- South America (São Paulo), sa-east-1: http://s3.sa-east-1.amazonaws.com (recommended) or http://s3-sa-east-1.amazonaws.com
Bucket (Bucket in the console of the previous version)
The name of the AWS S3 bucket. The prefix and the suffix of the bucket name cannot contain invalid characters such as spaces, line feeds, and tab characters.
Prefix (OSS Prefix in the console of the previous version)
The directory in which the files that you want to migrate are located. The value cannot be a file name, cannot start with a forward slash (/), and must end with a forward slash (/). For example: docs/.
AccessKeyId (Access Key Id in the console of the previous version) and SecretAccessKey (Secret Access Key in the console of the previous version)
The access key that is used to access the AWS S3 bucket. An access key consists of an access key ID and a secret access key. To create an access key, perform the following operations: Log on to the AWS Identity and Access Management (IAM) console, create an IAM user, attach the AmazonS3ReadOnlyAccess policy to grant the required permissions to the IAM user, and then create an access key for the IAM user. After the migration job is complete, you can delete the IAM user.
Data Size and File Count
The amount of data and the number of files that you want to migrate. You must set the Data Size and File Count parameters based on the actual amount of data and the actual number of files that you want to migrate.
Azure Blob
This section describes the parameters that you need to configure when you use Data Online Migration to migrate data from a Microsoft Azure Blob Storage (Azure Blob) container.
Storage Account, Key, and Connection Strings
The information about the storage account, access key, and connection strings. To obtain the information, perform the following operations: Log on to the Microsoft Azure console. In the left-side navigation pane, click Storage accounts and click the storage account that you want to use. In the Settings section, click Access keys to view the information about the storage account, access key, and connection strings.
Container
The name of the Azure Blob container.
OSS Prefix
The directory in which the files that you want to migrate are located. The value cannot be a file name, cannot start with a forward slash (/), and must end with a forward slash (/). For example: docs/.
Data Size and File Count
The amount of data and the number of files that you want to migrate. The following example describes how to obtain the values. Log on to the Microsoft Azure console. In the left-side navigation pane, choose . On the container properties page, click Calculate Size to view the storage usage of the container.
Tencent Cloud COS
Data Online Migration supports Tencent Cloud Object Storage (COS) V5 and uses COS API V5 to access the COS service. For more information, see COS API V5 Documentation. This section describes the parameters that you need to configure when you use Data Online Migration to migrate data from a Tencent COS bucket.
Domain Name (Region in the console of the previous version)
The abbreviation of the name of the region in which the bucket is deployed. The following lists describe the regions that are supported by COS. For more information, see Regions and Access Endpoints.
Regions in the Chinese mainland
Public cloud regions:
- Beijing Zone 1: ap-beijing-1
- Beijing: ap-beijing
- Nanjing: ap-nanjing
- Shanghai: ap-shanghai
- Guangzhou: ap-guangzhou
- Chengdu: ap-chengdu
- Chongqing: ap-chongqing
Financial cloud regions:
- South China (Shenzhen Finance): ap-shenzhen-fsi
- East China (Shanghai Finance): ap-shanghai-fsi
- North China (Beijing Finance): ap-beijing-fsi
Hong Kong (China) and regions outside China
Asia Pacific public cloud regions:
- Hong Kong (China): ap-hongkong
- Singapore: ap-singapore
- Mumbai: ap-mumbai
- Jakarta: ap-jakarta
- Seoul: ap-seoul
- Bangkok: ap-bangkok
- Tokyo: ap-tokyo
North America:
- Silicon Valley (US West): na-siliconvalley
- Virginia (US East): na-ashburn
- Toronto: na-toronto
South America:
- São Paulo: sa-saopaulo
Europe:
- Frankfurt: eu-frankfurt
- Moscow: eu-moscow
Bucket (Bucket in the console of the previous version)
The name of the COS bucket. A COS bucket name is in the "Custom bucket name-APPID" format, where APPID is the identifier of your Tencent Cloud account. For example, if the full bucket name is example-1234567890, set the Bucket parameter to example.
Prefix (Prefix in the console of the previous version)
The directory in which the files that you want to migrate are located. The value of this parameter must start and end with a forward slash (/). For example: /docs/.
APPID (available only in the console of the previous version)
The APPID of your Tencent Cloud account. To view the APPID, log on to the COS console and go to the Account Info page.
AccessKeyId (Secret Id in the console of the previous version) and SecretAccessKey (Secret Key in the console of the previous version)
The API key that is used to access the COS bucket. An API key consists of a SecretId and a SecretKey. To view or create an API key, log on to the COS console and choose . On the API Key Management page, you can view or create an API key. We recommend that you create an API key for the data migration job and delete the API key after the migration job is complete.
Data Size and File Count (available only in the console of the previous version)
The amount of data and the number of files that you want to migrate. To view the amount of data and the number of files in the specified bucket, log on to the COS console. In the left-side navigation pane, choose . On the page that appears, you can view the amount of data and the number of objects in the COS bucket.
Qiniu Cloud KODO
This section describes the parameters that you need to configure when you use Data Online Migration to migrate data from a Qiniu Cloud Object Storage (KODO) bucket.
Domain Name (Endpoint in the console of the previous version)
The endpoint of the KODO bucket. To view the endpoint, perform the following operations: Log on to the Qiniu Cloud console. In the Object Storage console, find and click the ID of the bucket from which you want to migrate data. On the domain name management tab, you can bind and view domain names.
The endpoint of a bucket is in the http://<Domain name> format. The following examples show the valid types of endpoints:
http://oy4jki81y.bkt.clouddn.com
http://78rets.com1.z0.glb.clouddn.com
http://cartoon.u.qiniudn.com
Note: Only the preceding three types of endpoints are supported.
Bucket (Bucket in the console of the previous version)
The name of the KODO bucket.
Prefix (OSS Prefix in the console of the previous version)
The directory in which the files that you want to migrate are located. The value cannot be a file name, cannot start with a forward slash (/), and must end with a forward slash (/). For example: docs/.
Access Key and Secret Key
The Access Key and the Secret Key that are used to access the KODO bucket. To view the Access Key and the Secret Key, log on to the Qiniu Cloud console and choose .
Note: You must specify a valid Access Key and a valid Secret Key.
Data Size and File Count (available only in the console of the previous version)
The amount of data and the number of files that you want to migrate. To view the amount of data and the number of files, log on to the Qiniu Cloud console, navigate to the Object Storage page, and then click the name of the bucket in which the data that you want to migrate is stored. On the page that appears, click the Content Management tab. On the Content Management tab, view the number of files and the amount of data stored in the bucket.
Baidu Cloud BOS
This section describes the parameters that you need to configure when you use Data Online Migration to migrate data from a Baidu Object Storage (BOS) bucket.
Endpoint
The endpoint of the BOS bucket. You can use Data Online Migration to migrate data from a BOS bucket that is deployed in the North China - Beijing, South China - Guangzhou, or East China - Suzhou region. The following endpoints are valid. Each endpoint supports both HTTP and HTTPS.
- North China - Beijing (bj.bcebos.com): http://bj.bcebos.com
- South China - Guangzhou (gz.bcebos.com): http://gz.bcebos.com
- East China - Suzhou (su.bcebos.com): http://su.bcebos.com
To view the region in which your bucket resides, perform the following operations: Log on to the Baidu Cloud console and choose Cloud Services > BOS. In the bucket list under Storage Management, click the name of the bucket in which the files that you want to migrate are stored. On the page that appears, you can view the region where the bucket is deployed.
Prefix
The directory in which the files that you want to migrate are located. The value cannot be a file name, cannot start with a forward slash (/), and must end with a forward slash (/). For example: docs/.
Access Key ID and Secret Access Key
The Access Key ID and the Secret Access Key that are used to access the BOS bucket. To view the Access Key ID and the Secret Access Key of your Baidu Cloud account, log on to the BOS console and choose .
Data Size and File Count
The amount of data and the number of files that you want to migrate. Log on to the Baidu Cloud console and choose Cloud Services > BOS. In the bucket list under Storage Management, click the name of the bucket in which the files that you want to migrate are stored. On the page that appears, view the amount of data and the number of files in the bucket. You must set the Data Size and File Count parameters based on the actual amount of data and the actual number of files that you want to migrate.
HTTP/HTTPS
Console of the new version
This section describes the parameters that you need to configure when you use Data Online Migration to migrate data from an HTTP or HTTPS source in the console of the new version.
Inventory Path
The inventory consists of two types of files: one manifest.json file and one or more example.csv.gz files. An example.csv.gz file is a compressed CSV list file. The size of a single example.csv.gz file cannot exceed 25 MB. The manifest.json file is a JSON file that describes the CSV list files and their column configuration. You can upload these files to an OSS or AWS S3 bucket.
Create a CSV list file.
Create a CSV list file on your on-premises machine. A list file can contain up to eight columns separated by commas (,). Each line represents one file to be migrated, and lines are separated by line feeds (\n). The following tables describe the columns.
Important: The Key and Url columns are required. The other columns are optional.
Required columns
Column
Required
Description
Limit
Url
Yes
The download URL of the file to be migrated. Data Online Migration uses HTTP GET requests to download files from the HTTP or HTTPS URLs and uses HTTP HEAD requests to obtain the metadata of the files.
Note: Make sure that the URL can be accessed by using commands such as curl -L --head "$url" and curl -L --get "$url".
The values of the Url and Key columns must be encoded. Otherwise, the migration may fail due to the special characters contained in the values.
Before you encode the value of the Url column, make sure that the URL can be accessed by using CLI tools such as curl. Then, perform URL encoding.
Before you encode the value of the Key column, make sure that you can obtain the required object name in OSS after the migration. Then, perform URL encoding.
Important: After you encode the values of the Url and Key columns, make sure that the following requirements are met. Otherwise, the migration may fail, or the source files are not migrated to the specified destination path.
A plus sign (+) in the original string is encoded as %2B.
A percent sign (%) in the original string is encoded as %25.
A comma (,) in the original string is encoded as %2C.
For example, if the original string is a+b%c,d.file, the encoded string is a%2Bb%25c%2Cd.file.
Key
Yes
The name of the file to be migrated. After a file is migrated, the name of the object that corresponds to the file consists of a prefix and the file name.
The following code provides an example on how to encode the values of the Url and Key columns in Python:
# -*- coding: utf-8 -*-
import sys

if sys.version_info.major == 3:
    from urllib.parse import quote_plus
else:
    from urllib import quote_plus

raw_urls = [
    # Format: ($url, $key)
    # url: These URLs can be accessed normally by using the Linux 'curl' or 'wget' commands.
    # key: These keys are the object names that you expect in OSS.
    ("http://www.example1.com/path/ab.file?t=aef87", "ab.file"),
    ("http://www.example2.com/path/a+b.file", "a+b.file"),
    ("http://www.example3.com/path/a%b.file", "a%b.file"),
    ("http://www.example4.com/path/a,b.file", "a,b.file"),
    ("http://www.example5.com/path/a b.file", "a b.file"),
    ("http://www.example6.com/path/a and b.file", "a and b.file"),
    ("http://www.example7.com/path/a%E4%B8%8Eb.file", "a%E4%B8%8Eb.file"),
    ("http://www.example8.com/path/a\\b.file", "a\\b.file")
]

for item in raw_urls:
    url, key = item[0], item[1]
    enc_url = quote_plus(url)
    enc_key = quote_plus(key)
    # The enc_url and enc_key variables hold the encoded values. You can use them to build CSV files.
    print("(%s, %s) -> (%s, %s)" % (url, key, enc_url, enc_key))
All columns
Column
Required
Description
Key
Yes
The name of the file to be migrated. After a file is migrated, the name of the object that corresponds to the file consists of a prefix and the file name.
Url
Yes
The download URL of the file to be migrated. Data Online Migration uses HTTP GET requests to download files from the HTTP or HTTPS URLs and uses HTTP HEAD requests to obtain the metadata of the files.
Size
No
The size of the file to be migrated.
StorageClass
No
The storage class of the file in the source bucket.
LastModifiedDate
No
The time when the file to be migrated was last modified.
ETag
No
The entity tag (ETag) of the file to be migrated.
HashAlg
No
The hash algorithm of the file to be migrated.
HashValue
No
The hash value of the file to be migrated.
Note: The order of the columns can vary across CSV files. You only need to make sure that the order of the columns in a CSV file matches the order specified in the fileSchema field of the manifest.json file.
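The encoding and column rules above can be sketched in a few lines of Python. This is an illustrative example only: the URLs and object names are hypothetical, and a fileSchema of "Url, Key" is assumed.

```python
# Sketch: build CSV list rows whose column order matches an assumed
# fileSchema of "Url, Key". All URLs and object names are hypothetical.
from urllib.parse import quote_plus

rows = [
    ("http://www.example1.com/path/ab.file", "docs/ab.file"),
    ("http://www.example2.com/path/a+b.file", "docs/a+b.file"),
]

lines = []
for url, key in rows:
    # Encode both columns so that '+', '%', and ',' survive the CSV format.
    lines.append("%s,%s" % (quote_plus(url), quote_plus(key)))

csv_content = "\n".join(lines)
print(csv_content)
```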
Compress one or more CSV list files.
Compress the CSV file into a CSV GZ file. The following examples show how to compress one or more CSV files:
Compress a CSV file
In this example, a file named file1 resides in the dir directory. Run the following command to compress the file:
gzip -r dir
Note: If you run the preceding gzip command to compress a file, the source file is not retained. To retain both the compressed file and the source file, run the gzip -c source_file > source_file.gz command.
The file1.gz file is generated.
Compress multiple CSV files
In this example, the file1, file2, and file3 files reside in the dir directory. Run the following command to compress the files:
gzip -r dir
Note: The gzip command does not package the directory. It only compresses each file in the directory separately.
The file1.gz, file2.gz, and file3.gz files are generated.
Create a manifest.json file.
You can use a manifest.json file to configure multiple CSV files. The following information shows the content of a manifest.json file:
fileFormat: the format of the list file. Example: CSV.
fileSchema: the columns in the CSV file. Pay attention to the order of columns.
files:
key: the location of the CSV file in the source bucket.
mD5checksum: the MD5 value of the CSV file. The value is a hexadecimal MD5 string, which is not case-sensitive. Example: 91A76757B25C8BE78BC321DEEBA6A5AD. If you do not specify this parameter, the CSV file is not verified.
size: the size of the CSV file.
The following sample code provides an example:
{
  "fileFormat": "CSV",
  "fileSchema": "Url, Key, Bucket, Size, StorageClass, LastModifiedDate, ETag, HashAlg, HashValue",
  "files": [
    {
      "key": "dir/example1.csv.gz",
      "mD5checksum": "",
      "size": 0
    },
    {
      "key": "dir/example2.csv.gz",
      "mD5checksum": "",
      "size": 0
    }
  ]
}
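If you want to fill in the mD5checksum and size fields instead of leaving them empty, the values can be computed as in the following sketch. The file name example1.csv.gz is a placeholder, and the sample file is created only for demonstration.

```python
# Sketch: compute the "size" and hexadecimal "mD5checksum" values for a
# manifest.json "files" entry. The file name is a placeholder.
import gzip
import hashlib
import os

path = "example1.csv.gz"

# Create a tiny sample compressed CSV list file for demonstration.
with gzip.open(path, "wb") as f:
    f.write(b"url1,key1\n")

# The checksum is the hexadecimal MD5 of the compressed file itself.
with open(path, "rb") as f:
    digest = hashlib.md5(f.read()).hexdigest().upper()

entry = {
    "key": "dir/" + path,
    "mD5checksum": digest,  # not case-sensitive; leave "" to skip verification
    "size": os.path.getsize(path),
}
print(entry)
```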
Upload the list files that you create to OSS or Amazon S3.
Upload the list files to OSS. For more information, see Simple upload.
Note: After the list files are uploaded to OSS, Data Online Migration downloads the list files and migrates the files based on the specified URLs.
When you create a migration task, specify the bucket in which the list files are stored. You must specify the path of the manifest.json file in the "Directory in which the list files reside/manifest.json" format. Example: dir/manifest.json.
Upload the list files to Amazon S3.
Note: After the list files are uploaded to Amazon S3, Data Online Migration downloads the list files and migrates the files based on the specified URLs.
When you create a migration task, specify the bucket in which the list files are stored. You must specify the path of the manifest.json file in the "Directory in which the list files reside/manifest.json" format. Example: dir/manifest.json.
Inventory Domain Name
The OSS endpoint. The following list describes the valid formats of OSS endpoints.
1. http://oss-cn-hangzhou.aliyuncs.com: used to upload or download data over the Internet by using HTTP.
2. http://oss-cn-hangzhou-internal.aliyuncs.com: used to upload or download data over an internal network by using HTTP.
3. https://oss-cn-hangzhou.aliyuncs.com: used to upload or download data over the Internet by using HTTPS.
4. https://oss-cn-hangzhou-internal.aliyuncs.com: used to upload or download data over an internal network by using HTTPS.
For more information about OSS endpoints, see Regions and endpoints.
For information about AWS S3 endpoints, see the AWS S3 section of this topic.
Inventory AccessKeyId and Inventory SecretAccessKey
The AccessKey pair that is used to download the listing file.
When you upload the listing file to OSS, enter the AccessKey ID and AccessKey secret of your Alibaba Cloud account or a RAM user. If you want to use the credentials of a RAM user, you must grant the RAM user the permissions to call the GetObject operation.
When you upload the listing file to AWS S3, enter the AccessKey pair that is used to access the AWS S3 bucket. Delete the key pair after the migration job is complete.
Console of the previous version
This section describes the parameters that you need to configure when you use Data Online Migration to migrate data from an HTTP or HTTPS source in the console of the previous version.
File Path
The path of the listing file that contains the URLs of the files that you want to migrate. Before you create a migration job, you must create a local file that contains the URLs of the files that you want to migrate as a listing file. The content of listing files consists of two columns.
The left-side column contains the HTTP or HTTPS URLs of the files that you want to migrate. Special characters in the URLs must be URL-encoded. Data Online Migration uses the GET method to download files and the HEAD method to obtain the metadata of the files from the specified HTTP or HTTPS URLs.
The right-side column contains the paths of the files that you want to migrate in the Folder name/File name format. After the specified files are migrated, the values in this column are used as the names of the OSS objects that correspond to these files. Separate the values in the two columns with tab characters (\t).
Each line specifies a file. Separate lines with line feeds (\n). The following examples show the format:
http://127.0.0.1/docs/my.doc	docs/my.doc
http://127.0.0.1/pics/my.jpg	pics/my.jpg
http://127.0.0.1/exes/my.exe	exes/my.exe
After the listing file is created, upload it to OSS. Then, enter the OSS path of the listing file in the File Path field. Data Online Migration downloads the listing file and migrates files based on the URLs that are specified in the listing file. The OSS path of the listing file is in the oss://{Bucket}/{Name of the listing file} format. Example: oss://mybucket/httplist.txt.
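A listing file in the two-column format above can be generated with a short script. This is a hedged sketch: the URLs and object names are hypothetical, and only special characters such as spaces are percent-encoded, so that plain URLs like those in the example stay unchanged.

```python
# Sketch: build a listing file for the console of the previous version.
# Each line holds a download URL and a destination object name, separated
# by a tab (\t). The URLs and names below are hypothetical.
from urllib.parse import quote

files = [
    ("http://127.0.0.1/docs/my file.doc", "docs/my file.doc"),
    ("http://127.0.0.1/pics/my.jpg", "pics/my.jpg"),
]

lines = []
for url, key in files:
    # Percent-encode special characters (here, the space) while keeping
    # URL delimiters such as ':' and '/' intact.
    lines.append("%s\t%s" % (quote(url, safe=":/?=&"), quote(key, safe="/")))

listing = "\n".join(lines)
print(listing)
```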
List Access Endpoint
The OSS endpoint. The following list describes the valid formats of OSS endpoints.
1. http://oss-cn-hangzhou.aliyuncs.com: used to upload or download data over the Internet by using HTTP.
2. http://oss-cn-hangzhou-internal.aliyuncs.com: used to upload or download data over an internal network by using HTTP.
3. https://oss-cn-hangzhou.aliyuncs.com: used to upload or download data over the Internet by using HTTPS.
4. https://oss-cn-hangzhou-internal.aliyuncs.com: used to upload or download data over an internal network by using HTTPS.
For more information about OSS endpoints, see Regions and endpoints.
List Access AK and List Access SK
The AccessKey pair that is used to download the listing file. You can use the AccessKey pair of your Alibaba Cloud account or a RAM user. If you want to use the credentials of a RAM user, you must grant the RAM user the permissions to call the GetObject operation.
Data Size and File Count
The amount of data and the number of files that you want to migrate. You must set the Data Size and File Count parameters based on the actual amount of data and the actual number of files that you want to migrate.
USS
This section describes the parameters that you need to configure when you use Data Online Migration to migrate data from UPYUN Storage Service (USS).
Domain Address
The endpoint of USS. You can use one of the following endpoints to call the RESTful API of USS.
Note: If you do not know the network type of your USS service, use http://v0.api.upyun.com.
- Intelligent Routing (recommended): http://v0.api.upyun.com
- China Telecom: http://v1.api.upyun.com
- China Unicom or China Netcom: http://v2.api.upyun.com
- China Mobile (Tietong): http://v3.api.upyun.com
Service Name
The name of your USS service. To view the service name, log on to the UPYUN console and navigate to the UPYUN Storage Service page.
Migration Folder
The directory in which the files that you want to migrate are located. The value cannot start with a forward slash (/) and must end with a forward slash (/). For example, the value can be docs/.
Operator Name and Operator Secret
The name and password of the operator. To view operators, perform the following operations: Log on to the UPYUN console, click your account name in the upper-right corner of the page, and then select Account Management from the drop-down list. On the page that appears, you can view existing operators or add an operator. You can add an operator and specify a password for the data migration job. To allow the operator to migrate data, you must grant the operator read permissions.
On the UPYUN Storage Service page, click the name of the USS service in which the files that you want to migrate are stored, choose , and then grant the operator the required permissions.
Data Size and File Count
The amount of data and the number of files that you want to migrate. To view the information, perform the following operations: Log on to the UPYUN console and choose . On the page that appears, view the amount of data and the number of files. You must set the Data Size and File Count parameters based on the actual amount of data and the actual number of files that you want to migrate.
Kingsoft Cloud KS3
This section describes the parameters that you need to configure when you use Data Online Migration to migrate data from a Kingsoft Standard Storage Service (KS3) bucket.
Endpoint
For more information about the mapping between regions and endpoints, see Regions and endpoints.
Note: To view the region in which a bucket is deployed, log on to the KS3 console and choose .
Bucket
The name of the KS3 bucket.
Prefix
The directory in which the files that you want to migrate are located. The value cannot be a file name, cannot start with a forward slash (/), and must end with a forward slash (/). For example: docs/.
Access Key ID and Secret Key
The Access Key ID and the Secret Access Key that are used to access the KS3 bucket. To create an Access Key ID and a Secret Access Key, log on to the KS3 console and choose . On the page that appears, create an IAM user and attach the KS3FullAccess policy to grant the IAM user the required permissions. On the IAM user details page, create an Access Key ID and a Secret Access Key. You can delete the IAM user after the migration job is complete.
Data Size and File Count
The amount of data and the number of files that you want to migrate. To view the information, perform the following operations: Log on to the KS3 console and choose . On the page that appears, view the amount of data and the number of files that are stored in the bucket. You must set the Data Size and File Count parameters based on the actual amount of data and the actual number of files that you want to migrate.
Huawei Cloud OBS
This section describes the parameters that you need to configure when you use Data Online Migration to migrate data from a Huawei Object Storage Service (OBS) bucket.
Endpoint
The following list describes the regions and the corresponding OBS endpoints. Each endpoint supports both HTTPS and HTTP. For more information, see Regions and endpoints.
- CN North-Beijing4 (cn-north-4): obs.cn-north-4.myhuaweicloud.com
- CN North-Beijing1 (cn-north-1): obs.cn-north-1.myhuaweicloud.com
- CN East-Shanghai2 (cn-east-2): obs.cn-east-2.myhuaweicloud.com
- CN East-Shanghai1 (cn-east-3): obs.cn-east-3.myhuaweicloud.com
- CN South-Guangzhou (cn-south-1): obs.cn-south-1.myhuaweicloud.com
- CN Southwest-Guiyang1 (cn-southwest-2): obs.cn-southwest-2.myhuaweicloud.com
- AP-Bangkok (ap-southeast-2): obs.ap-southeast-2.myhuaweicloud.com
- AP-Hong Kong (ap-southeast-1): obs.ap-southeast-1.myhuaweicloud.com
- AP-Singapore (ap-southeast-3): obs.ap-southeast-3.myhuaweicloud.com
- AF-Johannesburg (af-south-1): obs.af-south-1.myhuaweicloud.com
Note: To view the endpoint of the bucket in which the files that you want to migrate are stored, log on to the OBS console. On the Object Storage page, click the name of the bucket. On the Overview page, view the endpoint in the Basic Information section.
Bucket
The name of the OBS bucket.
Prefix
The directory in which the files that you want to migrate are located. The value cannot be a file name. The value cannot start with a forward slash (/) and must end with a forward slash (/). For example, the value can be docs/.
Access Key ID and Secret Access Key
The Access Key ID and Secret Access Key of your Huawei Cloud account. To obtain an access key, perform the following operations: Log on to the Huawei Cloud console, move the pointer over your account name in the upper-right corner, and then select My Credentials from the drop-down list. In the left-side navigation pane, click Access Keys. Then, click Create Access Key to create an access key.
Data Size and File Count
The amount of data and the number of files that you want to migrate. To view the information, perform the following operations: Log on to the Huawei Cloud console, click the icon in the upper-left corner of the page, choose , and then click the name of the bucket in which the files that you want to migrate are stored. In the left-side navigation pane, click Overview. In the Basic Statistics section, view the storage usage and the number of objects in the bucket.
UCloud UFile
This section describes the parameters that you need to configure when you use Data Online Migration to migrate data from a UCloud UFile bucket.
Region
The region in which the UCloud UFile bucket that stores the files to be migrated resides. The following table lists the supported regions. For more information, see Regions and zones.
Region | Abbreviation
China (Beijing 1) | cn-bj1
China (Beijing 2) | cn-bj2
China (Hong Kong) | hk
China (Guangzhou) | cn-gd
China (Shanghai 2) | cn-sh2
US (Los Angeles) | us-ca
Singapore | sg
Indonesia (Jakarta) | idn-jakarta
Nigeria (Lagos) | afr-nigeria
Brazil (São Paulo) | bra-saopaulo
UAE (Dubai) | uae-dubai
Vietnam (Ho Chi Minh City) | vn-sng
China (Taipei) | tw-tp
US (Washington) | us-ws
Germany (Frankfurt) | ge-fra
Bucket
The name of the UCloud UFile bucket.
Prefix
The directory in which the files that you want to migrate are located. The value cannot be a file name. The value cannot start with a forward slash (/) and must end with a forward slash (/). For example, the value can be docs/.
Public Key and Private Key
The API public key and the API private key that are used to access the UCloud UFile bucket. You can log on to the UCloud console and navigate to the API key management page to view the API public key and the API private key.
Data Size and File Count
The amount of data and the number of files that you want to migrate. To view the information, perform the following operations: Log on to the UCloud console and navigate to the UFile page. On this page, click Bucket and then click the name of the bucket in which the files that you want to migrate are stored. On the Overview tab, view the storage usage of the bucket. You must set the Data Size and File Count parameters based on the actual amount of data and the actual number of files that you want to migrate.
GCP
This section describes the parameters that you need to configure when you use Data Online Migration to migrate data from a Google Cloud Platform (GCP) bucket.
Bucket
The name of the GCP bucket.
Prefix
The directory in which the files that you want to migrate are located. The value cannot be a file name. The value cannot start with a forward slash (/) and must end with a forward slash (/). For example, the value can be docs/.
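The Prefix rules above (no leading forward slash, trailing forward slash required) can be expressed as a small check. The following Python function is an illustrative sketch, not part of Data Online Migration; it does not detect whether the value names a file rather than a directory:

```python
# Sketch: validate a Prefix value against the stated rules --
# the value must not start with a forward slash (/) and must
# end with one.
def is_valid_prefix(prefix: str) -> bool:
    return not prefix.startswith("/") and prefix.endswith("/")

print(is_valid_prefix("docs/"))   # True
print(is_valid_prefix("/docs/"))  # False
print(is_valid_prefix("docs"))    # False
```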
Key File
Log on to the GCP console.
In the left-side navigation pane, click Service Accounts.
On the Service Accounts page, find the service account. In the Actions column, move the pointer over the icon and select .
In the Create private key for <Username> dialog box, select JSON as the key type, and click CREATE. The system automatically downloads the JSON file of the key.
In the Create Data Address panel, click Click to Upload JSON File next to the Key File parameter to upload the downloaded JSON file.
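Before uploading the key file, it can help to sanity-check that the downloaded JSON is a service-account key. The following Python sketch checks for the standard field names of a GCP service-account JSON key; the `check_key_file` helper and the sample string are illustrative, not part of Data Online Migration:

```python
import json

# Sketch: sanity-check a downloaded GCP service-account key file.
# The field names follow the standard service-account JSON layout.
REQUIRED_FIELDS = {"type", "project_id", "private_key", "client_email"}

def check_key_file(raw: str) -> bool:
    """Return True if raw parses as JSON and looks like a service-account key."""
    try:
        data = json.loads(raw)
    except ValueError:
        return False
    return data.get("type") == "service_account" and REQUIRED_FIELDS <= data.keys()

sample = '{"type": "service_account", "project_id": "p", "private_key": "k", "client_email": "e"}'
print(check_key_file(sample))  # True
print(check_key_file("{}"))    # False
```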
Data Size and File Count
The amount of data and the number of files that you want to migrate. To ensure that the migration is efficient, you must set the Data Size and File Count parameters based on the actual amount of data and the actual number of files that you want to migrate. If you set the Prefix parameter to a directory, enter the actual amount of data and the actual number of files in the specified directory.