Data migration between Alibaba Cloud Object Storage Service (OSS) buckets means copying data from one OSS bucket to another. You can use the migration feature to efficiently transfer and manage data between different OSS buckets in scenarios such as data backup, data migration, and disaster recovery. This topic describes the usage notes of, limits on, and procedure for data migration between OSS buckets.
Usage notes
When you migrate data by using Data Online Migration, take note of the following items:
Data Online Migration accesses the source data by using the public interfaces provided by the storage service provider of the source data. The access behavior depends on the interface implementation of the storage service provider.
When Data Online Migration is used for migration, it consumes resources at the source and destination data addresses. This may affect your business. To ensure business continuity, we recommend that you enable throttling for your migration tasks or run them during off-peak hours after careful assessment.
Before a migration task starts, Data Online Migration checks the files at the source data address and the destination data address. If a file at the source data address and a file at the destination data address have the same name, and the File Overwrite Method parameter of the migration task is set to Yes, the file at the destination data address is overwritten during migration. If the two files contain different information and the file at the destination data address needs to be retained, we recommend that you change the name of one file or back up the file at the destination data address.
The LastModified property of a source file is retained after the file is migrated to the destination bucket. If a lifecycle rule is configured for the destination bucket and takes effect, a migrated file whose last modification time falls within the time range specified by the lifecycle rule may be deleted or converted to another storage class by the rule.
Limits
If the static website hosting feature is enabled at the source data address, the scan for data migration may list directory entries that do not exist as objects. For example, if you upload the myapp/resource/1.jpg file and enable the static website hosting feature, the scan lists the following objects: myapp/, myapp/resource/, and myapp/resource/1.jpg. The myapp/ and myapp/resource/ entries fail to be migrated because they do not exist as objects. The myapp/resource/1.jpg file is migrated as expected.
Symbolic links that exist at the source data address are directly migrated to the destination data address. For more information, see Configure symbolic links.
Data Online Migration allows you to migrate only the data of a single bucket per task. You cannot migrate all data that belongs to your account in a single task.
Data Online Migration does not support data migration in Alibaba Finance Cloud or in Alibaba Gov Cloud.
Only specific attributes of data can be migrated between OSS buckets.
Attributes that can be migrated are x-oss-meta-*, LastModifyTime, Content-Type, Cache-Control, Content-Encoding, Content-Disposition, Content-Language, and Expires.
Attributes that cannot be migrated include but are not limited to StorageClass, Acl, server-side encryption, Tagging, and user-defined x-oss-persistent-headers.
Note: The preceding list of attributes that cannot be migrated is not exhaustive. Check the actual migration results to identify other attributes that are not migrated.
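As a mnemonic for the attribute lists above, the following sketch classifies an attribute name as migrated or not migrated according to the documented lists. This is illustrative only; the service itself decides what is carried over, and the list of non-migrated attributes is not exhaustive.

```python
# Illustrative classification based on the documented attribute lists.
# Not a Data Online Migration API; treat it only as a mnemonic.

MIGRATED_EXACT = {
    "LastModifyTime", "Content-Type", "Cache-Control", "Content-Encoding",
    "Content-Disposition", "Content-Language", "Expires",
}
NOT_MIGRATED_EXACT = {"StorageClass", "Acl", "Tagging"}

def is_attribute_migrated(attr: str) -> bool:
    """Return True if the attribute is in the documented migrated set."""
    if attr.startswith("x-oss-meta-"):
        # User metadata (x-oss-meta-*) is migrated.
        return True
    if attr.startswith("x-oss-persistent-headers"):
        # User-defined persistent headers are not migrated.
        return False
    if attr in NOT_MIGRATED_EXACT:
        return False
    return attr in MIGRATED_EXACT
```

Remember that server-side encryption settings are also not migrated, and attributes outside both lists may or may not be carried over; verify against actual migration results.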
Step 1: Select a region
Log on to the Data Online Migration console as the Resource Access Management (RAM) user that you created for data migration.
Note: To migrate data across Alibaba Cloud accounts, you can log on as a RAM user that is created within the source or destination Alibaba Cloud account.
In the upper-left corner of the top navigation bar, select the region in which the source data address resides. The following figure shows the position of the drop-down list of the region.
Important: The data addresses and migration tasks that you create in a region cannot be used in another region. Select the region with caution.
We recommend that you select the region in which the source data address resides. If the region in which the source data address resides is not supported by Data Online Migration, select the region that is closest to the region in which the source data address resides.
To speed up cross-border data migration, we recommend that you enable transfer acceleration. If you enable transfer acceleration for OSS buckets, you are charged transfer acceleration fees. For more information, see Transfer acceleration.
Step 2: Create a source data address
In the left-side navigation pane, choose Data Online Migration > Address Management. On the Address Management page, click Create Address.
In the Create Address panel, configure the parameters and click OK. The following table describes the parameters.
Parameter
Required
Description
Name
Yes
The name of the source data address. The name must meet the following requirements:
The name is 3 to 63 characters in length.
The name is case-sensitive and can contain lowercase letters, digits, hyphens (-), and underscores (_).
The name is encoded in the UTF-8 format. The name cannot start with a hyphen (-) or underscore (_).
Type
Yes
The type of the source data address. Select Alibaba OSS.
Region
Yes
The region in which the source data address resides, such as China (Hangzhou).
AccessKeyId
Yes
The AccessKey ID of the RAM user that is used to read data from the source data address. The AccessKey pair is used by OSS to check whether the RAM user has the permissions to read data from the source data address.
Note: For data migration across Alibaba Cloud accounts, enter the AccessKey pair of a RAM user that is created within the source Alibaba Cloud account.
SecretAccessKey
Yes
The AccessKey secret of the RAM user that is used to read data from the source data address.
Bucket
Yes
The name of the OSS bucket in which the data to be migrated is stored.
Prefix
No
The prefix of the source data address. You can specify a prefix to migrate specific data. The value cannot start with a forward slash (/) but must end with a forward slash (/). Example: data/to/oss/.
Specify a prefix for the source data address: For example, you set the prefix of the source data address to example/src/, store a file named example.jpg in example/src/, and set the prefix of the destination data address to example/dest/. After the example.jpg file is migrated to the destination data address, the full path of the file is example/dest/example.jpg.
Do not specify a prefix for the source data address: For example, you specify no prefix for the source data address, the path of the file to be migrated is srcbucket/example.jpg, and you set the prefix of the destination data address to destbucket/. After the example.jpg file is migrated to the destination data address, the full path of the file is destbucket/srcbucket/example.jpg.
Tunnel
No
The name of the tunnel that you want to use.
Important: This parameter is required only when you migrate data to the cloud by using leased lines or VPN gateways or migrate data from self-managed databases to the cloud.
If data at the destination data address is stored in a local file system or you need to migrate data over a leased line in an environment such as Alibaba Finance Cloud or Apsara Stack, you must create and deploy an agent.
Agent
No
The name of the agent that you want to use.
Important: This parameter is required only when you migrate data to the cloud by using leased lines or VPN gateways or migrate data from self-managed databases to the cloud.
You can select up to 30 agents at a time for a specific tunnel.
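The prefix examples in the table above follow a simple rule: the source prefix is stripped from each object key and the destination prefix is prepended. A minimal sketch of that mapping, using a hypothetical helper that is not part of any Data Online Migration API:

```python
def map_key(key: str, src_prefix: str = "", dst_prefix: str = "") -> str:
    """Map a source object key to its destination key by stripping the
    source prefix and prepending the destination prefix, as in the
    documented prefix examples. Hypothetical helper for illustration."""
    if src_prefix and not key.startswith(src_prefix):
        raise ValueError("key does not match the source prefix filter")
    return dst_prefix + key[len(src_prefix):]

# Documented example: with source prefix example/src/ and destination
# prefix example/dest/, example/src/example.jpg lands at
# example/dest/example.jpg.
```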
Step 3: Create a destination data address
In the left-side navigation pane, choose Data Online Migration > Address Management. On the Address Management page, click Create Address.
In the Create Address panel, configure the parameters and click OK. The following table describes the parameters.
Parameter
Required
Description
Name
Yes
The name of the destination data address. The name must meet the following requirements:
The name is 3 to 63 characters in length.
The name is case-sensitive and can contain lowercase letters, digits, hyphens (-), and underscores (_).
The name is encoded in the UTF-8 format. The name cannot start with a hyphen (-) or underscore (_).
Type
Yes
The type of the destination data address. Select Alibaba OSS.
Region
No
The region in which the destination data address resides, such as China (Hangzhou).
AccessKeyId
Yes
The AccessKey ID of the RAM user that is used to write data to the destination data address. The AccessKey pair is used by OSS to check whether the RAM user has the permissions to write data to the destination data address.
Note: For data migration across Alibaba Cloud accounts, enter the AccessKey pair of a RAM user that is created within the destination Alibaba Cloud account.
SecretAccessKey
Yes
The AccessKey secret of the RAM user that is used to write data to the destination data address.
Bucket
Yes
The name of the OSS bucket to which the data is migrated.
Prefix
No
The prefix of the destination data address. You can specify a prefix to migrate data to a specific directory. The value cannot start with a forward slash (/) but must end with a forward slash (/).
Specify a prefix for the destination data address: For example, you set the prefix of the source data address to example/src/, store a file named example.jpg in example/src/, and set the prefix of the destination data address to example/dest/. After the example.jpg file is migrated to the destination data address, the full path of the file is example/dest/example.jpg.
Do not specify a prefix for the destination data address: If you do not specify a prefix for the destination data address, the source data is migrated to the root directory of the destination bucket.
Tunnel
No
The name of the tunnel that you want to use.
Important: This parameter is required only when you migrate data to the cloud by using leased lines or VPN gateways or migrate data from self-managed databases to the cloud.
If data at the destination data address is stored in a local file system or you need to migrate data over a leased line in an environment such as Alibaba Finance Cloud or Apsara Stack, you must create and deploy an agent.
Agent
No
The name of the agent that you want to use.
Important: This parameter is required only when you migrate data to the cloud by using leased lines or VPN gateways or migrate data from self-managed databases to the cloud.
You can select up to 30 agents at a time for a specific tunnel.
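The naming rules shared by data addresses and migration tasks (3 to 63 characters; lowercase letters, digits, hyphens, and underscores; not starting with a hyphen or underscore) can be expressed as a single regular expression. This is an illustrative check, not the console's exact validation logic:

```python
import re

# 3-63 characters total: a lowercase letter or digit first,
# then 2-62 characters drawn from lowercase letters, digits, '-', '_'.
_NAME_RE = re.compile(r"^[a-z0-9][a-z0-9_-]{2,62}$")

def is_valid_address_name(name: str) -> bool:
    """Illustrative check of the documented naming rules."""
    return _NAME_RE.fullmatch(name) is not None
```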
Step 4: Create a migration task
In the left-side navigation pane, choose Data Online Migration > Migration Tasks. On the Migration Tasks page, click Create Task.
In the Select Address step, configure the parameters and click Next. The following table describes the parameters.
Parameter
Required
Description
Name
Yes
The name of the migration task. The name must meet the following requirements:
The name is 3 to 63 characters in length.
The name is case-sensitive and can contain lowercase letters, digits, hyphens (-), and underscores (_).
The name is encoded in the UTF-8 format. The name cannot start with a hyphen (-) or underscore (_).
Source Address
Yes
The source data address that you created.
Destination Address
Yes
The destination data address that you created.
In the Task Configurations step, configure the parameters that are described in the following table.
Parameter
Required
Description
Migration Bandwidth
No
The maximum bandwidth that is available to the migration task. Valid values:
Default: Use the default upper limit for the migration bandwidth. The actual migration bandwidth is based on the file size and the number of files.
Specify an upper limit: Specify a custom upper limit for the migration bandwidth as prompted.
Important: The actual migration bandwidth is based on multiple factors, such as the source data address, network, throttling at the destination data address, and file size. Therefore, the actual migration bandwidth may not reach the specified upper limit.
Specify a reasonable value for the upper limit of the migration bandwidth based on the evaluation of the source data address, migration purpose, business situation, and network bandwidth. Inappropriate throttling may affect business performance.
Files Migrated Per Second
No
The maximum number of files that can be migrated per second. Valid values:
Default: Use the default upper limit for the number of files that can be migrated per second.
Specify an upper limit: Specify a custom upper limit as prompted for the number of files that can be migrated per second.
Important: The actual migration speed is based on multiple factors, such as the source data address, network, throttling at the destination data address, and file size. Therefore, the actual migration speed may not reach the specified upper limit.
Specify a reasonable value for the upper limit of the migration speed based on the evaluation of the source data address, migration purpose, business situation, and network bandwidth. Inappropriate throttling may affect business performance.
Overwrite Mode
No
Specifies whether to overwrite a file at the destination data address if the file has the same name as a file at the source data address. Valid values:
Do not overwrite: does not migrate the file at the source data address.
Overwrite All: overwrites the file at the destination data address.
Overwrite based on the last modification time:
If the last modification time of the file at the source data address is later than that of the file at the destination data address, the file at the destination data address is overwritten.
If the last modification time of the file at the source data address is the same as that of the file at the destination data address, the file at the destination data address is overwritten if the two files differ in size or Content-Type.
Warning:
If you select Overwrite based on the last modification time, a newer file may be overwritten by an older file that has the same name.
If you select Overwrite based on the last modification time, make sure that the file at the source data address contains information such as the last modification time, size, and Content-Type header. Otherwise, the overwrite policy may become invalid and unexpected migration results may occur.
Migration Logs
Yes
Specifies whether to push migration logs to Simple Log Service. Valid values:
Do not push (default): does not push migration logs.
Push: pushes migration logs to Simple Log Service. You can view the migration logs in the Simple Log Service console.
Push only file error logs: pushes only error migration logs to Simple Log Service. You can view the error migration logs in the Simple Log Service console.
If you select Push or Push only file error logs, Data Online Migration creates a project in Simple Log Service. The project name is in the format aliyun-oss-import-log-{Alibaba Cloud account ID}-{region of the Data Online Migration console}. Example: aliyun-oss-import-log-137918634953****-cn-hangzhou.
Important: To prevent errors in the migration task, make sure that the following requirements are met before you select Push or Push only file error logs:
Simple Log Service is activated.
You have confirmed the authorization on the Authorize page.
Authorize
No
This parameter is displayed if you set the Migration Logs parameter to Push or Push only file error logs.
Click Authorize to go to the Cloud Resource Access Authorization page. On this page, click Confirm Authorization Policy. The RAM role AliyunOSSImportSlsAuditRole is created and permissions are granted to the RAM role.
File Name
No
The filter based on the file name.
Both inclusion and exclusion rules are supported. However, only the syntax of specific regular expressions is supported. For more information about the syntax of regular expressions, visit re2. Examples:
.*\.jpg$ indicates all files whose names end with .jpg.
If no prefix is configured for the source data address, ^file.* matches all files whose names start with file in the root directory.
If a prefix is configured for the source data address, for example, data/to/oss/, use the ^data/to/oss/file.* filter to match all files whose names start with file in the specified directory.
.*/picture/.* indicates files whose paths contain a subdirectory called picture.
Important: If an inclusion rule is configured, all files that meet the inclusion rule are migrated. If multiple inclusion rules are configured, files are migrated as long as one of the inclusion rules is met.
For example, the picture.jpg and picture.png files exist and the inclusion rule .*\.jpg$ is configured. In this case, only the picture.jpg file is migrated. If the inclusion rule .*\.png$ is configured at the same time, both files are migrated.
If an exclusion rule is configured, all files that meet the exclusion rule are not migrated. If multiple exclusion rules are configured, files are not migrated as long as one of the exclusion rules is met.
For example, the picture.jpg and picture.png files exist and the exclusion rule .*\.jpg$ is configured. In this case, only the picture.png file is migrated. If the exclusion rule .*\.png$ is configured at the same time, neither file is migrated.
Exclusion rules take precedence over inclusion rules. If a file meets both an exclusion rule and an inclusion rule, the file is not migrated.
For example, the file.txt file exists, and the exclusion rule .*\.txt$ and the inclusion rule file.* are configured. In this case, the file is not migrated.
File Modification Time
No
The filter based on the last modification time of files.
You can specify the last modification time as a filter rule. If you specify a time period, only the files whose last modification time is within the specified time period are migrated. Examples:
If you specify January 1, 2019 as the start time and do not specify the end time, only the files whose last modification time is not earlier than January 1, 2019 are migrated.
If you specify January 1, 2022 as the end time and do not specify the start time, only the files whose last modification time is not later than January 1, 2022 are migrated.
If you specify January 1, 2019 as the start time and January 1, 2022 as the end time, only the files whose last modification time is not earlier than January 1, 2019 and not later than January 1, 2022 are migrated.
Execution Time
No
Important: If the current execution of a migration task is not complete by the next scheduled start time, the task starts its next execution at the subsequent scheduled start time after the current migration is complete. This process continues until the task has run the specified number of times.
The time when the migration task is run. Valid values:
Immediately: The task is immediately run.
Scheduled Task: The task is run within the specified time period every day. By default, the task is started at the specified start time and stopped at the specified stop time.
Periodic Scheduling: The task is run based on the execution frequency and number of execution times that you specify.
Execution Frequency: You can specify the execution frequency of the task. Valid values: Every Hour, Every Day, Every Week, Certain Days of the Week, and Custom. For more information, see the Supported execution frequencies section of this topic.
Executions: You can specify the maximum number of execution times of the task as prompted. By default, if you do not specify this parameter, the task is run once.
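The File Name filter semantics described above (a file is migrated if it matches any inclusion rule, a file is skipped if it matches any exclusion rule, and exclusions take precedence) can be sketched as follows. This uses Python's re module for illustration; the console itself accepts re2 syntax, which overlaps with but is not identical to Python's regex dialect:

```python
import re

def matches_filters(key: str, includes=(), excludes=()) -> bool:
    """Sketch of the documented filter semantics (illustrative only).
    - Exclusion rules take precedence over inclusion rules.
    - With no inclusion rules, every non-excluded file is migrated.
    - With inclusion rules, a file needs to match at least one."""
    if any(re.search(p, key) for p in excludes):
        return False
    if includes:
        return any(re.search(p, key) for p in includes)
    return True
```

For example, with the inclusion rule .*\.jpg$ only picture.jpg is migrated; with the exclusion rule .*\.txt$ and the inclusion rule file.*, file.txt is not migrated because the exclusion wins.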
Read the Data Online Migration Agreement. Select I have read and agree to the Alibaba Cloud International Website Product Terms of Service and I have understood that when the migration task is complete, the migrated data may be different from the source data. Therefore, I have the obligation and responsibility to confirm the consistency between the migrated data and source data. Alibaba Cloud is not responsible for the confirmation of the consistency between the migrated data and source data. Then, click Next.
Verify that the configurations are correct and click OK. The migration task is created.
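The Overwrite based on the last modification time policy described in the Task Configurations table amounts to the following decision, sketched here as illustrative logic rather than the service's actual implementation:

```python
def should_overwrite(src: dict, dst: dict) -> bool:
    """Sketch of 'Overwrite based on the last modification time'.
    `src` and `dst` are dicts with 'mtime', 'size', and 'content_type'
    keys (hypothetical shape; not a Data Online Migration API).
    - Source newer than destination: overwrite.
    - Same timestamp: overwrite only if size or Content-Type differs.
    - Source older: keep the destination file."""
    if src["mtime"] > dst["mtime"]:
        return True
    if src["mtime"] == dst["mtime"]:
        return (src["size"] != dst["size"]
                or src["content_type"] != dst["content_type"])
    return False
```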
Supported execution frequencies
Frequency | Description | Example |
Every Hour | Schedule a migration task to run every hour. If you select this execution frequency, you can also specify the maximum number of execution times of the task. | Schedule a migration task to run every hour for three times. If the current time is 08:05, the task starts its first execution at the next full hour, which is 09:00. |
Every Day | Schedule a migration task to run every day. If you select this execution frequency, you must schedule the task to run at a full hour from 00:00 to 23:00. You can also specify the maximum number of execution times of the task. | Schedule a migration task to run at 10:00 every day for five times. If the current time is 08:05, the task starts its first execution at 10:00 on the same day. |
Every Week | Schedule a migration task to run every week. If you select this execution frequency, you must specify a day of the week and schedule the task to run at a full hour from 00:00 to 23:00. You can also specify the maximum number of execution times of the task. | Schedule a migration task to run at 10:00 every Monday for 10 times. If the current time is 08:05 on Monday, the task starts its first execution at 10:00 on the same day. |
Certain Days of the Week | Schedule a migration task to run on specific days of the week. If you select this execution frequency, you must specify several days of the week and schedule the task to run at a full hour from 00:00 to 23:00. | Schedule a migration task to run at 10:00 every Monday, Wednesday, and Friday. If the current time is 08:05 on Wednesday, the task starts its first execution at 10:00 on the same day. |
Custom | Use a CRON expression to specify a custom start time for a migration task. | Note: A CRON expression consists of five fields that are separated by spaces. The five fields specify the execution time of a migration task in the following order: minute, hour, day of the month, month, and day of the week. Sample CRON expressions: |
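As an illustration of the Every Hour example above (current time 08:05, first run at 09:00), the next full-hour start can be computed as follows. The helper is hypothetical and not part of the service; note also that under the standard five-field order given in the table, an expression such as 0 10 * * 1 would mean 10:00 every Monday.

```python
from datetime import datetime, timedelta

def next_full_hour(now: datetime) -> datetime:
    """Return the next full hour after `now`, matching the Every Hour
    example in the table (08:05 -> first run at 09:00).
    Hypothetical helper for illustration only."""
    return now.replace(minute=0, second=0, microsecond=0) + timedelta(hours=1)
```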
Step 5: Verify data
Data Online Migration solely handles the migration of data and does not ensure data consistency or integrity. After a migration task is complete, you must review all the migrated data and verify the data consistency between the source and destination data addresses.
Make sure that you verify the migrated data at the destination data address after a migration task is complete. If you delete the data at the source data address before you verify the migrated data at the destination data address, you are liable for the losses and consequences caused by any data loss.
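A common way to perform this verification is to list the objects in both buckets and compare per-object checksums, for example the ETag or CRC-64 values returned in OSS object listings. The following sketch compares two key-to-checksum mappings; how you obtain the listings is up to you, and this is an illustrative approach rather than a feature of Data Online Migration:

```python
def diff_listings(src: dict, dst: dict):
    """Compare key -> checksum mappings exported from the source and
    destination data addresses. Returns (missing, mismatched): keys
    absent at the destination, and keys whose checksums differ.
    Illustrative verification sketch only."""
    missing = sorted(k for k in src if k not in dst)
    mismatched = sorted(k for k in src if k in dst and src[k] != dst[k])
    return missing, mismatched
```

An empty result for both lists means every source key exists at the destination with a matching checksum; a spot check of file contents is still advisable for critical data.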