Data Transport: Upload data to the cloud and verify the migration result

Last Updated: Dec 23, 2025

This topic describes how to return a Data Transport III device, upload data to the cloud, and verify the migration result after the data migration is complete and the device is powered off.

Prerequisites

All data has been migrated to the Data Transport III device. The migration task is stopped and the device is powered off.

Return the device

1. Confirm the task status

Run the following commands in sequence. Confirm that the task status is succeeded and that the data passes CRC-64 verification against the source files. A sketch of this sequence follows the list.

  • Run the cd mgwclient command to go to the client directory.

  • Run the bash console.sh status <job_name> command to view the task status. Replace <job_name> with the actual job name.
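
The following is a minimal sketch of the preceding sequence as it might be run on the device console. It assumes that the client directory is named mgwclient, as implied in the first step; replace <job_name> with your actual job name.

# Go to the client directory (directory name assumed from the step above).
cd mgwclient
# View the task status. Replace <job_name> with the actual job name.
bash console.sh status <job_name>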

2. Stop the migration service

Important

After the migration is complete, you must perform the following operations. Otherwise, the migrated data may be incomplete. Do not forcibly power off the device.

Run the following command to stop the migration service.

  • beforepoweroff

3. Unmount and lock the storage pool

Run the following command to unmount and lock the storage pool.

  • crypt close

4. Power off the device

Run the poweroff command to power off the device.
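
For reference, the following is a consolidated sketch of the shutdown sequence described in steps 2 through 4, run on the device console after the task status check passes. Run the commands in this order and wait for each command to finish before you run the next one.

# Stop the migration service (step 2).
beforepoweroff
# Unmount and lock the storage pool (step 3).
crypt close
# Power off the device (step 4).
poweroff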

5. Check the accessories

  1. Confirm that all four optical transceiver modules are present in the 25G optical ports.

  2. Confirm that both power cables are present.

  3. Hand over the Data Transport device to the Alibaba Cloud logistics partner.

Upload data to the cloud and verify the migration result

Grant permissions to a RAM role

Alibaba Cloud uses a specific RAM role to upload data from the Data Transport device to your specified Object Storage Service (OSS) bucket. You must grant the required permissions to the RAM role. The procedure is as follows:

Important

When you use a bucket policy to grant permissions, the new policy overwrites the existing policy. Make sure that the new policy includes the content of the existing policy. Otherwise, operations that rely on the existing policy may fail.

  • Log on to the OSS console.

  • In the navigation pane on the left, click Buckets and select the destination bucket.

  • In the navigation pane on the left, choose Access Control > Bucket Policy.

  • On the Add by Syntax tab, click Edit, add a custom bucket policy, and then click Save.

    • The policy must grant the RAM role permissions to list, read, delete, and write all resources in the bucket.

Note

The following policy is for reference only. Replace the parameters with your actual values. When you replace the parameters, do not retain the angle brackets (<>). The parameters are described as follows:

  • <mybucket>: The name of your destination bucket.

  • <myuid>: The UID of the Alibaba Cloud account that owns the destination bucket.

Replace only the preceding parameters. The example already includes the specific RAM role information for Data Transport (UID: 1737441177608761, RAM role name: mgw-data-transport-role). For more information about OSS access policies, see Common examples of RAM policies.

{
  "Version": "1",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "oss:List*",
        "oss:Get*",
        "oss:Put*",
        "oss:AbortMultipartUpload"
      ],
      "Principal": [
         "arn:sts::1737441177608761:assumed-role/mgw-data-transport-role/*"
      ],
      "Resource": [
        "acs:oss:*:<myuid>:<mybucket>",
        "acs:oss:*:<myuid>:<mybucket>/*"
      ]
    }
  ]
}
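
As an optional local check that is not part of the official procedure, you can confirm that your edited policy is still valid JSON and that no angle-bracket placeholders remain before you paste it into the console. In the following sketch, policy.json is a hypothetical local file that contains your edited policy.

# Check whether any angle-bracket placeholders remain in the edited policy.
grep -n '[<>]' policy.json && echo "Placeholders remain. Replace them before you continue."
# Confirm that the edited policy still parses as valid JSON.
python3 -m json.tool policy.json > /dev/null && echo "policy.json is valid JSON."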

Upload data to the cloud

Alibaba Cloud personnel use the authorized RAM role to upload the data from the Data Transport device to the cloud. After the data upload is complete, they contact you and provide a migration report. You can use the report to verify the migration result. This completes the migration task.

Important

After the migration is complete, the migration report is stored in your specified OSS bucket. For more information, see the Migration report section.

Migration report

After a migration report is generated, it is stored in your specified OSS bucket. The report files are located in the following paths:

oss://<bucket>/<prefix>/aliyun_import_report/<uid>/<jobid>/<runtimeid>/total_list/
oss://<bucket>/<prefix>/aliyun_import_report/<uid>/<jobid>/<runtimeid>/failed_list/
oss://<bucket>/<prefix>/aliyun_import_report/<uid>/<jobid>/<runtimeid>/skipped_list/

The fields in the report paths are described as follows:

  • bucket: Your specified OSS bucket.

  • prefix: The prefix of the destination folder in your OSS bucket.

  • uid: The UID provided by Alibaba Cloud personnel: 1737441177608761.

  • jobid: The job ID provided by Alibaba Cloud personnel.

  • runtimeid: The task execution record ID provided by Alibaba Cloud personnel.

The migration report includes three types of file lists: a list of all migrated files, a list of failed files, and a list of skipped files. You can download these files to view the details. We recommend that you use the ossbrowser 2.0 (Preview) graphical management tool or the ossutil 1.0 command line interface to view the files.
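
As an example of the ossutil option, the following sketch lists the report directory for a task and downloads the failed-file lists for inspection. It assumes that ossutil is installed and configured with credentials that can read the bucket. Keep the angle-bracket placeholders until you replace them with the values provided for your task.

# List the report files for the task.
ossutil ls oss://<bucket>/<prefix>/aliyun_import_report/<uid>/<jobid>/<runtimeid>/
# Download the failed-file lists to a local directory for inspection.
ossutil cp -r oss://<bucket>/<prefix>/aliyun_import_report/<uid>/<jobid>/<runtimeid>/failed_list/ ./failed_list/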

Note

Migration report file naming conventions

  • uid@jobid@runtimeid_total_list_n: The list of all migrated files. There may be multiple files. n is an integer that is greater than or equal to 0.

  • uid@jobid@runtimeid_failed_list_n: The list of files that failed to migrate. There may be multiple files. n is an integer that is greater than or equal to 0.

  • uid@jobid@runtimeid_skipped_list_n: The list of files that were skipped during migration. There may be multiple files. n is an integer that is greater than or equal to 0.

The fields in the migration report files describe various properties of the files during the migration from the Data Transport device to the destination OSS bucket. The fields include the following:

  • Source file name: The name of the source file.

  • Object file name: The name of the object file.

  • Source file size: The size of the source file.

  • Object file size: The size of the object file.

  • Source file MD5: The MD5 hash of the source file, used for data consistency verification.

  • Object file MD5: The MD5 hash of the object file, used for data consistency verification.

  • Source file CRC-64: The CRC-64 value of the source file, used for data consistency verification.

  • Object file CRC-64: The CRC-64 value of the object file, used for data consistency verification.

  • Last modified time of the source file: The time when the source file was last modified.

  • Last modified time of the object file: The time when the object file was last modified.

  • Source object version ID (for versioning-enabled buckets only): This value is empty.

  • Object version ID (for versioning-enabled buckets only): This value is empty.

  • Migration start time: The time when the file migration started.

  • Migration end time: The time when the file migration ended.

  • Abnormal migration (false: normal, true: abnormal): A Boolean flag that indicates whether the migration was abnormal. A value of false indicates that the migration was normal. A value of true indicates that the migration was abnormal.

  • Reason for abnormality: A description of the reason for the abnormality.

Verify the migration result

Data Transport is responsible only for data migration and does not guarantee data consistency or integrity. After the migration task is complete, you must verify all migrated data by performing data consistency verification between the source and the destination, and promptly revoke the authorization that you granted to the Data Transport RAM role.
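
The verification method is up to you. As one possible spot check, assuming that ossutil is installed and configured, the following sketch reads the metadata of a migrated object so that you can compare its size and CRC-64 value with the values recorded for the corresponding source file in the migration report. <object> is a placeholder for the name of a migrated object.

# Show the metadata of a migrated object, including its size and its CRC-64
# value (returned by OSS as X-Oss-Hash-Crc64ecma).
ossutil stat oss://<bucket>/<prefix>/<object>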

Warning

After the migration task is complete, you must verify the migrated data at the destination. If you delete the source data before you verify that the destination data is correct, you are responsible for any resulting data loss and all consequences.