
Server Migration Center: Migrate a Linux server

Last Updated: Dec 01, 2025

This topic describes how to use the SMC SDK for Java to create a migration job. You can migrate a source Linux server to a custom image or a target ECS instance.

Procedure

Before you begin, make sure that you have completed the preparations, imported the migration source, and prepared the access credentials.
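
The sample code in this topic reads the AccessKey pair from environment variables. Before you run the samples, you can verify that the variables are set. The following minimal Java sketch is not part of the official samples; it only checks the two variable names that the Step 3 samples read.

public class CheckCredentials {
    public static void main(String[] args) {
        // These are the two environment variables that the Step 3 samples read.
        String[] required = {"ALIBABA_CLOUD_ACCESS_KEY_ID", "ALIBABA_CLOUD_ACCESS_KEY_SECRET"};
        for (String name : required) {
            String value = System.getenv(name);
            if (value == null || value.isEmpty()) {
                throw new IllegalStateException("Environment variable " + name + " is not set.");
            }
        }
        System.out.println("Credential environment variables are set.");
    }
}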

Step 1: View disk and partition information

To configure the migration job correctly, you must map the source disk partitions. This example assumes a source Linux server with one system disk and two data disks.

Log on to the SMC console and click the ID of your imported migration source. On the Basic Information tab, record the disk and partition layout (for example, sizes and device names). You will map this information to the request parameters in Step 3.
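
If you prefer to query this information programmatically, SMC also provides the DescribeSourceServers operation. The following sketch prints the raw response body instead of navigating nested getters; the request and response class names are assumed to follow the SDK's generated naming pattern, so verify them against the SDK version that you install.

import com.aliyun.smc20190601.models.DescribeSourceServersRequest;
import com.aliyun.smc20190601.models.DescribeSourceServersResponse;

public class ShowSourceDisks {
    public static void main(String[] args) throws Exception {
        // Reuses the createClient() helper defined in the Step 3 samples.
        com.aliyun.smc20190601.Client client = SampleLinux.createClient();
        DescribeSourceServersRequest request = new DescribeSourceServersRequest();
        request.setRegionId("cn-beijing");
        // SourceId is a repeated parameter; pass the ID of your migration source.
        request.setSourceId(java.util.Arrays.asList("s-b********"));
        DescribeSourceServersResponse response = client.describeSourceServers(request);
        // The disk and partition layout appears under the source server entries.
        System.out.println(com.aliyun.teautil.Common.toJSONString(response.getBody()));
    }
}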


Step 2: Install the Java dependency library

Add the SMC SDK dependency to your project's pom.xml file.

<dependency>
  <groupId>com.aliyun</groupId>
  <artifactId>smc20190601</artifactId>
  <!-- Use the latest version. -->
  <version>1.1.4</version>
</dependency>

Step 3: Run the job

The following examples demonstrate how to initiate the migration. Replace the placeholder parameters with your actual configuration values. For a complete list of parameters, see the CreateReplicationJob API reference.

Migrate to an image

import java.util.ArrayList;
import java.util.List;

import com.aliyun.smc20190601.models.CreateReplicationJobRequest;
import com.aliyun.smc20190601.models.CreateReplicationJobRequest.CreateReplicationJobRequestDisks;
import com.aliyun.smc20190601.models.CreateReplicationJobRequest.CreateReplicationJobRequestDisksData;
import com.aliyun.smc20190601.models.CreateReplicationJobRequest.CreateReplicationJobRequestDisksDataPart;
import com.aliyun.smc20190601.models.CreateReplicationJobRequest.CreateReplicationJobRequestDisksSystem;
import com.aliyun.smc20190601.models.CreateReplicationJobRequest.CreateReplicationJobRequestDisksSystemPart;
import com.aliyun.smc20190601.models.CreateReplicationJobResponse;
import com.aliyun.tea.TeaException;

public class SampleLinux {

    /**
     * Initialize the client by using the AccessKey credentials from environment variables.
     * @return Client
     * @throws Exception
     */
    public static com.aliyun.smc20190601.Client createClient() throws Exception {
        com.aliyun.teaopenapi.models.Config config = new com.aliyun.teaopenapi.models.Config();
        config.setAccessKeyId(System.getenv("ALIBABA_CLOUD_ACCESS_KEY_ID"));
        config.setAccessKeySecret(System.getenv("ALIBABA_CLOUD_ACCESS_KEY_SECRET"));
        config.setEndpoint("smc.aliyuncs.com");
        return new com.aliyun.smc20190601.Client(config);
    }

    /**
     * Create a migration job
     *  1. Generate a new ECS image
     *  2. Migrate disks
     *  3. Enable migration dry run
     *  4. Use automatic incremental synchronization
     */
    public static void main(String[] args) throws Exception {

        com.aliyun.smc20190601.Client client = SampleLinux.createClient();
        CreateReplicationJobRequest request = new CreateReplicationJobRequest();
        request.setRegionId("cn-beijing");
        request.setSourceId("s-b********");
        // Target type. Image: SMC generates an Alibaba Cloud image after the migration is complete.
        request.setTargetType("Image");
        // Job type. 0: server migration. 1: operating system migration. 2: cross-zone migration. 3: VMware agentless migration.
        request.setJobType(0);
        // Transmission mode. 0: public network; the source server must have Internet access, and data is transmitted over the Internet. 2: private network; you must set the VSwitchId parameter (VpcId is optional because the service can query it internally).
        request.setNetMode(0);
        /**
         * Replication parameters, formatted as a JSON string
         * image_check: Whether to check the image. Default is true.
         * test_run: Whether to perform a migration dry run. Default is auto_running, meaning the migration starts after the dry run passes.
         * bandwidth_limit: Bandwidth limit. Default is 0. Unit: Mbps.
         */
        request.setReplicationParameters("{\"bandwidth_limit\":0,\"compress_level\":7,\"checksum\":false,\"image_check\":true,\"use_ssl_tunnel\":true,\"transport_mode\":\"\",\"upload_logs_enable\":true,\"test_run\":\"auto_running\"}");
        /**
         * Whether to run the migration job only once.
         * true (default): one-time migration job.
         * false: incremental migration job. After creation, the job runs periodically based on the configured Frequency parameter.
         */
        request.setRunOnce(false);
        // The interval at which incremental migration jobs run. Unit: hours. Value range: 1 to 168.
        request.setFrequency(1);
        // The maximum number of images to retain for incremental migration jobs. Value range: 1 to 10. A value of 1 retains only the image generated by the most recent data synchronization.
        request.setMaxNumberOfImageToKeep(1);


        // Set the disk parameters for the migration. Use the information that you recorded in Step 1: View disk and partition information.
        CreateReplicationJobRequestDisks disk = new CreateReplicationJobRequestDisks();
        request.setDisks(disk);


        // Set the system disk. 
        CreateReplicationJobRequestDisksSystem systemDisk = new CreateReplicationJobRequestDisksSystem();
        // Set the system disk size. Unit: GiB. The value must be greater than the used space of the source system disk.
        systemDisk.setSize(40);
        List<CreateReplicationJobRequestDisksSystemPart> systemParts = new ArrayList<>();
        CreateReplicationJobRequestDisksSystemPart systemPart = new CreateReplicationJobRequestDisksSystemPart();
        // Whether to enable block replication for the system disk partition.
        systemPart.setBlock(true);
        // System disk partition path.
        systemPart.setPath("/");
        // System disk partition size. Unit: Byte. 39 * 1024 * 1024 * 1024 = 41875931136
        systemPart.setSizeBytes(41875931136L);
        systemParts.add(systemPart);
        systemDisk.setPart(systemParts);
        disk.setSystem(systemDisk);


        List<CreateReplicationJobRequestDisksData> dataDisks = new ArrayList<>();
        disk.setData(dataDisks);
        // Set the first data disk
        CreateReplicationJobRequestDisksData dataDisk1 = new CreateReplicationJobRequestDisksData();
        dataDisk1.setSize(40);
        List<CreateReplicationJobRequestDisksDataPart> dataDisk1Parts = new ArrayList<>();
        CreateReplicationJobRequestDisksDataPart disksData1Part = new CreateReplicationJobRequestDisksDataPart();
        disksData1Part.setBlock(true);
        disksData1Part.setPath("/data1");
        // Data disk partition size. Unit: Byte. 39 * 1024 * 1024 * 1024 = 41875931136
        disksData1Part.setSizeBytes(41875931136L);
        dataDisk1Parts.add(disksData1Part);
        dataDisk1.setPart(dataDisk1Parts);
        dataDisks.add(dataDisk1);
        // Set the second data disk
        CreateReplicationJobRequestDisksData dataDisk2 = new CreateReplicationJobRequestDisksData();
        dataDisk2.setSize(60);
        List<CreateReplicationJobRequestDisksDataPart> dataDisk2Parts = new ArrayList<>();
        CreateReplicationJobRequestDisksDataPart disksData2Part = new CreateReplicationJobRequestDisksDataPart();
        disksData2Part.setBlock(true);
        disksData2Part.setPath("/data2");
        // Data disk partition size. Unit: Byte. 59 * 1024 * 1024 * 1024 = 63350767616
        disksData2Part.setSizeBytes(63350767616L);
        dataDisk2Parts.add(disksData2Part);
        dataDisk2.setPart(dataDisk2Parts);
        dataDisks.add(dataDisk2);
        try {
            // Print the API response so that you can inspect the result when you run this sample.
            CreateReplicationJobResponse response = client.createReplicationJob(request);
            System.out.println(response.getBody().getJobId());
            System.out.println(response.getBody().getRequestId());
        } catch (TeaException error) {
            // For demonstration only. In production code, handle exceptions carefully and do not ignore them.
            // Error message
            System.out.println(error.getMessage());
            // Diagnosis URL
            System.out.println(error.getData().get("Recommend"));
            com.aliyun.teautil.Common.assertAsString(error.message);
        } catch (Exception _error) {
            TeaException error = new TeaException(_error.getMessage(), _error);
            // For demonstration only. In production code, handle exceptions carefully and do not ignore them.
            // Error message
            System.out.println(error.getMessage());
            // Diagnosis URL
            System.out.println(error.getData().get("Recommend"));
            com.aliyun.teautil.Common.assertAsString(error.message);
        }
    }
}
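
In the sample above, ReplicationParameters is a hand-written JSON string, which is easy to get wrong. As an optional alternative, you can build the string from a map with any JSON library. The following sketch assumes the Gson library (com.google.code.gson:gson) as an extra dependency; it produces the same JSON that is used in the samples.

import java.util.LinkedHashMap;
import java.util.Map;
import com.google.gson.Gson;

public class BuildReplicationParameters {
    public static String build() {
        // The keys and values mirror the JSON string in the samples.
        Map<String, Object> params = new LinkedHashMap<>();
        params.put("bandwidth_limit", 0);
        params.put("compress_level", 7);
        params.put("checksum", false);
        params.put("image_check", true);
        params.put("use_ssl_tunnel", true);
        params.put("transport_mode", "");
        params.put("upload_logs_enable", true);
        params.put("test_run", "auto_running");
        return new Gson().toJson(params);
    }
}

You can then pass the result to request.setReplicationParameters(BuildReplicationParameters.build()).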

Migrate to an instance

import java.util.ArrayList;
import java.util.List;

import com.aliyun.smc20190601.models.CreateReplicationJobRequest;
import com.aliyun.smc20190601.models.CreateReplicationJobRequest.CreateReplicationJobRequestDisks;
import com.aliyun.smc20190601.models.CreateReplicationJobRequest.CreateReplicationJobRequestDisksData;
import com.aliyun.smc20190601.models.CreateReplicationJobRequest.CreateReplicationJobRequestDisksDataPart;
import com.aliyun.smc20190601.models.CreateReplicationJobRequest.CreateReplicationJobRequestDisksSystem;
import com.aliyun.smc20190601.models.CreateReplicationJobRequest.CreateReplicationJobRequestDisksSystemPart;
import com.aliyun.smc20190601.models.CreateReplicationJobResponse;
import com.aliyun.tea.TeaException;

public class SampleLinuxTargetInstance {

    /**
     * Initialize the client by using the AccessKey credentials from environment variables.
     * @return Client
     * @throws Exception
     */
    public static com.aliyun.smc20190601.Client createClient() throws Exception {
        com.aliyun.teaopenapi.models.Config config = new com.aliyun.teaopenapi.models.Config();
        config.setAccessKeyId(System.getenv("ALIBABA_CLOUD_ACCESS_KEY_ID"));
        config.setAccessKeySecret(System.getenv("ALIBABA_CLOUD_ACCESS_KEY_SECRET"));
        config.setEndpoint("smc.aliyuncs.com");
        return new com.aliyun.smc20190601.Client(config);
    }

    /**
     * Create a migration job
     *  1. Migrate to a target ECS instance
     *  2. Migrate disks
     *  3. Enable migration dry run
     *  4. Use automatic incremental synchronization
     */
    public static void main(String[] args) throws Exception {

        com.aliyun.smc20190601.Client client = SampleLinuxTargetInstance.createClient();
        CreateReplicationJobRequest request = new CreateReplicationJobRequest();
        request.setRegionId("cn-beijing");
        request.setSourceId("s-bp******");
        // Target type. TargetInstance: SMC migrates the source directly to the target instance. When this parameter is set, the InstanceId parameter must also be specified.
        request.setTargetType("TargetInstance");
        request.setInstanceId("i-2******");
        // Job type. 0: server migration. 1: operating system migration. 2: cross-zone migration. 3: VMware agentless migration.
        request.setJobType(0);
        // Transmission mode. 0: public network; the source server must have Internet access, and data is transmitted over the Internet. 2: private network; you must set the VSwitchId parameter (VpcId is optional because the service can query it internally).
        request.setNetMode(0);
        /**
         * Replication parameters, formatted as a JSON string
         * image_check: Whether to check the image. Default is true.
         * test_run: Whether to perform a migration dry run. Default is auto_running, meaning the migration starts after the dry run passes.
         * bandwidth_limit: Bandwidth limit. Default is 0. Unit: Mbps.
         */
        request.setReplicationParameters("{\"bandwidth_limit\":0,\"compress_level\":7,\"checksum\":false,\"image_check\":true,\"use_ssl_tunnel\":true,\"transport_mode\":\"\",\"upload_logs_enable\":true,\"test_run\":\"auto_running\"}");
        /**
         * Whether to run the migration job only once.
         * true (default): one-time migration job.
         * false: incremental migration job. After creation, the job runs periodically based on the configured Frequency parameter.
         */
        request.setRunOnce(false);
        // The interval at which incremental migration jobs run. Unit: hours. Value range: 1 to 168.
        request.setFrequency(1);
        // The maximum number of images to retain for incremental migration jobs. Value range: 1 to 10. A value of 1 retains only the image generated by the most recent data synchronization.
        request.setMaxNumberOfImageToKeep(1);


        // Set the disk parameters for migration
        CreateReplicationJobRequestDisks disk = new CreateReplicationJobRequestDisks();
        request.setDisks(disk);


        // Set the system disk
        CreateReplicationJobRequestDisksSystem systemDisk = new CreateReplicationJobRequestDisksSystem();
        // Set the system disk size. Unit: GiB. The value must be greater than the used space of the source system disk.
        systemDisk.setSize(40);
        List<CreateReplicationJobRequestDisksSystemPart> systemParts = new ArrayList<>();
        CreateReplicationJobRequestDisksSystemPart systemPart = new CreateReplicationJobRequestDisksSystemPart();
        // Whether to enable block replication for the system disk partition. 
        systemPart.setBlock(true);
        // System disk partition path.
        systemPart.setPath("/");
        // System disk partition size. Unit: Byte. 39 * 1024 * 1024 * 1024 = 41875931136
        systemPart.setSizeBytes(41875931136L);
        systemParts.add(systemPart);
        systemDisk.setPart(systemParts);
        disk.setSystem(systemDisk);


        List<CreateReplicationJobRequestDisksData> dataDisks = new ArrayList<>();
        disk.setData(dataDisks);
        // Set the first data disk
        CreateReplicationJobRequestDisksData dataDisk1 = new CreateReplicationJobRequestDisksData();
        dataDisk1.setSize(40);
        // Set the ID of the data disk on the target instance that corresponds to this source data disk.
        dataDisk1.setDiskId("d-2z*********");
        List<CreateReplicationJobRequestDisksDataPart> dataDisk1Parts = new ArrayList<>();
        CreateReplicationJobRequestDisksDataPart disksData1Part = new CreateReplicationJobRequestDisksDataPart();
        disksData1Part.setBlock(true);
        disksData1Part.setPath("/data1");
        // Data disk partition size. Unit: Byte. 39 * 1024 * 1024 * 1024 = 41875931136
        disksData1Part.setSizeBytes(41875931136L);
        dataDisk1Parts.add(disksData1Part);
        dataDisk1.setPart(dataDisk1Parts);
        dataDisks.add(dataDisk1);
        // Set the second data disk
        CreateReplicationJobRequestDisksData dataDisk2 = new CreateReplicationJobRequestDisksData();
        dataDisk2.setSize(60);
        // Set the ID of the data disk on the target instance that corresponds to this source data disk.
        dataDisk2.setDiskId("d-2*********");
        List<CreateReplicationJobRequestDisksDataPart> dataDisk2Parts = new ArrayList<>();
        CreateReplicationJobRequestDisksDataPart disksData2Part = new CreateReplicationJobRequestDisksDataPart();
        disksData2Part.setBlock(true);
        disksData2Part.setPath("/data2");
        // Data disk partition size. Unit: Byte. 59 * 1024 * 1024 * 1024 = 63350767616
        disksData2Part.setSizeBytes(63350767616L);
        dataDisk2Parts.add(disksData2Part);
        dataDisk2.setPart(dataDisk2Parts);
        dataDisks.add(dataDisk2);
        try {
            // Print the API response so that you can inspect the result when you run this sample.
            CreateReplicationJobResponse response = client.createReplicationJob(request);
            System.out.println(response.getBody().getJobId());
            System.out.println(response.getBody().getRequestId());
        } catch (TeaException error) {
            // For demonstration only. In production code, handle exceptions carefully and do not ignore them.
            // Error message
            System.out.println(error.getMessage());
            // Diagnosis URL
            System.out.println(error.getData().get("Recommend"));
            com.aliyun.teautil.Common.assertAsString(error.message);
        } catch (Exception _error) {
            TeaException error = new TeaException(_error.getMessage(), _error);
            // For demonstration only. In production code, handle exceptions carefully and do not ignore them.
            // Error message
            System.out.println(error.getMessage());
            // Diagnosis URL
            System.out.println(error.getData().get("Recommend"));
            com.aliyun.teautil.Common.assertAsString(error.message);
        }
    }
}
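
After the job is created, you can poll its status instead of watching the console. The following sketch is based on the DescribeReplicationJobs operation; the request and response class names are assumed to follow the SDK's generated naming pattern, and the job ID placeholder is hypothetical, so adjust both to your environment.

import com.aliyun.smc20190601.models.DescribeReplicationJobsRequest;
import com.aliyun.smc20190601.models.DescribeReplicationJobsResponse;

public class PollJobStatus {
    public static void main(String[] args) throws Exception {
        // Reuses the createClient() helper from the sample above.
        com.aliyun.smc20190601.Client client = SampleLinuxTargetInstance.createClient();
        DescribeReplicationJobsRequest request = new DescribeReplicationJobsRequest();
        request.setRegionId("cn-beijing");
        // JobId is a repeated parameter; pass the ID returned by CreateReplicationJob.
        request.setJobId(java.util.Arrays.asList("j-********"));
        // Poll up to 10 times at one-minute intervals and print the raw body,
        // which includes the job status and progress.
        for (int i = 0; i < 10; i++) {
            DescribeReplicationJobsResponse response = client.describeReplicationJobs(request);
            System.out.println(com.aliyun.teautil.Common.toJSONString(response.getBody()));
            Thread.sleep(60_000L);
        }
    }
}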

If the job creation fails, check the returned error message and refer to the FAQ for debugging steps.