
Object Storage Service:Multipart upload

Last Updated:Nov 27, 2023

Object Storage Service (OSS) provides the multipart upload feature. Multipart upload allows you to split a large object into multiple parts and upload the parts separately. After all parts are uploaded, you can call the CompleteMultipartUpload operation to combine them into a complete object.

Background information

You can use the multipartUpload method to upload a large object. In multipart upload, the object is split into multiple parts that are uploaded separately. OSS records the upload progress, so if some parts fail to be uploaded, you only need to re-upload those parts when you resume the upload.

Important

To upload an object larger than 100 MB, we recommend that you use multipart upload to increase the success rate of the upload. To upload an object smaller than 100 MB, we recommend that you use simple upload; if you use multipart upload with an inappropriate partSize value for such an object, the upload progress may not be fully displayed.

If a ConnectionTimeoutError occurs when you use the multipartUpload method, you can reduce the part size, extend the timeout period, or resend the request. You can also catch the ConnectionTimeoutError and handle it based on its cause. For more information, see Network connection timeout handling.
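As a sketch of the reduce-part-size-and-retry strategy, the following hypothetical helper wraps an upload call (here `uploadFn` stands in for a call such as client.multipartUpload(name, file, options)) and halves the part size on each timeout, down to the 100 KB minimum:

```javascript
// Hypothetical retry helper: on ConnectionTimeoutError, halve the part size
// (down to the 100 KB minimum) and try the upload again.
async function uploadWithRetry(uploadFn, options, maxRetries = 3) {
  let partSize = options.partSize || 1024 * 1024;
  for (let attempt = 0; ; attempt++) {
    try {
      return await uploadFn({ ...options, partSize });
    } catch (e) {
      if (e.code !== 'ConnectionTimeoutError' || attempt >= maxRetries) throw e;
      // Smaller parts make each request shorter and less likely to time out.
      partSize = Math.max(100 * 1024, Math.floor(partSize / 2));
    }
  }
}
```

Resending the request with an unchanged part size is simply the same loop without the partSize adjustment.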

The following parameters can be configured when you upload an object by using multipart upload.

Required parameters:

  • name {String}: the full path of the object. Do not include the bucket name in the path.

  • file {String|File}: the path of the local file or the HTML5 file that you want to upload.

Optional parameters ([options] {Object}):

  • [checkpoint] {Object}: the checkpoint that records the progress of the multipart upload task. To enable resumable upload, you must configure this parameter. If a part fails to be uploaded, the upload can be resumed based on the progress recorded in the checkpoint. After the local file is uploaded, the checkpoint is deleted.

  • [parallel] {Number}: the number of parts that can be uploaded in parallel. Default value: 5. Use the default value if you do not have special requirements.

  • [partSize] {Number}: the part size. Valid values: 100 KB to 5 GB. Default value: 1 * 1024 * 1024 (1 MB). Use the default value if you do not have special requirements.

  • [progress] {Function}: the callback function that reports the progress of the upload task. The callback can be an async function and takes the following parameters:

      • percentage {Number}: the progress of the upload task as a decimal between 0 and 1.

      • checkpoint {Object}: the checkpoint that records the progress of the multipart upload task.

      • res {Object}: the response that is returned when a part is uploaded.

  • [meta] {Object}: the user metadata. User metadata headers are prefixed with x-oss-meta-.

  • [mime] {String}: the Content-Type request header.

  • [headers] {Object}: other headers. For more information, see RFC 2616. Examples:

      • Cache-Control: specifies the caching behavior in HTTP requests and responses. Example: Cache-Control: public, no-cache.

      • Content-Disposition: specifies whether the returned content is displayed as a web page or downloaded as an attachment and saved on your local device. Example: Content-Disposition: somename.

      • Content-Encoding: the method that is used to compress the data of the specified media type. Example: Content-Encoding: gzip.

      • Expires: the expiration time of the cache. Unit: milliseconds.
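The checkpoint parameter is what makes retries cheap: keep the latest checkpoint handed to the progress callback and pass it back on the next attempt. A minimal sketch of that pattern, with `doUpload` standing in for a call such as client.multipartUpload(name, file, options):

```javascript
// Sketch of the resumable-upload pattern: remember the latest checkpoint from
// the progress callback and hand it back to the next attempt.
async function resumableUpload(doUpload, maxRetries = 5) {
  let checkpoint; // undefined on the first attempt
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    try {
      return await doUpload({
        checkpoint,
        // Record where the upload got to after each part.
        progress: (percentage, cpt) => { checkpoint = cpt; },
      });
    } catch (e) {
      if (attempt === maxRetries) throw e;
      // Retry: only parts not recorded in the checkpoint are re-uploaded.
    }
  }
}
```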

Multipart upload example

Important

OSS SDK for Node.js does not support MD5 verification in multipart upload tasks. If you need to verify data integrity, we recommend that you use the CRC-64 library after the multipart upload task is complete.
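OSS computes a CRC-64 checksum of each object using the ECMA-182 polynomial (the CRC-64/XZ configuration) and returns it in the x-oss-hash-crc64ecma response header. As a sketch of what client-side verification involves, the following minimal BigInt implementation computes that checksum; in practice you would compare the local value with the header value after the upload completes.

```javascript
// Minimal CRC-64/XZ (ECMA-182 polynomial, reflected) sketch using BigInt.
const CRC64_POLY = 0xc96c5795d7870f42n;
const MASK64 = 0xffffffffffffffffn;

// Precompute the byte-wise lookup table.
const CRC64_TABLE = Array.from({ length: 256 }, (_, i) => {
  let crc = BigInt(i);
  for (let bit = 0; bit < 8; bit++) {
    crc = crc & 1n ? (crc >> 1n) ^ CRC64_POLY : crc >> 1n;
  }
  return crc;
});

// Compute the CRC-64 of a Buffer; pass a previous result as `crc`
// to checksum data incrementally (for example, part by part).
function crc64(buf, crc = 0n) {
  crc = ~crc & MASK64;
  for (const byte of buf) {
    crc = CRC64_TABLE[Number((crc ^ BigInt(byte)) & 0xffn)] ^ (crc >> 8n);
  }
  return ~crc & MASK64;
}
```

Because the function accepts a previous result, the parts of a multipart upload can be checksummed incrementally in upload order.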

The following sample code provides an example on how to use the multipartUpload method to perform multipart upload:

const OSS = require('ali-oss');
const path = require("path");

const client = new OSS({
  // Specify the region in which the bucket is located. For example, if the bucket is located in the China (Hangzhou) region, set the region to oss-cn-hangzhou. 
  region: 'yourregion',
  // Obtain access credentials from environment variables. Before you run the sample code, make sure that the OSS_ACCESS_KEY_ID and OSS_ACCESS_KEY_SECRET environment variables are configured. 
  accessKeyId: process.env.OSS_ACCESS_KEY_ID,
  accessKeySecret: process.env.OSS_ACCESS_KEY_SECRET,
  // Specify the name of the bucket. 
  bucket: 'yourbucketname'
});


const progress = (p, _checkpoint) => {
  // Record the upload progress of the object. 
  console.log(p); 
  // Record the checkpoint information about the multipart upload task. 
  console.log(_checkpoint); 
};

const headers = {  
  // Specify the storage class of the object. 
  'x-oss-storage-class': 'Standard', 
  // Specify tags for the object. You can specify multiple tags for the object. 
  'x-oss-tagging': 'Tag1=1&Tag2=2', 
  // Specify whether to overwrite an existing object with the same name when the multipart upload task is initialized. In this example, this parameter is set to true, which indicates that an existing object with the same name as the object to upload is not overwritten. 
  'x-oss-forbid-overwrite': 'true'
}

// Start the multipart upload task. 
async function multipartUpload() {
  try {
    // Specify the full path of the object. Example: exampledir/exampleobject.txt. Then, specify the full path of the local file. Example: D:\\localpath\\examplefile.txt. Do not include the bucket name in the full path. 
    // By default, if you set this parameter to the name of a local file such as examplefile.txt without specifying the local path, the local file is uploaded from the local path of the project to which the sample program belongs. 
    const result = await client.multipartUpload('exampledir/exampleobject.txt', path.normalize('D:\\localpath\\examplefile.txt'), {
      progress,
      // headers,
      // Configure the meta parameter to specify metadata for the object. You can call the HeadObject operation to obtain the object metadata. 
      meta: {
        year: 2020,
        people: 'test',
      },
    });
    console.log(result);
    // Specify the full path of the object. Example: exampledir/exampleobject.txt. Do not include the bucket name in the full path. 
    const head = await client.head('exampledir/exampleobject.txt');
    console.log(head);
  } catch (e) {
    // Handle timeout exceptions. 
    if (e.code === 'ConnectionTimeoutError') {
      console.log('TimeoutError');
      // do ConnectionTimeoutError operation
    }
    console.log(e);
  }
}

multipartUpload();

The multipartUpload method in the preceding sample code encapsulates the initMultipartUpload, uploadPart, and completeMultipartUpload operations. If you want to perform multipart upload step by step, call these operations in sequence. For more information, see initMultipartUpload, uploadPart, and completeMultipartUpload.
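As a sketch of the step-by-step flow, assuming the SDK signatures initMultipartUpload(name), uploadPart(name, uploadId, partNo, file, start, end), and completeMultipartUpload(name, uploadId, parts), the three operations fit together as follows (the client is passed in so the part-splitting logic stays self-contained):

```javascript
// Sketch of performing multipart upload step by step. The ali-oss call
// signatures used here are assumptions based on the SDK's documented API.
async function manualMultipartUpload(client, name, filePath, fileSize, partSize) {
  // Step 1: initiate the task and obtain an upload ID.
  const init = await client.initMultipartUpload(name);
  const uploadId = init.uploadId;

  // Step 2: upload each byte range of the file as a numbered part.
  const parts = [];
  for (let start = 0, partNo = 1; start < fileSize; start += partSize, partNo++) {
    const end = Math.min(start + partSize, fileSize);
    const part = await client.uploadPart(name, uploadId, partNo, filePath, start, end);
    parts.push({ number: partNo, etag: part.etag });
  }

  // Step 3: combine the uploaded parts into the final object.
  return client.completeMultipartUpload(name, uploadId, parts);
}
```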

Cancel a multipart upload task

You can use the client.abortMultipartUpload method to cancel a multipart upload task. After a multipart upload task is canceled, its upload ID can no longer be used to upload parts, and the parts that were already uploaded are deleted.

The following sample code provides an example on how to cancel a multipart upload task:

const OSS = require("ali-oss");

const client = new OSS({
  // Specify the region in which the bucket is located. For example, if the bucket is located in the China (Hangzhou) region, set the region to oss-cn-hangzhou. 
  region: "yourregion",
  // Obtain access credentials from environment variables. Before you run the sample code, make sure that the OSS_ACCESS_KEY_ID and OSS_ACCESS_KEY_SECRET environment variables are configured. 
  accessKeyId: process.env.OSS_ACCESS_KEY_ID,
  accessKeySecret: process.env.OSS_ACCESS_KEY_SECRET,
  // Specify the name of the bucket. 
  bucket: "yourbucketname",
});

async function abortMultipartUpload() {
  // Specify the full path of the object. Example: exampledir/exampleobject.txt. Do not include the bucket name in the full path. 
  const name = "exampledir/exampleobject.txt";
  // Specify the upload ID. You can obtain the upload ID from the response to the InitiateMultipartUpload operation. 
  const uploadId = "0004B999EF518A1FE585B0C9360D****";
  const result = await client.abortMultipartUpload(name, uploadId);
  console.log(result);
}

abortMultipartUpload();

List multipart upload tasks

You can use the client.listUploads method to list all ongoing multipart upload tasks, which are tasks that have been initiated but are not completed or canceled.

const OSS = require("ali-oss");

const client = new OSS({
  // Specify the region in which the bucket is located. For example, if the bucket is located in the China (Hangzhou) region, set the region to oss-cn-hangzhou. 
  region: "yourregion",
  // Obtain access credentials from environment variables. Before you run the sample code, make sure that the OSS_ACCESS_KEY_ID and OSS_ACCESS_KEY_SECRET environment variables are configured. 
  accessKeyId: process.env.OSS_ACCESS_KEY_ID,
  accessKeySecret: process.env.OSS_ACCESS_KEY_SECRET,
  // Specify the name of the bucket. 
  bucket: "yourbucketname",
});

async function listUploads(query = {}) {
  // You can configure the following parameters for query: prefix, marker, delimiter, upload-id-marker, and max-uploads. 
  const result = await client.listUploads(query);

  result.uploads.forEach((upload) => {
    // Query the upload ID of each multipart upload task. 
    console.log(upload.uploadId);
    // Query the name (full path) of the object associated with each task. 
    console.log(upload.name);
  });
}

const query = {
  // Specify the maximum number of multipart upload tasks to return for the current list operation. The default value and the maximum value of max-uploads are both 1000. 
  "max-uploads": 1000,
};
listUploads(query);

List uploaded parts

You can use the client.listParts method to list all parts that are uploaded by using the multipart upload task with the specified upload ID.

const OSS = require("ali-oss");

const client = new OSS({
  // Specify the region in which the bucket is located. For example, if the bucket is located in the China (Hangzhou) region, set the region to oss-cn-hangzhou. 
  region: "yourregion",
  // Obtain access credentials from environment variables. Before you run the sample code, make sure that the OSS_ACCESS_KEY_ID and OSS_ACCESS_KEY_SECRET environment variables are configured. 
  accessKeyId: process.env.OSS_ACCESS_KEY_ID,
  accessKeySecret: process.env.OSS_ACCESS_KEY_SECRET,
  // Specify the name of the bucket. 
  bucket: "yourbucketname",
});

async function listParts() {
  const query = {
    // Specify the maximum number of parts to return for the current list operation. The default value and the maximum value of max-parts are both 1000. 
    "max-parts": 1000,
  };
  let result;
  do { 
    result = await client.listParts(
      // Specify the full path of the object. Example: exampledir/exampleobject.txt. Do not include the bucket name in the full path. 
      "exampledir/exampleobject.txt",
      // Obtain the upload ID from the response to the InitiateMultipartUpload operation. You must obtain the upload ID before you call the CompleteMultipartUpload operation to complete the multipart upload task.
      "0004B999EF518A1FE585B0C9360D****",
      query
    );
    // Specify the starting position of the next list operation. Only parts with part numbers greater than the value of this parameter are listed. 
    query["part-number-marker"] = result.nextPartNumberMarker;
    result.parts.forEach((part) => {
      console.log(part.PartNumber);
      console.log(part.LastModified);
      console.log(part.ETag);
      console.log(part.Size);
    });
  } while (result.isTruncated === "true");
}

listParts();

FAQ

How do I obtain the MD5 hash of an object that is uploaded by using multipart upload?

The MD5 hash of an object uploaded by using multipart upload is returned as the Content-MD5 header in the response to the CompleteMultipartUpload operation.
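Assuming the SDK exposes the raw HTTP response on the result object (ali-oss results generally carry a res.headers map with lowercase header names), extracting the hash could look like this sketch:

```javascript
// Sketch: read the Content-MD5 header from a completeMultipartUpload result.
// The result shape (result.res.headers) is an assumption based on how
// ali-oss surfaces raw HTTP responses.
function contentMd5Of(result) {
  const headers = (result.res && result.res.headers) || {};
  // Node.js normalizes header names to lowercase.
  return headers['content-md5'];
}
```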

References

  • For the complete sample code for multipart upload, visit GitHub.

  • The multipartUpload method that is used by OSS SDK for Node.js to perform multipart upload encapsulates the following three API operations: InitiateMultipartUpload, UploadPart, and CompleteMultipartUpload.

  • For more information about the API operation that you can call to cancel a multipart upload task, see AbortMultipartUpload.

  • For more information about the API operation that you can call to list uploaded parts, see ListParts.

  • For more information about the API operation that you can call to list ongoing multipart upload tasks, see ListMultipartUploads. Ongoing multipart upload tasks are tasks that have been initiated but are not completed or canceled.