Use multipart upload to upload large files to OSS reliably. Instead of sending the entire file in a single request, you split the file into parts, upload the parts in parallel, and then combine them into a complete object. This approach improves throughput, recovers quickly from network interruptions, and lets you resume interrupted uploads without starting over.
Prerequisites
Before you begin, ensure that you have:
- An OSS bucket in the region where you want to store the object. See Regions and endpoints.
- The oss:PutObject permission on the bucket. See Grant permissions to a RAM user using a custom policy.
- A valid STS temporary access credential (AccessKey ID, AccessKey secret, and security token).
How it works
A multipart upload follows three steps:
1. Initiate — Call client.initiateMultipartUpload. OSS returns a globally unique upload ID that identifies this upload session.
2. Upload parts — Call client.uploadPart for each part, identified by a part number. Parts can be uploaded in parallel.
3. Complete — Call client.completeMultipartUpload. OSS assembles the parts in part-number order into a complete object.
Part numbering behavior:
- Part numbers identify each part's position in the final object.
- Uploading a new part with an existing part number overwrites the previous part.
- OSS returns the MD5 hash of each received part in the ETag header.
- If the MD5 hash of the uploaded data does not match the hash computed by the SDK, OSS returns the InvalidDigest error.
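The numbering rules above can be sketched as a small planning helper. This is an illustrative sketch, not part of the SDK: planParts and PartPlan are made-up names, and the arithmetic simply mirrors the offset and length calculation used in the full example later in this topic.

```typescript
// Hypothetical helper (not an SDK API): compute the part layout for a file,
// following the numbering rules above.
interface PartPlan {
  partNumber: number; // 1-based; fixes the part's position in the final object
  offset: number;     // byte offset into the source file where the part begins
  length: number;     // part size in bytes; only the last part may be smaller
}

function planParts(fileSize: number, chunkSize: number): PartPlan[] {
  const parts: PartPlan[] = [];
  const totalParts = Math.ceil(fileSize / chunkSize);
  for (let partNumber = 1; partNumber <= totalParts; partNumber++) {
    const offset = (partNumber - 1) * chunkSize;
    parts.push({
      partNumber,
      offset,
      length: Math.min(chunkSize, fileSize - offset),
    });
  }
  return parts;
}

// A 25 MB file split into 10 MB parts yields parts of 10 MB, 10 MB, and 5 MB.
const MB = 1024 * 1024;
const plan = planParts(25 * MB, 10 * MB);
```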
Upload a file in parts
The following example splits a local file into 10 MB parts and uploads them concurrently using Promise.all.
import Client, { FilePath, RequestError, THarmonyEmptyBodyApiRes } from '@aliyun/oss';
import { fileIo as fs } from '@kit.CoreFileKit';
// Create an OSS client instance.
// Create an OSS client instance.
const client = new Client({
  // Replace with the AccessKey ID from your STS temporary access credential.
  accessKeyId: 'yourAccessKeyId',
  // Replace with the AccessKey secret from your STS temporary access credential.
  accessKeySecret: 'yourAccessKeySecret',
  // Replace with the security token from your STS temporary access credential.
  securityToken: 'yourSecurityToken',
  // Specify the region where the bucket is located. For example, if the bucket is in
  // the China (Hangzhou) region, set the region to oss-cn-hangzhou.
  region: 'oss-cn-hangzhou',
});
const bucket = 'yourBucketName'; // Replace with your actual bucket name.
const key = 'yourObjectName'; // Replace with your actual object name.
const multipartUpload = async () => {
  try {
    // Step 1: Initiate the multipart upload and get the upload ID.
    const initRes = await client.initiateMultipartUpload({ bucket, key });
    const uploadId = initRes.data.uploadId;

    // Step 2: Split the file into parts and upload them concurrently.
    const filePath = new FilePath('yourFilePath'); // Replace with the actual local file path.
    const fileStat = await fs.stat(filePath.filePath);
    const chunkSize = 1024 * 1024 * 10; // 10 MB per part.
    const totalParts = Math.ceil(fileStat.size / chunkSize);
    const waitList: Promise<THarmonyEmptyBodyApiRes>[] = [];
    for (let partNumber = 1; partNumber <= totalParts; partNumber++) {
      const offset = (partNumber - 1) * chunkSize;
      const uploadPromise = client.uploadPart({
        bucket,
        key,
        uploadId,
        partNumber,
        data: filePath,
        length: Math.min(chunkSize, fileStat.size - offset), // Size of the current part.
        offset, // Start offset of the current part.
      });
      waitList.push(uploadPromise);
    }
    // Wait for all parts to finish uploading.
    await Promise.all(waitList);

    // Step 3: Complete the multipart upload.
    const completeRes = await client.completeMultipartUpload({
      bucket,
      key,
      uploadId,
      completeAll: true, // Automatically assemble all uploaded parts.
    });
    console.log(JSON.stringify(completeRes));
  } catch (err) {
    if (err instanceof RequestError) {
      console.log('code: ', err.code);
      console.log('message: ', err.message);
      console.log('requestId: ', err.requestId);
      console.log('status: ', err.status);
      console.log('ec: ', err.ec);
    } else {
      console.log('unknown error: ', err);
    }
  }
};
multipartUpload();

Key parameters:
| Parameter | Description |
|---|---|
| uploadId | The unique identifier returned by initiateMultipartUpload. Pass it to all subsequent uploadPart and completeMultipartUpload calls. |
| partNumber | The sequential number of each part, starting from 1. Determines the part's position in the final object. |
| offset | The byte offset into the source file where this part begins. |
| length | The size of this part in bytes. The last part can be smaller than chunkSize. |
| completeAll | When set to true, OSS assembles all uploaded parts automatically, without requiring an explicit part list. |
More operations
Abort a multipart upload
Call client.abortMultipartUpload to cancel an in-progress upload and free the storage used by the uploaded parts.
import Client, { RequestError } from '@aliyun/oss';
const client = new Client({
  accessKeyId: 'yourAccessKeyId',
  accessKeySecret: 'yourAccessKeySecret',
  securityToken: 'yourSecurityToken',
  region: 'oss-cn-hangzhou',
});
const bucket = 'yourBucketName';
const key = 'yourObjectName';
const abortMultipartUpload = async () => {
  try {
    const res = await client.abortMultipartUpload({
      bucket,
      key,
      // The upload ID returned by initiateMultipartUpload, or retrieved via listMultipartUploads.
      uploadId: 'yourUploadId',
    });
    console.log(JSON.stringify(res));
  } catch (err) {
    if (err instanceof RequestError) {
      console.log('code: ', err.code);
      console.log('message: ', err.message);
      console.log('requestId: ', err.requestId);
      console.log('status: ', err.status);
      console.log('ec: ', err.ec);
    } else {
      console.log('unknown error: ', err);
    }
  }
};
abortMultipartUpload();

List uploaded parts
Call client.listParts to retrieve the parts already uploaded for a given upload ID. Use this to check upload progress or to resume an interrupted upload.
import Client, { RequestError } from '@aliyun/oss';
const client = new Client({
  accessKeyId: 'yourAccessKeyId',
  accessKeySecret: 'yourAccessKeySecret',
  securityToken: 'yourSecurityToken',
  region: 'oss-cn-hangzhou',
});
const bucket = 'yourBucketName';
const key = 'yourObjectName';
const listParts = async () => {
  try {
    const res = await client.listParts({
      bucket,
      key,
      // The upload ID returned by initiateMultipartUpload, or retrieved via listMultipartUploads.
      uploadId: 'yourUploadId',
    });
    console.log(JSON.stringify(res));
  } catch (err) {
    if (err instanceof RequestError) {
      console.log('code: ', err.code);
      console.log('message: ', err.message);
      console.log('requestId: ', err.requestId);
      console.log('status: ', err.status);
      console.log('ec: ', err.ec);
    } else {
      console.log('unknown error: ', err);
    }
  }
};
listParts();

List in-progress multipart uploads
Call client.listMultipartUploads to list all uploads in a bucket that have been initiated but not yet completed or aborted. This is useful for auditing incomplete uploads and identifying upload IDs to resume or abort.
import Client, { RequestError } from '@aliyun/oss';
const client = new Client({
  accessKeyId: 'yourAccessKeyId',
  accessKeySecret: 'yourAccessKeySecret',
  securityToken: 'yourSecurityToken',
  region: 'oss-cn-hangzhou',
});
const bucket = 'yourBucketName';
const listMultipartUploads = async () => {
  try {
    const res = await client.listMultipartUploads({ bucket });
    console.log(JSON.stringify(res));
  } catch (err) {
    if (err instanceof RequestError) {
      console.log('code: ', err.code);
      console.log('message: ', err.message);
      console.log('requestId: ', err.requestId);
      console.log('status: ', err.status);
      console.log('ec: ', err.ec);
    } else {
      console.log('unknown error: ', err);
    }
  }
};
listMultipartUploads();

Resume an interrupted upload
On HarmonyOS, the app may be suspended or the network interrupted mid-upload. Because the upload ID is returned at initiation and persists in OSS, you can resume without re-uploading completed parts:
1. Save the uploadId returned by initiateMultipartUpload to persistent storage (for example, a local file or database).
2. If the upload is interrupted, call client.listParts with the saved uploadId to determine which parts were already uploaded.
3. Upload only the missing parts using client.uploadPart.
4. Call client.completeMultipartUpload once all parts are present.
This pattern avoids wasting bandwidth and is especially important on mobile networks.
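The resume steps above can be sketched as a small helper that diffs the parts reported by listParts against the expected total. This is an illustrative sketch: findMissingParts and UploadedPart are made-up names, not SDK types, and the exact shape of the listParts response may differ.

```typescript
// Hypothetical type standing in for one entry of the listParts response.
interface UploadedPart {
  partNumber: number;
}

// Given the parts already stored by OSS and the total expected part count,
// return the part numbers that still need to be uploaded.
function findMissingParts(uploaded: UploadedPart[], totalParts: number): number[] {
  const done = new Set(uploaded.map((p) => p.partNumber));
  const missing: number[] = [];
  for (let n = 1; n <= totalParts; n++) {
    if (!done.has(n)) {
      missing.push(n);
    }
  }
  return missing;
}

// Example: parts 1, 2, and 4 of 5 finished before the interruption,
// so only parts 3 and 5 need to be re-uploaded.
const missing = findMissingParts(
  [{ partNumber: 1 }, { partNumber: 2 }, { partNumber: 4 }],
  5,
);
```

After computing the missing part numbers, upload each one with client.uploadPart using the saved uploadId and the same offset/length arithmetic as the main example, then call client.completeMultipartUpload.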