When uploading large files over unstable networks, a single failed request forces you to restart the entire upload. Resumable upload solves this by splitting a file into parts, uploading them concurrently, and saving progress to a checkpoint file after each part completes. If a part fails, the next attempt resumes from the saved position rather than the beginning. Objects larger than 5 GB cannot be uploaded in a single operation and must use resumable upload.
## When to use resumable upload
| Scenario | Why it helps |
|---|---|
| Files larger than 5 GB | A single-operation upload fails for objects exceeding 5 GB |
| Unreliable networks | Parts are retried individually, so a network drop does not restart the entire upload |
| High-bandwidth environments | Parallel part uploads maximize available bandwidth for faster throughput |
## How it works
1. The SDK splits the local file into parts and uploads them concurrently.
2. After each part completes, the SDK saves progress to a checkpoint file.
3. If a part fails, the next attempt resumes from the position recorded in the checkpoint file.
4. After all parts upload successfully, OSS combines them into a complete object, and the SDK deletes the checkpoint file.
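The steps above can be sketched in plain Python, with an in-memory list standing in for the OSS API. This is an illustrative simulation only, not SDK code: `upload_with_checkpoint`, `PART_SIZE`, and `fail_at_part` are made-up names, and a real SDK records richer state in the checkpoint (upload ID, part checksums, a source-file fingerprint).

```python
# Illustrative sketch of the checkpoint flow. Hypothetical names; not OSS SDK code.
import json
import os

PART_SIZE = 4  # bytes per part (tiny, for demonstration)

def upload_with_checkpoint(data: bytes, checkpoint_path: str, fail_at_part=None):
    """Upload `data` in parts, persisting progress after each part completes."""
    # Resume from the saved position if a checkpoint exists.
    done = 0
    if os.path.exists(checkpoint_path):
        with open(checkpoint_path) as f:
            done = json.load(f)["parts_done"]
    total_parts = (len(data) + PART_SIZE - 1) // PART_SIZE
    uploaded = []
    for part in range(done, total_parts):
        if part == fail_at_part:
            raise ConnectionError(f"simulated failure at part {part}")
        uploaded.append(data[part * PART_SIZE:(part + 1) * PART_SIZE])
        with open(checkpoint_path, "w") as f:
            json.dump({"parts_done": part + 1}, f)  # save progress
    os.remove(checkpoint_path)  # all parts done: delete the checkpoint
    return uploaded
```

A first call that fails mid-transfer leaves the checkpoint in place; calling the function again uploads only the remaining parts, which is the behavior the real uploaders below provide.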
## Prerequisites
Before you begin, ensure that you have:
- An OSS bucket. See Create a bucket.
- The `oss:PutObject` permission. See Attach a custom policy to a RAM user.
- Write access to the directory where the checkpoint file is stored.
## Usage notes
- The examples in this topic use the public endpoint for the China (Hangzhou) region. To access OSS from another Alibaba Cloud service in the same region, use the internal endpoint instead. For a full list of regions and endpoints, see OSS regions and endpoints.
- The checkpoint file contains a checksum and cannot be modified. If the checkpoint file is corrupted, all parts must be re-uploaded.
- If the local file changes during an upload, all parts must be re-uploaded.
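To illustrate why a changed source file forces a full re-upload: a checkpoint is only valid for the exact bytes it was created from, so an SDK can store a fingerprint of the file and discard the checkpoint when the fingerprint no longer matches. The sketch below is hypothetical (`file_fingerprint` and `checkpoint_is_valid` are made-up names, and the real checkpoint format is SDK-specific):

```python
# Hypothetical sketch of source-file change detection; not actual OSS SDK code.
import hashlib

def file_fingerprint(path: str) -> str:
    """Hash of the file contents, as it would be stored in the checkpoint."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def checkpoint_is_valid(path: str, saved_fingerprint: str) -> bool:
    # If the file changed since the checkpoint was written,
    # every part must be uploaded again from scratch.
    return file_fingerprint(path) == saved_fingerprint
```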
## Key parameters
These parameters control how the upload runs. The defaults are safe for most use cases. Tune part size and concurrency for large files or high-bandwidth connections.
| Parameter | Description | Default | Valid range |
|---|---|---|---|
| Part size | Size of each uploaded part | 100 KB | 100 KB–5 GB |
| Concurrency | Number of parts uploaded in parallel | 1 | Varies by SDK |
| Checkpoint file path | Where upload progress is saved | Same directory as the source file | Any writable path |
| Resume on failure | Whether to resume from the checkpoint file instead of restarting | Disabled | — |
Note: Set the checkpoint directory (`CheckpointDir` or the equivalent parameter in your SDK) to `null`, or omit the checkpoint path, to disable resumability. Without a checkpoint, an interrupted upload restarts from the beginning.
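One practical constraint when tuning the part size: an OSS multipart upload consists of at most 10,000 parts, so the part size also caps the maximum object size. A quick sanity check, as a sketch using the limits from the table above (`part_count` is an illustrative helper, not an SDK function):

```python
# Illustrative helper for choosing a part size; not an SDK function.
MIN_PART = 100 * 1024     # 100 KB, the minimum part size
MAX_PART = 5 * 1024**3    # 5 GB, the maximum part size
MAX_PARTS = 10_000        # OSS multipart uploads allow at most 10,000 parts

def part_count(file_size: int, part_size: int) -> int:
    """Number of parts a file of `file_size` bytes splits into."""
    if not MIN_PART <= part_size <= MAX_PART:
        raise ValueError("part size out of range")
    n = -(-file_size // part_size)  # ceiling division
    if n > MAX_PARTS:
        raise ValueError("too many parts: increase the part size")
    return n
```

For example, a 1 GiB file with an 8 MiB part size splits into 128 parts, while a 10 GiB file at the 100 KB default would exceed the 10,000-part limit and needs a larger part size.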
## Upload a file
All examples below load credentials from environment variables and enable resumable upload with a checkpoint file. Replace the placeholders with your actual values before running.
| Placeholder | Description | Example |
|---|---|---|
| `<region>` | Region where the bucket is located | `cn-hangzhou` |
| `<bucket-name>` | Bucket name | `examplebucket` |
| `<object-name>` | Full object path in OSS (excluding the bucket name) | `exampledir/exampleobject.txt` |
| `<local-file-path>` | Full path of the local file to upload | `D:\localpath\examplefile.txt` |
| `<checkpoint-path>` | Path where the checkpoint file is stored | `D:\local` |
**Python**

```python
import argparse
import alibabacloud_oss_v2 as oss

parser = argparse.ArgumentParser(description="Resumable upload example")
parser.add_argument('--region', required=True, help='Region where the bucket is located')
parser.add_argument('--bucket', required=True, help='Bucket name')
parser.add_argument('--endpoint', help='Custom endpoint (optional)')
parser.add_argument('--key', required=True, help='Object name in OSS')
parser.add_argument('--file_path', required=True, help='Local file path')

def main():
    args = parser.parse_args()
    # Load credentials from environment variables OSS_ACCESS_KEY_ID and OSS_ACCESS_KEY_SECRET.
    credentials_provider = oss.credentials.EnvironmentVariableCredentialsProvider()
    cfg = oss.config.load_default()
    cfg.credentials_provider = credentials_provider
    cfg.region = args.region
    if args.endpoint is not None:
        cfg.endpoint = args.endpoint
    client = oss.Client(cfg)
    # Enable resumable upload and specify the checkpoint directory.
    uploader = client.uploader(
        enable_checkpoint=True,
        checkpoint_dir="/Users/yourLocalPath/checkpoint/"
    )
    result = uploader.upload_file(
        oss.PutObjectRequest(bucket=args.bucket, key=args.key),
        filepath=args.file_path
    )
    print(f'status code: {result.status_code},'
          f' request id: {result.request_id},'
          f' etag: {result.etag},'
          f' hash crc64: {result.hash_crc64},'
          f' version id: {result.version_id}')

if __name__ == "__main__":
    main()
```

**Java**

```java
import com.aliyun.oss.ClientBuilderConfiguration;
import com.aliyun.oss.OSS;
import com.aliyun.oss.common.auth.*;
import com.aliyun.oss.OSSClientBuilder;
import com.aliyun.oss.OSSException;
import com.aliyun.oss.common.comm.SignVersion;
import com.aliyun.oss.model.*;

public class UploadFile {
    public static void main(String[] args) throws Exception {
        String endpoint = "https://oss-cn-hangzhou.aliyuncs.com";
        String region = "cn-hangzhou";
        // Load credentials from environment variables OSS_ACCESS_KEY_ID and OSS_ACCESS_KEY_SECRET.
        EnvironmentVariableCredentialsProvider credentialsProvider =
                CredentialsProviderFactory.newEnvironmentVariableCredentialsProvider();
        ClientBuilderConfiguration clientBuilderConfiguration = new ClientBuilderConfiguration();
        clientBuilderConfiguration.setSignatureVersion(SignVersion.V4);
        OSS ossClient = OSSClientBuilder.create()
                .endpoint(endpoint)
                .credentialsProvider(credentialsProvider)
                .clientConfiguration(clientBuilderConfiguration)
                .region(region)
                .build();
        try {
            ObjectMetadata meta = new ObjectMetadata();
            // Specify the content type of the object to upload.
            // meta.setContentType("text/plain");
            // Specify the ACL of the object to upload.
            // meta.setObjectAcl(CannedAccessControlList.Private);
            UploadFileRequest uploadFileRequest =
                    new UploadFileRequest("examplebucket", "exampledir/exampleobject.txt");
            // Local file to upload.
            uploadFileRequest.setUploadFile("D:\\localpath\\examplefile.txt");
            // Number of concurrent upload threads. Default: 1.
            uploadFileRequest.setTaskNum(5);
            // Part size in bytes. Range: 100 KB to 5 GB. Default: 100 KB.
            uploadFileRequest.setPartSize(1 * 1024 * 1024);
            // Enable resumable upload. Default: false.
            uploadFileRequest.setEnableCheckpoint(true);
            // Checkpoint file path. Defaults to ${uploadFile}.ucp in the same directory as the source file.
            // The checkpoint file is deleted automatically after the upload completes.
            uploadFileRequest.setCheckpointFile("D:\\local\\checkpoint.ucp");
            // Configure object metadata.
            uploadFileRequest.setObjectMetadata(meta);
            // Configure an upload callback. The parameter type is Callback.
            // uploadFileRequest.setCallback("yourCallbackEvent");
            ossClient.uploadFile(uploadFileRequest);
        } catch (OSSException oe) {
            System.out.println("OSS error: " + oe.getErrorMessage()
                    + " | Code: " + oe.getErrorCode()
                    + " | Request ID: " + oe.getRequestId());
        } catch (Throwable ce) {
            System.out.println("Client error: " + ce.getMessage());
        } finally {
            if (ossClient != null) {
                ossClient.shutdown();
            }
        }
    }
}
```

**Go**

```go
package main

import (
	"context"
	"flag"
	"log"

	"github.com/aliyun/alibabacloud-oss-go-sdk-v2/oss"
	"github.com/aliyun/alibabacloud-oss-go-sdk-v2/oss/credentials"
)

var (
	region     string
	bucketName string
	objectName string
)

func init() {
	flag.StringVar(&region, "region", "", "Region where the bucket is located.")
	flag.StringVar(&bucketName, "bucket", "", "Bucket name.")
	flag.StringVar(&objectName, "object", "", "Object name in OSS.")
}

func main() {
	flag.Parse()
	if len(bucketName) == 0 {
		log.Fatalf("--bucket is required")
	}
	if len(region) == 0 {
		log.Fatalf("--region is required")
	}
	if len(objectName) == 0 {
		log.Fatalf("--object is required")
	}
	// Load credentials from environment variables OSS_ACCESS_KEY_ID and OSS_ACCESS_KEY_SECRET.
	cfg := oss.LoadDefaultConfig().
		WithCredentialsProvider(credentials.NewEnvironmentVariableCredentialsProvider()).
		WithRegion(region)
	client := oss.NewClient(cfg)
	u := client.NewUploader()
	localFile := "/Users/yourLocalPath/yourFileName"
	result, err := u.UploadFile(context.TODO(),
		&oss.PutObjectRequest{
			Bucket: oss.Ptr(bucketName),
			Key:    oss.Ptr(objectName),
		},
		localFile)
	if err != nil {
		log.Fatalf("Upload failed: %v", err)
	}
	log.Printf("Upload result: %#v\n", result)
}
```

**PHP**

```php
<?php
require_once __DIR__ . '/../vendor/autoload.php';

use AlibabaCloud\Oss\V2 as Oss;

$optsdesc = [
    "region"   => ['help' => 'Region where the bucket is located.', 'required' => true],
    "endpoint" => ['help' => 'Custom endpoint (optional).', 'required' => false],
    "bucket"   => ['help' => 'Bucket name.', 'required' => true],
    "key"      => ['help' => 'Object name in OSS.', 'required' => true],
];
$longopts = array_map(fn($key) => "$key:", array_keys($optsdesc));
$options = getopt("", $longopts);
foreach ($optsdesc as $key => $value) {
    if ($value['required'] && empty($options[$key])) {
        echo "Error: --$key is required. " . $value['help'] . PHP_EOL;
        exit(1);
    }
}
$region = $options["region"];
$bucket = $options["bucket"];
$key = $options["key"];

// Load credentials from environment variables OSS_ACCESS_KEY_ID and OSS_ACCESS_KEY_SECRET.
$credentialsProvider = new Oss\Credentials\EnvironmentVariableCredentialsProvider();
$cfg = Oss\Config::loadDefault();
$cfg->setCredentialsProvider($credentialsProvider);
$cfg->setRegion($region);
if (isset($options["endpoint"])) {
    $cfg->setEndpoint($options["endpoint"]);
}
$client = new Oss\Client($cfg);

$filename = "/Users/yourLocalPath/yourFileName";
$uploader = $client->newUploader();
$result = $uploader->uploadFile(
    request: new Oss\Models\PutObjectRequest(bucket: $bucket, key: $key),
    filepath: $filename,
);
printf(
    'status code: %s' . PHP_EOL .
    'request id: %s' . PHP_EOL,
    $result->statusCode,
    $result->requestId
);
```

**Node.js**

```javascript
const OSS = require('ali-oss');

const client = new OSS({
  region: 'cn-hangzhou',
  // Load credentials from environment variables OSS_ACCESS_KEY_ID and OSS_ACCESS_KEY_SECRET.
  accessKeyId: process.env.OSS_ACCESS_KEY_ID,
  accessKeySecret: process.env.OSS_ACCESS_KEY_SECRET,
  authorizationV4: true,
  bucket: 'examplebucket',
});

const filePath = 'D:\\localpath\\examplefile.txt';
let checkpoint;

async function resumeUpload() {
  // Retry up to 5 times. Each retry resumes from the saved checkpoint.
  for (let i = 0; i < 5; i++) {
    try {
      const result = await client.multipartUpload('exampledir/exampleobject.txt', filePath, {
        checkpoint,
        async progress(percentage, cpt) {
          checkpoint = cpt; // Save progress for the next retry.
        },
      });
      console.log(result);
      break;
    } catch (e) {
      console.log(e);
    }
  }
}

resumeUpload();
```

**C#**

```csharp
using Aliyun.OSS;
using Aliyun.OSS.Common;

var endpoint = "https://oss-cn-hangzhou.aliyuncs.com";
// Load credentials from environment variables OSS_ACCESS_KEY_ID and OSS_ACCESS_KEY_SECRET.
var accessKeyId = Environment.GetEnvironmentVariable("OSS_ACCESS_KEY_ID");
var accessKeySecret = Environment.GetEnvironmentVariable("OSS_ACCESS_KEY_SECRET");
var bucketName = "examplebucket";
var objectName = "exampledir/exampleobject.txt";
var localFilename = "D:\\localpath\\examplefile.txt";
// Set checkpointDir to null to disable resumability (the upload restarts from the beginning on failure).
string checkpointDir = "D:\\local";
const string region = "cn-hangzhou";

var conf = new ClientConfiguration();
conf.SignatureVersion = SignatureVersion.V4;
var client = new OssClient(endpoint, accessKeyId, accessKeySecret, conf);
client.SetRegion(region);
try
{
    UploadObjectRequest request = new UploadObjectRequest(bucketName, objectName, localFilename)
    {
        // Part size in bytes.
        PartSize = 8 * 1024 * 1024,
        // Number of concurrent upload threads.
        ParallelThreadCount = 3,
        // Checkpoint directory. Set to null to disable resumability.
        CheckpointDir = checkpointDir,
    };
    client.ResumableUploadObject(request);
    Console.WriteLine("Upload succeeded: {0}", objectName);
}
catch (OssException ex)
{
    Console.WriteLine("OSS error {0}: {1} | Request ID: {2}",
        ex.ErrorCode, ex.Message, ex.RequestId);
}
catch (Exception ex)
{
    Console.WriteLine("Error: {0}", ex.Message);
}
```

**Objective-C**

```objectivec
OSSResumableUploadRequest *resumableUpload = [OSSResumableUploadRequest new];
resumableUpload.bucketName = <bucketName>;
// objectKey is the full path of the object in OSS, including the file extension.
// Example: abc/efg/123.jpg
resumableUpload.objectKey = <objectKey>;
resumableUpload.partSize = 1024 * 1024;
resumableUpload.uploadProgress = ^(int64_t bytesSent, int64_t totalByteSent, int64_t totalBytesExpectedToSend) {
    NSLog(@"%lld, %lld, %lld", bytesSent, totalByteSent, totalBytesExpectedToSend);
};
NSString *cachesDir = [NSSearchPathForDirectoriesInDomains(NSCachesDirectory, NSUserDomainMask, YES) firstObject];
resumableUpload.recordDirectoryPath = cachesDir;
// Set to NO to preserve the checkpoint file on failure so the next upload can resume.
// Default is YES, which deletes the checkpoint file on failure and restarts the upload.
resumableUpload.deleteUploadIdOnCancelling = NO;
resumableUpload.uploadingFileURL = [NSURL fileURLWithPath:<your file path>];
OSSTask *resumeTask = [client resumableUpload:resumableUpload];
[resumeTask continueWithBlock:^id(OSSTask *task) {
    if (task.error) {
        NSLog(@"error: %@", task.error);
        if ([task.error.domain isEqualToString:OSSClientErrorDomain] &&
            task.error.code == OSSClientErrorCodeCannotResumeUpload) {
            // The upload cannot be resumed. Obtain a new upload ID to start over.
        }
    } else {
        NSLog(@"Upload succeeded");
    }
    return nil;
}];
```

**C++**

```cpp
#include <alibabacloud/oss/OssClient.h>
#include <iostream>
using namespace AlibabaCloud::OSS;

int main(void)
{
    std::string Endpoint = "https://oss-cn-hangzhou.aliyuncs.com";
    std::string Region = "cn-hangzhou";
    std::string BucketName = "examplebucket";
    std::string ObjectName = "exampledir/exampleobject.txt";
    std::string UploadFilePath = "D:\\localpath\\examplefile.txt";
    // Checkpoint directory. The directory must already exist.
    // If omitted, resumable upload is disabled and progress is not saved.
    std::string CheckpointFilePath = "D:\\local";

    InitializeSdk();
    ClientConfiguration conf;
    conf.signatureVersion = SignatureVersionType::V4;
    // Load credentials from environment variables OSS_ACCESS_KEY_ID and OSS_ACCESS_KEY_SECRET.
    auto credentialsProvider = std::make_shared<EnvironmentVariableCredentialsProvider>();
    OssClient client(Endpoint, credentialsProvider, conf);
    client.SetRegion(Region);

    UploadObjectRequest request(BucketName, ObjectName, UploadFilePath, CheckpointFilePath);
    auto outcome = client.ResumableUploadObject(request);
    if (!outcome.isSuccess()) {
        std::cout << "Upload failed"
                  << " | Code: " << outcome.error().Code()
                  << " | Message: " << outcome.error().Message()
                  << " | Request ID: " << outcome.error().RequestId() << std::endl;
        ShutdownSdk();
        return -1;
    }
    ShutdownSdk();
    return 0;
}
```

**C**

```c
#include <stdio.h>
#include <stdlib.h>
#include "oss_api.h"
#include "aos_http_io.h"

const char *endpoint = "https://oss-cn-hangzhou.aliyuncs.com";
const char *bucket_name = "examplebucket";
const char *object_name = "exampledir/exampleobject.txt";
const char *local_filename = "D:\\localpath\\examplefile.txt";
const char *region = "cn-hangzhou";

void init_options(oss_request_options_t *options)
{
    options->config = oss_config_create(options->pool);
    aos_str_set(&options->config->endpoint, endpoint);
    // Load credentials from environment variables OSS_ACCESS_KEY_ID and OSS_ACCESS_KEY_SECRET.
    aos_str_set(&options->config->access_key_id, getenv("OSS_ACCESS_KEY_ID"));
    aos_str_set(&options->config->access_key_secret, getenv("OSS_ACCESS_KEY_SECRET"));
    aos_str_set(&options->config->region, region);
    options->config->signature_version = 4;
    options->config->is_cname = 0;
    options->ctl = aos_http_controller_create(options->pool, 0);
}

int main(int argc, char *argv[])
{
    if (aos_http_io_initialize(NULL, 0) != AOSE_OK) {
        exit(1);
    }
    aos_pool_t *pool;
    aos_pool_create(&pool, NULL);
    oss_request_options_t *oss_client_options = oss_request_options_create(pool);
    init_options(oss_client_options);

    aos_string_t bucket, object, file;
    aos_list_t resp_body;
    aos_table_t *headers = NULL;
    aos_table_t *resp_headers = NULL;
    aos_status_t *resp_status = NULL;
    oss_resumable_clt_params_t *clt_params;
    aos_str_set(&bucket, bucket_name);
    aos_str_set(&object, object_name);
    aos_str_set(&file, local_filename);
    aos_list_init(&resp_body);

    // Part size: 100 KB. Thread count: 3. Resume enabled: AOS_TRUE.
    clt_params = oss_create_resumable_clt_params_content(pool, 1024 * 100, 3, AOS_TRUE, NULL);
    resp_status = oss_resumable_upload_file(oss_client_options, &bucket, &object,
        &file, headers, NULL, clt_params, NULL, &resp_headers, &resp_body);
    if (aos_status_is_ok(resp_status)) {
        printf("Upload succeeded\n");
    } else {
        printf("Upload failed\n");
    }
    aos_pool_destroy(pool);
    aos_http_io_deinitialize();
    return 0;
}
```

## What's next
For other upload methods, see the OSS SDK overview.
To manage or cancel incomplete multipart uploads, see the multipart upload management documentation.