
Object Storage Service: Resumable upload using OSS SDK for Python 1.0

Last Updated: Mar 01, 2026

When you upload an object to Object Storage Service (OSS) by using resumable upload, you can specify a directory for the checkpoint file that stores resumable upload progress. If an object fails to be uploaded because of a network exception or program error, the upload task is resumed from the position recorded in the checkpoint file.

Prerequisites

Before you begin, make sure that you have:

  • Installed OSS SDK for Python (oss2).

  • Configured the OSS_ACCESS_KEY_ID and OSS_ACCESS_KEY_SECRET environment variables, which EnvironmentVariableCredentialsProvider reads.

  • Created a bucket in the target region.
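If you use EnvironmentVariableCredentialsProvider as in the examples below, the SDK reads your AccessKey pair from environment variables. A minimal setup for Linux or macOS, with placeholder values you must replace:

```shell
# Set the AccessKey pair read by EnvironmentVariableCredentialsProvider.
# Replace the placeholders with your own credentials.
export OSS_ACCESS_KEY_ID=<yourAccessKeyId>
export OSS_ACCESS_KEY_SECRET=<yourAccessKeySecret>
```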

Basic example

Upload a local file with automatic checkpoint management:

# -*- coding: utf-8 -*-
import oss2
from oss2.credentials import EnvironmentVariableCredentialsProvider

# Get access credentials from environment variables.
auth = oss2.ProviderAuthV4(EnvironmentVariableCredentialsProvider())

# Specify the endpoint for the region of your bucket.
# Example: China (Hangzhou)
endpoint = "https://oss-cn-hangzhou.aliyuncs.com"

# Specify the region. Required for V4 signatures.
region = "cn-hangzhou"

bucket = oss2.Bucket(auth, endpoint, "yourBucketName", region=region)

# Upload a local file to the specified object path.
# The object path (key) must not contain the bucket name.
oss2.resumable_upload(bucket, "exampledir/exampleobject.txt", "D:\\localpath\\examplefile.txt")

By default, the SDK saves checkpoint information to the .py-oss-upload directory in the HOME directory. If the upload is interrupted and restarted, it resumes from the last recorded position.
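Because progress is persisted in the checkpoint file, a simple retry loop is often enough to recover from transient failures: each retry resumes from the last recorded position instead of starting over. The sketch below uses a stand-in upload function so it runs without OSS; in practice you would call oss2.resumable_upload at the marked line. The helper names are illustrative, not part of the SDK.

```python
import time

def upload_with_retries(do_upload, max_retries=3, backoff_seconds=1.0):
    """Retry an upload; with resumable upload, each retry resumes from the checkpoint."""
    for attempt in range(1, max_retries + 1):
        try:
            # e.g. lambda: oss2.resumable_upload(bucket, key, local_path)
            do_upload()
            return attempt
        except Exception:
            if attempt == max_retries:
                raise
            time.sleep(backoff_seconds)

# Stand-in upload that fails twice before succeeding,
# simulating transient network errors.
attempts = {"n": 0}
def flaky_upload():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise ConnectionError("simulated network error")

print(upload_with_retries(flaky_upload, backoff_seconds=0))  # prints 3
```

With the real SDK call substituted in, only the parts not yet recorded in the checkpoint file are re-uploaded on each retry.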

Advanced configuration

OSS SDK for Python 2.1.0 and later versions support additional parameters that tune resumable upload behavior.

import sys
import oss2
from oss2.credentials import EnvironmentVariableCredentialsProvider

auth = oss2.ProviderAuthV4(EnvironmentVariableCredentialsProvider())
endpoint = "https://oss-cn-hangzhou.aliyuncs.com"
region = "cn-hangzhou"
bucket = oss2.Bucket(auth, endpoint, "yourBucketName", region=region)

# Define a progress callback.
def percentage(consumed_bytes, total_bytes):
    if total_bytes:
        rate = int(100 * (float(consumed_bytes) / float(total_bytes)))
        print('\r{0}% '.format(rate), end='')
        sys.stdout.flush()

oss2.resumable_upload(
    bucket,
    "exampledir/exampleobject.txt",
    "D:\\localpath\\examplefile.txt",
    # Directory that stores checkpoint information.
    store=oss2.ResumableStore(root="/tmp"),
    # Files at or above this size are uploaded in parts (100 KB here).
    multipart_threshold=100 * 1024,
    # Size of each part in bytes.
    part_size=100 * 1024,
    # Report upload progress.
    progress_callback=percentage,
    # Number of concurrent upload threads.
    num_threads=4
)

Parameters

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| store | ResumableStore | .py-oss-upload in HOME | Directory for checkpoint information. Pass oss2.ResumableStore(root='/tmp') to customize the path. |
| multipart_threshold | int | 10 MB | File size threshold that triggers multipart upload. The SDK uploads files smaller than this value in a single request. |
| part_size | int | 10 MB | Size of each upload part in bytes. Minimum: 100 KB. Maximum: 5 GB. Increase for fast networks; decrease for unreliable connections. |
| progress_callback | function | None | Callback invoked with (consumed_bytes, total_bytes). total_bytes is None if the SDK cannot determine the content length. |
| num_threads | int | 1 | Number of concurrent upload threads. Set oss2.defaults.connection_pool_size to a value greater than or equal to num_threads. |
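As a rough illustration of how multipart_threshold and part_size interact, the helper below (not part of the SDK) computes whether a file would be uploaded in a single request or in parts, and how many parts, using the 10 MB defaults from the table:

```python
import math

# Defaults noted in the parameter table above (10 MB each).
MULTIPART_THRESHOLD = 10 * 1024 * 1024
PART_SIZE = 10 * 1024 * 1024

def upload_plan(file_size, multipart_threshold=MULTIPART_THRESHOLD, part_size=PART_SIZE):
    """Return ('simple', 1) or ('multipart', part_count) for a given file size."""
    if file_size < multipart_threshold:
        return ("simple", 1)
    return ("multipart", math.ceil(file_size / part_size))

print(upload_plan(5 * 1024 * 1024))    # below threshold: ('simple', 1)
print(upload_plan(25 * 1024 * 1024))   # 25 MB / 10 MB parts: ('multipart', 3)
# Smaller parts mean more requests but finer-grained checkpoints.
print(upload_plan(25 * 1024 * 1024, part_size=100 * 1024))  # ('multipart', 256)
```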

Usage notes

  • Resumable upload is multi-threaded internally. Do not wrap calls to oss2.resumable_upload() in external threads, as this causes redundant data transmission.

  • When using num_threads, set oss2.defaults.connection_pool_size >= num_threads to avoid connection starvation.

  • Increase part_size on fast, stable networks to reduce the number of API calls. Decrease it on unstable networks for more granular checkpointing.

  • To access OSS from other Alibaba Cloud services in the same region, use an internal endpoint. For supported regions and endpoints, see Regions and endpoints.

  • To create a Bucket instance with custom domain names or Security Token Service (STS), see Initialization.
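The part_size trade-off in the notes above can be put in rough numbers: the checkpoint file records completed parts, so when an upload is interrupted, at most the parts currently in flight (up to one per thread) are lost and must be re-sent. The helper below is a back-of-the-envelope sketch, not an SDK API:

```python
def worst_case_retransmit_bytes(part_size, num_threads=1):
    """Upper bound on bytes re-sent after an interruption:
    at most one in-flight, uncommitted part per upload thread."""
    return part_size * num_threads

# 10 MB parts, 4 threads: up to 40 MB may be re-uploaded.
print(worst_case_retransmit_bytes(10 * 1024 * 1024, 4))  # 41943040

# 100 KB parts keep the worst case small on unstable networks.
print(worst_case_retransmit_bytes(100 * 1024, 4))  # 409600
```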

References

Complete sample code on GitHub