
Object Storage Service: File upload manager (Python SDK V2)

Last Updated: Mar 20, 2026

The Uploader module in OSS SDK for Python V2 provides a single interface for uploading local files and data streams to OSS. It always uses multipart upload under the hood — splitting the source into parts and uploading them concurrently — and optionally saves progress to disk so interrupted uploads can resume without starting over.

How it works

All uploads go through multipart upload regardless of file size. The Uploader:

  1. Splits the file or stream into parts (default: 6 MiB each).

  2. Uploads up to 3 parts concurrently (configurable).

  3. On success, combines the parts into a single object in OSS.
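Step 1 can be sketched in plain Python (illustrative only; the real Uploader handles this internally):

```python
import math

def split_into_parts(total_size: int, part_size: int = 6 * 1024 * 1024):
    """Yield (offset, length) pairs covering the source, as in step 1."""
    for i in range(math.ceil(total_size / part_size)):
        offset = i * part_size
        yield offset, min(part_size, total_size - offset)

# A 15 MiB source splits into three parts: 6 MiB, 6 MiB, and 3 MiB.
parts = list(split_into_parts(15 * 1024 * 1024))
```

The last part is simply whatever remains, which is why it can be smaller than part_size.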

For upload_file, enable checkpointing to have the Uploader write completed-part status to a local directory. If the upload is interrupted — by a network error or an unexpected process exit — the Uploader reads that checkpoint file and skips already-uploaded parts on the next run.

Checkpointing (enable_checkpoint) is only available for upload_file. The upload_from method does not support this parameter.

Prerequisites

Before you begin, make sure that you have:

  - Installed OSS SDK for Python V2.
  - Set the OSS_ACCESS_KEY_ID and OSS_ACCESS_KEY_SECRET environment variables with credentials that have upload permissions.
  - Created the destination bucket.

Quick start

Upload a local file to a bucket:

import alibabacloud_oss_v2 as oss

cfg = oss.config.load_default()
cfg.credentials_provider = oss.credentials.EnvironmentVariableCredentialsProvider()
cfg.region = "<region-id>"  # e.g., cn-hangzhou

client = oss.Client(cfg)
uploader = client.uploader()

result = uploader.upload_file(
    oss.PutObjectRequest(
        bucket="<bucket-name>",
        key="<object-key>",
    ),
    filepath="/path/to/local/file",
)

print(f"Upload complete. ETag: {result.etag}, CRC-64: {result.hash_crc64}")

Replace the placeholders before running:

Placeholder     | Description                        | Example
<region-id>     | Region where the bucket is located | cn-hangzhou
<bucket-name>   | Name of the destination bucket     | my-bucket
<object-key>    | Key of the object in OSS           | videos/intro.mp4

The SDK reads credentials from the OSS_ACCESS_KEY_ID and OSS_ACCESS_KEY_SECRET environment variables.
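A quick preflight check can fail fast with a clear message if the variables are missing. check_oss_credentials below is a hypothetical helper for illustration, not part of the SDK:

```python
import os

def check_oss_credentials():
    """Raise early if the credential environment variables are not set."""
    required = ("OSS_ACCESS_KEY_ID", "OSS_ACCESS_KEY_SECRET")
    missing = [name for name in required if not os.environ.get(name)]
    if missing:
        raise RuntimeError(f"Missing environment variables: {', '.join(missing)}")
```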

By default, the SDK constructs the public endpoint for the region you configure. To access OSS from other Alibaba Cloud services in the same region, use an internal endpoint instead. See OSS regions and endpoints.

API reference

Methods

class Uploader: ...

# Initialize an upload manager
def uploader(self, **kwargs) -> Uploader: ...

# Upload a local file
def upload_file(
    self,
    request: models.PutObjectRequest,
    filepath: str,
    **kwargs: Any,
) -> UploadResult: ...

# Upload from a data stream
def upload_from(
    self,
    request: models.PutObjectRequest,
    reader: IO[bytes],
    **kwargs: Any,
) -> UploadResult: ...

Parameters

Parameter | Type             | Description
request   | PutObjectRequest | Upload request parameters (same as the PutObject operation). See PutObjectRequest.
filepath  | str              | Path of the local file to upload. Used by upload_file.
reader    | IO[bytes]        | Data stream to upload. Used by upload_from.
**kwargs  | Any              | Optional configuration overrides (see configuration options below).

Return value

Type         | Description
UploadResult | Upload result, including status code, ETag, CRC-64, and version ID. See UploadResult.

Configuration options

Set these options when initializing the uploader (client.uploader(...)) to apply them to all uploads, or pass them per call to override for a specific upload.

Option               | Type | Default | Description
part_size            | int  | 6 MiB   | Size of each upload part in bytes.
parallel_num         | int  | 3       | Number of parts to upload concurrently. Applies per upload call, not globally.
leave_parts_on_error | bool | False   | Whether to keep uploaded parts in OSS if the upload fails. Set to True to inspect parts after a failure.
enable_checkpoint    | bool | False   | Whether to save upload progress to disk for resumable upload. Only valid for upload_file.
checkpoint_dir       | str  | (none)  | Directory where checkpoint files are saved (e.g., /local/dir/). Valid only when enable_checkpoint=True.

For the complete API, see Uploader.

Enable resumable upload

When uploading large files over unreliable connections, enable checkpointing so the upload can resume from the last recorded breakpoint after an interruption:

import alibabacloud_oss_v2 as oss

cfg = oss.config.load_default()
cfg.credentials_provider = oss.credentials.EnvironmentVariableCredentialsProvider()
cfg.region = "<region-id>"

client = oss.Client(cfg)

uploader = client.uploader(
    enable_checkpoint=True,
    checkpoint_dir="/Users/yourLocalPath/checkpoint/",
)

result = uploader.upload_file(
    oss.PutObjectRequest(
        bucket="<bucket-name>",
        key="<object-key>",
    ),
    filepath="/path/to/large/file",
)

print(f"Upload complete. ETag: {result.etag}")

If the upload is interrupted, run the same code again with the same bucket, key, filepath, and checkpoint_dir. The Uploader reads the checkpoint file and resumes from where it left off.
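Because a rerun resumes from the checkpoint, a simple retry loop automates this pattern. The helper below is an illustrative sketch, not an SDK API; in real code, catch the SDK's specific exception type rather than bare Exception:

```python
import time

def upload_with_retry(do_upload, attempts=3, delay=2.0):
    """Call do_upload() up to `attempts` times with a growing pause.

    With enable_checkpoint=True, each retry resumes from the last
    recorded part instead of starting over.
    """
    last_err = None
    for i in range(attempts):
        try:
            return do_upload()
        except Exception as err:  # replace with the SDK's exception type
            last_err = err
            time.sleep(delay * (i + 1))
    raise last_err

# Usage sketch:
#   upload_with_retry(lambda: uploader.upload_file(request, filepath=path))
```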

Upload a data stream

Use upload_from to upload data from an IO[bytes] object:

import alibabacloud_oss_v2 as oss

cfg = oss.config.load_default()
cfg.credentials_provider = oss.credentials.EnvironmentVariableCredentialsProvider()
cfg.region = "<region-id>"

client = oss.Client(cfg)
uploader = client.uploader()

with open("/path/to/local/file", "rb") as f:
    result = uploader.upload_from(
        oss.PutObjectRequest(
            bucket="<bucket-name>",
            key="<object-key>",
        ),
        reader=f,
    )

print(f"Upload complete. ETag: {result.etag}")

upload_from does not support resumable upload. If the upload is interrupted, it must restart from the beginning.
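Any binary readable works as the reader, not only a file object. The sketch below (pure Python, no SDK calls) mimics how a stream is consumed in fixed-size parts:

```python
import io

stream = io.BytesIO(b"x" * (10 * 1024))  # 10 KiB in-memory payload
part_size = 4 * 1024

# Read the stream in part_size chunks until it is exhausted
chunks = []
while True:
    chunk = stream.read(part_size)
    if not chunk:
        break
    chunks.append(chunk)

# 10 KiB read in 4 KiB parts -> lengths 4096, 4096, 2048
```

An io.BytesIO wrapping in-memory bytes can therefore be passed as the reader just like an open file.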

Set part size and concurrency

Tune upload throughput by adjusting the part size and the number of concurrent uploads. A larger part size reduces API call overhead; higher concurrency makes better use of available bandwidth.
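To see how part size drives API call count, this quick calculation (illustrative) counts the part uploads needed for a given file size:

```python
def part_count(total_size: int, part_size: int) -> int:
    """Number of upload-part calls needed (ceiling division)."""
    return -(-total_size // part_size)

one_gib = 1024 ** 3
# 6 MiB parts -> 171 part uploads; 64 MiB parts -> 16
small_parts = part_count(one_gib, 6 * 1024 * 1024)
large_parts = part_count(one_gib, 64 * 1024 * 1024)
```

Note that OSS caps a multipart upload at 10,000 parts, so very large files need a proportionally larger part_size.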

import alibabacloud_oss_v2 as oss

cfg = oss.config.load_default()
cfg.credentials_provider = oss.credentials.EnvironmentVariableCredentialsProvider()
cfg.region = "<region-id>"

client = oss.Client(cfg)

uploader = client.uploader(
    part_size=100 * 1024,      # 100 KiB per part (the OSS minimum part size)
    parallel_num=5,            # 5 concurrent uploads
    leave_parts_on_error=True, # keep parts on failure for inspection
)

result = uploader.upload_file(
    oss.PutObjectRequest(
        bucket="<bucket-name>",
        key="<object-key>",
    ),
    filepath="/path/to/large/file",
)

print(f"Upload complete. ETag: {result.etag}")

To override options for a single upload without changing the uploader defaults, pass them directly to the upload call:

result = uploader.upload_file(
    oss.PutObjectRequest(bucket="<bucket-name>", key="<object-key>"),
    filepath="/path/to/file",
    part_size=50 * 1024 * 1024,  # override for this call only
)

Configure an upload callback

To notify an application server when an upload completes, pass a Base64-encoded callback configuration in the PutObjectRequest. OSS sends an HTTP POST request to the callback URL after the upload succeeds.

import base64
import alibabacloud_oss_v2 as oss

cfg = oss.config.load_default()
cfg.credentials_provider = oss.credentials.EnvironmentVariableCredentialsProvider()
cfg.region = "<region-id>"

client = oss.Client(cfg)
uploader = client.uploader()

callback_url = "http://www.example.com/callback"

# Build the callback parameter and encode it in Base64
callback = base64.b64encode(
    (
        '{"callbackUrl":"' + callback_url + '",'
        '"callbackBody":"bucket=${bucket}&object=${object}'
        '&my_var_1=${x:var1}&my_var_2=${x:var2}"}'
    ).encode()
).decode()

# Build custom variables and encode them in Base64
callback_var = base64.b64encode(
    '{"x:var1":"value1","x:var2":"value2"}'.encode()
).decode()

result = uploader.upload_file(
    oss.PutObjectRequest(
        bucket="<bucket-name>",
        key="<object-key>",
        callback=callback,
        callback_var=callback_var,
    ),
    filepath="/path/to/local/file",
)

print(f"Upload complete. ETag: {result.etag}, status: {result.status_code}")

For details on the upload callback feature, see Upload callback.

What's next