The Uploader module in OSS SDK for Python V2 provides a single interface for uploading local files and data streams to OSS. It always uses multipart upload under the hood — splitting the source into parts and uploading them concurrently — and optionally saves progress to disk so interrupted uploads can resume without starting over.
How it works
All uploads go through multipart upload regardless of file size. The Uploader:
Splits the file or stream into parts (default: 6 MiB each).
Uploads up to 3 parts concurrently (configurable).
On success, combines the parts into a single object in OSS.
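The split step above can be sketched as a rough part-count calculation. This is a hypothetical helper for illustration, not SDK code; the real Uploader also enforces OSS part-size and part-count limits:

```python
import math

def plan_parts(total_size: int, part_size: int = 6 * 1024 * 1024) -> int:
    """Return how many parts a source of total_size bytes splits into."""
    if total_size == 0:
        return 1  # an empty source still uploads as a single (empty) part
    return math.ceil(total_size / part_size)

# A 100 MiB file with the default 6 MiB part size splits into 17 parts:
# 16 full 6 MiB parts plus one final 4 MiB part.
print(plan_parts(100 * 1024 * 1024))  # → 17
```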
For upload_file, enable checkpointing to have the Uploader write completed-part status to a local directory. If the upload is interrupted — by a network error or an unexpected process exit — the Uploader reads that checkpoint file and skips already-uploaded parts on the next run.
Checkpointing (enable_checkpoint) is only available for upload_file. The upload_from method does not support this parameter.
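A minimal resumable-upload sketch based on the options above. The bucket, key, and paths are placeholder assumptions, and it assumes the checkpoint directory already exists; rerunning the same call after an interruption picks up from the recorded checkpoint:

```python
import alibabacloud_oss_v2 as oss

cfg = oss.config.load_default()
cfg.credentials_provider = oss.credentials.EnvironmentVariableCredentialsProvider()
cfg.region = "<region-id>"

client = oss.Client(cfg)

# With checkpointing enabled, completed-part status is written to
# checkpoint_dir, so a rerun skips parts that already uploaded.
uploader = client.uploader(
    enable_checkpoint=True,
    checkpoint_dir="/path/to/checkpoint/dir",
)
result = uploader.upload_file(
    oss.PutObjectRequest(bucket="<bucket-name>", key="<object-key>"),
    filepath="/path/to/local/file",
)
```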
Prerequisites
Before you begin, make sure that you have:
An OSS bucket in the target region
The oss:PutObject permission. For details, see Grant custom permissions to a RAM user.
OSS SDK for Python V2 installed (alibabacloud_oss_v2)
Quick start
Upload a local file to a bucket:
```python
import alibabacloud_oss_v2 as oss

cfg = oss.config.load_default()
cfg.credentials_provider = oss.credentials.EnvironmentVariableCredentialsProvider()
cfg.region = "<region-id>"  # e.g., cn-hangzhou

client = oss.Client(cfg)
uploader = client.uploader()

result = uploader.upload_file(
    oss.PutObjectRequest(
        bucket="<bucket-name>",
        key="<object-key>",
    ),
    filepath="/path/to/local/file",
)
print(f"Upload complete. ETag: {result.etag}, CRC-64: {result.hash_crc64}")
```

Replace the placeholders before running:
| Placeholder | Description | Example |
|---|---|---|
| <region-id> | Region where the bucket is located | cn-hangzhou |
| <bucket-name> | Name of the destination bucket | my-bucket |
| <object-key> | Key of the object in OSS | videos/intro.mp4 |
The SDK reads credentials from the OSS_ACCESS_KEY_ID and OSS_ACCESS_KEY_SECRET environment variables.
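Before running the samples, you can confirm those variables are actually set. This is a plain standard-library check, not part of the SDK:

```python
import os

def check_oss_env() -> list[str]:
    """Return the names of required OSS credential variables that are unset."""
    required = ("OSS_ACCESS_KEY_ID", "OSS_ACCESS_KEY_SECRET")
    return [name for name in required if not os.environ.get(name)]

missing = check_oss_env()
if missing:
    print(f"Missing environment variables: {', '.join(missing)}")
```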
The sample code uses the public endpoint for the China (Hangzhou) region. To access OSS from other Alibaba Cloud services in the same region, use an internal endpoint. See OSS regions and endpoints.
API reference
Methods
```python
class Uploader: ...

# Initialize an upload manager
def uploader(self, **kwargs) -> Uploader: ...

# Upload a local file
def upload_file(
    self,
    request: models.PutObjectRequest,
    filepath: str,
    **kwargs: Any,
) -> UploadResult: ...

# Upload from a data stream
def upload_from(
    self,
    request: models.PutObjectRequest,
    reader: IO[bytes],
    **kwargs: Any,
) -> UploadResult: ...
```

Parameters
| Parameter | Type | Description |
|---|---|---|
| request | PutObjectRequest | Upload request parameters (same as the PutObject operation). See PutObjectRequest. |
| filepath | str | Path of the local file to upload. Used by upload_file. |
| reader | IO[bytes] | Data stream to upload. Used by upload_from. |
| **kwargs | Any | Optional configuration overrides (see configuration options below). |
Return value
| Type | Description |
|---|---|
| UploadResult | Upload result, including status code, ETag, CRC-64, and version ID. See UploadResult. |
Configuration options
Set these options when initializing the uploader (client.uploader(...)) to apply them to all uploads, or pass them per call to override for a specific upload.
| Option | Type | Default | Description |
|---|---|---|---|
| part_size | int | 6 MiB | Size of each upload part in bytes. |
| parallel_num | int | 3 | Number of parts to upload concurrently. Applies per upload call, not globally. |
| leave_parts_on_error | bool | False | Whether to keep uploaded parts in OSS if the upload fails. Set to True to inspect parts after a failure. |
| enable_checkpoint | bool | False | Whether to save upload progress to disk for resumable upload. Only valid for upload_file. |
| checkpoint_dir | str | — | Directory where checkpoint files are saved (e.g., /local/dir/). Valid only when enable_checkpoint=True. |
For the complete API, see Uploader.
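When choosing part_size for very large objects, keep in mind that an OSS multipart upload is limited to 10,000 parts, so the part size caps the maximum uploadable object size. A rough helper for picking a workable part size (illustrative only, not SDK code):

```python
MAX_PARTS = 10_000  # OSS multipart upload part-number limit

def min_part_size(total_size: int, max_parts: int = MAX_PARTS) -> int:
    """Smallest part size (bytes) that fits total_size into max_parts parts."""
    return -(-total_size // max_parts)  # ceiling division

# A 1 TiB object needs parts of roughly 105 MiB or larger,
# so the default 6 MiB part size would exceed the part limit.
print(min_part_size(1024**4))  # → 109951163
```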
Upload a data stream
Use upload_from to upload data from an IO[bytes] object:
```python
import alibabacloud_oss_v2 as oss

cfg = oss.config.load_default()
cfg.credentials_provider = oss.credentials.EnvironmentVariableCredentialsProvider()
cfg.region = "<region-id>"

client = oss.Client(cfg)
uploader = client.uploader()

with open("/path/to/local/file", "rb") as f:
    result = uploader.upload_from(
        oss.PutObjectRequest(
            bucket="<bucket-name>",
            key="<object-key>",
        ),
        reader=f,
    )
print(f"Upload complete. ETag: {result.etag}")
```

upload_from does not support resumable upload. If the upload is interrupted, it must restart from the beginning.

Set part size and concurrency
Tune upload throughput by adjusting the part size and the number of concurrent uploads. A larger part size reduces API call overhead; higher concurrency makes better use of available bandwidth.
```python
import alibabacloud_oss_v2 as oss

cfg = oss.config.load_default()
cfg.credentials_provider = oss.credentials.EnvironmentVariableCredentialsProvider()
cfg.region = "<region-id>"

client = oss.Client(cfg)
uploader = client.uploader(
    part_size=100 * 1024,        # 100 KiB per part
    parallel_num=5,              # 5 concurrent part uploads
    leave_parts_on_error=True,   # keep parts on failure for inspection
)

result = uploader.upload_file(
    oss.PutObjectRequest(
        bucket="<bucket-name>",
        key="<object-key>",
    ),
    filepath="/path/to/large/file",
)
print(f"Upload complete. ETag: {result.etag}")
```

To override options for a single upload without changing the uploader defaults, pass them directly to the upload call:
```python
result = uploader.upload_file(
    oss.PutObjectRequest(bucket="<bucket-name>", key="<object-key>"),
    filepath="/path/to/file",
    part_size=50 * 1024 * 1024,  # override for this call only
)
```

What's next
Developer Guide — Uploader — full Uploader documentation
upload_file.py — complete file upload example
upload_from.py — complete stream upload example