Resumable upload splits a large file into multiple parts, uploads them in sequence, and merges them into a complete object in Object Storage Service (OSS). A checkpoint file (.cpt) tracks progress so that if the upload fails due to a network error or program error, the next attempt resumes from the last successful part rather than restarting from scratch.
When to use resumable upload
Use resumable upload when uploading over an unstable network connection. For small files on a reliable connection, a simple put_object call is sufficient.
Resumable upload is useful in these situations:
Recovering from failures: A failed upload resumes from the last completed part, avoiding a full restart.
Tracking progress: A block callback reports progress as each part is uploaded.
Large files: Splitting a file into parts reduces the impact of any single network interruption.
How it works
1. Bucket#resumable_upload splits the local file into parts based on the configured part size (default: 4 MB).
2. Each part is uploaded separately to OSS.
3. Progress is recorded in a .cpt checkpoint file after each part completes.
4. After all parts upload successfully, OSS merges them into the final object and deletes the .cpt file.
If the upload fails at any point, call Bucket#resumable_upload again with the same .cpt file path. The upload resumes from the last completed part.
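Because the .cpt file persists across process runs, a plain retry loop around the upload call is enough to get automatic resumption: each retry picks up from the last completed part. The sketch below is illustrative; the `with_retries` helper and its attempt limit are assumptions, not part of the SDK.

```ruby
# Hypothetical retry wrapper. The SDK itself resumes from the .cpt file,
# so re-invoking the same upload call after a failure continues the upload
# rather than restarting it.
def with_retries(max_attempts: 3)
  attempts = 0
  begin
    attempts += 1
    yield
  rescue StandardError => e
    retry if attempts < max_attempts
    raise e
  end
  attempts
end

# Usage (bucket and paths are placeholders):
# with_retries(max_attempts: 3) do
#   bucket.resumable_upload('exampledir/example.zip', '/tmp/example.zip',
#                           :cpt_file => '/tmp/example.zip.cpt')
# end
```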
Upload a file with resumable upload
Prerequisites
Before you begin, ensure that you have:
An OSS bucket
Write permission on the bucket
The OSS_ACCESS_KEY_ID and OSS_ACCESS_KEY_SECRET environment variables set with valid AccessKey credentials
Upload with progress tracking
The following example performs a resumable upload and prints progress to the console.
require 'aliyun/oss'
client = Aliyun::OSS::Client.new(
# Replace with your bucket's endpoint. The China (Hangzhou) endpoint is used here as an example.
endpoint: 'https://oss-cn-hangzhou.aliyuncs.com',
# Load credentials from environment variables to avoid hardcoding sensitive information.
access_key_id: ENV['OSS_ACCESS_KEY_ID'],
access_key_secret: ENV['OSS_ACCESS_KEY_SECRET']
)
bucket = client.get_bucket('examplebucket')
# Upload /tmp/example.zip to exampledir/example.zip in the bucket.
bucket.resumable_upload('exampledir/example.zip', '/tmp/example.zip') do |p|
puts "Progress: #{p}"
end

The block argument receives upload progress. The block is called once per completed part.
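Assuming the block argument is a Float fraction between 0 and 1 (an assumption about the SDK's reporting, not confirmed here), a small helper makes the console output easier to read:

```ruby
# Format a progress fraction (assumed to be a Float in 0..1) as a percentage.
def format_progress(p)
  format('%.1f%%', p * 100)
end

# bucket.resumable_upload('exampledir/example.zip', '/tmp/example.zip') do |p|
#   puts "Progress: #{format_progress(p)}"
# end
```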
Upload with a custom part size and checkpoint file path
By default, the .cpt file is created in the same directory as the local file (for example, /tmp/example.zip.cpt). Specify a custom path if you need to manage the file explicitly, such as when persisting checkpoint state across deployments.
bucket.resumable_upload(
'exampledir/example.zip', '/tmp/example.zip',
:part_size => 100 * 1024, # 100 KiB per part
:cpt_file => '/tmp/example.zip.cpt' # Explicit checkpoint file path
) do |p|
puts "Progress: #{p}"
end

Parameters
| Parameter | Required | Default | Description |
|---|---|---|---|
| key | Yes | — | Full path of the object in OSS, excluding the bucket name. For example, exampledir/example.zip. |
| file | Yes | — | Full path of the local file to upload. If the file's ETag changes during the upload, the upload fails. |
| :cpt_file | No | <file>.cpt in the same directory as file | Path of the checkpoint file that records upload state. Requires write permission. Do not edit this file; if it is corrupted, the upload cannot be resumed. Deleted automatically after the upload completes. |
| :disable_cpt | No | false | Set to true to skip recording upload progress. If true, a failed upload cannot be resumed. |
| :part_size | No | 4 MB | Size of each part in bytes. Smaller parts reduce the data lost on a failed upload but increase the total number of API calls. Larger parts are more efficient on stable, high-bandwidth connections. |
| &block | No | — | A block that receives upload progress each time a part completes. |
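The part-size tradeoff is easy to quantify: the number of parts is the file size divided by the part size, rounded up. Note that very small parts can run into the service's multipart part-count cap (10,000 parts, per the OSS documentation; verify against the current limits for your region). A minimal sketch:

```ruby
# Number of parts for a given file and part size: ceil(file_size / part_size).
def part_count(file_size, part_size)
  (file_size.to_f / part_size).ceil
end

# A 1 GiB file with the default 4 MB part size:
part_count(1024 * 1024 * 1024, 4 * 1024 * 1024)  # => 256 parts

# The same file with 100 KiB parts needs over 10,000 parts,
# which would exceed the service's part-count limit.
part_count(1024 * 1024 * 1024, 100 * 1024)       # => 10486 parts
```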
Usage notes
Checkpoint file integrity
The .cpt file records intermediate upload state and includes a built-in validation feature. Do not edit it. If the file is corrupted, the upload must start over.
ETag validation
If the local file changes while the upload is in progress — for example, the file is modified externally — the ETag no longer matches and the upload fails. Keep the file unchanged for the duration of the upload.
Disabling progress recording
Setting :disable_cpt to true disables checkpoint recording. This is useful for short uploads where resumability is not required, but a failed upload must restart from the beginning.