
Object Storage Service: Resumable upload

Last Updated: Oct 16, 2023

When you upload an object to Object Storage Service (OSS) by using resumable upload, you can specify a directory for the checkpoint file that stores resumable upload records. If an object fails to be uploaded because of a network exception or program error, the upload task is resumed from the position recorded in the checkpoint file.

Implementation method

An object can be split into several parts that can be uploaded independently. After all parts are uploaded, they are combined into a complete object.
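
For example, with the default part size of 4 MB, a 10 MB local file is uploaded as three parts: two 4 MB parts and one 2 MB part. The following sketch only illustrates the partitioning arithmetic; the local file path and the part size are hypothetical examples, not an actual upload call.

# Illustration only: estimate how many parts a local file is split into. 
# The file path and part size below are hypothetical examples. 
part_size = 4 * 1024 * 1024                 # Default part size: 4 MB. 
file_size = File.size('/tmp/example.zip')   # Size of the local file in bytes. 
num_parts = (file_size.to_f / part_size).ceil
puts "The file is uploaded in #{num_parts} parts."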

You can call the Bucket#resumable_upload method to implement resumable upload. The method takes the following parameters.

key
  Description: The full path of the object that is uploaded to OSS.
  Required: Yes
  Default value: None

file
  Description: The full path of the local file that you want to upload to OSS.
  Note: If the ETag value of the local file changes during the upload, the upload fails.
  Required: Yes
  Default value: None

:cpt_file
  Description: The path of the checkpoint (.cpt) file. You must have write permissions on the file.
  Note:
  • If the upload of a part fails, the next upload of the object continues from the position recorded in the .cpt file. When you call the Bucket#resumable_upload method again, you must specify the .cpt file that was used for the previous upload. The .cpt file is deleted after the object is uploaded.
  • The .cpt file records the intermediate status of the upload and is used to verify the uploaded data. Do not edit the .cpt file. If the .cpt file is damaged, the upload fails.
  Required: No
  Default value: file.cpt in the directory of the local file, where file is the name of the local file

:disable_cpt
  Description: Specifies whether to disable recording of the upload progress in the checkpoint file. Valid values:
  • true: The upload progress is not recorded. If the upload fails, the upload cannot be resumed.
  • false: The upload progress is recorded. If the upload fails, the upload is resumed from the position recorded in the checkpoint file.
  Required: No
  Default value: false

:part_size
  Description: The size of each part.
  Required: No
  Default value: 4 MB

&block
  Description: An optional block that handles the upload progress. If you pass a block when you call Bucket#resumable_upload, the upload progress is reported to the block.
  Required: No
  Default value: None

Sample code

The following code provides an example of how to perform resumable upload:

require 'aliyun/oss'

client = Aliyun::OSS::Client.new(
  # In this example, the endpoint of the China (Hangzhou) region is used. Specify your actual endpoint. 
  endpoint: 'https://oss-cn-hangzhou.aliyuncs.com',
  # Obtain access credentials from environment variables. Before you run the sample code, make sure that the OSS_ACCESS_KEY_ID and OSS_ACCESS_KEY_SECRET environment variables are configured. 
  access_key_id: ENV['OSS_ACCESS_KEY_ID'],
  access_key_secret: ENV['OSS_ACCESS_KEY_SECRET']
)
# Specify the name of the bucket. Example: examplebucket. 
bucket = client.get_bucket('examplebucket')
# Set the key parameter to the full path of the object. Do not include the bucket name in the full path. Example: exampledir/example.zip. 
# Set the file parameter to the full path of the local file that you want to upload. Example: /tmp/example.zip. 
# Pass a block to handle the upload progress. 
bucket.resumable_upload('exampledir/example.zip', '/tmp/example.zip') do |p|
  puts "Progress: #{p}"
end

# Specify the size of each part by using the :part_size parameter and the path of the checkpoint file by using the :cpt_file parameter. 
bucket.resumable_upload(
  'exampledir/example.zip', '/tmp/example.zip',
  :part_size => 100 * 1024, :cpt_file => '/tmp/example.zip.cpt') { |p|
  puts "Progress: #{p}"
}
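
If an upload is interrupted, you can call the Bucket#resumable_upload method again with the same object name, local file, and .cpt file to resume the upload from the recorded position. The following sketch only illustrates this retry pattern; the rescue-and-retry loop and the retry limit of 3 are assumptions made for the example and are not part of the SDK.

require 'aliyun/oss'

client = Aliyun::OSS::Client.new(
  # In this example, the endpoint of the China (Hangzhou) region is used. Specify your actual endpoint. 
  endpoint: 'https://oss-cn-hangzhou.aliyuncs.com',
  access_key_id: ENV['OSS_ACCESS_KEY_ID'],
  access_key_secret: ENV['OSS_ACCESS_KEY_SECRET']
)
bucket = client.get_bucket('examplebucket')

# Retry the upload a few times. Because the same :cpt_file is passed on every
# attempt, each retry resumes from the position recorded in the checkpoint file
# instead of starting over. The retry limit of 3 is a hypothetical choice. 
attempts = 0
begin
  bucket.resumable_upload(
    'exampledir/example.zip', '/tmp/example.zip',
    :cpt_file => '/tmp/example.zip.cpt') { |p| puts "Progress: #{p}" }
rescue => e
  attempts += 1
  puts "Upload attempt #{attempts} failed: #{e.message}"
  retry if attempts < 3
  raise
end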