Streaming upload writes data to OSS incrementally without buffering the entire object in memory first. Use this approach when the data source is a live stream, a generator, or any object too large to hold in memory before uploading.
Prerequisites
Before you begin, ensure that you have:
- An OSS bucket in your target region
- An AccessKey ID and AccessKey secret stored in the environment variables `OSS_ACCESS_KEY_ID` and `OSS_ACCESS_KEY_SECRET`
- The `aliyun/oss` Ruby library installed
Upload an object using streaming upload
The following example opens a write stream with `put_object` and appends data to it in a loop. Each append is sent to OSS incrementally, so the full object is never held in memory.
```ruby
require 'aliyun/oss'

client = Aliyun::OSS::Client.new(
  # Replace the endpoint with the one for your region.
  endpoint: 'https://oss-cn-hangzhou.aliyuncs.com',
  # Read credentials from environment variables to avoid hardcoding secrets.
  access_key_id: ENV['OSS_ACCESS_KEY_ID'],
  access_key_secret: ENV['OSS_ACCESS_KEY_SECRET']
)

# Replace examplebucket with your bucket name.
bucket = client.get_bucket('examplebucket')

# Replace exampleobject.txt with the full object path (do not include the bucket name).
bucket.put_object('exampleobject.txt') do |stream|
  # Each << appends a chunk to the upload stream.
  100.times { |i| stream << i.to_s }
end
```

Replace the following placeholders:
| Placeholder | Description | Example |
|---|---|---|
| `https://oss-cn-hangzhou.aliyuncs.com` | Endpoint for your region | `https://oss-ap-southeast-1.aliyuncs.com` |
| `examplebucket` | Your bucket name | `my-app-bucket` |
| `exampleobject.txt` | Full path of the object in the bucket | `logs/2026/app.log` |
What's next
Multipart upload (Ruby SDK) — for large objects that require parallel uploads or the ability to resume a failed upload
Simple upload (Ruby SDK) — for small objects where you can read the entire content into memory before uploading