Storing user uploads on the same server running the app is one of those things that works fine, until it doesn’t. The disk fills up and the app starts throwing write errors. Someone has to SSH in and manually delete old files just to keep things running.
And if that server goes down without backups, everything uploaded by users is gone. This is a common scenario for developers who treat storage as an afterthought during the early stage of a project. Local disk storage is fast, simple, and requires zero setup. But it's also fragile, impossible to scale horizontally, and completely tied to the life of a single machine.
Alibaba Cloud Object Storage Service (OSS) solves all of that. It's a managed storage service where files are uploaded through an API, stored redundantly across multiple devices, and accessible from anywhere. No disk management, no replication scripts, no midnight emergencies because the volume hit 100%.
OSS is an object storage system. Unlike a traditional filesystem with nested directories, everything in OSS is flat. You have buckets (top-level containers) and objects (the files inside them). Each object has a key, a string that looks like a file path (e.g., uploads/2026/04/photo.jpg) but is really just a unique identifier.
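To make the flat-namespace idea concrete, here is a minimal sketch (plain Python, not the OSS API) modeling a bucket as a flat key-to-bytes mapping, with "directories" recovered purely by prefix filtering — the same way OSS prefix listing works:

```python
# A bucket is conceptually a flat mapping from key strings to bytes.
# The slashes below are just characters in the key, not real directories.
bucket_contents = {
    "uploads/2026/04/photo.jpg": b"...",
    "uploads/2026/04/avatar.png": b"...",
    "backups/db-2026-04-20.sql.gz": b"...",
}

def list_by_prefix(objects, prefix):
    """Mimic OSS prefix listing: return every key starting with the prefix."""
    return sorted(k for k in objects if k.startswith(prefix))

print(list_by_prefix(bucket_contents, "uploads/2026/04/"))
```

Nothing here is hierarchical: "listing a folder" is just a string-prefix match over all keys.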
It’s worth understanding how this differs from other storage options on Alibaba Cloud. Block storage (ECS Cloud Disks) gives you raw volumes that attach to a single server - fast, but tied to one machine and limited in size.
File Storage NAS provides a shared filesystem across multiple servers, which is great for applications that need POSIX compatibility. Object storage trades filesystem semantics for massive scalability and HTTP-based access. There's no mounting, no file locking, and no directory tree. You just PUT and GET objects over HTTPS. For web applications dealing with user uploads, media files, backups, and static assets, this tradeoff is almost always the right one.
When a file is uploaded to OSS, Alibaba Cloud automatically replicates it across multiple storage devices within the selected region. The platform guarantees 99.9999999999% (twelve nines) data durability. For context, that means if you stored a billion files, you'd statistically expect to lose one file every thousand years.
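A quick back-of-envelope check of that claim, treating the twelve-nines figure as an annualized per-object durability (the usual interpretation for such guarantees):

```python
# Twelve nines of durability means each object has roughly a 1e-12
# chance of being lost per year (assuming the figure is annualized).
annual_loss_prob = 1 - 0.999999999999     # ~1e-12
objects_stored = 1_000_000_000            # one billion files

expected_losses_per_year = objects_stored * annual_loss_prob
years_per_lost_file = 1 / expected_losses_per_year

# Roughly one lost file per thousand years for a billion objects.
print(f"{expected_losses_per_year:.4f} losses/year, one loss per ~{years_per_lost_file:.0f} years")
```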
Beyond raw storage, OSS provides several features that make it useful in real applications:
● Storage tiering: Three classes - Standard, Infrequent Access, and Archive - at progressively lower costs. Files can transition between them automatically.
● Pre-signed URLs: Generate temporary, expiring links to private files. No need to make anything public or proxy downloads through your server.
● Lifecycle rules: Automatically move old data to cheaper tiers or delete it entirely after a set number of days.
● Versioning: Keep previous versions of overwritten or deleted files. Extremely useful as a safety net against bugs and accidental deletions.
● CDN integration: Pair OSS with Alibaba Cloud CDN to serve files from edge nodes worldwide, reducing latency for global users.
● Direct uploads: Using STS (Security Token Service), clients can upload files directly to OSS without routing them through your backend at all.
It's the kind of infrastructure that's boring in the best way: it just works, quietly, at whatever scale you throw at it.
Setting up a bucket takes about two minutes from the Alibaba Cloud console.
Navigate to Object Storage Service under the Storage section and click Create Bucket. The key decisions:
● Name: Globally unique across all of Alibaba Cloud. A good convention is {project}-{environment}-{purpose}, like myapp-prod-uploads. Only lowercase letters, numbers, and hyphens.
● Region: Pick the one closest to your users. If most of your traffic comes from Southeast Asia, ap-southeast-1 (Singapore) is a solid choice. This affects latency and also determines which data residency regulations apply.
● Storage Class: Start with Standard. Don't overthink this; lifecycle rules can automate transitions to cheaper tiers later. Standard is for data that gets accessed regularly, which covers most active application files.
● Access Control (ACL): Set it to Private. This is important. A public bucket means anyone with the URL can access any file in it. With a private bucket, access is controlled through API credentials or pre-signed URLs. You can always make specific files accessible temporarily without exposing the entire bucket.
● Versioning: Turn it on. Right now, before uploading anything. It adds negligible cost but provides a critical safety net. If a bug overwrites files with empty data (and this happens more often than anyone admits), versioning preserves every previous version. Recovery becomes a matter of listing old versions and restoring them, rather than explaining to users why their files are gone.
Interacting with OSS from code requires an Access Key ID and Access Key Secret. These are generated from the Alibaba Cloud console and function like a username and password for API calls.
The root account's access key has unrestricted permissions across every Alibaba Cloud service — compute, databases, DNS, billing, everything. Using it in application code is like giving every employee the master key to the building.
The correct approach is to create a RAM (Resource Access Management) user with permissions scoped to exactly what the application needs.
Step 1: Go to the RAM Console and create a new user with Programmatic Access enabled.
Step 2: Create a custom policy that only allows operations on the specific bucket:
{
  "Version": "1",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "oss:PutObject",
        "oss:GetObject",
        "oss:DeleteObject",
        "oss:ListObjects"
      ],
      "Resource": [
        "acs:oss:*:*:myapp-prod-uploads",
        "acs:oss:*:*:myapp-prod-uploads/*"
      ]
    }
  ]
}
This policy allows uploading, downloading, deleting, and listing; nothing else. The RAM user can't create buckets, modify billing settings, or touch any other service. If the key leaks, the blast radius is limited to one bucket.
Step 3: Attach the policy to the user, then save the generated credentials immediately. The secret is only displayed once.
Step 4: Store them as environment variables, never in source code or config files committed to version control:
export ALIBABA_OSS_KEY_ID="your_key_here"
export ALIBABA_OSS_KEY_SECRET="your_secret_here"
export ALIBABA_OSS_ENDPOINT="https://oss-ap-southeast-1.aliyuncs.com"
export ALIBABA_OSS_BUCKET="myapp-prod-uploads"
For production deployments, consider using Alibaba Cloud's KMS (Key Management Service) or injecting secrets through your CI/CD pipeline. The point is that credentials should never exist in a place where they can be accidentally shared, committed, or logged.
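A small fail-fast helper makes missing configuration obvious at startup instead of surfacing as a confusing auth error later. This is a sketch using the environment variable names from the example above; the function name is my own:

```python
import os

REQUIRED_VARS = (
    "ALIBABA_OSS_KEY_ID",
    "ALIBABA_OSS_KEY_SECRET",
    "ALIBABA_OSS_ENDPOINT",
    "ALIBABA_OSS_BUCKET",
)

def load_oss_config():
    """Read OSS settings from the environment, failing fast if any are missing."""
    missing = [name for name in REQUIRED_VARS if not os.environ.get(name)]
    if missing:
        raise RuntimeError(f"Missing OSS configuration: {', '.join(missing)}")
    return {name: os.environ[name] for name in REQUIRED_VARS}
```

Call it once at application startup and pass the resulting dict to your OSS client setup.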
Alibaba Cloud provides official SDKs for Python, Java, Go, Node.js, PHP, and .NET. Python is one of the most commonly used, and the oss2 library is well-maintained with solid documentation.
pip install oss2
import oss2
import os

auth = oss2.Auth(
    os.environ['ALIBABA_OSS_KEY_ID'],
    os.environ['ALIBABA_OSS_KEY_SECRET']
)
bucket = oss2.Bucket(
    auth,
    os.environ['ALIBABA_OSS_ENDPOINT'],
    os.environ['ALIBABA_OSS_BUCKET']
)
For files under 100MB, a simple put_object works:
def upload_file(local_path, key):
    with open(local_path, 'rb') as f:
        result = bucket.put_object(key, f)
    print(f"Done. Status: {result.status}, ETag: {result.etag}")

upload_file('./photo.jpg', 'users/42/photo.jpg')
The key determines where the file "lives" inside the bucket. Even though OSS is flat, using slashes in keys creates an illusion of directories that makes organization natural. A good convention is a pattern like {resource_type}/{identifier}/…; it makes filtering, lifecycle rules, and access policies much easier to manage down the road.
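The convention can be captured in a small helper. This is an illustrative sketch (the function name, the date segment, and the random hex suffix are my own choices, not an OSS requirement):

```python
import uuid
from datetime import datetime, timezone

def build_key(resource_type, identifier, extension):
    """Build a structured object key: {resource_type}/{identifier}/{date}/{uuid}.{ext}.

    The date path aids debugging and lifecycle rules; the random hex
    prevents collisions between files with identical names.
    """
    date_path = datetime.now(timezone.utc).strftime("%Y/%m/%d")
    return f"{resource_type}/{identifier}/{date_path}/{uuid.uuid4().hex}.{extension}"

key = build_key("users", 42, "jpg")
print(key)
```

Centralizing key construction in one function means prefix-based lifecycle rules and access policies only ever need to agree with a single piece of code.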
For large files, database backups, video uploads, anything over 100MB, multipart upload is the way to go. The resumable_upload function handles everything: splitting the file into chunks, uploading them in parallel across multiple threads, and retrying any chunks that fail. If the connection drops entirely, calling the function again resumes from the last successful chunk instead of starting over:
def upload_large_file(local_path, key):
    oss2.resumable_upload(
        bucket,
        key,
        local_path,
        part_size=10 * 1024 * 1024,  # 10MB chunks
        num_threads=4
    )
    print(f"Large file uploaded: {key}")

upload_large_file('./db-backup.sql.gz', 'backups/db-2026-04-20.sql.gz')
This is particularly useful for anything running on unreliable networks or dealing with files that take more than a few seconds to transfer.
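For long transfers it helps to show progress. oss2's upload functions accept a progress callback taking (consumed_bytes, total_bytes); the helper below matches that shape, with the percentage logic split out so it can be reused (a sketch — the function names are mine):

```python
def percent_done(consumed_bytes, total_bytes):
    """Return completion as a percentage, or None if the total is unknown."""
    if not total_bytes:
        return None
    return 100 * consumed_bytes / total_bytes

def report_progress(consumed_bytes, total_bytes):
    """Progress hook matching the (consumed, total) signature oss2 callbacks use.

    total_bytes can be None when the size isn't known up front.
    """
    pct = percent_done(consumed_bytes, total_bytes)
    if pct is not None:
        print(f"\rUploading: {pct:.1f}%", end="", flush=True)
```

Pass it via `progress_callback=report_progress` in the `oss2.resumable_upload` call above to get a live percentage during the transfer.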
def download_file(key, local_path):
    result = bucket.get_object(key)
    with open(local_path, 'wb') as f:
        for chunk in result:
            f.write(chunk)
    print(f"Saved to {local_path}")

download_file('users/42/photo.jpg', './downloaded_photo.jpg')
Streaming the response in chunks keeps memory usage constant regardless of file size: a 10KB image and a 2GB video use the same amount of RAM during download.
This is arguably the most useful feature of OSS for web applications. Instead of routing file downloads through the backend (which ties up application threads doing nothing but I/O), generate a temporary signed URL that lets the user download directly from OSS:
def get_temp_url(key, expires_seconds=900):
    return bucket.sign_url('GET', key, expires_seconds, slash_safe=True)

url = get_temp_url('users/42/photo.jpg')
# Valid for 15 minutes, then it stops working
The URL contains a cryptographic signature and an expiration timestamp. Anyone with the URL can access the file until it expires, but they can't modify it, access other files, or extend the expiration. This pattern works great for serving images in a web app, generating download links in emails, or allowing temporary access to shared documents. The backend generates the URL in microseconds, and OSS (or CDN) handles the actual data transfer.
Pre-signed URLs can also be generated for PUT operations, enabling direct uploads from browser JavaScript or mobile apps without the file ever touching the backend server.
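Clients sometimes cache signed URLs, so it's handy to check expiry before reusing one. Assuming the default V1-style signatures oss2 generates, which carry the expiry as a Unix timestamp in an `Expires` query parameter, a check can be sketched with the stdlib alone (the helper name is mine):

```python
import time
from urllib.parse import urlparse, parse_qs

def signed_url_expired(url, now=None):
    """Check whether a V1-style signed URL has passed its Expires timestamp.

    Assumes the URL carries an 'Expires' query parameter holding a Unix time,
    as oss2's default signatures do. A URL without one is treated as unusable.
    """
    params = parse_qs(urlparse(url).query)
    expires = params.get("Expires")
    if not expires:
        return True
    return (now if now is not None else time.time()) >= int(expires[0])
```

This only inspects the timestamp client-side; OSS itself remains the authority and will reject the request regardless once the signature expires.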
Here's a complete, production-ready upload endpoint using Flask. It validates the incoming file, uploads it to OSS, and returns a temporary URL so the client can use the file immediately:
from flask import Flask, request, jsonify
import oss2
import os
import uuid
from datetime import datetime

app = Flask(__name__)

auth = oss2.Auth(
    os.environ['ALIBABA_OSS_KEY_ID'],
    os.environ['ALIBABA_OSS_KEY_SECRET']
)
bucket = oss2.Bucket(
    auth,
    os.environ['ALIBABA_OSS_ENDPOINT'],
    os.environ['ALIBABA_OSS_BUCKET']
)

ALLOWED_TYPES = {'jpg', 'jpeg', 'png', 'webp'}
MAX_SIZE = 10 * 1024 * 1024  # 10MB

@app.route('/api/upload', methods=['POST'])
def handle_upload():
    file = request.files.get('file')
    if not file or file.filename == '':
        return jsonify({'error': 'No file provided'}), 400

    # Validate extension
    ext = file.filename.rsplit('.', 1)[-1].lower() if '.' in file.filename else ''
    if ext not in ALLOWED_TYPES:
        return jsonify({'error': f'File type .{ext} not supported'}), 400

    # Validate size
    file.seek(0, 2)
    size = file.tell()
    file.seek(0)
    if size > MAX_SIZE:
        return jsonify({'error': f'File too large ({size // (1024 * 1024)}MB, max 10MB)'}), 400

    # Build a structured key with date prefix
    date_path = datetime.utcnow().strftime('%Y/%m/%d')
    key = f"uploads/{date_path}/{uuid.uuid4().hex}.{ext}"

    try:
        bucket.put_object(key, file.stream)
    except oss2.exceptions.OssError as e:
        app.logger.error(f"OSS upload failed: {e}")
        return jsonify({'error': 'Storage unavailable, please try again'}), 503

    return jsonify({
        'key': key,
        'url': bucket.sign_url('GET', key, 3600, slash_safe=True)
    }), 201
A few things worth noting about this setup:
● Validation happens before the upload. There's no point sending a 50MB PDF to OSS just to reject it afterward. Checking size and type first saves bandwidth and time.
● Date-based key prefixes make debugging straightforward. Looking for a broken upload from April 18th? List objects with the prefix uploads/2026/04/18/.
● UUIDs prevent collisions. Two users uploading photo.jpg at the same time won't overwrite each other.
● The signed URL in the response lets the client display the uploaded file immediately without making a second API call.
● OSS-specific errors are caught and logged so users see a clean error message instead of a stack trace.
Storage costs tend to grow slowly and then suddenly become noticeable. Most uploaded files are actively accessed for a few weeks, then rarely touched again. Paying Standard storage rates for files nobody looks at is wasteful.
OSS offers three storage classes at different price points:
| Class | Best For | Cost vs. Standard |
|---|---|---|
| Standard | Active, frequently accessed data | Baseline |
| Infrequent Access (IA) | Data accessed less than once a month | 40% cheaper |
| Archive | Long-term retention, rarely accessed | 70% cheaper |
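The savings are easy to estimate. The unit price below is a made-up illustrative figure (real OSS pricing varies by region; check the console), and the discounts mirror the table above; note the model deliberately ignores request and retrieval fees, which matter for IA and Archive in practice:

```python
# Hypothetical Standard unit price for illustration only.
STANDARD_PRICE_PER_GB_MONTH = 0.02  # USD, made-up figure
DISCOUNT = {"Standard": 0.0, "IA": 0.40, "Archive": 0.70}

def monthly_cost(gb, storage_class):
    """Storage-only monthly cost estimate; ignores request/retrieval fees."""
    return gb * STANDARD_PRICE_PER_GB_MONTH * (1 - DISCOUNT[storage_class])

for cls in DISCOUNT:
    print(f"{cls}: ${monthly_cost(500, cls):.2f}/month for 500GB")
```

At these illustrative rates, 500GB that has gone cold costs less than a third in Archive of what it costs in Standard, which is why the lifecycle automation below is worth the five minutes it takes to set up.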
Lifecycle rules automate transitions between these tiers based on file age:
from oss2.models import LifecycleRule, LifecycleTransition, LifecycleExpiration

rules = [
    # Transition uploads to cheaper storage over time
    LifecycleRule(
        id='uploads-tiering',
        prefix='uploads/',
        status=LifecycleRule.ENABLED,
        transitions=[
            LifecycleTransition(storage_class=oss2.BUCKET_STORAGE_CLASS_IA, days=60),
            LifecycleTransition(storage_class=oss2.BUCKET_STORAGE_CLASS_ARCHIVE, days=180)
        ]
    ),
    # Automatically delete temporary processing files after 7 days
    LifecycleRule(
        id='temp-cleanup',
        prefix='temp/',
        status=LifecycleRule.ENABLED,
        expiration=LifecycleExpiration(days=7)
    )
]

bucket.put_bucket_lifecycle(oss2.models.BucketLifecycle(rules))
The first rule moves user uploads to Infrequent Access after 60 days and to Archive after 180 days. The second rule deletes anything in the temp/ prefix after a week, useful for thumbnails being generated, staging files being processed, or anything that doesn't need to persist.
Once configured, these rules run automatically. There's nothing to maintain, no cron jobs to manage, and the cost savings are immediate.
Alibaba Cloud OSS handles the parts of file storage that developers shouldn't have to think about: replication, durability, scaling, and availability. The SDK is clean, the pricing scales with usage, and features like pre-signed URLs, lifecycle rules, and versioning solve real problems without adding complexity.
If file uploads still go to the application server's local disk, migrating to OSS is a weekend project that pays for itself immediately. Start with a single bucket, move one upload workflow, and build from there. Once storage stops being a source of production incidents, it becomes clear why managed object storage is the default choice for modern applications.
Disclaimer: The views expressed herein are for reference only and don't necessarily represent the official views of Alibaba Cloud.