
Object Storage Service: Access OSS using an AWS SDK

Last Updated: Mar 20, 2026

OSS is compatible with the AWS S3 API. In most cases, configuring two values, the endpoint and your AccessKey credentials, lets existing AWS SDK code connect to OSS unchanged. A few SDKs additionally require disabling chunked encoding, as noted in the examples below.

Prerequisites

Before you begin, review the following notice.

Important

Starting March 20, 2025, new OSS users must use a custom domain name (CNAME) to perform data API operations on buckets in Chinese mainland regions. Default public endpoints are restricted for these operations. See the official announcement for the full list of affected operations. If you access your data via HTTPS, bind a valid SSL certificate to your custom domain. This is mandatory for OSS console access, as the console enforces HTTPS. See Use a custom domain name.

Configure the endpoint and credentials

Two configuration values are all that's needed:

Endpoint — Use the S3-compatible public or internal endpoint format:

  • Public: https://s3.oss-{region}.aliyuncs.com

  • Internal (VPC): https://s3.oss-{region}-internal.aliyuncs.com

Replace {region} with your actual region ID, such as cn-hangzhou. For a complete list, see Regions and endpoints.

AccessKey credentials — Use the AccessKey ID and AccessKey secret you created in RAM. OSS access credentials and AWS access credentials are completely separate systems and cannot be used interchangeably.
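The endpoint format is mechanical enough to generate in code. The following sketch is an illustrative helper (not part of any SDK) that builds the public or internal endpoint from a region ID:

```python
def oss_s3_endpoint(region: str, internal: bool = False) -> str:
    """Build the S3-compatible OSS endpoint for a region.

    internal=True returns the VPC-internal endpoint, which avoids
    public-network traffic when calling from inside Alibaba Cloud.
    """
    suffix = "-internal" if internal else ""
    return f"https://s3.oss-{region}{suffix}.aliyuncs.com"

print(oss_s3_endpoint("cn-hangzhou"))
# https://s3.oss-cn-hangzhou.aliyuncs.com
print(oss_s3_endpoint("cn-hangzhou", internal=True))
# https://s3.oss-cn-hangzhou-internal.aliyuncs.com
```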

SDK examples

All examples below initialize the S3 client pointed at https://s3.oss-cn-hangzhou.aliyuncs.com (the China (Hangzhou) region). Replace this with the endpoint for your bucket's region.

Java

AWS SDK for Java 2.x

Set chunkedEncodingEnabled(false) in S3Configuration. The Java 2.x SDK uses chunked encoding by default for putObject requests, which causes a signature error (HTTP 403) with OSS. Also set pathStyleAccessEnabled(false) to use virtual-hosted-style URLs.
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.s3.S3Client;
import software.amazon.awssdk.services.s3.S3Configuration;
import java.net.URI;

S3Client s3Client = S3Client.builder()
    .endpointOverride(URI.create("https://s3.oss-cn-hangzhou.aliyuncs.com"))
    .region(Region.AWS_GLOBAL)          // Required by the SDK; OSS ignores the value
    .serviceConfiguration(
        S3Configuration.builder()
            .pathStyleAccessEnabled(false)      // Use virtual-hosted-style URLs
            .chunkedEncodingEnabled(false)      // OSS does not support chunked encoding
            .build()
    )
    .build();

AWS SDK for Java 1.x

Set withChunkedEncodingDisabled(true) when building the client. Without this setting, uploads fail with an InvalidArgument error.
import com.amazonaws.client.builder.AwsClientBuilder.EndpointConfiguration;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;

AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
    .withEndpointConfiguration(new EndpointConfiguration(
        "https://s3.oss-cn-hangzhou.aliyuncs.com",
        "cn-hangzhou"))
    .withPathStyleAccessEnabled(false)
    .withChunkedEncodingDisabled(true)   // OSS does not support chunked encoding
    .build();
When calling getObject, read the S3ObjectInputStream to completion before closing it. Calling close() early discards unread data.
import com.amazonaws.services.s3.model.S3Object;
import com.amazonaws.util.IOUtils;
import java.io.InputStream;

S3Object object = s3Client.getObject("my-bucket", "file.txt");
InputStream input = object.getObjectContent();
byte[] data = IOUtils.toByteArray(input); // Read fully before closing
input.close();

Python

Use Signature V2 (signature_version='s3') with boto3. The boto3 Signature V4 implementation is tied to chunked encoding and cannot be disabled, which causes upload failures with OSS.
import boto3
from botocore.config import Config

s3 = boto3.client(
    's3',
    endpoint_url='https://s3.oss-cn-hangzhou.aliyuncs.com',
    # You can also set credentials via the AWS_ACCESS_KEY_ID and
    # AWS_SECRET_ACCESS_KEY environment variables instead of passing them here.
    config=Config(
        signature_version='s3',             # Use Signature V2; V4 is not compatible with OSS
        s3={'addressing_style': 'virtual'}  # Use virtual-hosted-style URLs
    )
)

Node.js

AWS SDK for Node.js v3

import { S3Client } from '@aws-sdk/client-s3';

const client = new S3Client({
    endpoint: 'https://s3.oss-cn-hangzhou.aliyuncs.com',
    region: 'cn-hangzhou'               // Required by the SDK; replace with your bucket's region
});

AWS SDK for Node.js v2

const AWS = require('aws-sdk');

const s3 = new AWS.S3({
    endpoint: 'https://s3.oss-cn-hangzhou.aliyuncs.com',
    region: 'cn-hangzhou'               // Required by the SDK; replace with your bucket's region
});

Go

AWS SDK for Go v2

If you use the Go v2 Manager for large file uploads, it may use chunked encoding. Test large uploads and handle any encoding-related errors accordingly.
import (
    "context"
    "log"

    "github.com/aws/aws-sdk-go-v2/aws"
    awsconfig "github.com/aws/aws-sdk-go-v2/config"
    "github.com/aws/aws-sdk-go-v2/service/s3"
)

cfg, err := awsconfig.LoadDefaultConfig(context.TODO(),
    awsconfig.WithEndpointResolverWithOptions(
        aws.EndpointResolverWithOptionsFunc(func(service, region string, options ...interface{}) (aws.Endpoint, error) {
            return aws.Endpoint{
                URL: "https://s3.oss-cn-hangzhou.aliyuncs.com",
            }, nil
        }),
    ),
)
if err != nil {
    log.Fatal(err)
}
client := s3.NewFromConfig(cfg)

AWS SDK for Go v1

import (
    "github.com/aws/aws-sdk-go/aws"
    "github.com/aws/aws-sdk-go/aws/session"
    "github.com/aws/aws-sdk-go/service/s3"
)

sess := session.Must(session.NewSessionWithOptions(session.Options{
    Config: aws.Config{
        Endpoint: aws.String("https://s3.oss-cn-hangzhou.aliyuncs.com"),
        Region:   aws.String("cn-hangzhou"),
    },
    SharedConfigState: session.SharedConfigEnable,
}))
svc := s3.New(sess)

.NET

The configuration is the same for AWS SDK for .NET 2.x and 3.x.

using Amazon.S3;

var config = new AmazonS3Config
{
    ServiceURL = "https://s3.oss-cn-hangzhou.aliyuncs.com"
};
var client = new AmazonS3Client(config);

PHP

AWS SDK for PHP 3.x

<?php
require_once __DIR__ . '/vendor/autoload.php';
use Aws\S3\S3Client;

$s3Client = new S3Client([
    'version'  => '2006-03-01',
    'region'   => 'cn-hangzhou',
    'endpoint' => 'https://s3.oss-cn-hangzhou.aliyuncs.com'
]);

AWS SDK for PHP 2.x

<?php
require_once __DIR__ . '/vendor/autoload.php';
use Aws\S3\S3Client;

$s3Client = S3Client::factory([
    'version'  => '2006-03-01',
    'region'   => 'cn-hangzhou',
    'base_url' => 'https://s3.oss-cn-hangzhou.aliyuncs.com'  // PHP 2.x uses base_url, not endpoint
]);

Ruby

AWS SDK for Ruby 3.x

require 'aws-sdk-s3'

s3 = Aws::S3::Client.new(
  endpoint: 'https://s3.oss-cn-hangzhou.aliyuncs.com',
  region: 'cn-hangzhou'
)

AWS SDK for Ruby 2.x

require 'aws-sdk'

s3 = AWS::S3::Client.new(
  s3_endpoint: 's3.oss-cn-hangzhou.aliyuncs.com', # No https:// prefix in Ruby 2.x
  region: 'cn-hangzhou',
  s3_force_path_style: false
)

C++

Requires AWS SDK for C++ version 1.7.68 or later.

#include <aws/s3/S3Client.h>
#include <aws/core/client/ClientConfiguration.h>

Aws::Client::ClientConfiguration config;
config.endpointOverride = "s3.oss-cn-hangzhou.aliyuncs.com"; // No https:// prefix
config.region = "cn-hangzhou";

Aws::S3::S3Client s3_client(config);

Browser

Frontend web apps must use temporary credentials from Security Token Service (STS). Never hard-code permanent AccessKeys in client-side code. Your server calls the AssumeRole API to obtain temporary credentials, then returns them to the browser. See Use STS temporary credentials to access OSS for a complete tutorial.

import { S3Client } from '@aws-sdk/client-s3';

// Fetch STS temporary credentials from your server
async function getSTSCredentials() {
    const response = await fetch('https://your-server.com/api/sts-token');
    return await response.json();
}

const client = new S3Client({
    region: 'cn-hangzhou',
    endpoint: 'https://s3.oss-cn-hangzhou.aliyuncs.com',
    credentials: async () => {
        const creds = await getSTSCredentials();
        return {
            accessKeyId: creds.accessKeyId,
            secretAccessKey: creds.secretAccessKey,
            sessionToken: creds.securityToken,  // Map STS securityToken to sessionToken
            expiration: new Date(creds.expiration)
        };
    }
});

Android

Android apps must use temporary credentials from STS. Never hard-code permanent AccessKeys in the app. Your server calls the AssumeRole API to get temporary credentials. See Use STS temporary credentials to access OSS for a complete tutorial.

import com.amazonaws.auth.AWSCredentials;
import com.amazonaws.auth.AWSCredentialsProvider;
import com.amazonaws.auth.BasicSessionCredentials;
import com.amazonaws.client.builder.AwsClientBuilder.EndpointConfiguration;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3Client;

// Implement a credentials provider that fetches STS credentials from your server
public class OSSCredentialsProvider implements AWSCredentialsProvider {
    @Override
    public AWSCredentials getCredentials() {
        // Call your server endpoint (e.g., https://your-server.com/api/sts-token)
        String accessKeyId = fetchFromServer("accessKeyId");
        String secretKeyId = fetchFromServer("secretKeyId");
        String securityToken = fetchFromServer("securityToken");

        return new BasicSessionCredentials(accessKeyId, secretKeyId, securityToken);
    }

    @Override
    public void refresh() {
        // Refresh credentials before they expire
    }
}

// Create the S3 client with STS credentials
AmazonS3 s3Client = AmazonS3Client.builder()
    .withCredentials(new OSSCredentialsProvider())
    .withEndpointConfiguration(new EndpointConfiguration(
        "https://s3.oss-cn-hangzhou.aliyuncs.com", ""))
    .build();

// Example: upload an object
s3Client.putObject("my-bucket", "test.txt", "Hello OSS");

iOS

iOS apps must use temporary credentials from STS. Never hard-code permanent AccessKeys in the app. Your server calls the AssumeRole API to get temporary credentials. See Use STS temporary credentials to access OSS for a complete tutorial.

#import <AWSS3/AWSS3.h>

// Implement the credentials provider
@interface OSSCredentialsProvider : NSObject <AWSCredentialsProvider>
@end

@implementation OSSCredentialsProvider

- (AWSTask<AWSCredentials *> *)credentials {
    return [[AWSTask taskWithResult:nil] continueWithBlock:^id(AWSTask *task) {
        // Fetch STS credentials from your server
        NSString *accessKey    = [self fetchFromServer:@"accessKeyId"];
        NSString *secretKey    = [self fetchFromServer:@"secretKeyId"];
        NSString *sessionToken = [self fetchFromServer:@"securityToken"];

        AWSCredentials *credentials = [[AWSCredentials alloc]
            initWithAccessKey:accessKey
            secretKey:secretKey
            sessionKey:sessionToken
            expiration:[NSDate dateWithTimeIntervalSinceNow:3600]]; // Credentials expire in 1 hour

        return [AWSTask taskWithResult:credentials];
    }];
}

@end

// Configure the S3 client
AWSEndpoint *endpoint = [[AWSEndpoint alloc]
    initWithURLString:@"https://s3.oss-cn-hangzhou.aliyuncs.com"];
AWSServiceConfiguration *configuration = [[AWSServiceConfiguration alloc]
    initWithRegion:AWSRegionUnknown
    endpoint:endpoint
    credentialsProvider:[[OSSCredentialsProvider alloc] init]];

[AWSS3 registerS3WithConfiguration:configuration forKey:@"OSS"];
AWSS3 *s3 = [AWSS3 S3ForKey:@"OSS"];

// Example: upload an object
AWSS3PutObjectRequest *request = [AWSS3PutObjectRequest new];
request.bucket = @"my-bucket";
request.key    = @"test.txt";
request.body   = [@"Hello OSS" dataUsingEncoding:NSUTF8StringEncoding];

[[s3 putObject:request] continueWithBlock:^id(AWSTask *task) {
    if (task.error) {
        NSLog(@"Upload failed: %@", task.error);
    } else {
        NSLog(@"Upload succeeded");
    }
    return nil;
}];

SDK and signature version reference

Language | Recommended SDK          | Signature version      | Key constraint
Python   | Latest boto3             | Signature V2 (s3)      | boto3 Signature V4 is coupled to chunked encoding and is not compatible with OSS
Java     | 2.x (better performance) | Signature V4           | Disable chunked encoding
Node.js  | v3                       | Signature V4 (default) | None
Go       | v1 (recommended)         | Signature V4 (default) | Go v2 Manager may use chunked encoding for large uploads
.NET     | 3.x                      | Signature V4 (default) | None
PHP      | 3.x                      | Signature V4 (default) | None
Ruby     | 3.x                      | Signature V4 (default) | None

For existing projects, keep your current SDK version to minimize the risk of breaking changes.

Troubleshooting

InvalidArgument: aws-chunked encoding is not supported

This is the most common error when connecting AWS SDKs to OSS. OSS supports Signature V4 but does not support chunked transfer encoding. AWS SDKs that enable chunked encoding by default fail with:

InvalidArgument: aws-chunked encoding is not supported with the specified x-amz-content-sha256 value

The fix depends on your SDK:

  • Python (boto3): Switch to Signature V2 with signature_version='s3'. The boto3 Signature V4 implementation is tightly coupled to chunked encoding and cannot be configured otherwise.

  • Java 1.x: Add .withChunkedEncodingDisabled(true) to the client builder.

  • Java 2.x: Add .chunkedEncodingEnabled(false) to S3Configuration.

  • Go, Node.js: No action required — these SDKs do not use chunked encoding by default.

Technically, OSS requires S3-compatible requests to include x-amz-content-sha256: UNSIGNED-PAYLOAD and prohibits Transfer-Encoding: chunked. Most SDKs handle this correctly when chunked encoding is disabled.

SignatureDoesNotMatch

The most common cause is using an AWS AccessKey instead of an OSS AccessKey. Check the aws_access_key_id and aws_secret_access_key parameters in your code and make sure they are the AccessKey ID and AccessKey secret from the OSS console or RAM — not from your AWS account.

The second most common cause is clock drift. The S3 signature algorithm includes a timestamp, and OSS rejects requests where the client clock is more than 15 minutes off. Check your server time with date -u and correct it using ntpdate or your system's time synchronization service if needed.
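You can check for drift without touching system settings by comparing your local clock against the Date header any HTTP server returns. The sketch below is a minimal illustration of that comparison; the 15-minute limit matches the signature window described above:

```python
from datetime import datetime, timedelta, timezone
from email.utils import parsedate_to_datetime

MAX_SKEW = timedelta(minutes=15)  # OSS rejects signatures outside this window

def clock_skew(server_date_header: str, local_now: datetime) -> timedelta:
    """Absolute difference between a server Date header (RFC 1123) and the local clock."""
    server_time = parsedate_to_datetime(server_date_header)
    return abs(local_now - server_time)

def within_signature_window(server_date_header: str, local_now: datetime) -> bool:
    return clock_skew(server_date_header, local_now) <= MAX_SKEW

# Example: a local clock 20 minutes ahead of the server would be rejected
server_header = "Mon, 06 Jan 2025 12:00:00 GMT"
drifted_local = datetime(2025, 1, 6, 12, 20, tzinfo=timezone.utc)
print(within_signature_window(server_header, drifted_local))  # False
```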

A third cause is an incorrect endpoint. If the endpoint points to an AWS domain (s3.amazonaws.com) or the wrong OSS region, signature verification fails because the signed string includes the host. The endpoint must match the bucket's region exactly — for example, https://s3.oss-cn-hangzhou.aliyuncs.com for a bucket in China (Hangzhou).
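To see why the endpoint matters, recall that Signature V4 hashes a canonical request that includes the Host header. The sketch below is a simplified illustration, not a full SigV4 implementation; it shows that changing only the host changes the signed input, so the server-computed signature can no longer match:

```python
import hashlib

def canonical_request_hash(method: str, path: str, host: str, payload_hash: str) -> str:
    """Hash a simplified SigV4-style canonical request. Real SigV4 also
    covers query strings, additional headers, and a credential scope."""
    canonical = "\n".join([
        method,
        path,
        "",                # canonical query string (empty here)
        f"host:{host}\n",  # the Host header is part of the signed string
        "host",            # signed-headers list
        payload_hash,
    ])
    return hashlib.sha256(canonical.encode()).hexdigest()

right = canonical_request_hash("GET", "/file.txt",
                               "my-bucket.s3.oss-cn-hangzhou.aliyuncs.com",
                               "UNSIGNED-PAYLOAD")
wrong = canonical_request_hash("GET", "/file.txt",
                               "my-bucket.s3.amazonaws.com",
                               "UNSIGNED-PAYLOAD")
print(right != wrong)  # True: a different endpoint yields a different signature input
```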

For boto3 specifically: if signature_version='s3' is not set, boto3 defaults to Signature V4, which causes this error.

To verify whether your credentials and endpoint are correct, run:

ossutil ls oss://your-bucket \
  --access-key-id <your-access-key-id> \
  --access-key-secret <your-access-key-secret> \
  --endpoint oss-cn-hangzhou.aliyuncs.com

If the command lists objects successfully, your credentials are valid and the issue is in your SDK configuration.

NoSuchBucket or AccessDenied

The most common cause is an endpoint that doesn't match the bucket's region. Every OSS bucket belongs to a specific region — a bucket in China (Hangzhou) must be accessed via oss-cn-hangzhou.aliyuncs.com. OSS does not perform cross-region redirects, so using the wrong region endpoint always results in NoSuchBucket even if the bucket exists.

If the region is correct, check RAM permissions. Confirm that the RAM user associated with your AccessKey has been granted the necessary policies, such as oss:ListObjects, oss:GetObject, and oss:PutObject for the target bucket.

A third cause is bucket naming. OSS supports two URL styles:

  • Virtual-hosted-style: bucket-name.oss-cn-hangzhou.aliyuncs.com — the bucket name must be DNS-compliant and cannot contain underscores.

  • Path-style: oss-cn-hangzhou.aliyuncs.com/bucket-name — works with any valid bucket name.

If your bucket name contains an underscore, switch to path-style in your SDK configuration or rename the bucket.
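A quick pre-flight check for these naming rules can be written as follows. This is an illustrative helper, not an official validator; it covers the common DNS constraints of 3 to 63 characters, lowercase letters, digits, and hyphens:

```python
import re

# Virtual-hosted-style requires a DNS-compliant label:
# 3-63 chars, lowercase letters/digits/hyphens, no leading/trailing hyphen.
_DNS_BUCKET = re.compile(r"^[a-z0-9]([a-z0-9-]{1,61}[a-z0-9])?$")

def virtual_hosted_ok(bucket: str) -> bool:
    """Return True if the bucket name works with virtual-hosted-style URLs."""
    return bool(_DNS_BUCKET.match(bucket)) and len(bucket) >= 3

print(virtual_hosted_ok("my-bucket"))  # True
print(virtual_hosted_ok("my_bucket"))  # False: underscores need path-style
```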

Performance optimization

For large file transfers, AWS SDKs provide multipart upload abstractions that work directly with OSS, which is fully compatible with the S3 Multipart Upload API.

  • Python: Configure TransferConfig with multipart_threshold, max_concurrency, and multipart_chunksize. For files larger than 100 MB, tuning these parameters can significantly increase upload speed by parallelizing parts.

  • Java: Use the TransferManager class, which automatically selects multipart or single-part upload based on file size and handles concurrent transfers and retries.

  • Go: Use s3manager.Uploader instead of calling PutObject directly. The uploader handles concurrent multipart uploads and retry logic automatically.

  • Node.js: Use the Upload class from @aws-sdk/lib-storage. It supports streaming uploads, reducing memory usage for large files.
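The effect of these parameters comes down to simple arithmetic. The sketch below uses 8 MB defaults for both threshold and chunk size, matching boto3's TransferConfig defaults, to show when a multipart upload kicks in and how many parts it produces:

```python
import math

MB = 1024 * 1024

def plan_upload(size_bytes: int, threshold: int = 8 * MB, chunksize: int = 8 * MB):
    """Decide between single-part and multipart upload, the way transfer
    managers such as boto3's TransferConfig do (8 MB defaults)."""
    if size_bytes < threshold:
        return {"multipart": False, "parts": 1}
    return {"multipart": True, "parts": math.ceil(size_bytes / chunksize)}

print(plan_upload(5 * MB))    # {'multipart': False, 'parts': 1}
print(plan_upload(100 * MB))  # {'multipart': True, 'parts': 13}
```

Lowering the chunk size raises the part count, which increases parallelism (up to max_concurrency) at the cost of more requests.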

What's next