
Object Storage Service:Access OSS using an AWS SDK

Last Updated: Dec 31, 2025

OSS is compatible with the AWS S3 API. You can access OSS using an AWS SDK without changing your code. To do this, configure the OSS endpoint and access credentials.

  • Endpoint: Use an S3-compatible public endpoint (https://s3.oss-{region}.aliyuncs.com) or an internal network endpoint (https://s3.oss-{region}-internal.aliyuncs.com). Replace {region} with the actual region ID, such as cn-hangzhou. For a complete list of regions, see Regions and endpoints.

    Important

    Due to a policy change to improve compliance and security, starting March 20, 2025, new OSS users must use a custom domain name (CNAME) to perform data API operations on OSS buckets located in Chinese mainland regions. Default public endpoints are restricted for these operations. Refer to the official announcement for a complete list of the affected operations. If you access your data via HTTPS, you must bind a valid SSL Certificate to your custom domain. This is mandatory for OSS Console access, as the console enforces HTTPS.

  • Access credentials: Create an AccessKey with OSS access permissions in Resource Access Management (RAM).
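Most AWS SDKs also read credentials from the default provider chain, so instead of hard-coding the RAM AccessKey pair you can export it as the standard AWS environment variables. A minimal sketch; the values below are placeholders, not real credentials:

```shell
# Placeholder values -- substitute the AccessKey pair created in RAM.
export AWS_ACCESS_KEY_ID="yourAccessKeyId"
export AWS_SECRET_ACCESS_KEY="yourAccessKeySecret"
```

With these variables set, the client constructors in the examples below pick up the credentials automatically and only the endpoint needs to be configured in code.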

Java

SDK 2.x

import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.s3.S3Client;
import software.amazon.awssdk.services.s3.S3Configuration;
import java.net.URI;

S3Client s3Client = S3Client.builder()
    .endpointOverride(URI.create("https://s3.oss-cn-hangzhou.aliyuncs.com"))
    .region(Region.AWS_GLOBAL)
    .serviceConfiguration(
        S3Configuration.builder()
            .pathStyleAccessEnabled(false)
            .chunkedEncodingEnabled(false)
            .build()
    )
    .build();

SDK 1.x

import com.amazonaws.client.builder.AwsClientBuilder.EndpointConfiguration;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;

AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
    .withEndpointConfiguration(new EndpointConfiguration(
        "https://s3.oss-cn-hangzhou.aliyuncs.com", 
        "cn-hangzhou"))
    .withPathStyleAccessEnabled(false)
    .withChunkedEncodingDisabled(true)
    .build();

In SDK 1.x, calling close() on the S3ObjectInputStream returned by getObject immediately discards unread data. You must read the data completely before you close the stream.

import com.amazonaws.services.s3.model.S3Object;
import com.amazonaws.util.IOUtils;
import java.io.InputStream;

S3Object object = s3Client.getObject("my-bucket", "file.txt");
InputStream input = object.getObjectContent();

// Read the full payload before closing the stream.
byte[] data = IOUtils.toByteArray(input);

input.close();

Python

import boto3
from botocore.config import Config

s3 = boto3.client(
    's3',
    endpoint_url='https://s3.oss-cn-hangzhou.aliyuncs.com',
    config=Config(
        signature_version='s3',
        s3={'addressing_style': 'virtual'}
    )
)

Node.js

SDK v3

import { S3Client } from '@aws-sdk/client-s3';

const client = new S3Client({
    endpoint: 'https://s3.oss-cn-hangzhou.aliyuncs.com',
    region: 'cn-hangzhou'
});

SDK v2

const AWS = require('aws-sdk');

const s3 = new AWS.S3({
    endpoint: 'https://s3.oss-cn-hangzhou.aliyuncs.com',
    region: 'cn-hangzhou'
});

Go

SDK v2

import (
    "context"
    "github.com/aws/aws-sdk-go-v2/aws"
    awsconfig "github.com/aws/aws-sdk-go-v2/config"
    "github.com/aws/aws-sdk-go-v2/service/s3"
)

cfg, err := awsconfig.LoadDefaultConfig(context.TODO(),
    awsconfig.WithEndpointResolverWithOptions(
        aws.EndpointResolverWithOptionsFunc(func(service, region string, options ...interface{}) (aws.Endpoint, error) {
            return aws.Endpoint{
                URL: "https://s3.oss-cn-hangzhou.aliyuncs.com",
            }, nil
        }),
    ),
)
if err != nil {
    panic(err)
}
client := s3.NewFromConfig(cfg)

SDK v1

import (
    "github.com/aws/aws-sdk-go/aws"
    "github.com/aws/aws-sdk-go/aws/session"
    "github.com/aws/aws-sdk-go/service/s3"
)

sess := session.Must(session.NewSessionWithOptions(session.Options{
    Config: aws.Config{
        Endpoint: aws.String("https://s3.oss-cn-hangzhou.aliyuncs.com"),
        Region:   aws.String("cn-hangzhou"),
    },
    SharedConfigState: session.SharedConfigEnable,
}))
svc := s3.New(sess)

.NET

SDK 3.x and 2.x

The configuration is identical in both major versions:

using Amazon.S3;

var config = new AmazonS3Config
{
    ServiceURL = "https://s3.oss-cn-hangzhou.aliyuncs.com"
};
var client = new AmazonS3Client(config);

PHP

SDK 3.x

<?php
require_once __DIR__ . '/vendor/autoload.php';
use Aws\S3\S3Client;

$s3Client = new S3Client([
    'version' => '2006-03-01',
    'region'  => 'cn-hangzhou',
    'endpoint' => 'https://s3.oss-cn-hangzhou.aliyuncs.com'
]);

SDK 2.x

<?php
require_once __DIR__ . '/vendor/autoload.php';
use Aws\S3\S3Client;

$s3Client = S3Client::factory([
    'version' => '2006-03-01',
    'region'  => 'cn-hangzhou',
    'base_url' => 'https://s3.oss-cn-hangzhou.aliyuncs.com'
]);

Ruby

SDK 3.x

require 'aws-sdk-s3'

s3 = Aws::S3::Client.new(
  endpoint: 'https://s3.oss-cn-hangzhou.aliyuncs.com',
  region: 'cn-hangzhou'
)

SDK 2.x

require 'aws-sdk'

s3 = AWS::S3::Client.new(
  s3_endpoint: 's3.oss-cn-hangzhou.aliyuncs.com',
  region: 'cn-hangzhou',
  s3_force_path_style: false
)

C++

Requires SDK version 1.7.68 or later.

#include <aws/s3/S3Client.h>
#include <aws/core/client/ClientConfiguration.h>

Aws::Client::ClientConfiguration config;
config.endpointOverride = "s3.oss-cn-hangzhou.aliyuncs.com";
config.region = "cn-hangzhou";

Aws::S3::S3Client s3_client(config);

Browser

Frontend web applications must use Security Token Service (STS) temporary credentials. Do not hard-code permanent AccessKeys in the client. Your server calls the AssumeRole operation to obtain temporary credentials and returns them to the client. For more information, see Use STS temporary credentials to access OSS.

import { S3Client } from '@aws-sdk/client-s3';

// Obtain STS temporary credentials from the server.
async function getSTSCredentials() {
    const response = await fetch('https://your-server.com/api/sts-token');
    return await response.json();
}

// Initialize the S3 client with temporary credentials.
const client = new S3Client({
    region: 'cn-hangzhou',
    endpoint: 'https://s3.oss-cn-hangzhou.aliyuncs.com',
    credentials: async () => {
        const creds = await getSTSCredentials();
        return {
            accessKeyId: creds.accessKeyId,
            secretAccessKey: creds.secretAccessKey,
            sessionToken: creds.securityToken,
            expiration: new Date(creds.expiration)
        };
    }
});

Android

Mobile applications (Android) must use STS temporary credentials. Do not hard-code permanent AccessKeys in the client. Your server calls the AssumeRole operation to obtain temporary credentials and returns them to the client. For more information, see Use STS temporary credentials to access OSS.

import com.amazonaws.auth.AWSCredentials;
import com.amazonaws.auth.AWSCredentialsProvider;
import com.amazonaws.auth.BasicSessionCredentials;
import com.amazonaws.client.builder.AwsClientBuilder.EndpointConfiguration;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3Client;

// Implement a credentials provider to obtain STS temporary credentials from the server.
public class OSSCredentialsProvider implements AWSCredentialsProvider {
    @Override
    public AWSCredentials getCredentials() {
        // Obtain STS temporary credentials from your server.
        // Request https://your-server.com/api/sts-token
        String accessKeyId = fetchFromServer("accessKeyId");
        String secretKeyId = fetchFromServer("secretKeyId");
        String securityToken = fetchFromServer("securityToken");
        
        return new BasicSessionCredentials(accessKeyId, secretKeyId, securityToken);
    }
    
    @Override
    public void refresh() {
        // Refresh the credentials.
    }
}

// Create the S3 client.
AmazonS3 s3Client = AmazonS3Client.builder()
    .withCredentials(new OSSCredentialsProvider())
    .withEndpointConfiguration(new EndpointConfiguration(
        "https://s3.oss-cn-hangzhou.aliyuncs.com", ""))
    .build();

// Business logic
s3Client.putObject("my-bucket", "test.txt", "Hello OSS");

iOS

Mobile applications (iOS) must use STS temporary credentials. Do not hard-code permanent AccessKeys in the client. Your server calls the AssumeRole operation to obtain temporary credentials and returns them to the client. For more information, see Use STS temporary credentials to access OSS.

#import <AWSS3/AWSS3.h>

// Implement the credentials provider.
@interface OSSCredentialsProvider : NSObject <AWSCredentialsProvider>
@end

@implementation OSSCredentialsProvider

- (AWSTask<AWSCredentials *> *)credentials {
    return [[AWSTask taskWithResult:nil] continueWithBlock:^id(AWSTask *task) {
        // Obtain STS temporary credentials from the server.
        NSString *accessKey = [self fetchFromServer:@"accessKeyId"];
        NSString *secretKey = [self fetchFromServer:@"secretKeyId"];
        NSString *sessionToken = [self fetchFromServer:@"securityToken"];
        
        AWSCredentials *credentials = [[AWSCredentials alloc]
            initWithAccessKey:accessKey
            secretKey:secretKey
            sessionKey:sessionToken
            expiration:[NSDate dateWithTimeIntervalSinceNow:3600]];
        
        return [AWSTask taskWithResult:credentials];
    }];
}

@end

// Configure the S3 client.
AWSEndpoint *endpoint = [[AWSEndpoint alloc] initWithURLString:@"https://s3.oss-cn-hangzhou.aliyuncs.com"];
AWSServiceConfiguration *configuration = [[AWSServiceConfiguration alloc]
    initWithRegion:AWSRegionUnknown
    endpoint:endpoint
    credentialsProvider:[[OSSCredentialsProvider alloc] init]];

[AWSS3 registerS3WithConfiguration:configuration forKey:@"OSS"];
AWSS3 *s3 = [AWSS3 S3ForKey:@"OSS"];

// Business logic
AWSS3PutObjectRequest *request = [AWSS3PutObjectRequest new];
request.bucket = @"my-bucket";
request.key = @"test.txt";
request.body = [@"Hello OSS" dataUsingEncoding:NSUTF8StringEncoding];

[[s3 putObject:request] continueWithBlock:^id(AWSTask *task) {
    if (task.error) {
        NSLog(@"Error: %@", task.error);
    } else {
        NSLog(@"Success");
    }
    return nil;
}];

FAQ

Upload failed: InvalidArgument: aws-chunked encoding is not supported

Symptom: You receive the following error when you upload a file:

InvalidArgument: aws-chunked encoding is not supported with the specified x-amz-content-sha256 value

Root cause:

This is the most common issue that occurs when you access OSS using an AWS SDK. OSS supports the AWS Signature V4 algorithm, but with a difference in transfer encoding:

  • AWS S3 uses chunked encoding by default to transfer large files.

  • OSS does not support transfers that use chunked encoding.

SDK-specific behavior:

The Signature V4 implementation in some SDKs depends on chunked encoding:

  • Python (boto3): Signature V4 enforces chunked encoding, which cannot be disabled. Switch to Signature V2.

  • Java: You can disable chunked encoding in the configuration.

  • Go/Node.js: Chunked encoding is not used by default. No special handling is required.

Solutions by SDK:

| SDK | Solution | Reason |
| --- | --- | --- |
| Python (boto3) | Use Signature V2: signature_version='s3' | The boto3 Signature V4 implementation is tied to chunked encoding and cannot be disabled. |
| Java 1.x | Signature V4 + .withChunkedEncodingDisabled(true) | Chunked encoding can be disabled. |
| Java 2.x | Signature V4 + .chunkedEncodingEnabled(false) | Chunked encoding can be disabled. |
| Go v1 | Signature V4 | Does not use chunked encoding by default. |
| Go v2 | Signature V4; use the Manager carefully for large file uploads. | The Manager feature may use chunked encoding. |
| Node.js v3 | Signature V4 | Does not use chunked encoding by default. |

Python example, before and after the fix:

# Incorrect configuration (boto3's V4 implementation uses chunked encoding)
s3 = boto3.client('s3',
    endpoint_url='https://s3.oss-cn-hongkong.aliyuncs.com',
    config=Config(signature_version='v4'))

# Correct configuration (boto3 uses the V2 signature)
s3 = boto3.client('s3',
    endpoint_url='https://s3.oss-cn-hongkong.aliyuncs.com',
    config=Config(signature_version='s3'))  # The V2 signature is the stable solution for boto3

Technical details:

The OSS V4 signature follows the AWS Signature Version 4 specification, with the following requirements:

  • The request header includes x-oss-content-sha256: UNSIGNED-PAYLOAD.

  • The Transfer-Encoding: chunked method is not used.

Most SDKs support compatibility through configuration. However, the boto3 Signature V4 implementation is tightly coupled with chunked encoding. Therefore, you must use Signature V2 with boto3.

SDK and signature version selection

Version selection reference:

| Language | SDK version | Signature version | Configuration notes |
| --- | --- | --- | --- |
| Python | Latest boto3 | Signature V2 (s3) | The boto3 V4 implementation is not compatible with OSS. |
| Java 1.x | Latest 1.x | Signature V4 | Chunked encoding must be disabled. |
| Java 2.x | Latest 2.x | Signature V4 | Chunked encoding must be disabled. |
| Node.js | v3 | Signature V4 (default) | - |
| Go v1 | Latest v1 | Signature V4 (default) | - |
| Go v2 | Latest v2 | Signature V4 (default) | Use the Manager carefully for large file uploads. |

Signature version details:

  • OSS V4 signature: OSS fully supports the AWS Signature V4 algorithm.

  • V2 signature: Signature V2 is required for boto3 because of SDK implementation limitations.

  • Compatibility: Except for boto3, all other SDKs can use Signature V4 to access OSS.

Version selection for new projects:

| Scenario | Recommended solution | Reason |
| --- | --- | --- |
| New Python project | boto3 + Signature V2 | boto3 does not currently support OSS V4. |
| New Java project | Java 2.x + Signature V4 | Better performance. |
| New Node.js project | v3 + Signature V4 | - |
| New Go project | Go v1 + Signature V4 | Recommended. |
| Existing project migration | Keep the current SDK version. | Minimizes the risk of changes. |

Signature error: SignatureDoesNotMatch

You may encounter a SignatureDoesNotMatch error. This error indicates that the signature calculated by the server does not match the signature provided by the client.

The most common cause is using an AWS AccessKey instead of an OSS AccessKey in your code. AWS access credentials and OSS access credentials are two separate systems and cannot be used interchangeably. Check for parameters such as aws_access_key_id and aws_secret_access_key in your code. Make sure that you are using the AccessKey ID and AccessKey secret created in the OSS console.

The second most common cause is server clock drift. The S3 signature algorithm includes a timestamp in the signature, and the OSS server checks the difference between the request time and the server time. If your server's clock is more than 15 minutes out of sync with standard time, all requests are rejected. You can check the server's UTC time with the date -u command. If the time is incorrect, use ntpdate or a system time synchronization service to correct it.
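To rule out clock drift, you can compare your local UTC clock against the Date header returned by any HTTP server, including the OSS endpoint. The helper below is a minimal sketch; clock_drift_seconds and MAX_SKEW_SECONDS are illustrative names, not part of any SDK:

```python
from datetime import datetime, timezone
from email.utils import parsedate_to_datetime

# Maximum allowed clock skew for S3-style signatures (15 minutes).
MAX_SKEW_SECONDS = 15 * 60

def clock_drift_seconds(server_date_header, now=None):
    """Return the absolute drift in seconds between an HTTP Date header and local UTC time."""
    server_time = parsedate_to_datetime(server_date_header)
    if now is None:
        now = datetime.now(timezone.utc)
    return abs((now - server_time).total_seconds())

# A fixed "local" time makes the check deterministic for illustration.
local = datetime(2025, 3, 20, 12, 20, 0, tzinfo=timezone.utc)
drift = clock_drift_seconds("Thu, 20 Mar 2025 12:00:00 GMT", now=local)
print(drift > MAX_SKEW_SECONDS)  # True: a 1200 s drift exceeds the 15-minute limit
```

In practice you would read the Date header from a HEAD request to your endpoint and call the helper with the default now.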

A third cause is an incorrect endpoint configuration. If the endpoint points to an AWS domain name, such as s3.amazonaws.com, or uses the wrong OSS region, the signature calculation fails. The S3-compatible format for an OSS endpoint is https://s3.oss-{region}.aliyuncs.com. The region must match the region where the bucket is located, such as cn-hangzhou or cn-beijing.

Another potential cause is specific to using boto3. If you do not configure signature_version='s3', boto3 uses the default V4 signature algorithm, which causes the signature calculation to fail. The correct boto3 configuration must include the Config(signature_version='s3') parameter.

A simple way to verify your configuration is to use the ossutil command line interface. Run the following command: ossutil ls oss://your-bucket --access-key-id <key> --access-key-secret <secret> --endpoint oss-cn-hangzhou.aliyuncs.com. If the command successfully lists the contents of the bucket, your access credentials and endpoint are correct. The problem likely lies in your code's configuration.

Bucket access errors

NoSuchBucket or AccessDenied errors indicate that the specified bucket cannot be accessed. The most common cause is a mismatch between the endpoint and the bucket's region.

Each OSS bucket belongs to a specific region, such as cn-hangzhou or cn-beijing. When you access a bucket, the endpoint must match the bucket's region. For example, if your bucket is in the Hangzhou region, the endpoint is oss-cn-hangzhou.aliyuncs.com. You cannot use the Beijing region endpoint oss-cn-beijing.aliyuncs.com. Unlike AWS S3, which allows cross-region access and performs automatic redirection, OSS does not support cross-region access. Using an incorrect endpoint results in a NoSuchBucket error.
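Deriving the endpoint mechanically from the bucket's region avoids this mismatch. A small sketch; the function name is illustrative, not an SDK API:

```python
def oss_s3_endpoint(region, internal=False):
    """Build the S3-compatible OSS endpoint for a region ID such as 'cn-hangzhou'."""
    suffix = "-internal" if internal else ""
    return f"https://s3.oss-{region}{suffix}.aliyuncs.com"

print(oss_s3_endpoint("cn-hangzhou"))
# https://s3.oss-cn-hangzhou.aliyuncs.com
print(oss_s3_endpoint("cn-beijing", internal=True))
# https://s3.oss-cn-beijing-internal.aliyuncs.com
```

Keeping the region in one configuration value and computing the endpoint from it means the two can never disagree.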

A second cause is an issue with the RAM permission configuration. Check whether the RAM user associated with your OSS AccessKey has permission to access the target bucket. On the RAM management page of the OSS console, confirm that the user is granted the necessary permissions, such as oss:ListObjects, oss:GetObject, and oss:PutObject.

A third cause is related to bucket naming conventions. OSS supports two URL styles: virtual host style (bucket-name.oss-cn-hangzhou.aliyuncs.com) and path style (oss-cn-hangzhou.aliyuncs.com/bucket-name). When you use the virtual host style, the bucket name must comply with DNS naming conventions and cannot contain underscores. If your bucket name contains an underscore, you must use the path style in your SDK configuration or create a new bucket with a compliant name.
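A quick pre-flight check can catch names that break the virtual host style, such as those containing underscores. This validation sketch approximates the DNS rules described above (lowercase letters, digits, and hyphens; 3 to 63 characters; no leading or trailing hyphen) and is not an official OSS API:

```python
import re

# First and last characters must be a lowercase letter or digit; hyphens allowed inside.
_BUCKET_RE = re.compile(r"^[a-z0-9](?:[a-z0-9-]{1,61}[a-z0-9])?$")

def dns_compliant_bucket_name(name):
    """Return True if the bucket name is safe for virtual-host-style URLs."""
    return bool(_BUCKET_RE.match(name)) and 3 <= len(name) <= 63

print(dns_compliant_bucket_name("my-bucket"))   # True
print(dns_compliant_bucket_name("my_bucket"))   # False: underscores break DNS naming
```

Run this check once at configuration time; a failing name means you must either switch the SDK to path-style addressing or create a bucket with a compliant name.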

Performance optimization

Uploading and downloading large files are common requirements for object storage applications. The AWS SDKs provide multiple transfer acceleration mechanisms that are also effective for OSS.

When you use Python boto3, you can configure multipart upload parameters with TransferConfig. When a file is larger than the configured threshold, boto3 automatically splits the file into multiple parts and uploads them concurrently. This significantly increases the transfer speed. The multipart_threshold parameter specifies the file size threshold for enabling multipart uploads. max_concurrency specifies the number of concurrent upload threads, and multipart_chunksize specifies the size of each part. Correctly configuring these parameters can increase the upload speed for large files over 100 MB several times.
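The interplay of these parameters can be reasoned about with simple arithmetic. The sketch below is an illustrative helper, not part of boto3; it computes how many parts a given multipart_chunksize produces and flags the 10,000-part limit of the S3 multipart upload API:

```python
import math

MAX_PARTS = 10_000  # Part-count limit of the S3 multipart upload API

def plan_multipart(file_size, multipart_threshold=8 * 1024 * 1024,
                   multipart_chunksize=8 * 1024 * 1024):
    """Return (uses_multipart, part_count) for a hypothetical transfer.

    The defaults mirror boto3's TransferConfig defaults of 8 MB for both
    multipart_threshold and multipart_chunksize.
    """
    if file_size < multipart_threshold:
        return (False, 1)  # Below the threshold: a single PutObject request
    parts = math.ceil(file_size / multipart_chunksize)
    if parts > MAX_PARTS:
        raise ValueError("increase multipart_chunksize: too many parts")
    return (True, parts)

# A 100 MB file with the default 8 MB chunk size uploads in 13 parts.
print(plan_multipart(100 * 1024 * 1024))  # (True, 13)
```

For very large files, this arithmetic shows why multipart_chunksize must grow with file size: at 8 MB per part the 10,000-part limit caps a single upload at roughly 78 GB.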

When you use the Java SDK, the TransferManager class provides features such as multipart upload, concurrent transfer, and automatic retries. TransferManager automatically selects the optimal transfer strategy based on the file size, which eliminates the need for you to manually handle the multipart logic.

When you use the Go SDK, you should call s3manager.Uploader instead of PutObject directly. The Uploader has a built-in concurrent multipart upload feature that automatically splits large files for concurrent upload and handles the retry logic for failed uploads.

When you use the Node.js SDK, you can use the Upload class from the @aws-sdk/lib-storage package. This class supports streaming uploads: it can start uploading a file while the file is still being read, which reduces memory usage.

All of these transfer acceleration mechanisms are based on the S3 multipart upload API. Because OSS is fully compatible with this API, you can use these mechanisms directly without changing your code.