
Object Storage Service: How to create a bucket

Last Updated: Mar 11, 2024

A bucket is a container for objects in Object Storage Service (OSS). Before you upload an object to OSS, you must first create a bucket to store the object. You can configure various attributes for a bucket, such as the access control list (ACL) and storage class. You can create buckets of different storage classes and store data in them based on your business requirements.

Usage notes

  • You are not charged for creating a bucket. You are charged for object storage, object access, and other usage. For more information, see Billing overview.

  • The capacity of a bucket is scalable. You do not need to purchase capacity before you use a bucket.

Limits

  • An Alibaba Cloud account can have up to 100 buckets per region.

  • After a bucket is created, you cannot modify its name, region, or storage class.

  • The capacity of a single bucket is unlimited.

Procedure

Use the OSS console

  1. Log on to the OSS console.

  2. In the left-side navigation pane, click Buckets.

  3. On the Buckets page, click Create Bucket.

  4. In the Create Bucket panel, configure the parameters described in the following table.

    Parameter

    Description

    Bucket Name

    The name of the bucket that you want to create. The bucket name must meet the following requirements (a client-side validation sketch is provided after this procedure):

    • The name must be globally unique in OSS.

    • The name can contain lowercase letters, digits, and hyphens (-).

    • The name must start and end with a lowercase letter or a digit.

    • The name must be 3 to 63 characters in length.

    Region

    The region in which the bucket is located.

    To access OSS from an Elastic Compute Service (ECS) instance over an internal network, select the region in which the ECS instance is located. For more information, see OSS domain names.

    Note

    Before you create a bucket in a region in the Chinese mainland, you must complete real-name registration on the Real-name Registration page.

    Endpoint

    The public endpoint of the region in which the bucket is located.

    Storage Class

    The storage class of the bucket. Valid values:

    • Standard: provides highly reliable, highly available, and high-performance storage services that can process frequent data access. Standard is suitable for various business applications, such as social networking applications, image, audio, and video resource sharing applications, large websites, and big data analytics.

    • IA: provides highly durable storage services at lower prices compared with the Standard storage class. Infrequent Access (IA) has a minimum billable size of 64 KB and a minimum billable storage duration of 30 days. You can access IA objects in real time. You are charged data retrieval fees when you access IA objects. IA is suitable for data that is infrequently accessed, such as data accessed once or twice a month.

    • Archive: provides highly durable storage services at lower prices compared with Standard and IA. Archive has a minimum billable size of 64 KB and a minimum billable storage duration of 60 days. You can access an Archive object after it is restored or real-time access of Archive objects is enabled. The amount of time that is required to restore an Archive object is approximately 1 minute. You are charged data retrieval fees if you restore an Archive object. If you access an Archive object after real-time access of Archive objects is enabled, you are charged Archive data retrieval fees based on the size of accessed Archive data. Archive is suitable for data that needs to be stored for a long period of time, such as archival data, medical images, scientific materials, and video footage.

    • Cold Archive: provides highly durable storage services at lower prices compared with Archive. Cold Archive has a minimum billable size of 64 KB and a minimum billable storage duration of 180 days. You must restore a Cold Archive object before you can access the object. The amount of time that is required to restore a Cold Archive object varies based on the object size and the restoration mode. You are charged data retrieval fees and API operation calling fees when you restore a Cold Archive object. Cold Archive is suitable for storing cold data over an ultra-long period of time, including data that must be retained for an extended period of time due to compliance requirements, raw data that is accumulated over an extended period of time in the big data and AI fields, retained media resources in the film and television industries, and archived videos from the online education industry.

      Note

      Cold Archive is supported in the following regions: China (Hangzhou), China (Shanghai), China (Qingdao), China (Beijing), China (Zhangjiakou), China (Hohhot), China (Ulanqab), China (Shenzhen), China (Heyuan), China (Guangzhou), China (Chengdu), China (Hong Kong), US (Silicon Valley), US (Virginia), Japan (Tokyo), Singapore, Australia (Sydney), Malaysia (Kuala Lumpur), Indonesia (Jakarta), Philippines (Manila), India (Mumbai), Germany (Frankfurt), UK (London), and UAE (Dubai).

    • Deep Cold Archive: provides highly durable storage services at lower prices compared with Cold Archive. Deep Cold Archive has a minimum billable size of 64 KB and a minimum billable storage duration of 180 days. You must restore a Deep Cold Archive object before you can access it. The amount of time that is required to restore a Deep Cold Archive object varies based on the object size and restoration mode. You are charged data retrieval fees and API operation calling fees when you restore a Deep Cold Archive object. Deep Cold Archive is suitable for storing extremely cold data for a long period of time, such as raw data that is accumulated over an extended period of time in the big data and AI fields, media data that requires long-term retention, data that must be retained for a long period of time due to regulatory and policy compliance requirements, and data that needs to be migrated from tapes to the cloud for long-term storage.

      Note

      Deep Cold Archive is supported in the following regions: China (Hangzhou), China (Shanghai), China (Beijing), China (Zhangjiakou), China (Ulanqab), China (Shenzhen), and Singapore.

    For more information about storage classes, see Overview.

    Redundancy Type

    The redundancy type of the bucket. Valid values:

    • LRS

      Locally redundant storage (LRS) stores multiple copies of your data on multiple devices across different facilities within the same zone. LRS provides data durability and availability even if hardware failures occur.

    • ZRS (Recommended)

      ZRS stores multiple copies of your data across multiple zones in the same region. Your data is still accessible even if a zone becomes unavailable.

      Important

      ZRS is supported in the following regions: China (Shenzhen), China (Beijing), China (Hangzhou), China (Shanghai), China (Hong Kong), Singapore, and Indonesia (Jakarta). ZRS increases your storage costs and cannot be disabled after it is enabled. We recommend that you exercise caution when you select this redundancy type.

      For more information, see Create a ZRS bucket.

    ACL

    The ACL of the bucket.

    • Private: Only the owner and authorized users of this bucket can read and write objects in the bucket. Other users cannot access the objects in the bucket.

    • Public Read: Only the owner and authorized users of this bucket can write objects in the bucket. Other users, including anonymous users, can only read objects in the bucket.

      Warning

      If you set the ACL of a bucket to Public Read, all users can access the objects in the bucket over the Internet. This may result in unauthorized access to the data in your bucket, and you may be charged unexpected fees. We recommend that you exercise caution when you set the ACL of a bucket to Public Read.

    • Public Read/Write: All users, including anonymous users, can read and write objects in the bucket.

      Warning

      If you set the ACL of a bucket to Public Read/Write, all users can access the objects in the bucket and write data to the bucket over the Internet. This may result in unauthorized access to the data in your bucket, and you may be charged unexpected fees. If a user uploads prohibited data or information, your legitimate interests and rights may be infringed. We recommend that you do not set the bucket ACL to Public Read/Write unless necessary.

    Resource Group

    The resource group to which the bucket belongs. Resource groups are used to group your resources by usage, permission, and region. You can use resource groups to organize your resources in a hierarchical manner and group resources based on users and projects. For more information, see Resource group overview.

    Versioning

    Select whether to enable versioning. Valid values:

    • Activate: If versioning is enabled for a bucket, an object that is overwritten or deleted is saved as a previous version of the object. Versioning allows you to recover objects in the bucket to a previous version and protects data from accidental overwriting and deletion. For more information, see Overview.

    • Not Activated: If versioning is not enabled for a bucket, overwritten or deleted data cannot be recovered.

    Encryption Method

    The method to encrypt objects in the bucket.

    Note

    Server-side encryption is supported in the following regions: China (Hangzhou), China (Shanghai), China (Qingdao), China (Beijing), China (Zhangjiakou), China (Hohhot), China (Ulanqab), China (Shenzhen), China (Heyuan), China (Guangzhou), China (Chengdu), China (Hong Kong), US (Silicon Valley), US (Virginia), Japan (Tokyo), South Korea (Seoul), Singapore, Australia (Sydney), Malaysia (Kuala Lumpur), Indonesia (Jakarta), Philippines (Manila), Thailand (Bangkok), India (Mumbai), Germany (Frankfurt), UK (London), and UAE (Dubai).

    • None: Server-side encryption is disabled.

    • OSS-Managed: Keys managed by OSS are used to encrypt objects in the bucket. OSS encrypts each object with a different key. OSS also uses periodically rotated master keys to encrypt cryptographic keys.

    • KMS: The default CMK managed by KMS or the specified CMK is used to encrypt and decrypt objects.

      To use KMS-managed keys, you must activate KMS. For more information, see Purchase a dedicated KMS instance.

    • Encryption Algorithm: Only AES-256 is supported.

    • CMK: You can configure this parameter if you set Encryption Method to KMS. You can select one of the following CMK types:

      • alias/acs/oss: The default CMK stored in KMS is used to encrypt objects and decrypt objects when the objects are downloaded.

      • CMK ID: The keys generated by a specified CMK are used to encrypt different objects and the specified CMK ID is recorded in the metadata of the encrypted object. Objects are decrypted when they are downloaded by users who are granted decryption permissions. Before you specify a CMK ID, you must create a normal key or an external key in the same region as the bucket in the KMS console. For more information, see Create a CMK.

    Real-time Log Query

    If you want to query OSS access logs of the previous seven days free of charge, turn on Real-time Log Query.

    For more information, see Real-time log query.

    If you do not want to query real-time logs, retain the default setting, which is Not Activated.

    Scheduled Backup

    If you want to periodically back up your OSS data, turn on Scheduled Backup. In this case, OSS automatically creates a backup plan, based on which Cloud Backup performs a data backup once each day and retains a backup for seven days.

    Important
    • Scheduled backup is supported in the following regions: China (Hangzhou), China (Shanghai), China (Shenzhen), China (Beijing), China (Zhangjiakou), China (Hohhot), China (Chengdu), China (Hong Kong), Germany (Frankfurt), US (Virginia), Japan (Tokyo), Singapore, Australia (Sydney), Indonesia (Jakarta), Malaysia (Kuala Lumpur), India (Mumbai), and US (Silicon Valley).

    • Scheduled backup is not supported for IA, Archive, Cold Archive, and Deep Cold Archive buckets.

    • Scheduled backup does not support symbolic links, object access control lists (ACLs), and objects of the Archive, Cold Archive, and Deep Cold Archive storage classes.

    • If Cloud Backup is not activated or Cloud Backup is not authorized to access OSS, scheduled backup plans cannot be created.

    For more information, see Configure scheduled backup.

    If you do not want to periodically back up OSS data, retain the default setting, which is Not Activated.

    OSS-HDFS

    If you want to access OSS by using JindoSDK to build a data lake, enable OSS-HDFS: move the pointer over the OSS-HDFS toggle and click Activate Now.

    Important
    • OSS-HDFS is supported in the following regions: China (Hangzhou), China (Shanghai), China (Qingdao), China (Beijing), China (Ulanqab), China (Shenzhen), China (Guangzhou), China (Zhangjiakou), China (Hong Kong), Japan (Tokyo), Singapore, Germany (Frankfurt), US (Silicon Valley), US (Virginia), and Indonesia (Jakarta). To use OSS-HDFS, contact technical support. OSS-HDFS cannot be disabled after it is enabled. Exercise caution when you enable OSS-HDFS.

    • OSS-HDFS cannot be enabled for Archive or Cold Archive buckets.

    Hierarchical Namespace

    If you want to rename a directory or an object in the bucket, enable the hierarchical namespace feature.

    Important
    • You can enable the hierarchical namespace feature for a bucket only when you create the bucket. The hierarchical namespace feature cannot be disabled after it is enabled. Exercise caution when you enable the hierarchical namespace feature.

    • After you enable this feature for a bucket, specific features of OSS become unavailable for the bucket. For more information about the features that are not supported for a bucket for which the hierarchical namespace feature is enabled, see Use hierarchical namespace.

    • The hierarchical namespace feature is available only in the following regions: Australia (Sydney), US (Silicon Valley), Japan (Tokyo), India (Mumbai), UK (London), and Malaysia (Kuala Lumpur).

  5. Click OK.
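The bucket naming rules in the preceding table can also be checked on the client before you send a request. The following is a minimal sketch in Java; the class name BucketNameCheck, the helper isValidBucketName, and the regular expression are illustrative and only encode the rules listed above. Global uniqueness can only be confirmed by OSS when the bucket is created.

import java.util.regex.Pattern;

public class BucketNameCheck {
    // Encodes the listed rules: lowercase letters, digits, and hyphens; the name must
    // start and end with a lowercase letter or a digit and be 3 to 63 characters long.
    private static final Pattern BUCKET_NAME_PATTERN =
            Pattern.compile("^[a-z0-9][a-z0-9-]{1,61}[a-z0-9]$");

    public static boolean isValidBucketName(String name) {
        return name != null && BUCKET_NAME_PATTERN.matcher(name).matches();
    }

    public static void main(String[] args) {
        System.out.println(isValidBucketName("examplebucket"));  // true
        System.out.println(isValidBucketName("Example_Bucket")); // false: uppercase letter and underscore
        System.out.println(isValidBucketName("ab"));             // false: shorter than 3 characters
    }
}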

Use ossbrowser

You can use ossbrowser to perform the same bucket-level operations that you can perform in the OSS console. You can follow the on-screen instructions in ossbrowser to create a bucket. For more information about how to use ossbrowser, see Use ossbrowser.

Use OSS SDKs

The following sample code shows how to create a bucket by using OSS SDKs for common programming languages. The examples are grouped by language. For more information about how to create a bucket by using OSS SDKs for other programming languages, see Overview.

Java

import com.aliyun.oss.ClientException;
import com.aliyun.oss.OSS;
import com.aliyun.oss.common.auth.*;
import com.aliyun.oss.OSSClientBuilder;
import com.aliyun.oss.OSSException;
import com.aliyun.oss.model.*;

public class Demo {

    public static void main(String[] args) throws Exception {
        // Specify the endpoint of the region in which the bucket is located. For example, if the bucket is located in the China (Hangzhou) region, set the endpoint to https://oss-cn-hangzhou.aliyuncs.com. 
        String endpoint = "yourEndpoint";
        // Obtain access credentials from environment variables. Before you run the sample code, make sure that the OSS_ACCESS_KEY_ID and OSS_ACCESS_KEY_SECRET environment variables are configured. 
        EnvironmentVariableCredentialsProvider credentialsProvider = CredentialsProviderFactory.newEnvironmentVariableCredentialsProvider();
        // Specify the name of the bucket. 
        String bucketName = "examplebucket";
        // Specify the ID of the resource group. If you do not specify a resource group ID, the bucket belongs to the default resource group. 
        //String rsId = "rg-aek27tc****";

        // Create an OSSClient instance. 
        OSS ossClient = new OSSClientBuilder().build(endpoint, credentialsProvider);

        try {
            // Create a bucket and enable the hierarchical namespace feature for the bucket. 
            CreateBucketRequest createBucketRequest = new CreateBucketRequest(bucketName).withHnsStatus(HnsStatus.Enabled);
            // The following code provides an example on how to specify the storage class, access control list (ACL), and redundancy type when you create the bucket. 
            // In this example, the storage class of the bucket is Standard. 
            createBucketRequest.setStorageClass(StorageClass.Standard);
            // The default redundancy type of the bucket is DataRedundancyType.LRS. 
            createBucketRequest.setDataRedundancyType(DataRedundancyType.LRS);
            // Set the ACL of the bucket to public-read. The default ACL is private. 
            createBucketRequest.setCannedACL(CannedAccessControlList.PublicRead);
            // When you create a bucket in a region that supports resource groups, you can configure a resource group for the bucket. 
            //createBucketRequest.setResourceGroupId(rsId);

            // Create the bucket. 
            ossClient.createBucket(createBucketRequest);
        } catch (OSSException oe) {
            System.out.println("Caught an OSSException, which means your request made it to OSS, "
                    + "but was rejected with an error response for some reason.");
            System.out.println("Error Message:" + oe.getErrorMessage());
            System.out.println("Error Code:" + oe.getErrorCode());
            System.out.println("Request ID:" + oe.getRequestId());
            System.out.println("Host ID:" + oe.getHostId());
        } catch (ClientException ce) {
            System.out.println("Caught an ClientException, which means the client encountered "
                    + "a serious internal problem while trying to communicate with OSS, "
                    + "such as not being able to access the network.");
            System.out.println("Error Message:" + ce.getMessage());
        } finally {
            if (ossClient != null) {
                ossClient.shutdown();
            }
        }
    }
}

PHP

<?php
if (is_file(__DIR__ . '/../autoload.php')) {
    require_once __DIR__ . '/../autoload.php';
}
if (is_file(__DIR__ . '/../vendor/autoload.php')) {
    require_once __DIR__ . '/../vendor/autoload.php';
}

use OSS\Credentials\EnvironmentVariableCredentialsProvider;
use OSS\OssClient;
use OSS\CoreOssException;

// Obtain access credentials from environment variables. Before you run the code, make sure that the OSS_ACCESS_KEY_ID and OSS_ACCESS_KEY_SECRET environment variables are configured.  
$provider = new EnvironmentVariableCredentialsProvider();
// In this example, the endpoint of the China (Hangzhou) region is used. Specify the endpoint based on your business requirements. 
$endpoint = "http://oss-cn-hangzhou.aliyuncs.com";
// Specify the name of the bucket. Example: examplebucket. 
$bucket= "examplebucket";
try {
    $config = array(
        "provider" => $provider,
        "endpoint" => $endpoint,
    );
    $ossClient = new OssClient($config);
    // Set the storage class of the bucket to Infrequent Access (IA). The default storage class is Standard. 
    $options = array(
        OssClient::OSS_STORAGE => OssClient::OSS_STORAGE_IA
    );
    // Set the ACL of the bucket to public-read. The default bucket ACL is private. 
    $ossClient->createBucket($bucket, OssClient::OSS_ACL_TYPE_PUBLIC_READ, $options);
} catch (OssException $e) {
    printf(__FUNCTION__ . ": FAILED\n");
    printf($e->getMessage() . "\n");
    return;
}
print(__FUNCTION__ . ": OK" . "\n");        

Node.js

const OSS = require('ali-oss');

const client = new OSS({
  // Specify the region in which the bucket is located. For example, if the bucket is located in the China (Hangzhou) region, set the region to oss-cn-hangzhou. 
  region: 'yourregion',
  // Obtain access credentials from environment variables. Before you run the sample code, make sure that you have configured environment variables OSS_ACCESS_KEY_ID and OSS_ACCESS_KEY_SECRET. 
  accessKeyId: process.env.OSS_ACCESS_KEY_ID,
  accessKeySecret: process.env.OSS_ACCESS_KEY_SECRET
});

// Create the bucket. 
async function putBucket() {
  try {
    const options = {
      storageClass: 'Standard', // By default, the storage class of a bucket is Standard. To set the storage class of the bucket to Archive, set storageClass to Archive. 
      acl: 'private', // By default, the access control list (ACL) of a bucket is private. To set the ACL of the bucket to public read, set acl to public-read. 
      dataRedundancyType: 'LRS' // By default, the redundancy type of a bucket is locally redundant storage (LRS). To set the redundancy type of the bucket to zone-redundant storage (ZRS), set dataRedundancyType to ZRS. 
    }
    // Specify the name of the bucket. 
    const result = await client.putBucket('examplebucket', options);
    console.log(result);
  } catch (err) {
    console.log(err);
  }
}

putBucket();        

Python

# -*- coding: utf-8 -*-
import oss2
from oss2.credentials import EnvironmentVariableCredentialsProvider

# Obtain access credentials from the environment variables. Before you run this sample code, make sure that you have configured environment variables OSS_ACCESS_KEY_ID and OSS_ACCESS_KEY_SECRET. 
auth = oss2.ProviderAuth(EnvironmentVariableCredentialsProvider())
# Specify the endpoint of the region in which the bucket is located. For example, if the bucket is located in the China (Hangzhou) region, set the endpoint to https://oss-cn-hangzhou.aliyuncs.com. 
# Specify the name of the bucket. Example: examplebucket. 
bucket = oss2.Bucket(auth, 'https://oss-cn-hangzhou.aliyuncs.com', 'examplebucket')

# Create the bucket. 
# The following sample code provides an example on how to specify the storage class, access control list (ACL), and redundancy type when you create the bucket. 
# In this example, the storage class is Standard, the ACL is private, and the redundancy type is zone-redundant storage (ZRS). 
# bucketConfig = oss2.models.BucketCreateConfig(oss2.BUCKET_STORAGE_CLASS_STANDARD, oss2.BUCKET_DATA_REDUNDANCY_TYPE_ZRS)
# bucket.create_bucket(oss2.BUCKET_ACL_PRIVATE, bucketConfig)
bucket.create_bucket()

C#

using System;
using Aliyun.OSS;

// Specify the endpoint of the region in which the bucket is located. For example, if the bucket is located in the China (Hangzhou) region, set the endpoint to https://oss-cn-hangzhou.aliyuncs.com. 
var endpoint = "yourEndpoint";
// Obtain access credentials from environment variables. Before you run the sample code, make sure that the OSS_ACCESS_KEY_ID and OSS_ACCESS_KEY_SECRET environment variables are configured. 
var accessKeyId = Environment.GetEnvironmentVariable("OSS_ACCESS_KEY_ID");
var accessKeySecret = Environment.GetEnvironmentVariable("OSS_ACCESS_KEY_SECRET");
// Specify the name of the bucket. 
var bucketName = "examplebucket";

// Initialize an OssClient instance. 
var client = new OssClient(endpoint, accessKeyId, accessKeySecret);
// Create a bucket. 
try
{
    var request = new CreateBucketRequest(bucketName);
    // Set the access control list (ACL) of the bucket to PublicRead. The default value is private. 
    request.ACL = CannedAccessControlList.PublicRead;
    // Set the redundancy type of the bucket to zone-redundant storage (ZRS). 
    request.DataRedundancyType = DataRedundancyType.ZRS;
    client.CreateBucket(request);
    Console.WriteLine("Create bucket succeeded");
}
catch (Exception ex)
{
    Console.WriteLine("Create bucket failed. {0}", ex.Message);
}

Android

// The following sample assumes that an OSSClient instance named oss has been created and initialized. 
// Construct a request to create a bucket. 
// Specify the name of the bucket. 
CreateBucketRequest createBucketRequest = new CreateBucketRequest("examplebucket");
// Specify the access control list (ACL) of the bucket. 
// createBucketRequest.setBucketACL(CannedAccessControlList.Private);
// Specify the storage class of the bucket. 
// createBucketRequest.setBucketStorageClass(StorageClass.Standard);

// Create the bucket asynchronously. 
OSSAsyncTask createTask = oss.asyncCreateBucket(createBucketRequest, new OSSCompletedCallback<CreateBucketRequest, CreateBucketResult>() {
    @Override
    public void onSuccess(CreateBucketRequest request, CreateBucketResult result) {
        Log.d("asyncCreateBucket", "Success");
    }
    @Override
    public void onFailure(CreateBucketRequest request, ClientException clientException, ServiceException serviceException) {
        // Handle request exceptions. 
        if (clientException != null) {
            // Handle client exceptions, such as network exceptions. 
            clientException.printStackTrace();
        }
        if (serviceException != null) {
            // Handle service exceptions. 
            Log.e("ErrorCode", serviceException.getErrorCode());
            Log.e("RequestId", serviceException.getRequestId());
            Log.e("HostId", serviceException.getHostId());
            Log.e("RawMessage", serviceException.getRawMessage());
        }
    }
});

Go

package main

import (
    "fmt"
    "os"

    "github.com/aliyun/aliyun-oss-go-sdk/oss"
)

func main() {
    // Obtain access credentials from environment variables. Before you run the sample code, make sure that the OSS_ACCESS_KEY_ID and OSS_ACCESS_KEY_SECRET environment variables are configured. 
    provider, err := oss.NewEnvironmentVariableCredentialsProvider()
    if err != nil {
        fmt.Println("Error:", err)
        os.Exit(-1)
    }

    // Create an OSSClient instance. 
    // Specify the endpoint of the region in which the bucket is located. For example, if the bucket is located in the China (Hangzhou) region, set the endpoint to https://oss-cn-hangzhou.aliyuncs.com. Specify your actual endpoint. 
    client, err := oss.New("yourEndpoint", "", "", oss.SetCredentialsProvider(&provider))
    if err != nil {
        fmt.Println("Error:", err)
        os.Exit(-1)
    }

    // Create a bucket named examplebucket. Set the storage class to IA (oss.StorageIA), the ACL to public-read (oss.ACLPublicRead), and the redundancy type to ZRS (oss.RedundancyZRS). 
    err = client.CreateBucket("examplebucket", oss.StorageClass(oss.StorageIA), oss.ACL(oss.ACLPublicRead), oss.RedundancyType(oss.RedundancyZRS))
    if err != nil {
        fmt.Println("Error:", err)
        os.Exit(-1)
    }
}

Objective-C

id<OSSCredentialProvider> credentialProvider = [[OSSAuthCredentialProvider alloc] initWithAuthServerUrl:@"<StsServer>"];
OSSClientConfiguration *cfg = [[OSSClientConfiguration alloc] init];
cfg.maxRetryCount = 3;
cfg.timeoutIntervalForRequest = 15;

// If an OSSClient instance is released, all tasks in the session are canceled and the session becomes invalid. Make sure that the OSSClient instance is not released before the request is complete. 
// Specify the endpoint of the region in which the bucket is located. For example, if the bucket is located in the China (Hangzhou) region, set the endpoint to https://oss-cn-hangzhou.aliyuncs.com. 
OSSClient *client = [[OSSClient alloc] initWithEndpoint:@"Endpoint" credentialProvider:credentialProvider clientConfiguration:cfg];

OSSCreateBucketRequest * create = [OSSCreateBucketRequest new];
// Set the bucket name to examplebucket. 
create.bucketName = @"examplebucket";
// Set the access control list (ACL) of the bucket to private. 
create.xOssACL = @"private";
// Set the storage class of the bucket to Infrequent Access (IA). 
create.storageClass = OSSBucketStorageClassIA;

OSSTask * createTask = [client createBucket:create];

[createTask continueWithBlock:^id(OSSTask *task) {
    if (!task.error) {
        NSLog(@"create bucket success!");
    } else {
        NSLog(@"create bucket failed, error: %@", task.error);
    }
    return nil;
}];            

C++

#include <alibabacloud/oss/OssClient.h>
#include <iostream>
using namespace AlibabaCloud::OSS;

int main(void)
{
    /* Initialize information about the account that is used to access OSS. */
    
    /* Specify the endpoint of the region in which the bucket is located. For example, if the bucket is located in the China (Hangzhou) region, set the endpoint to https://oss-cn-hangzhou.aliyuncs.com. */
    std::string Endpoint = "yourEndpoint";
    /* Specify the name of the bucket. Example: examplebucket. */
    std::string BucketName = "examplebucket";

    /* Initialize resources, such as network resources. */
    InitializeSdk();

    ClientConfiguration conf;
    /* Obtain access credentials from environment variables. Before you run the sample code, make sure that the OSS_ACCESS_KEY_ID and OSS_ACCESS_KEY_SECRET environment variables are configured. */
    auto credentialsProvider = std::make_shared<EnvironmentVariableCredentialsProvider>();
    OssClient client(Endpoint, credentialsProvider, conf);

    /* Specify the name, storage class, and access control list (ACL) of the bucket. */
    CreateBucketRequest request(BucketName, StorageClass::IA, CannedAccessControlList::PublicReadWrite);
    /* Set the redundancy type of the bucket to zone-redundant storage (ZRS). */
    //request.setDataRedundancyType(DataRedundancyType::ZRS);

    /* Create the bucket. */
    auto outcome = client.CreateBucket(request);

    if (!outcome.isSuccess()) {
        /* Handle exceptions. */
        std::cout << "CreateBucket fail" <<
        ",code:" << outcome.error().Code() <<
        ",message:" << outcome.error().Message() <<
        ",requestId:" << outcome.error().RequestId() << std::endl;
        return -1;
    }

    /* Release resources, such as network resources. */
    ShutdownSdk();
    return 0;
}
#include "oss_api.h"
#include "aos_http_io.h"
/* Specify the endpoint of the region in which the bucket is located. For example, if the bucket is located in the China (Hangzhou) region, set the endpoint to https://oss-cn-hangzhou.aliyuncs.com. */
const char *endpoint = "yourEndpoint";
/* Specify the name of the bucket. Example: examplebucket. */
const char *bucket_name = "examplebucket";

void init_options(oss_request_options_t *options)
{
    options->config = oss_config_create(options->pool);
    /* Use a char* string to initialize data of the aos_string_t type. */
    aos_str_set(&options->config->endpoint, endpoint);
    /* Obtain access credentials from environment variables. Before you run the sample code, make sure that the OSS_ACCESS_KEY_ID and OSS_ACCESS_KEY_SECRET environment variables are configured. */  
    aos_str_set(&options->config->access_key_id, getenv("OSS_ACCESS_KEY_ID"));
    aos_str_set(&options->config->access_key_secret, getenv("OSS_ACCESS_KEY_SECRET"));
    /* Specify whether to use CNAME to access OSS. The value of 0 indicates that CNAME is not used.  */
    options->config->is_cname = 0;
    /* Configure network parameters, such as the timeout period. */
    options->ctl = aos_http_controller_create(options->pool, 0);
}
int main(int argc, char *argv[])
{
    /* Call the aos_http_io_initialize method in main() to initialize global resources, such as network resources and memory resources.  */
    if (aos_http_io_initialize(NULL, 0) != AOSE_OK) {
        exit(1);
    }
    /* Create a memory pool to manage memory. aos_pool_t is equivalent to apr_pool_t. The code used to create a memory pool is included in the APR library. */
    aos_pool_t *pool;
    /* Create a memory pool. The value of the second parameter is NULL. This value specifies that the pool does not inherit other memory pools. */
    aos_pool_create(&pool, NULL);
    /* Create and initialize options. This parameter includes global configuration information, such as endpoint, access_key_id, access_key_secret, is_cname, and curl. */
    oss_request_options_t *oss_client_options;
    /* Allocate the memory resources in the memory pool to the options. */
    oss_client_options = oss_request_options_create(pool);
    /* Initialize oss_client_options. */
    init_options(oss_client_options);
    /* Initialize the parameters. */
    aos_string_t bucket;
    oss_acl_e oss_acl = OSS_ACL_PRIVATE;
    aos_table_t *resp_headers = NULL; 
    aos_status_t *resp_status = NULL; 
    /* Assign char* data to a bucket of the aos_string_t type.  */
    aos_str_set(&bucket, bucket_name);
    /* Create the bucket. */
    resp_status = oss_create_bucket(oss_client_options, &bucket, oss_acl, &resp_headers);
    /* Determine whether the bucket is created.  */
    if (aos_status_is_ok(resp_status)) {
        printf("create bucket succeeded\n");
    } else {
        printf("create bucket failed\n");
    }
    /* Release the memory pool. This operation releases the memory resources allocated for the request. */
    aos_pool_destroy(pool);
    /* Release the allocated global resources.  */
    aos_http_io_deinitialize();
    return 0;
}

Ruby

require 'aliyun/oss'
client = Aliyun::OSS::Client.new(
  # In this example, the endpoint of the China (Hangzhou) region is used. Specify your actual endpoint. 
  endpoint: 'https://oss-cn-hangzhou.aliyuncs.com',
  # Obtain access credentials from environment variables. Before you run the sample code, make sure that the OSS_ACCESS_KEY_ID and OSS_ACCESS_KEY_SECRET environment variables are configured. 
  access_key_id: ENV['OSS_ACCESS_KEY_ID'],
  access_key_secret: ENV['OSS_ACCESS_KEY_SECRET']
)
# Specify the name of the bucket. Example: examplebucket. 
client.create_bucket('examplebucket')

Use ossutil

You can use ossutil to create a bucket. For more information, see mb (create buckets).
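For example, assuming that ossutil is installed and configured with valid credentials, you can create a bucket with the mb command. The bucket name examplebucket is a placeholder:

ossutil mb oss://examplebucket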

Use the OSS API

If your business requires a high level of customization, you can directly call RESTful APIs. To directly call an API, you must include the signature calculation in your code. For more information, see PutBucket.
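For reference, the following sketch shows the general shape of a PutBucket request; the header values are placeholders, the Authorization header must carry a signature that you calculate yourself, and the XML body is optional and is used to specify the storage class and redundancy type. See the PutBucket reference for the authoritative request syntax.

PUT / HTTP/1.1
Host: examplebucket.oss-cn-hangzhou.aliyuncs.com
x-oss-acl: private
Date: Fri, 24 Feb 2023 03:15:40 GMT
Authorization: <signature calculated from your AccessKey pair>

<?xml version="1.0" encoding="UTF-8"?>
<CreateBucketConfiguration>
    <StorageClass>Standard</StorageClass>
    <DataRedundancyType>LRS</DataRedundancyType>
</CreateBucketConfiguration>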

What to do next

  • Object management

    • Upload objects

      After you create a bucket, you can upload objects to the bucket. For more information about how to upload objects, see Simple upload.

    • Download objects

      After you upload objects to a bucket, you can download the objects to the default download path of your browser or a custom local path. For more information about how to download objects, see Simple download.

    • Share objects

      You can share objects with users by using object URLs for downloads or previews. For more information about how to share objects, see Share objects.

  • Access control

    For security reasons, the default ACL of OSS resources is private. Only the owners and authorized users can access resources whose ACL is private. OSS allows you to configure various policies to grant users the permissions to access or use your OSS resources. For more information about access control, see Overview.
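If you need to change the ACL of a bucket after you create it, you can do so in the console or by using an SDK. The following is a minimal Java sketch that uses the setBucketAcl method of the OSS SDK for Java; the endpoint and bucket name are placeholders, and the client setup mirrors the Java sample above.

import com.aliyun.oss.OSS;
import com.aliyun.oss.OSSClientBuilder;
import com.aliyun.oss.common.auth.CredentialsProviderFactory;
import com.aliyun.oss.common.auth.EnvironmentVariableCredentialsProvider;
import com.aliyun.oss.model.CannedAccessControlList;

public class SetBucketAclDemo {
    public static void main(String[] args) throws Exception {
        // Placeholders: use the endpoint of the region in which your bucket is located and your bucket name.
        String endpoint = "https://oss-cn-hangzhou.aliyuncs.com";
        String bucketName = "examplebucket";
        // Credentials are read from the OSS_ACCESS_KEY_ID and OSS_ACCESS_KEY_SECRET environment variables.
        EnvironmentVariableCredentialsProvider credentialsProvider =
                CredentialsProviderFactory.newEnvironmentVariableCredentialsProvider();
        OSS ossClient = new OSSClientBuilder().build(endpoint, credentialsProvider);
        try {
            // Set the bucket ACL back to private, the recommended default.
            ossClient.setBucketAcl(bucketName, CannedAccessControlList.Private);
        } finally {
            ossClient.shutdown();
        }
    }
}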