
Object Storage Service: Simple upload

Last Updated: Sep 01, 2023

Simple upload allows you to call the PutObject operation to upload a single object that is smaller than 5 GB in size. Simple upload is suitable for scenarios where an object can be uploaded by sending a single HTTP request.

Prerequisites

A bucket is created. For more information, see Create buckets.

Usage notes

  • Object size

    You can upload an object up to 5 GB in size by using simple upload. If you want to upload an object that is larger than 5 GB in size, use multipart upload. For more information, see Multipart upload.

  • Naming conventions for objects

    • The name must be encoded in UTF-8.

    • The name must be 1 to 1,023 characters in length.

    • The name cannot start with a forward slash (/) or a backslash (\).

  • Lower PUT request fees

    If you upload a large number of objects and set their storage class to Deep Cold Archive, you are charged high PUT request fees. We recommend that you upload the objects as Standard objects and then configure lifecycle rules to convert them to Deep Cold Archive. This reduces PUT request fees.

  • Security and authorization

    Object Storage Service (OSS) allows you to configure the access control lists (ACLs) for buckets and objects. This way, third-party users who are not granted the required permissions cannot upload data to your bucket. For more information, see Overview.

    OSS provides account-level authorization. This allows you to grant permissions to third-party users to upload objects to OSS buckets. For more information, see Authorized third-party upload.

  • Upload objects to a bucket for which the OSS-HDFS service is enabled

    To maintain OSS-HDFS stability and prevent data loss, do not upload objects to the .dlsdata/ directory by using methods not supported by OSS-HDFS.

  • Prevent existing objects from being overwritten by uploaded objects that have the same names

    By default, if you upload an object that has the same name as an existing object in OSS, the existing object is overwritten. You can use the following methods to prevent existing objects from being unexpectedly overwritten:

    • Enable versioning for the bucket.

      If you enable versioning for a bucket, objects that are overwritten in the bucket are saved as previous versions. You can recover the previous versions of the objects at any time. For more information, see Overview.

    • Include the x-oss-forbid-overwrite parameter in the upload request

      You can add the x-oss-forbid-overwrite parameter to the header of the upload request and set the parameter to true. In this case, if you upload an object that has the same name as an existing object, the upload fails and the FileAlreadyExists error is returned. If you do not add this parameter or if you set it to false, the uploaded object overwrites the existing object that has the same name. For a minimal code sketch that sets this parameter, see the example after this list.

  • Performance tuning of object upload

    If you upload a large number of objects and the names of the objects contain sequential prefixes such as timestamps and letters, multiple object indexes may be stored in a single partition. If you send a large number of requests to query these objects, latency may increase. If you upload a large number of objects, we recommend that you use random prefixes instead of sequential prefixes to specify object names. For more information, see OSS performance and scalability best practices.
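
The x-oss-forbid-overwrite parameter mentioned above can be set in most OSS SDKs. The following minimal sketch shows one way to set it during a simple upload by using the OSS SDK for Java, which is also used later in this topic. The endpoint, AccessKey pair, bucket name, object name, and local file path are placeholders that you must replace with your own values.

import com.aliyun.oss.OSS;
import com.aliyun.oss.OSSClientBuilder;
import com.aliyun.oss.model.ObjectMetadata;
import com.aliyun.oss.model.PutObjectRequest;
import java.io.File;

public class ForbidOverwriteSketch {
    public static void main(String[] args) {
        // Placeholders: replace the endpoint, AccessKey pair, bucket name, object name, and file path with your own values.
        OSS ossClient = new OSSClientBuilder().build(
                "https://oss-cn-hangzhou.aliyuncs.com", "yourAccessKeyId", "yourAccessKeySecret");
        try {
            ObjectMetadata metadata = new ObjectMetadata();
            // Forbid overwriting an existing object that has the same name.
            metadata.setHeader("x-oss-forbid-overwrite", "true");

            PutObjectRequest request = new PutObjectRequest(
                    "examplebucket", "exampledir/exampleobject.txt", new File("D:\\localpath\\examplefile.txt"));
            request.setMetadata(metadata);
            // If exampledir/exampleobject.txt already exists, OSS returns the FileAlreadyExists error.
            ossClient.putObject(request);
        } finally {
            ossClient.shutdown();
        }
    }
}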

Use the OSS console

Note

In Alibaba Finance Cloud, OSS cannot be accessed over the Internet. Therefore, objects cannot be uploaded in the OSS console. If you want to upload objects, you must use ossbrowser, OSS SDKs, or ossutil.

  1. Log on to the OSS console.

  2. In the left-side navigation pane, click Buckets. On the Buckets page, find and click the desired bucket.

  3. In the navigation tree, choose Files > Objects.

  4. On the Objects page, click Upload.

  5. In the Upload panel, configure the following parameters.

    1. Configure basic settings.

      Upload To

      The directory in which an object is stored after the object is uploaded to the bucket.

      • Current: The object is uploaded to the current directory.

      • Specified: The object is uploaded to a specific directory. You must enter the name of the directory. If the directory whose name you entered does not exist, OSS automatically creates the directory and uploads the object to the directory.

        The directory name must meet the following conventions:

        • The name can contain only UTF-8 characters. The name must be 1 to 254 characters in length.

        • The name cannot start with a forward slash (/) or a backslash (\).

        • The name cannot contain consecutive forward slashes (/).

        • The name cannot be two consecutive periods ( .. ).

      File ACL

      The ACL of the object.

      • Inherited from Bucket: The ACL of the object is the same as that of the bucket.

      • Private: Only the object owner or authorized users can read and write the objects to upload. Other users, including anonymous users, cannot access the objects without authorization. We recommend that you set the parameter to this value.

      • Public Read: Only the object owner or authorized users can write the objects to upload. Other users, including anonymous users, can only read the objects. If you set the File ACL parameter to this value, the objects may be unexpectedly accessed. This may result in unexpectedly high fees. Proceed with caution.

      • Public Read/Write: All users, including anonymous users, can read and write the objects to upload. This may result in data leaks and unexpectedly high fees. If a user writes prohibited data or information to the object, your legitimate interests and rights may be infringed. We recommend that you do not set the File ACL parameter to Public Read/Write unless necessary.

      For more information about ACLs, see Object ACL.

      Files to Upload

      The files or directories that you want to upload.

      You can click Select Files to select a local file or click Select Folders to select a directory. You can also drag the required local file or directory to the Files to Upload section.

      If you select an unnecessary object, click Remove in the Actions column that corresponds to the object to remove the object.

      Important
      • If you upload an object to an unversioned bucket but the object name already exists, the existing object is overwritten.

      • If you upload an object to a versioned bucket but the object name already exists, the existing object becomes a previous version, and the uploaded object becomes the latest version.

    2. (Optional) Configure advanced settings such as Storage Class and Encryption Method.

      Storage Class

      The storage class of the object.

      • Inherited from Bucket: The storage class of the object is the same as that of the bucket.

      • Standard: provides highly reliable, highly available, and high-performance storage services that can process frequent data access. Standard is suitable for various business applications, such as social networking applications, image, audio, and video resource sharing applications, large websites, and big data analytics.

      • IA: provides highly durable storage services at lower prices compared with the Standard storage class. IA has a minimum billable size of 64 KB and a minimum billable storage duration of 30 days. You can access IA objects in real time. You are charged data retrieval fees when you access IA objects. IA is suitable for data that is infrequently accessed, such as data accessed once or twice a month.

      • Archive: provides highly durable storage services at prices that are lower than the prices of Standard and IA. Archive has a minimum billable size of 64 KB and a minimum billable storage duration of 60 days. You can access an Archive object after it is restored or real-time access of Archive objects is enabled. The amount of time that is required to restore an Archive object is approximately 1 minute. You are charged data retrieval fees if you restore an Archive object. If you access an Archive object after real-time access of Archive objects is enabled, you are charged Archive data retrieval fees based on the size of accessed Archive data. Archive is suitable for data that needs to be stored for a long period of time, such as archival data, medical images, scientific materials, and video footage.

      • Cold Archive: provides highly durable storage services at lower prices compared with Archive. Cold Archive has a minimum billable size of 64 KB and a minimum billable storage duration of 180 days. You must restore a Cold Archive object before you can access the object. The amount of time required to restore a Cold Archive object varies based on the object size and the restoration mode. You are charged data retrieval fees and API operation calling fees when you restore a Cold Archive object. Cold Archive is suitable for storing cold data over an ultra-long period of time, including data that must be retained for an extended period of time due to compliance requirements, raw data that is accumulated over an extended period of time in the big data and AI fields, retained media resources in the film and television industries, and archived videos from the online education industry.

      • Deep Cold Archive: provides highly durable storage services at lower prices compared with Cold Archive. Deep Cold Archive has a minimum billable size of 64 KB and a minimum billable storage duration of 180 days. You must restore a Deep Cold Archive object before you can access the object. The amount of time that is required to restore a Deep Cold Archive object varies based on the object size and restoration mode. You are charged data retrieval fees and API operation calling fees when you restore a Deep Cold Archive object. Deep Cold Archive is suitable for storing extremely cold data for a long period of time, such as raw data that is accumulated over an extended period of time in the big data and AI fields, media data that requires long-term retention, data that must be retained for a long period of time due to regulatory and compliance requirements, and data that requires tape-based retention.

      For more information about the storage classes, see Overview.

      Encryption Method

      The server-side encryption method for an object.

      • Inherited from Bucket: The encryption method of the object is the same as that of the bucket.

      • OSS-Managed: Keys managed by OSS are used for data encryption. OSS uses data keys to encrypt objects. In addition, OSS uses regularly rotated master keys to encrypt data keys.

      • KMS: The default customer master key (CMK) stored in Key Management Service (KMS) or a specified CMK is used to encrypt and decrypt data. You can specify one of the following keys:

        • alias/acs/oss: The default CMK stored in KMS is used to generate different keys to encrypt objects. The objects are automatically decrypted when they are downloaded.

        • CMK ID: Keys generated from the specified CMK are used to encrypt objects, and the CMK ID is recorded in the metadata of each encrypted object. The objects are decrypted when they are downloaded by users who have the required decryption permissions. Before you specify a CMK ID, you must create a normal key or an external key in the same region as the bucket in the KMS console.

      • Encryption Algorithm: Only AES-256 is supported.

      User-defined Metadata

      The user metadata that you want to add for the object. You can add multiple pieces of user metadata as custom headers. However, the total size of the user metadata cannot exceed 8 KB. When you add user metadata, the user metadata headers must contain the x-oss-meta- prefix and values must be specified for the headers. Example: x-oss-meta-location:hangzhou.

    3. Click Upload.

      You can view the upload progress of the objects in the Upload Tasks panel.

Use ossbrowser

You can use ossbrowser to perform the same bucket-level operations that you can perform in the OSS console. You can follow the on-screen instructions in ossbrowser to perform simple upload. For more information about how to use ossbrowser, see Use ossbrowser.

Use OSS SDKs

The following code provides examples on how to perform simple upload by using OSS SDKs for common programming languages. For more information about how to perform simple upload by using OSS SDKs for other programming languages, see Overview.

Java

import com.aliyun.oss.ClientException;
import com.aliyun.oss.OSS;
import com.aliyun.oss.common.auth.*;
import com.aliyun.oss.OSSClientBuilder;
import com.aliyun.oss.OSSException;
import com.aliyun.oss.model.PutObjectRequest;
import com.aliyun.oss.model.PutObjectResult;
import java.io.File;

public class Demo {

    public static void main(String[] args) throws Exception {
        // In this example, the endpoint of the China (Hangzhou) region is used. Specify your actual endpoint. 
        String endpoint = "https://oss-cn-hangzhou.aliyuncs.com";
        // Obtain access credentials from environment variables. Before you run the sample code, make sure that the OSS_ACCESS_KEY_ID and OSS_ACCESS_KEY_SECRET environment variables are configured. 
        EnvironmentVariableCredentialsProvider credentialsProvider = CredentialsProviderFactory.newEnvironmentVariableCredentialsProvider();
        // Specify the name of the bucket. Example: examplebucket. 
        String bucketName = "examplebucket";
        // Specify the full path of the object. Do not include the bucket name in the full path. Example: exampledir/exampleobject.txt. 
        String objectName = "exampledir/exampleobject.txt";
        // Specify the full path of the local file that you want to upload. Example: D:\\localpath\\examplefile.txt. 
        // By default, if the path of the local file is not specified, the local file is uploaded from the path of the project to which the sample program belongs. 
        String filePath= "D:\\localpath\\examplefile.txt";

        // Create an OSSClient instance. 
        OSS ossClient = new OSSClientBuilder().build(endpoint, credentialsProvider);

        try {
            // Create a PutObjectRequest object. 
            PutObjectRequest putObjectRequest = new PutObjectRequest(bucketName, objectName, new File(filePath));
            // The following sample code provides an example on how to specify the storage class and ACL of an object when you upload the object: 
            // ObjectMetadata metadata = new ObjectMetadata();
            // metadata.setHeader(OSSHeaders.OSS_STORAGE_CLASS, StorageClass.Standard.toString());
            // metadata.setObjectAcl(CannedAccessControlList.Private);
            // putObjectRequest.setMetadata(metadata);
            
            // Upload the local file. 
            PutObjectResult result = ossClient.putObject(putObjectRequest);           
        } catch (OSSException oe) {
            System.out.println("Caught an OSSException, which means your request made it to OSS, "
                    + "but was rejected with an error response for some reason.");
            System.out.println("Error Message:" + oe.getErrorMessage());
            System.out.println("Error Code:" + oe.getErrorCode());
            System.out.println("Request ID:" + oe.getRequestId());
            System.out.println("Host ID:" + oe.getHostId());
        } catch (ClientException ce) {
            System.out.println("Caught an ClientException, which means the client encountered "
                    + "a serious internal problem while trying to communicate with OSS, "
                    + "such as not being able to access the network.");
            System.out.println("Error Message:" + ce.getMessage());
        } finally {
            if (ossClient != null) {
                ossClient.shutdown();
            }
        }
    }
}

PHP

<?php
if (is_file(__DIR__ . '/../autoload.php')) {
    require_once __DIR__ . '/../autoload.php';
}
if (is_file(__DIR__ . '/../vendor/autoload.php')) {
    require_once __DIR__ . '/../vendor/autoload.php';
}

use OSS\OssClient;
use OSS\Core\OssException;

// The AccessKey pair of an Alibaba Cloud account has permissions on all API operations. Using these credentials to perform operations in OSS is a high-risk operation. We recommend that you use a RAM user to call API operations or perform routine O&M. To create a RAM user, log on to the RAM console. 
$accessKeyId = "yourAccessKeyId";
$accessKeySecret = "yourAccessKeySecret";
// Specify the endpoint of the region in which the bucket is located. For example, if the bucket is located in the China (Hangzhou) region, set the endpoint to https://oss-cn-hangzhou.aliyuncs.com. 
$endpoint = "yourEndpoint";
// Specify the name of the bucket. Example: examplebucket. 
$bucket= "examplebucket";
// Specify the full path of the object. Example: exampledir/exampleobject.txt. The full path of the object cannot contain the bucket name. 
$object = "exampledir/exampleobject.txt";
// Specify the full path of the local file that you want to upload. Example: D:\\localpath\\examplefile.txt. 
// By default, if you do not specify the full path of the local file, the file is uploaded from the path of the project to which the sample program belongs. 
$filePath = "D:\\localpath\\examplefile.txt";

try{
    $ossClient = new OssClient($accessKeyId, $accessKeySecret, $endpoint);

    $ossClient->uploadFile($bucket, $object, $filePath);
} catch(OssException $e) {
    printf(__FUNCTION__ . ": FAILED\n");
    printf($e->getMessage() . "\n");
    return;
}
print(__FUNCTION__ . "OK" . "\n");

Node.js

const OSS = require('ali-oss');
const path = require('path');

const client = new OSS({
  // Specify the region in which the bucket is located. For example, if the bucket is located in the China (Hangzhou) region, set the region to oss-cn-hangzhou. 
  region: 'yourregion',
  // The AccessKey pair of an Alibaba Cloud account has permissions on all API operations. Using these credentials to perform operations in OSS is a high-risk operation. We recommend that you use a RAM user to call API operations or perform routine O&M. To create a RAM user, log on to the RAM console. 
  accessKeyId: 'yourAccessKeyId',
  accessKeySecret: 'yourAccessKeySecret',
  // Specify the name of the bucket. 
  bucket: 'examplebucket',
});

const headers = {
  // Specify the storage class of the object. 
  'x-oss-storage-class': 'Standard',
  // Specify the ACL of the object. 
  'x-oss-object-acl': 'private',
  // When you access an object by using the URL of the object, specify that the object is downloaded as an attachment. The name of the downloaded object is example.jpg. 
  // 'Content-Disposition': 'attachment; filename="example.jpg"'
  // Specify tags for the object. You can specify multiple tags at a time. 
  'x-oss-tagging': 'Tag1=1&Tag2=2',
  // Specify whether the PutObject operation overwrites an object that has the same name. In this example, the x-oss-forbid-overwrite parameter is set to true, which specifies that an existing object that has the same name cannot be overwritten by the uploaded object. 
  'x-oss-forbid-overwrite': 'true',
};

async function put () {
  try {
    // Specify the full paths of the object and the local file. Do not include the bucket name in the full path of the object. 
    // If you do not specify the path of the local file, the local file is uploaded from the path of the project to which the sample program belongs. 
    const result = await client.put('exampleobject.txt', path.normalize('D:\\localpath\\examplefile.txt')
    // Specify headers.
    //,{headers}
    );
    console.log(result);
  } catch (e) {
    console.log(e);
  }
}

put();

Python

# -*- coding: utf-8 -*-
import oss2
import os
# The AccessKey pair of an Alibaba Cloud account has permissions on all API operations. Using these credentials to perform operations in OSS is a high-risk operation. We recommend that you use a RAM user to call API operations or perform routine O&M. To create a RAM user, log on to the RAM console. 
auth = oss2.Auth('yourAccessKeyId', 'yourAccessKeySecret')
# Specify the endpoint of the region in which the bucket is located. For example, if the bucket is located in the China (Hangzhou) region, set the endpoint to https://oss-cn-hangzhou.aliyuncs.com. 
# Specify the bucket name. 
bucket = oss2.Bucket(auth, 'yourEndpoint', 'examplebucket')

# The file must be opened in binary mode. 
# Specify the full path of the local file. By default, if you do not specify the full path of a local file, the local file is uploaded from the path of the project to which the sample program belongs. 
with open('D:\\localpath\\examplefile.txt', 'rb') as fileobj:
    # Use the seek method to read data from byte 1,000 of the file. The data is uploaded from byte 1,000 to the last byte of the local file. 
    fileobj.seek(1000, os.SEEK_SET)
    # Use the tell method to obtain the current position. 
    current = fileobj.tell()
    # Specify the full path of the object. The full path of the object cannot contain the bucket name. 
    bucket.put_object('exampleobject.txt', fileobj)

Browser.js

<!DOCTYPE html>
<html lang="en">
  <head>
    <meta charset="UTF-8" />
    <title>Document</title>
  </head>
  <body>
    <input id="file" type="file" />
    <button id="upload">Upload an Object</button>
    <script src="https://gosspublic.alicdn.com/aliyun-oss-sdk-6.18.0.min.js"></script>
    <script>
      const client = new OSS({
        // Specify the region in which the bucket is located. For example, if your bucket is located in the China (Hangzhou) region, set the region to oss-cn-hangzhou. 
        region: "yourRegion",
        // Specify the temporary AccessKey pair obtained from Security Token Service (STS). The AccessKey pair consists of an AccessKey ID and an AccessKey secret. 
        accessKeyId: "yourAccessKeyId",
        accessKeySecret: "yourAccessKeySecret",
        // Specify the security token obtained from STS. 
        stsToken: "yourSecurityToken",
        // Specify the name of the bucket. 
        bucket: "examplebucket",
      });

      // Obtain the data of the file that is selected by using the <input type="file" id="file" /> element. 
      let data;
      // Create and specify the Blob data. 
      //const data = new Blob(['Hello OSS']);
      // Create an OSS buffer and specify the content of the OSS buffer. 
      //const data = new OSS.Buffer(['Hello OSS']);

      const upload = document.getElementById("upload");

      async function putObject(data) {
        try {
          // Specify the full path of the object. Do not include the bucket name in the full path. 
          // Specify the object name or the full path of the object to upload data to the current bucket or a specific directory in the bucket. For example, set the object name to exampleobject.txt or the path of the object to exampledir/exampleobject.txt. 
          // You can set the data to files, Blob data, or OSS buffers. 
          const options = {
            meta: { temp: "demo" },
            mime: "json",
            headers: { "Content-Type": "text/plain" },
          };
          const result = await client.put("examplefile.txt", data, options);
          console.log(result);
        } catch (e) {
          console.log(e);
        }
      }

      upload.addEventListener("click", () => {
        const data = document.getElementById("file").files[0];
        putObject(data);
      });
    </script>
  </body>
</html>

C#

using System;
using Aliyun.OSS;

// Specify the endpoint of the region in which the bucket is located. For example, if the bucket is located in the China (Hangzhou) region, set the endpoint to https://oss-cn-hangzhou.aliyuncs.com. 
var endpoint = "yourEndpoint";
// The AccessKey pair of an Alibaba Cloud account has permissions on all API operations. Using these credentials to perform operations in OSS is a high-risk operation. We recommend that you use a RAM user to call API operations or perform routine O&M. To create a RAM user, log on to the RAM console. 
var accessKeyId = "yourAccessKeyId";
var accessKeySecret = "yourAccessKeySecret";
// Specify the bucket name. Example: examplebucket. 
var bucketName = "examplebucket";
// Specify the full path of the object. The full path of the object cannot contain the bucket name. Example: exampledir/exampleobject.txt. 
var objectName = "exampledir/exampleobject.txt";
// Specify the full path of the local file that you want to upload. By default, if you do not specify the full path of the local file, the file is uploaded from the path of the project to which the sample program belongs. 
var localFilename = "D:\\localpath\\examplefile.txt";

// Create an OSSClient instance. 
var client = new OssClient(endpoint, accessKeyId, accessKeySecret);
try
{
    // Upload the local file. 
    client.PutObject(bucketName, objectName, localFilename);
    Console.WriteLine("Put object succeeded");
}
catch (Exception ex)
{
    Console.WriteLine("Put object failed, {0}", ex.Message);
}

Android

// Construct a request to upload the local file. 
// Specify the name of the bucket, the full path of the object, and the full path of the local file. In this example, the name of the bucket is examplebucket, the full path of the object is exampledir/exampleobject.txt, and the full path of the local file is /storage/emulated/0/oss/examplefile.txt. 
// The full path of the object cannot contain the bucket name. 
PutObjectRequest put = new PutObjectRequest("examplebucket", "exampledir/exampleobject.txt", "/storage/emulated/0/oss/examplefile.txt");

// (Optional) Specify the object metadata. 
 ObjectMetadata metadata = new ObjectMetadata();
// metadata.setContentType("application/octet-stream"); // Specify the content type of the object. 
// metadata.setContentMD5(BinaryUtil.calculateBase64Md5(uploadFilePath)); // Specify the MD5 hash that is used for MD5 verification. 
// Set the access control list (ACL) of the object to private.
metadata.setHeader("x-oss-object-acl", "private");
// Set the storage class of the object to Standard.
metadata.setHeader("x-oss-storage-class", "Standard");
// Specify that an existing object that has the same name cannot be overwritten by the uploaded object.
// metadata.setHeader("x-oss-forbid-overwrite", "true");
// Specify tags for the object. You can specify multiple tags at a time. 
// metadata.setHeader("x-oss-tagging", "a:1");
// Specify the server-side encryption algorithm that is used to encrypt the object when OSS creates the object. 
// metadata.setHeader("x-oss-server-side-encryption", "AES256");
// Specify the customer master key (CMK) that is managed by Key Management Service (KMS). This parameter takes effect only when x-oss-server-side-encryption is set to KMS. 
// metadata.setHeader("x-oss-server-side-encryption-key-id", "9468da86-3509-4f8d-a61e-6eab1eac****");

put.setMetadata(metadata);

try {
    PutObjectResult putResult = oss.putObject(put);

    Log.d("PutObject", "UploadSuccess");
    Log.d("ETag", putResult.getETag());
    Log.d("RequestId", putResult.getRequestId());
} catch (ClientException e) {
    // Handle client-side exceptions, such as network errors. 
    e.printStackTrace();
} catch (ServiceException e) {
    // Handle server-side exceptions. 
    Log.e("RequestId", e.getRequestId());
    Log.e("ErrorCode", e.getErrorCode());
    Log.e("HostId", e.getHostId());
    Log.e("RawMessage", e.getRawMessage());
}

Go

package main

import (
    "fmt"
    "os"
    "github.com/aliyun/aliyun-oss-go-sdk/oss"
)

func main() {
    // Create an OSSClient instance. 
    // Specify the endpoint of the region in which the bucket is located. For example, if the bucket is located in the China (Hangzhou) region, set the endpoint to https://oss-cn-hangzhou.aliyuncs.com. Specify your actual endpoint. 
    // The AccessKey pair of an Alibaba Cloud account has permissions on all API operations. Using these credentials to perform operations in OSS is a high-risk operation. We recommend that you use a RAM user to call API operations or perform routine O&M. To create a RAM user, log on to the RAM console. 
    client, err := oss.New("yourEndpoint", "yourAccessKeyId", "yourAccessKeySecret")    
    if err != nil {
        fmt.Println("Error:", err)
        os.Exit(-1)
    }

    // Specify the name of the bucket. Example: examplebucket. 
    bucket, err := client.Bucket("examplebucket")
    if err != nil {
        fmt.Println("Error:", err)
        os.Exit(-1)
    }

    // Specify the full path of the object and the full path of the local file. In this example, the full path of the object is exampledir/exampleobject.txt and the full path of the local file is D:\\localpath\\examplefile.txt. 
    err = bucket.PutObjectFromFile("exampledir/exampleobject.txt", "D:\\localpath\\examplefile.txt")
    if err != nil {
        fmt.Println("Error:", err)
        os.Exit(-1)
    }
}            

iOS

OSSPutObjectRequest * put = [OSSPutObjectRequest new];

// Specify the name of the bucket. Example: examplebucket. 
put.bucketName = @"examplebucket";
// Specify the full path of the object. Example: exampledir/exampleobject.txt. The full path of the object cannot contain the bucket name. 
put.objectKey = @"exampledir/exampleobject.txt";
put.uploadingFileURL = [NSURL fileURLWithPath:@"<filePath>"];
// put.uploadingData = <NSData *>; // Directly upload NSData. 

// (Optional) Specify the upload progress callback. 
put.uploadProgress = ^(int64_t bytesSent, int64_t totalByteSent, int64_t totalBytesExpectedToSend) {
    // bytesSent: the number of bytes sent in the current callback. totalByteSent: the total number of bytes that have been sent. totalBytesExpectedToSend: the total number of bytes to send. 
    NSLog(@"%lld, %lld, %lld", bytesSent, totalByteSent, totalBytesExpectedToSend);
};
// Configure the optional fields. 
// put.contentType = @"application/octet-stream";
// Specify Content-MD5. 
// put.contentMd5 = @"eB5eJF1ptWaXm4bijSPyxw==";
// Specify the method that is used to encode the object. 
// put.contentEncoding = @"identity";
// Specify the method that is used to access the object. 
// put.contentDisposition = @"attachment";
// Specify object metadata or HTTP headers for the upload task. 
// NSMutableDictionary *meta = [NSMutableDictionary dictionary];
// Specify object metadata. 
// [meta setObject:@"value" forKey:@"x-oss-meta-name1"];
// Set the access control list (ACL) of the object to private. 
// [meta setObject:@"private" forKey:@"x-oss-object-acl"];
// Set the storage class of the object to Standard. 
// [meta setObject:@"Standard" forKey:@"x-oss-storage-class"];
// Specify that an existing object that has the same name cannot be overwritten by the uploaded object. 
// [meta setObject:@"true" forKey:@"x-oss-forbid-overwrite"];
// Specify tags for the object. You can specify multiple tags at the same time. 
// [meta setObject:@"a:1" forKey:@"x-oss-tagging"];
// Specify the server-side encryption algorithm that is used to encrypt the destination object when Object Storage Service (OSS) creates the object. 
// [meta setObject:@"AES256" forKey:@"x-oss-server-side-encryption"];
// Specify the customer master key (CMK) that is managed by Key Management Service (KMS). This parameter takes effect only when x-oss-server-side-encryption is set to KMS. 
// [meta setObject:@"9468da86-3509-4f8d-a61e-6eab1eac****" forKey:@"x-oss-server-side-encryption-key-id"];
// put.objectMeta = meta;
OSSTask * putTask = [client putObject:put];

[putTask continueWithBlock:^id(OSSTask *task) {
    if (!task.error) {
        NSLog(@"upload object success!");
    } else {
        NSLog(@"upload object failed, error: %@" , task.error);
    }
    return nil;
}];
// waitUntilFinished blocks the current thread but does not block the upload task. 
// [putTask waitUntilFinished];
// [put cancel];

C++

#include <alibabacloud/oss/OssClient.h>
#include <iostream>
#include <fstream>
using namespace AlibabaCloud::OSS;

int main(void)
{
    /* Initialize the information about the account that is used to access OSS. */
    /* The AccessKey pair of an Alibaba Cloud account has permissions on all API operations. Using these credentials to perform operations in OSS is a high-risk operation. We recommend that you use a RAM user to call API operations or perform routine O&M. To create a RAM user, log on to the RAM console. */
    std::string AccessKeyId = "yourAccessKeyId";
    std::string AccessKeySecret = "yourAccessKeySecret";
    /* Specify the endpoint of the region in which the bucket is located. For example, if the bucket is located in the China (Hangzhou) region, set the endpoint to https://oss-cn-hangzhou.aliyuncs.com. */
    std::string Endpoint = "yourEndpoint";
    /* Specify the name of the bucket. Example: examplebucket. */
    std::string BucketName = "examplebucket";
    /* Specify the full path of the object. Do not include the bucket name in the full path. Example: exampledir/exampleobject.txt. */
    std::string ObjectName = "exampledir/exampleobject.txt";

    /* Initialize resources, such as network resources. */
    InitializeSdk();

    ClientConfiguration conf;
    OssClient client(Endpoint, AccessKeyId, AccessKeySecret, conf);
    /* Specify the full path of the local file. Example: D:\\localpath\\examplefile.txt. In this example, localpath indicates the local path in which the examplefile.txt file is stored. */
    std::shared_ptr<std::iostream> content = std::make_shared<std::fstream>("D:\\localpath\\examplefile.txt", std::ios::in | std::ios::binary);
    PutObjectRequest request(BucketName, ObjectName, content);

    /* (Optional) Set the ACL to private and the storage class to Standard for the object. */
    //request.MetaData().addHeader("x-oss-object-acl", "private");
    //request.MetaData().addHeader("x-oss-storage-class", "Standard");

    auto outcome = client.PutObject(request);

    if (!outcome.isSuccess()) {
        /* Handle exceptions. */
        std::cout << "PutObject fail" <<
            ",code:" << outcome.error().Code() <<
            ",message:" << outcome.error().Message() <<
            ",requestId:" << outcome.error().RequestId() << std::endl;
        return -1;
    }

    /* Release resources, such as network resources. */
    ShutdownSdk();
    return 0;
}
#include "oss_api.h"
#include "aos_http_io.h"
/* Specify the endpoint of the region in which the bucket is located. For example, if the bucket is located in the China (Hangzhou) region, set the endpoint to https://oss-cn-hangzhou.aliyuncs.com. */
const char *endpoint = "yourEndpoint";
/* The AccessKey pair of an Alibaba Cloud account has permissions on all API operations. Using these credentials to perform operations in OSS is a high-risk operation. We recommend that you use a RAM user to call API operations or perform routine O&M. To create a RAM user, log on to the RAM console. */
const char *access_key_id = "yourAccessKeyId";
const char *access_key_secret = "yourAccessKeySecret";
/* Specify the name of the bucket. Example: examplebucket. */
const char *bucket_name = "examplebucket";
/* Specify the full path of the object. Do not include the bucket name in the full path. Example: exampledir/exampleobject.txt. */
const char *object_name = "exampledir/exampleobject.txt";
const char *object_content = "More than just cloud.";
void init_options(oss_request_options_t *options)
{
    options->config = oss_config_create(options->pool);
    /* Use a char* string to initialize data of the aos_string_t type. */
    aos_str_set(&options->config->endpoint, endpoint);
    aos_str_set(&options->config->access_key_id, access_key_id);
    aos_str_set(&options->config->access_key_secret, access_key_secret);
    /* Specify whether to use CNAME. The value 0 indicates that CNAME is not used. */
    options->config->is_cname = 0;
    /* Configure network parameters, such as the timeout period. */
    options->ctl = aos_http_controller_create(options->pool, 0);
}
int main(int argc, char *argv[])
{
    /* Call the aos_http_io_initialize method in main() to initialize global resources, such as networks and memory. */
    if (aos_http_io_initialize(NULL, 0) != AOSE_OK) {
        exit(1);
    }
    /* Create a memory pool to manage memory. aos_pool_t is equivalent to apr_pool_t. The code that is used to create a memory pool is included in the APR library. */
    aos_pool_t *pool;
    /* Create a memory pool. The value of the second parameter is NULL. This value specifies that the pool does not inherit other memory pools. */
    aos_pool_create(&pool, NULL);
    /* Create and initialize options. This parameter includes global configuration information, such as endpoint, access_key_id, access_key_secret, is_cname, and curl. */
    oss_request_options_t *oss_client_options;
    /* Allocate the memory resources in the memory pool to the options. */
    oss_client_options = oss_request_options_create(pool);
    /* Initialize oss_client_options. */
    init_options(oss_client_options);
    /* Initialize the parameters. */
    aos_string_t bucket;
    aos_string_t object;
    aos_list_t buffer;
    aos_buf_t *content = NULL;
    aos_table_t *headers = NULL;
    aos_table_t *resp_headers = NULL; 
    aos_status_t *resp_status = NULL; 
    aos_str_set(&bucket, bucket_name);
    aos_str_set(&object, object_name);
    aos_list_init(&buffer);
    content = aos_buf_pack(oss_client_options->pool, object_content, strlen(object_content));
    aos_list_add_tail(&content->node, &buffer);
    /* Upload the object. */
    resp_status = oss_put_object_from_buffer(oss_client_options, &bucket, &object, &buffer, headers, &resp_headers);
    /* Check whether the object is uploaded. */
    if (aos_status_is_ok(resp_status)) {
        printf("put object from buffer succeeded\n");
    } else {
        printf("put object from buffer failed\n");      
    }
    /* Release the memory pool. This operation releases the memory resources allocated for the request. */
    aos_pool_destroy(pool);
    /* Release the allocated global resources. */
    aos_http_io_deinitialize();
    return 0;
}

Ruby

require 'aliyun/oss'

client = Aliyun::OSS::Client.new(
  # In this example, the endpoint of the China (Hangzhou) region is used. Specify your actual endpoint. 
  endpoint: 'https://oss-cn-hangzhou.aliyuncs.com',
  # The AccessKey pair of an Alibaba Cloud account has permissions on all API operations. Using these credentials to perform operations in OSS is a high-risk operation. We recommend that you use a RAM user to call API operations or perform routine O&M. To create a RAM user, log on to the RAM console. 
  access_key_id: 'AccessKeyId', access_key_secret: 'AccessKeySecret')

# Specify the name of the bucket. Example: examplebucket. 
bucket = client.get_bucket('examplebucket')
# Replace my-object with the full path of the object. Do not include the bucket name in the full path. 
# Replace local-file with the full path of the local file that you want to upload. 
bucket.put_object('my-object', :file => 'local-file')

Use ossutil

For more information about how to perform simple upload by using ossutil, see Upload objects.

Use RESTful APIs

If your business requires a high level of customization, you can directly call RESTful APIs. To directly call an API, you must include the signature calculation in your code. For more information, see PutObject.
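
If you call the PutObject operation directly, you must calculate the request signature yourself. The following Java sketch is a rough illustration only, not the authoritative signing reference: it uploads a short string as an object by sending a PutObject request that carries a header-based signature, with no Content-MD5 and no x-oss-* headers. The endpoint, AccessKey pair, bucket name, and object name are placeholders. For the complete signature rules, see the API documentation referenced above.

import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.text.SimpleDateFormat;
import java.util.Base64;
import java.util.Date;
import java.util.Locale;
import java.util.TimeZone;

public class RestPutObjectSketch {
    public static void main(String[] args) throws Exception {
        // Placeholders: replace the AccessKey pair, endpoint, bucket name, and object name with your own values.
        String accessKeyId = "yourAccessKeyId";
        String accessKeySecret = "yourAccessKeySecret";
        String endpoint = "oss-cn-hangzhou.aliyuncs.com";
        String bucketName = "examplebucket";
        String objectName = "exampledir/exampleobject.txt";
        String contentType = "text/plain";
        byte[] body = "Hello OSS".getBytes(StandardCharsets.UTF_8);

        // The Date header must be in GMT format.
        SimpleDateFormat gmt = new SimpleDateFormat("EEE, dd MMM yyyy HH:mm:ss 'GMT'", Locale.US);
        gmt.setTimeZone(TimeZone.getTimeZone("GMT"));
        String date = gmt.format(new Date());

        // Header-based signature: VERB, Content-MD5, Content-Type, Date, and the canonicalized resource,
        // separated by newline characters. Content-MD5 and x-oss-* headers are omitted in this sketch.
        String stringToSign = "PUT\n" + "\n" + contentType + "\n" + date + "\n"
                + "/" + bucketName + "/" + objectName;
        Mac mac = Mac.getInstance("HmacSHA1");
        mac.init(new SecretKeySpec(accessKeySecret.getBytes(StandardCharsets.UTF_8), "HmacSHA1"));
        String signature = Base64.getEncoder()
                .encodeToString(mac.doFinal(stringToSign.getBytes(StandardCharsets.UTF_8)));

        // Send the PutObject request to the virtual-hosted-style URL of the object.
        URL url = new URL("https://" + bucketName + "." + endpoint + "/" + objectName);
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("PUT");
        conn.setDoOutput(true);
        conn.setRequestProperty("Date", date);
        conn.setRequestProperty("Content-Type", contentType);
        conn.setRequestProperty("Authorization", "OSS " + accessKeyId + ":" + signature);
        try (OutputStream out = conn.getOutputStream()) {
            out.write(body);
        }
        System.out.println("HTTP status: " + conn.getResponseCode());
        conn.disconnect();
    }
}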

References

  • When you use simple upload, you can configure object metadata to describe an object. For example, you can specify standard HTTP headers such as Content-Type. You can also configure user metadata. For more information about object metadata, see Manage object metadata.

  • After you upload an object to OSS, you can send a callback request to a specified application server by using upload callbacks. For more information, see Upload callbacks.

  • After you upload an image object, you can also compress the image object and configure custom styles for the image object. For more information, see IMG implementation modes.

  • If you need to obtain the image size after an image is uploaded, you can specify ?x-oss-process=image/info to query the basic information about the image. For more information, see Query the EXIF data of an image.

  • When you access an object by using the URL of the object, whether the object is previewed or downloaded is determined by the type of the URL and the creation time of the bucket that stores the object. For more information, see What do I do if an image object is downloaded as an attachment but cannot be previewed when I access the image object by using its URL?

  • You can add signature information to the URL of an uploaded object and share the signed URL to allow third parties to access the object. For more information, see Add signatures to a URL.
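
Building on the previous item, the following minimal sketch shows one way to generate a signed URL for an uploaded object by using the OSS SDK for Java. It assumes the bucket and object from the earlier Java example; the endpoint, AccessKey pair, bucket name, and object name are placeholders, and the generated URL allows downloads until the specified expiration time.

import com.aliyun.oss.OSS;
import com.aliyun.oss.OSSClientBuilder;
import java.net.URL;
import java.util.Date;

public class PresignedUrlSketch {
    public static void main(String[] args) {
        // Placeholders: replace the endpoint, AccessKey pair, bucket name, and object name with your own values.
        OSS ossClient = new OSSClientBuilder().build(
                "https://oss-cn-hangzhou.aliyuncs.com", "yourAccessKeyId", "yourAccessKeySecret");
        try {
            // The signed URL expires in 1 hour.
            Date expiration = new Date(System.currentTimeMillis() + 3600 * 1000L);
            URL signedUrl = ossClient.generatePresignedUrl(
                    "examplebucket", "exampledir/exampleobject.txt", expiration);
            // Share this URL to allow third parties to download the object before the expiration time.
            System.out.println(signedUrl);
        } finally {
            ossClient.shutdown();
        }
    }
}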