Object Storage Service: Upload files, images, and videos to OSS

Last Updated: Aug 13, 2024

If you want to upload an object that is not greater than 5 GB in size to Object Storage Service (OSS) and do not require high concurrent upload performance, you can use simple upload.

Prerequisites

A bucket is created. For more information, see Create a bucket.

Limits

You can upload an object up to 5 GB in size by using simple upload. If you want to upload an object that is greater than 5 GB in size, use multipart upload. For more information, see Multipart upload.
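For reference, the following Python sketch (a minimal example, not one of the official samples) shows how a file that exceeds 5 GB can be handed off to multipart upload by using oss2.resumable_upload. The endpoint, bucket name, file paths, threshold, and part size are placeholders.

# -*- coding: utf-8 -*-
# Minimal sketch: upload a large file with multipart (resumable) upload instead of simple upload.
# Assumes that the OSS_ACCESS_KEY_ID and OSS_ACCESS_KEY_SECRET environment variables are configured
# and that a bucket named examplebucket exists in the China (Hangzhou) region.
import oss2
from oss2.credentials import EnvironmentVariableCredentialsProvider

auth = oss2.ProviderAuth(EnvironmentVariableCredentialsProvider())
bucket = oss2.Bucket(auth, 'https://oss-cn-hangzhou.aliyuncs.com', 'examplebucket')

# resumable_upload switches to multipart upload when the file size exceeds multipart_threshold
# and uploads the parts in parallel threads.
oss2.resumable_upload(
    bucket,
    'exampledir/largeobject.bin',            # full path of the object, without the bucket name
    'D:\\localpath\\largefile.bin',          # full path of the local file (placeholder)
    multipart_threshold=100 * 1024 * 1024,   # switch to multipart upload for files larger than 100 MB
    part_size=10 * 1024 * 1024,              # upload the file in 10 MB parts
    num_threads=4)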

Usage notes

Data security

Object overwriting

By default, OSS overwrites an existing object with an uploaded object that has the same name. You can use the following methods to prevent existing objects from being unexpectedly overwritten:

  • Enable versioning for the bucket

    If you enable versioning for a bucket, objects that are overwritten in the bucket are saved as previous versions. You can recover the previous versions of the objects at any time. For more information, see Overview.

  • Include the x-oss-forbid-overwrite parameter in the upload request

    You can add the x-oss-forbid-overwrite parameter to the header of the upload request and set this parameter to true. If you upload an object that has the same name as an existing object, the object fails to be uploaded and the FileAlreadyExists error code is returned. If you do not add this parameter to the request header or if you set this parameter to false, the object overwrites an existing object that has the same name in the bucket.
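The following Python sketch shows one way to set the x-oss-forbid-overwrite header on a simple upload; the endpoint, bucket name, object name, and content are placeholders.

# Minimal sketch: prevent an upload from overwriting an existing object that has the same name.
# Assumes that the OSS_ACCESS_KEY_ID and OSS_ACCESS_KEY_SECRET environment variables are configured.
import oss2
from oss2.credentials import EnvironmentVariableCredentialsProvider

auth = oss2.ProviderAuth(EnvironmentVariableCredentialsProvider())
bucket = oss2.Bucket(auth, 'https://oss-cn-hangzhou.aliyuncs.com', 'examplebucket')

headers = {'x-oss-forbid-overwrite': 'true'}
try:
    bucket.put_object('exampleobject.txt', b'Hello OSS', headers=headers)
except oss2.exceptions.OssError as e:
    # If an object that has the same name already exists, the request fails
    # and the FileAlreadyExists error code is returned.
    print(e.code, e.request_id)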

Authorized upload

  • OSS provides access control at the bucket and object levels to prevent unauthorized data uploads to your bucket by third parties. For more information, see Overview.

  • You can use a signed URL to grant a third-party user the permissions to upload a specific object. This way, the user can upload data without additional credentials or authorization. OSS stores the uploaded data as an object in the bucket. For more information, see Upload local files with signed URLs.
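As an illustration, the following Python sketch generates a signed URL that lets a third party upload a specific object with a plain HTTP PUT request; the endpoint, bucket name, object name, validity period, and uploaded content are placeholders.

# Minimal sketch: authorize a third-party upload with a signed URL.
# Assumes that the OSS_ACCESS_KEY_ID and OSS_ACCESS_KEY_SECRET environment variables are configured.
import oss2
import requests
from oss2.credentials import EnvironmentVariableCredentialsProvider

auth = oss2.ProviderAuth(EnvironmentVariableCredentialsProvider())
bucket = oss2.Bucket(auth, 'https://oss-cn-hangzhou.aliyuncs.com', 'examplebucket')

# Generate a signed URL that is valid for 3,600 seconds and allows only a PUT request on this object.
signed_url = bucket.sign_url('PUT', 'exampledir/exampleobject.txt', 3600)

# The third party uploads data with the signed URL. No AccessKey pair is required on the uploader side.
response = requests.put(signed_url, data=b'Hello OSS')
print(response.status_code)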

PUT request costs

If you want to upload a large number of objects and set the storage classes of the objects to Deep Cold Archive, you are charged high PUT request fees. We recommend that you set the storage classes of the objects to Standard when you upload the objects, and configure lifecycle rules to convert the storage classes of the Standard objects to Deep Cold Archive. This reduces PUT request fees.
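The following Python sketch illustrates this approach under stated assumptions: the bucket name, object prefix, rule ID, 30-day threshold, and the DeepColdArchive storage class string are placeholder values chosen for illustration, and the lifecycle rule classes are used as provided by the oss2 SDK.

# Minimal sketch: upload objects as Standard, then convert them to Deep Cold Archive with a lifecycle rule.
# Assumes that the OSS_ACCESS_KEY_ID and OSS_ACCESS_KEY_SECRET environment variables are configured.
import oss2
from oss2.credentials import EnvironmentVariableCredentialsProvider
from oss2.models import BucketLifecycle, LifecycleRule, StorageTransition

auth = oss2.ProviderAuth(EnvironmentVariableCredentialsProvider())
bucket = oss2.Bucket(auth, 'https://oss-cn-hangzhou.aliyuncs.com', 'examplebucket')

# Upload the object as Standard so that only a Standard PUT request is charged.
bucket.put_object_from_file('archive/examplefile.txt', 'D:\\localpath\\examplefile.txt')

# Convert objects under the archive/ prefix to Deep Cold Archive 30 days after they are created.
rule = LifecycleRule('convert-to-deep-cold-archive', 'archive/',
                     status=LifecycleRule.ENABLED,
                     storage_transitions=[StorageTransition(days=30, storage_class='DeepColdArchive')])
bucket.put_bucket_lifecycle(BucketLifecycle([rule]))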

Uploads to a bucket for which OSS-HDFS is enabled

To maintain OSS-HDFS stability and prevent data loss, do not upload an object to the .dlsdata/ directory of a bucket for which OSS-HDFS is enabled by using methods that are not supported by OSS-HDFS.

Upload performance tuning

If you upload a large number of objects and the names of the objects contain sequential prefixes such as timestamps and letters, multiple object indexes may be stored in a single partition. As a consequence, latency increases when a large number of requests are sent to query these objects. If you want to upload a large number of objects, we recommend that you use random prefixes instead of sequential prefixes to specify object names. For more information, see OSS performance and scalability best practices.
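For example, the following Python sketch adds a short random prefix to object names before upload; the prefix length and naming pattern are illustrative only.

# Minimal sketch: randomize object name prefixes to spread object indexes across partitions.
import uuid

def randomized_object_name(original_name):
    # A short random hex prefix avoids hot partitions that are caused by sequential names
    # such as log/2024-08-13-000001.txt, log/2024-08-13-000002.txt, and so on.
    return f"{uuid.uuid4().hex[:4]}/{original_name}"

print(randomized_object_name('log/2024-08-13-000001.txt'))
# Sample output: 9f3a/log/2024-08-13-000001.txt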

Methods

Use the OSS console

Note

In Alibaba Finance Cloud, OSS cannot be accessed over the Internet. Therefore, objects cannot be uploaded by using the OSS console. Instead, you can use tools such as ossbrowser, OSS SDKs, and ossutil to upload objects.

  1. Log on to the OSS console.

  2. In the left-side navigation pane, click Buckets. On the Buckets page, find and click the desired bucket.

  3. In the navigation tree, choose Object Management > Objects.

  4. On the Objects page, click Upload Object.

  5. On the Upload Object page, configure the following parameters.

    1. Configure basic settings.

      Upload To

      The directory to which the object is uploaded in the bucket.

      • Current Directory: The object is uploaded to the current directory.

      • Specified Directory: The object is uploaded to the specified directory. You must enter the name of the directory. If you specify a directory that does not exist in the bucket, OSS automatically creates the directory and uploads the object to the directory.

        The directory name must meet the following requirements:

        • The name must be 1 to 254 characters in length. The name can contain only UTF-8 characters.

        • The name cannot start with a forward slash (/) or a backslash (\).

        • The name cannot contain consecutive forward slashes (/).

        • The name cannot be two consecutive periods (..).

      Object ACL

      The access control list (ACL) of the object.

      • Inherited from Bucket: The ACL of the object is the same as that of the bucket.

      • Private: Only the object owner and authorized users can read and write the object. Other users cannot access the object.

      • Public Read: Only the object owner and authorized users can read and write the object. Other users, including anonymous users, can only read the object. If you set the ACL to this value, the object can be read by all users. This may result in data leaks and unexpectedly high fees. Proceed with caution.

      • Public Read/Write: All users, including anonymous users, can read and write the object. This may result in data leaks and unexpectedly high fees. If a user writes prohibited data or information to the object, your legitimate interests and rights may be infringed. We recommend that you do not set the ACL to this value unless necessary.

      For more information, see Object ACLs.

      Files to Upload

      The local files or directories that you want to upload.

      You can click Select Files to select a local file or click Select Folders to select a directory. You can also drag the intended local file or directory to the Files to Upload section.

      If the selected directory contains a local file that you do not want to upload, find the local file in the file list in the Files to Upload section and click Remove in the Actions column to remove the file.

      Important
      • If you upload a local file to an unversioned bucket and the local file has the same name as an existing object in the bucket, the uploaded object overwrites the existing object.

      • If you upload a local file to a versioned bucket and the local file has the same name as an existing object in the bucket, the existing object becomes a previous version, and the uploaded object becomes the current version.

    2. (Optional) Configure advanced settings.

      Storage Class

      The storage class of the object.

      • Inherited from Bucket: The storage class of the object is the same as that of the bucket.

      • Standard: provides highly reliable, highly available, and high-performance storage for data that is frequently accessed. Standard is suitable for various business applications, such as social networking applications, image, audio, and video resource sharing applications, large websites, and big data analytics.

      • IA: provides highly durable storage at lower prices compared with Standard. Infrequent Access (IA) has a minimum billable size of 64 KB and a minimum billable storage duration of 30 days. You can access IA objects in real time. You are charged data retrieval fees when you access IA objects. IA is suitable for data that is infrequently accessed, such as data accessed once or twice a month.

      • Archive: provides highly durable storage at lower prices compared with Standard and IA. Archive has a minimum billable size of 64 KB and a minimum billable storage duration of 60 days. You can access an Archive object after it is restored or real-time access of Archive objects is enabled. The amount of time that is required to restore an Archive object is approximately 1 minute. You are charged data retrieval fees if you restore an Archive object. If you access an Archive object after real-time access of Archive objects is enabled, you are charged data retrieval fees based on the size of the accessed Archive object. Archive is suitable for data that needs to be stored for a long period of time, such as archival data, medical images, scientific materials, and video footage.

      • Cold Archive: provides highly durable storage at lower prices compared with Archive. Cold Archive has a minimum billable size of 64 KB and a minimum billable storage duration of 180 days. You must restore a Cold Archive object before you can access the object. The amount of time that is required to restore a Cold Archive object varies based on the object size and the restore priority. You are charged data retrieval fees and API operation calling fees when you restore a Cold Archive object. Cold Archive is suitable for storing cold data over an ultra-long period of time, including data that must be retained for an extended period of time due to compliance requirements, raw data that is accumulated over an extended period of time in the big data and AI fields, retained media resources in the film and television industries, and archived videos from the online education industry.

      • Deep Cold Archive: provides highly durable storage at lower prices compared with Cold Archive. Deep Cold Archive has a minimum billable size of 64 KB and a minimum billable storage duration of 180 days. You must restore a Deep Cold Archive object before you can access it. The amount of time that is required to restore a Deep Cold Archive object varies based on the object size and the restore priority. You are charged data retrieval fees and API operation calling fees when you restore a Deep Cold Archive object. Deep Cold Archive is suitable for storing extremely cold data for a long period of time, such as raw data that is accumulated over an extended period of time in the big data and AI fields, media data that requires long-term retention, data that must be retained for a long period of time due to regulatory and policy compliance requirements, and data that needs to be migrated from tapes to the cloud for long-term storage.

      For more information, see Overview.

      Encryption Method

      The server-side encryption method of the object.

      • Inherited from Bucket: The encryption method of the object is the same as that of the bucket.

      • OSS-Managed: The keys managed by OSS are used to encrypt objects in the bucket. OSS encrypts each object with a different key. OSS also uses master keys to encrypt cryptographic keys.

      • KMS: The default customer master key (CMK) stored in Key Management Service (KMS) or the specified CMK is used to encrypt and decrypt data. Description of the CMK parameter:

        • alias/acs/oss (CMK ID): The default CMK managed by KMS is used to generate keys for object encryption and decryption.

        • alias/<cmkname>(CMK ID): A custom CMK is used to generate keys for object encryption. The CMK ID is recorded in the metadata of the encrypted objects. Objects are decrypted when they are downloaded by users who have the decryption permissions. <cmkname> is the optional name of the CMK that you configured when you created the CMK.

          Before you specify a CMK ID, you must create a normal key or an external key in the same region as the bucket in the KMS console. For more information, see Create a CMK.

      • Encryption Algorithm: Only AES-256 is supported.

      User-defined Metadata

      The user metadata of the object. You can add multiple user metadata headers for an object. However, the total size of user metadata of the object cannot exceed 8 KB. When you add user metadata, the user metadata headers must contain the x-oss-meta- prefix and values must be specified. Example: x-oss-meta-location:hangzhou.

    3. Click Upload Object.

      You can view the upload progress on the Upload Tasks tab of the Task List panel.

Use ossbrowser

You can use ossbrowser to perform the same bucket-level operations that you can perform in the OSS console. You can follow the on-screen instructions in ossbrowser to perform simple uploads. For more information, see Use ossbrowser.

Use OSS SDKs

The following sample code provides examples on how to perform simple upload by using OSS SDKs for common programming languages. For more information about how to perform simple upload by using OSS SDKs for other programming languages, see Overview.

Java

import com.aliyun.oss.ClientException;
import com.aliyun.oss.OSS;
import com.aliyun.oss.common.auth.*;
import com.aliyun.oss.OSSClientBuilder;
import com.aliyun.oss.OSSException;
import com.aliyun.oss.model.PutObjectRequest;
import com.aliyun.oss.model.PutObjectResult;
import java.io.File;

public class Demo {

    public static void main(String[] args) throws Exception {
        // In this example, the endpoint of the China (Hangzhou) region is used. Specify your actual endpoint. 
        String endpoint = "https://oss-cn-hangzhou.aliyuncs.com";
        // Obtain access credentials from environment variables. Before you run the sample code, make sure that the OSS_ACCESS_KEY_ID and OSS_ACCESS_KEY_SECRET environment variables are configured. 
        EnvironmentVariableCredentialsProvider credentialsProvider = CredentialsProviderFactory.newEnvironmentVariableCredentialsProvider();
        // Specify the name of the bucket. Example: examplebucket. 
        String bucketName = "examplebucket";
        // Specify the full path of the object. Do not include the bucket name in the full path. Example: exampledir/exampleobject.txt. 
        String objectName = "exampledir/exampleobject.txt";
        // Specify the full path of the local file that you want to upload. Example: D:\\localpath\\examplefile.txt. 
        // By default, if the path of the local file is not specified, the local file is uploaded from the path of the project to which the sample program belongs. 
        String filePath= "D:\\localpath\\examplefile.txt";

        // Create an OSSClient instance. 
        OSS ossClient = new OSSClientBuilder().build(endpoint, credentialsProvider);

        try {
            // Create a PutObjectRequest object. 
            PutObjectRequest putObjectRequest = new PutObjectRequest(bucketName, objectName, new File(filePath));
            // The following sample code provides an example on how to specify the storage class and ACL of an object when you upload the object: 
            // ObjectMetadata metadata = new ObjectMetadata();
            // metadata.setHeader(OSSHeaders.OSS_STORAGE_CLASS, StorageClass.Standard.toString());
            // metadata.setObjectAcl(CannedAccessControlList.Private);
            // putObjectRequest.setMetadata(metadata);
            
            // Upload the local file. 
            PutObjectResult result = ossClient.putObject(putObjectRequest);           
        } catch (OSSException oe) {
            System.out.println("Caught an OSSException, which means your request made it to OSS, "
                    + "but was rejected with an error response for some reason.");
            System.out.println("Error Message:" + oe.getErrorMessage());
            System.out.println("Error Code:" + oe.getErrorCode());
            System.out.println("Request ID:" + oe.getRequestId());
            System.out.println("Host ID:" + oe.getHostId());
        } catch (ClientException ce) {
            System.out.println("Caught an ClientException, which means the client encountered "
                    + "a serious internal problem while trying to communicate with OSS, "
                    + "such as not being able to access the network.");
            System.out.println("Error Message:" + ce.getMessage());
        } finally {
            if (ossClient != null) {
                ossClient.shutdown();
            }
        }
    }
}
PHP

<?php
if (is_file(__DIR__ . '/../autoload.php')) {
    require_once __DIR__ . '/../autoload.php';
}
if (is_file(__DIR__ . '/../vendor/autoload.php')) {
    require_once __DIR__ . '/../vendor/autoload.php';
}

use OSS\Credentials\EnvironmentVariableCredentialsProvider;
use OSS\OssClient;
use OSS\Core\OssException;

// Obtain access credentials from environment variables. Before you run the sample code, make sure that the OSS_ACCESS_KEY_ID and OSS_ACCESS_KEY_SECRET environment variables are configured. 
$provider = new EnvironmentVariableCredentialsProvider();
// Specify the endpoint of the region in which the bucket is located. For example, if the bucket is located in the China (Hangzhou) region, set the endpoint to https://oss-cn-hangzhou.aliyuncs.com. 
$endpoint = "yourEndpoint";
// Specify the name of the bucket. Example: examplebucket. 
$bucket= "examplebucket";
// Specify the full path of the object. Example: exampledir/exampleobject.txt. Do not include the bucket name in the full path. 
$object = "exampledir/exampleobject.txt";
// Specify the full path of the local file that you want to upload. Example: D:\\localpath\\examplefile.txt. By default, if you do not specify the full path of the local file, the local file is uploaded from the path of the project to which the sample program belongs. 
$filePath = "D:\\localpath\\examplefile.txt";

try{
    $config = array(
        "provider" => $provider,
        "endpoint" => $endpoint,
    );
    $ossClient = new OssClient($config);

    $ossClient->uploadFile($bucket, $object, $filePath);
} catch(OssException $e) {
    printf(__FUNCTION__ . ": FAILED\n");
    printf($e->getMessage() . "\n");
    return;
}
print(__FUNCTION__ . "OK" . "\n");
Node.js

const OSS = require('ali-oss')
const path=require("path")

const client = new OSS({
  // Specify the region in which the bucket is located. For example, if the bucket is located in the China (Hangzhou) region, set the region to oss-cn-hangzhou. 
  region: 'yourregion',
  // Obtain access credentials from environment variables. Before you run the sample code, make sure that you have configured environment variables OSS_ACCESS_KEY_ID and OSS_ACCESS_KEY_SECRET. 
  accessKeyId: process.env.OSS_ACCESS_KEY_ID,
  accessKeySecret: process.env.OSS_ACCESS_KEY_SECRET,
  // Specify the name of the bucket. 
  bucket: 'examplebucket',
});

// Add custom request headers.
const headers = {
  // Specify the storage class of the object. 
  'x-oss-storage-class': 'Standard',
  // Specify the access control list (ACL) of the object. 
  'x-oss-object-acl': 'private',
  // When you access an object by using the URL of the object, specify that the object is downloaded as an attachment. In this example, the name of the downloaded object is example.jpg. 
  'Content-Disposition': 'attachment; filename="example.txt"',
  // Specify tags for the object. You can specify multiple tags for the object at the same time. 
  'x-oss-tagging': 'Tag1=1&Tag2=2',
  // Specify whether the PutObject operation overwrites an object that has the same name. In this example, the x-oss-forbid-overwrite parameter is set to true, which specifies that an existing object that has the same name cannot be overwritten by the uploaded object. 
  'x-oss-forbid-overwrite': 'true',
};

async function put () {
  try {
    // Specify the full paths of the object and the local file. Do not include the bucket name in the full path of the object. 
    // If the path of the local file is not specified, the local file is uploaded from the path of the project to which the sample program belongs. 
    // Specify custom headers.
    const result = await client.put('exampleobject.txt', path.normalize('D:\\localpath\\examplefile.txt'), { headers });
    console.log(result);
  } catch (e) {
    console.log(e);
  }
}

put();
Python

# -*- coding: utf-8 -*-
import oss2
import os
from oss2.credentials import EnvironmentVariableCredentialsProvider

# Obtain access credentials from environment variables. Before you run the sample code, make sure that the OSS_ACCESS_KEY_ID and OSS_ACCESS_KEY_SECRET environment variables are configured. 
auth = oss2.ProviderAuth(EnvironmentVariableCredentialsProvider())
# Specify the endpoint of the region in which the bucket is located. For example, if the bucket is located in the China (Hangzhou) region, set the endpoint to https://oss-cn-hangzhou.aliyuncs.com. 
# Specify the name of the bucket. 
bucket = oss2.Bucket(auth, 'https://oss-cn-hangzhou.aliyuncs.com', 'examplebucket')

# The file must be opened in binary mode. 
# Specify the full path of the local file. By default, if you do not specify the full path of the local file, the local file is uploaded from the path of the project to which the sample program belongs. 
with open('D:\\localpath\\examplefile.txt', 'rb') as fileobj:
    # Use the seek method to read data from byte 1,000 of the file. The data is uploaded from byte 1000 to the last byte of the local file. 
    fileobj.seek(1000, os.SEEK_SET)
    # Use the tell method to obtain the current position. 
    current = fileobj.tell()
    # Specify the full path of the object. Do not include the bucket name in the full path. 
    bucket.put_object('exampleobject.txt', fileobj)
Browser.js

<!DOCTYPE html>
<html lang="en">
  <head>
    <meta charset="UTF-8" />
    <title>Document</title>
  </head>
  <body>
    <input id="file" type="file" />
    <button id="upload">Upload an Object</button>
    <script src="https://gosspublic.alicdn.com/aliyun-oss-sdk-6.18.0.min.js"></script>
    <script>
      const client = new OSS({
        // Specify the region in which the bucket is located. For example, if the bucket is located in the China (Hangzhou) region, set the region to oss-cn-hangzhou. 
        region: "yourRegion",
        // Specify the temporary AccessKey pair obtained from STS. The AccessKey pair consists of an AccessKey ID and an AccessKey secret. 
        accessKeyId: "yourAccessKeyId",
        accessKeySecret: "yourAccessKeySecret",
        // Specify the security token that you obtained from STS. 
        stsToken: "yourSecurityToken",
        // Specify the name of the bucket. 
        bucket: "examplebucket",
      });

      // Select the local file by using the file input element. Example: <input type="file" id="file" />. 
      let data;
      // Create and specify the Blob data. 
      //const data = new Blob(['Hello OSS']);
      // Create an OSS buffer and specify the content of the OSS buffer. 
      //const data = new OSS.Buffer(['Hello OSS']);

      const upload = document.getElementById("upload");

      async function putObject(data) {
        try {
          // Specify the full path of the object. Do not include the bucket name in the full path. 
          // Specify the object name or the full path of the object to upload data to the current bucket or a specific directory in the bucket. For example, set the object name to exampleobject.txt or the path of the object to exampledir/exampleobject.txt. 
          // You can set the data to files, Blob data, or OSS buffers. 
          const options = {
            meta: { temp: "demo" },
            mime: "json",
            headers: { "Content-Type": "text/plain" },
          };
          const result = await client.put("examplefile.txt", data, options);
          console.log(result);
        } catch (e) {
          console.log(e);
        }
      }

      upload.addEventListener("click", () => {
        const data = file.files[0];
        putObject(data);
      });
    </script>
  </body>
</html>
C#

using Aliyun.OSS;
using System; // Provides Environment and Console.

// Specify the endpoint of the region in which the bucket is located. For example, if the bucket is located in the China (Hangzhou) region, set the endpoint to https://oss-cn-hangzhou.aliyuncs.com. 
var endpoint = "yourEndpoint";
// Obtain access credentials from environment variables. Before you run the sample code, make sure that the OSS_ACCESS_KEY_ID and OSS_ACCESS_KEY_SECRET environment variables are configured. 
var accessKeyId = Environment.GetEnvironmentVariable("OSS_ACCESS_KEY_ID");
var accessKeySecret = Environment.GetEnvironmentVariable("OSS_ACCESS_KEY_SECRET");
// Specify the name of the bucket. Example: examplebucket. 
var bucketName = "examplebucket";
// Specify the full path of the object. Do not include the bucket name in the full path. Example: exampledir/exampleobject.txt. 
var objectName = "exampledir/exampleobject.txt";
// Specify the full path of the local file that you want to upload. By default, if you do not specify the full path of a local file, the local file is uploaded from the path of the project to which the sample program belongs. 
var localFilename = "D:\\localpath\\examplefile.txt";

// Create an OSSClient instance. 
var client = new OssClient(endpoint, accessKeyId, accessKeySecret);
try
{
    // Upload the local file. 
    client.PutObject(bucketName, objectName, localFilename);
    Console.WriteLine("Put object succeeded");
}
catch (Exception ex)
{
    Console.WriteLine("Put object failed, {0}", ex.Message);
}
Android (Java)

// Construct an upload request. 
// Specify the name of the bucket, the full path of the object, and the full path of the local file. In this example, the name of the bucket is examplebucket, the full path of the object is exampledir/exampleobject.txt, and the full path of the local file is /storage/emulated/0/oss/examplefile.txt. 
// Do not include the bucket name in the full path of the object. 
PutObjectRequest put = new PutObjectRequest("examplebucket", "exampledir/exampleobject.txt", "/storage/emulated/0/oss/examplefile.txt");

// (Optional) Specify the object metadata. 
 ObjectMetadata metadata = new ObjectMetadata();
// metadata.setContentType("application/octet-stream"); // Set content-type. 
// metadata.setContentMD5(BinaryUtil.calculateBase64Md5(uploadFilePath)); // Specify the MD5 hash that is used for MD5 verification. 
// Set the ACL of the object to private. 
metadata.setHeader("x-oss-object-acl", "private");
// Set the storage class of the object to Standard. 
metadata.setHeader("x-oss-storage-class", "Standard");
// Specify that the uploaded object that has the same name as an existing object does not overwrite the existing object. 
// metadata.setHeader("x-oss-forbid-overwrite", "true");
// Specify one or more tags for the object. 
// metadata.setHeader("x-oss-tagging", "a:1");
// Specify the server-side encryption algorithm that is used to encrypt the object when OSS creates the object. 
// metadata.setHeader("x-oss-server-side-encryption", "AES256");
// Specify the CMK that is managed by KMS. This parameter takes effect only when x-oss-server-side-encryption is set to KMS. 
// metadata.setHeader("x-oss-server-side-encryption-key-id", "9468da86-3509-4f8d-a61e-6eab1eac****");

put.setMetadata(metadata);

try {
    PutObjectResult putResult = oss.putObject(put);

    Log.d("PutObject", "UploadSuccess");
    Log.d("ETag", putResult.getETag());
    Log.d("RequestId", putResult.getRequestId());
} catch (ClientException e) {
    // Handle client-side exceptions, such as network errors. 
    e.printStackTrace();
} catch (ServiceException e) {
    // Handle server-side exceptions. 
    Log.e("RequestId", e.getRequestId());
    Log.e("ErrorCode", e.getErrorCode());
    Log.e("HostId", e.getHostId());
    Log.e("RawMessage", e.getRawMessage());
}
Go

package main

import (
    "fmt"
    "os"
    "github.com/aliyun/aliyun-oss-go-sdk/oss"
)

func main() {
    // Obtain access credentials from environment variables. Before you run the sample code, make sure that the OSS_ACCESS_KEY_ID and OSS_ACCESS_KEY_SECRET environment variables are configured. 
    provider, err := oss.NewEnvironmentVariableCredentialsProvider()
    if err != nil {
        fmt.Println("Error:", err)
        os.Exit(-1)
    }

    // Create an OSSClient instance. 
    // Specify the endpoint of the region in which the bucket is located. For example, if the bucket is located in the China (Hangzhou) region, set the endpoint to https://oss-cn-hangzhou.aliyuncs.com. Specify your actual endpoint. 
    client, err := oss.New("yourEndpoint", "", "", oss.SetCredentialsProvider(&provider))    
    if err != nil {
        fmt.Println("Error:", err)
        os.Exit(-1)
    }

    // Specify the name of the bucket. Example: examplebucket. 
    bucket, err := client.Bucket("examplebucket")
    if err != nil {
        fmt.Println("Error:", err)
        os.Exit(-1)
    }

    // Specify the full path of the object. Example: exampledir/exampleobject.txt. Then, specify the full path of the local file. Example: D:\\localpath\\examplefile.txt. 
    err = bucket.PutObjectFromFile("exampledir/exampleobject.txt", "D:\\localpath\\examplefile.txt")
    if err != nil {
        fmt.Println("Error:", err)
        os.Exit(-1)
    }
}            
Objective-C

OSSPutObjectRequest * put = [OSSPutObjectRequest new];

// Specify the name of the bucket. Example: examplebucket. 
put.bucketName = @"examplebucket";
// Specify the full path of the object. Example: exampledir/exampleobject.txt. Do not include the bucket name in the full path. 
put.objectKey = @"exampledir/exampleobject.txt";
put.uploadingFileURL = [NSURL fileURLWithPath:@"<filePath>"];
// put.uploadingData = <NSData *>; // Directly upload NSData. 

// (Optional) Configure an upload progress indicator. 
put.uploadProgress = ^(int64_t bytesSent, int64_t totalByteSent, int64_t totalBytesExpectedToSend) {
    // Specify the number of bytes that are being uploaded, the number of bytes that are uploaded, and the total number of bytes that you want to upload. 
    NSLog(@"%lld, %lld, %lld", bytesSent, totalByteSent, totalBytesExpectedToSend);
};
// Configure optional fields. 
// put.contentType = @"application/octet-stream";
// Specify Content-MD5. 
// put.contentMd5 = @"eB5eJF1ptWaXm4bijSPyxw==";
// Specify the method that is used to encode the object. 
// put.contentEncoding = @"identity";
// Specify the method that is used to display the object content. 
// put.contentDisposition = @"attachment";
// Configure object metadata or HTTP headers. 
// NSMutableDictionary *meta = [NSMutableDictionary dictionary];
// Specify object metadata. 
// [meta setObject:@"value" forKey:@"x-oss-meta-name1"];
// Set the access control list (ACL) of the object to private. 
// [meta setObject:@"private" forKey:@"x-oss-object-acl"];
// Set the storage class of the object to Standard. 
// [meta setObject:@"Standard" forKey:@"x-oss-storage-class"];
// Specify that this upload overwrites an existing object that has the same name. 
// [meta setObject:@"true" forKey:@"x-oss-forbid-overwrite"];
// Specify one or more tags for the object. 
// [meta setObject:@"a:1" forKey:@"x-oss-tagging"];
// Specify the server-side encryption algorithm that is used to encrypt the destination object when Object Storage Service (OSS) creates the object. 
// [meta setObject:@"AES256" forKey:@"x-oss-server-side-encryption"];
// Specify the CMK that is managed by KMS. This parameter takes effect only when x-oss-server-side-encryption is set to KMS. 
// [meta setObject:@"9468da86-3509-4f8d-a61e-6eab1eac****" forKey:@"x-oss-server-side-encryption-key-id"];
// put.objectMeta = meta;
OSSTask * putTask = [client putObject:put];

[putTask continueWithBlock:^id(OSSTask *task) {
    if (!task.error) {
        NSLog(@"upload object success!");
    } else {
        NSLog(@"upload object failed, error: %@" , task.error);
    }
    return nil;
}];
// waitUntilFinished blocks execution of the current thread but does not block the task progress. 
// [putTask waitUntilFinished];
// [put cancel];
C++

#include <alibabacloud/oss/OssClient.h>
#include <iostream> /* Required for std::cout. */
#include <fstream>
using namespace AlibabaCloud::OSS;

int main(void)
{
    /* Initialize information about the account that is used to access OSS. */
            
    /* Specify the endpoint of the region in which the bucket is located. For example, if the bucket is located in the China (Hangzhou) region, set the endpoint to https://oss-cn-hangzhou.aliyuncs.com. */
    std::string Endpoint = "yourEndpoint";
    /* Specify the name of the bucket. Example: examplebucket. */
    std::string BucketName = "examplebucket";
    /* Specify the full path of the object. Do not include the bucket name in the full path of the object. Example: exampledir/exampleobject.txt. */
    std::string ObjectName = "exampledir/exampleobject.txt";

    /* Initialize resources such as network resources. */
    InitializeSdk();

    ClientConfiguration conf;
    /* Obtain access credentials from environment variables. Before you run the sample code, make sure that the OSS_ACCESS_KEY_ID and OSS_ACCESS_KEY_SECRET environment variables are configured. */
    auto credentialsProvider = std::make_shared<EnvironmentVariableCredentialsProvider>();
    OssClient client(Endpoint, credentialsProvider, conf);
    /* Specify the full path of the local file. Example: D:\\localpath\\examplefile.txt. In this example, localpath indicates the local path in which the examplefile.txt file is stored. */
    std::shared_ptr<std::iostream> content = std::make_shared<std::fstream>("D:\\localpath\\examplefile.txt", std::ios::in | std::ios::binary);
    PutObjectRequest request(BucketName, ObjectName, content);

    /* (Optional) Set the ACL to private and the storage class to Standard for the object. */
    //request.MetaData().addHeader("x-oss-object-acl", "private");
    //request.MetaData().addHeader("x-oss-storage-class", "Standard");

    auto outcome = client.PutObject(request);

    if (!outcome.isSuccess()) {
        /* Handle exceptions. */
        std::cout << "PutObject fail" <<
        ",code:" << outcome.error().Code() <<
        ",message:" << outcome.error().Message() <<
        ",requestId:" << outcome.error().RequestId() << std::endl;
        return -1;
    }

    /* Release resources such as network resources. */
    ShutdownSdk();
    return 0;
}
#include "oss_api.h"
#include "aos_http_io.h"
/* Specify the endpoint of the region in which the bucket is located. For example, if the bucket is located in the China (Hangzhou) region, set the endpoint to https://oss-cn-hangzhou.aliyuncs.com. */
const char *endpoint = "yourEndpoint";
/* Specify the name of the bucket. Example: examplebucket. */
const char *bucket_name = "examplebucket";
/* Specify the full path of the object. Do not include the bucket name in the full path. Example: exampledir/exampleobject.txt. */
const char *object_name = "exampledir/exampleobject.txt";
const char *object_content = "More than just cloud.";
void init_options(oss_request_options_t *options)
{
    options->config = oss_config_create(options->pool);
    /* Use a char* string to initialize data of the aos_string_t type. */
    aos_str_set(&options->config->endpoint, endpoint);
    /* Obtain access credentials from environment variables. Before you run the sample code, make sure that the OSS_ACCESS_KEY_ID and OSS_ACCESS_KEY_SECRET environment variables are configured. */
    aos_str_set(&options->config->access_key_id, getenv("OSS_ACCESS_KEY_ID"));
    aos_str_set(&options->config->access_key_secret, getenv("OSS_ACCESS_KEY_SECRET"));
    /* Specify whether to use CNAME. The value 0 indicates that CNAME is not used. */
    options->config->is_cname = 0;
    /* Configure network parameters, such as the timeout period. */
    options->ctl = aos_http_controller_create(options->pool, 0);
}
int main(int argc, char *argv[])
{
    /* Call the aos_http_io_initialize method in main() to initialize global resources, such as network resources and memory resources. */
    if (aos_http_io_initialize(NULL, 0) != AOSE_OK) {
        exit(1);
    }
    /* Create a memory pool to manage memory. aos_pool_t is equivalent to apr_pool_t. The code used to create a memory pool is included in the APR library. */
    aos_pool_t *pool;
    /* Create a memory pool. The value of the second parameter is NULL. This value indicates that the pool does not inherit other memory pools. */
    aos_pool_create(&pool, NULL);
    /* Create and initialize options. This parameter includes global configuration information, such as endpoint, access_key_id, access_key_secret, is_cname, and curl. */
    oss_request_options_t *oss_client_options;
    /* Allocate the memory resources in the memory pool to the options. */
    oss_client_options = oss_request_options_create(pool);
    /* Initialize oss_client_options. */
    init_options(oss_client_options);
    /* Initialize the parameters. */
    aos_string_t bucket;
    aos_string_t object;
    aos_list_t buffer;
    aos_buf_t *content = NULL;
    aos_table_t *headers = NULL;
    aos_table_t *resp_headers = NULL; 
    aos_status_t *resp_status = NULL; 
    aos_str_set(&bucket, bucket_name);
    aos_str_set(&object, object_name);
    aos_list_init(&buffer);
    content = aos_buf_pack(oss_client_options->pool, object_content, strlen(object_content));
    aos_list_add_tail(&content->node, &buffer);
    /* Upload the object. */
    resp_status = oss_put_object_from_buffer(oss_client_options, &bucket, &object, &buffer, headers, &resp_headers);
    /* Check whether the object is uploaded. */
    if (aos_status_is_ok(resp_status)) {
        printf("put object from buffer succeeded\n");
    } else {
        printf("put object from buffer failed\n");      
    }
    /* Release the memory pool. This operation releases the memory resources allocated for the request. */
    aos_pool_destroy(pool);
    /* Release the allocated global resources. */
    aos_http_io_deinitialize();
    return 0;
}
Ruby

require 'aliyun/oss'

client = Aliyun::OSS::Client.new(
  # In this example, the endpoint of the China (Hangzhou) region is used. Specify your actual endpoint. 
  endpoint: 'https://oss-cn-hangzhou.aliyuncs.com',
  # Obtain access credentials from environment variables. Before you run the sample code, make sure that the OSS_ACCESS_KEY_ID and OSS_ACCESS_KEY_SECRET environment variables are configured. 
  access_key_id: ENV['OSS_ACCESS_KEY_ID'],
  access_key_secret: ENV['OSS_ACCESS_KEY_SECRET']
)
# Specify the name of the bucket. Example: examplebucket. 
bucket = client.get_bucket('examplebucket')
# Upload the object. 
bucket.put_object('exampleobject.txt', :file => 'D:\\localpath\\examplefile.txt')

Use ossutil

You can perform simple uploads by using ossutil. For more information, see Simple upload.

Use the OSS API

If your business requires a high level of customization, you can directly call RESTful APIs. To directly call an API, you need to include the signature calculation in your code. For more information, see PutObject.

References

  • We recommend that you upload files from your client directly to OSS without uploading them to an application server first. This solution accelerates data upload and reduces the resource usage of the application server by eliminating the need to transfer objects to and from the application server. For more information, see Direct client uploads.

  • If you use simple upload to upload data, you can add multiple pieces of object metadata to describe the object to upload. For example, you can specify standard HTTP headers such as Content-Type and specify user metadata. For more information about object metadata, see Manage object metadata.

  • After you upload an object to OSS, you can send a callback request to the specified application server by using an upload callback. For more information, see Upload callbacks.

  • You can process an uploaded image object by using different methods, for example, by including an image style in the URL of the image object. For more information, see IMG implementation modes.

  • If you want to obtain the image size after an image is uploaded, you can specify ?x-oss-process=image/info to query basic information about the image. For more information, see Query image information.

  • You can perform operations, such as recognizing texts, extracting subtitles, transcoding, and generating thumbnails, on uploaded images or videos. For more information, see Functions and features.

  • You can add signature information to the URL of an uploaded object and share the signed URL to allow third parties to access the object. For more information, see Include a V1 signature in a URL.

  • When you access an object from a browser by using the URL of the object, whether the object is previewed or downloaded is determined by the type of the domain name in the URL and the creation time of the bucket that stores the object. For more information, see What do I do if an image object is downloaded as an attachment but cannot be previewed when I access the image object by using its URL?

  • When you use data processing frameworks such as Hadoop and Spark to process batch jobs, you can use OSS to store data. After data is uploaded to OSS, you can access the data from Elastic Container Instance. For more information, see Access data in an OSS bucket from Elastic Container Instance.

  • You can query the progress of an upload. For more information, see Upload progress bar.