
Upload objects

Last Updated: Nov 07, 2017

OSS Go SDK provides a variety of object uploading interfaces. You can upload a file to OSS using any of the following methods:

  • Simple upload (PutObject): applicable to small files.
  • Multipart upload (UploadFile): applicable to large files.
  • Append upload (AppendObject): applicable to objects that have content appended to them over time.

Simple upload

Upload data from a data stream (io.Reader)

You can use Bucket.PutObject to upload a file in the simple mode.

Note: The example code of simple upload can be found in sample/put_object.go.

String upload

    import "strings"
    import "github.com/aliyun/aliyun-oss-go-sdk/oss"

    client, err := oss.New("Endpoint", "AccessKeyId", "AccessKeySecret")
    if err != nil {
        // HandleError(err)
    }

    bucket, err := client.Bucket("my-bucket")
    if err != nil {
        // HandleError(err)
    }

    err = bucket.PutObject("my-object", strings.NewReader("MyObjectValue"))
    if err != nil {
        // HandleError(err)
    }

Byte array upload

    import "bytes"
    import "github.com/aliyun/aliyun-oss-go-sdk/oss"

    client, err := oss.New("Endpoint", "AccessKeyId", "AccessKeySecret")
    if err != nil {
        // HandleError(err)
    }

    bucket, err := client.Bucket("my-bucket")
    if err != nil {
        // HandleError(err)
    }

    err = bucket.PutObject("my-object", bytes.NewReader([]byte("MyObjectValue")))
    if err != nil {
        // HandleError(err)
    }

File stream upload

    import "os"
    import "github.com/aliyun/aliyun-oss-go-sdk/oss"

    client, err := oss.New("Endpoint", "AccessKeyId", "AccessKeySecret")
    if err != nil {
        // HandleError(err)
    }

    bucket, err := client.Bucket("my-bucket")
    if err != nil {
        // HandleError(err)
    }

    fd, err := os.Open("LocalFile")
    if err != nil {
        // HandleError(err)
    }
    defer fd.Close()

    err = bucket.PutObject("my-object", fd)
    if err != nil {
        // HandleError(err)
    }

Upload by local file names

You can use Bucket.PutObjectFromFile to upload a specified local file and use the content of the local file as the object value.

    import "github.com/aliyun/aliyun-oss-go-sdk/oss"

    client, err := oss.New("Endpoint", "AccessKeyId", "AccessKeySecret")
    if err != nil {
        // HandleError(err)
    }

    bucket, err := client.Bucket("my-bucket")
    if err != nil {
        // HandleError(err)
    }

    err = bucket.PutObjectFromFile("my-object", "LocalFile")
    if err != nil {
        // HandleError(err)
    }

Note: When you upload an object using the preceding code, make sure the object size does not exceed 5 GB. You can use multipart upload to upload a file exceeding 5 GB.

Specify object metadata during upload

When uploading data from a data stream, you can specify one or more metadata entries for the object. Metadata names are case-insensitive. For example, if you upload an object with a metadata entry named "Name" and then call Bucket.GetObjectDetailedMeta, the entry is returned as "X-Oss-Meta-Name". Ignore case when comparing or reading metadata names.

You can specify the following metadata:

  • CacheControl: Specifies the web page caching behavior when the object is downloaded.
  • ContentDisposition: Specifies the name of the object when it is downloaded.
  • ContentEncoding: Specifies the content encoding format of the object when it is downloaded.
  • Expires: Specifies the expiration time. You can customize the time format; the http.TimeFormat format is recommended.
  • ServerSideEncryption: Specifies the server-side encryption algorithm used when OSS creates the object. The valid value is AES256.
  • ObjectACL: Specifies the ACL of the object when OSS creates it.
  • Meta: Custom metadata, prefixed with "X-Oss-Meta-".
    import (
        "strings"
        "time"

        "github.com/aliyun/aliyun-oss-go-sdk/oss"
    )

    client, err := oss.New("Endpoint", "AccessKeyId", "AccessKeySecret")
    if err != nil {
        // HandleError(err)
    }

    bucket, err := client.Bucket("my-bucket")
    if err != nil {
        // HandleError(err)
    }

    expires := time.Date(2049, time.January, 10, 23, 0, 0, 0, time.UTC)
    options := []oss.Option{
        oss.Expires(expires),
        oss.ObjectACL(oss.ACLPublicRead),
        oss.Meta("MyProp", "MyPropVal"),
    }

    err = bucket.PutObject("my-object", strings.NewReader("MyObjectValue"), options...)
    if err != nil {
        // HandleError(err)
    }
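The case-insensitive matching described above can be checked by reading the metadata back. The following is a minimal sketch that reuses the bucket handle from the preceding example and assumes the upload above succeeded; http.Header.Get matches the returned "X-Oss-Meta-Myprop" header regardless of the case you pass in.

    import "fmt"

    // Read the object metadata back; the bucket variable comes from the preceding example.
    props, err := bucket.GetObjectDetailedMeta("my-object")
    if err != nil {
        // HandleError(err)
    }
    // The metadata set as "MyProp" above is returned under the "X-Oss-Meta-Myprop" header;
    // Get matches the header name case-insensitively.
    fmt.Println("MyProp:", props.Get("X-Oss-Meta-MyProp"))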

Note: Bucket.PutObject, Bucket.PutObjectFromFile, and Bucket.UploadFile all support specifying metadata during object upload.

Create a simulated folder

OSS does not have real folders; all elements are stored as objects. However, you can create a simulated folder using the following code:

    import "strings"
    import "github.com/aliyun/aliyun-oss-go-sdk/oss"

    client, err := oss.New("Endpoint", "AccessKeyId", "AccessKeySecret")
    if err != nil {
        // HandleError(err)
    }

    bucket, err := client.Bucket("my-bucket")
    if err != nil {
        // HandleError(err)
    }

    err = bucket.PutObject("my-dir/", strings.NewReader(""))
    if err != nil {
        // HandleError(err)
    }

Note:

  • Creating a simulated folder with the preceding code is essentially creating an empty object whose name ends with a slash.
  • This object can be uploaded and downloaded like any other object, but the OSS console displays objects whose names end with a slash as folders.
  • For more information about accessing folders, see Manage objects.

Append upload

OSS supports appendable upload of objects. You can use Bucket.AppendObject to upload an appendable object.

When calling this method, you must specify the append position. For a newly created object, the append position is 0. For an existing object, the append position must equal the current length of the object.

  • If the object does not exist, an appendable object is created when AppendObject is called.
  • If the object exists, new content is added to the end of the object when AppendObject is called.

Note: For example code of append upload, see sample/append_object.go.

    import "strings"
    import "github.com/aliyun/aliyun-oss-go-sdk/oss"

    client, err := oss.New("Endpoint", "AccessKeyId", "AccessKeySecret")
    if err != nil {
        // HandleError(err)
    }

    bucket, err := client.Bucket("my-bucket")
    if err != nil {
        // HandleError(err)
    }

    var nextPos int64 = 0

    // For the first append the position is 0; the returned value is the position for the next append.
    nextPos, err = bucket.AppendObject("my-object", strings.NewReader("YourObjectValue"), nextPos)
    if err != nil {
        // HandleError(err)
    }

    // The second append
    nextPos, err = bucket.AppendObject("my-object", strings.NewReader("YourObjectValue"), nextPos)
    if err != nil {
        // HandleError(err)
    }

    // You can append to the object many times.

Note:

  • You can only append content to an appendable object (an object created using AppendObject).
  • An appendable object cannot be copied.

When the object is appended for the first time (that is, the append position is 0), you can specify the object metadata. You cannot specify the object metadata for appending operations other than the first one.

    // Specify the object metadata when appending the object for the first time
    nextPos, err = bucket.AppendObject("my-object", strings.NewReader("YourObjectValue"), 0, oss.Meta("MyProp", "MyPropVal"))
    if err != nil {
        // HandleError(err)
    }

Multipart upload

If the network is unstable or the program crashes while a large object is being uploaded, the whole upload fails and the object must be uploaded again from the beginning, which wastes resources. On an unstable network you may have to retry many times.

You can use Bucket.UploadFile to perform a multipart upload that can be resumed from a checkpoint. The interface has the following parameters:

  • objectKey: The name of the object to be uploaded to OSS.

  • filePath: The path of the local file to be uploaded.

  • partSize: The size of each part to be uploaded, in bytes. The size ranges from 100 KB to 5 GB.

  • options: Optional items, mainly include:

    • Routines: The number of concurrent uploads. The default value is 1, which means no concurrent uploads.
    • Checkpoint: Whether to enable resumable upload, and the path of the checkpoint file. Resumable upload is disabled by default. The checkpoint file path can be left empty; if it is not specified, it defaults to file.cpt in the same directory as the local file, where file is the name of the local file.
    • For other metadata options, see Specify object metadata during upload.

Resumable upload splits the object into multiple parts and uploads them separately. When all parts have been uploaded, the upload of the entire object is complete. The current progress is recorded in the checkpoint file during the upload. If uploading a part fails, the next attempt resumes from the position recorded in the checkpoint file, which requires that the next call use the same checkpoint file as the previous one. When the upload is complete, the checkpoint file is deleted.

Note: The example code of multipart upload can be found in sample/put_object.go.

    import "github.com/aliyun/aliyun-oss-go-sdk/oss"

    client, err := oss.New("Endpoint", "AccessKeyId", "AccessKeySecret")
    if err != nil {
        // HandleError(err)
    }

    bucket, err := client.Bucket("my-bucket")
    if err != nil {
        // HandleError(err)
    }

    // The part size is 100 KB. Parts are uploaded using three coroutines concurrently. Resumable upload is enabled.
    err = bucket.UploadFile("my-object", "LocalFile", 100*1024, oss.Routines(3), oss.Checkpoint(true, ""))
    if err != nil {
        // HandleError(err)
    }

Note:

  • The SDK records the intermediate upload state in the checkpoint (.cpt) file, so you must have write permission for that file.

  • The checkpoint file records the intermediate state of the upload and has a self-checking function; you cannot edit it. If the checkpoint file is damaged, all parts are uploaded again. When the upload is complete, the checkpoint file is deleted.

  • If the local file is modified during the upload, all parts are uploaded again.

  • You can use oss.Checkpoint(true, "your-cp-file.cp") to specify the path of the checkpoint file for a resumable upload.

  • If you call bucket.UploadFile(objectKey, localFile, 100*1024) without options, concurrent upload and resumable upload are disabled by default.
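As a sketch of how resumption works in practice, the call below names the checkpoint file explicitly and retries on failure; because every attempt reuses the same checkpoint file, a failed attempt resumes from the recorded position instead of starting over. The retry loop and the checkpoint path "my-object.cpt" are illustrative choices for this example, not part of the SDK.

    // Each attempt reuses the same checkpoint file, so an interrupted upload
    // continues where the previous attempt stopped.
    var uploadErr error
    for attempt := 0; attempt < 3; attempt++ {
        uploadErr = bucket.UploadFile("my-object", "LocalFile", 100*1024,
            oss.Routines(3), oss.Checkpoint(true, "my-object.cpt"))
        if uploadErr == nil {
            break // upload complete; the SDK deletes the checkpoint file
        }
    }
    if uploadErr != nil {
        // HandleError(uploadErr)
    }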

Obtain the information about all uploaded parts

You can use Bucket.ListUploadedParts to obtain the uploaded parts.

    import "fmt"
    import "github.com/aliyun/aliyun-oss-go-sdk/oss"

    client, err := oss.New("Endpoint", "AccessKeyId", "AccessKeySecret")
    if err != nil {
        // HandleError(err)
    }

    bucket, err := client.Bucket("my-bucket")
    if err != nil {
        // HandleError(err)
    }

    imur, err := bucket.InitiateMultipartUpload("my-object")
    if err != nil {
        // HandleError(err)
    }

    lsRes, err := bucket.ListUploadedParts(imur)
    if err != nil {
        // HandleError(err)
    }
    fmt.Println("Parts:", lsRes.UploadedParts)
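The example above initiates a multipart upload task only to demonstrate ListUploadedParts. If you do not intend to finish that task, you can clean it up with Bucket.AbortMultipartUpload, as in the following sketch; aborting discards the task and any parts already uploaded.

    // Abort the multipart upload task initiated above so it does not linger in the bucket.
    err = bucket.AbortMultipartUpload(imur)
    if err != nil {
        // HandleError(err)
    }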

Obtain all multipart upload tasks

You can use Bucket.ListMultipartUploads to list the current multipart upload tasks. Main parameters are described as follows:

  • Delimiter: A character used to group object names. Objects whose names contain the specified prefix, followed by the first occurrence of the delimiter, are grouped as a single element.
  • MaxUploads: The maximum number of multipart upload tasks returned in one request. The default value is 1,000, and the value cannot exceed 1,000.
  • KeyMarker: Lists only multipart upload tasks whose object names are lexicographically greater than the KeyMarker value.
  • Prefix: Lists only objects whose names contain the specified prefix. Note that names returned in the response still contain the prefix.

Use default parameters

    import "fmt"
    import "github.com/aliyun/aliyun-oss-go-sdk/oss"

    client, err := oss.New("Endpoint", "AccessKeyId", "AccessKeySecret")
    if err != nil {
        // HandleError(err)
    }

    bucket, err := client.Bucket("my-bucket")
    if err != nil {
        // HandleError(err)
    }

    lsRes, err := bucket.ListMultipartUploads()
    if err != nil {
        // HandleError(err)
    }
    fmt.Println("Uploads:", lsRes.Uploads)

Specify a prefix

    lsRes, err := bucket.ListMultipartUploads(oss.Prefix("my-object-"))
    if err != nil {
        // HandleError(err)
    }
    fmt.Println("Uploads:", lsRes.Uploads)

Specify a maximum of 100 returned results

    lsRes, err := bucket.ListMultipartUploads(oss.MaxUploads(100))
    if err != nil {
        // HandleError(err)
    }
    fmt.Println("Uploads:", lsRes.Uploads)

Specify the prefix and the maximum number of entries returned

    lsRes, err := bucket.ListMultipartUploads(oss.Prefix("my-object-"), oss.MaxUploads(100))
    if err != nil {
        // HandleError(err)
    }
    fmt.Println("Uploads:", lsRes.Uploads)
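KeyMarker is typically used together with an upload ID marker to page through buckets that have more than 1,000 ongoing tasks. The following is a rough sketch under the assumption that the result type exposes IsTruncated, NextKeyMarker, and NextUploadIDMarker fields and that the oss.UploadIDMarker option is available, as in current versions of the SDK.

    // Illustrative pagination: keep listing until IsTruncated is false, feeding the
    // markers from the previous response into the next request.
    keyMarker := ""
    uploadIDMarker := ""
    for {
        lsRes, err := bucket.ListMultipartUploads(oss.KeyMarker(keyMarker),
            oss.UploadIDMarker(uploadIDMarker), oss.MaxUploads(100))
        if err != nil {
            // HandleError(err)
            break
        }
        fmt.Println("Uploads:", lsRes.Uploads)
        if !lsRes.IsTruncated {
            break
        }
        keyMarker = lsRes.NextKeyMarker
        uploadIDMarker = lsRes.NextUploadIDMarker
    }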