Object Storage Service: Multipart upload (Go SDK V1)

Last Updated: Nov 28, 2025

Object Storage Service (OSS) provides the multipart upload feature. Multipart upload allows you to split a large object into multiple parts and upload the parts separately. After all parts are uploaded, you can call the CompleteMultipartUpload operation to combine them into a complete object.

Usage notes

  • In this topic, the public endpoint of the China (Hangzhou) region is used. If you want to access OSS from other Alibaba Cloud services in the same region as OSS, use an internal endpoint. For more information about OSS regions and endpoints, see Regions and endpoints.

  • In this topic, access credentials are obtained from environment variables. For more information about how to configure access credentials, see Configure access credentials.

  • In this topic, an OSSClient instance is created by using an OSS endpoint. If you want to create an OSSClient instance by using custom domain names or Security Token Service (STS), see Configure OSSClient instances.

  • To implement multipart upload, you must call the InitiateMultipartUpload, UploadPart, and CompleteMultipartUpload operations. Therefore, you must have the oss:PutObject permission. For more information, see Attach a custom policy to a RAM user.

  • Go SDK V2.2.5 or later supports all properties that are included in the following sample code.

Multipart upload procedure

To implement multipart upload, perform the following steps:

  1. Initialize a multipart upload.

    You can call the Bucket.InitiateMultipartUpload method. OSS returns a globally unique upload ID.

  2. Upload the parts.

    You can call the Bucket.UploadPart method to upload data for each part.

    Note
    • For the same upload ID, the part number identifies the relative position of a part within the object. If you upload new data using the same part number, the existing data of the part in OSS is overwritten.

    • OSS returns the MD5 hash of the received part data in the ETag header.

    • OSS calculates the MD5 hash of the uploaded data and compares it with the MD5 hash that is calculated by the SDK. If the two MD5 hashes do not match, the InvalidDigest error code is returned.

  3. Complete the multipart upload.

    After all parts are uploaded, you can call the Bucket.CompleteMultipartUpload method to combine all parts into a complete object.

Sample code

You can use the following sample code to perform a complete multipart upload.

package main

import (
	"fmt"
	"io"
	"log"
	"os"

	"github.com/aliyun/aliyun-oss-go-sdk/oss"
)

func main() {
	// Obtain access credentials from environment variables. Before you run the sample code, make sure that the OSS_ACCESS_KEY_ID and OSS_ACCESS_KEY_SECRET environment variables are configured.
	provider, err := oss.NewEnvironmentVariableCredentialsProvider()
	if err != nil {
		log.Fatalf("Error: %v", err)
	}

	// Create an OSSClient instance.
	// Set yourEndpoint to the endpoint of the bucket. For example, for a bucket in the China (Hangzhou) region, set the endpoint to https://oss-cn-hangzhou.aliyuncs.com. For other regions, set the endpoint as needed.
	// Set yourRegion to the region where the bucket is located. For example, for a bucket in the China (Hangzhou) region, set the region to cn-hangzhou. For other regions, set the region as needed.
	clientOptions := []oss.ClientOption{oss.SetCredentialsProvider(&provider)}
	clientOptions = append(clientOptions, oss.Region("yourRegion"))
	// Set the signature version.
	clientOptions = append(clientOptions, oss.AuthVersion(oss.AuthV4))
	client, err := oss.New("yourEndpoint", "", "", clientOptions...)
	if err != nil {
		log.Fatalf("Error: %v", err)
	}

	// Set the bucket name.
	bucketName := "examplebucket"
	// Set the full path of the object. The full path cannot contain the bucket name.
	objectName := "exampleobject.txt"
	// Specify the full path of the local file to upload. If you specify only the file name without a path, the file is uploaded from the local path of the project to which the sample program belongs.
	localFilename := "/localpath/exampleobject.txt"

	bucket, err := client.Bucket(bucketName)
	if err != nil {
		log.Fatalf("Error: %v", err)
	}

	// Set the part size in bytes. In this example, the part size is set to 5 MB.
	partSize := int64(5 * 1024 * 1024)

	// Call the multipart upload function.
	if err := uploadMultipart(bucket, objectName, localFilename, partSize); err != nil {
		log.Fatalf("Failed to upload multipart: %v", err)
	}

}

// Multipart upload function.
func uploadMultipart(bucket *oss.Bucket, objectName, localFilename string, partSize int64) error {
	// Split the local file into parts.
	chunks, err := oss.SplitFileByPartSize(localFilename, partSize)
	if err != nil {
		return fmt.Errorf("failed to split file into chunks: %w", err)
	}

	// Open the local file.
	file, err := os.Open(localFilename)
	if err != nil {
		return fmt.Errorf("failed to open file: %w", err)
	}
	defer file.Close()

	// Step 1: Initialize a multipart upload event.
	imur, err := bucket.InitiateMultipartUpload(objectName)
	if err != nil {
		return fmt.Errorf("failed to initiate multipart upload: %w", err)
	}

	// Step 2: Upload parts.
	var parts []oss.UploadPart
	for _, chunk := range chunks {
		// Move the file pointer to the start offset of the current part before uploading it.
		if _, err := file.Seek(chunk.Offset, io.SeekStart); err != nil {
			return fmt.Errorf("failed to seek file: %w", err)
		}
		part, err := bucket.UploadPart(imur, file, chunk.Size, chunk.Number)
		if err != nil {
			// If a part fails to be uploaded, try to abort the multipart upload task.
			if abortErr := bucket.AbortMultipartUpload(imur); abortErr != nil {
				log.Printf("Failed to abort multipart upload: %v", abortErr)
			}
			return fmt.Errorf("failed to upload part: %w", err)
		}
		parts = append(parts, part)
	}

	// Set the access control list (ACL) of the object to private. By default, the ACL of the object is inherited from the bucket.
	objectAcl := oss.ObjectACL(oss.ACLPrivate)

	// Step 3: Complete the multipart upload.
	_, err = bucket.CompleteMultipartUpload(imur, parts, objectAcl)
	if err != nil {
		// If the upload fails to be completed, try to abort the upload.
		if abortErr := bucket.AbortMultipartUpload(imur); abortErr != nil {
			log.Printf("Failed to abort multipart upload: %v", abortErr)
		}
		return fmt.Errorf("failed to complete multipart upload: %w", err)
	}

	log.Printf("Multipart upload completed successfully.")
	return nil
}

FAQ

How do I abort a multipart upload event?

You can use the Bucket.AbortMultipartUpload method to abort a multipart upload event in the following scenarios.

  1. File error:

    • If you find that a file is corrupted or contains malicious code during the upload, you can abort the upload to prevent potential threats.

  2. Unstable network:

    • If the network connection is unstable or interrupted, parts may be lost or corrupted during the upload. You can abort the upload and restart it to ensure data integrity and consistency.

  3. Resource limits:

    • If your storage space is limited and the file to be uploaded is too large, you can abort the upload to release storage resources. This lets you allocate resources to more important tasks.

  4. Accidental operation:

    • If you accidentally start an unnecessary upload task or upload an incorrect file version, you can abort the upload event.

...
if err = bucket.AbortMultipartUpload(imur); err != nil {
	log.Fatalf("Failed to abort multipart upload: %v", err)
}

log.Printf("Multipart upload aborted successfully.")

How do I list uploaded parts?

You can use the Bucket.ListUploadedParts method to list the parts that have been successfully uploaded for a specific multipart upload event in the following scenarios.

Monitor upload progress:

  1. Large file uploads:

    • When you upload a large file, you can list the uploaded parts to ensure that the upload is proceeding as expected and to promptly identify any issues.

  2. Resumable uploads:

    • If the network is unstable or the upload is interrupted, you can view the uploaded parts to decide whether to retry uploading the remaining parts. This helps you resume the upload, as shown in the sketch after the following snippet.

  3. Troubleshooting:

    • If an error occurs during the upload, you can check the uploaded parts to quickly locate the source of the problem. For example, if a specific part fails to upload, you can address the issue accordingly.

  4. Resource management:

    • In scenarios where you need to strictly control resource usage, you can monitor the upload progress to better manage storage space and bandwidth resources. This ensures efficient resource utilization.

...
lsRes, err := bucket.ListUploadedParts(imur)
if err != nil {
	log.Fatalf("Failed to list uploaded parts: %v", err)
}

for _, upload := range lsRes.UploadedParts {
	log.Printf("List PartNumber: %d, ETag: %s, LastModified: %v\n", upload.PartNumber, upload.ETag, upload.LastModified)
}
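
As an example of the resumable upload scenario described above, the following sketch lists the parts that were already uploaded for a known upload ID, skips them, uploads only the missing parts, and then completes the upload. This is a minimal sketch, not the only way to resume an upload: the endpoint, bucket name, object name, local file path, and upload ID are placeholders, the file must be split with the same part size that was used when the upload was started, and the sketch assumes that the upload contains no more than 1,000 parts (the default number of parts returned by ListUploadedParts).

package main

import (
	"io"
	"log"
	"os"

	"github.com/aliyun/aliyun-oss-go-sdk/oss"
)

func main() {
	// Obtain access credentials from environment variables and create the bucket handle.
	provider, err := oss.NewEnvironmentVariableCredentialsProvider()
	if err != nil {
		log.Fatalf("Error: %v", err)
	}
	client, err := oss.New("yourEndpoint", "", "", oss.SetCredentialsProvider(&provider))
	if err != nil {
		log.Fatalf("Error: %v", err)
	}
	bucket, err := client.Bucket("examplebucket")
	if err != nil {
		log.Fatalf("Error: %v", err)
	}

	// Rebuild the identifier of the interrupted multipart upload event. The values are placeholders.
	imur := oss.InitiateMultipartUploadResult{
		Bucket:   "examplebucket",
		Key:      "exampleobject.txt",
		UploadID: "yourUploadID",
	}

	// List the parts that were already uploaded and index them by part number.
	lsRes, err := bucket.ListUploadedParts(imur)
	if err != nil {
		log.Fatalf("Failed to list uploaded parts: %v", err)
	}
	uploaded := make(map[int]oss.UploadPart, len(lsRes.UploadedParts))
	for _, p := range lsRes.UploadedParts {
		uploaded[p.PartNumber] = oss.UploadPart{PartNumber: p.PartNumber, ETag: p.ETag}
	}

	// Split the local file with the same part size that was used when the upload was started.
	localFilename := "/localpath/exampleobject.txt"
	partSize := int64(5 * 1024 * 1024)
	chunks, err := oss.SplitFileByPartSize(localFilename, partSize)
	if err != nil {
		log.Fatalf("Failed to split file: %v", err)
	}
	file, err := os.Open(localFilename)
	if err != nil {
		log.Fatalf("Failed to open file: %v", err)
	}
	defer file.Close()

	// Upload only the parts that are still missing, then complete the upload.
	var parts []oss.UploadPart
	for _, chunk := range chunks {
		if p, ok := uploaded[chunk.Number]; ok {
			parts = append(parts, p)
			continue
		}
		if _, err := file.Seek(chunk.Offset, io.SeekStart); err != nil {
			log.Fatalf("Failed to seek file: %v", err)
		}
		part, err := bucket.UploadPart(imur, file, chunk.Size, chunk.Number)
		if err != nil {
			log.Fatalf("Failed to upload part %d: %v", chunk.Number, err)
		}
		parts = append(parts, part)
	}
	if _, err := bucket.CompleteMultipartUpload(imur, parts); err != nil {
		log.Fatalf("Failed to complete multipart upload: %v", err)
	}
	log.Printf("Resumed multipart upload completed successfully.")
}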

How do I list multipart upload events?

You can use the Bucket.ListMultipartUploads method to list all ongoing multipart upload events in a bucket in the following scenarios.

Monitoring scenarios:

  1. Batch file upload management:

    • When you need to upload many files, you can use the ListMultipartUploads method to monitor all multipart upload activities in real time. This ensures that all files are correctly uploaded in parts.

  2. Fault detection and recovery:

    • If network issues or other faults occur during the upload, some parts may fail to upload. By monitoring ongoing multipart upload events, you can promptly detect these issues and take measures to resume the uploads.

  3. Resource optimization and management:

    • During large-scale file uploads, monitoring ongoing multipart upload events can help optimize resource allocation. For example, you can adjust bandwidth usage or optimize the upload policy based on the upload progress.

  4. Data migration:

    • When you perform a large-scale data migration project, you can monitor all ongoing multipart upload events to ensure the smooth progress of the migration task. This lets you promptly detect and resolve any potential issues.

Parameter settings

  • Delimiter: The character that is used to group object names. All object names that contain the specified prefix and appear before the first occurrence of the delimiter character are grouped as a single element.

  • MaxUploads: The maximum number of multipart upload events to return. The default value and the maximum value are both 1000.

  • KeyMarker: Specifies that the returned results include only the multipart upload events whose object names are lexicographically greater than the value of this parameter. You can use this parameter together with the UploadIDMarker parameter to specify the starting position of the results to return.

  • Prefix: The prefix that the returned object names must contain. If you use the Prefix parameter in a query, the returned object names still contain the prefix.

  • UploadIDMarker: The starting position of the results to return. This parameter is used together with the KeyMarker parameter.

    • If the KeyMarker parameter is not set, OSS ignores this parameter.

    • If the KeyMarker parameter is set, the query results include the following:

      • Multipart upload events whose object names are lexicographically greater than the value of the KeyMarker parameter.

      • Multipart upload events whose object names are the same as the value of the KeyMarker parameter but whose upload IDs are greater than the value of the UploadIDMarker parameter.

The following examples show how to list multipart upload events with different parameter settings. A complete pagination sketch follows the examples.

  • Use default parameters.

    ...
    lsRes, err := bucket.ListMultipartUploads(oss.KeyMarker(keyMarker), oss.UploadIDMarker(uploadIdMarker))
    if err != nil {
        log.Fatalf("Failed to list multipart uploads: %v", err)
    }

    for _, upload := range lsRes.Uploads {
        log.Printf("Upload: %s, UploadID: %s\n", upload.Key, upload.UploadID)
    }
  • Specify the prefix as file.

    ...
    lsRes, err := bucket.ListMultipartUploads(oss.Prefix("file"))
    if err != nil {
        log.Fatalf("Failed to list multipart uploads with prefix: %v", err)
    }

    log.Printf("Uploads: %v\n", lsRes.Uploads)
  • Specify that a maximum of 100 results are returned.

    ...
    lsRes, err := bucket.ListMultipartUploads(oss.MaxUploads(100))
    if err != nil {
        log.Fatalf("Failed to list multipart uploads with limit: %v", err)
    }

    log.Printf("Uploads: %v\n", lsRes.Uploads)
  • Specify the prefix as file and that a maximum of 100 results are returned.

    ...
    lsRes, err := bucket.ListMultipartUploads(oss.Prefix("file"), oss.MaxUploads(100))
    if err != nil {
        log.Fatalf("Failed to list multipart uploads with prefix and limit: %v", err)
    }

    log.Printf("Uploads: %v\n", lsRes.Uploads)

References

  • For the complete sample code for multipart upload, see GitHub example.

  • A complete multipart upload involves three API operations. For more information, see the following topics:

    • For more information about the API operation used to initialize a multipart upload event, see InitiateMultipartUpload.

    • For more information about the API operation used to upload a part, see UploadPart.

    • For more information about the API operation used to complete a multipart upload, see CompleteMultipartUpload.

  • For more information about the API operation used to abort a multipart upload event, see AbortMultipartUpload.

  • For more information about the API operation used to list uploaded parts, see ListUploadedParts.

  • For more information about the API operation used to list all ongoing multipart upload events, which are events that have been initiated but not yet completed or aborted, see ListMultipartUploads.