Upload objects

Last Updated: Dec 22, 2017

In OSS, an object is the basic data unit for user operations. Different upload methods have different object size limits: the PutObject method supports objects of up to 5 GB, and the multipart upload method supports objects of up to 48.8 TB.

Simple upload

Upload a specified string

```csharp
using System;
using System.IO;
using System.Text;
using Aliyun.OSS;

// Initialize an OssClient
var client = new OssClient(endpoint, accessKeyId, accessKeySecret);
try
{
    string str = "a line of simple text";
    byte[] binaryData = Encoding.ASCII.GetBytes(str);
    var requestContent = new MemoryStream(binaryData);
    client.PutObject(bucketName, key, requestContent);
    Console.WriteLine("Put object succeeded");
}
catch (Exception ex)
{
    Console.WriteLine("Put object failed, {0}", ex.Message);
}
```

For complete code, see GitHub

Upload a specified local object

```csharp
using System;
using Aliyun.OSS;

// Initialize an OssClient
var client = new OssClient(endpoint, accessKeyId, accessKeySecret);
try
{
    string fileToUpload = "your local file path";
    client.PutObject(bucketName, key, fileToUpload);
    Console.WriteLine("Put object succeeded");
}
catch (Exception ex)
{
    Console.WriteLine("Put object failed, {0}", ex.Message);
}
```

For complete code, see GitHub

Upload an object with MD5 verification

```csharp
using System;
using System.IO;
using Aliyun.OSS;
using Aliyun.OSS.Util;

// Initialize an OssClient
var client = new OssClient(endpoint, accessKeyId, accessKeySecret);
try
{
    string fileToUpload = "your local file path";
    string md5;
    // Compute the Content-MD5 value of the file before uploading
    using (var fs = File.Open(fileToUpload, FileMode.Open))
    {
        md5 = OssUtils.ComputeContentMd5(fs, fs.Length);
    }
    var objectMeta = new ObjectMetadata
    {
        ContentMd5 = md5
    };
    client.PutObject(bucketName, key, fileToUpload, objectMeta);
    Console.WriteLine("Put object succeeded");
}
catch (Exception ex)
{
    Console.WriteLine("Put object failed, {0}", ex.Message);
}
```

For complete code, see GitHub

Note:

  • To verify that the data received by OSS is consistent with the data sent by the SDK, you can set the Content-MD5 value in ObjectMetadata. OSS then checks the received data against this MD5 value.

  • MD5 verification may cause a reduction in performance.
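The Content-MD5 value used above is the Base64 encoding of the raw 16-byte MD5 digest of the object body, not the hexadecimal digest string. A minimal sketch of that computation, independent of OssUtils:

```csharp
using System;
using System.Security.Cryptography;
using System.Text;

class ContentMd5Demo
{
    // Content-MD5 is the Base64 encoding of the raw MD5 digest of the body.
    static string ComputeContentMd5(byte[] data)
    {
        using (var md5 = MD5.Create())
        {
            return Convert.ToBase64String(md5.ComputeHash(data));
        }
    }

    static void Main()
    {
        var body = Encoding.ASCII.GetBytes("a line of simple text");
        Console.WriteLine(ComputeContentMd5(body));
    }
}
```

OssUtils.ComputeContentMd5 performs the same computation over a stream; the sketch only illustrates the header format.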

Upload an object carrying a header

Upload an object carrying a standard header

OSS allows you to customize HTTP headers for objects. The following code sets the expiration time for an object:

```csharp
using System;
using System.IO;
using Aliyun.OSS;

// Initialize an OssClient
var client = new OssClient(endpoint, accessKeyId, accessKeySecret);
try
{
    using (var fs = File.Open(fileToUpload, FileMode.Open))
    {
        var metadata = new ObjectMetadata();
        metadata.ContentLength = fs.Length;
        // Set the standard Expires header for the object
        metadata.ExpirationTime = DateTime.Parse("2015-10-12T00:00:00.000Z");
        client.PutObject(bucketName, key, fs, metadata);
        Console.WriteLine("Put object succeeded");
    }
}
catch (Exception ex)
{
    Console.WriteLine("Put object failed, {0}", ex.Message);
}
```

For complete code, see GitHub

Note:

  • .NET SDK supports the following HTTP headers: Cache-Control, Content-Disposition, Content-Encoding, Expires, and Content-Type.

  • For more information, see RFC2616.

Upload an object carrying a custom header

OSS allows you to add custom UserMetadata to describe an object. For example:

```csharp
using System;
using System.IO;
using Aliyun.OSS;

// Initialize an OssClient
var client = new OssClient(endpoint, accessKeyId, accessKeySecret);
try
{
    using (var fs = File.Open(fileToUpload, FileMode.Open))
    {
        var metadata = new ObjectMetadata();
        // Custom metadata is sent as the HTTP header x-oss-meta-name
        metadata.UserMetadata.Add("name", "my-data");
        metadata.ContentLength = fs.Length;
        client.PutObject(bucketName, key, fs, metadata);
        Console.WriteLine("Put object succeeded");
    }
}
catch (Exception ex)
{
    Console.WriteLine("Put object failed, {0}", ex.Message);
}
```

For complete code, see GitHub

Note:

  • The preceding code defines a UserMetadata entry named “name” with the value “my-data”.

  • When a user downloads this object, the user also obtains the metadata.

  • A single object can carry multiple UserMetadata entries, but the total size of the UserMetadata cannot exceed 2 KB.

  • When you upload an object using the preceding code, make sure that the object size does not exceed 5 GB. If the size limit is exceeded, you can use multipart upload.

  • UserMetadata names are case-insensitive. For example, if you upload an object with a UserMetadata entry named “Name”, it is stored in the HTTP header as “x-oss-meta-name”, so the name “name” can be used to access it.

  • However, if the stored name is “name” and you query it as “Name”, no entry is found and the system returns “Null”.

Create a simulated folder

OSS does not use folders. All elements are stored as objects. However, you can create simulated folders using the following code:

```csharp
using System;
using System.IO;
using Aliyun.OSS;

// Initialize an OssClient
var client = new OssClient(endpoint, accessKeyId, accessKeySecret);
try
{
    // Important: the folder key must end with a slash (/)
    const string key = "yourfolder/";
    // The simulated folder is an empty object
    using (var stream = new MemoryStream())
    {
        client.PutObject(bucketName, key, stream);
        Console.WriteLine("Create dir {0} succeeded", key);
    }
}
catch (Exception ex)
{
    Console.WriteLine("Create dir failed, {0}", ex.Message);
}
```

For complete code, see GitHub

Note:

  • Creating a simulated folder is equivalent to creating an empty object whose key ends with a slash (/).

  • The object can be uploaded and downloaded as a normal object; however, the OSS console displays any object whose name ends with a slash as a folder.

  • For more information, see Manage objects.

Asynchronous upload

```csharp
using System;
using System.IO;
using System.Threading;
using Aliyun.OSS;

// Initialize an OssClient (as a static field so that the callback can access it)
static readonly OssClient client = new OssClient(endpoint, accessKeyId, accessKeySecret);
static readonly AutoResetEvent _event = new AutoResetEvent(false);

public static void AsyncPutObject()
{
    try
    {
        using (var fs = File.Open(fileToUpload, FileMode.Open))
        {
            var metadata = new ObjectMetadata();
            metadata.CacheControl = "No-Cache";
            metadata.ContentType = "text/html";
            // The last argument is a user state object that is passed through to the callback
            client.BeginPutObject(bucketName, key, fs, metadata, PutObjectCallback, new string('a', 8));
            _event.WaitOne();
        }
    }
    catch (Exception ex)
    {
        Console.WriteLine("Put object failed, {0}", ex.Message);
    }
}

private static void PutObjectCallback(IAsyncResult ar)
{
    try
    {
        var result = client.EndPutObject(ar);
        Console.WriteLine("ETag:{0}", result.ETag);
        Console.WriteLine("User Parameter:{0}", ar.AsyncState as string);
        Console.WriteLine("Put object succeeded");
    }
    catch (Exception ex)
    {
        Console.WriteLine("Put object failed, {0}", ex.Message);
    }
    finally
    {
        // Release the waiting caller whether the upload succeeded or failed
        _event.Set();
    }
}
```

For complete code, see GitHub

Note: Asynchronous upload requires the implementation of callback functions.

Append upload

You can upload an object with the append method. For more information, see Append object.

```csharp
using System;
using System.IO;
using Aliyun.OSS;

// Initialize an OssClient
var client = new OssClient(endpoint, accessKeyId, accessKeySecret);

/// <summary>
/// Append content to the specified object on OSS.
/// </summary>
/// <param name="bucketName">Name of the specified bucket. </param>
/// <param name="key">Name of the object on OSS to append the content to. </param>
/// <param name="fileToUpload">Path of the local file whose content is appended. </param>
public static void AppendObject(string bucketName, string key, string fileToUpload)
{
    // If the object to be appended to already exists, first obtain its length,
    // which is the append position. If the object does not exist, the position is 0.
    long position = 0;
    try
    {
        var metadata = client.GetObjectMetadata(bucketName, key);
        position = metadata.ContentLength;
    }
    catch (Exception) { }
    try
    {
        // First append
        using (var fs = File.Open(fileToUpload, FileMode.Open))
        {
            var request = new AppendObjectRequest(bucketName, key)
            {
                ObjectMetadata = new ObjectMetadata(),
                Content = fs,
                Position = position
            };
            var result = client.AppendObject(request);
            // The next append must start at the position returned by this append
            position = result.NextAppendPosition;
            Console.WriteLine("Append object succeeded, next append position:{0}", position);
        }
        // Second append, using the position returned by the previous append
        using (var fs = File.Open(fileToUpload, FileMode.Open))
        {
            var request = new AppendObjectRequest(bucketName, key)
            {
                ObjectMetadata = new ObjectMetadata(),
                Content = fs,
                Position = position
            };
            var result = client.AppendObject(request);
            position = result.NextAppendPosition;
            Console.WriteLine("Append object succeeded, next append position:{0}", position);
        }
    }
    catch (Exception ex)
    {
        Console.WriteLine("Append object failed, {0}", ex.Message);
    }
}
```

For complete code, see GitHub. For more information, see Append object.

Multipart upload

In addition to the PutObject interface, OSS provides the Multipart Upload mode for object upload.

You can apply the Multipart Upload mode in scenarios including (but not limited to) the following:

  • Resumable upload is required.

  • The object to be uploaded is larger than 100 MB.

  • Network conditions are poor, and the connection to the OSS server is frequently interrupted.

  • The size of the object to be uploaded is unknown.

Multipart upload is described step by step below.
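The steps described in the following sections (initialize, upload parts, complete) fit together as outlined below. This is only a sketch: it assumes client, bucketName, and key are defined, and it elides the per-part upload loop shown later in this topic.

```csharp
// Sketch: skeleton of a multipart upload.
// 1. Initialize the task and obtain the UploadId that identifies it.
var init = client.InitiateMultipartUpload(new InitiateMultipartUploadRequest(bucketName, key));
var uploadId = init.UploadId;

// 2. Upload the parts (PartNumber 1..10000), keeping each returned PartETag.
var partETags = new List<PartETag>();
// ... one UploadPart call per part, as shown in "Upload parts from a local location" ...

// 3. Combine the uploaded parts into the final object.
var completeRequest = new CompleteMultipartUploadRequest(bucketName, key, uploadId);
foreach (var partETag in partETags)
{
    completeRequest.PartETags.Add(partETag);
}
client.CompleteMultipartUpload(completeRequest);
```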

Complete multipart upload

Initialize Multipart Upload

The following code uses the InitiateMultipartUpload method to initialize a multipart upload task:

```csharp
using System;
using Aliyun.OSS;

// Initialize an OssClient
var client = new OssClient(endpoint, accessKeyId, accessKeySecret);
try
{
    string bucketName = "your-bucket-name";
    string key = "your-key";
    // Start the multipart upload
    var request = new InitiateMultipartUploadRequest(bucketName, key);
    var result = client.InitiateMultipartUpload(request);
    // Print the UploadId
    Console.WriteLine("Init multi part upload succeeded");
    Console.WriteLine("Upload Id:{0}", result.UploadId);
}
catch (Exception ex)
{
    Console.WriteLine("Init multi part upload failed, {0}", ex.Message);
}
```

For complete code, see GitHub.

Note:

  • InitiateMultipartUploadRequest specifies the bucket and the name of the object to be uploaded.

  • In InitiateMultipartUploadRequest you can set ObjectMetadata, but you do not need to specify ContentLength.

  • The result returned by InitiateMultipartUpload includes the UploadId, which uniquely identifies the multipart upload task and is used in subsequent operations.

Upload parts from a local location

The following code uploads a local file through multipart upload. Assume that a file named “object.zip” exists in the local path “/path/to/” and needs to be uploaded to OSS through multipart upload.

```csharp
using System;
using System.IO;
using System.Collections.Generic;
using Aliyun.OSS;

// Initialize an OssClient
var client = new OssClient(endpoint, accessKeyId, accessKeySecret);
// Calculate the number of parts
var fi = new FileInfo(fileToUpload);
var fileSize = fi.Length;
var partCount = fileSize / partSize;
if (fileSize % partSize != 0)
{
    partCount++;
}
// Start to upload the parts
try
{
    var partETags = new List<PartETag>();
    using (var fs = File.Open(fileToUpload, FileMode.Open))
    {
        for (var i = 0; i < partCount; i++)
        {
            var skipBytes = (long)partSize * i;
            // Position the stream at the start of this part
            fs.Seek(skipBytes, SeekOrigin.Begin);
            // Calculate the size of this part. The last part is the remainder of
            // the object; all other parts have the size given by partSize.
            var size = (partSize < fileSize - skipBytes) ? partSize : (fileSize - skipBytes);
            var request = new UploadPartRequest(bucketName, key, uploadId)
            {
                InputStream = fs,
                PartSize = size,
                PartNumber = i + 1
            };
            // UploadPart returns a result that contains the ETag of the uploaded part
            var result = client.UploadPart(request);
            partETags.Add(result.PartETag);
        }
        Console.WriteLine("Put multi part upload succeeded");
    }
}
catch (Exception ex)
{
    Console.WriteLine("Put multi part upload failed, {0}", ex.Message);
}
```

For complete code, see GitHub

Note:
When you use the preceding code to upload each part through UploadPart, consider the following points:

  • The UploadPart method requires every part except the last one to be larger than 100 KB. However, the UploadPart interface does not check the part size immediately, because it cannot know whether the current part is the last one. It checks the part sizes only when the multipart upload is completed.

  • OSS sets the ETag header to the MD5 value of the part data received by the server and returns it to the user.

  • To make sure that the data transmitted over the network is free of errors, the SDK automatically sets Content-MD5. OSS calculates the MD5 value of the uploaded data and compares it with the MD5 value calculated by the SDK. If they are inconsistent, OSS returns the InvalidDigest error code.

  • Part numbers range from 1 to 10,000. If a part number is outside this range, OSS returns the InvalidArgument error code.

  • Before each part is uploaded, the stream must be positioned at the start of that part.

  • After each part is uploaded, the result returned by OSS contains a PartETag, which combines the ETag and PartNumber of the uploaded part.

  • You must save the PartETags, because they are required to complete the multipart upload. Generally, these PartETag objects are saved in a List.

Complete multipart upload

The following code completes a multipart upload task:

```csharp
using System;
using Aliyun.OSS;

// Initialize an OssClient
var client = new OssClient(endpoint, accessKeyId, accessKeySecret);
try
{
    var completeMultipartUploadRequest = new CompleteMultipartUploadRequest(bucketName, key, uploadId);
    foreach (var partETag in partETags)
    {
        completeMultipartUploadRequest.PartETags.Add(partETag);
    }
    var result = client.CompleteMultipartUpload(completeMultipartUploadRequest);
    Console.WriteLine("complete multi part succeeded");
}
catch (Exception ex)
{
    Console.WriteLine("complete multi part failed, {0}", ex.Message);
}
```

For complete code, see GitHub

Note:

  • In the preceding code, the PartETags are saved to the PartETags list during multipart upload. Once OSS receives the PartETags list submitted by the user, it checks the validity of each data part.

  • Once all data parts are verified, OSS combines these parts into a complete object.

Cancel a multipart upload task

The following code uses the AbortMultipartUpload method to cancel a multipart upload task.

```csharp
using System;
using Aliyun.OSS;

// Initialize an OssClient
var client = new OssClient(endpoint, accessKeyId, accessKeySecret);
try
{
    var request = new AbortMultipartUploadRequest(bucketName, key, uploadId);
    client.AbortMultipartUpload(request);
    Console.WriteLine("Abort multi part succeeded");
}
catch (Exception ex)
{
    Console.WriteLine("Abort multi part failed, {0}", ex.Message);
}
```

For complete code, see GitHub

Obtain all multipart upload tasks in a bucket

The following code uses the ListMultipartUploads method to retrieve all multipart upload tasks in a bucket:

```csharp
using System;
using Aliyun.OSS;

// Initialize an OssClient
var client = new OssClient(endpoint, accessKeyId, accessKeySecret);
try
{
    // List the multipart upload tasks in the bucket
    var request = new ListMultipartUploadsRequest(bucketName);
    var multipartUploadListing = client.ListMultipartUploads(request);
    Console.WriteLine("List multi part succeeded");
    // Print information about each task
    var multipartUploads = multipartUploadListing.MultipartUploads;
    foreach (var mu in multipartUploads)
    {
        Console.WriteLine("Key:{0}, UploadId:{1}", mu.Key, mu.UploadId);
    }
    var commonPrefixes = multipartUploadListing.CommonPrefixes;
    foreach (var prefix in commonPrefixes)
    {
        Console.WriteLine("Prefix:{0}", prefix);
    }
}
catch (Exception ex)
{
    Console.WriteLine("List multi part uploads failed, {0}", ex.Message);
}
```

For complete code, see GitHub

Note:

  • By default, if a bucket contains more than 1,000 multipart upload tasks, only the first 1,000 are returned and the IsTruncated parameter in the result is true. The NextKeyMarker and NextUploadIdMarker values are returned to serve as the start point of the next read.

  • To retrieve the remaining multipart upload tasks, you can increase the MaxUploads parameter or call the interface repeatedly, passing the KeyMarker and UploadIdMarker parameters.
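The pagination described above can be sketched as a loop that follows the markers until the listing is no longer truncated (a sketch; assumes client and bucketName are defined):

```csharp
// Page through all multipart upload tasks in the bucket (sketch)
var request = new ListMultipartUploadsRequest(bucketName);
MultipartUploadListing listing;
do
{
    listing = client.ListMultipartUploads(request);
    foreach (var mu in listing.MultipartUploads)
    {
        Console.WriteLine("Key:{0}, UploadId:{1}", mu.Key, mu.UploadId);
    }
    // Continue from where this page ended
    request.KeyMarker = listing.NextKeyMarker;
    request.UploadIdMarker = listing.NextUploadIdMarker;
} while (listing.IsTruncated);
```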

Obtain information of all the uploaded parts

The following code uses the ListParts method to obtain information of all uploaded parts of a multipart upload event:

```csharp
using System;
using Aliyun.OSS;

// Initialize an OssClient
var client = new OssClient(endpoint, accessKeyId, accessKeySecret);
try
{
    var listPartsRequest = new ListPartsRequest(bucketName, key, uploadId);
    var listPartsResult = client.ListParts(listPartsRequest);
    Console.WriteLine("List parts succeeded");
    // Traverse all uploaded parts
    var parts = listPartsResult.Parts;
    foreach (var part in parts)
    {
        Console.WriteLine("partNumber:{0}, ETag:{1}, Size:{2}", part.PartNumber, part.ETag, part.Size);
    }
}
catch (Exception ex)
{
    Console.WriteLine("List parts failed, {0}", ex.Message);
}
```

For complete code, see GitHub

Note:

  • By default, if a multipart upload task contains more than 1,000 uploaded parts, only the information of the first 1,000 parts is returned and the IsTruncated parameter in the result is true. The NextPartNumberMarker value is returned to serve as the start point of the next read.

  • To retrieve the remaining parts, you can increase the MaxParts parameter or call the interface repeatedly, passing the PartNumberMarker parameter.

Resumable upload

In addition to multipart upload, OSS provides resumable upload, which resumes an interrupted upload task from the failed part instead of starting over. This speeds up the upload process.

```csharp
using System;
using Aliyun.OSS;

// Initialize an OssClient
var client = new OssClient(endpoint, accessKeyId, accessKeySecret);

/// <summary>
/// Upload a file with resumable upload.
/// </summary>
/// <param name="bucketName">Name of the specified bucket. </param>
/// <param name="key">Name of the object to save the data to on OSS. </param>
/// <param name="fileToUpload">Path of the local file to be uploaded. </param>
/// <param name="checkpointDir">Directory that stores the intermediate state of the resumable upload. Resumable upload takes effect only when this directory is specified; otherwise, the object is uploaded all over again. </param>
public static void ResumableUploadObject(string bucketName, string key, string fileToUpload, string checkpointDir)
{
    try
    {
        client.ResumableUploadObject(bucketName, key, fileToUpload, null, checkpointDir);
        Console.WriteLine("Resumable upload object:{0} succeeded", key);
    }
    catch (Exception ex)
    {
        Console.WriteLine("Resumable upload object failed, {0}", ex.Message);
    }
}
```

For complete code, see GitHub

Note:

  • The checkpointDir directory stores the intermediate state information of a resumable upload. The information is used when a failed upload task is resumed.

  • If the checkpointDir directory is null, resumable upload does not take effect, and the object that previously fails to be uploaded gets uploaded all over again.
