Manage objects

Last Updated: Jun 01, 2017

You can use the Python SDK to list, delete and copy objects, view object information, and change object metadata.

List objects

The Python SDK provides a series of iterators for listing objects and multipart uploads.

Simple list

The following code lists up to 10 objects in a bucket:

  # -*- coding: utf-8 -*-
  import oss2
  from itertools import islice

  auth = oss2.Auth('Your AccessKeyID', 'Your AccessKeySecret')
  bucket = oss2.Bucket(auth, 'Your endpoint', 'your bucket name')

  for b in islice(oss2.ObjectIterator(bucket), 10):
      print(b.key)

List objects by prefix

The following code lists only the objects whose names start with the prefix 'img-':

  for obj in oss2.ObjectIterator(bucket, prefix='img-'):
      print(obj.key)

Simulate a folder

A bucket on the OSS is flat-structured, as the OSS has no real folders or directories. You can append "/" to an object name to simulate a folder. When listing objects, set the delimiter parameter (the directory delimiter) to "/", and the OSS reports each simulated folder as a "CommonPrefix". For more information, refer to Folder Simulation.

The following code lists all the content under a “root directory”:

  for obj in oss2.ObjectIterator(bucket, delimiter='/'):
      if obj.is_prefix():  # Folder
          print('directory: ' + obj.key)
      else:  # Object
          print('file: ' + obj.key)

Note: Folder simulation makes listing less efficient and is not recommended.
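The grouping behavior behind the delimiter parameter can be illustrated locally. The helper below is a pure-Python sketch (`list_with_delimiter` is a hypothetical name, not part of the SDK) of how keys are split into CommonPrefixes ("folders") and plain objects:

```python
def list_with_delimiter(keys, prefix='', delimiter='/'):
    # Group keys the way OSS does: a key containing the delimiter after
    # the prefix contributes a CommonPrefix; otherwise it is an object.
    dirs, files = set(), []
    for key in keys:
        if not key.startswith(prefix):
            continue
        rest = key[len(prefix):]
        idx = rest.find(delimiter)
        if idx >= 0:
            dirs.add(prefix + rest[:idx + 1])
        else:
            files.append(key)
    return sorted(dirs), files

keys = ['a.txt', 'img/1.jpg', 'img/2.jpg', 'doc/readme.md']
print(list_with_delimiter(keys))
```

Here 'img/1.jpg' and 'img/2.jpg' collapse into the single CommonPrefix 'img/', which is why a listing with delimiter='/' sees one "folder" rather than two objects.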

Identify whether an object exists

You can use object_exists to determine whether an object exists. It returns True if the object exists and False if it does not.

  exist = bucket.object_exists('remote.txt')
  if exist:
      print('object exists')
  else:
      print('object does not exist')

Delete an object

The following code deletes a single object:

  bucket.delete_object('remote.txt')

You can also delete multiple objects at a time (up to 1,000 per request). The following code deletes three objects and prints the names of the successfully deleted objects:

  result = bucket.batch_delete_objects(['a.txt', 'b.txt', 'c.txt'])
  print('\n'.join(result.deleted_keys))
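Because batch_delete_objects accepts at most 1,000 keys per call, a longer key list must be split into batches first. A minimal sketch of the splitting logic (the `chunks` helper is illustrative, not part of the SDK; the actual delete call is commented out because it needs a live bucket):

```python
def chunks(keys, size=1000):
    # Split a key list into consecutive batches of at most `size` keys.
    for i in range(0, len(keys), size):
        yield keys[i:i + size]

keys = ['obj-%d.txt' % n for n in range(2500)]
batches = list(chunks(keys))
for batch in batches:
    # bucket.batch_delete_objects(batch)  # one request per batch of <= 1,000 keys
    pass
print([len(b) for b in batches])
```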

Copy an object

The following code copies source.txt from the bucket named src-bucket to the target.txt object in the current bucket:

  bucket.copy_object('src-bucket', 'source.txt', 'target.txt')

Copy a large object

When an object is large, multipart copy is recommended to avoid timeout due to a large object size. Similar to multipart upload, multipart copy is completed in three steps:

  1. Initialization (Bucket.init_multipart_upload): Get an Upload ID
  2. Copy parts (Bucket.upload_part_copy): Each part of the source object is copied as a part of the target object
  3. Complete multipart copy (Bucket.complete_multipart_copy): Multipart copy is completed to generate the target object

An example is provided below:

  from oss2.models import PartInfo
  from oss2 import determine_part_size

  src_key = 'remote.txt'
  dst_key = 'remote-dst.txt'

  bucket.put_object(src_key, 'a' * (1024 * 1024 + 100))
  total_size = bucket.head_object(src_key).content_length
  part_size = determine_part_size(total_size, preferred_size=100 * 1024)

  # Initialize the multipart upload
  upload_id = bucket.init_multipart_upload(dst_key).upload_id
  parts = []

  # Copy the parts one by one
  part_number = 1
  offset = 0
  while offset < total_size:
      num_to_upload = min(part_size, total_size - offset)
      byte_range = (offset, offset + num_to_upload - 1)
      result = bucket.upload_part_copy(bucket.bucket_name, src_key, byte_range,
                                       dst_key, upload_id, part_number)
      parts.append(PartInfo(part_number, result.etag))
      offset += num_to_upload
      part_number += 1

  # Complete the multipart upload
  bucket.complete_multipart_upload(dst_key, upload_id, parts)
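The byte ranges produced by the copy loop can be checked in isolation. The following pure-Python sketch (`part_ranges` is a hypothetical helper, not part of the SDK) reproduces the same offset arithmetic without any network calls:

```python
def part_ranges(total_size, part_size):
    # Compute the inclusive (start, end) byte ranges passed to upload_part_copy.
    ranges = []
    offset = 0
    while offset < total_size:
        n = min(part_size, total_size - offset)
        ranges.append((offset, offset + n - 1))
        offset += n
    return ranges

# Same sizes as the example above: a 1 MB + 100 B object split into 100 KB parts.
ranges = part_ranges(1024 * 1024 + 100, 100 * 1024)
print(len(ranges), ranges[0], ranges[-1])
```

The ranges are contiguous and inclusive, and the last range ends exactly at total_size - 1, so every byte of the source object is copied exactly once.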

Change the object metadata

The following code changes the custom object metadata:

  bucket.update_object_meta('story.txt', {'x-oss-meta-author': 'O. Henry'})
  bucket.update_object_meta('story.txt', {'x-oss-meta-price': '100 dollar'})

Each call to update_object_meta overwrites all previous custom object metadata (the HTTP headers prefixed with "x-oss-meta-").

In the example above, the second call therefore deletes the custom metadata entry "x-oss-meta-author".
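If you need to keep the earlier entries, one option is to read the current metadata first and merge it with the new keys before calling update_object_meta. A minimal sketch of the merge step (the dictionaries are stand-ins; in practice the existing values would come from the headers of bucket.head_object, and the final call is commented out because it needs a live bucket):

```python
# Existing custom metadata, e.g. read back from bucket.head_object(key).headers
existing = {'x-oss-meta-author': 'O. Henry'}
# New entries to add without losing the old ones
new = {'x-oss-meta-price': '100 dollar'}

# Merge: new keys win on conflict, old keys are preserved
merged = {**existing, **new}
# bucket.update_object_meta('story.txt', merged)  # single call with the full set
print(merged)
```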

Other information such as Content-Type can also be changed:

  bucket.update_object_meta('story.txt', {'Content-Type': 'text/plain'})

Note that this call not only changes the Content-Type but also clears the previously set custom object metadata.

View the object access permission

  print(bucket.get_object_acl('story.txt').acl)

An object can have one of four ACLs: default (the default value; the object follows the bucket's ACL), private (private read/write), public-read (public read, private write), and public-read-write (public read/write). More detailed descriptions can be found at Object-level Permissions.

Set object access permissions

Set the object access permission to public-read:

  bucket.put_object_acl('story.txt', oss2.OBJECT_ACL_PUBLIC_READ)

The four ACLs (default, private, public-read, and public-read-write) map to oss2.OBJECT_ACL_DEFAULT, oss2.OBJECT_ACL_PRIVATE, oss2.OBJECT_ACL_PUBLIC_READ, and oss2.OBJECT_ACL_PUBLIC_READ_WRITE, respectively.
