
Object Storage Service: Copy objects using the cp command

Last Updated: Nov 06, 2025

Use the cp command to copy files from a source bucket to a destination bucket in the same region, or to another directory within the same bucket.

Precautions

  • The cp command does not support copying files across accounts or regions. To copy or migrate files across accounts or regions, use Data Online Migration.

  • This command copies only complete files and does not support copying parts of files.

  • Starting with ossutil version 1.6.16, you can use ossutil as the binary name in the command line without renaming it. For versions earlier than 1.6.16, you must rename the binary to match your operating system. For more information, see Command-line tool ossutil command reference.

    Permissions

    By default, only the Alibaba Cloud account has permission to perform all API operations. To execute this command, a RAM user or RAM role must be granted the required permissions by the Alibaba Cloud account or an administrator through a RAM Policy or a Bucket Policy.

    | API Action | Description |
    | --- | --- |
    | oss:GetObject, oss:PutObject | Copy an object between buckets in the same region. |
    | oss:GetObjectVersion | Optional. This permission is required to copy a specific version of an object. |
    | oss:GetObjectTagging, oss:PutObjectTagging | Optional. These permissions are required if the copy operation involves object tagging. |
    | oss:GetObjectVersionTagging | Optional. This permission is required if the copy operation involves tags of a specific object version. |
    | kms:GenerateDataKey, kms:Decrypt | Optional. These permissions are required if the copy operation involves server-side encryption with KMS. |

    Command syntax

    ossutil cp cloud_url cloud_url [options]

    The following parameters and options are supported.

    cloud_url

    The source and destination OSS paths. The format is oss://bucketname/objectname. For example, to copy the source object `srcobject.jpg` to the destination object `destobject.jpg` in the same bucket named `examplebucket`, set the source path to oss://examplebucket/srcobject.jpg and the destination path to oss://examplebucket/destobject.jpg.

    -r, --recursive

    Performs a recursive operation. If you specify this option, ossutil performs the operation on all matching objects in the bucket. Otherwise, the operation is performed only on the specified object.

    -f, --force

    Forces the operation without a confirmation prompt.

    -u, --update

    Copies an object only if the destination object does not exist, or if the last modified time of the source object is later than that of the destination object.

    --disable-ignore-error

    Does not ignore errors during batch operations.

    --only-current-dir

    Copies only the files in the current directory. Subdirectories and the files in them are ignored.

    --bigfile-threshold

    The size threshold for resumable copy. Unit: bytes.

    Default value: 100 MB

    Valid values: 0 to 9223372036854775807

    --part-size

    The part size. Unit: bytes. By default, ossutil calculates a suitable part size based on the file size.

    Valid values: 1 to 9223372036854775807

    --checkpoint-dir

    The directory that stores the checkpoint information for a resumable copy. If a resumable copy fails, ossutil automatically creates a directory named .ossutil_checkpoint to record checkpoint information. This directory is deleted after the resumable copy is successful. If you specify this option, make sure that the specified directory can be deleted.

    --encoding-type

    The encoding type of the file name. Set the value to url. If you do not specify this option, the file name is not encoded.

    --include

    Includes all files that meet the specified condition.

    --exclude

    Excludes all files that meet the specified condition.

    --meta

    The metadata of the file. The format is header:value#header:value. Example: Cache-Control:no-cache#Content-Encoding:gzip. For more information about metadata, see set-meta (Manage object metadata).

    --acl

    The access control list (ACL) of the file. Valid values:

    • default (default): The object inherits the ACL of the bucket.

    • private: Only the bucket owner has read and write permissions on the object. Other users cannot access the object.

    • public-read: Only the bucket owner has write permissions on the object. All other users, including anonymous users, have read permissions. This may cause data leaks and unexpected charges. We recommend that you do not grant this permission unless necessary.

    • public-read-write: All users, including anonymous users, have read and write permissions on the object. This may cause data leaks and unexpected charges. Use this permission with caution.

    --disable-crc64

    Disables 64-bit cyclic redundancy check (CRC-64) for data. By default, CRC-64 is enabled for data transfer in ossutil.

    --payer

    The payment method for the request. If you want the requester to pay for traffic and requests when accessing resources at the specified path, set this option to requester.

    -j, --jobs

    The number of concurrent tasks for batch operations. Default value: 3. Valid values: 1 to 10000.

    --parallel

    The number of concurrent tasks for a single file operation. Valid values: 1 to 10000. If you do not set this option, ossutil determines the value based on the operation type and file size.

    --version-id

    Copies a specific version of a file. This option is available only for versioning-enabled buckets.

    --start-time

    A UNIX timestamp. If you specify this option, objects last updated before this time are ignored.

    Note

    Only ossutil 1.7.18 and later support this parameter. For more information about upgrading, see update (Upgrade ossutil).

    --end-time

    A UNIX timestamp. If you specify this option, objects last updated after this time are ignored.

    Note
    • If you specify both `start-time` and `end-time`, the copy command is executed only for files that were last modified between the specified start and end times.

    • Only ossutil 1.7.18 and later support this parameter. For more information about how to upgrade the version, see update (Upgrade ossutil).

    For more information about other common options for this command, see Common options.

    If the default concurrency does not meet your performance requirements, you can tune the -j, --jobs and --parallel options to improve performance (see the example after the following list). By default, ossutil calculates the value of `parallel` based on the file size. When you transfer large files in batches, the actual concurrency is the value of `jobs` × the value of `parallel`.

    • If the resources, such as network, memory, and CPU, of the ECS instance or server that runs the command are limited, we recommend that you decrease the concurrency to less than 100. If resources are not fully utilized, you can increase the concurrency as needed.

    • A high concurrency may decrease performance or even cause EOF errors because of thread-switching overhead and resource contention. Adjust the -j, --jobs and --parallel options based on the actual resources of your machine. When you perform stress testing, we recommend that you start with a low concurrency and gradually increase it to find the optimal value.
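
    For example, to cap the overall concurrency at about 20 concurrent copy tasks (5 jobs × 4 parallel) during a batch copy, you might run a command similar to the following. The values are illustrative only; tune them for your environment.

      ossutil cp oss://examplebucket1/srcfolder1/ oss://examplebucket2/desfolder/ -r --jobs 5 --parallel 4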

    Usage examples

    The following examples are for a Linux system. Modify the parameters based on your operating system and actual environment. The examples assume the following environment:

    • Source bucket: examplebucket1

    • Source directory 1 in the source bucket: srcfolder1

    • Source directory 2 in the source bucket: srcfolder2

    • Destination bucket: examplebucket2

    • Destination directory in the destination bucket: desfolder

    Copy a single file

    Copy a file from one directory to another in the same bucket and rename the file to example.txt.

    ossutil cp oss://examplebucket1/srcfolder1/examplefile.txt oss://examplebucket1/srcfolder2/example.txt                             

    Copy multiple files in a batch

    When you copy files, if the source path does not end with a forward slash (/), all files that match the specified prefix are copied to the destination bucket. If the source path ends with a forward slash (/), only the files in the specified directory are copied to the destination bucket.

    Assume that the `srcfolder1` directory in the source bucket `examplebucket1` contains the following files:

    srcfolder1/exampleobject1.txt
    srcfolder1/exampleobject2.png
    srcfolder1/dir1/
    srcfolder1/dir1/exampleobject3.jpg
    srcfolder1/dir2/
    srcfolder1/dir2/exampleobject4.jpg
    • The source path does not end with a forward slash (/)

      ossutil cp oss://examplebucket1/srcfolder1  oss://examplebucket2 -r

      After the copy is complete, the following files are added to the destination bucket `examplebucket2`:

      srcfolder1/exampleobject1.txt
      srcfolder1/exampleobject2.png
      srcfolder1/dir1/
      srcfolder1/dir1/exampleobject3.jpg
      srcfolder1/dir2/
      srcfolder1/dir2/exampleobject4.jpg
    • The source path ends with a forward slash (/).

      ossutil cp oss://examplebucket1/srcfolder1/  oss://examplebucket2 -r

      After the copy is complete, the following files are present in the destination bucket `examplebucket2`:

      exampleobject1.txt
      exampleobject2.png
      dir1/
      dir1/exampleobject3.jpg
      dir2/
      dir2/exampleobject4.jpg
    • Copy incremental files

      During a batch copy, if you specify the --update option, ossutil copies an object only if the destination object does not exist or if the last modified time of the source object is later than that of the destination object. The command is as follows:

      ossutil cp oss://examplebucket1/srcfolder1/ oss://examplebucket2/path2/ -r --update

      You can use this option to perform an incremental copy by skipping files that were already successfully copied when you retry a failed batch copy.

    • Copy only files in the current directory and ignore subdirectories

      ossutil cp oss://examplebucket1/srcfolder1/ oss://examplebucket1/srcfolder2/ --only-current-dir -r

    Copy files within a specified time range

    Copy only the files in `srcfolder1` that were last modified between 10:09:18 on October 31, 2023 (UTC+8) and 12:55:58 on October 31, 2023 (UTC+8).

    ossutil cp -r oss://examplebucket1/srcfolder1/ oss://examplebucket2/path2/ --start-time 1698718158 --end-time 1698728158
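
    If you need to convert a local time to a UNIX timestamp, the following GNU date commands (Linux) show one way to do it. The TZ value assumes the times are expressed in UTC+8; adjust it for your time zone.

      TZ=Asia/Shanghai date -d "2023-10-31 10:09:18" +%s   # returns 1698718158
      TZ=Asia/Shanghai date -d "2023-10-31 12:55:58" +%s   # returns 1698728158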

    Copy files that meet specified conditions

    Use the --include and --exclude parameters to copy only files that meet specific conditions.

    • Copy all files that are not in JPG format.

      ossutil cp oss://examplebucket1/srcfolder1/ oss://examplebucket2/desfolder/ --exclude "*.jpg" -r
    • Copy all files whose names contain abc but are not in JPG or TXT format.

      ossutil cp oss://examplebucket1/srcfolder1/ oss://examplebucket2/desfolder/ --include "*abc*" --exclude "*.jpg" --exclude "*.txt" -r

    Copy a file and modify its metadata

    Use the --meta option to modify the object metadata. The format is header:value#header:value....

    ossutil cp oss://examplebucket1/examplefile.txt oss://examplebucket1/ --meta=Cache-Control:no-cache

    Copy a file and specify the pay-by-requester mode

    Copy a file from a source bucket to a destination bucket and specify the pay-by-requester mode.

    ossutil cp oss://examplebucket1/examplefile.txt oss://examplebucket2/desfolder/  --payer=requester

    Copy a file and change its storage class

    When you overwrite an object, you can add the --meta option to modify its storage class. The supported storage classes are:

    • Standard: Standard

    • IA: Infrequent Access

    • Archive: Archive Storage

    • ColdArchive: Cold Archive

    • DeepColdArchive: Deep Cold Archive

    For more information about storage classes, see Storage classes.

    Important

    By default, when you use the --meta option to change the storage class, the existing custom object metadata is overwritten. To retain the existing custom object metadata when you change the storage class, you must first use the x-oss-metadata-directive:COPY option to retain the metadata, and then change the storage class.

    Overwrite existing custom object metadata

    • Change the storage class of a specific file to Archive Storage

      ossutil cp oss://examplebucket1/srcfolder1/examplefile.txt oss://examplebucket1/srcfolder1/examplefile.txt --meta X-oss-Storage-Class:Archive
    • Change the storage class of all files in a specific folder to Standard

      ossutil cp oss://examplebucket1/srcfolder1/ oss://examplebucket1/srcfolder1/ --meta X-oss-Storage-Class:Standard -r

    Retain existing custom object metadata

    • Retain the metadata of a single existing file.

      ossutil cp oss://examplebucket1/srcfolder1/examplefile.txt oss://examplebucket1/srcfolder1/examplefile.txt --meta x-oss-metadata-directive:COPY
    • Retain the metadata of multiple existing files.

      ossutil cp oss://examplebucket1/srcfolder1/ oss://examplebucket1/srcfolder1/ --meta x-oss-metadata-directive:COPY -r -f 
    Important
    • When you use the cp command to change the storage class of an object, you are charged for PUT requests based on the source storage class of the object. The fees are billed to the destination bucket.

    • If you convert an object to the Infrequent Access, Archive, Cold Archive, or Deep Cold Archive storage class and the object is stored for a period shorter than the minimum storage duration, an early deletion fee is charged. For more information, see Storage fees.

    • To convert an object from Archive Storage, Cold Archive, or Deep Cold Archive to Standard or Infrequent Access using the cp command, you must first restore the object. To restore the object, use the restore (restore an object) command. After the object is restored, you can use the cp command to change its storage class. However, if real-time access of Archive objects is enabled, you can change the storage class of Archive Storage objects without restoring them.

    • When you use the cp command to change the storage class of a file larger than 100 MB, ossutil calculates a suitable part size based on the file size by default. If the automatic part size does not meet your needs, you can use the --part-size option to specify a part size. Make sure that the number of parts does not exceed 10,000.
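
      A hypothetical example that changes the storage class of a large object named largefile.zip (the object name is assumed for illustration) to Archive Storage while setting a 200 MB part size (209715200 bytes):

      ossutil cp oss://examplebucket1/srcfolder1/largefile.zip oss://examplebucket1/srcfolder1/largefile.zip --meta X-oss-Storage-Class:Archive --part-size 209715200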

    Copy a file and set tags

    When you overwrite a file, you can add the --tagging option to add or modify object tags. Separate multiple tags with ampersands (&). The command is as follows:

    ossutil cp oss://examplebucket1/examplefile.txt oss://examplebucket1/ --tagging "abc=1&bcd=2&……"

    For more information about object tagging, see object-tagging (Object tagging).

    Copy a file and configure server-side encryption

    You can specify a server-side encryption method to encrypt a file when you copy it to a bucket. For more information about server-side encryption, see Server-side encryption.

    • Copy a file and specify AES256 as the encryption method

      ossutil cp oss://examplebucket1/examplefile.txt oss://examplebucket1/srcfolder2/ --meta=x-oss-server-side-encryption:AES256
    • Copy a file and specify KMS as the encryption method

      ossutil cp oss://examplebucket1/examplefile.txt oss://examplebucket2/desfolder/ --meta=x-oss-server-side-encryption:KMS
      Important

      When you use KMS for encryption, OSS calls KMS to generate a master key for the file. This action incurs a fee for KMS API calls. For more information, see KMS billing.

    • Copy a file and specify the CMK ID for KMS encryption.

      ossutil cp oss://examplebucket1/examplefile.txt oss://examplebucket2/desfolder/ --meta=x-oss-server-side-encryption:KMS#x-oss-server-side-encryption-key-id:7bd6e2fe-cd0e-483e-acb0-f4b9e1******

    Restore a historical version of a file

    After you enable versioning for a bucket, overwritten and deleted objects are saved as historical versions. You can add the --version-id option to the cp command to restore a historical version, which overwrites the current version.

    First, use the ls --all-versions command to retrieve all version IDs of the object. Then, use the --version-id option to copy a specific version.

    Note

    The --version-id option can be used only for buckets for which versioning is enabled. For more information about the command to enable versioning for a bucket, see bucket-versioning (Versioning).
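
    For example, you can first list all versions of the source object to find the version ID that you want to copy:

    ossutil ls oss://examplebucket1/examplefile.txt --all-versions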

    ossutil cp oss://examplebucket1/examplefile.txt oss://examplebucket2/ --version-id  CAEQARiBgID8rumR2hYiIGUyOTAyZGY2MzU5MjQ5ZjlhYzQzZjNlYTAyZDE3MDRk

    Cross-account copy

    Use the -e, -i, and -k common options to copy the `srcobject.png` file from the root directory of the source bucket `examplebucket` in the China (Shanghai) region that belongs to another Alibaba Cloud account to the destination bucket `destbucket`.

    Note

    You must specify the Endpoint for the region where the bucket is located. For more information, see Regions and Endpoints.

    ossutil cp oss://examplebucket/srcobject.png  oss://destbucket  -e oss-cn-shanghai.aliyuncs.com -i yourAccessKeyID  -k yourAccessKeySecret
    Security tip: Using an AccessKey in a command line is a security risk. For automated or long-term tasks, we recommend that you create a RAM role for the source account and grant the destination account permissions to assume the RAM role for more secure cross-account access.
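
    If you follow the RAM role approach, a sketch of the same copy that uses temporary credentials might look like the following. It assumes that you have already called STS AssumeRole and obtained a temporary AccessKey pair and a security token; the placeholder values are illustrative.

    ossutil cp oss://examplebucket/srcobject.png oss://destbucket -e oss-cn-shanghai.aliyuncs.com -i yourStsAccessKeyID -k yourStsAccessKeySecret -t yourStsToken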