
Object Storage Service:Copy objects (Node.js SDK)

Last Updated: Mar 20, 2026

Use the OSS Node.js SDK to copy an object within the same bucket or across buckets in the same region.

Prerequisites

Before you begin, make sure that you have:

  • An OSS bucket with read permissions on the source object and read/write permissions on the destination bucket

  • The OSS_ACCESS_KEY_ID and OSS_ACCESS_KEY_SECRET environment variables configured

  • No retention policies on the source or destination bucket (objects under a retention policy return the error The object you specified is immutable.)

The source and destination buckets must be in the same region. For example, you cannot copy an object from a bucket in the China (Hangzhou) region to a bucket in the China (Qingdao) region. In this topic, the public endpoint of the China (Hangzhou) region is used. To access OSS from other Alibaba Cloud services in the same region, use an internal endpoint. For details about supported regions and endpoints, see Regions and endpoints. If you want to create an OSSClient instance by using custom domain names or Security Token Service (STS), see Configuration examples for common scenarios.
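As a rough sketch of how region IDs map to endpoint host names (the helper name ossEndpoint is illustrative; the ali-oss client derives the endpoint itself from its region option, and its internal option selects the internal form):

```javascript
// Illustrative only: how an OSS region ID maps to public and internal
// endpoint host names. The ali-oss client builds these for you from
// its `region` option; passing `internal: true` selects the internal
// form, which is reachable only from services in the same region.
function ossEndpoint(region, { internal = false } = {}) {
  return internal
    ? `${region}-internal.aliyuncs.com` // internal endpoint
    : `${region}.aliyuncs.com`;         // public endpoint
}

console.log(ossEndpoint('oss-cn-hangzhou'));
console.log(ossEndpoint('oss-cn-hangzhou', { internal: true }));
```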

Permissions

By default, an Alibaba Cloud account has full permissions. RAM users and RAM roles have no permissions by default. Grant the following permissions by using a RAM policy or a bucket policy.

API          Action                        Required when
CopyObject   oss:GetObject                 Always required
             oss:PutObject                 Always required
             oss:GetObjectVersion          Specifying a source object version via versionId
             oss:GetObjectTagging          Copying object tags via x-oss-tagging
             oss:PutObjectTagging          Copying object tags via x-oss-tagging
             oss:GetObjectVersionTagging   Specifying tags of a versioned source object via versionId
             kms:GenerateDataKey           Destination metadata includes X-Oss-Server-Side-Encryption: KMS
             kms:Decrypt                   Destination metadata includes X-Oss-Server-Side-Encryption: KMS
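For a plain copy, a minimal RAM policy needs only the two always-required actions. A sketch (the bucket names are placeholders; add the versioning, tagging, or KMS actions from the table if your copy uses those features):

```json
{
  "Version": "1",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["oss:GetObject"],
      "Resource": ["acs:oss:*:*:srcexamplebucket/*"]
    },
    {
      "Effect": "Allow",
      "Action": ["oss:PutObject"],
      "Resource": ["acs:oss:*:*:destexamplebucket/*"]
    }
  ]
}
```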

Choose a copy method

Object size   Method           SDK call
Up to 1 GB    Simple copy      client.copy()
Over 1 GB     Multipart copy   client.multipartUploadCopy()
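The table's threshold can be applied programmatically by checking the source object's size with client.head() before dispatching. A sketch (chooseCopyMethod and copyBySize are hypothetical helper names; the client is assumed to be configured as in the examples below):

```javascript
// Sketch: pick the right copy call based on source size.
// The 1 GB threshold comes from the table above.
const ONE_GB = 1024 * 1024 * 1024;

function chooseCopyMethod(sizeInBytes) {
  return sizeInBytes <= ONE_GB ? 'copy' : 'multipartUploadCopy';
}

// Dispatch using an existing ali-oss client; client.head() exposes
// the source object's Content-Length response header.
async function copyBySize(client, destName, srcName) {
  const { res } = await client.head(srcName);
  const size = Number(res.headers['content-length']);
  if (chooseCopyMethod(size) === 'copy') {
    return client.copy(destName, srcName);
  }
  return client.multipartUploadCopy(destName, { sourceKey: srcName });
}
```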

Copy a small object

Use client.copy() to copy an object up to 1 GB, either within the same bucket or across buckets in the same region.

All examples read credentials from environment variables. Set up the client once and reuse it across calls.

const OSS = require('ali-oss');

const client = new OSS({
  // Region where your bucket is located, e.g. oss-cn-hangzhou
  region: '<your-region>',
  accessKeyId: process.env.OSS_ACCESS_KEY_ID,
  accessKeySecret: process.env.OSS_ACCESS_KEY_SECRET,
  authorizationV4: true,
  bucket: '<your-bucket>',
});

Copy within the same bucket

async function copyWithinBucket() {
  try {
    const result = await client.copy(
      'destexampleobject.txt',  // Destination object name
      'srcexampleobject.txt',   // Source object name
      {
        // HTTP headers for the destination object.
        // Omit to inherit the source object's headers.
        headers: {
          'Cache-Control': 'no-cache',
          // Conditional copy: copy only if source ETag matches
          'if-match': '5B3C1A2E053D763E1B002CC607C5****',
          // Conditional copy: copy only if source ETag differs
          'if-none-match': '5B3C1A2E053D763E1B002CC607C5****',
          // Conditional copy: copy only if modified after this time
          'if-modified-since': '2021-12-09T07:01:56.000Z',
          // Conditional copy: copy only if not modified after this time
          'if-unmodified-since': '2021-12-09T07:01:56.000Z',
          // ACL for the destination object
          'x-oss-object-acl': 'private',
          // Tags for the destination object
          'x-oss-tagging': 'Tag1=1&Tag2=2',
          // Prevent overwriting an existing object with the same name
          'x-oss-forbid-overwrite': 'true',
        },
        // Custom metadata for the destination object.
        // Omit to inherit the source object's metadata.
        meta: {
          location: 'hangzhou',
          year: 2015,
          people: 'mary',
        },
      }
    );
    console.log(result);
  } catch (e) {
    console.error(e);
  }
}

copyWithinBucket();
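A conditional copy that fails its precondition surfaces as a thrown error. A sketch of interpreting the common cases (classifyCopyError and copyIfUnchanged are hypothetical helpers; the status codes follow the CopyObject API, where a failed if-match or if-unmodified-since check returns 412 and a failed if-none-match or if-modified-since check returns 304):

```javascript
// Sketch: interpret the errors a conditional copy can produce.
function classifyCopyError(e) {
  if (e.status === 412) return 'precondition failed: source did not match the conditional headers';
  if (e.status === 304) return 'not modified: source unchanged, nothing copied';
  if (e.code === 'FileAlreadyExists') return 'destination exists and x-oss-forbid-overwrite is true';
  return `unexpected error: ${e.message}`;
}

// Copy only if the source still carries the expected ETag.
async function copyIfUnchanged(client, destName, srcName, etag) {
  try {
    return await client.copy(destName, srcName, {
      headers: { 'if-match': etag },
    });
  } catch (e) {
    console.error(classifyCopyError(e));
    throw e;
  }
}
```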

Copy across buckets

Pass the source bucket name as the third argument to client.copy().

const OSS = require('ali-oss');

// Initialize the client with the destination bucket
const client = new OSS({
  region: '<your-region>',
  accessKeyId: process.env.OSS_ACCESS_KEY_ID,
  accessKeySecret: process.env.OSS_ACCESS_KEY_SECRET,
  authorizationV4: true,
  bucket: 'destexamplebucket',
});

async function copyAcrossBuckets() {
  try {
    const result = await client.copy(
      'destobject.txt',   // Destination object name
      'srcobject.txt',    // Source object name
      'srcbucket',        // Source bucket name
      {
        headers: {
          'Cache-Control': 'no-cache',
        },
        meta: {
          location: 'hangzhou',
          year: 2015,
          people: 'mary',
        },
      }
    );
    console.log(result);
  } catch (e) {
    console.error(e);
  }
}

copyAcrossBuckets();
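OSS has no server-side move operation, so a cross-bucket "move" is usually a copy followed by deleting the source. A sketch (moveAcrossBuckets is a hypothetical helper that takes one client per bucket):

```javascript
// Sketch: "move" an object by copying it, then deleting the source.
// destClient is bound to the destination bucket, srcClient to the
// source bucket; the delete runs only after the copy succeeds.
async function moveAcrossBuckets(destClient, srcClient, destName, srcName, srcBucket) {
  await destClient.copy(destName, srcName, srcBucket); // server-side copy
  await srcClient.delete(srcName);                     // remove the original
}
```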

Copy a large object

For objects larger than 1 GB, use client.multipartUploadCopy(). This method splits the source object into parts and copies each part in parallel, then assembles them into a single destination object.

The example below shows three common patterns: a basic copy, a copy with progress tracking, and a resumed copy from a checkpoint.

const OSS = require('ali-oss');

// Initialize the client with the destination bucket
const client = new OSS({
  region: '<your-region>',
  accessKeyId: process.env.OSS_ACCESS_KEY_ID,
  accessKeySecret: process.env.OSS_ACCESS_KEY_SECRET,
  authorizationV4: true,
  bucket: 'destexamplebucket',
});

async function copyLargeObject() {
  // Conditional copy headers applied to each source part
  const copyheaders = {
    // Copy only if source ETag matches; otherwise returns 412 PreconditionFailed
    'x-oss-copy-source-if-match': '5B3C1A2E053D763E1B002CC607C5****',
    // Copy only if source ETag differs; otherwise returns 304 NotModified
    'x-oss-copy-source-if-none-match': '5B3C1A2E053D763E1B002CC607C5****',
    // Copy only if source has not been modified since this time; otherwise returns 412 PreconditionFailed
    'x-oss-copy-source-if-unmodified-since': '2022-12-09T07:01:56.000Z',
    // Copy only if source has been modified since this time; otherwise returns 304 NotModified
    'x-oss-copy-source-if-modified-since': '2022-12-09T07:01:56.000Z',
  };

  // HTTP headers for the destination object
  const headers = {
    'Cache-Control': 'no-cache',
    // Suggested file name when the destination object is downloaded
    'Content-Disposition': 'attachment; filename="example.txt"',
    Expires: '1000',
  };

  let savedCpt;

  try {
    // Pattern 1: Basic multipart copy with conditional headers.
    // Conditional-copy headers go in the options (third) argument.
    const r1 = await client.multipartUploadCopy(
      'destexampleobject1.txt',
      {
        sourceKey: 'srcexampleobject.txt',
        sourceBucketName: 'sourcebucket',
      },
      {
        copyheaders,
      }
    );
    console.log(r1);

    // Pattern 2: Copy with parallel parts, custom part size, and progress tracking
    const r2 = await client.multipartUploadCopy(
      'destexampleobject2.txt',
      {
        sourceKey: 'srcexampleobject.txt',
        sourceBucketName: 'sourcebucket',
      },
      {
        parallel: 4,            // Number of parts uploaded concurrently
        partSize: 1024 * 1024,  // Part size in bytes (1 MiB)
        progress(p, cpt, res) {
          console.log('Progress:', p);
          savedCpt = cpt;       // Save checkpoint for resume support
          console.log('Request ID:', res.headers['x-oss-request-id']);
        },
        headers,
        copyheaders,
      }
    );
    console.log(r2);

    // Pattern 3: Resume a copy from a saved checkpoint. A checkpoint is
    // bound to a specific destination object, so resume under the same
    // name; in practice it comes from a copy that was interrupted
    // before completion.
    const r3 = await client.multipartUploadCopy(
      'destexampleobject2.txt',
      {
        sourceKey: 'srcexampleobject.txt',
        sourceBucketName: 'sourcebucket',
      },
      {
        checkpoint: savedCpt,   // Resume from where the previous copy stopped
        progress(p, cpt, res) {
          console.log('Progress:', p);
          console.log('Request ID:', res.headers['x-oss-request-id']);
        },
      }
    );
    console.log(r3);
  } catch (e) {
    console.error(e);
  }
}

copyLargeObject();

To resume a copy across process restarts, persist the checkpoint object from the progress callback to durable storage such as a local file. The checkpoint object contains all information needed to resume the multipart copy from the last completed part.

What's next