
ApsaraDB for MongoDB:Storage analysis

Last Updated: Mar 28, 2026

As MongoDB instances grow, storage exceptions — such as near-full disks, rapid growth, and over-indexed collections — can cause service degradation before you notice them. Storage analysis in CloudDBA gives you a centralized view of storage usage for ApsaraDB for MongoDB replica set and sharded cluster instances, detects exceptions automatically, and lets you reclaim disk space through defragmentation without downtime.

Prerequisites

Before you begin, make sure that:

  • The instance is a replica set or sharded cluster running a supported version:

    Major engine version | Minor engine version | Supported
    MongoDB 4.0          | 3.0.x                | No
    MongoDB 4.2          | 4.0.0–4.0.22         | No
    MongoDB 4.2          | >= 4.0.23            | Yes
    MongoDB 4.4          | 5.0.0–5.0.6          | No
    MongoDB 4.4          | >= 5.0.7             | Yes
    MongoDB 5.0          | All minor versions   | Yes
    MongoDB 6.0          | All minor versions   | Yes
    MongoDB 7.0          | All minor versions   | Yes
    MongoDB 8.0          | All minor versions   | Yes
  • If you are a Resource Access Management (RAM) user, your account has the AliyunHDMFullAccess or AliyunHDMReadOnlyAccess permission on Database Autonomy Service (DAS). For details, see How do I use DAS as a RAM user?
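The version matrix above can be encoded as a quick gate. This is an illustrative sketch, not part of any SDK; the function name and the version parsing are assumptions, while the thresholds come straight from the table:

```python
# Hypothetical helper mirroring the support matrix above.

def storage_analysis_supported(major: str, minor: str) -> bool:
    """Return True if CloudDBA storage analysis supports this engine version.

    major: engine version, e.g. "4.2"; minor: minor version, e.g. "4.0.23".
    """
    def as_tuple(v: str) -> tuple:
        return tuple(int(p) for p in v.split("."))

    if major == "4.0":
        return False                        # MongoDB 4.0 is never supported
    if major == "4.2":
        return as_tuple(minor) >= (4, 0, 23)
    if major == "4.4":
        return as_tuple(minor) >= (5, 0, 7)
    # MongoDB 5.0 through 8.0: all minor versions are supported
    return major in {"5.0", "6.0", "7.0", "8.0"}

print(storage_analysis_supported("4.2", "4.0.22"))  # False
print(storage_analysis_supported("4.4", "5.0.7"))   # True
```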

View storage usage

  1. Log on to the ApsaraDB for MongoDB console.

  2. In the left-side navigation pane, click Replica Set Instances or Sharded Cluster Instances.

  3. In the upper-left corner of the page, select the resource group and region for the instance.

  4. Click the instance ID, or click Manage in the Actions column.

  5. In the left-side navigation pane of the instance details page, choose CloudDBA > Storage Analysis.

  6. On the Storage Overview tab, review the following sections:

    • Storage — Key metrics for overall storage health:

      Metric | Description | Action
      Exception | Number of storage exceptions detected. An exception is raised when storage usage exceeds 90%, available storage may be exhausted within seven days, or a single collection has more than 10 indexes. | Address exceptions promptly. Unresolved storage exhaustion can block writes and degrade instance performance.
      Avg Daily Increase in Last Week | Average daily storage growth over the past seven days. Formula: (Available storage seven days ago − Available storage at collection time) / 7. Assumes stable traffic; values are less accurate after bulk imports, data deletions, instance migrations, or instance rebuilds. | Use this to project capacity needs and plan storage upgrades before disk space runs out.
      Available Days of Storage | Estimated days until storage is exhausted. Formula: Available storage / Average daily increase in the last seven days. A value of 90+ means storage is sufficient for the foreseeable future. Assumes stable traffic. | If the value is low (for example, under 14 days), expand storage or archive data to free space.
      Used Storage | Amount of storage used and total storage capacity allocated to the instance. | Monitor this alongside Exception to track overall storage health.
    • Exceptions — Details of any detected storage exceptions. Use this information to plan corrective actions.

    • Storage Trend — Storage usage trend over the past seven days.

    • Tablespaces — Storage breakdown by collection. Click a collection name to view its indexes.

  7. To view storage by data space, click the Data Space tab. Click a data space name to view its namespace details, or click a collection name to view its indexes.

Storage analysis can analyze up to 20,000 collections. If no results appear, check whether the collection count exceeds this limit or whether the current account has permission to access the collections. To re-authorize, see Grant permissions for storage analysis.
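The metric formulas and exception thresholds described above can be sketched in a few lines. The thresholds (90% usage, seven-day exhaustion window, 10 indexes per collection) come from the table; all function and variable names are illustrative:

```python
# A sketch of the Storage Overview formulas; names are illustrative.

GIB = 1024 ** 3

def avg_daily_increase(avail_now: int, avail_7d_ago: int) -> float:
    """Average daily storage growth over the past week, in bytes/day."""
    return (avail_7d_ago - avail_now) / 7

def available_days(avail_now: int, daily_increase: float) -> float:
    """Estimated days until storage is exhausted (assumes stable traffic)."""
    return float("inf") if daily_increase <= 0 else avail_now / daily_increase

def has_storage_exception(used: int, total: int,
                          days_left: float, max_indexes: int) -> bool:
    """True when any of the documented exception conditions holds."""
    return (used / total > 0.90      # storage usage above 90%
            or days_left < 7         # may be exhausted within seven days
            or max_indexes > 10)     # a collection with more than 10 indexes

avail_now, avail_7d = 120 * GIB, 155 * GIB
inc = avg_daily_increase(avail_now, avail_7d)   # 5 GiB/day of growth
print(available_days(avail_now, inc))           # 24.0 days left
```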

Defragment disks

Over time, delete and update operations leave fragmented space in collections. Defragmenting reclaims this space and restores disk efficiency.

Important

Defragmentation runs only on hidden nodes. To defragment a primary or secondary node, either switch it to the hidden node role first, or run the compact command directly. For primary/secondary switchover steps, see Primary/Secondary failover. To run compact manually, see Defragment the disks of an instance to increase disk utilization.

Choose a defragmentation method

Use the following table to decide between automatic and manual defragmentation:

Scenario | Recommended method
Many collections with small recyclable space (each below 100 GB) | Automatic defragmentation
A single collection with large recyclable space | Manual defragmentation
Recyclable space exceeds 100 GB for a single collection | Manual defragmentation (required; automatic defragmentation skips these collections)
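The one hard rule in the table, the 100 GB cutoff that automatic defragmentation cannot handle, can be expressed as a small chooser. This sketch encodes only that documented limit; the function name is invented, and judgment calls such as "many collections" versus "a single collection" are left to the operator:

```python
# Illustrative chooser for the documented 100 GB automatic-defrag limit.

GB = 10 ** 9

def defrag_method(recyclable_bytes_per_collection: list) -> str:
    """Suggest a method given each collection's recyclable space in bytes."""
    if any(r > 100 * GB for r in recyclable_bytes_per_collection):
        # Automatic defragmentation skips collections over 100 GB,
        # so at least one collection must be compacted manually.
        return "manual"
    return "automatic"

print(defrag_method([30 * GB, 80 * GB]))  # automatic
print(defrag_method([150 * GB]))          # manual
```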

Set up automatic defragmentation

Configure a defragmentation plan to have DAS automatically reclaim fragmented space during the maintenance window.

DAS runs automatic defragmentation on collections that meet all of the following conditions:

  • The sum of index and data storage exceeds 1 GB

  • The fragmentation rate exceeds 20%

  • The total recyclable space across all eligible collections on the hidden node does not exceed 100 GB per round
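The three conditions above can be sketched as a per-collection predicate plus a per-round planner. The greedy 100 GB cap is one interpretation of the per-round limit on the hidden node; the names, tuple layout, and selection order are assumptions:

```python
# A sketch of the automatic-defragmentation eligibility rules above.

GB = 10 ** 9

def eligible(data_bytes: int, index_bytes: int, frag_rate: float) -> bool:
    """A collection qualifies when both per-collection conditions hold."""
    return (data_bytes + index_bytes > 1 * GB  # data + index storage > 1 GB
            and frag_rate > 0.20)              # fragmentation rate > 20%

def plan_round(collections):
    """Pick eligible collections, capping total recyclable space at
    100 GB per round (the documented per-round limit)."""
    total, picked = 0, []
    for name, data_b, idx_b, rate, recyclable in collections:
        if eligible(data_b, idx_b, rate) and total + recyclable <= 100 * GB:
            picked.append(name)
            total += recyclable
    return picked

cols = [
    ("orders", 2 * GB, 1 * GB, 0.35, 40 * GB),          # eligible, fits
    ("logs", int(0.5 * GB), int(0.2 * GB), 0.50, 10 * GB),  # too small
    ("events", 5 * GB, 1 * GB, 0.25, 70 * GB),          # would exceed 100 GB
]
print(plan_round(cols))  # ['orders']
```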

To configure automatic defragmentation:

  1. In the Tablespaces section of the Storage Analysis page, find the target collection and click Recycle in the Fragmentation Rate column.

  2. In the Recycle dialog, configure the defragmentation plan settings, including the maintenance window.

  3. Click OK.

Defragment disks manually

  1. Log on to the ApsaraDB for MongoDB console.

  2. In the left-side navigation pane, click Replica Set Instances or Sharded Cluster Instances.

  3. In the upper-left corner of the page, select the resource group and region for the instance.

  4. Click the instance ID, or click Manage in the Actions column.

  5. In the left-side navigation pane of the instance details page, choose CloudDBA > Storage Analysis.

  6. In the Tablespaces section, find the collection to defragment and click Recycle in the Fragmentation Rate column.

  7. In the Recycle dialog, go to the Collection with High Fragmentation Rate section, find the collection, and click Recycle in the Actions column.

  8. Select Execute Now or Run in the O&M Window, then confirm.

Note

  • Defragmentation does not take effect immediately. The compact command continues running in the background; the actual reclaim time depends on the size of the data being defragmented.

  • Defragment up to 10 collections concurrently. Wait for the current batch to finish before starting the next; starting a new batch while the previous one is still running may cause tasks to fail.

  • If the fragmentation rate is low, the amount of space reclaimed may be small.

  • If a collection's recyclable space exceeds 100 GB, defragmentation may take more than one hour.
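The 10-collection concurrency limit amounts to splitting the work into sequential batches, for example:

```python
# A small helper for the 10-collection limit noted above: submit one batch,
# wait for it to finish, then submit the next. The batch size comes from the
# documentation; the loop body stands in for the console or API actions.

def batches(collections, size=10):
    """Split the collections to defragment into sequential batches."""
    for i in range(0, len(collections), size):
        yield collections[i:i + size]

names = [f"coll{n}" for n in range(23)]
for batch in batches(names):
    # Run this batch to completion before starting the next one.
    print(len(batch))  # 10, 10, 3
```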

Verify defragmentation results

After the defragmentation task completes, click Re-analyze to refresh the storage analysis results.

Defragmentation effectiveness depends on data distribution. If the reclaimed space is less than expected, run the defragmentation task again.

Grant permissions for storage analysis

If no storage analysis results appear, the account may not have the required permissions on the collections.

To re-authorize the account, choose one of the following methods:

  • Using an existing account:

    1. At the top of the Storage Analysis page, click Re-authorize.

    2. Enter the username and password in the Database Account and Password fields.

    3. Click OK.

  • Using a generated authorization command:

    1. At the top of the Storage Analysis page, click Re-authorize.

    2. Enter the username and password in the Database Account and Password fields.

    3. Click Generate Authorization Command.

    4. Click OK.

FAQ

What does the (Interrupted) Compaction interrupted on table:*** due to cache eviction pressure error mean?

The instance whose disks you want to defragment runs an earlier engine version on low specifications. When the compact command runs on such an instance, the large amount of cached data creates heavy cache eviction pressure, which interrupts the operation.

Select a different time to retry the task. If it fails repeatedly, submit a ticket for support.

API reference

Operation | Description
CreateStorageAnalysisTask | Creates a storage analysis task to query the usage details of one or more databases and collections.
GetStorageAnalysisResult | Queries the status and results of a storage analysis task.
