Over time, database operations such as inserting, updating, and deleting data cause disk fragmentation, reducing effective disk utilization. The compact command attempts to release unneeded disk space to the operating system. This topic explains how to reclaim disk space and improve storage efficiency.
Recommended: Use the console-based Storage Analysis feature for simpler operation and lower business impact.
Storage Analysis only reclaims fragmentation on Hidden members. To compact Primary or Secondary nodes, perform a failover first to convert them to Hidden members.
Storage Analysis requires MongoDB 4.2 (minor version ≥4.0.23), MongoDB 4.4 (minor version ≥5.0.7), or MongoDB 5.0 and later.
If you need manual control or direct compaction of Primary/Secondary nodes, use the compact command as described below.
Before you begin
Your instance must use the WiredTiger storage engine.
Back up your data before running compaction.
When to run compact
Database operations cause disk fragmentation over time, reducing effective utilization. Run compact in the following scenarios:
After bulk data deletion
When you delete large amounts of data, disk space remains reserved for future writes. This creates unused fragmented space.
Neither manual deletion nor automatic TTL expiration triggers compaction. You must manually reclaim disk space.
Long-running high-write workloads
Frequent insert, update, and delete operations cause fragmented space to accumulate over time.
Disk space constraints with >20% fragmentation
When disk utilization reaches 85-90% and fragmentation exceeds 20%, running compact reduces disk usage.
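As an illustration, the thresholds above can be encoded in a small helper. This is a sketch; the function name and signature are our own, not part of any MongoDB API:

```javascript
// Sketch: decide whether compact is worth running, per the thresholds above.
// diskUtilization and fragmentationRate are fractions in [0, 1].
// Both parameter names are illustrative, not part of any MongoDB API.
function shouldRunCompact(diskUtilization, fragmentationRate) {
  // Disk utilization around 85-90% combined with >20% fragmentation
  return diskUtilization >= 0.85 && fragmentationRate > 0.20;
}

console.log(shouldRunCompact(0.90, 0.25)); // true: high usage, high fragmentation
console.log(shouldRunCompact(0.90, 0.10)); // false: little reclaimable space
```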
Quick start: Compact a collection
This section provides the fastest path to release disk space.
For replica set and sharded cluster instances, run compact on Secondary nodes to minimize business impact. For the complete workflow, see Compact primary/secondary nodes.
Step 1: Connect to your instance
Connect using Mongo Shell:
Replica Set Instance (connect to Secondary node to minimize impact)
Step 2: Switch to the target database
use <database_name>
Run show dbs to view databases.
Step 3: Run the compact command
Before and after you run compact, you can run db.runCommand({collStats: "<collection_name>"}) in the target database to verify the effect of compact.
Replica set & Standalone
db.runCommand({compact:"<collection_name>"})
Run show tables to view collections.
For MongoDB 4.2 or earlier Primary nodes, add force:true:
db.runCommand({compact:"<collection_name>",force:true})
Successful execution returns:
{ "ok" : 1 }
Sharded cluster
The following example uses mongo shell syntax. For mongosh 1.x and 2.x syntax, see Compact primary/secondary nodes.
db.runCommand({runCommandOnShard:"<Shard ID>","command":{compact:"<collection_name>"},$queryOptions: {$readPreference: {mode: 'secondary'}}})
Parameters:
<Shard ID>: View in the MongoDB console > instance Basic Information > Shard List.
<collection_name>: The name of the collection to compact.
$readPreference: {mode: 'secondary'}: Routes the compact operation to Secondary nodes to avoid impacting the Primary node.
Successful execution returns:
{ "ok" : 1 }
View disk storage space
Check storage size and fragmentation rate
To evaluate whether you need compact and to verify its effect, run the following command before and after compact in the database that contains the collection:
db.runCommand({collStats: "<collection_name>"})
Key fields:
size: Total size of the uncompressed data in the collection.
storageSize: Total disk space allocated to the collection.
freeStorageSize: The amount of storage space that can be reclaimed (available in MongoDB 4.4 and later).
Fragmentation rate = freeStorageSize / storageSize
A high ratio indicates high fragmentation.
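As a sketch, the ratio can be computed from a collStats result. The field names size, storageSize, and freeStorageSize are real collStats fields (freeStorageSize requires MongoDB 4.4+); the helper function itself is illustrative:

```javascript
// Sketch: compute the fragmentation rate from a collStats-style document.
// storageSize and freeStorageSize are genuine collStats fields (bytes);
// the helper name is ours.
function fragmentationRate(collStats) {
  if (!collStats.storageSize) return 0; // avoid division by zero
  return (collStats.freeStorageSize || 0) / collStats.storageSize;
}

// Example with illustrative numbers (bytes):
const exampleStats = {
  size: 80 * 1024 * 1024,            // 80 MB of uncompressed data
  storageSize: 100 * 1024 * 1024,    // 100 MB allocated on disk
  freeStorageSize: 25 * 1024 * 1024, // 25 MB reclaimable
};
console.log(fragmentationRate(exampleStats)); // 0.25, above the 20% threshold
```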
For more details, see collStats Output.
Estimate reclaimable space
Switch to the database containing the collection, then run:
db.<collection_name>.stats().wiredTiger["block-manager"]["file bytes available for reuse"]
Example:
db.test_collection.stats().wiredTiger["block-manager"]["file bytes available for reuse"]
Sample output:
207806464
This indicates approximately 207,806,464 bytes (~198 MB) can be reclaimed.
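The raw byte count can be converted to megabytes for readability; a minimal sketch (the helper name is illustrative):

```javascript
// Sketch: convert the raw byte count returned by the block-manager stat
// into mebibytes (MiB, 1024*1024 bytes). Helper name is illustrative.
function bytesToMB(bytes) {
  return bytes / (1024 * 1024);
}

const reclaimable = 207806464; // sample output from the command above
console.log(bytesToMB(reclaimable).toFixed(1) + " MB"); // ~198 MB
```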
Compact primary/secondary nodes
Replica set & Standalone
Standalone instances have a single Primary node. Run compact directly on the Primary node during off-peak hours.
Replica set instances have Primary, Secondary, and optional ReadOnly nodes. You must run compact on all Primary and Secondary nodes. If ReadOnly nodes are present, compact them as well using the same command.
To minimize business impact, follow the recommended rolling maintenance workflow below:
Recommended workflow:
Compact the Secondary nodes first.
Perform a failover to switch Primary to Secondary.
Compact the new Secondary (former Primary).
Compact ReadOnly nodes if present.
Compact command:
db.runCommand({compact:"<collection_name>"})
For MongoDB 4.2 or earlier Primary nodes in replica sets:
db.runCommand({compact:"<collection_name>",force:true})
Example:
db.runCommand({compact:"orders"})
Sharded cluster
For sharded clusters, run compact only on Shard nodes, because mongos and config server nodes do not store user data.
ReadOnly nodes in sharded clusters do not support compact.
To reduce business impact, follow the workflow below instead of compacting Primary nodes directly.
Recommended workflow:
Compact Secondary nodes in each Shard first.
Perform a failover for each Shard.
Compact the new Secondary nodes (former Primary).
Compact Secondary nodes:
mongosh 2.x
db.runCommand({runCommandOnShard:"<Shard ID>","command":{compact:"<collection_name>"}},{readPreference: "secondary"})
mongosh 1.x
db.getMongo().setReadPref('secondary')
db.runCommand({runCommandOnShard:"<Shard ID>","command":{compact:"<collection_name>"}})
mongo shell
db.runCommand({runCommandOnShard:"<Shard ID>","command":{compact:"<collection_name>"},$queryOptions: {$readPreference: {mode: 'secondary'}}})
Parameters:
<Shard ID>: View in the MongoDB console > instance Basic Information > Shard List.
<collection_name>: The name of the collection to compact.
$readPreference: {mode: 'secondary'}: Routes the compact operation to Secondary nodes to avoid impacting the Primary node.
Compact Primary nodes (Not Recommended):
db.runCommand({runCommandOnShard:"<Shard ID>","command":{compact:"<collection_name>",force:true}})
The force parameter is required for MongoDB 4.2 or earlier.
Important considerations
Performance impact
MongoDB versions earlier than 4.4
compact locks the entire database.
All read and write operations are blocked during execution.
Extensive fragmentation may cause long execution times and significant replication lag on Hidden nodes.
Recommendation: Run during off-peak hours, adjust Oplog size, or upgrade to MongoDB 4.4+.
MongoDB 4.4 and later
compact does not block read/write operations.
Performance may still be impacted during execution.
Recommendation: Run during off-peak hours.
Node rebuild risk
The following versions may trigger automatic node rebuilds:
| Version | Risk |
| --- | --- |
| MongoDB 3.4 (all) | Node enters the RECOVERING state; a rebuild may be triggered if the duration exceeds the health check threshold. |
| MongoDB 4.0 (all) | Same as above. |
| MongoDB 4.2 (minor version ≤4.0.22) | Same as above. |
| MongoDB 4.4 (minor version ≤5.0.6) | Same as above. |
Later versions: Nodes remain in SECONDARY state and do not trigger rebuilds.
For version details, see MongoDB minor version release notes.
When compact is ineffective
The compact command has no effect when:
Physical collection size < 1 MB.
Fragmentation rate < 20%.
Available space in the first 80% of the file is less than 20%.
Available space in the first 90% of the file is less than 10%.
Reference: WiredTiger block_compact.c#L140.
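The first two conditions can be checked from collStats output alone. A minimal sketch, mirroring the thresholds above (the file-layout checks on the first 80%/90% of the file require WiredTiger-internal block data and are omitted; the helper name and signature are illustrative):

```javascript
// Sketch: the first two conditions under which compact is a no-op.
// Inputs are byte counts as reported by collStats (storageSize and
// freeStorageSize). The 80%/90% file-layout conditions need internal
// WiredTiger block data and are not modeled here.
function compactWouldBeSkipped(storageSizeBytes, freeStorageSizeBytes) {
  const ONE_MB = 1024 * 1024;
  if (storageSizeBytes < ONE_MB) return true;        // collection < 1 MB
  const frag = freeStorageSizeBytes / storageSizeBytes;
  return frag < 0.20;                                // fragmentation < 20%
}

console.log(compactWouldBeSkipped(512 * 1024, 256 * 1024));            // true: under 1 MB
console.log(compactWouldBeSkipped(10 * 1024 * 1024, 3 * 1024 * 1024)); // false: 30% fragmented
```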
Other considerations
Execution time: Depends on data volume and system workload.
Space release: The space actually released may be less than the reported available space. Ensure the previous compact completes before running the next.
Disk full scenarios: compact can execute even when the disk is full and the instance is locked.
Troubleshooting
Error: "Compaction interrupted due to cache eviction pressure"
Cause: Small-specification instances on older versions may abort compact due to cache pressure.
Solution: Run the operation during off-peak hours when system load is lower.
Related documentation
Appendix: Understanding disk fragmentation
How fragmentation occurs
When you delete data from an ApsaraDB for MongoDB instance, the storage space used by that data is marked as available for reuse. New data may be:
Stored directly in the available space
Stored at the end of the file after expanding file size
This results in unused available space scattered throughout the file—this is disk fragmentation.
Fragmentation impact
Higher fragmentation reduces effective disk utilization.
Example:
Disk size: 100 GB
Fragmented space: 20 GB
Business data: 60 GB
Reported disk utilization: 80% (60 GB + 20 GB)
Effective disk utilization: 60% (only business data)
The 20 GB of fragmented space is counted as "used" but contains no data.
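The arithmetic in the example above can be written out as follows (all numbers in GB; the variable names are illustrative):

```javascript
// Sketch of the utilization arithmetic from the example above (GB).
const diskSize = 100;
const fragmented = 20;   // reclaimable fragmented space
const businessData = 60; // actual data

// Fragmented space counts as "used" in the reported figure...
const reportedUtilization = (businessData + fragmented) / diskSize;  // 0.8
// ...but only business data contributes to effective utilization.
const effectiveUtilization = businessData / diskSize;                // 0.6

console.log(reportedUtilization, effectiveUtilization); // 0.8 0.6
```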
Command reference
For complete compact command documentation, see: