When monitoring shows an unusually high thread count in your application and most thread names contain nacos, use this guide to identify the root cause and reduce thread usage.
This guide covers Java applications. Apply equivalent commands for other languages.
Quick diagnosis
Use this decision tree to jump to the relevant section:

1. Check the Nacos client instance count.
   - Instance count is greater than 10 -- too many client instances; review your application code to reuse instances. See Step 1.
   - Instance count is 3 or fewer -- the instance count is normal. Continue to Step 2.
2. Check thread counts against the expected upper limits.
   - All counts exceed their limits -- thread pool leak from unreleased instances. See Step 2a.
   - Only CPU-dependent counts exceed their limits -- invalid CPU core count, common in containers. See Step 2b.
   - No counts exceed their limits -- thread pools are within the designed range; optionally tune pool sizes. See Step 2c.
Step 1: Check Nacos client instance count
Dump a heap histogram to count active Nacos client instances:

```shell
# Replace ${pid} with the process ID of your Java application
jmap -histo ${pid} > histo.log
```

Filter for NacosNamingService and NacosConfigService instances:
```shell
# Check NacosNamingService instance count (expected: no more than 3)
grep "NacosNamingService" histo.log | awk '{print $2,$4}'

# Check NacosConfigService instance count (expected: no more than 3)
grep "NacosConfigService" histo.log | awk '{print $2,$4}'
```

If the count exceeds 10, the application is likely creating duplicate client instances instead of reusing existing ones. Review your application code to share Nacos client instances. For a known example of this issue in Dubbo, see dubbo create nacos multiple identical NamingService.
If the count is 3 or fewer, the instance count is normal. Proceed to Step 2.
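The "share client instances" advice can be sketched as a small registry that caches one client per server address. `Object` stands in for the real Nacos client type so the sketch compiles without the nacos-client dependency; in real code the factory lambda would call the actual Nacos factory method.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Sketch: reuse one client per server address instead of creating a new
// instance on every call. `Object` is a stand-in for the real client type.
public class ClientRegistry {
    private final Map<String, Object> clients = new ConcurrentHashMap<>();
    private final Function<String, Object> factory;

    public ClientRegistry(Function<String, Object> factory) {
        this.factory = factory;
    }

    // computeIfAbsent guarantees at most one client per server address,
    // so repeated lookups reuse the same instance.
    public Object get(String serverAddr) {
        return clients.computeIfAbsent(serverAddr, factory);
    }

    public static void main(String[] args) {
        ClientRegistry registry = new ClientRegistry(addr -> new Object());
        Object first = registry.get("127.0.0.1:8848");
        Object second = registry.get("127.0.0.1:8848");
        System.out.println("same instance: " + (first == second)); // prints: same instance: true
    }
}
```

Using `computeIfAbsent` keeps the lookup thread-safe without explicit locking, so concurrent callers cannot race to create duplicate clients for the same address.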
Step 2: Check thread counts
Capture a thread dump and count threads by type:
```shell
# Replace ${pid} with the process ID of your Java application
jstack ${pid} > jstack.log
```

Count the threads for each Nacos thread pool (for example, by grepping jstack.log for each thread name) and compare the results against the expected upper limits in the table below.
Thread count reference
| Purpose | Expected upper limit |
| --- | --- |
| gRPC request processing (`nacos-grpc-client-executor`) | Nacos instances x CPU cores x 8 |
| Internal event notification | 5 |
| Server reconnection and heartbeats | Nacos instances x 2 |
| NacosNamingService cache updates (`com.alibaba.nacos.client.naming.updater`) | NacosNamingService instances x (CPU cores / 2) |
| UDP push data receiver | NacosNamingService instances |
| Non-persistent service data compensation after reconnect | NacosNamingService instances |
| ConfigService listener coordination | NacosConfigService instances |
| ConfigService listener callback | NacosConfigService instances x 5 |
| ConfigService long polling | NacosConfigService instances x CPU cores |
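As an alternative to grepping a jstack dump, the same per-pool counting can be done from inside the JVM with `Thread.getAllStackTraces()`. This sketch starts a few sleeper threads with fabricated Nacos-style names to stand in for real client threads, then counts live threads by name prefix:

```java
public class ThreadCounter {
    // Count live threads whose names start with the given prefix.
    static long countByPrefix(String prefix) {
        return Thread.getAllStackTraces().keySet().stream()
                .filter(t -> t.getName().startsWith(prefix))
                .count();
    }

    public static void main(String[] args) {
        // Fabricated thread names for the demonstration; in a real
        // application these would be created by the Nacos client.
        for (int i = 0; i < 3; i++) {
            Thread t = new Thread(() -> {
                try { Thread.sleep(60_000); } catch (InterruptedException ignored) { }
            }, "nacos-grpc-client-executor-" + i);
            t.setDaemon(true);
            t.start();
        }
        System.out.println("nacos-grpc-client-executor threads: "
                + countByPrefix("nacos-grpc-client-executor")); // prints: ... threads: 3
    }
}
```

This is useful when you want to expose the counts through a metrics or debug endpoint instead of taking ad-hoc dumps.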
After collecting the counts, determine which scenario applies:

- All counts exceed their limits -- go to Step 2a.
- Only CPU-dependent counts exceed their limits (such as `nacos-grpc-client-executor` and `com.alibaba.nacos.client.naming.updater`) -- go to Step 2b.
- No counts exceed their limits -- go to Step 2c.
Step 2a: All thread counts exceed limits
Cause: Nacos client instances are being created continuously without calling shutdown() on the old instances. The old thread pools remain active, causing thread accumulation.
Each Nacos client instance maintains its own thread pools and connections throughout its lifecycle; even after a disconnection, the client keeps retrying to re-establish the connection. If you replace an old instance with a new one without calling shutdown(), the old instance's thread pools persist indefinitely.
Solution: Locate the code that replaces Nacos client instances and call the shutdown() method on the old instance before creating a new one. Reuse existing client instances whenever possible.
For a reference case involving Sentinel, see Connection leak risk when using Nacos data-source.
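A minimal sketch of the replace-then-shutdown pattern described above. `NacosClient` is a stand-in interface so the example compiles without the nacos-client jar; in real code you would call the shutdown method of the actual Nacos client.

```java
// Stand-in for the real Nacos client, which exposes an equivalent shutdown method.
interface NacosClient {
    void shutdown();
}

public class ClientHolder {
    private NacosClient current;

    // Swap in the new client, then shut the old one down so its
    // thread pools and connections are released instead of leaking.
    public synchronized NacosClient replace(NacosClient fresh) {
        NacosClient old = current;
        current = fresh;
        if (old != null) {
            old.shutdown();
        }
        return current;
    }

    public static void main(String[] args) {
        ClientHolder holder = new ClientHolder();
        holder.replace(() -> System.out.println("old client shut down"));
        holder.replace(() -> { }); // prints: old client shut down
    }
}
```

The `synchronized` replacement ensures two concurrent refreshes cannot both overwrite `current` and leave one instance without a shutdown call.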
Step 2b: Only CPU-dependent thread counts exceed limits
Cause: The application reads an incorrect CPU core count, which inflates thread pool sizes that scale with CPU cores. This is common in container environments where the JVM detects the host CPU count rather than the container's allocated cores.
Diagnose: Check the CPU core count the application detects:

```java
Runtime.getRuntime().availableProcessors()
```

Solution (choose one):
- Fix the container environment to correctly expose the allocated CPU count to the JVM.
- Override the CPU count for Nacos by setting the JVM parameter `-Dnacos.common.processors` or the environment variable `NACOS_COMMON_PROCESSORS`.
The `-Dnacos.common.processors` parameter and `NACOS_COMMON_PROCESSORS` environment variable require Nacos client 2.1.1 or later.
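To see both the detected core count and the value an override would produce, you can mirror the property lookup in a scratch class. The property name comes from the step above; the lookup logic here is illustrative, not the Nacos client's actual code.

```java
public class CpuCheck {
    public static void main(String[] args) {
        int detected = Runtime.getRuntime().availableProcessors();
        // Mirror the -Dnacos.common.processors override: if the property is
        // set, use it; otherwise fall back to the detected core count.
        int effective = Integer.getInteger("nacos.common.processors", detected);
        System.out.println("detected=" + detected + " effective=" + effective);
    }
}
```

Run this inside the container: if `detected` shows the host's core count rather than the container's allocation, CPU-scaled pools will be oversized unless you apply one of the two fixes above.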
Step 2c: No thread counts exceed limits
If no thread counts exceed their expected upper limits, the thread pools are not leaking. The thread count is within the designed range but may still be higher than desired.
Reduce thread pool size by setting JVM parameters to control the nacos-grpc-client-executor thread pool:
| Parameter | Purpose | Default |
| --- | --- | --- |
| `-Dnacos.remote.client.grpc.pool.core.size` | Minimum thread count | CPU cores x 2 |
| `-Dnacos.remote.client.grpc.pool.max.size` | Maximum thread count | CPU cores x 8 |
These parameters require Nacos client 2.1.1 or later.
The nacos-grpc-client-executor thread pool has a built-in reclaim mechanism: idle threads are automatically reclaimed down to the core size when no requests are pending. Thread IDs for the same thread name may increase over time in monitoring systems as threads are reclaimed and recreated. This is expected behavior.
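The reclaim behavior can be demonstrated with a plain `ThreadPoolExecutor` of the same shape (small core size, larger max, a keep-alive timeout). This is generic `java.util.concurrent` code, not the Nacos implementation; the sizes and timeout are chosen only to make the effect quick to observe.

```java
import java.util.concurrent.SynchronousQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class ReclaimDemo {
    public static void main(String[] args) throws InterruptedException {
        // Core 2, max 8, 100 ms keep-alive for idle non-core threads.
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                2, 8, 100, TimeUnit.MILLISECONDS, new SynchronousQueue<>());
        for (int i = 0; i < 8; i++) {
            pool.execute(() -> {
                try { Thread.sleep(50); } catch (InterruptedException ignored) { }
            });
        }
        System.out.println("during burst: " + pool.getPoolSize()); // grows toward the max
        Thread.sleep(1000);
        // Idle threads above the core size have timed out and been reclaimed.
        System.out.println("after idle:   " + pool.getPoolSize()); // prints: after idle:   2
        pool.shutdown();
    }
}
```

The same mechanism explains the rising thread IDs in monitoring: each reclaimed-and-recreated worker is a new thread with a new ID, even though the pool size stays bounded.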
Thread name to configuration parameter mapping
Use this table to find the configuration parameter that controls a specific thread pool:
| Thread name | Configuration parameter | Default value |
| --- | --- | --- |
| `nacos-grpc-client-executor` | `-Dnacos.remote.client.grpc.pool.core.size` / `-Dnacos.remote.client.grpc.pool.max.size` | CPU cores x 2 (min) / CPU cores x 8 (max) |
| CPU-dependent threads (all types) | `-Dnacos.common.processors` (JVM parameter) or `NACOS_COMMON_PROCESSORS` (environment variable) | Detected CPU core count |
All configuration parameters listed above require Nacos client 2.1.1 or later.