Microservices Engine: Troubleshoot excessive Nacos thread count

Last Updated: Mar 10, 2026

When monitoring shows an unusually high thread count in your application and most thread names contain nacos, use this guide to identify the root cause and reduce thread usage.

This guide covers Java applications. Apply equivalent commands for other languages.

Quick diagnosis

Use this decision tree to jump to the relevant section:

  1. Check Nacos client instance count.

    • Instance count > 10 -- Too many client instances. Review your application code to reuse instances. See Step 1.

    • Instance count <= 3 -- Instance count is normal. Continue to step 2.

  2. Check thread counts against expected upper limits.

    • All counts exceed limits -- Thread pool leak from unreleased instances. See Step 2a.

    • Only CPU-dependent counts exceed limits -- Invalid CPU core count (common in containers). See Step 2b.

    • No counts exceed limits -- Thread pools are within designed range. Optionally tune pool sizes. See Step 2c.

Step 1: Check Nacos client instance count

Dump a heap histogram to count active Nacos client instances:

# Replace ${pid} with the process ID of your Java application
jmap -histo ${pid} > histo.log

Filter for NacosNamingService and NacosConfigService instances:

# Check NacosNamingService instance count (expected: no more than 3)
grep "NacosNamingService" histo.log | awk '{print $2,$4}'

# Check NacosConfigService instance count (expected: no more than 3)
grep "NacosConfigService" histo.log | awk '{print $2,$4}'

If the count exceeds 10, the application is likely creating duplicate client instances instead of reusing existing ones. Review your application code to share Nacos client instances. For a known example of this issue in Dubbo, see dubbo create nacos multiple identical NamingService.

If the count is 3 or fewer, the instance count is normal. Proceed to Step 2.
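A common source of duplicate instances is calling the client factory on every lookup instead of caching the result. The pattern can be sketched with a stand-in `NamingClient` type (not the real Nacos API): cache one client per server address so repeated lookups return the same instance.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Stand-in for the real Nacos client type; the reuse pattern, not the API, is the point.
interface NamingClient {}

class NamingClientCache {
    private static final Map<String, NamingClient> CLIENTS = new ConcurrentHashMap<>();

    // Reuse one client per server address instead of creating a new
    // instance (and its thread pools) on every call.
    static NamingClient get(String serverAddr) {
        return CLIENTS.computeIfAbsent(serverAddr, addr -> new NamingClient() {});
    }
}

public class ReuseDemo {
    public static void main(String[] args) {
        NamingClient a = NamingClientCache.get("127.0.0.1:8848");
        NamingClient b = NamingClientCache.get("127.0.0.1:8848");
        System.out.println(a == b); // prints true: the instance is shared
    }
}
```

With the real client, the cached value would be whatever the Nacos factory returns for that server address; the `ConcurrentHashMap.computeIfAbsent` call guarantees at most one instance per key even under concurrent access.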

Step 2: Check thread counts

Capture a thread dump and count threads by type:

jstack ${pid} > jstack.log

Use the following commands to count threads for each Nacos thread pool. Compare the results against the expected upper limits.

Thread count reference

| Thread name pattern | Purpose | Expected upper limit |
| --- | --- | --- |
| nacos-grpc-client-executor | gRPC request processing | Nacos instances x CPU cores x 8 |
| nacos.publisher- | Internal event notification | 5 |
| com.alibaba.nacos.client.remote.worker | Server reconnection and heartbeats | Nacos instances x 2 |
| com.alibaba.nacos.client.naming.updater | NacosNamingService cache updates | NacosNamingService instances x (CPU cores / 2) |
| com.alibaba.nacos.naming.push.receiver | UDP push data receiver | NacosNamingService instances |
| com.alibaba.nacos.client.naming.grpc.redo | Non-persistent service data compensation after reconnect | NacosNamingService instances |
| com.alibaba.nacos.client.Worker (excluding longPolling) | ConfigService listener coordination | NacosConfigService instances |
| nacos.client.config.listener.task | ConfigService listener callback | NacosConfigService instances x 5 |
| com.alibaba.nacos.client.Worker.longPolling | ConfigService long polling | NacosConfigService instances x CPU cores |

To count the threads for a pattern, grep for it in the dump:

grep "<thread name pattern>" jstack.log | wc -l

For com.alibaba.nacos.client.Worker, exclude the long-polling threads so they are not counted twice:

grep "com.alibaba.nacos.client.Worker" jstack.log | grep -v "longPolling" | wc -l

After collecting the counts, determine which scenario applies:

  • All counts exceed limits -- Go to Step 2a.

  • Only CPU-dependent counts exceed limits (such as nacos-grpc-client-executor and com.alibaba.nacos.client.naming.updater) -- Go to Step 2b.

  • No counts exceed limits -- Go to Step 2c.

Step 2a: All thread counts exceed limits

Cause: Nacos client instances are being created continuously without calling shutdown() on the old instances. The old thread pools remain active, causing thread accumulation.

Each Nacos client instance maintains its own thread pools and connections throughout its lifecycle. Even after disconnection, the client retries to re-establish connections. If you replace an old instance with a new one without calling shutdown(), the old thread pools persist indefinitely.

Solution: Locate the code that replaces Nacos client instances and call the shutdown() method on the old instance before creating a new one. Reuse existing client instances whenever possible.

For a reference case involving Sentinel, see Connection leak risk when using Nacos data-source.
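The replace-without-shutdown leak, and its fix, can be sketched with a stand-in `Client` type (the real Nacos `NamingService` and `ConfigService` expose a shutdown method for this purpose): swap the reference atomically and shut down the instance that was swapped out.

```java
import java.util.concurrent.atomic.AtomicReference;

// Stand-in client that owns resources (thread pools, connections).
class Client {
    private volatile boolean shutdown = false;
    void shutdown() { shutdown = true; }   // the real client releases its thread pools here
    boolean isShutdown() { return shutdown; }
}

class ClientHolder {
    private final AtomicReference<Client> current = new AtomicReference<>(new Client());

    // Replace the active client, shutting the old one down so its
    // thread pools do not linger after the swap.
    void replace(Client next) {
        Client old = current.getAndSet(next);
        if (old != null) {
            old.shutdown();
        }
    }

    Client get() { return current.get(); }
}

public class ShutdownDemo {
    public static void main(String[] args) {
        ClientHolder holder = new ClientHolder();
        Client old = holder.get();
        holder.replace(new Client());
        System.out.println(old.isShutdown()); // prints true: no orphaned thread pools
    }
}
```

Without the `old.shutdown()` call, every replacement would leave the previous instance, and all of its threads, alive but unreachable, which matches the accumulation pattern this step diagnoses.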

Step 2b: Only CPU-dependent thread counts exceed limits

Cause: The application reads an incorrect CPU core count, which inflates thread pool sizes that scale with CPU cores. This is common in container environments where the JVM detects the host CPU count rather than the container's allocated cores.

Diagnose: Check the CPU core count the application detects:

Runtime.getRuntime().availableProcessors()
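You can log this value from inside the application, or run a one-off snippet in the same container to see what the JVM reports:

```java
public class CpuCheck {
    public static void main(String[] args) {
        // In a container, this should match the container's CPU allocation,
        // not the host's physical core count.
        int cores = Runtime.getRuntime().availableProcessors();
        System.out.println("availableProcessors = " + cores);
    }
}
```

If the printed value matches the host rather than the container allocation, the CPU-dependent thread pool sizes will be inflated accordingly.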

Solution (choose one):

  • Fix the container environment to correctly expose the allocated CPU count to the JVM.

  • Override the CPU count for Nacos by setting the JVM parameter -Dnacos.common.processors or the environment variable NACOS_COMMON_PROCESSORS.

Note

The -Dnacos.common.processors parameter and NACOS_COMMON_PROCESSORS environment variable require Nacos client 2.1.1 or later.

Step 2c: No thread counts exceed limits

If no thread counts exceed their expected upper limits, the thread pools are not leaking. The thread count is within the designed range but may still be higher than desired.

Reduce thread pool size by setting JVM parameters to control the nacos-grpc-client-executor thread pool:

| Parameter | Purpose | Default |
| --- | --- | --- |
| -Dnacos.remote.client.grpc.pool.core.size | Minimum thread count | CPU cores x 2 |
| -Dnacos.remote.client.grpc.pool.max.size | Maximum thread count | CPU cores x 8 |

Note

These parameters require Nacos client 2.1.1 or later.

The nacos-grpc-client-executor thread pool has a built-in reclaim mechanism: idle threads are automatically reclaimed down to the core size when no requests are pending. Thread IDs for the same thread name may increase over time in monitoring systems as threads are reclaimed and recreated. This is expected behavior.
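The reclaim behavior described above is standard `java.util.concurrent.ThreadPoolExecutor` semantics. A minimal sketch (not the actual Nacos pool configuration) shows a pool growing under a burst and shrinking back to its core size once idle:

```java
import java.util.concurrent.SynchronousQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class ReclaimDemo {
    public static void main(String[] args) throws InterruptedException {
        // core=2, max=8 mirrors the shape of a pool sized "CPU cores x 2" to
        // "CPU cores x 8"; idle non-core threads die after 200 ms.
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                2, 8, 200, TimeUnit.MILLISECONDS, new SynchronousQueue<>());

        for (int i = 0; i < 8; i++) {
            pool.execute(() -> {
                try { Thread.sleep(100); } catch (InterruptedException ignored) {}
            });
        }
        System.out.println("during burst: " + pool.getPoolSize()); // grows toward 8

        Thread.sleep(1000); // let tasks finish and idle threads time out
        System.out.println("after idle:   " + pool.getPoolSize()); // back at core size (2)
        pool.shutdown();
    }
}
```

The threads reclaimed and later recreated get fresh thread IDs, which is why monitoring systems show ever-increasing IDs for the same thread name.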

Thread name to configuration parameter mapping

Use this table to find the configuration parameter that controls a specific thread pool:

| Thread name in jstack output | Configuration parameter | Default value |
| --- | --- | --- |
| nacos-grpc-client-executor | -Dnacos.remote.client.grpc.pool.core.size (min) / -Dnacos.remote.client.grpc.pool.max.size (max) | CPU cores x 2 (min) / CPU cores x 8 (max) |
| CPU-dependent threads (all types) | -Dnacos.common.processors or NACOS_COMMON_PROCESSORS | Runtime.getRuntime().availableProcessors() |

Note

All configuration parameters listed above require Nacos client 2.1.1 or later.