
E-MapReduce:YARN resource configuration

Last Updated: Sep 20, 2025

In EMR versions 3.49.0 and 5.15.0 and later, EMR dynamically adjusts the default memory settings for components during cluster creation based on the selected instance types and services. This behavior replaces the static default configurations used in previous versions. This topic describes how to configure the heap memory size for YARN components and use YARN resource configurations.

Note
  • After an EMR cluster is initialized, if the memory allocated to YARN components is too small, check whether too many services are deployed in the cluster. EMR allocates resources based on the services deployed in a cluster, so deploying many services reduces the memory available to YARN components. Also check whether the Elastic Compute Service (ECS) instance types in a node group are too small to meet the memory requirements of the services deployed in the cluster.

  • You can adjust the parameter settings of YARN components in the EMR console after an EMR cluster is created.

Configurations of the heap memory size for YARN components

On the Configure tab of the YARN service page in the EMR console, configure the parameters. The following table describes the parameters.

| Component | Configuration file | Parameter | Effective scope | Remarks |
| --- | --- | --- | --- | --- |
| ResourceManager | yarn-env.sh | YARN_RESOURCEMANAGER_HEAPSIZE | Cluster level | The minimum value is 1024. If the cluster runs many small jobs, you can increase the heap memory size. Restart the ResourceManager component for the change to take effect. |
| NodeManager | yarn-env.sh | YARN_NODEMANAGER_HEAPSIZE | Cluster level | If full garbage collection (GC) occurs because the Shuffle Service occupies a large amount of NodeManager memory, you can increase the heap memory size. Restart the NodeManager component for the change to take effect. |
| WebAppProxyServer | yarn-env.sh | YARN_PROXYSERVER_HEAPSIZE | Cluster level | Restart the WebAppProxyServer component for the change to take effect. |
| TimelineServer | yarn-env.sh | YARN_TIMELINESERVER_HEAPSIZE | Cluster level | Restart the TimelineServer component for the change to take effect. |
| TimelineServer | yarn-env.sh | -XX:MaxDirectMemorySize in YARN_TIMELINESERVER_OPTS | Cluster level | The maximum direct memory size for TimelineServer. The minimum value is 512m. Restart the TimelineServer component for the change to take effect. |
| MRHistoryServer | mapred-env.sh | HADOOP_JOB_HISTORYSERVER_HEAPSIZE | Cluster level | Restart the MRHistoryServer component for the change to take effect. |
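For illustration, the parameters above are environment variables in yarn-env.sh and mapred-env.sh. The values below are hypothetical examples, not EMR defaults; in EMR you should change them on the Configure tab of the YARN service page rather than editing the files directly on nodes.

```shell
# Example yarn-env.sh heap settings (illustrative values, not EMR defaults).
# Apply changes in the EMR console, then restart the affected component.
export YARN_RESOURCEMANAGER_HEAPSIZE=2048   # MB; must be at least 1024
export YARN_NODEMANAGER_HEAPSIZE=2048       # MB; raise if the Shuffle Service triggers full GC
export YARN_PROXYSERVER_HEAPSIZE=1024       # MB
export YARN_TIMELINESERVER_HEAPSIZE=2048    # MB
# Direct memory for TimelineServer; the minimum value is 512m
export YARN_TIMELINESERVER_OPTS="${YARN_TIMELINESERVER_OPTS:-} -XX:MaxDirectMemorySize=512m"
```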

Configurations of cluster resources for YARN

On the Configure tab of the YARN service page in the EMR console, configure the parameters. The following table describes the parameters.

| Parameter | Description | Configuration file | Effective scope | Remarks |
| --- | --- | --- | --- | --- |
| yarn.scheduler.maximum-allocation-mb | The maximum amount of memory that a single container can request from the scheduler. | yarn-site.xml | Cluster level | If the cluster must run large jobs in a single container, you can increase this value. An excessively large value may cause resource fragmentation. Restart the ResourceManager component for the change to take effect. |
| yarn.scheduler.minimum-allocation-mb | The minimum amount of memory that a single container can request from the scheduler. | yarn-site.xml | Cluster level | In most cases, you do not need to adjust this parameter. Restart the ResourceManager component for the change to take effect. |
| yarn.scheduler.maximum-allocation-vcores | The maximum number of vCPUs that a single container can request from the scheduler. | yarn-site.xml | Cluster level | The default value is 32. If the cluster must run large jobs in a single container, you can increase this value. An excessively large value may cause resource fragmentation. Restart the ResourceManager component for the change to take effect. |
| yarn.scheduler.minimum-allocation-vcores | The minimum number of vCPUs that a single container can request from the scheduler. | yarn-site.xml | Cluster level | The default value is 1. In most cases, you do not need to adjust this parameter. Restart the ResourceManager component for the change to take effect. |
| yarn.nodemanager.resource.memory-mb | The amount of memory available to the NodeManager component. | yarn-site.xml | Node group level | Configure this parameter based on your cluster deployment. Restart the NodeManager component for the change to take effect. Important: you must select a node group when you configure this parameter. |
| yarn.nodemanager.resource.cpu-vcores | The number of vCPUs available to the NodeManager component. | yarn-site.xml | Node group level | The default value is the number of vCPUs of the instance type used by the node group. For instance types with high memory specifications, the default value is twice the number of vCPUs. Adjust this parameter based on your cluster deployment. Restart the NodeManager component for the change to take effect. Important: you must select a node group when you configure this parameter. |

When you modify the yarn.nodemanager.resource.memory-mb and yarn.nodemanager.resource.cpu-vcores parameters, you must select Node Group-level Configuration for the changes to take effect. EMR allows a cluster to run different ECS instance types using node groups. All ECS instances within a node group have the same instance type. Therefore, configuring NodeManager resources at the node group level ensures that nodes with smaller instance types are not overloaded and nodes with larger instance types are not underutilized during scheduling. This also means you do not need to modify the NodeManager resource configuration for each ECS node.
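As a sketch, a node group of hypothetical 16 vCPU / 64 GiB instances might carry a node-group-level configuration like the following. The values are illustrative only; in practice you set them on the Configure tab with Node Group-level Configuration selected rather than editing yarn-site.xml by hand.

```xml
<!-- yarn-site.xml, node group level (illustrative values) -->
<property>
  <name>yarn.nodemanager.resource.memory-mb</name>
  <!-- Reserve part of the 64 GiB node for system daemons; offer the rest to containers -->
  <value>57344</value>
</property>
<property>
  <name>yarn.nodemanager.resource.cpu-vcores</name>
  <value>16</value>
</property>
```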

EMR lets you configure the yarn.scheduler.maximum-allocation-mb and yarn.nodemanager.resource.memory-mb parameters when you create a cluster or the first time you scale out a cluster by adding a node group. The value of the yarn.scheduler.maximum-allocation-mb parameter must be greater than or equal to the value of the yarn.nodemanager.resource.memory-mb parameter. This ensures that your jobs can be scheduled as expected.
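As a hypothetical illustration of this constraint, a cluster whose largest node group offers 57344 MB to NodeManager could set the cluster-level scheduler maximum as follows (illustrative values, not EMR defaults):

```xml
<!-- yarn-site.xml, cluster level (illustrative values) -->
<property>
  <name>yarn.scheduler.maximum-allocation-mb</name>
  <!-- At least the largest yarn.nodemanager.resource.memory-mb in the cluster,
       so the biggest container a job can request still fits on a node -->
  <value>57344</value>
</property>
```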

  • When you upgrade the specifications of a node group or change the value of the yarn.nodemanager.resource.memory-mb parameter, the value of the yarn.scheduler.maximum-allocation-mb parameter is not automatically changed. You can manually change the value of the yarn.scheduler.maximum-allocation-mb parameter as needed.

  • To prevent your jobs from being affected, the first time you configure the yarn.scheduler.maximum-allocation-mb parameter for a new node group, the ResourceManager component is not automatically restarted. To make the configuration take effect, you must manually restart the ResourceManager component.

    Note

    A restart of the ResourceManager component may cause jobs to fail. We recommend that you restart the component during off-peak hours.