
E-MapReduce:Manage Spark configuration templates

Last Updated: Mar 26, 2026

Spark configuration templates let you define a reusable set of global default configurations for task execution. Instead of specifying driver memory, executor count, and other parameters every time you submit a job, create a template once and load it from Data Development, spark-submit, Livy Gateway, or Kyuubi Gateway. Templates support dynamic updates, so you can adjust configurations at any time to meet evolving business needs.

How templates and job-level parameters interact: Template settings serve as defaults. Any parameter you pass directly at job submission overrides the corresponding template value. For example, setting spark.emr.serverless.jr.timeout=-1 at submission removes the timeout limit regardless of what the template specifies.
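This precedence can be illustrated with a small sketch: template values behave like defaults in a map merge, and job-level values win. The concrete values below are assumptions chosen for illustration; only the parameter names come from this topic.

```python
# Illustrative sketch (not product code): template values act as defaults,
# and job-level values submitted with the task override them.
template_conf = {
    "spark.driver.memory": "4g",              # assumed template value
    "spark.emr.serverless.jr.timeout": "3600",  # assumed template-level timeout
}
job_conf = {
    "spark.emr.serverless.jr.timeout": "-1",  # passed at submission: removes the timeout
}

# In a dict merge, later keys win, so job-level settings take precedence.
effective_conf = {**template_conf, **job_conf}
print(effective_conf["spark.emr.serverless.jr.timeout"])  # -1
print(effective_conf["spark.driver.memory"])              # 4g
```

Parameters you do not override, such as spark.driver.memory here, keep their template values.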

Prerequisites

Before you begin, ensure that a workspace has been created. The steps in this topic are performed within a workspace.

Create a configuration template

  1. Log on to the E-MapReduce console.

  2. In the left navigation pane, choose EMR Serverless > Spark.

  3. On the Spark page, click the name of your workspace.

  4. On the EMR Serverless Spark page, click Configurations in the left navigation pane.

  5. On the Task Templates page, click Create Template.

  6. On the Create Task Template page, configure the parameters described below, then click Create.

Template parameters

Basic settings

Template Name
  A name for the template. Use a descriptive name that reflects the workload type or resource profile, such as high-memory-etl or interactive-analysis-standard, so team members can identify the right template at a glance.

Engine Version
  The version of the compute engine. For details, see Engine versions.

Timeout
  The maximum duration allowed for a task to complete. To remove the time limit for a specific job at submission, set spark.emr.serverless.jr.timeout=-1 in the job's configuration.

Driver and executor resources

spark.driver.cores
  The number of CPU cores for the driver process.

spark.driver.memory
  The amount of memory for the driver process.

spark.executor.cores
  The number of CPU cores for each executor.

spark.executor.memory
  The amount of memory for each executor.

spark.executor.instances
  The number of executors to allocate.
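If you are not loading a template, the same resource settings can be passed directly at submission as standard Spark properties. The following command is an illustrative sketch; the resource values are assumptions, not recommendations.

```shell
spark-submit \
  --conf spark.driver.cores=2 \
  --conf spark.driver.memory=4g \
  --conf spark.executor.cores=4 \
  --conf spark.executor.memory=8g \
  --conf spark.executor.instances=5 \
  /path/to/your/spark-job.jar
```

Because job-level parameters take precedence, any of these flags also overrides the corresponding template value.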

Dynamic resource allocation

Disabled by default. When enabled, Spark adjusts the number of executors based on workload demand. Configure the following sub-parameters:

Minimum number of executors
  Default: 2. The lower bound for the executor count during dynamic scaling.

Maximum number of executors
  Default: 10 (if spark.executor.instances is not set). The upper bound for the executor count during dynamic scaling.
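In open source Spark, these settings correspond to the standard dynamic allocation properties shown below. Whether the console maps them to exactly these keys is an assumption; the fragment is provided only to relate the template fields to familiar Spark configuration.

```
spark.dynamicAllocation.enabled       true
spark.dynamicAllocation.minExecutors  2
spark.dynamicAllocation.maxExecutors  10
```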

More memory configurations

spark.driver.memoryOverhead
  The amount of non-heap memory per driver. If left blank, Spark sets this to max(384 MB, 10% × spark.driver.memory).

spark.executor.memoryOverhead
  The amount of non-heap memory per executor. If left blank, Spark sets this to max(384 MB, 10% × spark.executor.memory).

spark.memory.offHeap.size
  The amount of off-heap memory for the Spark application. Default: 1 GB. Takes effect only when spark.memory.offHeap.enabled is set to true. When you use the Fusion engine, both spark.memory.offHeap.enabled and spark.memory.offHeap.size are set to their defaults (true and 1 GB) automatically.
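The default overhead rule above can be sketched as a small function: when the overhead is left blank, Spark uses the larger of 384 MB and 10% of the configured memory. The helper name and the sample memory sizes are illustrative.

```python
# Sketch of the default overhead rule: max(384 MB, 10% of configured memory),
# with sizes expressed in MB.
def default_memory_overhead_mb(memory_mb: int) -> int:
    return max(int(memory_mb * 0.10), 384)

print(default_memory_overhead_mb(2048))  # 384 (10% of 2 GB is below the 384 MB floor)
print(default_memory_overhead_mb(8192))  # 819 (10% of 8 GB)
```

For small drivers and executors the 384 MB floor dominates; only above roughly 3.75 GB does the 10% term take over.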

Spark configuration

Additional Spark configuration key-value pairs. Separate each key and value with a space, and separate multiple pairs with spaces — for example, spark.sql.catalog.paimon.metastore dlf. For built-in EMR Serverless Spark parameters, see Custom Spark Conf parameters.
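The space-separated format can be read as alternating keys and values. The following parser is purely illustrative of that format, not the product's internal parsing logic; the second sample pair is an assumed example.

```python
# Illustrative parser for the space-separated key-value format:
# tokens alternate between keys and values.
def parse_conf_pairs(text: str) -> dict:
    tokens = text.split()
    if len(tokens) % 2 != 0:
        raise ValueError("every key needs a value")
    return dict(zip(tokens[0::2], tokens[1::2]))

print(parse_conf_pairs("spark.sql.catalog.paimon.metastore dlf"))
# {'spark.sql.catalog.paimon.metastore': 'dlf'}
```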

After the template is created, click Edit or Delete in the Actions column to modify or remove it.

Apply a template

Load from Data Development

When creating a batch or stream processing job in Data Development, set Load from template to an existing template. The console loads all configuration parameters from the selected template automatically.


Load with spark-submit

Specify the template ID with the spark.emr.serverless.templateId property in a --conf flag. EMR Serverless Spark loads the template's parameters as defaults for the job.

spark-submit \
  --class org.apache.spark.examples.SparkPi \
  --master yarn \
  --conf spark.emr.serverless.templateId=<template_id> \
  /path/to/your/spark-job.jar

You can find the template ID on the Task Templates page.
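Because template values loaded this way remain defaults, you can still override individual parameters in the same command. In the following sketch, the executor memory value is an illustrative assumption:

```shell
spark-submit \
  --conf spark.emr.serverless.templateId=<template_id> \
  --conf spark.executor.memory=8g \
  /path/to/your/spark-job.jar
```

Here, spark.executor.memory takes the value 8g regardless of what the template specifies, while all other template parameters still apply.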

Load from a Livy Gateway

When creating a Livy Gateway, set Load from template in the upper-right corner of the Create Livy Gateway page to load an existing template's parameters.

Load from a Kyuubi Gateway

When creating a Kyuubi Gateway, set Load from template in the upper-right corner of the Create Kyuubi Gateway page to load an existing template's parameters.
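Once the gateway is running, clients typically connect over the Hive JDBC protocol, for example with Beeline. This is an illustrative sketch: the endpoint placeholder is hypothetical, and 10009 is the open source Kyuubi default port, which your gateway may not use.

```shell
beeline -u "jdbc:hive2://<kyuubi-gateway-endpoint>:10009"
```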

Manage templates

Reset a template to system defaults

  1. On the Task Templates page, click Edit in the Actions column.

  2. On the Edit Task Template page, click Reset to system defaults at the bottom, then click Save Changes.

Set a template as the default

  1. On the Task Templates page, click Set as default in the Actions column.

  2. In the dialog box, click OK.