Spark configuration templates let you define a reusable set of global default configurations for task execution. Instead of specifying driver memory, executor count, and other parameters every time you submit a job, create a template once and load it from Data Development, spark-submit, Livy Gateway, or Kyuubi Gateway. Templates support dynamic updates, so you can adjust configurations at any time to meet evolving business needs.
How templates and job-level parameters interact: Template settings serve as defaults. Any parameter you pass directly at job submission overrides the corresponding template value. For example, setting spark.emr.serverless.jr.timeout=-1 at submission removes the timeout limit regardless of what the template specifies.
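This precedence can be seen in a spark-submit invocation. The sketch below assumes a template (referenced by a placeholder `<template_id>`) that sets a finite timeout; passing `spark.emr.serverless.jr.timeout=-1` at submission removes the limit for this job only, without changing the template:

```shell
# Template values load as defaults; --conf values passed here win.
spark-submit \
  --conf spark.emr.serverless.templateId=<template_id> \
  --conf spark.emr.serverless.jr.timeout=-1 \
  /path/to/your/spark-job.jar
```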
Prerequisites
Before you begin, ensure that you have:
A workspace. For more information, see Manage workspaces.
Create a configuration template
Log on to the E-MapReduce console.
In the left navigation pane, choose EMR Serverless > Spark.
On the Spark page, click the name of your workspace.
On the EMR Serverless Spark page, click Configurations in the left navigation pane.
On the Task Templates page, click Create Template.
On the Create Task Template page, configure the parameters described below, then click Create.
Template parameters
Basic settings
| Parameter | Description |
|---|---|
| Template Name | A name for the template. Use a descriptive name that reflects the workload type or resource profile — for example, high-memory-etl or interactive-analysis-standard — so team members can identify the right template at a glance. |
| Engine Version | The Spark engine version that the template uses. For details, see Engine versions. |
| Timeout | The maximum duration allowed for a task to complete. To remove the time limit for a specific job at submission, set spark.emr.serverless.jr.timeout=-1 in the job's configuration. |
Driver and executor resources
| Parameter | Description |
|---|---|
| spark.driver.cores | Number of CPU cores for the driver process. |
| spark.driver.memory | Amount of memory for the driver process. |
| spark.executor.cores | Number of CPU cores for each executor. |
| spark.executor.memory | Amount of memory for each executor. |
| spark.executor.instances | Number of executors to allocate. |
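The same keys can also be passed directly at submission, where they override the template's values. The resource sizes below are illustrative, not recommendations:

```shell
spark-submit \
  --conf spark.driver.cores=2 \
  --conf spark.driver.memory=4g \
  --conf spark.executor.cores=4 \
  --conf spark.executor.memory=8g \
  --conf spark.executor.instances=5 \
  /path/to/your/spark-job.jar
```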
Dynamic resource allocation
Disabled by default. When enabled, Spark adjusts the number of executors based on workload demand. Configure the following sub-parameters:
| Parameter | Default | Description |
|---|---|---|
| Minimum number of executors | 2 | The lower bound for executor count during dynamic scaling. |
| Maximum number of executors | 10 (if spark.executor.instances is not set) | The upper bound for executor count during dynamic scaling. |
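In open-source Spark, these bounds correspond to the standard dynamic allocation settings shown below; whether the console fields map to exactly these keys on EMR Serverless Spark is an assumption here, so treat this as a sketch of the equivalent spark-submit form:

```shell
spark-submit \
  --conf spark.dynamicAllocation.enabled=true \
  --conf spark.dynamicAllocation.minExecutors=2 \
  --conf spark.dynamicAllocation.maxExecutors=10 \
  /path/to/your/spark-job.jar
```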
More memory configurations
| Parameter | Description |
|---|---|
| spark.driver.memoryOverhead | Non-heap memory per driver. If left blank, Spark sets this to max(384 MB, 10% × spark.driver.memory). |
| spark.executor.memoryOverhead | Non-heap memory per executor. If left blank, Spark sets this to max(384 MB, 10% × spark.executor.memory). |
| spark.memory.offHeap.size | Off-heap memory for the Spark application. Default: 1 GB. Takes effect only when spark.memory.offHeap.enabled is set to true. When using the Fusion engine, both spark.memory.offHeap.enabled and spark.memory.offHeap.size are set to their defaults (true and 1 GB) automatically. |
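The default overhead rule, max(384 MB, 10% × configured heap), is easy to check by hand. The snippet below is illustrative arithmetic only, not an EMR API; it computes the default overhead for a hypothetical 4 GiB executor heap:

```shell
# Reproduce Spark's default memoryOverhead: max(384 MB, 10% of heap).
executor_memory_mb=4096
tenth=$(( executor_memory_mb / 10 ))
if [ "$tenth" -gt 384 ]; then overhead_mb=$tenth; else overhead_mb=384; fi
echo "$overhead_mb"   # 10% of 4096 MB is 409 MB, which exceeds the 384 MB floor
```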
Spark configuration
Additional Spark configuration key-value pairs. Separate each key from its value with a space, and separate multiple pairs with spaces as well. For example: spark.sql.catalog.paimon.metastore dlf. For built-in EMR Serverless Spark parameters, see Custom Spark Conf parameters.
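Note that the console field uses the `key value` form, while spark-submit expects `key=value` in each --conf flag. The same pair from the example above would be passed as:

```shell
spark-submit \
  --conf spark.sql.catalog.paimon.metastore=dlf \
  /path/to/your/spark-job.jar
```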
After the template is created, click Edit or Delete in the Actions column to modify or remove it.
Apply a template
Load from Data Development
When creating a batch or stream processing job in Data Development, set Load from template to an existing template. The console loads all configuration parameters from the selected template automatically.

Load with spark-submit
Set spark.emr.serverless.templateId in the --conf flag. EMR Serverless Spark loads the template's parameters as defaults for the job.
```shell
spark-submit \
  --class org.apache.spark.examples.SparkPi \
  --master yarn \
  --conf spark.emr.serverless.templateId=<template_id> \
  /path/to/your/spark-job.jar
```
You can find the template ID on the Task Templates page.
Load from a Livy Gateway
When creating a Livy Gateway, set Load from template in the upper-right corner of the Create Livy Gateway page to load an existing template's parameters.
Load from a Kyuubi Gateway
When creating a Kyuubi Gateway, set Load from template in the upper-right corner of the Create Kyuubi Gateway page to load an existing template's parameters.
Manage templates
Reset a template to system defaults
On the Task Templates page, click Edit in the Actions column.
On the Edit Task Template page, click Reset to system defaults at the bottom, then click Save Changes.
Set a template as the default
On the Task Templates page, click Set as default in the Actions column.
In the dialog box, click OK.