Spark configuration templates define global default configurations for task execution. You can create, edit, and manage these templates to ensure that tasks run consistently while retaining the flexibility to update configurations dynamically to meet different business needs.
Prerequisites
A workspace has been created. For more information, see Manage workspaces.
Create a configuration template
1. Go to the Configurations page.
   1. Log on to the E-MapReduce console.
   2. In the left-side navigation pane, choose EMR Serverless > Spark.
   3. On the Spark page, click the name of the target workspace.
   4. On the EMR Serverless Spark page, click Configurations in the left-side navigation pane.
2. On the Task Templates page, click Create Template.
3. On the Create Task Template page, configure the following parameters and click Create.
Template Name: The custom name of the template.
Engine Version: The engine version used by the current compute engine. For more information about engine versions, see Engine versions.
Timeout: The maximum amount of time allowed for a task to complete. Note: When you submit a task, you can set spark.emr.serverless.jr.timeout=-1 to overwrite the template's timeout configuration and remove the time limit for the task.
spark.driver.cores: The number of CPU cores to use for the driver process in a Spark application.
spark.driver.memory: The amount of memory to use for the driver process in a Spark application.
spark.executor.cores: The number of CPU cores to use for each executor process.
spark.executor.memory: The amount of memory to use for each executor process.
spark.executor.instances: The number of executors to allocate for the Spark application.
Dynamic Resource Allocation: By default, this feature is disabled. After you enable this feature, you must configure the following parameters:
- Minimum Number of Executors: Default value: 2.
- Maximum Number of Executors: If you do not configure the spark.executor.instances parameter, the default value 10 is used.
More Memory Configurations:
- spark.driver.memoryOverhead: The amount of non-heap memory available to each driver. If you leave this parameter empty, Spark automatically assigns a value based on the formula max(384 MB, 10% × spark.driver.memory).
- spark.executor.memoryOverhead: The amount of non-heap memory available to each executor. If you leave this parameter empty, Spark automatically assigns a value based on the formula max(384 MB, 10% × spark.executor.memory).
- spark.memory.offHeap.size: The amount of off-heap memory available to the Spark application. Default value: 1 GB. This parameter takes effect only if you set the spark.memory.offHeap.enabled parameter to true. By default, if you use the Fusion engine, the spark.memory.offHeap.enabled parameter is set to true and the spark.memory.offHeap.size parameter is set to 1 GB.
Spark Configuration: The Spark configurations. Separate the configurations with spaces, for example, spark.sql.catalog.paimon.metastore dlf. Serverless Spark provides multiple built-in parameters. For more information about the names, descriptions, and scenarios of these parameters, see Custom Spark Conf parameters.
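The default overhead formula described above can be sketched as follows. This is a minimal illustration only: the default_overhead_mb helper name and the integer MB arithmetic are assumptions for the sketch, not part of the product.

```shell
# Default non-heap overhead applied when spark.driver.memoryOverhead or
# spark.executor.memoryOverhead is left empty: max(384 MB, 10% of the heap).
default_overhead_mb() {
  local heap_mb=$1
  local tenth=$(( heap_mb / 10 ))         # 10% of the heap size, in MB
  echo $(( tenth > 384 ? tenth : 384 ))   # apply the 384 MB floor
}

default_overhead_mb 4096   # 4 GiB heap -> 409 MB overhead
default_overhead_mb 2048   # 2 GiB heap -> floor applies: 384 MB
```

For small driver or executor heaps, the 384 MB floor dominates, so shrinking spark.executor.memory below roughly 4 GB does not reduce the overhead portion further.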
After the configuration template is created, you can click Edit or Delete in the Actions column to modify or delete it.
Use a configuration template
Load a configuration in Data Development
When you develop batch or stream processing jobs, you can set the Load from template parameter to use an existing configuration template. The system automatically loads the configuration parameters from the selected template. This simplifies the configuration process and ensures consistency.

Load configurations by using the spark-submit tool
When you submit a task using spark-submit, you can specify a template ID by setting the spark.emr.serverless.templateId parameter in --conf. The system automatically loads the configuration parameters from the specified template and uses them as the default parameters for the Spark application.
Here is an example.
spark-submit \
--class org.apache.spark.examples.SparkPi \
--master yarn \
--conf spark.emr.serverless.templateId=<template_id> \
/path/to/your/spark-job.jar
In the preceding command, <template_id> represents the template ID. You can obtain the template ID on the Task Templates page.
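Template values are loaded as default parameters, so a value passed directly in --conf can still overwrite them at submission time. The following sketch combines a template ID with the timeout override described in the template parameters; <template_id> remains a placeholder, and applying the same override pattern to other parameters is an assumption based on the template values acting as defaults.

```shell
spark-submit \
--class org.apache.spark.examples.SparkPi \
--master yarn \
--conf spark.emr.serverless.templateId=<template_id> \
--conf spark.emr.serverless.jr.timeout=-1 \
/path/to/your/spark-job.jar
```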
Load configurations via Livy Gateway
When you create a Livy Gateway, you can set the Load from template parameter in the upper-right corner of the Create Livy Gateway page to use an existing configuration template. The system automatically loads the configuration parameters from the selected template. This simplifies the configuration process and ensures consistency.
Load configurations via Kyuubi Gateway
When you create a Kyuubi Gateway, you can set the Load from template parameter in the upper-right corner of the Create Kyuubi Gateway page to use an existing configuration template. The system automatically loads the configuration parameters from the selected template. This simplifies the configuration process and ensures consistency.
Restore default configurations
To reset the parameters of a configuration template to the system default values, follow these steps:
1. On the Task Templates page, click Edit in the Actions column of the target template.
2. On the Edit Task Template page, click Reset to system defaults at the bottom, and then click Save Changes.
Change the default template
To set a configuration template as the default, follow these steps:
1. On the Task Templates page, click Set as default in the Actions column of the target template.
2. In the dialog box that appears, click OK.