SQL sessions are Spark sessions that run inside an EMR Serverless Spark workspace. Create a session before you submit SQL jobs: the session provides the Spark runtime that executes your queries.
Prerequisites
Before you begin, ensure that you have:
An EMR Serverless Spark workspace
At least one resource queue configured in the development environment, or in both the development and production environments
Create an SQL session
Log on to the EMR console.
In the left-side navigation pane, choose EMR Serverless > Spark.
On the Spark page, click the workspace name.
In the left-side navigation pane of the EMR Serverless Spark page, choose Operation Center > Sessions.
On the SQL Sessions tab, click Create SQL Session.
On the Create SQL Session page, configure the parameters and click Create.
Important: Set the Maximum Concurrency of the resource queue to a value greater than or equal to the number of CUs required by the SQL session. You can check the current value in the EMR console.
Name: The name of the SQL session. The name must be 1 to 64 characters in length and can contain letters, digits, hyphens (-), underscores (_), and spaces.
Resource Queue: The resource queue in which the session runs. Select a queue from the drop-down list. Only queues that are available in the development environment, or in both the development and production environments, are listed. For details, see Manage resource queues.
Engine version: The Spark engine version used by the session. For details, see Engine versions.
Use Fusion Acceleration (Optional): Enables the Fusion engine to accelerate Spark workloads and lower job costs. For billing details, see Billing. For details, see Fusion engine.
Auto Stop: Stops the session automatically after it becomes inactive. Enabled by default. Sessions that are not stopped continue to consume compute units (CUs). Set the idle timeout based on your usage pattern.
Network Connection (Optional): The network connection for accessing data sources or external services in a virtual private cloud (VPC). For details, see Configure network connectivity between EMR Serverless Spark and a data source across VPCs.
spark.driver.cores: The number of CPU cores allocated to the Spark driver. Default: 1.
spark.driver.memory: The memory allocated to the Spark driver. Default: 3.5 GB.
spark.executor.cores: The number of CPU cores per executor. Default: 1.
spark.executor.memory: The memory per executor. Default: 3.5 GB.
spark.executor.instances: The number of executors allocated to the Spark application. Default: 2.
Dynamic Allocation (Optional): Disabled by default. When enabled, the session scales executors dynamically. Configure the following sub-parameters: Minimum Number of Executors (default: 2) and Maximum Number of Executors (default: 10). These settings take effect when spark.executor.instances is not set.
More Memory Configurations (Optional): Advanced memory settings:
• spark.driver.memoryOverhead: Non-heap memory for the driver. If left empty, Spark uses max(384 MB, 10% × spark.driver.memory).
• spark.executor.memoryOverhead: Non-heap memory per executor. If left empty, Spark uses max(384 MB, 10% × spark.executor.memory).
• spark.memory.offHeap.size: Off-heap memory for the Spark application. Default: 1 GB. Takes effect only when spark.memory.offHeap.enabled is set to true. When you use the Fusion engine, spark.memory.offHeap.enabled is set to true and spark.memory.offHeap.size is set to 1 GB by default.
Spark Configurations (Optional): Additional Spark configuration key-value pairs, separated by spaces. Example: spark.sql.catalog.paimon.metastore dlf
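The overhead defaults above follow the rule stated in the parameter descriptions: the larger of 384 MB and 10% of the corresponding driver or executor memory. A minimal sketch of that calculation (the helper name is illustrative, not part of the product):

```python
def default_memory_overhead_mb(heap_mb: float) -> float:
    # Default non-heap overhead when the field is left empty:
    # max(384 MB, 10% of the driver/executor memory).
    return max(384.0, 0.10 * heap_mb)

# With the session default of 3.5 GB (3584 MB) per driver/executor,
# 10% is 358.4 MB, so the 384 MB floor applies.
print(default_memory_overhead_mb(3584))  # 384.0
```

This shows why, with the default 3.5 GB memory settings, both overhead fields resolve to 384 MB unless you set them explicitly.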
After you click Create, the session status changes from Starting to Running. You can then select this session when creating an SQL job.
After the session is running, you can stop, modify, or delete it as needed.
View jobs run by a session
On the Sessions page, click the session name.
Click the Execution Records tab. The tab shows each job's run ID and start time. Click the link in the Spark UI column to open the Spark UI for that job.

What's next
Manage resource queues — manage the resource queues used by sessions
Manage users and roles — configure roles and permissions for sessions
Get started with SQL jobs — develop and run SQL jobs using a session