
E-MapReduce: Manage SQL sessions

Last Updated: Mar 25, 2026

SQL sessions are Spark sessions that run inside an EMR Serverless Spark workspace. Create a session before you submit SQL jobs: the session provides the Spark runtime environment that executes your queries and supports interactive analysis of your data.

Prerequisites

Before you begin, ensure that you have:

  • An EMR Serverless Spark workspace

  • At least one resource queue configured in the development environment or in both the development and production environments

Create an SQL session

  1. Log on to the EMR console.

  2. In the left-side navigation pane, choose EMR Serverless > Spark.

  3. On the Spark page, click the workspace name.

  4. In the left-side navigation pane of the EMR Serverless Spark page, choose Operation Center > Sessions.

  5. On the SQL Sessions tab, click Create SQL Session.

  6. On the Create SQL Session page, configure the parameters and click Create.

    Important

Set the Maximum Concurrency of the resource queue to a value greater than or equal to the number of compute units (CUs) that the SQL session requires. You can check the current value in the EMR console.

    Parameter descriptions:

      • Name: A name for the SQL session. The name must be 1 to 64 characters in length and can contain letters, digits, hyphens (-), underscores (_), and spaces.

      • Resource Queue: The resource queue in which the session runs. Select a queue from the drop-down list. Only queues available in the development environment, or in both the development and production environments, are listed. For details, see Manage resource queues.

      • Engine Version: The Spark engine version used by the session. For details, see Engine versions.

      • Use Fusion Acceleration: (Optional) Enables the Fusion engine to accelerate Spark workloads and lower job costs. For billing details, see Billing. For details, see Fusion engine.

      • Auto Stop: Stops the session automatically after it becomes inactive. Enabled by default. Sessions that are not stopped continue to consume compute units (CUs). Set the idle timeout based on your usage pattern.

      • Network Connection: (Optional) The network connection used to access data sources or external services in a virtual private cloud (VPC). For details, see Configure network connectivity between EMR Serverless Spark and a data source across VPCs.

      • spark.driver.cores: The number of CPU cores allocated to the Spark driver. Default: 1.

      • spark.driver.memory: The memory allocated to the Spark driver. Default: 3.5 GB.

      • spark.executor.cores: The number of CPU cores per executor. Default: 1.

      • spark.executor.memory: The memory per executor. Default: 3.5 GB.

      • spark.executor.instances: The number of executors allocated to the Spark application. Default: 2.

      • Dynamic Allocation: (Optional) Disabled by default. When enabled, the session scales executors dynamically. Configure the following sub-parameters: Minimum Number of Executors (default: 2) and Maximum Number of Executors (default: 10, applied when spark.executor.instances is not set).

      • More Memory Configurations: (Optional) Advanced memory settings:
          spark.driver.memoryOverhead: Non-heap memory for the driver. If left blank, Spark uses max(384 MB, 10% × spark.driver.memory).
          spark.executor.memoryOverhead: Non-heap memory per executor. If left blank, Spark uses max(384 MB, 10% × spark.executor.memory).
          spark.memory.offHeap.size: Off-heap memory for the Spark application. Default: 1 GB. Valid only when spark.memory.offHeap.enabled is set to true. When you use the Fusion engine, spark.memory.offHeap.enabled is set to true and spark.memory.offHeap.size is set to 1 GB by default.

      • Spark Configurations: (Optional) Additional Spark configuration key-value pairs, separated by spaces. Example: spark.sql.catalog.paimon.metastore dlf
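The overhead rule described under More Memory Configurations can be illustrated with a small sketch. This is only an estimate based on the max(384 MB, 10% × configured memory) default stated above, not an official sizing tool; the helper names are hypothetical.

```python
# Sketch of the documented memory-overhead default: when the overhead
# setting is left blank, Spark uses max(384 MB, 10% of the heap memory).
# The 3.5 GB and 2-executor defaults come from the parameter table above.

def overhead_mb(heap_mb: float) -> float:
    """Default memoryOverhead: max(384 MB, 10% of the heap)."""
    return max(384.0, 0.10 * heap_mb)

def session_memory_mb(driver_mem_mb: float = 3.5 * 1024,
                      executor_mem_mb: float = 3.5 * 1024,
                      executor_instances: int = 2) -> float:
    """Approximate total memory of one SQL session (heap + overhead)."""
    driver_total = driver_mem_mb + overhead_mb(driver_mem_mb)
    executor_total = executor_mem_mb + overhead_mb(executor_mem_mb)
    return driver_total + executor_instances * executor_total

# With the defaults (3.5 GB driver, two 3.5 GB executors), 10% of the
# heap (358.4 MB) is below the 384 MB floor, so each JVM gets 384 MB
# of overhead and the session totals 11904 MB.
print(session_memory_mb())
```

A rough total like this can help when checking the session's footprint against the Maximum Concurrency of the resource queue.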

After you click Create, the session status changes from Starting to Running. You can then select this session when creating an SQL job.
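As an illustration of the space-separated format accepted by the Spark Configurations parameter, a value that combines the Paimon catalog example above with a standard Apache Spark property might look like the following. Whether a given property takes effect depends on the engine version, so treat this as a sketch:

```
spark.sql.catalog.paimon.metastore dlf spark.sql.shuffle.partitions 200
```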

After the session is running, you can stop, modify, or delete it as needed.

View jobs run by a session

  1. On the Sessions page, click the session name.

  2. Click the Execution Records tab. The tab shows each job's run ID and start time. Click the link in the Spark UI column to open the Spark UI for that job.


What's next