Version requirements
The Serverless Spark engine version must meet the following requirements:
| Engine series | Minimum version |
|---|---|
| esr-4.x | esr-4.2.0 |
| esr-3.x | esr-3.0.1 |
| esr-2.x | esr-2.4.1 |
Usage notes
Ranger handles authorization (permission management), not identity authentication. To enable user authentication, configure the OpenLDAP service separately. For details, see Configure LDAP authentication for a Spark Thrift Server.
Prerequisites
Before you begin, ensure that you have:
A Spark Thrift Server. For details, see Manage Spark Thrift Servers.
Step 1: Configure network connectivity
Configure network connectivity between E-MapReduce (EMR) Serverless Spark and your virtual private cloud (VPC). This allows the Ranger plugin to reach Ranger Admin and receive authorization grants. For details, see Configure network connectivity between EMR Serverless Spark and a data source across VPCs.
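Before moving on, you may want to confirm that the Ranger Admin endpoint is reachable over the configured connection. The following is a minimal sketch using only the Python standard library; the host address is a hypothetical placeholder, and 6080 is assumed as the Ranger Admin port per the table below.

```python
import socket

def can_connect(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Hypothetical example: check the Ranger Admin policy REST port.
# can_connect("192.168.0.10", 6080)
```

If this returns False, revisit the network connection settings before configuring the plugin.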
Step 2: Configure a Ranger plugin
Stop the Spark Thrift Server before making changes. Then, in the Spark Thrift Server configuration:
1. Select a network connection from the Network Connection drop-down list.
2. Add the configuration items for your Ranger plugin type to the Spark Configuration parameter (see the options below).
3. Restart the Spark Thrift Server to apply the changes.
Option 1: Use the built-in Ranger plugin
This option requires Spark Thrift Server version esr-3.1.0 or later.
Add the following to Spark Configuration:
spark.ranger.plugin.enabled true
spark.jars /opt/ranger/ranger-spark.jar
ranger.plugin.spark.policy.rest.url http://<ranger_admin_ip>:<ranger_admin_port>

Replace the placeholders:
| Placeholder | Description |
|---|---|
| <ranger_admin_ip> | Internal IP address of Ranger Admin. For an EMR on ECS cluster, use the internal IP address of the master node. |
| <ranger_admin_port> | Port number of Ranger Admin. For an EMR on ECS cluster, use 6080. |
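For example, assuming a Ranger Admin at the hypothetical internal address 192.168.0.10 on port 6080, the completed configuration would look like this:

```
spark.ranger.plugin.enabled true
spark.jars /opt/ranger/ranger-spark.jar
ranger.plugin.spark.policy.rest.url http://192.168.0.10:6080
```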
Option 2: Use a custom Ranger plugin
Upload the custom Ranger plugin JAR file to Object Storage Service (OSS), then add the following to Spark Configuration:
spark.jars oss://<bucket>/path/to/user-ranger-spark.jar
spark.ranger.plugin.class <class_name>
spark.ranger.plugin.enabled true
ranger.plugin.spark.policy.rest.url http://<ranger_admin_ip>:<ranger_admin_port>

Replace the placeholders:
| Parameter or placeholder | Description |
|---|---|
| spark.jars | OSS path to the custom JAR file. |
| spark.ranger.plugin.class | Name of the Spark extension class in the custom Ranger plugin. |
| <ranger_admin_ip> | Internal IP address of Ranger Admin. For an EMR on ECS cluster, use the internal IP address of the master node. |
| <ranger_admin_port> | Port number of Ranger Admin. For an EMR on ECS cluster, use 6080. |
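For example, with a hypothetical bucket name, extension class, and Ranger Admin address (all three values below are illustrative; replace them with your own):

```
spark.jars oss://my-bucket/plugins/user-ranger-spark.jar
spark.ranger.plugin.class com.example.MyRangerSparkExtension
spark.ranger.plugin.enabled true
ranger.plugin.spark.policy.rest.url http://192.168.0.10:6080
```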
(Optional) Step 3: Enable Ranger audit logging
By default, Ranger audit logging is disabled for EMR Serverless Spark. Ranger can store audit records in Solr or Hadoop Distributed File System (HDFS).
To enable audit logging with Solr, add the following to Spark Configuration:
xasecure.audit.is.enabled true
xasecure.audit.destination.solr true
xasecure.audit.destination.solr.urls http://<solr_ip>:<solr_port>/solr/ranger_audits
xasecure.audit.destination.solr.user <user>
xasecure.audit.destination.solr.password <password>

| Parameter | Description |
|---|---|
| xasecure.audit.is.enabled | Enables Ranger audit logging. Set to true. |
| xasecure.audit.destination.solr | Sends audit records to Solr. Set to true. |
| xasecure.audit.destination.solr.urls | Solr URL. Replace <solr_ip> and <solr_port> with the IP address and port number of your Solr instance. |
| xasecure.audit.destination.solr.user | Solr username. Required only if basic authentication is enabled for Solr. |
| xasecure.audit.destination.solr.password | Solr password. Required only if basic authentication is enabled for Solr. |
If the Ranger service runs in an EMR on ECS cluster, find the values for xasecure.audit.destination.solr.urls, xasecure.audit.destination.solr.user, and xasecure.audit.destination.solr.password on the ranger-spark-audit.xml tab of the Ranger-plugin service page.
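For example, with a hypothetical Solr instance on the default Solr port 8983 (replace the address and credentials with the values from your own environment):

```
xasecure.audit.is.enabled true
xasecure.audit.destination.solr true
xasecure.audit.destination.solr.urls http://192.168.0.20:8983/solr/ranger_audits
xasecure.audit.destination.solr.user solr_user
xasecure.audit.destination.solr.password ****
```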

After jobs run, view audit records on the Access tab of the Ranger web UI. For instructions on accessing the Ranger web UI, see Access the web UIs of open source components in the EMR console.
Audit records are only visible in the Ranger web UI when Solr is used as the storage backend.

Step 4: Test the connectivity
Use Beeline to confirm that Ranger is enforcing permissions correctly. Connect as a user who lacks permission on a resource, then run a query against it. A correctly configured setup returns an access control error:
0: jdbc:hive2://pre-emr-spark-gateway-cn-hang> create table test(id int);
Error: org.apache.hive.service.cli.HiveSQLException: Error running query: org.apache.kyuubi.plugin.spark.authz.AccessControlException: Permission denied: user [test] does not have [create] privilege on [database=testdb/table=test]
at org.apache.spark.sql.hive.thriftserver.HiveThriftServerErrors$.runningQueryError(HiveThriftServerErrors.scala:44)
at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation.org$apache$spark$sql$hive$thriftserver$SparkExecuteStatementOperation$$execute(SparkExecuteStatementOperation.scala:325)
at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation$$anon$2$$anon$3.$anonfun$run$2(SparkExecuteStatementOperation.scala:230)
at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
at org.apache.spark.sql.hive.thriftserver.SparkOperation.withLocalProperties(SparkOperation.scala:79)
at org.apache.spark.sql.hive.thriftserver.SparkOperation.withLocalProperties$(SparkOperation.scala:63)
at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation.withLocalProperties(SparkExecuteStatementOperation.scala:43)
at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation$$anon$2$$anon$3.run(SparkExecuteStatementOperation.scala:230)
at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation$$anon$2$$anon$3.run(SparkExecuteStatementOperation.scala:225)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation$$anon$2.run(SparkExecuteStatementOperation.scala:239)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:750)

Keep the following in mind when selecting test users:
By default, all users can switch databases and create databases, and resource owners have full permissions on their own databases and tables. To get a meaningful test, verify that user B can be denied access to resources created by user A — not user A's own resources.
If Ranger Admin is incorrectly configured, queries succeed and no error is reported, but Ranger authorization is not actually enforced.
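A concrete sketch of this pattern, with hypothetical database, table, and user names: user_a creates a table, and user_b, who has no Ranger policy granting access to it, is expected to be denied.

```sql
-- Run as user_a (the owner; succeeds by default):
CREATE TABLE testdb.owned_by_a (id INT);

-- Run as user_b (no Ranger policy grants access; if enforcement is
-- working, this fails with an AccessControlException like the one above):
SELECT * FROM testdb.owned_by_a;
```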