Apache Phoenix is a SQL layer built on HBase that enables you to use standard SQL to query and manage data.
Prerequisites
Before you begin, ensure that you have:
- A DataServing or custom cluster with the Phoenix and HBase services selected at creation. See Create a cluster.
Use Phoenix from the command line
- Log on to the cluster using SSH. See Log on to a cluster.
- Open the Phoenix CLI:

  ```shell
  /opt/apps/PHOENIX/phoenix-current/bin/sqlline.py
  ```

- Run SQL statements to manage data:

  - Create a table:

    ```sql
    CREATE TABLE IF NOT EXISTS example (
        my_pk bigint not null,
        m.first_name varchar(50),
        m.last_name varchar(50)
        CONSTRAINT pk PRIMARY KEY (my_pk)
    );
    ```

  - Insert data:

    ```sql
    UPSERT INTO example(my_pk, m.first_name, m.last_name) VALUES (100, 'Jack', 'Ben');
    UPSERT INTO example(my_pk, m.first_name, m.last_name) VALUES (200, 'Jack3', 'Ben3');
    ```

  - Query data:

    ```sql
    SELECT * FROM example;
    ```

    Expected output:

    ```
    +--------+-------------+------------+
    | MY_PK  | FIRST_NAME  | LAST_NAME  |
    +--------+-------------+------------+
    | 100    | Jack        | Ben        |
    | 200    | Jack3       | Ben3       |
    +--------+-------------+------------+
    ```

  - Drop the table:

    ```sql
    DROP TABLE IF EXISTS example;
    ```
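Phoenix's UPSERT is insert-or-update: writing a row whose primary key already exists overwrites that row instead of creating a duplicate. The following is a rough sketch of that semantics in plain Java (no Phoenix involved; the class name `UpsertSemantics` is illustrative only), modeling a table as a map keyed on the primary key:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class UpsertSemantics {
    public static void main(String[] args) {
        // A toy model of a Phoenix table: primary key -> remaining column values.
        Map<Long, String[]> table = new LinkedHashMap<>();

        // UPSERT INTO example ... VALUES (100, 'Jack', 'Ben');
        table.put(100L, new String[] {"Jack", "Ben"});
        // UPSERT INTO example ... VALUES (200, 'Jack3', 'Ben3');
        table.put(200L, new String[] {"Jack3", "Ben3"});
        // Upserting key 100 again updates the existing row instead of adding a new one.
        table.put(100L, new String[] {"Jill", "Ben"});

        System.out.println(table.size());        // 2 rows, not 3
        System.out.println(table.get(100L)[0]);  // Jill
    }
}
```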
Connect to Phoenix using JDBC
Add the Maven dependency
```xml
<dependency>
    <groupId>org.apache.phoenix</groupId>
    <artifactId>phoenix-core</artifactId>
    <version>${phoenix.version}</version>
</dependency>
```
Set ${phoenix.version} to match the Phoenix version on your cluster.
Sample code
```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class TestPhoenixJdbc {
    public static void main(String[] args) throws ClassNotFoundException, SQLException {
        // Load the Phoenix JDBC driver.
        Class.forName("org.apache.phoenix.jdbc.PhoenixDriver");
        // Replace [zookeeper quorum hosts] with your ZooKeeper quorum, for example zk1,zk2,zk3.
        try (Connection con = DriverManager.getConnection("jdbc:phoenix:[zookeeper quorum hosts]");
             Statement stmt = con.createStatement()) {
            stmt.executeUpdate("create table test (mykey integer not null primary key, mycolumn varchar)");
            stmt.executeUpdate("upsert into test values (1,'Hello')");
            stmt.executeUpdate("upsert into test values (2,'World!')");
            con.commit();
            try (PreparedStatement statement = con.prepareStatement("select * from test");
                 ResultSet rset = statement.executeQuery()) {
                while (rset.next()) {
                    System.out.println(rset.getString("mycolumn"));
                }
            }
        }
    }
}
```
Connect to a Kerberos-enabled Phoenix cluster using JDBC
This section describes how to write a Java Database Connectivity (JDBC) client that connects to Phoenix on a Kerberos-enabled EMR cluster. Authentication is handled through a JDBC URL that embeds the principal and keytab path. The client then runs basic Data Definition Language (DDL) and Data Manipulation Language (DML) operations to verify the connection.
JDBC URL format
The Phoenix JDBC URL follows this structure:
```
jdbc:phoenix:[ZK address]:[ZK port]:[HBase ZNode]:[principal]:[absolute keytab path]
```
| Field | Description | Example |
|---|---|---|
| ZK address | ZooKeeper quorum hosts (comma-separated for multiple nodes) | master-1-1 or zk1,zk2,zk3 |
| ZK port | ZooKeeper client port | 2181 |
| HBase ZNode | HBase root ZNode path | /hbase or /hbase-secure |
| principal | Kerberos principal | phoenix_client@EMR.C-4FC5FDDE3759****.COM |
| absolute keytab path | Absolute path to the keytab file | /tmp/phoenix_client.keytab |
Example:

```
jdbc:phoenix:master-1-1:2181:/hbase:phoenix_client@EMR.C-4FC5FDDE3759****.COM:/tmp/phoenix_client.keytab
```
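The five URL fields can also be assembled in code rather than concatenated by hand, which avoids misplacing a colon. A minimal sketch using the example values from the table above (`PhoenixJdbcUrl` and `build` are illustrative names, not part of the Phoenix API):

```java
public class PhoenixJdbcUrl {
    /** Builds a Kerberos-enabled Phoenix JDBC URL from its five components. */
    static String build(String zkQuorum, int zkPort, String znode,
                        String principal, String keytabPath) {
        return String.format("jdbc:phoenix:%s:%d:%s:%s:%s",
                zkQuorum, zkPort, znode, principal, keytabPath);
    }

    public static void main(String[] args) {
        String url = build("master-1-1", 2181, "/hbase",
                "phoenix_client@EMR.C-4FC5FDDE3759****.COM",
                "/tmp/phoenix_client.keytab");
        System.out.println(url);
        // jdbc:phoenix:master-1-1:2181:/hbase:phoenix_client@EMR.C-4FC5FDDE3759****.COM:/tmp/phoenix_client.keytab
    }
}
```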
Step 1: Prepare the environment and credentials
Perform all steps on the master node of the cluster.
- Connect to the master node using SSH. See Log on to a cluster.
- Find the Kerberos realm. Each Kerberos-enabled cluster has a unique realm.

  ```shell
  cat /etc/krb5.conf | grep default_realm
  ```

  The output looks like:

  ```
  default_realm = EMR.C-4FC5FDDE3759****.COM
  ```

  Record the realm value for later use.
- Create a client principal.

  - Open kadmin.local:

    ```shell
    sudo kadmin.local
    ```

  - Create the principal:

    ```
    addprinc phoenix_client@EMR.C-4FC5FDDE3759****.COM
    ```

    Set a password when prompted. Even though the keytab enables password-free logon, the password may still be required in some cases.
- Export the keytab file.

  - Export the keytab:

    ```
    xst -k /tmp/phoenix_client.keytab phoenix_client@EMR.C-4FC5FDDE3759****.COM
    ```

  - Exit kadmin.local:

    ```
    exit
    ```

Important:

- Permissions: The user running the Java program must have read permissions on the keytab file.
- Distribution: If the Java program runs outside the master node, copy both phoenix_client.keytab and /etc/krb5.conf to the machine where the program runs. Place the files in a path accessible to the script.
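Because a missing or unreadable keytab is one of the most common causes of connection failures, a client can verify the file before opening a JDBC connection. A minimal pre-flight sketch (`KeytabCheck` is an illustrative helper, not part of Phoenix; the default path is the example keytab location from this guide):

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class KeytabCheck {
    /** Returns true if the keytab exists, is a regular file, and is readable by this process. */
    static boolean keytabUsable(Path keytab) {
        return Files.isRegularFile(keytab) && Files.isReadable(keytab);
    }

    public static void main(String[] args) {
        Path keytab = Paths.get(args.length > 0 ? args[0] : "/tmp/phoenix_client.keytab");
        if (!keytabUsable(keytab)) {
            System.err.println("Keytab missing or unreadable: " + keytab);
            System.exit(1);
        }
        System.out.println("Keytab OK: " + keytab);
    }
}
```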
Step 2: Write and package the Java application
Choose one of the following methods:

- Method 1: Use a precompiled JAR file (for quick verification). Download hbase-phoenix-kerberos-1.0-SNAPSHOT.jar.
- Method 2: Compile and package manually (recommended for production). Use the following code and Maven configuration to build the application.

Core code (PhoenixKerberosDemo.java)
```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

/**
 * A client that connects to a Kerberos-secured Phoenix cluster using JDBC.
 * All connection parameters are provided through a complete JDBC URL passed from the command line.
 */
public class PhoenixKerberosDemo {

    /**
     * The main entry point for the application.
     *
     * @param args The command-line arguments. The program expects one argument: the full Phoenix JDBC URL.
     */
    public static void main(String[] args) {
        // --- 1. Validate command-line input: Expect one argument, the JDBC URL ---
        if (args.length != 1) {
            System.err.println("ERROR: Invalid number of arguments.");
            System.err.println("Usage: java PhoenixKerberosDemo \"<full_jdbc_url>\"");
            System.err.println("Example: \"jdbc:phoenix:zk1,zk2:2181:/hbase:user@REALM.COM:/path/to/user.keytab\"");
            System.exit(1); // Exit with an error code
        }
        String jdbcUrl = args[0];
        System.out.println("Attempting to connect to Phoenix...");
        System.out.println("Using JDBC URL: " + jdbcUrl);

        try {
            // --- 2. Load the Phoenix driver ---
            Class.forName("org.apache.phoenix.jdbc.PhoenixDriver");
        } catch (ClassNotFoundException e) {
            System.err.println("FATAL ERROR: Phoenix JDBC driver not found in the classpath.");
            e.printStackTrace();
            System.exit(1);
        }

        // --- 3. Use a try-with-resources statement to establish a connection and execute SQL.
        //        This syntax automatically closes resources. ---
        try (Connection con = DriverManager.getConnection(jdbcUrl);
             Statement stmt = con.createStatement()) {
            System.out.println("Connection established successfully.");

            final String tableName = "TEST";
            System.out.println("Creating table '" + tableName + "'...");
            stmt.executeUpdate("CREATE TABLE IF NOT EXISTS " + tableName
                    + " (mykey INTEGER NOT NULL PRIMARY KEY, mycolumn VARCHAR)");
            con.commit();

            System.out.println("Upserting data...");
            stmt.executeUpdate("UPSERT INTO " + tableName + " VALUES (1, 'Hello')");
            stmt.executeUpdate("UPSERT INTO " + tableName + " VALUES (2, 'World-Kerberos!')");
            con.commit();
            System.out.println("Data upserted successfully.");

            String sql = "SELECT * FROM " + tableName;
            System.out.println("Querying for results with: " + sql);
            try (PreparedStatement statement = con.prepareStatement(sql);
                 ResultSet rset = statement.executeQuery()) {
                System.out.println("Query results:");
                while (rset.next()) {
                    System.out.println(rset.getInt("mykey") + " -> " + rset.getString("mycolumn"));
                }
            }

            System.out.println("Cleaning up the test table...");
            stmt.executeUpdate("DROP TABLE IF EXISTS " + tableName);
            con.commit();
        } catch (SQLException e) {
            // Catch SQL exceptions and provide helpful troubleshooting tips
            System.err.println("\n--- FAILED TO EXECUTE DATABASE OPERATION ---");
            System.err.println("Please check the following:");
            System.err.println("1. The JDBC URL is correct (format, principal, keytab path).");
            System.err.println("2. Network connectivity to ZooKeeper and HBase.");
            System.err.println("3. The keytab file exists and has correct read permissions.");
            System.err.println("4. The principal has sufficient permissions on HBase tables and namespaces.");
            e.printStackTrace();
        }
        System.out.println("\nExecution finished.");
    }
}
```

Maven configuration (pom.xml)

```xml
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <groupId>com.aliyun.emr.doctor</groupId>
    <artifactId>hbase-phoenix-kerberos</artifactId>
    <version>1.0-SNAPSHOT</version>
    <name>Archetype - hbase-phoenix-kerberos</name>
    <url>http://maven.apache.org</url>
    <properties>
        <phoenix.version>5.2.1</phoenix.version>
    </properties>
    <dependencies>
        <dependency>
            <groupId>org.apache.phoenix</groupId>
            <artifactId>phoenix-core</artifactId>
            <version>${phoenix.version}</version>
            <scope>provided</scope>
        </dependency>
    </dependencies>
    <build>
        <plugins>
            <!-- Java Compiler -->
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-compiler-plugin</artifactId>
                <version>3.1</version>
                <configuration>
                    <source>1.8</source>
                    <target>1.8</target>
                </configuration>
            </plugin>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-shade-plugin</artifactId>
                <version>3.2.4</version>
                <executions>
                    <execution>
                        <phase>package</phase>
                        <goals>
                            <goal>shade</goal>
                        </goals>
                        <configuration>
                            <filters>
                                <filter>
                                    <artifact>*:*</artifact>
                                    <excludes>
                                        <exclude>META-INF/*.SF</exclude>
                                        <exclude>META-INF/*.DSA</exclude>
                                        <exclude>META-INF/*.RSA</exclude>
                                    </excludes>
                                </filter>
                            </filters>
                        </configuration>
                    </execution>
                </executions>
            </plugin>
        </plugins>
    </build>
</project>
```
Step 3: Run the application
- Choose a runtime environment:

  | Environment | Requirements | Best for |
  |---|---|---|
  | Cluster node (recommended) | No extra setup: Hadoop, HBase, and Phoenix libraries are pre-installed and network connectivity is pre-configured | Quick verification, testing, development |
  | Outside the cluster | Network access to ZooKeeper, HBase Master, and RegionServer nodes; krb5.conf and keytab copied to the machine; classpath includes all required Hadoop, HBase, and Phoenix client JARs | Custom environments |
- Upload the JAR file from Step 2 and the kerberos-phoenix.sh script to a directory on the master node.
- Grant execute permissions to the script:

  ```shell
  chmod +x kerberos-phoenix.sh
  ```

- Edit kerberos-phoenix.sh to set your cluster values in the User Configuration section, then run it:

  ```shell
  ./kerberos-phoenix.sh
  ```

  The script sets up the classpath and passes the JDBC URL to PhoenixKerberosDemo. All connection parameters are defined at the top of the script for easy modification.

  ```shell
  #!/bin/bash
  # ======================= 1. User Configuration (Modify based on your environment) =======================
  # Directory where Hadoop and HBase configuration files are located
  HADOOP_CONF_DIR="/etc/taihao-apps/hadoop-conf"
  HBASE_CONF_DIR="/etc/taihao-apps/hbase-conf"

  # Path to the Phoenix client JAR file. Using a symbolic link is a best practice to handle version changes.
  # First, use 'ls -l /opt/apps/PHOENIX/phoenix-current/' to confirm this file exists.
  # This path may need to be modified for different versions.
  PHOENIX_JAR="/opt/apps/PHOENIX/phoenix-current/phoenix-client-lite-hbase-2.6.jar"

  # Your application's JAR file name.
  YOUR_JAR_FILE="hbase-phoenix-kerberos-1.0-SNAPSHOT.jar"

  # Path to the Kerberos configuration file.
  KRB5_CONF_PATH="/etc/krb5.conf"

  # --- [Core] JDBC URL Configuration ---
  # Format: jdbc:phoenix:[ZK Address]:[ZK Port]:[HBase ZNode]:[Principal]:[Absolute Keytab Path]
  # Replace the ZK address, REALM, and Keytab path below with your actual information.
  ZK_QUORUM="master-1-1"    # If there are multiple ZooKeeper nodes, separate them with commas, such as "zk1,zk2,zk3"
  ZK_PORT="2181"
  HBASE_ZNODE="/hbase"      # For a secure cluster, it might be /hbase-secure
  PRINCIPAL="phoenix_client@EMR.C-4FC5FDDE3759****.COM"  # Replace with your Principal
  KEYTAB_PATH="/tmp/phoenix_client.keytab"               # Absolute path to the Keytab file

  JDBC_URL="jdbc:phoenix:${ZK_QUORUM}:${ZK_PORT}:${HBASE_ZNODE}:${PRINCIPAL}:${KEYTAB_PATH}"
  # =================================================================================

  # ======================= 2. Execution Area (Usually no modification needed) ======================
  echo "================================================="
  echo "Starting Phoenix Kerberos JDBC Demo..."
  echo "Using JDBC URL: ${JDBC_URL}"
  echo "================================================="

  # Build the Classpath.
  # Order: current directory -> config directories -> your JAR -> dependency JARs
  # `hbase classpath` automatically loads core Hadoop/HBase dependencies
  CLASS_PATH=".:${HADOOP_CONF_DIR}:${HBASE_CONF_DIR}:${YOUR_JAR_FILE}:${PHOENIX_JAR}:$(hbase classpath)"

  # Execute the Java program
  java -cp "${CLASS_PATH}" \
      -Djava.security.krb5.conf="${KRB5_CONF_PATH}" \
      PhoenixKerberosDemo "${JDBC_URL}"

  # Check the exit code
  if [ $? -eq 0 ]; then
      echo -e "\n[SUCCESS] Program finished successfully."
  else
      echo -e "\n[FAILED] Program terminated with an error."
  fi
  # =================================================================================
  ```

A successful run produces output similar to the following (abridged; these lines are printed by the script and by PhoenixKerberosDemo):

```
Connection established successfully.
Creating table 'TEST'...
Upserting data...
Data upserted successfully.
Querying for results with: SELECT * FROM TEST
Query results:
1 -> Hello
2 -> World-Kerberos!
Cleaning up the test table...

Execution finished.

[SUCCESS] Program finished successfully.
```

What's next
For more information about Phoenix, see the Apache Phoenix documentation.