
Hologres: JDBC

Last Updated: Mar 26, 2026

Hologres is compatible with the PostgreSQL wire protocol, so any tool or application that uses the PostgreSQL JDBC Driver can connect to it. This guide walks you through setting up a JDBC connection, writing and querying data, and tuning performance.

Prerequisites

Before you begin, ensure that you have:

  • A Hologres instance with a database created

  • Your instance endpoint, port, and database name (available on the Instance Details page in the Hologres console, under Network Information)

  • Your AccessKey ID and AccessKey secret

Usage notes

  • Use PostgreSQL JDBC Driver 42.3.2 or later to write data through a JDBC connection.

  • For write performance tests, use a Virtual Private Cloud (VPC) network. The public network cannot meet performance testing benchmarks.

  • Hologres does not support multiple write statements in a single transaction. Keep autoCommit set to true (the JDBC default) and do not call commit explicitly in your code. If you see the error ERROR: INSERT in transaction is not supported now, set autoCommit explicitly:

    Connection conn = DriverManager.getConnection(url, user, password);
    conn.setAutoCommit(true);

Connect to Hologres using JDBC

Step 1: Add the driver dependency

Most SQL client tools include a built-in PostgreSQL driver — use it if available. For Java applications, add the PostgreSQL JDBC Driver to your Maven project. Download it from jdbc.postgresql.org/download and use version 42.3.2 or later (the latest stable version is recommended).

Hologres uses the standard PostgreSQL JDBC Driver — there is no separate Hologres-specific driver to install.

Add the following dependency to your pom.xml:

<dependencies>
    <dependency>
        <groupId>org.postgresql</groupId>
        <artifactId>postgresql</artifactId>
        <version>42.3.2</version>
    </dependency>
</dependencies>

Step 2: Build the connection string

The connection string format is:

jdbc:postgresql://<ENDPOINT>:<PORT>/<DBNAME>?user=<ACCESS_ID>&password=<ACCESS_KEY>

Required parameters

  • <ENDPOINT>: The network endpoint of the Hologres instance. Find it on the Instance Details page in the Hologres console, under Network Information. Select the endpoint that matches the network environment where your code runs; mismatched network types cause connection failures.

  • <PORT>: The port of the Hologres instance, listed in the same Network Information section.

  • <DBNAME>: The name of the database in Hologres.

  • <ACCESS_ID>: Your AccessKey ID. Store it in an environment variable instead of hardcoding it.

  • <ACCESS_KEY>: Your AccessKey secret. Store it in an environment variable instead of hardcoding it.

Optional parameters

Append these to the connection string as needed. Separate multiple parameters with &.

  • ApplicationName=<name>: Tags connections with your application name, making it easier to identify them in the slow query checklist.

  • reWriteBatchedInserts=true: Rewrites batch inserts into a single multi-value INSERT statement for higher write throughput.

  • currentSchema=<schema>: Sets the default schema. Required when querying foreign tables after enabling automatic foreign table loading from MaxCompute; MaxCompute project names are mapped to schemas with the same names.

Example connection string with recommended options:

jdbc:postgresql://<ENDPOINT>:<PORT>/<DBNAME>?user=<ACCESS_ID>&password=<ACCESS_KEY>&ApplicationName=myApp&reWriteBatchedInserts=true

Step 3: Store credentials in environment variables

Hardcoding credentials in connection strings creates a security risk. Store them as environment variables instead. On Linux, add the following to your ~/.bash_profile:

export ALIBABA_CLOUD_USER=<ACCESS_ID>
export ALIBABA_CLOUD_PASSWORD=<ACCESS_KEY>
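
The variables above can then be read with System.getenv at runtime and assembled into the connection string. A minimal sketch, assuming a helper class of your own (HoloUrl and buildJdbcUrl are illustrative names, not part of the Hologres or PostgreSQL driver API; <ENDPOINT> and <DBNAME> remain placeholders you must fill in):

```java
// Sketch: assemble the JDBC URL from environment variables.
// Class and method names are illustrative, not part of any driver API.
public class HoloUrl {

    static String buildJdbcUrl(String endpoint, int port, String dbname,
                               String user, String password) {
        return String.format(
            "jdbc:postgresql://%s:%d/%s?user=%s&password=%s",
            endpoint, port, dbname, user, password);
    }

    public static void main(String[] args) {
        // Credentials come from the environment, never from source code.
        String user = System.getenv("ALIBABA_CLOUD_USER");
        String password = System.getenv("ALIBABA_CLOUD_PASSWORD");
        String url = buildJdbcUrl("<ENDPOINT>", 80, "<DBNAME>", user, password);
        System.out.println(url);
    }
}
```

If the AccessKey secret contains URL-reserved characters such as & or =, avoid embedding it in the URL; pass the credentials through a java.util.Properties object instead, as the GUC example later in this guide does.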

Step 4: Connect and run a query

The following example reads credentials from environment variables, connects to Hologres, and runs a basic SELECT query using a standard Statement:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class HologresTest {

    private void jdbcExample() throws SQLException {
        String user = System.getenv("ALIBABA_CLOUD_USER");
        String password = System.getenv("ALIBABA_CLOUD_PASSWORD");
        String url = String.format(
            "jdbc:postgresql://<ENDPOINT>:<PORT>/<DBNAME>?currentSchema=<SCHEMA_NAME>&user=%s&password=%s",
            user, password
        );
        try (Connection conn = DriverManager.getConnection(url)) {
            try (Statement st = conn.createStatement()) {
                // Replace table and the filter with your own table name and conditions.
                String sql = "SELECT * FROM table where xxx limit 100";
                try (ResultSet rs = st.executeQuery(sql)) {
                    while (rs.next()) {
                        // Read the first column value
                        String c1 = rs.getString(1);
                    }
                }
            }
        }
    }
}

Write and query data

Write data

You can write data using either Statement or PreparedStatement mode in JDBC. Prefer PreparedStatement mode for write operations: the server caches the SQL compilation result, which reduces write latency and increases throughput. Set the batch size to a multiple of 256; the minimum recommended batch size is 256.

Batch insert

/*
 * Write data in batches using PreparedStatement mode.
 * Batch size: 256 rows (minimum recommended).
 */
private static void writeBatchWithPreparedStatement(Connection conn) throws Exception {
    try (PreparedStatement stmt = conn.prepareStatement("insert into test_tb values (?,?,?,?)")) {
        int batchSize = 256;
        // Use HH (24-hour clock), and create the formatter once, outside the loop.
        SimpleDateFormat dateFormat = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss");
        Date parsedDate = dateFormat.parse("1990-11-11 00:00:00");
        java.sql.Timestamp ts = new java.sql.Timestamp(parsedDate.getTime());
        for (int i = 0; i < batchSize; ++i) {
            stmt.setInt(1, 1000 + i);
            stmt.setString(2, "1");
            stmt.setTimestamp(3, ts);
            stmt.setDouble(4, 0.1);
            stmt.addBatch();
        }
        stmt.executeBatch();
    }
}

Upsert with INSERT ON CONFLICT

To update existing rows on conflict, use the PostgreSQL INSERT ON CONFLICT syntax. The destination table must have a primary key.

private static void insertOnConflict(Connection conn) throws Exception {
    try (PreparedStatement stmt = conn.prepareStatement(
        "insert into test_tb values (?,?,?,?), (?,?,?,?), (?,?,?,?), (?,?,?,?), (?,?,?,?), (?,?,?,?) " +
        "on conflict(pk) do update set f1 = excluded.f1, f2 = excluded.f2, f3 = excluded.f3"
    )) {
        int batchSize = 6;
        // Use HH (24-hour clock), and create the formatter once, outside the loop.
        SimpleDateFormat dateFormat = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss");
        Date parsedDate = dateFormat.parse("1990-11-11 00:00:00");
        java.sql.Timestamp ts = new java.sql.Timestamp(parsedDate.getTime());
        for (int i = 0; i < batchSize; ++i) {
            stmt.setInt(i * 4 + 1, i);
            stmt.setString(i * 4 + 2, "1");
            stmt.setTimestamp(i * 4 + 3, ts);
            stmt.setDouble(i * 4 + 4, 0.1);
        }
        int affectedRows = stmt.executeUpdate();
        System.out.println("affected rows => " + affectedRows);
    }
}
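
The six value tuples in the statement above are hardcoded. For other batch sizes, the placeholder list can be generated programmatically; a sketch, assuming a helper of your own (the class and method names are illustrative, not part of any driver API):

```java
// Sketch: generate "(?,?,...), (?,?,...)" for a multi-value INSERT.
public class PlaceholderBuilder {

    static String buildPlaceholders(int rows, int columns) {
        StringBuilder sb = new StringBuilder();
        for (int r = 0; r < rows; r++) {
            if (r > 0) sb.append(", ");        // separate row tuples
            sb.append("(");
            for (int c = 0; c < columns; c++) {
                if (c > 0) sb.append(",");     // separate columns within a tuple
                sb.append("?");
            }
            sb.append(")");
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        // e.g. "insert into test_tb values " + buildPlaceholders(6, 4) + " on conflict ..."
        System.out.println(buildPlaceholders(2, 4));
    }
}
```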

The following example uses PreparedStatement mode for write operations. Each execution inserts 10 rows through a multi-value statement, and the loop runs 10 times, writing 100 rows in total:

private void jdbcPreparedStmtExample() throws SQLException {
    String user = System.getenv("ALIBABA_CLOUD_USER");
    String password = System.getenv("ALIBABA_CLOUD_PASSWORD");
    String url = String.format(
        "jdbc:postgresql://<ENDPOINT>:<PORT>/<DBNAME>?currentSchema=<SCHEMA_NAME>&user=%s&password=%s",
        user, password
    );
    try (Connection conn = DriverManager.getConnection(url)) {
        String sql = "insert into test values" +
                "(?, ?), (?, ?), (?, ?), (?, ?), (?, ?), " +
                "(?, ?), (?, ?), (?, ?), (?, ?), (?, ?)";
        try (PreparedStatement st = conn.prepareStatement(sql)) {
            for (int i = 0; i < 10; ++i) {
                // 10 rows x 2 columns = 20 parameters per multi-value statement
                for (int j = 0; j < 2 * 10; ++j) {
                    st.setString(j + 1, UUID.randomUUID().toString());
                }
                System.out.println("affected row => " + st.executeUpdate());
            }
        }
    }
}

Query data

Use standard SQL SELECT statements to query data from Hologres tables. The basic SELECT example in Step 4 demonstrates this pattern.

Configure a Druid connection pool

Use Druid 1.1.12 or later to connect to Hologres.

Usage notes:

  • Set keepAlive=true to reuse connections and avoid connection churn.

  • Druid versions 1.2.12 through 1.2.21 have a known issue where connectTimeout and socketTimeout default to 10 seconds when not specified. Upgrade to a newer version if you encounter this.

  • Set initialSize, minIdle, and maxActive based on your instance size and workload.

<bean id="dataSource" class="com.alibaba.druid.pool.DruidDataSource"
      init-method="init" destroy-method="close">
  <!-- Endpoint URL from the instance configuration page in the console -->
  <property name="url" value="${jdbc_url}" />
  <!-- AccessKey ID of the user account -->
  <property name="username" value="${jdbc_user}" />
  <!-- AccessKey secret of the user account -->
  <property name="password" value="${jdbc_password}" />

  <!-- Pool sizing: adjust based on instance size and workload -->
  <property name="initialSize" value="5" />
  <property name="minIdle" value="10" />
  <property name="maxActive" value="20" />

  <!-- Wait up to 60 seconds for a connection from the pool -->
  <property name="maxWait" value="60000" />
  <!-- Check for idle connections every 2 seconds -->
  <property name="timeBetweenEvictionRunsMillis" value="2000" />
  <!-- Evict connections idle for more than 10 minutes -->
  <property name="minEvictableIdleTimeMillis" value="600000" />
  <property name="maxEvictableIdleTimeMillis" value="900000" />

  <property name="validationQuery" value="select 1" />
  <property name="testWhileIdle" value="true" />
  <!-- Do not test connections on borrow/return (reduces overhead) -->
  <property name="testOnBorrow" value="false" />
  <property name="testOnReturn" value="false" />

  <property name="keepAlive" value="true" />
  <property name="phyMaxUseCount" value="100000" />
  <property name="filters" value="stat" />
</bean>

Set GUC parameters

Grand Unified Configuration (GUC) parameters control session-level behavior such as timeouts. Set them at connection time using PGProperty.OPTIONS.

The following example sets statement_timeout and idle_in_transaction_session_timeout to 12,345 milliseconds:

import org.postgresql.PGProperty;
import java.sql.*;
import java.util.HashMap;
import java.util.Map;
import java.util.Properties;

public class GucDemo {
    public static void main(String[] args) {
        String hostname = "hgpostcn-cn-xxxx-cn-hangzhou.hologres.aliyuncs.com";
        String port = "80";
        String dbname = "demo";
        String jdbcUrl = "jdbc:postgresql://" + hostname + ":" + port + "/" + dbname;

        Properties properties = new Properties();
        properties.setProperty("user", "xxxxx");
        properties.setProperty("password", "xxxx");

        // Set session-level GUC parameters
        PGProperty.OPTIONS.set(properties,
            "--statement_timeout=12345 --idle_in_transaction_session_timeout=12345");

        try {
            Class.forName("org.postgresql.Driver");
            // try-with-resources closes the connection, statement, and result set.
            try (Connection connection = DriverManager.getConnection(jdbcUrl, properties);
                 PreparedStatement preparedStatement =
                     connection.prepareStatement("show statement_timeout");
                 ResultSet resultSet = preparedStatement.executeQuery()) {
                while (resultSet.next()) {
                    ResultSetMetaData rsmd = resultSet.getMetaData();
                    int columnCount = rsmd.getColumnCount();
                    Map<String, Object> map = new HashMap<>();
                    for (int i = 0; i < columnCount; i++) {
                        map.put(rsmd.getColumnName(i + 1).toLowerCase(), resultSet.getObject(i + 1));
                    }
                    System.out.println(map);
                }
            }
        } catch (Exception exception) {
            exception.printStackTrace();
        }
    }
}

For the full list of available GUC parameters, see GUC parameters.
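
Multiple GUC parameters are passed in a single space-separated string, each prefixed with --. Composing that string by hand is error-prone for more than a couple of parameters; a sketch of building it from a map (GucOptions and buildOptions are illustrative names, not part of the driver API):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch: compose the PGProperty.OPTIONS value from a parameter map.
public class GucOptions {

    static String buildOptions(Map<String, String> gucs) {
        StringBuilder sb = new StringBuilder();
        for (Map.Entry<String, String> e : gucs.entrySet()) {
            if (sb.length() > 0) sb.append(" ");   // space-separated entries
            sb.append("--").append(e.getKey()).append("=").append(e.getValue());
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        // LinkedHashMap preserves insertion order in the generated string.
        Map<String, String> gucs = new LinkedHashMap<>();
        gucs.put("statement_timeout", "12345");
        gucs.put("idle_in_transaction_session_timeout", "12345");
        System.out.println(buildOptions(gucs));
    }
}
```

The resulting string is what the previous example passes to PGProperty.OPTIONS.set before opening the connection.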

Performance tuning

Apply the following practices to maximize write throughput:

  • Use a VPC network. The public network introduces latency that prevents achieving optimal write performance.

  • Enable batch rewriting. Add reWriteBatchedInserts=true to the connection string. This rewrites individual inserts into a single multi-value statement, significantly increasing throughput.

  • Use Prepared Statement mode. The server caches SQL compilation results, reducing per-row latency.

  • Set batch size to a multiple of 256. The recommended minimum batch size is 256. Larger multiples yield further throughput gains. For automated batching, use Holo Client.
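
The batch-size guidance above reduces to simple arithmetic: round the requested size up to the next multiple of 256. A trivial sketch (the helper is illustrative, not part of any driver API):

```java
// Sketch: round a requested batch size up to the nearest multiple of 256.
public class BatchSize {

    static int roundUpTo256(int requested) {
        if (requested <= 0) return 256;          // enforce the recommended minimum
        return ((requested + 255) / 256) * 256;  // next multiple of 256
    }

    public static void main(String[] args) {
        System.out.println(roundUpTo256(300)); // 512
    }
}
```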

Example connection string with all performance options enabled:

jdbc:postgresql://<ENDPOINT>:<PORT>/<DBNAME>?ApplicationName=<APPLICATION_NAME>&reWriteBatchedInserts=true

Load balancing

Starting from Hologres V1.3, you can configure multiple read-only secondary instances in JDBC to distribute read workloads. For setup instructions, see JDBC-based load balancing.