
Lindorm:Java High Level REST Client

Last Updated: Mar 28, 2026

The Java High Level REST Client is an Elasticsearch client that provides a higher-level API for index and document management. LindormSearch is compatible with Elasticsearch 7.10 and earlier, so you can use this client to connect to LindormSearch and run complex queries and searches without changing your existing application code.

The Java High Level REST Client is forward compatible: for example, version 6.7.0 of the client can communicate with Elasticsearch clusters running version 6.7.0 or later. Use client version 7.10.0 or earlier to connect to LindormSearch.

Prerequisites

Before you begin, ensure that you have:

  • Activated the search engine (LindormSearch) for your Lindorm instance.

  • Added the IP address of your client to the whitelist of the Lindorm instance.

  • Installed a Java development environment (JDK 1.8 or later) and Maven.

Add dependencies

For a Maven project, add the following dependencies to your pom.xml file:

<dependency>
    <groupId>org.elasticsearch.client</groupId>
    <artifactId>elasticsearch-rest-high-level-client</artifactId>
    <version>7.10.0</version>
</dependency>
<dependency>
    <groupId>org.apache.logging.log4j</groupId>
    <artifactId>log4j-core</artifactId>
    <version>2.20.0</version>
</dependency>
<dependency>
    <groupId>org.apache.logging.log4j</groupId>
    <artifactId>log4j-api</artifactId>
    <version>2.20.0</version>
</dependency>

Connect to LindormSearch

Create a RestHighLevelClient by passing a RestClient.builder() to its constructor. The client authenticates with a BasicCredentialsProvider.

// Set the Elasticsearch-compatible endpoint and port for LindormSearch.
String search_url = "ld-t4n5668xk31ui****-proxy-search-public.lindorm.rds.aliyuncs.com";
int search_port = 30070;

// Set the username and password. Retrieve them from the Lindorm console:
// navigate to Database Connections > Search Engine tab.
String username = "user";
String password = "test";

final CredentialsProvider credentials_provider = new BasicCredentialsProvider();
credentials_provider.setCredentials(AuthScope.ANY, new UsernamePasswordCredentials(username, password));

RestHighLevelClient highClient = new RestHighLevelClient(
    RestClient.builder(new HttpHost(search_url, search_port, "http"))
        .setHttpClientConfigCallback(new RestClientBuilder.HttpClientConfigCallback() {
            public HttpAsyncClientBuilder customizeHttpClient(HttpAsyncClientBuilder httpClientBuilder) {
                return httpClientBuilder.setDefaultCredentialsProvider(credentials_provider);
            }
        })
);
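For reference, the BasicCredentialsProvider above results in a standard HTTP Basic Authorization header: the username and password are joined by a colon and Base64-encoded. The real client builds this header for you; the sketch below only shows what is sent over the wire, using the placeholder credentials from the example:

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class BasicAuthSketch {
    public static void main(String[] args) {
        String username = "user";
        String password = "test";
        // HTTP Basic auth: "Authorization: Basic base64(username:password)"
        String token = Base64.getEncoder().encodeToString(
            (username + ":" + password).getBytes(StandardCharsets.UTF_8));
        System.out.println("Authorization: Basic " + token);
        // prints: Authorization: Basic dXNlcjp0ZXN0
    }
}
```

Because the credentials travel in an easily decoded header, prefer the VPC endpoint where possible and avoid sending production passwords over plain HTTP on the public network.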

Connection parameters

  • search_url: The Elasticsearch-compatible endpoint for the LindormSearch search engine. To get the endpoint, see Elasticsearch-compatible address. Use the Virtual Private Cloud (VPC) address when connecting from a VPC, or the Internet address when connecting over the public network.

  • search_port: The port for Elasticsearch compatibility on LindormSearch: 30070.

  • username: The username for the search engine.

  • password: The password for the search engine.

Choose a network connection type

  • VPC (recommended): If your application runs on an Elastic Compute Service (ECS) instance in the same VPC as the Lindorm instance, connect over VPC for lower latency and better security. Set search_url to the VPC address of the Elasticsearch-compatible endpoint.

  • Public network: If your application runs outside Alibaba Cloud, enable the public endpoint first. In the Lindorm console, go to Database Connections, click the Search Engine tab, and click Enable Public Endpoint in the upper-right corner. Then set search_url to the Internet address of the Elasticsearch-compatible endpoint.
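The two connection types differ only in the host string passed to HttpHost. A minimal sketch of selecting the endpoint based on where the application runs (both host names below are hypothetical placeholders; copy the real values from the Lindorm console):

```java
public class EndpointSelection {
    // Hypothetical placeholder endpoints. Retrieve the actual addresses from
    // the Lindorm console: Database Connections > Search Engine tab.
    static final String VPC_ENDPOINT =
        "ld-xxxx-proxy-search-vpc.lindorm.rds.aliyuncs.com";
    static final String PUBLIC_ENDPOINT =
        "ld-xxxx-proxy-search-public.lindorm.rds.aliyuncs.com";

    static String chooseEndpoint(boolean runsInSameVpc) {
        // Prefer the VPC address when the client runs on an ECS instance in
        // the same VPC; fall back to the Internet address otherwise.
        return runsInSameVpc ? VPC_ENDPOINT : PUBLIC_ENDPOINT;
    }

    public static void main(String[] args) {
        System.out.println(chooseEndpoint(true));
        System.out.println(chooseEndpoint(false));
    }
}
```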

Create an index

Use CreateIndexRequest to create an index. The example below creates an index named lindorm_index with 4 shards.

String index_name = "lindorm_index";

// Create a CreateIndexRequest and configure index settings.
CreateIndexRequest createIndexRequest = new CreateIndexRequest(index_name);
Map<String, Object> settingsMap = new HashMap<>();
settingsMap.put("index.number_of_shards", 4);
createIndexRequest.settings(settingsMap);

CreateIndexResponse createIndexResponse = highClient.indices().create(createIndexRequest, COMMON_OPTIONS);
if (createIndexResponse.isAcknowledged()) {
    System.out.println("Create index [" + index_name + "] successfully.");
}

Index a document

Use IndexRequest to write a single document. Specify a document ID or leave it blank to have the system generate one automatically (omitting the ID can improve write performance).

// Specify the document ID.
String doc_id = "test";

// Build the document fields. Replace with actual field names and values for your use case.
Map<String, Object> jsonMap = new HashMap<>();
jsonMap.put("field1", "value1");
jsonMap.put("field2", "value2");

IndexRequest indexRequest = new IndexRequest(index_name);
indexRequest.id(doc_id).source(jsonMap);

IndexResponse indexResponse = highClient.index(indexRequest, COMMON_OPTIONS);
System.out.println("Index document with id[" + indexResponse.getId() + "] successfully.");

Bulk index documents

Use BulkProcessor with bulkAsync() to write large numbers of documents efficiently. BulkProcessor batches individual index requests and sends them in bulk when any configured threshold is met.
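The batching behavior described above can be illustrated with a simplified, hypothetical sketch (this is not the actual BulkProcessor implementation): buffered operations are sent as one bulk request as soon as any configured threshold, such as action count or total payload size, is reached.

```java
import java.util.ArrayList;
import java.util.List;

// Toy model of BulkProcessor's flush-on-any-threshold rule.
class MiniBatcher {
    private final int maxActions;
    private final long maxBytes;
    private final List<String> buffer = new ArrayList<>();
    private long bufferedBytes = 0;
    int flushCount = 0; // number of bulk requests "sent"

    MiniBatcher(int maxActions, long maxBytes) {
        this.maxActions = maxActions;
        this.maxBytes = maxBytes;
    }

    void add(String doc) {
        buffer.add(doc);
        bufferedBytes += doc.length();
        // Flush when ANY threshold is reached, mirroring setBulkActions
        // and setBulkSize in the real client.
        if (buffer.size() >= maxActions || bufferedBytes >= maxBytes) {
            flush();
        }
    }

    void flush() {
        if (buffer.isEmpty()) return;
        flushCount++; // the real client would issue one bulk request here
        buffer.clear();
        bufferedBytes = 0;
    }
}

public class MiniBatcherDemo {
    public static void main(String[] args) {
        MiniBatcher batcher = new MiniBatcher(3, 1024);
        for (int i = 0; i < 10; i++) {
            batcher.add("{\"field1\":\"value" + i + "\"}");
        }
        batcher.flush(); // drain the remainder, analogous to awaitClose()
        System.out.println(batcher.flushCount); // prints 4
    }
}
```

The real BulkProcessor adds a time-based threshold (setFlushInterval) and concurrency control (setConcurrentRequests) on top of this idea.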

BulkProcessor configuration

  • setConcurrentRequests: Maximum number of concurrent bulk requests. Increase this value to improve throughput under heavy write loads. Default: 1. Example value: 10.

  • setFlushInterval: Time interval after which a bulk request is sent regardless of size or count. Use this as a safety net to prevent data from being buffered too long. Example value: 5 seconds.

  • setBulkActions: Number of individual operations that triggers a flush. Tune this based on your average document size. Example value: 5,000.

  • setBulkSize: Total size of buffered operations that triggers a flush. Example value: 5 MB.

int bulkTotal = 100000;
AtomicLong failedBulkItemCount = new AtomicLong();

BulkProcessor.Builder builder = BulkProcessor.builder(
    (request, bulkListener) -> highClient.bulkAsync(request, COMMON_OPTIONS, bulkListener),
    new BulkProcessor.Listener() {
        @Override
        public void beforeBulk(long executionId, BulkRequest request) {}

        @Override
        public void afterBulk(long executionId, BulkRequest request, BulkResponse response) {
            // Count failed items in the bulk response.
            for (BulkItemResponse bulkItemResponse : response) {
                if (bulkItemResponse.isFailed()) {
                    failedBulkItemCount.incrementAndGet();
                }
            }
        }

        @Override
        public void afterBulk(long executionId, BulkRequest request, Throwable failure) {
            // If this callback fires, the entire bulk request failed and none of its operations were executed.
            if (null != failure) {
                failedBulkItemCount.addAndGet(request.numberOfActions());
            }
        }
    });

// Maximum concurrent bulk requests. Default is 1; increase for higher throughput.
builder.setConcurrentRequests(10);
// Flush thresholds: a bulk request is sent when any of these is met.
builder.setFlushInterval(TimeValue.timeValueSeconds(5));  // every 5 seconds
builder.setBulkActions(5000);                             // every 5,000 operations
builder.setBulkSize(new ByteSizeValue(5, ByteSizeUnit.MB)); // every 5 MB

BulkProcessor bulkProcessor = builder.build();
Random random = new Random();
for (int i = 0; i < bulkTotal; i++) {
    // Replace with actual field names and values for your use case.
    Map<String, Object> map = new HashMap<>();
    map.put("field1", random.nextInt() + "");
    map.put("field2", random.nextInt() + "");
    IndexRequest bulkItemRequest = new IndexRequest(index_name);
    bulkItemRequest.source(map);
    bulkProcessor.add(bulkItemRequest);
}

// Wait up to 120 seconds for all pending operations to complete.
bulkProcessor.awaitClose(120, TimeUnit.SECONDS);
long failure = failedBulkItemCount.get(),
     success = bulkTotal - failure;
System.out.println("Bulk using BulkProcessor finished with [" + success + "] requests succeeded, [" + failure + "] requests failed.");

Search documents

Issue a refresh request first to make recently written data visible, then run your queries.

By default, a search query returns at most 10,000 documents. To get the exact total count when more than 10,000 documents match, call searchSourceBuilder.trackTotalHits(true) before running the query.

// Refresh the index to make written data searchable.
RefreshRequest refreshRequest = new RefreshRequest(index_name);
highClient.indices().refresh(refreshRequest, COMMON_OPTIONS);
System.out.println("Refresh on index [" + index_name + "] successfully.");

// Query all documents.
SearchRequest searchRequest = new SearchRequest(index_name);
SearchSourceBuilder searchSourceBuilder = new SearchSourceBuilder();
// Uncomment the next line to get the exact total count when results exceed 10,000.
// searchSourceBuilder.trackTotalHits(true);
QueryBuilder queryMatchAllBuilder = new MatchAllQueryBuilder();
searchSourceBuilder.query(queryMatchAllBuilder);
searchRequest.source(searchSourceBuilder);
SearchResponse searchResponse = highClient.search(searchRequest, COMMON_OPTIONS);
long totalHit = searchResponse.getHits().getTotalHits().value;
System.out.println("Search query match all hits [" + totalHit + "] in total.");

// Query documents by ID.
QueryBuilder queryByIdBuilder = new MatchQueryBuilder("_id", doc_id);
searchSourceBuilder.query(queryByIdBuilder);
searchRequest.source(searchSourceBuilder);
searchResponse = highClient.search(searchRequest, COMMON_OPTIONS);
for (SearchHit searchHit : searchResponse.getHits()) {
    System.out.println("Search query by id response [" + searchHit.getSourceAsString() + "]");
}

Delete documents and indexes

Use DeleteRequest to remove a single document and DeleteIndexRequest to drop an index.

// Delete a single document by ID.
DeleteRequest deleteRequest = new DeleteRequest(index_name);
deleteRequest.id(doc_id);
DeleteResponse deleteResponse = highClient.delete(deleteRequest, COMMON_OPTIONS);
System.out.println("Delete document with id [" + deleteResponse.getId() + "] successfully.");

// Delete the index.
DeleteIndexRequest deleteIndexRequest = new DeleteIndexRequest(index_name);
AcknowledgedResponse deleteIndexResponse = highClient.indices().delete(deleteIndexRequest, COMMON_OPTIONS);
if (deleteIndexResponse.isAcknowledged()) {
    System.out.println("Delete index [" + index_name + "] successfully.");
}

highClient.close();

Complete example

import org.apache.http.HttpHost;
import org.apache.http.auth.AuthScope;
import org.apache.http.auth.UsernamePasswordCredentials;
import org.apache.http.client.CredentialsProvider;
import org.apache.http.impl.client.BasicCredentialsProvider;
import org.apache.http.impl.nio.client.HttpAsyncClientBuilder;
import org.elasticsearch.action.admin.indices.delete.DeleteIndexRequest;
import org.elasticsearch.action.admin.indices.refresh.RefreshRequest;
import org.elasticsearch.action.admin.indices.refresh.RefreshResponse;
import org.elasticsearch.action.bulk.BulkItemResponse;
import org.elasticsearch.action.bulk.BulkProcessor;
import org.elasticsearch.action.bulk.BulkRequest;
import org.elasticsearch.action.bulk.BulkResponse;
import org.elasticsearch.action.delete.DeleteRequest;
import org.elasticsearch.action.delete.DeleteResponse;
import org.elasticsearch.action.index.IndexRequest;
import org.elasticsearch.action.index.IndexResponse;
import org.elasticsearch.action.search.SearchRequest;
import org.elasticsearch.action.search.SearchResponse;
import org.elasticsearch.action.support.master.AcknowledgedResponse;
import org.elasticsearch.client.HttpAsyncResponseConsumerFactory;
import org.elasticsearch.client.RequestOptions;
import org.elasticsearch.client.RestClient;
import org.elasticsearch.client.RestClientBuilder;
import org.elasticsearch.client.RestHighLevelClient;
import org.elasticsearch.client.indices.CreateIndexRequest;
import org.elasticsearch.client.indices.CreateIndexResponse;
import org.elasticsearch.common.unit.ByteSizeUnit;
import org.elasticsearch.common.unit.ByteSizeValue;
import org.elasticsearch.common.unit.TimeValue;
import org.elasticsearch.index.query.MatchAllQueryBuilder;
import org.elasticsearch.index.query.MatchQueryBuilder;
import org.elasticsearch.index.query.QueryBuilder;
import org.elasticsearch.search.SearchHit;
import org.elasticsearch.search.builder.SearchSourceBuilder;

import java.util.HashMap;
import java.util.Map;
import java.util.Random;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;

public class RestHClientTest {
    private static final RequestOptions COMMON_OPTIONS;
    static {
        // Set the maximum response buffer size to 30 MB. The default is 100 MB.
        RequestOptions.Builder builder = RequestOptions.DEFAULT.toBuilder();
        builder.setHttpAsyncResponseConsumerFactory(
            new HttpAsyncResponseConsumerFactory
                .HeapBufferedResponseConsumerFactory(30 * 1024 * 1024));
        COMMON_OPTIONS = builder.build();
    }

    public static void main(String[] args) {
        // Set the Elasticsearch-compatible endpoint and port for LindormSearch.
        String search_url = "ld-t4n5668xk31ui****-proxy-search-public.lindorm.rds.aliyuncs.com";
        int search_port = 30070;

        // Set the username and password. Retrieve them from the Lindorm console.
        String username = "user";
        String password = "test";

        final CredentialsProvider credentials_provider = new BasicCredentialsProvider();
        credentials_provider.setCredentials(AuthScope.ANY, new UsernamePasswordCredentials(username, password));
        RestHighLevelClient highClient = new RestHighLevelClient(
            RestClient.builder(new HttpHost(search_url, search_port, "http"))
                .setHttpClientConfigCallback(new RestClientBuilder.HttpClientConfigCallback() {
                    public HttpAsyncClientBuilder customizeHttpClient(HttpAsyncClientBuilder httpClientBuilder) {
                        return httpClientBuilder.setDefaultCredentialsProvider(credentials_provider);
                    }
                })
        );

        try {
            String index_name = "lindorm_index";

            // Create an index.
            CreateIndexRequest createIndexRequest = new CreateIndexRequest(index_name);
            Map<String, Object> settingsMap = new HashMap<>();
            settingsMap.put("index.number_of_shards", 4);
            createIndexRequest.settings(settingsMap);
            CreateIndexResponse createIndexResponse = highClient.indices().create(createIndexRequest, COMMON_OPTIONS);
            if (createIndexResponse.isAcknowledged()) {
                System.out.println("Create index [" + index_name + "] successfully.");
            }

            // Index a single document.
            // Specify the document ID. If you do not specify the document ID, an ID is automatically generated,
            // which can improve write performance.
            String doc_id = "test";
            Map<String, Object> jsonMap = new HashMap<>();
            jsonMap.put("field1", "value1");
            jsonMap.put("field2", "value2");
            IndexRequest indexRequest = new IndexRequest(index_name);
            indexRequest.id(doc_id).source(jsonMap);
            IndexResponse indexResponse = highClient.index(indexRequest, COMMON_OPTIONS);
            System.out.println("Index document with id[" + indexResponse.getId() + "] successfully.");

            // Bulk index documents using BulkProcessor.
            int bulkTotal = 100000;
            AtomicLong failedBulkItemCount = new AtomicLong();
            BulkProcessor.Builder builder = BulkProcessor.builder(
                (request, bulkListener) -> highClient.bulkAsync(request, COMMON_OPTIONS, bulkListener),
                new BulkProcessor.Listener() {
                    @Override
                    public void beforeBulk(long executionId, BulkRequest request) {}

                    @Override
                    public void afterBulk(long executionId, BulkRequest request, BulkResponse response) {
                        for (BulkItemResponse bulkItemResponse : response) {
                            if (bulkItemResponse.isFailed()) {
                                failedBulkItemCount.incrementAndGet();
                            }
                        }
                    }

                    @Override
                    public void afterBulk(long executionId, BulkRequest request, Throwable failure) {
                        if (null != failure) {
                            failedBulkItemCount.addAndGet(request.numberOfActions());
                        }
                    }
                });
            builder.setConcurrentRequests(10);
            builder.setFlushInterval(TimeValue.timeValueSeconds(5));
            builder.setBulkActions(5000);
            builder.setBulkSize(new ByteSizeValue(5, ByteSizeUnit.MB));
            BulkProcessor bulkProcessor = builder.build();
            Random random = new Random();
            for (int i = 0; i < bulkTotal; i++) {
                Map<String, Object> map = new HashMap<>();
                map.put("field1", random.nextInt() + "");
                map.put("field2", random.nextInt() + "");
                IndexRequest bulkItemRequest = new IndexRequest(index_name);
                bulkItemRequest.source(map);
                bulkProcessor.add(bulkItemRequest);
            }
            bulkProcessor.awaitClose(120, TimeUnit.SECONDS);
            long failure = failedBulkItemCount.get(),
                 success = bulkTotal - failure;
            System.out.println("Bulk using BulkProcessor finished with [" + success + "] requests succeeded, [" + failure + "] requests failed.");

            // Refresh the index to make written data searchable.
            RefreshRequest refreshRequest = new RefreshRequest(index_name);
            RefreshResponse refreshResponse = highClient.indices().refresh(refreshRequest, COMMON_OPTIONS);
            System.out.println("Refresh on index [" + index_name + "] successfully.");

            // Query all documents. By default, at most 10,000 results are returned.
            // To get the exact total count, call searchSourceBuilder.trackTotalHits(true).
            SearchRequest searchRequest = new SearchRequest(index_name);
            SearchSourceBuilder searchSourceBuilder = new SearchSourceBuilder();
            QueryBuilder queryMatchAllBuilder = new MatchAllQueryBuilder();
            searchSourceBuilder.query(queryMatchAllBuilder);
            searchRequest.source(searchSourceBuilder);
            SearchResponse searchResponse = highClient.search(searchRequest, COMMON_OPTIONS);
            long totalHit = searchResponse.getHits().getTotalHits().value;
            System.out.println("Search query match all hits [" + totalHit + "] in total.");

            // Query documents by ID.
            QueryBuilder queryByIdBuilder = new MatchQueryBuilder("_id", doc_id);
            searchSourceBuilder.query(queryByIdBuilder);
            searchRequest.source(searchSourceBuilder);
            searchResponse = highClient.search(searchRequest, COMMON_OPTIONS);
            for (SearchHit searchHit : searchResponse.getHits()) {
                System.out.println("Search query by id response [" + searchHit.getSourceAsString() + "]");
            }

            // Delete a document by ID.
            DeleteRequest deleteRequest = new DeleteRequest(index_name);
            deleteRequest.id(doc_id);
            DeleteResponse deleteResponse = highClient.delete(deleteRequest, COMMON_OPTIONS);
            System.out.println("Delete document with id [" + deleteResponse.getId() + "] successfully.");

            // Delete the index.
            DeleteIndexRequest deleteIndexRequest = new DeleteIndexRequest(index_name);
            AcknowledgedResponse deleteIndexResponse = highClient.indices().delete(deleteIndexRequest, COMMON_OPTIONS);
            if (deleteIndexResponse.isAcknowledged()) {
                System.out.println("Delete index [" + index_name + "] successfully.");
            }

            highClient.close();
        } catch (Exception exception) {
            System.out.println("Operation failed: " + exception);
        }
    }
}

The expected output is:

Create index [lindorm_index] successfully.
Index document with id[test] successfully.
Bulk using BulkProcessor finished with [100000] requests succeeded, [0] requests failed.
Refresh on index [lindorm_index] successfully.
Search query match all hits [10000] in total.
Search query by id response [{"field1":"value1","field2":"value2"}]
Delete document with id [test] successfully.
Delete index [lindorm_index] successfully.

The match-all query reports 10,000 hits even though 100,000 documents were written. This is the default result limit. To get the exact total count, call searchSourceBuilder.trackTotalHits(true) before running the query.