Alibaba Cloud Elasticsearch is available in three editions and multiple versions. Use this topic to compare editions and identify the capabilities added in each version, so you can choose the right combination for your workload.
Edition comparison
Alibaba Cloud Elasticsearch offers three editions: Standard Edition, Kernel-enhanced Edition, and Vector Enhanced Edition. The editions differ in supported versions, built-in optimizations, and pricing.
<table> <thead> <tr> <td><p><b>Item</b></p></td> <td><p><b>Kernel-enhanced Edition</b></p></td> <td><p><b>Vector Enhanced Edition and Standard Edition</b></p></td> </tr> </thead> <colgroup></colgroup> <colgroup></colgroup> <colgroup></colgroup> <tbody> <tr> <td><p>Supported versions</p></td> <td><p>7.16, 7.10, and 6.7</p></td> <td><p>Vector Enhanced Edition: 8.17 and 8.15</p><p>Standard Edition: 8.13, 8.9, 8.5, 7.7, 6.8, 6.3, 5.6, and 5.5</p></td> </tr> <tr> <td><p>Main features</p></td> <td> <ul> <li><p>All open-source Elasticsearch features</p></li> <li><p>Free license for all advanced X-Pack features</p></li> <li><p>AliES optimized kernel — reduces costs and improves performance and stability across high-throughput workloads</p></li> </ul></td> <td> <ul> <li><p>All open-source Elasticsearch features</p></li> <li><p>Free license for all advanced X-Pack features</p></li> </ul></td> </tr> <tr> <td><p>Use cases</p></td> <td><p>All Elasticsearch use cases, with particular strengths in:</p> <ul> <li><p>Enterprise workloads requiring high read and write throughput</p></li> <li><p>Log ingestion at scale (write-heavy, read-light)</p></li> </ul></td> <td><p>All Elasticsearch use cases: information retrieval, search, log analysis, and vector search</p></td> </tr> <tr> <td><p>Best for</p></td> <td> <ul> <li><p>Teams that need cluster write and query performance optimized out of the box</p></li> <li><p>Teams looking to reduce Elasticsearch O&M costs in the cloud</p></li> <li><p>Workloads with fluctuating traffic that require stable cluster performance</p></li> <li><p>Teams focused on reducing data storage costs</p></li> </ul></td> <td> <ul> <li><p>Teams with Elasticsearch expertise who manage cluster tuning themselves</p></li> <li><p>Teams with well-defined resource plans</p></li> </ul></td> </tr> <tr> <td><p>Billing</p></td> <td><p>Charged based on cluster specifications, storage, and number of nodes.</p> <ul> <li><p><b>Basic enhancements</b>: Delivered as free plug-ins. 
Install them based on your needs.</p></li> <li><p><b>Advanced enhancements</b>: Charged for additional write traffic and storage when enabled.</p> <div><div><i></i></div><div><strong>Note:</strong> <p>Only Kernel-enhanced Edition V7.10 clusters support advanced enhancements, available only in the China (Hong Kong) region. Availability in more regions is planned.</p></div></div></li> </ul></td> <td><p>Charged based on cluster specifications, storage, and number of nodes.</p></td> </tr> </tbody> </table>
Open-source version features
All Alibaba Cloud Elasticsearch versions are 100% compatible with open-source Elasticsearch and include a free Platinum-level license for advanced features (formerly X-Pack commercial plug-ins). The sections below highlight key additions in each version.
V7.16, V7.10, and V6.7 clusters are Kernel-enhanced Edition and run the AliES optimized kernel, which provides additional enhancements on top of the open-source feature set. For more information, see Features of the AliES Kernel-enhanced Edition.
8.17
V8.17 is the foundation of Vector Enhanced Edition, which integrates model services so you can build AI search applications that call external AI models. The BBQ feature reduces memory costs by more than 10 times compared with standard dense vector storage.
Key additions:
-
Better binary quantization (BBQ) for dense vectors — Compresses vector indexes by 32 times, significantly cutting memory usage. This is the core feature of Vector Enhanced Edition. See What's new in 8.17.
-
Inference APIs reach GA — Stable APIs for integrating external model services into your search pipeline. See Inference APIs.
-
Reciprocal rank fusion (RRF) reaches GA — Combines text and vector recall rankings without manual score tuning. See RRF.
-
`logsdb` index mode reaches GA — Reduces log index storage by 3 times compared to the default index mode. See Logs data stream.
-
Elastic Rerank built-in model — A semantic reranking model that improves result relevance as a second-stage pass over lexical or vector search results. Useful for RAG applications where you need the most relevant context sent to a large language model. See Elastic Rerank.
-
zstd compression for `best_compression` codec — Reduces storage by 12% and improves write throughput by 14%.
-
ES|QL improvements — Full-text search support added to Elasticsearch Query Language (ES|QL). See ES|QL.
For the full list of changes, see What's new in 8.17 and What's new in 8.16.
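A minimal mapping sketch showing how BBQ-style quantization can be enabled on a dense vector field (the index name `my-vectors`, field name, and dimension count are illustrative; `bbq_hnsw` is the relevant `index_options` type in recent 8.x releases):

```json
PUT my-vectors
{
  "mappings": {
    "properties": {
      "embedding": {
        "type": "dense_vector",
        "dims": 768,
        "index": true,
        "similarity": "cosine",
        "index_options": {
          "type": "bbq_hnsw"
        }
      }
    }
  }
}
```

With BBQ, each float32 dimension (32 bits) is compressed to roughly 1 bit, which is where the 32x index compression figure comes from.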
8.15
V8.15 focuses on vector search efficiency and multimodal retrieval. If you need INT8 or INT4 quantized vector indexes, or hybrid search pipelines with reranking, start from this version.
Key additions:
-
INT8_HNSW as the default vector algorithm — Replaces HNSW as the default for dense vectors, with INT8 quantization enabled by default. INT4 quantization is also supported, saving up to 8 times the memory of float32 indexes. The `bit` vector type is now available. See dense-vector.
-
SIMD-accelerated INT8 index merging — Improves INT8-quantized index merge performance by approximately 3 times on AArch64 architecture.
-
Rerank phase and text_similarity_reranker API — Adds a rerank phase to search so you can apply rerank models as a second stage. See text-similarity-reranker-retriever.
-
retriever query syntax for multimodal search — A unified syntax for combining multiple retrieval strategies. See retriever.
-
`semantic_text` field type — Simplifies semantic search setup by handling inference configuration at the field level. See semantic-text.
-
`sparse_vector` query replaces `text_expansion` — Updated syntax for sparse vector queries. See query-dsl-sparse-vector-query.
-
`query_rules` API reaches GA — Stable API for applying query rules to search results. See query-rules-apis.
-
Nested field support for index sorting — Sort indexes by nested fields. See index-modules-index-sorting.
-
`logsdb` index mode — Available for logging workloads that require efficient storage. See logs-data-stream.
-
Lucene 9.11 — Improves query performance and memory efficiency. See Apache Lucene 9.11.0.
For the full list of changes, see What's new in 8.15 and What's new in 8.14.
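As a sketch of how `semantic_text` simplifies semantic search setup (the index name, field name, and the `my-elser-endpoint` inference endpoint are illustrative assumptions; you would first create an inference endpoint with the Inference APIs):

```json
PUT articles
{
  "mappings": {
    "properties": {
      "body": {
        "type": "semantic_text",
        "inference_id": "my-elser-endpoint"
      }
    }
  }
}

GET articles/_search
{
  "query": {
    "semantic": {
      "field": "body",
      "query": "how do I reduce index storage costs?"
    }
  }
}
```

Chunking, inference configuration, and embedding storage are handled at the field level, so no ingest pipeline is needed.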
8.13
V8.13 significantly extends vector search: larger dimensions, lower memory through scalar quantization, SIMD acceleration, and better support for chunked document indexing and external model integration.
Key additions:
-
Max vector dimensions increased to 4,096 — Up from 2,048. See 4096 dimension dense vector.
-
Scalar quantization for vector indexes — Reduces vector index memory usage by approximately 75%. See Understanding scalar quantization in Lucene.
-
`sparse_vector` type — Support for sparse vectors alongside dense vectors. See Sparse vector.
-
Shard-level query parallelization — Parallel query execution within a shard for faster aggregations and searches. See Query parallelization.
-
Nested vectors — Split large documents into chunks and index each chunk separately, enabling per-chunk vector retrieval. See Multiple results from the same doc with nested vectors.
-
Learning To Rank (LTR) — Re-rank query results using machine learning models during the rescore phase. See Learning To Rank.
-
Inference APIs for external model services — Call external machine learning (ML) model endpoints directly from your Elasticsearch queries. See Inference APIs.
-
SIMD support for vector queries — Hardware-accelerated vector search using single instruction, multiple data (SIMD) instructions. See Accelerating vector search with SIMD instructions.
For the full list of changes, see What's new in 8.13.
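The memory impact of scalar quantization is easy to reason about: INT8 stores one byte per dimension instead of the four bytes of a float32, which is where the roughly 75% saving comes from. A back-of-the-envelope sketch (the vector count and dimensions are illustrative, and HNSW graph overhead is ignored for simplicity):

```python
def dense_vector_bytes(num_vectors, dims, bytes_per_dim):
    """Raw vector storage only; graph and metadata overhead excluded."""
    return num_vectors * dims * bytes_per_dim

n, d = 1_000_000, 768
float32_bytes = dense_vector_bytes(n, d, 4)  # float32: 4 bytes per dimension
int8_bytes = dense_vector_bytes(n, d, 1)     # int8: 1 byte per dimension

print(f"saving: {1 - int8_bytes / float32_bytes:.0%}")  # prints: saving: 75%
```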
8.9
V8.9 introduces the foundational vector search building blocks: ELSER for sparse semantic search, hybrid ranking with RRF, and multi-field k-nearest neighbors (k-NN) queries.
Key additions:
-
Hybrid text and vector ranking with RRF — Combine text recall and vector recall results using reciprocal rank fusion, without manual score normalization. See RRF.
-
Max vector dimensions increased to 2,048 — See Increase max number of vector dims to 2048.
-
Improved brute-force vector search performance — See Improve brute force vector search speed.
-
Multiple fields in k-NN queries — Run k-NN search across multiple vector fields in a single query. See Allow more than one KNN search clause.
-
ELSER built-in model — Elastic Learned Sparse EncodeR (ELSER) is a sparse semantic search model that runs natively in Elasticsearch without an external inference service. See ELSER inference integration.
-
Distributed natural language processing (NLP) model scheduling — Schedule and manage NLP model allocations across nodes. See Make native inference generally available.
-
Improved write performance for primary key operations — See Optimize primary keys.
-
Faster constant keyword field queries — Shards are skipped when querying constant keyword fields. See Skip shards when querying constant keyword fields.
-
Time series data streams (TSDS) and downsampling — Store and downsample time series metrics efficiently. See TSDS and Downsample.
-
Reduced raw text memory — ThreadLocal removed from raw text handling. See Remove uses of deprecated LeafReader.
For the full list of changes, see What's new in 8.9.
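The idea behind RRF is simple: each result list contributes a score of 1/(k + rank) per document, and the fused score is the sum across lists, so no score normalization between text and vector recall is needed. A minimal sketch of the formula (the constant k = 60 matches the commonly cited default; document IDs are illustrative):

```python
def rrf_score(rankings, k=60):
    """Fuse multiple ranked lists of document IDs with reciprocal rank fusion.

    rankings: list of ranked lists (best first).
    Returns doc IDs sorted by fused score, highest first.
    """
    scores = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# A document ranked well in both lists beats one that tops only a single list.
text_recall = ["d1", "d2", "d3"]
vector_recall = ["d2", "d4", "d1"]
print(rrf_score([text_recall, vector_recall]))  # ['d2', 'd1', 'd4', 'd3']
```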
8.5
V8.5 adds foundational vector search support via HNSW-based k-NN and introduces performance and security improvements.
Key additions:
-
HNSW-based vector similarity search — Hierarchical Navigable Small World (HNSW) algorithm for approximate nearest neighbor search. See kNN search.
-
Time series data streams (TSDS) — See TSDS.
-
geo_grid queries — Query documents by geospatial grid cells. See Geo-grid query.
-
Simplified security configuration — Security is enabled automatically on new clusters. See Start the Elastic Stack with security enabled automatically.
-
Improved Lucene compression — Reduces index size.
-
Faster range queries — Enhanced range query performance.
-
`lookup` runtime fields — See lookup-runtime-fields.
-
`random_sampler` aggregation — Approximate aggregations on large datasets using random sampling. See Random sampler aggregation.
-
Reduced heap memory for master and data nodes — Lower baseline memory consumption.
-
Mapping types removed — Mapping types are no longer supported. Use RESTful API compatibility if your application depends on them. See rest-api-compatibility.
-
Index protection — By default, the `elastic` user can only read data from built-in Elasticsearch indexes.
For the full list of changes, see Breaking changes in 8.5.
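A sketch of an approximate k-NN search against an HNSW-indexed dense vector field (the index name, field name, and the 3-dimension query vector are illustrative; in practice the vector length must match the `dims` of the mapped field):

```json
GET products/_search
{
  "knn": {
    "field": "embedding",
    "query_vector": [0.12, -0.45, 0.91],
    "k": 10,
    "num_candidates": 100
  }
}
```

`num_candidates` controls how many candidates each shard gathers before the top `k` are returned; raising it trades speed for recall.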
7.16
Key additions:
-
SQL-based cross-cluster searches
-
Range-type enrich policies in ingest pipelines
-
Cache optimizations for improved query performance
-
Add and remove indexes from data streams
-
Cluster UUIDs and names included in audit logs
For the full list of changes, see Breaking changes in 7.16.
7.10
Key additions:
-
Improved storage field compression, reducing storage costs
-
Event Query Language (EQL) for security event detection. See EQL.
-
`search.max_buckets` default increased from 10,000 to 65,535. See search.max_buckets.
-
Case-insensitive queries via the `case_insensitive` parameter. See case_insensitive.
For the full list of changes, see Breaking changes in 7.10.
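A minimal EQL query sketch (the `logs-*` index pattern and `process.name` field are illustrative and assume ECS-style event data):

```json
GET logs-*/_eql/search
{
  "query": "process where process.name == \"regsvr32.exe\""
}
```

EQL matches events by category, condition, and ordering, which makes it well suited to describing attack sequences in security detection.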
7.7
Key additions:
-
Default shard count in index templates changed from 5 to 1
-
Mapping types removed — no need to specify a mapping type when defining a mapping or index template. See Removal of mapping types.
-
Default result limit set to 10,000 documents per request (`track_total_hits`). See track_total_hits.
-
Default shard limit per data node set to 1,000 (`cluster.max_shards_per_node`). See Cluster shard limit.
-
Default scroll context limit set to 500 (`search.max_open_scroll_context`). See Scroll search context.
-
Parent circuit breaker triggers at 95% of JVM heap memory (`indices.breaker.total.use_real_memory`), using actual memory usage instead of a fixed threshold. See Circuit breaker.
-
`_all` field removed, improving search performance
-
Intervals queries supported — search and return documents based on the order and proximity of matching terms. See Intervals queries.
-
Audit events persisted to `<clustername>_audit.json` on each node's file system (not stored in indexes). See Enabling audit logging.
For the full list of changes, see Breaking changes in 7.0.
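An intervals query sketch showing an ordered-proximity match (the index and `message` field names are illustrative): this matches documents where "connection" appears before "timeout".

```json
GET logs/_search
{
  "query": {
    "intervals": {
      "message": {
        "all_of": {
          "ordered": true,
          "intervals": [
            { "match": { "query": "connection" } },
            { "match": { "query": "timeout" } }
          ]
        }
      }
    }
  }
}
```

Adding `max_gaps` to the `all_of` rule would further restrict how far apart the terms may occur.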
6.x (6.3, 6.7, and 6.8)
Key additions:
-
One type per index; the `_doc` type is recommended
-
Index lifecycle management (ILM) introduced in V6.6.0 to reduce index operational overhead
-
Historical data rollup for summarizing and compressing historical data. See Historical data rollup.
-
Elasticsearch SQL (an X-Pack component) supported in V6.3 and later — converts SQL statements to domain-specific language (DSL), reducing the learning curve. See Elasticsearch SQL.
-
Composite, Parent, and Weighted Avg aggregation functions supported. See Composite, Parent, and Weighted Avg.
For the full list of changes, see Breaking changes in 6.0.
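A sketch of an Elasticsearch SQL request (the `orders` index and its columns are illustrative; `format=txt` returns a tabular plain-text response):

```json
POST _sql?format=txt
{
  "query": "SELECT product_name, price FROM orders WHERE price > 100 ORDER BY price DESC LIMIT 5"
}
```

Behind the scenes the SQL statement is translated into the equivalent Query DSL, so no hand-written DSL is required for simple analytics.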
5.x (5.5 and 5.6)
Key additions:
-
Multiple types per index; custom types supported
-
`string` data type replaced by `text` and `keyword`
-
Field `index` values changed from `not_analyzed`/`no` to `true`/`false`
-
`double` replaced by `float` to reduce storage costs
-
Java High Level REST Client replaces Transport Client
For the full list of changes, see Breaking changes in 5.0.
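A 5.x-style mapping sketch showing the `text`/`keyword` split that replaced `string` (the index name, the custom `my_type` mapping type, and field names are illustrative; 5.x mappings still nest fields under a type name):

```json
PUT my-index
{
  "mappings": {
    "my_type": {
      "properties": {
        "title": {
          "type": "text",
          "fields": {
            "raw": { "type": "keyword" }
          }
        }
      }
    }
  }
}
```

`title` is analyzed for full-text search, while the `title.raw` sub-field keeps the exact value for sorting and aggregations.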
Related topics
-
View the basic information of a cluster — Check your cluster's edition and version in the Elasticsearch console.
-
Create an Alibaba Cloud Elasticsearch cluster — Get started with a new cluster.
-
Evaluate specifications and storage capacity — Size your cluster before provisioning.