PolarDB: Hybrid search

Last Updated:Dec 26, 2025

PolarDB for PostgreSQL supports multiple retrieval methods, such as dense retrieval, sparse retrieval, and hybrid retrieval.

Background

  • Dense retrieval: Uses semantic context to understand the meaning behind a query.

  • Sparse retrieval: Emphasizes text matching and finds results based on specific terms. This is equivalent to full-text search.

  • Hybrid retrieval: Combines dense and sparse retrieval to capture both the full context and specific keywords. This provides comprehensive search results.

Prepare the data

  1. Use a privileged account to create the extensions required for retrieval.

    CREATE EXTENSION IF NOT EXISTS rum;
    CREATE EXTENSION IF NOT EXISTS vector;
    CREATE EXTENSION IF NOT EXISTS polar_ai;

    The extensions provide the following features:

    • rum: provides the RUM index access method, which accelerates full-text search and ranking.
    • vector: provides the vector data type and vector indexes used for similarity search.
    • polar_ai: provides AI functions, such as text embedding, that invoke models from within the database.

  2. Create a table and insert test data.

    CREATE TABLE t_chunk(id serial, chunk text, embedding vector(1536), v tsvector);
    
    INSERT INTO t_chunk(chunk) VALUES('Unlock the Power of AI 1 million free tokens 88% Price Reduction Activate Now AI Search Contact Sales English Cart Console Log In Why Us Pricing Products Solutions Marketplace Developers Partners Documentation Services Model Studio PolarDB Filter in menu Product Overview Benefits Billing Announcements and Updates Getting Started User Guide Use Cases Developer Reference Support Home Page PolarDBProduct OverviewSearch for Help ContentProduct OverviewUpdated at: 2025-01-06 08:50ProductCommunityWhat is PolarDB?PolarDB is a new-generation database service that is developed by Alibaba Cloud. This service decouples computing from storage and uses integrated software and hardware. PolarDB is a secure and reliable database service that provides auto scaling within seconds, high performance, and mass storage. PolarDB is 100% compatible with MySQL and PostgreSQL and highly compatible with Oracle.');
    INSERT INTO t_chunk(chunk) VALUES('PolarDB provides three engines: PolarDB for MySQL, PolarDB for PostgreSQL, and PolarDB-X. Years of best practices in Double 11 events prove that PolarDB can offer the flexibility of open source ecosystems and the high performance and security of commercial cloud-native databases.Database engine Ecosystem Compatibility Architecture Platform Scenario PolarDB for MySQL MySQL 100% compatible with MySQL Shared storage and compute-storage decoupled architecture Public cloud, Apsara Stack Enterprise Edition, DBStack');
    INSERT INTO t_chunk(chunk) VALUES('PolarDB for PostgreSQL PostgreSQL and Oracle 100% compatible with MySQL and highly compatible with Oracle Shared storage and compute-storage decoupled architecture Public cloud, Apsara Stack Enterprise Edition, DBStack Cloud-native databases in the PostgreSQL ecosystem PolarDB-X MySQL Standard Edition is 100% compatible with MySQL and Enterprise Edition is highly compatible with MySQL shared nothing and distributed architecture Public cloud, Apsara Stack Enterprise Edition, DBStack');
    INSERT INTO t_chunk(chunk) VALUES('Architecture of PolarDB for MySQL and PolarDB for PostgreSQL PolarDB for MySQL and PolarDB for PostgreSQL both use an architecture of shared storage and compute-storage decoupling. They are featured by cloud-native architecture, integrated software and hardware, and shared distributed storage. Physical replication and RDMA are used between, the primary node and read-only nodes to reduce latency and accelerate data synchronization. This resolves the issue of non-strong data consistency caused by asynchronous replication and ensures zero data loss in case of single point of failure (SPOF). The architecture also enables node scaling within seconds.');
    INSERT INTO t_chunk(chunk) VALUES('Core components PolarProxy PolarDB uses PolarProxy to provide external services for the applications. PolarProxy forwards the requests from the applications to database nodes. You can use the proxy to perform authentication, data protection, and session persistence. The proxy parses SQL statements, sends write requests to the primary node, and evenly distributes read requests to multiple read-only nodes.Compute nodes A cluster contains one primary node and multiple read-only nodes. A cluster of Multi-master Cluster Edition (only for PolarDB for MySQL) supports multiple primary nodes and multiple read-only nodes. Compute nodes can be either general-purpose or dedicated.Shared storage Multiple nodes in a cluster share storage resources. A single cluster supports up to 500 TB of storage capacity.');
    INSERT INTO t_chunk(chunk) VALUES('Architecture benefits Large storage capacity The maximum storage capacity of a cluster is 500 TB. You do not need to purchase clusters for database sharding due to the storage limit of a single host. This simplifies application development and reduces the O&M workload.Cost-effectiveness PolarDB decouples computing and storage. You are charged only for the computing resources when you add read-only nodes to a PolarDB cluster. In traditional database solutions, you are charged for both computing and storage resources when you add nodes.Elastic scaling within minutes PolarDB supports rapid scaling for computing resources. This is based on container virtualization, shared storage, and compute-storage decoupling. It requires only 5 minutes to add or remove a node. The storage capability is automatically scaled up. During the scale-up process, your services are not interrupted.');
    INSERT INTO t_chunk(chunk) VALUES('Read consistency PolarDB uses log sequence numbers (LSNs) for cluster endpoints that have read/write splitting enabled. This ensures global consistency for read operations and prevents the inconsistency that is caused by the replication delay between the primary node and read-only nodes.Millisecond-level latency in physical replication PolarDB performs physical replication from the primary node to read-only nodes based on redo logs. The physical replication replaces the logical replication that is based on binary logs. This way, the replication efficiency and stability are improved. No delays occur even if you perform DDL operations on large tables, such as adding indexes or fields.Data backup within seconds Snapshots that are implemented based on the distributed storage can back up a database with terabytes of data in a few minutes. During the entire backup process, no locks are required, which ensures high efficiency and minimized impacts on your business. Data can be backed up anytime.');
    INSERT INTO t_chunk(chunk) VALUES('Architecture of PolarDB-X PolarDB-X uses an architecture of shared nothing and compute-storage decoupling. This architecture lets you achieve hierarchical capacity planning as needed and implement mass scaling.Core components Global meta service (GMS): provides distributed metadata and a global timestamp distributor named Timestamp Oracle (TSO) and maintains meta information such as tables, schemas, and statistics. GMS also maintains security information such as accounts and permissions.Compute node (CN): provides a distributed SQL engine that contains core optimizers and executors. A CN uses a stateless SQL engine to provide distributed routing and computing and uses the two-phase commit protocol (2PC) to coordinate distributed transactions. A CN also executes DDL statements in a distributed manner and maintains global indexes.Data node (DN): provides a data storage engine. A data node uses Paxos to provide highly reliable storage services and uses multiversion concurrency control (MVCC) for distributed transactions. A data node also provides the pushdown computation feature to push down operators such as Project, Filter, Join, and Agg in distributed systems, and supports local SSDs and shared storage.Change data capture (CDC): provides a primary/secondary replication protocol that is compatible with MySQL. The primary/secondary replication protocol is compatible with the protocols and data formats that are supported by MySQL binary logging. CDC uses the primary/secondary replication protocol to exchange data.');
  3. Generate vector data. You can convert text to vectors by creating a custom model and invoking it.

    -- Invoke the custom model to generate an embedding for each chunk
    UPDATE t_chunk SET embedding = <custom_model_call_function>('<custom_model_name>', chunk);
  4. Create the indexes required for retrieval.

    • Vector index. This example uses L2 distance. You can adjust it as needed.

      CREATE INDEX ON t_chunk USING hnsw (embedding vector_l2_ops);
    • Full-text index.

      UPDATE t_chunk SET v = to_tsvector('english', chunk);
      
      CREATE INDEX ON t_chunk USING rum (v rum_tsvector_ops);

Retrieve data

Dense retrieval

Retrieval is based solely on vectors. A smaller distance indicates a higher similarity.

SELECT chunk, embedding <-> polar_ai.ai_text_embedding('What database engines does PolarDB provide')::vector(1536) AS dist
FROM t_chunk
ORDER BY dist ASC
LIMIT 5;

Sparse retrieval

Retrieval is based solely on full-text matching. A smaller distance indicates a higher similarity.

SELECT chunk, v <=> to_tsquery('english', 'PolarDB|PostgreSQL|efficiency') AS rank
FROM t_chunk
WHERE v @@ to_tsquery('english', 'PolarDB|PostgreSQL|efficiency')
ORDER BY rank ASC
LIMIT 5;

Hybrid retrieval

Hybrid retrieval merges the results of both retrieval methods to achieve multi-channel retrieval.

WITH t AS (
  SELECT chunk, embedding <-> polar_ai.ai_text_embedding('What database engines does PolarDB provide')::vector(1536) AS dist
  FROM t_chunk
  ORDER BY dist ASC
  LIMIT 5
),
t2 AS (
  SELECT chunk, v <=> to_tsquery('english', 'PolarDB|PostgreSQL|efficiency') AS rank
  FROM t_chunk
  WHERE v @@ to_tsquery('english', 'PolarDB|PostgreSQL|efficiency')
  ORDER BY rank ASC
  LIMIT 5
)
SELECT * FROM t
UNION ALL
SELECT * FROM t2;

Because the two retrieval methods use distance metrics that are not directly comparable, the Reciprocal Rank Fusion (RRF) model is used to produce a unified ranking. RRF combines multiple result sets with different relevance metrics into a single result set. It provides high-quality results without tuning, and the relevance metrics do not need to be correlated. The basic steps are as follows:

  1. Collect rankings in the retrieval phase

    Multiple retrievers generate sorted results for their queries.

  2. Fuse ranks

    Use a simple scoring function, such as a reciprocal sum, to weight and fuse the rank positions from each retriever. The formula is as follows:

    score(d) = Σ (i = 1 to n) 1 / (k + rank_i(d))

    In this formula, n is the number of retrieval methods, rank_i(d) is the rank of document d from the i-th retriever, and k is a smoothing parameter, which is typically set to 60.

  3. Rerank

    Rerank the documents based on the fused scores to generate the final result.
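The steps above can be sketched outside the database. The following Python example is illustrative only: the function and document names are hypothetical, not part of PolarDB. It fuses two ranked result lists with the reciprocal-sum formula and k = 60.

```python
def rrf_scores(result_lists, k=60):
    """Fuse ranked result lists with Reciprocal Rank Fusion.

    Each list is ordered best-first; a document's score is the sum of
    1 / (k + rank) over every list in which it appears (rank starts at 1).
    """
    scores = {}
    for results in result_lists:
        for rank, doc in enumerate(results, start=1):
            scores[doc] = scores.get(doc, 0.0) + 1.0 / (k + rank)
    # Rerank by fused score, highest first
    return sorted(scores.items(), key=lambda item: item[1], reverse=True)

dense  = ["doc_a", "doc_b", "doc_c"]   # top results from the vector search
sparse = ["doc_b", "doc_d", "doc_a"]   # top results from the full-text search
fused = rrf_scores([dense, sparse])
```

Here doc_b ranks first because it appears near the top of both lists, even though doc_a leads the dense list; documents found by only one retriever receive a single score.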

In the following query, if you are not satisfied with the result ranking, adjust the k parameter to change the order. Based on this parameter, the system scores each document from the results of a full-text search (sparse vector) and a vector search (dense vector). Each document is scored using the formula 1/(k + rank), where rank is the position of the document in its result set. If a document from the full-text search results does not appear in the vector search results, it receives only one score. The same applies if a document from the vector search results does not appear in the full-text search results. If a document appears in both result sets, its two scores are added together.

Note

The smoothing parameter k controls how documents in a result set affect the final sort order. A larger k flattens the score differences between ranks, which gives relatively more weight to lower-ranked documents.
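The effect of the smoothing parameter (k in the RRF formula) can be checked with a quick calculation. This Python snippet is illustrative, not part of the product: as k grows, the ratio between the contributions of a top-ranked and a lower-ranked document approaches 1, so rank differences matter less.

```python
def rrf_contribution(rank, k):
    # Score contribution of a single result at the given rank
    return 1.0 / (k + rank)

# Ratio of a rank-1 hit to a rank-5 hit, for a small and a large k
ratio_small_k = rrf_contribution(1, 10) / rrf_contribution(5, 10)    # 15/11 ≈ 1.36
ratio_large_k = rrf_contribution(1, 200) / rrf_contribution(5, 200)  # 205/201 ≈ 1.02
```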

-- Dense vector retrieval
WITH t1 AS (
  SELECT chunk, embedding <-> polar_ai.ai_text_embedding('What database engines does PolarDB provide')::vector(1536) AS dist
  FROM t_chunk
  ORDER BY dist ASC
  LIMIT 5
),
t2 AS (
  SELECT ROW_NUMBER() OVER (ORDER BY dist ASC) AS row_num, chunk
  FROM t1
),
-- Sparse vector retrieval
t3 AS (
  SELECT chunk, v <=> to_tsquery('english', 'PolarDB|PostgreSQL|efficiency') AS rank
  FROM t_chunk
  WHERE v @@ to_tsquery('english', 'PolarDB|PostgreSQL|efficiency')
  ORDER BY rank ASC
  LIMIT 5
),
t4 AS (
  -- A smaller rank value means higher relevance, so number rows in ascending order
  SELECT ROW_NUMBER() OVER (ORDER BY rank ASC) AS row_num, chunk
  FROM t3
),
-- Calculate RRF scores for each result set
t5 AS (
  SELECT 1.0/(60 + row_num) AS score, chunk FROM t2
  UNION ALL
  SELECT 1.0/(60 + row_num), chunk FROM t4
)
-- Merge the scores
SELECT sum(score) AS score, chunk
FROM t5
GROUP BY chunk
ORDER BY score DESC;

Weighted fusion

You can also set different weights for different result sets, such as a dense retrieval weight of 0.8 and a sparse retrieval weight of 0.2.

-- Dense vector retrieval
WITH t1 AS (
  SELECT chunk, embedding <-> polar_ai.ai_text_embedding('What database engines does PolarDB provide')::vector(1536) AS dist
  FROM t_chunk
  ORDER BY dist ASC
  LIMIT 5
),
t2 AS (
  SELECT ROW_NUMBER() OVER (ORDER BY dist ASC) AS row_num, chunk
  FROM t1
),
-- Sparse vector retrieval
t3 AS (
  SELECT chunk, v <=> to_tsquery('english', 'PolarDB|PostgreSQL|efficiency') AS rank
  FROM t_chunk
  WHERE v @@ to_tsquery('english', 'PolarDB|PostgreSQL|efficiency')
  ORDER BY rank ASC
  LIMIT 5
),
t4 AS (
  -- A smaller rank value means higher relevance, so number rows in ascending order
  SELECT ROW_NUMBER() OVER (ORDER BY rank ASC) AS row_num, chunk
  FROM t3
),
-- Calculate RRF scores for each set, with weights of 0.8 and 0.2
t5 AS (
  SELECT (1.0/(60 + row_num)) * 0.8 AS score, chunk FROM t2
  UNION ALL
  SELECT (1.0/(60 + row_num)) * 0.2, chunk FROM t4
)
-- Merge the scores
SELECT sum(score) AS score, chunk
FROM t5
GROUP BY chunk
ORDER BY score DESC;
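As a cross-check of the weighting logic, this small Python sketch (illustrative; the document names are hypothetical) scales each list's reciprocal-rank contribution by its weight before summing:

```python
def weighted_rrf(result_lists, weights, k=60):
    """Fuse ranked lists, scaling each list's RRF contribution by a weight."""
    scores = {}
    for results, weight in zip(result_lists, weights):
        for rank, doc in enumerate(results, start=1):
            scores[doc] = scores.get(doc, 0.0) + weight / (k + rank)
    # Highest fused score first
    return sorted(scores.items(), key=lambda item: item[1], reverse=True)

dense  = ["doc_a", "doc_b"]   # vector search results, weight 0.8
sparse = ["doc_b", "doc_a"]   # full-text search results, weight 0.2
fused = weighted_rrf([dense, sparse], [0.8, 0.2])
```

With equal weights the two documents would tie; the 0.8 dense weight lets the vector ranking decide the final order, so doc_a comes first.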