
PolarDB: Hybrid search

Last Updated: May 22, 2025

PolarDB for PostgreSQL supports multiple search methods, such as dense search, sparse search, and hybrid search.

Background

  • Dense search: searches data based on the semantic context of queries.

  • Sparse search: searches data based on specific terms and text matching. It is equivalent to full-text search.

  • Hybrid search: combines dense search and sparse search to capture both context and specific keywords and obtain comprehensive search results.

Best practices

Data preparation

  1. Use a privileged account to install required extensions.

    CREATE EXTENSION IF NOT EXISTS rum;
    CREATE EXTENSION IF NOT EXISTS vector;
    CREATE EXTENSION IF NOT EXISTS polar_ai;

    The following extensions are installed:

    • RUM: supports full-text indexes and relevance ranking.

    • pgvector: supports vector storage and search.

    • polar_ai: provides access to models for text vectorization.

  2. Create a table and insert test data.

    CREATE TABLE t_chunk(id serial, chunk text, embedding vector(1536), v tsvector);
    
    INSERT INTO t_chunk(chunk) VALUES('Unlock the Power of AI 1 million free tokens 88% Price Reduction Activate Now AI Search Contact Sales English Cart Console Log In Why Us Pricing Products Solutions Marketplace Developers Partners Documentation Services Model Studio PolarDB Filter in menu Product Overview Benefits Billing Announcements and Updates Getting Started User Guide Use Cases Developer Reference Support Home Page PolarDBProduct OverviewSearch for Help ContentProduct OverviewUpdated at: 2025-01-06 08:50ProductCommunityWhat is PolarDB?PolarDB is a new-generation database service that is developed by Alibaba Cloud. This service decouples computing from storage and uses integrated software and hardware. PolarDB is a secure and reliable database service that provides auto scaling within seconds, high performance, and mass storage. PolarDB is 100% compatible with MySQL and PostgreSQL and highly compatible with Oracle.');
    INSERT INTO t_chunk(chunk) VALUES('PolarDB provides three engines: PolarDB for MySQL, PolarDB for PostgreSQL, and PolarDB-X. Years of best practices in Double 11 events prove that PolarDB can offer the flexibility of open source ecosystems and the high performance and security of commercial cloud-native databases.Database engine Ecosystem Compatibility Architecture Platform Scenario PolarDB for MySQL MySQL 100% compatible with MySQL Shared storage and compute-storage decoupled architecture Public cloud, Apsara Stack Enterprise Edition, DBStack');
    INSERT INTO t_chunk(chunk) VALUES('PolarDB for PostgreSQL PostgreSQL and Oracle 100% compatible with MySQL and highly compatible with Oracle Shared storage and compute-storage decoupled architecture Public cloud, Apsara Stack Enterprise Edition, DBStack Cloud-native databases in the PostgreSQL ecosystem PolarDB-X MySQL Standard Edition is 100% compatible with MySQL and Enterprise Edition is highly compatible with MySQL shared nothing and distributed architecture Public cloud, Apsara Stack Enterprise Edition, DBStack');
    INSERT INTO t_chunk(chunk) VALUES('Architecture of PolarDB for MySQL and PolarDB for PostgreSQL PolarDB for MySQL and PolarDB for PostgreSQL both use an architecture of shared storage and compute-storage decoupling. They are featured by cloud-native architecture, integrated software and hardware, and shared distributed storage. Physical replication and RDMA are used between, the primary node and read-only nodes to reduce latency and accelerate data synchronization. This resolves the issue of non-strong data consistency caused by asynchronous replication and ensures zero data loss in case of single point of failure (SPOF). The architecture also enables node scaling within seconds.');
    INSERT INTO t_chunk(chunk) VALUES('Core components PolarProxy PolarDB uses PolarProxy to provide external services for the applications. PolarProxy forwards the requests from the applications to database nodes. You can use the proxy to perform authentication, data protection, and session persistence. The proxy parses SQL statements, sends write requests to the primary node, and evenly distributes read requests to multiple read-only nodes.Compute nodes A cluster contains one primary node and multiple read-only nodes. A cluster of Multi-master Cluster Edition (only for PolarDB for MySQL) supports multiple primary nodes and multiple read-only nodes. Compute nodes can be either general-purpose or dedicated. Shared storage Multiple nodes in a cluster share storage resources. A single cluster supports up to 500 TB of storage capacity.');
    INSERT INTO t_chunk(chunk) VALUES('Architecture benefits Large storage capacity The maximum storage capacity of a cluster is 500 TB. You do not need to purchase clusters for database sharding due to the storage limit of a single host. This simplifies application development and reduces the O&M workload.Cost-effectiveness PolarDB decouples computing and storage. You are charged only for the computing resources when you add read-only nodes to a PolarDB cluster. In traditional database solutions, you are charged for both computing and storage resources when you add nodes.Elastic scaling within minutes PolarDB supports rapid scaling for computing resources. This is based on container virtualization, shared storage, and compute-storage decoupling. It requires only 5 minutes to add or remove a node. The storage capability is automatically scaled up. During the scale-up process, your services are not interrupted.');
    INSERT INTO t_chunk(chunk) VALUES('Read consistency PolarDB uses log sequence numbers (LSNs) for cluster endpoints that have read/write splitting enabled. This ensures global consistency for read operations and prevents the inconsistency that is caused by the replication delay between the primary node and read-only nodes.Millisecond-level latency in physical replication PolarDB performs physical replication from the primary node to read-only nodes based on redo logs. The physical replication replaces the logical replication that is based on binary logs. This way, the replication efficiency and stability are improved. No delays occur even if you perform DDL operations on large tables, such as adding indexes or fields.Data backup within seconds Snapshots that are implemented based on the distributed storage can back up a database with terabytes of data in a few minutes. During the entire backup process, no locks are required, which ensures high efficiency and minimized impacts on your business. Data can be backed up anytime.');
    INSERT INTO t_chunk(chunk) VALUES('Architecture of PolarDB-X PolarDB-X uses an architecture of shared nothing and compute-storage decoupling. This architecture allows you to achieve hierarchical capacity planning based on your business requirements and implement mass scaling.Core components Global meta service (GMS): provides distributed metadata and a global timestamp distributor named Timestamp Oracle (TSO) and maintains meta information such as tables, schemas, and statistics. GMS also maintains security information such as accounts and permissions.Compute node (CN): provides a distributed SQL engine that contains core optimizers and executors. A CN uses a stateless SQL engine to provide distributed routing and computing and uses the two-phase commit protocol (2PC) to coordinate distributed transactions. A CN also executes DDL statements in a distributed manner and maintains global indexes.Data node (DN): provides a data storage engine. A data node uses Paxos to provide highly reliable storage services and uses multiversion concurrency control (MVCC) for distributed transactions. A data node also provides the pushdown computation feature to push down operators such as Project, Filter, Join, and Agg in distributed systems, and supports local SSDs and shared storage.Change data capture (CDC): provides a primary/secondary replication protocol that is compatible with MySQL. The primary/secondary replication protocol is compatible with the protocols and data formats that are supported by MySQL binary logging. CDC uses the primary/secondary replication protocol to exchange data.');
  3. Generate vector data. Create and call a custom model to vectorize text.

    UPDATE t_chunk SET embedding =  polar_ai.ai_text_embedding(chunk);
  4. Create indexes.

    • Create the vector index. L2 distance is used in this example. You can change the configuration if needed.

      CREATE INDEX ON t_chunk USING hnsw (embedding vector_l2_ops);
    • Create the full-text index.

      UPDATE t_chunk SET v = to_tsvector('english', chunk);
      
      CREATE INDEX ON t_chunk USING rum (v rum_tsvector_ops);

Data search

Dense search

Search data based only on the vector index. A smaller distance indicates higher similarity.

SELECT chunk, embedding <-> polar_ai.ai_text_embedding('What database engines does PolarDB provide')::vector(1536) AS dist
FROM t_chunk
ORDER BY dist ASC
LIMIT 5;

Sparse search

Search data based only on the full-text index. A smaller distance indicates higher similarity.

SELECT chunk, v <=> to_tsquery('english', 'PolarDB|PostgreSQL|efficiency') AS rank
FROM t_chunk
WHERE v @@ to_tsquery('english', 'PolarDB|PostgreSQL|efficiency')
ORDER BY rank ASC
LIMIT 5;

Hybrid search

Merge the results of both search methods to achieve multi-channel recall.

WITH t AS (
SELECT chunk, embedding <-> polar_ai.ai_text_embedding('What database engines does PolarDB provide')::vector(1536) AS dist
FROM t_chunk
ORDER BY dist ASC
LIMIT 5
),
t2 AS (
SELECT chunk, v <=> to_tsquery('english', 'PolarDB|PostgreSQL|efficiency') AS rank
FROM t_chunk
WHERE v @@ to_tsquery('english', 'PolarDB|PostgreSQL|efficiency')
ORDER BY rank ASC
LIMIT 5
)
SELECT * FROM t
UNION ALL
SELECT * FROM t2;

Because the two search methods use different distance metrics, the Reciprocal Rank Fusion (RRF) model is used for unified ranking. RRF is a method for combining multiple result sets with different relevance metrics into a single result set. It produces high-quality results without further tuning and without requiring the metrics to be correlated. It uses the following basic steps:

  1. Collecting rankings during the recall phase

    Multiple retrievers (various recall channels) generate ranked results for their respective queries.

  2. Rank fusion

    A simple scoring function (such as a reciprocal rank sum) is used to weight and merge the ranking positions from each retriever. The formula is as follows:

        RRFscore(d) = Σ (i = 1..n) 1 / (k + rank_i(d))

    In this formula, n is the number of different recall channels, rank_i(d) is the ranking position of document d returned by the i-th retriever, and k is a smoothing parameter that is typically set to 60.

  3. Comprehensive ranking

    Rerank documents based on the fused scores to generate the final results.
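The three steps above can be sketched in a few lines of Python. The result lists, document IDs, and the `rrf_fuse` helper are hypothetical, not output from the queries in this topic:

```python
# Minimal sketch of Reciprocal Rank Fusion (RRF). Each recall channel
# contributes 1 / (k + rank) per document; scores are summed, then sorted.
def rrf_fuse(result_lists, k=60):
    """Fuse ranked result lists (best first) into one list sorted by RRF score."""
    scores = {}
    for results in result_lists:
        for position, doc in enumerate(results, start=1):
            # A document absent from a channel simply gets no score from it.
            scores[doc] = scores.get(doc, 0.0) + 1.0 / (k + position)
    return sorted(scores.items(), key=lambda item: item[1], reverse=True)

# Hypothetical recall results: one dense channel, one sparse channel.
dense = ["doc2", "doc1", "doc3"]
sparse = ["doc1", "doc4", "doc2"]
fused = rrf_fuse([dense, sparse])
```

Because "doc1" appears near the top of both channels, it outranks "doc2", which leads only the dense channel.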

In the above query, you can adjust the smoothing parameter k to change the ranking if needed. Based on this parameter, the results of full-text search (sparse search) and vector search (dense search) are scored for each returned document according to the formula 1/(k + rank), where rank represents the position of the document in that result set. If a document in the results of full-text search does not appear in the results of vector search, that document receives only one score. The same applies to a document that is in the results of vector search but not in the results of full-text search. If a document appears in the results of both searches, the scores calculated from each search are added together.

Note

The smoothing parameter k determines how much influence low-ranking documents in a single result set have on the final ranking. The higher the value of k, the greater the influence of low-ranking documents.
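A quick numeric illustration of this note, using hypothetical rank positions and k values:

```python
# Contribution of a document at a given rank for smoothing parameter k.
def rrf_score(rank, k):
    return 1.0 / (k + rank)

# With a small k, rank 1 strongly dominates rank 5; with a large k,
# the gap narrows, so low-ranking documents matter relatively more.
gap_small_k = rrf_score(1, 10) / rrf_score(5, 10)    # ratio 15/11, about 1.36
gap_large_k = rrf_score(1, 600) / rrf_score(5, 600)  # ratio 605/601, about 1.007
```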

-- Dense vector recall
WITH t1 AS (
SELECT chunk, embedding <-> polar_ai.ai_text_embedding('What database engines does PolarDB provide')::vector(1536) AS dist
FROM t_chunk
ORDER BY dist ASC
LIMIT 5
),
t2 AS (
SELECT ROW_NUMBER() OVER (ORDER BY dist ASC) AS row_num,
       chunk
FROM t1
),
-- Sparse vector recall
t3 AS (
SELECT chunk, v <=> to_tsquery('english', 'PolarDB|PostgreSQL|efficiency') AS rank
FROM t_chunk
WHERE v @@ to_tsquery('english', 'PolarDB|PostgreSQL|efficiency')
ORDER BY rank ASC
LIMIT 5
),
-- A smaller rank value means higher similarity, so the best match gets row_num 1.
t4 AS (
SELECT ROW_NUMBER() OVER (ORDER BY rank ASC) AS row_num,
       chunk
FROM t3
),
-- Calculate RRF scores separately
t5 AS (
SELECT 1.0/(60+row_num) AS score, chunk FROM t2
UNION ALL
SELECT 1.0/(60+row_num), chunk FROM t4
)
-- Merge scores
SELECT sum(score) AS score, chunk
FROM t5
GROUP BY chunk
ORDER BY score DESC;

Weights for result sets

You can set different weights for different result sets, such as 0.8 for the dense search result set and 0.2 for the sparse search result set.

-- Dense search
WITH t1 AS (
SELECT chunk, embedding <-> polar_ai.ai_text_embedding('What database engines does PolarDB provide')::vector(1536) AS dist
FROM t_chunk
ORDER BY dist ASC
LIMIT 5
),
t2 AS (
SELECT ROW_NUMBER() OVER (ORDER BY dist ASC) AS row_num,
       chunk
FROM t1
),
-- Sparse search
t3 AS (
SELECT chunk, v <=> to_tsquery('english', 'PolarDB|PostgreSQL|efficiency') AS rank
FROM t_chunk
WHERE v @@ to_tsquery('english', 'PolarDB|PostgreSQL|efficiency')
ORDER BY rank ASC
LIMIT 5
),
-- A smaller rank value means higher similarity, so the best match gets row_num 1.
t4 AS (
SELECT ROW_NUMBER() OVER (ORDER BY rank ASC) AS row_num,
       chunk
FROM t3
),
-- Calculate RRF scores separately, with weights of 0.8 and 0.2 respectively
t5 AS (
SELECT (1.0/(60+row_num)) * 0.8 AS score, chunk FROM t2
UNION ALL
SELECT (1.0/(60+row_num)) * 0.2, chunk FROM t4
)
-- Merge scores
SELECT sum(score) AS score, chunk
FROM t5
GROUP BY chunk
ORDER BY score DESC;
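The weighted fusion in this query can be sketched the same way as plain RRF. The `weighted_rrf` helper and the document IDs below are hypothetical illustrations, not output from the query:

```python
# Weighted RRF sketch: each channel's 1 / (k + rank) contribution is scaled
# by that channel's weight before the scores are summed.
def weighted_rrf(channels, k=60):
    """channels: list of (ranked_doc_list, weight) pairs, best match first."""
    scores = {}
    for results, weight in channels:
        for position, doc in enumerate(results, start=1):
            scores[doc] = scores.get(doc, 0.0) + weight / (k + position)
    return sorted(scores.items(), key=lambda item: item[1], reverse=True)

# Dense channel weighted 0.8, sparse channel weighted 0.2, as in the SQL above.
ranked = weighted_rrf([(["doc2", "doc1"], 0.8), (["doc1", "doc2"], 0.2)])
```

With equal weights the two documents would tie; the 0.8 weight lets the dense channel's top result win.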