PolarDB for PostgreSQL supports multiple retrieval methods, including dense vector search, sparse vector search, and hybrid search.
Background
Dense vector search: retrieves data based on the semantic context of the query.
Sparse vector search: retrieves data by matching specific terms in the text, in the same way as full-text search.
Hybrid search: combines dense and sparse vector search to capture both the context and specific keywords, producing comprehensive search results.
Best practices
Prepare data
Use a privileged account to install the required extensions.
CREATE EXTENSION IF NOT EXISTS rum;
CREATE EXTENSION IF NOT EXISTS vector;
CREATE EXTENSION IF NOT EXISTS polar_ai;
The preceding statements install the rum extension for full-text indexing, the vector extension for storing and indexing vectors, and the polar_ai extension for calling AI models.
Create a table and insert test data.
CREATE TABLE t_chunk(id serial, chunk text, embedding vector(1536), v tsvector); INSERT INTO t_chunk(chunk) VALUES('Unlock the Power of AI 1 million free tokens 88% Price Reduction Activate Now AI Search Contact Sales English Cart Console Log In Why Us Pricing Products Solutions Marketplace Developers Partners Documentation Services Model Studio PolarDB Filter in menu Product Overview Benefits Billing Announcements and Updates Getting Started User Guide Use Cases Developer Reference Support Home Page PolarDBProduct OverviewSearch for Help ContentProduct OverviewUpdated at: 2025-01-06 08:50ProductCommunityWhat is PolarDB?PolarDB is a new-generation database service that is developed by Alibaba Cloud. This service decouples computing from storage and uses integrated software and hardware. PolarDB is a secure and reliable database service that provides auto scaling within seconds, high performance, and mass storage. PolarDB is 100% compatible with MySQL and PostgreSQL and highly compatible with Oracle.'); INSERT INTO t_chunk(chunk) VALUES('PolarDB provides three engines: PolarDB for MySQL, PolarDB for PostgreSQL, and PolarDB-X. 
Years of best practices in Double 11 events prove that PolarDB can offer the flexibility of open source ecosystems and the high performance and security of commercial cloud-native databases.Database engine Ecosystem Compatibility Architecture Platform Scenario PolarDB for MySQL MySQL 100% compatible with MySQL Shared storage and compute-storage decoupled architecture Public cloud, Apsara Stack Enterprise Edition, DBStack'); INSERT INTO t_chunk(chunk) VALUES('PolarDB for PostgreSQL PostgreSQL and Oracle 100% compatible with MySQL and highly compatible with Oracle Shared storage and compute-storage decoupled architecture Public cloud, Apsara Stack Enterprise Edition, DBStack Cloud-native databases in the PostgreSQL ecosystem PolarDB-X MySQL Standard Edition is 100% compatible with MySQL and Enterprise Edition is highly compatible with MySQL shared nothing and distributed architecture Public cloud, Apsara Stack Enterprise Edition, DBStack'); INSERT INTO t_chunk(chunk) VALUES('Architecture of PolarDB for MySQL and PolarDB for PostgreSQL PolarDB for MySQL and PolarDB for PostgreSQL both use an architecture of shared storage and compute-storage decoupling. They are featured by cloud-native architecture, integrated software and hardware, and shared distributed storage. Physical replication and RDMA are used between, the primary node and read-only nodes to reduce latency and accelerate data synchronization. This resolves the issue of non-strong data consistency caused by asynchronous replication and ensures zero data loss in case of single point of failure (SPOF). The architecture also enables node scaling within seconds.'); INSERT INTO t_chunk(chunk) VALUES('Core components PolarProxy PolarDB uses PolarProxy to provide external services for the applications. PolarProxy forwards the requests from the applications to database nodes. You can use the proxy to perform authentication, data protection, and session persistence. 
The proxy parses SQL statements, sends write requests to the primary node, and evenly distributes read requests to multiple read-only nodes.Compute nodes A cluster contains one primary node and multiple read-only nodes. A cluster of Multi-master Cluster Edition (only for PolarDB for MySQL) supports multiple primary nodes and multiple read-only nodes. Compute nodes can be either general-purpose or dedicated.Shared storage Multiple nodes in a cluster share storage resources. A single cluster supports up to 500 TB of storage capacity.'); INSERT INTO t_chunk(chunk) VALUES('Architecture benefits Large storage capacity The maximum storage capacity of a cluster is 500 TB. You do not need to purchase clusters for database sharding due to the storage limit of a single host. This simplifies application development and reduces the O&M workload.Cost-effectiveness PolarDB decouples computing and storage. You are charged only for the computing resources when you add read-only nodes to a PolarDB cluster. In traditional database solutions, you are charged for both computing and storage resources when you add nodes.Elastic scaling within minutes PolarDB supports rapid scaling for computing resources. This is based on container virtualization, shared storage, and compute-storage decoupling. It requires only 5 minutes to add or remove a node. The storage capability is automatically scaled up. During the scale-up process, your services are not interrupted.'); INSERT INTO t_chunk(chunk) VALUES('Read consistency PolarDB uses log sequence numbers (LSNs) for cluster endpoints that have read/write splitting enabled. This ensures global consistency for read operations and prevents the inconsistency that is caused by the replication delay between the primary node and read-only nodes.Millisecond-level latency in physical replication PolarDB performs physical replication from the primary node to read-only nodes based on redo logs. 
The physical replication replaces the logical replication that is based on binary logs. This way, the replication efficiency and stability are improved. No delays occur even if you perform DDL operations on large tables, such as adding indexes or fields.Data backup within seconds Snapshots that are implemented based on the distributed storage can back up a database with terabytes of data in a few minutes. During the entire backup process, no locks are required, which ensures high efficiency and minimized impacts on your business. Data can be backed up anytime.'); INSERT INTO t_chunk(chunk) VALUES('Architecture of PolarDB-X PolarDB-X uses an architecture of shared nothing and compute-storage decoupling. This architecture allows you to achieve hierarchical capacity planning based on your business requirements and implement mass scaling.Core components Global meta service (GMS): provides distributed metadata and a global timestamp distributor named Timestamp Oracle (TSO) and maintains meta information such as tables, schemas, and statistics. GMS also maintains security information such as accounts and permissions.Compute node (CN): provides a distributed SQL engine that contains core optimizers and executors. A CN uses a stateless SQL engine to provide distributed routing and computing and uses the two-phase commit protocol (2PC) to coordinate distributed transactions. A CN also executes DDL statements in a distributed manner and maintains global indexes.Data node (DN): provides a data storage engine. A data node uses Paxos to provide highly reliable storage services and uses multiversion concurrency control (MVCC) for distributed transactions. A data node also provides the pushdown computation feature to push down operators such as Project, Filter, Join, and Agg in distributed systems, and supports local SSDs and shared storage.Change data capture (CDC): provides a primary/secondary replication protocol that is compatible with MySQL. 
The primary/secondary replication protocol is compatible with the protocols and data formats that are supported by MySQL binary logging. CDC uses the primary/secondary replication protocol to exchange data.');
Generate the vector data. Create and call a custom model to vectorize the text.
UPDATE t_chunk SET embedding = polar_ai.ai_text_embedding(chunk);
Create indexes.
Create a vector index. This example uses the L2 distance. You can modify the configuration as needed.
CREATE INDEX ON t_chunk USING hnsw (embedding vector_l2_ops);
Create a full-text index.
UPDATE t_chunk SET v = to_tsvector('english', chunk);
CREATE INDEX ON t_chunk USING rum (v rum_tsvector_ops);
Retrieve data
Dense vector search
Retrieve data based only on the vector index. A smaller distance indicates higher similarity.
SELECT chunk, embedding <-> polar_ai.ai_text_embedding('What database engines does PolarDB provide')::vector(1536) AS dist
FROM t_chunk
ORDER BY dist ASC
LIMIT 5;
Sparse vector search
Retrieve data based only on the full-text index. A smaller distance indicates higher similarity.
SELECT chunk, v <=> to_tsquery('english', 'PolarDB|PostgreSQL|efficiency') AS rank
FROM t_chunk
WHERE v @@ to_tsquery('english', 'PolarDB|PostgreSQL|efficiency')
ORDER BY rank ASC
LIMIT 5;
Hybrid search
Merge the results of the two search methods to implement multi-channel recall.
WITH t AS (
SELECT chunk, embedding <-> polar_ai.ai_text_embedding('What database engines does PolarDB provide')::vector(1536) AS dist
FROM t_chunk
ORDER BY dist ASC
LIMIT 5 ),
t2 AS (
SELECT chunk, v <=> to_tsquery('english', 'PolarDB|PostgreSQL|efficiency') AS rank
FROM t_chunk
WHERE v @@ to_tsquery('english', 'PolarDB|PostgreSQL|efficiency')
ORDER BY rank ASC
LIMIT 5
)
SELECT * FROM t
UNION ALL
SELECT * FROM t2;
Because the two distance metrics are not on a common scale, the Reciprocal Rank Fusion (RRF) model is used to produce a unified ranking. RRF combines multiple result sets that use different metrics into a single result set, and yields high-quality results without any further tuning of, or correlation between, the metrics. The basic steps are as follows:
Collect rankings in the recall phase
Multiple retrievers (different recall channels) each produce a sorted result set for the query.
Rank fusion
A simple scoring function (such as a sum of reciprocals) weights and merges the ranking positions from each retriever. The formula is as follows:
RRFscore(d) = Σ_{i=1}^{n} 1 / (k + rank_i(d))
In this formula, n is the number of recall channels, rank_i(d) is the ranking position of document d returned by the i-th retriever, and k is a smoothing parameter that is typically set to 60.
Comprehensive ranking
Re-rank the documents based on the fused scores to produce the final result.
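As a minimal sketch, the three steps above can be expressed in Python. The document IDs and ranked lists here are hypothetical; k = 60 is the usual smoothing parameter.

```python
# Sketch of Reciprocal Rank Fusion (RRF). The ranked lists below are
# hypothetical document IDs, best match first.
def rrf_fuse(rankings, k=60):
    scores = {}
    for ranked in rankings:
        for pos, doc in enumerate(ranked, start=1):
            # Each channel contributes 1 / (k + rank position).
            scores[doc] = scores.get(doc, 0.0) + 1.0 / (k + pos)
    # Re-rank by the fused score, highest first.
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

dense = ["A", "B", "C"]    # dense-vector recall channel, best first
sparse = ["B", "D", "A"]   # sparse (full-text) recall channel, best first
fused = rrf_fuse([dense, sparse])
# "B" ranks first: it is near the top of both channels.
```

A document that appears high in both channels accumulates two reciprocal-rank contributions, so it outranks a document that tops only one channel.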
In the following query, you can adjust parameters as needed, such as the smoothing parameter (set to 60 in this example).
-- Dense vector search
WITH t1 AS
(
SELECT chunk, embedding <-> polar_ai.ai_text_embedding('What database engines does PolarDB provide')::vector(1536) AS dist
FROM t_chunk
ORDER BY dist ASC
LIMIT 5
),
t2 AS (
SELECT ROW_NUMBER() OVER (ORDER BY dist ASC) AS row_num,
chunk
FROM t1
),
-- Sparse vector search
t3 AS
(
SELECT chunk, v <=> to_tsquery('english', 'PolarDB|PostgreSQL|efficiency') AS rank
FROM t_chunk
WHERE v @@ to_tsquery('english', 'PolarDB|PostgreSQL|efficiency')
ORDER BY rank ASC
LIMIT 5
),
t4 AS (
SELECT ROW_NUMBER() OVER (ORDER BY rank ASC) AS row_num,
chunk
FROM t3
),
-- Calculate the RRF score of each channel
t5 AS (
SELECT 1.0/(60+row_num) AS score, chunk FROM t2
UNION ALL
SELECT 1.0/(60+row_num), chunk FROM t4
)
-- Merge the scores
SELECT sum(score) AS score, chunk
FROM t5
GROUP BY chunk
ORDER BY score DESC;
Weights for result sets
You can assign different weights to different result sets, for example, 0.8 to the dense search result set and 0.2 to the sparse search result set.
-- Dense search
WITH t1 AS
(
SELECT chunk, embedding <-> polar_ai.ai_text_embedding('What database engines does PolarDB provide')::vector(1536) AS dist
FROM t_chunk
ORDER BY dist ASC
LIMIT 5
),
t2 AS (
SELECT ROW_NUMBER() OVER (ORDER BY dist ASC) AS row_num,
chunk
FROM t1
),
-- Sparse search
t3 AS
(
SELECT chunk, v <=> to_tsquery('english', 'PolarDB|PostgreSQL|efficiency') AS rank
FROM t_chunk
WHERE v @@ to_tsquery('english', 'PolarDB|PostgreSQL|efficiency')
ORDER BY rank ASC
LIMIT 5
),
t4 AS (
SELECT ROW_NUMBER() OVER (ORDER BY rank ASC) AS row_num,
chunk
FROM t3
),
-- Calculate the RRF scores of each channel, weighted 0.8 and 0.2
t5 AS (
SELECT (1.0/(60+row_num)) * 0.8 AS score, chunk FROM t2
UNION ALL
SELECT (1.0/(60+row_num)) * 0.2, chunk FROM t4
)
-- Merge the scores
SELECT sum(score) AS score, chunk
FROM t5
GROUP BY chunk
ORDER BY score DESC;
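The weighted variant can be sketched in Python as well. The document IDs are hypothetical; the weights 0.8 and 0.2 mirror the SQL above.

```python
# Sketch of weighted RRF: each recall channel's reciprocal-rank
# contribution is multiplied by the channel weight before summing.
def weighted_rrf(channels, k=60):
    scores = {}
    for ranked, weight in channels:
        for pos, doc in enumerate(ranked, start=1):
            scores[doc] = scores.get(doc, 0.0) + weight / (k + pos)
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

dense = ["A", "B", "C"]    # dense-vector recall channel, best first
sparse = ["B", "D", "A"]   # sparse (full-text) recall channel, best first
fused = weighted_rrf([(dense, 0.8), (sparse, 0.2)])
# With the dense channel weighted 0.8, its top hit "A" ranks first.
```

Skewing the weights toward the dense channel lets semantic matches dominate the final ranking, while keyword hits from the sparse channel still contribute to the fused score.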