AnalyticDB for PostgreSQL is suitable for the following common scenarios:

  1. Data is written in real time or imported in batches into instances to build the operational data store (ODS) layer. Data sources include log data, archive data, and streaming data. Typically, log data is generated by business applications and by online transaction processing (OLTP) databases that run the MySQL, SQL Server, PostgreSQL, or Oracle database engine. Archive data is stored in Object Storage Service (OSS) or Hadoop. Streaming data is generated by Apache Kafka or Apache Flink.
  2. Data is processed at the ODS layer to build the common data model (CDM) and application data service (ADS) layers. Typical operations include INSERT INTO SELECT and CREATE TABLE AS, as shown in the sketch after this list.
  3. Data in databases is queried and analyzed based on your business requirements, or is provided to downstream systems for consumption and processing. Typical query and analysis scenarios include business data applications, interactive analysis, and business intelligence (BI) reporting.
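
For example, a CDM-layer table can be derived from an ODS-layer table with CREATE TABLE AS and then appended to with INSERT INTO SELECT. This is only a minimal sketch; the table names (ods_orders, cdm_daily_sales) and columns are hypothetical:

    -- Build a CDM-layer aggregate table from ODS data (one-time creation).
    CREATE TABLE cdm_daily_sales AS
    SELECT order_date, shop_id, SUM(amount) AS total_amount, COUNT(*) AS order_cnt
    FROM ods_orders
    GROUP BY order_date, shop_id;

    -- Append newly arrived ODS data in later processing runs.
    INSERT INTO cdm_daily_sales
    SELECT order_date, shop_id, SUM(amount), COUNT(*)
    FROM ods_orders
    WHERE order_date = CURRENT_DATE
    GROUP BY order_date, shop_id;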

Data writes or batch import

AnalyticDB for PostgreSQL supports live data writes at low latency and batch imports at high throughput.

Live data writes
Specific operations for live data writes include INSERT ... ON CONFLICT (upsert), DELETE, and UPDATE. Live data can be written with high concurrency and low latency. Typically, data is synchronized from OLTP databases such as MySQL and SQL Server by using Data Transmission Service (DTS), or business applications write data to AnalyticDB for PostgreSQL directly. In other cases, messaging middleware or stream processing systems such as Kafka and Flink write data to AnalyticDB for PostgreSQL through sinks. AnalyticDB for PostgreSQL collects incoming data into batches before writing it, and its nearly linear scale-out architecture allows instances to support millions of read and write transactions per second (TPS).
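
A minimal upsert sketch is shown below. The user_events table, its columns, and the conflict key are hypothetical; ON CONFLICT requires a unique or primary key constraint on the target column:

    -- Insert a row, or update the existing row if the key already exists (upsert).
    INSERT INTO user_events (event_id, user_id, event_time, payload)
    VALUES (1001, 42, now(), '{"action": "click"}')
    ON CONFLICT (event_id)
    DO UPDATE SET event_time = EXCLUDED.event_time,
                  payload    = EXCLUDED.payload;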
Batch import
Specific operations for batch import include COPY from CSV files and INSERT INTO ... SELECT from OSS, HDFS, or MaxCompute external tables. The throughput of a single COPY operation can reach 50 MB/s, and multiple COPY operations can run concurrently. You can also use external tables to read data from external data sources, which loads data directly on the compute nodes. In this case, the overall throughput can reach 500 MB/s.
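
The following sketch shows both import paths. The file path, the sales_orders table, and the oss_orders_ext external table are hypothetical, and the external table is assumed to have been created against the external data source beforehand:

    -- Batch import from a CSV file (use \copy or COPY FROM STDIN when loading from a client).
    COPY sales_orders FROM '/data/orders_20240101.csv' WITH (FORMAT csv, HEADER true);

    -- Parallel load through an external table, executed on the compute nodes.
    INSERT INTO sales_orders
    SELECT * FROM oss_orders_ext;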

Data query and analysis

AnalyticDB for PostgreSQL supports indexing and sorting for multidimensional point queries and range queries with high concurrency and low latency. It also provides a vectorized execution engine, a cost-based optimizer (CBO), and column-oriented storage for complex analysis scenarios in which large volumes of data across multiple tables need to be joined and aggregated. For example, business data applications require fast queries on data at the ADS layer, interactive analytics on data at the ODS layer, and ad-hoc correlation analysis on data at the CDM layer. Queries for BI reporting and dashboard display can be accelerated by using features such as materialized views.
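
As an illustration of acceleration through materialized views, the following sketch precomputes an aggregate that a BI dashboard would otherwise calculate on every query. The table and view names are hypothetical:

    -- Precompute a dashboard aggregate once.
    CREATE MATERIALIZED VIEW mv_daily_gmv AS
    SELECT order_date, SUM(total_amount) AS gmv
    FROM cdm_daily_sales
    GROUP BY order_date;

    -- Refresh the precomputed result periodically, for example from a scheduled job.
    REFRESH MATERIALIZED VIEW mv_daily_gmv;

    -- Dashboards then read the small precomputed view instead of scanning the base table.
    SELECT order_date, gmv FROM mv_daily_gmv ORDER BY order_date DESC LIMIT 30;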