Simple Log Service (SLS) is compatible with the PostgreSQL protocol and supports connections through standard JDBC drivers. Use the psql client, a JDBC driver, or any tool that supports the PostgreSQL protocol to connect to SLS and execute SQL statements for reading, writing, and managing data.
Features
When you connect to SLS using the PostgreSQL protocol, you can:
Manage logstores using standard SQL syntax (create, modify, delete).
Analyze log data using PostgreSQL SQL syntax.
Write log data using INSERT statements.
Query logstore schema metadata using native PostgreSQL syntax and system tables, such as information_schema.
Limits
The following limits apply when you connect to SLS using the PostgreSQL protocol:
| Limitation | Description |
| --- | --- |
| Prepared statements | Prepared statement execution (parse/bind/execute in the PostgreSQL protocol) is not supported. If you use JDBC, explicitly set the query mode to simple, either through the preferQueryMode connection property or by appending preferQueryMode=simple to the JDBC URL. |
| Transactions | The BEGIN, COMMIT, and ROLLBACK statements are ignored and have no effect. |
| Primary keys and default values | Constraints such as primary keys and default values are not supported. |
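Because prepared statements are not supported, the pgjdbc driver must be switched to the simple query protocol. A minimal sketch of the two equivalent ways to do this (the class and method names here are illustrative; preferQueryMode is the standard pgjdbc connection parameter):

```java
import java.util.Properties;

public class SimpleModeConfig {
    // Build JDBC connection properties that force the simple query protocol,
    // which avoids the unsupported parse/bind/execute message flow.
    static Properties simpleModeProps(String accessKeyId, String accessKeySecret) {
        Properties props = new Properties();
        props.setProperty("user", accessKeyId);
        props.setProperty("password", accessKeySecret);
        props.setProperty("preferQueryMode", "simple"); // disable extended protocol
        return props;
    }

    public static void main(String[] args) {
        Properties props = simpleModeProps("LTAI5t***", "***");
        // Alternatively, append the parameter to the JDBC URL instead:
        String url = "jdbc:postgresql://{Endpoint}:5432/{Project_Name}?preferQueryMode=simple";
        System.out.println(props.getProperty("preferQueryMode") + " " + url);
    }
}
```

Either form is sufficient; setting both is harmless.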
Prerequisites
You have created a project.
You have obtained an AccessKey ID and an AccessKey secret. For more information, see Create an AccessKey.
You have installed a PostgreSQL client tool (such as psql) or another tool that supports the PostgreSQL protocol.
Connect to SLS
To connect to SLS using a PostgreSQL client, use the following URL format:
postgresql://{AccessKey_ID}:{AccessKey_Secret}@{Endpoint}:5432/{Project_Name}
# TLS is enabled by default (MD5 is used for authentication). To explicitly disable TLS
# (for example, if the client does not support it), set the sslmode parameter to disable.
postgresql://{AccessKey_ID}:{AccessKey_Secret}@{Endpoint}:5432/{Project_Name}?sslmode=disable

To connect to SLS using the PostgreSQL protocol through JDBC, use the following configuration:
import java.sql.Connection;
import java.sql.DriverManager;
import java.util.Properties;

import org.postgresql.PGProperty;

public class ConnectExample {
    private static final String URL = "jdbc:postgresql://{Endpoint}:5432/{Project_Name}?preferQueryMode=simple";
    private static final String AccessKey_ID = "xyz";
    private static final String AccessKey_Secret = "123456";

    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.setProperty(PGProperty.USER.getName(), AccessKey_ID);
        props.setProperty(PGProperty.PASSWORD.getName(), AccessKey_Secret);
        try (Connection conn = DriverManager.getConnection(URL, props)) {
            System.out.println("Connection successful!");
        }
    }
}

Parameter description:
| Parameter | Description | Example |
| --- | --- | --- |
| AccessKey_ID | The AccessKey ID of your Alibaba Cloud account or RAM user. | LTAI5t*** |
| AccessKey_Secret | The AccessKey secret of your Alibaba Cloud account or RAM user. | *** |
| Endpoint | The endpoint of SLS. For more information, see Endpoints. The endpoint must include the project name, as shown in the Example column. | {projectName}.cn-hangzhou.log.aliyuncs.com |
| Project_Name | The name of the project. | my-project |
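The parameters above can be assembled into the psql-style connection URL shown earlier. A minimal sketch (all values are illustrative placeholders; note that if the AccessKey secret contains URL-reserved characters, it may need to be percent-encoded before being embedded in the URL):

```java
public class SlsUrl {
    // Assemble the psql-style connection URL from the parameters described above.
    static String buildUrl(String accessKeyId, String accessKeySecret,
                           String endpoint, String project) {
        return String.format("postgresql://%s:%s@%s:5432/%s",
                accessKeyId, accessKeySecret, endpoint, project);
    }

    public static void main(String[] args) {
        // Placeholder credentials and endpoint, matching the Example column.
        String url = buildUrl("LTAI5t***", "***",
                "my-project.cn-hangzhou.log.aliyuncs.com", "my-project");
        System.out.println(url);
    }
}
```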
Manage tables
In the SLS PostgreSQL protocol, a table corresponds to an SLS logstore.
Create a table
Use the CREATE TABLE statement to create a logstore:
CREATE TABLE users (
id BIGINT,
name TEXT,
create_time TIMESTAMP
)
WITH (
shard_count = 4,
ttl = 14,
default_query_range = '1d',
full_text_search_enabled = true,
index_tokens = '&[ ]",''\',
log_time_field = 'create_time',
insert_any_allowed = false
);

Supported column types:
| PostgreSQL type | SLS type | Description |
| --- | --- | --- |
| int8/bigint/int4/integer/int2/smallint/oid | long | Integer types |
| text/varchar/char/bpchar/uuid | text | String types |
| float4/real/float8/double precision/numeric | double | Floating-point types |
| json | json | JSON type |
Table properties (WITH clause):
| Property name | Type | Description | Default value | Example |
| --- | --- | --- | --- | --- |
| shard_count | Integer | The number of shards in the logstore. | 4 | 4 |
| ttl | Integer | The data retention period, in days. | 365 | 14 |
| default_query_range | String | The default query time range. Format: {n}{d/m/s} (day/minute/second). | '1d' | '3d' |
| full_text_search_enabled | Boolean | Specifies whether to enable the full-text index. | false | true |
| index_tokens | String | The delimiters used for tokenization. | Default delimiters | '&[]",''' |
| log_time_field | String | The field to use as the log time when data is inserted into the logstore. The field type must be timestamp or long (epoch seconds, milliseconds, microseconds, or nanoseconds); the system automatically detects the unit. | - | 'create_time' |
| insert_any_allowed | Boolean | Specifies whether to allow writing arbitrary data without schema validation. Set this value to false to validate the schema of written data. | true | false |
| hot_ttl | Integer | The number of days that logstore data is retained in hot storage. After this period, data automatically transitions to the IA storage class. For more information, see Manage intelligent tiering. If this parameter is not set, data remains in hot storage for its entire lifecycle. | - | 7 |
| chn_columns | String | The columns that contain Chinese text. Indexes for these columns enable Chinese tokenization. Separate multiple column names with commas. Use __line__ to indicate the full-text index. Note: Enabling Chinese tokenization affects performance. Use it with caution. | - | '__line__,message,error' |
Modify a column
The following column modification operations are supported:
Add a column:

ALTER TABLE users ADD COLUMN change_time TEXT;

Rename a column:

ALTER TABLE users RENAME COLUMN change_time TO update_time;

Modify a column type:

ALTER TABLE users ALTER COLUMN update_time TYPE TIMESTAMP;

Delete a column:

ALTER TABLE users DROP COLUMN update_time;

Clear data
Use the TRUNCATE statement to clear all data from a table:
TRUNCATE TABLE users;

Note: After data is deleted, it takes about 30 seconds for the change to become visible in SQL queries.
Delete a table
Use the DROP TABLE statement to delete a table (which also deletes the corresponding logstore):
DROP TABLE users;

Query data
Basic query
Use the standard SQL SELECT statement to query data:
SELECT id, name FROM users LIMIT 10;

Note: If no time range is specified in the SQL statement, the default time range defined by default_query_range is used. If default_query_range was not set when the table was created, the query defaults to the last 15 minutes of data.
Specify a query time range
Specify a query time range in one of the following two ways:
Method 1: Use the __time__ field
-- Specify the lower limit of the time range
SELECT id, name FROM users
WHERE __time__ > 1766646000
LIMIT 10;
-- Specify the time range
SELECT id, name FROM users
WHERE __time__ > 1766646000 AND __time__ < 1766646987
LIMIT 10;

Note: The value of __time__ is a Unix timestamp in seconds. You must set a lower limit for the time range to take effect. If no upper limit is set, the end time defaults to the current time.

Method 2: Use the query_begin and query_end variables
-- Set the lower limit of the time range
SET query_begin = 1766646000;
SELECT id, name FROM users LIMIT 10;
-- Set the time range
SET query_begin = 1766646000;
SET query_end = 1766646987;
SELECT id, name FROM users LIMIT 10;
-- Set the variables in a single statement (recommended)
SET query_begin = 1766646000; SET query_end = 1766646987; SELECT id, name FROM users LIMIT 10;

Note: Once set, the query_begin and query_end variables remain valid for the entire connection session. Run SHOW query_begin or SHOW query_end to view their current values. A value of 0 means the variable has not been set.
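Since query_begin and query_end are Unix timestamps in seconds, a client typically computes them relative to the current time before issuing the statements. A minimal sketch (the class and method names are illustrative):

```java
import java.time.Instant;

public class QueryRange {
    // Compute {query_begin, query_end} as Unix-second timestamps covering
    // the last windowSeconds seconds, ending at the given instant.
    static long[] lastSeconds(long nowEpochSeconds, long windowSeconds) {
        return new long[] { nowEpochSeconds - windowSeconds, nowEpochSeconds };
    }

    public static void main(String[] args) {
        long now = Instant.now().getEpochSecond();
        long[] range = lastSeconds(now, 3600); // the last hour
        // Combine the SET statements and the query into one statement string,
        // as recommended above.
        String sql = "SET query_begin = " + range[0]
                + "; SET query_end = " + range[1]
                + "; SELECT id, name FROM users LIMIT 10;";
        System.out.println(sql);
    }
}
```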
Write data
Basic write
Use the INSERT INTO statement to write data:
INSERT INTO users (id, name, create_time)
VALUES
(1, 'Name#1', '2025-12-25T06:54:19'),
(2, 'Name#2', '2025-12-25T06:54:20'),
(3, 'Name#3', '2025-12-25T06:54:21');

EXACTLY-ONCE writes
The SLS PostgreSQL protocol does not support uniqueness constraints such as primary keys, but it supports EXACTLY-ONCE writes to ensure data is written only once.
Use the hash_key and seq_id variables to implement EXACTLY-ONCE writes:
SET hash_key = 'file#1';
SET seq_id = 1;
INSERT INTO users (id, name, create_time)
VALUES
(1, 'Name#1', '2025-12-25T06:54:19'),
(2, 'Name#2', '2025-12-25T06:54:20'),
(3, 'Name#3', '2025-12-25T06:54:21');

Parameter description:
| Parameter | Description |
| --- | --- |
| hash_key | A unique key that identifies the data source. |
| seq_id | The sequence number. It must start from 1 and increase monotonically for the same hash_key. |
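When a write is rejected as a duplicate, the server error (see the format described below) carries the current sequence_id, which the writer can parse to decide where to resume. A minimal sketch of that recovery logic, using the regular expression given in this section (the class and method names are illustrative):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class ExactlyOnceRecovery {
    // Regular expression from this article for extracting the server-side
    // sequence id from a duplicate-write error message.
    private static final Pattern CURRENT_SEQ =
            Pattern.compile("current\\s+sequence_id=(\\d+)");

    // Returns the server's current sequence id, or -1 if the message
    // does not match the duplicate-write error format.
    static long parseCurrentSeqId(String errorMessage) {
        Matcher m = CURRENT_SEQ.matcher(errorMessage);
        return m.find() ? Long.parseLong(m.group(1)) : -1;
    }

    public static void main(String[] args) {
        // Sample error string taken from the format shown in this section.
        String error = "error: sequence id is lower than expected, "
                + "hash_key=xyz,current sequence_id=100,requested sequence_id=90";
        long current = parseCurrentSeqId(error);
        // Resume writing with seq_id = current + 1 for the same hash_key.
        System.out.println("next seq_id = " + (current + 1));
    }
}
```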
Duplicate data error handling:
When duplicate data is written, the server returns an error message in the following format:
error: sequence id is lower than expected, hash_key=xyz,current sequence_id=100,requested sequence_id=90

Use the following regular expression to extract the current seq_id from the server response:

current\s+sequence_id=(\d+)

Query metadata
Query all logstores
SELECT
table_schema,
table_name
FROM
information_schema.tables;

Note: By default, each logstore is mapped to a table under a schema that has the same name as the project.
Query the field structure
SELECT
column_name,
data_type
FROM
information_schema.columns
WHERE
table_name = 'access_log';