ApsaraDB RDS for PostgreSQL allows you to use event triggers to implement features such as a DDL recycle bin, a DDL firewall, and incremental DDL synchronization. These features help you reduce maintenance costs and protect data security.
Prerequisites
The RDS instance runs PostgreSQL and is equipped with standard SSDs or enhanced SSDs (ESSDs).
Background information
You can create DDL recycle bin and firewall policies based on event triggers to ensure database security. This allows you to:
Prevent risky operations, such as DROP TABLE, DROP INDEX, and DROP DATABASE.
Retrieve data from the recycle bin if a table is deleted by mistake.
You can use two event triggers, pg_get_ddl_command and pg_get_ddl_drop, to collect executed DDL statements and save them to the dts_audit.dts_tb_ddl_command table. Both event triggers are implemented by the dts_audit.dts_func_ddl_command() function. The structure of the dts_audit.dts_tb_ddl_command table is as follows:
Column          | Type                        | Collation | Nullable | Default            | Storage  | Stats target | Description
-----------------+-----------------------------+-----------+----------+--------------------+----------+--------------+-------------
event           | text                        |           |          |                    | extended |              | Event that fired the trigger (ddl_command_end or sql_drop)
tag             | text                        |           |          |                    | extended |              | Command tag
classid         | oid                         |           |          |                    | plain    |              | OID of the catalog the object belonged in
objid           | oid                         |           |          |                    | plain    |              | OID the object had within the catalog
objsubid        | integer                     |           |          |                    | plain    |              | Object sub-ID (for example, the attribute number for a column)
object_type     | text                        |           |          |                    | extended |              | Type of the object
schema_name     | text                        |           |          |                    | extended |              | Name of the schema the object belonged in, if any; otherwise NULL. No quoting is applied.
object_identity | text                        |           |          |                    | extended |              | Text rendering of the object identity, schema-qualified
is_extension    | boolean                     |           |          |                    | plain    |              | True if the command is part of an extension script
query           | text                        |           |          |                    | extended |              | SQL text of the executed statement
username        | text                        |           |          | CURRENT_USER       | extended |              | User that executed the statement
datname         | text                        |           |          | current_database() | extended |              | Database in which the statement was executed
client_addr     | inet                        |           |          | inet_client_addr() | main     |              | IP address of the client
crt_time        | timestamp without time zone |           |          | now()              | plain    |              | Time when the record was created
The required CREATE statements are as follows:
CREATE SCHEMA IF NOT EXISTS dts_audit;
CREATE TABLE IF NOT EXISTS dts_audit.dts_tb_ddl_command (
    event text,
    tag text,
    classid oid,
    objid oid,
    objsubid int,
    object_type text,
    schema_name text,
    object_identity text,
    is_extension bool,
    query text,
    username text DEFAULT current_user,
    datname text DEFAULT current_database(),
    client_addr inet DEFAULT inet_client_addr(),
    crt_time timestamp DEFAULT now()
);
The following sections provide examples of how to implement the DDL recycle bin, the DDL firewall, and incremental DDL synchronization. You can modify the code as needed.
DDL recycle bin
Execute the following statements to create the required table, function, and triggers:
/* external/rds_ddl_pulication/rds_ddl_pulication--1.0.sql */

-- Create the schema.
CREATE SCHEMA IF NOT EXISTS dts_audit;

-- Create the table that records DDL statements.
CREATE TABLE IF NOT EXISTS dts_audit.dts_tb_ddl_command (
    event text,
    tag text,
    classid oid,
    objid oid,
    objsubid int,
    object_type text,
    schema_name text,
    object_identity text,
    is_extension bool,
    query text,
    username text DEFAULT current_user,
    datname text DEFAULT current_database(),
    client_addr inet DEFAULT inet_client_addr(),
    crt_time timestamp DEFAULT now()
);

-- Create the function that is used by the event triggers.
CREATE OR REPLACE FUNCTION dts_audit.dts_func_ddl_command() RETURNS event_trigger AS $$
DECLARE
    v1 text;
    is_superuser bool = false;
    r record;
BEGIN
    -- Do not record DDL statements that are executed by superusers.
    SELECT u.rolsuper INTO is_superuser FROM pg_catalog.pg_roles u WHERE u.rolname = SESSION_USER;
    IF is_superuser THEN
        RETURN;
    END IF;

    SELECT query INTO v1 FROM pg_stat_activity WHERE pid = pg_backend_pid();
    -- RAISE NOTICE 'ddl event:%, command:%', tg_event, tg_tag;

    -- Note: because ddl_command_end cannot collect the details of DROP statements,
    -- the sql_drop event is used for them.
    IF TG_EVENT = 'ddl_command_end' THEN
        SELECT * INTO r FROM pg_event_trigger_ddl_commands();
        IF r.classid > 0 THEN
            INSERT INTO dts_audit.dts_tb_ddl_command(event, tag, classid, objid, objsubid, object_type, schema_name, object_identity, is_extension, query)
            VALUES (TG_EVENT, TG_TAG, r.classid, r.objid, r.objsubid, r.object_type, r.schema_name, r.object_identity, r.in_extension, v1);
        END IF;
    END IF;

    IF TG_EVENT = 'sql_drop' THEN
        -- To avoid duplicate records, ALTER TABLE and ALTER FOREIGN TABLE are filtered out.
        IF TG_TAG != 'ALTER TABLE' AND TG_TAG != 'ALTER FOREIGN TABLE' THEN
            SELECT * INTO r FROM pg_event_trigger_dropped_objects();
            INSERT INTO dts_audit.dts_tb_ddl_command(event, tag, classid, objid, objsubid, object_type, schema_name, object_identity, is_extension, query)
            VALUES (TG_EVENT, TG_TAG, r.classid, r.objid, r.objsubid, r.object_type, r.schema_name, r.object_identity, 'f', v1);
        END IF;
    END IF;
END;
$$ LANGUAGE plpgsql STRICT;

-- Create the ddl_command_end event trigger.
CREATE EVENT TRIGGER pg_get_ddl_command ON ddl_command_end EXECUTE PROCEDURE dts_audit.dts_func_ddl_command();

-- Create the sql_drop event trigger.
CREATE EVENT TRIGGER pg_get_ddl_drop ON sql_drop EXECUTE PROCEDURE dts_audit.dts_func_ddl_command();

-- Grant privileges to all users.
GRANT USAGE ON SCHEMA dts_audit TO PUBLIC;
GRANT SELECT, INSERT ON TABLE dts_audit.dts_tb_ddl_command TO PUBLIC;
Note: After you execute the preceding statements, DDL statements are recorded in the dts_audit.dts_tb_ddl_command table.
Execute a DDL statement to test whether it is recorded in the dts_audit.dts_tb_ddl_command table.
create table tb_test(id int);
select * from dts_audit.dts_tb_ddl_command;
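To retrieve information about a table that was dropped by mistake, you can query the recorded DROP statements. The following query is a hypothetical lookup based on the table structure above; adjust the filter as needed:

```sql
-- List recently dropped tables, newest first.
SELECT object_identity, username, client_addr, crt_time
FROM dts_audit.dts_tb_ddl_command
WHERE event = 'sql_drop' AND tag = 'DROP TABLE'
ORDER BY crt_time DESC;
```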
DDL firewall
You can create event triggers as needed and use the ddl_command_start event to block the execution of specific DDL statements.
Create a trigger function.
CREATE OR REPLACE FUNCTION abort1() RETURNS event_trigger
LANGUAGE plpgsql AS $$
BEGIN
    IF current_user = 'test1' THEN
        RAISE EXCEPTION 'event:%, command:%', tg_event, tg_tag;
    END IF;
END;
$$;
Create a trigger to block DDL statements that create or delete tables.
create event trigger b on ddl_command_start
    when TAG IN ('CREATE TABLE', 'DROP TABLE')
    execute procedure abort1();
Log on to the RDS instance as user test1 and try to create a table.
Note: The DDL statement is blocked.
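For example, if user test1 attempts to create a table, the abort1() function raises an exception and the statement fails. The error message below follows the format defined in the trigger function:

```sql
-- Executed as user test1:
CREATE TABLE tb_test2(id int);
-- ERROR:  event:ddl_command_start, command:CREATE TABLE
```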
Incremental synchronization
On the publication side, executed DDL statements are stored in the dts_audit.dts_tb_ddl_command table. You can synchronize the records from the publication side to the subscription side.
Create a publication on the publication side.
CREATE PUBLICATION my_ddl_publication FOR TABLE ONLY dts_audit.dts_tb_ddl_command;
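You can optionally confirm that the publication exists by querying the pg_publication catalog on the publication side:

```sql
SELECT * FROM pg_publication WHERE pubname = 'my_ddl_publication';
```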
Create the same table on the subscription side.
CREATE SCHEMA IF NOT EXISTS dts_audit; CREATE TABLE IF NOT EXISTS dts_audit.dts_tb_ddl_command ( event text, tag text, classid oid, objid oid, objsubid int, object_type text, schema_name text, object_identity text, is_extension bool, query text, username text, datname text, client_addr inet , crt_time timestamp );
Create a subscription on the subscription side.
CREATE SUBSCRIPTION my_ddl_subscription CONNECTION 'host=*** port=*** user=*** password=*** dbname=***' PUBLICATION my_ddl_publication;
NoteMake sure that the wal_level parameter is set to logical for your RDS instance. You can change the parameter value on the Parameters page in the ApsaraDB for RDS console and restart the instance for the change to take effect. For more information, see Use the logical subscription feature.
Example:
CREATE SUBSCRIPTION my_ddl_subscription CONNECTION 'host=pgm-bpxxxxx.pg.rds.aliyuncs.com port=1433 user=test1 password=xxxxx dbname=testdb1' PUBLICATION my_ddl_publication;
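After you create the subscription, you can verify the setup. The following checks are optional; they use standard PostgreSQL catalogs and views:

```sql
-- On the publication side, confirm that logical decoding is enabled:
SHOW wal_level;   -- expected value: logical

-- On the subscription side, check the replication status:
SELECT subname, received_lsn, latest_end_lsn FROM pg_stat_subscription;
```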
Create a trigger for the dts_audit.dts_tb_ddl_command table on the subscription side to replay the synchronized DDL statements.
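The step above can be sketched as follows. This is a minimal, hypothetical example that replays every recorded statement verbatim; in practice you may want to filter by tag or add error handling. Note that by default, row triggers on the subscription side do not fire for rows that are applied by logical replication, so the trigger must be set to ENABLE ALWAYS.

```sql
-- Trigger function that replays the recorded DDL statement (hypothetical sketch).
CREATE OR REPLACE FUNCTION dts_audit.dts_func_replay_ddl() RETURNS trigger
LANGUAGE plpgsql AS $$
BEGIN
    EXECUTE NEW.query;  -- replay the DDL statement captured on the publication side
    RETURN NEW;
END;
$$;

CREATE TRIGGER tg_replay_ddl AFTER INSERT ON dts_audit.dts_tb_ddl_command
    FOR EACH ROW EXECUTE PROCEDURE dts_audit.dts_func_replay_ddl();

-- Required so that the trigger also fires for rows written by the
-- logical replication apply worker.
ALTER TABLE dts_audit.dts_tb_ddl_command ENABLE ALWAYS TRIGGER tg_replay_ddl;
```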