If the number of I/O events remains large over a sustained period, an I/O bottleneck has occurred. In this case, you must collect the relevant data and analyze the issue.
After an I/O bottleneck occurs, use one of the following three methods to analyze the issue:
- Use the polar_stat_activity_rt view to check the processes for which the number of I/O events is large.
  SELECT backend_type, SUM(local_read_iops + local_write_iops) AS iops, SUM(local_read_iothroughput + local_write_iothroughput) AS throughput, SUM(local_read_latency + local_write_latency) AS latency FROM polar_stat_activity_rt GROUP BY backend_type ORDER BY iops DESC; \watch 1
  - If the number of I/O events for the client backend process is large, use the polar_stat_activity view to check the SQL statements and wait events that cause the issue.
    SELECT query, COUNT(*) AS wait_count FROM polar_stat_activity WHERE state='active' AND backend_type='client backend' AND wait_event_type='IO' GROUP BY query ORDER BY wait_count DESC; \watch 1
  - If the number of I/O events for the checkpoint process is large, a large amount of dirty data is being flushed to storage during checkpoints.
  - If the number of I/O events for the bgwriter process is large, data is being written to storage at a low rate.
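For the checkpoint and bgwriter cases, the standard PostgreSQL cumulative statistics view pg_stat_bgwriter offers a quick way to confirm where the writes originate (the column names below apply to PostgreSQL 16 and earlier; in later versions the checkpoint counters moved to pg_stat_checkpointer); a minimal sketch:

```sql
-- Cumulative write activity of the checkpointer and the background writer.
-- buffers_checkpoint: buffers written during checkpoints.
-- buffers_clean:      buffers written by the bgwriter.
-- maxwritten_clean:   times the bgwriter stopped a round after hitting its write limit.
SELECT checkpoints_timed, checkpoints_req,
       buffers_checkpoint, buffers_clean, maxwritten_clean
FROM pg_stat_bgwriter;
```

Sampling this view twice and diffing the counters shows the current write rate of each process.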
  - If the number of I/O events for the vacuum process is large, use the pg_stat_progress_vacuum view to check the progress of vacuuming tables.
    SELECT * FROM pg_stat_progress_vacuum;
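To see at a glance how far each running vacuum has progressed, pg_stat_progress_vacuum can be joined with pg_class to resolve table names; a sketch that uses only standard PostgreSQL columns:

```sql
-- Per-vacuum progress: table name, current phase, and percentage of heap blocks scanned.
-- NULLIF guards against division by zero while heap_blks_total is not yet known.
SELECT p.pid,
       c.relname,
       p.phase,
       round(100.0 * p.heap_blks_scanned / NULLIF(p.heap_blks_total, 0), 1) AS pct_scanned
FROM pg_stat_progress_vacuum p
JOIN pg_class c ON c.oid = p.relid;
```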
- Query the polar_stat_io_info view, which records I/O monitoring information by file type. For more information, see polar_stat_io_info.
- Query the polar_stat_io_latency view, which records I/O latency information for each file type by file operation type. For more information, see polar_stat_io_latency.
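Because the column layouts of these two views are PolarDB-specific, a safe starting point is to sample them without assuming particular column names:

```sql
-- Per-file-type I/O counters (IOPS, throughput, and so on).
SELECT * FROM polar_stat_io_info;

-- Per-file-type, per-operation I/O latency information.
SELECT * FROM polar_stat_io_latency;
```

In psql, appending \watch 1 to either query re-runs it every second so that counter deltas become visible.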