Before you perform a performance assessment, you must obtain the traffic files of the corresponding RDS MySQL instance, PolarDB MySQL cluster, or PolarDB-X 1.0 instance.
Background
Log Audit Service is a feature of Simple Log Service (SLS) that allows you to collect logs of cloud products such as storage, network, and database services. Collected logs are automatically stored in the corresponding Logstore or Metricstore.
Log Audit Service also allows you to quickly enable log collection. After audit log collection is enabled, SQL Insight (or SQL Audit) is automatically enabled for eligible PolarDB MySQL clusters and RDS instances.
Prerequisites
You have registered an Alibaba Cloud account.
We recommend that you use an Alibaba Cloud Resource Access Management (RAM) user and grant the AliyunLogFullAccess and AliyunRAMFullAccess management permissions to the RAM user. You can also grant permissions based on your actual needs. For more information, see Grant a RAM user the permissions to perform operations on Log Audit Service.
You have created an RDS instance, PolarDB cluster, or PolarDB-X 1.0 instance.
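If you prefer to grant these permissions programmatically instead of in the RAM console, the following sketch shows one possible way to attach the two system policies by using the classic aliyun-python-sdk-ram package. The credentials and RAM user name are placeholders, not part of the original procedure.

```python
# Minimal sketch: attach the AliyunLogFullAccess and AliyunRAMFullAccess system
# policies to a RAM user with the classic Alibaba Cloud Python SDK.
# Assumptions: aliyun-python-sdk-core and aliyun-python-sdk-ram are installed,
# and the credentials used here already have RAM administration permissions.
from aliyunsdkcore.client import AcsClient
from aliyunsdkram.request.v20150501.AttachPolicyToUserRequest import AttachPolicyToUserRequest

client = AcsClient("<access-key-id>", "<access-key-secret>", "cn-hangzhou")

for policy_name in ("AliyunLogFullAccess", "AliyunRAMFullAccess"):
    request = AttachPolicyToUserRequest()
    request.set_PolicyType("System")          # both are Alibaba Cloud system policies
    request.set_PolicyName(policy_name)
    request.set_UserName("<ram-user-name>")   # placeholder RAM user name
    client.do_action_with_exception(request)
    print(f"Attached {policy_name}")
```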
Initial configuration
You need to perform this operation only once.
You must perform this operation by using an account that has the AliyunRAMFullAccess permission.
Log on to the SLS console.
In the Log Application section, click the Audit & Security tab. Then, click Log Audit Service.

Grant permissions as prompted.
After you complete the authorization, Log Audit Service uses the AliyunServiceRoleForSLSAudit service-linked role to collect logs from cloud products.
Enable log collection
Enable SQL audit log collection.
Go to the Log Audit Service page.
In the left-side navigation pane, choose Access to Cloud Products > Global Configurations.
On the Global Configurations page, click Modify in the upper-right corner.

Select the target region for centralized log storage from the Region of Central Project drop-down list.

In the cloud product list, toggle on SQL Audit Logs for an RDS MySQL instance or PolarDB-X 1.0 instance, or Audit Logs for a PolarDB MySQL instance.

Confirm the information in the Note dialog box and then click Go to Collection Policy.
In the Configure Collection Policy dialog box, configure collection policies.
Simple Log Service (SLS) allows you to configure collection policies by using the Default Collection Policy or Advanced Edit Mode option. For more information, see Configure log collection policies.
Important: If you turn on Default Collection Policy, logs are collected from all instances in all regions by default, which may incur unexpected charges. We recommend that you turn on Advanced Edit Mode and enable audit only for the specific instances that you need.
In the Policies to Add section, select Instance ID as the property and Exact Match as the operator. In the right-side text box, enter the ID of the RDS MySQL instance, PolarDB MySQL instance, or PolarDB-X 1.0 instance whose logs are to be collected.
The property settings here are for reference only. You can modify them based on your actual needs. The following table describes the properties that are supported for each cloud product.

| Cloud product | Log source | Parameter | Description |
| --- | --- | --- | --- |
| RDS | RDS instance | Account: account.id | The ID of the Alibaba Cloud account to which the RDS instance belongs. |
|  |  | Region: region | The region where the RDS instance is located. Example: cn-hangzhou. |
|  |  | Instance ID: instance.id | The ID of the RDS instance. |
|  |  | Instance Name: instance.name | The name of the RDS instance. |
|  |  | Database Type: instance.db_type | The type of the database. |
|  |  | Database Version: instance.db_version | The version of the database. Example: 8.0. |
|  |  | Tag: tag.* | The custom tag name. You can replace the asterisk (*) in tag.* with a custom tag name. |
| PolarDB | PolarDB cluster | Account: account.id | The ID of the Alibaba Cloud account to which the PolarDB cluster belongs. |
|  |  | Region: region | The region where the PolarDB cluster is located. Example: cn-hangzhou. |
|  |  | Cluster ID: cluster.id | The ID of the PolarDB cluster. |
|  |  | Cluster Name: cluster.name | The name of the PolarDB cluster. |
|  |  | DB Type Compatible with Cluster: cluster.db_type | The database type supported by the PolarDB cluster. Valid value: MySQL. |
|  |  | DB Version Compatible with Cluster: cluster.db_version | The version of the database. Valid values: 8.0, 5.7, and 5.6. |
|  |  | Tag: tag.* | The custom tag name. You can replace the asterisk (*) in tag.* with a custom tag name. |
| PolarDB-X 1.0 | PolarDB-X 1.0 instance | Account: account.id | The ID of the Alibaba Cloud account to which the PolarDB-X 1.0 instance belongs. |
|  |  | Region: region | The region where the PolarDB-X 1.0 instance is located. Example: cn-shanghai. |
|  |  | Instance ID: instance.id | The ID of the PolarDB-X 1.0 instance. |
|  |  | Instance Name: instance.name | The name of the PolarDB-X 1.0 instance. |
Click Add Policy in the lower right of the dialog box.
Confirm the settings and then click OK.
Go back to the Global Configurations page and click OK in the upper-right corner. Then, wait until the modification is completed.
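After the modification is complete and audit logs start to flow, you can optionally verify the setup by querying the central project with the aliyun-log-python-sdk. The following is a minimal sketch; the project and Logstore names are assumptions based on common Log Audit Service defaults (for example, a central project named slsaudit-center-<account-id>-<region> and an rds_log Logstore for RDS SQL audit logs). Check the actual names in your SLS console before running it.

```python
# Minimal sketch: confirm that SQL audit logs are arriving in the central
# Logstore created by Log Audit Service. Project and Logstore names are
# placeholders; replace them with the values shown in your SLS console.
import time
from aliyun.log import LogClient

client = LogClient("cn-hangzhou.log.aliyuncs.com",
                   "<access-key-id>", "<access-key-secret>")

project = "slsaudit-center-<account-id>-cn-hangzhou"  # central project (assumed naming)
logstore = "rds_log"                                  # RDS SQL audit Logstore (assumed)

now = int(time.time())
resp = client.get_log(project, logstore, from_time=now - 3600, to_time=now,
                      query="*", size=10)
print("logs returned in the last hour:", resp.get_count())
for log in resp.get_logs():
    print(log.get_time(), log.get_contents())
```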
Obtain the traffic files of an instance
You can obtain the traffic files of the target instance in two ways: by saving SLS traffic files to an Object Storage Service (OSS) bucket, or by downloading SQL audit logs to your local storage. The method that you use determines which option to select for Data Collected From on the Configure Data Collection page.

If you saved SLS traffic files to an OSS bucket, select OSS Import to import traffic files for performance assessment.
If you downloaded SQL audit logs to your local storage, select Upload File to import a traffic file for performance assessment.
Save SLS traffic files to an OSS bucket
The migration assessment service supports only traffic files that are shipped in real time. It does not support traffic files that SLS ships retroactively for historical data. For example, if you enable OSS shipping two hours after Log Audit Service is enabled, the migration assessment service cannot obtain the traffic files that were generated during those two hours.
Go back to the SLS console.
In the Projects section, click the name of the target project to go to the Logstores page.
For more information about how to create a project, see Manage a project.

In the left-side navigation pane, choose Data Processing > Export under the target Logstore, hover over Object Storage Service (OSS), and then click the + icon that appears.

In the Create Data Shipping Job dialog box, select OSS Export and click OK.
In the Data Shipping to OSS dialog box, configure the parameters to export logs to an OSS bucket. For more information, see Create an OSS data shipping job (new version).
Important: Observe the following rules when you configure the parameters in the Data Shipping to OSS dialog box:
File Delivery Directory is required.
Partition Format must be set to %Y/%m/%d/%H/%M.
Storage Format can be set only to json.
Compression can be set only to Compress(snappy).
After you configure the parameters, click OK.
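After the shipping job has run for a while, the traffic files appear in the OSS bucket under the delivery directory, partitioned by %Y/%m/%d/%H/%M. If you want to spot-check what was shipped, the following sketch lists and reads a few shipped objects with the oss2 package. It assumes the objects are snappy-compressed JSON lines (matching the json and Compress(snappy) settings above) and that python-snappy can decompress them; the bucket name, endpoint, and directory are placeholders.

```python
# Minimal sketch: list objects shipped by SLS under the delivery directory and
# print a few decompressed JSON log entries. Assumes "pip install oss2 python-snappy"
# and raw-snappy-compressed JSON line files; adjust if your objects differ.
import json
import oss2
import snappy  # python-snappy

auth = oss2.Auth("<access-key-id>", "<access-key-secret>")
bucket = oss2.Bucket(auth, "https://oss-cn-hangzhou.aliyuncs.com", "<bucket-name>")

prefix = "<file-delivery-directory>/"  # narrow further with a partition, e.g. ".../2024/05/01/"
for obj in oss2.ObjectIterator(bucket, prefix=prefix):
    data = bucket.get_object(obj.key).read()
    try:
        text = snappy.decompress(data).decode("utf-8")
    except Exception:
        text = data.decode("utf-8", errors="replace")  # in case the object is not compressed
    for line in text.splitlines()[:5]:   # show the first few log entries per object
        print(obj.key, json.loads(line))
    break  # remove this to walk every shipped object
```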
Download SQL audit logs
Go back to the SLS console.
In the Projects section, click the name of the target project to go to the Logstores page.
For more information about how to create a project, see Manage a project.

In the left-side navigation pane, click the name of the target Logstore to go to the log details page.
For more information about how to create a Logstore, see Manage a Logstore.
On the Raw Logs tab, click the download icon and select Download Log.

You can also specify a time range to query logs and statistics.

In the Download Log dialog box, select Offline Download and configure the parameters.
After you configure the parameters, click OK.
In the Download Tasks dialog box that appears, click Download.
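If the console's Offline Download does not fit your workflow, you can also pull the same audit logs over a time range with the aliyun-log-python-sdk and save them to a local file. The project name, Logstore name, output file, and time range below are placeholders; this is a sketch of one alternative, not the console download feature itself.

```python
# Minimal sketch: dump audit logs for a time range from a Logstore to a local
# JSON-lines file. Project and Logstore names are placeholders.
import json
import time
from aliyun.log import LogClient

client = LogClient("cn-hangzhou.log.aliyuncs.com",
                   "<access-key-id>", "<access-key-secret>")

project = "<central-project>"      # e.g. the Log Audit central project
logstore = "<audit-logstore>"      # e.g. the RDS or PolarDB audit Logstore
to_time = int(time.time())
from_time = to_time - 24 * 3600    # last 24 hours (adjust as needed)

with open("sql_audit_logs.jsonl", "w", encoding="utf-8") as f:
    # get_log_all pages through all matching logs as a generator of responses
    for resp in client.get_log_all(project, logstore, from_time, to_time, query="*"):
        for log in resp.get_logs():
            f.write(json.dumps(log.get_contents(), ensure_ascii=False) + "\n")
```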
