The access log provided by Anti-DDoS Pro is integrated with Alibaba Cloud Log Service to provide real-time analysis and reporting. The access log contains entries for HTTP flood attacks. The full log service is a value-added service and must be purchased separately. After you activate the full log service, Log Service collects access logs and attack logs in real time. You can query and analyze the log data collected by Anti-DDoS Pro, and the results are displayed on dashboards.
The full log service lets you query and analyze log data on dashboards, which helps you flexibly analyze and monitor your website services. After you activate full log, you can also consume and ship logs through Log Service, which helps you manage the website access logs collected by Anti-DDoS Pro.
Log Service is an all-in-one logging service developed by Alibaba Cloud that has been tested in a wide array of big data scenarios. Log Service helps you quickly collect, consume, ship, query, and analyze log data without development work. It improves O&M and operational efficiency, and provides the capability to process large volumes of data. For more information, see What is Log Service.
Activate the full log service
- Log on to the Anti-DDoS Pro console.
- In the left-side navigation pane, choose .
- On the Log Service page, click Buy Now.
- On the buy page of Log Service, set Applicable Product to Anti-DDoS Pro, and select the specification as needed.
- Log Storage: the log storage capacity. Unit: TB. After the log storage capacity is exhausted, new logs cannot be stored. We recommend that you monitor the remaining log storage space and expand the storage space as needed.
- Duration: the validity period of the full log service. After the full log service expires, new logs cannot be stored. If you do not renew the full log service within seven days after it expires, all log data will be automatically deleted.
The full log service is charged at RMB 500 per TB of log storage per month of service duration. Note If the full log service has sufficient storage capacity within the validity period, it stores logs of the last 180 days only. Starting from day 181, new logs overwrite the oldest logs.
Example of how to select a log storage capacity
Typically, each request log occupies about 2 KB of storage space. If the average request volume of your workload is 500 queries per second (QPS), the storage space required for one day is 500 x 60 x 60 x 24 x 2 = 86,400,000 KB (about 82.4 GB). By default, full log stores logs of the last 180 days, so the total storage required is about 82.4 GB x 180 = 14,832 GB. Therefore, you need to select a log storage capacity of about 14.5 TB.
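The sizing arithmetic above can be sketched as a small helper. The 2 KB average log size and 180-day retention are taken from the example; adjust them to your own workload.

```python
# Rough log-storage sizing, following the example above.
# Assumptions: ~2 KB per request log, 180-day retention window.
AVG_LOG_SIZE_KB = 2
RETENTION_DAYS = 180

def required_storage_gb(avg_qps: float) -> float:
    """Approximate storage (GB) needed to keep logs for the retention window."""
    kb_per_day = avg_qps * 60 * 60 * 24 * AVG_LOG_SIZE_KB
    gb_per_day = kb_per_day / 1024 / 1024
    return gb_per_day * RETENTION_DAYS

daily_kb = 500 * 60 * 60 * 24 * AVG_LOG_SIZE_KB   # 86,400,000 KB at 500 QPS
print(round(daily_kb / 1024 / 1024, 1))           # ~82.4 GB per day
print(round(required_storage_gb(500)))            # ~14832 GB, i.e. about 14.5 TB
```

Running this against other QPS values gives a quick estimate before you pick a storage specification on the buy page.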
- Click Buy Now and settle the payment.
- In the Anti-DDoS Pro console, navigate to the Log Service page and click Authorize Now.
- On the Cloud Resource Access Authorization page, click Authorize to authorize Anti-DDoS Pro to store logs to the specified Logstore.
After the full log service is activated, you can go to the Log Service page and click Details to view the service specifications. Note We recommend that you pay close attention to the remaining log storage space and validity period during use.
- When the utilization of the log storage capacity reaches 70%, expand the log storage capacity to make sure that new logs can be stored.
- If a large amount of storage space remains unused for a long time, reduce the storage capacity as needed.
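The two rules above can be expressed as a minimal decision helper. The 70% expand threshold comes from the recommendation above; the 20% shrink threshold is an assumed example value, not a documented limit.

```python
# Minimal sketch of the storage-monitoring rules described above.
# The 70% expand threshold follows the recommendation; the 20% shrink
# threshold is an assumption for illustration only.
def storage_advice(used_tb: float, capacity_tb: float) -> str:
    utilization = used_tb / capacity_tb
    if utilization >= 0.70:
        return "expand"               # new logs may soon fail to be stored
    if utilization <= 0.20:
        return "consider shrinking"   # capacity largely unused
    return "ok"

print(storage_advice(7.5, 10))   # expand
print(storage_advice(1.0, 10))   # consider shrinking
```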
Activate full log for a website
- Log on to the Anti-DDoS Pro console.
- In the left-side navigation pane, choose .
- On the Log Service page, select the target website domain, and turn on the Status switch to enable full log.
After you enable full log, you can query and analyze the collected logs in real time, view and edit dashboards, and set monitoring alerts on the Log Service page.
Use full log
The full log service is integrated with Alibaba Cloud Log Service. After you enable this service, you can analyze access logs, attack logs, and defense logs collected by Log Service. You can also display data on dashboards, and set monitoring alerts by using thresholds.
|Feature|Description|
|---|---|
|Query and analysis|You can query and analyze collected log data in real time. A query consists of a Search statement and an Analytics statement, separated by a vertical bar (\|). For more sample statements, see the following section describing commonly used Search statements.|
|Graphs|Search statements can contain analytics syntax. After a Search statement is executed, the analysis results are displayed in charts by default. You can choose a line chart, bar chart, pie chart, or other chart types.|
|Dashboards|Analysis results are displayed on dashboards in real time. You can execute Search statements to display data in charts, and save the charts to dashboards. Full log provides two default dashboards: access center and operations center. You can also subscribe to dashboards, or send dashboards to specific recipients through emails or DingTalk messages.|
|Monitoring and alerts|You can configure alerts based on the charts on a dashboard to monitor the service status in real time.|
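As described above, a query is a Search statement optionally followed by an Analytics statement, separated by a vertical bar. A hypothetical helper (not part of any SDK) makes the structure concrete:

```python
# Hypothetical helper illustrating the query structure described above:
# a Search statement, then an optional Analytics (SQL) statement,
# separated by the first vertical bar. For illustration only.
def split_query(query: str):
    search, _, analytics = query.partition("|")
    return search.strip(), analytics.strip() or None

q = "__topic__: DDoS_access_log | select count(1) as PV"
print(split_query(q))
# ('__topic__: DDoS_access_log', 'select count(1) as PV')
```

A query with no vertical bar is a pure Search statement and has no Analytics part.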
- Troubleshoot website access problems
After full log is enabled for your website, you can query and analyze the logs collected from your website in real time. You can use SQL statements to analyze the access log of your website. This allows you to quickly troubleshoot and analyze access problems, and view information about read/write latency and the distribution of ISPs.
For sample statements, see the section describing commonly used Search statements below.
- Track HTTP flood attack sources
Access logs record information about the sources and distribution of HTTP flood attacks. You can query and analyze access logs in real time to identify the origins of attacks, and use this information to select the most effective protection strategy.
- For example, the following statement can be used to analyze the geographical distribution of HTTP flood attacks:
__topic__: DDoS_access_log and cc_blocks > 0 | SELECT ip_to_country(if(real_client_ip='-', remote_addr, real_client_ip)) as country, count(1) as "number of attacks" group by country
- For example, the following statement can be used to view PVs:
__topic__: DDoS_access_log | select count(1) as PV
- Analyze website operations
Access logs record information about website traffic in real time. You can use SQL queries to analyze log data and better understand your users. For example, you can identify the most visited web pages, the source IP addresses of the clients, the browsers that initiated the requests, and the distribution of client devices, which can help you analyze website operations.
For example, the following statement can be used to view the distribution of traffic by ISP:
__topic__: DDoS_access_log | select ip_to_provider(if(real_client_ip='-', remote_addr, real_client_ip)) as provider, round(sum(request_length)/1024.0/1024.0, 3) as mb_in group by provider having ip_to_provider(if(real_client_ip='-', remote_addr, real_client_ip)) <> '' order by mb_in desc limit 10
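The statement above first falls back to remote_addr when real_client_ip is '-', then sums request bytes per provider. A sketch of the same aggregation over in-memory records, where ip_to_provider is a stand-in for Log Service's built-in lookup and the PROVIDERS mapping is hypothetical:

```python
from collections import defaultdict

# Sketch of the aggregation performed by the query above, over in-memory
# records. PROVIDERS is a hypothetical stand-in for ip_to_provider().
PROVIDERS = {"1.2.3.4": "ISP-A", "5.6.7.8": "ISP-B"}

def ip_to_provider(ip: str) -> str:
    return PROVIDERS.get(ip, "")

def client_ip(record: dict) -> str:
    # Mirrors if(real_client_ip='-', remote_addr, real_client_ip)
    return record["remote_addr"] if record["real_client_ip"] == "-" else record["real_client_ip"]

def traffic_by_provider(records):
    mb_in = defaultdict(float)
    for r in records:
        provider = ip_to_provider(client_ip(r))
        if provider:  # mirrors: having ip_to_provider(...) <> ''
            mb_in[provider] += r["request_length"] / 1024.0 / 1024.0
    # mirrors: order by mb_in desc limit 10
    return sorted(mb_in.items(), key=lambda kv: kv[1], reverse=True)[:10]

logs = [
    {"real_client_ip": "-", "remote_addr": "1.2.3.4", "request_length": 2097152},
    {"real_client_ip": "5.6.7.8", "remote_addr": "9.9.9.9", "request_length": 1048576},
]
print(traffic_by_provider(logs))   # [('ISP-A', 2.0), ('ISP-B', 1.0)]
```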
Commonly used Search statements
- Query types of blocked requests
* | select cc_action,cc_phase,count(*) as t group by cc_action,cc_phase order by t desc limit 10
- Query the number of queries per second
* | select time_series(__time__,'15m','%H:%i','0') as time,count(*)/900 as QPS group by time order by time
- Query attacked domains
* and cc_blocks:1 | select host,count(*) as t group by host order by t desc limit 10
- Query attacked URLs
* and cc_blocks:1 | select count(*) as times,host,request_path group by host,request_path order by times desc
- Query request details
* | select date_format(date_trunc('second',__time__),'%H:%i:%s') as time,host,request_uri,request_method,status,upstream_status,querystring limit 10
- Query 5XX HTTP status codes
* and status>499 | select host,status,upstream_status,count(*)as t group by host,status,upstream_status order by t desc
- Query the distribution of request latency
* | SELECT count_if(upstream_response_time<20) as "<20", count_if(upstream_response_time>=20 and upstream_response_time<50) as "<50", count_if(upstream_response_time>=50 and upstream_response_time<100) as "<100", count_if(upstream_response_time>=100 and upstream_response_time<500) as "<500", count_if(upstream_response_time>=500 and upstream_response_time<1000) as "<1000", count_if(upstream_response_time>=1000) as ">1000"
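The latency-distribution statement above counts response times into fixed buckets. The same bucketing can be sketched over in-memory samples, using half-open intervals so every latency value lands in exactly one bucket:

```python
# Sketch of the latency-distribution query above, over in-memory samples.
# Buckets are half-open [low, high) in milliseconds, so boundary values
# fall in exactly one bucket.
BUCKETS = [(0, 20, "<20"), (20, 50, "<50"), (50, 100, "<100"),
           (100, 500, "<500"), (500, 1000, "<1000"), (1000, float("inf"), ">1000")]

def latency_distribution(latencies_ms):
    counts = {label: 0 for _, _, label in BUCKETS}
    for ms in latencies_ms:
        for low, high, label in BUCKETS:
            if low <= ms < high:
                counts[label] += 1
                break
    return counts

samples = [5, 30, 30, 250, 1500]
print(latency_distribution(samples))
```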