This topic shows how to use the SQL query capabilities of Simple Log Service (SLS) within the Cloud Threat Detection and Response (CTDR) platform to build a custom detection rule. This rule lets you monitor and generate real-time alerts for security incidents like logins from dormant accounts.
Background
Objective: Detect logins from dormant accounts.
Method: Use scheduled SQL queries to compare recent logins against a historical baseline. This identifies users who have logged in recently but have not appeared in historical records for an extended period.
Core logic
The detection logic has three main parts:
Define recent activity: Query all login events that occurred in the last 20 minutes.
Define a historical baseline: Query for users who have logged in to a host within a longer period, specifically the last 24 hours, but excluding the most recent 20 minutes.
Compare the datasets: Join the recent activity with the historical baseline. If a user appears in the recent activity but has no record in the historical baseline, the login is considered abnormal.
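The following self-contained sketch illustrates this comparison on inline sample data before the production query is introduced. The user names are made up, the query is not part of the detection rule, and it should run in any Presto-compatible SQL engine, which is the dialect that SLS analysis is based on:
select distinct a.username                           -- output only users that fail the baseline check
from (values ('alice'), ('bob')) as a(username)      -- "recent activity": alice and bob logged in during the last window
left join (values ('alice')) as b(username)          -- "historical baseline": only alice appears in earlier logs
  on a.username = b.username
where b.username is null                             -- bob has no baseline record, so bob is flagged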
Define recent activity
(
select
user_id,
src_ip,
username,
uuid,
start_time
from
log
where
cast(start_time as bigint) >= cast(to_unixtime (current_timestamp) as bigint) - 20 * 60
and cast(start_time as bigint) < cast(to_unixtime (current_timestamp) as bigint)
) a
Syntax analysis:
from log: Queries data from the log table.
to_unixtime(current_timestamp): Returns the current Unix timestamp in seconds.
cast(... as bigint): Converts the timestamp to a bigint (long integer) data type for mathematical operations and comparisons.
where ...: Defines a 20-minute sliding time window ending at the current time.
>= ... - 20 * 60: The log's start time (start_time) must be greater than or equal to "current time - 20 minutes".
< ...: The log's start time must be less than "current time".
Logic analysis:
Purpose: Defines a set named a for "recent activity" to scope the primary objects of analysis.
Result: A temporary result set of users who were active in the last 20 minutes, with fields such as user_id and src_ip.
Key point: The "recent" period is precisely defined as a 20-minute time window extending back from the current time.
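To see the window boundaries concretely, you can run a small standalone query first (a sketch; it assumes the Logstore has SQL analysis enabled, and the column aliases are arbitrary):
* | select
  cast(to_unixtime(current_timestamp) as bigint) as window_end_ts,                 -- current time in seconds
  cast(to_unixtime(current_timestamp) as bigint) - 20 * 60 as window_start_ts,     -- 20 minutes earlier
  from_unixtime(to_unixtime(current_timestamp) - 20 * 60) as window_start_time     -- the same boundary as a readable timestamp
limit 1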
Define a historical baseline
(
select
user_id,
username,
uuid
from
log
where
schema='HOST_LOGIN_ACTIVITY' and
cast(start_time as bigint) >= cast(to_unixtime (current_timestamp) as bigint) - 24 * 3600
and cast(start_time as bigint) < cast(to_unixtime (current_timestamp) as bigint) - 20 * 60
) b
Syntax analysis:
schema='HOST_LOGIN_ACTIVITY': Queries only host login activity logs, which makes the historical baseline more accurate.
where ...: Defines a time range from 24 hours ago up to 20 minutes ago.
>= ... - 24 * 3600: The log start time must be greater than or equal to "current time - 24 hours".
< ... - 20 * 60: The log start time must be less than "current time - 20 minutes".
Logic analysis:
Purpose: Builds a "historical baseline" set named b as a reference to determine whether an activity is abnormal.
Result: A temporary result set containing users who have logged in within the last day, excluding the most recent 20 minutes.
Key point: This time window is set from 24 hours ago to 20 minutes ago. This ensures that the historical baseline data does not overlap with recent activity data, preventing logical errors from self-comparison.
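Before joining the two sets, you can preview the baseline on its own. The following sketch counts distinct baseline users over the 24-hour window, assuming the same Logstore, table, and fields used above:
*|set session mode=scan;
select
  count(distinct username) as baseline_user_count
from
  log
where
  schema = 'HOST_LOGIN_ACTIVITY'
  and cast(start_time as bigint) >= cast(to_unixtime(current_timestamp) as bigint) - 24 * 3600
  and cast(start_time as bigint) < cast(to_unixtime(current_timestamp) as bigint) - 20 * 60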
Use LEFT JOIN for difference comparison
... a
left join
... b on a.username = b.username and a.uuid = b.uuid and a.user_id = b.user_id
Syntax analysis:
LEFT JOIN: Uses the left table (a, recent users) as the base and matches each of its records with records in the right table (b, historical users).
on a.username = b.username and a.uuid = b.uuid and a.user_id = b.user_id: Specifies the join conditions. The username, uuid, and user_id fields are used together to uniquely identify a user entity, ensuring an accurate match.
Logic analysis:
Purpose: Correlates "recent activity" (a) with the "historical baseline" (b) to find corresponding historical login records for each recent user activity.
Result: Generates a combined result set that includes all records from table a and the records from table b that were successfully matched based on the user identity fields.
Key point: This query leverages the asymmetric nature of a LEFT JOIN: if a user from table a does not exist in table b, all columns from table b will be NULL in the result. This is the core mechanism for identifying new activity.
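The NULL behavior is easiest to see on a toy dataset. In the following illustrative query (inline values only, not real log data), two recent users are joined against a baseline that contains only one of them:
select
  a.username,
  b.username as baseline_username   -- NULL when the user has no baseline record
from (values ('alice'), ('bob')) as a(username)
left join (values ('alice')) as b(username)
  on a.username = b.username
-- alice matches alice; bob matches nothing, so baseline_username is NULL for bob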
Filter the final result
where
(
b.username is null
or b.username = ''
)
Syntax analysis:
b.username is null: This is the key to leveraging the LEFT JOIN characteristic. When a recently logged-in user (in table a) cannot be found in the historical baseline (table b), b.username will be NULL.
or b.username = '': This is a defensive condition to handle cases where a log field might be an empty string '' instead of NULL.
Logic analysis:
Purpose: To filter the joined results, isolating activities that appeared recently but have no historical record.
Result: After this filter is applied, the result set contains only the records that failed to match in the LEFT JOIN, which are the abnormal events you are looking for.
Key point: b.username is null is the core decision point of the entire detection logic. It uses the NULL values generated in the previous step to separate the "present in recent, absent in historical" records from the massive dataset.
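Continuing the toy example, applying the same filter keeps only the unmatched rows (again illustrative inline data, not the production query):
select a.username
from (values ('alice'), ('bob')) as a(username)
left join (values ('alice')) as b(username)
  on a.username = b.username
where b.username is null or b.username = ''
-- Only bob is returned, because bob has no row in the baseline set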
Use SELECT DISTINCT to output alerts
select distinct
a.user_id,
a.src_ip,
a.username,
a.uuid
Syntax analysis:
SELECT DISTINCT: Selects and outputs the specified fields, and removes duplicates from the results. This ensures only one alert is generated for a single abnormal event.
Logic analysis:
Purpose: To format and output a final list of abnormal events that can be used directly for alerting.
Result: A clean, deduplicated list of alerts. Each record contains the core information needed to trace the abnormal event, such as the user ID and Source IP.
Key point: The use of DISTINCT is critical. It deduplicates the results to ensure that the same abnormal behavior from the same user within the detection window (20 minutes) triggers only one alert.
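If a user triggers the same abnormal login several times within the window, the join produces one row per event, and DISTINCT collapses the identical rows into a single alert. A small illustration with made-up values:
select distinct a.username, a.src_ip
from (values ('bob', '203.0.113.10'), ('bob', '203.0.113.10')) as a(username, src_ip)   -- two identical logins in the window
left join (values ('alice')) as b(username)
  on a.username = b.username
where b.username is null
-- A single row (bob, 203.0.113.10) is returned instead of two duplicate alerts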
Complete solution
Final SQL query
*|set session mode=scan;
select distinct
a.user_id,
a.src_ip,
a.username,
a.uuid
from
(
select
user_id,
src_ip,
username,
uuid,
start_time
from
log
where
cast(start_time as bigint) >= cast(to_unixtime (current_timestamp) as bigint) -20 * 60
and cast(start_time as bigint) < cast(to_unixtime (current_timestamp) as bigint)
) a
left join (
select
user_id,
username,
uuid
from
log
where
schema='HOST_LOGIN_ACTIVITY' and
cast(start_time as bigint) >= cast(to_unixtime (current_timestamp) as bigint) - 24 * 3600
and cast(start_time as bigint) < cast(to_unixtime (current_timestamp) as bigint) -20 * 60
) b on a.username = b.username
and a.uuid = b.uuid
and a.user_id=b.user_id
where
(
b.username is null
or b.username = ''
)
Configure the rule in CTDR
Purchase and enable CTDR
Refer to Purchase and enable CTDR for purchasing options. To access all custom threat detection services, we recommend purchasing both Log Ingestion Traffic and Log Storage Capacity.
Log on to the console and navigate to the Create Custom Rule page
Log on to the console.
In the left-side navigation pane, choose . In the upper-left corner of the Console, select the Region where your assets are located: Chinese Mainland or Outside Chinese Mainland.
On the Custom tab, click Create Custom Rule.
Configure the alert generation rule
In the Create Custom Rule panel, on the Basic Information tab, enter a rule name and description, then click Next.
Configure the SQL detection rule. Refer to the following parameter settings:
Rule Body: SQL
Log Scope: Logon Logs - Host Logon Success Log
SQL Query: Copy the code from the Final SQL query section.
Scheduling Interval: Fixed Interval - 20 minutes
SQL Time Window: 24 hours
Start Time: When the rule is enabled
Generation Structure: Other Alert Logs
Alarm Metric: Abnormal Logon
Alert Severity: Medium
ATT&CK Tactic: Persistence - T1136 Create Account
Entity Mapping (Network Address): is_malware: 1; ip: $src_ip; net_connect_dir: in
Entity Mapping (Host): is_asset: 1; uuid: $uuid
Configure the Incident generation rule
After you complete the configuration on the Alert Settings page, click Next.
Configure the event rule. Refer to the following parameter settings:
Generate Event: Yes.
Incident Generation Method: Aggregate by type.
Aggregation Window: 20 minutes.
Validate the rule
New rules are Disabled by default. You can test them to evaluate their effectiveness. During testing, the system automatically calibrates alert fields. Use the generated calibration suggestions to optimize the rule's SQL or playbook to ensure alert accuracy and standardization after the rule is enabled.
Change the Enabling Status of the target rule to Testing.
In the actions column for the target rule, click View Alert Test Result.
On the test result details page, view the alert trend chart and the list of alerts generated.
In the Actions column for an alert, click Details to view its calibration results.
Enable the custom rule
After the rule passes the test, set its Enabling Status to Enabled.
Important: We recommend testing the rule before you enable it.
Example test result

Risk assessment
False positive: Normal users logging in for the first time after a long vacation or business trip might be flagged incorrectly. You can reduce false positives by extending the historical lookback period (for example, from 24 hours to 72 hours) or by configuring a user whitelist (see the sketch below).
False negative: Log collection interruptions or non-standard field formats can cause the SQL query to miscalculate time intervals, leading to missed detections. Ensure the integrity and consistency of your log data.
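The tuning ideas above map directly onto the query. The following sketch extends the baseline lookback to 72 hours and adds a hypothetical whitelist filter on the recent-activity side; the account names are placeholders for your own exclusions, and if you use a longer lookback you would also need to enlarge the rule's SQL Time Window so the query can read that much history.
*|set session mode=scan;
select distinct
  a.user_id,
  a.src_ip,
  a.username,
  a.uuid
from
  (
    select user_id, src_ip, username, uuid, start_time
    from log
    where
      cast(start_time as bigint) >= cast(to_unixtime(current_timestamp) as bigint) - 20 * 60
      and cast(start_time as bigint) < cast(to_unixtime(current_timestamp) as bigint)
      and username not in ('backup_admin', 'patch_account')   -- hypothetical whitelist of accounts expected to log in rarely
  ) a
  left join (
    select user_id, username, uuid
    from log
    where
      schema = 'HOST_LOGIN_ACTIVITY'
      and cast(start_time as bigint) >= cast(to_unixtime(current_timestamp) as bigint) - 72 * 3600   -- lookback extended from 24 to 72 hours
      and cast(start_time as bigint) < cast(to_unixtime(current_timestamp) as bigint) - 20 * 60
  ) b on a.username = b.username
  and a.uuid = b.uuid
  and a.user_id = b.user_id
where
  (
    b.username is null
    or b.username = ''
  )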
SQL syntax documentation
For more information about SQL syntax, see SQL analysis syntax and features.
To learn how to configure threat detection rules, see Configure threat detection rules.