
Security Center: Best practices for detecting logins from dormant accounts

Last Updated: Oct 22, 2025

This topic shows how to use the SQL query capabilities of Simple Log Service (SLS) within the Cloud Threat Detection and Response (CTDR) platform to build a custom detection rule. This rule lets you monitor and generate real-time alerts for security incidents like logins from dormant accounts.

Background

Objective: Detect logins from dormant accounts.

Method: Use scheduled SQL queries to compare recent logins against a historical baseline. This identifies users who have logged in recently but have not appeared in historical records for an extended period.

Core logic

The detection logic has three main parts:

  1. Define recent activity: Query all login events that occurred in the last 20 minutes.

  2. Define a historical baseline: Query for users who have logged in to a host within a longer period, specifically the last 24 hours, but excluding the most recent 20 minutes.

  3. Compare the datasets: Join the recent activity with the historical baseline. If a user appears in the recent activity but has no record in the historical baseline, the login is considered abnormal, as the sketch after this list illustrates.
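
The three steps map one-to-one onto the structure of the final query. The following self-contained toy query, a minimal sketch in the Presto-style SQL that SLS supports (inline data and hypothetical usernames only), demonstrates the pattern end to end:

select distinct a.username
from
  (values ('alice'), ('bob')) as a(username)               -- recent activity
  left join (values ('alice'), ('carol')) as b(username)   -- historical baseline
    on a.username = b.username
where
  b.username is null   -- keep only users with no historical record
-- returns a single row: bob

Because bob appears in the recent set but not in the baseline, his login is the only one that survives the join-and-filter.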

Define recent activity

(
  select
    user_id,
    src_ip,
    username,
    uuid,
    start_time
  from
    log
  where
    cast(start_time as bigint) >= cast(to_unixtime(current_timestamp) as bigint) - 20 * 60
    and cast(start_time as bigint) < cast(to_unixtime(current_timestamp) as bigint)
) a
  • Syntax analysis:

    • from log: Queries data from the log table.

    • to_unixtime(current_timestamp): Returns the current Unix timestamp in seconds.

    • cast(... as bigint): Converts the timestamp to a bigint (long integer) data type for mathematical operations and comparisons.

    • where ...: Defines a 20-minute sliding time window ending at the current time.

      • >= ... - 20 * 60: The log's start time (start_time) must be greater than or equal to "current time - 20 minutes".

      • < ...: The log's start time must be less than "current time".

  • Logic analysis:

    • Purpose: Defines a set named a for "recent activity" to scope the primary objects of analysis.

    • Result: A temporary result set of the users who were active in the last 20 minutes, with identifying fields such as user_id and src_ip.

    • Key point: The "recent" period is precisely defined as a 20-minute time window extending back from the current time (see the sanity check below).
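
To sanity-check the window arithmetic, you can run the timestamp expressions on their own. This is a sketch; in the SLS console, prefix analytic statements with *| as in the final query below.

select
  cast(to_unixtime(current_timestamp) as bigint) as recent_to,              -- now, in seconds
  cast(to_unixtime(current_timestamp) as bigint) - 20 * 60 as recent_from   -- 20 minutes ago

The window is half-open: a log is included when recent_from <= start_time < recent_to.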

Define a historical baseline

(
  select
    user_id,
    username,
    uuid
  from
    log
  where
    schema = 'HOST_LOGIN_ACTIVITY'
    and cast(start_time as bigint) >= cast(to_unixtime(current_timestamp) as bigint) - 24 * 3600
    and cast(start_time as bigint) < cast(to_unixtime(current_timestamp) as bigint) - 20 * 60
) b
  • Syntax analysis:

    • schema='HOST_LOGIN_ACTIVITY': Queries only host login activity logs, which makes the historical baseline more accurate.

    • where ...: Defines a time range from 24 hours ago up to 20 minutes ago.

      • >= ... - 24 * 3600: The log start time must be greater than or equal to "current time - 24 hours".

      • < ... - 20 * 60: The log start time must be less than "current time - 20 minutes".

  • Logic analysis:

    • Purpose: Builds a "historical baseline" set named b as a reference to determine if an activity is abnormal.

    • Result: A temporary result set containing users who have logged in within the last day, excluding the most recent 20 minutes.

    • Key point: This time window is set from 24 hours ago to 20 minutes ago. This ensures that the historical baseline data does not overlap with recent activity data, preventing logical errors from self-comparison (see the boundary check below).
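
Together, the two windows tile the last 24 hours exactly once: the baseline covers [now - 24h, now - 20m) and the recent window covers [now - 20m, now). A quick boundary check, a sketch in the same style as above:

select
  now_s - 24 * 3600 as baseline_from,  -- 24 hours ago: lower bound of b
  now_s - 20 * 60   as baseline_to,    -- 20 minutes ago: upper bound of b, lower bound of a
  now_s             as recent_to       -- now: upper bound of a
from (
  select cast(to_unixtime(current_timestamp) as bigint) as now_s
) t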

Use LEFT JOIN for difference comparison

... a
left join
... b on a.username = b.username and a.uuid = b.uuid and a.user_id = b.user_id
  • Syntax analysis:

    • LEFT JOIN: Uses the left table (a, recent users) as the base and matches each of its records against the right table (b, historical users).

    • on a.username = b.username and a.uuid = b.uuid and a.user_id = b.user_id: Specifies the join conditions. The username, uuid, and user_id fields are used together to uniquely identify a user entity, ensuring an accurate match.

  • Logic analysis:

    • Purpose: Correlates "recent activity" (a) with the "historical baseline" (b) to find corresponding historical login records for each recent user activity.

    • Result: Generates a combined result set that includes all records from table a and the records from table b that were successfully matched based on the user identity fields.

    • Key point: This query leverages the asymmetric nature of a LEFT JOIN: if a user from table a does not exist in table b, all columns from table b will be NULL in the result. This is the core mechanism for identifying new activity (see the example below).
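
The following self-contained example (inline data and hypothetical identifiers only) shows the NULL-padding behavior with the same three join keys:

select
  a.username,
  a.uuid,
  b.username as baseline_username
from
  (values ('alice', 'u-01', 'host-1'), ('mallory', 'u-99', 'host-2')) as a(username, user_id, uuid)
  left join (values ('alice', 'u-01', 'host-1')) as b(username, user_id, uuid)
    on a.username = b.username and a.uuid = b.uuid and a.user_id = b.user_id
-- alice matches her baseline row; for mallory, every column from b is NULL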

Filter the final result

where
  (
    b.username is null
    or b.username = ''
  )
  • Syntax analysis:

    • b.username is null: This is the key to leveraging the LEFT JOIN characteristic. When a recently logged-in user (in table a) cannot be found in the historical baseline (table b), b.username will be NULL.

    • or b.username = '': This is a defensive condition to handle cases where a log field might be an empty string '' instead of NULL.

  • Logic analysis:

    • Purpose: To filter the joined results, isolating activities that appeared recently but have no historical record.

    • Result: After this filter is applied, the result set contains only the records that failed to match in the LEFT JOIN—the abnormal events you are looking for.

    • Key point: b.username is null is the core decision point of the entire detection logic. It uses the NULL values generated in the previous step to separate the "present in recent, absent in historical" records from the joined result set (see the check below).
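
The two conditions are not redundant: in SQL, an empty string is a value, not NULL, so a record that stores '' for a missing username would slip past the IS NULL check alone. A one-line verification:

select
  '' is null as empty_is_null,               -- false: '' is a value, not NULL
  ('' is null or '' = '') as passes_filter   -- true: the combined check catches it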

Use SELECT DISTINCT to output alerts

select distinct
     a.user_id,
     a.src_ip,
     a.username,
     a.uuid
  • Syntax analysis:

    • SELECT DISTINCT: Selects and outputs the specified fields, and removes duplicates from the results. This ensures only one alert is generated for a single abnormal event.

  • Logic analysis:

    • Purpose: To format and output a final list of abnormal events that can be used directly for alerting.

    • Result: A clean, deduplicated list of alerts. Each record contains the core information needed to trace the abnormal event, such as the user ID and Source IP.

    • Key point: The use of DISTINCT is critical. It deduplicates the results to ensure that the same abnormal behavior from the same user within the detection window (20 minutes) triggers only one alert, as the example below demonstrates.
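
For example, if the same dormant account logs in several times within one 20-minute window, subquery a contains one row per login, but the alert output contains only one row. A small demonstration with inline, hypothetical data:

select distinct username, src_ip
from (
  values
    ('eve', '203.0.113.7'),
    ('eve', '203.0.113.7'),
    ('eve', '203.0.113.7')
) as t(username, src_ip)
-- returns a single row: eve, 203.0.113.7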

Complete solution

Final SQL query

*|set session mode=scan;
select distinct
     a.user_id,
     a.src_ip,
     a.username,
     a.uuid
   from
     (
       select
         user_id,
         src_ip,
         username,
         uuid,
         start_time
       from
         log
       where
         cast(start_time as bigint) >= cast(to_unixtime(current_timestamp) as bigint) - 20 * 60
         and cast(start_time as bigint) < cast(to_unixtime(current_timestamp) as bigint)
     ) a
     left join (
       select
         user_id,
         username,
         uuid
       from
         log
       where
         schema = 'HOST_LOGIN_ACTIVITY'
         and cast(start_time as bigint) >= cast(to_unixtime(current_timestamp) as bigint) - 24 * 3600
         and cast(start_time as bigint) < cast(to_unixtime(current_timestamp) as bigint) - 20 * 60
     ) b on a.username = b.username
     and a.uuid = b.uuid
     and a.user_id = b.user_id
   where
     (
       b.username is null
       or b.username = ''
     )

Configure the rule in CTDR

  1. Purchase and enable CTDR

    Refer to Purchase and enable CTDR for purchasing options. To access all custom threat detection services, we recommend purchasing both Log Ingestion Traffic and Log Storage Capacity.

  2. Log on to the console and navigate to the Create Custom Rule page

    1. Log on to the Security Center console.

    2. In the upper-left corner of the console, select the region where your assets are located: Chinese Mainland or Outside Chinese Mainland. Then, in the left-side navigation pane, choose CTDR > Rule Management.

    3. On the Custom tab, click Create Custom Rule.

  3. Configure the alert generation rule

    1. In the Create Custom Rule panel, on the Basic Information tab, enter a rule name and description, then click Next.

    2. Configure the SQL detection rule. Refer to the following parameter settings:

      • Rule Body: SQL

      • Log Scope: Logon Logs - Host Logon Success Log

      • SQL Query: Copy the code from the Final SQL query section.

      • Scheduling Interval: Fixed Interval - 20 minutes

      • SQL Time Window: 24 hours

      • Start Time: When the rule is enabled

      • Generation Structure: Other Alert Logs

      • Alarm Metric: Abnormal Logon

      • Alert Severity: Medium

      • ATT&CK Tactic: Persistence - T1136 Create Account

      • Entity Mapping:

        • Network Address

          • is_malware: 1

          • ip: $src_ip

          • net_connect_dir: in

        • Host

          • is_asset: 1

          • uuid: $uuid

  4. Configure the Incident generation rule

    1. After you complete the configuration on the Alert Settings page, click Next.

    2. Configure the event rule. Refer to the following parameter settings:

      • Generate Event: Yes.

      • Incident Generation Method: Aggregate by type.

      • Aggregation Window: 20 minutes.

  5. Validate the rule

    New rules are Disabled by default. You can test them to evaluate their effectiveness. During testing, the system automatically calibrates alert fields. Use the generated calibration suggestions to optimize the rule's SQL or playbook to ensure alert accuracy and standardization after the rule is enabled.

    1. Change the Enabling Status of the target rule to Testing.

    2. In the Actions column for the target rule, click View Alert Test Result.

    3. On the test result details page, view the alert trend chart and the list of alerts generated.

    4. In the Actions column for an alert, click Details to view its calibration results.

  6. Enable the custom rule

    After the rule passes the test, set its Enabling Status to Enabled.

    Important

    We recommend testing the rule before you enable it.

Example test result

(Screenshot of an example alert test result in the CTDR console.)

Risk assessment

  • False positive: Normal users logging in for the first time after a long vacation or business trip might be flagged incorrectly. You can reduce false positives by extending the historical lookback period (for example, from 24 hours to 72 hours) or by configuring a user whitelist. Both adjustments are sketched after this list.

  • False negative: Log collection interruptions or non-standard field formats can cause the SQL query to miscalculate time intervals, leading to missed detections. Ensure the integrity and consistency of your log data.
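
Both mitigations can be expressed directly in the SQL. The fragments below are a sketch rather than a drop-in replacement: the account names are hypothetical placeholders, and only the lines that change in the final query are shown.

-- In subquery b: widen the historical baseline from 24 hours to 72 hours
cast(start_time as bigint) >= cast(to_unixtime(current_timestamp) as bigint) - 72 * 3600

-- In the outer where clause: exclude whitelisted accounts
and a.username not in ('svc_backup', 'ops_admin')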
