
Simple Log Service:Time series forecasting

Last Updated:May 13, 2025

Simple Log Service offers time series forecasting to predict future data trends, enabling early assessment of key metrics. This topic covers the feature's scenarios, terms, scheduling and execution use cases, and usage notes.

Important

The Intelligent Anomaly Analysis application in Simple Log Service is being phased out and will no longer be available on July 15, 2025 (UTC+8).

  1. Impact scope

    Intelligent inspection, text analysis, and time series forecasting will no longer be available.

  2. Feature replacement

    The preceding features can be fully replaced by the machine learning syntax, Scheduled SQL, and dashboard features of Simple Log Service. Documentation will be provided to help you configure the related settings.

Scenarios

When a service runs, it generates time series data that tracks changes in metrics over time. This data is essential for system monitoring and fault detection. You can monitor and analyze it using two methods:

  • Use intelligent inspection to analyze the generated time series data to detect possible exceptions.

    This method is used to identify and locate problems after errors occur.

  • Use time series forecasting to predict future time series data and estimate future trends.

    This method is used to provide early warning for abnormal trends of key service metrics.

Time series forecasting can be used to:

  • Predict future trends of key service metrics, such as QPS and the number of online users, so that alerts are triggered when predicted values exceed thresholds and you can troubleshoot issues proactively.

  • Anticipate fluctuations in system metrics, such as CPU and disk usage, enabling adjustments like scaling out systems if high CPU usage is expected.
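The first use case, alerting when predicted values cross a threshold, can be sketched as follows. This is an illustration only: `ForecastPoint`, `points_over_threshold`, and the QPS numbers are invented for the example and are not part of Simple Log Service.

```python
from dataclasses import dataclass

@dataclass
class ForecastPoint:
    timestamp: int   # Unix time of the predicted point
    value: float     # predicted metric value, e.g. QPS

def points_over_threshold(forecast, threshold):
    """Return the predicted points whose value exceeds the alert threshold."""
    return [p for p in forecast if p.value > threshold]

# Example: predicted QPS for the next three minutes
forecast = [
    ForecastPoint(1700000000, 950.0),
    ForecastPoint(1700000060, 1020.0),
    ForecastPoint(1700000120, 1180.0),
]
alerts = points_over_threshold(forecast, threshold=1000.0)  # 2 points exceed 1000
```

In practice the same check would run on the prediction results written by the forecasting job, with the threshold chosen from your service's capacity limits.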

Important

Time series trends are affected by factors such as emergencies, system instability, and the limitations of forecasting algorithms, so predictions cannot be fully accurate. Use them only as a reference when making decisions.

How it works

A time series forecasting job uses SQL to extract or aggregate metrics. The job pulls data at regular intervals based on its schedule, writes prediction results to the internal-ml-log logstore, and displays them on a dashboard for quick viewing.
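The data flow above can be sketched end to end. The moving-average model below is a deliberately naive stand-in for the configured algorithm, and `forecast_next` is an illustrative name, not a Simple Log Service API.

```python
def forecast_next(series, horizon, window):
    """Naive sketch: predict the next `horizon` points as the mean of the
    last `window` observations. A real job runs the configured algorithm
    model over the data pulled on each schedule tick."""
    tail = series[-window:]
    baseline = sum(tail) / len(tail)
    return [baseline] * horizon

# Metric values aggregated by the job's SQL over the observation period
observed = [10.0, 12.0, 11.0, 13.0, 12.0, 14.0]
predicted = forecast_next(observed, horizon=3, window=4)  # [12.5, 12.5, 12.5]
```

The real pipeline then writes `predicted` to the internal-ml-log logstore, where the built-in dashboard reads it.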

Terms

  • Job: A job includes data features and algorithm model parameters.

  • Instance: The job creates an execution instance to regularly pull data, run the algorithm, and distribute results. For details on how operations impact instance scheduling and execution, see Scheduling and execution use cases.

  • Instance ID: The unique ID of the execution instance.

  • Creation time: The time when the execution instance is created based on scheduling rules. Instances can be created immediately to process historical data or to offset delays from previous timeouts.

  • Start time: The time when the execution instance begins running. If the job is retried, the start time is when it most recently began.

  • End time: The time when the execution instance stops running. If the job is retried, the end time is when it most recently stopped.

  • Status: The status of the execution instance. Valid values: RUNNING, STARTING, SUCCEEDED, and FAILED.

  • Data feature configurations: See data feature configurations.

  • Algorithm configurations: See algorithm configurations.

  • Forecasting results: The results are displayed in the built-in dashboard.

Scheduling and execution use cases

Each job can create multiple instances, but only one can run at a time, whether scheduled or retried due to an anomaly. Concurrent instances aren't allowed in a single job.

A job's configuration cannot be modified while an instance is running. Configuration changes take effect in a new instance, which is independent of the previous one.
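The single-running-instance rule can be illustrated with a non-blocking lock. `JobRunner` and `try_run` are invented names for this sketch, not part of any Simple Log Service SDK.

```python
import threading

class JobRunner:
    """Illustrative sketch of the rule that a job never runs two
    execution instances at the same time."""

    def __init__(self):
        self._lock = threading.Lock()

    def try_run(self, instance):
        # Non-blocking acquire: if an instance already holds the lock,
        # a second instance is refused instead of running concurrently.
        if not self._lock.acquire(blocking=False):
            return False
        try:
            instance()  # pull data, run the algorithm, distribute results
            return True
        finally:
            self._lock.release()

runner = JobRunner()
blocked = []

def long_instance():
    # While this instance is running, a concurrent start is refused.
    blocked.append(runner.try_run(lambda: None))

ran = runner.try_run(long_instance)  # True: the instance itself ran
# blocked == [False]: the nested start attempt was refused
```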

The following use cases describe how jobs are scheduled and executed:

  • Execute a job from the current point in time: The job reads historical data based on the configured Observation Period and then predicts the time series for a future time period.

  • Execute a job from a historical point in time: The job processes historical data according to its scheduling rules. The algorithm models quickly process the historical data and progressively catch up to the current point in time.

  • Modify the scheduling configurations: After the scheduling rules are modified, the job creates instances based on the new rules. The algorithm models track the point in time up to which historical data has been analyzed and continue analyzing more recent data from there.

  • Retry a failed instance: If an instance fails due to issues such as insufficient permissions, unavailable logstores, or invalid configurations, Simple Log Service can automatically retry it. If an instance is stuck in the STARTING state, its configuration may contain errors; an error log is generated and sent to the internal-etl-log logstore. Check the configuration and retry the instance. After execution, the status is updated to SUCCEEDED or FAILED based on the retry outcome.
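The retry behavior can be sketched in Python. `run_with_retries`, `flaky`, and the retry count are illustrative placeholders under the assumption of a fixed retry budget, not Simple Log Service internals.

```python
def run_with_retries(instance, max_retries=3):
    """Sketch of automatic retry: rerun a failed instance up to
    `max_retries` times and report the final status."""
    for attempt in range(1, max_retries + 1):
        try:
            instance()
            return "SUCCEEDED"
        except Exception:
            continue  # a real service would also write an error log
    return "FAILED"

attempts = {"n": 0}

def flaky():
    # Fails twice (e.g. a transiently unavailable logstore), then succeeds.
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise RuntimeError("transient failure")

status = run_with_retries(flaky)  # "SUCCEEDED" on the third attempt
```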

Usage notes

To optimize time series forecasting efficiency:

  • Define data formats, fields, and observation granularity for the logstore.

  • Understand the changes, stability, and periodicity of metric data for specified entities to configure algorithm parameters effectively.

  • Set accurate time windows for forecasting jobs in seconds, minutes, or hours.
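The last point, setting consistent time windows, amounts to reducing every window to seconds and aligning timestamps to window boundaries. `align_to_window` is an illustrative helper, not a Simple Log Service API.

```python
def align_to_window(ts, window_seconds):
    """Floor a Unix timestamp to the start of its observation window.
    Windows given in minutes or hours reduce to seconds (5 min = 300 s)."""
    return ts - ts % window_seconds

start = align_to_window(1700000000, 300)  # 5-minute window -> 1699999800
```

Aggregating metrics on aligned window starts keeps the series evenly spaced, which is what a forecasting job expects as input.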