Enable nanosecond precision support in a Logtail collection configuration and set up a time parsing plugin to collect and store high-precision timestamps at the millisecond, microsecond, or nanosecond level. The collected time is split and stored in two fields: a second-level timestamp __time__ and a nanosecond offset __time_ns_part__. This process enables high-precision log sorting and analysis.
Use cases
In use cases such as distributed tracing, high-frequency trading, and performance profiling, or in any scenario that requires strict log ordering, second-level time precision is insufficient. Business logs in these scenarios often record timestamps at the millisecond, microsecond, or even nanosecond level. This topic describes how to configure Logtail to collect, store, and analyze high-precision time information so that your log analysis remains accurate and in chronological order.
Solution architecture
The core mechanism of Logtail for collecting nanosecond-precision timestamps is to store an additional nanosecond-level offset with the standard second-level timestamp. This design is compatible with existing second-based time systems and provides high-precision sorting capabilities.
The core workflow is as follows:
Enable nanosecond support: In the advanced parameters of the Logtail collection configuration, activate high-precision time processing by setting { "EnableTimestampNanosecond": true }. This feature is available only for Logtail 1.8.0 or later on Linux.
Parse logs: Use a plugin, such as a delimiter, JSON, or regular expression plugin, to extract the string that contains the high-precision timestamp from the raw log.
Convert time: The time parsing plugin converts the time string into a standard time format.
Store time: Simple Log Service (SLS) splits the time into two fields for storage:
__time__: A standard Unix timestamp (long integer) in seconds.
__time_ns_part__: The nanosecond part (long integer), with a value range of 0 to 999,999,999.
Query and analyze: During query and analysis, sort by the combination of the __time__ and __time_ns_part__ fields to analyze logs in strict chronological order.
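To see why these two fields are sufficient for strict ordering, note that they can be combined into a single nanosecond-level value. The following Go sketch is an illustration only: the entry type and the field values are hypothetical, standing in for values read from a query result. It sorts log entries chronologically by the combined key:

package main

import (
	"fmt"
	"sort"
)

// entry mirrors the two time fields that SLS stores for each log.
type entry struct {
	timeSec    int64  // __time__: Unix timestamp in seconds
	timeNsPart int64  // __time_ns_part__: nanosecond offset, 0 to 999,999,999
	message    string // any other log content
}

// nanoKey combines the second-level timestamp and the nanosecond offset
// into one value that sorts in strict chronological order.
func nanoKey(e entry) int64 {
	return e.timeSec*1_000_000_000 + e.timeNsPart
}

func main() {
	logs := []entry{
		{timeSec: 1698277870, timeNsPart: 199999999, message: "second event"},
		{timeSec: 1698277870, timeNsPart: 99999999, message: "first event"},
	}
	sort.Slice(logs, func(i, j int) bool { return nanoKey(logs[i]) < nanoKey(logs[j]) })
	for _, e := range logs {
		fmt.Println(e.timeSec, e.timeNsPart, e.message)
	}
}

Both entries share the same __time__ value, so only the nanosecond offset distinguishes their order.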
Procedure
This section provides an end-to-end procedure that covers log collection, parsing, index configuration, and query and analysis. A JSON log that contains a nanosecond-level timestamp is used as an example.
Step 1: Create a project and a logstore
Before you collect logs, you must create a project and a logstore to manage and store the logs.
Project: A resource management unit in SLS, used to isolate and manage logs for different projects or services.
Logstore: A log storage unit used to store logs.
If you have already created a project and a logstore, skip this step and go to Step 2: Configure a machine group and install LoongCollector.
Log on to the Simple Log Service console.
Click Create Project and configure the following parameters:
Region: Select the region where your log sources are located. This cannot be changed after the project is created.
Project Name: Enter a name that is globally unique within Alibaba Cloud. This cannot be changed after the project is created.
Keep the default settings for other parameters and click Create. For more information about other parameters, see Manage a project.
Click the project name to go to the project.
In the navigation pane on the left, click Logstores, and then click +.
On the Create Logstore page, configure the following core parameters:
Set Logstore Name to a unique name within the project. The name cannot be changed after the logstore is created.
For Logstore Type, select Standard or Query based on your requirements.
Billing Mode:
Pay-by-feature: This mode bills you for individual resources, such as storage, indexes, and read/write operations, and is suitable for small-scale use cases or when feature usage is uncertain.
Pay-by-ingested-data: This billing mode charges you only for the volume of raw data written. It includes a 30-day free storage period and free features, such as data transformation and delivery. This simple cost model is ideal for use cases where the storage period is approximately 30 days or the data processing pipeline is complex.
Set Data Retention Period to the number of days to retain logs. The value can be set from 1 to 3,650. A value of 3,650 specifies permanent storage. The default value is 30 days.
Keep the default settings for the other parameters and click OK. For more information about these parameters, see Manage a logstore.
Step 2: Configure a machine group and install LoongCollector
After you create a project and a logstore, install LoongCollector on your server and add the server to a machine group. This topic uses an Elastic Compute Service (ECS) instance to show how to install LoongCollector. The ECS instance and the SLS project must belong to the same Alibaba Cloud account and region. If your ECS instance and project are not in the same account or region, or if you use a self-managed server, see Installation and configuration to install it manually.
This feature is supported only by Logtail 1.8.0 or later on Linux systems. It does not take effect on Windows systems.
In the target project, on the Logstores tab, create a Logtail collection configuration for the target logstore and select a collection template. All templates differ only in their parsing plugins. The rest of the configuration process is the same, and all settings can be modified later.
Procedure:
On the Machine Group Configurations page, configure the following parameters:
Set Scenario to Servers.
Set Installation Environment to ECS.
Configure a machine group: Based on the LoongCollector installation status and machine group configuration of the target server, perform the corresponding operation:
If LoongCollector is already installed in a machine group, select that group from the Source Machine Group list and add it to the Applied Server Groups. You do not need to create the group again.
If LoongCollector is not installed, click Create Machine Group:
The following steps describe how to automatically install LoongCollector and create a new machine group.
The system automatically lists the ECS instances in the same region as the project. Select one or more instances from which you want to collect logs.
Click Install and Create Machine Group to automatically install LoongCollector on the selected ECS instances.
Enter a Name for the machine group and click OK.
Note: If the installation fails or remains in a waiting state, check that the ECS region is the same as the project region.
To add a server with Logtail already installed to an existing machine group, see How do I add a server to an existing machine group?
Step 3: Create a collection configuration
Configure in the console
After you install LoongCollector and configure the machine group, define log collection and processing rules on the Logtail Configuration page.
1. Enable nanosecond precision support
Define the log collection source and collection rules, and enable nanosecond precision support.
Global Configurations:
Configuration Name: A unique name for the configuration within the project. The name cannot be changed after creation.
Advanced Parameters: To enable support for nanosecond precision, turn on the Advanced Parameters switch and enter the following JSON content:
{ "EnableTimestampNanosecond": true }
Input Configurations:
Set Type to Text Log Collection.
File Path: The path from which logs are collected.
Linux: The path must start with a forward slash (/). For example, /data/mylogs/**/*.log matches all files with the .log extension in the /data/mylogs directory and its subdirectories.
Windows: The path must start with a drive letter. For example, C:\Program Files\Intel\**\*.Log.
Maximum Directory Monitoring Depth: The maximum depth of directories to monitor when you use the ** wildcard character in the File Path. The default value is 0, which monitors only the current directory.
2. Configure processing plugins
Because the source log is in JSON format, add the Data Parsing (JSON Mode) plugin in the Processor Configurations section to extract the string that contains the nanosecond timestamp from the raw log and store it as a separate field.
Add a log sample
Assume the log format in the log file is as follows, where the asctime field contains a nanosecond-precision timestamp:
{
    "asctime": "2023-10-25 23:51:10,199999999",
    "filename": "generate_data.py",
    "levelname": "INFO",
    "lineno": 51,
    "module": "generate_data",
    "message": "{\"no\": 14, \"inner_loop\": 166, \"loop\": 27451, \"uuid\": \"9be98c29-22c7-40a1-b7ed-29ae6c8367af\"}",
    "threadName": "MainThread"
}
Add the JSON Parsing plugin
Click Add Processor, select Data Parsing (JSON Mode), and click OK.
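Conceptually, the JSON parsing step turns each raw line into individual log fields. The following Go sketch is an illustration only, not how the plugin is implemented; it unmarshals a trimmed version of the sample record and prints the extracted asctime value:

package main

import (
	"encoding/json"
	"fmt"
	"log"
)

func main() {
	// A trimmed version of the sample log line, as read from the file.
	raw := `{"asctime": "2023-10-25 23:51:10,199999999", "levelname": "INFO"}`

	// Equivalent effect of the JSON parsing step: each top-level key
	// becomes a separate log field.
	var fields map[string]interface{}
	if err := json.Unmarshal([]byte(raw), &fields); err != nil {
		log.Fatal(err)
	}
	fmt.Println("asctime =", fields["asctime"]) // asctime = 2023-10-25 23:51:10,199999999
}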
Add a time parsing plugin
Convert the time string extracted in the previous step (the asctime field) into a standard nanosecond timestamp and use it as the event time for the log entry. The following table compares the three available plugins.
Plugin name | Core feature | Use case |
Time Parsing | Basic time parsing. | Simple use cases with a fixed format. |
Extract Log Time (strptime Time Format) | Flexible; supports a rich set of strptime formats. | Recommended. Comprehensive features and compatible with de facto standards. |
Extract Log Time (Go Time Format) | Uses Go standard library formats. | Use cases where you are familiar with Go or the log format matches the Go standard library. |
Time Parsing
Click Add Processor, select Time Parsing, and configure the following parameters:
Original Field: The field that contains the time value before parsing. For example, asctime.
Time Format: Set the time format to match the time field in the log. For example, in the format %Y-%m-%d %H:%M:%S,%f, the %f directive represents the fractional part of a second with nanosecond precision. The time format string must exactly match the time format in the raw log, including the separator between seconds and nanoseconds, such as a comma (,) or a period (.). Otherwise, parsing fails. For example, with this format, the sample value 2023-10-25 23:51:10,199999999 is split so that the seconds portion becomes __time__ and the fractional part 199999999 becomes __time_ns_part__.
Time Zone: The time zone of the log time field. The machine's time zone is used by default.
Extract Log Time (strptime Time Format)
Click Add Processor, select Extract Log Time (strptime Time Format), and then configure the following parameters:
Original Field: The field that contains the time value to be parsed. For example, asctime.
Time Format: Set the time format based on the values of the time field in the log. For example: %Y-%m-%d %H:%M:%S,%f. In this format, %f represents the fractional part of a second with nanosecond precision. The time format string must exactly match the time format in the raw log, including the separator between seconds and nanoseconds, such as a comma (,) or a period (.). Otherwise, parsing fails.
Extract Log Time (Go Time Format)
Click Add Processor, select Extract Log Time (Go Time Format), and configure the following parameters:
Original Time Field: The field that contains the time value before parsing. For example, asctime.
Time Format: The format of the time field in the raw log, which must follow the Go time format specification. The format template is based on Go's reference time: 2006-01-02 15:04:05 -0700 MST. The time format for this example is 2006-01-02 15:04:05,999999999. The time format string must exactly match the time format in the raw log, including the separator between seconds and nanoseconds, such as a comma (,) or a period (.). Otherwise, parsing fails.
New Time Field: The destination field for the parsed time, such as result_asctime.
New Time Format: The format of the time value after parsing, which must follow the Go time format specification, such as 2006-01-02 15:04:05,999999999Z07:00.
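Before you deploy the configuration, you can sanity-check a Go time format locally with the standard time package. The following sketch uses the same layout as the example above and runs outside Logtail; note that Go 1.17 or later accepts a comma as the fractional-second separator in layouts:

package main

import (
	"fmt"
	"time"
)

func main() {
	// The same layout as the SourceFormat in this example.
	layout := "2006-01-02 15:04:05,999999999"
	t, err := time.Parse(layout, "2023-10-25 23:51:10,199999999")
	if err != nil {
		fmt.Println("parse failed:", err)
		return
	}
	// The parsed time carries both parts that SLS stores separately:
	// the seconds go to __time__ and the fraction to __time_ns_part__.
	fmt.Println("seconds:", t.Unix())           // 1698277870 (input has no zone, so UTC is assumed)
	fmt.Println("nanoseconds:", t.Nanosecond()) // 199999999
}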
3. Configure indexes
After you complete the Logtail configuration, click Next. The Query and Analysis Configurations page appears:
By default, the system enables full-text indexing, which lets you search for keywords in the raw log content.
To perform term queries by field, wait for the Preview Data to load and then click Automatic Index Generation. SLS then generates a field index based on the first entry in the preview data.
After you complete the index configuration, click Next to finish the collection setup.
Configure using a CRD (Kubernetes)
In an Alibaba Cloud Container Service for Kubernetes (ACK) cluster or a self-managed Kubernetes cluster, configure the collection of nanosecond-precision timestamps by using a ClusterAliyunPipelineConfig custom resource definition (CRD). The following sections provide configuration examples for the three time parsing plugins.
Time Parsing
apiVersion: telemetry.alibabacloud.com/v1alpha1
kind: ClusterAliyunPipelineConfig
metadata:
  name: ${your-config-name}
spec:
  config:
    aggregators: []
    global:
      EnableTimestampNanosecond: true
    inputs:
      - Type: input_file
        FilePaths:
          - /test/sls/json_nano.log
        MaxDirSearchDepth: 0
        FileEncoding: utf8
        EnableContainerDiscovery: true
    processors:
      - Type: processor_parse_json_native
        SourceKey: content
      - Type: processor_parse_timestamp_native
        SourceKey: asctime
        SourceFormat: '%Y-%m-%d %H:%M:%S,%f'
    flushers:
      - Type: flusher_sls
        Logstore: ${your-logstore-name}
    sample: |-
      {
        "asctime": "2025-11-03 15:39:14,229939478",
        "filename": "log_generator.sh",
        "levelname": "INFO",
        "lineno": 204,
        "module": "log_generator",
        "message": "{\"no\": 45, \"inner_loop\": 15, \"loop\": 1697, \"uuid\": \"80366fca-a57d-b65a-be07-2ac1173505d9\"}",
        "threadName": "MainThread"
      }
  project:
    name: ${your-project-name}
  logstores:
    - name: ${your-logstore-name}
Extract Log Time (strptime Time Format)
apiVersion: telemetry.alibabacloud.com/v1alpha1
kind: ClusterAliyunPipelineConfig
metadata:
  name: ${your-config-name}
spec:
  config:
    aggregators: []
    global:
      EnableTimestampNanosecond: true
    inputs:
      - Type: input_file
        FilePaths:
          - /test/sls/json_nano.log
        MaxDirSearchDepth: 0
        FileEncoding: utf8
        EnableContainerDiscovery: true
    processors:
      - Type: processor_parse_json_native
        SourceKey: content
      - Type: processor_strptime
        SourceKey: asctime
        Format: '%Y-%m-%d %H:%M:%S,%f'
        KeepSource: true
        AlarmIfFail: true
        AdjustUTCOffset: false
    flushers:
      - Type: flusher_sls
        Logstore: ${your-logstore-name}
    sample: |-
      {
        "asctime": "2025-11-03 15:39:14,229939478",
        "filename": "log_generator.sh",
        "levelname": "INFO",
        "lineno": 204,
        "module": "log_generator",
        "message": "{\"no\": 45, \"inner_loop\": 15, \"loop\": 1697, \"uuid\": \"80366fca-a57d-b65a-be07-2ac1173505d9\"}",
        "threadName": "MainThread"
      }
  project:
    name: ${your-project-name}
  logstores:
    - name: ${your-logstore-name}
Extract Log Time (Go Time Format)
apiVersion: telemetry.alibabacloud.com/v1alpha1
kind: ClusterAliyunPipelineConfig
metadata:
  name: ${your-config-name}
spec:
  config:
    aggregators: []
    global:
      EnableTimestampNanosecond: true
    inputs:
      - Type: input_file
        FilePaths:
          - /test/sls/json_nano.log
        MaxDirSearchDepth: 0
        FileEncoding: utf8
        EnableContainerDiscovery: true
    processors:
      - Type: processor_parse_json_native
        SourceKey: content
      - Type: processor_gotime
        SourceKey: asctime
        SourceFormat: '2006-01-02 15:04:05,999999999'
        DestKey: result_asctime
        DestFormat: '2006-01-02 15:04:05,999999999Z07:00'
        SetTime: true
        KeepSource: true
        NoKeyError: true
        AlarmIfFail: true
    flushers:
      - Type: flusher_sls
        Logstore: ${your-logstore-name}
    sample: |-
      {
        "asctime": "2025-11-03 15:39:14,229939478",
        "filename": "log_generator.sh",
        "levelname": "INFO",
        "lineno": 204,
        "module": "log_generator",
        "message": "{\"no\": 45, \"inner_loop\": 15, \"loop\": 1697, \"uuid\": \"80366fca-a57d-b65a-be07-2ac1173505d9\"}",
        "threadName": "MainThread"
      }
  project:
    name: ${your-project-name}
  logstores:
    - name: ${your-logstore-name}
Step 4: Verify the results
After the configuration is complete, wait for a few moments for new log data to be collected into the logstore.
On the Search & Analyze page in the SLS console, view the collected logs. The console automatically optimizes the display of high-precision time information by showing it in millisecond, microsecond, or nanosecond format.

FAQ
Why can't nanosecond timestamps be parsed correctly during log collection?
After a Logtail configuration is created and applied, high-precision timestamps fail to be parsed.

Cause
The plugin in use supports %f, but the configured time format does not match the time format of the raw logs.
Solution
Log on to the Logtail machine and check the plugin log. If parsing fails, it contains multiple STRPTIME_PARSE_ALARM error entries:
tail -f /usr/local/ilogtail/logtail_plugin.LOG
2023-10-26 00:30:39 [WRN] [strptime.go:164] [processLog] [##1.0##xxxx,xxx] AlarmType:STRPTIME_PARSE_ALARM strptime(2023-10-26 00:30:10,199999999, %Y-%m-%d %H:%M:%S %f) failed: 0001-01-01 00:00:00 +0000 UTC, <nil>
Change the time format for the plugin.
The raw log time is 2023-10-26 00:30:10,199999999. The separator between the seconds and the nanoseconds is a comma (,). However, the parsing format is %Y-%m-%d %H:%M:%S %f, where the separator is a space. Modify the time format in the collection configuration to %Y-%m-%d %H:%M:%S,%f.
Costs and limits
Cost impact: The __time_ns_part__ field is stored as part of the log content, which slightly increases the storage volume of raw logs.
Environment limits: This feature is supported only by Logtail 1.8.0 or later on Linux systems. It does not take effect on Windows systems.
References
Appendix 1: Common log time formats
On Linux servers, Logtail supports all time formats that are provided by the strftime function. This means that any log time string that can be formatted by the strftime function can be parsed and used by Logtail.
Time format | Description | Example |
%a | Abbreviated weekday name. | Fri |
%A | Full weekday name. | Friday |
%b | Abbreviated month name. | Jan |
%B | Full month name. | January |
%d | Day of the month as a zero-padded decimal number. Range: 01 to 31. | 07, 31 |
%f | The fractional part of a second (millisecond, microsecond, or nanosecond). | 123 |
%h | Abbreviated month name. Same as %b. | Jan |
%H | Hour (24-hour clock). | 22 |
%I | Hour (12-hour clock). | 11 |
%m | Month as a zero-padded decimal number. Range: 01 to 12. | 08 |
%M | Minute as a zero-padded decimal number. Range: 00 to 59. | 59 |
%n | A newline character. | Newline character |
%p | AM or PM. | AM, PM |
%r | 12-hour clock time. Same as %I:%M:%S %p. | 11:59:59 AM |
%R | Hour and minute. Same as %H:%M. | 23:59 |
%S | Second as a zero-padded decimal number. Range: 00 to 59. | 59 |
%t | A tab character. | None |
%y | Year without century as a zero-padded decimal number. Range: 00 to 99. | 04, 98 |
%Y | Year with century as a decimal number. | 2004, 1998 |
%C | Century as a decimal number. Range: 00 to 99. | 16 |
%e | Day of the month as a space-padded decimal number. Range: 1 to 31. A leading space is used for single-digit numbers. | 7, 31 |
%j | Day of the year as a zero-padded decimal number. Range: 001 to 366. | 365 |
%u | Weekday as a decimal number, where 1 is Monday. Range: 1 to 7. | 2 |
%U | Week number of the year, with Sunday as the first day of the week. Range: 00 to 53. | 23 |
%V | Week number of the year, with Monday as the first day of the week. Range: 01 to 53. If the week containing January 1 has four or more days in the new year, it is week 1. Otherwise, it is the next week. | 24 |
%w | Weekday as a decimal number, where 0 is Sunday. Range: 0 to 6. | 5 |
%W | Week number of the year, with Monday as the first day of the week. Range: 00 to 53. | 23 |
%c | Standard date and time. | Tue Nov 20 14:12:58 2020 |
%x | Standard date without time. | Tue Nov 20 2020 |
%X | Standard time without date. | 11:59:59 |
%s | Unix timestamp. | 1476187251 |
Time format examples
The following table shows common time standards, examples, and their corresponding time expressions.
Example | Time expression | Time standard |
2017-12-11 15:05:07 | %Y-%m-%d %H:%M:%S | Custom |
[2017-12-11 15:05:07.012] | [%Y-%m-%d %H:%M:%S | Custom |
2017-12-11 15:05:07.123 | %Y-%m-%d %H:%M:%S.%f | Custom |
02 Jan 06 15:04 MST | %d %b %y %H:%M | RFC822 |
02 Jan 06 15:04 -0700 | %d %b %y %H:%M | RFC822Z |
Monday, 02-Jan-06 15:04:05 MST | %A, %d-%b-%y %H:%M:%S | RFC850 |
Mon, 02 Jan 2006 15:04:05 MST | %a, %d %b %Y %H:%M:%S | RFC1123 |
2006-01-02T15:04:05Z07:00 | %Y-%m-%dT%H:%M:%S | RFC3339 |
2006-01-02T15:04:05.999999999Z07:00 | %Y-%m-%dT%H:%M:%S | RFC3339Nano |
1637843406 | %s | Custom |
1637843406123 | %s | Custom (SLS processes it with second-level precision) |
Appendix 2: Go time formats
The following are the official time format examples from Go:
const (
Layout = "01/02 03:04:05PM '06 -0700" // The reference time, in numerical order.
ANSIC = "Mon Jan _2 15:04:05 2006"
UnixDate = "Mon Jan _2 15:04:05 MST 2006"
RubyDate = "Mon Jan 02 15:04:05 -0700 2006"
RFC822 = "02 Jan 06 15:04 MST"
RFC822Z = "02 Jan 06 15:04 -0700" // RFC822 with numeric zone
RFC850 = "Monday, 02-Jan-06 15:04:05 MST"
RFC1123 = "Mon, 02 Jan 2006 15:04:05 MST"
RFC1123Z = "Mon, 02 Jan 2006 15:04:05 -0700" // RFC1123 with numeric zone
RFC3339 = "2006-01-02T15:04:05Z07:00"
RFC3339Nano = "2006-01-02T15:04:05.999999999Z07:00"
Kitchen = "3:04PM"
// Handy time stamps.
Stamp = "Jan _2 15:04:05"
StampMilli = "Jan _2 15:04:05.000"
StampMicro = "Jan _2 15:04:05.000000"
StampNano = "Jan _2 15:04:05.000000000"
)
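As a standalone illustration of how these layouts handle fractional seconds (not part of any SLS configuration), the following program parses an RFC3339Nano string and re-formats it with two of the layouts above:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Parse an RFC3339Nano string, then re-format it with other layouts.
	t, err := time.Parse(time.RFC3339Nano, "2023-10-25T23:51:10.199999999Z")
	if err != nil {
		fmt.Println("parse failed:", err)
		return
	}
	fmt.Println(t.Format(time.RFC3339Nano)) // 2023-10-25T23:51:10.199999999Z
	fmt.Println(t.Format(time.StampNano))   // Oct 25 23:51:10.199999999
	fmt.Println(t.Format(time.StampMilli))  // Oct 25 23:51:10.199 (fraction truncated to milliseconds)
}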