Security Center: General data import procedure

Last Updated: Mar 31, 2026

Agentic SOC collects logs from Kafka, Amazon S3, and Alibaba Cloud Object Storage Service (OSS) into a unified log analysis and security operations center, giving you consistent visibility across cloud environments.

How it works

The import flow has four stages:

  1. Choose an import channel (Kafka, S3, or OSS) based on your latency requirements and data source.

  2. Deliver source logs to that channel.

  3. Grant Agentic SOC permission to read from the channel.

  4. Create a data import task and configure parsing rules.

Prerequisites

Before you begin, ensure that you have:

  • An active Security Center subscription with the Agentic SOC feature enabled

  • Access credentials (AccessKey pairs or passwords) for your log source

  • Endpoint and bucket or topic information for the chosen import channel

Choose an import channel

Select the channel that matches your data source, latency needs, and cost requirements.

  • Kafka: Best for real-time log stream analysis and for self-hosted platforms that export logs via Kafka. Latency: low. Supported formats: JSON, text (raw log). Note: configuration is more complex than for S3 or OSS.

  • S3 (or S3-compatible storage): Best for sources where minute-level latency is acceptable and for any source that supports the S3 protocol. Latency: minute-level. Supported formats: CSV, JSON, text, multi-line text. Note: simple setup; works with AWS S3, Tencent Cloud COS, and Huawei Cloud OBS.

  • OSS: Best for log data already stored in Alibaba Cloud OSS. Latency: minute-level. Supported formats: CSV, single-line JSON, text, multi-line text, ORC, Parquet, OSS access log, Alibaba Cloud CDN download log. Note: tightly integrated with the Alibaba Cloud ecosystem; same-region access is free.

Deliver logs to the import channel

Agentic SOC pulls data from Kafka, S3, or OSS — it does not reach into your source systems directly. Route your source logs to the chosen channel first.

Common delivery methods:

  • OSS: Store log data in an Alibaba Cloud OSS bucket. See OSS Quick Start.

  • Huawei Cloud: Use the log dump feature in Huawei Cloud Log Tank Service (LTS) to forward logs to Kafka or Object Storage Service (OBS). See Import Huawei Cloud log data.

  • Tencent Cloud: Use the log dump feature in Tencent Cloud Log Service (CLS) to forward logs to Kafka or Cloud Object Storage (COS). See Import Tencent Cloud log data.

  • Azure: Azure Event Hubs is compatible with the Apache Kafka protocol. Agentic SOC can read from an Event Hub as if it were a Kafka service. See Import Azure log data.

Save the access credentials, endpoints, and bucket or topic names from this step — you'll need them when configuring the import task.

Grant Security Center access

Before creating an import task, grant Agentic SOC permission to read from the channel.

Grant access for Kafka and S3

  1. Log on to the Security Center console and navigate to System Settings > Feature Settings. In the upper-left corner, select the region where your assets are located: Chinese Mainland or Outside Chinese Mainland.

  2. On the Multi-cloud Configuration Management tab, select Multi-cloud Assets, then click Grant Permission.

  3. In the panel that appears, select IDC from the drop-down list.

    When configuring a generic Kafka or S3-compatible service (such as AWS S3, Tencent Cloud COS, or Huawei Cloud OBS), select IDC. This is a UI classification only and does not affect functionality.
  4. Configure the connection parameters based on your channel type.

Kafka parameters

  • Service Provider: Select Apache.
  • Service: Select Kafka.
  • Endpoint: The public endpoint of the Kafka service. For Azure Event Hubs, use <YOUR-NAMESPACE>.servicebus.windows.net:9093.
  • Username/Password: The Kafka credentials. For Azure Event Hubs, the default username is $ConnectionString and the password is the primary connection string.
  • Communication Protocol: Maps to security.protocol in the Kafka configuration. Options: plaintext (default), sasl_plaintext, sasl_ssl, ssl.
  • SASL Authentication Mechanism: Maps to sasl.mechanism in the Kafka configuration. Options: plain, SCRAM-SHA-256, SCRAM-SHA-512.
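
Before entering these values in the console, you can sanity-check the endpoint and credentials by consuming a few records directly. The sketch below uses the kafka-python client; the topic name, endpoint, and credentials are placeholders, and the SASL values shown are one possible combination:

    # Connectivity check with kafka-python (pip install kafka-python).
    # Topic, endpoint, and credential values are placeholders.
    from kafka import KafkaConsumer

    consumer = KafkaConsumer(
        "security-logs",  # for Azure Event Hubs, the Event Hub name is the topic
        bootstrap_servers="<YOUR-NAMESPACE>.servicebus.windows.net:9093",
        security_protocol="SASL_SSL",   # maps to the Communication Protocol setting
        sasl_mechanism="PLAIN",         # maps to the SASL Authentication Mechanism setting
        sasl_plain_username="$ConnectionString",
        sasl_plain_password="<primary-connection-string>",
        auto_offset_reset="earliest",
        consumer_timeout_ms=10_000,     # give up after 10 seconds with no records
    )
    for record in consumer:
        print(record.topic, record.value[:200])  # raw bytes: JSON or text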

S3 parameters

  • Service Provider: Select AWS-S3.
  • Service: Select S3.
  • Endpoint: The endpoint of the object storage service.
  • Access Key Id / Secret Access Key: The AccessKey pair used to access S3.
  5. Configure the synchronization policy. AK Service Status Check sets the interval at which Security Center verifies that your AccessKey pair is still valid. Select Disable to turn off the check.

Grant access for OSS

  1. Go to the Cloud Resource Access Authorization page and click Confirm Authorization to grant the system role permission to access OSS resources.

  2. If you are a Resource Access Management (RAM) user, attach the following custom policy to your RAM user. For instructions, see Create a custom policy and Manage RAM user permissions.

    {
      "Statement": [
        {
          "Effect": "Allow",
          "Action": "ram:PassRole",
          "Resource": "acs:ram:*:*:role/aliyunlogimportossrole"
        },
        {
          "Effect": "Allow",
          "Action": "oss:GetBucketWebsite",
          "Resource": "*"
        },
        {
          "Effect": "Allow",
          "Action": "oss:ListBuckets",
          "Resource": "*"
        }
      ],
      "Version": "1"
    }
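
    In this policy, the ram:PassRole statement lets the RAM user pass the aliyunlogimportossrole system role to the import task, and the OSS actions let the console list and inspect your buckets during configuration.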

Create a data import task

Step 1: Create a data source

Skip this step if a data source already exists.

  1. Log on to the Security Center console and navigate to Agentic SOC > Integration Center. In the upper-left corner, select the region where your assets are located: Chinese Mainland or Outside Chinese Mainland.

  2. On the Data Source tab, create a data source. For detailed steps, see Create a data source for Simple Log Service (SLS). Configure the following parameters:

    • Source Data Source Type: Select Agentic SOC Dedicated Collection Channel (recommended) or User Log Service.
    • Add Instances: Create a new Logstore to isolate this data from other sources.

Step 2: Configure the import source

On the Data Import tab, click Add Data, then configure the parameters for your channel type.

For parameter values such as endpoints and bucket names, refer to the official documentation of your cloud provider.

Kafka

  • Endpoint: Select the Kafka public endpoint that you entered when granting access.
  • Topics: The Kafka topic where logs are stored. For Azure Event Hubs, the Event Hub name is the Kafka topic.
  • Value Type: The log storage format.

Value Type options:

  • JSON: json
  • Raw log: text

S3

Warning

Always set File Path Prefix Filter. If left blank, the import task traverses the entire S3 bucket, which can severely degrade performance when the bucket contains many files.

  • Endpoint: The endpoint of the S3 bucket.
  • S3 Bucket: The name of the S3 bucket.
  • File Path Prefix Filter: Filters objects by path prefix. For example, set it to csv/ to import only objects stored in the csv/ directory.
  • File Path Regex Filter: Filters files by regular expression. Default: blank (no filtering). Example: (testdata/csv/)(.*) matches a file at testdata/csv/bill.csv. Evaluated with a logical AND together with File Path Prefix Filter. See How to debug a regular expression.
  • Data Format: The parsing format for the file. See S3 data formats.
  • Encoding Format: The encoding of the S3 file. Options: GBK, UTF-8.
  • Compression Format: Auto-detected from the S3 data compression settings.
  • Modified Time: The import start time is set to 30 minutes before the task begins; all files modified after that time, including newly added files, are imported.
  • New File Check Cycle: How often the import task scans for new files.
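
The two filters combine with a logical AND: the prefix narrows the server-side listing, and the regex is then applied to each returned key. The sketch below models that selection logic with boto3; the endpoint, bucket, credentials, and patterns are placeholders:

    # File-selection logic equivalent to the prefix and regex filters,
    # sketched with boto3 (pip install boto3). All values are placeholders.
    import re
    import boto3

    s3 = boto3.client(
        "s3",
        endpoint_url="https://s3.us-east-1.amazonaws.com",
        aws_access_key_id="<ACCESS-KEY-ID>",
        aws_secret_access_key="<SECRET-ACCESS-KEY>",
    )
    path_regex = re.compile(r"(testdata/csv/)(.*)")

    # The prefix is evaluated server-side; with a blank prefix, every object
    # in the bucket is listed, which is why the warning above applies.
    paginator = s3.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket="<BUCKET>", Prefix="testdata/csv/"):
        for obj in page.get("Contents", []):
            if path_regex.match(obj["Key"]):  # regex AND prefix
                print(obj["Key"], obj["LastModified"])
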
S3 data formats
  • CSV: A separator-delimited text file. Specify the first row as field names or define them manually. Each data row is parsed as log field values.

  • JSON: Reads the file line by line and parses each line as a JSON object. JSON fields map to log fields.

  • Text: Parses each line as a single log entry.

  • Multi-line Text: Parses multi-line log entries using a regular expression for the first or last line.

If you select CSV or Multi-line Text, configure the additional parameters below.

CSV parameters:

  • Separator: The delimiter for log fields. Default: comma (,).
  • Quote Character: The quote character used in the CSV string.
  • Escape Character: The escape character for log fields. Default: backslash (\).
  • Max Line Span: The maximum number of lines for a log entry that spans multiple lines. Default: 1.
  • First Row as Header: When enabled, the first row of the CSV file is used as the log field names.
  • Skip Rows: The number of rows to skip at the start of the file. For example, 1 starts collection from the second row.
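
To illustrate how these options interact, here is a rough equivalent using Python's standard csv module; the file name, dialect values, and the ordering of skipped rows relative to the header row are assumptions:

    # Illustrative CSV parsing with the options above (standard library only).
    # The exact order in which the console applies Skip Rows and First Row as
    # Header is an assumption here.
    import csv

    SKIP_ROWS = 0  # "Skip Rows"
    with open("bill.csv", newline="", encoding="utf-8") as f:
        reader = csv.reader(f, delimiter=",", quotechar='"', escapechar="\\")
        for _ in range(SKIP_ROWS):
            next(reader)
        header = next(reader)  # "First Row as Header": field names
        for row in reader:
            print(dict(zip(header, row)))  # each data row -> log field values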

Multi-line Text parameters:

  • Regex Match Position: Prefix Regex matches the first line of a log entry; subsequent non-matching lines belong to that entry until Max Lines is reached. Suffix Regex matches the last line of a log entry; non-matching lines accumulate into the current entry until a match or Max Lines is reached.
  • Regex: The regular expression, based on your log content. See How to debug a regular expression.
  • Max Lines: The maximum number of lines in a single log entry.
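
The Prefix Regex behavior can be pictured as the grouping loop below; the timestamp pattern is only an example of a first-line regex:

    # Illustrative Prefix Regex grouping: a line matching FIRST_LINE starts a
    # new entry; following non-matching lines belong to it, up to MAX_LINES.
    import re

    FIRST_LINE = re.compile(r"^\d{4}-\d{2}-\d{2} ")  # example first-line pattern
    MAX_LINES = 10

    def group_entries(lines):
        entry = []
        for line in lines:
            if FIRST_LINE.match(line) or len(entry) >= MAX_LINES:
                if entry:
                    yield "".join(entry)
                entry = []
            entry.append(line)
        if entry:
            yield "".join(entry)

    sample = ["2026-03-31 12:00:00 error\n", "  at frame 1\n",
              "2026-03-31 12:00:05 ok\n"]
    print(list(group_entries(sample)))  # two entries: the error (2 lines), the ok line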

OSS

Important

OSS import supports files up to 5 GB. For compressed files, the limit applies to the size after compression.

Warning

Always set File Path Prefix Filter. If left blank, the import task traverses the entire OSS bucket, which can degrade performance and incur unnecessary costs.

  • OSS Region: The region where the OSS data is stored. Cross-region access incurs data transfer costs charged by OSS. See Billing details.
  • Bucket: The OSS bucket that contains the files to import.
  • File Path Prefix Filter: Filters objects by path prefix. For example, set it to csv/ to import only files in the csv/ directory.
  • File Path Regex Filter: Filters files by regular expression. Default: blank (no filtering). Example: (testdata/csv/)(.*) matches a file at testdata/csv/bill.csv. Evaluated with a logical AND together with File Path Prefix Filter. See How to debug a regular expression.
  • Modified Time: The import start time is set to 30 minutes before the task is initiated; all files modified after that time, including newly added files, are imported.
  • Data Format: The format used to parse the file. See OSS data formats.
  • Encoding Format: The encoding of the OSS file. Options: GBK, UTF-8.
  • Compression Format: Auto-detected from the OSS data compression settings.
  • New File Check Cycle: How often the import task scans for new files.

OSS data formats
  • CSV: A separator-delimited text file. Specify the first row as field names or define them manually. Each data row is parsed as log field values.

  • Single-Line JSON: Reads the file line by line and parses each line as a JSON object. JSON fields map to log fields.

  • Text: Parses each line as a single log entry.

  • Multi-line Text: Parses multi-line log entries using a regular expression for the first or last line.

  • ORC: Apache ORC format. Automatically parsed into log format with no additional configuration.

  • Parquet: Apache Parquet format. Automatically parsed into log format with no additional configuration.

  • OSS Access Log: Format for Alibaba Cloud OSS access logs. See Log storage.

  • Alibaba Cloud CDN Download Log: Format for Alibaba Cloud CDN download logs. See Quick Start.

If you select CSV or Multi-line Text, configure the additional parameters below.

CSV parameters:

  • Separator: The delimiter for log fields. Default: comma (,).
  • Quote Character: The quote character used in the CSV string.
  • Escape Character: The escape character for log fields. Default: backslash (\).
  • Max Line Span: The maximum number of lines for a log entry that spans multiple lines. Default: 1.
  • First Row as Header: When enabled, the first row of the CSV file is used as the log field names.
  • Skip Rows: The number of rows to skip at the start of the file. For example, 1 starts collection from the second row.

Multi-line Text parameters:

  • Regex Match Position: Prefix Regex matches the first line of a log entry; subsequent non-matching lines belong to that entry until Max Lines is reached. Suffix Regex matches the last line of a log entry; non-matching lines accumulate into the current entry until a match or Max Lines is reached.
  • Regex: The regular expression, based on your log content. See How to debug a regular expression.
  • Max Lines: The maximum number of lines in a single log entry.

Step 3: Configure the destination

  • Data Source Name: Select the data source created in Step 1.
  • Target Logstore: The Logstore configured in Step 1.

Step 4: Save and start

Click OK to save the configuration. Security Center automatically starts pulling logs from the import channel.

Analyze imported data

After logs are ingested, configure parsing and detection rules so Agentic SOC can analyze them.

Create an integration policy

Create a new integration policy in Product integration with the following settings:

  • Data Source: Select the target data source configured in the import task.
  • Standardized Rule: Agentic SOC provides built-in standardized rules for some cloud products. Select one if available for your source.
  • Standardization Method: When access logs are converted to alert logs, only Real-time Consumption is supported.

Configure threat detection rules

Enable or create log detection rules in rule management to generate alerts and security events from the ingested logs. See Detection Rules.

Billing

Costs vary by the data source type you selected and the cloud providers you import from.

Agentic SOC and Simple Log Service (SLS) costs

  • Agentic SOC Dedicated Collection Channel
    Agentic SOC billable items: log ingestion fees, plus log storage and write fees (these consume Log Ingestion Traffic).
    SLS billable items: fees other than log storage and writes (such as Internet traffic).
    Notes: Agentic SOC creates and manages the SLS resources and is billed for Logstore storage and write fees.

  • User Log Service
    Agentic SOC billable items: log ingestion fees (these consume Log Ingestion Traffic).
    SLS billable items: all log-related fees (storage, writes, data transfer, and more).
    Notes: all log resources are managed by SLS, which is billed for all log fees.

For billing details, see Agentic SOC subscription, Agentic SOC pay-as-you-go, and SLS billing overview.

OSS import costs

When importing from OSS, OSS charges for traffic and API requests. Same-region access uses the internal network and is free; cross-region access uses the Internet and is charged.

The daily cost depends on the following variables:

  • N: The number of files imported per day.
  • T: The total data volume imported per day, in GB.
  • p_read: The traffic fee per GB. Same-region access uses internal-network outbound traffic (free); cross-region access uses Internet outbound traffic (charged).
  • p_put: The fee per 10,000 PUT requests. SLS uses the ListObjects API to list files in the bucket (up to 1,000 entries per call), which OSS bills as PUT requests.
  • p_get: The fee per 10,000 GET requests.
  • M: The new-file check interval in minutes, set by the New File Check Cycle parameter.
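
These fields suggest three cost components: traffic for the data read, GET requests for the files fetched, and PUT-billed ListObjects calls for each check cycle. The estimator below assembles them on that assumption; it is not the official OSS formula, so confirm any real numbers against OSS Pricing:

    # Rough daily-cost estimate assembled from the variable definitions above.
    # The decomposition is an assumption, not the official OSS formula.
    def estimate_daily_oss_import_cost(N, T, p_read, p_put, p_get, M):
        traffic_fee = T * p_read                 # 0 for same-region (internal network)
        get_fee = N / 10_000 * p_get             # roughly one GET per imported file
        scans_per_day = 24 * 60 / M              # New File Check Cycle
        list_calls = scans_per_day * max(1, N / 1_000)  # <= 1,000 entries per call
        put_fee = list_calls / 10_000 * p_put
        return traffic_fee + get_fee + put_fee

    # Example: 50,000 files/day, 100 GB/day, cross-region, 5-minute check cycle.
    print(estimate_daily_oss_import_cost(N=50_000, T=100, p_read=0.5,
                                         p_put=0.01, p_get=0.01, M=5))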

For OSS pricing, see OSS Pricing.

Third-party channel costs
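
When you import through Kafka or a third-party object storage service (such as AWS S3, Tencent Cloud COS, or Huawei Cloud OBS), the provider bills its own traffic, request, and storage fees. See your provider's pricing documentation for details.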

What's next