You can upload log files to Object Storage Service (OSS) buckets for storage and then import the log data from OSS to Log Service, where you can query, analyze, and transform the data. You can import only OSS objects that are no more than 5 GB in size. For a compressed object, the 5 GB limit applies to the compressed size.
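For reference, the following is a minimal sketch of uploading a local log file to an OSS bucket with the oss2 Python SDK. The endpoint, bucket name, object key, and local path are placeholders that you must replace with your own values.

```python
import oss2

# Placeholder credentials and names; replace with your own values.
auth = oss2.Auth("<ACCESS_KEY_ID>", "<ACCESS_KEY_SECRET>")
bucket = oss2.Bucket(auth, "https://oss-cn-hangzhou.aliyuncs.com", "<your-bucket>")

# Upload a local log file as an OSS object.
bucket.put_object_from_file("logs/app/2024-06-01.log", "/var/log/app.log")
```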
Prerequisites
- Log files are uploaded to an OSS bucket. For more information, see Upload objects.
- A project and a Logstore are created. For more information, see Create a project and Create a Logstore.
- Log Service is authorized to assume the AliyunLogImportOSSRole role to access your OSS resources. You can complete the authorization on the Cloud Resource Access Authorization page. If you use a RAM user, you must grant the PassRole permission to the RAM user. The following example shows a policy that you can use to grant the PassRole permission. For more information, see Create a custom policy and Grant permissions to the RAM user.
{ "Statement": [ { "Effect": "Allow", "Action": "ram:PassRole", "Resource": "acs:ram:*:*:role/aliyunlogimportossrole" } ], "Version": "1" }
Create a data import configuration
- Log on to the Log Service console.
- On the Data Import tab in the Import Data section, click OSS - Data Import.
- Select the project and Logstore. Then, click Next.
- In the Configure Import Settings step, create a data import configuration.
- Preview data, configure indexes, and then click Next. By default, full-text indexing is enabled for Log Service. You can also configure field indexes for collected logs in manual or automatic mode. To configure field indexes in automatic mode, click Automatic Index Generation, and Log Service automatically creates the field indexes. For more information, see Create indexes.
Important: If you want to query and analyze logs, you must enable full-text indexing or field indexing. If you enable both, the system uses only field indexes.
- Click Log Query. On the Search & Analysis page, wait for approximately 1 minute, and then check whether the OSS data is imported. If the required OSS data appears, the import is successful.
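You can also verify the import programmatically by querying the Logstore. The following is a minimal sketch that uses the aliyun-log Python SDK; the endpoint, project, and Logstore names are placeholders that you must replace.

```python
import time
from aliyun.log import LogClient

# Placeholder endpoint and credentials; replace with your own values.
client = LogClient("cn-hangzhou.log.aliyuncs.com",
                   "<ACCESS_KEY_ID>", "<ACCESS_KEY_SECRET>")

now = int(time.time())
# Query the last 15 minutes of data; an empty query matches all logs.
resp = client.get_log("<your-project>", "<your-logstore>",
                      from_time=now - 900, to_time=now, query="", size=10)
print("matched logs:", resp.get_count())
for log in resp.get_logs():
    print(log.get_time(), log.get_contents())
```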
View the data import configuration
After you create the data import configuration, you can view the configuration details and related statistical reports in the Log Service console.
- In the Projects section, click the project to which the data import configuration belongs.
- In the left-side navigation pane, choose Log Storage > Logstores. Click the Logstore to which the data import configuration belongs, choose Data Import, and then click the name of the data import configuration.
- On the Import Configuration Overview page, view the basic information and statistical reports of the data import configuration.
What to do next
On the Import Configuration Overview page, you can perform the following operations on the data import configuration:
- Modify the configuration
To modify the data import configuration, click Modify Settings. For more information, see Create a data import configuration.
- Delete the configuration
To delete the data import configuration, click Delete Configuration.
Warning: After the data import configuration is deleted, it cannot be recovered.
- Stop the task
To stop the data import task, click Stop.
FAQ
Issue | Cause | Solution |
---|---|---|
No data is displayed during preview. | The OSS bucket contains no objects, the objects contain no data, or no objects meet the filter conditions. | Check whether the OSS bucket contains objects that have data. If the objects have data but do not meet the filter conditions, modify the filter conditions. |
Garbled characters exist. | The data format, compression format, or encoding format is not configured as expected. | Check the actual format of the OSS objects, and then modify the Data Format, Compression Format, or Encoding Format parameter. To handle the existing garbled characters, create a Logstore and a data import configuration. |
The log time displayed in Log Service is different from the actual time at which data is imported. | The time field is not specified in the data import configuration, or the specified time format or time zone is invalid. | Specify a time field or specify a valid time format or time zone. For more information, see Create a data import configuration. |
After data is imported, the data cannot be queried or analyzed. | No indexes are configured for the Logstore, or the configured indexes have not taken effect. | Enable full-text indexing or field indexing for the Logstore, and then wait approximately 1 minute. For more information, see Create indexes. |
The number of imported data entries is less than expected. | Some OSS objects contain data in which a line is greater than 3 MB in size. In this case, the data is discarded during the import. For more information, see Limits on collection. | When you write data to an OSS object, make sure that the size of a single line does not exceed 3 MB. For a pre-upload check, see the sketch after this table. |
The number of OSS objects and the total volume of data are large, but the import speed does not meet your expectation. In most cases, the import speed can reach 80 MB/s. | The number of shards in the Logstore is excessively small. For more information, see Limits on performance. | If the number of shards in a Logstore is small, increase the number of shards to 10 or more and check the latency. For more information, see Manage shards. |
You cannot select an OSS bucket when you create a data import configuration. | The AliyunLogImportOSSRole role is not assigned to Log Service. | Complete authorization. For more information, see the "Prerequisites" section of this topic. |
Some OSS objects failed to be imported to Log Service. | The settings of the filter conditions are invalid, or the size of a single object exceeds 5 GB. For more information, see Limits on collection. | Modify the filter conditions. Make sure that each object to be imported is no more than 5 GB in size; split larger objects before you upload them. |
No Archive objects are imported to Log Service. | Import Archive Files is turned off. For more information, see Limits on collection. | Turn on Import Archive Files in the data import configuration, or create a data import configuration in which Import Archive Files is turned on. |
Multi-line text logs are incorrectly parsed. | The specified regular expression that is used to match the first line or the last line in a log is invalid. | Check whether the regular expression that is used to match the first line or the last line in a log is valid. |
The latency to import new OSS objects is higher than expected. | The number of existing OSS objects that meet the conditions specified by File Path Prefix Filter exceeds the limit and OSS Metadata Indexing is turned off in the data import configuration. | If the number of existing OSS objects that meet the conditions specified by File Path Prefix Filter exceeds one million, turn on OSS Metadata Indexing in the data import configuration. Otherwise, the efficiency of new file discovery is low. |
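As noted in the table, lines larger than 3 MB are discarded during import. The following minimal sketch checks a local log file for oversized lines before you upload it to OSS; the file path is a placeholder.

```python
LINE_LIMIT = 3 * 1024 ** 2  # 3 MB; larger lines are discarded during import

# Placeholder path; scan a local log file before uploading it to OSS.
with open("/var/log/app.log", "rb") as f:
    for line_number, line in enumerate(f, start=1):
        if len(line) > LINE_LIMIT:
            print(f"line {line_number}: {len(line)} bytes exceeds the 3 MB limit")
```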
Error handling
Item | Description |
---|---|
File read failure | If an OSS object fails to be completely read because of a network exception or because the object is damaged, the corresponding data import task automatically retries. If the object still fails to be read after three retries, the object is skipped. The retry interval is the same as the value of the New File Check Cycle parameter. If the New File Check Cycle parameter is set to Never Check, the retry interval is 5 minutes. |
Compression format parsing error | If the compression format is invalid when an OSS object is decompressed, the corresponding data import task skips the object. |
Data format parsing error | If the data in an OSS object fails to be parsed in the configured data format, the corresponding data import task uploads the raw data as the value of the content field of a log. |
Logstore does not exist | A data import task periodically retries. The data import task does not resume the import until the Logstore is recreated. If the Logstore does not exist, the data import task does not skip any OSS objects. Therefore, after the Logstore is recreated, the data import task automatically imports data from the unprocessed objects in the OSS bucket to the Log Service Logstore. |
OSS bucket does not exist | A data import task periodically retries. The data import task does not resume the import until the OSS bucket is recreated. |
Permission error | If a permission error occurs when data is read from an OSS bucket or data is written to a Log Service Logstore, the corresponding data import task periodically retries. The data import task does not resume the import until the error is fixed. If a permission error occurs, the data import task does not skip any OSS objects. Therefore, after the error is fixed, the data import task automatically imports data from the unprocessed objects in the OSS bucket to the Log Service Logstore. |
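For intuition, the following is a minimal sketch of the read-retry policy described above: up to three retries spaced by the new-file check cycle, falling back to a 5-minute interval when the cycle is set to Never Check. The function and constant names are illustrative and are not part of any SDK.

```python
import time

CHECK_CYCLE_SECONDS = None   # None models New File Check Cycle = Never Check
FALLBACK_INTERVAL = 5 * 60   # 5-minute retry interval used in that case
MAX_RETRIES = 3              # the object is skipped after three failed retries

def read_object_with_retries(read_object, key):
    """Illustrative retry loop for reading a single OSS object."""
    interval = CHECK_CYCLE_SECONDS or FALLBACK_INTERVAL
    for attempt in range(1 + MAX_RETRIES):  # one initial read plus three retries
        try:
            return read_object(key)
        except IOError:
            if attempt == MAX_RETRIES:
                print(f"skipping {key} after {MAX_RETRIES} retries")
                return None
            time.sleep(interval)
```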