This topic describes some common errors reported when querying and analyzing logs, and provides solutions to them.
Logstore * does not exist
Error description
The logstore that you want to query does not exist in the current region and project.
Causes
The logstore has been deleted.
The logstore may exist in a different project.
The logstore may exist in a different region.
Solution
Make sure that the logstore that you want to query exists in the current project and region.
Logstore without index config
Error description
You are using SQL syntax, but the logstore has no indexes configured. SQL requires at least one index to be configured.
Cause
The logstore that you want to query has no indexes configured.
Solution
Check the logstore that you want to query and make sure that index configurations are enabled for at least one column.
Too many queued queries
Error description
Too many SQL requests are queued because the SQL concurrency quota of the current project has been used up.
Explanation
The SQL concurrency quota of Simple Log Service is isolated at the project level. Multiple SQL requests in the same project are submitted to the Simple Log Service server. While an SQL query is being executed, it occupies one unit of the SQL concurrency quota; when the query finishes, the unit is returned. The SQL concurrency quota of a project is 15 queries in normal SQL mode and 150 queries in enhanced SQL mode.
Causes
The number of concurrent requests is too high.
The latency of a single SQL request is high.
The retry logic of SQL requests in your business code causes many loop retries.
Solution
Reduce the number of requests.
Optimize SQL to reduce the execution latency of a single SQL.
Add a random wait time (jitter) to the retry logic to avoid tight retry loops, which add extra concurrent request pressure.
Duplicate column conflicts
Error description
There is a conflict between the alias of the index field in the logstore that you want to query. SQL cannot determine the specific column you want to analyze.
Cause
A column name in the logstore is exactly the same as the alias of a column.
Solution
Check the index key columns of the logstore that you want to query and ensure that there are no naming conflicts.
Denied by sts or ram, action:*
Error description
You do not have permission to access the logstore under your current identity.
Cause
The logstore is not authorized to your current identity.
Solution
Check the RAM permissions and grant read permissions on the logstore to your current identity. Authorization resource description:
action: log:GetLogStoreLogs, resource: acs:log:<region>:<uid>:project/<project>/logstore/<logstore>
You are using nested sql, please specify FROM LOGSTORE in the innermost sql
Error description
You are using nested subqueries in SQL. Please specify the table name in the innermost subquery.
Explanation
To simplify user queries, Simple Log Service uses the current logstore as the FROM table by default when you perform single-table queries. Suppose that your current logstore is named test; then the following three query statements are equivalent:
Statement 1: You can omit the FROM clause.
Statement 2: You can use FROM log, which specifies the current logstore.
Statement 3: You can explicitly specify the table you want to query as the logstore test.
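For example, with a simple count query, the three equivalent forms are (test is the hypothetical logstore name from this example):
* | SELECT count(*)
* | SELECT count(*) FROM log
* | SELECT count(*) FROM test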
However, when you perform complex queries that include subqueries, Simple Log Service cannot infer the target table for each subquery. Therefore, you must manually specify the FROM clause in the subquery.
Solution
If you want to query the current logstore, you can specify FROM log; the name log always refers to the current logstore.
Alternatively, you can explicitly specify the name of the target logstore you want to query.
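A minimal sketch of a nested query that specifies FROM log in the innermost subquery (request_method is an assumed field name):
* | SELECT avg(pv) FROM (SELECT count(*) AS pv FROM log GROUP BY request_method)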
In value and list items must be the same type: varchar
Error description
There is a syntax error in the SQL statement. The value and list items in the IN clause must be of the same data type: varchar.
Cause
When using the IN operator for queries, the provided value and list items have inconsistent data types. For example, the value is of varchar type, and the list items are of integer type.
Solution
Ensure that the value and list items in the IN operator have the same data type. You can use the CAST or CONVERT function to convert data types, or convert both the value and list items to the same data type before querying. We recommend using the same data type for the corresponding column fields when writing logs to avoid similar errors.
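A minimal sketch, assuming a hypothetical status field that is indexed as text (varchar) while the list items are numbers; either cast the value or quote the list items so that both sides share one type:
* | SELECT * FROM log WHERE CAST(status AS bigint) IN (200, 404)
* | SELECT * FROM log WHERE status IN ('200', '404')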
Unexpected parameters (bigint) for function url_decode. Expected: url_decode(varchar(x))
Error description
There is a syntax error in the SQL statement. The input parameter type is incorrect when using the SQL function.
Causes
The function requires a string type parameter, but the input parameter type is bigint.
This type of error can occur with other functions as well, such as regexp_like, not just url_decode. The cause is the same: the input parameter type is incorrect when the SQL function is called.
Solution
Convert the input parameter to a string type before passing it to the url_decode function. You can use the CAST or CONVERT function to convert the bigint type parameter to a string type parameter, or directly enclose the parameter in quotation marks when calling the function to convert it to a string type. For example:
SELECT url_decode(CAST(bigint_param AS varchar(20))) -- Use the CAST function to convert the bigint parameter to a string
SELECT url_decode('123456789') -- If the parameter is a literal, enclose it in quotation marks to pass it as a string
Target of repeat operator is not specified
Error description
The target of the repeat operator is not specified.
Cause
This may be an error in a regular expression, indicating that the target of the repeat operator is not specified. A repeat operator such as * matches zero or more occurrences of the preceding character or group, so it requires a target to repeat. For example, (a)* means zero or more occurrences of the group (a). If no target precedes the operator, as in a pattern that starts with *, the regular expression engine does not know what to repeat and reports this error.
Solution
Check whether each repeat operator (such as *) in the regular expression has a valid target and make the necessary corrections.
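For example, the following hypothetical query is valid because each repeat operator has a target (the + repeats the preceding character class [0-9]); request_uri is an assumed field name:
* | SELECT regexp_like(request_uri, 'item/[0-9]+')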
NULL values are not allowed on the probe side of SemiJoin operator. See the query plan for details.
Error description
NULL values are not allowed on the probe side of the SemiJoin.
Cause
In a SemiJoin, NULL values must not appear on the probe side; otherwise, the SemiJoin cannot be executed correctly. This may be because a row in the probe-side table or a subquery returned a NULL value at query runtime.
Solution
Check the query plan to determine which table returned a NULL value. If a subquery is involved, ensure that the result set returned by the subquery does not contain NULL values. If NULL values in an external table cause the issue, consider using INNER JOIN instead of SemiJoin, or use the COALESCE or ISNULL function to handle NULL values.
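A minimal sketch of the workaround, assuming a hypothetical user_id field: exclude NULL values from both the probe-side column and the subquery before the IN comparison.
* | SELECT * FROM log WHERE user_id IS NOT NULL AND user_id IN (SELECT user_id FROM log WHERE user_id IS NOT NULL)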
Array subscript out of bounds
Error description
Array subscript out of bounds.
Cause
You are trying to access an index position that is out of the array's range. For example, accessing a negative index or an index beyond the array's length. This may be due to incorrect logic in SQL or data entry errors.
Solution
In SQL, array index positions start from 1. Check the valid length of the array in SQL, then check the reference of the array index position, and ensure that the array subscript is not out of range.
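A minimal sketch with a hypothetical request_uri field: element_at returns NULL instead of failing when the index exceeds the array length, and cardinality can be used to check the length explicitly.
* | SELECT element_at(split(request_uri, '/'), 2) -- returns NULL if the array has fewer than 2 elements
* | SELECT IF(cardinality(split(request_uri, '/')) >= 2, split(request_uri, '/')[2])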
Expression "*" is not of type ROW
Error description
The specified expression is not of type ROW.
Cause
When using the ROW type, the expression "fields" does not meet the requirements of the ROW type. This may be because the parameters do not meet the requirements when using the ROW function.
Solution
Check whether the parameters of the ROW function are correct and whether all the fields in the parameters exist and meet the requirements. If the parameters are correct but the result is still not of type ROW, you can try using the CAST function to convert it to type ROW.
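A minimal sketch of a valid ROW expression; the field names id and name are illustrative:
SELECT CAST(ROW(1, 'alice') AS ROW(id bigint, name varchar)) AS r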
Key-value delimiter must appear exactly once in each entry. Bad input: '*'
Error description
The key-value delimiter must appear exactly once in each entry.
Cause
The delimiter appears more than once, or not at all, in one of the key-value pairs in the input, so the system cannot parse the pair.
Solution
Check whether the format of the key-value pair in the input is correct. Ensure that there is only one delimiter in each key-value pair and that there are correct keys and values on both sides of it.
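For example, this error can be triggered by functions such as split_to_map when an entry does not contain exactly one key-value delimiter; the input strings below are illustrative:
SELECT split_to_map('a=1&b=2', '&', '=') -- parses successfully: every entry contains exactly one '='
SELECT split_to_map('a=1&b', '&', '=') -- fails: the entry 'b' contains no key-value delimiter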
Pattern has # groups. cannot access group
Error description
The regular expression cannot access the specified group.
Cause
The regular expression contains no capturing groups, so the requested group (for example, group 1) cannot be accessed. This may occur if group-access syntax is used but no groups are defined in the pattern. In this case, the group does not exist and cannot be accessed.
Solution
Check the grouping syntax in the regular expression and ensure that at least one group is defined in the pattern. You can use parentheses () to define groups. For example, to match an email address in a string and separate the username and domain name into groups, use the regular expression (\w+)@(\w+\.\w+). In this example, there are two groups, so you can access the values of group 1 and group 2. If grouping is not needed, use non-capturing groups, such as (?:\w+)@(?:\w+\.\w+). If the issue persists, use an online regex checker to debug and verify the pattern before using it in SQL.
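A minimal sketch using regexp_extract with a hypothetical email field; the third argument selects the capturing group:
* | SELECT regexp_extract(email, '(\w+)@(\w+\.\w+)', 1) AS user_name, regexp_extract(email, '(\w+)@(\w+\.\w+)', 2) AS domain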
Ts_compare must group by timestamp, your grouping by type is :bigint
Error description
The ts_compare function must group by the timestamp type.
Cause
When using the ts_compare function in SQL, the column used in the group by clause may be a non-timestamp type, such as numeric.
Solution
Ensure that the column type used in the group by clause for the ts_compare function is the correct timestamp type. You can use functions such as from_unixtime to convert integer timestamps to the timestamp type.
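A minimal sketch of a day-over-day comparison of request counts; date_trunc returns a timestamp value, which satisfies the GROUP BY requirement of ts_compare:
* | SELECT stamp, ts_compare(pv, 86400) AS day_over_day FROM (SELECT date_trunc('hour', __time__) AS stamp, count(*) AS pv FROM log GROUP BY stamp ORDER BY stamp) GROUP BY stamp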
Time # is out of specified time range
Error description
The timestamp is out of the specified time range.
Cause
The SQL statement uses a timestamp that is out of the specified time range, possibly due to data entry errors or data type mismatches.
Solution
Check whether the timestamp is correct. If the issue is caused by a data type mismatch, try using relevant data type conversion functions to convert the timestamp to the correct data type.
Row comparison not supported for fields with null elements
Error description
ROW comparison is not supported for fields with NULL values.
Cause
The SQL statement uses a ROW type comparison operation on values that contain NULL elements, for example a comparison with the = or != operator.
Solution
Before performing a ROW comparison, you must handle ROW values that contain NULL elements. Use operators such as IS NULL or IS NOT NULL to filter out NULL values, or use the COALESCE function to replace them. You can also handle rows that contain NULL elements when logs are written and processed to avoid this error.
The specified key does not exist.
Error description
This error usually occurs when you run an external table join query on OSS and the OSS bucket cannot be accessed: the specified key does not exist.
Cause
The specified object does not exist in the OSS bucket you are accessing. It may have been deleted or never existed. This may be because you specified the wrong OSS bucket endpoint or the wrong object key.
Solutions
Check the OSS bucket and the name of the object key to be accessed to ensure they are correct.
Go to OSS to confirm whether the specified object key exists in the specified OSS bucket.
Reading data with pagination only allow reading max
Error description
The maximum number of rows for pagination cannot exceed 1,000,000.
Cause
Simple Log Service SQL limits the maximum output rows to 1,000,000. The pagination read request you are making exceeds this limit.
Solutions
Use the LIMIT clause to limit the maximum number of rows for pagination to no more than 1,000,000.
Narrow the query range to limit the maximum number of rows for pagination to no more than 1,000,000.
Use the Scheduled SQL service to perform SQL aggregation analysis in windows periodically, and then perform secondary aggregation on the aggregation results.
Could not choose a best candidate operator. Explicit type casts must be added.
Error description
Could not choose the best candidate operator. Explicit type casts must be added.
Cause
This usually occurs because you are trying to perform arithmetic or comparison operations between variables of different types, and the system cannot automatically determine which type of operator should be used.
Solution
Add explicit type casts to tell the system which type of operator to use.
For example, if you are trying to add a string type variable and an integer type variable, you can use the CAST function to convert the string type variable to an integer type, and then perform the addition operation. Example code:
SELECT CAST('10' AS INTEGER) + 5;
Here, we use the CAST function to convert the string type variable '10' to an integer type, and then add it to the integer type variable 5. This avoids the above error.
Function * not registered
Error description
The function does not exist.
Causes
The function you specified does not exist in the Simple Log Service SQL system.
You used a specific function of a database vendor that is not a standard SQL function and is not provided in the Simple Log Service SQL system.
You misspelled the function name.
Solution
Check the function name and confirm that it is a valid function provided by Simple Log Service SQL.
SQL array indices start at 1
Index must be greater than zero
All COALESCE operands must be the same type: *
Error description
All parameters in the COALESCE function must be of the same type.
Cause
In the COALESCE function, all operands must have the same data type; otherwise, a data type error occurs. For example, one operand may be of the boolean type while the other operands are numbers or strings.
Solution
Check each operand in the COALESCE function and ensure they have the same data type. If they are different, perform data type conversion to make them have the same data type. You can use the CAST function to perform data type conversion.
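A minimal sketch, assuming a hypothetical boolean field is_success: cast the boolean operand to varchar so that all operands share one type.
* | SELECT COALESCE(CAST(is_success AS varchar), 'unknown')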
Multiple columns returned by subquery are not yet supported
Error description
A scalar subquery cannot return multiple columns.
Cause
Multiple columns are selected in the subquery.
Solution
Ensure that the subquery returns only one column or one value. You can modify the subquery or main query. You can also try using the JOIN statement instead of the subquery to retrieve the required data.
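A minimal sketch, assuming a hypothetical numeric latency field: the scalar subquery returns exactly one column and one value.
* | SELECT * FROM log WHERE latency > (SELECT avg(latency) FROM log)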
Group by clause cannot contain aggregations or window functions: *
Error description
The GROUP BY clause cannot contain aggregate or window functions.
Cause
Aggregate or window functions are included in the GROUP BY clause.
Solution
Include only plain column names in the GROUP BY clause; aggregate and window functions belong in the SELECT list, not in the GROUP BY clause. You can reference grouping columns by name or by their position in the SELECT list. For example, use the following query:
SELECT column1, column2, COUNT(column3) AS count_column3 FROM table GROUP BY column1, column2
In this query, column1 and column2 are the grouping columns, and the aggregate COUNT(column3) appears only in the SELECT list. You can also write GROUP BY 1, 2 to reference the grouping columns by position, but positional indexes can make the query harder to read, so we recommend column names.
Where by clause cannot contain aggregations or window functions: *
Error description
The WHERE clause cannot contain aggregate functions or window functions.
Cause
Aggregate functions or window functions are included in the WHERE clause.
Solution
Include only column names in the WHERE clause, not aggregate or window functions; these belong in the SELECT statement. To filter on the result of an aggregate function, use the HAVING clause after GROUP BY instead of the WHERE clause.
For example, use the following query:
SELECT column1, column2, COUNT(column3) AS count_column3 FROM table GROUP BY column1, column2 HAVING COUNT(column3) > 10
The HAVING clause filters the grouped results by the aggregate count of column3, which cannot be done in the WHERE clause.
Left side of LIKE expression must evaluate to a varchar (actual: bigint)
Error description
The left side of the LIKE expression must be of varchar type (actual: bigint).
Cause
This error usually occurs when you try to use the LIKE operator to compare bigint with varchar. The LIKE operator requires both sides of the expression to have the same data type.
Solution
You may need to use the CAST function to convert bigint to varchar.
SELECT * FROM table WHERE CAST(bigint_column AS varchar) LIKE 'pattern'
This will convert bigint_column to varchar, allowing the LIKE operator to match it with the specified pattern.
Left side of logical expression must evaluate to a boolean (actual: varchar)
Error description
The left side of the logical expression must be of the boolean type (actual: varchar).
Cause
This error usually occurs when you use a logical expression in which the right side of the relational operator = or != is a boolean value (true or false), but the left side is a non-boolean value, such as varchar.
Solution
Check the value type on the left side of the logical expression and ensure it is boolean.
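A minimal sketch with hypothetical status and request_method fields: each side of AND is a comparison that evaluates to a boolean.
* | SELECT * FROM log WHERE status = '200' AND request_method = 'GET'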
Right side of logical expression must evaluate to a boolean (actual: bigint)
Error description
The right side of the logical expression must be of the boolean type (actual: bigint).
Cause
This error usually occurs when you try to use a logical expression, and the variable type on the right side of the logical expression is not boolean.
Solution
You need to check the value type on the right side of the logical expression and ensure it is boolean.
Invalid JSON path: ...
Error description
Invalid JSON access path.
Cause
When using JSON functions (such as json_extract, json_extract_scalar, json_size, etc.) in SQL to access the specified JSON path, an invalid path is specified.
Solution
Specify the json_path correctly, in the format $.a.b. Here, $ represents the root node of the current JSON object, and the period (.) refers to the child node to be extracted (nodes can be cascaded). However, when a field of the JSON object contains special characters such as a period (.), a space, or a hyphen (-), as in http.path, http path, or http-path, you must use square brackets ([]) instead of the period notation and enclose the field name in double quotation marks (""). For example:
* | SELECT json_extract_scalar(request, '$["X-Power-Open-App-Id"]')
For more information, see JSON functions and FAQs about the query and analysis of JSON logs.
Max distinct num is:10, please use approx_distinct
Key not present in map
Error description
The specified key does not exist in the map.
Cause
You used a map type in SQL and specified a key that does not exist.
Solutions
Check the map type data and confirm that the key you specified exists in the map.
If the key does not exist in the map type data, use the try function to wrap the map access to ignore the error, for example:
SELECT try(map['name']) -- map is a map type column. If the key named 'name' does not exist in the column, NULL is returned.
Line 1:44: column 'XXX' cannot be resolved; please add the column in the index attribute
Cause
No index is configured for the XXX field.
Solution
Configure an index for the target field and enable the log analysis feature. For more information, see Create indexes.
ErrorType:QueryParseError.ErrorMessage:syntax error error position is from column:10 to column:11,error near < : >
Cause
The syntax of the query statement is invalid. The position of the invalid syntax is near the colon (:).
Solution
Modify the query statement and execute it again.
Column 'XXX' not in GROUP BY clause; please add the column in the index attribute
Cause
In an SQL statement that uses the GROUP BY clause, the SELECT statement can query only columns that are included in the GROUP BY clause or aggregations over arbitrary columns. You cannot query a column that is not included in the GROUP BY clause. For example, * | SELECT status, request_time, COUNT(*) AS PV GROUP BY status is an invalid analysis statement because request_time is not included in the GROUP BY clause.
Solution
Modify the query statement and execute the new query statement. For example, the correct statement for the preceding example is * | SELECT status, arbitrary(request_time), count(*) AS PV GROUP BY status. For more information, see GROUP BY clause.
SQL query must follow search query, please read syntax doc
Cause
Only an analysis statement is specified. In Simple Log Service, an analysis statement must be used together with a query statement in the format query statement | analysis statement.
Solution
Add a query statement before the analysis statement. For example, * | SELECT status, count(*) AS PV GROUP BY status. For more information, see Syntax.
Line 1:10: identifiers must not start with a digit; surround the identifier with double quotes
Cause
The syntax of the analysis statement is invalid because a column name or variable name that is specified in the analysis statement starts with a digit. In compliance with SQL standards, a column name can contain letters, digits, and underscores (_) and must start with a letter.
Solution
Change the alias. For more information, see Column aliases.
Line 1:9: extraneous input ‘’ expecting
Cause
Extra Chinese quotation marks are specified in the query statement.
Solution
Modify the query statement and execute it again.
key (XXX) is not config as key value config, if symbol : is in your log, please wrap : with quotation mark "
Cause
No field indexes are configured for the XXX field, or the field that you specify contains special characters such as spaces and is not enclosed in double quotation marks ("").
Solutions
Check whether a field index is configured for the field and whether the log analysis feature is enabled for the field.
If yes, proceed to the next step.
If no, configure field indexes and enable the log analysis feature for the field. For more information, see Create indexes. If the issue is resolved, skip the next step.
Enclose the field in double quotation marks ("").
Query exceeded max memory size of 3GB
Cause
The memory usage of the query statement exceeds 3 GB. This error typically occurs when the GROUP BY clause used to remove duplicates produces a large number of distinct values in the query and analysis results.
Solution
Optimize the GROUP BY clause. Reduce the number of fields that are specified in the GROUP BY clause.
ErrorType:ColumnNotExists.ErrorPosition,line:0,column:1.ErrorMessage:line 1:123: Column 'XXX' cannot be resolved; it seems XXX is wrapper by "; if XXX is a string, not a key field, please use 'XXX'
Cause
XXX is not an indexed field and cannot be enclosed in double quotation marks (""). If you want to use a string in an analysis statement, you must enclose the string in single quotation marks (''). Strings that are not enclosed, or that are enclosed in double quotation marks (""), are interpreted as field names or column names.
Solution
If XXX is a field that you want to analyze, make sure that you configure an index and enable the log analysis feature for the field. For more information, see Create indexes.
If XXX is a string, enclose the string in single quotation marks ('').
User can only run 15 query concurrently
Cause
More than 15 analysis operations are concurrently executed. Each project supports up to 15 concurrent analysis operations.
Solution
Reduce the number of concurrent analysis operations based on your business requirements.
Unclosed string quote
Cause
The double quotation marks (") in the query and analysis statement are unmatched.
Solution
Modify the query statement and execute it again.
Error after :.error detail:error after :.error detail:line 1:147: mismatched input 'in' expecting {<EOF>, 'GROUP', 'ORDER', 'HAVING', 'LIMIT', 'OR', 'AND', 'UNION', 'EXCEPT', 'INTERSECT'}
Cause
The invalid keyword in is specified.
Solution
Modify the query statement and execute it again.
Duplicate keys (XXX) are not allowed
Cause
Duplicate indexes are configured for fields.
Solution
Check the index configurations. For more information, see Create indexes.
Only support * or ? in the middle or end of the query
Cause
Wildcard characters are not used at the expected positions when you perform a fuzzy search.
Solution
Modify the wildcard characters in the query statement. The following rules apply:
You can add an asterisk (*) or a question mark (?) as a wildcard character to the middle or end of a keyword to perform a fuzzy search.
You cannot add an asterisk (*) or a question mark (?) to the start of a word.
The long and double data types do not support asterisks (*) or question marks (?) in fuzzy searches.
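For example, with an arbitrary keyword abc, the following searches are valid because the wildcard appears at the end or in the middle of the keyword:
abc*
ab?c
A search such as *abc is invalid because the wildcard appears at the start of the word.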
Logstore (xxx) is not found
Cause
The XXX logstore does not exist or has no indexes configured.
Solution
Check whether the logstore exists. If it does, you must configure indexes for at least one field and enable the log analysis feature for it.
Condition number 43 is more than 30
Cause
The search statement specifies 43 conditions. A maximum of 30 conditions can be specified in a search statement.
Solution
Modify the search statement and make sure that the number of conditions does not exceed 30.
ErrorType:SyntaxError.ErrorPosition,line:1,column:19.ErrorMessage:line 1:19: Expression "data" is not of type ROW
Cause
The data type of a field that is specified in the query statement is invalid.
Solution
Check whether the parameters of the ROW function are correct and whether all the fields in the parameters exist and meet the requirements. If the parameters are correct but the result is still not of type ROW, you can try using the CAST function to convert it to type ROW.
ErrorType:SyntaxError.ErrorPosition,line:1,column:9.ErrorMessage:line 1:9: identifiers must not contain ':'
Cause
The field that you want to analyze contains a colon (:).
Solution
Enclose the field in double quotation marks (""). For example, if you want to analyze the __tag__:__receive_time__ field, use the statement * | select "__tag__:__receive_time__".
Important: Before you can analyze a field, you must create indexes for the field. For more information, see Create indexes.
No nodes available to run query
Cause
Internal system error.
Solution
Refresh the page and execute the query and analysis statement again.