InfluxQL Reference

Last Updated: Aug 11, 2020

Overview

This topic describes the basic concepts of Influx Query Language (InfluxQL). For more information about InfluxQL, see the following topics:

  • Use InfluxQL to explore data
  • Use InfluxQL to explore schema
  • Database management

InfluxQL is an SQL-like query language that you can use to interact with TSDB for InfluxDB®. InfluxQL provides features that are specific to time series data. You can use these features to store and analyze time series data.

Notation

InfluxQL uses Extended Backus-Naur Form (EBNF) to describe the syntax. The EBNF notation is also used in the Go programming language. For more information, see Go programming language specification. InfluxQL uses the same notation as the Go programming language because TSDB for InfluxDB® is written in Go.

  Production  = production_name "=" [ Expression ] "." .
  Expression  = Alternative { "|" Alternative } .
  Alternative = Term { Term } .
  Term        = production_name | token [ "…" token ] | Group | Option | Repetition .
  Group       = "(" Expression ")" .
  Option      = "[" Expression "]" .
  Repetition  = "{" Expression "}" .

The following notation operators are listed in order of increasing precedence:

  1. | alternation
  2. () grouping
  3. [] option (0 or 1 times)
  4. {} repetition (0 to n times)
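
As a worked example of this notation, consider the production for a duration literal, which is defined later in this reference:

  duration_lit  = int_lit duration_unit .
  duration_unit = "ns" | "u" | "µ" | "ms" | "s" | "m" | "h" | "d" | "w" .

The alternation (|) in duration_unit lists the valid time units, and the two adjacent terms in duration_lit indicate that an integer literal must be immediately followed by a time unit.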

Query representation

Characters

The text in InfluxQL is encoded in UTF-8.

  newline      = /* the Unicode code point U+000A */ .
  unicode_char = /* an arbitrary Unicode code point except newline */ .

Letters and digits

The supported letters in InfluxQL are ASCII letters. An underscore (_) is also considered a letter in InfluxQL. Note that an underscore (_) is represented by U+005F in Unicode.

InfluxQL supports only decimal digits.

  letter       = ascii_letter | "_" .
  ascii_letter = "A" … "Z" | "a" … "z" .
  digit        = "0" … "9" .

Identifiers

Identifiers can be database names, retention policy names, usernames, measurement names, tag keys, or field keys.

Rules

  • An identifier that is enclosed in double quotation marks (") can contain all Unicode characters except line feeds.
  • An identifier that is enclosed in double quotation marks (") can contain double quotation marks (") if they are escaped as \".
  • An identifier that is enclosed in double quotation marks (") can contain InfluxQL keywords.
  • An identifier that is not enclosed in quotation marks must start with an ASCII letter or an underscore (_).
  • An identifier that is not enclosed in quotation marks can contain only ASCII letters, digits, and underscores (_).
  identifier          = unquoted_identifier | quoted_identifier .
  unquoted_identifier = ( letter ) { letter | digit } .
  quoted_identifier   = `"` unicode_char { unicode_char } `"` .

Examples

  cpu
  _cpu_stats
  "1h"
  "anything really"
  "1_Crazy-1337.identifier>NAME"

Keywords

  ALL           ALTER         ANY           AS            ASC           BEGIN
  BY            CREATE        CONTINUOUS    DATABASE      DATABASES     DEFAULT
  DELETE        DESC          DESTINATIONS  DIAGNOSTICS   DISTINCT      DROP
  DURATION      END           EVERY         EXPLAIN       FIELD         FOR
  FROM          GRANT         GRANTS        GROUP         GROUPS        IN
  INF           INSERT        INTO          KEY           KEYS          KILL
  LIMIT         SHOW          MEASUREMENT   MEASUREMENTS  NAME          OFFSET
  ON            ORDER         PASSWORD      POLICY        POLICIES      PRIVILEGES
  QUERIES       QUERY         READ          REPLICATION   RESAMPLE      RETENTION
  REVOKE        SELECT        SERIES        SET           SHARD         SHARDS
  SLIMIT        SOFFSET       STATS         SUBSCRIPTION  SUBSCRIPTIONS TAG
  TO            USER          USERS         VALUES        WHERE         WITH
  WRITE

If you use InfluxQL keywords as identifiers, you must enclose the keywords in double quotation marks (") in your queries.

The time keyword is a special case. time can be a continuous query name, database name, measurement name, retention policy name, subscription name, or username. In these cases, you do not need to enclose time in double quotation marks (") in your queries. However, you cannot use time as a field key or a tag key. TSDB for InfluxDB® rejects data writes in which time is used as a field key or a tag key and returns an error. For more information, see the "FAQ" topic.
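
For example, the following statements show this rule. The measurement name game_sessions and the field key duration are hypothetical and are used only for illustration:

  -- duration is an InfluxQL keyword, so it must be enclosed in double quotation marks
  SELECT "duration" FROM "game_sessions"
  -- time can be used as a database name without double quotation marks
  CREATE DATABASE time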

Literals

Integers

InfluxQL supports only decimal integers. Hexadecimal and octal integers are not supported.

  int_lit = ( "1" … "9" ) { digit } .

Floating-point numbers

InfluxQL supports floating-point numbers. Exponents are not supported.

  float_lit = int_lit "." int_lit .

Strings

You must enclose strings in single quotation marks ('). A string can contain single quotation marks (') if they are escaped as \'.

  string_lit = `'` { unicode_char } `'` .
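
For example, the following query matches a string value that contains a single quotation mark. The measurement name logs and the field key message are hypothetical:

  SELECT * FROM "logs" WHERE "message" = 'can\'t connect'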

Durations

A duration specifies the length of a period. A duration literal consists of an integer and a unit of time. No space is used between the integer and the unit of time. You can use multiple time units to describe a duration. For example, a duration can be 2 minutes and 10 seconds.

Duration units

Unit     Description
ns       Nanosecond (one billionth of a second)
u or µ   Microsecond (one millionth of a second)
ms       Millisecond (one thousandth of a second)
s        Second
m        Minute
h        Hour
d        Day
w        Week

  duration_lit  = int_lit duration_unit .
  duration_unit = "ns" | "u" | "µ" | "ms" | "s" | "m" | "h" | "d" | "w" .
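
For example, a duration of 2 minutes and 10 seconds is written as 2m10s. The following query, which assumes a hypothetical cpu measurement, uses this duration to select the data of the most recent 2 minutes and 10 seconds:

  SELECT * FROM "cpu" WHERE time > now() - 2m10s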

Date and time

The date and time in InfluxQL is not described by using the EBNF notation. Instead, it follows the time formatting and parsing conventions of Go, in which a format is specified by writing a fixed reference date and time in the required format.

The reference date and time in InfluxQL is 15:04:05 on January 2, 2006.

  time_lit = "2006-01-02 15:04:05.999999" | "2006-01-02" .
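
For example, both forms of the time literal can be used in a WHERE clause. The cpu measurement is hypothetical:

  SELECT * FROM "cpu" WHERE time >= '2019-08-01 00:00:00' AND time < '2019-08-02'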

Boolean values

  bool_lit = TRUE | FALSE .

Regular expressions

  regex_lit = "/" { unicode_char } "/" .

Comparison operators: =~ (match) and !~ (not match)

Notes: InfluxQL allows you to use regular expressions to specify the following objects in your query:

  • Field keys and tag keys in the SELECT clause
  • Measurements in the FROM clause
  • Tag values in the WHERE clause
  • Tag keys in the GROUP BY clause

You cannot use regular expressions to match database names, non-string field values in WHERE clauses, or retention policy names.
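
For example, the following queries use regular expressions in the FROM and WHERE clauses. The measurement and tag names are hypothetical:

  -- query measurements whose names start with cpu for tag values that start with us-
  SELECT mean("value") FROM /cpu.*/ WHERE "region" =~ /us-.*/
  -- exclude hosts whose tag values match the regular expression
  SELECT mean("value") FROM "cpu" WHERE "host" !~ /server-0[1-3]/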

Queries

A query consists of one or more statements that are separated with semicolons (;).

  query = statement { ";" statement } .

  statement = alter_retention_policy_stmt |
              create_continuous_query_stmt |
              create_database_stmt |
              create_retention_policy_stmt |
              create_subscription_stmt |
              create_user_stmt |
              delete_stmt |
              drop_continuous_query_stmt |
              drop_database_stmt |
              drop_measurement_stmt |
              drop_retention_policy_stmt |
              drop_series_stmt |
              drop_shard_stmt |
              drop_subscription_stmt |
              drop_user_stmt |
              explain_stmt |
              explain_analyze_stmt |
              grant_stmt |
              kill_query_statement |
              revoke_stmt |
              select_stmt |
              show_continuous_queries_stmt |
              show_databases_stmt |
              show_diagnostics_stmt |
              show_field_key_cardinality_stmt |
              show_field_keys_stmt |
              show_grants_stmt |
              show_measurement_cardinality_stmt |
              show_measurement_exact_cardinality_stmt |
              show_measurements_stmt |
              show_queries_stmt |
              show_retention_policies_stmt |
              show_series_cardinality_stmt |
              show_series_exact_cardinality_stmt |
              show_series_stmt |
              show_shard_groups_stmt |
              show_shards_stmt |
              show_stats_stmt |
              show_subscriptions_stmt |
              show_tag_key_cardinality_stmt |
              show_tag_key_exact_cardinality_stmt |
              show_tag_keys_stmt |
              show_tag_values_stmt |
              show_tag_values_cardinality_stmt |
              show_users_stmt .

Statements

This section does not provide an exhaustive list of the supported statements.

DELETE

  delete_stmt = "DELETE" ( from_clause | where_clause | from_clause where_clause ) .

Examples

  DELETE FROM "cpu"
  DELETE FROM "cpu" WHERE time < '2000-01-01T00:00:00Z'
  DELETE WHERE time < '2000-01-01T00:00:00Z'

EXPLAIN

You can execute the EXPLAIN statement to parse and plan an InfluxQL query and print a summary of the estimated costs.
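
Based on the upstream InfluxQL specification, the production for this statement mirrors that of EXPLAIN ANALYZE:

  explain_stmt = "EXPLAIN" select_stmt .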

In many SQL engines, the EXPLAIN statement is executed to show details about the join order, join algorithms, and predicate and expression pushdown. InfluxQL does not support join operations. In most cases, the cost of an InfluxQL query is a function of the data that the query accesses, such as the number of accessed series, the number of iterators that access the Time-Structured Merge Tree (TSM) files, and the number of TSM blocks to be scanned.

The execution plan that is returned by an EXPLAIN query includes the following components:

  • Expression
  • Auxiliary fields
  • Number of shards
  • Number of series
  • Cached values
  • Number of files
  • Number of blocks
  • Block size

Example

  > explain select sum(pointReq) from "_internal"."monitor"."write" group by hostname;
  QUERY PLAN
  ------
  EXPRESSION: sum(pointReq::integer)
  NUMBER OF SHARDS: 2
  NUMBER OF SERIES: 2
  CACHED VALUES: 110
  NUMBER OF FILES: 1
  NUMBER OF BLOCKS: 1
  SIZE OF BLOCKS: 931

EXPLAIN ANALYZE

You can execute the EXPLAIN ANALYZE statement to run the specified InfluxQL query and calculate the actual runtime cost.

  explain_analyze_stmt = "EXPLAIN ANALYZE" select_stmt .

Example

  > explain analyze select sum(pointReq) from "_internal"."monitor"."write" group by hostname;
  EXPLAIN ANALYZE
  -----------
  .
  └── select
  ├── execution_time: 242.167µs
  ├── planning_time: 2.165637ms
  ├── total_time: 2.407804ms
  ├── field_iterators
  ├── labels
  └── statement: SELECT sum(pointReq::integer) FROM "_internal"."monitor"."write" GROUP BY hostname
  └── expression
  ├── labels
  └── expr: sum(pointReq::integer)
  ├── create_iterator
  ├── labels
  ├── measurement: write
  └── shard_id: 57
  ├── cursors_ref: 1
  ├── cursors_aux: 0
  ├── cursors_cond: 0
  ├── float_blocks_decoded: 0
  ├── float_blocks_size_bytes: 0
  ├── integer_blocks_decoded: 1
  ├── integer_blocks_size_bytes: 931
  ├── unsigned_blocks_decoded: 0
  ├── unsigned_blocks_size_bytes: 0
  ├── string_blocks_decoded: 0
  ├── string_blocks_size_bytes: 0
  ├── boolean_blocks_decoded: 0
  ├── boolean_blocks_size_bytes: 0
  └── planning_time: 1.401099ms
  └── create_iterator
  ├── labels
  ├── measurement: write
  └── shard_id: 58
  ├── cursors_ref: 1
  ├── cursors_aux: 0
  ├── cursors_cond: 0
  ├── float_blocks_decoded: 0
  ├── float_blocks_size_bytes: 0
  ├── integer_blocks_decoded: 0
  ├── integer_blocks_size_bytes: 0
  ├── unsigned_blocks_decoded: 0
  ├── unsigned_blocks_size_bytes: 0
  ├── string_blocks_decoded: 0
  ├── string_blocks_size_bytes: 0
  ├── boolean_blocks_decoded: 0
  ├── boolean_blocks_size_bytes: 0
  └── planning_time: 76.192µs

KILL QUERY

You can execute the KILL QUERY statement to terminate a running query.

  kill_query_statement = "KILL QUERY" query_id .

In the statement, the query_id parameter specifies the query ID. To obtain the query ID, execute the SHOW QUERIES statement. The query ID is indicated by qid in the query result of the SHOW QUERIES statement.

Example

  -- kill query with qid of 36 on the local host
  KILL QUERY 36

SELECT

  select_stmt = "SELECT" fields from_clause [ into_clause ] [ where_clause ]
                [ group_by_clause ] [ order_by_clause ] [ limit_clause ]
                [ offset_clause ] [ slimit_clause ] [ soffset_clause ] [ timezone_clause ] .

Examples

Execute the following statement to query data from the measurements whose names start with cpu and write the results into measurements of the same names. The cpu_1h retention policy is used for the write operation.

  SELECT mean("value") INTO "cpu_1h".:MEASUREMENT FROM /cpu.*/

Execute the following statement to query data in the specified measurement and group the results by day based on the specified time zone.

  SELECT mean("value") FROM "cpu" GROUP BY region, time(1d) fill(0) tz('America/Chicago')

SHOW CARDINALITY

SHOW CARDINALITY is a set of statements that you can execute to calculate the estimated or exact cardinality of measurements, series, tag keys, tag values, and field keys.

You can execute the SHOW CARDINALITY statement to obtain the estimated or exact cardinality. The estimated cardinality is calculated based on a sketch. By default, the estimated cardinality is used to ensure that the cost of the operation is low. The exact cardinality is calculated based on the TSM data. The cost of calculating the exact cardinality is high if you are handling high-cardinality data. We recommend that you query the estimated cardinality instead of the exact cardinality if possible.

You can filter data by time only when the Time Series Index (TSI) feature is enabled for your database.

For more information about the SHOW CARDINALITY statement, see the following sections:

  • SHOW FIELD KEY CARDINALITY
  • SHOW MEASUREMENT CARDINALITY
  • SHOW SERIES CARDINALITY
  • SHOW TAG KEY CARDINALITY
  • SHOW TAG VALUES CARDINALITY

SHOW CONTINUOUS QUERIES

  show_continuous_queries_stmt = "SHOW CONTINUOUS QUERIES" .

Example

  -- show all continuous queries
  SHOW CONTINUOUS QUERIES

SHOW DATABASES

  show_databases_stmt = "SHOW DATABASES" .

Example

  -- show all databases
  SHOW DATABASES

SHOW DIAGNOSTICS

You can execute the SHOW DIAGNOSTICS statement to obtain node information, such as the build information, runtime, host name, server configuration, memory usage, and Go runtime diagnostics.

  show_diagnostics_stmt = "SHOW DIAGNOSTICS"

SHOW FIELD KEY CARDINALITY

If you do not specify a database in the ON <database> clause, the SHOW FIELD KEY CARDINALITY statement returns the estimated or exact cardinality of the field key set in the current database.

Notes: In the statement, the following clauses are optional: ON <database>, FROM <sources>, WITH KEY = <key>, WHERE <condition>, GROUP BY <dimensions>, and LIMIT/OFFSET clauses. If you use these clauses, the query returns the exact cardinality. You can filter data by time only when the TSI feature is enabled for your database. You cannot use the time keyword in the WHERE clause.

  show_field_key_cardinality_stmt = "SHOW FIELD KEY CARDINALITY" [ on_clause ] [ from_clause ] [ where_clause ] [ group_by_clause ] [ limit_clause ] [ offset_clause ]
  show_field_key_exact_cardinality_stmt = "SHOW FIELD KEY EXACT CARDINALITY" [ on_clause ] [ from_clause ] [ where_clause ] [ group_by_clause ] [ limit_clause ] [ offset_clause ]

Examples

  -- show estimated cardinality of the field key set of current database
  SHOW FIELD KEY CARDINALITY
  -- show exact cardinality on field key set of specified database
  SHOW FIELD KEY EXACT CARDINALITY ON mydb

SHOW FIELD KEYS

  show_field_keys_stmt = "SHOW FIELD KEYS" [ on_clause ] [ from_clause ] .

Examples

  -- show field keys and field value data types from all measurements
  SHOW FIELD KEYS
  -- show field keys and field value data types from specified measurement
  SHOW FIELD KEYS FROM "cpu"

SHOW GRANTS

  show_grants_stmt = "SHOW GRANTS FOR" user_name .

Example

  -- show grants for jdoe
  SHOW GRANTS FOR "jdoe"

SHOW MEASUREMENT CARDINALITY

If you do not specify a database in the ON <database> clause, the SHOW MEASUREMENT CARDINALITY statement returns the estimated or exact cardinality of the measurement set in the current database.

Notes: In the statement, the following clauses are optional: ON <database>, FROM <sources>, WITH KEY = <key>, WHERE <condition>, GROUP BY <dimensions>, and LIMIT/OFFSET clauses. If you use these clauses, the query returns the exact cardinality. You can filter data by time only when the TSI feature is enabled for your database. You cannot use the time keyword in the WHERE clause.

  show_measurement_cardinality_stmt = "SHOW MEASUREMENT CARDINALITY" [ on_clause ] [ from_clause ] [ where_clause ] [ group_by_clause ] [ limit_clause ] [ offset_clause ]
  show_measurement_exact_cardinality_stmt = "SHOW MEASUREMENT EXACT CARDINALITY" [ on_clause ] [ from_clause ] [ where_clause ] [ group_by_clause ] [ limit_clause ] [ offset_clause ]

Examples

  -- show estimated cardinality of measurement set on current database
  SHOW MEASUREMENT CARDINALITY
  -- show exact cardinality of measurement set on specified database
  SHOW MEASUREMENT EXACT CARDINALITY ON mydb

SHOW MEASUREMENTS

  show_measurements_stmt = "SHOW MEASUREMENTS" [ on_clause ] [ with_measurement_clause ] [ where_clause ] [ limit_clause ] [ offset_clause ] .

Examples

  -- show all measurements
  SHOW MEASUREMENTS
  -- show measurements where region tag = 'uswest' AND host tag = 'serverA'
  SHOW MEASUREMENTS WHERE "region" = 'uswest' AND "host" = 'serverA'
  -- show measurements that start with 'h2o'
  SHOW MEASUREMENTS WITH MEASUREMENT =~ /h2o.*/

SHOW QUERIES

  show_queries_stmt = "SHOW QUERIES" .

Example

  -- show all currently-running queries
  SHOW QUERIES

SHOW RETENTION POLICIES

  show_retention_policies_stmt = "SHOW RETENTION POLICIES" [ on_clause ] .

Example

  -- show all retention policies on a database
  SHOW RETENTION POLICIES ON "mydb"

SHOW SERIES

  show_series_stmt = "SHOW SERIES" [ on_clause ] [ from_clause ] [ where_clause ] [ limit_clause ] [ offset_clause ] .

Example

  SHOW SERIES FROM "telegraf"."autogen"."cpu" WHERE cpu = 'cpu8'

SHOW SERIES CARDINALITY

If you do not specify a database in the ON <database> clause, the SHOW SERIES CARDINALITY statement returns the estimated or exact cardinality of the series set in the current database.

The cardinality of the series set is a major factor that affects the random-access memory (RAM) usage.

Notes: In the statement, the following clauses are optional: ON <database>, FROM <sources>, WITH KEY = <key>, WHERE <condition>, GROUP BY <dimensions>, and LIMIT/OFFSET clauses. If you use these clauses, the query returns the exact cardinality. You can filter data by time only when the TSI feature is enabled for your database. You cannot use the time keyword in the WHERE clause.

  show_series_cardinality_stmt = "SHOW SERIES CARDINALITY" [ on_clause ] [ from_clause ] [ where_clause ] [ group_by_clause ] [ limit_clause ] [ offset_clause ]
  show_series_exact_cardinality_stmt = "SHOW SERIES EXACT CARDINALITY" [ on_clause ] [ from_clause ] [ where_clause ] [ group_by_clause ] [ limit_clause ] [ offset_clause ]

Examples

  -- show estimated cardinality of the series on current database
  SHOW SERIES CARDINALITY
  -- show estimated cardinality of the series on specified database
  SHOW SERIES CARDINALITY ON mydb
  -- show exact series cardinality
  SHOW SERIES EXACT CARDINALITY
  -- show exact series cardinality on specified database
  SHOW SERIES EXACT CARDINALITY ON mydb

SHOW SHARD GROUPS

  show_shard_groups_stmt = "SHOW SHARD GROUPS" .

Example

  SHOW SHARD GROUPS

SHOW SHARDS

  show_shards_stmt = "SHOW SHARDS" .

Example

  SHOW SHARDS

SHOW STATS

You can execute the SHOW STATS statement to obtain the statistics of the specified TSDB for InfluxDB® node and available components. Components become available only after they are enabled.

  show_stats_stmt = "SHOW STATS [ FOR '<component>' | 'indexes' ]"

SHOW STATS

  • The SHOW STATS statement does not return the memory usage of indexes. To obtain the memory usage of indexes, execute the SHOW STATS FOR 'indexes' statement.
  • The statistics returned by the SHOW STATS statement are stored in memory and are reset to 0 when the node restarts. The SHOW STATS statement is automatically executed every 10 seconds to populate the _internal database.

SHOW STATS FOR <component>

  • The statement returns the statistics of the specified component.
  • For the runtime component, the statement returns a summary of the memory usage for the TSDB for InfluxDB® system based on the runtime package of Go. For more information, see Package runtime.

SHOW STATS FOR 'indexes'

  • This statement returns the estimated memory usage of all the indexes. The SHOW STATS statement does not return the memory usage of indexes because this may consume large amounts of resources.

Example

  > show stats
  name: runtime
  -------------
  Alloc   Frees   HeapAlloc HeapIdle HeapInUse HeapObjects HeapReleased HeapSys  Lookups Mallocs NumGC NumGoroutine PauseTotalNs Sys      TotalAlloc
  4136056 6684537 4136056   34586624 5816320   49412       0            40402944 110     6733949 83    44           36083006     46692600 439945704

  name: graphite
  tags: proto=tcp
  batches_tx bytes_rx connections_active connections_handled points_rx points_tx
  ---------- -------- ------------------ ------------------- --------- ---------
  159        3999750  0                  1                   158110    158110

SHOW SUBSCRIPTIONS

  show_subscriptions_stmt = "SHOW SUBSCRIPTIONS" .

Example

  SHOW SUBSCRIPTIONS

SHOW TAG KEY CARDINALITY

If you do not specify a database in the ON <database> clause, the SHOW TAG KEY CARDINALITY statement returns the estimated or exact cardinality of the tag key set in the current database.

Notes: In the statement, the following clauses are optional: ON <database>, FROM <sources>, WITH KEY = <key>, WHERE <condition>, GROUP BY <dimensions>, and LIMIT/OFFSET clauses. If you use these clauses, the query returns the exact cardinality. You can filter data by time only when the TSI feature is enabled for your database. You cannot use the time keyword in the WHERE clause.

  show_tag_key_cardinality_stmt = "SHOW TAG KEY CARDINALITY" [ on_clause ] [ from_clause ] [ where_clause ] [ group_by_clause ] [ limit_clause ] [ offset_clause ]
  show_tag_key_exact_cardinality_stmt = "SHOW TAG KEY EXACT CARDINALITY" [ on_clause ] [ from_clause ] [ where_clause ] [ group_by_clause ] [ limit_clause ] [ offset_clause ]

Examples

  -- show estimated tag key cardinality
  SHOW TAG KEY CARDINALITY
  -- show exact tag key cardinality
  SHOW TAG KEY EXACT CARDINALITY

SHOW TAG KEYS

  show_tag_keys_stmt = "SHOW TAG KEYS" [ on_clause ] [ from_clause ] [ where_clause ]
                       [ limit_clause ] [ offset_clause ] .

Examples

  -- show all tag keys
  SHOW TAG KEYS
  -- show all tag keys from the cpu measurement
  SHOW TAG KEYS FROM "cpu"
  -- show all tag keys from the cpu measurement where the region key = 'uswest'
  SHOW TAG KEYS FROM "cpu" WHERE "region" = 'uswest'
  -- show all tag keys where the host key = 'serverA'
  SHOW TAG KEYS WHERE "host" = 'serverA'

SHOW TAG VALUES

  show_tag_values_stmt = "SHOW TAG VALUES" [ on_clause ] [ from_clause ] with_tag_clause [ where_clause ]
                         [ limit_clause ] [ offset_clause ] .

Examples

  -- show all tag values across all measurements for the region tag
  SHOW TAG VALUES WITH KEY = "region"
  -- show tag values from the cpu measurement for the region tag
  SHOW TAG VALUES FROM "cpu" WITH KEY = "region"
  -- show tag values across all measurements for all tag keys that do not include the letter c
  SHOW TAG VALUES WITH KEY !~ /.*c.*/
  -- show tag values from the cpu measurement for region & host tag keys where service = 'redis'
  SHOW TAG VALUES FROM "cpu" WITH KEY IN ("region", "host") WHERE "service" = 'redis'

SHOW TAG VALUES CARDINALITY

If you do not specify a database in the ON <database> clause, the statement returns the estimated or exact cardinality of the tag value set for the specified tag key in the current database.

Notes: In the statement, the following clauses are optional: ON <database>, FROM <sources>, WITH KEY = <key>, WHERE <condition>, GROUP BY <dimensions>, and LIMIT/OFFSET clauses. If you use these clauses, the query returns the exact cardinality. You can filter data by time only when the TSI feature is enabled for your database.

  show_tag_values_cardinality_stmt = "SHOW TAG VALUES CARDINALITY" [ on_clause ] [ from_clause ] [ where_clause ] [ group_by_clause ] [ limit_clause ] [ offset_clause ] with_key_clause
  show_tag_values_exact_cardinality_stmt = "SHOW TAG VALUES EXACT CARDINALITY" [ on_clause ] [ from_clause ] [ where_clause ] [ group_by_clause ] [ limit_clause ] [ offset_clause ] with_key_clause

Examples

  -- show estimated tag key values cardinality for a specified tag key
  SHOW TAG VALUES CARDINALITY WITH KEY = "myTagKey"
  -- show exact tag key values cardinality for a specified tag key
  SHOW TAG VALUES EXACT CARDINALITY WITH KEY = "myTagKey"

SHOW USERS

  show_users_stmt = "SHOW USERS" .

Example

  -- show all users
  SHOW USERS

Clauses

  from_clause = "FROM" measurements .
  group_by_clause = "GROUP BY" dimensions fill(fill_option) .
  into_clause = "INTO" ( measurement | back_ref ) .
  limit_clause = "LIMIT" int_lit .
  offset_clause = "OFFSET" int_lit .
  slimit_clause = "SLIMIT" int_lit .
  soffset_clause = "SOFFSET" int_lit .
  timezone_clause = tz(string_lit) .
  on_clause = "ON" db_name .
  order_by_clause = "ORDER BY" sort_fields .
  to_clause = "TO" user_name .
  where_clause = "WHERE" expr .
  with_measurement_clause = "WITH MEASUREMENT" ( "=" measurement | "=~" regex_lit ) .
  with_tag_clause = "WITH KEY" ( "=" tag_key | "!=" tag_key | "=~" regex_lit | "IN (" tag_keys ")" ) .
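
For example, the following query uses a from_clause, a where_clause, a group_by_clause, a limit_clause, and a timezone_clause. The measurement and tag names are hypothetical:

  SELECT mean("value") FROM "cpu" WHERE "region" = 'us-west' AND time > now() - 1d GROUP BY time(1h), "host" LIMIT 10 tz('America/Chicago')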

Expressions

  binary_op  = "+" | "-" | "*" | "/" | "%" | "&" | "|" | "^" | "AND" |
               "OR" | "=" | "!=" | "<>" | "<" | "<=" | ">" | ">=" .
  expr       = unary_expr { binary_op unary_expr } .
  unary_expr = "(" expr ")" | var_ref | time_lit | string_lit | int_lit |
               float_lit | bool_lit | duration_lit | regex_lit .
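
For example, the following query contains expressions in both the SELECT and WHERE clauses: the field keys are var_ref terms, 50 is an int_lit, and the comparisons are combined with the AND binary operator. The measurement and field names are hypothetical:

  SELECT "usage_user" + "usage_system" AS "usage_total" FROM "cpu" WHERE "usage_user" > 50 AND "region" = 'us-west'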

Others

  alias = "AS" identifier .
  back_ref = ( policy_name ".:MEASUREMENT" ) |
             ( db_name "." [ policy_name ] ".:MEASUREMENT" ) .
  db_name = identifier .
  dimension = expr .
  dimensions = dimension { "," dimension } .
  field_key = identifier .
  field = expr [ alias ] .
  fields = field { "," field } .
  fill_option = "null" | "none" | "previous" | int_lit | float_lit | "linear" .
  host = string_lit .
  measurement = measurement_name |
                ( policy_name "." measurement_name ) |
                ( db_name "." [ policy_name ] "." measurement_name ) .
  measurements = measurement { "," measurement } .
  measurement_name = identifier | regex_lit .
  password = string_lit .
  policy_name = identifier .
  privilege = "ALL" [ "PRIVILEGES" ] | "READ" | "WRITE" .
  query_id = int_lit .
  query_name = identifier .
  retention_policy = identifier .
  retention_policy_option = retention_policy_duration |
                            retention_policy_replication |
                            retention_policy_shard_group_duration |
                            "DEFAULT" .
  retention_policy_duration = "DURATION" duration_lit .
  retention_policy_replication = "REPLICATION" int_lit .
  retention_policy_shard_group_duration = "SHARD DURATION" duration_lit .
  retention_policy_name = "NAME" identifier .
  series_id = int_lit .
  shard_id = int_lit .
  sort_field = field_key [ ASC | DESC ] .
  sort_fields = sort_field { "," sort_field } .
  subscription_name = identifier .
  tag_key = identifier .
  tag_keys = tag_key { "," tag_key } .
  user_name = identifier .
  var_ref = measurement .

Comments

You can use comments in InfluxQL statements to describe your queries.

  • A single-line comment starts with two hyphens (--) and ends with a line feed that is detected by TSDB for InfluxDB®. The comments of this type cannot span multiple lines.
  • A multi-line comment starts with /* and ends with */. The comments of this type can span multiple lines. Nested multi-line comments are not supported.
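
The following example shows both comment styles. The measurement name is hypothetical:

  -- estimate the series cardinality before running a heavier query
  SHOW SERIES CARDINALITY
  /* multi-line comments
     can span several lines */
  SELECT mean("value") FROM "cpu" WHERE time > now() - 1h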

Implementation in the query engine

After you familiarize yourself with InfluxQL, we recommend that you gain a general understanding of how InfluxQL statements are processed in the query engine. This understanding helps you interpret query results and create valid, efficient queries.

The following process describes the lifecycle of a query:

  1. The InfluxQL statement is converted into tokens and parsed into an abstract syntax tree (AST). The result is the code representation of the query.
  2. The AST is passed to the QueryExecutor that sends the query to an appropriate handler. For example, queries associated with metadata are run in the metadata service, and SELECT statements are executed in the shards.
  3. Then, the query engine determines the shards that match the time range specified in the SELECT statement. In the matched shards, iterators are created for each specified field in the statement.
  4. The iterators are passed to an emitter. The emitter drains the iterators and joins the result points. The emitter converts simple time and value points into complex result objects and returns the objects to the client.
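
For example, the following query (with a hypothetical measurement) passes through all four steps: the statement is tokenized and parsed into an AST, the QueryExecutor routes it to the shards that cover the last hour, iterators are created for the value field in those shards, and the emitter joins the resulting points into the response:

  SELECT mean("value") FROM "cpu" WHERE time > now() - 1h GROUP BY time(10m)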

Iterators

Iterators are the core of the query engine. Iterators provide a simple interface that you can use to loop through a set of points. In the following example, an iterator is used to loop through floating-point values:

  type FloatIterator interface {
      Next() *FloatPoint
  }

Create an iterator by using the IteratorCreator interface:

  type IteratorCreator interface {
      CreateIterator(opt *IteratorOptions) (Iterator, error)
  }

You can use IteratorOptions to set multiple parameters to specify the details about an iterator. For example, you can specify the fields, the time range, and the dimension of an iterator when you plan and create the iterator. You can use the IteratorCreator interface at multiple levels, such as Shards, Shard, and Engine levels. This allows you to optimize your queries when applicable. For example, data can be preprocessed before the COUNT() function returns results.

You can use iterators to read raw data from storage locations. You can also combine iterators to compute data for an input iterator. This allows you to use additional features of iterators. For example, you can use the DistinctIterator to calculate the unique values of each time window for an input iterator. You can use the FillIterator to generate data to replace the missing points in an input iterator.
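
For example, the following queries (with hypothetical measurement and field names) would be served by the iterators described above: the first by a DistinctIterator and the second by a FillIterator that replaces missing windows with zeros:

  SELECT DISTINCT("host_id") FROM "connections" WHERE time > now() - 1h GROUP BY time(10m)
  SELECT mean("value") FROM "cpu" WHERE time > now() - 1h GROUP BY time(10m) fill(0)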

You can also combine iterators to aggregate data. The following statement is an example in which iterators are combined to aggregate data:

  SELECT MEAN(value) FROM cpu GROUP BY time(10m)

In the statement, MEAN(value) is a MeanIterator that is wrapped based on the underlying shards. You can execute the following statement to add an iterator to calculate the derivatives of the average values:

  SELECT DERIVATIVE(MEAN(value), 20m) FROM cpu GROUP BY time(10m)

Auxiliary fields

InfluxQL provides selector functions such as FIRST(), LAST(), MIN(), and MAX(). Therefore, the query engine must provide a method to return the requested points and the relevant data at the same time.

The following query is used as an example:

  SELECT FIRST(value), host FROM cpu GROUP BY time(1h)

The query returns the first value that is generated every hour. The query also returns the host that is associated with each requested point at the same time. To improve efficiency, only one value type is specified for the points. Therefore, host serves as the auxiliary field for the points. These auxiliary fields are attached to a point until the point is passed to the emitter where the auxiliary fields are allocated to the corresponding iterators.

Built-in iterators

You can use the following helper iterators to create queries:

  • Merge iterator: merges one or more iterators into a new iterator of the same type. Merge iterators ensure that all the points in the current time window are returned before the query engine scans data for the next time window. However, merge iterators do not ensure that the points in each window are sorted. Therefore, merge iterators apply to the aggregate queries that require fast access and have low requirements for sorting.
  • Sorted merge iterator: merges one or more iterators into a new iterator of the same type, and ensures that all the points are sorted by time. Sorted merge iterators require longer response time than merge iterators because of the sorting. Data sorting is required for non-aggregate queries that return raw points. Therefore, sorted merge iterators apply to the non-aggregate queries of this type.
  • Limit iterator: limits the number of points for each name or tag group. Limit iterators are created based on the LIMIT and OFFSET syntax (see the example query after this list).
  • Fill iterator: generates values to replace the missing points in the input iterator. The values can be null values, the previous values, or the specified values.
  • Buffered iterator: allows a point to be "unread" back onto a buffer so that the point can be read again the next time. Buffered iterators are widely used to enable lookahead for windows.
  • Reduce iterator: calls a reduction function for each point in the window. After all the operations in a window are complete, all the points in the window are returned. Reduce iterators apply to simple aggregate functions such as COUNT().
  • Reduce slice iterator: collects all the points in a window and then immediately passes them to a reduction function. The results are returned from the corresponding reduce slice iterator. Reduce slice iterators apply to aggregate functions such as DERIVATIVE().
  • Transform iterator: calls a transform function for each point in the input iterator. Transform iterators are used to implement binary expressions.
  • Dedupe iterator: returns only unique points. Dedupe iterators need to consume large amounts of resources. Therefore, dedupe iterators are used only for small queries such as meta query statements.
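
As referenced in the list above, the following query (with a hypothetical measurement) would use a fill iterator to carry the previous value into empty windows and a limit iterator to cap the output at 10 points:

  SELECT mean("value") FROM "cpu" WHERE time > now() - 6h GROUP BY time(10m) fill(previous) LIMIT 10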

Call iterators

You can implement InfluxQL function calls at two levels: shard level and engine level.

Some function calls are wrapped at multiple layers to improve efficiency. For example, the COUNT() function can be implemented at the shard level, and multiple count iterators can be wrapped in another count iterator that sums the counts from all the shards. You can call the NewCallIterator() function to create these count iterators.

Some iterators are complex and must be implemented at the engine level. For example, the DERIVATIVE() function must retrieve all the points of a time window before the function performs calculation. The iterators of this type are always created by the query engine and are not implemented at the shard level.
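
For example, the following queries (with a hypothetical measurement) illustrate the two levels: the count can be computed per shard and then merged, whereas the derivative needs all the points of each time window and is therefore created by the query engine:

  -- COUNT() can be wrapped at the shard level
  SELECT COUNT("value") FROM "cpu" WHERE time > now() - 1d
  -- DERIVATIVE() is created at the engine level
  SELECT DERIVATIVE(MEAN("value"), 10m) FROM "cpu" GROUP BY time(10m)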


InfluxDB® is a trademark registered by InfluxData, which is not affiliated with, and does not endorse, TSDB for InfluxDB®.