MaxCompute allows you to query data by using SELECT statements. This topic describes the syntax of SELECT statements in MaxCompute. It also describes how to use SELECT statements to perform operations, such as nested queries, sorting, and queries by group.

Before you execute SELECT statements, make sure that you have been granted the SELECT permission on the destination table. For more information, see Authorize users.

Introduction

SELECT statements are used to query data that meets the specified conditions from a table. The following list describes the query operations that can be performed in different scenarios.
  • Subqueries: Perform further queries based on the result of another query.
  • INTERSECT, UNION, EXCEPT, and MINUS: Obtain the intersection, union, or difference of two datasets.
  • JOIN: Join tables and obtain the data that meets the join condition and query condition.
  • SEMI JOIN: Filter data in the left table by using the right table and return data only from the left table.
  • MAPJOIN hints: Explicitly specify MAPJOIN hints in SELECT statements when you join one large table with one or more small tables. This improves query performance.
  • SKEWJOIN hints: If the tables that you want to join contain hot keys, a long-tail issue may occur. You can separate the hot data from the non-hot data, process the two parts separately, and then merge the results to improve join efficiency.
  • LATERAL VIEW: Use LATERAL VIEW with a user-defined table-valued function (UDTF) to split one row of data into multiple rows.
  • GROUPING SETS: Aggregate and analyze data from multiple dimensions.
  • SELECT TRANSFORM: Start a specified child process, feed it input data in the required format over standard input, and parse the standard output of the child process to obtain the output data.

Limits

  • After a SELECT statement is executed, a maximum of 10,000 rows of results can be displayed. This limit does not apply to SELECT clauses that are nested in other queries. A nested SELECT clause returns all of its results to the upper-layer query (see the sketch after this list).
  • If you execute a SELECT statement to query data from a partitioned table, you cannot perform a full table scan on the table.

    If your project was created after 20:00:00 on January 10, 2018, you cannot perform a full table scan on a partitioned table in the project. This limit applies when you execute a SELECT statement to query data from the table. To query data from a partitioned table, you must specify the partitions that you want to scan. This reduces unnecessary I/O and conserves computing resources. This also reduces your computing costs if you use the pay-as-you-go billing method.

    To perform a full table scan on a partitioned table, add the set odps.sql.allow.fullscan=true; command before the SQL statement that is used for the full table scan. Then, commit and run the added command with the SQL statement. For example, if you want to perform a full table scan on the sale_detail partitioned table, run the following statement:
    set odps.sql.allow.fullscan=true;
    select * from sale_detail;
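The display limit applies only to the top-level query. The following sketch assumes a hypothetical large table named my_large_table, which is used for illustration only: the inner SELECT clause passes all of its rows to the outer query, and only the single-row result of the outer query is subject to the display limit.
-- my_large_table is a hypothetical table that is used for illustration only.
select count(*) from (select * from my_large_table) t;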

Syntax

select [all | distinct] <select_expr>, <select_expr>, ...
from <table_reference>
[where <where_condition>]
[group by <col_list>]
[having <having_condition>]
[order by <order_condition>]
[distribute by <distribute_condition> [sort by <sort_condition>] ]
[limit <number>]

For more information about the sequence to execute clauses in a SELECT statement, see Sequence for executing clauses in a SELECT statement.
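For orientation, the following sketch combines several of the clauses above. It uses the sale_detail sample table that is created in the next section.
-- Query the total sales of each region in the 2013 partition and sort the groups by the total.
select region, sum(total_price) as total
from sale_detail
where sale_date = '2013'
group by region
having sum(total_price) > 100
order by total
limit 10;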

Sample data

This topic uses sample source data in its examples. The following statements show how to create the sale_detail partitioned table and insert data into it.
-- Create a partitioned table named sale_detail. 
create table if not exists sale_detail
(
shop_name     string,
customer_id   string,
total_price   double
)
partitioned by (sale_date string, region string);

-- Add a partition to the sale_detail table. 
alter table sale_detail add partition (sale_date='2013', region='china');

-- Insert data into the sale_detail table. 
insert into sale_detail partition (sale_date='2013', region='china') values ('s1','c1',100.1),('s2','c2',100.2),('s3','c3',100.3);

Column expression (select_expr)

select_expr is required. select_expr takes the form col1_name, col2_name, .... Each item specifies a common column or partition key column that you want to query, or a regular expression that matches the columns that you want to query. When you use select_expr, you must comply with the following rules:
  • Specify the names of the columns from which you want to read data.
    The following statement reads data of the shop_name column from the sale_detail table. Sample statement:
    select shop_name from sale_detail;
    The following result is returned:
    +------------+
    | shop_name  |
    +------------+
    | s1         |
    | s2         |
    | s3         |
    +------------+
  • Use an asterisk (*) to represent all columns. You can also use an asterisk (*) with where_condition to specify filter conditions.
    • The following statement queries data of all columns from the sale_detail table. Sample statement:
      -- Enable a full table scan. This operation takes effect only for the current session. 
      set odps.sql.allow.fullscan=true;
      select * from sale_detail;
      The following result is returned:
      +------------+-------------+-------------+------------+------------+
      | shop_name  | customer_id | total_price | sale_date  | region     |
      +------------+-------------+-------------+------------+------------+
      | s1         | c1          | 100.1       | 2013       | china      |
      | s2         | c2          | 100.2       | 2013       | china      |
      | s3         | c3          | 100.3       | 2013       | china      |
      +------------+-------------+-------------+------------+------------+
    • The following statement uses an asterisk (*) with where_condition to specify filter conditions. Sample statement:
      select * from sale_detail where shop_name='s1';
      The following result is returned:
      +------------+-------------+-------------+------------+------------+
      | shop_name  | customer_id | total_price | sale_date  | region     |
      +------------+-------------+-------------+------------+------------+
      | s1         | c1          | 100.1       | 2013       | china      |
      +------------+-------------+-------------+------------+------------+
  • Use a regular expression.
    • The following statement queries data of all columns whose names start with sh from the sale_detail table. Sample statement:
      select `sh.*` from sale_detail;
      The following result is returned:
      +------------+
      | shop_name  |
      +------------+
      | s1         |
      | s2         |
      | s3         |
      +------------+
    • The following statement queries data of all columns whose names are not shop_name from the sale_detail table. Sample statement:
      select `(shop_name)?+.+` from sale_detail;
      The following result is returned:
      +-------------+-------------+------------+------------+
      | customer_id | total_price | sale_date  | region     |
      +-------------+-------------+------------+------------+
      | c1          | 100.1       | 2013       | china      |
      | c2          | 100.2       | 2013       | china      |
      | c3          | 100.3       | 2013       | china      |
      +-------------+-------------+------------+------------+
    • The following statement queries data of all columns except the columns whose names are shop_name and customer_id from the sale_detail table. Sample statement:
      select `(shop_name|customer_id)?+.+` from sale_detail;
      The following result is returned:
      +-------------+------------+------------+
      | total_price | sale_date  | region     |
      +-------------+------------+------------+
      | 100.1       | 2013       | china      |
      | 100.2       | 2013       | china      |
      | 100.3       | 2013       | china      |
      +-------------+------------+------------+
    • The following statement queries data of all columns except the columns whose names start with t from the sale_detail table. Sample statement:
      select `(t.*)?+.+` from sale_detail;
      The following result is returned:
      +------------+-------------+------------+------------+
      | shop_name  | customer_id | sale_date  | region     |
      +------------+-------------+------------+------------+
      | s1         | c1          | 2013       | china      |
      | s2         | c2          | 2013       | china      |
      | s3         | c3          | 2013       | china      |
      +------------+-------------+------------+------------+
      Note

      If you want to exclude multiple columns and the name of one column is a prefix of another, place the longer column name before the shorter one in the expression. For example, if you want to exclude the partition key columns ds and dshh from your query of a table, execute select `(dshh|ds)?+.+` from t;. If you execute select `(ds|dshh)?+.+` from t;, an error is returned.

  • Use DISTINCT before the name of a column to filter out duplicate values from the column and return only distinct values. If you use ALL before the name of a column, all values of the column, including the duplicate values, are returned. If DISTINCT is not used, ALL is used.
    • The following statement queries data of the region column from the sale_detail table and returns only one distinct value. Sample statement:
      select distinct region from sale_detail;
      The following result is returned:
      +------------+
      | region     |
      +------------+
      | china      |
      +------------+
    • The following statement specifies multiple columns after the DISTINCT option. The DISTINCT option takes effect on all the specified columns instead of a single column. Sample statement:
      select distinct region, sale_date from sale_detail;
      The following result is returned:
      +------------+------------+
      | region     | sale_date  |
      +------------+------------+
      | china      | 2013       |
      +------------+------------+

Source table information (table_reference)

table_reference is required. table_reference specifies the table that you want to query. When you use table_reference, you must comply with the following rules:
  • Specify the name of the table that you want to query. Sample statement:
    select customer_id from sale_detail;
    The following result is returned:
    +-------------+
    | customer_id |
    +-------------+
    | c1          |
    | c2          |
    | c3          |
    +-------------+
  • Use a nested subquery. Sample statement:
    select * from (select region,sale_date from sale_detail) t where region = 'china';
    The following result is returned:
    +------------+------------+
    | region     | sale_date  |
    +------------+------------+
    | china      | 2013       |
    | china      | 2013       |
    | china      | 2013       |
    +------------+------------+

WHERE clause (where_condition)

where_condition is optional. where_condition specifies a filter condition. If where_condition is used on a partitioned table, partition pruning can be applied. When you use where_condition, you must comply with the following rules:
  • Use where_condition with relational operators to obtain the data that meets the specified conditions. Relational operators include:
    • >, <, =, >=, <=, and <>
    • LIKE and RLIKE
    • IN and NOT IN
    • BETWEEN…AND

    For more information, see Relational operators.

    The following statement specifies the partitions that you want to scan in where_condition. This avoids a full table scan. Sample statement:
    select * 
    from sale_detail
    where sale_date >= '2008' and sale_date <= '2014';
    -- The preceding statement is equivalent to the following statement. 
    select * 
    from sale_detail 
    where sale_date between '2008' and '2014';
    The following result is returned:
    +------------+-------------+-------------+------------+------------+
    | shop_name  | customer_id | total_price | sale_date  | region     |
    +------------+-------------+-------------+------------+------------+
    | s1         | c1          | 100.1       | 2013       | china      |
    | s2         | c2          | 100.2       | 2013       | china      |
    | s3         | c3          | 100.3       | 2013       | china      |
    +------------+-------------+-------------+------------+------------+
    Note You can execute the EXPLAIN statement to check whether partition pruning takes effect. A common user-defined function (UDF) or the partition condition settings of JOIN may cause partition pruning to fail. For more information, see Check whether partition pruning is effective.
  • Use UDF-based partition pruning. If you use UDFs, MaxCompute executes the UDFs as small jobs and backfills partitions with the results of these jobs. You can enable UDF-based partition pruning by using one of the following methods:
    • Add an annotation to a UDF class when you write a UDF.
      @com.aliyun.odps.udf.annotation.UdfProperty(isDeterministic=true)
      Note The UDF annotation com.aliyun.odps.udf.annotation.UdfProperty is defined in the odps-sdk-udf.jar file. To use this annotation, you must update the version of the referenced odps-sdk-udf to 0.30.X or later.
    • Add set odps.sql.udf.ppr.deterministic = true; before the SQL statement that you want to execute. All UDFs in the SQL statement are then treated as deterministic UDFs, and partitions are backfilled with the results of the small jobs. A maximum of 1,000 partitions can be backfilled. If more than 1,000 partitions need to be backfilled, an error is returned. This error can also occur when you add the annotation to a UDF class. To avoid this error, you can run the set odps.sql.udf.ppr.to.subquery = false; command. After you run this command, UDF-based partition pruning is disabled.
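      The following sketch shows the session-level method. It assumes a hypothetical deterministic UDF named my_format_date that is applied to the sale_date partition key column; this UDF is used for illustration only.
      -- Treat all UDFs in the following statement as deterministic so that partition pruning can be applied.
      set odps.sql.udf.ppr.deterministic=true;
      select * from sale_detail where my_format_date(sale_date) = '2013';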

GROUP BY (col_list)

GROUP BY is optional. In most cases, GROUP BY is used with aggregate functions to group columns based on the specified common columns, partition key columns, or regular expressions. When you use GROUP BY, you must comply with the following rules:
  • GROUP BY is executed before SELECT. Therefore, columns in GROUP BY can be specified by column names of the input table of the SELECT statement or by expressions that are formed from those columns. When you use GROUP BY, take note of the following items:
    • Columns in GROUP BY cannot be specified by column aliases in a SELECT statement.
    • If a column in GROUP BY is specified by an expression, the complete expression must be used.
    • The columns that do not use aggregate functions in a SELECT statement must be specified in GROUP BY.
    • The following statement groups table data by the column name region. In this case, data is grouped based on the values of the region column. Sample statement:
      select region from sale_detail group by region;
      The following result is returned:
      +------------+
      | region     |
      +------------+
      | china      |
      +------------+
    • The following statement groups table data based on the values of the region column and returns the total sales of each group. Sample statement:
      select sum(total_price) from sale_detail group by region;
      The following result is returned:
      +------------+
      | _c0        |
      +------------+
      | 300.6      |
      +------------+
    • The following statement groups table data based on the values of the region column and returns distinct values and total sales of each group. Sample statement:
      select region, sum (total_price) from sale_detail group by region;
      The following result is returned:
      +------------+------------+
      | region     | _c1        |
      +------------+------------+
      | china      | 300.6      |
      +------------+------------+
    • If you group table data by the alias of an output column of a SELECT statement, an error is returned. Sample statement of incorrect usage:
      select region as r from sale_detail group by r;
      Sample statement of correct usage:
      select region as r from sale_detail group by region;
      The following result is returned:
      +------------+
      | r          |
      +------------+
      | china      |
      +------------+
    • The following statement groups data by an expression in select_expr. The complete expression must be specified in GROUP BY. Sample statement:
      select 2 + total_price as r from sale_detail group by 2 + total_price;
      The following result is returned:
      +------------+
      | r          |
      +------------+
      | 102.1      |
      | 102.2      |
      | 102.3      |
      +------------+
    • If some columns in a SELECT statement do not use aggregate functions, these columns must be specified in GROUP BY. Otherwise, an error is returned. Sample statement of incorrect usage:
      select region, total_price from sale_detail group by region;
      Sample statement of correct usage:
      select region, total_price from sale_detail group by region, total_price;
      The following result is returned:
      +------------+-------------+
      | region     | total_price |
      +------------+-------------+
      | china      | 100.1       |
      | china      | 100.2       |
      | china      | 100.3       |
      +------------+-------------+
  • If you add the set odps.sql.groupby.position.alias=true; command before a SELECT statement, integer constants in the GROUP BY clause are treated as column numbers of the SELECT statement. Sample statement:
    -- Run this command with the following SELECT statement. 
    set odps.sql.groupby.position.alias=true;
    -- 1 indicates the region column, which is the first column read by the following SELECT statement. This statement groups table data based on the values of the region column and returns distinct values of the region column and total sales of each group. 
    select region, sum(total_price) from sale_detail group by 1;
    The following result is returned:
    +------------+------------+
    | region     | _c1        |
    +------------+------------+
    | china      | 300.6      |
    +------------+------------+

HAVING clause (having_condition)

having_condition is optional. In most cases, having_condition is used with aggregate functions to filter data. Sample statement:
-- Insert more data into the sale_detail table so that the example returns more representative results.
insert into sale_detail partition (sale_date='2014', region='shanghai') values ('null','c5',null),('s6','c6',100.4),('s7','c7',100.5);
-- Use having_condition with aggregate functions to filter data. 
select region,sum(total_price) from sale_detail 
group by region 
having sum(total_price)<305;
The following result is returned:
+------------+------------+
| region     | _c1        |
+------------+------------+
| china      | 300.6      |
| shanghai   | 200.9      |
+------------+------------+

ORDER BY (order_condition)

ORDER BY is optional. ORDER BY is used to sort all data records based on a specified common column or partition key column. ORDER BY can also be used to sort all data records based on a specified constant. When you use ORDER BY, you must comply with the following rules:
  • By default, data is sorted in ascending order. If you want to sort data in descending order, the DESC keyword is required.
  • By default, ORDER BY must be followed by LIMIT <number> to limit the number of data rows that are returned. If ORDER BY is not followed by LIMIT <number>, an error is returned. You can work around this limit. For more information, see LIMIT <number>.
    • The following statement queries data from the sale_detail table, sorts data records in ascending order based on the values of the total_price column, and then displays the first two records. Sample statement:
      select * from sale_detail order by total_price limit 2;
      The following result is returned:
      +------------+-------------+-------------+------------+------------+
      | shop_name  | customer_id | total_price | sale_date  | region     |
      +------------+-------------+-------------+------------+------------+
      | s1         | c1          | 100.1       | 2013       | china      |
      | s2         | c2          | 100.2       | 2013       | china      |
      +------------+-------------+-------------+------------+------------+
    • The following statement queries data from the sale_detail table, sorts data records in descending order based on the values of the total_price column, and then displays the first two records. Sample statement:
      select * from sale_detail order by total_price desc limit 2;
      The following result is returned:
      +------------+-------------+-------------+------------+------------+
      | shop_name  | customer_id | total_price | sale_date  | region     |
      +------------+-------------+-------------+------------+------------+
      | s3         | c3          | 100.3       | 2013       | china      |
      | s2         | c2          | 100.2       | 2013       | china      |
      +------------+-------------+-------------+------------+------------+
  • NULL is the smallest value when you use ORDER BY to sort data. This is also the case in MySQL. However, this is not the case in Oracle.
    The following statement queries data from the sale_detail table, sorts data records in ascending order based on the values of the total_price column, and then displays the first two records. Sample statement:
    select * from sale_detail order by total_price limit 2;
    The following result is returned:
    +------------+-------------+-------------+------------+------------+
    | shop_name  | customer_id | total_price | sale_date  | region     |
    +------------+-------------+-------------+------------+------------+
    | s1         | c1          | 100.1       | 2013       | china      |
    | s2         | c2          | 100.2       | 2013       | china      |
    +------------+-------------+-------------+------------+------------+
  • The column in ORDER BY must be specified by the alias of an output column of the SELECT statement. If you do not specify an alias for an output column, the name of the column is used as its alias.
    The following statement adds the alias of an output column after ORDER BY. Sample statement:
    select total_price as t from sale_detail order by total_price limit 3;
    -- The preceding statement is equivalent to the following statement: 
    select total_price as t from sale_detail order by t limit 3;
    The following result is returned:
    +------------+
    | t          |
    +------------+
    | 100.1      |
    | 100.2      |
    | 100.3      |
    +------------+
  • If you add the set odps.sql.orderby.position.alias=true; command before a SELECT statement, integer constants in the ORDER BY clause are treated as column numbers of the SELECT statement. Sample statement:
    -- Run this command with the following SELECT statement. 
    set odps.sql.orderby.position.alias=true;
    select * from sale_detail order by 3 limit 3;
    The following result is returned:
    +------------+-------------+-------------+------------+------------+
    | shop_name  | customer_id | total_price | sale_date  | region     |
    +------------+-------------+-------------+------------+------------+
    | s1         | c1          | 100.1       | 2013       | china      |
    | s2         | c2          | 100.2       | 2013       | china      |
    | s3         | c3          | 100.3       | 2013       | china      |
    +------------+-------------+-------------+------------+------------+
  • An OFFSET clause can be used with an ORDER BY...LIMIT clause to specify the number of rows to skip. The format is ORDER BY...LIMIT m OFFSET n, which can be abbreviated as ORDER BY...LIMIT n, m. LIMIT m specifies that m rows of data are returned. OFFSET n specifies that n rows are skipped before data is returned. If you do not want to skip rows, you can use OFFSET 0 in the statement that you want to execute. You can also execute the statement without specifying an OFFSET clause.
    The following statement sorts the data of the sale_detail table in ascending order based on the values of the total_price column and displays three rows of records starting from the third row. Sample statement:
    select customer_id,total_price from sale_detail order by total_price limit 3 offset 2;
    -- The preceding statement is equivalent to the following statement. 
    select customer_id,total_price from sale_detail order by total_price limit 2, 3;
    The following result is returned:
    +-------------+-------------+
    | customer_id | total_price |
    +-------------+-------------+
    | c3          | 100.3       |
    +-------------+-------------+
    The queried data contains only three rows of records. In this case, only the third row is returned.

DISTRIBUTE BY hash partition (distribute_condition)

DISTRIBUTE BY is optional. DISTRIBUTE BY is used to perform hash partitioning on data based on the values of specific columns.

DISTRIBUTE BY controls how the output of a mapper is distributed among reducers. To prevent the same data from being distributed to different reducers, you can use DISTRIBUTE BY. This ensures that the same group of data is distributed to the same reducer.

The column in DISTRIBUTE BY must be specified by the alias of an output column of the SELECT statement. If you do not specify an alias for an output column, the column name is used as the alias. Sample statement:
-- The following statement queries the values of the region column from the sale_detail table and performs hash partitioning on data based on the values of the region column. 
select region from sale_detail distribute by region;
-- The preceding statement is equivalent to the following statements: 
select region as r from sale_detail distribute by region;
select region as r from sale_detail distribute by r;

SORT BY (sort_condition)

SORT BY is optional. In most cases, SORT BY is used with DISTRIBUTE BY. When you use SORT BY, you must comply with the following rules:
  • By default, data is sorted in ascending order. If you want to sort data in descending order, the DESC keyword is required.
  • If SORT BY is preceded by DISTRIBUTE BY, SORT BY sorts the result of DISTRIBUTE BY based on the values of a specified column.
    • The following statements query the values of the region and total_price columns from the sale_detail table, perform hash partitioning on the query results based on the values of the region column, and then sort the partitioning results in ascending order based on the values of the total_price column. Sample statements:
      -- Insert more data into the sale_detail table so that the example returns more representative results.
      insert into sale_detail partition (sale_date='2014', region='shanghai') values ('null','c5',null),('s6','c6',100.4),('s7','c7',100.5);
      select region,total_price from sale_detail distribute by region sort by total_price;
      The following result is returned:
      +------------+-------------+
      | region     | total_price |
      +------------+-------------+
      | shanghai   | NULL        |
      | china      | 100.1       |
      | china      | 100.2       |
      | china      | 100.3       |
      | shanghai   | 100.4       |
      | shanghai   | 100.5       |
      +------------+-------------+
    • The following statement queries the values of the region and total_price columns from the sale_detail table, performs hash partitioning on the query results based on the values of the region column, and then sorts the partitioning results in descending order based on the values of the total_price column. Sample statement:
      select region,total_price from sale_detail distribute by region sort by total_price desc;
      The following result is returned:
      +------------+-------------+
      | region     | total_price |
      +------------+-------------+
      | shanghai   | 100.5       |
      | shanghai   | 100.4       |
      | china      | 100.3       |
      | china      | 100.2       |
      | china      | 100.1       |
      | shanghai   | NULL        |
      +------------+-------------+
  • If SORT BY is not preceded by DISTRIBUTE BY, SORT BY sorts the data that is distributed to each reducer.
    This ensures that the output data of each reducer is sorted in order and increases the storage compression ratio. If data is filtered during data reading, this method reduces the amount of data that is read from disks and improves the efficiency of subsequent global sorting. Sample statement:
    select region,total_price from sale_detail sort by total_price desc;
    The following result is returned:
    +------------+-------------+
    | region     | total_price |
    +------------+-------------+
    | china      | 100.3       |
    | china      | 100.2       |
    | china      | 100.1       |
    | shanghai   | 100.5       |
    | shanghai   | 100.4       |
    | shanghai   | NULL        |
    +------------+-------------+
Note
  • In MaxCompute, ORDER BY, DISTRIBUTE BY, and SORT BY clauses are executed after the SELECT clause. Therefore, the columns in these clauses must be specified by the aliases of the output columns of the SELECT statement. Column aliases can contain Chinese characters.
  • ORDER BY cannot be used together with DISTRIBUTE BY or SORT BY. Similarly, GROUP BY cannot be used together with DISTRIBUTE BY or SORT BY.

LIMIT <number>

LIMIT <number> is optional. In LIMIT <number>, number is a constant that restricts the number of rows that can be displayed.

Note MaxCompute is a distributed query system. LIMIT <number> limits only the number of rows that are displayed; it does not reduce the amount of data that is scanned or computed. Therefore, computing costs are not reduced.
The following sections describe the limits of LIMIT <number> and how to work around these limits.
  • ORDER BY must be used with LIMIT <number>.
    ORDER BY sorts all data on a single node. By default, ORDER BY must be used with LIMIT <number> to prevent a single node from processing large amounts of data. You can work around this limit by using one of the following methods (see the sketch at the end of this section):
    • To work around the limit for a project, run the setproject odps.sql.validate.orderby.limit=false; command.
    • To work around the limit for a session, commit and run the set odps.sql.validate.orderby.limit=false; command with the SQL statement that you want to execute.
      Note After you work around this limit, if a single node has large amounts of data to sort, more resources and time are consumed.
  • Limited rows are displayed.

    If you execute a SELECT statement without a LIMIT <number> clause, or if the number specified in the LIMIT <number> clause exceeds the maximum number (n) of rows that can be displayed, a maximum of n rows are displayed.

    The maximum number of rows that can be displayed varies based on projects. You can use one of the following methods to control the maximum number:
    • If project data protection is disabled, modify the odpscmd config.ini file.

      Set use_instance_tunnel to true in the odpscmd config.ini file. If the instance_tunnel_max_record parameter is not configured, the number of rows that can be displayed is not limited. Otherwise, the number of rows that can be displayed is limited by the instance_tunnel_max_record parameter. The maximum value of the instance_tunnel_max_record parameter is 10000. For more information about InstanceTunnel, see Usage notes.

    • If project data protection is enabled, the number of rows that can be displayed is limited by the READ_TABLE_MAX_ROW parameter. The maximum value of this parameter is 10000.
    Note You can run the SHOW SecurityConfiguration; command to view the value of ProjectProtection. If ProjectProtection is set to true, you can determine whether to disable project data protection based on your business requirements. You can run the set ProjectProtection=false; command to disable project data protection. By default, ProjectProtection is set to false. For more information about project data protection, see Project data protection.
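The following sketch shows the session-level workaround for the ORDER BY limit that is described in this section. It uses the sale_detail sample table.
-- Allow ORDER BY without LIMIT for the current session. If a single node has large amounts of data to sort, more resources and time are consumed.
set odps.sql.validate.orderby.limit=false;
select * from sale_detail order by total_price;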