
PolarDB: Aggregate functions

Last Updated: Jun 13, 2024

Aggregate functions compute a single result from a set of input values. Aggregate functions that support Partial Mode are eligible to participate in various optimizations, such as parallel aggregation. In the function lists below, the Yes or No line that follows each description indicates whether that function supports Partial Mode.

General-purpose aggregate functions

array_agg ( anynonarray ) → anyarray

Collects all the input values, including nulls, into an array.

No

array_agg ( anyarray ) → anyarray

Concatenates all the input arrays into an array of one higher dimension. (The inputs must all have the same dimensionality, and cannot be empty or null.)

No
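
As a brief sketch (the VALUES lists are invented for this example), the one-dimensional form keeps nulls, while the array form stacks equal-length arrays into a result of one higher dimension:

```sql
SELECT array_agg(x) FROM (VALUES (1), (NULL), (2)) AS v(x);
-- e.g. {1,NULL,2}; input order is unspecified without ORDER BY

SELECT array_agg(a) FROM (VALUES (ARRAY[1,2]), (ARRAY[3,4])) AS v(a);
-- e.g. {{1,2},{3,4}}
```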

avg ( smallint ) → numeric

avg ( integer ) → numeric

avg ( bigint ) → numeric

avg ( numeric ) → numeric

avg ( real ) → double precision

avg ( double precision ) → double precision

avg ( interval ) → interval

Computes the average (arithmetic mean) of all the non-null input values.

Yes

bit_and ( smallint ) → smallint

bit_and ( integer ) → integer

bit_and ( bigint ) → bigint

bit_and ( bit ) → bit

Computes the bitwise AND of all non-null input values.

Yes

bit_or ( smallint ) → smallint

bit_or ( integer ) → integer

bit_or ( bigint ) → bigint

bit_or ( bit ) → bit

Computes the bitwise OR of all non-null input values.

Yes
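
A quick sketch (the values are invented): 5, 7, and 13 are 0101, 0111, and 1101 in binary, so the bitwise AND keeps only the bits set in every value while the bitwise OR keeps any bit set anywhere:

```sql
SELECT bit_and(x) AS b_and,  -- 5  (0101)
       bit_or(x)  AS b_or    -- 15 (1111)
FROM (VALUES (5), (7), (13)) AS v(x);
```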

bool_and ( boolean ) → boolean

Returns true if all non-null input values are true, otherwise false.

Yes

bool_or ( boolean ) → boolean

Returns true if any non-null input value is true, otherwise false.

Yes

count ( * ) → bigint

Computes the number of input rows.

Yes

count ( "any" ) → bigint

Computes the number of input rows in which the input value is not null.

Yes
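
A short sketch of the difference between the two forms (values invented): count(*) counts rows, while count(expression) skips rows where the expression is null:

```sql
SELECT count(*) AS all_rows,      -- 3
       count(x) AS non_null_rows  -- 2 (the null row is excluded)
FROM (VALUES (1), (NULL), (3)) AS v(x);
```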

every ( boolean ) → boolean

This is the SQL standard's equivalent to bool_and.

Yes

json_agg ( anyelement ) → json

jsonb_agg ( anyelement ) → jsonb

Collects all the input values, including nulls, into a JSON array. Values are converted to JSON as per to_json or to_jsonb.

No

json_object_agg ( key "any", value "any" ) → json

jsonb_object_agg ( key "any", value "any" ) → jsonb

Collects all the key/value pairs into a JSON object. Key arguments are coerced to text and value arguments are converted as per to_json or to_jsonb. Values can be null, but not keys.

No
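
A minimal sketch (the keys and values are invented for this example):

```sql
SELECT json_object_agg(k, v)
FROM (VALUES ('a', 1), ('b', 2)) AS kv(k, v);
-- a JSON object pairing "a" with 1 and "b" with 2
```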

max ( see text ) → same as input type

Computes the maximum of the non-null input values. Available for any numeric, string, date/time, or enum type, in addition to inet, interval, money, oid, pg_lsn, tid, and arrays of any of these types.

Yes

min ( see text ) → same as input type

Computes the minimum of the non-null input values. Available for any numeric, string, date/time, or enum type, in addition to inet, interval, money, oid, pg_lsn, tid, and arrays of any of these types.

Yes

string_agg ( value text, delimiter text ) → text

string_agg ( value bytea, delimiter bytea ) → bytea

Concatenates the non-null input values into a string. Each value after the first is preceded by the corresponding delimiter (if it is not null).

No
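
A small sketch (the values are invented); adding ORDER BY inside the call makes the concatenation order deterministic:

```sql
SELECT string_agg(fruit, ', ' ORDER BY fruit)
FROM (VALUES ('banana'), ('apple'), ('cherry')) AS v(fruit);
-- apple, banana, cherry
```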

sum ( smallint ) → bigint

sum ( integer ) → bigint

sum ( bigint ) → numeric

sum ( numeric ) → numeric

sum ( real ) → real

sum ( double precision ) → double precision

sum ( interval ) → interval

sum ( money ) → money

Computes the sum of the non-null input values.

Yes

xmlagg ( xml ) → xml

Concatenates the non-null XML input values.

No

Note that except for count, these functions return a null value when no rows are selected. In particular, sum of no rows returns null, not zero, and array_agg returns null rather than an empty array when there are no input rows. The coalesce function can be used to substitute zero or an empty array for null when necessary.
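
For example (the VALUES list is invented): when no rows qualify, sum returns null, and coalesce substitutes zero:

```sql
SELECT sum(x)              AS raw_sum,   -- null
       coalesce(sum(x), 0) AS safe_sum   -- 0
FROM (VALUES (1), (2)) AS v(x)
WHERE x > 100;
```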

The aggregate functions array_agg, json_agg, jsonb_agg, json_object_agg, jsonb_object_agg, string_agg, and xmlagg, in addition to similar user-defined aggregate functions, produce meaningfully different result values depending on the order of the input values. This ordering is unspecified by default, but can be controlled by writing an ORDER BY clause within the aggregate call. Alternatively, supplying the input values from a sorted subquery usually works. For example:

SELECT xmlagg(x) FROM (SELECT x FROM test ORDER BY y DESC) AS tab;
Beware that this approach can fail if the outer query level contains additional processing, such as a join, because that processing might cause the subquery output to be reordered before the aggregate is computed.
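
The safer alternative is to write the ordering directly inside the aggregate call, which outer-level processing cannot disturb. Using the same hypothetical test table:

```sql
SELECT xmlagg(x ORDER BY y DESC) FROM test;
```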

Note


The boolean aggregates bool_and and bool_or correspond to the standard SQL aggregates every and any or some. PostgreSQL supports every, but not any or some, because there is an ambiguity built into the standard syntax:

```sql
SELECT b1 = ANY((SELECT b2 FROM t2 ...)) FROM t1 ...;
```

Here ANY can be considered either as introducing a subquery, or as being an aggregate function, if the subquery returns one row with a Boolean value. Thus the standard name cannot be given to these aggregates.

Usage notes

Users accustomed to working with other SQL database management systems might be disappointed by the performance of the count aggregate when it is applied to the entire table. A query like:

    SELECT count(*) FROM sometable;

will require effort proportional to the size of the table: the database needs to scan either the entire table or the entirety of an index that includes all rows in the table.

The Aggregate functions for statistics table shows aggregate functions typically used in statistical analysis. (These are separated out merely to avoid cluttering the listing of more-commonly-used aggregates.) Functions shown as accepting numeric_type are available for all the types smallint, integer, bigint, numeric, real, and double precision. Where the description mentions N, it means the number of input rows for which all the input expressions are non-null. In all cases, null is returned if the computation is meaningless, for example when N is zero.

Aggregate functions for statistics

corr ( Y double precision, X double precision ) → double precision

Computes the correlation coefficient.

Yes

covar_pop ( Y double precision, X double precision ) → double precision

Computes the population covariance.

Yes

covar_samp ( Y double precision, X double precision ) → double precision

Computes the sample covariance.

Yes

regr_avgx ( Y double precision, X double precision ) → double precision

Computes the average of the independent variable, sum( X )/ N.

Yes

regr_avgy ( Y double precision, X double precision ) → double precision

Computes the average of the dependent variable, sum( Y )/ N.

Yes

regr_count ( Y double precision, X double precision ) → bigint

Computes the number of rows in which both inputs are non-null.

Yes

regr_intercept ( Y double precision, X double precision ) → double precision

Computes the y-intercept of the least-squares-fit linear equation determined by the (X, Y) pairs.

Yes

regr_r2 ( Y double precision, X double precision ) → double precision

Computes the square of the correlation coefficient.

Yes

regr_slope ( Y double precision, X double precision ) → double precision

Computes the slope of the least-squares-fit linear equation determined by the (X, Y) pairs.

Yes

regr_sxx ( Y double precision, X double precision ) → double precision

Computes the "sum of squares" of the independent variable, sum( X ^2) - sum( X )^2/ N.

Yes

regr_sxy ( Y double precision, X double precision ) → double precision

Computes the "sum of products" of independent times dependent variables, sum( X * Y ) - sum( X ) * sum( Y )/ N.

Yes

regr_syy ( Y double precision, X double precision ) → double precision

Computes the "sum of squares" of the dependent variable, sum( Y ^2) - sum( Y )^2/ N.

Yes

stddev ( numeric_type ) → double precision for real or double precision, otherwise numeric

This is a historical alias for stddev_samp.

Yes

stddev_pop ( numeric_type ) → double precision for real or double precision, otherwise numeric

Computes the population standard deviation of the input values.

Yes

stddev_samp ( numeric_type ) → double precision for real or double precision, otherwise numeric

Computes the sample standard deviation of the input values.

Yes

variance ( numeric_type ) → double precision for real or double precision, otherwise numeric

This is a historical alias for var_samp.

Yes

var_pop ( numeric_type ) → double precision for real or double precision, otherwise numeric

Computes the population variance of the input values (square of the population standard deviation).

Yes

var_samp ( numeric_type ) → double precision for real or double precision, otherwise numeric

Computes the sample variance of the input values (square of the sample standard deviation).

Yes
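
As an illustrative sketch (the VALUES list is invented; the points lie exactly on the line y = 2x, so the least-squares fit is perfect):

```sql
SELECT regr_slope(y, x)     AS slope,      -- 2
       regr_intercept(y, x) AS intercept,  -- 0
       corr(y, x)           AS r           -- 1
FROM (VALUES (2.0, 1.0), (4.0, 2.0), (6.0, 3.0)) AS pts(y, x);
```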

The Ordered-set aggregate functions table shows some aggregate functions that use the ordered-set aggregate syntax. These functions are sometimes referred to as "inverse distribution" functions. Their aggregated input is introduced by ORDER BY, and they may also take a direct argument that is not aggregated, but is computed only once. All these functions ignore null values in their aggregated input. For those that take a fraction parameter, the fraction value must be between 0 and 1; an error is thrown otherwise. However, a null fraction value simply produces a null result.

Ordered-set aggregate functions

mode () WITHIN GROUP ( ORDER BY anyelement ) → anyelement

Computes the mode, the most frequent value of the aggregated argument (arbitrarily choosing the first one if there are multiple equally-frequent values). The aggregated argument must be of a sortable type.

No

percentile_cont ( fraction double precision ) WITHIN GROUP ( ORDER BY double precision ) → double precision

percentile_cont ( fraction double precision ) WITHIN GROUP ( ORDER BY interval ) → interval

Computes the continuous percentile, a value which corresponds to the specified fraction within the ordered set of aggregated argument values. This will interpolate between adjacent input items if needed.

No

percentile_cont ( fractions double precision[] ) WITHIN GROUP ( ORDER BY double precision ) → double precision[]

percentile_cont ( fractions double precision[] ) WITHIN GROUP ( ORDER BY interval ) → interval[]

Computes multiple continuous percentiles. The result is an array of the same dimensions as the fractions parameter, with each non-null element replaced by the (possibly interpolated) value which corresponds to that percentile.

No

percentile_disc ( fraction double precision ) WITHIN GROUP ( ORDER BY anyelement ) → anyelement

Computes the discrete percentile, the first value within the ordered set of aggregated argument values whose position in the ordering equals or exceeds the specified fraction. The aggregated argument must be of a sortable type.

No

percentile_disc ( fractions double precision[] ) WITHIN GROUP ( ORDER BY anyelement ) → anyarray

Computes multiple discrete percentiles. The result is an array of the same dimensions as the fractions parameter, with each non-null element replaced by the input value which corresponds to that percentile. The aggregated argument must be of a sortable type.

No
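
A small sketch of the difference between the continuous and discrete variants (the values are invented): with the four inputs below, the 0.5 fraction falls between 2 and 10, so percentile_cont interpolates while percentile_disc picks an existing input value:

```sql
SELECT percentile_cont(0.5) WITHIN GROUP (ORDER BY x) AS cont_median,  -- 6
       percentile_disc(0.5) WITHIN GROUP (ORDER BY x) AS disc_median   -- 2
FROM (VALUES (1.0), (2.0), (10.0), (11.0)) AS v(x);
```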

Each of the "hypothetical-set" aggregates listed in the Hypothetical-set aggregate functions is associated with a window function of the same name. In each case, the aggregate's result is the value that the associated window function would have returned for the "hypothetical" row constructed from args, if such a row had been added to the sorted group of rows represented by the sorted_args. For each of these functions, the list of direct arguments given in args must match the number and types of the aggregated arguments given in sorted_args. Unlike most built-in aggregates, these aggregates are not strict, that is they do not drop input rows that contain nulls. Null values sort according to the rule specified in the ORDER BY clause.

Hypothetical-set aggregate functions

rank ( args ) WITHIN GROUP ( ORDER BY sorted_args ) → bigint

Computes the rank of the hypothetical row, with gaps, that is, the row number of the first row in its peer group.

No

dense_rank ( args ) WITHIN GROUP ( ORDER BY sorted_args ) → bigint

Computes the rank of the hypothetical row, without gaps. This function effectively counts peer groups.

No

percent_rank ( args ) WITHIN GROUP ( ORDER BY sorted_args ) → double precision

Computes the relative rank of the hypothetical row, that is (rank - 1) / (total rows - 1). The value thus ranges from 0 to 1 inclusive.

No

cume_dist ( args ) WITHIN GROUP ( ORDER BY sorted_args ) → double precision

Computes the cumulative distribution, that is (number of rows preceding or peers with hypothetical row) / (total rows). The value thus ranges from 1/N to 1.

No
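
As a sketch (the scores are invented): the direct argument 85 is the hypothetical value, and the aggregate reports where a row with that value would rank among the existing ones:

```sql
SELECT rank(85) WITHIN GROUP (ORDER BY score DESC) AS would_rank
FROM (VALUES (90), (85), (80), (70)) AS s(score);
-- would_rank = 2: only the 90 sorts ahead of the hypothetical 85
```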

Grouping operations

GROUPING ( group_by_expression(s) ) → integer

Returns a bit mask indicating which GROUP BY expressions are not included in the current grouping set. Bits are assigned with the rightmost argument corresponding to the least-significant bit. Each bit is 0 if the corresponding expression is included in the grouping criteria of the grouping set generating the current result row, and 1 if it is not.

The arguments to the GROUPING function are not actually evaluated, but they must exactly match expressions given in the GROUP BY clause of the associated query level. For example:

    => SELECT * FROM items_sold;
     make  | model | sales
    -------+-------+-------
     Foo   | GT    |  10
     Foo   | Tour  |  20
     Bar   | City  |  15
     Bar   | Sport |  5
    (4 rows)

    => SELECT make, model, GROUPING(make,model), sum(sales) FROM items_sold GROUP BY ROLLUP(make,model);
     make  | model | grouping | sum
    -------+-------+----------+-----
     Foo   | GT    |        0 | 10
     Foo   | Tour  |        0 | 20
     Bar   | City  |        0 | 15
     Bar   | Sport |        0 | 5
     Foo   |       |        1 | 30
     Bar   |       |        1 | 20
           |       |        3 | 50
    (7 rows)

Here, the grouping value 0 in the first four rows shows that those have been grouped normally, over both the grouping columns. The value 1 indicates that model was not grouped by in the next-to-last two rows, and the value 3 indicates that neither make nor model was grouped by in the last row (which therefore is an aggregate over all the input rows).

MEDIAN

Description

MEDIAN is an inverse distribution function that assumes a continuous distribution model. It takes a numeric or datetime value and returns the middle value, or an interpolated value that would be the middle value once the values are sorted. NULL values are ignored in the calculation.

Syntax

MEDIAN(expr) [ OVER (query_partition_clause) ]

Parameters

Parameter

Description

expr

The expression can be of any numeric data type, or of any non-numeric data type that can be implicitly converted to a numeric data type.

Return type

If you specify only expr, the function returns the same data type as the numeric data type of the argument. If you specify the OVER clause, the database determines the argument with the highest numeric precedence, implicitly converts the remaining arguments to that data type, and returns that data type.

Example

create table median_test_t_1(i interval);
insert into median_test_t_1 values(interval '1 day 1 second');
insert into median_test_t_1 values(interval '2 day 1 second');
insert into median_test_t_1 values(interval '2 day 2 second');
select median(i) from median_test_t_1;
   median    
-------------
 02 00:00:01

GROUPING_ID

Description

GROUPING_ID returns a number corresponding to the GROUPING bit vector associated with a row.

  • GROUPING_ID is applicable only in a SELECT statement that contains a GROUP BY extension, such as ROLLUP or CUBE, and a GROUPING function.

  • In queries with many GROUP BY expressions, determining the GROUP BY level of a particular row requires many GROUPING functions, which leads to cumbersome SQL. GROUPING_ID is useful in these cases.

  • GROUPING_ID is functionally equivalent to taking the results of multiple GROUPING functions and concatenating them into a bit vector (a string of ones and zeros). By using GROUPING_ID, you can avoid using multiple GROUPING functions and make row filtering conditions easier to express. Row filtering is easier with GROUPING_ID because the desired rows can be identified with a single condition of GROUPING_ID = n. The function is especially useful when multiple levels of aggregation are stored in a single table.

Syntax

GROUPING_ID(expr[, expr]...)

Parameters

Parameter

Description

expr

An expression in the GROUP BY clause.

Return type

Returns a value of the NUMBER data type.

Example

create table aggregate_functions_tb_agg(VARCHAR2_1 VARCHAR2(100),INTEGER_1 INTEGER);
insert into aggregate_functions_tb_agg(VARCHAR2_1,INTEGER_1) values('aggregate_functions_tb_agg',10);
select sum(INTEGER_1),grouping_id(VARCHAR2_1) from aggregate_functions_tb_agg group by VARCHAR2_1;
 sum | grouping 
-----+----------
  10 |        0
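
The single-row example above can only produce 0. As a sketch of the more typical use, reusing the items_sold table from the grouping-operations example earlier in this document, GROUPING_ID returns the same bit vector that GROUPING(make, model) produced there:

```sql
SELECT make, model, grouping_id(make, model), sum(sales)
FROM items_sold
GROUP BY ROLLUP(make, model);
-- detail rows yield 0, the per-make subtotal rows 1, and the grand total 3
```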

GROUPING

Description

GROUPING distinguishes superaggregate rows from regular grouped rows. GROUP BY extensions such as ROLLUP and CUBE produce superaggregate rows where the set of all values is represented by null. Using the GROUPING function, you can distinguish a null representing the set of all values in a superaggregate row from a null in a regular row.

The expr in the GROUPING function must match one of the expressions in the GROUP BY clause.

Syntax

GROUPING(expr)

Parameters

Parameter

Description

expr

An expression in the GROUP BY clause.

Return type

Returns a value of the NUMBER data type.

  • The function returns a value of 1 if the value of expr in the row is a null representing the set of all values.

  • Otherwise, it returns 0.

Example

create temp view gstest1(a,b,v)
  as values (1,1,10),(1,1,11),(1,2,12),(1,2,13),(1,3,14),
            (2,3,15),
            (3,3,16),(3,4,17),
            (4,1,18),(4,1,19);

select a, b, grouping(a,b), sum(v), count(*), max(v)
  from gstest1 group by rollup (a,b);
 a | b | grouping | sum | count | max 
---+---+----------+-----+-------+-----
 1 | 1 |        0 |  21 |     2 |  11
 1 | 2 |        0 |  25 |     2 |  13
 1 | 3 |        0 |  14 |     1 |  14
 1 |   |        1 |  60 |     5 |  14
 2 | 3 |        0 |  15 |     1 |  15
 2 |   |        1 |  15 |     1 |  15
 3 | 3 |        0 |  16 |     1 |  16
 3 | 4 |        0 |  17 |     1 |  17
 3 |   |        1 |  33 |     2 |  17
 4 | 1 |        0 |  37 |     2 |  19
 4 |   |        1 |  37 |     2 |  19
   |   |        3 | 145 |    10 |  19

GROUP_ID

Description

GROUP_ID distinguishes duplicate groups that result from a GROUP BY specification. It is useful in filtering out duplicate groupings from the query result. It returns a NUMBER to uniquely identify duplicate groups. This function is applicable only in a SELECT statement that contains a GROUP BY clause.

If n duplicates exist for a particular grouping, then GROUP_ID returns numbers in the range 0 to n-1.

Syntax

GROUP_ID()

Return type

Returns a value of the NUMBER data type.

Example

create table t(a int, b int, c int);
insert into t(a, b, c) values(1, 2, 3);
select a, b, c, grouping(a,b,c), group_id() from t group by cube(a,b,c) order by a, b, c, grouping(a, b, c), group_id();
 a | b | c | grouping | group_id 
---+---+---+----------+----------
 1 | 2 | 3 |        0 |        0
 1 | 2 |   |        1 |        0
 1 |   | 3 |        2 |        0
 1 |   |   |        3 |        0
   | 2 | 3 |        4 |        0
   | 2 |   |        5 |        0
   |   | 3 |        6 |        0
   |   |   |        7 |        0
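
In the CUBE example above every grouping is unique, so group_id() is always 0. As a sketch of the duplicate-group case, reusing the same table t: listing (a) twice in GROUPING SETS produces two identical groupings, which group_id() distinguishes:

```sql
SELECT a, group_id()
FROM t
GROUP BY GROUPING SETS ((a), (a));
-- two rows for a = 1, with group_id 0 and 1
```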

COLLECT

Description

COLLECT takes as its argument a column of any type and creates a nested table of the input type out of the rows selected. To get accurate results from this function, you must use it within a CAST function.

If column is itself a collection, then the output of COLLECT is a nested table of collections.

Syntax

COLLECT(column)

Parameters

Parameter

Description

column

A column of any type.

Return type

Returns a nested table of the same type as the input.

Example

CREATE TABLE collect_warehouses
    ( warehouse_id       NUMBER(3) 
    , warehouse_name     VARCHAR2(35)
    , location_id        NUMBER(4)
    );
CREATE TYPE collect_warehouse_name_t AS TABLE OF VARCHAR2(35);
INSERT INTO collect_warehouses VALUES (1,'Southlake, Texas',1400);
INSERT INTO collect_warehouses VALUES (2,'San Francisco',1500);
INSERT INTO collect_warehouses VALUES (3,'New Jersey',1600);
INSERT INTO collect_warehouses VALUES (4,'Seattle, Washington',1700);
SELECT CAST(COLLECT(warehouse_name) AS collect_warehouse_name_t) "Warehouses" FROM collect_warehouses;
                               Warehouses                                
-------------------------------------------------------------------------
 {"Southlake, Texas","San Francisco","New Jersey","Seattle, Washington"}