Tablestore SDK for PHP provides multi-row operations such as BatchGetRow, BatchWriteRow, and GetRange.

BatchGetRow

You can call this operation to read several rows of data from one or more tables.

The BatchGetRow operation is a set of multiple GetRow operations. The execution, returning of results, and capacity unit (CU) consumption of each operation are carried out independently.

Compared with sending a large number of individual GetRow requests, a single BatchGetRow request can significantly reduce the overall response time and increase the data read rate.

The parameters of the BatchGetRow operation are the same as those of the GetRow operation. Note that BatchGetRow applies the same parameter configuration to all rows in a table. For example, if columns_to_get is set to colA, only the value of colA is read for all of those rows.
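For example, the following minimal sketch (assuming that $client is an initialized OTSClient instance and that OrderTable is a table with a single INTEGER primary key column named pk0; both names are illustrative only) reads two rows in one request, and the columns_to_get setting applies to both rows:

    $response = $client->batchGetRow([
        'tables' => [
            [
                'table_name' => 'OrderTable',        // hypothetical table
                'max_versions' => 1,
                'columns_to_get' => ['colA'],        // applies to every row listed below
                'primary_keys' => [
                    [['pk0', 1]],
                    [['pk0', 2]]
                ]
            ]
        ]
    ]);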

As with BatchWriteRow, you must check the response when you call BatchGetRow. If the operation fails for some rows, an exception may not occur; instead, information about the failed rows is stored in BatchGetRowResponse.

Operation
    /**
     * Read the specified rows of data.
     * Note that if the BatchGetRow operation fails for some rows, an exception does not occur, but information about the failed rows is stored in $response. For more information, see the example on handling the response of BatchGetRow.
     * @api
     * @param [] $request Request parameters.
     * @return [] Response.
     * @throws OTSClientException The exception that occurs when a parameter error occurs or the Tablestore server returns a verification error.
     * @throws OTSServerException The exception that occurs when the Tablestore server returns an error.
     */
    public function batchGetRow(array $request);  
			
Sample requests
$result = $client->batchGetRow([
    'tables' => [                                            // REQUIRED
        [
            'table_name' => '<string>',                      // REQUIRED
            'primary_keys' => [                              // REQUIRED
                [
                    ['<string>', <PrimaryKeyValue>], 
                    ['<string>', <PrimaryKeyValue>],
                    ['<string>', <PrimaryKeyValue>, <PrimaryKeyType>]
                ], 
                // other primary keys
            ],
            'max_versions' => <integer>,
            'time_range' => [
                'start_time' => <integer>,
                'end_time' => <integer>,
                'specific_time' => <integer>
            ],
            'start_column' => '<string>',
            'end_column' => '<string>',
            'token' => '<string>',
            'columns_to_get' => [
                '<string>',
                '<string>',
                //...   
            ],
            'column_filter' =>  <ColumnCondition>
        ],
        // other tables.
    ]
]);

			
Code analysis
  • Differences between BatchGetRow and GetRow
    • The primary_key parameter is changed to the primary_keys parameter in BatchGetRow. Multiple rows can be read at a time.
    • Hierarchies are created for tables. Multiple tables can be read at a time.
  • The tables parameter specifies tables. You can specify information about the rows to read for each table.
    • table_name: required. This parameter specifies the name of the table.
    • primary_keys: required. This parameter specifies the list of primary key columns for each row. Each item in the list corresponds to a primary_key. The schema of each primary key column is as follows:
      • The primary key of a table can contain one to four primary key columns. Primary key columns are sorted by the order they are added. For example, primary key schema PRIMARY KEY (A, B, C) is different from PRIMARY KEY (A, C, B). Tablestore sorts rows based on the values of all primary key columns.
      • Each item in primary_keys specifies the values in the following sequence: the primary key name, primary key value (PrimaryKeyValue), and primary key value type (PrimaryKeyType, which is optional).
      • The value of PrimaryKeyValue can be an integer or a string.
      • PrimaryKeyType can be set to PrimaryKeyTypeConst::CONST_INTEGER, PrimaryKeyTypeConst::CONST_STRING (UTF-8 encoded string), or PrimaryKeyTypeConst::CONST_BINARY. If the type of data to query is INTEGER or STRING, this parameter can be ignored. Otherwise, the type must be specified.
    • max_versions: the maximum number of versions from which data is read.
    • time_range: the range of versions from which data is read. For more information, see TimeRange.
      • start_time: the beginning of the time range to query, in milliseconds. Valid values: [0, INT64.MAX]. To query data within a range, specify start_time and end_time. The range is a left-closed right-open interval.
      • end_time: the end of the time range to query, in milliseconds. Valid values: [0, INT64.MAX].
      • specific_time: the specific timestamp to query, in milliseconds. Valid values: [0, INT64.MAX). Specify either specific_time or the [start_time, end_time) range, not both.
    • You must specify at least one of max_versions and time_range. (A request sketch that combines these parameters follows this list.)
      • If you specify only max_versions, data of up to the specified number of versions is returned from the latest to the earliest.
      • If you specify only time_range, all data within the range is returned.
      • If you specify both max_versions and time_range, data of up to the specified number of versions within the time range is returned from the latest to the earliest.
    • columns_to_get: the set of columns to read. If you do not specify this parameter, all columns are read.
    • start_column: the start column to read, which is used to read wide rows. The response contains the start column. The column names are sorted alphabetically. Example: A table contains columns a, b, and c. If the value of start_column is b, the reading starts from column b, and columns b and c are returned.
    • end_column: the end column to read, which is used to read wide rows. The response does not contain the end column. The column names are sorted alphabetically. Example: A table contains columns a, b, and c. If the value of end_column is b, the reading ends at column b, and column a is returned.
    • token: the start column of a wide row to read next time. This parameter is unavailable.
    • column_filter: the filtering condition. Only rows that meet the condition are returned. This parameter is similar to column_condition in condition. For more information, see Filter.
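The following is a minimal request sketch that combines max_versions, time_range, and columns_to_get. The table name SampleTable, its primary key columns pk0 and pk1, and the attribute columns colA and colB are assumptions for illustration only:

    $response = $client->batchGetRow([
        'tables' => [
            [
                'table_name' => 'SampleTable',
                'max_versions' => 2,                    // at most 2 versions per column
                'time_range' => [
                    'start_time' => 1600000000000,      // in milliseconds, left-closed
                    'end_time' => 1700000000000         // in milliseconds, right-open
                ],
                'columns_to_get' => ['colA', 'colB'],   // read only these attribute columns
                'primary_keys' => [
                    [['pk0', 1], ['pk1', 'a']],
                    [['pk0', 2], ['pk1', 'b']]
                ]
            ]
        ]
    ]);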
Responses
[
    'tables' => [
        [
            'table_name' => '<string>',
            'rows' => [
                [
                    'is_ok' => true || false,
                    'error' => [
                        'code' => '<string>',
                        'message' => '<string>',
                    ],
                    'consumed' => [
                        'capacity_unit' => [
                            'read' => <integer>,
                            'write' => <integer>
                        ]
                    ],
                    'primary_key' => [ 
                        ['<string>', <PrimaryKeyValue>], 
                        ['<string>', <PrimaryKeyValue>],
                        ['<string>', <PrimaryKeyValue>, <PrimaryKeyType>]
                    ],  
                    'attribute_columns' => [
                            ['<string>', <ColumnValue>, <ColumnType>, <integer>],
                            ['<string>', <ColumnValue>, <ColumnType>, <integer>],
                            ['<string>', <ColumnValue>, <ColumnType>, <integer>]
                    ],
                    'next_token' => '<string>'
                ],
                // other rows
            ]
        ],
        // other tables.
    ]
]
			
Code analysis
  • BatchGetRow describes scenarios where all rows or some rows fail to be read. Ensure that you have read that section; a retry sketch also follows this list.
  • The tables parameter in the response is organized by table and corresponds one-to-one to the tables in the request.
    • table_name: the name of the table.
    • is_ok: indicates whether the row was read successfully. A value of true indicates that the row was read; in this case, error is invalid. A value of false indicates that the row failed to be read; in this case, consumed, primary_key, and attribute_columns are invalid.
    • error: the error information in the response when the row fails to be read.
      • code: the error code in the response when the row fails to be read.
      • message: the error message in the response when the row fails to be read.
    • consumed: the number of CUs consumed by this operation.
      capacity_unit: the number of read and write CUs consumed.
      • read: the number of read CUs consumed.
      • write: the number of write CUs consumed.
    • primary_key: the list of the primary key columns, which is consistent with that in the request.
    • attribute_columns: the list of attribute columns.
      • Each item specifies the values in the following sequence: the attribute column name, attribute column value (ColumnValue), attribute column value type (ColumnType), and timestamp.
      • ColumnType can be set to ColumnTypeConst::CONST_INTEGER, ColumnTypeConst::CONST_STRING (UTF-8 encoded string), ColumnTypeConst::CONST_BINARY, ColumnTypeConst::CONST_BOOLEAN, or ColumnTypeConst::CONST_DOUBLE.
      • The timestamp is a 64-bit integer that represents the version of the value of an attribute column.
      • The attribute columns in the response are sorted by attribute column name in ascending alphabetical order. The versions of the attribute columns are sorted by timestamp in descending order.
      • The order of the columns may be inconsistent with that of columns_to_get in the request.
    • next_token: the start column of a wide row to read next time. This parameter is unavailable.
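If some rows fail to be read, for example because of throttling, a common pattern is to rebuild a request that contains only the failed rows and retry it. The following is a minimal sketch of this pattern; it assumes that $request is the array that was passed to batchGetRow, that $response is the returned result, and that rows in the response are returned in the same order as in the request:

    // Rebuild a request that contains only the rows that failed to be read.
    $retryTables = array();
    foreach ($response['tables'] as $tableIndex => $tableData) {
        $failedPrimaryKeys = array();
        foreach ($tableData['rows'] as $rowIndex => $rowData) {
            if (!$rowData['is_ok']) {
                // primary_key is invalid for failed rows, so take it from the original request.
                $failedPrimaryKeys[] = $request['tables'][$tableIndex]['primary_keys'][$rowIndex];
            }
        }
        if (!empty($failedPrimaryKeys)) {
            $retryTable = $request['tables'][$tableIndex];   // reuse the original per-table parameters
            $retryTable['primary_keys'] = $failedPrimaryKeys;
            $retryTables[] = $retryTable;
        }
    }
    // Retry only the failed rows. A production application should also limit the number of retries.
    if (!empty($retryTables)) {
        $retryResponse = $client->batchGetRow(['tables' => $retryTables]);
    }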
Examples

The following code provides an example on how to read 30 rows of data at a time:

    // Read data from three tables, 10 rows per table.
    $tables = array();
    for ($i = 0; $i < 3; $i++) {
        $primary_keys = array();
        for ($j = 0; $j < 10; $j++) {
            $primary_keys[] = [
                ['pk0', $i],
                ['pk1', $j]
            ];
        }
        $tables[] = [
            'table_name' => 'SampleTable' . $i,
            'max_versions' => 1,
            'primary_keys' => $primary_keys
        ];
    }
    $request = [
        'tables' => $tables
    ];
    $response = $otsClient->batchGetRow($request);

    // Process each table that is returned.
    foreach ($response['tables'] as $tableData) {
        print "Handling table {$tableData['table_name']} ...\n";

        // Determine how to process each row of data in this table.
        foreach ($tableData['rows'] as $rowData) {
            if ($rowData['is_ok']) {
                // Process the data that is read.
                $row = json_encode($rowData['primary_key']);
                print "Handling row: {$row}\n";
            } else {
                // Display the error information.
                print "Error: {$rowData['error']['code']} {$rowData['error']['message']}\n";
            }
        }
    }
			

For more information, see the sample files listed in the following table.

Sample file Description
BatchGetRow1.php Shows you how to use BatchGetRow to read multiple rows from a table.
BatchGetRow2.php Shows you how to use BatchGetRow to read multiple rows from multiple tables.
BatchGetRow3.php Shows you how to use BatchGetRow to read specified columns for multiple rows from a table.
BatchGetRow4.php Shows you how to use BatchGetRow to process returned results.
BatchGetRowWithColumnFilter.php Shows you how to use BatchGetRow with conditional filtering.

BatchWriteRow

You can call this operation to insert, modify, or delete several rows of data in one or more tables.

The BatchWriteRow operation is a set of PutRow, UpdateRow, and DeleteRow operations. The execution, returning of results, and CU consumption of each operation are carried out independently.

The BatchWriteRow operation allows you to perform multiple write operations by using a single request. These write operations include PutRow, UpdateRow, and DeleteRow. This operation also allows you to write data to multiple tables at a time.

The process to create a BatchWriteRow operation is the same as that to create a PutRow, UpdateRow, or DeleteRow operation. You can also set update conditions.

When you call the BatchWriteRow operation, you must check the response. When multiple rows are written, some rows may fail to be written. If the operation fails for some rows, an exception may not occur; instead, the indexes of and error messages for the failed rows are stored in BatchWriteRowResponse. We recommend that you check the response so that failed operations are not silently ignored. In some situations, calling the BatchWriteRow operation does throw an exception. For example, if the server detects invalid parameters in some of the operations, a parameter exception may be thrown and none of the operations in the request are performed.

Operation
    /**
     * Write, update, or delete the specified rows of data.
     * Note that if the BatchWriteRow operation fails for some rows, an exception does not occur, but information about the failed rows is stored in $response. For more information, see the example on handling the response of BatchWriteRow.
     * @api
     * @param [] $request Request parameters.
     * @return [] Response.
     * @throws OTSClientException The exception that occurs when a parameter error occurs or the Tablestore server returns a verification error.
     * @throws OTSServerException The exception that occurs when the Tablestore server returns an error.
     */
    public function batchWriteRow(array $request);  
			
Sample requests
$result = $client->batchWriteRow([
    'tables' => [                                                    // REQUIRED
        [
            'table_name' => '<string>',                              // REQUIRED
            'rows' => [                                              // REQUIRED
                [
                    'operation_type' => <OperationType>,
                    'condition' => [
                        'row_existence' => <RowExistence>,
                        'column_condition' => <ColumnCondition>
                    ],
                    'primary_key' => [                               // REQUIRED
                        ['<string>', <PrimaryKeyValue>],
                        ['<string>', <PrimaryKeyValue>],
                        ['<string>', <PrimaryKeyValue>, <PrimaryKeyType>]
                    ],
                    'attribute_columns' => [                         // REQUIRED in PUT
                        ['<string>', <ColumnValue>],
                        ['<string>', <ColumnValue>, <ColumnType>],
                        ['<string>', <ColumnValue>, <ColumnType>, <integer>]
                    ],
                    'update_of_attribute_columns' => [               // REQUIRED in UPDATE
                        'PUT' => [
                            ['<string>', <ColumnValue>],
                            ['<string>', <ColumnValue>, <ColumnType>],
                            ['<string>', <ColumnValue>, <ColumnType>, <integer>]
                        ],
                        'DELETE' => [
                            ['<string>', <integer>],
                            ['<string>', <integer>],
                            ['<string>', <integer>],
                            ['<string>', <integer>]
                        ],
                        'DELETE_ALL' => [
                            '<string>',
                            '<string>',
                            '<string>',
                            '<string>'
                        ]
                    ],
                    'return_content' => [
                        'return_type' => <ReturnType>
                    ]
                ],
                // other rows
            ]
        ],
        // other tables.
    ]
]);
Code analysis
  • BatchWriteRow is a combination of PutRow, UpdateRow, and DeleteRow.
    • The operation_type parameter is added to distinguish operation types.
    • Hierarchies are created for tables. Multiple tables can be processed at a time.
  • The tables parameter specifies tables. For each table, the rows parameter specifies the rows to write, update, or delete.

    • table_name: required. This parameter specifies the name of the table.
    • condition: For more information about condition, see Conditional update.
      • row_existence: the row existence condition.
      • column_condition: the column condition.
    • operation_type: the operation type. Valid values: PUT (OperationTypeConst::CONST_PUT), UPDATE (OperationTypeConst::CONST_UPDATE), and DELETE (OperationTypeConst::CONST_DELETE). A sketch that mixes the three operation types in one request follows this list.
      • PUT: To perform this operation, you must set the primary_key and attribute_columns parameters.
      • UPDATE: To perform this operation, you must set the primary_key and update_of_attribute_columns parameters.
      • DELETE: To perform this operation, you must set the primary_key parameter.
    • return_content: the content to return.

      return_type: Only this parameter is available for return_content.

      You can set ReturnTypeConst::CONST_PK to return the value of the auto-increment primary key column.
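The following is a minimal sketch of a single BatchWriteRow request that mixes the three operation types. The table name SampleTable, its single primary key column pk0, and the attribute column names are assumptions for illustration only:

    $result = $client->batchWriteRow([
        'tables' => [
            [
                'table_name' => 'SampleTable',
                'rows' => [
                    [   // PUT: write a full row and ignore whether the row already exists.
                        'operation_type' => OperationTypeConst::CONST_PUT,
                        'condition' => RowExistenceExpectationConst::CONST_IGNORE,
                        'primary_key' => [['pk0', 1]],
                        'attribute_columns' => [['Col0', 99]]
                    ],
                    [   // UPDATE: overwrite Col0 and delete all versions of Col1, only if the row exists.
                        'operation_type' => OperationTypeConst::CONST_UPDATE,
                        'condition' => RowExistenceExpectationConst::CONST_EXPECT_EXIST,
                        'primary_key' => [['pk0', 2]],
                        'update_of_attribute_columns' => [
                            'PUT' => [['Col0', 100]],
                            'DELETE_ALL' => ['Col1']
                        ]
                    ],
                    [   // DELETE: remove the whole row.
                        'operation_type' => OperationTypeConst::CONST_DELETE,
                        'condition' => RowExistenceExpectationConst::CONST_IGNORE,
                        'primary_key' => [['pk0', 3]]
                    ]
                ]
            ]
        ]
    ]);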

Responses
  [
      'tables' => [
          [
              'table_name' => '<string>',
              'rows' => [
                  [
                      'is_ok' => true || false,
                      'error' => [
                          'code' => '<string>',
                          'message' => '<string>',
                      ],
                      'consumed' => [
                          'capacity_unit' => [
                              'read' => <integer>,
                              'write' => <integer>
                          ]
                      ],
                      'primary_key' => [ 
                          ['<string>', <PrimaryKeyValue>], 
                          ['<string>', <PrimaryKeyValue>],
                          ['<string>', <PrimaryKeyValue>, <PrimaryKeyType>]
                      ],
                      'attribute_columns' => []
                  ],
                  // other rows
              ]
          ],
          // other tables.
      ]
  ]
			
Code analysis
  • BatchWriteRow describes scenarios where all rows or some rows fail to be written. Ensure that you have read that section.
  • The tables parameter in the response is organized by table and corresponds one-to-one to the tables in the request.
    • table_name: the name of the table.
    • is_ok: indicates whether the row was written successfully. A value of true indicates that the row was written; in this case, error is invalid. A value of false indicates that the row failed to be written.
    • error: the error information in the response when the row fails to be written.
      • code: the error code in the response when the row fails to be written.
      • message: the error message in the response when the row fails to be written.
    • consumed: the number of CUs consumed by this operation.
      capacity_unit: the number of read and write CUs consumed.
      • read: the number of read CUs consumed.
      • write: the number of write CUs consumed.
    • primary_key: the list of the primary key columns, which is consistent with that in the request. If you set return_type to ReturnTypeConst::CONST_PK, the value of the auto-increment primary key column is returned, as shown in the sketch after this list.
    • attribute_columns: the list of attribute columns. No attribute columns are returned in this example.
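The following is a minimal sketch of writing a row to a table that has an auto-increment primary key column and reading the assigned value back through return_type. The table name IncTable, the column names, and the auto-increment type constant PrimaryKeyTypeConst::CONST_PK_AUTO_INCR are assumptions; see the auto-increment primary key column topic for the exact usage:

    $result = $client->batchWriteRow([
        'tables' => [
            [
                'table_name' => 'IncTable',       // hypothetical table: pk0 (INTEGER) + pk1 (auto-increment)
                'rows' => [
                    [
                        'operation_type' => OperationTypeConst::CONST_PUT,
                        'condition' => RowExistenceExpectationConst::CONST_IGNORE,
                        'primary_key' => [
                            ['pk0', 1],
                            ['pk1', null, PrimaryKeyTypeConst::CONST_PK_AUTO_INCR]  // assumed constant; the server assigns the value
                        ],
                        'attribute_columns' => [['Col0', 'value']],
                        'return_content' => ['return_type' => ReturnTypeConst::CONST_PK]
                    ]
                ]
            ]
        ]
    ]);
    // For a successful row, primary_key in the response contains the value assigned to pk1.
    $rowData = $result['tables'][0]['rows'][0];
    if ($rowData['is_ok']) {
        print json_encode($rowData['primary_key']) . "\n";
    }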
Examples

The following code provides an example on how to import 30 rows of data at a time:

    // Insert data into three tables, 10 rows per table.
    $tables = array();
    for ($i = 0; $i < 3; $i++) {
        $rows = array();
        for ($j = 0; $j < 10; $j++) {
            $rows[] = [
                'operation_type' => OperationTypeConst::CONST_PUT,             // The operation type is PUT.
                'condition' => RowExistenceExpectationConst::CONST_IGNORE,
                'primary_key' => [
                    ['pk0', $i],
                    ['pk1', $j]
                ],
                'attribute_columns' => [
                    ['Col0', 4],
                    ['Col2', 'Beijing']
                ]
            ];
        }
        $tables[] = [
            'table_name' => 'SampleTable' . $i,
            'rows' => $rows
        ];
    }
    $request = [
        'tables' => $tables
    ];
    $response = $otsClient->batchWriteRow($request);

    // Process each table that is returned.
    foreach ($response['tables'] as $tableData) {
        print "Handling table {$tableData['table_name']} ...\n";

        // Process the result returned by PutRow in this table.
        $putRows = $tableData['rows'];
        foreach ($putRows as $rowData) {
            if ($rowData['is_ok']) {
                // Data is written.
                print "Capacity Unit Consumed: {$rowData['consumed']['capacity_unit']['write']}\n";
            } else {
                // Display the error information.
                print "Error: {$rowData['error']['code']} {$rowData['error']['message']}\n";
            }
        }
    }
			

For more information, see the sample files listed in the following table.

Sample file Description
BatchWriteRow1.php Shows you how to perform multiple PUT operations in BatchWriteRow.
BatchWriteRow2.php Shows you how to perform multiple UPDATE operations in BatchWriteRow.
BatchWriteRow3.php Shows you how to perform multiple DELETE operations in BatchWriteRow.
BatchWriteRow4.php Shows you how to perform the UPDATE, PUT, and DELETE operations in BatchWriteRow.
BatchWriteRowWithColumnFilter.php Shows you how to use BatchWriteRow with conditional update.

GetRange

You can call this operation to read data within the specified primary key range.

This operation is used to read data within a range. In Tablestore tables, all rows are sorted by primary key. The primary key of a table sequentially consists of all primary key columns. Therefore, do not assume that the rows are sorted based on a certain primary key column.

The GetRange operation allows you to read data in a forward or backward direction based on a specified range. You can also limit the number of rows to read. If the range is large and the number of scanned rows or the volume of data exceeds the limit, the scan stops and the read rows and the next primary key column are returned. If not all rows are read, you can initiate a request to start from where the last operation left off and read the remaining rows based on the next primary key column returned by the previous operation.

Operation
    /**
     * Read data between the start primary key column and the end primary key column.
     * Note that the server may truncate this range. You need to determine whether to call the GetRange operation again based on the next_start_primary_key value in the response.
     * You can specify the maximum number of rows to read.
     * When you specify the start and end primary key columns, you can use INF_MIN and INF_MAX to indicate the minimum and maximum values. For more information, see the following code example.
     * @api
     * @param [] $request Request parameters.
     * @return [] Response.
     * @throws OTSClientException The exception that occurs when a parameter error occurs or the Tablestore server returns a verification error.
     * @throws OTSServerException The exception that occurs when the Tablestore server returns an error.
     */
    public function getRange(array $request); 
			
Sample requests
$result = $client->getRange([
    'table_name' => '<string>',                                     // REQUIRED
    'inclusive_start_primary_key' => [                              // REQUIRED
        ['<string>', <PrimaryKeyValue>], 
        ['<string>', <PrimaryKeyValue>],
        ['<string>', <PrimaryKeyValue>, <PrimaryKeyType>]
    ], 
    'exclusive_end_primary_key' => [                                // REQUIRED
        ['<string>', <PrimaryKeyValue>], 
        ['<string>', <PrimaryKeyValue>],
        ['<string>', <PrimaryKeyValue>, <PrimaryKeyType>]
    ], 
    'direction' => <Direction>,                                     // REQUIRED
    'limit' => <integer>,
    'max_versions' => <integer>,
    'time_range' => [
        'start_time' => <integer>,
        'end_time' => <integer>,
        'specific_time' => <integer>
    ],
    'start_column' => '<string>',
    'end_column' => '<string>',
    'token' => '<string>',
    'columns_to_get' => [
        '<string>',
        '<string>',
        //...   
    ],
    'column_filter' =>  <ColumnCondition>
]);      
Code analysis
  • Differences between GetRange and GetRow
    • The primary_key parameter is changed to the inclusive_start_primary_key and exclusive_end_primary_key parameters. The range is a left-closed right-open interval.
    • The direction parameter is added to specify the direction.
    • The limit parameter is added to limit the number of returned rows.
  • table_name: required. This parameter specifies the name of the table.
  • inclusive_start_primary_key: required. This parameter specifies the primary key column from which the read starts. If a row that matches this primary key exists, the row is included in the response.
    • The primary key of a table can contain one to four primary key columns. Primary key columns are sorted by the order they are added. For example, primary key schema PRIMARY KEY (A, B, C) is different from PRIMARY KEY (A, C, B). Tablestore sorts rows based on the values of all primary key columns.
    • Each item in the primary key list specifies the values in the following sequence: the primary key name, primary key value (PrimaryKeyValue), and primary key value type (PrimaryKeyType, which is optional).
    • The value of PrimaryKeyValue can be an integer or a string.
    • PrimaryKeyType can be set to PrimaryKeyTypeConst::CONST_INTEGER, PrimaryKeyTypeConst::CONST_STRING (UTF-8 encoded string), or PrimaryKeyTypeConst::CONST_BINARY. If the type of data to query is INTEGER or STRING, this parameter can be ignored. Otherwise, the type must be specified.
    • A primary key column can also be of the INF_MIN or INF_MAX type. Values of INF_MIN columns are always smaller than other columns, whereas values of INF_MAX columns are always greater than those of other columns.
    • If PrimaryKeyType is set to PrimaryKeyTypeConst::CONST_INF_MIN or PrimaryKeyTypeConst::CONST_INF_MAX, you can set PrimaryKeyValue to null.
  • exclusive_end_primary_key: required. This parameter specifies the primary key column at which the read ends. The end primary key column is excluded from the response even if the row exists.
    • The primary key of a table can contain one to four primary key columns. Primary key columns are sorted by the order they are added. For example, primary key schema PRIMARY KEY (A, B, C) is different from PRIMARY KEY (A, C, B). Tablestore sorts rows based on the values of all primary key columns.
    • Each item in the primary key list specifies the values in the following sequence: the primary key name, primary key value (PrimaryKeyValue), and primary key value type (PrimaryKeyType, which is optional).
    • The value of PrimaryKeyValue can be an integer or a string.
    • PrimaryKeyType can be set to PrimaryKeyTypeConst::CONST_INTEGER, PrimaryKeyTypeConst::CONST_STRING (UTF-8 encoded string), or PrimaryKeyTypeConst::CONST_BINARY. If the type of data to query is INTEGER or STRING, this parameter can be ignored. Otherwise, the type must be specified.
    • A primary key column can also be of the INF_MIN or INF_MAX type. Values of INF_MIN columns are always smaller than other columns, whereas values of INF_MAX columns are always greater than those of other columns.
    • If PrimaryKeyType is set to PrimaryKeyTypeConst::CONST_INF_MIN or PrimaryKeyTypeConst::CONST_INF_MAX, you can set PrimaryKeyValue to null.
  • direction: required. This parameter specifies the order for this query. If direction is set to DirectionConst::CONST_FORWARD (the forward direction), the value of inclusive_start_primary_key must be smaller than that of exclusive_end_primary_key. All rows in the response are sorted by primary key in ascending order. If direction is set to DirectionConst::CONST_BACKWARD (the backward direction), the value of inclusive_start_primary_key must be greater than that of exclusive_end_primary_key. All rows in the response are sorted by primary key in descending order.
  • limit: the maximum number of rows to return.
    • If the number of rows queried exceeds this value, the response includes a breakpoint to record the position where the read ends in this operation, so that the next read can start from this position. This value must be greater than 0.
    • Tablestore returns a maximum of 5,000 rows of data if this parameter is not set. The total data size does not exceed 4 MB.
  • max_versions: the maximum number of versions from which data is read.
  • time_range: the range of versions from which data is read. For more information, see TimeRange.
    • start_time: the beginning of the time range to query, in milliseconds. Valid values: [0, INT64.MAX]. To query data within a range, specify start_time and end_time. The range is a left-closed right-open interval.
    • end_time: the end of the time range to query, in milliseconds. Valid values: [0, INT64.MAX].
    • specific_time: the specific timestamp. To query data at a specific point in time, specify specific_time. You can set one of specific_time and [start_time, end_time), in milliseconds. Valid values: [0, INT64.MAX).
  • You must specify at least one of max_versions and time_range.
    • If you specify only max_versions, data of up to the specified number of versions is returned from the latest to the earliest.
    • If you specify only time_range, all data within the range is returned.
    • If you specify both max_versions and time_range, data of up to the specified number of versions within the time range is returned from the latest to the earliest.
  • columns_to_get: the set of columns to read. If you do not specify this parameter, all columns are read.
  • start_column: the start column to read, which is used to read wide rows. The response contains the start column. The column names are sorted alphabetically. Example: A table contains columns a, b, and c. If the value of start_column is b, the reading starts from column b, and columns b and c are returned.
  • end_column: the end column to read, which is used to read wide rows. The response does not contain the end column. The column names are sorted alphabetically. Example: A table contains columns a, b, and c. If the value of end_column is b, the reading ends at column b, and column a is returned. A request sketch that combines start_column and end_column follows this list.
  • token: the start column of a wide row to read next time. This parameter is unavailable.
  • column_filter: the filtering condition. Only rows that meet the condition are returned. This parameter is similar to column_condition in condition. For more information, see Filter.
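For example, the following minimal sketch (assuming a wide table named WideTable with a single INTEGER primary key column pk0 and attribute columns named a through z; all names are illustrative only) reads only the attribute columns in the range [b, f) for every row in the table:

    $response = $client->getRange([
        'table_name' => 'WideTable',                        // hypothetical wide table
        'max_versions' => 1,
        'direction' => DirectionConst::CONST_FORWARD,
        'inclusive_start_primary_key' => [
            ['pk0', null, PrimaryKeyTypeConst::CONST_INF_MIN]
        ],
        'exclusive_end_primary_key' => [
            ['pk0', null, PrimaryKeyTypeConst::CONST_INF_MAX]
        ],
        'start_column' => 'b',                              // included in the response
        'end_column' => 'f'                                 // excluded from the response
    ]);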
Responses
[
    'consumed' => [
        'capacity_unit' => [
            'read' => <integer>,
            'write' => <integer>
        ]
    ],
    'next_start_primary_key' => [ 
        ['<string>', <PrimaryKeyValue>], 
        ['<string>', <PrimaryKeyValue>],
        ['<string>', <PrimaryKeyValue>, <PrimaryKeyType>]
    ], 
    'rows' => [
        [
            'primary_key' => [ 
                ['<string>', <PrimaryKeyValue>], 
                ['<string>', <PrimaryKeyValue>],
                ['<string>', <PrimaryKeyValue>, <PrimaryKeyType>]
            ],  
            'attribute_columns' => [
                    ['<string>', <ColumnValue>, <ColumnType>, <integer>],
                    ['<string>', <ColumnValue>, <ColumnType>, <integer>],
                    ['<string>', <ColumnValue>, <ColumnType>, <integer>]
            ]
        ],
        // other rows
    ]
]
			
Code analysis
  • GetRange describes scenarios where all rows or some rows fail to be read. Ensure that you have read that section.
  • consumed: the number of CUs consumed by this operation.
    capacity_unit: the number of read and write CUs consumed.
    • read: the number of read CUs consumed.
    • write: the number of write CUs consumed.
  • rows
    • If direction in the request is set to DirectionConst::CONST_FORWARD, all rows are sorted by primary key in ascending order. If direction in the request is set to DirectionConst::CONST_BACKWARD, all rows are sorted by primary key in descending order.
    • The attribute_columns parameter for each row only contains the columns specified in columns_to_get. The order of the columns may be inconsistent with that of columns_to_get in the request.
    • If columns_to_get in the request does not contain any primary key column, rows whose primary keys fall within the query range but that contain none of the columns specified in columns_to_get are not included in the response.
    • primary_key: the list of primary key columns of the row, in the same format as in the request.
    • attribute_columns: the list of attribute columns.
      • Each item specifies the values in the following sequence: the attribute column name, attribute column value (ColumnValue), attribute column value type (ColumnType), and timestamp.
      • ColumnType can be set to ColumnTypeConst::CONST_INTEGER, ColumnTypeConst::CONST_STRING (UTF-8 encoded string), ColumnTypeConst::CONST_BINARY, ColumnTypeConst::CONST_BOOLEAN, or ColumnTypeConst::CONST_DOUBLE.
      • The timestamp is a 64-bit integer that indicates the version of an attribute column.
      • The attribute columns in the response are sorted by attribute column name in ascending alphabetical order. The versions of the attribute columns are sorted by timestamp in descending order.
  • next_start_primary_key: the breakpoint of this GetRange operation.
    • The format is the same as that of primary_key.
    • If next_start_primary_key is empty in the response, all data within the request range has been returned.
    • If next_start_primary_key is not empty, the response to this GetRange operation includes only the data within the range of [inclusive_start_primary_key, next_start_primary_key). To obtain the remaining data, set inclusive_start_primary_key to the value of next_start_primary_key and retain the value of exclusive_end_primary_key from the original request in the subsequent GetRange operation, as shown in the paging sketch after this list.
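The following is a minimal sketch of paging through a range by calling GetRange repeatedly until next_start_primary_key is empty. It assumes the same SampleTable and primary key columns PK0 and PK1 as the example that follows:

    $startPK = [
        ['PK0', 1],
        ['PK1', null, PrimaryKeyTypeConst::CONST_INF_MIN]
    ];
    $endPK = [
        ['PK0', 4],
        ['PK1', null, PrimaryKeyTypeConst::CONST_INF_MAX]
    ];
    while (!empty($startPK)) {
        $response = $otsClient->getRange([
            'table_name' => 'SampleTable',
            'max_versions' => 1,
            'direction' => DirectionConst::CONST_FORWARD,
            'inclusive_start_primary_key' => $startPK,     // continue from the last breakpoint
            'exclusive_end_primary_key' => $endPK          // keep the original end of the range
        ]);
        foreach ($response['rows'] as $rowData) {
            // Process each row of data.
        }
        // An empty next_start_primary_key means that all data in the range has been read.
        $startPK = $response['next_start_primary_key'];
    }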
Examples

The following code provides an example on how to read data within the specified range:

    // Read rows whose PK0 value is within the range of [1, 4).
    // The complete list of primary key columns is required to define the range boundary. If the query does not limit the values in a column, set the column to INF_MIN or INF_MAX.
    $startPK = [
        ['PK0', 1],
        ['PK1', null, PrimaryKeyTypeConst::CONST_INF_MIN]  // INF_MIN indicates the minimum value.
    ];
    $endPK = [
        ['PK0', 4],
        ['PK1', null, PrimaryKeyTypeConst::CONST_INF_MAX]  // INF_MAX indicates the maximum value.
    ];
    $request = [
        'table_name' => 'SampleTable',
        'max_versions' => 1,                          // Set this parameter to read the latest version.
        'direction' => DirectionConst::CONST_FORWARD, // Query data in the forward direction.
        'inclusive_start_primary_key' => $startPK,    // The start primary key column.
        'exclusive_end_primary_key' => $endPK,        // The end primary key column.
        'limit' => 10                                 // A maximum of 10 rows are returned.
    ];
    $response = $otsClient->getRange($request);
    print "Read CU Consumed: {$response['consumed']['capacity_unit']['read']}\n";

    foreach ($response['rows'] as $rowData) {
        // Process each row of data.
    }
			

For more information, see the sample files listed in the following table.

Sample file Description
GetRange1.php Shows you how to use GetRange.
GetRange2.php Shows you how to use GetRange to obtain the specified columns.
GetRange3.php Shows you how to use GetRange to obtain the specified number of rows.
GetRangeWithColumnFilter.php Shows you how to use GetRange with conditional filtering.