You can call the CreateSearchIndex operation to create a search index for a data table. When you create a search index, you add the fields that you want to query to the search index and can configure advanced settings, such as the routing keys and presorting settings.
Prerequisites
A client is initialized. For more information, see Initialize a Tablestore client.
A data table that meets the following conditions is created. For more information, see Create a data table.
The max versions parameter is set to 1.
The time to live (TTL) is set to -1 or updates on the data table are prohibited.
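If you are not sure whether an existing data table meets these conditions, the following sketch shows one way to check and adjust the table settings by using the Python SDK. The check_table_settings helper and the '<TABLE_NAME>' placeholder are illustrative and not part of this topic, and the sketch assumes an SDK version in which update_table accepts a TableOptions object.
from tablestore import *

def check_table_settings(client):
    # Query the current TTL and max versions settings of the data table.
    describe_response = client.describe_table('<TABLE_NAME>')
    print('TTL:', describe_response.table_options.time_to_live)
    print('Max versions:', describe_response.table_options.max_version)
    # Set the max versions parameter to 1 and the TTL to -1 if the current settings do not meet the prerequisites.
    client.update_table('<TABLE_NAME>', TableOptions(time_to_live=-1, max_version=1))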
Usage notes
The data types of the fields in a search index must match the data types of the fields in the data table for which the search index is created. For more information, see Data types.
To set the time_to_live parameter of a search index to a value other than -1, make sure that the UpdateRow operation is prohibited on the data table for which the search index is created. The value of the time_to_live parameter of the search index must be less than or equal to the value of the time_to_live parameter of the data table. For more information, see Specify the TTL of a search index.
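For reference, the following sketch shows how a TTL might be specified when a search index is created, assuming an SDK version in which SearchIndexMeta supports the time_to_live parameter (in seconds). The create_search_index_with_ttl helper, the field definition, and the placeholders are illustrative.
def create_search_index_with_ttl(client):
    # Define the fields to include in the search index.
    field_k = FieldSchema('k', FieldType.KEYWORD, index=True, enable_sort_and_agg=True, store=True)
    # Set the TTL of the search index to 7 days (in seconds). The value must not exceed the TTL of the data table,
    # and the UpdateRow operation must be prohibited on the data table.
    index_meta = SearchIndexMeta([field_k], time_to_live=7 * 24 * 60 * 60)
    client.create_search_index('<TABLE_NAME>', '<SEARCH_INDEX_NAME>', index_meta)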
Parameters
When you create a search index, you must specify the table_name, index_name, and schema parameters. In the schema parameter, configure the field_schemas, index_setting, and index_sort parameters. The following table describes the preceding parameters.
Parameter | Description |
table_name | The name of the data table. |
index_name | The name of the search index. |
field_schemas | The list of field schemas. In each field schema, configure the field name, field type, and index options. The field schema parameters that are used in the code examples in this topic include field_name, field_type, index, is_array, analyzer, analyzer_parameter, enable_sort_and_agg, store, enable_highlighting, sub_field_schemas, and vector_options. |
index_setting | The settings of the search index, including the routing_fields parameter. routing_fields (optional): the custom routing fields. You can specify multiple primary key columns as routing fields. In most cases, one routing field is sufficient. If you specify multiple routing fields, the system concatenates the values of the routing fields into one value that is used as the partition key. Tablestore distributes the data that is written to the search index across partitions based on the routing fields. Data that has the same routing field values is stored in the same partition. |
index_sort | The presorting settings of the search index, including the sorters parameter. If you do not configure the index_sort parameter, field values are sorted by primary key. Note: If you set the field_type parameter to Nested, you cannot configure the index_sort parameter. sorters (required): the presorting method of the search index. Valid values: PrimaryKeySort and FieldSort. For more information, see Sorting and paging. For a FieldSort example, see the sketch after this table. |
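The code examples in this topic show PrimaryKeySort. As a reference for FieldSort, the following sketch presorts a search index by a Long field in descending order. The create_search_index_with_field_sort helper and the field names are illustrative, and the sketch assumes that the presorting field has sorting enabled (enable_sort_and_agg=True) and is not of the Nested type.
def create_search_index_with_field_sort(client):
    field_k = FieldSchema('k', FieldType.KEYWORD, index=True, enable_sort_and_agg=True, store=True)
    # Sorting must be enabled for the field that is used for presorting.
    field_l = FieldSchema('l', FieldType.LONG, index=True, enable_sort_and_agg=True, store=True)
    # Presort the search index by the l field in descending order.
    index_sort = Sort(sorters=[FieldSort('l', SortOrder.DESC)])
    index_meta = SearchIndexMeta([field_k, field_l], index_sort=index_sort)
    client.create_search_index('<TABLE_NAME>', '<SEARCH_INDEX_NAME>', index_meta)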
Examples
Create a search index with an analyzer type specified
The following sample code provides an example on how to create a search index with an analyzer type specified. In this example, the search index consists of the following fields: the k field of the Keyword type, the t field of the Text type, the g field of the Geo-point type, the ka field of the array Keyword type, the la field of the array Long type, and the n field of the Nested type. The n field consists of the following subfields: the nk field of the Keyword type, the nl field of the Long type, and the nt field of the Text type.
def create_search_index(client):
    # Create an index on the k field of the Keyword type and enable the sorting and aggregation features for the field.
    field_a = FieldSchema('k', FieldType.KEYWORD, index=True, enable_sort_and_agg=True, store=True)
    # Create an index on the t field of the Text type and set the analyzer type to single-word tokenization.
    field_b = FieldSchema('t', FieldType.TEXT, index=True, store=True, analyzer=AnalyzerType.SINGLEWORD)
    # Alternatively, set the analyzer type to fuzzy tokenization.
    # field_b = FieldSchema('t', FieldType.TEXT, index=True, store=True, analyzer=AnalyzerType.FUZZY, analyzer_parameter=FuzzyAnalyzerParameter(1, 6))
    # Alternatively, set the analyzer type to delimiter tokenization.
    # field_b = FieldSchema('t', FieldType.TEXT, index=True, store=True, analyzer=AnalyzerType.SPLIT, analyzer_parameter=SplitAnalyzerParameter(","))
    # Create an index on the g field of the Geo-point type.
    field_c = FieldSchema('g', FieldType.GEOPOINT, index=True, store=True)
    # Create an index on the ka field of the array Keyword type.
    field_d = FieldSchema('ka', FieldType.KEYWORD, index=True, is_array=True, store=True)
    # Create an index on the la field of the array Long type.
    field_e = FieldSchema('la', FieldType.LONG, index=True, is_array=True, store=True)
    # Create an index on the n field of the Nested type. The field consists of three subfields: the nk subfield of the Keyword type, the nl subfield of the Long type, and the nt subfield of the Text type.
    field_n = FieldSchema('n', FieldType.NESTED, sub_field_schemas=[
        FieldSchema('nk', FieldType.KEYWORD, index=True, store=True),
        FieldSchema('nl', FieldType.LONG, index=True, store=True),
        FieldSchema('nt', FieldType.TEXT, index=True, store=True),
    ])
    fields = [field_a, field_b, field_c, field_d, field_e, field_n]
    index_setting = IndexSetting(routing_fields=['PK1'])
    index_sort = None  # If the search index contains Nested fields, you cannot configure presorting for the search index.
    # index_sort = Sort(sorters=[PrimaryKeySort(SortOrder.ASC)])
    index_meta = SearchIndexMeta(fields, index_setting=index_setting, index_sort=index_sort)
    client.create_search_index('<TABLE_NAME>', '<SEARCH_INDEX_NAME>', index_meta)
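After the search index is created, you can optionally call describe_search_index to confirm the schema and check the synchronization status. The following sketch assumes the placeholders from the preceding example; the structure of the returned metadata may differ slightly across SDK versions.
# Query the schema and synchronization status of the search index that was created in the preceding example.
index_meta, sync_stat = client.describe_search_index('<TABLE_NAME>', '<SEARCH_INDEX_NAME>')
for field in index_meta.fields:
    print(field.field_name, field.field_type)
print(sync_stat.sync_phase)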
Create a search index that contains Vector fields
The following sample code provides an example on how to create a search index that contains Vector fields. In this example, the search index consists of the following fields: the col_keyword field of the Keyword type, the col_long field of the Long type, and the col_vector field of the Vector type. The dot product algorithm is used to measure the distance of vectors.
def create_search_index(client):
    index_meta = SearchIndexMeta([
        FieldSchema('col_keyword', FieldType.KEYWORD, index=True, enable_sort_and_agg=True, store=True),  # The Keyword type.
        FieldSchema('col_long', FieldType.LONG, index=True, store=True),  # The Long type.
        FieldSchema('col_vector', FieldType.VECTOR,  # The Vector type.
                    vector_options=VectorOptions(
                        data_type=VectorDataType.VD_FLOAT_32,
                        dimension=4,  # The number of vector dimensions is 4.
                        metric_type=VectorMetricType.VM_DOT_PRODUCT  # The dot product algorithm is used to measure the distance between vectors.
                    )),
    ])
    client.create_search_index('<TABLE_NAME>', '<SEARCH_INDEX_NAME>', index_meta)
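When you write data to the data table, the value of a Vector field is a string in the JSON array format, and the number of elements must match the configured dimension. The following sketch shows one way to write such a row; the put_row_with_vector helper, the primary key column name 'id', and the sample values are assumptions for illustration.
def put_row_with_vector(client):
    primary_key = [('id', '1')]  # Assumes that the primary key column of the data table is named id.
    attribute_columns = [
        ('col_keyword', 'hangzhou'),
        ('col_long', 4),
        # The value of the Vector field is a string in the JSON array format and must contain 4 elements.
        ('col_vector', '[0.1, 0.2, 0.3, 0.4]'),
    ]
    row = Row(primary_key, attribute_columns)
    condition = Condition(RowExistenceExpectation.IGNORE)
    client.put_row('<TABLE_NAME>', row, condition)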
Create a search index with the highlight feature enabled
The following sample code provides an example on how to create a search index with the highlight feature enabled. In this example, the search index consists of the following fields: the k field of the Keyword type, the t field of the Text type, and the n field of the Nested type. The n field consists of the following subfields: the nk subfield of the Keyword type, the nl subfield of the Long type, and the nt subfield of the Text type. The highlight feature is enabled for the t field and the nt subfield, both of which are of the Text type.
def create_search_index(client):
    # Create an index on the k field of the Keyword type and enable the sorting and aggregation features for the field.
    field_a = FieldSchema('k', FieldType.KEYWORD, index=True, enable_sort_and_agg=True, store=True)
    # Create an index on the t field of the Text type, set the analyzer type to single-word tokenization, and enable the highlight feature for the field.
    field_b = FieldSchema('t', FieldType.TEXT, index=True, store=True, analyzer=AnalyzerType.SINGLEWORD,
                          enable_highlighting=True)
    # Create an index on the n field of the Nested type. The field consists of the nk subfield of the Keyword type, the nl subfield of the Long type, and the nt subfield of the Text type. Enable the highlight feature for the nt subfield.
    field_n = FieldSchema('n', FieldType.NESTED, sub_field_schemas=[
        FieldSchema('nk', FieldType.KEYWORD, index=True, store=True),
        FieldSchema('nl', FieldType.LONG, index=True, store=True),
        FieldSchema('nt', FieldType.TEXT, index=True, store=True, enable_highlighting=True),
    ])
    fields = [field_a, field_b, field_n]
    index_setting = IndexSetting(routing_fields=['id'])
    index_sort = None  # If the search index contains Nested fields, you cannot configure presorting for the search index.
    # index_sort = Sort(sorters=[PrimaryKeySort(SortOrder.ASC)])
    index_meta = SearchIndexMeta(fields, index_setting=index_setting, index_sort=index_sort)
    client.create_search_index('<TABLE_NAME>', '<SEARCH_INDEX_NAME>', index_meta)
References
After you create a search index, you can use the query methods that the search index provides to query data from multiple dimensions based on your business requirements. Search indexes support the following query methods: term query, terms query, match all query, match query, match phrase query, prefix query, range query, wildcard query, geo query, Boolean query, KNN vector query, nested query, and exists query.
When you query data, you can perform sorting and paging on the result set, use the highlight feature to highlight the keywords in the result set, or use the collapse (distinct) feature to collapse the result set based on a specific field. For more information, see Sorting and paging, Highlight the query results, and Collapse (distinct).
After you create a search index, you can manage the search index based on your business requirements. For example, you can manage the TTL of the search index, dynamically modify the schema of the search index, query the names of search indexes in an instance, query the information about a search index, and delete a search index. For more information, see Specify the TTL of a search index, Dynamically modify the schema of a search index, List search indexes, Query the description of a search index, and Delete a search index.
You can use the aggregation feature or the SQL query feature to analyze data in a table. For example, you can query the maximum and minimum values, the sum of the values, and the number of rows. For more information, see Aggregation and SQL query.
If you want to obtain all rows that meet the query conditions without the need to sort the rows, you can use the parallel scan feature. For more information, see Parallel scan.