Configure environment variables
Configure the environment variables ALIBABA_CLOUD_ACCESS_KEY_ID and ALIBABA_CLOUD_ACCESS_KEY_SECRET.
- The AccessKey pair of an Alibaba Cloud account has access to all API operations. We recommend that you use a RAM user for API access or routine operations. For specific operations, see Create a RAM User.
- For information about how to create an AccessKey ID and AccessKey Secret, see Create an AccessKey.
- If you use the AccessKey pair of a RAM user, make sure that the Alibaba Cloud account has authorized the AliyunServiceRoleForOpenSearch service-linked role. For details, see OpenSearch Industry Algorithm Version Service-Linked Role. For more information, see Access Authorization Rules.
- Do not save the AccessKey ID and AccessKey Secret in your project code. Otherwise, the AccessKey pair may be leaked, which threatens the security of all resources in your account.
- Linux and macOS configuration method:
  Execute the following commands, replacing <access_key_id> with your RAM user's AccessKey ID and <access_key_secret> with your RAM user's AccessKey Secret.
  export ALIBABA_CLOUD_ACCESS_KEY_ID=<access_key_id>
  export ALIBABA_CLOUD_ACCESS_KEY_SECRET=<access_key_secret>
- Windows configuration method:
  - Create a new environment variable file, add the environment variables ALIBABA_CLOUD_ACCESS_KEY_ID and ALIBABA_CLOUD_ACCESS_KEY_SECRET, and set them to the prepared AccessKey ID and AccessKey Secret.
  - Restart the Windows system for the changes to take effect.
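To quickly confirm that the variables are visible to your Java runtime, you can read them with System.getenv before running the samples. The following is a minimal check sketch (the class name CheckEnv is illustrative and not part of the SDK):

public class CheckEnv {
    public static void main(String[] args) {
        // Read the AccessKey pair from the environment; null means the variable is not set
        String accessKeyId = System.getenv("ALIBABA_CLOUD_ACCESS_KEY_ID");
        String accessKeySecret = System.getenv("ALIBABA_CLOUD_ACCESS_KEY_SECRET");
        System.out.println("ALIBABA_CLOUD_ACCESS_KEY_ID is set: " + (accessKeyId != null));
        System.out.println("ALIBABA_CLOUD_ACCESS_KEY_SECRET is set: " + (accessKeySecret != null));
    }
}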
V3.1 SDK Commit Method New Document Sample Code
The Commit submission method dynamically assembles each document's data into a Map object in your program, adds the Map objects to the client cache by calling the add method, and then submits the cached documents in a batch by calling the commit method. The complete sample below demonstrates this flow.
Scenarios
- Dynamic Data Assembly Submission Scenario
- Single Document Submission Scenario
- Small Batch Document Submission Scenario
package com.aliyun.opensearch;
import com.aliyun.opensearch.sdk.dependencies.com.google.common.collect.Lists;
import com.aliyun.opensearch.sdk.dependencies.com.google.common.collect.Maps;
import com.aliyun.opensearch.sdk.dependencies.org.json.JSONObject;
import com.aliyun.opensearch.sdk.generated.OpenSearch;
import com.aliyun.opensearch.sdk.generated.commons.OpenSearchClientException;
import com.aliyun.opensearch.sdk.generated.commons.OpenSearchException;
import com.aliyun.opensearch.sdk.generated.commons.OpenSearchResult;
import com.aliyun.opensearch.sdk.generated.search.Config;
import com.aliyun.opensearch.sdk.generated.search.SearchFormat;
import com.aliyun.opensearch.sdk.generated.search.SearchParams;
import com.aliyun.opensearch.sdk.generated.search.general.SearchResult;
import java.io.UnsupportedEncodingException;
import java.nio.charset.Charset;
import java.util.Map;
import java.util.Random;
public class testCommitSearch {
private static String appName = "Name of the OpenSearch application that you want to manage";
private static String tableName = "Name of the OpenSearch application table";
private static String host = "Endpoint of the OpenSearch API in your region";
public static void main(String[] args) {
// Specify your AccessKey pair
// Obtain the configured AccessKey ID and AccessKey Secret from environment variables
// Before running the code example, you must first configure the environment variables. For more information, see the "Configure environment variables" step above
String accesskey = System.getenv("ALIBABA_CLOUD_ACCESS_KEY_ID");
String secret = System.getenv("ALIBABA_CLOUD_ACCESS_KEY_SECRET");
// Obtain the file encoding format and default encoding format
System.out.println(
String.format("file.encoding: %s", System.getProperty("file.encoding"))
);
System.out.println(
String.format("defaultCharset: %s", Charset.defaultCharset().name())
);
// Generate a random number as the primary key value
Random rand = new Random();
int value = rand.nextInt(Integer.MAX_VALUE);
// Define a Map object to store the upload data doc1
Map<String, Object> doc1 = Maps.newLinkedHashMap();
doc1.put("id", value);
String title_string = "New Document 1 by Commit Method"; // utf-8
byte[] bytes;
try {
bytes = title_string.getBytes("utf-8");
String utf8_string = new String(bytes, "utf-8");
doc1.put("name", utf8_string);
} catch (UnsupportedEncodingException e) {
e.printStackTrace();
}
doc1.put("phone", "1381111****");
int[] int_arr = { 33, 44 };
doc1.put("int_arr", int_arr);
String[] literal_arr = {
"New Document 1 by Commit Method",
"Test New Document 1 by Commit Method",
};
doc1.put("literal_arr", literal_arr);
float[] float_arr = { (float) 1.1, (float) 1.2 };
doc1.put("float_arr", float_arr);
doc1.put("cate_id", 1);
// Create an OpenSearch object
OpenSearch openSearch1 = new OpenSearch(accesskey, secret, host);
// Use the OpenSearch object as a parameter to create an OpenSearchClient object
OpenSearchClient serviceClient1 = new OpenSearchClient(openSearch1);
// Define a DocumentClient object to add data and submit
DocumentClient documentClient1 = new DocumentClient(serviceClient1);
// Add doc1 to the cache and set it as a new document
documentClient1.add(doc1);
// Document output
System.out.println(doc1.toString());
try {
// Commit the add operation. For testing, each operation in this sample is committed separately and followed by a 10-second sleep so that you can check the result; in production you can cache multiple operations and commit them in one batch
OpenSearchResult osr = documentClient1.commit(appName, tableName);
// Check whether the data was pushed successfully in two places: whether the client-side push succeeded, and whether error logs appear in the application console
// Even if the client-side push succeeds, processing can still fail on the application side (for example, a field content conversion failure); such errors are written to the application console error log
if (osr.getResult().equalsIgnoreCase("true")) {
System.out.println(
"No errors in user-side push!\nBelow is the getTraceInfo push request Id:" +
osr.getTraceInfo().getRequestId()
);
} else {
System.out.println("User-side push error!" + osr.getTraceInfo());
}
} catch (OpenSearchException e) {
e.printStackTrace();
} catch (OpenSearchClientException e) {
e.printStackTrace();
}
try {
Thread.sleep(10000); // Sleep for 10 seconds so that you can view the newly added data in the console
} catch (InterruptedException e) {
e.printStackTrace();
}
// Define a Map object doc2 to update doc1. The update method can be used only when a document with the same primary key value already exists
Map<String, Object> doc2 = Maps.newLinkedHashMap();
doc2.put("id", value);
String title_string2 = "Update Document 1 by Commit Method"; // utf-8
byte[] bytes2;
try {
bytes2 = title_string2.getBytes("utf-8");
String utf8_string2 = new String(bytes2, "utf-8");
doc2.put("name", utf8_string2);
} catch (UnsupportedEncodingException e) {
e.printStackTrace();
}
doc2.put("phone", "1390000****");
int[] int_arr2 = { 22, 22 };
doc2.put("int_arr", int_arr2);
String[] literal_arr2 = { "Update Document 1 by Commit Method", "Update Document 1 by Commit Method" };
doc2.put("literal_arr", literal_arr2);
float[] float_arr2 = { (float) 1.1, (float) 1.2 };
doc2.put("float_arr", float_arr2);
doc2.put("cate_id", 1);
// Add doc2 to the cache. Since this data's primary key already exists, the update can be executed normally
documentClient1.update(doc2);
// Document output
System.out.println(doc2.toString());
try {
// Commit the update operation. For testing, each operation in this sample is committed separately and followed by a 10-second sleep so that you can check the result; in production you can cache multiple operations and commit them in one batch
OpenSearchResult osr = documentClient1.commit(appName, tableName);
// Check whether the data was pushed successfully in two places: whether the client-side push succeeded, and whether error logs appear in the application console
// Even if the client-side push succeeds, processing can still fail on the application side (for example, a field content conversion failure); such errors are written to the application console error log
if (osr.getResult().equalsIgnoreCase("true")) {
System.out.println(
"No errors in user-side push!\nBelow is the getTraceInfo push request Id:" +
osr.getTraceInfo().getRequestId()
);
} else {
System.out.println("User-side push error!" + osr.getTraceInfo());
}
} catch (OpenSearchException e) {
e.printStackTrace();
} catch (OpenSearchClientException e) {
e.printStackTrace();
}
try {
Thread.sleep(10000); // Sleep for 10 seconds so that you can view the updated data in the console
} catch (InterruptedException e) {
e.printStackTrace();
}
// Define a Map object doc3. To delete a document, only the primary key value of the document to be deleted needs to be specified
Map<String, Object> doc3 = Maps.newLinkedHashMap();
doc3.put("id", value);
// Add doc3 to the cache. This is for document deletion processing
documentClient1.remove(doc3);
// Document output
System.out.println(doc3.toString());
try {
// Commit the delete operation. For testing, each operation in this sample is committed separately and followed by a 10-second sleep so that you can check the result; in production you can cache multiple operations and commit them in one batch
OpenSearchResult osr = documentClient1.commit(appName, tableName);
// Check whether the data was pushed successfully in two places: whether the client-side push succeeded, and whether error logs appear in the application console
// Even if the client-side push succeeds, processing can still fail on the application side (for example, a field content conversion failure); such errors are written to the application console error log
if (osr.getResult().equalsIgnoreCase("true")) {
System.out.println(
"No errors in user-side push!\nBelow is the getTraceInfo push request Id:" +
osr.getTraceInfo().getRequestId()
);
} else {
System.out.println("User-side push error!" + osr.getTraceInfo());
}
} catch (OpenSearchException e) {
e.printStackTrace();
} catch (OpenSearchClientException e) {
e.printStackTrace();
}
try {
Thread.sleep(10000); // Sleep for 10 seconds before checking that the document was deleted. If you query immediately without sleeping, the document may still appear because the deletion has not taken effect yet; sleep for at least 1 second
} catch (InterruptedException e) {
e.printStackTrace();
}
// Create an OpenSearch object
OpenSearch openSearch2 = new OpenSearch(accesskey, secret, host);
// Use the OpenSearch object as a parameter to create an OpenSearchClient object
OpenSearchClient serviceClient2 = new OpenSearchClient(openSearch2);
// Use the OpenSearchClient object as a parameter to create a SearcherClient object
SearcherClient searcherClient2 = new SearcherClient(serviceClient2);
// Create a Config object to set config clause parameters, paging, or data return format, etc.
Config config = new Config(Lists.newArrayList(appName));
config.setStart(0);
config.setHits(30);
// Set the return format to JSON. Currently, only the XML and JSON formats are supported; the full JSON format is not supported yet
config.setSearchFormat(SearchFormat.JSON);
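// Create a SearchParams object from the Config object and query for the primary key value pushed above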
SearchParams searchParams = new SearchParams(config);
searchParams.setQuery("id:'" + value + "'");
// Execute the query and return the results
SearchResult searchResult;
try {
searchResult = searcherClient2.execute(searchParams);
String result = searchResult.getResult();
JSONObject obj = new JSONObject(result);
// Display the search results
System.out.println("Query debug output:" + obj.toString());
} catch (OpenSearchException e) {
e.printStackTrace();
} catch (OpenSearchClientException e) {
e.printStackTrace();
}
}
}
Data pushed in a single request must contain only fields from the same table; cross-table pushing is not permitted.
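For the small batch document submission scenario listed above, the same client calls can be reused to cache several documents and submit them in a single commit. The following is a minimal sketch that assumes the documentClient1, appName, and tableName objects from the sample above, and that the id and name fields match your table schema:

try {
    for (int i = 0; i < 3; i++) {
        Map<String, Object> doc = Maps.newLinkedHashMap();
        doc.put("id", 1000 + i);                // primary key of the application table
        doc.put("name", "Batch document " + i); // example field; adjust to your table schema
        documentClient1.add(doc);               // cache the add operation
    }
    // Submit all cached operations for the same table in one request
    OpenSearchResult batchResult = documentClient1.commit(appName, tableName);
    System.out.println("Batch push result: " + batchResult.getResult());
} catch (OpenSearchException | OpenSearchClientException e) {
    e.printStackTrace();
}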