To restore data from an automatic snapshot of an Elasticsearch cluster to another cluster in the same region and under the same Alibaba Cloud account, set up a shared OSS repository. This feature lets the destination cluster reference the source cluster's snapshot repository. You can then restore the data by running commands in Kibana Dev Tools.
Add a shared OSS repository reference
A destination cluster can only reference a repository from a source cluster that runs the same or an older version of Elasticsearch. Referencing a repository from a higher-version source cluster is not supported. When you restore data across different versions, data format incompatibilities can cause restoration failures. For example, Elasticsearch 6.7.0 does not support the multi-type indices found in version 5.5.3, which can cause restoration issues. If both the source and destination clusters are Commercial Edition 6.7.0, ensure that both clusters run the latest kernel version, or that the destination cluster's kernel version is newer than the source cluster's.
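To confirm the Elasticsearch version of each cluster before you create the repository reference, you can run the standard root request in the Kibana console of that cluster (this is the core Elasticsearch API, not a feature specific to Alibaba Cloud):

```
GET /
```

The `version.number` field in the response shows the cluster's Elasticsearch version.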
Log on to the Alibaba Cloud Elasticsearch console.
In the navigation pane on the left, click Elasticsearch Clusters.
In the top menu bar, select a resource group and a region.
On the Elasticsearch Clusters page, click the ID of the destination cluster.
In the navigation pane on the left, click Data Backup.
In the Shared OSS Repositories section, click Create Now. If this is not the first repository reference you are adding, click Create Shared Repository.
On the Create Shared Repository page, select the source cluster. The source cluster must meet the following conditions: it is in the same region as the destination cluster, belongs to the same Alibaba Cloud account, and runs a version equal to or older than that of the destination cluster.
Click OK. After the reference is created, the source cluster appears on the page, along with the status of the referenced repository.
After you add the repository reference, the destination cluster may briefly enter the Initializing state. During this time, you cannot modify the cluster configuration, including the Kibana whitelist. Wait for the cluster status to return to Normal before performing other operations.
The repository list is retrieved from the corresponding cluster. If the cluster is being modified, is unhealthy, or is under high load, its repository may not be accessible. In this case, run the following command in the Kibana console of the source cluster to get the addresses of all repositories:
GET _snapshot
Restore index data
Setting up a shared OSS repository only creates a reference; it does not automatically restore data. You must manually run restore commands in the Kibana console of the destination cluster. For more information, see Connect to an Elasticsearch cluster through Kibana.
In the navigation pane on the left of the Kibana console, click Dev Tools.
Retrieve information about all snapshots in the referenced repository.
In the following example, aliyun_snapshot_from_es-cn-a is the name of the referenced repository. Replace it with your actual repository name.
GET /_cat/snapshots/aliyun_snapshot_from_es-cn-a?v
This request returns information about all snapshots in the specified repository.
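Because the request includes the `?v` parameter, the response contains a header row. The exact columns vary by Elasticsearch version, but the response generally looks similar to the following (the row below is an illustrative placeholder, not output from a real cluster):

```
id            status  start_epoch start_time end_epoch end_time duration indices successful_shards failed_shards total_shards
<snapshot_id> SUCCESS ...
```

Use the value in the id column as the snapshot ID in the restore commands that follow.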
Based on the snapshot ID from the previous step, run one of the following commands to restore the specified indices.
Ensure that the specified index is either closed or does not exist in the destination cluster. Otherwise, an index name conflict error occurs during restoration. Restoring a system index, whose name starts with a period (.), may cause Kibana access to fail. Do not restore system indices.

Restore a single index:
POST _snapshot/aliyun_snapshot_from_es-cn-a/<snapshot_id>/_restore
{"indices": "file-2019-08-25"}

Restore multiple indices:
POST _snapshot/aliyun_snapshot_from_es-cn-a/<snapshot_id>/_restore
{"indices": "kibana_sample_data_ecommerce,kibana_sample_data_logs"}

Restore all indices (excluding system indices):
POST _snapshot/aliyun_snapshot_from_es-cn-a/<snapshot_id>/_restore
{"indices":"*,-.monitoring*,-.security*,-.kibana*,-.apm*,-.ds-ilm-history-*,-.tasks","ignore_unavailable":"true"}

The exclusion pattern in the example above covers common system indices. Different cluster versions may contain other system indices, so you may need to adjust the exclusion list based on the system indices in your cluster. Run the following command to list system indices, whose names start with a period (.), and update the exclusion list accordingly:
GET _cat/indices/.*?v
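If an index that you want to restore already exists in the destination cluster and cannot be closed or deleted, the snapshot restore API also supports renaming indices during the restore through the rename_pattern and rename_replacement parameters. The following sketch restores the index from the earlier examples under a restored_ prefix; the repository and snapshot names follow the examples above and must be replaced with your actual values:

```
POST _snapshot/aliyun_snapshot_from_es-cn-a/<snapshot_id>/_restore
{
  "indices": "file-2019-08-25",
  "rename_pattern": "(.+)",
  "rename_replacement": "restored_$1"
}
```

You can monitor the progress of an ongoing restore by running GET _cat/recovery?v, which lists the shard-level recovery status of each index.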