
SAP HANA HA Cross-Zone with SLES HAE

Last Updated: Sep 07, 2018

SAP HANA HA Cross-Zone solution on SUSE Linux Enterprise Server for SAP Applications

Version Control:

Version   Revision Date   Types of Changes                                 Effective Date
1.0       2018/3/7
1.1       2018/7/04       Add corosync and cluster configuration example   2018/7/04

Solution Overview

SAP HANA System Replication

SAP HANA provides a feature called System Replication, which is available in every SAP HANA installation and offers inherent disaster-recovery support.

For details, please refer to HANA system replication on the SAP Help Portal.

HAE of SUSE Linux Enterprise Server for SAP Applications

SUSE High Availability Extension (HAE) is a high availability solution based on Corosync and Pacemaker. With SUSE Linux Enterprise Server for SAP Applications, SUSE provides SAP-specific Resource Agents (SAPHana, SAPHanaTopology, etc.) that are used by Pacemaker and help users build an SAP HANA HA solution more effectively.

For details, please refer to the latest version of the SAP HANA SR Performance Optimized Scenario guide at the SUSE documentation center.

Architecture Overview

This document guides you through deploying an SAP HANA HA solution across different zones. The architecture is briefly as follows:

  • HAE of SUSE Linux Enterprise Server for SAP Applications is used to set up the HA cluster;
  • SAP HANA System Replication is activated between the two HANA nodes;
  • The two HANA nodes are located in different zones of the same region;
  • The Alibaba Cloud-specific Virtual IP Resource Agent is used to move the virtual IP automatically to the active SAP HANA node;
  • The Alibaba Cloud-specific STONITH device is used for fencing;
    (Figure: architecture overview)

Network Design

network     location              usage          subnet
business    eu-central-1 Zone A   for business   192.168.0.82/24
heartbeat   eu-central-1 Zone B   for SR/HA      192.168.1.245/24

hostname     role                  heartbeat IP    business IP     virtual IP
hana0        HANA primary node     192.168.0.83    192.168.0.82    192.168.4.1
hana1        HANA secondary node   192.168.1.246   192.168.1.245   192.168.4.1
HanaStudio   HANA Studio           N/A             192.168.0.79    N/A

Infrastructure Preparation

Infrastructure List

  • 1 VPC network;
  • 2 ECS instances in different zones of the same VPC;
  • 2 Elastic Network Interfaces (ENIs), one for each ECS instance;
  • The Alibaba Cloud-specific Virtual IP Resource Agent and STONITH device;
  • NAT Gateway and SNAT entry;

Creating VPC

First of all, a VPC should be created.
In this example, we create a VPC named suse_hana_ha in the EU Central 1 (Frankfurt) region as follows:
(Screenshot: VPC creation)
There should be at least 2 VSwitches (subnets) defined within the VPC network, each VSwitch bound to a different zone. In this example, we have the following 2 VSwitches (subnets):

  • Switch1: 192.168.0.0/24, Zone A, for the SAP HANA primary node
  • Switch2: 192.168.1.0/24, Zone B, for the SAP HANA secondary node
    (Screenshot: VSwitch configuration)

Creating ECS Instances

Two ECS instances are created in different zones of the same VPC. Choose the “SUSE Linux Enterprise Server for SAP Applications” image from the Image Marketplace.
In this example, 2 ECS instances (hostnames: hana0 and hana1) are created in the eu-central-1 region, Zone A and Zone B, within the VPC suse_hana_ha, with the SUSE Linux Enterprise Server for SAP Applications 12 SP2 image from the Image Marketplace. Host hana0 is the primary SAP HANA database node, and hana1 is the secondary SAP HANA database node.
(Screenshot: ECS instance creation)

Creating ENIs and binding to ECS instances

Create two ENIs, and attach one to each ECS instance, for HANA System Replication purposes. Configure the IP addresses of the ENIs within the subnet reserved for HANA System Replication only.
In this example, the ENIs are attached to the ECS instances hana0 and hana1, and their IP addresses are configured as 192.168.0.83 and 192.168.1.246 within the same VSwitches as hana0 and hana1, in the VPC suse_hana_ha.
(Screenshot: ENI configuration)

Meanwhile, within the guest OS, /etc/hosts should be configured as well. In this example, run the following two commands on both nodes:
echo "192.168.0.82 hana0 hana0" >> /etc/hosts
echo "192.168.1.245 hana1 hana1" >> /etc/hosts
(Screenshot: /etc/hosts configuration)
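To verify that the names resolve as expected, you can, for example, run the following on both nodes:
getent hosts hana0 hana1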

Creating NAT Gateway and configure SNAT entry

First of all, create a NAT Gateway attached to the given VPC. In our example, we create a NAT Gateway named suse_hana_ha_GW as follows:
(Screenshot: NAT Gateway creation)

After creating the NAT Gateway, you need to create the corresponding SNAT entries to allow the ECS instances within the VPC to access public addresses on the Internet. (Caution: the Alibaba Cloud-specific STONITH device and Virtual IP Resource Agent are mandatory for the cluster, and they need to access the Alibaba Cloud OpenAPI through a public domain.)

In our example, we create two SNAT entries, one for the ECS instances in each network range, as follows:
(Screenshot: SNAT entries)

Creating STONITH device and Virtual IP Resource Agent

  1. Download the software with the following command:
    wget http://repository-iso.oss-cn-beijing.aliyuncs.com/ha/aliyun-ecs-pacemaker.tar.gz
    (Screenshot: downloading the package)

  2. Extract the package and install the software
    tar -xvf aliyun-ecs-pacemaker.tar.gz
    ./install
    (Screenshot: installing the package)

  3. Install Alibaba Cloud OpenAPI SDK
    pip install aliyun-python-sdk-ecs aliyun-python-sdk-vpc aliyuncli
    (Screenshot: SDK installation)

  4. Configure Alibaba Cloud OpenAPI SDK and Client
    aliyuncli configure
    (Screenshot: SDK configuration)

You can get your Access Key from the console as follows:
(Screenshot: Access Key in the console)
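As an optional sanity check after step 3, you can confirm that the SDK modules are importable (the module names are inferred from the package names):
python -c "import aliyunsdkcore, aliyunsdkecs, aliyunsdkvpc"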

Software Preparation

Software List

  • SUSE Linux Enterprise Server for SAP Applications 12 SP2
  • HANA Installation Media
  • SAP Host Agent Installation Media

HAE installation

Both ECS instances are created with the SUSE Linux Enterprise Server for SAP Applications image. Both ECS instances should have the HAE component installed, as well as the package SAPHanaSR. In this example, we install HAE (major software components: Corosync and Pacemaker) and SAPHanaSR on both ECS instances as follows:

Install the pattern High Availability on both nodes. To do so, for example, use zypper:
zypper in -t pattern ha_sles

Now the Resource Agents for controlling the SAP HANA system replication need to be installed on both cluster nodes:
zypper in SAPHanaSR SAPHanaSR-doc
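To verify that the pattern and the packages are installed, you can, for example, run:
zypper info -t pattern ha_sles
rpm -q SAPHanaSR SAPHanaSR-doc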

SAP HANA Installation

Install the SAP HANA software on both ECS instances, and make sure the SAP HANA SID and instance number are the same (a requirement of SAP HANA System Replication). It is recommended to use hdblcm for the installation. For details, please refer to the SAP HANA Server Installation and Update Guide.

In this example, both nodes are installed with SAP HANA (Rev. 2.00.030.00), with SID: JL0 and instance number: 00.
(Screenshot: HANA version)
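A minimal sketch of such an installation (the media path is an example; hdblcm prompts interactively for the remaining parameters, such as passwords):
cd <extracted HANA media>/DATA_UNITS/HDB_LCM_LINUX_X86_64
./hdblcm --sid=JL0 --number=00 --components=server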

SAP Host Agent Installation

When you have finished the HANA installation with hdblcm as mentioned above, the SAP Host Agent should already be installed on your server. In case you want to install it manually, please refer to Installing SAP Host Agent Manually.
In this example, you can check the SAP Host Agent status after SAP HANA has been installed with hdblcm on hana0 and hana1 as follows:
(Screenshot: SAP Host Agent status)
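For example, the following commands (run as root) report the agent version and the SAP instances it manages:
/usr/sap/hostctrl/exe/saphostexec -version
/usr/sap/hostctrl/exe/saphostctrl -function ListInstances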

Configuring SAP HANA System Replication

Backing up HANA on the primary ECS instance

To back up HANA, you can use either SAP HANA Studio or hdbsql as the client command-line tool.
The backup commands are:
For HANA 1 single-container mode:
BACKUP DATA USING FILE('COMPLETE_DATA_BACKUP');
For HANA 2 in multitenant mode (the default mode; you should back up the SystemDB and also all tenant DBs, as shown below in our example):
BACKUP DATA FOR <DATABASE> USING FILE('COMPLETE_DATA_BACKUP');
In this example, we execute the SAP HANA database backup on both ECS instances as follows:
(Screenshot: HANA backup)
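With hdbsql, a sketch of the HANA 2.0 backup commands looks like this (run as jl0adm; the tenant database name JL0 is an assumption, matching the SID; hdbsql prompts for the password):
hdbsql -i 00 -u SYSTEM -d SYSTEMDB "BACKUP DATA USING FILE ('COMPLETE_DATA_BACKUP')"
hdbsql -i 00 -u SYSTEM -d SYSTEMDB "BACKUP DATA FOR JL0 USING FILE ('COMPLETE_DATA_BACKUP')"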

Configuring SAP HANA System Replication on primary node

a) Log on to the primary node with: su - <sid>adm;
<sid> should be replaced by the lowercase SID of your SAP HANA database. In our example, it is su - jl0adm;
b) Stop HANA with: HDB stop;
c) As user root, change the content of the following file:
/hana/shared/<SID>/global/hdb/custom/config/global.ini

Add the following content:

[system_replication_hostname_resolution]
<IP> = <HOSTNAME>

<IP> should be the IP address of the ENI (the heartbeat IP address for HANA System Replication) attached to the secondary node;
<HOSTNAME> should be the hostname of the secondary node;

In this example, we have following configuration:
[system_replication_hostname_resolution]
192.168.1.246 = hana1

Configuring SAP HANA System Replication on secondary node

Proceed as above for the primary node, but use the IP address and hostname of the primary node.

In this example, we have following configuration:
[system_replication_hostname_resolution]
192.168.0.83 = hana0

Enable SAP HANA System Replication on primary node

a) Log on to the primary node with: su - <sid>adm;
b) Start HANA with: HDB start;
c) Enable System Replication with:
hdbnsutil -sr_enable --name=<primary location name>
<primary location name> should be replaced by the location name of your primary HANA node.
In this example, we use following command:
hdbnsutil -sr_enable --name=hana0
CAUTION: all of the above operations are done on the primary node.

Register the Secondary node to the Primary HANA node

a) Log on to the secondary node with: su - <sid>adm;
b) Stop HANA with: HDB stop;
c) Register the secondary HANA node to the primary HANA node by running the following command:
hdbnsutil -sr_register --remoteHost=<location of primary node> --remoteInstance=<instance number of primary node> --replicationMode=sync --name=<location of the secondary node> --operationMode=logreplay
In this example, we use following command:
hdbnsutil -sr_register --name=hana1 --remoteHost=hana0 --remoteInstance=00 --replicationMode=sync --operationMode=logreplay
d) Start HANA with: HDB start;
e) Verify the System Replication Status with:
hdbnsutil -sr_state
In this example, we have the following status on the secondary HANA node hana1:
(Screenshot: system replication state on hana1)
CAUTION: all of the above operations are done on the secondary node.
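In addition, on the primary node you can display the detailed replication status with the standard HANA Python script (path shown for our SID and instance number):
su - jl0adm
python /usr/sap/JL0/HDB00/exe/python_support/systemReplicationStatus.py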

Configuring HAE for SAP HANA

STONITH: fence_aliyun
For an HA solution, a fencing device is a must. Alibaba Cloud provides its own STONITH device, which allows a server in the HA cluster to shut down the other server when it is no longer responding. The STONITH device leverages the Alibaba Cloud OpenAPI underneath to operate the ECS instance, which is similar to a physical reset/shutdown in an on-premises environment.

Configuration of Corosync

It is desirable to add more redundancy for messaging (heartbeat) by using separate ENIs attached to the ECS instances, in a separate network range. On Alibaba Cloud, it is strongly suggested to use only unicast as the transport setting in Corosync. Follow these steps to configure Corosync:

  1. Create Keys
    Run corosync-keygen on the primary HANA node. The generated key will be located in the file /etc/corosync/authkey.
    In our example, we execute the command on hana0:
    (Screenshot: corosync-keygen output)

  2. As root, configure /etc/corosync/corosync.conf on the primary HANA node with the following content:

    totem {
        version: 2
        token: 5000
        token_retransmits_before_loss_const: 6
        secauth: on
        crypto_hash: sha1
        crypto_cipher: aes256
        clear_node_high_bit: yes
        interface {
            ringnumber: 0
            bindnetaddr: **IP-address-for-heart-beating-for-the-current-server**
            mcastport: 5405
            ttl: 1
        }
        # On Alibaba Cloud, transport should be set to udpu, which means: unicast
        transport: udpu
    }
    logging {
        fileline: off
        to_logfile: yes
        to_syslog: yes
        logfile: /var/log/cluster/corosync.log
        debug: off
        timestamp: on
        logger_subsys {
            subsys: QUORUM
            debug: off
        }
    }
    nodelist {
        node {
            ring0_addr: **ip-node-1**
            nodeid: 1
        }
        node {
            ring0_addr: **ip-node-2**
            nodeid: 2
        }
    }
    quorum {
        # Enable and configure quorum subsystem (default: off)
        # see also corosync.conf.5 and votequorum.5
        provider: corosync_votequorum
        expected_votes: 2
        two_node: 1
    }

    **IP-address-for-heart-beating-for-the-current-server** should be replaced by the IP address of the current server that is used for messaging (heartbeat) and HANA System Replication. In our example, we use the IP address of the ENI of the current node (192.168.0.83 for hana0 and 192.168.1.246 for hana1). Caution: this value will be different on the primary and the secondary node. The nodelist directive lists all nodes in the cluster.
    **ip-node-1** and **ip-node-2** should be replaced by the IP addresses of the ENIs attached to the ECS instances for heartbeat or HANA System Replication purposes (in this example, 192.168.0.83 for hana0 and 192.168.1.246 for hana1).

After completing the edits to /etc/corosync/corosync.conf on the primary HANA node, copy /etc/corosync/authkey and /etc/corosync/corosync.conf to /etc/corosync on the secondary HANA node with the following commands:
scp /etc/corosync/authkey root@hostnameOfSecondaryNode:/etc/corosync
scp /etc/corosync/corosync.conf root@hostnameOfSecondaryNode:/etc/corosync
In our example, we execute the following commands:
(Screenshots: copying authkey and corosync.conf to the secondary node)
After copying corosync.conf to the secondary node, adjust its bindnetaddr, as described above, to the local heartbeat IP address.
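Once the cluster is running (see “Check the cluster status” below), you can verify the Corosync ring status on each node with, for example:
corosync-cfgtool -s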

Configuration of Pacemaker

For the SAP HANA HA solution, we need to configure 7 resources and the corresponding constraints in Pacemaker.
CAUTION: the following Pacemaker configuration only needs to be done on one node (normally the primary node).

  1. Cluster bootstrap and more
    Add the bootstrap configuration and the default settings for resources and operations to the cluster. Save the following script in a file: crm-bs.txt
    property $id='cib-bootstrap-options' \
        stonith-enabled="true" \
        stonith-action="off" \
        stonith-timeout="150s"
    rsc_defaults $id="rsc-options" \
        resource-stickiness="1000" \
        migration-threshold="5000"
    op_defaults $id="op-options" \
        timeout="600"
    Execute the following command to add the settings to the cluster:
    crm configure load update crm-bs.txt
  2. STONITH device
    This part defines the Aliyun STONITH devices in the cluster.
    Save the following script in a file: crm-stonith.txt
    primitive res_ALIYUN_STONITH_1 stonith:fence_aliyun \
        op monitor interval=120 timeout=60 \
        params pcmk_host_list=<primary node hostname> port=<primary node instance id> \
        access_key=<access key> secret_key=<secret key> \
        region=<region> \
        meta target-role=Started
    primitive res_ALIYUN_STONITH_2 stonith:fence_aliyun \
        op monitor interval=120 timeout=60 \
        params pcmk_host_list=<secondary node hostname> port=<secondary node instance id> \
        access_key=<access key> secret_key=<secret key> \
        region=<region> \
        meta target-role=Started
    location loc_<primary node hostname>_stonith_not_on_<primary node hostname> res_ALIYUN_STONITH_1 -inf: <primary node hostname>
    # STONITH 1 should not run on the primary node because it is controlling the primary node
    location loc_<secondary node hostname>_stonith_not_on_<secondary node hostname> res_ALIYUN_STONITH_2 -inf: <secondary node hostname>
    # STONITH 2 should not run on the secondary node because it is controlling the secondary node
    <primary node hostname> / <secondary node hostname> should be replaced by the real hostname of your primary / secondary node;
    <primary node instance id> / <secondary node instance id> should be replaced by the real instance ID of your primary / secondary node; you can get this from the console;
    <access key> should be replaced with your real Access Key;
    <secret key> should be replaced with your real Secret Key;
    <region> should be replaced with the real name of the region where the node is located;
    Execute the following command to add the resources to the cluster:
    crm configure load update crm-stonith.txt
  3. SAPHanaTopology
    This part defines a SAPHanaTopology RA and a clone of SAPHanaTopology running on both nodes in the cluster. Save the following script in a file: crm-saphanatop.txt
    primitive rsc_SAPHanaTopology_<SID>_HDB<instance number> ocf:suse:SAPHanaTopology \
        operations $id="rsc_SAPHanaTopology_<SID>_HDB<instance number>-operations" \
        op monitor interval="10" timeout="600" \
        op start interval="0" timeout="600" \
        op stop interval="0" timeout="300" \
        params SID="<SID>" InstanceNumber="<instance number>"
    clone cln_SAPHanaTopology_<SID>_HDB<instance number> rsc_SAPHanaTopology_<SID>_HDB<instance number> \
        meta clone-node-max="1" interleave="true"
    <SID> should be replaced by the real SAP HANA SID;
    <instance number> should be replaced by the real SAP HANA instance number;
    Execute the following command to add the resources to the cluster:
    crm configure load update crm-saphanatop.txt
  4. SAPHana
    This part defines a SAPHana RA and a multi-state resource of SAPHana on both nodes in the cluster. Save the following script in a file: crm-saphana.txt
    primitive rsc_SAPHana_<SID>_HDB<instance number> ocf:suse:SAPHana \
        operations $id="rsc_sap_<SID>_HDB<instance number>-operations" \
        op start interval="0" timeout="3600" \
        op stop interval="0" timeout="3600" \
        op promote interval="0" timeout="3600" \
        op monitor interval="60" role="Master" timeout="700" \
        op monitor interval="61" role="Slave" timeout="700" \
        params SID="<SID>" InstanceNumber="<instance number>" PREFER_SITE_TAKEOVER="true" \
        DUPLICATE_PRIMARY_TIMEOUT="7200" AUTOMATED_REGISTER="false"
    ms msl_SAPHana_<SID>_HDB<instance number> rsc_SAPHana_<SID>_HDB<instance number> \
        meta clone-max="2" clone-node-max="1" interleave="true"
    <SID> should be replaced by the real SAP HANA SID;
    <instance number> should be replaced by the real SAP HANA instance number;
    Execute the following command to add the resources to the cluster:
    crm configure load update crm-saphana.txt
  5. Virtual IP
    This part defines a virtual IP RA in the cluster. Save the following script in a file: crm-vip.txt
    primitive rsc_vip_<SID>_HDB<instance number> ocf:aliyun:vpc-move-ip \
        op monitor interval=60 \
        meta target-role=Started \
        params address=<virtual_IPv4_address> routing_table=<route_table_ID> interface=eth0
    <virtual_IPv4_address> should be replaced by the real IP address on which you prefer to provide the service;
    <route_table_ID> should be replaced by the route table ID of your VPC;
    <SID> should be replaced by the real SAP HANA SID;
    <instance number> should be replaced by the real SAP HANA instance number;
    Execute the following command to add the resource to the cluster:
    crm configure load update crm-vip.txt
  6. Constraints
    Two constraints organize the correct placement of the virtual IP address for client database access and the start order between the two resource agents SAPHana and SAPHanaTopology. Save the following script in a file: crm-constraint.txt
    colocation col_SAPHana_vip_<SID>_HDB<instance number> 2000: rsc_vip_<SID>_HDB<instance number>:Started \
        msl_SAPHana_<SID>_HDB<instance number>:Master
    order ord_SAPHana_<SID>_HDB<instance number> Optional: cln_SAPHanaTopology_<SID>_HDB<instance number> \
        msl_SAPHana_<SID>_HDB<instance number>
    <SID> should be replaced by the real SAP HANA SID;
    <instance number> should be replaced by the real SAP HANA instance number;
    Execute the following command to add the resources to the cluster:
    crm configure load update crm-constraint.txt
  7. Check the cluster status
    a) Start the HANA HA cluster on both nodes:
    Execute command: systemctl start pacemaker
    b) Monitor the HANA HA cluster:
    Execute command: systemctl status pacemaker
    Execute command: crm_mon -r
    In our example, we have the following result:
    (Screenshot: cluster status on hana0)

Meanwhile, check whether a new entry with <virtual_IPv4_address> has been added to the route table of the VPC.
In our example, we have the following:
(Screenshot: VPC route table entry)
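You can also query the route table from the command line. A sketch with aliyuncli (the parameter name follows the OpenAPI DescribeRouteTables action and is an assumption for your CLI version; the route table ID is the one from our example):
aliyuncli vpc DescribeRouteTables --RouteTableId vtb-gw8fii1g1d8cp14tzynub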

Verify the HA takeover

  • Shut down the primary node (see the sketch after this list for one way to do this);
  • Check the status of Pacemaker as follows:
    (Screenshot: cluster status after takeover)

  • Compare the entry of the route table in the VPC as follows:
    (Screenshot: VPC route table after takeover)
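For the shutdown itself, assuming a test environment, one of the following commands (run as root on the primary node) is a simple way to trigger the takeover:
systemctl poweroff
# or, to simulate a hard crash (immediate reboot without a clean shutdown):
echo b > /proc/sysrq-trigger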

Example

Example Cluster Configuration

In our example, the cluster configuration (you can check it with the command crm configure show) should have the following content:

  node 1: hana0 \
      attributes hana_jl0_vhost=hana0 hana_jl0_srmode=sync hana_jl0_remoteHost=hana1 hana_jl0_site=hana0 lpa_jl0_lpt=10 hana_jl0_op_mode=logreplay
  node 2: hana1 \
      attributes lpa_jl0_lpt=1529509236 hana_jl0_op_mode=logreplay hana_jl0_vhost=hana1 hana_jl0_site=hana1 hana_jl0_srmode=sync hana_jl0_remoteHost=hana0
  primitive res_ALIYUN_STONITH_0 stonith:fence_aliyun \
      op monitor interval=120 timeout=60 \
      params pcmk_host_list=hana0 port=i-gw8byf3m4f9a8os6rke8 access_key=<access key> secret_key=<secret key> region=eu-central-1 \
      meta target-role=Started
  primitive res_ALIYUN_STONITH_1 stonith:fence_aliyun \
      op monitor interval=120 timeout=60 \
      params pcmk_host_list=hana1 port=i-gw8byf3m4f9a8os6rke9 access_key=<access key> secret_key=<secret key> region=eu-central-1 \
      meta target-role=Started
  primitive rsc_SAPHanaTopology_JL0_HDB00 ocf:suse:SAPHanaTopology \
      operations $id=rsc_SAPHanaTopology_JL0_HDB00-operations \
      op monitor interval=10 timeout=600 \
      op start interval=0 timeout=600 \
      op stop interval=0 timeout=300 \
      params SID=JL0 InstanceNumber=00
  primitive rsc_SAPHana_JL0_HDB00 ocf:suse:SAPHana \
      operations $id=rsc_SAPHana_JL0_HDB00-operations \
      op start interval=0 timeout=3600 \
      op stop interval=0 timeout=3600 \
      op promote interval=0 timeout=3600 \
      op monitor interval=60 role=Master timeout=700 \
      op monitor interval=61 role=Slave timeout=700 \
      params SID=JL0 InstanceNumber=00 PREFER_SITE_TAKEOVER=true DUPLICATE_PRIMARY_TIMEOUT=7200 AUTOMATED_REGISTER=false
  primitive rsc_vip_JL0_HDB00 ocf:aliyun:vpc-move-ip \
      op monitor interval=60 \
      meta target-role=Started \
      params address=192.168.4.1 routing_table=vtb-gw8fii1g1d8cp14tzynub interface=eth0
  ms msl_SAPHana_JL0_HDB00 rsc_SAPHana_JL0_HDB00 \
      meta clone-max=2 clone-node-max=1 interleave=true target-role=Started
  clone cln_SAPHanaTopology_JL0_HDB00 rsc_SAPHanaTopology_JL0_HDB00 \
      meta clone-node-max=1 interleave=true
  colocation col_SAPHana_vip_JL0_HDB00 2000: rsc_vip_JL0_HDB00:Started msl_SAPHana_JL0_HDB00:Master
  location loc_hana0_stonith_not_on_hana0 res_ALIYUN_STONITH_0 -inf: hana0
  location loc_hana1_stonith_not_on_hana1 res_ALIYUN_STONITH_1 -inf: hana1
  order ord_SAPHana_JL0_HDB00 Optional: cln_SAPHanaTopology_JL0_HDB00 msl_SAPHana_JL0_HDB00
  property cib-bootstrap-options: \
      have-watchdog=false \
      dc-version=1.1.15-21.1-e174ec8 \
      cluster-infrastructure=corosync \
      stonith-action=off \
      stonith-enabled=true \
      stonith-timeout=150s \
      last-lrm-refresh=1529503606 \
      maintenance-mode=false
  rsc_defaults rsc-options: \
      resource-stickiness=1000 \
      migration-threshold=5000
  op_defaults op-options: \
      timeout=600

Example for /etc/corosync/corosync.conf

In our example, the corosync.conf on hana0 (bindnetaddr 192.168.0.83 is hana0's heartbeat IP) should have the following content:

  totem {
      version: 2
      token: 5000
      token_retransmits_before_loss_const: 6
      secauth: on
      crypto_hash: sha1
      crypto_cipher: aes256
      clear_node_high_bit: yes
      interface {
          ringnumber: 0
          bindnetaddr: 192.168.0.83
          mcastport: 5405
          ttl: 1
      }
      # On Alibaba Cloud, transport should be set to udpu, which means: unicast
      transport: udpu
  }
  logging {
      fileline: off
      to_logfile: yes
      to_syslog: yes
      logfile: /var/log/cluster/corosync.log
      debug: off
      timestamp: on
      logger_subsys {
          subsys: QUORUM
          debug: off
      }
  }
  nodelist {
      node {
          ring0_addr: 192.168.0.83
          nodeid: 1
      }
      node {
          ring0_addr: 192.168.1.246
          nodeid: 2
      }
  }
  quorum {
      # Enable and configure quorum subsystem (default: off)
      # see also corosync.conf.5 and votequorum.5
      provider: corosync_votequorum
      expected_votes: 2
      two_node: 1
  }

