
SAP HANA Intra-Availability Zone HA Deployment (Based on SLES HAE)

Last Updated: Mar 01, 2019


Overview

This document describes how to deploy SAP HANA high availability (HA) within an availability zone (zone for short) of Alibaba Cloud.

HANA HA architecture

The architecture in this deployment is as follows:

HANA

Preparations

Installation media

Installation Package                     File or Path                        Description
SUSE for SAP                             SLE-12-SP2-SAP-x86_64-GM-DVD1.iso   Download from the SUSE official website; 60-day trial period.
SUSE for SAP                             SLE-12-SP2-SAP-x86_64-GM-DVD2.iso   Download from the SUSE official website; 60-day trial period.
SAP HANA database installation package   HDB_SERVER_LINUX_X86_64
SAP HANA client installation package     HDB_CLIENT_LINUX_X86_64
SAP HANA Studio installation package     HDB_STUDIO_WINDOWS_X86              For Windows

Access to installation media

Access Method   Process                                                                   Remarks
Direct upload   Upload the package directly to the ECS instance.                          Upload through EIP or VPC.
OSS + ossutil   Upload the package to OSS, then download it to the ECS with ossutil.
OSS + ossfs     Upload the package to OSS, then mount the bucket with ossfs to access     ossfs must be built from source on SUSE.
                the installation media.
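For the OSS + ossutil method, the download step looks roughly like the sketch below. The bucket name my-sap-media and the archive name are hypothetical placeholders, and ossutil must already be configured with your endpoint and AccessKey; the actual transfer commands are shown as comments for review.

```shell
# Prepare a local directory for the installation media; fall back to a
# temporary directory when not running as root (illustration only).
MEDIA_DIR=/root/hana_media
mkdir -p "$MEDIA_DIR" 2>/dev/null || MEDIA_DIR=$(mktemp -d)

# Hypothetical bucket/object names - replace with your own:
#   ossutil cp oss://my-sap-media/HDB_SERVER_LINUX_X86_64.tgz "$MEDIA_DIR/"
#   tar -xzf "$MEDIA_DIR/HDB_SERVER_LINUX_X86_64.tgz" -C "$MEDIA_DIR"
echo "media directory: $MEDIA_DIR"
```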

VPC planning

Network planning

Network                        Location             Usage     Allocated Subnet
Service network                East China 2 zone A  Business  192.168.10.0/24
Heartbeat network (redundant)  East China 2 zone A  SR/HA     192.168.20.0/24

Host Name       Role               Heartbeat Address  Service Address  Virtual Address
hana01.poc.com  HANA primary node  192.168.20.19      192.168.10.214   192.168.10.12
hana02.poc.com  HANA backup node   192.168.20.20      192.168.10.215   192.168.10.12
HanaStudio      HANA Studio        None               192.168.10.210   None

Create a VPC

A Virtual Private Cloud (VPC) is an isolated network environment built on Alibaba Cloud; VPCs are logically isolated from each other. A VPC is your dedicated private network on the cloud, and you have full control over it, including choosing the IP address range and configuring route tables and gateways. For more information, see the product documentation.

Log on to the VPC console.
pic1

Create a service network

Create a service subnet as planned.
pic3

Create a heartbeat network

Create a heartbeat subnet as planned.
pic4

Create HANA ECS instance

Create HANA primary node ECS

ECS purchasing page

Access https://www.aliyun.com/product/ecs to open the purchasing page. Select an instance type under SAP HANA and click Buy.

Select a payment method

Select one of the following payment methods: Subscription or Pay-As-You-Go.

Select the region and zone.

Select the region and zone. By default, zones are allocated randomly; you can select a zone according to your needs. For details, see Region and Zone.
In this example, East China 2 zone A is selected.

Select instance specifications

Select an instance type certified for SAP HANA: 56 vCPU / 480 GB (ecs.se1.14xlarge) in the series III memory-optimized se1 instance family, or 80 vCPU / 960 GB (ecs.re4.20xlarge) in the enhanced memory-optimized re4 instance family. In this example, ecs.se1.14xlarge is selected.

Select image

You can select a public, custom, or shared image, or an image from the marketplace.
For SAP HANA, we recommend the SUSE linux for SAP-12SP2 image from the marketplace.

Note: Select the SUSE for SAP edition, not the SUSE 12 standard edition.
pic5

Click Marketplace Image to enter the image market. Enter the keyword sap for searching and select SUSE linux for SAP-12SP2.
pic6

Configure storage

System disk: mandatory. Used to install the operating system. You need to specify the cloud disk type and capacity of the system disk.
Data disk: optional. If you create a cloud disk as a data disk, you must specify its type, capacity, quantity, and whether to encrypt it. You can create an empty cloud disk or create one from a snapshot. A maximum of 16 cloud disks can be attached as data disks.
Adjust the data disk capacity according to the number of HANA instances.
pic7

Select a network type

Click Next: Network and Security Group to configure the network and security group:

1. Select a network type.
VPC: select the VPC and VSwitch. If you have not created a VPC or VSwitch, you can retain the defaults.

2. Set the public network bandwidth.
If your instance does not need to access the public network, or your VPC-type ECS instance uses an Elastic IP address (EIP) to access the public network, you do not need to assign a public IP address to your instance. An EIP can be unbound from the instance at any time.
Note: SAP HANA does not provide external services directly, so the instance does not need a public IP address.
pic8

Select security group

Select a security group. If you have not created a security group, retain the default one. For the rules of the default security group, see Default security group rules.

ENI configuration

Note: The second ENI should be added after the ECS instance is successfully created.
pic9

Complete the system configuration, grouping, and ECS purchasing.

Create HANA backup node ECS

Creating the HANA backup node ECS is the same as creating the HANA primary node ECS, except for the storage allocation: we recommend not attaching a HANA backup volume to the backup node as long as the HANA data storage space is sufficient.
pic10

Configure shared storage

ECS shared block storage is a block-level storage device that allows multiple ECS instances to read and write data concurrently, featuring high concurrency, high performance, and high reliability. A single shared block can be attached to a maximum of 16 ECS instances. For the operation procedure, watch the video Attach a Shared Block to Multiple ECS Instances.
In this example, the shared block is used as the STONITH device of the HA cluster. Select the same zone as the ECS instances and attach the block to both ECS instances of the HA cluster.

Create shared block storage


pic11
Select an SSD of at least 20 GB for the STONITH device.
pic12
After the creation is successful, the following interface is displayed:
pic13

Attach shared block storage

Select the ECS instance to be attached to the HA cluster.
pic14

Configure ENI

ENI is a virtual network card that can be appended to an ECS instance in a VPC. With ENI, you can build highly available clusters, implement failover at a low cost, and achieve refined network management. All regions support the ENI. For more information, see ENI.

Create an ENI

Log on to the ECS console, select Network and Security > ENI from the left navigation pane, select a region, and click Create an ENI.
pic15

An auxiliary ENI is successfully created.
pic16

Bind the HANA ECS instance.

Click Bind Instance for the auxiliary ENI to bind the HANA ECS instance.
pic17

Configure HaVip

A private High-Availability Virtual IP address (HaVip) is a private IP resource that can be created and released independently. What makes a HaVip special is that an ECS instance can announce the IP address via ARP. In this deployment, the HaVip is used as the virtual IP address of the cluster and is bound to each node in the cluster.

Create HaVip


pic18

The HaVip is used by the HANA instance to provide service, and is an IP address on the service subnet.
pic19

Bind HaVip

Bind the HaVip to the ECS instances of the HA cluster. Ensure that every ECS instance in the cluster is bound.
pic20

Associate the HANA primary and backup nodes

Access the management page of the created HaVIP.
Click + to add the ECS instances to be associated with, and associate the HANA primary and backup nodes with the HaVip.
pic21

Configure HANA ECS

Modify the host name

Configure domain name resolution on the two HANA servers of the HA cluster. Modify the host names as follows:

  • Edit /etc/hostname.
  • Set the host name.
  • Edit /etc/hosts and comment out the IPv6 entries.
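Using the addresses from the network plan above, the host-name step can be sketched as follows. The block stages the /etc/hosts lines in a temporary file so they can be reviewed before being appended as root; the privileged commands are shown as comments.

```shell
# Stage the /etc/hosts entries for both nodes (service addresses from the plan).
HOSTS_FRAGMENT=$(mktemp)
cat > "$HOSTS_FRAGMENT" <<'EOF'
192.168.10.214  hana01.poc.com  hana01
192.168.10.215  hana02.poc.com  hana02
EOF
cat "$HOSTS_FRAGMENT"

# As root on each node:
#   hostnamectl set-hostname hana01.poc.com   # hana02.poc.com on the backup node
#   cat "$HOSTS_FRAGMENT" >> /etc/hosts       # then comment out the IPv6 entries
```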

Configure SSH password-free connection service

The SSH password-free connection service must be configured on the two HANA servers. The operation is as follows:

Configure the authentication public key

Run the following command on the HANA primary node:
pic23

Run the following command on the HANA backup node:
pic24
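The commands behind the two screenshots are, in outline, key generation plus public-key exchange. A sketch follows; the temporary key path is for illustration only (in practice ssh-keygen defaults to ~/.ssh/id_rsa), and the exchange step is shown as a comment.

```shell
# Generate an RSA key pair without a passphrase (illustrated in a temp dir).
KEYDIR=$(mktemp -d)
ssh-keygen -t rsa -N '' -f "$KEYDIR/id_rsa" -q

# On hana01, push the public key to hana02 (and vice versa on hana02):
#   ssh-copy-id root@hana02
ls "$KEYDIR"
```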

Verify the configurations.

Verify the SSH password-free connection: log on to each node from the other through SSH. If neither logon prompts for a password, the configuration is successful.
Perform verification on the HANA primary node:
pic25

Perform verification on the HANA backup node:
pic26

Configure the NTP service

The nodes in the cluster need to synchronize time. In this example, the HANA primary node is configured as the NTP server, and the backup node is configured as the client.

HANA primary node:

  1. # vim /etc/ntp.conf
  2. server 127.127.1.0 # local clock (LCL)
  3. fudge 127.127.1.0 stratum 10 # LCL is unsynchronized
  4. # systemctl restart ntpd.service
  5. # ntpq -p


pic27

HANA backup node:

  1. # vim /etc/ntp.conf
  2. server hana01 iburst
  3. # systemctl restart ntpd.service
  4. # ntpq -p


pic28

Note: If the offset between the local time and the NTP server time exceeds 1000 seconds, run systemctl stop ntpd.service to stop the ntpd service, run ntpdate 192.168.20.9 (replace 192.168.20.9 with your NTP server address) to synchronize the time manually, and then run systemctl start ntpd.service to restart the ntpd service.

Partition the HANA file system

File system partitioning differences between HANA primary and backup nodes:

HANA Node File System Partition
HANA primary node OS disk
HANA primary node /hana/data
HANA primary node /hana/log
HANA primary node /hana/shared
HANA primary node /hana/backup
HANA primary and backup nodes Arbitration disk
HANA backup node OS disk
HANA backup node /hana/data
HANA backup node /hana/log
HANA backup node /hana/shared
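A sketch of how the volumes above might be partitioned and mounted. The device names (vdb/vdc/vdd) match the lsblk output shown later in this document, but the xfs file system and the fstab entries are assumptions to adjust to your environment; the runnable part only stages the fstab lines for review.

```shell
# As root on the node, for each data disk (example for /hana/data on vdb):
#   parted -s /dev/vdb mklabel gpt mkpart primary 0% 100%
#   mkfs.xfs /dev/vdb1
#   mkdir -p /hana/data

# Stage the /etc/fstab entries before appending them as root:
FSTAB_FRAGMENT=$(mktemp)
for entry in "vdb1:/hana/data" "vdc1:/hana/log" "vdd1:/hana/shared"; do
  dev=${entry%%:*}; mnt=${entry#*:}
  printf '/dev/%s  %s  xfs  defaults  0 0\n' "$dev" "$mnt"
done > "$FSTAB_FRAGMENT"
cat "$FSTAB_FRAGMENT"
```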

Configure the heartbeat network

  1. HANA01:~ # yast network

HANA primary node:
pic29

HANA backup node:
pic30

Configure the HaVip primary node

After the HaVip is configured on Alibaba Cloud, both ECS instances are in backup mode by default and the HaVip cannot yet be used for communication; it takes effect only after a HaVip primary node is configured. Therefore, configure the HANA primary node as the HaVip primary node: assign the HaVip to the ENI of the HANA primary node as an additional address (a Linux subinterface) of that ENI.

  1. HANA01:~ # yast network


pic31

After the configuration, the instance bound with HaVip turns into the primary state.
pic32
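If you prefer the command line to YaST for this step, the HaVip can be added as a secondary address on the service ENI roughly as follows. The interface name eth0 is an assumption (check it with ip link first); the block only prints the command so it can be reviewed before running it as root.

```shell
HAVIP=192.168.10.12   # virtual address from the network plan
# Run as root on the HaVip primary node; verify afterwards with:
#   ip -4 addr show dev eth0
CMD="ip addr add ${HAVIP}/32 dev eth0"
echo "$CMD"
```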

Install the HANA database

Note: HANA primary and backup nodes must have consistent system ID and instance ID. In this example, the system ID is HAN and instance ID is 00.

Ensure that hdblcm is executable, then install the HANA instance on the primary and backup nodes.

  1. hana01:~/HDB_SERVER_LINUX_X86_64 # ./hdblcm
  1. SAP HANA Lifecycle Management - SAP HANA Database 2.00.020.00.1500920972
  2. ************************************************************************
  3. Scanning software locations...
  4. Detected components:
  5. SAP HANA Database (2.00.020.00.1500920972) in /root/HDB_SERVER_LINUX_X86_64/server
  6. Choose an action
  7. Index | Action | Description
  8. -----------------------------------------------
  9. 1 | install | Install new system
  10. 2 | extract_components | Extract components
  11. 3 | Exit (do nothing) |
  12. Enter selected action index [3]: 1
  13. Enter Installation Path [/hana/shared]:
  14. Enter Local Host Name [hana01]:
  15. Do you want to add hosts to the system? (y/n) [n]:
  16. Enter SAP HANA System ID: HAN
  17. Enter Instance Number [00]:
  18. Enter Local Host Worker Group [default]:
  19. Index | System Usage | Description
  20. -------------------------------------------------------------------------------
  21. 1 | production | System is used in a production environment
  22. 2 | test | System is used for testing, not production
  23. 3 | development | System is used for development, not production
  24. 4 | custom | System usage is neither production, test nor development
  25. Select System Usage / Enter Index [4]: 2
  26. Enter Location of Data Volumes [/hana/data/HAN]:
  27. Enter Location of Log Volumes [/hana/log/HAN]:
  28. Restrict maximum memory allocation? [n]:
  29. Enter Certificate Host Name For Host 'hana01' [hana01]:
  30. Enter SAP Host Agent User (sapadm) Password:
  31. Confirm SAP Host Agent User (sapadm) Password:
  32. Enter System Administrator (hanadm) Password:
  33. Confirm System Administrator (hanadm) Password:
  34. Enter System Administrator Home Directory [/usr/sap/HAN/home]:
  35. Enter System Administrator Login Shell [/bin/sh]:
  36. Enter System Administrator User ID [1000]:
  37. Enter ID of User Group (sapsys) [79]:
  38. Enter System Database User (SYSTEM) Password:
  39. Confirm System Database User (SYSTEM) Password:
  40. Restart system after machine reboot? [n]:
  41. Summary before execution:
  42. =========================
  43. SAP HANA Database System Installation
  44. Installation Parameters
  45. Remote Execution: ssh
  46. Database Isolation: low
  47. Installation Path: /hana/shared
  48. Local Host Name: hana01
  49. SAP HANA System ID: HAN
  50. Instance Number: 00
  51. Local Host Worker Group: default
  52. System Usage: test
  53. Location of Data Volumes: /hana/data/HAN
  54. Location of Log Volumes: /hana/log/HAN
  55. Certificate Host Names: hana01 -> hana01
  56. System Administrator Home Directory: /usr/sap/HAN/home
  57. System Administrator Login Shell: /bin/sh
  58. System Administrator User ID: 1000
  59. ID of User Group (sapsys): 79
  60. Software Components
  61. SAP HANA Database
  62. Install version 2.00.020.00.1500920972
  63. Location: /root/HDB_SERVER_LINUX_X86_64/server
  64. Do you want to continue? (y/n): y
  65. Installing components...
  66. Installing SAP HANA Database...
  67. Preparing package 'Saphostagent Setup'...
  68. Preparing package 'Python Support'...
  69. Preparing package 'Python Runtime'...
  70. Preparing package 'Product Manifest'...
  71. Preparing package 'Binaries'...
  72. Preparing package 'Data Quality'...
  73. Preparing package 'Krb5 Runtime'...
  74. Preparing package 'Installer'...
  75. Preparing package 'Ini Files'...
  76. Preparing package 'HWCCT'...
  77. Preparing package 'Documentation'...
  78. Preparing package 'Delivery Units'...
  79. Preparing package 'Offline Cockpit'...
  80. Preparing package 'DAT Languages (EN, DE)'...
  81. Preparing package 'DAT Languages (other)'...
  82. Preparing package 'DAT Configfiles (EN, DE)'...
  83. Preparing package 'DAT Configfiles (other)'...
  84. Creating System...
  85. Extracting software...
  86. Installing package 'Saphostagent Setup'...
  87. Installing package 'Python Support'...
  88. Installing package 'Python Runtime'...
  89. Installing package 'Product Manifest'...
  90. Installing package 'Binaries'...
  91. Installing package 'Data Quality'...
  92. Installing package 'Krb5 Runtime'...
  93. Installing package 'Installer'...
  94. Installing package 'Ini Files'...
  95. Installing package 'HWCCT'...
  96. Installing package 'Documentation'...
  97. Installing package 'Delivery Units'...
  98. Installing package 'Offline Cockpit'...
  99. Installing package 'DAT Languages (EN, DE)'...
  100. Installing package 'DAT Languages (other)'...
  101. Installing package 'DAT Configfiles (EN, DE)'...
  102. Installing package 'DAT Configfiles (other)'...
  103. Creating instance...
  104. Installing SAP Host Agent version 7.21.26...
  105. Starting SAP HANA Database system...
  106. Starting 4 processes on host 'hana01' (worker):
  107. Starting on 'hana01': hdbcompileserver, hdbdaemon, hdbnameserver, hdbpreprocessor
  108. Starting 7 processes on host 'hana01' (worker):
  109. Starting on 'hana01': hdbcompileserver, hdbdaemon, hdbindexserver, hdbnameserver, hdbpreprocessor, hdbwebdispatcher, hdbxsengine
  110. Starting on 'hana01': hdbdaemon, hdbindexserver, hdbwebdispatcher, hdbxsengine
  111. Starting on 'hana01': hdbdaemon, hdbwebdispatcher, hdbxsengine
  112. Starting on 'hana01': hdbdaemon, hdbwebdispatcher
  113. All server processes started on host 'hana01' (worker).
  114. Importing delivery units...
  115. Importing delivery unit HCO_INA_SERVICE
  116. Importing delivery unit HANA_DT_BASE
  117. Importing delivery unit HANA_IDE_CORE
  118. Importing delivery unit HANA_TA_CONFIG
  119. Importing delivery unit HANA_UI_INTEGRATION_SVC
  120. Importing delivery unit HANA_UI_INTEGRATION_CONTENT
  121. Importing delivery unit HANA_XS_BASE
  122. Importing delivery unit HANA_XS_DBUTILS
  123. Importing delivery unit HANA_XS_EDITOR
  124. Importing delivery unit HANA_XS_IDE
  125. Importing delivery unit HANA_XS_LM
  126. Importing delivery unit HDC_ADMIN
  127. Importing delivery unit HDC_BACKUP
  128. Importing delivery unit HDC_IDE_CORE
  129. Importing delivery unit HDC_SEC_CP
  130. Importing delivery unit HDC_SYS_ADMIN
  131. Importing delivery unit HDC_XS_BASE
  132. Importing delivery unit HDC_XS_LM
  133. Importing delivery unit SAPUI5_1
  134. Importing delivery unit SAP_WATT
  135. Importing delivery unit HANA_SEC_CP
  136. Importing delivery unit HANA_BACKUP
  137. Importing delivery unit HANA_HDBLCM
  138. Importing delivery unit HANA_SEC_BASE
  139. Importing delivery unit HANA_SYS_ADMIN
  140. Importing delivery unit HANA_ADMIN
  141. Importing delivery unit HANA_WKLD_ANLZ
  142. Installing Resident hdblcm...
  143. Updating SAP HANA Database Instance Integration on Local Host...
  144. Regenerating SSL certificates...
  145. Deploying SAP Host Agent configurations...
  146. Creating Component List...
  147. SAP HANA Database System installed
  148. You can send feedback to SAP with this form: https://hana01:1129/lmsl/HDBLCM/HAN/feedback/feedback.html
  149. Log file written to '/var/tmp/hdb_HAN_hdblcm_install_2017-12-30_20.55.04/hdblcm.log' on host 'hana01'.

Verify the HANA installation on the primary and backup nodes by checking the HANA process status.
pic33
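One way to check the process status from the command line is sapcontrol, which is installed with the SAP Host Agent during the hdblcm run above (a sketch; the instance number 00 and the hanadm user come from this example). The block only prints the check to run.

```shell
SID_ADM=hanadm   # <sid>adm administrator created during installation
INSTANCE=00
# As the administrator user on each node, all processes should report GREEN:
#   su - hanadm
#   sapcontrol -nr 00 -function GetProcessList
echo "su - $SID_ADM -c 'sapcontrol -nr $INSTANCE -function GetProcessList'"
```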

Install the HANA Studio

Configure a Windows ECS
Double-click the hdbsetup executable file in the Studio installation package.
pic34

Click Next.
pic35

Click Install.
pic36

Complete the installation and close the program.
pic37

Configure HANA system replication

Back up the database

Connect the HANA Studio to the HANA primary node to back up the database.

  • System-level database backup
    pic38

  • Tenant-level database backup
    pic39

Enable HANA system replication on the primary node

Activate HANA system replication on the primary node.
pic40

Maintain the logic names on the primary node.
pic41

Register the backup node to the primary node

Copy the PKI SSFS file on the primary node to the corresponding location on the backup node:
pic42

Location of the PKI SSFS file on the primary node:
/usr/sap/HAN/SYS/global/security/rsecssfs/data
/usr/sap/HAN/SYS/global/security/rsecssfs/key

Note: When copying the files, preserve the original file owner; otherwise, some operations may fail due to insufficient permissions.
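A command-line sketch of the copy, run as root on hana01. scp -p preserves times and modes but not ownership, so ownership is restored afterwards; hanadm:sapsys matches the administrator user and group created during installation. The block only prints the commands so they can be reviewed before running.

```shell
SID=HAN
BASE=/usr/sap/$SID/SYS/global/security/rsecssfs
# Copy both SSFS stores to the backup node, then restore ownership there:
for sub in data key; do
  echo "scp -p $BASE/$sub/* hana02:$BASE/$sub/"
done
echo "ssh hana02 'chown -R hanadm:sapsys $BASE'"
```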

Register the backup node on the HANA Studio console.
pic43

Maintain the backup node information using the system replication method.
pic44

Check the HANA system replication status.

Install and configure SLES 12 Cluster HA

Install the SUSE HAE software

Add the local source to the primary and backup nodes.

  1. #zypper addrepo iso:/?iso=/root/SLE-12-SP2-HA-DVD-x86_64-GM-CD1.iso SAP1
  2. #zypper addrepo iso:/?iso=/root/SLE-12-SP2-HA-DVD-x86_64-GM-CD2.iso SAP2

Note: The ISO path needs to be adjusted.

  1. hana01#yast

Select all software packages on the right.
pic45

Select the dependent package and click Accept.
pic46

Configure the cluster

Generate the cluster configuration file

Generate the corosync.conf file on the HANA primary node.

  1. hana01:~ # yast cluster

The configuration is as follows. Other configuration options retain the default values.
pic47
pic48

Copy the corosync.conf file to the HANA backup node.

  1. hana01# scp /etc/corosync/corosync.conf hana02:/etc/corosync/corosync.conf

Start the cluster
Run the following commands on both nodes:

  1. # rcpacemaker start

View the cluster status.

  1. # crm_mon -1
  2. Stack: corosync
  3. Current DC: hana01 (version 1.1.16-4.8-77ea74d) - partition with quorum
  4. Last updated: Tue Nov 7 23:13:06 2017
  5. Last change: Tue Nov 7 23:13:05 2017 by hacluster via crmd on hana01
  6. 2 nodes configured
  7. 0 resources configured
  8. Online: [ hana01 hana02 ] # Both nodes should be in online state.
  9. No active resources

Disable STONITH for now (it is configured later).

  1. # crm
  2. crm(live)# configure
  3. crm(live)configure# property stonith-enabled=false
  4. crm(live)configure# commit

Enable web-based configuration.

(1) On hana01, set the password of the HA cluster user hacluster (this example uses hacluster as the password).

  1. # passwd hacluster
  2. New password:
  3. BAD PASSWORD: it is based on a dictionary word
  4. Retype new password:
  5. passwd: password updated successfully
  6. # systemctl restart hawk.service

(2) Access https://192.168.10.214:7630/ (through the HANA Studio ECS) and log on with the user name hacluster and the password set above.
pic49

Configure the SBD arbitration disk.

Disable the cluster on hana01 and hana02.

  1. # rcpacemaker stop

View disk information.

  1. Hana01:~ # lsblk
  2. NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
  3. sr0 11:0 1 1024M 0 rom
  4. vda 253:0 0 60G 0 disk
  5. └─vda1 253:1 0 60G 0 part /
  6. vdb 253:16 0 1024G 0 disk
  7. └─vdb1 253:17 0 1024G 0 part /hana/data
  8. vdc 253:32 0 512G 0 disk
  9. └─vdc1 253:33 0 512G 0 part /hana/log
  10. vdd 253:48 0 512G 0 disk
  11. └─vdd1 253:49 0 512G 0 part /hana/shared
  12. vde 253:64 0 20G 0 disk

The cloud disk vde is the shared block storage.

Configure watchdog on hana01 and hana02.

  1. # echo softdog > /etc/modules-load.d/watchdog.conf
  2. # systemctl restart systemd-modules-load
  3. # systemctl status systemd-modules-load
  4. systemd-modules-load.service - Load Kernel Modules
  5. Loaded: loaded (/usr/lib/systemd/system/systemd-modules-load.service; static; vendor preset: disabled)
  6. Active: active (exited) since Mon 2018-01-01 21:18:41 CST; 8s ago
  7. Docs: man:systemd-modules-load.service(8)
  8. man:modules-load.d(5)
  9. Process: 2300 ExecStart=/usr/lib/systemd/systemd-modules-load (code=exited, status=0/SUCCESS)
  10. Main PID: 2300 (code=exited, status=0/SUCCESS)
  11. Jan 01 21:18:41 s4hsvra systemd[1]: Starting Load Kernel Modules...
  12. Jan 01 21:18:41 s4hsvra systemd[1]: Started Load Kernel Modules.
  13. # lsmod | grep dog
  14. softdog 16384 0
  15. # vim /etc/init.d/boot.local
  16. modprobe softdog
  17. # vim /etc/sysconfig/sbd
  18. SBD_DEVICE="/dev/vde"
  19. SBD_OPTS="-W"

Create the SBD partition on hana01.

  1. # sbd -d /dev/vde -4 30 -1 15 create
  2. # sbd -d /dev/vde dump
  3. ==Dumping header on disk /dev/vde
  4. Header version : 2.1
  5. UUID : 94d700ee-837b-46c7-95cc-27f3d1ffcf9f
  6. Number of slots : 255
  7. Sector size : 512
  8. Timeout (watchdog) : 15
  9. Timeout (allocate) : 2
  10. Timeout (loop) : 1
  11. Timeout (msgwait) : 30
  12. ==Header on disk /dev/vde is dumped

Description of the msgwait and watchdog timeouts:
-4 sets the msgwait timeout; in the preceding example it is 30s.
-1 sets the watchdog timeout; in the preceding example it is 15s, the minimum value.
If the SBD device is behind multipath (MPIO), the timeout values need to be increased, because MPIO path detection takes time. When msgwait expires, the message is assumed to have been delivered to the target node; with multipath, the relevant delay is the time MPIO needs to detect a path failure and switch to the next path, so you may need to test the values in your environment. If the SBD daemon on a node does not refresh its watchdog timer in time, the node is automatically reset. The watchdog timeout must be shorter than the msgwait timeout; the former should be half of the latter.
The following formula expresses the relationships between the three values:
Timeout (msgwait) = (Timeout (watchdog) * 2)
stonith-timeout = Timeout (msgwait) + 20%
For more information, run the man sbd command.
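Applying the formula above to this deployment's values (watchdog timeout 15s, as set by the sbd create command):

```shell
WATCHDOG=15
MSGWAIT=$((WATCHDOG * 2))               # 30s, matches the sbd dump output above
STONITH=$(( (MSGWAIT * 12 + 9) / 10 ))  # msgwait + 20%, rounded up
echo "msgwait=${MSGWAIT}s stonith-timeout>=${STONITH}s"
```

This yields 36s; the cluster configuration later in this document rounds up to 40s for headroom.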

Configure SBD on hana01 and hana02.

  1. # vim /etc/sysconfig/sbd
  2. SBD_DEVICE="/dev/vde"
  3. SBD_OPTS="-W"

Enable the SBD service on hana01 and hana02 so that it starts automatically at boot.

  1. # systemctl enable sbd

Enable the cluster on hana01 and hana02.

  1. # rcpacemaker start

Modify the cluster SBD parameters on hana01.

  1. # crm configure
  2. crm(live)configure# primitive stonith_sbd stonith:external/sbd params pcmk_delay_max=30
  3. crm(live)configure# commit
  4. crm(live)configure# exit

Enable STONITH.

  1. # crm configure
  2. crm(live)configure# property stonith-enabled="true"
  3. crm(live)configure# property stonith-timeout="40s"
  4. crm(live)configure# property no-quorum-policy="ignore"
  5. crm(live)configure# property default-resource-stickiness="1000"
  6. crm(live)configure# commit
  7. crm(live)configure# exit

Note: We recommend that you set stonith-timeout to 40s (calculated based on the previous formula).

View the SBD process and service on hana01 and hana02.

  1. # ps -ef | grep sbd
  2. root 5946 1 0 22:44 ? 00:00:01 sbd: inquisitor
  3. root 5947 5946 0 22:44 ? 00:00:00 sbd: watcher: /dev/vde - slot: 0 - uuid: 94d700ee-837b-46c7-95cc-27f3d1ffcf9f
  4. root 5948 5946 0 22:44 ? 00:00:01 sbd: watcher: Pacemaker
  5. root 5949 5946 0 22:44 ? 00:00:00 sbd: watcher: Cluster
  6. root 6915 2540 0 23:25 pts/0 00:00:00 grep --color=auto sbd
  7. # systemctl status sbd
  8. sbd.service - Shared-storage based fencing daemon
  9. Loaded: loaded (/usr/lib/systemd/system/sbd.service; enabled; vendor preset: disabled)
  10. Active: active (running) since Tue 2017-12-26 22:44:51 CST; 41min ago
  11. Process: 5934 ExecStart=/usr/sbin/sbd $SBD_OPTS -p /var/run/sbd.pid watch (code=exited, status=0/SUCCESS)
  12. Main PID: 5946 (sbd)
  13. Tasks: 4 (limit: 512)
  14. CGroup: /system.slice/sbd.service
  15. ├─5946 sbd: inquisitor
  16. ├─5947 sbd: watcher: /dev/vde - slot: 0 - uuid: 94d700ee-837b-46c7-95cc-27f3d1ffcf9f
  17. ├─5948 sbd: watcher: Pacemaker
  18. └─5949 sbd: watcher: Cluster
  19. Dec 26 22:44:50 node001 systemd[1]: Starting Shared-storage based fencing daemon...
  20. Dec 26 22:44:51 node001 systemd[1]: Started Shared-storage based fencing daemon.

Verify the SBD configuration.

Note: Ensure that important work on hana02 has been saved, because the node will be reset.

  1. hana01# sbd -d /dev/vde message hana02 reset

If hana02 restarts properly, the SBD disk is configured successfully.

Integrate SAP HANA with SUSE HA

Add SAP HANA resources

Open the SUSE Hawk management interface, click Wizards, and select the HSR options to maintain HANA information.
pic50

The script after successful configuration is as follows:
pic51

Verify the cluster status

Check the resource status in the cluster.
pic52

Check the node status in the cluster.
pic53

Test the SAP HANA HA failover

Test the HANA primary node failure

Ensure that HA works normally before the test.

Check the HANA node status.
HA_status

Check the HANA system replication status.
HSR_status

Check the cluster node column information.
HANA_HAE_status

Check the service status in the cluster.
HAE_status

Test the primary node failover.

Forcibly stop hana01 on the ECS console.
Pnode_stop

VIP(sap_ip) floats to hana02.
HA-status2

HANA status after failover.
HANA_status

Recover the HANA primary node

Test the recovery of the HANA primary node.

Enable the ECS of hana01 and start the cluster software pacemaker.

  1. hana01:~ # rcpacemaker start


HA_status3

Configure HSR on the console.
HANA_status4

Register hana01 as the backup node.
SR_status2

Set the synchronization mode to syncmem.
HANA_status3
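The registration behind the screenshot can also be done with hdbnsutil (a sketch; run as the hanadm user on hana01, and the site name SiteA is a hypothetical label). The block only prints the command for review.

```shell
# Register hana01 as the new secondary, replicating from hana02 in syncmem mode:
CMD="hdbnsutil -sr_register --remoteHost=hana02 --remoteInstance=00 --replicationMode=syncmem --operationMode=logreplay --name=SiteA"
echo "$CMD"
```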

Check the HANA node status.
HANA_stastus5

Check the HSR copy status.
11

Check the HAE cluster status and clean up the nodes reporting errors. After cleanup, the cluster is recovered.
15

The HA cluster recovers, and the HANA backup node starts to provide services.
20