- HANA HA architecture
- VPC planning
- Create HANA ECS instance
- Configure HANA ECS
- Install the HANA database
- Install the HANA Studio
- Configure HANA system replication
- Install and configure SLES 12 Cluster HA
- Integrate SAP HANA with SUSE HA
- Test the SAP HANA HA failover
This document describes how to deploy SAP HANA high availability (HA) within an availability zone (zone for short) of Alibaba Cloud.
The architecture in this deployment is as follows:
|Installation Package|File or Path|Description|
|---|---|---|
|SUSE for SAP|SLE-12-SP2-SAP-x86_64-GM-DVD1.iso|Downloadable from the SUSE official website; the software has a 60-day trial period.|
|SUSE for SAP|SLE-12-SP2-SAP-x86_64-GM-DVD2.iso|Downloadable from the SUSE official website; the software has a 60-day trial period.|
|SAP HANA database installation package|HDB_SERVER_LINUX_X86_64||
|SAP HANA client installation package|HDB_CLIENT_LINUX_X86_64||
|SAP HANA Studio installation package|HDB_STUDIO_WINDOWS_X86|For Windows|

|Upload Method|Procedure|Remarks|
|---|---|---|
|Direct upload|Upload the package directly to the ECS instance|Through EIP or VPC|
|OSS + ossutil|Upload the package to OSS, then download it to the ECS instance||
|OSS + ossfs|Upload the package to OSS, then use ossfs to access the installation media in OSS|OSSFS must be compiled from source on SUSE|
|Network|Zone|Purpose|CIDR Block|
|---|---|---|---|
|Service network|East China 2 Zone A|For business|192.168.10.0/24|
|Heartbeat network (redundant)|East China 2 Zone A|For SR/HA|192.168.20.0/24|

|Host Name|Role|Heartbeat Address|Service Address|Virtual Address|
|---|---|---|---|---|
|hana01.poc.com|HANA primary node|192.168.20.19|192.168.10.214|192.168.10.12|
|hana02.poc.com|HANA backup node|192.168.20.20|192.168.10.215|192.168.10.12|
A Virtual Private Cloud (VPC) is an isolated network environment built on Alibaba Cloud. VPCs are logically isolated from each other. A VPC is your dedicated private network on the cloud. You have full control over your VPC, including choosing its IP address range and configuring route tables and gateways. For more information and related documents, see the product documentation.
Log on to the VPC console.
Create a service subnet as planned.
Create a heartbeat subnet as planned.
Access https://www.aliyun.com/product/ecs to open the purchasing page. Select an instance type under SAP HANA and click Buy.
Select a payment method: Subscription or Pay-As-You-Go.
Select the region and zone. By default, the zones are allocated randomly. You can select a zone according to your needs. For details about the region and zone selection, see Region and Zone.
In this example, China East 2 zone A is selected.
Select an instance type certified for SAP HANA: either 56 vCPU / 480 GB (ecs.se1.14xlarge) in the series III memory-optimized se1 instance family, or 80 vCPU / 960 GB (ecs.re4.20xlarge) in the enhanced memory re4 instance family. In this example, ecs.se1.14xlarge is selected.
You can select the public, custom, or shared image, or select an image from the market.
The SUSE linux for SAP-12SP2 image from the marketplace is recommended for SAP HANA.
Note: Select the SUSE for SAP edition, instead of the SUSE 12 standard edition on the official website.
Click Marketplace Image to open the image market, search for the keyword sap, and select SUSE linux for SAP-12SP2.
System disk: Mandatory. Used to install the operating system. You must specify the cloud disk type and capacity of the system disk.
Data disk: Optional. If you create a cloud disk as a data disk, you must specify its type, capacity, quantity, and whether to encrypt it. You can create an empty cloud disk or create one from a snapshot. A maximum of 16 cloud disks can be attached as data disks.
The capacity of data disks needs to be adjusted according to the number of HANA instances.
Click Next: Network and Security Group to configure the network and security group.
1. Select a network type.
VPC: Select the VPC and VSwitch. If you have not created a VPC or VSwitch, you can retain the defaults.
2. Set the public network bandwidth.
If your instance does not need to access the public network, or your VPC-type ECS instance uses an elastic IP address (EIP) to access the public network, you do not need to assign a public IP address to your instance. An EIP can be unbound from the instance at any time.
Note: SAP HANA does not provide external services directly, so the instance does not need a public IP address.
Select a security group. If you have not created a security group, retain the default security group. For the rules of the default security group, see Default security group rules.
Note: The second ENI should be added after the ECS instance is successfully created.
Complete the system configuration and grouping, and purchase the ECS instance.
The HANA backup node ECS is created in the same way as the HANA primary node ECS, except for the storage space allocation. We recommend not attaching a HANA backup volume to the HANA backup node, as long as its HANA data storage space is sufficient.
ECS shared block storage is a block-level storage device that allows multiple ECS instances to read and write data concurrently. It features high concurrency, high performance, and high reliability. A single shared block can be attached to a maximum of 16 ECS instances.
In this example, the shared block storage is used as the STONITH device of the HA cluster. Select the same zone as the ECS instances and attach the block to the ECS instances of the HA cluster.
Select an SSD of at least 20 GB for the STONITH device.
After the creation is successful, the following interface is displayed:
Select the ECS instance to be attached to the HA cluster.
An ENI is a virtual network interface that can be attached to an ECS instance in a VPC. With ENIs, you can build highly available clusters, implement failover at a low cost, and achieve fine-grained network management. All regions support ENIs. For more information, see ENI.
Log on to the ECS console, select Network and Security > ENI in the left-side navigation pane, select a region, and click Create an ENI.
An auxiliary ENI is successfully created.
Click Bind Instance for the auxiliary ENI to bind the HANA ECS instance.
A Private High-Availability Virtual IP Address (HaVip) is a private IP resource that can be created and released independently. What makes HaVip unique is that an ECS instance can announce the IP address using ARP. In this deployment, the HaVip is used as the virtual IP address of the cluster and is attached to each node in the cluster.
The HaVip is used by the HANA instance to provide service, and is an IP address on the service subnet.
Click the ECS instance bound to the HA cluster. Ensure that each ECS instance in the cluster is bound.
Access the management page of the created HaVip.
Click + to add the ECS instances to be associated with, and associate the HANA primary and backup nodes with the HaVip.
Configure domain name resolution on the two HANA servers of the HA cluster. Modify the hostnames as follows:
- Edit /etc/hostname.
- Set the hostname.
- Edit /etc/hosts and comment out the IPv6 entries.
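As a sketch, the three steps could look like this on hana01; the addresses come from the planning table, and the heartbeat alias names (hana01-hb, hana02-hb) are assumptions for illustration:

```
# Set the hostname persistently (writes /etc/hostname):
hostnamectl set-hostname hana01

# /etc/hosts — add entries for both nodes and comment out the IPv6 lines:
192.168.10.214  hana01.poc.com  hana01
192.168.10.215  hana02.poc.com  hana02
192.168.20.19   hana01-hb
192.168.20.20   hana02-hb
```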
The SSH password-free connection service must be configured on the two HANA servers. The operation is as follows:
Run the following command on the HANA primary node:
Run the following command on the HANA backup node:
Verify the SSH password-free connection: log on to each node from the other through SSH. If neither logon requires a password, the configuration is successful.
Perform verification on the HANA primary node:
Perform verification on the HANA backup node:
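The key exchange referenced above is typically done as follows; this is a sketch assuming root-to-root trust between the two nodes:

```
# On the HANA primary node (hana01):
ssh-keygen -t rsa                              # accept the defaults
ssh-copy-id -i ~/.ssh/id_rsa.pub root@hana02

# On the HANA backup node (hana02):
ssh-keygen -t rsa
ssh-copy-id -i ~/.ssh/id_rsa.pub root@hana01

# Verification (no password prompt should appear):
ssh hana02 hostname    # run on hana01
ssh hana01 hostname    # run on hana02
```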
The nodes in the cluster need to synchronize time. In this example, the HANA primary node is configured as the NTP server, and the backup node is configured as the client.
HANA primary node:
# vim /etc/ntp.conf
server 127.127.1.0 # local clock (LCL)
fudge 127.127.1.0 stratum 10 # LCL is unsynchronized
# systemctl restart ntpd.service
# ntpq -p
HANA backup node:
# vim /etc/ntp.conf
server hana01 iburst
# systemctl restart ntpd.service
# ntpq -p
Note: If the offset between the local time and the NTP server time exceeds 1000 seconds, run systemctl stop ntpd.service to stop the ntpd service, run ntpdate 192.168.20.9 (replace 192.168.20.9 with the NTP server address) to synchronize the time manually, and then run systemctl start ntpd.service to restart the ntpd service.
File system partitioning differences between the HANA primary and backup nodes:
|HANA Node|File System Partition|
|---|---|
|HANA primary node|OS disk|
|HANA primary node|/hana/data|
|HANA primary node|/hana/log|
|HANA primary node|/hana/shared|
|HANA primary node|/hana/backup|
|HANA primary and backup nodes|Arbitration disk|
|HANA backup node|OS disk|
|HANA backup node|/hana/data|
|HANA backup node|/hana/log|
|HANA backup node|/hana/shared|
hana01:~ # yast network
HANA primary node:
HANA backup node:
After the HaVip is configured on Alibaba Cloud, both ECS instances are in backup mode by default, and the HaVip cannot be used for communication until a HaVip primary node is configured. Therefore, configure the HANA primary node as the HaVip primary node by assigning the HaVip to the ENI of the HANA primary node. This IP address is an additional address (a Linux subinterface) on the corresponding ENI.
hana01:~ # yast network
After the configuration, the instance bound with HaVip turns into the primary state.
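Once the additional address is in place, it can be checked from the shell; the interface name eth0 is an assumption for the service ENI:

```
# The HaVip should appear as a secondary address on the service ENI:
ip addr show eth0 | grep 192.168.10.12
```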
Note: HANA primary and backup nodes must have consistent system ID and instance ID. In this example, the system ID is HAN and instance ID is 00.
Verify that hdblcm is executable, then install the HANA instance on both the primary and backup nodes.
hana01:~/HDB_SERVER_LINUX_X86_64 # ./hdblcm
SAP HANA Lifecycle Management - SAP HANA Database 2.00.020.00.1500920972
Scanning software locations...
SAP HANA Database (2.00.020.00.1500920972) in /root/HDB_SERVER_LINUX_X86_64/server
Choose an action
Index | Action | Description
1 | install | Install new system
2 | extract_components | Extract components
3 | Exit (do nothing) |
Enter selected action index : 1
Enter Installation Path [/hana/shared]:
Enter Local Host Name [hana01]:
Do you want to add hosts to the system? (y/n) [n]:
Enter SAP HANA System ID: HAN
Enter Instance Number :
Enter Local Host Worker Group [default]:
Index | System Usage | Description
1 | production | System is used in a production environment
2 | test | System is used for testing, not production
3 | development | System is used for development, not production
4 | custom | System usage is neither production, test nor development
Select System Usage / Enter Index : 2
Enter Location of Data Volumes [/hana/data/HAN]:
Enter Location of Log Volumes [/hana/log/HAN]:
Restrict maximum memory allocation? [n]:
Enter Certificate Host Name For Host 'hana01' [hana01]:
Enter SAP Host Agent User (sapadm) Password:
Confirm SAP Host Agent User (sapadm) Password:
Enter System Administrator (hanadm) Password:
Confirm System Administrator (hanadm) Password:
Enter System Administrator Home Directory [/usr/sap/HAN/home]:
Enter System Administrator Login Shell [/bin/sh]:
Enter System Administrator User ID :
Enter ID of User Group (sapsys) :
Enter System Database User (SYSTEM) Password:
Confirm System Database User (SYSTEM) Password:
Restart system after machine reboot? [n]:
Summary before execution:
SAP HANA Database System Installation
Remote Execution: ssh
Database Isolation: low
Installation Path: /hana/shared
Local Host Name: hana01
SAP HANA System ID: HAN
Instance Number: 00
Local Host Worker Group: default
System Usage: test
Location of Data Volumes: /hana/data/HAN
Location of Log Volumes: /hana/log/HAN
Certificate Host Names: hana01 -> hana01
System Administrator Home Directory: /usr/sap/HAN/home
System Administrator Login Shell: /bin/sh
System Administrator User ID: 1000
ID of User Group (sapsys): 79
SAP HANA Database
Install version 2.00.020.00.1500920972
Do you want to continue? (y/n): y
Installing SAP HANA Database...
Preparing package 'Saphostagent Setup'...
Preparing package 'Python Support'...
Preparing package 'Python Runtime'...
Preparing package 'Product Manifest'...
Preparing package 'Binaries'...
Preparing package 'Data Quality'...
Preparing package 'Krb5 Runtime'...
Preparing package 'Installer'...
Preparing package 'Ini Files'...
Preparing package 'HWCCT'...
Preparing package 'Documentation'...
Preparing package 'Delivery Units'...
Preparing package 'Offline Cockpit'...
Preparing package 'DAT Languages (EN, DE)'...
Preparing package 'DAT Languages (other)'...
Preparing package 'DAT Configfiles (EN, DE)'...
Preparing package 'DAT Configfiles (other)'...
Installing package 'Saphostagent Setup'...
Installing package 'Python Support'...
Installing package 'Python Runtime'...
Installing package 'Product Manifest'...
Installing package 'Binaries'...
Installing package 'Data Quality'...
Installing package 'Krb5 Runtime'...
Installing package 'Installer'...
Installing package 'Ini Files'...
Installing package 'HWCCT'...
Installing package 'Documentation'...
Installing package 'Delivery Units'...
Installing package 'Offline Cockpit'...
Installing package 'DAT Languages (EN, DE)'...
Installing package 'DAT Languages (other)'...
Installing package 'DAT Configfiles (EN, DE)'...
Installing package 'DAT Configfiles (other)'...
Installing SAP Host Agent version 7.21.26...
Starting SAP HANA Database system...
Starting 4 processes on host 'hana01' (worker):
Starting on 'hana01': hdbcompileserver, hdbdaemon, hdbnameserver, hdbpreprocessor
Starting 7 processes on host 'hana01' (worker):
Starting on 'hana01': hdbcompileserver, hdbdaemon, hdbindexserver, hdbnameserver, hdbpreprocessor, hdbwebdispatcher, hdbxsengine
Starting on 'hana01': hdbdaemon, hdbindexserver, hdbwebdispatcher, hdbxsengine
Starting on 'hana01': hdbdaemon, hdbwebdispatcher, hdbxsengine
Starting on 'hana01': hdbdaemon, hdbwebdispatcher
All server processes started on host 'hana01' (worker).
Importing delivery units...
Importing delivery unit HCO_INA_SERVICE
Importing delivery unit HANA_DT_BASE
Importing delivery unit HANA_IDE_CORE
Importing delivery unit HANA_TA_CONFIG
Importing delivery unit HANA_UI_INTEGRATION_SVC
Importing delivery unit HANA_UI_INTEGRATION_CONTENT
Importing delivery unit HANA_XS_BASE
Importing delivery unit HANA_XS_DBUTILS
Importing delivery unit HANA_XS_EDITOR
Importing delivery unit HANA_XS_IDE
Importing delivery unit HANA_XS_LM
Importing delivery unit HDC_ADMIN
Importing delivery unit HDC_BACKUP
Importing delivery unit HDC_IDE_CORE
Importing delivery unit HDC_SEC_CP
Importing delivery unit HDC_SYS_ADMIN
Importing delivery unit HDC_XS_BASE
Importing delivery unit HDC_XS_LM
Importing delivery unit SAPUI5_1
Importing delivery unit SAP_WATT
Importing delivery unit HANA_SEC_CP
Importing delivery unit HANA_BACKUP
Importing delivery unit HANA_HDBLCM
Importing delivery unit HANA_SEC_BASE
Importing delivery unit HANA_SYS_ADMIN
Importing delivery unit HANA_ADMIN
Importing delivery unit HANA_WKLD_ANLZ
Installing Resident hdblcm...
Updating SAP HANA Database Instance Integration on Local Host...
Regenerating SSL certificates...
Deploying SAP Host Agent configurations...
Creating Component List...
SAP HANA Database System installed
You can send feedback to SAP with this form: https://hana01:1129/lmsl/HDBLCM/HAN/feedback/feedback.html
Log file written to '/var/tmp/hdb_HAN_hdblcm_install_2017-12-30_20.55.04/hdblcm.log' on host 'hana01'.
Verify the HANA installation on the primary and backup nodes by checking the HANA process status.
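A common way to check the process status is via sapcontrol or HDB info, run as the <sid>adm user (here hanadm, instance number 00):

```
# As hanadm on each node:
sapcontrol -nr 00 -function GetProcessList   # all processes should be GREEN
HDB info                                     # process tree of the HANA instance
```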
Configure a Windows ECS
Double-click the hdbsetup executable file in the Studio installation package.
Complete the installation and close the program.
Connect the HANA Studio to the HANA primary node to back up the database.
System-level database backup
Tenant-level database backup
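The same backups can also be triggered from hdbsql instead of HANA Studio; the backup file prefixes below are assumptions:

```
# As hanadm, back up the system database (instance 00):
hdbsql -i 00 -d SYSTEMDB -u SYSTEM \
  "BACKUP DATA USING FILE ('COMPLETE_SYSTEMDB_BACKUP')"

# Back up the HAN tenant database, issued from the system database:
hdbsql -i 00 -d SYSTEMDB -u SYSTEM \
  "BACKUP DATA FOR HAN USING FILE ('COMPLETE_HAN_BACKUP')"
```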
Activate HANA system replication on the primary node.
Maintain the logic names on the primary node.
Copy the PKI SSFS file on the primary node to the corresponding location on the backup node:
Location of the PKI SSFS file on the primary node:
Note: When copying the file, preserve the original file ownership; otherwise, some operations may fail due to insufficient permissions.
Register the backup node on the HANA Studio console.
Maintain the backup node information using the system replication method.
Check the HANA system replication status.
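From the command line, the replication steps above map to hdbnsutil roughly as follows; the site names SITE_A/SITE_B and the logreplay operation mode are assumptions, while syncmem matches the replication mode used later in this guide:

```
# On the primary node, as hanadm:
hdbnsutil -sr_enable --name=SITE_A

# On the backup node, as hanadm (HANA must be stopped first):
HDB stop
hdbnsutil -sr_register --remoteHost=hana01 --remoteInstance=00 \
  --replicationMode=syncmem --operationMode=logreplay --name=SITE_B
HDB start

# On either node, check the replication status:
hdbnsutil -sr_state
```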
Add the local source to the primary and backup nodes.
# zypper addrepo iso:/?iso=/root/SLE-12-SP2-HA-DVD-x86_64-GM-CD1.iso SAP1
# zypper addrepo iso:/?iso=/root/SLE-12-SP2-HA-DVD-x86_64-GM-CD2.iso SAP2
Note: The ISO path needs to be adjusted.
Select all software packages on the right.
Select the dependent package and click Accept.
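Alternatively, the HA packages can be installed from the command line once the ISO repositories are added; ha_sles is the standard SLES 12 HA pattern name:

```
# zypper refresh
# zypper install -t pattern ha_sles
```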
Generate the corosync.conf file on the HANA primary node.
hana01:~ # yast cluster
The configuration is as follows. Other configuration options retain the default values.
Copy the corosync.conf file to the HANA backup node.
hana01# scp /etc/corosync/corosync.conf hana02:/etc/corosync/corosync.conf
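For reference, a minimal sketch of what yast cluster typically generates for a two-node unicast setup on the heartbeat subnet; all values (cluster name, timers, node IDs) are assumptions and should match your actual yast output:

```
totem {
    version: 2
    secauth: on
    cluster_name: hacluster
    transport: udpu
    interface {
        ringnumber: 0
        bindnetaddr: 192.168.20.0
        mcastport: 5405
        ttl: 1
    }
}
nodelist {
    node { ring0_addr: 192.168.20.19  nodeid: 1 }
    node { ring0_addr: 192.168.20.20  nodeid: 2 }
}
quorum {
    provider: corosync_votequorum
    expected_votes: 2
    two_node: 1
}
logging {
    to_logfile: yes
    logfile: /var/log/cluster/corosync.log
    to_syslog: yes
}
```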
Start the cluster
Run the following commands on both nodes:
# rcpacemaker start
View the cluster status.
# crm_mon -1
Current DC: hana01 (version 1.1.16-4.8-77ea74d) - partition with quorum
Last updated: Tue Nov 7 23:13:06 2017
Last change: Tue Nov 7 23:13:05 2017 by hacluster via crmd on hana01
2 nodes configured
0 resources configured
Online: [ hana01 hana02 ] # Both nodes should be in online state.
No active resources
Disable STONITH for now (it will be configured later).
crm(live)configure# property stonith-enabled=false
Enable web-based configuration.
(1) On hana01, set the password of the HA cluster user hacluster to hacluster.
# passwd hacluster
BAD PASSWORD: it is based on a dictionary word
Retype new password:
passwd: password updated successfully
# systemctl restart hawk.service
(2) Access https://192.168.10.214:7630/ (through the HANA Studio ECS) with the user name and password hacluster.
Disable the cluster on hana01 and hana02.
# rcpacemaker stop
View disk information.
hana01:~ # lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sr0 11:0 1 1024M 0 rom
vda 253:0 0 60G 0 disk
└─vda1 253:1 0 60G 0 part /
vdb 253:16 0 1024G 0 disk
└─vdb1 253:17 0 1024G 0 part /hana/data
vdc 253:32 0 512G 0 disk
└─vdc1 253:33 0 512G 0 part /hana/log
vdd 253:48 0 512G 0 disk
└─vdd1 253:49 0 512G 0 part /hana/shared
vde 253:64 0 20G 0 disk
The cloud disk vde is the shared block storage.
Configure watchdog on hana01 and hana02.
# echo softdog > /etc/modules-load.d/watchdog.conf
# systemctl restart systemd-modules-load
# systemctl status systemd-modules-load
● systemd-modules-load.service - Load Kernel Modules
Loaded: loaded (/usr/lib/systemd/system/systemd-modules-load.service; static; vendor preset: disabled)
Active: active (exited) since Mon 2018-01-01 21:18:41 CST; 8s ago
Process: 2300 ExecStart=/usr/lib/systemd/systemd-modules-load (code=exited, status=0/SUCCESS)
Main PID: 2300 (code=exited, status=0/SUCCESS)
Jan 01 21:18:41 s4hsvra systemd: Starting Load Kernel Modules...
Jan 01 21:18:41 s4hsvra systemd: Started Load Kernel Modules.
# lsmod | grep dog
softdog 16384 0
# vim /etc/init.d/boot.local
# vim /etc/sysconfig/sbd
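A sketch of what the two files typically contain in this setup; the values are assumptions based on the shared disk /dev/vde:

```
# /etc/init.d/boot.local — ensure the watchdog module loads at boot:
modprobe softdog

# /etc/sysconfig/sbd:
SBD_DEVICE="/dev/vde"
SBD_STARTMODE="always"
SBD_OPTS=""
```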
Create the SBD partition on hana01.
# sbd -d /dev/vde -4 30 -1 15 create
# sbd -d /dev/vde dump
==Dumping header on disk /dev/vde
Header version : 2.1
UUID : 94d700ee-837b-46c7-95cc-27f3d1ffcf9f
Number of slots : 255
Sector size : 512
Timeout (watchdog) : 15
Timeout (allocate) : 2
Timeout (loop) : 1
Timeout (msgwait) : 30
==Header on disk /dev/vde is dumped
Description of msgwait timeout and watchdog timeout:
-4 specifies the msgwait timeout. In the preceding example, the msgwait timeout is 30 s.
-1 specifies the watchdog timeout. In the preceding example, the watchdog timeout is 15 s, which is the minimum value when the software watchdog (softdog) is used.
If the SBD device resides on a multipath group, the SBD timeouts must be increased, because MPIO path detection takes longer. After msgwait times out, the message is assumed to have been delivered to the target node. With multipath, the relevant delay is the time MPIO needs to detect a path failure and switch to the next path; you may need to test this in your environment. If the SBD daemon on a node does not reset the watchdog timer in time, the node is automatically reset. The watchdog timeout must be shorter than the msgwait timeout; the former should be half of the latter.
The following formula expresses the relationships between the three values:
Timeout (msgwait) = (Timeout (watchdog) * 2)
stonith-timeout = Timeout (msgwait) + 20%
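For the values used in this deployment (msgwait = 30 s), the formulas give a watchdog timeout of 15 s and a stonith-timeout of 36 s, which this guide rounds up to 40 s. As a quick check in shell arithmetic:

```shell
msgwait=30
watchdog=$(( msgwait / 2 ))             # watchdog timeout = msgwait / 2
stonith=$(( msgwait + msgwait / 5 ))    # stonith-timeout = msgwait + 20%
echo "watchdog=${watchdog}s stonith-timeout=${stonith}s"
# → watchdog=15s stonith-timeout=36s
```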
For more information, run the man sbd command.
Configure SBD on hana01 and hana02.
# vim /etc/sysconfig/sbd
Configure the SBD program to start automatically on hana01 and hana02 at system startup.
# systemctl enable sbd
Enable the cluster on hana01 and hana02.
# rcpacemaker start
Modify the cluster SBD parameters on hana01.
# crm configure
crm(live)configure# primitive stonith_sbd stonith:external/sbd params pcmk_delay_max=30
# crm configure
crm(live)configure# property stonith-enabled="true"
crm(live)configure# property stonith-timeout="40s"
crm(live)configure# property no-quorum-policy="ignore"
crm(live)configure# property default-resource-stickiness="1000"
Note: We recommend that you set stonith-timeout to 40s (calculated based on the previous formula).
View the SBD process and service on hana01 and hana02.
# ps -ef | grep sbd
root 5946 1 0 22:44 ? 00:00:01 sbd: inquisitor
root 5947 5946 0 22:44 ? 00:00:00 sbd: watcher: /dev/vde - slot: 0 - uuid: 94d700ee-837b-46c7-95cc-27f3d1ffcf9f
root 5948 5946 0 22:44 ? 00:00:01 sbd: watcher: Pacemaker
root 5949 5946 0 22:44 ? 00:00:00 sbd: watcher: Cluster
root 6915 2540 0 23:25 pts/0 00:00:00 grep --color=auto sbd
# systemctl status sbd
● sbd.service - Shared-storage based fencing daemon
Loaded: loaded (/usr/lib/systemd/system/sbd.service; enabled; vendor preset: disabled)
Active: active (running) since Tue 2017-12-26 22:44:51 CST; 41min ago
Process: 5934 ExecStart=/usr/sbin/sbd $SBD_OPTS -p /var/run/sbd.pid watch (code=exited, status=0/SUCCESS)
Main PID: 5946 (sbd)
Tasks: 4 (limit: 512)
├─5946 sbd: inquisitor
├─5947 sbd: watcher: /dev/vde - slot: 0 - uuid: 94d700ee-837b-46c7-95cc-27f3d1ffcf9f
├─5948 sbd: watcher: Pacemaker
└─5949 sbd: watcher: Cluster
Dec 26 22:44:50 node001 systemd: Starting Shared-storage based fencing daemon...
Dec 26 22:44:51 node001 systemd: Started Shared-storage based fencing daemon.
Verify the SBD configuration.
Note: Ensure that important processes on hana02 have been closed.
hana01# sbd -d /dev/vde message hana02 reset
If hana02 restarts properly, the SBD disk is successfully configured.
Open the SUSE Hawk management interface, click Wizards, and select the HSR options to maintain HANA information.
The script after successful configuration is as follows:
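As a hedged sketch, the HSR wizard typically produces a crm configuration along these lines for SID HAN, instance 00; resource names, timeouts, and parameter values are assumptions and will differ in your Hawk output:

```
primitive rsc_SAPHanaTopology_HAN_HDB00 ocf:suse:SAPHanaTopology \
    params SID=HAN InstanceNumber=00 \
    op monitor interval=10 timeout=600
clone cln_SAPHanaTopology_HAN_HDB00 rsc_SAPHanaTopology_HAN_HDB00 \
    meta clone-node-max=1 interleave=true
primitive rsc_SAPHana_HAN_HDB00 ocf:suse:SAPHana \
    params SID=HAN InstanceNumber=00 PREFER_SITE_TAKEOVER=true \
        AUTOMATED_REGISTER=false DUPLICATE_PRIMARY_TIMEOUT=7200 \
    op monitor interval=60 role=Master timeout=700 \
    op monitor interval=61 role=Slave timeout=700
ms msl_SAPHana_HAN_HDB00 rsc_SAPHana_HAN_HDB00 \
    meta clone-max=2 clone-node-max=1 interleave=true
primitive rsc_ip_HAN_HDB00 ocf:heartbeat:IPaddr2 \
    params ip=192.168.10.12
colocation col_ip_with_master 2000: rsc_ip_HAN_HDB00:Started msl_SAPHana_HAN_HDB00:Master
order ord_topology_first Optional: cln_SAPHanaTopology_HAN_HDB00 msl_SAPHana_HAN_HDB00
```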
Check the resource status in the cluster.
Check the node status in the cluster.
Check the HANA node status.
Check the HANA system replication status.
Check the cluster node column information.
Check the service status in the cluster.
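Typical commands for the checks above (run on either node as root unless noted; SAPHanaSR-showAttr ships with the SAPHanaSR package):

```
crm status            # cluster resources and node states
crm_mon -r1           # one-shot detailed view, including inactive resources
SAPHanaSR-showAttr    # per-node SAPHanaSR attributes (clone and sync state)

# As hanadm:
sapcontrol -nr 00 -function GetProcessList   # HANA process status
hdbnsutil -sr_state                          # system replication status
```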
Forcibly stop hana01 on the ECS console.
The VIP (sap_ip) floats to hana02.
HANA status after failover.
Test the recovery of the HANA primary node.
Enable the ECS of hana01 and start the cluster software pacemaker.
hana01:~ # rcpacemaker start
Configure HSR on the console.
Register hana01 as the backup node.
Set the synchronization mode to syncmem.
Check the HANA node status.
Check the HSR copy status.
Check the HAE cluster status and clean up any nodes reporting errors. After cleanup, the cluster recovers.
The HA cluster recovers, and the HANA backup node starts to provide services.