
SAP S/4HANA 1809 intra-zone HA Deployment Guide

Last Updated: Aug 05, 2019


Release history

| Version | Revision date | Changes | Release date |
| ------- | ------------- | ------- | ------------ |
| 1.0 | 2019-05-07 | Initial release. | |
| 1.1 | 2019-07-04 | Optimized NAS parameters. | 2019-07-04 |

Overview

This topic describes how to deploy SAP S/4HANA ABAP platform 1809 in high availability (HA) mode based on SUSE High Availability Extension (SUSE HAE) in a zone of Alibaba Cloud. Since SAP NetWeaver 7.51, standalone enqueue server architecture 2 (ENSA2) has been available, and it is the default installation option for SAP S/4HANA ABAP platform 1809. The HA deployment described in this topic is for reference only. For more information about installation, configuration, and system resizing, see the SAP installation guides. We recommend that you read the relevant SAP installation guide and SAP notes before deployment.
(1) In the old standalone enqueue server architecture (ENSA1), if the ABAP Central Services (ASCS) instance fails, the cluster must migrate the ASCS instance to the node where the Enqueue Replication Server (ERS), the replicator of the ASCS instance, is running, and restart it there. The ASCS instance recovers the locks by fetching them from the ERS instance over shared memory.
(2) In ENSA2, if the ASCS instance fails, the cluster can migrate the ASCS instance to a node where the ERS instance is not running and restart it there. The ASCS instance recovers the locks by fetching them from Enqueue Replicator 2 over the network instead of shared memory.
 arch2
(3) In ENSA1, Pacemaker supports only a dual-node cluster, and the ASCS and ERS instances must work in primary/secondary mode. Alibaba Cloud provides the best practice for the primary/secondary mode. In ENSA2, Pacemaker supports both dual-node and multi-node clusters.

This topic uses a dual-node (primary/secondary) cluster as an example to describe how to install the SAP S/4HANA 1809 server. In this example, the SAP Fiori front-end server is not installed. In addition, this deployment does not integrate SAP liveCache.

Architecture

The following figure shows the deployment architecture.
arch1

Resource planning

Network

| Network | Location | CIDR block | VSwitch | VPC |
| ------- | -------- | ---------- | ------- | --- |
| Business network | China (Beijing) Zone G | 10.0.10.0/24 | SAP_Business_Vswitch | S4_1809_VPC |
| Heartbeat network | China (Beijing) Zone G | 10.0.20.0/24 | SAP_Heartbeat_Vswitch | S4_1809_VPC |

SAP and hosts

System ID for SAP applications: S4T
System ID for SAP HANA: S4T
You can also set different system IDs for SAP applications and SAP HANA.

| Hostname | IP address | Type | Instance number | Description |
| -------- | ---------- | ---- | --------------- | ----------- |
| s4app1 | 10.0.10.10/10.0.20.10 | Primary Application Server (PAS) instance | 01 | N/A |
| s4app2 | 10.0.10.11/10.0.20.11 | Additional Application Server (AAS) instance | 02 | N/A |
| VASCSS4T | 10.0.10.12 | ASCS instance | 00 | N/A |
| VERSS4T | 10.0.10.13 | ERS instance | 10 | N/A |
| VDBS4T | 10.0.10.9 | Database instance | 00 | N/A |
| hana01 | 10.0.10.7/10.0.20.7 | Primary database | 00 | N/A |
| hana02 | 10.0.10.8/10.0.20.8 | Secondary database | 00 | N/A |


Users and groups

In the HA cluster, the user IDs and group IDs used by SAP applications and SAP HANA must be identical on both nodes.
User ID: Set the sidadm user ID to 2000, and the sapadm user ID to 2001.
Group ID: Set the sapsys group ID to 2000.
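The SWPM lets you enter these IDs during installation. If you prefer to create the users and group on both nodes beforehand, a minimal sketch looks as follows; the s4tadm user name corresponds to the S4T system ID used in this deployment:

  #groupadd -g 2000 sapsys
  #useradd -u 2000 -g sapsys -m s4tadm    #<sid>adm user#
  #useradd -u 2001 -g sapsys -m sapadm    #SAP Host Agent administrator#
  #id s4tadm; id sapadm    #Verify that the IDs match on both nodes.#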

Swap space

The swap space is required for installing SAP applications and SAP HANA. We recommend that you create an SSD to provide the swap space when you create an Elastic Compute Service (ECS) instance. For more information about the swap space, see SAP Note 1597355 - Swap-space recommendation for Linux.

| Physical memory (RAM) | Recommended swap space |
| --------------------- | ---------------------- |
| < 32 GB | 2 times the amount of RAM |
| 32 to 63 GB | 64 GB |
| 64 to 127 GB | 96 GB |
| 128 to 255 GB | 128 GB |
| 256 to 511 GB | 160 GB |
| 512 to 1,023 GB | 192 GB |
| 1,024 to 2,047 GB | 224 GB |
| 2,048 to 4,095 GB | 256 GB |
| 4,096 to 8,191 GB | 288 GB |
| ≥ 8,192 GB | 320 GB |

File systems

We recommend that you use Autofs to mount the global file system and the file system of the transport host for SAP applications. The planning of file systems in this topic is for reference only. For more information about file system resizing, see the relevant SAP installation guide or the planning of implementation service providers.

| File system | Type | Logical volume | Volume group |
| ----------- | ---- | -------------- | ------------ |
| /usr/sap | XFS | usrsaplv | sapvg |
| /sapmnt | NAS | N/A | N/A |
| /usr/sap/trans | NAS | N/A | N/A |
| /hana/data | XFS | datalv | hanavg |
| /hana/log | XFS | loglv | hanavg |
| /hana/shared | XFS | sharedlv | hanavg |

Preparations

Alibaba Cloud account

If you do not have an Alibaba Cloud account, register one on the Alibaba Cloud official website or in the Alibaba Cloud app. You need to use a mobile number to register the account and complete real-name verification for the account. Then, you can log on to the Alibaba Cloud app with this account to manage and monitor your cloud resources, perform authentication, and ask questions and acquire knowledge in Yunqi Community.
For more information, see Account registration and real-name verification (Alibaba Cloud app).

VPC

Virtual Private Cloud (VPC) is an isolated network environment built on Alibaba Cloud. VPCs are logically isolated from one another. A VPC is a private network dedicated to you on Alibaba Cloud. You can configure the IP address range, routing table, and gateway to customize your VPC. For more information, see Virtual Private Cloud.
The following figure shows how to create a VPC as planned.
create a VPC
The following figure shows how to create a VSwitch and specify a CIDR block for the business network as planned.
switch1
The following figure shows how to create a VSwitch and specify a CIDR block for the heartbeat network as planned.
switch2

ECS instance

ECS is a basic cloud computing service provided by Alibaba Cloud. You can log on to the ECS console or the Alibaba Cloud app to configure your ECS resources. For more information about SAP NetWeaver on Alibaba Cloud, see SAP Note 1380654 - SAP support in cloud environments.
1. Create an ECS instance.
Create an ECS instance in the ECS console. Specify the billing method and zone. For this deployment, select China (Beijing) Zone G.
Select the SUSE Linux Enterprise Server for SAP Applications 12 SP3 image from the image marketplace.
image
Specify the number and capacity of disks to be created as planned. In this example, the data disk size is 300 GB and the swap disk size is 50 GB. We recommend that you use an ultra disk or SSD as the system disk, enhanced SSDs (ESSDs) or SSDs as data disks, and an SSD or ESSD to provide the swap space. For more information about disks, see Block storage performance.
disk
Select an existing VPC and an existing security group for the ECS instance. This example uses hanasg as the security group. For more information about the security group, see Security group FAQ.
network_sg
(Optional) Specify the RAM users as required. For more information about RAM, see RAM introduction.
Check all configuration items and ensure that they are correct. Then, create the ECS instance. According to the planning of this deployment, you need to create four ECS instances in China (Beijing) Zone G. After the four ECS instances are created, update hostnames or private IP addresses as planned.
2. Configure elastic network interfaces (ENIs).
Create an ENI for each ECS instance in the HA cluster to configure the heartbeat network. In this example, you need to configure four ENIs.
eni1
Bind an ENI to each ECS instance in the HA cluster.
eni2

ECS Metrics Collector

ECS Metrics Collector is a monitoring agent that the SAP system uses on Alibaba Cloud to collect required information about virtual machine configuration and underlying physical resource usage.
When the SAP system runs in an ECS instance, the SAP Host Agent uses the metadata service and APIs to obtain the information required for monitoring the SAP system, including the information about the operating system, network, storage, and SAP architecture. Then, the SAP Host Agent provides the information for SAP applications to analyze events and system performance.
You need to install ECS Metrics Collector for SAP for each ECS instance in which the SAP system is running, either for SAP applications or SAP HANA. For more information, see ECS Metrics Collector for SAP deployment guide.
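The collector relies on the ECS instance metadata service. As a quick sanity check, you can verify from inside each ECS instance that the metadata service is reachable at its standard address:

  #curl http://100.100.100.200/latest/meta-data/
  #curl http://100.100.100.200/latest/meta-data/instance-id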

Shared block storage

ECS shared block storage is a block-level storage device that allows multiple ECS instances to concurrently read and write data. It features high concurrency, high performance, and high reliability. You can attach a shared block storage device to a maximum of 16 ECS instances at the same time. For more information, watch the video Attach a shared block storage device to multiple ECS instances. This deployment uses shared block storage as the Shoot The Other Node In The Head (STONITH) device for the HA cluster. Ensure that the shared block storage is in the same region and zone as the ECS instances in the HA cluster so that it can be attached to them.
1. Create shared block storage.
Select SSD with a minimum capacity of 20 GB for the STONITH device.
sbd
The following figure shows the created shared block storage.
sbd2
2. Attach the shared block storage.
Attach the shared block storage to two ECS instances in the cluster.
sbd3

HaVip

Private high-availability virtual IP address (HaVip) is a private IP resource that you can create and release separately. Unlike an ordinary private IP address, an HaVip can be advertised from within an ECS instance through Address Resolution Protocol (ARP) announcements. This deployment binds an HaVip as a virtual IP address to each node in the HA cluster.
1. Create an HaVip.
The following describes how to create an HaVip for the ASCS instance.
havip1
Select an IP address of the business CIDR block to create an HaVip for the ASCS instance. The same rule applies when you create an HaVip for the ERS instance.
havip2
2. Bind the HaVip.
Bind the HaVip to each ECS instance in the HA cluster.
havip3
3. Configure HaVips.
Log on to the primary node of the HA cluster. Add the HaVips for ASCS and ERS instances to the Additional Addresses resource pool. This ensures that these HaVips can be reached by PING messages during the installation of ASCS and ERS instances. Run the following command:
#yast2 network
bind1
bind2
bind3
Add the HaVip for the ERS instance in the same way as for the ASCS instance.
bind4
Send PING messages to the HaVip for the ASCS instance to test its connectivity.
ping1
Send PING messages to the HaVip for the ERS instance to test its connectivity.
ping2

NAS

Alibaba Cloud Network Attached Storage (NAS) is a file storage service for compute nodes, such as ECS instances, Elastic High Performance Computing (E-HPC) clusters, and Container Service clusters. NAS complies with standard file access protocols. Without modifying existing applications, you can have a distributed file system that features unlimited capacity and performance scaling, a single namespace, shared access, high reliability, and high availability. In Alibaba Cloud, we recommend that you use the NAS file system for the SAP global host and SAP transport host.
1. Create a NAS file system.
Select a region and a storage type. This deployment uses the NAS Capacity storage type. For more information about NAS performance, see Storage types.
nas_new
Add a mount point for the file system. Select the previously created VPC and business VSwitch.
nas2
Click the file system ID or name to view the mount address of the NAS file system.
nas2
2. Record the mount addresses of NAS file systems.
Create two NAS file systems for /sapmnt and /usr/sap/trans as planned. Record their mount addresses as follows:
114a34ad6d-avf43.cn-beijing.nas.aliyuncs.com
1610a48dd1-sgw90.cn-beijing.nas.aliyuncs.com
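Before configuring Autofs, you can optionally verify that a mount address is reachable by mounting it manually. The /mnt/nastest directory below is an arbitrary test mount point, and the options follow the NFSv4 settings that Alibaba Cloud NAS supports:

  #mkdir -p /mnt/nastest
  #mount -t nfs -o vers=4,minorversion=0,noresvport 114a34ad6d-avf43.cn-beijing.nas.aliyuncs.com:/ /mnt/nastest
  #df -h /mnt/nastest
  #umount /mnt/nastest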

SAP HANA installation

For more information about how to install SAP HANA, configure system replication, and configure HA, see SAP HANA intra-zone HA deployment (based on SLES HAE). For more information about how to install SAP HANA in HA mode across zones, see SAP HANA HA cross-zone with SLES HAE.

HA cluster configuration

Modify hostnames

Modify hostnames for all nodes of SAP applications and SAP HANA.
Add the following information to the hosts file in the /etc directory as planned:

  ###S4 1809 application business###
  10.0.10.10 s4app1 s4app1.alibaba.com
  10.0.10.11 s4app2 s4app2.alibaba.com
  10.0.10.12 VASCSS4T VASCSS4T.alibaba.com
  10.0.10.13 VERSS4T VERSS4T.alibaba.com
  ###S4 1809 application heartbeat###
  10.0.20.10 s4app1-ha
  10.0.20.11 s4app2-ha
  ###S4 1809 HANA database####
  10.0.10.7 hana01 hana01.alibaba.com
  10.0.10.8 hana02 hana02.alibaba.com
  10.0.10.9 VDBS4T VDBS4T.alibaba.com
  ###S4 1809 HANA database heartbeat####
  10.0.20.7 hana01-ha
  10.0.20.8 hana02-ha

Create file systems

Create NAS file systems for /sapmnt and /usr/sap/trans. Create a local XFS file system for /usr/sap.
1. Create the /usr/sap file system.
(1) Check disks.
(2) Create a physical volume.
(3) Create the sapvg volume group.
(4) Create the usrsaplv logical volume.
(5) Create a file system.
(6) Add a mount point for the file system. Enable the file system to be mounted upon system startup.

  #fdisk -l
  #pvcreate /dev/vdb
  #vgcreate sapvg /dev/vdb
  #lvcreate -L 100G -n usrsaplv sapvg
  #mkfs.xfs /dev/sapvg/usrsaplv
  #mkdir -p /usr/sap

Run the vi /etc/fstab command to edit the fstab file as follows:
/dev/sapvg/usrsaplv /usr/sap xfs defaults 0 0

Run the following command to mount all file systems:
#mount -a
2. Create the swap space.
(1) Check the disk used to provide the swap space.
#fdisk -l
Run the fdisk -l command to obtain disk information. In this example, /dev/vdc provides the swap space.
(2) Configure the swap space as follows:

  mkswap /dev/vdc
  swapon /dev/vdc
  swapon -s    #Checks the size of the swap space.#

Run the vi /etc/fstab command to edit the fstab file as follows:
/dev/vdc swap swap defaults 0 0
3. Mount the global file system and the file system of the transport host.
We recommend that you use Autofs to mount the global file system and the file system of the transport host: /sapmnt and /usr/sap/trans. In this way, you do not need to create directories as mount points.
To configure Autofs, follow these steps:
(1) Run the following command to edit the auto.master file:
#vim /etc/auto.master
Add /- /etc/auto.nfs.
autofs
(2) Create and edit the auto.nfs file in the /etc directory as follows:

  /sapmnt -rw,hard,intr,noresvport,timeo=60,retrans=2 114a34ad6d-avf43.cn-beijing.nas.aliyuncs.com:/
  /usr/sap/trans -rw,hard,intr,noresvport,retrans=2,timeo=60 1610a48dd1-sgw90.cn-beijing.nas.aliyuncs.com:/

(3) Run the following command to start Autofs:
#systemctl start autofs
(4) Run the following command to enable Autofs to mount file systems upon system startup:
#systemctl enable autofs
You can run the cd command to access the two file systems to check whether they are mounted.
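For example, accessing the directories triggers Autofs to mount them, and df then shows the NAS mounts:

  #ls /sapmnt /usr/sap/trans
  #df -h | grep nas.aliyuncs.com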

Prepare the operating system and installation packages

You need to configure both nodes of the HA cluster. The following procedure shows how to configure a node.
1. Install the packages required for HA configuration and optimization.
Run the following command to install the sbd, corosync, pacemaker, sap-suse-cluster-connector, saptune, and resource-agents packages:
#zypper install -y sbd corosync pacemaker sap-suse-cluster-connector saptune resource-agents
Run the following command to check whether these packages are installed:
# for p in sbd corosync pacemaker sap-suse-cluster-connector saptune resource-agents;do rpm -qv $p && echo installed;done
This example uses the SUSE Linux Enterprise Server for SAP Applications 12 SP3 image. According to SAP Note 2641019 - Installation of ENSA2 and update from ENSA1 to ENSA2 in SUSE HA environment and other relevant SUSE documentation, you need to install the sap-suse-cluster-connector package of V3.1.0 or a later version, and the resource-agents package of V4.0.1-2.18.1 or a later version.
zypper1
2. Install the ha_sles pattern.
#zypper in -t pattern ha_sles
3. Check the Network Time Protocol (NTP) service.
#ntpq -p
By default, the NTP service is enabled for Alibaba Cloud ECS instances. If the time zone of your ECS instances is not Asia/Shanghai, change the time zone and configure the NTP service. Ensure that the NTP service is enabled on all ECS instances and that they all use the same time zone. For more information, see Time setting: Synchronize NTP servers and change time zone for Linux instances.
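For example, you can change the time zone and restart the NTP service as follows. The ntpd service name assumes the standard ntp package of SUSE Linux Enterprise Server 12:

  #timedatectl set-timezone Asia/Shanghai
  #systemctl restart ntpd
  #ntpq -p    #Verify that the NTP peers are reachable.#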
4. Install saptune.
As an upgraded version of the sapconf tool, saptune is available in SUSE Linux Enterprise Server 12 SP2 and later versions. You can use saptune to tune operating system and database parameters, which ensures better performance for SAP NetWeaver or SAP HANA.
The syntax is as follows:
SAP note

  Tune system according to SAP and SUSE notes:
  saptune note [ list | verify ]
  saptune note [ apply | simulate | verify | customise | revert ] NoteID

SAP solution

  Tune system for all notes applicable to your SAP solution:
  saptune solution [ list | verify ]
  saptune solution [ apply | simulate | verify | revert ] SolutionName

For this deployment, SAP NetWeaver is installed. The following figures show the installation results of saptune.
saptune_note

saptune_solution
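For reference, the results shown above correspond to commands similar to the following. NETWEAVER is the saptune solution name for SAP NetWeaver-based systems such as SAP S/4HANA:

  #saptune solution apply NETWEAVER
  #saptune solution verify NETWEAVER
  #saptune note list    #Applied notes are marked in the list.#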

Run the following commands to start the saptune daemon and enable the tuned service at startup:
#saptune daemon start
#systemctl enable tuned
For more information about saptune, see Prepare your Linux for your SAP solution with saptune or the official documentation of SUSE Linux Enterprise Server.

Configure the STONITH device

In Alibaba Cloud, you can use shared block storage as the STONITH device. The procedure for configuring the STONITH block device (SBD) is as follows:
1. Check the SBD.
#fdisk -l
Run the fdisk -l command to obtain disk information. In this example, the 20 GB shared block storage device serves as the STONITH device for both nodes of the HA cluster.
 Check the SBD
2. Create the SBD for both nodes.
(1) Run the following command to initialize the SBD:
#sbd -d /dev/vdd create
(2) Run the following command to write the dump information to the SBD:
#sbd -d /dev/vdd dump
3. Configure the watchdog (softdog).
On both nodes of the HA cluster, run the vim /etc/init.d/boot.local command to edit the boot.local file and add the following information:
modprobe softdog
 watchdog1
Run the following command:
#modprobe softdog
Run the following command to verify that the softdog module is running:
#lsmod | egrep "(wd|dog)"
 checkwatchdog
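On a systemd-based system such as SUSE Linux Enterprise Server 12, you can alternatively load softdog at startup through a modules-load.d configuration file. The watchdog.conf file name below is arbitrary:

  #echo softdog > /etc/modules-load.d/watchdog.conf
  #modprobe softdog
  #lsmod | egrep "(wd|dog)"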
4. Configure the system configuration file.
Modify parameters in the sbd file of the /etc/sysconfig directory on both nodes as follows:

  SBD_DEVICE="/dev/vdd"
  SBD_WATCHDOG="yes"
  SBD_STARTMODE="clean"
  SBD_OPTS=""

You can run the sbd -d <SBD Device Name> dump command to view SBD parameters and their descriptions.
5. Test the SBD.
Run the following command to start the SBD on both nodes:
#/usr/share/sbd/sbd.sh start
Run the following command to check the SBD status on both nodes:
#sbd -d /dev/vdd list
Ensure that the SBD status is clear on both nodes, as shown in the following figure.
 SBD status
You can also run the following command to send a test message from the primary node to the secondary node:
#sbd -d <SBD Device Name> message <node2> test
Run the following command on the secondary node to check whether it receives the message:
#sbd -d <SBD Device Name> list
Run the following command to reset the SBD status on the secondary node to clear:
#sbd -d <SBD Device Name> message <node2> clear

Configure the cluster

1. Configure Corosync.
(1) Start the cluster GUI.
Log on to the primary node of the cluster, start YaST2, and then click Cluster.
 cluster
(2) Configure Communication Channels as follows:
Transport: Select Unicast.
Channel: Enter 10.0.20.0 in the Bind Network Address field to specify the heartbeat CIDR block.
Redundant Channel: Enter 10.0.10.0 in the Bind Network Address field to specify the business CIDR block.
In the Member Address section, add the heartbeat IP addresses and business IP addresses of both nodes in the IP and Redundant IP columns, respectively. Enter 2 in the Expected Votes field to indicate the number of nodes in the cluster.
 corosync1
(3) Configure Security as follows:
Click Generate Auth Key File.
 corosync4
(4) (Optional) Configure Csync2 as follows:
Csync2 is a synchronous replication tool. You can use it to copy configuration files to nodes in a cluster.
a. Add a host and click Add Suggested Files.
b. Click Turn csync2 ON.
c. Click Generate Pre-Shared-Keys.
d. Copy the generated key_hagroup file in the /etc/csync2 directory to the same directory of the secondary node.
 corosync5
(5) Configure Service as follows:
Booting: By default, Off is selected, indicating that you need to manually start Pacemaker. Keep the default configuration.
 corosync6
(6) Copy files.
Run the following command to copy the corosync.conf and authkey files in the /etc/corosync directory of the primary node to the same directory of the secondary node:
#scp -pr corosync.conf authkey root@s4app2:/etc/corosync
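For reference, the resulting corosync.conf resembles the following sketch. The exact file that YaST2 generates may differ in details such as node IDs, ports, and timeout values:

  totem {
      version: 2
      cluster_name: hacluster
      transport: udpu
      rrp_mode: passive
      interface {
          ringnumber: 0
          bindnetaddr: 10.0.20.0
          mcastport: 5405
      }
      interface {
          ringnumber: 1
          bindnetaddr: 10.0.10.0
          mcastport: 5407
      }
  }
  nodelist {
      node {
          ring0_addr: 10.0.20.10
          ring1_addr: 10.0.10.10
          nodeid: 1
      }
      node {
          ring0_addr: 10.0.20.11
          ring1_addr: 10.0.10.11
          nodeid: 2
      }
  }
  quorum {
      provider: corosync_votequorum
      expected_votes: 2
      two_node: 1
  }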
2. Start Pacemaker.
Run the following command to start Pacemaker on both nodes:
#systemctl start pacemaker
Run the following command to verify that both nodes are online:
#crm status
 corosync2

SAP S/4HANA 1809 installation

Install the ASCS instance

Log on to the primary node and start the Software Provisioning Manager (SWPM). Run the following command to install the ASCS instance on the virtual host VASCSS4T:
# ./sapinst SAPINST_USE_HOSTNAME=VASCSS4T
On a Windows jump server, enter the following URL in the address bar of a browser:
https://VASCSS4T:4237/sapinst/docs/index.html
Use the root username and password to log on to the host. Ensure that the hostname can be resolved and the port can be accessed.
 ascs1
Enter the system ID and the directory of the mounted global file system as planned.
 ascs2
Set the fully qualified domain name (FQDN).
![3](http://docs-aliyun.cn-hangzhou.oss.aliyun-inc.com/assets/pic/117749/cn_zh/1558000793561/ascs3.png)
Set a password.
![4](http://docs-aliyun.cn-hangzhou.oss.aliyun-inc.com/assets/pic/117749/cn_zh/1558000807736/ascs4.png)
Enter the user ID and group ID as planned.
![5](http://docs-aliyun.cn-hangzhou.oss.aliyun-inc.com/assets/pic/117749/cn_zh/1558000818777/ascs5.png)
Enter the path of kernel packages.
![6](http://docs-aliyun.cn-hangzhou.oss.aliyun-inc.com/assets/pic/117749/cn_zh/1558000833905/ascs6.png)
![7](http://docs-aliyun.cn-hangzhou.oss.aliyun-inc.com/assets/pic/117749/cn_zh/1558000844872/ascs7.png)
Enter the ASCS instance number and virtual hostname as planned.
![8](http://docs-aliyun.cn-hangzhou.oss.aliyun-inc.com/assets/pic/117749/cn_zh/1558000856051/ascs8.png)
![9](http://docs-aliyun.cn-hangzhou.oss.aliyun-inc.com/assets/pic/117749/cn_zh/1558000870728/ascs9.png)
Integrate an SAP Web Dispatcher and a gateway with the ASCS instance.
![10](http://docs-aliyun.cn-hangzhou.oss.aliyun-inc.com/assets/pic/117749/cn_zh/1558000883721/ascs10.png)
Configure the SAP Web Dispatcher. You can modify the configuration later.
![11](http://docs-aliyun.cn-hangzhou.oss.aliyun-inc.com/assets/pic/117749/cn_zh/1558000895888/ascs11.png)
For security reasons, we recommend that you remove the sidadm user from the sapinst group.
![12](http://docs-aliyun.cn-hangzhou.oss.aliyun-inc.com/assets/pic/117749/cn_zh/1558000905701/ascs12.png)
Review parameter settings. You can modify parameters in this step.
![13](http://docs-aliyun.cn-hangzhou.oss.aliyun-inc.com/assets/pic/117749/cn_zh/1558000916847/ascs13.png)
![14](http://docs-aliyun.cn-hangzhou.oss.aliyun-inc.com/assets/pic/117749/cn_zh/1558000930632/ascs14.png)
Check the statuses of the message server and enqueue server.
![15](http://docs-aliyun.cn-hangzhou.oss.aliyun-inc.com/assets/pic/117749/cn_zh/1558000947188/ascs15.png)

Install the ERS instance

Log on to the primary node and start the SWPM. Run the following command to install the ERS instance on the virtual host VERSS4T:
# ./sapinst SAPINST_USE_HOSTNAME=VERSS4T
On a Windows jump server, enter the following URL in the address bar of a browser:
https://VERSS4T:4237/sapinst/docs/index.html
Use the root username and password to log on to the host. Ensure that the hostname can be resolved and the port can be accessed.
 ers1
 ers2
Enter the user ID as planned.
 ers3
 ers4
Enter the ERS instance number and virtual hostname as planned.
 ers5
Enter the user ID as planned.
 ers6
 ers7
 ers8
Check the status of ERS.
 ers9

Configure ASCS and ERS instances on the secondary node

1. Create the users and group.
On the secondary node, run the following command to start the SWPM and create the same users and group as those on the primary node:
#./sapinst
Create the sapadm user.
 user1
 user2
Create the sidadm user.
 user3
Enter the system ID and select Based on AS ABAP.
 user4
Enter the user IDs and group ID as planned, which are the same as those on the primary node.
 user5
2. Copy files.
Log on to the primary node.
(1) Run the following command to copy the services file in the /etc directory to the same directory of the secondary node:
#scp -pr services root@s4app2:/etc/
(2) Run the following command to copy the sapservices file in the /usr/sap directory to the same directory of the secondary node:
#scp -pr sapservices root@s4app2:/usr/sap/
(3) Run the following commands to copy the ASCS00, ERS10, and SYS directories to the secondary node:
#cd /usr/sap/S4T
#tar -cvf ASCSERSSYS.tar *
Create an S4T directory with the same permissions in the /usr/sap directory of the secondary node. Then, run the following commands to copy the ASCSERSSYS.tar package to the secondary node and decompress it there:
#scp -pr ASCSERSSYS.tar root@s4app2:/usr/sap/S4T
#tar -xvf ASCSERSSYS.tar
(4) Check whether the symbolic link of the SYS directory is correct.
 softlink
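As a quick check, you can list the SYS directory and confirm that the links point to the shared /sapmnt/S4T directories:

  #ls -l /usr/sap/S4T/SYS    #The profile and global entries should point to /sapmnt/S4T.#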

Install the database instance

Log on to the primary node and start the SWPM. Run the following command to install the database instance on the virtual host VDBS4T:
# ./sapinst SAPINST_USE_HOSTNAME=VDBS4T
On a Windows jump server, enter the following URL in the address bar of a browser:
https://VDBS4T:4237/sapinst/docs/index.html
Use the root username and password to log on to the host. Ensure that the hostname can be resolved and the port can be accessed.
 db1
Enter the system ID, database instance number, and virtual hostname as planned.
 db3
Specify the path of the export package.
 db5
Set a password.
 db6
 db6_2
 db_final

Integrate the SAP instance

1. Add the sidadm user to the haclient group.
Run the following command on both nodes to add the sidadm user to the haclient group:
#usermod -a -G haclient s4tadm
2. Modify the parameter file of the ASCS instance.
(1) Add the configuration that integrates the instance with sap_suse_cluster_connector.
(2) Change the start configuration so that the SAP startup framework does not restart the enqueue server upon failure.

  ####added for sap-suse-cluster-connector####
  #-----------------------------------
  #SUSE HAE sap_suse_cluster_connector
  #-----------------------------------
  service/halib = $(DIR_CT_RUN)/saphascriptco.so
  service/halib_cluster_connector = /usr/bin/sap_suse_cluster_connector
  ####changed so that the enqueue process is not restarted automatically####
  # Start SAP enqueue server
  _EN = en.sap$(SAPSYSTEMNAME)_$(INSTANCE_NAME)
  Execute_04 = local rm -f $(_EN)
  Execute_05 = local ln -s -f $(DIR_EXECUTABLE)/enserver$(FT_EXE) $(_EN)
  #Restart_Program_01 = local $(_EN) pf=$(_PF)
  Start_Program_01 = local $(_EN) pf=$(_PF)
  ##################################

3. Modify the parameter file of the ERS instance.
(1) Add the configuration that integrates the instance with sap_suse_cluster_connector.
(2) Change the start configuration so that the SAP startup framework does not restart the enqueue replication server (Enqueue Replicator 2) upon failure.

  ####added for sap-suse-cluster-connector####
  #-----------------------------------
  #SUSE HAE sap_suse_cluster_connector
  #-----------------------------------
  service/halib = $(DIR_CT_RUN)/saphascriptco.so
  service/halib_cluster_connector = /usr/bin/sap_suse_cluster_connector
  ###############################################################
  #####changed so that Enqueue Replicator 2 is not restarted automatically###
  #Restart_Program_00 = local $(_ENQR) pf=$(_PF) NR=$(SCSID)
  Start_Program_00 = local $(_ENQR) pf=$(_PF) NR=$(SCSID)
  ##################################



Configure the resource agent

1. Configure resources.
Run the following crm command to configure resources:
#crm configure load update HA_script.txt
The content of the HA_script.txt file is as follows:

  #Cluster settings
  property cib-bootstrap-options: \
      have-watchdog=true \
      cluster-infrastructure=corosync \
      cluster-name=hacluster \
      stonith-enabled=true \
      placement-strategy=balanced \
      maintenance-mode=false
  rsc_defaults rsc-options: \
      resource-stickiness=1 \
      migration-threshold=3
  op_defaults op-options: \
      timeout=600 \
      record-pending=true
  #STONITH resource setting
  primitive stonith-sbd stonith:external/sbd \
      params pcmk_delay_max=30s
  #ASCS resource setting
  primitive rsc_ip_S4T_ASCS00 IPaddr2 \
      params ip=10.0.10.12 \
      op monitor interval=10s timeout=20s
  primitive rsc_sap_S4T_ASCS00 SAPInstance \
      operations $id=rsc_sap_S4T_ASCS00-operations \
      op monitor interval=11 timeout=60 on_fail=restart \
      params InstanceName=S4T_ASCS00_VASCSS4T START_PROFILE="/sapmnt/S4T/profile/S4T_ASCS00_VASCSS4T" AUTOMATIC_RECOVER=false \
      meta resource-stickiness=5000 target-role=Started
  #ERS resource setting
  primitive rsc_ip_S4T_ERS10 IPaddr2 \
      params ip=10.0.10.13 \
      op monitor interval=10s timeout=20s
  primitive rsc_sap_S4T_ERS10 SAPInstance \
      operations $id=rsc_sap_S4T_ERS10-operations \
      op monitor interval=11 timeout=60 on_fail=restart \
      params InstanceName=S4T_ERS10_VERSS4T START_PROFILE="/sapmnt/S4T/profile/S4T_ERS10_VERSS4T" AUTOMATIC_RECOVER=false IS_ERS=true \
      meta target-role=Started maintenance=false
  #Groups and colocations
  group grp_S4T_ASCS00 rsc_ip_S4T_ASCS00 rsc_sap_S4T_ASCS00 \
      meta resource-stickiness=3000
  group grp_S4T_ERS10 rsc_ip_S4T_ERS10 rsc_sap_S4T_ERS10 \
      meta target-role=Started
  colocation col_sap_S4T_no_both -5000: grp_S4T_ERS10 grp_S4T_ASCS00
  order ord_sap_S4T_first_start_ascs Optional: rsc_sap_S4T_ASCS00:start rsc_sap_S4T_ERS10:stop symmetrical=false
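After loading the file, you can review the active cluster configuration and resource status with standard crmsh commands:

  #crm configure show
  #crm_mon -r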

2. Unbind and remove the configured HaVips.
Run the following command to unbind and remove the previously configured HaVips of the ASCS and ERS instances:
#yast2 network
remove-temp-havip
3. Start or stop the ASCS or ERS instance.
Start the ASCS or ERS instance.

  su - s4tadm
  #Start the ASCS instance.
  sapcontrol -nr 00 -function StartService S4T
  sapcontrol -nr 00 -function Start
  #Start the ERS instance.
  sapcontrol -nr 10 -function StartService S4T
  sapcontrol -nr 10 -function Start
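To confirm that both instances are running, you can query their process lists (GetProcessList is a standard sapcontrol function):

  sapcontrol -nr 00 -function GetProcessList
  sapcontrol -nr 10 -function GetProcessList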

Stop the ASCS or ERS instance.

  su - s4tadm
  #Stop the ASCS instance.
  sapcontrol -nr 00 -function StopService S4T
  sapcontrol -nr 00 -function Stop
  #Stop the ERS instance.
  sapcontrol -nr 10 -function StopService S4T
  sapcontrol -nr 10 -function Stop

4. Check the HA cluster.
Run the HAGetFailoverConfig check.
sapcontrol -nr 00 -function HAGetFailoverConfig
ha_check2
Run the HACheckConfig check.
sapcontrol -nr 00 -function HACheckConfig
ha_check3
Run the HACheckFailoverConfig check.
sapcontrol -nr 00 -function HACheckFailoverConfig
ha_check4
ha_check4
Check the HA cluster status. Ensure that all resources are started.
#crm_mon -r
ha_check1

Install the PAS instance

You can install the PAS instance on a local host because it is not involved in an HA failover.
Start the SWPM. Run the following command to install the PAS instance on the local host s4app1:
# ./sapinst
On a Windows jump server, enter the following URL in the address bar of a browser:
https://s4app1:4237/sapinst/docs/index.html
Use the root username and password to log on to the host. Ensure that the hostname can be resolved and the port can be accessed.
 pas1
Enter the PAS instance number as planned.
 pas3
Select No SLD destination. You can register the system in a System Landscape Directory (SLD) later.
 pas4
Select Do not create Message Server Access Control List. You can create an access control list (ACL) for the message server later as needed.
 pas5

The procedure for installing the AAS instance on the local host s4app2 is similar, and therefore is not described in this topic.

Configure hdbuserstore

After installing the PAS and AAS instances, run the following commands to configure hdbuserstore to ensure that the PAS and AAS instances connect to the virtual host of SAP HANA. The port number 30015 follows the 3<instance number>15 pattern for the SQL port of the SAP HANA instance (instance number 00).

  su - s4tadm
  hdbuserstore set default VDBS4T:30015 SAPHANADB "password"
  hdbuserstore list


 hdbuser

HA failover test

After deploying SAP applications and SAP HANA in HA mode, you need to run an HA failover test. For more information, see SAP HA test cases on Alibaba Cloud.
For more information about SAP HA test cases, see SAP system HA operation guide.
For more information about routine administrative tasks and commands for SUSE HAE, see SUSE Administration Guide.
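As a minimal smoke test (only one of the cases covered in the guides above), you can put the node that currently runs the ASCS instance into standby and watch the cluster move the resource group. The example below assumes that the ASCS instance is running on s4app1:

  #crm node standby s4app1
  #crm_mon -r    #The grp_S4T_ASCS00 group should fail over to the other node.#
  #crm node online s4app1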