
SAP S/4HANA 1709 high availability installation within an availability zone

Last Updated: Aug 05, 2019

SAP S/4HANA 1709 intra-zone HA deployment

Release history

Version Revision date Changes Release date
1.0 2019-08-05

Overview

This topic describes how to deploy SAP S/4HANA 1709 in high availability (HA) mode based on SUSE High Availability Extension (HAE) in a zone of Alibaba Cloud. This deployment uses the primary/secondary mode to integrate the cluster with the SAP start framework sapstartsrv. This topic describes the installation of the SAP S/4HANA 1709 server. In this example, the SAP Fiori front-end server is not installed. In addition, this deployment does not integrate SAP liveCache.
This topic describes the HA deployment of SAP S/4HANA 1709. It is for reference only. For more information about installation, configuration, and system resizing, see SAP installation guides. We recommend that you read the relevant SAP installation guide and SAP notes before deployment.

Architecture

The following figure shows the deployment architecture.

Architecture_new

Resource planning

Network

Network Location CIDR block VSwitch VPC
Business network China (Shanghai) Zone A 192.168.10.0/24 sap_business SAP_Network
Heartbeat network China (Shanghai) Zone A 192.168.20.0/24 sap_heartbeat SAP_Network

SAP and hosts

System ID for SAP applications: S4T
System ID for SAP HANA: S4T
You can also set different system IDs for SAP applications and SAP HANA.

Hostname IP address Type Instance number Description
s4app1 192.168.10.212 or 192.168.20.212 Primary Application Server (PAS) instance 01 N/A
s4app2 192.168.10.213 or 192.168.20.213 Additional Application Server (AAS) instance 02 N/A
VASCSS4T 192.168.10.11 ABAP Central Services (ASCS) instance 00 N/A
VDBS4T 192.168.10.12 Database instance N/A
VERSS4T 192.168.10.212 or 192.168.10.213 Enqueue Replication Server (ERS) instance 10 N/A
hana01 192.168.10.214 or 192.168.20.19 Primary database 00 N/A
hana02 192.168.10.215 or 192.168.20.20 Secondary database 00 N/A

Note: The ERS instance is installed on the virtual host VERSS4T. In the hosts file of each of the s4app1 and s4app2 hosts, you need to map the virtual hostname of the ERS instance to the physical IP address of that host, as sketched below. This way, the ERS instance can run on either host for failover.
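A minimal sketch of the relevant hosts-file entries, assuming the planned IP addresses from the table above:

# In /etc/hosts on s4app1
192.168.10.212 VERSS4T
# In /etc/hosts on s4app2
192.168.10.213 VERSS4T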

Users and groups

In the HA cluster, the user ID and group ID on a node must be the same as those on the other node for SAP applications or SAP HANA.
User ID: Set the sidadm user ID to 2000, and the sapadm user ID to 2001.
Group ID: Set the sapsys group ID to 2000.
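A hedged way to check that the IDs match on both nodes (assuming the system ID S4T, so the administrative user is s4tadm):

#id s4tadm               # The uid should be 2000 on both nodes.
#id sapadm               # The uid should be 2001 on both nodes.
#getent group sapsys     # The gid should be 2000 on both nodes.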

Swap space

The swap space is required for installing SAP applications and SAP HANA. We recommend that you create an SSD to provide the swap space when you create an Elastic Compute Service (ECS) instance. For more information about the swap space, see SAP Note 1597355 - Swap-space recommendation for Linux.

Physical memory (RAM) Recommended swap space
< 32 GB 2 times the amount of RAM
32 to 63 GB 64 GB
64 to 127 GB 96 GB
128 to 255 GB 128 GB
256 to 511 GB 160 GB
512 to 1,023 GB 192 GB
1,024 to 2,047 GB 224 GB
2,048 to 4,095 GB 256 GB
4,096 to 8,191 GB 288 GB
8,192 GB or more 320 GB

File systems

We recommend that you use Autofs to mount the global file system and the file system of the transport host for SAP applications. The planning of file systems in this topic is for reference only. For more information about file system resizing, see the relevant SAP installation guide or the planning of implementation service providers.

File system Type Logical volume Volume group
/usr/sap XFS usrsaplv sapvg
/sapmnt NAS
/usr/sap/trans NAS
/hana/data XFS datalv hanavg
/hana/log XFS loglv hanavg
/hana/shared XFS sharedlv hanavg

Preparations

Alibaba Cloud account

If you do not have an Alibaba Cloud account, register one on the Alibaba Cloud official website or in the Alibaba Cloud app. You need to use a mobile number to register the account and complete real-name verification for it. Then, you can log on to the Alibaba Cloud app with this account to manage and monitor your cloud resources, perform authentication, and ask questions and acquire knowledge in Yunqi Community. For more information, see Account registration and real-name verification (Alibaba Cloud app).

VPC

Virtual Private Cloud (VPC) is an isolated network environment built on Alibaba Cloud. VPCs are logically isolated from one another. A VPC is a private network dedicated to you on Alibaba Cloud. You can configure the IP address range, routing table, and gateway to customize your VPC. For more information, see Virtual Private Cloud.
The following figure shows how to create a VPC as planned.

Create a VPC
The following figure shows how to create a VSwitch and specify a CIDR block for the business network as planned.

Create the business network
The following figure shows how to create a VSwitch and specify a CIDR block for the heartbeat network as planned.

Create the heartbeat network

ECS instance

ECS is a basic cloud computing service provided by Alibaba Cloud. You can log on to the ECS console or the Alibaba Cloud app to configure your ECS resources. For more information about SAP NetWeaver on Alibaba Cloud, see SAP Note 1380654 - SAP support in cloud environments.
1. Create an ECS instance.
Create an ECS instance in the ECS console. Specify the billing method, region, and zone. For this deployment, select China (Shanghai) Zone A.
Select an existing VPC and an existing security group for the ECS instance. For more information about the security group, see Security group FAQ.

Create an ECS instance 1

Specify the image.
Select the SUSE Linux Enterprise Server for SAP Applications 12 SP2 image from the image marketplace.

Select an image 2

Specify the number and capacity of disks to be created as planned. For more information about disks, see Block storage performance.
We recommend that you use an ultra disk as the system disk and SSDs as data disks, and create an SSD to provide the swap space.

Create disks

(Optional) Specify the RAM users as required. For more information about RAM, see RAM introduction.
Check all configuration items and ensure that they are correct. Then, create the ECS instance. According to the planning of this deployment, you need to create four ECS instances in China (Shanghai) Zone A. After the four ECS instances are created, update the hostnames and private IP addresses as planned.
2. Configure elastic network interfaces (ENIs).
Create an ENI for each ECS instance in the HA cluster to configure the heartbeat network.

Create ENIs

Select an IP address of the heartbeat CIDR block to bind a heartbeat IP address to each ENI.

Create an ENI

Bind an ENI to each ECS instance in the HA cluster.

Bind an ENI

ECS Metrics Collector

ECS Metrics Collector is a monitoring agent that the SAP system uses on Alibaba Cloud to collect required information about virtual machine configuration and underlying physical resource usage.

When the SAP system runs in an ECS instance, the SAP Host Agent uses the metadata service and APIs to obtain the information required for monitoring the SAP system, including the information about the operating system, network, storage, and SAP architecture. Then, the SAP Host Agent provides the information for SAP applications to analyze events and system performance.

You need to install ECS Metrics Collector for SAP for each ECS instance in which the SAP system is running, either for SAP applications or SAP HANA.

For more information, see ECS Metrics Collector for SAP deployment guide.

Shared block storage

ECS shared block storage is a block-level storage device that allows multiple ECS instances to concurrently read and write data. It features high concurrency, high performance, and high reliability. You can attach a shared block storage device to a maximum of 16 ECS instances at the same time. For more information, watch the video Attach a shared block storage device to multiple ECS instances. This deployment uses shared block storage as the Shoot The Other Node In The Head (STONITH) device for the HA cluster. Ensure that the shared block storage is in the same region and zone as the ECS instances in the HA cluster so that it can be attached to these instances.
1. Create shared block storage.

Create shared block storage 1

Select SSD with the minimum capacity of 20 GB for the STONITH device.

Create shared block storage 2

The following figure shows the created shared block storage.
Create shared block storage 4

2. Attach the shared block storage.
Attach the shared block storage to each ECS instance in the HA cluster.

Attach the shared block storage

HaVip

Private high-availability virtual IP address (HaVip) is a private IP resource that you can create and release independently. Unlike a normal private IP address, an HaVip can be announced from within an ECS instance by using Address Resolution Protocol (ARP) announcements. This deployment binds an HaVip as a virtual IP address to the nodes in the HA cluster.

1. Create an HaVip.

HaVip 2

Select an IP address of the business CIDR block to create an HaVip for the ASCS instance. The same rule applies when you create an HaVip for the database instance.

HaVip 3

2. Bind the HaVip.
Bind the HaVip to each ECS instance in the HA cluster.

HaVip 4
3. Configure HaVips.
Log on to the primary node of the HA cluster. Add the HaVips for ASCS and ERS instances to the Additional Addresses resource pool. This ensures that these HaVips can be reached by PING messages during the installation of ASCS and ERS instances. Run the following command:
#yast2 network
HaVip 5
Send PING messages to the HaVips to test their connectivity.
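For example, assuming the HaVip 192.168.10.11 planned for the ASCS instance (repeat the check for any other HaVips you configured):

#ping -c 3 192.168.10.11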

NAS

Alibaba Cloud Network Attached Storage (NAS) is a file storage service for compute nodes, such as ECS instances, Elastic High Performance Computing (E-HPC) clusters, and Container Service clusters. NAS complies with standard file access protocols. Without modifying existing applications, you can have a distributed file system that features unlimited capacity and performance scaling, a single namespace, shared access, high reliability, and high availability. For more information, see Network Attached Storage.
1. Create an NAS file system.
Select a region and a storage type. This deployment uses the NAS Performance storage type. For more information about NAS performance, see Storage types.

nas1

Add a mount point for the file system. Select the previously created VPC and business VSwitch.

nas2

Click the file system ID or name to purchase a storage package as needed.

nas3

Click the file system ID or name. On the details page, you can view the mount address of the NAS file system, as shown in the following figure.

nas4

2. Record the mount addresses of NAS file systems.
Create two NAS file systems for /sapmnt and /usr/sap/trans as planned. Record their mount addresses as follows:
30b074a9fd-dac31.cn-shanghai.nas.aliyuncs.com
30b074a9fd-fye73.cn-shanghai.nas.aliyuncs.com
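If you want to verify a mount address before configuring Autofs, the following is a hedged manual test that reuses the NFS options from the auto.nfs example later in this topic; the temporary mount point /mnt/nastest is only an example:

#mkdir -p /mnt/nastest
#mount -t nfs -o rw,hard,intr,noresvport,timeo=60,retrans=2 30b074a9fd-dac31.cn-shanghai.nas.aliyuncs.com:/ /mnt/nastest
#df -h /mnt/nastest
#umount /mnt/nastest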

SAP HANA installation

For more information about how to install SAP HANA, configure system replication, and configure HA, see SAP HANA intra-zone HA deployment (based on SLES HAE). For more information about how to install SAP HANA in HA mode across zones, see SAP HANA HA cross-zone with SLES HAE.

HA cluster configuration

Modify hostnames

Modify hostnames for all nodes of SAP applications and SAP HANA.
Add the following information to the hosts file in the /etc directory as planned:

###S4 1709 application business###
192.168.10.212 s4app1 s4app1.alibaba.com
192.168.10.213 s4app2 s4app2.alibaba.com
192.168.10.11 VASCSS4T VASCSS4T.alibaba.com
###S4 1709 application heartbeat###
192.168.20.212 s4app1-ha
192.168.20.213 s4app2-ha
###S4 1709 HANA database####
192.168.10.214 hana01 hana01.alibaba.com
192.168.10.215 hana02 hana02.alibaba.com
192.168.10.12 VDBS4T VDBS4T.alibaba.com
###S4 1709 HANA database heartbeat####
192.168.20.19 hana01-ha
192.168.20.20 hana02-ha
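To set the hostname itself on each node, a minimal sketch (assuming hostnamectl, which is available on SUSE Linux Enterprise Server 12):

#hostnamectl set-hostname s4app1    # Run the matching command on s4app2, hana01, and hana02.
#hostnamectl status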

Create file systems

Create NAS file systems for /sapmnt and /usr/sap/trans. Create a local XFS file system for /usr/sap.

1. Create the /usr/sap file system.
(1) Check disks.
(2) Create a physical volume.
(3) Create the sapvg volume group.
(4) Create the usrsaplv logical volume.
(5) Create a file system.
(6) Add a mount point for the file system. Enable the file system to be mounted upon system startup.

fdisk -l
pvcreate /dev/vdb /dev/vdc
vgcreate sapvg /dev/vdb /dev/vdc
lvcreate -L 100G -n usrsaplv -i 2 -I 64 sapvg # Configures striping between two physical volumes.
mkfs.xfs /dev/sapvg/usrsaplv
mkdir -p /usr/sap

Run the vi /etc/fstab command to edit the fstab file as follows:
/dev/sapvg/usrsaplv /usr/sap xfs defaults 1 1

Run the mount -a command to mount all file systems.
2. Create the swap space.
(1) Check the disk used to provide the swap space.
#fdisk -l: Run this command to obtain disk information. In this example, /dev/vdd provides the swap space.
(2) Configure the swap space as follows:

mkswap /dev/vdd
swapon /dev/vdd
swapon -s # Checks the size of the swap space.
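To keep the swap space active after a reboot, a hedged sketch of an additional /etc/fstab entry (assuming /dev/vdd remains the swap disk):

/dev/vdd swap swap defaults 0 0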

3. Mount the global file system and the file system of the transport host.
We recommend that you use Autofs to mount the global file system and the file system of the transport host: /sapmnt and /usr/sap/trans. In this way, you do not need to create directories as mount points.
To configure Autofs, follow these steps:
(1) Run the following command to edit the auto.master file:
#vim /etc/auto.master
Add /- /etc/auto.nfs.

autofs

(2) Create and edit the auto.nfs file in the /etc directory as follows:

/sapmnt -rw,hard,intr,noresvport,timeo=60,retrans=2 93fd149795-bia97.cn-shanghai.nas.aliyuncs.com:/
/usr/sap/trans -rw,hard,intr,noresvport,timeo=60,retrans=2 39c4c4b07d-rxx66.cn-shanghai.nas.aliyuncs.com:/

(3) Run the following command to start Autofs:
#systemctl start autofs
(4) Run the following command to enable Autofs to mount file systems upon system startup:
#systemctl enable autofs
You can run the cd command to access the two file systems to check whether they are mounted.
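For example, accessing the directories triggers the Autofs mounts, and the df command then shows the NAS mount addresses:

#cd /sapmnt
#cd /usr/sap/trans
#df -h /sapmnt /usr/sap/trans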

Prepare the operating system and installation packages

You need to configure both nodes of the HA cluster. The following procedure shows how to configure a node.

1. Install the packages required by HA configuration and optimization.
Run the following command to install the sbd, corosync, pacemaker, sap_suse_cluster_connector, and saptune packages:
#zypper install -y sbd corosync pacemaker sap_suse_cluster_connector saptune
Run the following command to check whether these packages are installed:

for p in corosync pacemaker sap_suse_cluster_connector saptune;do rpm -q $p && echo installed;done


The following figure shows the results.

Check package installation

2. Install the ha_sles pattern.
#zypper in -t pattern ha_sles

3. Check the Network Time Protocol (NTP) service.
#ntpq -p
By default, the NTP service is enabled for Alibaba Cloud ECS instances. If the time zone of your ECS instances is not Asia/Shanghai, change the time zone and configure the NTP service. Ensure that the NTP service is enabled on all ECS instances and that they all use the same time zone. For more information, see Time setting: Synchronize NTP servers and change time zone for Linux instances.
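A hedged sketch of the time zone and NTP commands on SUSE Linux Enterprise Server 12 (assuming the classic ntpd service; follow the referenced topic for the authoritative steps):

#timedatectl set-timezone Asia/Shanghai
#systemctl enable ntpd
#systemctl restart ntpd
#ntpq -p    # Verify that the NTP servers are reachable.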

4. Install saptune.
As an upgraded version of the sapconf tool, saptune is available in SUSE Linux Enterprise Server 12 SP2 and later versions. You can use saptune to tune operating system and database parameters, which ensures better performance for SAP NetWeaver or SAP HANA.
The syntax is as follows:
SAP note

Tune system according to SAP and SUSE notes:
saptune note [ list | verify ]
saptune note [ apply | simulate | verify | customise | revert ] NoteID

SAP solution

Tune system for all notes applicable to your SAP solution:
saptune solution [ list | verify ]
saptune solution [ apply | simulate | verify | revert ] SolutionName

For this deployment, SAP NetWeaver is installed. The following figures show the installation results of saptune.
saptune_note

saptune_solution
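A hedged sketch of the commands that these figures typically correspond to (the solution name NETWEAVER is an assumption; run saptune solution list to see the exact names in your saptune version):

#saptune solution list
#saptune solution apply NETWEAVER
#saptune solution verify NETWEAVER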

Run the following command to start the saptune daemon:
#saptune daemon start

For more information about saptune, see Prepare your Linux for your SAP solution with saptune or the official documentation of SUSE Linux Enterprise Server.

Configure the STONITH device

In Alibaba Cloud, you can use shared block storage as the STONITH device. The procedure for configuring the shared block device (SBD) is as follows:
1. Check the SBD.
#fdisk -l
The output of the fdisk -l command shows the disk information. The 20 GB shared block storage device is used as the STONITH device for both nodes of the HA cluster.
 Check the SBD
2. Create the SBD for both nodes.
(1) Run the following command to initialize the SBD:
#sbd -d /dev/vdc create
(2) Run the following command to display the SBD metadata (dump):
#sbd -d /dev/vdc dump
3. Configure the watchdog (softdog).
On both nodes of the HA cluster, run the vim /etc/init.d/boot.local command to edit the boot.local file and add the information shown in the following figure (a minimal sketch follows the figure):

watchdog1
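The exact lines are shown only in the figure. A minimal sketch of what such an addition typically looks like (an assumption based on the modprobe command used in the next step):

# Added to /etc/init.d/boot.local so that the softdog module is loaded at boot.
modprobe softdog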

Run the following command:
#modprobe softdog
Run the following command to verify that the softdog module is running:
#lsmod | egrep "(wd|dog)"
checkwatchdog
4. Configure the system configuration file.
Modify parameters in the sbd file of the /etc/sysconfig directory on both nodes as follows:

SBD_DEVICE="/dev/vdc"
SBD_STARTMODE=clean
SBD_OPTS=""

You can run the sbd command to view SBD parameters and their descriptions.
5. Test the SBD.
Run the following command to start the SBD on both nodes:
#/usr/share/sbd/sbd.sh start
Run the following command to check the SBD status on both nodes:
#sbd -d /dev/vdc list
Ensure that the SBD status is clear on both nodes, as shown in the following figure.
Check the SBD status
You can also run the following command to send a test message from the primary node to the secondary node:
#sbd -d <SBD Device Name> message <node2> test
Run the following command on the secondary node to check whether it receives the message:
#sbd -d <SBD Device Name> list
Run the following command to reset the SBD status on the secondary node to clear:
#sbd -d <SBD Device Name> message <node2> clear

Configure the cluster

1. Configure Corosync.
(1) Start the cluster GUI.
Log on to the primary node of the cluster, start YaST2, and then click Cluster.

cluster

(2) Configure Communication Channels as follows:
Transport: Select Unicast.
Channel: Enter 192.168.20.0 in the Bind Network Address field to specify the heartbeat CIDR block.
Redundant Channel: Enter 192.168.10.0 in the Bind Network Address field to specify the business CIDR block.

corosync_new1
In the Member Address section, add the heartbeat IP addresses and business IP addresses of both nodes in the IP and Redundant IP columns, respectively. Enter 2 in the Expected Votes field to indicate the number of nodes in the cluster.

corosync_new2
(3) Configure Security as follows:
Click Generate Auth Key File.

corosync4

(4) (Optional) Configure Csync2 as follows:
Csync2 is a file synchronization tool for clusters. You can use it to copy configuration files to the nodes in a cluster.
a. Add a host and click Add Suggested Files.
b. Click Turn csync2 ON.
c. Click Generate Pre-Shared-Keys.
d. Copy the generated key_hagroup file in the /etc/csync2 directory to the same directory of the secondary node.

corosync5

(5) Configure Service as follows:
Booting: By default, Off is selected, indicating that you need to manually start Pacemaker. Keep the default configuration.

corosync6

(6) Copy files.
Run the following command to copy the corosync.conf and authkey files in the /etc/corosync directory of the primary node to the same directory of the secondary node:
#scp -pr corosync.conf authkey root@s4app2:/etc/corosync
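The corosync.conf file copied here is generated by YaST. The following is a hedged sketch of what it typically contains for the settings described above; the values are assumptions based on the planned networks, and your generated file may differ:

totem {
    version: 2
    transport: udpu
    rrp_mode: passive
    interface {
        ringnumber: 0
        bindnetaddr: 192.168.20.0
    }
    interface {
        ringnumber: 1
        bindnetaddr: 192.168.10.0
    }
}
nodelist {
    node {
        ring0_addr: 192.168.20.212
        ring1_addr: 192.168.10.212
    }
    node {
        ring0_addr: 192.168.20.213
        ring1_addr: 192.168.10.213
    }
}
quorum {
    provider: corosync_votequorum
    expected_votes: 2
    two_node: 1
}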
2. Start Pacemaker.
Run the following command to start Pacemaker on both nodes:
#systemctl start pacemaker
Run the following command to verify that both nodes are online:
#crm status

SAP S/4HANA 1709 installation

Install the ASCS instance

Start the SAP Software Provisioning Manager (SWPM). Run the following command to install the ASCS instance on the virtual host VASCSS4T:
# ./sapinst SAPINST_USE_HOSTNAME=VASCSS4T
On a Windows jump server, enter the following URL in the address bar of a browser:
https://VASCSS4T:4237/sapinst/docs/index.html
Use the root username and password to log on to the host. Ensure that the hostname can be resolved and the port can be accessed.
ascs1

Enter the system ID and the directory of the mounted global file system as planned.

ascs2

Set the fully qualified domain name (FQDN).

ascs3

Set a password.

ascs4

Enter the user ID and group ID as planned.

ascs5

Enter the path of kernel packages.

ascs6

Enter the user ID and group ID as planned.

ascs7

Enter the ASCS instance number and virtual hostname as planned.

ascs8

Integrate an SAP Web Dispatcher and a gateway with the ASCS instance.

ascs9

Configure the SAP Web Dispatcher. You can modify the configuration later.

ascs10

For security reasons, we recommend that you remove the sidadm user from the sapinst group.

ascs11

Review parameter settings. You can modify parameters in this step.

ascs12

ascs13

Install the ERS instance

Start the SWPM. Run the following command to install the ERS instance on the virtual host VERSS4T (note that the virtual hostname of the ERS instance is mapped to the physical IP address of the s4app1 host):
# ./sapinst SAPINST_USE_HOSTNAME=VERSS4T
On a Windows jump server, enter the following URL in the address bar of a browser:
https://VERSS4T:4237/sapinst/docs/index.html
Use the root username and password to log on to the host. Ensure that the hostname can be resolved and the port can be accessed.

ers1

ers2

Upgrade the SAP Host Agent.

ers3

Specify the file path of the SAP Host Agent.

ers4

Enter the ERS instance number as planned.

ers5

ers7

Configure ASCS and ERS instances on the secondary node

1. Create the users and group.
On the secondary node, start the SWPM to create the same users and group as those on the primary node. Run the following command:
#./sapinst
 Create the users and group 1

Create the users and group 2

Create the users and group 3

Enter the system ID and select Based on AS ABAP.

Create the users and group 4

Enter the user IDs and group ID as planned, which are the same as those on the primary node.

Create the users and group 5

2. Copy files.
(1) Run the following command to copy the services file in the /etc directory to the same directory of the secondary node:
#scp -pr services root@s4app2:/etc/
(2) Run the following command to copy the sapservices file in the /usr/sap directory to the same directory of the secondary node:
#scp -pr sapservices root@s4app2:/usr/sap/
(3) Run the following commands to copy the ASCS00, ERS10, and SYS directories to the secondary node:
#cd /usr/sap/S4T
#tar -cvf ASCSERSSYS.tar *
Run the following commands to create an S4T directory with the same permissions in the /usr/sap directory of the secondary node, and then copy and decompress the ASCSERSSYS.tar package (a sketch of the directory creation follows these commands):
#scp -pr ASCSERSSYS.tar root@s4app2:/usr/sap/S4T
#tar -xvf ASCSERSSYS.tar
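The creation of the S4T directory on the secondary node is not shown above. A hedged sketch (assuming the s4tadm user and sapsys group created earlier):

#mkdir -p /usr/sap/S4T               # Run on s4app2 before copying the tar package.
#chown s4tadm:sapsys /usr/sap/S4T    # Match the owner and group used on the primary node.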
(4) Check whether the symbolic link of the SYS directory is correct.
softlink

Install the database instance

Start the SWPM. Run the following command to install the database instance on the virtual host VDBS4T:
# ./sapinst SAPINST_USE_HOSTNAME=VDBS4T
On a Windows jump server, enter the following URL in the address bar of a browser:
https://VDBS4T:4237/sapinst/docs/index.html
Use the root username and password to log on to the host. Ensure that the hostname can be resolved and the port can be accessed.
 db1

Enter the system ID, database instance number, and virtual hostname as planned.

db2

Specify the path of the export package.

db3

Set a password.

db5

Use the default value SAPABAP1 for the Schema field. You can change the schema to a custom name, such as SAPSID. However, to keep the schema consistent for future system upgrades, we recommend that you keep SAPABAP1.

db6

db7

Select Do not use a parameter file for the scale-out architecture.

db8

db9

db10

db11

Integrate the SAP instance

1. Add a user to the haclient group.
Run the following command to add the sidadm user to the haclient group on both nodes:
#usermod -a -G haclient s4tadm

2. Modify the parameter file of the ASCS instance.
(1) Add the following configuration to configure the integration with sap_suse_cluster_connector:
service/halib = $(DIR_CT_RUN)/saphascriptco.so
service/halib_cluster_connector = /usr/bin/sap_suse_cluster_connector
(2) Add the following configuration to specify the maximum number of restarts for the message server, which is the threshold for triggering an HA failover:
Max_Program_Restart = 03
(3) Replace Restart_Program_01 = local $(_EN) pf=$(_PF) with Start_Program_01 = local $(_EN) pf=$(_PF) to prevent the enqueue server from restarting on the local host upon failure.
Comment out Restart_Program_01 = local $(_EN) pf=$(_PF).
Use Start_Program_01 = local $(_EN) pf=$(_PF).

####added for sap_suse_cluster_connector####
#-----------------------------------
#SUSE HAE sap_suse_cluster_connector
#-----------------------------------
service/halib = $(DIR_CT_RUN)/saphascriptco.so
service/halib_cluster_connector = /usr/bin/sap_suse_cluster_connector
###############################################################
# Start SAP message server
_MS = ms.sap$(SAPSYSTEMNAME)_$(INSTANCE_NAME)
Execute_02 = local rm -f $(_MS)
Execute_03 = local ln -s -f $(DIR_EXECUTABLE)/msg_server$(FT_EXE) $(_MS)
Restart_Program_00 = local $(_MS) pf=$(_PF)
####added by dongchen_201804 for message server####
Max_Program_Restart = 03
##################################
# Start SAP enqueue server
_EN = en.sap$(SAPSYSTEMNAME)_$(INSTANCE_NAME)
Execute_04 = local rm -f $(_EN)
Execute_05 = local ln -s -f $(DIR_EXECUTABLE)/enserver$(FT_EXE) $(_EN)
####changed for enqueue server####
#Restart_Program_01 = local $(_EN) pf=$(_PF)
Start_Program_01 = local $(_EN) pf=$(_PF)
##################################

3. Modify the parameter file of the ERS instance.
(1) Add the following configuration to configure the integration with sap_suse_cluster_connector:
service/halib = $(DIR_CT_RUN)/saphascriptco.so
service/halib_cluster_connector = /usr/bin/sap_suse_cluster_connector
(2) Change the value of Autostart from 1 to 0.
Comment out Autostart = 1.
Use Autostart = 0.
(3) Replace DIR_PROFILE = $(DIR_INSTANCE)$(DIR_SEP)profile with DIR_PROFILE = $(DIR_INSTALL)$(DIR_SEP)profile.

####added by dongchen_201804 for sap_suse_cluster_connector####
#-----------------------------------
#SUSE HAE sap_suse_cluster_connector
#-----------------------------------
service/halib = $(DIR_CT_RUN)/saphascriptco.so
service/halib_cluster_connector = /usr/bin/sap_suse_cluster_connector
###############################################################
#####changed by dongchen_201804###
#Autostart = 1
Autostart = 0
#DIR_PROFILE = $(DIR_INSTANCE)$(DIR_SEP)profile
DIR_PROFILE = $(DIR_INSTALL)$(DIR_SEP)profile
##################################


4. Comment out some content in the sapservices file.
In the sapservices file of the /usr/sap directory on both nodes, comment out the following content related to ASCS and ERS instances, so that the cluster can start the instances:

#! /bin/sh
#LD_LIBRARY_PATH=/usr/sap/S4T/ASCS00/exe:$LD_LIBRARY_PATH; export LD_LIBRARY_PATH; /usr/sap/S4T/ASCS00/exe/sapstartsrv pf=/usr/sap/S4T/SYS/profile/S4T_ASCS00_VASCSS4T -D -u s4tadm
#LD_LIBRARY_PATH=/usr/sap/S4T/ERS10/exe:$LD_LIBRARY_PATH; export LD_LIBRARY_PATH; /usr/sap/S4T/ERS10/exe/sapstartsrv pf=/usr/sap/S4T/ERS10/profile/S4T_ERS10_VERSS4T -D -u s4tadm


Configure the resource agent

1. Configure resources.
Run the following crm command to configure resources:
#crm configure load update HA_script.txt
The content of the HA_script.txt file is as follows:

#######Below is cluster bootstrap configuration####
property cib-bootstrap-options: \
have-watchdog=true \
dc-version=1.1.15-19.15-e174ec8 \
cluster-infrastructure=corosync \
cluster-name=cluster \
no-quorum-policy=ignore \
stonith-enabled=true \
stonith-action=reboot \
stonith-timeout=150s \
last-lrm-refresh=1524840298
rsc_defaults rsc-options: \
resource-stickiness=1000 \
migration-threshold=5
op_defaults op-options: \
timeout=600 \
op_defaults \
record-pending=true
#####Below is sbd configuration####
primitive rsc_sbd stonith:external/sbd \
operations $id=rsc_sbd-operations \
op monitor interval=30 timeout=60 \
meta target-role=Started
#####Below is ASCS vip resource configuration####
primitive rsc_vip IPaddr2 \
params ip=192.168.10.11 iflabel=0 \
op monitor interval=10 timeout=20 on_fail=restart
#####Below is ASCS instance configuration####
primitive rsc_sap_ASCS SAPInstance \
operations $id=rsc_sap_ASCS-operations \
op start interval=0 timeout=180 \
op stop interval=0 timeout=240 \
op monitor interval=11 role=Slave timeout=60 \
op monitor interval=13 role=Master timeout=60 \
params InstanceName=S4T_ASCS00_VASCSS4T START_PROFILE="/usr/sap/S4T/SYS/profile/S4T_ASCS00_VASCSS4T" ERS_InstanceName=S4T_ERS10_VERSS4T ERS_START_PROFILE="/usr/sap/S4T/SYS/profile/S4T_ERS10_VERSS4T" AUTOMATIC_RECOVER=true \
meta migration-threshold=1 failure-timeout=3600
#####Below is Multi-state/clone/colocation/order configuration#####
ms msl_sap_ASCS rsc_sap_ASCS \
meta clone-max=2 target-role=Started master-max=1 is-managed=true
colocation col_grp_sap_s4_MASTER inf: msl_sap_ASCS:Master rsc_vip
order ord_grp_sap_s4 0: rsc_vip:start msl_sap_ASCS:promote

2. Unbind and remove the configured HaVips.
#yast2 network: Run this command to unbind and remove the previously configured HaVips.
3. Check the configuration.
After configuration, run the following command to check the status of the HA cluster:
#crm_mon -r

crmmon1

Install the PAS instance

You can install the PAS instance on a local host because it is not involved in an HA failover.
Start the SWPM. Run the following command to install the PAS instance on the local host s4app1:
# ./sapinst
On a Windows jump server, enter the following URL in the address bar of a browser:
https://s4app1:4237/sapinst/docs/index.html
Use the root username and password to log on to the host. Ensure that the hostname can be resolved and the port can be accessed.
 pas1
Enter the database instance number and password.

pas2
Do not select Install SAP liveCache for SAP System.

pas4
Enter the PAS instance number as planned.

pas6
Select No SLD destination. You can register a system landscape directory (SLD) later.

pas7
Select Do not create Message Server Access Control List. You can create an access control list (ACL) for the message server later as needed.

pas8
Select Individual Key for production systems.

pas10
Record the key information.

pas11
pas12

The procedure for installing the AAS instance on the local host s4app2 is similar, and therefore is not described in this topic.

Configure hdbuserstore

After installing the PAS and AAS instances, run the following commands to configure hdbuserstore to ensure that the PAS and AAS instances are connected to the virtual hosts of SAP HANA:

su - s4tadm
hdbuserstore set default VDBS4T:30015 SAPHANADB "password"
hdbuserstore list

hdbuserstore

HA failover test

After deploying SAP applications and SAP HANA in HA mode, you need to run an HA failover test. For more information, see SAP HA test cases on Alibaba Cloud.
For more information about routine administrative tasks and commands for SUSE HAE, see the SUSE Administration Guide.