
Last Updated: Mar 14, 2023

SAP System High Availability Maintenance Guide

Release history

Version     Revision date    Changes    Release date
1.0         2019-04-15

Overview

This topic describes O&M scenarios and countermeasures for SAP applications and ECS instances for SAP HANA that are deployed based on SUSE Linux Enterprise High Availability Extension 12 (SUSE HAE 12). O&M scenarios include upgrading or downgrading the configuration of ECS instances, upgrading SAP applications or databases, regular maintenance for primary or secondary nodes, and node failover.

In SAP systems managed by SUSE HAE, to perform maintenance on a cluster node, you need to stop resources on the node and migrate them to another node, or stop or restart the node. If the node controls cluster resources, you need to migrate the control permission of the node to another node.

The following example describes the maintenance operations for SAP HANA HA instances. You can perform similar operations to maintain ABAP SAP Central Services (ASCS) HA instances and other SAP HA databases.

For more information about SUSE HAE operations, see the following manual:

For more information about SAP HANA system replication (SR) configuration, see the following manual:

Scenarios

The following figure shows the architecture of SUSE HAE.

[Figure: SUSE HAE architecture]

Pacemaker provides multiple configuration items for various maintenance scenarios:

Set the cluster to the maintenance mode

You can set the global property maintenance-mode to true to switch all resources in the cluster to the maintenance mode. In this case, the cluster stops monitoring these resources.

Set a node to the maintenance mode

You can set all resources on a specified node to the maintenance mode at one time. In this case, the cluster stops monitoring these resources.

Set a node to the standby mode

After a node is set to the standby mode, resources cannot run on it. In this case, all resources on this node need to be migrated to another node. If no other node is available, these resources must be stopped. At the same time, the cluster stops monitoring operations for this node, except for the operations for which the role parameter is set to Stopped.

With this feature, you can stop a node but keep its resources running on another node.

Set a resource to the maintenance mode

You can set a resource to the maintenance mode. In this case, the cluster stops monitoring this resource. With this feature, you can stop the cluster from monitoring a resource when you adjust services managed by this resource.

Set a resource to the unmanaged mode

You can set a resource to the unmanaged mode by setting the is-managed parameter to false. In this case, the cluster stops managing this resource. This means you can adjust services managed by this resource, but the cluster keeps monitoring this resource and sends alerts if errors occur. If you want to stop the cluster from monitoring a resource when you adjust services managed by this resource, set the resource to the maintenance mode instead.
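
For quick reference, the crm shell commands that correspond to these modes are roughly as follows. This is a sketch based on the crmsh syntax used elsewhere in this guide; the node and resource names are the ones from the examples below, and the exact syntax may vary slightly between crmsh versions.

Set the whole cluster to the maintenance mode, and revert:
# crm configure property maintenance-mode=true
# crm configure property maintenance-mode=false

Set a node to the maintenance mode, and revert:
# crm node maintenance saphana-01
# crm node ready saphana-01

Set a node to the standby mode, and bring it back online:
# crm node standby saphana-01
# crm node online saphana-01

Set a resource to the maintenance mode, and revert:
# crm resource maintenance rsc_SAPHana_HDB true
# crm resource maintenance rsc_SAPHana_HDB false

Set a resource to the unmanaged mode (is-managed=false), and revert:
# crm resource unmanage rsc_SAPHana_HDB
# crm resource manage rsc_SAPHana_HDB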

1. Exception handling for the primary node

Note

When an exception occurs on the primary node, SUSE HAE triggers a primary/secondary switchover by promoting the secondary node (Node B) to the primary node. However, the former primary node (Node A) is still registered as the primary node. Therefore, after Node A is recovered, you need to configure SAP HANA SR, register Node A as the secondary node, and then start Pacemaker.

In the following example, the primary node is saphana-01 while the secondary node is saphana-02.

1.1 View the normal status of SUSE HAE.

Log on to a node. Run the crm status command to view the normal status of SUSE HAE.

# crm status
Stack: corosync
Current DC: saphana-01 (version 1.1.16-4.8-77ea74d) - partition with quorum
Last updated: Mon Apr 15 14:33:22 2019
Last change: Mon Apr 15 14:33:19 2019 by root via crm_attribute on saphana-01

2 nodes configured
6 resources configured

Online: [ saphana-01 saphana-02 ]

Full list of resources:

rsc_sbd (stonith:external/sbd): Started saphana-01
rsc_vip (ocf::heartbeat:IPaddr2):       Started saphana-01
 Master/Slave Set: msl_SAPHana_HDB [rsc_SAPHana_HDB]
     Masters: [ saphana-01 ]
     Slaves: [ saphana-02 ]
 Clone Set: cln_SAPHanaTopology_HDB [rsc_SAPHanaTopology_HDB]
     Started: [ saphana-01 saphana-02 ]

1.2 When an exception occurs on the primary node, SUSE HAE promotes the secondary node to be the primary node.

# crm status
Stack: corosync
Current DC: saphana-02 (version 1.1.16-4.8-77ea74d) - partition with quorum
Last updated: Mon Apr 15 14:40:43 2019
Last change: Mon Apr 15 14:40:41 2019 by root via crm_attribute on saphana-02

2 nodes configured
6 resources configured

Online: [ saphana-02 ]
OFFLINE: [ saphana-01 ]

Full list of resources:

rsc_sbd (stonith:external/sbd): Started saphana-02
rsc_vip (ocf::heartbeat:IPaddr2):       Started saphana-02
 Master/Slave Set: msl_SAPHana_HDB [rsc_SAPHana_HDB]
     Masters: [ saphana-02 ]
     Stopped: [ saphana-01 ]
 Clone Set: cln_SAPHanaTopology_HDB [rsc_SAPHanaTopology_HDB]
     Started: [ saphana-02 ]
     Stopped: [ saphana-01 ]

1.3 After the former primary node is recovered, configure SAP HANA SR and register the node as the secondary node.

Note

You must properly configure SAP HANA SR and the primary and secondary nodes. Incorrect configuration may cause data to be overwritten or lost.
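
Before registering, you can double-check the replication state that the node currently reports. For example, as the SAP HANA instance user, you can run the standard hdbnsutil -sr_state check (shown here as a suggested verification, not a required step); its output includes the current replication mode and the registered site names.

h01adm@saphana-01:/usr/sap/H01/HDB00> hdbnsutil -sr_state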

Log on to the former primary node as the SAP HANA instance user and configure SAP HANA SR.

h01adm@saphana-01:/usr/sap/H01/HDB00> hdbnsutil -sr_register --remoteHost=saphana-02 --remoteInstance=00 --replicationMode=syncmem --name=saphana-01 --operationMode=logreplay
adding site ...
checking for inactive nameserver ...
nameserver saphana-01:30001 not responding.
collecting information ...
updating local ini files ...
done.

1.4 Check the status of the shared block device (SBD).

Ensure that the status of the SBD slot is clear.

# sbd -d /dev/vdc list
0       saphana-01      reset   saphana-02
1       saphana-02      reset   saphana-01
# sbd -d /dev/vdc message saphana-01 clear
# sbd -d /dev/vdc message saphana-02 clear

# sbd -d /dev/vdc list
0       saphana-01      clear   saphana-01
1       saphana-02      clear   saphana-01

1.5 Start Pacemaker. SUSE HAE will automatically start SAP HANA.

# systemctl start pacemaker

The former secondary node (saphana-02) remains the primary node, and saphana-01 now runs as the secondary node. The SUSE HAE status is as follows:

# crm status
Stack: corosync
Current DC: saphana-02 (version 1.1.16-4.8-77ea74d) - partition with quorum
Last updated: Mon Apr 15 15:10:58 2019
Last change: Mon Apr 15 15:09:56 2019 by root via crm_attribute on saphana-02

2 nodes configured
6 resources configured

Online: [ saphana-01 saphana-02 ]

Full list of resources:

rsc_sbd (stonith:external/sbd): Started saphana-02
rsc_vip (ocf::heartbeat:IPaddr2):       Started saphana-02
 Master/Slave Set: msl_SAPHana_HDB [rsc_SAPHana_HDB]
     Masters: [ saphana-02 ]
     Slaves: [ saphana-01 ]
 Clone Set: cln_SAPHanaTopology_HDB [rsc_SAPHanaTopology_HDB]
     Started: [ saphana-01 saphana-02 ]

1.6 Check the status of SAP HANA SR.

1.6.1 Run the built-in Python script of SAP HANA to check the status of SAP HANA SR.

Log on to the current primary node as the SAP HANA instance user to check the status of SAP HANA SR. Ensure that the replication statuses for all processes are ACTIVE.

saphana-02:~ # su - h01adm
h01adm@saphana-02:/usr/sap/H01/HDB00> cdpy
h01adm@saphana-02:/usr/sap/H01/HDB00/exe/python_support> python systemReplicationStatus.py 
| Database | Host       | Port  | Service Name | Volume ID | Site ID | Site Name  | Secondary  | Secondary | Secondary | Secondary  | Secondary     | Replication | Replication | Replication    | 
|          |            |       |              |           |         |            | Host       | Port      | Site ID   | Site Name  | Active Status | Mode        | Status      | Status Details | 
| -------- | ---------- | ----- | ------------ | --------- | ------- | ---------- | ---------- | --------- | --------- | ---------- | ------------- | ----------- | ----------- | -------------- | 
| SYSTEMDB | saphana-02 | 30001 | nameserver   |         1 |       2 | saphana-02 | saphana-01 |     30001 |         1 | saphana-01 | YES           | SYNCMEM     | ACTIVE      |                | 
| H01      | saphana-02 | 30007 | xsengine     |         3 |       2 | saphana-02 | saphana-01 |     30007 |         1 | saphana-01 | YES           | SYNCMEM     | ACTIVE      |                | 
| H01      | saphana-02 | 30003 | indexserver  |         2 |       2 | saphana-02 | saphana-01 |     30003 |         1 | saphana-01 | YES           | SYNCMEM     | ACTIVE      |                |

status system replication site "1": ACTIVE
overall system replication status: ACTIVE

Local System Replication State
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

mode: PRIMARY
site id: 2
site name: saphana-02

1.6.2 Use the SAPHanaSR tool provided by SUSE to check the replication status. Ensure that the value of sync_state for the secondary node is SOK.

saphana-02:~ # SAPHanaSR-showAttr
Global cib-time                 
--------------------------------
global Mon Apr 15 15:17:12 2019 


Hosts      clone_state lpa_h01_lpt node_state op_mode   remoteHost roles                            site       srmode  standby sync_state version                vhost      
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------
saphana-01 DEMOTED     30          online     logreplay saphana-02 4:S:master1:master:worker:master saphana-01 syncmem         SOK        2.00.020.00.1500920972 saphana-01 
saphana-02 PROMOTED    1555312632  online     logreplay saphana-01 4:P:master1:master:worker:master saphana-02 syncmem off     PRIM       2.00.020.00.1500920972 saphana-02

1.7 (Optional) Clean up the failcount.

A resource will be automatically restarted if it fails, but each failure raises the failcount of the resource. If a migration threshold has been set for that resource, the node will no longer be allowed to run the resource when the number of failures has reached the migration threshold. In this case, you need to clean up the failcount manually.

You can use the following syntax to clean up the failcount:

# crm resource cleanup [resource name] [node]

After the rsc_SAPHana_HDB resource of saphana-01 is recovered, you need to run the following command to clean up the failcount:

# crm resource cleanup rsc_SAPHana_HDB saphana-01
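
To inspect the current failcount before (or instead of) cleaning it up, crmsh also provides a failcount subcommand. For example, to show the failcount of rsc_SAPHana_HDB on saphana-01 (a sketch; check crm resource help failcount for the syntax supported by your crmsh version):

# crm resource failcount rsc_SAPHana_HDB show saphana-01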

2. Exception handling for the secondary node

Note

If an exception occurs on the secondary node, the primary node is not affected. In this case, SUSE HAE does not trigger primary/secondary switchover. After you recover the secondary node and start Pacemaker, SUSE HAE will automatically start SAP HANA. The primary and secondary nodes remain unchanged. You do not need to change any configurations.

In the following example, the primary node is saphana-02 while the secondary node is saphana-01.

2.1 View the normal status of SUSE HAE.

Log on to a node. Run the crm status command to view the normal status of SUSE HAE.

# crm status
Stack: corosync
Current DC: saphana-02 (version 1.1.16-4.8-77ea74d) - partition with quorum
Last updated: Mon Apr 15 15:34:52 2019
Last change: Mon Apr 15 15:33:50 2019 by root via crm_attribute on saphana-02

2 nodes configured
6 resources configured

Online: [ saphana-01 saphana-02 ]

Full list of resources:

rsc_sbd (stonith:external/sbd): Started saphana-02
rsc_vip (ocf::heartbeat:IPaddr2):       Started saphana-02
 Master/Slave Set: msl_SAPHana_HDB [rsc_SAPHana_HDB]
     Masters: [ saphana-02 ]
     Slaves: [ saphana-01 ]
 Clone Set: cln_SAPHanaTopology_HDB [rsc_SAPHanaTopology_HDB]
     Started: [ saphana-01 saphana-02 ]

2.2 After the secondary node is recovered, check the SBD status, and then restart Pacemaker.
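
As in step 1.4, it is good practice to confirm that the SBD slot of the recovered node is clear before starting Pacemaker. For example, using the same SBD device as in step 1.4:

# sbd -d /dev/vdc list
# sbd -d /dev/vdc message saphana-01 clear

Then start Pacemaker: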

# systemctl start pacemaker

The primary and secondary nodes remain unchanged. The SUSE HAE status is as follows:

# crm status
Stack: corosync
Current DC: saphana-02 (version 1.1.16-4.8-77ea74d) - partition with quorum
Last updated: Mon Apr 15 15:43:28 2019
Last change: Mon Apr 15 15:43:25 2019 by root via crm_attribute on saphana-01

2 nodes configured
6 resources configured

Online: [ saphana-01 saphana-02 ]

Full list of resources:

rsc_sbd (stonith:external/sbd): Started saphana-02
rsc_vip (ocf::heartbeat:IPaddr2):       Started saphana-02
 Master/Slave Set: msl_SAPHana_HDB [rsc_SAPHana_HDB]
     Masters: [ saphana-02 ]
     Slaves: [ saphana-01 ]
 Clone Set: cln_SAPHanaTopology_HDB [rsc_SAPHanaTopology_HDB]
     Started: [ saphana-01 saphana-02 ]

2.3 Check the status of SAP HANA SR (see step 1.6).

2.4 (Optional) Clean up the failcount (see step 1.7).

3. Shutdown maintenance for primary and secondary nodes

Note

Set the cluster to the maintenance mode. Stop the secondary node and then the primary node.

In the following example, the primary node is saphana-02 while the secondary node is saphana-01.

3.1 View the normal status of SUSE HAE.

Log on to a node. Run the crm status command to view the normal status of SUSE HAE.

# crm status
Stack: corosync
Current DC: saphana-02 (version 1.1.16-4.8-77ea74d) - partition with quorum
Last updated: Mon Apr 15 15:34:52 2019
Last change: Mon Apr 15 15:33:50 2019 by root via crm_attribute on saphana-02

2 nodes configured
6 resources configured

Online: [ saphana-01 saphana-02 ]

Full list of resources:

rsc_sbd (stonith:external/sbd): Started saphana-02
rsc_vip (ocf::heartbeat:IPaddr2):       Started saphana-02
 Master/Slave Set: msl_SAPHana_HDB [rsc_SAPHana_HDB]
     Masters: [ saphana-02 ]
     Slaves: [ saphana-01 ]
 Clone Set: cln_SAPHanaTopology_HDB [rsc_SAPHanaTopology_HDB]
     Started: [ saphana-01 saphana-02 ]

3.2 Set the cluster and primary/secondary resource group to the maintenance mode.

Log on to the primary node, and set the cluster to the maintenance mode.

# crm configure property maintenance-mode=true

Set the primary/secondary resource group to the maintenance mode. In this example, the primary resource is rsc_SAPHana_HDB while the secondary resource is rsc_SAPHanaTopology_HDB.

# crm resource maintenance rsc_SAPHana_HDB true
Performing update of 'maintenance' on 'msl_SAPHana_HDB', the parent of 'rsc_SAPHana_HDB'
Set 'msl_SAPHana_HDB' option: id=msl_SAPHana_HDB-meta_attributes-maintenance name=maintenance=true

# crm resource maintenance rsc_SAPHanaTopology_HDB true
Performing update of 'maintenance' on 'cln_SAPHanaTopology_HDB', the parent of 'rsc_SAPHanaTopology_HDB'
Set 'cln_SAPHanaTopology_HDB' option: id=cln_SAPHanaTopology_HDB-meta_attributes-maintenance name=maintenance=true

3.3 The current SUSE HAE status is as follows:

# crm status
Stack: corosync
Current DC: saphana-02 (version 1.1.16-4.8-77ea74d) - partition with quorum
Last updated: Mon Apr 15 16:02:13 2019
Last change: Mon Apr 15 16:02:11 2019 by root via crm_resource on saphana-02

2 nodes configured
6 resources configured

              *** Resource management is DISABLED ***
  The cluster will not attempt to start, stop or recover services

Online: [ saphana-01 saphana-02 ]

Full list of resources:

rsc_sbd (stonith:external/sbd): Started saphana-02 (unmanaged)
rsc_vip (ocf::heartbeat:IPaddr2):       Started saphana-02 (unmanaged)
 Master/Slave Set: msl_SAPHana_HDB [rsc_SAPHana_HDB] (unmanaged)
     rsc_SAPHana_HDB    (ocf::suse:SAPHana):    Slave saphana-01 (unmanaged)
     rsc_SAPHana_HDB    (ocf::suse:SAPHana):    Master saphana-02 (unmanaged)
 Clone Set: cln_SAPHanaTopology_HDB [rsc_SAPHanaTopology_HDB] (unmanaged)
     rsc_SAPHanaTopology_HDB    (ocf::suse:SAPHanaTopology):    Started saphana-01 (unmanaged)
     rsc_SAPHanaTopology_HDB    (ocf::suse:SAPHanaTopology):    Started saphana-02 (unmanaged)

3.4 Stop SAP HANA on the secondary node and then on the primary node. Stop the ECS instances for shutdown maintenance.

Log on to the secondary node as the SAP HANA instance user and stop SAP HANA. Then, log on to the primary node as the SAP HANA instance user and stop SAP HANA.

saphana-01:~ # su - h01adm
h01adm@saphana-01:/usr/sap/H01/HDB00> HDB stop
hdbdaemon will wait maximal 300 seconds for NewDB services finishing.
Stopping instance using: /usr/sap/H01/SYS/exe/hdb/sapcontrol -prot NI_HTTP -nr 00 -function Stop 400

15.04.2019 16:46:42
Stop
OK
Waiting for stopped instance using: /usr/sap/H01/SYS/exe/hdb/sapcontrol -prot NI_HTTP -nr 00 -function WaitforStopped 600 2


15.04.2019 16:46:54
WaitforStopped
OK
hdbdaemon is stopped.

saphana-02:~ # su - h01adm
h01adm@saphana-02:/usr/sap/H01/HDB00> HDB stop
hdbdaemon will wait maximal 300 seconds for NewDB services finishing.
Stopping instance using: /usr/sap/H01/SYS/exe/hdb/sapcontrol -prot NI_HTTP -nr 00 -function Stop 400

15.04.2019 16:47:05
Stop
OK
Waiting for stopped instance using: /usr/sap/H01/SYS/exe/hdb/sapcontrol -prot NI_HTTP -nr 00 -function WaitforStopped 600 2


15.04.2019 16:47:35
WaitforStopped
OK
hdbdaemon is stopped.

3.5 Start SAP HANA on the primary node and then on the secondary node. Recover the cluster and primary/secondary resource group from the maintenance mode.

Log on to the primary node and start Pacemaker. Then, log on to the secondary node and start Pacemaker.

# systemctl start pacemaker

Recover the cluster and primary/secondary resource group from the maintenance mode.

saphana-02:~ # crm configure property maintenance-mode=false
saphana-02:~ # crm resource maintenance rsc_SAPHana_HDB false
Performing update of 'maintenance' on 'msl_SAPHana_HDB', the parent of 'rsc_SAPHana_HDB'
Set 'msl_SAPHana_HDB' option: id=msl_SAPHana_HDB-meta_attributes-maintenance name=maintenance=false
saphana-02:~ # crm resource maintenance rsc_SAPHanaTopology_HDB false
Performing update of 'maintenance' on 'cln_SAPHanaTopology_HDB', the parent of 'rsc_SAPHanaTopology_HDB'
Set 'cln_SAPHanaTopology_HDB' option: id=cln_SAPHanaTopology_HDB-meta_attributes-maintenance name=maintenance=false

SUSE HAE automatically starts SAP HANA on the primary and secondary nodes. The primary and secondary nodes remain unchanged.

3.6 The current SUSE HAE status is as follows:

# crm status
Stack: corosync
Current DC: saphana-01 (version 1.1.16-4.8-77ea74d) - partition with quorum
Last updated: Mon Apr 15 16:56:49 2019
Last change: Mon Apr 15 16:56:43 2019 by root via crm_attribute on saphana-01

2 nodes configured
6 resources configured

Online: [ saphana-01 saphana-02 ]

Full list of resources:

rsc_sbd (stonith:external/sbd): Started saphana-01
rsc_vip (ocf::heartbeat:IPaddr2):       Started saphana-02
 Master/Slave Set: msl_SAPHana_HDB [rsc_SAPHana_HDB]
     Masters: [ saphana-02 ]
     Slaves: [ saphana-01 ]
 Clone Set: cln_SAPHanaTopology_HDB [rsc_SAPHanaTopology_HDB]
     Started: [ saphana-01 saphana-02 ]

3.7 Check the status of SAP HANA SR (see step 1.6).

3.8 (Optional) Clean up the failcount (see step 1.7).

4. Shutdown maintenance for the primary node

Note

Set the secondary node to the maintenance mode and then to the standby mode.

In the following example, the primary node is saphana-02 while the secondary node is saphana-01.

4.1 View the normal status of SUSE HAE.

Log on to a node. Run the crm status command to view the normal status of SUSE HAE.

# crm status
Stack: corosync
Current DC: saphana-02 (version 1.1.16-4.8-77ea74d) - partition with quorum
Last updated: Mon Apr 15 15:34:52 2019
Last change: Mon Apr 15 15:33:50 2019 by root via crm_attribute on saphana-02

2 nodes configured
6 resources configured

Online: [ saphana-01 saphana-02 ]

Full list of resources:

rsc_sbd (stonith:external/sbd): Started saphana-02
rsc_vip (ocf::heartbeat:IPaddr2):       Started saphana-02
 Master/Slave Set: msl_SAPHana_HDB [rsc_SAPHana_HDB]
     Masters: [ saphana-02 ]
     Slaves: [ saphana-01 ]
 Clone Set: cln_SAPHanaTopology_HDB [rsc_SAPHanaTopology_HDB]
     Started: [ saphana-01 saphana-02 ]

4.2 Set the secondary node to the maintenance mode and then to the standby mode.

In this example, the secondary node is saphana-01. First, set saphana-01 to the maintenance mode.

# crm node maintenance saphana-01

Then, set saphana-01 to the standby mode.

# crm node standby saphana-01

4.3 The current SUSE HAE status is as follows:

# crm status
Stack: corosync
Current DC: saphana-01 (version 1.1.16-4.8-77ea74d) - partition with quorum
Last updated: Mon Apr 15 17:07:56 2019
Last change: Mon Apr 15 17:07:38 2019 by root via crm_attribute on saphana-02

2 nodes configured
6 resources configured

Node saphana-01: standby
Online: [ saphana-02 ]

Full list of resources:

rsc_sbd (stonith:external/sbd): Started saphana-01 (unmanaged)
rsc_vip (ocf::heartbeat:IPaddr2):       Started saphana-02
 Master/Slave Set: msl_SAPHana_HDB [rsc_SAPHana_HDB]
     rsc_SAPHana_HDB    (ocf::suse:SAPHana):    Slave saphana-01 (unmanaged)
     Masters: [ saphana-02 ]
 Clone Set: cln_SAPHanaTopology_HDB [rsc_SAPHanaTopology_HDB]
     rsc_SAPHanaTopology_HDB    (ocf::suse:SAPHanaTopology):    Started saphana-01 (unmanaged)
     Started: [ saphana-02 ]

4.4 Stop SAP HANA on the primary node. Then, stop the ECS instance for shutdown maintenance.

Log on to the primary node as the SAP HANA instance user and stop SAP HANA.

saphana-02:~ # su - h01adm
h01adm@saphana-02:/usr/sap/H01/HDB00> HDB stop
hdbdaemon will wait maximal 300 seconds for NewDB services finishing.
Stopping instance using: /usr/sap/H01/SYS/exe/hdb/sapcontrol -prot NI_HTTP -nr 00 -function Stop 400

15.04.2019 16:47:05
Stop
OK
Waiting for stopped instance using: /usr/sap/H01/SYS/exe/hdb/sapcontrol -prot NI_HTTP -nr 00 -function WaitforStopped 600 2


15.04.2019 16:47:35
WaitforStopped
OK
hdbdaemon is stopped.

4.5 Start SAP HANA on the primary node. Recover the secondary node to the normal status.

Log on to the primary node and start Pacemaker.

# systemctl start pacemaker

If the rsc_sbd resource is not on the primary node, you need to migrate the resource to the primary node.

In this example, the primary node is saphana-02, but the rsc_sbd resource is on saphana-01. Therefore, you need to migrate the resource to saphana-02.
rsc_sbd (stonith:external/sbd): Started saphana-01

# crm resource migrate rsc_sbd saphana-02
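
Note that crm resource migrate moves the resource by adding a location constraint. After rsc_sbd is running on saphana-02, you can remove that constraint so that it does not restrict future failovers, for example:

# crm resource unmigrate rsc_sbd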

The primary and secondary nodes remain unchanged.

# crm status
Stack: corosync
Current DC: saphana-02 (version 1.1.16-4.8-77ea74d) - partition with quorum
Last updated: Mon Apr 15 17:57:56 2019
Last change: Mon Apr 15 17:57:22 2019 by root via crm_attribute on saphana-02

2 nodes configured
6 resources configured

Node saphana-01: standby
Online: [ saphana-02 ]

Full list of resources:

rsc_sbd (stonith:external/sbd): Started saphana-02
rsc_vip (ocf::heartbeat:IPaddr2):       Started saphana-02
 Master/Slave Set: msl_SAPHana_HDB [rsc_SAPHana_HDB]
     rsc_SAPHana_HDB    (ocf::suse:SAPHana):    Slave saphana-01 (unmanaged)
     Masters: [ saphana-02 ]
 Clone Set: cln_SAPHanaTopology_HDB [rsc_SAPHanaTopology_HDB]
     rsc_SAPHanaTopology_HDB    (ocf::suse:SAPHanaTopology):    Started saphana-01 (unmanaged)
     Started: [ saphana-02 ]

Recover the secondary node to the normal status.

saphana-02:~ # crm node ready saphana-01
saphana-02:~ # crm node online saphana-01

SUSE HAE starts SAP HANA on the secondary node. The primary and secondary nodes remain unchanged.

4.6 The current SUSE HAE status is as follows:

# crm status
Stack: corosync
Current DC: saphana-02 (version 1.1.16-4.8-77ea74d) - partition with quorum
Last updated: Mon Apr 15 18:02:33 2019
Last change: Mon Apr 15 18:01:31 2019 by root via crm_attribute on saphana-02

2 nodes configured
6 resources configured

Online: [ saphana-01 saphana-02 ]

Full list of resources:

rsc_sbd (stonith:external/sbd): Started saphana-02
rsc_vip (ocf::heartbeat:IPaddr2):       Started saphana-02
 Master/Slave Set: msl_SAPHana_HDB [rsc_SAPHana_HDB]
     Masters: [ saphana-02 ]
     Slaves: [ saphana-01 ]
 Clone Set: cln_SAPHanaTopology_HDB [rsc_SAPHanaTopology_HDB]
     Started: [ saphana-01 saphana-02 ]

4.7 Check the status of SAP HANA SR (see step 1.6).

4.8 (Optional) Clean up the failcount (see step 1.7).

5. Shutdown maintenance for the secondary node

Note

Set the secondary node to the maintenance mode.

In the following example, the primary node is saphana-02 while the secondary node is saphana-01.

5.1 View the normal status of SUSE HAE.

Log on to a node. Run the crm status command to view the normal status of SUSE HAE.

# crm status
Stack: corosync
Current DC: saphana-02 (version 1.1.16-4.8-77ea74d) - partition with quorum
Last updated: Mon Apr 15 15:34:52 2019
Last change: Mon Apr 15 15:33:50 2019 by root via crm_attribute on saphana-02

2 nodes configured
6 resources configured

Online: [ saphana-01 saphana-02 ]

Full list of resources:

rsc_sbd (stonith:external/sbd): Started saphana-02
rsc_vip (ocf::heartbeat:IPaddr2):       Started saphana-02
 Master/Slave Set: msl_SAPHana_HDB [rsc_SAPHana_HDB]
     Masters: [ saphana-02 ]
     Slaves: [ saphana-01 ]
 Clone Set: cln_SAPHanaTopology_HDB [rsc_SAPHanaTopology_HDB]
     Started: [ saphana-01 saphana-02 ]

5.2 Set the secondary node to the maintenance mode.

# crm node maintenance saphana-01

The current SUSE HAE status is as follows:

Stack: corosync
Current DC: saphana-02 (version 1.1.16-4.8-77ea74d) - partition with quorum
Last updated: Mon Apr 15 18:18:10 2019
Last change: Mon Apr 15 18:17:49 2019 by root via crm_attribute on saphana-01

2 nodes configured
6 resources configured

Node saphana-01: maintenance
Online: [ saphana-02 ]

Full list of resources:

rsc_sbd (stonith:external/sbd): Started saphana-02
rsc_vip (ocf::heartbeat:IPaddr2):       Started saphana-02
 Master/Slave Set: msl_SAPHana_HDB [rsc_SAPHana_HDB]
     rsc_SAPHana_HDB    (ocf::suse:SAPHana):    Slave saphana-01 (unmanaged)
     Masters: [ saphana-02 ]
 Clone Set: cln_SAPHanaTopology_HDB [rsc_SAPHanaTopology_HDB]
     rsc_SAPHanaTopology_HDB    (ocf::suse:SAPHanaTopology):    Started saphana-01 (unmanaged)
     Started: [ saphana-02 ]

5.3 Stop SAP HANA on the secondary node. Then, stop the ECS instance for shutdown maintenance.

Log on to the secondary node as the SAP HANA instance user and stop SAP HANA.

saphana-01:~ # su - h01adm
h01adm@saphana-01:/usr/sap/H01/HDB00> HDB stop
hdbdaemon will wait maximal 300 seconds for NewDB services finishing.
Stopping instance using: /usr/sap/H01/SYS/exe/hdb/sapcontrol -prot NI_HTTP -nr 00 -function Stop 400

15.04.2019 16:47:05
Stop
OK
Waiting for stopped instance using: /usr/sap/H01/SYS/exe/hdb/sapcontrol -prot NI_HTTP -nr 00 -function WaitforStopped 600 2


15.04.2019 16:47:35
WaitforStopped
OK
hdbdaemon is stopped.

5.4 Start SAP HANA on the secondary node. Recover the secondary node to the normal status.

Log on to the secondary node and start Pacemaker.

# systemctl start pacemaker

Recover the secondary node to the normal status.

saphana-02:~ # crm node ready saphana-01

SUSE HAE starts SAP HANA on the secondary node. The primary and secondary nodes remain unchanged.

5.5 The current SUSE HAE status is as follows:

# crm status
Stack: corosync
Current DC: saphana-02 (version 1.1.16-4.8-77ea74d) - partition with quorum
Last updated: Mon Apr 15 18:02:33 2019
Last change: Mon Apr 15 18:01:31 2019 by root via crm_attribute on saphana-02

2 nodes configured
6 resources configured

Online: [ saphana-01 saphana-02 ]

Full list of resources:

rsc_sbd (stonith:external/sbd): Started saphana-02
rsc_vip (ocf::heartbeat:IPaddr2):       Started saphana-02
 Master/Slave Set: msl_SAPHana_HDB [rsc_SAPHana_HDB]
     Masters: [ saphana-02 ]
     Slaves: [ saphana-01 ]
 Clone Set: cln_SAPHanaTopology_HDB [rsc_SAPHanaTopology_HDB]
     Started: [ saphana-01 saphana-02 ]

5.6 Check the status of SAP HANA SR (see step 1.6).

5.7 (Optional) Clean up the failcount (see step 1.7).