Use Virtual Machines to Try Oracle Database 10g Release 2 on Oracle Enterprise Linux for Free

Reviser's note: To help beginners complete this experiment more smoothly, Wilson has added comments throughout this article.

The registration keys for VMware Server 1.0.3 required in this article are as follows:

Registration key 1: 98XY4-54VA4-4216V-4PDZ6

Registration key 2: WH0M5-XW50J-WA4FU-4MTZ3

In addition, you should download an FTP client tool to transfer software from the Windows host to the Linux virtual machines. Ideally, the tool should support SFTP (FTP over SSH2). We recommend FileZilla: http://filezilla.sourceforge.net/

Oracle's Unbreakable Linux program, introduced at Oracle OpenWorld in October 2006, aims to provide enterprise-level support services for Red Hat Linux, deliver bug fixes faster, and significantly reduce support prices. In addition, Oracle's own Enterprise Linux (based on Red Hat Advanced Server 4 Update 4, with additional bug fixes) is available for free download.

Therefore, it is now possible to run Oracle Real Application Clusters (RAC) 10g for free on a home computer, on top of VMware Server (a free virtualization environment provided by VMware).

Wilson note: You can conduct this experiment on Red Hat AS4, CentOS 4.4, or Oracle Unbreakable Linux 4. For beginners, we recommend Oracle Linux to avoid unnecessary trouble during the experiment. You can download the Linux media from www.tuningking.com.

VMware Server allows you to run multiple operating systems on one physical computer. Each virtual machine is an independent operating environment with its own set of virtual components, such as disks, processors, and memory. Virtualization is very useful in a computing environment: it allows you to develop and test software independently on the same physical host without risking data or software damage. VMware software is widely used for server consolidation to reduce total cost of ownership and to speed up application development and testing cycles.

In this guide, you will learn how to install and configure two Enterprise Linux nodes running Oracle RAC 10g Release 2 on VMware Server. Note that this guide is for teaching/evaluation purposes only; neither Oracle nor any other vendor will support this configuration. This guide is divided into the following sections:

  1. Hardware requirements and overview
  2. Configure the first virtual machine
  3. Install and configure Enterprise Linux on the first virtual machine
  4. Create and configure the second virtual machine
  5. Configure Oracle Automatic Storage Management (ASM)
  6. Configure the Oracle Cluster File System (OCFS2)
  7. Install Oracle Clusterware
  8. Install Oracle Database 10g Release 2
  9. Explore the RAC database environment
  10. Test Transparent Application Failover (TAF)
  11. Database backup and recovery
  12. Explore the Oracle Enterprise Manager (OEM) Database Console

1. Hardware requirements and overview

In this guide, you will install the 32-bit Linux operating system. 64-bit guest operating systems are supported only on hosts with one of the following 64-bit processors:

  • AMD Athlon 64 Revision D or later
  • AMD Opteron Revision E or later
  • AMD Turion 64 Revision E or later
  • AMD Sempron, 64-bit-capable Revision D or later
  • Intel EM64T VT-capable processor

If you decide to install a 64-bit guest operating system, make sure your processor is listed above. You also need to ensure that virtualization technology (VT) is enabled in the BIOS; some mainstream manufacturers disable it by default. To verify that your processor is supported, download the processor compatibility check tool from the VMware website, which also provides additional information about processor compatibility.

Allocate at least 700MB of memory to each virtual machine and reserve at least 30GB of disk space for all virtual machines.
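As a rough sanity check (a sketch; the sizes are taken from the disk layout used later in this guide), you can add up the space the two virtual machines will need. The 30GB figure above can be lower than the fully allocated total because the 20GB local disks are thin-provisioned (Allocate all disk space now is deselected for them):

```shell
#!/bin/sh
# Rough capacity check for the two-node setup used in this guide.
# Local disks: 20GB per node; shared disks: 512MB + 3GB + 3GB + 2GB.
LOCAL_GB=$((20 * 2))
SHARED_MB=$((512 + 3072 + 3072 + 2048))
TOTAL_GB=$((LOCAL_GB + SHARED_MB / 1024))
echo "local disks: ${LOCAL_GB}GB, shared disks: ${SHARED_MB}MB, total: ~${TOTAL_GB}GB"
echo "guest memory: $((700 * 2))MB"
```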

Overview of the host operating system environment:

Host Name | Operating System | Processor | Memory | Disk | Network Card

Overview of the guest operating system environment:

Host Name | Operating System | Processor | Memory

Overview of the virtual disk layout:

Virtual Disk on the Host OS | Virtual Disk on the Guest OS | Virtual Device Node | Size (MB) | Description

(To configure shared storage, the guest OS cannot be on the same SCSI bus as the shared storage. The guest OS uses SCSI0 and the shared disks use SCSI1.)

RAC database environment overview:

Host Name | ASM Instance Name | RAC Instance Name | Database Name | Database File Storage | OCR and Voting Disk

For redundancy, you will install an Oracle home directory on each node. The ASM and Oracle RAC instances on a node share the same Oracle home directory.

2. Configure the first virtual machine

To create and configure the first virtual machine, you will add virtual hardware devices such as disks and processors. Before continuing with the installation, create the following Windows folders to store the virtual machines and the shared storage.

D:\>mkdir vm\rac\rac1
D:\>mkdir vm\rac\rac2
D:\>mkdir vm\rac\sharedstorage

Double-click the VMware Server icon on the desktop to start the application:

  1. Press CTRL-N to create a new virtual machine.
  2. Create Virtual Machine Wizard: Click Next.
  3. Select the appropriate configuration:
    1. Virtual machine configuration: Select Custom.
  4. Select a guest operating system:
    1. Guest operating system: Select Linux.
    2. Version: Select Red Hat Enterprise Linux 4.
  5. Name the virtual machine:
    1. Virtual machine name: Enter "rac1".
    2. Location: Enter "d:\vm\rac\rac1".
  6. Set access rights:
    1. Access rights: Select Make this virtual machine private.
  7. Startup/shutdown options:
    1. Virtual machine account: Select User that powers on the virtual machine.
  8. Processor configuration:
    1. Processors: Select One.
  9. Memory for the virtual machine:
    1. Memory: Select 700MB.
  10. Network type:
    1. Network connection: Select Use bridged networking.
  11. Select I/O adapter types:
    1. I/O adapter types: Select LSI Logic.
  12. Select a disk:
    1. Disk: Select Create a new virtual disk.
  13. Select a disk type:
    1. Virtual disk type: Select SCSI (Recommended).
  14. Specify disk capacity:
    1. Disk capacity: Enter "20GB".
    2. Deselect Allocate all disk space now. To save space, you do not have to allocate all the disk space now.
  15. Specify disk file:
    1. Disk file: Enter "localdisk.vmdk".
    2. Click Finish.

Repeat the following nine steps to create four virtual SCSI hard disks: ocfs2disk.vmdk (512MB), asmdisk1.vmdk (3GB), asmdisk2.vmdk (3GB), and asmdisk3.vmdk (2GB).

  1. VMware Server console: Click Edit virtual machine settings.
  2. Virtual machine settings: Click Add.
  3. Add Hardware Wizard: Click Next.
  4. Hardware Type:
    1. Hardware type: Select Hard Disk.
  5. Select a disk:
    1. Disk: Select Create a new virtual disk.
  6. Select a disk type:
    1. Virtual disk type: Select SCSI (Recommended).
  7. Specify disk capacity:
    1. Disk capacity: Enter "0.5GB".
    2. Select Allocate all disk space now. Pre-allocating all the disk space for each virtual shared disk costs space but helps performance: if a shared disk has to grow rapidly (for example, during Oracle database creation or periods of heavy DML activity), the virtual machines may hang intermittently for short periods or, in rare cases, crash.
  8. Specify disk file:
    1. Disk file: Enter "d:\vm\rac\sharedstorage\ocfs2disk.vmdk".
    2. Click Advanced.
  9. Add Hardware Wizard:
    1. Virtual device node: Select SCSI 1:0.
    2. Mode: Select Independent and Persistent for all shared disks.
    3. Click Finish.

Finally, add an additional virtual NIC for the private interconnect, and remove the floppy drive (if present).

  1. VMware Server console: Click Edit virtual machine settings.
  2. Virtual machine settings: Click Add.
  3. Add Hardware Wizard: Click Next.
  4. Hardware Type:
    1. Hardware type: Select Ethernet Adapter.
  5. Network Type:
    1. Network connection: Select Host-only: A private network shared with the host.
    2. Click Finish.
  6. Virtual machine settings:
    1. Select Floppy and click Remove.
  7. Virtual machine settings: Click OK.

Modify the virtual machine configuration file. Additional parameters are needed to enable disk sharing between the two virtual RAC nodes. Open the configuration file d:\vm\rac\rac1\Red Hat Enterprise Linux 4.vmx and make sure it contains the parameters listed below; the entries you must add are the disk-sharing settings (disk.locking, diskLib.dataCacheMaxSize) and the scsi1.* lines.

config.version = "8"
virtualHW.version = "4"
scsi0.present = "TRUE"
scsi0.virtualDev = "lsilogic"
memsize = "700"
scsi0:0.present = "TRUE"
scsi0:0.fileName = "localdisk.vmdk"
ide1:0.present = "TRUE"
ide1:0.fileName = "auto detect"
ide1:0.deviceType = "cdrom-raw"
floppy0.fileName = "A:"
Ethernet0.present = "TRUE"
displayName = "rac1"
guestOS = "rhel4"
priority.grabbed = "normal"
priority.ungrabbed = "normal"
disk.locking = "FALSE"
diskLib.dataCacheMaxSize = "0"
scsi1.sharedBus = "virtual"
scsi1.present = "TRUE"
scsi1:0.present = "TRUE"
scsi1:0.fileName = "D:\vm\rac\sharedstorage\ocfs2disk.vmdk"
scsi1:0.mode = "independent-persistent"
scsi1:0.deviceType = "disk"
scsi1:1.present = "TRUE"
scsi1:1.fileName = "D:\vm\rac\sharedstorage\asmdisk1.vmdk"
scsi1:1.mode = "independent-persistent"
scsi1:1.deviceType = "disk"
scsi1:2.present = "TRUE"
scsi1:2.fileName = "D:\vm\rac\sharedstorage\asmdisk2.vmdk"
scsi1:2.mode = "independent-persistent"
scsi1:2.deviceType = "disk"
scsi1:3.present = "TRUE"
scsi1:3.fileName = "D:\vm\rac\sharedstorage\asmdisk3.vmdk"
scsi1:3.mode = "independent-persistent"
scsi1:3.deviceType = "disk"
scsi1.virtualDev = "lsilogic"
ide1:0.autodetect = "TRUE"
floppy0.present = "FALSE"
Ethernet1.present = "TRUE"
Ethernet1.connectionType = "hostonly"
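Before powering rac1 back on, it is worth confirming that the disk-sharing parameters actually made it into the .vmx file. A minimal sketch: it checks an inline sample here, but on a real system you would point VMX at your actual file (for this guide, d:\vm\rac\rac1\Red Hat Enterprise Linux 4.vmx):

```shell
#!/bin/sh
# Check a .vmx file for the settings that make the SCSI1 disks shareable.
# An inline sample is used here; on a real system, point VMX at your file.
VMX=$(mktemp)
cat > "$VMX" <<'EOF'
disk.locking = "FALSE"
diskLib.dataCacheMaxSize = "0"
scsi1.sharedBus = "virtual"
scsi1:0.mode = "independent-persistent"
EOF
missing=0
for setting in 'disk.locking = "FALSE"' \
               'scsi1.sharedBus = "virtual"' \
               'scsi1:0.mode = "independent-persistent"'; do
    grep -qF "$setting" "$VMX" || { echo "MISSING: $setting"; missing=1; }
done
[ "$missing" -eq 0 ] && echo "all disk-sharing settings present"
rm -f "$VMX"
```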

3. Install and configure Enterprise Linux on the first virtual machine

Download Enterprise Linux from the Oracle website and unzip the files:

  • Enterprise-R4-U4-i386-disc1.iso
  • Enterprise-R4-U4-i386-disc2.iso
  • Enterprise-R4-U4-i386-disc3.iso
  • Enterprise-R4-U4-i386-disc4.iso

Wilson note: Download these media from http://www.tuningking.com/oraclelinux

  1. In the VMware Server console, double-click the CD-ROM device on the right and select the ISO image of the first disc, Enterprise-R4-U4-i386-disc1.iso.
  2. VMware Server console:
    • Click Start this virtual machine.
  3. Press Enter to install in graphical mode.
  4. Skip the media test and start the installation.
  5. Welcome to Enterprise Linux: Click Next.
  6. Language selection: <select your language preference>.
  7. Keyboard configuration: <select your keyboard preference>.
  8. Installation type: Custom.
  9. Disk partitioning setup: Manually partition with Disk Druid.
    • Warning: Click Yes. Each device (sda, sdb, sdc, sdd, and sde) will be initialized.
  10. Disk setup: Allocate the disk space on the sda drive by double-clicking its free space and creating the mount points (/ and /u01) and the swap space on /dev/sda. You will configure the remaining drives for OCFS2 and ASM later.
    • Add partition (/):
      • Mount point: /
      • File system type: ext3
      • Start cylinder: 1
      • End cylinder: 910
    • Add partition (swap):
      • File system type: swap
      • Start cylinder: 911
      • End cylinder: 1170
    • Add partition (/u01):
      • Mount point: /u01
      • File system type: ext3
      • Start cylinder: 1171
      • End cylinder: 2610
  1. Boot loader configuration: Leave only the default /dev/sda1 option selected, and leave all other options unselected.
  2. Network configuration:
    1. Network devices
      • Select and edit eth0:
        1. Deselect Configure Using DHCP.
        2. Select Activate on boot.
        3. IP address: Enter "192.168.2.131". /* Wilson note: This NIC uses bridged mode. The IP address must be on the same subnet as the IP address of your Windows host, so that the Windows host can reach the Linux virtual machine and transfer the Oracle media to it. For example, if your Windows host's IP address is 192.168.1.5, use 192.168.1.xxx here instead of 192.168.2.xxx. */
        4. Netmask: Enter "255.255.255.0".
      • Select and edit eth1:
        1. Deselect Configure Using DHCP.
        2. Select Activate on boot.
        3. IP address: Enter "10.10.10.31".
        4. Netmask: Enter "255.255.255.0".
    2. Hostname
      • Select manually and enter "rac1.mycorpdomain.com".
    3. Miscellaneous settings
      • Gateway: Enter "192.168.2.1".
      • Primary DNS: <optional>
      • Secondary DNS: <optional>
  3. Firewall configuration:
    1. Select No firewall. If the firewall is enabled, you may encounter the error "mount.ocfs2: Transport endpoint is not connected while mounting" when you later attempt to mount the OCFS2 file system.
    2. Enable SELinux?: Active.
  4. Warning - No Firewall: Click Proceed.
  5. Additional language support: <select the desired languages>.
  6. Time zone selection: <select your time zone>.
  7. Set root password: <enter your root password>.
  8. Package group selection:
    1. Select X Window System.
    2. Select GNOME Desktop Environment.
    3. Select Editors.
    4. Select Graphical Internet.
    5. Select Text-based Internet.
    6. Select Office/Productivity.
    7. Select Sound and Video.
    8. Select Graphics.
    9. Select Server Configuration Tools.
    10. Select FTP Server.
    11. Select Legacy Network Server.
      • Click Details.
        1. Select rsh-server.
        2. Select telnet-server.
    12. Select Development Tools.
    13. Select Legacy Software Development.
    14. Select Administration Tools.
    15. Select System Tools.
      • Click Details. In addition to the packages selected by default, select the following packages.
        1. Select ocfs2-2.6.9-42.0.0.0.1.EL (driver for the UP kernel) or ocfs2-2.6.9-42.0.0.0.1.ELsmp (driver for the SMP kernel).
        2. Select ocfs2-tools.
        3. Select ocfs2console.
        4. Select oracleasm-2.6.9-42.0.0.0.1.EL (driver for the UP kernel) or oracleasm-2.6.9-42.0.0.0.1.ELsmp (driver for the SMP kernel).
        5. Select sysstat.
    16. Select Printing Support.
  9. About to install: Click Next.
  10. Required install media: Click Continue.
  11. Changing the CD-ROM: In the VMware Server console, press CTRL-D to display the Virtual Machine Settings. Click the CD-ROM device and select the ISO image of the second disc, Enterprise-R4-U4-i386-disc2.iso, and later the ISO image of the third disc, Enterprise-R4-U4-i386-disc3.iso.
  12. When the installation ends:
    1. In the VMware Server console, press CTRL-D to display the Virtual Machine Settings. Click the CD-ROM device and select Use physical drive.
    2. Click Reboot.
  13. Welcome page: Click Next.
  14. License Agreement: Select Yes, I agree to the License Agreement.
  15. Date and Time: Set the date and time.
  16. Display: <select your desired resolution>.
  17. System User: Leave the fields blank and click Next.
  18. Additional CDs: Click Next.
  19. Finish Setup: Click Next.

Congratulations, you have installed Enterprise Linux on VMware Server!

Install VMware Tools. VMware Tools is required to synchronize the time on the guest and host machines.

Log on to the VMware console as the root user.

  1. Click VM, and then select Install VMware Tools.
  2. rac1 - Virtual Machine: Click Install.
  3. Double-click the VMware Tools icon on the desktop.
  4. cdrom: Double-click VMwareTools-1.0.1-29996.i386.rpm.
  5. Completed System Preparation: Click Continue.
  6. Open a terminal and run vmware-config-tools.pl.
Synchronize the guest OS time with the host OS. When installing the Oracle Clusterware and Oracle database software, the Oracle installer first installs the software on the local node and then copies it remotely to the remote node. If the date and time of the two RAC nodes are not synchronized, you may receive an error similar to the following:
"/bin/tar: ./inventory/Components21/oracle.ordim.server/10.2.0.1.0: time
stamp  2006-11-04 06:24:04 is 25 s in the future"
To ensure a successful Oracle RAC installation, the time on the virtual machines must be synchronized with the time on the host. Perform the following steps as the root user to synchronize the time.
  1. Run the vmware-toolbox command to display the VMware Tools Properties window. /* Wilson note: vmware-toolbox is an executable; simply type vmware-toolbox as root in a terminal window to run it. */ On the Options tab, select Time synchronization between the virtual machine and the host operating system. The parameter tools.syncTime = "TRUE" is appended to the virtual machine configuration file d:\vm\rac\rac1\Red Hat Enterprise Linux 4.vmx.
  2. Edit /boot/grub/grub.conf and add the options "clock=pit nosmp noapic nolapic" to the lines that read kernel /boot/. Below, the options have been added to both kernels; you only need to change the line for your specific kernel.
    #boot=/dev/sda
    default=0
    timeout=5
    splashimage=(hd0,0)/boot/grub/splash.xpm.gz
    hiddenmenu
    title Enterprise (2.6.9-42.0.0.0.1.ELsmp)
            root (hd0,0)
            kernel /boot/vmlinuz-2.6.9-42.0.0.0.1.ELsmp ro root=LABEL=/ rhgb quiet clock=pit nosmp noapic nolapic
            initrd /boot/initrd-2.6.9-42.0.0.0.1.ELsmp.img
    title Enterprise-up (2.6.9-42.0.0.0.1.EL)
            root (hd0,0)
            kernel /boot/vmlinuz-2.6.9-42.0.0.0.1.EL ro root=LABEL=/ rhgb quiet clock=pit nosmp noapic nolapic
            initrd /boot/initrd-2.6.9-42.0.0.0.1.EL.img
        
  3. Reboot RAC1.
    # reboot
Create the oracle user. As the root user, run:
# groupadd oinstall
# groupadd dba
# mkdir -p /export/home/oracle /ocfs
# useradd -d /export/home/oracle -g oinstall -G dba -s /bin/ksh oracle
# chown oracle:dba /export/home/oracle /u01
# passwd oracle
New Password:
Re-enter new Password:
passwd: password successfully changed for oracle
Create the oracle user environment file.

/export/home/oracle/.profile

export PS1="`/bin/hostname -s`-> "
export EDITOR=vi
export ORACLE_SID=devdb1
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=$ORACLE_BASE/product/10.2.0/db_1
export ORA_CRS_HOME=$ORACLE_BASE/product/10.2.0/crs_1
export LD_LIBRARY_PATH=$ORACLE_HOME/lib
export PATH=$ORACLE_HOME/bin:$ORA_CRS_HOME/bin:/bin:
/usr/bin:/usr/sbin:/usr/local/bin:/usr/X11R6/bin
umask 022
Create the file system directory structure. As the oracle user, run:
rac1-> mkdir -p $ORACLE_BASE/admin
rac1-> mkdir -p $ORACLE_HOME
rac1-> mkdir -p $ORA_CRS_HOME
rac1-> mkdir -p /u01/oradata/devdb

Raise the shell limits for the oracle user. Use a text editor to add the lines below to /etc/security/limits.conf, /etc/pam.d/login, and /etc/profile. Additional information is available in the documentation.

/etc/security/limits.conf

oracle soft nproc 2047
oracle hard nproc 16384
oracle soft nofile 1024
oracle hard nofile 65536
/etc/pam.d/login
session required /lib/security/pam_limits.so
/etc/profile
if [ $USER = "oracle" ]; then
    if [ $SHELL = "/bin/ksh" ]; then
        ulimit -p 16384
        ulimit -n 65536
    else
        ulimit -u 16384 -n 65536
    fi
fi
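The /etc/profile fragment above picks a ulimit invocation based on the login shell (ksh spells the process limit -p, while other shells use -u). A standalone sketch of the same branch, with $USER and $SHELL mocked so it can be run anywhere:

```shell
#!/bin/sh
# Mocked demonstration of the /etc/profile branch for the oracle user.
USER="oracle"; SHELL="/bin/ksh"   # mocked; normally set by the login process
if [ "$USER" = "oracle" ]; then
    if [ "$SHELL" = "/bin/ksh" ]; then
        RESULT="ulimit -p 16384; ulimit -n 65536"   # ksh: -p is max processes
    else
        RESULT="ulimit -u 16384 -n 65536"           # bash: -u is max processes
    fi
fi
echo "would run: $RESULT"
```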
Install the additional Enterprise Linux packages. The following packages are required for the Oracle software. If you have installed 64-bit Enterprise Linux, the installer should have installed these packages already.
  • libaio-0.3.105-2.i386.rpm
  • openmotif21-2.1.30-11.RHEL4.6.i386.rpm

Extract the packages from the ISO CDs and install them as the root user.

# ls
libaio-0.3.105-2.i386.rpm  openmotif21-2.1.30-11.RHEL4.6.i386.rpm
#
# rpm -Uvh *.rpm
warning: libaio-0.3.105-2.i386.rpm: V3 DSA signature: NOKEY, key ID b38a8516
Preparing...
########################################### [100%]
1:openmotif21
########################################### [ 50%]
2:libaio
########################################### [100%]
Configure the kernel parameters. Use a text editor to add the lines below to /etc/sysctl.conf. To make the changes take effect immediately, run /sbin/sysctl -p.
# more  /etc/sysctl.conf
kernel.shmall                = 2097152
kernel.shmmax                = 2147483648
kernel.shmmni                = 4096
kernel.sem                   = 250 32000 100 128
fs.file-max                  = 65536
net.ipv4.ip_local_port_range = 1024 65000
net.core.rmem_default        = 1048576
net.core.rmem_max            = 1048576
net.core.wmem_default        = 262144
net.core.wmem_max            = 262144
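Note that kernel.shmmax is expressed in bytes while kernel.shmall is in pages (4096 bytes on x86), so it is easy to mis-size one of them. A quick arithmetic check of the values above:

```shell
#!/bin/sh
# kernel.shmmax is in bytes; kernel.shmall is in pages (4096 bytes on x86).
PAGE_SIZE=4096
SHMMAX=2147483648        # 2GB: largest single shared memory segment
SHMALL=2097152           # total shared memory allowed, in pages
SHMALL_BYTES=$((SHMALL * PAGE_SIZE))
echo "shmall covers $SHMALL_BYTES bytes ($((SHMALL_BYTES / 1073741824))GB)"
[ "$SHMALL_BYTES" -ge "$SHMMAX" ] && echo "shmall >= shmmax: OK"
```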
Modify the /etc/hosts file.
# more /etc/hosts
127.0.0.1               localhost
192.168.2.131           rac1.mycorpdomain.com        rac1
192.168.2.31            rac1-vip.mycorpdomain.com    rac1-vip
10.10.10.31             rac1-priv.mycorpdomain.com   rac1-priv
192.168.2.132           rac2.mycorpdomain.com        rac2
192.168.2.32            rac2-vip.mycorpdomain.com    rac2-vip
10.10.10.32             rac2-priv.mycorpdomain.com   rac2-priv
Configure the hangcheck-timer kernel module. The hangcheck-timer kernel module monitors the health of the system and restarts a failing RAC node. It uses two parameters, hangcheck_tick (which defines how often the system is checked) and hangcheck_margin (the maximum hang delay tolerated before a RAC node is reset), to determine whether a node has failed.

Add the following line to /etc/modprobe.conf to set the hangcheck kernel module parameters.

/etc/modprobe.conf

options hangcheck-timer hangcheck_tick=30 hangcheck_margin=180

To load the module immediately, run "modprobe -v hangcheck-timer".
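With these settings, a hung node is reset no later than hangcheck_tick + hangcheck_margin seconds after it stops responding; as a rule of thumb, the Oracle CSS misscount should exceed this sum. The arithmetic:

```shell
#!/bin/sh
# Worst-case time before a hung node is reset by hangcheck-timer.
TICK=30      # hangcheck_tick: how often the system is checked (seconds)
MARGIN=180   # hangcheck_margin: maximum tolerated hang (seconds)
RESET_WINDOW=$((TICK + MARGIN))
echo "a hung node is reset within ${RESET_WINDOW}s"
```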

Create disk partitions for OCFS2 and Oracle ASM. Prepare a set of raw disks for OCFS2 (/dev/sdb) and for Oracle ASM (/dev/sdc, /dev/sdd, /dev/sde).

On rac1, as the root user:

# fdisk /dev/sdb

Command (m for help): n
Command action
e   extended
p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-512, default 1):
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-512, default 512):
Using default value 512
Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
Syncing disks.

# fdisk /dev/sdc

Command (m for help): n
Command action
e   extended
p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-391, default 1):
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-391, default 391):
Using default value 391
Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
Syncing disks.

# fdisk /dev/sdd

Command (m for help): n
Command action
e   extended
p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-391, default 1):
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-391, default 391):
Using default value 391
Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
Syncing disks.

# fdisk /dev/sde

Command (m for help): n
Command action
e   extended
p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-261, default 1):
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-261, default 261):
Using default value 261
Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
Syncing disks.

# fdisk -l

Disk /dev/sda: 21.4 GB, 21474836480 bytes
255 heads, 63 sectors/track, 2610 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1         910     7309543+  83  Linux
/dev/sda2             911        1170     2088450   82  Linux swap
/dev/sda3            1171        2610    11566800   83  Linux
Disk /dev/sdb: 536 MB, 536870912 bytes
64 heads, 32 sectors/track, 512 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes
Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1         512      524272   83  Linux
Disk /dev/sdc: 3221 MB, 3221225472 bytes
255 heads, 63 sectors/track, 391 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot      Start         End      Blocks   Id  System
/dev/sdc1               1         391     3140676   83  Linux
Disk /dev/sdd: 3221 MB, 3221225472 bytes
255 heads, 63 sectors/track, 391 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot      Start         End      Blocks   Id  System
/dev/sdd1               1         391     3140676   83  Linux
Disk /dev/sde: 2147 MB, 2147483648 bytes
255 heads, 63 sectors/track, 261 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot      Start         End      Blocks   Id  System
/dev/sde1               1         261     2096451   83  Linux
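All four fdisk sessions above use the same keystroke sequence: n (new partition), p (primary), 1 (partition number), two defaults for the first and last cylinders, then w (write). A sketch that prints that sequence so it could be fed to fdisk non-interactively; double-check the device names before piping it in for real:

```shell
#!/bin/sh
# Keystrokes used in each fdisk session above:
#   n (new), p (primary), 1 (partition number),
#   two empty lines (default first/last cylinder), w (write).
make_fdisk_input() {
    printf 'n\np\n1\n\n\nw\n'
}
for disk in /dev/sdb /dev/sdc /dev/sdd /dev/sde; do
    echo "# would run: make_fdisk_input | fdisk $disk"
done
```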
Install the oracleasmlib package. Download the ASM library from OTN and install the ASM RPM as the root user.
 # rpm -Uvh oracleasmlib-2.0.2-1.i386.rpm
Preparing...
########################################### [100%]
1:oracleasmlib
########################################### [100%]

At this stage, you should have the following ASM packages installed.

[root@rac1 swdl]# rpm -qa | grep oracleasm
oracleasm-support-2.0.3-2
oracleasm-2.6.9-42.0.0.0.1.ELsmp-2.0.3-2
oracleasmlib-2.0.2-1
Map raw devices to the ASM disks. Raw device mapping is required only if you plan to create ASM disks using standard Linux I/O. An alternative is to create ASM disks using the ASM library driver provided by Oracle; later, you will use the ASM library driver to configure the ASM disks.

Perform the following tasks to map the raw devices to the shared partitions created earlier. The raw devices must be bound to the block devices each time a cluster node boots.

Add the following lines to /etc/sysconfig/rawdevices.

/etc/sysconfig/rawdevices

/dev/raw/raw1 /dev/sdc1
/dev/raw/raw2 /dev/sdd1
/dev/raw/raw3 /dev/sde1
To make the mapping take effect immediately, run the following commands as the root user:
# /sbin/service rawdevices restart
Assigning devices:
/dev/raw/raw1  -->   /dev/sdc1
/dev/raw/raw1:  bound to major 8, minor 33
/dev/raw/raw2  -->   /dev/sdd1
/dev/raw/raw2:  bound to major 8, minor 49
/dev/raw/raw3  -->   /dev/sde1
/dev/raw/raw3:  bound to major 8, minor 65
done
# chown oracle:dba /dev/raw/raw[1-3]
# chmod 660 /dev/raw/raw[1-3]
# ls -lat /dev/raw/raw*
crw-rw----  1 oracle dba 162, 3 Nov  4 07:04 /dev/raw/raw3
crw-rw----  1 oracle dba 162, 2 Nov  4 07:04 /dev/raw/raw2
crw-rw----  1 oracle dba 162, 1 Nov  4 07:04 /dev/raw/raw1

Run as the oracle user:

rac1-> ln -sf /dev/raw/raw1 /u01/oradata/devdb/asmdisk1
rac1-> ln -sf /dev/raw/raw2 /u01/oradata/devdb/asmdisk2
rac1-> ln -sf /dev/raw/raw3 /u01/oradata/devdb/asmdisk3

Modify /etc/udev/permissions.d/50-udev.permissions. The raw devices are remapped at boot, and by default their ownership reverts to the root user at that point. If the owner is not the oracle user, ASM will have problems accessing the shared partitions. Comment out the original line "raw/*:root:disk:0660" in /etc/udev/permissions.d/50-udev.permissions and add a new line "raw/*:oracle:dba:0660".

/etc/udev/permissions.d/50-udev.permissions

# raw devices
ram*:root:disk:0660
#raw/*:root:disk:0660
raw/*:oracle:dba:0660
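The manual edit above can also be scripted with sed. This sketch demonstrates it on a temporary copy of the file; on the real system the target is /etc/udev/permissions.d/50-udev.permissions:

```shell
#!/bin/sh
# Comment out the root-owned raw-device rule and append the oracle-owned one.
# Demonstrated on a temporary copy; the real file is
# /etc/udev/permissions.d/50-udev.permissions.
F=$(mktemp)
cat > "$F" <<'EOF'
# raw devices
ram*:root:disk:0660
raw/*:root:disk:0660
EOF
sed -i 's|^raw/\*:root:disk:0660|#&|' "$F"    # prefix the old rule with '#'
echo 'raw/*:oracle:dba:0660' >> "$F"          # add the oracle-owned rule
RESULT=$(cat "$F")
echo "$RESULT"
rm -f "$F"
```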

4. Create and configure a second virtual machine

To create the second virtual machine, simply shut down the first virtual machine, copy all of the files in d:\vm\rac\rac1 to d:\vm\rac\rac2, and then change a few configurations.

Modify the network configuration.
  1. As the root user on rac1, run: # shutdown -h now
  2. Copy all files in the rac1 folder to rac2: D:\>copy d:\vm\rac\rac1 d:\vm\rac\rac2
  3. In the VMware Server console, press CTRL-O to open the second virtual machine, d:\rac\rac2\Red Hat Enterprise Linux 4.vmx.
  4. VMware Server console:
    • Rename the virtual machine from rac1 to rac2: right-click the new rac1 tab and select Settings.
      • Select the Options tab.
        1. Virtual machine name: Enter "rac2".
    • Click Start this virtual machine to start rac2, leaving rac1 powered off.
    • rac2 - Virtual Machine: Select Create a new identifier.
  5. Log in as the root user and run system-config-network to modify the network configuration.

    IP addresses: Double-click each Ethernet device and use the following table to make the necessary changes.

    Device | IP Address | Subnet Mask | Default Gateway

    MAC address: Navigate to the Hardware Device tab and probe for the new MAC address of each Ethernet device.

    Hostname and DNS: Use the following table to make the necessary changes on the DNS tab, and press CTRL-S to save.

    Host Name | Primary DNS | Secondary DNS | DNS Search Path

    Finally, activate each Ethernet device.

Modify /etc/hosts. Add the following entry to /etc/hosts.

127.0.0.1 localhost

(Keep the hostname off the loopback line; otherwise, VIPCA will attempt to use the loopback address later during the Oracle Clusterware installation.)

Modify/export/home/oracle/.profile. Replace the value of ORACLE_SID with devdb2.

Establish user equivalence with SSH. During the Cluster Ready Services (CRS) and RAC installations, Oracle Universal Installer (OUI) must be able to copy the software to all RAC nodes as the oracle user without being prompted for a password. In Oracle 10g, you can use ssh instead of rsh for this.

To establish user equivalence, generate the oracle user's public and private keys on both nodes. Power on rac1 and perform the following tasks on both nodes. Run on rac1:

rac1-> mkdir ~/.ssh
rac1-> chmod 700 ~/.ssh
rac1-> ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/export/home/oracle/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /export/home/oracle/.ssh/id_rsa.
Your public key has been saved in /export/home/oracle/.ssh/id_rsa.pub.
The key fingerprint is:
87:54:4f:92:ba:ed:7b:51:5d:1d:59:5b:f9:44:da:b6 oracle@rac1.mycorpdomain.com
rac1-> ssh-keygen -t dsa
Generating public/private dsa key pair.
Enter file in which to save the key (/export/home/oracle/.ssh/id_dsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /export/home/oracle/.ssh/id_dsa.
Your public key has been saved in /export/home/oracle/.ssh/id_dsa.pub.
The key fingerprint is:
31:76:96:e6:fc:b7:25:04:fd:70:42:04:1f:fc:9a:26 oracle@rac1.mycorpdomain.com

Run on rac2

rac2-> mkdir ~/.ssh
rac2-> chmod 700 ~/.ssh
rac2-> ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/export/home/oracle/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /export/home/oracle/.ssh/id_rsa.
Your public key has been saved in /export/home/oracle/.ssh/id_rsa.pub.
The key fingerprint is:
29:5a:35:ac:0a:03:2c:38:22:3c:95:5d:68:aa:56:66 oracle@rac2.mycorpdomain.com
rac2-> ssh-keygen -t dsa
Generating public/private dsa key pair.
Enter file in which to save the key (/export/home/oracle/.ssh/id_dsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /export/home/oracle/.ssh/id_dsa.
Your public key has been saved in /export/home/oracle/.ssh/id_dsa.pub.
The key fingerprint is:
4c:b2:5a:8d:56:0f:dc:7b:bc:e0:cd:3b:8e:b9:5c:7c oracle@rac2.mycorpdomain.com
Run on rac1:
rac1-> cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
rac1-> cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
rac1-> ssh rac2 cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
The authenticity of host 'rac2 (192.168.2.132)' can't be established.
RSA key fingerprint is 63:d3:52:d4:4d:e2:cb:ac:8d:4a:66:9f:f1:ab:28:1f.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'rac2,192.168.2.132' (RSA) to the list of known hosts.
oracle@rac2's password:
rac1-> ssh rac2 cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
oracle@rac2's password:
rac1-> scp ~/.ssh/authorized_keys rac2:~/.ssh/authorized_keys
oracle@rac2's password:
authorized_keys                           100% 1716     1.7KB/s   00:00
Test the connection on each node. Verify that the system does not prompt you to enter a password when you run the following command again.
ssh rac1 date
ssh rac2 date
ssh rac1-priv date
ssh rac2-priv date
ssh rac1.mycorpdomain.com date
ssh rac2.mycorpdomain.com date
ssh rac1-priv.mycorpdomain.com date
ssh rac2-priv.mycorpdomain.com date
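The eight checks above lend themselves to a loop. A minimal dry-run sketch that only prints each ssh invocation (`-o BatchMode=yes` is a standard OpenSSH option that makes ssh fail instead of prompting, which is a handy way to detect a broken equivalence setup):

```shell
# Hosts and domain from this setup; dry run: print the eight ssh checks
# rather than executing them.
DOMAIN=mycorpdomain.com
targets=""
for node in rac1 rac2; do
    for alias in $node $node-priv; do
        targets="$targets $alias $alias.$DOMAIN"
    done
done

for host in $targets; do
    echo "ssh -o BatchMode=yes $host date"
done
```

Remove the `echo` (keeping the `ssh` command) to actually run the checks.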

5. Configure Oracle Automatic Storage Management (ASM)

Oracle ASM is tightly integrated with the Oracle database and works with Oracle's suite of data management tools. It simplifies database storage management and delivers the I/O performance of raw disks.

Configure ASMLib. Configure ASMLib on both nodes as the root user.
# /etc/init.d/oracleasm configure
Configuring the Oracle ASM library driver.
This will configure the on-boot properties of the Oracle ASM library
driver.  The following questions will determine whether the driver is
loaded on boot and what permissions it will have.  The current values
will be shown in brackets ('[]').  Hitting <ENTER> without typing an
answer will keep that current value.  Ctrl-C will abort.
Default user to own the driver interface []: oracle
Default group to own the driver interface []: dba
Start Oracle ASM library driver on boot (y/n) [n]: y
Fix permissions of Oracle ASM disks on boot (y/n) [y]: y
Writing Oracle ASM library driver configuration:           [  OK  ]
Loading module "oracleasm":                                [  OK  ]
Mounting ASMlib driver filesystem:                         [  OK  ]
Scanning system for ASM disks:                             [  OK  ]
Create the ASM disks. Create the ASM disks on any one node as the root user.
# /etc/init.d/oracleasm createdisk VOL1 /dev/sdc1
Marking disk "/dev/sdc1" as an ASM disk:                   [  OK  ]
# /etc/init.d/oracleasm createdisk VOL2 /dev/sdd1
Marking disk "/dev/sdd1" as an ASM disk:                   [  OK  ]
# /etc/init.d/oracleasm createdisk VOL3 /dev/sde1
Marking disk "/dev/sde1" as an ASM disk:                   [  OK  ]
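The three createdisk calls follow a simple volume-to-partition mapping. A hypothetical dry-run sketch that prints the commands for each pair instead of executing them as root:

```shell
# Volume/partition pairs from this setup; dry run only.
asm_disks="VOL1:/dev/sdc1 VOL2:/dev/sdd1 VOL3:/dev/sde1"

for pair in $asm_disks; do
    echo "/etc/init.d/oracleasm createdisk ${pair%%:*} ${pair#*:}"
done
```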
Verify that the ASM disks are visible from every node.
# /etc/init.d/oracleasm scandisks
Scanning system for ASM disks:                      [  OK  ]
# /etc/init.d/oracleasm listdisks
VOL1
VOL2
VOL3

6. Configure the Oracle Cluster File System (OCFS2)

OCFS2 is a general-purpose cluster file system developed by Oracle and integrated into the Enterprise Linux kernel. It lets all nodes share files on the cluster file system concurrently, eliminating the need to manage raw devices. Here, you will host the OCR and voting disk on the OCFS2 file system. Additional information about OCFS2 is available in the OCFS2 User's Guide.

During the Enterprise Linux installation, you should have installed the OCFS2 RPMs. Verify that the RPMs are installed on both nodes.

rac1-> rpm -qa | grep ocfs
ocfs2-tools-1.2.2-2
ocfs2console-1.2.2-2
ocfs2-2.6.9-42.0.0.0.1.ELsmp-1.2.3-2
Create the OCFS2 configuration file. As the root user on rac1, run:
# ocfs2console
  1. OCFS2 console: Select Cluster , and then select Configure Nodes .
  2. Cluster Stack started: click Close .
  3. Node configuration: click Add .
  4. Add node: add the following nodes and click Apply .
    • Name: rac1
    • IP address: 192.168.2.131
    • IP port: 7777
    • Name: rac2
    • IP address: 192.168.2.132
    • IP port: 7777
  5. Verify the generated configuration file.
    # more /etc/ocfs2/cluster.conf
        node:
        ip_port = 7777
        ip_address = 192.168.2.131
        number = 0
        name = rac1
        cluster = ocfs2
        node:
        ip_port = 7777
        ip_address = 192.168.2.132
        number = 1
        name = rac2
        cluster = ocfs2
        cluster:
        node_count = 2
        name = ocfs2
        
  6. Propagate the configuration file to rac2. You can re-run the preceding steps on rac2 to generate a configuration file, or select Cluster and Propagate Configuration to propagate the configuration file to rac2.
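If you prefer not to use the GUI console, a file equivalent to the one above can be generated directly from a node list. This sketch writes to a temp file so nothing under /etc is touched; on a real node you would write /etc/ocfs2/cluster.conf on both machines:

```shell
# Stand-in for /etc/ocfs2/cluster.conf; node names and IPs from this setup.
conf=$(mktemp)
n=0
for spec in rac1:192.168.2.131 rac2:192.168.2.132; do
cat >> "$conf" <<EOF
node:
        ip_port = 7777
        ip_address = ${spec#*:}
        number = $n
        name = ${spec%%:*}
        cluster = ocfs2

EOF
n=$((n + 1))
done
cat >> "$conf" <<EOF
cluster:
        node_count = 2
        name = ocfs2
EOF
cat "$conf"
```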
Configure the O2CB driver. O2CB is the stack of cluster services that manages communication between the nodes and the cluster file system. Each service is described below:
  • NM: the node manager that tracks all nodes in cluster.conf
  • HB: the heartbeat service that issues up/down notifications when nodes join or leave the cluster
  • TCP: handles communication between nodes
  • DLM: the distributed lock manager that tracks all locks, their owners, and their status
  • CONFIGFS: the user-space configuration file system mounted at /config
  • DLMFS: the user-space interface to the kernel-space DLM

Run the following procedure on both nodes to configure O2CB to start at boot time.

When prompted for the heartbeat dead threshold, you must specify a value well above the default of 7 to keep the nodes from self-fencing because of slow IDE disk drives. The heartbeat dead threshold is the variable used to compute the fence time:

Fence time (seconds) = (heartbeat dead threshold - 1) * 2

A fence time of 120 seconds is appropriate in our environment, which corresponds to a threshold of 61. The heartbeat dead threshold must be identical on both nodes.
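The arithmetic is trivial but easy to get backwards; a quick sanity check in shell arithmetic:

```shell
# fence time (s) = (heartbeat dead threshold - 1) * 2
fence_time() {
    echo $(( ($1 - 1) * 2 ))
}

fence_time 61    # threshold used below -> 120 seconds
fence_time 7     # the default -> only 12 seconds
```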

Run as the root user:

# /etc/init.d/o2cb unload
Stopping O2CB cluster ocfs2: OK
Unmounting ocfs2_dlmfs filesystem: OK
Unloading module "ocfs2_dlmfs": OK
Unmounting configfs filesystem: OK
Unloading module "configfs": OK
# /etc/init.d/o2cb configure
Configuring the O2CB driver.
This will configure the on-boot properties of the O2CB driver.
The following questions will determine whether the driver is loaded on
boot.  The current values will be shown in brackets ('[]').  Hitting
<ENTER> without typing an answer will keep that current value.  Ctrl-C
will abort.
Load O2CB driver on boot (y/n) [y]: y
Cluster to start on boot (Enter "none" to clear) [ocfs2]:
Specify heartbeat dead threshold (>=7) [7]: 61
Writing O2CB configuration: OK
Loading module "configfs": OK
Mounting configfs filesystem at /config: OK
Loading module "ocfs2_nodemanager": OK
Loading module "ocfs2_dlm": OK
Loading module "ocfs2_dlmfs": OK
Mounting ocfs2_dlmfs filesystem at /dlm: OK
Starting O2CB cluster ocfs2: OK
Format the file system. Before formatting and mounting the file system, verify that O2CB is online on both nodes. The O2CB heartbeat is not yet active because the file system is not mounted.
# /etc/init.d/o2cb status
Module "configfs": Loaded
Filesystem "configfs": Mounted
Module "ocfs2_nodemanager": Loaded
Module "ocfs2_dlm": Loaded
Module "ocfs2_dlmfs": Loaded
Filesystem "ocfs2_dlmfs": Mounted
Checking O2CB cluster ocfs2: Online
Checking O2CB heartbeat: Not active

You only need to format the file system from one node. Run as the root user on rac1.

# ocfs2console
  1. OCFS2 console: Select Tasks and Format .
  2. Format:
  3. OCFS2 console: exit by CTRL-Q.
Mount a file system. To mount a file system, run the following command on both nodes.
# mount -t ocfs2 -o datavolume,nointr /dev/sdb1 /ocfs

To mount the file system at boot time, add the following line to /etc/fstab on both nodes.

/etc/fstab

/dev/sdb1 /ocfs ocfs2 _netdev,datavolume,nointr 0 0
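Since /etc/fstab must be edited on both nodes, a small idempotent helper avoids duplicate entries if the setup is re-run. In this sketch a temp file stands in for /etc/fstab:

```shell
# Stand-in for /etc/fstab; the mount line is the one shown above.
FSTAB=$(mktemp)
LINE='/dev/sdb1 /ocfs ocfs2 _netdev,datavolume,nointr 0 0'

add_fstab_entry() {
    # Append only if the exact line is not already present.
    grep -qF "$LINE" "$FSTAB" || echo "$LINE" >> "$FSTAB"
}

add_fstab_entry
add_fstab_entry   # second call is a no-op
```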
Create the Oracle Clusterware directory. Create the directory in the OCFS2 file system where the OCR and voting disk will reside.

Run on rac1.

# mkdir /ocfs/clusterware
# chown -R oracle:dba /ocfs

You have now configured OCFS2. Verify that you can read and write files on the shared cluster file system from both nodes.

7. Install Oracle Clusterware

After downloading the software, run the installer as the oracle user on rac1.

rac1-> /u01/staging/clusterware/runInstaller
  1. Welcome page: click Next .
  2. Specify Inventory directory and credentials:
    • Enter the full path of the inventory directory: /u01/app/oracle/oraInventory.
    • Specify the operating system group name: oinstall.
  3. Specify Home details:
    • Name: OraCrs10g_home
    • Path: /u01/app/oracle/product/10.2.0/crs_1
  4. Product-specific prerequisite checks:
    • Ignore the warning about physical memory requirements.
  5. Specify the cluster configuration: click Add .
    • Public node name: rac2.mycorpdomain.com
    • Private node name: rac2-priv.mycorpdomain.com
    • Virtual host name: rac2-vip.mycorpdomain.com
  6. Specify network interface usage:
    • Interface name: eth0
    • Subnet: 192.168.2.0
    • Interface type: Public
    • Interface name: eth1
    • Subnet: 10.10.10.0
    • Interface type: Private
  7. Specify the location of the Oracle Cluster Registry (OCR): select External Redundancy . For simplicity, the OCR is not mirrored here; in a production environment, you may consider multiplexing the OCR for higher redundancy.
    • Specify the OCR location: /ocfs/clusterware/ocr
  8. Specify the location of the voting disk: select External Redundancy . Again, for simplicity, we choose not to mirror the voting disk.
    • Voting disk location: /ocfs/clusterware/votingdisk
  9. Summary: click Install .
  10. Run the configuration scripts: as the root user, run the following scripts in sequence, one at a time. Wait for each script to complete before starting the next.
    • Run /u01/app/oracle/oraInventory/orainstRoot.sh on rac1.
    • Run /u01/app/oracle/oraInventory/orainstRoot.sh on rac2.
    • Run /u01/app/oracle/product/10.2.0/crs_1/root.sh on rac1.
    • Run /u01/app/oracle/product/10.2.0/crs_1/root.sh on rac2.
    The root.sh script on rac2 automatically invokes VIPCA, which fails with the error "The given interface(s), "eth0" is not public. Public interfaces should be used to configure virtual IPs." If your public interface uses a non-routable IP address (192.168.x.x), the Oracle Cluster Verification Utility (CVU) cannot find a suitable public interface. The workaround is to run VIPCA manually.
  11. Manually invoke VIPCA on the second node as the root user.
    # /u01/app/oracle/product/10.2.0/crs_1/bin/vipca
  12. Welcome page: click Next .
  13. Network interface: Select eth0 .
  14. The virtual IP address of the cluster node:
    • node name: rac1
    • IP alias: rac1-vip
    • IP address: 192.168.2.31
    • subnet mask: 255.255.255.0
    • node name: rac2
    • IP alias: rac2-vip
    • IP address: 192.168.2.32
    • subnet mask: 255.255.255.0
  15. summary: click Finish .

  16. Configuration assistant progress dialog box: After the configuration is complete, click OK .
  17. Configuration result: click Exit .
  18. Return to the Execute Configuration Scripts screen on rac1 and click OK .

  19. Configuration assistants: verify that all checks succeed. OUI performs a post-installation check of the cluster components. If the CVU fails, correct the problem and re-run the following command as the oracle user:
    rac1-> /u01/app/oracle/product/10.2.0/crs_1/bin/cluvfy stage
        -post crsinst -n rac1,rac2
        Performing post-checks for cluster services setup
        Checking node reachability...
        Node reachability check passed from node "rac1".
        Checking user equivalence...
        User equivalence check passed for user "oracle".
        Checking Cluster manager integrity...
        Checking CSS daemon...
        Daemon status check passed for "CSS daemon".
        Cluster manager integrity check passed.
        Checking cluster integrity...
        Cluster integrity check passed
        Checking OCR integrity...
        Checking the absence of a non-clustered configuration...
        All nodes free of non-clustered, local-only configurations.
        Uniqueness check for OCR device passed.
        Checking the version of OCR...
        OCR of correct Version "2" exists.
        Checking data integrity of OCR...
        Data integrity check for OCR passed.
        OCR integrity check passed.
        Checking CRS integrity...
        Checking daemon liveness...
        Liveness check passed for "CRS daemon".
        Checking daemon liveness...
        Liveness check passed for "CSS daemon".
        Checking daemon liveness...
        Liveness check passed for "EVM daemon".
        Checking CRS health...
        CRS health check passed.
        CRS integrity check passed.
        Checking node application existence...
        Checking existence of VIP node application (required)
        Check passed.
        Checking existence of ONS node application (optional)
        Check passed.
        Checking existence of GSD node application (optional)
        Check passed.
        Post-check for cluster services setup was successful.
        
  20. End of installation: click Exit .

8. Install Oracle Database 10g Release 2

After downloading the software, run the installer as the oracle user on rac1.

rac1-> /u01/staging/database/runInstaller
  1. Welcome page: click Next .
  2. Select the installation type:
  3. Specify Home details:
    • Name: OraDb10g_home1
    • Path: /u01/app/oracle/product/10.2.0/db_1
  4. Specify hardware cluster installation mode:
    • Select Cluster Installation .
    • Click Select All .
  5. Product-specific prerequisite checks:
    • Ignore the warning about physical memory requirements.
  6. Select configuration options:
    • Create a database.
  7. Select database configuration:
    • Select Advanced .
  8. Summary: click Install .
  9. Database templates:
    • Select General Purpose .
  10. Database identification:
    • Global database name: devdb
    • SID prefix: devdb
  11. Management options:
    • Select Configure the Database with Enterprise Manager .
  12. Database credentials:
    • Use the same password for all accounts.
  13. Storage options:
    • Select Automatic Storage Management (ASM) .
  14. Create an ASM instance:
  15. ASM disk groups:
    • Click Create New .
  16. Create a disk group: create two disk groups, DG1 and RECOVERYDEST.
    • Disk group name: DG1
    • Select Normal redundancy.
    • Select the disk paths ORCL:VOL1 and ORCL:VOL2. If you configured the ASM disks using standard Linux I/O, select /u01/oradata/devdb/asmdisk1 and /u01/oradata/devdb/asmdisk2 instead.
    • Click OK .

  • Disk group name: RECOVERYDEST
  • Select External redundancy.
  • Select the disk path ORCL:VOL3. If you configured the ASM disks using standard Linux I/O, select /u01/oradata/devdb/asmdisk3 instead.
  • Click OK .

  17. ASM disk groups: click Next .

  18. Database file locations:
  19. Recovery configuration:
  20. Database content:
    • Select or deselect the sample schemas.
  21. Database services:
    • Click Next . You can always use DBCA or srvctl to create or modify additional services later.
  22. Initialization parameters:
  23. Database storage: click Next .
  24. Creation options:
    • Select Create Database .
    • Click Finish .
  25. Summary: click OK .
  26. Database Configuration Assistant: click Exit .

  27. Run the configuration scripts: run the following scripts as the root user.
    • Run /u01/app/oracle/product/10.2.0/db_1/root.sh on rac1.
    • Run /u01/app/oracle/product/10.2.0/db_1/root.sh on rac2.
  28. Return to the Execute Configuration Scripts screen on rac1 and click OK .
  29. End of installation: click Exit .

Congratulations! You have successfully installed an Oracle RAC 10g database on Enterprise Linux.

9. Explore the RAC Database Environment

Now that you have successfully set up a virtual two-node RAC database, let's take a look at the environment you just configured.

Check the status of application resources.
rac1-> crs_stat -t
Name           Type           Target    State     Host
------------------------------------------------------------
ora.devdb.db   application    ONLINE    ONLINE    rac1
ora....b1.inst application    ONLINE    ONLINE    rac1
ora....b2.inst application    ONLINE    ONLINE    rac2
ora....SM1.asm application    ONLINE    ONLINE    rac1
ora....C1.lsnr application    ONLINE    ONLINE    rac1
ora.rac1.gsd   application    ONLINE    ONLINE    rac1
ora.rac1.ons   application    ONLINE    ONLINE    rac1
ora.rac1.vip   application    ONLINE    ONLINE    rac1
ora....SM2.asm application    ONLINE    ONLINE    rac2
ora....C2.lsnr application    ONLINE    ONLINE    rac2
ora.rac2.gsd   application    ONLINE    ONLINE    rac2
ora.rac2.ons   application    ONLINE    ONLINE    rac2
ora.rac2.vip   application    ONLINE    ONLINE    rac2
rac1-> srvctl status nodeapps -n rac1
VIP is running on node: rac1
GSD is running on node: rac1
Listener is running on node: rac1
ONS daemon is running on node: rac1
rac1-> srvctl status nodeapps -n rac2
VIP is running on node: rac2
GSD is running on node: rac2
Listener is running on node: rac2
ONS daemon is running on node: rac2
rac1-> srvctl status asm -n rac1
ASM instance +ASM1 is running on node rac1.
rac1-> srvctl status asm -n rac2
ASM instance +ASM2 is running on node rac2.
rac1-> srvctl status database -d devdb
Instance devdb1 is running on node rac1
Instance devdb2 is running on node rac2
rac1-> srvctl status service -d devdb
rac1->
Check the status of the Oracle Cluster.
rac1-> crsctl check crs
CSS appears healthy
CRS appears healthy
EVM appears healthy
rac2-> crsctl check crs
CSS appears healthy
CRS appears healthy
EVM appears healthy

Run the crsctl command to view all available options.
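When scripting health checks, the tabular `crs_stat -t` output can be parsed mechanically. A small sketch that flags any resource whose State is not ONLINE (the column positions are assumptions based on the listing above; a sample is embedded so the sketch is self-contained):

```shell
# Exit 0 if every resource's State column (field 4) is ONLINE, 1 otherwise.
check_crs() {
    awk 'NR > 2 && $4 != "ONLINE" { bad++; print "not online: " $1 }
         END { exit (bad ? 1 : 0) }'
}

# Embedded sample; on a real node, pipe in the live output: crs_stat -t | check_crs
check_crs <<'EOF'
Name           Type           Target    State     Host
------------------------------------------------------------
ora.devdb.db   application    ONLINE    ONLINE    rac1
ora.rac1.vip   application    ONLINE    ONLINE    rac1
EOF
```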

List the RAC instances.
SQL> select
2  instance_name,
3  host_name,
4  archiver,
5  thread#,
6  status
7  from gv$instance;
INSTANCE_NAME  HOST_NAME             ARCHIVE  THREAD# STATUS
-------------- --------------------- ------- -------- ------
devdb1         rac1.mycorpdomain.com STARTED        1 OPEN
devdb2         rac2.mycorpdomain.com STARTED        2 OPEN
Check the connection.

Verify that you can connect to instances and services on each node.

sqlplus system@devdb1
sqlplus system@devdb2
sqlplus system@devdb
Check the database configuration.
rac1-> export ORACLE_SID=devdb1
rac1-> sqlplus / as sysdba
SQL> show sga
Total System Global Area  209715200 bytes
Fixed Size                  1218556 bytes
Variable Size             104859652 bytes
Database Buffers          100663296 bytes
Redo Buffers                2973696 bytes
SQL> select file_name,bytes/1024/1024 from dba_data_files;
FILE_NAME                                   BYTES/1024/1024
------------------------------------------- ---------------
+DG1/devdb/datafile/users.259.606468449                   5
+DG1/devdb/datafile/sysaux.257.606468447                240
+DG1/devdb/datafile/undotbs1.258.606468449               30
+DG1/devdb/datafile/system.256.606468445                480
+DG1/devdb/datafile/undotbs2.264.606468677               25
SQL> select
2  group#,
3  type,
4  member,
5  is_recovery_dest_file
6  from v$logfile
7  order by group#;
GROUP# TYPE    MEMBER                                              IS_
------ ------- --------------------------------------------------- ---
1 ONLINE  +RECOVERYDEST/devdb/onlinelog/group_1.257.606468581 YES
1 ONLINE  +DG1/devdb/onlinelog/group_1.261.606468575          NO
2 ONLINE  +RECOVERYDEST/devdb/onlinelog/group_2.258.606468589 YES
2 ONLINE  +DG1/devdb/onlinelog/group_2.262.606468583          NO
3 ONLINE  +DG1/devdb/onlinelog/group_3.265.606468865          NO
3 ONLINE  +RECOVERYDEST/devdb/onlinelog/group_3.259.606468875 YES
4 ONLINE  +DG1/devdb/onlinelog/group_4.266.606468879          NO
4 ONLINE  +RECOVERYDEST/devdb/onlinelog/group_4.260.606468887 YES
rac1-> export ORACLE_SID=+ASM1
rac1-> sqlplus / as sysdba
SQL> show sga
Total System Global Area   92274688 bytes
Fixed Size                  1217884 bytes
Variable Size              65890980 bytes
ASM Cache                  25165824 bytes
SQL> show parameter asm_disk
NAME                           TYPE        VALUE
------------------------------ ----------- ------------------------
asm_diskgroups                 string      DG1, RECOVERYDEST
asm_diskstring                 string
SQL> select
2  group_number,
3  name,
4  allocation_unit_size alloc_unit_size,
5  state,
6  type,
7  total_mb,
8  usable_file_mb
9  from v$asm_diskgroup;
ALLOC                        USABLE
GROUP                  UNIT                 TOTAL    FILE
NUMBER NAME             SIZE STATE   TYPE       MB      MB
------ ------------ -------- ------- ------ ------ -------
1 DG1           1048576 MOUNTED NORMAL   6134    1868
2 RECOVERYDEST  1048576 MOUNTED EXTERN   2047    1713
SQL> select
2  name,
3  path,
4  header_status,
5  total_mb free_mb,
6  trunc(bytes_read/1024/1024) read_mb,
7  trunc(bytes_written/1024/1024) write_mb
8  from v$asm_disk;
NAME  PATH       HEADER_STATU    FREE_MB    READ_MB   WRITE_MB
----- ---------- ------------ ---------- ---------- ----------
VOL1  ORCL:VOL1  MEMBER             3067        229       1242
VOL2  ORCL:VOL2  MEMBER             3067        164       1242
VOL3  ORCL:VOL3  MEMBER             2047         11        354
Create a tablespace.
SQL> connect system/oracle@devdb
Connected.
SQL> create tablespace test_d datafile '+DG1' size 10M;
Tablespace created.
SQL> select
2  file_name,
3  tablespace_name,
4  bytes
5  from dba_data_files
6  where tablespace_name='TEST_D';
FILE_NAME                                TABLESPACE_NAME      BYTES
---------------------------------------- --------------- ----------
+DG1/devdb/datafile/test_d.269.606473423 TEST_D            10485760
Create an online redo log filegroup.
SQL> connect system/oracle@devdb
Connected.
SQL> alter database add logfile thread 1 group 5 size 50M;
Database altered.
SQL> alter database add logfile thread 2 group 6 size 50M;
Database altered.
SQL> select
2  group#,
3  thread#,
4  bytes,
5  members,
6  status
7  from v$log;
GROUP#    THREAD#      BYTES    MEMBERS STATUS
---------- ---------- ---------- ---------- ----------------
1          1   52428800          2 CURRENT
2          1   52428800          2 INACTIVE
3          2   52428800          2 ACTIVE
4          2   52428800          2 CURRENT
5          1   52428800          2 UNUSED
6          2   52428800          2 UNUSED
SQL> select
2  group#,
3  type,
4  member,
5  is_recovery_dest_file
6  from v$logfile
7  where group# in (5,6)
8  order by group#;
GROUP# TYPE    MEMBER                                               IS_
------ ------- ---------------------------------------------------- ---
5 ONLINE  +DG1/devdb/onlinelog/group_5.271.606473683           NO
5 ONLINE  +RECOVERYDEST/devdb/onlinelog/group_5.261.606473691  YES
6 ONLINE  +DG1/devdb/onlinelog/group_6.272.606473697           NO
6 ONLINE  +RECOVERYDEST/devdb/onlinelog/group_6.262.606473703  YES
Check the space usage in the flash recovery area.
SQL> select * from v$recovery_file_dest;
NAME          SPACE_LIMIT SPACE_USED SPACE_RECLAIMABLE NUMBER_OF_FILES
------------- ----------- ---------- ----------------- ---------------
+RECOVERYDEST  1572864000  331366400                 0               7
SQL> select * from v$flash_recovery_area_usage;
FILE_TYPE    PERCENT_SPACE_USED PERCENT_SPACE_RECLAIMABLE NUMBER_OF_FILES
------------ ------------------ ------------------------- ---------------
CONTROLFILE                 .97                         0               1
ONLINELOG                    20                         0               6
ARCHIVELOG                    0                         0               0
BACKUPPIECE                   0                         0               0
IMAGECOPY                     0                         0               0
FLASHBACKLOG                  0                         0               0
Start and stop application resources.

Use the following commands to start and stop individual application resources.

srvctl start nodeapps -n <node1 hostname>
srvctl start nodeapps -n <node2 hostname>
srvctl start asm -n <node1 hostname>
srvctl start asm -n <node2 hostname>
srvctl start database -d <database name>
srvctl start service -d <database name> -s <service name>
crs_stat -t
srvctl stop service -d <database name> -s <service name>
srvctl stop database -d <database name>
srvctl stop asm -n <node1 hostname>
srvctl stop asm -n <node2 hostname>
srvctl stop nodeapps -n <node1 hostname>
srvctl stop nodeapps -n <node2 hostname>
crs_stat -t
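These commands can be wrapped in dry-run helpers that print the srvctl calls in dependency order: start works bottom-up (nodeapps, then ASM, then the database) and stop is the mirror image. Node and database names below match this setup; remove the `echo`s to actually execute:

```shell
DB=devdb
NODES="rac1 rac2"

rac_start() {
    for n in $NODES; do echo "srvctl start nodeapps -n $n"; done
    for n in $NODES; do echo "srvctl start asm -n $n"; done
    echo "srvctl start database -d $DB"
}

rac_stop() {
    echo "srvctl stop database -d $DB"
    for n in $NODES; do echo "srvctl stop asm -n $n"; done
    for n in $NODES; do echo "srvctl stop nodeapps -n $n"; done
}

rac_start
rac_stop
```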

10. Test Transparent Application Failover (TAF)

The failover mechanism in Oracle TAF enables any failed database connection to reconnect to another node in the cluster. The failover is transparent to the user; Oracle re-executes the query on the failed-over instance and continues to return the remaining results to the user.

Create a new database service. First, create a new service named CRM. Database services can be created with either DBCA or the srvctl utility. Here, you will use DBCA to create the CRM service on devdb1.

Service  Database name  Preferred instance  Available instance  TAF policy
CRM      devdb          devdb1              devdb2              Basic

Run as the oracle user on rac1.

rac1-> dbca
  1. Welcome page: Select Oracle Real Application Clusters database .
  2. Operations: Select Services Management .
  3. List of cluster databases: click Next .
  4. Database services: click Add .
    • Add a service: enter "CRM".
      • Select devdb1 as the preferred instance.
      • Select devdb2 as the available instance.
      • TAF policy: Select Basic .
    • Click Finish .

  5. Database Configuration Assistant: click No to exit.

The Database Configuration Assistant creates the following CRM service entry in tnsnames.ora:

CRM =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = rac1-vip)(PORT = 1521))
(ADDRESS = (PROTOCOL = TCP)(HOST = rac2-vip)(PORT = 1521))
(LOAD_BALANCE = yes)
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = CRM)
(FAILOVER_MODE =
(TYPE = SELECT)
(METHOD = BASIC)
(RETRIES = 180)
(DELAY = 5)
)
)
)
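If you add further TAF services later, an entry like the one DBCA generated can be templated from a service name and the VIP host names. A sketch whose parameter values simply mirror the CRM entry above:

```shell
# Emit a TAF-enabled tnsnames.ora entry for service $1 using VIPs $2 and $3.
make_taf_entry() {
    svc=$1; vip1=$2; vip2=$3
    cat <<EOF
$svc =
(DESCRIPTION =
  (ADDRESS = (PROTOCOL = TCP)(HOST = $vip1)(PORT = 1521))
  (ADDRESS = (PROTOCOL = TCP)(HOST = $vip2)(PORT = 1521))
  (LOAD_BALANCE = yes)
  (CONNECT_DATA =
    (SERVER = DEDICATED)
    (SERVICE_NAME = $svc)
    (FAILOVER_MODE =
      (TYPE = SELECT)
      (METHOD = BASIC)
      (RETRIES = 180)
      (DELAY = 5)
    )
  )
)
EOF
}

make_taf_entry CRM rac1-vip rac2-vip
```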
SQL> connect system/oracle@devdb1
Connected.
SQL> show parameter service
NAME                           TYPE        VALUE
------------------------------ ----------- ------------------------
service_names                  string      devdb, CRM
SQL> connect system/oracle@devdb2
Connected.
SQL> show parameter service
NAME                           TYPE        VALUE
------------------------------ ----------- ------------------------
service_names                  string      devdb
Use the CRM service to connect the first session. If failover_type and failover_mode return NONE, verify that the CRM service is configured correctly in tnsnames.ora.
SQL> connect system/oracle@crm
Connected.
SQL> select
2  instance_number instance#,
3  instance_name,
4  host_name,
5  status
6  from v$instance;
INSTANCE# INSTANCE_NAME    HOST_NAME             STATUS
---------- ---------------- --------------------- ------------
1 devdb1           rac1.mycorpdomain.com OPEN
SQL> select
2  failover_type,
3  failover_method,
4  failed_over
5  from v$session
6  where username='SYSTEM';
FAILOVER_TYPE FAILOVER_METHOD FAILED_OVER
------------- --------------- ----------------
SELECT        BASIC           NO
Shut down the instance from another session. Connect to the CRM instance as the sys user and shut the instance down.
rac1-> export ORACLE_SID=devdb1
rac1-> sqlplus / as sysdba
SQL> select
2  instance_number instance#,
3  instance_name,
4  host_name,
5  status
6  from v$instance;
INSTANCE# INSTANCE_NAME    HOST_NAME             STATUS
---------- ---------------- --------------------- ------------
1 devdb1           rac1.mycorpdomain.com OPEN
SQL> shutdown abort;
ORACLE instance shut down.
Verify that the session has failed over. Run the following query from the same CRM session you opened earlier to verify that the session has failed over to the other instance.
SQL> select
2  instance_number instance#,
3  instance_name,
4  host_name,
5  status
6  from v$instance;
INSTANCE# INSTANCE_NAME    HOST_NAME             STATUS
---------- ---------------- --------------------- ------------
2 devdb2           rac2.mycorpdomain.com OPEN
SQL> select
2  failover_type,
3  failover_method,
4  failed_over
5  from v$session
6  where username='SYSTEM';
FAILOVER_TYPE FAILOVER_METHOD FAILED_OVER
------------- --------------- ----------------
SELECT        BASIC           YES
Relocate the CRM service back to the preferred instance. After devdb1 is brought back up, the CRM service does not automatically relocate to the preferred instance; you must relocate it to devdb1 manually.
rac1-> export ORACLE_SID=devdb1
rac1-> sqlplus / as sysdba
SQL> startup
ORACLE instance started.
Total System Global Area  209715200 bytes
Fixed Size                  1218556 bytes
Variable Size             104859652 bytes
Database Buffers          100663296 bytes
Redo Buffers                2973696 bytes
Database mounted.
Database opened.
SQL> show parameter service
NAME                           TYPE        VALUE
------------------------------ ----------- ------------------------
service_names                  string      devdb
rac2-> export ORACLE_SID=devdb2
rac2-> sqlplus / as sysdba
SQL> show parameter service
NAME                           TYPE        VALUE
------------------------------ ----------- ------------------------
service_names                  string      devdb, CRM
rac1-> srvctl relocate service -d devdb -s crm -i devdb2 -t devdb1
SQL> connect system/oracle@devdb1
Connected.
SQL> show parameter service
NAME                           TYPE        VALUE
------------------------------ ----------- ------------------------
service_names                  string      devdb, CRM
SQL> connect system/oracle@devdb2
Connected.
SQL> show parameter service
NAME                           TYPE        VALUE
------------------------------ ----------- ------------------------
service_names                  string      devdb

11. Database backup and recovery

The process of backing up and recovering an Oracle RAC database with Oracle Recovery Manager (RMAN) is the same as for a single-instance database.

In this section, you will walk through a very simple backup and recovery scenario:

  1. Perform a full database backup.
  2. Create a table, mytable, in the test_d tablespace.
  3. At time t1, insert the first record into mytable.
  4. At time t2, insert the second record into mytable.
  5. At time t3, drop the table mytable.
  6. Recover the test_d tablespace to a point in time.
  7. Verify the result of the recovery.

Perform a complete database backup.

rac1-> rman nocatalog target /
Recovery Manager: Release 10.2.0.1.0 - Production on Mon Nov 13 18:15:09 2006
Copyright (c) 1982, 2005, Oracle.  All rights reserved.
connected to target database: DEVDB (DBID=511198553)
using target database control file instead of recovery catalog
RMAN> configure controlfile autobackup on;
RMAN> backup database plus archivelog delete input;
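To make the full backup repeatable (from cron, for example), the same commands can go into an RMAN command file. A sketch that only writes the file; the actual invocation, which requires a live database, is shown as a comment:

```shell
# Write the backup commands to a command file.
cmdfile=$(mktemp)
cat > "$cmdfile" <<'EOF'
configure controlfile autobackup on;
backup database plus archivelog delete input;
EOF

# On a real node you would then run:
#   rman nocatalog target / @"$cmdfile"
cat "$cmdfile"
```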

Create a mytable table in the test_d tablespace.

19:01:56 SQL> connect system/oracle@devdb2
Connected.
19:02:01 SQL> create table mytable (col1 number) tablespace test_d;
Table created.

Insert the first record into mytable at t1.

19:02:50 SQL> insert into mytable values (1);
1 row created.
19:02:59 SQL> commit;
Commit complete.

Insert a second record into mytable at t2.

19:04:41 SQL> insert into mytable values (2);
1 row created.
19:04:46 SQL> commit;
Commit complete.

Drop the mytable table at t3.

19:05:09 SQL> drop table mytable;
Table dropped.

Recover the test_d tablespace to a point in time.

Create an auxiliary directory for the auxiliary database.

rac1-> mkdir /u01/app/oracle/aux
RMAN> recover tablespace test_d
2> until time "to_date('13-NOV-2006 19:03:10','DD-MON-YYYY HH24:MI:SS')"
3> auxiliary destination '/u01/app/oracle/aux';
RMAN> backup tablespace test_d;
RMAN> sql 'alter tablespace test_d online';

Verify the recovery result.

19:15:09 SQL> connect system/oracle@devdb2
Connected.
19:15:16 SQL> select * from mytable;
COL1
----------
1

12. Explore the Oracle Enterprise Manager (OEM) database console

The Oracle Enterprise Manager Database Console provides a well-integrated GUI for managing the cluster database environment. You can perform almost all management tasks from the console.

To access the Database Console, open a web browser and enter the following URL. Log in as the SYSMAN user with the password you selected during database installation.

http://rac1:1158/em

Start and stop the Database Console.

rac1-> emctl stop dbconsole
TZ set to US/Eastern
Oracle Enterprise Manager 10g Database Control Release 10.2.0.1.0
Copyright (c) 1996, 2005 Oracle Corporation.  All rights reserved.
http://rac1.mycorpdomain.com:1158/em/console/aboutApplication
Stopping Oracle Enterprise Manager 10g Database Control ...
...  Stopped.
rac1-> emctl start dbconsole
TZ set to US/Eastern
Oracle Enterprise Manager 10g Database Control Release 10.2.0.1.0
Copyright (c) 1996, 2005 Oracle Corporation.  All rights reserved.
http://rac1.mycorpdomain.com:1158/em/console/aboutApplication
Starting Oracle Enterprise Manager 10g Database Control
................... started.
------------------------------------------------------------------
Logs are generated in directory
/u01/app/oracle/product/10.2.0/db_1/rac1_devdb1/sysman/log

Verify the status of the database console.

rac1-> emctl status dbconsole
TZ set to US/Eastern
Oracle Enterprise Manager 10g Database Control Release 10.2.0.1.0
Copyright (c) 1996, 2005 Oracle Corporation.  All rights reserved.
http://rac1.mycorpdomain.com:1158/em/console/aboutApplication
Oracle Enterprise Manager 10g is running.
------------------------------------------------------------------
Logs are generated in directory
/u01/app/oracle/product/10.2.0/db_1/rac1_devdb1/sysman/log
rac1-> emctl status agent
TZ set to US/Eastern
Oracle Enterprise Manager 10g Database Control Release 10.2.0.1.0
Copyright (c) 1996, 2005 Oracle Corporation.  All rights reserved.
---------------------------------------------------------------
Agent Version     : 10.1.0.4.1
OMS Version       : 10.1.0.4.0
Protocol Version  : 10.1.0.2.0
Agent Home        : /u01/app/oracle/product/10.2.0/db_1/rac1_devdb1
Agent binaries    : /u01/app/oracle/product/10.2.0/db_1
Agent Process ID  : 10263
Parent Process ID : 8171
Agent URL         : http://rac1.mycorpdomain.com:3938/emd/main
Started at        : 2006-11-12 08:10:01
Started by user   : oracle
Last Reload       : 2006-11-12 08:20:33
Last successful upload                       : 2006-11-12 08:41:53
Total Megabytes of XML files uploaded so far :     4.88
Number of XML files pending upload           :        0
Size of XML files pending upload(MB)         :     0.00
Available disk space on upload filesystem    :    71.53%
---------------------------------------------------------------
Agent is Running and Ready
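The agent status output is worth monitoring over time; in particular, if the upload filesystem fills up, the agent stops uploading metrics. A small, hypothetical sketch (the status line below is copied from the output above; the 20% threshold is an arbitrary example) shows how the free-space figure could be extracted and checked in a script:

```shell
# Parse the free-space percentage from a saved `emctl status agent` line
# and warn when it falls below a chosen threshold (hypothetical check).
STATUS_LINE='Available disk space on upload filesystem    :    71.53%'
FREE=$(echo "$STATUS_LINE" | awk -F: '{gsub(/[ %]/,"",$2); print $2}')
echo "$FREE"
# Compare the integer part against a 20% threshold.
if [ "${FREE%.*}" -lt 20 ]; then
    echo "WARNING: upload filesystem low on space"
else
    echo "OK"
fi
```

In practice you would feed the live output of `emctl status agent` into the pipeline instead of a saved line.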