10G RAC INSTALL RH AS / HP RX1620 / MSA 1000 Version 1.0

Document Information
Author:   Darren Moore, John P Hansen
Version:  1.0
Status:
Location:


Contents
1 INTRODUCTION
  1.1 References
  1.2 Revision History
2 INTRODUCTION
3 HARDWARE OVERVIEW
4 HARDWARE INVENTORY
5 SOFTWARE INVENTORY
6 CONSOLE ACCESS
7 REDHAT INSTALL
8 NETWORK CONFIGURATION
9 HBA SETUP
10 SAN Switch Setup
11 MSA FLASH UPGRADE
12 OCFS INSTALL
13 ASM INSTALL
14 ORACLE PRE INSTALL SETUP TASKS
  14.1 Oracle User
  14.2 System Parameters
  14.3 Hangcheck-timer
  14.4 Remote access setup
15 CRS STORAGE DISK SETUP
16 DATABASE DISK SETUP
  16.1 LUN Setup
  16.2 OCFS Database Disk Setup
  16.3 ASM Database Disk Setup
17 CRS INSTALL
18 RAC SOFTWARE INSTALL
19 RAC DATABASE CREATION
  19.1 Storage Options
    19.1.1 Cluster File System
    19.1.2 Automatic Storage Manager ASM


1 INTRODUCTION

1.1 References
Reference  Document
1.         Oracle Docs
2.         Hunter RAC Install 10g
3.         ASM lib / OCFS for IA64
4.         RX1620 HOME PAGE
5.         MSA 1000 HOME PAGE
6.         2/8V SAN Switch

1.2 Revision History


Revision:     1.0
Author:       Darren Moore, John P Hansen
Date:         02/08/2005
Description:


2 INTRODUCTION
The purpose of this document is to detail a 10g RAC (10.1.0.3) setup on HP Itanium rx1620 servers running Red Hat Advanced Server, with an MSA1000 as shared storage. Oracle technologies such as CRS, OCFS and ASM are used in conjunction with the Oracle RAC install. This document is intended as a reference: it records the steps taken and the experience gained during the setup, and is not a formal install guide. Included in the document are the references used during the setup, a hardware overview including a topology, and the OS and associated software install instructions. For the cluster software install, Cluster Ready Services (CRS) was used in conjunction with OCFS (Oracle Cluster File System) to manage the shared devices needed for CRS. For the purposes of customer demos we created two RAC databases on the shared storage: the first database takes advantage of OCFS on RAID 5, and the second takes advantage of ASM to manage the shared database datafiles. Both databases were set up on a private LAN. This may seem confusing; however, the two approaches are separated out in the procedure, and you may ignore whichever approach does not suit your setup.


3 HARDWARE OVERVIEW
We first created a primary DNS server on an x86 PC to provide us with a private network. All systems were connected to a simple 100Mb 8-port hub, including the DNS master. The purpose of the setup was to allow us to run demos for customers on a private network. You can create a simple DNS server using the following HOWTO; we stayed true to the docs and called our network linux.bogus. http://langfeldt.net/DNS-HOWTO/BIND-8/DNS-HOWTO-5.html The topology is as follows.


4 HARDWARE INVENTORY
- 2 x HP Integrity rx1620 servers
- 1 x MSA 1000 storage array
- 1 x 8/4-way fibre channel switch (HP StorageWorks 2/8V Fibre Channel SAN switch)
- 2 x Netgear 8-way hubs: one for the private interconnect, one for the private LAN
- 1 x x86 PC, which acted as the primary DNS server for the private LAN
- 1 x HP ProLiant DL580

5 SOFTWARE INVENTORY
- RH AS 3.0 U5 ia64 for both rx1620s
- RH AS 3.0 U5 x86 for the PC acting as the DNS master
- Oracle 10.1.0.3 for the RAC install
- Oracle Cluster File System: ocfs-2.4.21 for IA64 RHAS 3.0 Linux
- ASMLib (a library add-on for the Automatic Storage Manager): oracleasm-2.4.21-32.EL for IA64 RHAS 3.0 Linux
- MSA 1000 firmware v4.48
- A7538A hp_qla2x00_2005_05_11 HBA driver with fibreutils_1.11-3

6 CONSOLE ACCESS
Using a PC with Linux installed (RHAS 3.0 x86), we attached a serial cable from COM port 1 on the PC to the console port on the back of the rx1620 (a console cable should be supplied with the rx1620). Using minicom we connected to the console on the rx1620 as follows:

> minicom -8 -o -c on -L -w

You can however connect to the console using whatever method suits.


7 REDHAT INSTALL
We installed Redhat Advanced Server Update 5 for IA64, kernel 2.4.21-32. The correct version of RHAS is important to satisfy the support matrix for ASMlib and OCFS. At boot time select the Boot Option Maintenance Menu as shown below:

EFI Boot Manager ver 1.10 [14.62]    Firmware ver 2.11 [4445]
Please select a boot option
    Red Hat Enterprise Linux AS
    EFI Shell [Built-in]
    Boot Option Maintenance Menu
    System Configuration
Use ^ and v to change option(s). Use Enter to select an option

EFI Boot Maintenance Manager ver 1.10 [14.62]

Select Removable Media Boot [Internal Bootable DVD]

Boot From a File. Select a Volume

    NO VOLUME LABEL [Acpi(HWP0002,0)/Pci(2|0)/Ata(Primary,Master)/CD
    NO VOLUME LABEL [Acpi(HWP0002,100)/Pci(1|0)/Scsi(Pun0,Lun0)/HD(P
    NO VOLUME LABEL [Acpi(HWP0002,100)/Pci(1|0)/Scsi(Pun0,Lun0)/HD(P
    Removable Media Boot [Internal Bootable DVD]
    Load File [EFI Shell [Built-in]]
    Load File [Core LAN Gb A]
    Load File [Core LAN Gb B]
    Exit

At the ELILO boot prompt type linux console=ttyS0; remember you have approx 8 seconds to type this.

ELILO boot: linux console=ttyS0
Uncompressing Linux... done


Loading initrd initrd.img...done

Your system will now boot into the default anaconda installer in console mode. Next we installed Redhat as follows:

Select Skip when prompted to test the CD
Select OK
Select English
Select Disk Druid

We created a disk layout as follows; the most important step here is the creation of a /boot/efi boot partition, which must be a FAT partition:

sda2     13     20    50M      vfat  /boot/efi
sda3     20   2569    20000M   ext3  /u02
sda4   2569   5119    20000M   ext3  /u01
sda5   5119   6394    10000M   ext3  /usr
sda6   6394   7669    10000M   ext3  /
sda7   7669   8178    4000M    swap
sda8   8178   8816    5000M    ext3  /opt

Select OK
Select Yes to verify your selection

Next we assigned an IP address and network mask and set the device to activate on boot. We did not configure any additional network devices at this stage, i.e. the interconnect. Next we assigned the primary DNS host. In Hostname Configuration we manually assigned the fully qualified hostname, e.g. rac1.linux.bogus. We did not enable any firewall and selected No firewall. We then selected the appropriate Language, Time Zone and Root Password. The installer first formats the disks, then asks for the CDs in the following order: Disk 1, Disk 2, Disk 3, Disk 4, Disk 1.


8 NETWORK CONFIGURATION
At this stage all nodes in the cluster should be on the network with a static IP address. The /etc/hosts file on each node should contain:
- All hosts within the cluster, in our case rac1 and rac2
- All VIPs within the cluster
- All interconnect devices within the cluster

Ensure each entry in the /etc/hosts file has the following format:

<ip-address>  <hostname.fully.qualified.domain>  <hostname>

e.g.

192.168.196.100  rac1.linux.bogus       rac1
192.168.196.101  rac1-vip.linux.bogus   rac1-vip
10.0.0.2         rac1-priv.linux.bogus  rac1-priv
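For reference, a minimal sketch of a complete /etc/hosts covering both nodes might look like the following. Only the rac1 addresses and the rac2 private address appear elsewhere in this document; the rac2 public and VIP addresses shown here are illustrative assumptions only.

# Public hostnames
192.168.196.100  rac1.linux.bogus       rac1
192.168.196.102  rac2.linux.bogus       rac2        # illustrative address
# Virtual IPs (VIPs)
192.168.196.101  rac1-vip.linux.bogus   rac1-vip
192.168.196.103  rac2-vip.linux.bogus   rac2-vip    # illustrative address
# Private interconnect
10.0.0.2         rac1-priv.linux.bogus  rac1-priv
10.0.0.3         rac2-priv.linux.bogus  rac2-priv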

The installer will prompt for the VIP addresses during the RAC software installation. In our setup we configured 2 interconnects on a private subnet using a separate hub (however you only need 1; additional interconnects can be configured for high availability). You can configure each network device using /usr/bin/redhat-config-network. In total, each node in our two-node cluster therefore had two network devices assigned for the 2 private interconnects, e.g. on the system rac1.linux.bogus we configured eth1 with the IP address 10.0.0.2. Note: ensure you select Activate device when computer starts before you activate the device.
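For reference, a minimal sketch of the interface configuration file that redhat-config-network writes for the interconnect device, assuming eth1 and the address used above; the exact contents on your system may vary:

# /etc/sysconfig/network-scripts/ifcfg-eth1  (private interconnect on rac1)
DEVICE=eth1
BOOTPROTO=static
IPADDR=10.0.0.2
NETMASK=255.255.255.0
ONBOOT=yes          # equivalent to "Activate device when computer starts"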


We also configured /etc/nsswitch.conf to allow each node to interrogate the DNS server for hostname lookups before using the /etc/hosts file:

# hosts:     db files nisplus nis dns
hosts:       dns files

9 HBA SETUP
We installed internal QLogic 2Gb fibre channel HBA adaptors to connect to a fibre channel switch, which in turn was connected to the MSA1000 (more on this later). All details about the HBA fibre channel adaptors can be found online at:

A7538A - 2Gb PCI-X Fibre Channel HBA for Linux

Click on Software & drivers to download the appropriate HBA driver. We downloaded the driver kit for the A7538A and used the INSTALL script contained within hp_qla2x00-2005-05-11.tar.gz, which installed the A7538A driver and fibreutils. We also installed hp_sansurfer, a useful GUI tool for connecting to and configuring your QLogic card if needed. Both the driver and sansurfer (optional) were installed on both nodes.

[root@rac2 hp_qla2x00]# ./INSTALL


Installing hp_qla2x00src RPM...
Preparing...                ########################################### [100%]
Logfile is /var/log/hp_qla2x00_install.log
Getting list of QLA FC HBAs
Getting list of SCSI adapters and Vendor IDs
Producing list of SCSI adapters and Vendor IDs that are FCP adapters
Checking Vendor IDs
All Storage is HP Storage. Proceeding with installation
   1:hp_qla2x00src          ########################################### [100%]
Loaded driver is in nonfailover mode
Writing new /etc/hp_qla2x00.conf...done
Copying /opt/hp/src/hp_qla2x00src/libqlsdm-ia64.so to /usr/lib/libqlsdm.so
Modifying /etc/hba.conf
Configuring kernel sources...
Using /usr/src/linux-2.4/configs/kernel-2.4.21-ia64.config as .config
Executing make mrproper
Executing make oldconfig
Executing make dep
Compiling QLA driver...
make clean
make HSG80=n OSVER=linux-2.4 SMP=1 all
rm -f qla2200.o qla2300.o qla2300_conf.o qla2200_conf.o qla_opts.o qla_opts
cc -D__KERNEL__ -DMODULE -Wall -O -g -DUDEBUG -DLINUX -Dlinux -DINTAPI
Copying qla2300.o to /lib/modules/2.4.21-32.EL/kernel/drivers/addon/qla2200
Copying qla2300_conf.o to /lib/modules/2.4.21-32.EL/kernel/drivers/scsi
Running depmod -a
adding line to /etc/modules.conf: alias scsi_hostadapter2 qla2300_conf
adding line to /etc/modules.conf: alias scsi_hostadapter3 qla2300
adding line to /etc/modules.conf: alias scsi_hostadapter4 sg


adding line to /etc/modules.conf: options qla2300 ql2xmaxqdepth=16 qlport_down_retry=64 qlogin_retry_count=16 ql2xfailover=0 ql2xlbType=0 ql2xexcludemodel=0x0
Creating new initrd - initrd-2.4.21-32.EL.img
Making symbolic link from /opt/hp/src/hp_qla2x00src/master.sh to /usr/sbin/hp_compile_qldriver
qla2x00 driver source can be found in /opt/hp/src/hp_qla2x00src
Installing fibreutils...
Preparing...                ########################################### [100%]
   1:fibreutils             ########################################### [100%]

[root@rac2 HBA]# rpm -ivh hp_sansurfer-2.00.30b36-1.ia64.rpm
Preparing...                ########################################### [100%]
   1:hp_sansurfer           ########################################### [100%]
Preparing to install...
Extracting the JRE from the installer archive...
Unpacking the JRE...
Extracting the installation resources from the installer archive...
Configuring the installer for this system's environment...
strings: /lib/libc.so.6: No such file or directory
Launching installer...
Preparing SILENT Mode Installation...
===============================================================
Installing...
-------------
Installation Complete.
Creating /usr/sbin/SANsurfer link
Stopping qlremote agent services
Starting qlremote agent service
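As a quick sanity check after the driver install and a reboot (a sketch; host instance numbers and device names will vary with your configuration), confirm the qla2300 modules are loaded and the kernel can see devices through the HBA:

# lsmod | grep qla2300            # qla2300 and qla2300_conf should be listed
# cat /proc/scsi/scsi             # lists the SCSI/FC devices visible to the kernel
# cat /proc/scsi/qla2300/1        # driver and HBA status (the instance number may differ)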

10 SAN Switch Setup


We configured the HP StorageWorks SAN 2/8V switch using the supplied console cable. Once you boot up the switch you will be asked to supply a password for the root, factory, admin and user accounts. We worked mainly with the admin user once


this was set up. We supplied the switch with an IP address using the ipaddrset command (see the sketch below). Once set up, you can also gain access using http://<switch ipaddress>, as the switch hosts a web server presenting you with access to your switch through a useful Java applet. http://h18006.www1.hp.com/products/storageworks/sanswitch28v/
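A rough sketch of assigning the address from the serial console as the admin user; the exact prompts vary by firmware revision, and the address shown here is illustrative only:

switch:admin> ipaddrset
Ethernet IP Address [none]: 192.168.196.50        (illustrative address)
Ethernet Subnetmask [none]: 255.255.255.0
...                                               (further prompts, e.g. gateway address)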

11 MSA FLASH UPGRADE


We also upgraded the MSA1000 firmware to v4.48, which is recommended by HP. The latest firmware can be downloaded from the HP site at http://www.hp.com/go/msa1000. Attaching a fibre channel switch with a ProLiant DL580 box running Windows 2003 Server and a 2Gb HBA card allowed us to use the CLI interface through HyperTerm via a console cable. This procedure can also be performed from a Linux box. Using msaflash32r34v448.tar from the above mentioned site:

Instructions:
1. Unzip the archive (tar -xf msaflash.tar)
2. Run msainst by typing "./msainst" to copy the library files to /usr/lib and the binary to /usr/bin
3. Type msaflash in any console window to run the program.

12 OCFS INSTALL
Download the OCFS rpms from http://oss.oracle.com and install them on each host as follows:

[root@rac2]# rpm -ivh ocfs-2.4.21-EL-1.0.14-1.ia64.rpm ocfs-support-1.0.10-1.ia64.rpm ocfs-tools-1.0.10-1.ia64.rpm
Preparing...                ########################################### [100%]


   1:ocfs-support           ########################################### [ 33%]
   2:ocfs-2.4.21-EL         ########################################### [ 67%]
Linking OCFS module into the module path                               [  OK  ]
   3:ocfs-tools             ########################################### [100%]

Note: check your system's kernel version for compatibility with the OCFS rpms on oss.oracle.com. In our example:

[root@rac2]# uname -r
2.4.21-32.EL

We used OCFS Release 1.0.14-1 for kernel 2.4.21-27.EL+ (EL3 U4+) and also installed the support and tools packages. Once the rpms are installed you must set up /etc/ocfs.conf. We used ocfstool to configure the /etc/ocfs.conf file on each server, which is required to use OCFS on each system.

Note: You can start up vncserver on each system to gain access to an X server on each system. We used the gnome desktop by configuring the ~/.vnc/xstartup file to start a gnome session, adding gnome-session to xstartup.

# vi ~/.vnc/xstartup
[ -r $HOME/.Xresources ] && xrdb $HOME/.Xresources
xsetroot -solid grey
vncconfig -iconic &
xterm -geometry 80x24+10+10 -ls -title "$VNCDESKTOP Desktop" &
gnome-session &

Once vncserver was configured we used vncviewer to log on to each host and create the /etc/ocfs.conf file using ocfstool (as the root user) as follows:

# ocfstool
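A sketch of the VNC usage described above, assuming display :1 (you are prompted to set a VNC password the first time vncserver runs):

# vncserver :1            # on each node, as root; starts the gnome session defined in ~/.vnc/xstartup
# vncviewer rac1:1        # from the desktop PC; ocfstool can then be run inside the gnome session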


13 ASM INSTALL
Download the ASM rpms from http://www.oracle.com/technology/tech/linux/asmlib/index.html and install them on each host as follows:

[root@rac2 RH3]# rpm -ivh oracleasm-2.4.21-32.EL-1.0.4-1.ia64.rpm oracleasmlib-2.0.0-1.ia64.rpm oracleasm-support-2.0.0-1.ia64.rpm
Preparing...                ########################################### [100%]
   1:oracleasm-support      ########################################### [ 33%]
   2:oracleasm-2.4.21-32.EL ########################################### [ 67%]
   3:oracleasmlib           ########################################### [100%]

Note: As with OCFS, check your system's kernel version for compatibility with the ASM rpms on oss.oracle.com. In our example:

[root@rac2]# uname -r


2.4.21-32.EL

On each node you will need to configure the ASM library driver to start at boot time and to be owned by the oracle user. On each node perform the following:

[root@rac2 RH3]# /etc/init.d/oracleasm configure
Configuring the Oracle ASM library driver.

This will configure the on-boot properties of the Oracle ASM library
driver. The following questions will determine whether the driver is
loaded on boot and what permissions it will have. The current values
will be shown in brackets ('[]'). Hitting <ENTER> without typing an
answer will keep that current value. Ctrl-C will abort.

Default user to own the driver interface []: oracle
Default group to own the driver interface []: dba
Start Oracle ASM library driver on boot (y/n) [n]: y
Fix permissions of Oracle ASM disks on boot (y/n) [y]:
Writing Oracle ASM library driver configuration:           [  OK  ]
Loading module "oracleasm":                                 [  OK  ]
Mounting ASMlib driver filesystem:                          [  OK  ]
Scanning system for ASM disks:                              [  OK  ]
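To confirm the driver is loaded and configured (a quick check, sketched here rather than taken from the original procedure):

# lsmod | grep oracleasm          # the oracleasm module should be listed
# /etc/init.d/oracleasm status    # reports whether the driver is loaded and the ASMlib filesystem is mounted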

14 ORACLE PRE INSTALL SETUP TASKS


Ref: Oracle Documentation http://www.oracle.com/technology/documentation/

14.1 Oracle User


Create the oracle user and dba group on both systems as follows:

[root@rac2 root]# mkdir -p /usr/home/oracle
[root@rac2 root]# groupadd -g 500 dba
[root@rac2 root]# groupadd -g 501 oinstall
[root@rac2 root]# useradd -u 500 -g dba -G oinstall -m -s /bin/bash oracle

Set up your oracle user password using the passwd command.

Add the following to the end of /etc/profile:


if [ $USER = "oracle" ]; then
    if [ $SHELL = "/bin/ksh" ]; then
        ulimit -p 16384
        ulimit -n 65536
    else
        ulimit -u 16384 -n 65536
    fi
fi
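It is also common to give the oracle user a basic environment. A minimal sketch of a ~/.bash_profile, assuming the home locations and SID prefix used later in this document; adjust the paths and instance name to your own layout:

# ~/.bash_profile for the oracle user (illustrative)
export ORACLE_BASE=/u02/oracle
export ORACLE_HOME=$ORACLE_BASE/orclhome        # RAC software home chosen later in this document
export ORA_CRS_HOME=$ORACLE_BASE/crshome        # CRS home chosen later in this document
export ORACLE_SID=orcl1                         # instance name on this node (assumed)
export PATH=$ORACLE_HOME/bin:$ORA_CRS_HOME/bin:$PATH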

14.2 System Parameters


Add the following to /etc/security/limits.conf:

oracle   soft   nproc    2047
oracle   hard   nproc    16384
oracle   soft   nofile   1024
oracle   hard   nofile   65536

Add the following to /etc/sysctl.conf:

# ADDED For 10g RAC INSTALL
# Default setting in bytes of the socket receive buffer
net.core.rmem_default = 262144
# Default setting in bytes of the socket send buffer
net.core.wmem_default = 262144
# Max socket receive buffer size which may be set using
# the SO_RCVBUF socket option
net.core.rmem_max = 262144
# Max socket send buffer size which may be set using
# the SO_SNDBUF socket option
net.core.wmem_max = 262144
# SHMMAX maximum size (in bytes) for a System V shared memory segment
kernel.shmmax = 2147483648
# SEM system semaphores
kernel.sem = 250 32000 100 128
# File handles
fs.file-max = 65536
# Shared memory limit (total pages)
kernel.shmall = 2097152
# Shared memory limit (max number of segments)
kernel.shmmni = 4096
# Sockets
net.ipv4.ip_local_port_range = 1024 65000

You can either update the system values with /sbin/sysctl -p or reboot the system. At this stage reboot the system before you start the Oracle install to ensure all changes are implemented at boot time.


Note: these values are tuneable; for example, you can increase SHMMAX towards the size of your available physical memory, which will allow you to increase the SGA at a later stage without rebuilding the kernel.

14.3 Hangcheck-timer
This step is optional if you have installed CRS; however, it is recommended and is used to monitor the health of the system. The hangcheck-timer will reset a node if the system hangs or pauses. It is shipped with Linux kernel 2.4.9-e.12 or higher; you can verify that the hangcheck-timer is installed as follows:

[root@rac1 etc]# find /lib/modules -name "hangcheck-timer.o"

Perform the following on each node to set up the hangcheck-timer to check the health of the system every 30 seconds, with a hang delay of 180 seconds before hangcheck-timer reboots the system:

[root@rac1 etc]# echo "options hangcheck-timer hangcheck_tick=30 hangcheck_margin=180" >> /etc/modules.conf

Oracle will load the hangcheck-timer when needed; however, to manually load the hangcheck-timer and check it is working:

[root@rac1 etc]# modprobe hangcheck-timer
[root@rac1 etc]# grep Hangcheck /var/log/messages | tail -2
Jul 28 13:02:48 rac1 kernel: Hangcheck: starting hangcheck timer 0.8.0 (tick is 30 seconds, margin is 180 seconds).

14.4 Remote access setup


Each node within the cluster needs remote access to every other node as the oracle user, both to run remote commands and to remotely copy files between the servers. You can use either rsh or ssh. We used rsh; however, ssh is recommended for a secure environment. By default rsh-server is not installed on RHAS, so you will have to find the rpm on the CDs and install rsh-server on all nodes within the cluster as the root user as follows:

# rpm -ivh rsh-server-0.17-17.ia64.rpm

Ensure you have the rsh client and rsh server installed:

# rpm -q rsh rsh-server
rsh-0.17-17
rsh-server-0.17-17

To enable the "rsh" service, the "disable" attribute in the /etc/xinetd.d/rsh file must be set to "no" and xinetd must be reloaded:

# chkconfig rsh on
# chkconfig rlogin on
# service xinetd reload


Reloading configuration: [ OK ]

To allow the "oracle" user account to be trusted among the RAC nodes, create the /etc/hosts.equiv file on all nodes in the cluster as the root user:

# touch /etc/hosts.equiv
# chmod 600 /etc/hosts.equiv
# chown root.root /etc/hosts.equiv

Now add all RAC nodes to /etc/hosts.equiv, including the VIP addresses, e.g.:

# cat /etc/hosts.equiv
+rac1 oracle
+rac2 oracle
+rac1-vip oracle
+rac2-vip oracle
+rac1-int1 oracle
+rac2-int1 oracle
+rac1-int2 oracle
+rac2-int2 oracle

Rename the Kerberos version of rsh as the root user so that the standard rsh binary is used:

# which rsh
/usr/kerberos/bin/rsh
# mv /usr/kerberos/bin/rsh /usr/kerberos/bin/rsh.ORG
# which rsh
/usr/bin/rsh
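A quick way to confirm user equivalence is working is sketched below; run it as the oracle user from each node against every node name the installer will use. No password prompt should appear:

[oracle@rac1 oracle]$ rsh rac2 hostname      # should print rac2 without prompting for a password
[oracle@rac2 oracle]$ rsh rac1 hostname      # and the reverse from the other node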

15 CRS STORAGE DISK SETUP


Next we configured the external shared storage, an MSA1000, using the CLI (command line interface) to create a single CRS device, which would be used for both the OCR and the CSS voting disk during the CRS install. The initial setup consisted of connecting the ProLiant DL580 server directly to the MSA 1000 storage box with a 2Gb fibre cable C7525A (5065-5102). The MSA 1000 has 1 x 2Gb SFP transceiver, which was connected directly to the back of the server using the 2Gb cable. One LUN was created for the CRS disk using the CLI tool provided, accessed through HyperTerm from COM1 on the ProLiant to the console port on the front of the MSA 1000, as follows:

LUN creation procedure:

CLI> add unit 0 data = "disk101-disk102" raid_level=1 stripe_size=128 spare disk103


The above configuration will mirror disk101 and disk102, adding disk103 as a hot spare; however, any configuration you choose will work fine as long as the LUN is created. Once the LUN is created you can reboot both servers. Use fdisk -l to view all attached devices. The next operation involves partitioning the drive using fdisk; in our example we partitioned the drive as follows:

[root@rac2 root]# fdisk /dev/sdc
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel. Changes will remain in memory only, until you decide to write them. After that, of course, the previous content won't be recoverable.

The number of cylinders for this disk is set to 4427.
There is nothing wrong with that, but this is larger than 1024, and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs (e.g., DOS FDISK, OS/2 FDISK)
Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)

Command (m for help): p

Disk /dev/sdc: 36.4 GB, 36413314560 bytes
255 heads, 63 sectors/track, 4427 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot    Start       End    Blocks   Id  System

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-4427, default 1):
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-4427, default 4427):


Using default value 4427

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.

Ensure you can see the newly created partition from both systems using fdisk. On both nodes create the directory where you want to mount the OCFS, with the appropriate permissions:

[root@rac2 root]# mkdir -p /u02/oradata/orcl ; chown oracle:dba /u02/oradata/orcl

Next we must configure the newly created partition as an OCFS (Oracle Cluster File System) disk. As the super user, from one system and one system only, run the mkfs.ocfs command as follows:

[root@rac2 root]# mkfs.ocfs -F -b 128 -L /u02/oradata/orcl -m /u02/oradata/orcl -u '500' -g '501' -p 0755 /dev/sdc1
Cleared volume header sectors
Cleared node config sectors
Cleared publish sectors
Cleared vote sectors
Cleared bitmap sectors
Cleared data block
Wrote volume header

Once we created the OCFS from one node we need to mount the file system from each node. Add the following entry to /etc/fstab on each node:

/dev/sdc1    /u02/oradata/orcl    ocfs    _netdev    0 0

You can now mount the newly created filesystem from each node, one node at a time (when you first mount the OCFS it must initialize), by typing:

# mount -a
or
# mount -t ocfs /dev/sdc1 /u02/oradata/orcl
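To confirm the cluster filesystem is mounted on both nodes, a quick check (a sketch, not part of the original steps) is:

# mount | grep ocfs              # should show /dev/sdc1 on /u02/oradata/orcl type ocfs
# df -h /u02/oradata/orcl        # the size should match the LUN created on the MSA1000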


16 DATABASE DISK SETUP


16.1 LUN Setup
As previously stated we used an MSA 1000, which can be configured using the CLI interface accessed via the console port on the front of the MSA 1000. The MSA itself is quite easy to configure and allowed us the flexibility to set up different disk configurations based on whatever database setup we needed. In the end we decided to create two databases on the RAC cluster for demo purposes:
- One database taking advantage of ASM technology
- One database taking advantage of OCFS technology

In summary the disk configuration was as follows:

1 LUN     For CRS, i.e. the CSS (voting) and OCR disk (2 disks, RAID 1)
3 LUNs    For our ASM database (3 separate disks)
1 LUN     For the OCFS database (5 disks with one hot spare)

The CLI utility itself is quite easy to use; here is a quick 101 on our setup.


CRS LUN

CLI> ADD UNIT 0 DATA = "disk101-disk102" raid_level=1
First volume to be configured on these drives.
Logical Unit size                = 17359 MB
RAID overhead                    = 17359 MB
Total space occupied by new unit = 34718 MB
Free space left on this volume:  = 0 MB
Unit 0 is created successfully.

OCFS LUN

CLI> ADD UNIT 1 DATA = "disk103-disk107" raid_level=5
First volume to be configured on these drives.
Logical Unit size                = 69460 MB
RAID overhead                    = 17365 MB
Total space occupied by new unit = 86825 MB
Free space left on this volume:  = 0 MB
Unit 1 is created successfully.

CLI> add spare unit=1 disk108
Spare drive(s) has been added. Use 'show unit 1' to confirm.

ASM LUNS

CLI> ADD UNIT 2 DATA = "disk110" raid_level=0
First volume to be configured on these drives.
Logical Unit size                = 17359 MB
RAID overhead                    = 0 MB
Total space occupied by new unit = 17359 MB
Free space left on this volume:  = 0 MB
Unit 2 is created successfully.

CLI> ADD UNIT 3 DATA = "disk111" raid_level=0
First volume to be configured on these drives.
Logical Unit size                = 17359 MB
RAID overhead                    = 0 MB
Total space occupied by new unit = 17359 MB
Free space left on this volume:  = 0 MB
Unit 3 is created successfully.


CLI> ADD UNIT 4 DATA = "disk112" raid_level=0
First volume to be configured on these drives.
Logical Unit size                = 17359 MB
RAID overhead                    = 0 MB
Total space occupied by new unit = 17359 MB
Free space left on this volume:  = 0 MB
Unit 4 is created successfully.

16.2 OCFS Database Disk Setup


At this stage you can reboot both systems, and using:

[root@rac2 root]# fdisk -l

you can now see the new SCSI devices. Using fdisk we created a new partition for the OCFS, which we used for our OCFS database setup, and 3 new partitions for the ASM disks, which we used for our ASM database setup. As with the CRS setup you need to create the OCFS partition for the OCFS database setup, e.g.:

On both nodes:
[root@rac2 root]# mkdir -p /u02/oradata/db ; chown oracle:dba /u02/oradata/db

On one node only:
[root@rac2 root]# mkfs.ocfs -F -b 128 -L /u02/oradata/db -m /u02/oradata/db -u '500' -g '501' -p 0755 /dev/sdd1

On both nodes place the appropriate entry into the /etc/fstab file, e.g.:

/dev/sdd1    /u02/oradata/db    ocfs    _netdev    0 0

You can now mount the newly created filesystem from each node, one node at a time (when you first mount the OCFS it must initialize), by typing:

# mount -a
or
# mount -t ocfs /dev/sdd1 /u02/oradata/db


16.3 ASM Database Disk Setup


Again using fdisk -l to discover the device names created for ASM, we created the ASM volumes using /etc/init.d/oracleasm as follows:

# /etc/init.d/oracleasm createdisk VOL1 /dev/sde1
Marking disk "/dev/sde1" as an ASM disk:                    [  OK  ]
# /etc/init.d/oracleasm createdisk VOL2 /dev/sdf1
Marking disk "/dev/sdf1" as an ASM disk:                    [  OK  ]
# /etc/init.d/oracleasm createdisk VOL3 /dev/sdg1
Marking disk "/dev/sdg1" as an ASM disk:                    [  OK  ]

On all nodes in the cluster issue the /etc/init.d/oracleasm scandisks command, so that each node recognizes the newly created ASM volumes:

# /etc/init.d/oracleasm scandisks
Scanning system for ASM disks:                              [  OK  ]

You can view the disks as follows:

# /etc/init.d/oracleasm listdisks
VOL1
VOL2
VOL3

You will use these volumes at a later stage during the RAC database creation with dbca.

17 CRS INSTALL
On one node, using the Oracle installer from either CD or an NFS share with a copy of the CD contents, perform the following as the oracle user:

[oracle@rac2 oracle]$ ./runInstaller

Note: You can record a response file for later use in silent installations if you wish:

./runInstaller -record -destinationFile /private/temp/install_oracle91.rsp

At the Welcome screen click Next. At the Specify Inventory directory and credentials screen, specify the location of the oraInventory directory, which defaults to your ORACLE_HOME, and choose your OS group name; we chose dba. You will now be asked to run orainstRoot.sh, which creates the /etc/oraInst.loc file.


Specify the Name and location of your ORACLE_HOME and click Next, e.g.

Name: crshome
Location: /u02/oracle/crshome

Select your default language and click Next. Specify the Cluster Configuration and click Next, e.g.

Cluster Name: crs
Public Node Name: rac1    Private Node Name: 10.0.0.2
Public Node Name: rac2    Private Node Name: 10.0.0.3

Note: You can use IP addresses or hostnames for the Private Node Name.

Specify the Network Interface Usage, e.g.

Interface Name: eth0    Subnet: 192.168.196.0    Interface Type: Public
Interface Name: eth1    Subnet: 10.0.0.0         Interface Type: Private

Specify the location of the Oracle Cluster Registry, e.g. /u02/oradata/orcl/CRSDisk. (Note: if for some reason you do not see this screen, it means the /etc/oracle/ocr.loc file exists and there is a previous install of CRS on your system; you will have to remove this file and follow the De-Install CRS procedure in 17.1.)

Specify the location of the Voting Disk e.g. /u02/oradata/orcl/CSSDisk

You will now be asked to run orainstRoot.sh on each node in the cluster; run orainstRoot.sh on each node from separate xterms. After the software installation is complete you will be asked to run root.sh, first on the node you are performing the install from and then on the remaining nodes in the cluster, in our case the second node. Be patient here; it is important that you get a log similar to the one below from the first node you run root.sh on. If this is successful, the second node should be fine.


[root@rac1 crs]# ./root.sh
Running Oracle10 root.sh script...

The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /u02/oracle/crs
Finished running generic part of root.sh script.
Now product-specific root actions will be performed.
Checking to see if Oracle CRS stack is already up...
Setting the permissions on OCR backup directory
Oracle Cluster Registry configuration upgraded successfully
WARNING: directory '/u02/oracle' is not owned by root
WARNING: directory '/u02' is not owned by root
clscfg: EXISTING configuration version 2 detected.
clscfg: version 2 is 10G Release 1.
assigning default hostname rac2 for node 1.
assigning default hostname rac1 for node 2.
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node <nodenumber>: <nodename> <private interconnect name> <hostname>
node 1: rac2 10.0.0.2 rac2
node 2: rac1 10.0.0.3 rac1
clscfg: Arguments check out successfully.
NO KEYS WERE WRITTEN. Supply -force parameter to override.
-force is destructive and will destroy any previous cluster configuration.
Oracle Cluster Registry for cluster has already been initialized
Adding daemons to inittab
Preparing Oracle Cluster Ready Services (CRS):
Expecting the CRS daemons to be up within 600 seconds.

Once this has completed go directly to the RAC Software Install section and install the RAC software.
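Once root.sh has completed on all nodes, a quick way to confirm that CRS sees every member of the cluster is sketched below; the path assumes the CRS home chosen earlier in this document:

$ORA_CRS_HOME/bin/olsnodes -n      # should list both cluster nodes with their node numbers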


18 RAC SOFTWARE INSTALL


At this stage we will install the RAC software without creating a database. From either CD or an NFS share containing the two directories Disk1 and Disk2 (copies of the CDs), run the installer as the oracle user:

[oracle@rac1 oracle]$ ./runInstaller

At the Welcome screen click Next. Specify the Name and location of your ORACLE_HOME and click Next, e.g.

Name: orclhome
Location: /u02/oracle/orclhome

Select your default language and click Next. At the next screen you should be presented with a list of nodes in your cluster; if CRS installed successfully you will see all of the nodes. Select the Cluster Installation option and all nodes in the list, and click Next. Select the type of installation; we selected Enterprise Edition. Select "Do not create a starter database" and click Next. (Note: we will create our databases later with dbca.) Click Next again to start the software installation.

At the end of the software installation you will be asked to run root.sh on all nodes in the cluster, again in our case both nodes. First run root.sh from the node you are running the installation from; once the script is finished, the VIP Configuration Assistant (VIPCA) will appear. Fill in all details, i.e. Node name, IP Alias Name (the VIP hostname), IP Address and Subnet Mask for each node in the cluster:

Node Name: rac1.linux.bogus
IP Alias Name: rac1-vip.linux.bogus
IP Address: 192.168.196.100
Subnet Mask: 255.255.255.0

Once this has completed, run root.sh on the remaining nodes and click Next back at the main screen. You can exit the installation at the end of the OUI session.
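After VIPCA finishes, a quick check that the node applications (VIP, GSD, ONS) are up on each node is sketched below, using the 10g srvctl utility from the RAC home:

[oracle@rac1 oracle]$ srvctl status nodeapps -n rac1
[oracle@rac1 oracle]$ srvctl status nodeapps -n rac2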


19 RAC DATABASE CREATION


At this stage we can create our databases with dbca; in our example we created two databases, one using OCFS and one using ASM. As the oracle user launch dbca:

[oracle@rac2 oracle]$ ./dbca

Select Oracle Real Application Clusters database and click Next. Select Create a Database and click Next. Click the Select All button to select all servers, e.g. rac1 & rac2, and click Next. Select the type of database you want; in our case we chose General Purpose. In Database Identification enter the Global Database Name and SID Prefix:

Global Database Name: orcl.linux.bogus
SID Prefix: orcl

In Database Management Options we used the default, Configure the Database with Enterprise Manager, and clicked Next. In Database Credentials specify the passwords for SYS, SYSTEM, DBSNMP and SYSMAN. Click Next.

19.1 Storage Options


19.1.1 Cluster File System

At the Storage Options screen, if you wish to create a database using the Cluster File System, click on Cluster File System. From Database File Location we chose Use Oracle-Managed Files and specified the Database Area, e.g. /u02/oradata/db, and clicked Next.

You can now Click Next to accept all default parameters or change any parameters you wish.


Note: We did not choose any Recovery options for the database and we added the Sample Schemas to the database by Clicking on the Sample Schemas check box in Database Content. The database will now install.

19.1.2 Automatic Storage Manager (ASM)

At the Storage Options screen, if you wish to create a database using ASM, click on ASM. You will now be asked to create an ASM instance; enter and confirm the SYS password and click Next. A dialog box will appear; at the prompt click OK. dbca will create and start an ASM instance on all nodes in the cluster.

Click Next and you will see the Create Disk Group window with the 3 ASM volumes you created: ORCL:VOL1, ORCL:VOL2, and ORCL:VOL3. In the disk group name field enter a diskgroup name, e.g. ORCL_ASMDG1.

In Select Member Disks, select all volumes and click OK. Once the ASM disk group creation process has completed, select the check box next to the new disk group and click Next.

From Database File Location we chose Use Oracle-Managed Files, e.g.

Database Area: +ORCL_ASMDG1

You can now click Next to accept all default parameters or change any parameters you wish. Note: We did not choose any Recovery options for the database, and we added the Sample Schemas to the database by clicking on the Sample Schemas check box in Database Content. The database will now install.
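Once dbca completes, the cluster database can be verified as sketched below; the instance names orcl1 and orcl2 are assumed from the orcl SID prefix chosen above, and the exact srvctl output wording may differ:

[oracle@rac1 oracle]$ srvctl status database -d orcl
[oracle@rac1 oracle]$ sqlplus /nolog
SQL> connect / as sysdba
SQL> select inst_id, instance_name, status from gv$instance;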

