Introduction
One of the biggest obstacles preventing people from setting up test RAC environments is the
requirement for shared storage. In a production environment, shared storage is often provided by
a SAN or high-end NAS device, but both of these options are very expensive when all you want
to do is get some experience installing and using RAC. A cheaper alternative is to use a FireWire
disk enclosure to allow two machines to access the same disk(s), but that still costs money and
requires two servers. A third option is to use VMware Server to fake the shared storage.
Using VMware Server you can run multiple Virtual Machines on a single server, allowing you to
run both RAC nodes on a single machine. In addition, it allows you to set up shared virtual disks,
overcoming the obstacle of expensive shared storage.
For this article, I will use Windows XP Professional with Service Pack 2 as the host OS and Red Hat Enterprise Linux AS Version 4 Update 5 as the guest OS. The installation process is demonstrated with screen shots, and detailed explanation is added where necessary.
1. VMware Server Installation
Click the OK button and continue.
Enter the serial number.
Double-click the VMware Server Console icon on your desktop.
Click the OK button.
2. Virtual Machine Setup
Click File > New Virtual Machine.
Uncheck Make this virtual machine private.
Uncheck Allocate all disk space now and check Split disk into 2 GB files.
Click Edit virtual machine settings.
Click the Add… button.
Select Ethernet Adapter and click the Next button.
Again click Edit virtual machine settings, select the CD-ROM drive, browse to the ISO image and click the OK button.
3. Guest Operating System Installation
Click the Start this virtual machine button.
Click the Yes button.
Click the Proceed button.
Hint: The guest's date & time should be set earlier than the host machine's. This will help to synchronize time later on.
Click the Continue button.
4. Oracle Installation Prerequisites
Perform the following steps as the root user.
The /etc/hosts file must contain the public, private and virtual IP addresses and host names for both RAC nodes.
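The exact contents are not reproduced here. A representative layout, using the public (192.168.2.x) and private (192.168.0.x) addresses configured later in this article; the localdomain suffix and the .111/.112 virtual IP addresses are assumptions:
127.0.0.1     localhost.localdomain   localhost
# Public
192.168.2.101 rac1.localdomain        rac1
192.168.2.102 rac2.localdomain        rac2
# Private
192.168.0.101 rac1-priv.localdomain   rac1-priv
192.168.0.102 rac2-priv.localdomain   rac2-priv
# Virtual
192.168.2.111 rac1-vip.localdomain    rac1-vip
192.168.2.112 rac2-vip.localdomain    rac2-vip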
Add or amend the following lines in the /etc/sysctl.conf file.
kernel.shmall = 2097152
kernel.shmmax = 2147483648
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
fs.file-max = 65536
net.ipv4.ip_local_port_range = 1024 65000
net.core.rmem_default=262144
net.core.rmem_max=262144
net.core.wmem_default=262144
net.core.wmem_max=262144
Run the following command to apply the new kernel parameters.
/sbin/sysctl -p
Disable secure linux by editing the /etc/selinux/config file, making sure the SELINUX flag is set as
follows.
SELINUX=disabled
Alternatively, this alteration can be done using the GUI tool (Applications > System Settings >
Security Level). Click on the SELinux tab and disable the feature.
Set the hangcheck kernel module parameters by adding the following line to the
/etc/modprobe.conf file.
options hangcheck-timer hangcheck_tick=30 hangcheck_margin=180
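To load the module immediately, without waiting for a reboot, run the following (a standard modprobe invocation; this step is implied rather than shown in the original text):
# modprobe hangcheck-timer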
Create the new groups for the Oracle software owner.
groupadd oinstall
groupadd dba
groupadd oper
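The original text omits creating the oracle user itself, but the ownership commands below require it. A minimal sketch, assuming the standard account layout (primary group oinstall, secondary group dba):
useradd -g oinstall -G dba oracle
passwd oracle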
Create the directories in which the Oracle software will be installed, and set their ownership and permissions.
mkdir -p /u01/crs/oracle/product/10.2.0/crs
mkdir -p /u01/app/oracle/product/10.2.0/db_1
mkdir /u02
chown -R oracle:oinstall /u01 /u02
chmod -R 775 /u01 /u02
During the installation, both the rsh and rsh-server packages were installed. Enable remote shell and rlogin by doing the following.
chkconfig rsh on
chkconfig rlogin on
service xinetd reload
touch /etc/hosts.equiv
chmod 600 /etc/hosts.equiv
chown root:root /etc/hosts.equiv
Add the following entries to the /etc/hosts.equiv file.
+rac1 oracle
+rac2 oracle
+rac1-priv oracle
+rac2-priv oracle
Log in as the oracle user and add the following lines at the end of the .bash_profile file.
# Oracle Settings
TMP=/tmp; export TMP
TMPDIR=$TMP; export TMPDIR
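The listing above appears truncated: at minimum the profile must also define the Oracle environment, since section 8 later edits this file to correct the ORACLE_SID value. A representative sketch, using the Oracle home from section 10; the SID of RAC1 is an assumption:
ORACLE_BASE=/u01/app/oracle; export ORACLE_BASE
ORACLE_HOME=$ORACLE_BASE/product/10.2.0/db_1; export ORACLE_HOME
ORACLE_SID=RAC1; export ORACLE_SID
PATH=$ORACLE_HOME/bin:$PATH; export PATH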
5. Install VMware Client Tools
Log in as the root user on the rac1 virtual machine, then select the "VM --> Install VMware
Tools..." option from the main VMware Server Console menu.
This should mount a virtual CD containing the VMware Tools software. Double-click on the CD
icon labeled "VMware Tools" to open the CD. Right-click on the ".rpm" package and select the
"Open with 'Install Packages'" menu option.
Click the "Continue" button on the "Completed System Preparation" screen and wait for the
installation to complete.
Once the package is loaded, the CD should unmount automatically. You must then run the
"vmware-config-tools.pl" script as the root user. The following listing is an example of the output
you should expect.
# vmware-config-tools.pl
pcnet32 30409 0
Unloading pcnet32 module
[1] "640x480"
[2] "800x600"
[3] "1024x768"
[4] "1152x864"
[5] "1280x800"
[6] "1152x900"
[7] "1280x1024"
[8] "1376x1032"
[9] "1400x1050"
[10] "1680x1050"
[11] "1600x1200"
[12] "1920x1200"
[13] "2364x1773"
Please enter a number between 1 and 13: [12] 3
The configuration of VMware Tools e.x.p build-22874 for Linux for this running
kernel completed successfully.
You must restart your X session before any mouse or graphics changes take
effect.
You can now run VMware Tools by invoking the following command:
"/usr/bin/vmware-toolbox" during an XFree86 session.
To use the vmxnet driver, restart networking using the following commands:
/etc/init.d/network stop
rmmod pcnet32
rmmod vmxnet
depmod -a
modprobe vmxnet
/etc/init.d/network start
Enjoy,
--the VMware team
6. Time Synchronization
a) As root on rac1 run vmware-toolbox and select the "Time synchronization between the virtual machine and the host operating system" option. (The screen shot shown here was taken on the rac2 machine, just for demonstration.)
b) Edit the /boot/grub/grub.conf file and append "clock=pit nosmp noapic nolapic" to the kernel line, as shown in the example after this list.
c) Reboot the machine.
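A sketch of the resulting kernel line in /boot/grub/grub.conf; the kernel version and root device shown are assumptions based on a default RHEL4 Update 5 installation:
kernel /vmlinuz-2.6.9-55.EL ro root=/dev/VolGroup00/LogVol00 rhgb quiet clock=pit nosmp noapic nolapic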
Note: Time Zone of the host and guest operating systems should match.
7. Create Shared Disks
Shut down the rac1 virtual machine using the following command.
# shutdown -h now
Create a directory E:\rac\shared on the host system to hold the shared virtual disks.
On the VMware Server Console, click the "Edit virtual machine settings" button. On the "Virtual
Machine Settings" screen, click the "Add..." button.
Click the “Next” button.
Select the hardware type of "Hard Disk" and click the "Next" button.
Accept the "Create a new virtual disk" option by clicking the "Next" button.
Accept the "SCSI" option by clicking the "Next" button.
Set the disk size to "2.0" GB and uncheck the "Allocate all disk space now" option, then click the
"Next" button.
Set the disk name to "E:\rac\shared\ocr.vmdk" and click the "Advanced" button.
Set the virtual device node to "SCSI 1:0" and the mode to "Independent" and "Persistent", then
click the "Finish" button.
Repeat the previous hard disk creation steps 2 more times, using the following values:
Disk 2 (2.0 GB):
File Name: E:\rac\shared\votingdisk.vmdk
Virtual Device Node: SCSI 1:1
Mode: Independent and Persistent
Disk 3 (30.0 GB):
File Name: E:\rac\shared\shareddisk.vmdk
Virtual Device Node: SCSI 1:2
Mode: Independent and Persistent
At the end of this process, the virtual machine should look something like the picture below.
Edit the contents of the "E:\rac\rac1\Red Hat Enterprise Linux 4.vmx" file using a text editor, making sure the following entries are present. Some of the entries will already be present, some will not.
disk.locking = "FALSE"
diskLib.dataCacheMaxSize = "0"
diskLib.dataCacheMaxReadAheadSize = "0"
diskLib.dataCacheMinReadAheadSize = "0"
diskLib.dataCachePageSize = "4096"
diskLib.maxUnsyncedWrites = "0"
scsi1.present = "TRUE"
scsi1.virtualDev = "lsilogic"
scsi1.sharedBus = "VIRTUAL"
scsi1:0.present = "TRUE"
scsi1:0.mode = "independent-persistent"
scsi1:0.fileName = "E:\rac\shared\ocr.vmdk"
scsi1:0.deviceType = "plainDisk"
scsi1:0.redo = ""
scsi1:1.present = "TRUE"
scsi1:1.mode = "independent-persistent"
scsi1:1.fileName = "E:\rac\shared\votingdisk.vmdk"
scsi1:1.deviceType = "plainDisk"
scsi1:1.redo = ""
scsi1:2.present = "TRUE"
scsi1:2.mode = "independent-persistent"
scsi1:2.fileName = "E:\rac\shared\shareddisk.vmdk"
scsi1:2.deviceType = "plainDisk"
scsi1:2.redo = ""
Start the rac1 virtual machine by clicking the "Start this virtual machine" button on the VMware
Server Console. When the server has started, log in as the root user so you can partition the
disks. The current disks can be seen by issuing the following commands.
# cd /dev
# ls sd*
sda sda1 sda2 sdb sdc sdd
Use the "fdisk" command to partition the disks sdb to sdd. The following output shows the
expected fdisk output for the sdb disk.
# fdisk /dev/sdb
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel. Changes will remain in memory only,
until you decide to write them. After that, of course, the previous
content won't be recoverable.
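The interactive fdisk session is not reproduced in the original text. A typical sequence that creates a single primary partition spanning the whole disk looks like this (the cylinder counts will differ per disk):
Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-261, default 1): <Enter>
Last cylinder or +size or +sizeM or +sizeK (1-261, default 261): <Enter>
Command (m for help): w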
Once all the disks are partitioned, the results can be seen by repeating the previous "ls"
command.
# cd /dev
# ls sd*
sda sda1 sda2 sdb sdb1 sdc sdc1 sdd sdd1
Edit the /etc/sysconfig/rawdevices file, binding the partitions to raw devices as follows.
/dev/raw/raw1 /dev/sdb1
/dev/raw/raw2 /dev/sdc1
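The new bindings can be applied without a reboot by restarting the rawdevices service (this step is not shown in the original text; the service name is the stock RHEL4 one):
# service rawdevices restart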
Create the file /etc/udev/permissions.d/49-oracle.permissions and add the following lines to it:
# OCR
raw/raw1:root:oinstall:0640
# Voting Disks
raw/raw2:oracle:oinstall:0640
8. Clone the Virtual Machine
Shut down the rac1 virtual machine using the following command.
# shutdown -h now
Copy the entire E:\rac\rac1 directory to E:\rac\rac2 on the host system. Then edit the contents of the "E:\rac\rac2\Red Hat Enterprise Linux 4.vmx" file, making the following change.
displayName = "rac2"
In the VMware Server Console, select the File > Open menu options and browse for the
"E:\rac\rac2\Red Hat Enterprise Linux 4.vmx" file. Once opened, the rac2 virtual machine is
visible on the console. Start the rac2 virtual machine by clicking the "Start this virtual machine"
button and click the "Always Create" button on the subsequent "Question" screen.
Ignore any errors during the server startup. We are expecting the networking components to fail
at this point.
Log in to the rac2 virtual machine as the root user and start the "Network Configuration" tool
(Applications > System Settings > Network).
Highlight the "eth0" interface and click the "Edit" button on the toolbar and alter the IP address to
"192.168.2.102" in the resulting screen.
Click on the "Hardware Device" tab and click the "Probe" button. Then accept the changes by
clicking the "OK" button.
Repeat the process for the "eth1" interface, this time setting the IP Address to "192.168.0.102".
Click on the "DNS" tab and change the host name to "rac2", then click on the "Devices" tab.
Once you have finished, save the changes (File > Save) and activate the network interfaces by
highlighting them and clicking the "Activate" button.
9. OCFS2 Installation and Configuration
I will install the OCFS2 RPMs onto both RAC nodes. The installation process is simply a matter of running a single rpm command on both Oracle RAC nodes in the cluster as the root user, as sketched below.
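The exact command does not appear in the original text. Assuming the OCFS2 kernel module, tools and console packages have been downloaded to match the running kernel, it would look something like this (package names and versions are assumptions):
# rpm -Uvh ocfs2-2.6.9-55.EL-1.2.5-1.i686.rpm \
        ocfs2-tools-1.2.4-1.i386.rpm \
        ocfs2console-1.2.4-1.i386.rpm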
a) Start the Security Level Configuration tool (Applications > System Settings > Security Level).
b) Now, click the SELinux tab and uncheck the "Enabled" checkbox. After clicking [OK], you will be presented with a warning dialog. Simply acknowledge this warning by clicking "Yes".
c) After making this change on both nodes in the cluster, each node will need to be rebooted to implement the change.
Next, configure the OCFS2 cluster using the ocfs2console GUI tool. This will need to be done on both Oracle RAC nodes in the cluster as the root user:
# ocfs2console
1) Select [Cluster] -> [Configure Nodes...]. This will start the OCFS2 Cluster Stack. Acknowledge the Information dialog box by clicking [Close]. You will then be presented with the "Node Configuration" dialog.
2) Click the [Add] button to bring up the "Add Node" dialog.
3) In the "Add Node" dialog, enter the Host name and IP address for each node in the cluster. Leave the IP Port set to its default value of 7777. In my example, I added both nodes using rac1 / 192.168.0.101 for the first node and rac2 / 192.168.0.102 for the second node.
Note: The node name you enter "must" match the hostname of the machine, and the IP addresses will use the private interconnect.
Click [Apply] on the "Node Configuration" dialog - All nodes should now be "Active" .
Click [Close] on the "Node Configuration" dialog.
After verifying all values are correct, exit the application using [File] -> [Quit].
This needs to be performed on both Oracle RAC nodes in the cluster.
4) After exiting the ocfs2console, you will have a /etc/ocfs2/cluster.conf similar to the following. This process needs to be completed on both Oracle RAC nodes in the cluster, and the OCFS2 configuration file should be exactly the same on both nodes:
/etc/ocfs2/cluster.conf
node:
        ip_port = 7777
        ip_address = 192.168.0.101
        number = 0
        name = rac1
        cluster = ocfs2

node:
        ip_port = 7777
        ip_address = 192.168.0.102
        number = 1
        name = rac2
        cluster = ocfs2

cluster:
        node_count = 2
        name = ocfs2
Before we can do anything with OCFS2 like formatting or mounting the file system,
we need to first have OCFS2's cluster stack, O2CB, running (which it will be as a result
of the configuration process performed above). The stack includes the following services:
NM: Node Manager that keeps track of all the nodes in the cluster.conf
HB: Heartbeat service that issues up/down notifications when nodes join or leave the cluster
TCP: Handles communication between the nodes
DLM: Distributed lock manager that keeps track of all locks, their owners and status
CONFIGFS: User space driven configuration file system mounted at /config
DLMFS: User space interface to the kernel space DLM
Check the current status of the O2CB cluster stack with the following command.
# /etc/init.d/o2cb status
5) Configure O2CB to Start on Boot and Adjust the O2CB Heartbeat Threshold (both nodes)
All of the tasks within this section will need to be performed on both nodes in the cluster.
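These steps are not reproduced in the original text. A minimal sketch, assuming the stock o2cb init script shipped with the OCFS2 tools (exact prompt wording varies between versions):
# /etc/init.d/o2cb offline ocfs2
# /etc/init.d/o2cb unload
# /etc/init.d/o2cb configure
Load O2CB driver on boot (y/n) [n]: y
Cluster to start on boot (Enter "none" to clear) [ocfs2]: ocfs2
Writing O2CB configuration: OK
On tools versions whose configure script does not prompt for the heartbeat dead threshold, it can be set to 61 (the value verified at the end of this section) by adding O2CB_HEARTBEAT_THRESHOLD=61 to /etc/sysconfig/o2cb.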
Now activate it:
# /etc/init.d/o2cb load
# /etc/init.d/o2cb online ocfs2
a) Unlike the other tasks in this section, creating the OCFS2 file system should only be executed on one of the nodes in the RAC cluster. I will be executing all commands in this section from rac1 only.
b) If the O2CB cluster is offline, start it. The format operation needs the cluster to be online,
as it needs to ensure that the volume is not mounted on some node in the cluster.
# /etc/init.d/o2cb load
# /etc/init.d/o2cb online ocfs2
c) Create the OCFS2 file system on the shared disk, specifying a 4K block size, a 256K cluster size, 4 node slots and the volume label "dbfiles".
# mkfs.ocfs2 -b 4K -C 256K -N 4 -L dbfiles /dev/sdd1
Mounting the file system will need to be performed on both nodes in the Oracle RAC cluster
as the root user account using the OCFS2 label dbfiles!
First, here is how to manually mount the OCFS2 file system from the command-line.
Remember that this needs to be performed as the root user account:
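The command itself is missing from the text. Reconstructing it from the mount output shown below (label dbfiles, mount point /u02, datavolume and nointr options), it would be:
# mount -t ocfs2 -o datavolume,nointr -L dbfiles /u02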
If the mount was successful, you will simply get your prompt back. We should, however,
run the following checks to ensure the file system is mounted correctly.
Let's use the mount command to ensure that the new file system is really mounted.
This should be performed on both nodes in the RAC cluster:
# mount
/dev/mapper/VolGroup00-LogVol00 on / type ext3 (rw)
none on /proc type proc (rw)
none on /sys type sysfs (rw)
none on /dev/pts type devpts (rw,gid=5,mode=620)
usbfs on /proc/bus/usb type usbfs (rw)
/dev/hda1 on /boot type ext3 (rw)
none on /dev/shm type tmpfs (rw)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)
cartman:SHARE2 on /cartman type nfs (rw,addr=192.168.1.120)
configfs on /config type configfs (rw)
ocfs2_dlmfs on /dlm type ocfs2_dlmfs (rw)
/dev/sdd1 on /u02 type ocfs2 (rw,_netdev,datavolume,nointr,heartbeat=local)
Use the ls command to check ownership. The permissions should be set to 0775 with
owner "oracle" and group "oinstall".
# ls -ld /u02
drwxr-xr-x 3 root root 4096 Sep 3 00:42 /u02
As we can see from the listing above, the oracle user account (and the oinstall group) will
not be able to write to this directory. Let's fix that:
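The commands are not shown in the original text; given the target state in the next listing (owner oracle, group oinstall, mode 0775), they would be:
# chown oracle:oinstall /u02
# chmod 775 /u02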
Let's now go back and re-check that the permissions are correct for both Oracle RAC nodes in
the cluster:
# ls -ld /u02
drwxrwxr-x 3 oracle oinstall 4096 Sep 3 00:42 /u02
The following tasks only need to be executed on one of the nodes in the RAC cluster.
# mkdir -p /u02/oradata
# chown -R oracle:oinstall /u02/oradata
# chmod -R 775 /u02/oradata
# ls -l /u02/oradata
total 4
Before starting the next section, this would be a good place to reboot both of the nodes in the RAC cluster. When the machines come up, ensure that the cluster stack services are being loaded and the new OCFS2 file system is being mounted:
# mount
/dev/mapper/VolGroup00-LogVol00 on / type ext3 (rw)
none on /proc type proc (rw)
none on /sys type sysfs (rw)
none on /dev/pts type devpts (rw,gid=5,mode=620)
usbfs on /proc/bus/usb type usbfs (rw)
/dev/hda1 on /boot type ext3 (rw)
none on /dev/shm type tmpfs (rw)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)
cartman:SHARE2 on /cartman type nfs (rw,addr=192.168.1.120)
configfs on /config type configfs (rw)
ocfs2_dlmfs on /dlm type ocfs2_dlmfs (rw)
/dev/sdd1 on /u02 type ocfs2 (rw,_netdev,datavolume,nointr,heartbeat=local)
If you modified the O2CB heartbeat threshold, you should verify that it is set correctly:
# cat /proc/fs/ocfs2_nodemanager/hb_dead_threshold
61
Edit the /home/oracle/.bash_profile file on the rac2 node to correct the ORACLE_SID value.
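Assuming the SID naming sketched in section 4 (ORACLE_SID=RAC1 on the first node), the corrected line on rac2 would be:
ORACLE_SID=RAC2; export ORACLE_SID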
Start the rac1 virtual machine and restart the rac2 virtual machine. While starting up, the "Kudzu"
detection screen may be displayed. Press a key and accept the configuration change on the
following screen.
When both nodes have started, check they can both ping all the public and private IP addresses
using the following commands.
ping -c 3 rac1
ping -c 3 rac1-priv
ping -c 3 rac2
ping -c 3 rac2-priv
At this point the virtual IP addresses defined in the /etc/hosts file will not work, so don't bother testing them. It is a good idea to make a consistent backup of this virtual environment. Shut down both RAC nodes and compress the main rac folder on the E: drive. The virtual machine setup is now complete.
Note: You can also configure OCFS2 on one node before cloning the virtual machine.
10. Oracle Clusterware and DB Installation
CRS Home: /u01/crs/oracle/product/10.2.0/crs
OCR Location: /dev/raw/raw1
Voting Disk Location: /dev/raw/raw2
Oracle Software Home: /u01/app/oracle/product/10.2.0/db_1
Database Files location: /u02/oradata