
This guide provides step-by-step instructions to install and configure an Oracle 10g Release 2 RAC database on Oracle Enterprise Linux 4 Update 5 (EL4U5). It also includes the steps to configure Linux for Oracle. This walkthrough uses a virtual two-node cluster on VMware with SCSI disks shared between the nodes. The OS is installed for you and the shared disks are also set up for you. This guide is intended for someone who wants hands-on practice installing and configuring the 10g RAC components. We have used minimal system requirements and the fewest possible steps to keep it simple. The configuration choices made in this guide may not be adequate for a production environment and do not necessarily reflect best practices.
Hardware
Two Virtual Machines each with:
1 cpu
2 GB memory
18 GB local disk with OS
10 GB local disk for Oracle binaries
3 x 10 GB shared disks for RAC
Software
OS: Oracle Enterprise Linux 4 Update 5
Oracle Clusterware
Oracle Cluster File System version 2 (OCFS2)
Oracle Automatic Storage Management (ASM)
instance (Optional)
Oracle 10g RAC R2 software
This guide is divided into three main parts. Part I: Configure
Linux for Oracle, Part II: Prepare the Shared Disks, and Part III:
Install Oracle Software.
Let's get started and have some fun!

Note: You will be setting up a number of passwords (for root, oracle, vnc, sys, system, etc.) during the installation. Please write these down, or keep them all the same so they are easy to remember. When asked to run root.sh during installation, run it on node1 first and, when it finishes, run it on node2. Read and follow all the steps described in this guide carefully for a successful installation. If for any reason you run into a problem, it is best to look for help on Metalink.
Part I: Configure Linux for Oracle
Oracle Groups and User Account
We need the Linux groups and user account that will be used to install and maintain the Oracle 10g Release 2 software. The user account will be called 'oracle' and the groups will be 'oinstall' and 'dba'. These are already created for you. You just need to set the group oinstall as the primary group and the group dba as the secondary group for the oracle user.
Execute the following commands as root on each cluster node:

/usr/sbin/usermod -g oinstall -G dba oracle
id oracle

Ex:

# /usr/sbin/usermod -g oinstall -G dba oracle
# id oracle
uid=1000(oracle) gid=2258(oinstall) groups=2258(oinstall),1001(dba)

The user ID and group IDs must be the same on all cluster nodes; verify this by running the above commands on the remaining cluster nodes.
Set the password on the oracle account on all cluster nodes:

# passwd oracle
Changing password for user oracle.
New password:
Retype new password:
passwd: all authentication tokens updated
successfully.
Create Mount Points
Now create mount points to store the Oracle 10g Release 2 software.

Issue the following commands as root on each node:

mkdir -p /u01/app/
chown -R oracle:oinstall /u01/app/
chmod -R 750 /u01/app/

Ex:
# mkdir -p /u01/app/
# chown -R oracle:oinstall /u01/app/
# chmod -R 750 /u01/app/
Configure Kernel Parameters
Log in as root and configure the Linux kernel parameters on each node.

Back up the /etc/sysctl.conf file:

# cp /etc/sysctl.conf /etc/sysctl.conf.org
# vi /etc/sysctl.conf

Make sure the values of the following kernel parameters are equal to or greater than the values shown below:


kernel.shmall = 2097152
kernel.shmmax = 536870912
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
fs.file-max = 658576
net.ipv4.ip_local_port_range = 1024 65000
net.core.rmem_default = 262144
net.core.rmem_max = 1048536
net.core.wmem_default = 262144
net.core.wmem_max = 1048536
If you made any changes, save the /etc/sysctl.conf file and run the following command as root to load the new settings:

# /sbin/sysctl -p

You can run /sbin/sysctl -a to confirm that the values are set correctly.
Setting Shell Limits for the oracle User
Oracle recommends setting limits on the number of processes and the number of open files each Linux account may use. To make these changes, cut and paste the following commands one line at a time as root on each node, or edit each file and add the lines between the EOF markers at the end of the file.

cat >> /etc/security/limits.conf << EOF
oracle soft nproc 2047
oracle hard nproc 16384
oracle soft nofile 1024
oracle hard nofile 65536
EOF



cat >> /etc/pam.d/login << EOF
session    required     /lib/security/pam_limits.so
session    required     pam_unix.so
EOF
/*
The limits.conf file located under the /etc/security directory can be used to control and
limit resources for the users on your system. It is important to set resource limits on all
your users so they can't perform denial-of-service attacks (number of processes, amount of
memory, etc.). These limits are applied when the user logs in.
*/



cat >> /etc/profile << EOF
if [ \$USER = "oracle" ]; then
if [ \$SHELL = "/bin/ksh" ]; then
ulimit -p 16384
ulimit -n 65536
else
ulimit -u 16384 -n 65536
fi
umask 022
fi
EOF
ulimit provides control over the resources available to the shell and to processes started
by it, on systems that allow such control.
The UMASK variable can be confusing to use because it works as a mask:
you set the permission bits that you do not want in the UMASK.
To calculate the permissions that will result from a specific UMASK value,
subtract the UMASK from 666 for files and from 777 for directories.
For example, if you want all files created with permissions of 666, set your UMASK to 000;
if you want all files created with permissions of 000, set your UMASK to 666.
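With the umask of 022 used in the profile above, new files are therefore created with mode 644 (666 - 022) and new directories with 755 (777 - 022). A quick check as the oracle user (the file and directory names here are only illustrative, and the timestamps in the output will differ):

$ umask 022
$ touch testfile
$ mkdir testdir
$ ls -ld testdir testfile
drwxr-xr-x  2 oracle oinstall 4096 Aug  3 10:00 testdir
-rw-r--r--  1 oracle oinstall    0 Aug  3 10:00 testfile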


cat >> /etc/csh.login << EOF


if ( \$USER == "oracle" ) then
limit maxproc 16384
limit descriptors 65536
umask 022
endif
EOF
Configure the Hangcheck Timer
Run the following commands as root on each node:
modprobe hangcheck-timer hangcheck_tick=30 hangcheck_margin=180
Also add the following line to the end of /etc/rc.d/rc.local (but before
the exit 0 line) on both nodes.
modprobe hangcheck-timer hangcheck_tick=30 hangcheck_margin=180
# vi /etc/rc.d/rc.local

Paste the command between the fi and exit 0 lines:

modprobe hangcheck-timer hangcheck_tick=30 hangcheck_margin=180

(For reference, Debian has a similar file called /etc/rc.local which runs at the end of all the multi-user boot levels and in which you can therefore put such commands.)
The hangcheck-timer.ko Module

The hangcheck-timer module uses a kernel-based timer that periodically checks the system task scheduler to catch delays in order to determine the health of the system. If the system hangs or pauses, the timer resets the node. The hangcheck-timer module uses the Time Stamp Counter (TSC) CPU register, which is incremented at each clock signal. The TSC offers much more accurate time measurements because this register is updated by the hardware automatically.
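To confirm that the module is actually loaded on each node, a quick check such as the following can be run as root (the size and use-count columns in the lsmod output will vary on your system):

# lsmod | grep hangcheck
hangcheck_timer         7897  0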



Note: You may see the following printed to /var/log/messages a lot.
This is an issue with VMware, but should not be a problem. You
won't normally see those messages on non-VMware RAC setups.
kernel: Hangcheck: hangcheck value past margin!
Configure /etc/hosts
You will need three hostnames for each node of the cluster: one public hostname for the primary interface, one private hostname for the cluster interconnect, and one public (VIP) hostname for high availability and failover. These hostnames are already pre-assigned and placed in the /etc/hosts file on each node. You just have to copy each node's private hostname line to all the other nodes. When you are done, the /etc/hosts file on each node should look like the example files below.
Note: You will see one additional hostname with -nfs in the /etc/hosts examples below; you will need it to access /nfs/stage, so keep it.
/etc/hosts file used for this walkthrough on Node1:
127.0.0.1        localhost.localdomain localhost
140.87.222.133   ocvmrh2011.us.oracle.com ocvmrh2011                        # node1 public
140.87.241.153   ocvmrh2011-nfs.us.oracle.com ocvmrh2011-nfs ocvmrh2011-a   # node1 nfs
152.68.143.11    ocvmrh2011-priv.us.oracle.com ocvmrh2011-priv              # node1 private
152.68.143.12    ocvmrh2013-priv.us.oracle.com ocvmrh2013-priv              # node2 private
140.87.222.136   ocvmrh2014.us.oracle.com ocvmrh2014                        # node1 vip
140.87.222.137   ocvmrh2015.us.oracle.com ocvmrh2015                        # node2 vip
/etc/hosts file used for this walkthrough on Node2:
127.0.0.1        localhost.localdomain localhost
140.87.222.135   ocvmrh2013.us.oracle.com ocvmrh2013                        # node2 public
140.87.241.155   ocvmrh2013-nfs.us.oracle.com ocvmrh2013-nfs ocvmrh2013-a   # node2 nfs
152.68.143.11    ocvmrh2011-priv.us.oracle.com ocvmrh2011-priv              # node1 private
152.68.143.12    ocvmrh2013-priv.us.oracle.com ocvmrh2013-priv              # node2 private
140.87.222.136   ocvmrh2014.us.oracle.com ocvmrh2014                        # node1 vip
140.87.222.137   ocvmrh2015.us.oracle.com ocvmrh2015                        # node2 vip
For reference, here is a second pair of /etc/hosts files captured from another two-node cluster (node1: ocvmrh2035, node2: ocvmrh2039).

On node1 (ocvmrh2035):

# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1        localhost.localdomain localhost
140.87.222.159   ocvmrh2035.us.oracle.com ocvmrh2035
140.87.222.163   ocvmrh2039.us.oracle.com ocvmrh2039
140.87.241.184   ocvmrh2035-nfs.us.oracle.com ocvmrh2035-nfs ocvmrh2035-a
152.68.143.143   ocvmrh2035-priv.us.oracle.com ocvmrh2035-priv
152.68.143.144   ocvmrh2039-priv.us.oracle.com ocvmrh2039-priv
140.87.222.161   ocvmrh2037.us.oracle.com ocvmrh2037   # Node1 VIP
140.87.222.158   ocvmrh2034.us.oracle.com ocvmrh2034   # Node2 VIP

On node2 (ocvmrh2039):

[root@ocvmrh2039 ~]# cat /etc/hosts
# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1        localhost.localdomain localhost
140.87.222.163   ocvmrh2039.us.oracle.com ocvmrh2039
140.87.222.159   ocvmrh2035.us.oracle.com ocvmrh2035
140.87.241.188   ocvmrh2039-nfs.us.oracle.com ocvmrh2039-nfs ocvmrh2039-a
152.68.143.144   ocvmrh2039-priv.us.oracle.com ocvmrh2039-priv
152.68.143.143   ocvmrh2035-priv.us.oracle.com ocvmrh2035-priv
140.87.222.161   ocvmrh2037.us.oracle.com ocvmrh2037   # Node1 VIP
140.87.222.158   ocvmrh2034.us.oracle.com ocvmrh2034   # Node2 VIP
[root@ocvmrh2039 ~]#

Configure SSH for User Equivalence


During the installation of Oracle RAC 10g Release 2, OUI needs to copy files to and execute programs on the other nodes in the cluster. In order to allow OUI to do that, you must configure SSH to allow user equivalence. Establishing user equivalence with SSH provides a secure means of copying files and executing programs on other nodes in the cluster without requiring password prompts.

The first step is to generate public and private keys for SSH. There
are two versions of the SSH protocol; version 1 uses RSA and
version 2 uses DSA, so we will create both types of keys to ensure
that SSH can use either version. The ssh-keygen program will
generate public and private keys of either type depending upon the
parameters passed to it.
Note: In this guide we are using the ssh-keygen, ssh-agent, and ssh-add commands from the /usr/local/git/bin directory, but you will not have this directory available at a client site. At a client site, use these commands from the /usr/bin directory.
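If you are unsure which binaries you are picking up, you can check with, for example (the paths shown below are what you would expect at a client site):

$ which ssh-keygen ssh-agent ssh-add
/usr/bin/ssh-keygen
/usr/bin/ssh-agent
/usr/bin/ssh-add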
The first step in configuring SSH is to create RSA and DSA key pairs
on both Oracle RAC nodes in the cluster. The command to do this will
create a public and private key for both RSA and DSA (for a total of
four keys per node). The content of the RSA and DSA public keys will
then need to be copied into an authorized key file which is then
distributed to both Oracle RAC nodes in the cluster.
For the purpose of this article, I will refer to ocvmrh2011 as node1 and ocvmrh2013 as node2.
Generating RSA and DSA Keys
1) Log on as the oracle user.
2) Check whether the .ssh directory exists; if it does not, create one:
$ mkdir ~/.ssh
$ chmod 700 ~/.ssh
3) Create the RSA-type public and private encryption keys:
/usr/bin/ssh-keygen -t rsa
This command creates the public key in the /home/oracle/.ssh/id_rsa.pub file and the private key in the /home/oracle/.ssh/id_rsa file.
4) Create the DSA-type public and private keys:
/usr/bin/ssh-keygen -t dsa
This command creates the public key in the /home/oracle/.ssh/id_dsa.pub file and the private key in the /home/oracle/.ssh/id_dsa file.
5) Repeat steps 1 through 4 on all the nodes.
Adding the Keys to an Authorized Key File
1) Go to the .ssh directory:
$ cd ~/.ssh
2) Add the RSA and DSA keys to the authorized_keys file:
$ cat id_rsa.pub >> authorized_keys
$ cat id_dsa.pub >> authorized_keys
3) Using scp, copy the authorized_keys file to the oracle user's .ssh directory on a remote node:
scp authorized_keys node2:/home/oracle/.ssh/
4) Using ssh, log in to the node where you copied the authorized_keys file, using the passphrase you created. Then change to the .ssh directory and, using the cat command, add the RSA and DSA keys for the second node to the authorized_keys file:
ssh node2
Enter passphrase for key '/home/oracle/.ssh/id_rsa':
cd ~/.ssh
cat id_rsa.pub >> authorized_keys
cat id_dsa.pub >> authorized_keys
5) If you have more than 2 nodes in your cluster, repeat steps 3 and 4 for each node you intend to add to your cluster. Copy the most recently updated authorized_keys file to the next node, then add the public keys for that node to the authorized_keys file.
6) After updating the authorized_keys file on all nodes, use scp to copy the complete authorized_keys file from the last node to be updated to all the other cluster nodes, overwriting the existing version on the other nodes. For example:
scp authorized_keys node1:/home/oracle/.ssh/

Use the following steps to create the RSA and DSA key
pairs. Please note that these steps will need to be
completed on both Oracle RAC nodes in the cluster.
1. Log on as the "oracle" UNIX user account.

# su oracle

2. If necessary, create the .ssh directory in the "oracle" user's home directory and set the correct permissions on it:

$ mkdir -p ~/.ssh
$ chmod 700 ~/.ssh

3. Enter the following command to generate an RSA key pair (public and private key) for version 1 of the SSH protocol:

$ /usr/local/git/bin/ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/home/oracle/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/oracle/.ssh/id_rsa.
Your public key has been saved in /home/oracle/.ssh/id_rsa.pub.
The key fingerprint is:
7c:b9:d8:4e:e6:c2:25:65:73:cc:d1:84:b8:b8:f0:c2 oracle@ocvmrh2011

This command will write the public key to the ~/.ssh/id_rsa.pub file and the private key to the ~/.ssh/id_rsa file. Note that you should never distribute the private key to anyone!
4. Enter the following command to generate a DSA key pair (public and private key) for version 2 of the SSH protocol:

$ /usr/local/git/bin/ssh-keygen -t dsa
Generating public/private dsa key pair.
Enter file in which to save the key (/home/oracle/.ssh/id_dsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/oracle/.ssh/id_dsa.
Your public key has been saved in /home/oracle/.ssh/id_dsa.pub.
The key fingerprint is:
93:ae:86:78:c0:12:c0:80:66:56:0a:63:99:39:b0:19 oracle@ocvmrh2011

This command will write the public key to the ~/.ssh/id_dsa.pub file and the private key to the ~/.ssh/id_dsa file. Note that you should never distribute the private key to anyone!

5. Repeat the above steps for both Oracle RAC nodes in the cluster.
Now that both Oracle RAC nodes contain a public and
private key for both RSA and DSA, you will need to create
an authorized key file on one of the nodes. An authorized
key file is nothing more than a single file that contains a
copy of everyone's (every node's) RSA and DSA public
key. Once the authorized key file contains all of the public
keys, it is then distributed to all other nodes in the cluster.

Complete the following steps on one of the nodes in the


cluster to create and then distribute the authorized key
file. For the purpose of this article, I am using node1:
ocvmrh2011
1. First, determine whether an authorized key file already exists on the node (~/.ssh/authorized_keys). In most cases it will not, since this article assumes you are working with a new install. If the file doesn't exist, create it now:

$ touch ~/.ssh/authorized_keys
$ cd ~/.ssh
$ ls -l *.pub
-rw-r--r-- 1 oracle oinstall 607 Aug 2 14:40 id_dsa.pub
-rw-r--r-- 1 oracle oinstall 399 Aug 2 14:35 id_rsa.pub

The listing above should show the id_rsa.pub and id_dsa.pub public keys created in the previous section.

In this step, use SSH to copy the content of the ~/.ssh/id_rsa.pub and ~/.ssh/id_dsa.pub public keys from both Oracle RAC nodes in the cluster to the authorized key file just created (~/.ssh/authorized_keys). Again, this will be done from node1. You will be prompted for the "oracle" UNIX user account password for both Oracle RAC nodes accessed. Notice that when using SSH to access the node you are on (node1), you are prompted for the "oracle" UNIX user account password only the first time; for any of the remaining nodes, it will always ask for the "oracle" UNIX user account password.

The following example is being run from node1 and


assumes a two-node cluster, node1: ocvmrh2011, and
node2: ocvmrh2013
$ ssh ocvmrh2011 cat ~/.ssh/id_rsa.pub
>> ~/.ssh/authorized_keys
The authenticity of host 'ocvmrh2011
(140.87.222.133)' can't be established.
RSA key fingerprint is
35:4e:cb:25:95:5d:6e:0b:46:eb:3b:54:50:d
a:e3:f8.
Are you sure you want to continue
connecting (yes/no)? yes
Password: XXXXXX
<!--[if !supportEmptyParas]--> <!-[endif]-->
$ ssh ocvmrh2011 cat ~/.ssh/id_dsa.pub
>> ~/.ssh/authorized_keys
<!--[if !supportEmptyParas]--> <!-[endif]-->
$ ssh ocvmrh2013 cat ~/.ssh/id_rsa.pub
>> ~/.ssh/authorized_keys
The authenticity of host 'ocvmrh2013
(140.87.222.135)' can't be established.
RSA key fingerprint is
93:cc:4f:94:8a:61:a9:35:6f:24:b9:c8:b7:b
e:01:2f.
Are you sure you want to continue
connecting (yes/no)? yes
Password: XXXXXX
<!--[if !supportEmptyParas]--> <!-[endif]-->
$ ssh ocvmrh2013 cat ~/.ssh/id_dsa.pub
>> ~/.ssh/authorized_keys
Password: XXXXXX
<!--[if !supportEmptyParas]--> <!-[endif]-->
2. At this point, we have the content of the RSA and DSA public keys from every node in the cluster in the authorized key file (~/.ssh/authorized_keys) on node1. We now need to copy it to the remaining nodes in the cluster. In our two-node cluster example, the only remaining node is node2. Use the scp command to copy the authorized key file to all remaining nodes in the cluster:

$ scp ~/.ssh/authorized_keys ocvmrh2013:.ssh/authorized_keys
Password: XXXXXX
authorized_keys    100% 2012    2.0KB/s    00:00

3. Change the permissions of the authorized key file on both Oracle RAC nodes in the cluster by logging in to each node and running the following:

$ chmod 600 ~/.ssh/authorized_keys
Establish User Equivalence
When running the OUI, it will need to run the secure shell
tool commands (ssh and scp) without being prompted for
a passphrase. Even though SSH is configured on both
Oracle RAC nodes in the cluster, using the secure shell
tool commands will still prompt for a passphrase. Before
running the OUI, you need to enable user equivalence for
the terminal session you plan to run the OUI from. For the
purpose of this article, all Oracle installations will be
performed from node1.
User equivalence will need to be enabled on any new
terminal shell session before attempting to run the OUI. If
you log out and log back in to the node you will be
performing the Oracle installation from, you must enable
user equivalence for the terminal shell session as this is
not done by default.

To enable user equivalence for the current terminal shell


session, perform the following steps:
<!--[if !supportEmptyParas]--> <!--[endif]-->
<!--[if !supportLists]-->1.
<!--[endif]-->Logon to the node
where you want to run the OUI from (node1) as the
"oracle" UNIX user account.
# su oracle
<!--[if !supportEmptyParas]--> <!-[endif]-->
<!--[if !supportLists]-->2.
<!--[endif]-->Enter the following
commands:
ssh-agent - authentication agent
ssh-agent is a program to hold private keys used for public key authentication (RSA, DSA). The idea is that ssh-agent is started at the beginning of an X-session or a login session, and all other windows or programs are started as clients of the ssh-agent program. Through the use of environment variables, the agent can be located and automatically used for authentication when logging in to other machines using ssh(1).
$ exec /usr/bin/ssh-agent $SHELL
$ /usr/bin/ssh-add
Identity added: /home/oracle/.ssh/id_rsa (/home/oracle/.ssh/id_rsa)
Identity added: /home/oracle/.ssh/id_dsa (/home/oracle/.ssh/id_dsa)
$
ssh-add adds identities to the authentication agent, ssh-agent. When run without arguments, it adds the default key files in the user's ~/.ssh directory. Alternative file names can be given on the command line. If any file requires a passphrase, ssh-add asks for the passphrase from the user.
The authentication agent must be running and must be an ancestor of the current process for ssh-add to work.
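To confirm that the agent is running in the current session and holding your keys, you can list the loaded identities (key sizes and fingerprints will differ on your system):

$ ssh-add -l
2048 7c:b9:d8:4e:e6:c2:25:65:73:cc:d1:84:b8:b8:f0:c2 /home/oracle/.ssh/id_rsa (RSA)
1024 93:ae:86:78:c0:12:c0:80:66:56:0a:63:99:39:b0:19 /home/oracle/.ssh/id_dsa (DSA)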

$ exec /usr/local/git/bin/ssh-agent $SHELL
$ /usr/local/git/bin/ssh-add
Identity added: /home/oracle/.ssh/id_rsa (/home/oracle/.ssh/id_rsa)
Identity added: /home/oracle/.ssh/id_dsa (/home/oracle/.ssh/id_dsa)

Note: if user equivalence is ever lost, you will have to run ssh-agent and ssh-add again to re-establish it.

Test Connectivity
If SSH is configured correctly, you will be able to use
the ssh and scp commands without being prompted for
anything from this terminal session:
Enter the following commands as the oracle user on both nodes:
$ ssh ocvmrh2011 "date;hostname"
Fri Aug 3 14:53:33 CDT 2007
ocvmrh2011

$ ssh ocvmrh2013 "date;hostname"
Fri Aug 3 14:53:49 CDT 2007
ocvmrh2013
Note: It is possible that the first time you use SSH to
connect to a node from a particular system, you may
see a message similar to the following, just enter yes
if you see this prompt. The second time you try the
same command from the same node you will not see
any prompts.
$ ssh ocvmrh2011 "date;hostname"
The authenticity of host 'ocvmrh2011
(140.87.222.133)' can't be established.
RSA key fingerprint is
35:4e:cb:25:95:5d:6e:0b:46:eb:3b:54:50:da:e3:f8.
Are you sure you want to continue connecting
(yes/no)? yes
Fri Aug 3 14:54:12 CDT 2007
ocvmrh2011
It is crucial that you test connectivity in each direction from all
servers. That will ensure that messages like the one above do not
occur when the OUI attempts to copy files during CRS and database
software installation. This message will only occur the first time an
operation on a remote node is performed, so by testing the
connectivity, you not only ensure that remote operations work
properly, you also complete the initial security key exchange.
Check all ssh combinations by running the above ssh date command. Do this from both nodes:
1. ssh to both public hostnames
2. ssh to both private hostnames
3. ssh to both public IP addresses
4. ssh to both private IP addresses

A scripted way to run all of these checks is sketched below.
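As a convenience, the checks above can be scripted. A minimal sketch, run as the oracle user from each node (the hostnames and addresses below are the example ones used in this guide; replace them with the ones assigned to your nodes):

for host in ocvmrh2011 ocvmrh2013 \
            ocvmrh2011-priv ocvmrh2013-priv \
            140.87.222.133 140.87.222.135 \
            152.68.143.11 152.68.143.12
do
    ssh $host "date;hostname"
done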
Note: If for some reason user equivalence didn't work for you, and you still see a prompt when you run the ssh commands shown above, just run rm -rf ~/.ssh as the oracle user and start over again from the Configure SSH for User Equivalence section of this document.

Part II: Prepare the Shared Disks

Both Oracle Clusterware and Oracle RAC require access to disks that are shared by each node in the cluster. The shared disks must be configured using one of the following methods. Note that you cannot use a "standard" filesystem such as ext3 for shared disk volumes, since such filesystems are not cluster-aware.
For Clusterware:
1. OCFS (Release 1 or 2)
2. raw devices
3. third-party cluster filesystem such as GPFS or Veritas

For RAC database storage:
1. OCFS (Release 1 or 2)
2. ASM
3. raw devices
4. third-party cluster filesystem such as GPFS or Veritas
This guide covers installations using OCFS2 and ASM. If you have a
small number of shared disks, you may wish to use OCFS2 for both
Oracle Clusterware and the Oracle RAC database files. If you have
more than a few shared disks, consider using ASM for Oracle RAC
database files for the performance benefits ASM provides. Note that
ASM cannot be used to store Oracle Clusterware files since
Clusterware must be installed before ASM (ASM depends upon the
services of Oracle Clusterware). This guide uses OCFS2 for Oracle
Clusterware files, and ASM for RAC database files.
Partition the Disks
In order to use either OCFS2 or ASM, you must have unused disk
partitions available. This section describes how to create the
partitions that will be used for OCFS2 and for ASM.
WARNING: Improperly partitioning a disk is one of the surest and
fastest ways to wipe out everything on your hard disk. If you are
unsure how to proceed, stop and get help, or you will risk losing data.
You have three empty SCSI disks set up for you to use for this cluster install. These disks are configured as shared disks and are visible to all nodes. They are:

/dev/sdc
/dev/sdd
/dev/sde
http://arjudba.blogspot.com/2008/08/configureshared-storage-in-oracle-rac.html
Note: you can run the /sbin/sfdisk -s command as the root user to see all the disks.
In this example we will use /dev/sdc for OCFS2, and use /dev/sdd
and /dev/sde for ASM.
You will now use /dev/sdc (an empty SCSI disk with no existing partitions) to create a single partition for the entire disk (10 GB) to be used for OCFS2.
As the root user on Node1, run the following command:
# fdisk /dev/sdc
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel. Changes will remain in memory only,
until you decide to write them. After that, of course, the previous
content won't be recoverable.

The number of cylinders for this disk is set to 1305.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
   (e.g., DOS FDISK, OS/2 FDISK)
Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)


Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-1305, default 1): <enter>
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-1305, default 1305): <enter>
Using default value 1305

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.
Now verify the new partition:
# fdisk -l /dev/sdc
Disk /dev/sdc: 10.7 GB, 10737418240 bytes
255 heads, 63 sectors/track, 1305 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdc1               1        1305    10482381   83  Linux
Repeat the above steps for each disk below and create a single partition on each. Disk partitioning should be done as root from one node only.
/dev/sdd
/dev/sde
When finished partitioning, run the 'partprobe' command as root on
each of the remaining cluster nodes in order to assure that the new
partitions are configured.



# partprobe
partprobe is a program that informs the operating system kernel of partition table changes by requesting that the operating system re-read the partition table.
After running fdisk, you will almost always get an error about the kernel not using the new partition table you just modified. Before GNU released parted, you had to reboot in order for the kernel to purge its cache and reload the partition table, but now all you need to do is run partprobe after exiting fdisk.
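To confirm that every node now sees all three partitions, you can list the partitions the kernel has loaded on each node after running partprobe; you should see sdc1, sdd1, and sde1 (the sizes below reflect the 10 GB example disks used in this guide):

# cat /proc/partitions | grep 'sd[cde]'
   8    32   10485760 sdc
   8    33   10482381 sdc1
   8    48   10485760 sdd
   8    49   10482381 sdd1
   8    64   10485760 sde
   8    65   10482381 sde1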

Oracle Cluster File System (OCFS) Release 2


OCFS2 is a general-purpose cluster file system that can be used to
store Oracle Clusterware files, Oracle RAC database files, Oracle
software, or any other types of files normally stored on a standard
filesystem such as ext3. This is a significant change from OCFS
Release 1, which only supported Oracle Clusterware files and Oracle
RAC database files.
OCFS2 is available free of charge from Oracle as a set of three RPMs: a kernel module, support tools, and a console. There are different kernel module RPMs for each supported Linux kernel. In this exercise, OCFS2 is pre-installed for you on both nodes. You can run the following command on all nodes to verify this; you should see the three RPMs shown below.
# rpm -qa|grep ocfs
ocfs2-2.6.9-55.0.2.6.2.ELsmp-1.2.5-2
ocfs2-tools-1.2.7-1.el4
ocfs2console-1.2.7-1.el4
(Optionally, OCFS2 kernel modules may be downloaded
from http://oss.oracle.com/projects/ocfs2/files/ and the tools and
console may be downloaded from
http://oss.oracle.com/projects/ocfs2-tools/files/.)
Configure OCFS2

You will need a graphical environment to run the OCFS2 console, so let's start vncserver on Node1 as the oracle user:
$ vncserver :55
You will require a password to access your desktops.
Password:
Verify:
New 'ocvmrh2011:55 (oracle)' desktop is
ocvmrh2011:55
Creating default startup script
/home/oracle/.vnc/xstartup
Starting applications specified in
/home/oracle/.vnc/xstartup
Log file is /home/oracle/.vnc/ocvmrh2011:55.log
Now, using vncviewer from your desktop, access the vnc session you just started on Node1 (ocvmrh2011:55). Sign on using the vnc password you set in the above step.
On the vnc desktop, left-click the mouse and select Xterm to open a new window. In the new window, su to root:
$ su
Password:
Run ocfs2console as root:
# ocfs2console
Select Cluster --> Configure Nodes.

Click on Close if you see the following Information message.

Note: This application is slow and may take a few seconds before you see the new window.
Click on Add on the next window, and enter the Name and IP Address of each node in the cluster.

Note: Use a node name the same as that returned by the hostname command.
Ex:
ocvmrh2011 (short name, without the us.oracle.com)
Apply, and Close the window.
Once all of the nodes have been added, click on Cluster --> Propagate Configuration. This will copy the OCFS2 configuration file to each node in the cluster. You may be prompted for root passwords, as ocfs2console uses ssh to propagate the configuration file. Answer yes if you see the following prompt.

<!--[if !vml]--><!--[endif]-->
<!--[if !supportEmptyParas]--> <!--[endif]-->
When you see Finished!, click on Close, and leave the OCFS2 console by clicking on File --> Quit.
After exiting the ocfs2console, you will have a
/etc/ocfs2/cluster.conf similar to the following on all nodes. This
OCFS2 configuration file should be exactly the same on all of the
nodes:
node:
        ip_port = 7777
        ip_address = 140.87.222.133
        number = 0
        name = ocvmrh2011
        cluster = ocfs2

node:
        ip_port = 7777
        ip_address = 140.87.222.135
        number = 1
        name = ocvmrh2013
        cluster = ocfs2

cluster:
        node_count = 2
        name = ocfs2
If you don't see this file on a node, follow the steps below to copy it to the missing node as the root user.
Create the /etc/ocfs2 directory if it is missing:
# mkdir /etc/ocfs2
Copy the cluster.conf file from node1 (where it is found) to the other node (where it is missing). You will be prompted for the root password.
# scp /etc/ocfs2/cluster.conf ocvmrh2013:/etc/ocfs2/cluster.conf
Password:
cluster.conf    100%  240    0.2KB/s   00:00
Configure O2CB to Start on Boot and Adjust O2CB Heartbeat
Threshold
Oracle RAC O2CB Cluster Service
Before we can do anything with OCFS2, like formatting or mounting the file system, we first need to have OCFS2's cluster stack, O2CB, running (which it will be as a result of the configuration process performed above). The stack includes the following services:

NM: Node Manager that keeps track of all the nodes in cluster.conf
HB: Heartbeat service that issues up/down notifications when nodes join or leave the cluster
TCP: Handles communication between the nodes
DLM: Distributed lock manager that keeps track of all locks, their owners, and their status
CONFIGFS: User-space-driven configuration file system mounted at /config
DLMFS: User-space interface to the kernel-space DLM

All of the above cluster services have been packaged in the o2cb system service (/etc/init.d/o2cb). A short listing of some of the more useful commands and options for the o2cb system service can be found at:
http://www.rampantbooks.com/art_hunter_rac_oracle_o2cb_cluster_service.htm
You now need to configure the on-boot properties of the O2CB driver
so that the cluster stack services will start on each boot. You will also
be adjusting the OCFS2 Heartbeat Threshold from its default setting
of 7 to 601. All the tasks within this section will need to be performed
on both nodes in the cluster as root user.
Set the on-boot properties as follows:
# /etc/init.d/o2cb offline ocfs2
# /etc/init.d/o2cb unload
# /etc/init.d/o2cb configure
Configuring the O2CB driver.
This will configure the on-boot properties of the O2CB driver.
The following questions will determine whether the driver is loaded on
boot. The current values will be shown in brackets ('[]'). Hitting
<ENTER> without typing an answer will keep that current value. Ctrl-C
will abort.
Load O2CB driver on boot (y/n) [n]: y
Cluster to start on boot (Enter "none" to clear) [ocfs2]: ocfs2
Specify heartbeat dead threshold (>=7) [31]: 601
Specify network idle timeout in ms (>=5000) [30000]: <enter>
Specify network keepalive delay in ms (>=1000) [2000]: <enter>
Specify network reconnect delay in ms (>=2000) [2000]: <enter>
Writing O2CB configuration: OK
Loading module "configfs": OK
Mounting configfs filesystem at /config: OK
Loading module "ocfs2_nodemanager": OK
Loading module "ocfs2_dlm": OK
Loading module "ocfs2_dlmfs": OK
Mounting ocfs2_dlmfs filesystem at /dlm: OK
Starting O2CB cluster ocfs2: OK
We can now check again to make sure the settings took effect for the o2cb cluster stack:
# cat /proc/fs/ocfs2_nodemanager/hb_dead_threshold
601
The default value was 7, but what does this value represent? Well, it is used in the formula below to determine the fence time (in seconds):

[fence time in seconds] = (O2CB_HEARTBEAT_THRESHOLD - 1) * 2

So, with an O2CB heartbeat threshold of 7, we would have a fence time of:

(7 - 1) * 2 = 12 seconds

In our case we want a larger fence time (1200 seconds), so we adjusted O2CB_HEARTBEAT_THRESHOLD to 601, as shown below:

(601 - 1) * 2 = 1200 seconds

It is important to note that the value of 601 used for the O2CB heartbeat threshold is the maximum you can use to prevent OCFS2 from panicking the kernel.
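You can also confirm that the cluster stack is up with the status option of the o2cb init script, run as root on each node. The exact output varies with the tools version; the important part is that the ocfs2 cluster is reported as online, in a line similar to the one shown here:

# /etc/init.d/o2cb status
...
Checking O2CB cluster ocfs2: Online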
Verify that ocfs2 and o2cb are started at boot time. Check this on
both nodes. As root user:
# chkconfig --list |egrep "ocfs2|o2cb"
ocfs2 0:off 1:off 2:on 3:on 4:on 5:on 6:off

o2cb 0:off 1:off 2:on 3:on 4:on 5:on 6:off


If it doesn't look like the above on both nodes, turn them on with the following commands as root:
# chkconfig ocfs2 on
# chkconfig o2cb on
Create and format the OCFS2 filesystem on the unused disk partition
As root on each of the cluster nodes, create the mount point directory
for the OCFS2 filesystem
# mkdir /u03
Note: It is possible to format and mount the OCFS2 partitions using
the ocfs2console GUI; however, this guide will use the command line
utilities.
The example below creates an OCFS2 filesystem on the unused /dev/sdc1 partition with a volume label of "/u03" (-L /u03), a block size of 4K (-b 4K), and a cluster size of 32K (-C 32K), with 4 node slots (-N 4). See the OCFS2 User's Guide for more information on mkfs.ocfs2 command-line options.
Run the following command as root on node1 only
# mkfs.ocfs2 -b 4K -C 32K -N 4 -L /u03 /dev/sdc1
mkfs.ocfs2 1.2.7
Filesystem label=/u03
Block size=4096 (bits=12)
Cluster size=32768 (bits=15)
Volume size=10733944832 (327574 clusters) (2620592 blocks)
11 cluster groups (tail covers 5014 clusters, rest cover 32256 clusters)
Journal size=67108864
Initial number of node slots: 4
Creating bitmaps: done
Initializing superblock: done
Writing system files: done
Writing superblock: done
Writing backup superblock: 2 block(s)
Formatting Journals: done
Writing lost+found: done
mkfs.ocfs2 successful
Mount the OCFS2 filesystem
Since this filesystem will contain the Oracle Clusterware files and
Oracle RAC database files, we must ensure that all I/O to these files
uses direct I/O (O_DIRECT). Use the "datavolume" option whenever
mounting the OCFS2 filesystem to enable direct I/O. Failure to do
this can lead to data loss in the event of system failure. Mount the ocfs2 filesystem on all cluster nodes by running the following command as the root user.
# mount -t ocfs2 -L /u03 -o datavolume /u03
Notice that the mount command uses the filesystem label (-L /u03) specified during the creation of the filesystem. This is a handy way to refer to the filesystem without having to remember the device name.
To verify that the OCFS2 filesystem is mounted, issue the df
command on both nodes:
# df /u03
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/sdc1             10482368    268736  10213632   3% /u03
To automatically mount the OCFS2 filesystem at system boot, add a line similar to the one below to /etc/fstab on each cluster node:

LABEL=/u03    /u03    ocfs2    _netdev,datavolume    0 0
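To double-check the mount options in effect, you can look at the mounted filesystem on each node; the datavolume option should be listed (the exact option list shown may differ on your system):

# mount | grep /u03
/dev/sdc1 on /u03 type ocfs2 (rw,_netdev,datavolume)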

Create the directories for shared files


As the root user, run the following commands on node1 only. Since /u03 is on a shared disk, all files added from one node will be visible on the other nodes.
CRS files:
# mkdir /u03/oracrs
# chown oracle:oinstall /u03/oracrs
# chmod 775 /u03/oracrs
Database files:
# mkdir /u03/oradata
# chown oracle:oinstall /u03/oradata
# chmod 775 /u03/oradata
Automatic Storage Management (ASM)
ASM was a new storage option introduced with Oracle Database
10gR1 that provides the services of a filesystem, logical volume
manager, and software RAID in a platform-independent manner. ASM
can stripe and mirror your disks, allow disks to be added or removed
while the database is under load, and automatically balance I/O to
remove "hot spots." It also supports direct and asynchronous I/O and
implements the Oracle Data Manager API (simplified I/O system call
interface) introduced in Oracle9i.
ASM is not a general-purpose filesystem and can be used only for
Oracle data files, redo logs, control files, and flash recovery area.
Files in ASM can be created and named automatically by the
database (by use of the Oracle Managed Files feature) or manually
by the DBA. Because the files stored in ASM are not accessible to the
operating system, the only way to perform backup and recovery
operations on databases that use ASM files is through Recovery
Manager (RMAN).

ASM is implemented as a separate Oracle instance that must be up if


other databases are to be able to access it. Memory requirements for
ASM are light: only 64 MB for most systems.
Installing ASM
On Linux platforms, ASM can use raw devices or devices managed
via the ASMLib interface. Oracle recommends ASMLib over raw
devices for ease-of-use and performance reasons. ASMLib 2.0 is
available for free download from OTN. This section walks through the
process of configuring a simple ASM instance by using ASMLib 2.0
and building a database that uses ASM for disk storage.
Determine Which Version of ASMLib You Need
ASMLib 2.0 is delivered as a set of three Linux packages:
oracleasmlib-2.0 - the ASM libraries
oracleasm-support-2.0 - utilities needed to administer ASMLib
oracleasm - a kernel module for the ASM library
Each Linux distribution has its own set of ASMLib 2.0 packages, and
within each distribution, each kernel version has a corresponding
oracleasm package. The following paragraphs describe how to
determine which set of packages you need.
First, determine which kernel you are using by logging in as root and
running the following command:
# uname -rm
2.6.9-55.0.2.6.2.ELsmp i686
We have downloaded the proper packages and staged them in the /u01/stage/asm directory for you. They are:
oracleasm-2.6.9-55.0.2.6.2.ELsmp-2.0.42.el4.i686.rpm

oracleasmlib-2.0.2-1.i386.rpm
oracleasm-support-2.0.3-2.i386.rpm
Optionally, you can download ASMLib packages from OTN:
1. Point your Web browser to http://www.oracle.com/technology/tech/linux/asmlib/index.html
2. Select the link for your version of Linux under Downloads.
3. Download the oracleasmlib and oracleasm-support packages for your version of Linux.
4. Download the oracleasm package corresponding to your kernel.
Since we have all the needed packages in the /u01/stage/asm directory on each node, install them by executing the following command as root on each node:
# cd /u01/stage/asm
# rpm -ivh oracleasm-2.6.9-55.0.2.6.2.ELsmp-2.0.42.el4.i686.rpm \
> oracleasmlib-2.0.2-1.i386.rpm \
> oracleasm-support-2.0.3-2.i386.rpm
Preparing...                ########################################### [100%]
   1:oracleasm-support      ########################################### [ 33%]
   2:oracleasm-2.6.9-55.0.2.########################################### [ 67%]
   3:oracleasmlib           ########################################### [100%]
Configuring ASMLib


Before using ASMLib, you must run a configuration script to prepare
the driver. Run the following command as root, and answer the
prompts as shown in the example below. Run this on each node in
the cluster.
# /etc/init.d/oracleasm configure
Configuring the Oracle ASM library driver.

This will configure the on-boot properties of the Oracle ASM library
driver. The following questions will determine whether the driver is
loaded on boot and what permissions it will have. The current values
will be shown in brackets ('[ ]'). Hitting <ENTER> without typing an
answer will keep that current value. Ctrl-C will abort.

Default user to own the driver interface [ ]: oracle
Default group to own the driver interface [ ]: dba
Start Oracle ASM library driver on boot (y/n) [n]: y
Fix permissions of Oracle ASM disks on boot (y/n) [y]: y
Writing Oracle ASM library driver configuration:           [  OK  ]
Creating /dev/oracleasm mount point:                       [  OK  ]
Loading module "oracleasm":                                [  OK  ]
Mounting ASMlib driver filesystem:                         [  OK  ]
Scanning system for ASM disks:                             [  OK  ]

Next you tell the ASM driver which disks you want it to use. Oracle recommends that each disk contain a single partition for the entire disk. We will use the partitions /dev/sdd1 and /dev/sde1 that we created in the Partition the Disks section.
You mark disks for use by ASMLib by running the following command as root from one of the cluster nodes:
/etc/init.d/oracleasm createdisk [DISK_NAME] [device_name]
Tip: Enter the DISK_NAME in UPPERCASE letters, and give each disk a unique name, e.g., VOL1, VOL2.
# /etc/init.d/oracleasm createdisk VOL1 /dev/sdd1
Marking disk "/dev/sdd1" as an ASM disk:                   [  OK  ]
# /etc/init.d/oracleasm createdisk VOL2 /dev/sde1
Marking disk "/dev/sde1" as an ASM disk:                   [  OK  ]
On all other cluster nodes, run the following command as root to
scan for configured ASMLib disks:
# /etc/init.d/oracleasm scandisks
Scanning system for ASM disks:                             [  OK  ]
Verify that ASMLib has marked the disks on each node:
# /etc/init.d/oracleasm listdisks
VOL1
VOL2
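As an additional check, the marked volumes can also be seen under /dev/oracleasm/disks on each node; the ownership should match the user and group chosen during oracleasm configure (the device numbers and timestamps below are illustrative):

# ls -l /dev/oracleasm/disks
total 0
brw-rw----  1 oracle dba 8, 49 Aug  3 15:10 VOL1
brw-rw----  1 oracle dba 8, 65 Aug  3 15:10 VOL2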

Part III: Install Oracle Software


Oracle Database 10g Release 2 software is downloaded and staged for you. The Linux version of the software you will need for this install is available in the /nfs/stage/linux/oracle/10G-R2 directory on your nodes.
Optionally you can download Oracle Database 10g Release 2 from OTN:
http://www.oracle.com/technology/software/products/database/oracle10g/htdocs/10201linuxsoft.html
If everything is set up correctly, you can now use ssh to log in as the oracle user, execute programs, and copy files on the other cluster nodes without having to enter a password. It is very important to verify user equivalence on all nodes before you start the installer. Run a simple command like date on all cluster nodes; do this for all public and private IPs (the VIP interface will not work until after the cluster is installed and VIPCA has run):
$ ssh ocvmrh2011 date
Tue Aug 14 12:26:31 CDT 2007
Check all ssh combinations. Do this from both nodes:
1. ssh to both public hostnames
2. ssh to both private hostnames

Install Oracle Clusterware


Before installing the Oracle RAC 10g Release 2 database software,
you must first install Oracle Clusterware. Oracle Clusterware requires
two files to be shared among all of the nodes in the cluster: the
Oracle Cluster Registry (100MB) and the Voting Disk (20MB). These
files may be stored on raw devices or on a cluster filesystem. (NFS is
also supported for certified NAS systems, but that is beyond the
scope of this guide.) Oracle ASM may not be used for these files
because ASM is dependent upon services provided by Clusterware.
This guide will use OCFS2 as a cluster filesystem to store the Oracle
Cluster Registry and Voting Disk files.
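Before launching the installer, it is worth double-checking from node1 that the shared directory created earlier for these files is writable by oracle and has free space. A quick check (the output shown by df and ls will differ on your system):

$ df -h /u03
$ ls -ld /u03/oracrs
drwxrwxr-x  2 oracle oinstall 4096 Aug  3 15:20 /u03/oracrs
$ touch /u03/oracrs/testfile && rm /u03/oracrs/testfile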
Use a graphical login to log in to node1 as the oracle user. You can use vncviewer to connect to the vncserver session we started earlier in this guide (ocvmrh2011:55).
Set the ORACLE_BASE environment variable:
$ ORACLE_BASE=/u01/app; export ORACLE_BASE
Start the installation using "runInstaller" from the "clusterware"
directory:
$ cd /nfs/stage/linux/oracle/10G-R2/10.2.0.1.0/clusterware
$ ./runInstaller
1. Welcome
   o Click on Next
2. Specify Inventory Directory and Credentials
   o Inventory directory: /u01/app/oraInventory
   o Group name: oinstall
   o Click on Next
3. Specify Home Details
   o Name: OraCrs10g_home
   o Path: /u01/app/oracle/product/10.2.0/crs
   o Click on Next
4. Product-Specific Prerequisite Checks
   o Correct any problems found before proceeding.
   o Click on Next
5. Specify Cluster Configuration
   o Enter the cluster name (or accept the default of "crs").

<!--[if !vml]--><!--[endif]-->
<!--[if !supportEmptyParas]--> <!--[endif]-->

Click on Edit, and change the node names as assigned to


you. Do not use us.oracle.com on Private and Virtual host
names.
Change the names to look like this:
<!--[if !supportEmptyParas]--> <!--[endif]-->

<!--[if !vml]--><!--[endif]-->
<!--[if !supportEmptyParas]--> <!--[endif]-->
Click on OK, then click on Add and enter node names for
other cluster nodes.
<!--[if !supportEmptyParas]--> <!--[endif]-->

<!--[if !vml]--><!--[endif]-->
<!--[if !supportEmptyParas]--> <!--[endif]-->
Click on OK
<!--[if !supportEmptyParas]--> <!--[endif]-->

<!--[if !vml]--><!--[endif]-->
<!--[if !supportEmptyParas]--> <!--[endif]-->
Click on Next.
<!--[if !supportEmptyParas]--> <!--[endif]-->
<!--[if !supportLists]-->2.
<!--[endif]-->Specify Network Interface
Usage - Specify the Interface Type (public, private, or "do no
use") for each interface
<!--[if !supportEmptyParas]--> <!--[endif]-->

<!--[if !vml]--><!--[endif]-->
Select eth2, and Click on Edit. Change eth2 to Private as bellow:
<!--[if !supportEmptyParas]--> <!--[endif]-->

<!--[if !vml]--><!--[endif]-->
Click OK. The final screen should look like this:
<!--[if !supportEmptyParas]--> <!--[endif]-->

<!--[if !vml]--><!--[endif]-->
Click Next.
<!--[if !supportLists]-->3.
<!--[endif]-->Specify Oracle Cluster
Registry (OCR) Location
<!--[if !supportLists]-->o
<!--[endif]-->Choose External
Redundancy and enter the full pathname of the OCR file
(/u03/oracrs/ocr.crs).
<!--[if !supportLists]-->o
<!--[endif]-->Click on Next
<!--[if !supportLists]-->4.
<!--[endif]-->Specify Voting Disk
Location
<!--[if !supportLists]-->o
<!--[endif]-->Choose External
Redundancy and enter the full pathname of the voting
disk file (/u03/oracrs/vote.crs)
<!--[if !supportLists]-->o
<!--[endif]-->Click on Next
<!--[if !supportLists]-->5.
<!--[endif]-->Summary

<!--[if !supportLists]-->o
<!--[endif]-->Click on Install
<!--[if !supportLists]-->6.
<!--[endif]-->Execute Configuration
Scripts
<!--[if !supportLists]-->o
<!--[endif]-->Execute the scripts as
root on each node, one at a time, starting with the
installation node.
<!--[if !supportLists]-->o
<!--[endif]-->Do not run the scripts
simultaneously. Wait for one to finish before starting
another.
<!--[if !supportLists]-->o
<!--[endif]-->Click on OK to
dismiss the window when done.
<!--[if !supportLists]-->o
<!--[endif]-->Configuration
Assistant will run automatically
<!--[if !supportLists]-->o
<!--[endif]-->Click Exit on End of
Installation screen to exit the installer.
End of Clusterware Installation
Verify that the installation succeeded by running olsnodes from the
$ORACLE_BASE/oracle/product/10.2.0/crs/bin directory;
for example:
$ /u01/app/oracle/product/10.2.0/crs/bin/olsnodes
ocvmrh2011
ocvmrh2013
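You can also confirm that the clusterware stack itself is healthy on each node; a brief check using the same CRS home:

$ /u01/app/oracle/product/10.2.0/crs/bin/crsctl check crs
$ /u01/app/oracle/product/10.2.0/crs/bin/crs_stat -t

The first command reports on the CSS, CRS, and EVM daemons; the second lists the clusterware resources and the node each is running on.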
Once Oracle Clusterware is installed and operating, it's time to install
the rest of the Oracle RAC software.
Create the ASM Instance
If you are planning to use OCFS2 for database storage, skip this
section and continue with Create the RAC Database. If you plan to
use Automatic Storage Management (ASM) for database storage,
follow the instructions below to create an ASM instance on each
cluster node. Be sure you have installed the ASMLib software as
described earlier in this guide before proceeding.
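A quick way to double-check that ASMLib is in place is sketched below; run it as root on each node (package names can vary slightly with the kernel version):

# rpm -qa | grep oracleasm
# /etc/init.d/oracleasm status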
If you have not already done so, log in as the oracle user on node1 and
set the ORACLE_BASE environment variable as in the previous steps. Start
the installation using "runInstaller" from the "database" directory:
$ cd /nfs/stage/linux/oracle/10g-R210.2.0.1.0/database
$ ./runInstaller
1. Welcome
   o Click on Next
2. Select Installation Type
   o Select Enterprise Edition
   o Click on Next
3. Specify Home Details
   o Name: Ora10gASM
   o Path: /u01/app/oracle/product/10.2.0/asm
     Note: Oracle recommends using a different ORACLE_HOME for ASM than
     the ORACLE_HOME used for the database, for ease of administration.
   o Click on Next
4. Specify Hardware Cluster Installation Mode
   o Select Cluster Installation
   o Click on Select All
   o Click on Next
5. Product-specific Prerequisite Checks
   o If you've been following the steps in this guide, all the checks
     should pass without difficulty. If one or more checks fail, correct
     the problem before proceeding.
   o Click on Next
6. Select Configuration Option
   o Select Configure Automatic Storage Management (ASM)
   o Enter the ASM SYS password and confirm
   o Click on Next
7. Configure Automatic Storage Management
   o Disk Group Name: DATA
   o Redundancy
     - High mirrors data twice.
     - Normal mirrors data once. Select this default.
     - External does not mirror data within ASM. This is typically used
       if an external RAID array is providing redundancy.
   o Add Disks
     The disks you configured for use with ASMLib are listed as
     Candidate Disks (VOL1, VOL2). Select both disks to include in the
     disk group.
   o Click on Next
8. Summary
   o A summary of the products being installed is presented.
   o Click on Install.
9. Execute Configuration Scripts
   o At the end of the installation, a pop-up window will appear
     indicating scripts that need to be run as root. Log in as root and
     run the indicated scripts on all nodes as directed.
   o Click on OK when finished.
10. End of Installation
   o Make note of the URLs presented in the summary, and click on Exit
     when ready.
11. Congratulations! Your new Oracle ASM instance is up and ready for
    use.
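To confirm that the ASM instances are registered with the clusterware and running on both nodes, a minimal check (the node names are the ones used in this walkthrough; adjust to yours):

$ export ORACLE_HOME=/u01/app/oracle/product/10.2.0/asm
$ $ORACLE_HOME/bin/srvctl status asm -n ocvmrh2011
$ $ORACLE_HOME/bin/srvctl status asm -n ocvmrh2013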
Create the RAC Database
If you have not already done so, log in as the oracle user on node1 and
set the ORACLE_BASE environment variable as in the previous steps. Start
the installation using "runInstaller" from the "database" directory:
$ cd /nfs/stage/linux/oracle/10g-R210.2.0.1.0/database
$ ./runInstaller
1. Welcome
   o Click on Next
2. Select Installation Type
   o Select Enterprise Edition
   o Click on Next
3. Specify Home Details
   o Name: OraDb10g_home1
   o Path: /u01/app/oracle/product/10.2.0/db
     Note: Oracle recommends using a different ORACLE_HOME for the
     database than the ORACLE_HOME used for ASM.
   o Click on Next
4. Specify Hardware Cluster Installation Mode
   o Select Cluster Installation
   o Click on Select All
   o Click on Next
5. Product-specific Prerequisite Checks
   o If you've been following the steps in this guide, all the checks
     should pass without difficulty. If one or more checks fail, correct
     the problem before proceeding.
   o Click on Next
6. Select Configuration Option
   o Select Create a Database
   o Click on Next
7. Select Database Configuration
   o Select General Purpose
   o Click on Next
8. Specify Database Configuration Options
   o Database Naming: Enter the Global Database Name and SID (ex: racdb)
   o Database Character Set: Accept the default
   o Database Examples: Select Create database with sample schemas
   o Click on Next
9. Select Database Management Option
   o Select Use Database Control for Database Management
   o Click on Next
10. Specify Database Storage Option
   o If you are using OCFS2 for database storage:
     - Select File System
     - Specify Database file location: enter the path name of the OCFS2
       filesystem directory you wish to use (ex: /u03/oradata/racdb)
   o If you are using ASM for database storage (for this exercise we
     will select ASM):
     - Select Automatic Storage Management (ASM)
   o Click on Next
11. Specify Backup and Recovery Options
   o Select Do not enable Automated backups
   o Click on Next
12. For ASM Installations Only: Select ASM Disk Group
   o Select the DATA disk group created in the previous section
   o Click on Next
13. Specify Database Schema Passwords
   o Select Use the same password for all the accounts
   o Enter the password and confirm
   o Click on Next
14. Summary
   o A summary of the products being installed is presented.
   o Click on Install.
15. Configuration Assistants
   o The Oracle Net, Oracle Database, and iSQL*Plus configuration
     assistants will run automatically (you may have to click on the
     VNC desktop to see the new window and to start the assistant).
16. Execute Configuration Scripts
   o At the end of the installation, a pop-up window will appear
     indicating scripts that need to be run as root. Log in as root and
     run the indicated scripts.
   o Click on OK when finished.
17. End of Installation
   o Make note of the URLs presented in the summary, and click on Exit
     when ready.
18. Congratulations! Your new Oracle RAC database is up and ready for
    use.
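A quick way to confirm that both database instances are open, assuming the database name racdb and the instance racdb1 on node1 as used in this guide:

$ export ORACLE_HOME=/u01/app/oracle/product/10.2.0/db
$ export ORACLE_SID=racdb1
$ $ORACLE_HOME/bin/srvctl status database -d racdb
$ $ORACLE_HOME/bin/sqlplus -S / as sysdba <<EOF
select instance_name, host_name, status from gv\$instance;
EOF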
Useful Commands
The following commands are provided for your reference; run them as the
oracle user. Add $ORACLE_HOME/bin to your PATH before using these
commands.

Note: All srvctl commands can be run from one node to perform actions on
other nodes, since srvctl is a cluster-wide tool.
To stop everything, follow this sequence of commands.
Check the status of the entire database:
    srvctl status database -d {db name}                    (ex: racdb)

Stop a database instance:
    srvctl stop instance -d {db name} -i {instance name}   (ex: racdb1)

Stop the entire database on all nodes:
    srvctl stop database -d {db name}

Stop the ASM instance on one node:
    srvctl stop asm -n {node name}

Stop the nodeapps/clusterware resources on one node:
    srvctl stop nodeapps -n {node name}
To start everything, follow the above sequence of commands in reverse
order with the start option, as sketched below.
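For example, a complete start-up on this two-node cluster might look like the following sketch (database and node names are the ones used in this guide; adjust to yours):

$ srvctl start nodeapps -n ocvmrh2011
$ srvctl start nodeapps -n ocvmrh2013
$ srvctl start asm -n ocvmrh2011
$ srvctl start asm -n ocvmrh2013
$ srvctl start database -d racdb
$ srvctl status database -d racdb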
Start enterprise manager
export ORACLE_SID={sid name} (ex: racdb1)
emctl start dbconsole
Start isqlplus
isqlplusctl start
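To verify that Database Control came up, you can check its status (ORACLE_SID must be set as above):

$ emctl status dbconsole

iSQL*Plus is typically reachable at the URL noted on the End of Installation screen (by default on port 5560, e.g. http://<hostname>:5560/isqlplus).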
To access and manage the RAC database from Enterprise Manager, point your
browser to a link similar to ocvmrh2011.us.oracle.com:1158/em for your
installation. Sign on to Enterprise Manager with Username: sys and the
password you assigned to this account, and Connect as: sysdba.
Note: If for any reason you have to reboot the RAC nodes, you may need to
stop nodeapps with srvctl stop nodeapps -n {node name} on both nodes, and
then run the above commands in reverse order with the start option to
start all RAC components.
For more information and documentation for 10g RAC, go to:
http://www.oracle.com/technology/products/database/clustering/index.html
References
Installing Oracle RAC 10g Release 2 on Linux x86 by John Smiley:
http://www.oracle.com/technology/pub/articles/smiley_rac10g_install.html#background

Build Your Own Oracle RAC Cluster on Oracle Enterprise Linux and iSCSI by Jeffrey Hunter:
http://www.oracle.com/technology/pub/articles/hunter_rac10gr2_iscsi.html

Oracle Real Application Clusters:
http://www.oracle.com/technology/products/database/clustering/index.html
This document is maintained by Ramesh Dadhania
(ramesh.dadhania@oracle.com)
Last updated: Wed Oct 15 17:03:02 PDT 2008 by Erik Niklas
(erik.niklas@oracle.com)