# /sbin/sysctl -p
Setting Shell Limits for the oracle User
Oracle recommends setting limits on the number of processes and
the number of open files each Linux account may use. To make these
changes, cut and paste the following commands one line at a time
as root on each node, or edit the file directly and add the lines
between the EOF markers to the end of the file.
cat >> /etc/security/limits.conf << EOF
oracle soft nproc 2047
oracle hard nproc 16384
oracle soft nofile 1024
oracle hard nofile 65536
EOF
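After logging out and back in as the oracle user, the new limits should be in effect. A minimal sketch to check them, using only the standard bash ulimit flags (nothing here is guide-specific):

```shell
# Report the current session limits; after re-login as oracle these
# should match the values written to /etc/security/limits.conf
# (soft values apply by default; -H reads the hard limits).
soft_nproc=$(ulimit -Su)   # max user processes (soft)
hard_nproc=$(ulimit -Hu)   # max user processes (hard)
soft_nofile=$(ulimit -Sn)  # open files (soft)
hard_nofile=$(ulimit -Hn)  # open files (hard)
echo "nproc:  soft=$soft_nproc hard=$hard_nproc"
echo "nofile: soft=$soft_nofile hard=$hard_nofile"
```

If the reported values do not match limits.conf, check that the login went through PAM (e.g. `su - oracle` rather than `su oracle`).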
140.87.222.137  ocvmrh2015.us.oracle.com ocvmrh2015  # node2 vip
The /etc/hosts file used for this walkthrough on Node2:
# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1       localhost.localdomain localhost
140.87.222.135  ocvmrh2013.us.oracle.com ocvmrh2013                       # node2 public
140.87.241.155  ocvmrh2013-nfs.us.oracle.com ocvmrh2013-nfs ocvmrh2013-a  # node2 nfs
152.68.143.11   ocvmrh2011-priv.us.oracle.com ocvmrh2011-priv             # node1 private
152.68.143.12   ocvmrh2013-priv.us.oracle.com ocvmrh2013-priv             # node2 private
140.87.222.136  ocvmrh2014.us.oracle.com ocvmrh2014                       # node1 vip
140.87.222.137  ocvmrh2015.us.oracle.com ocvmrh2015                       # node2 vip
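With the hosts file in place, a short loop can confirm that every name the cluster relies on actually resolves. This is a sketch, not part of the original walkthrough; the host names are the ones used in this guide (substitute your own), and it assumes `getent` is available (standard on Linux):

```shell
# Check that each cluster hostname resolves; localhost is included
# as a baseline that should always succeed. Names that fail are
# collected in $missing for inspection.
missing=""
for h in localhost ocvmrh2013 ocvmrh2013-nfs ocvmrh2011-priv \
         ocvmrh2013-priv ocvmrh2014 ocvmrh2015; do
    if getent hosts "$h" > /dev/null 2>&1; then
        echo "ok:      $h"
    else
        echo "missing: $h"
        missing="$missing $h"
    fi
done
```

Run it on every node; any `missing:` line points at an /etc/hosts entry that differs between nodes.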
Use the following steps to create the RSA and DSA key
pairs. Please note that these steps will need to be
completed on both Oracle RAC nodes in the cluster.

1. Log on as the "oracle" UNIX user account.

# su oracle
2. If necessary, create the .ssh directory in the "oracle"
user's home directory and set the correct permissions on it:
$ mkdir -p ~/.ssh
$ chmod 700 ~/.ssh
3. Enter the following command to generate an RSA key pair
(public and private key) for the SSH protocol:

$ /usr/local/git/bin/ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/home/oracle/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/oracle/.ssh/id_rsa.
Your public key has been saved in /home/oracle/.ssh/id_rsa.pub.
The key fingerprint is:
1f:59:21:f3:d8:16:bb:76:f0:c3:2e:0f:53:c4:29:a9 oracle@ocvmrh2035
Generating public/private rsa key pair.
Enter file in which to save the key (/home/oracle/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/oracle/.ssh/id_rsa.
Your public key has been saved in /home/oracle/.ssh/id_rsa.pub.
The key fingerprint is:
7c:b9:d8:4e:e6:c2:25:65:73:cc:d1:84:b8:b8:f0:c2 oracle@ocvmrh2011
This command writes the public key to the
~/.ssh/id_rsa.pub file and the private key to the
~/.ssh/id_rsa file. Note that you should never
distribute the private key to anyone!
4. Enter the following command to generate a DSA key pair
(public and private key) for version 2 of the SSH protocol:
$ /usr/local/git/bin/ssh-keygen -t dsa
Generating public/private dsa key pair.
Enter file in which to save the key (/home/oracle/.ssh/id_dsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/oracle/.ssh/id_dsa.
Your public key has been saved in /home/oracle/.ssh/id_dsa.pub.
$ ssh-agent $SHELL
$ ssh-add
Identity added: /home/oracle/.ssh/id_rsa (/home/oracle/.ssh/id_rsa)
Identity added: /home/oracle/.ssh/id_dsa (/home/oracle/.ssh/id_dsa)
$
ssh-add adds identities to the authentication agent, ssh-agent. When run without
arguments, it adds the file $HOME/.ssh/identity. Alternative file names can be given on
the command line. If any file requires a passphrase, ssh-add asks the user for the
passphrase.
The authentication agent must be running and must be an ancestor of
the current process for ssh-add to work.
$ exec /usr/local/git/bin/ssh-agent $SHELL
$ /usr/local/git/bin/ssh-add
Identity added: /home/oracle/.ssh/id_rsa (/home/oracle/.ssh/id_rsa)
Identity added: /home/oracle/.ssh/id_dsa (/home/oracle/.ssh/id_dsa)
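The walkthrough does not show gathering the public keys into ~/.ssh/authorized_keys on each node, which passwordless ssh requires. A hedged sketch of one common way to do it; the `run`/`DRY_RUN` wrapper is illustrative (not from the guide), and `ocvmrh2013` is assumed to be the peer node:

```shell
# DRY_RUN=1 only prints each command; clear it to actually run them.
DRY_RUN=1
run() { if [ -n "$DRY_RUN" ]; then echo "+ $*"; else "$@"; fi; }

OTHER=ocvmrh2013   # the peer node in this walkthrough

# Append the local public keys to authorized_keys, pull the peer's
# keys over ssh, then push the combined file back to the peer.
run sh -c 'cat ~/.ssh/id_rsa.pub ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys'
run sh -c "ssh $OTHER 'cat ~/.ssh/id_rsa.pub ~/.ssh/id_dsa.pub' >> ~/.ssh/authorized_keys"
run scp ~/.ssh/authorized_keys $OTHER:.ssh/authorized_keys
run chmod 600 ~/.ssh/authorized_keys
```

The first ssh/scp invocations still prompt for a password; once authorized_keys is in place on both nodes, the prompts stop.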
Note: if user equivalence is ever lost, you will have to run ssh-agent and ssh-add again to re-establish it.
Test Connectivity
If SSH is configured correctly, you will be able to use
the ssh and scp commands without being prompted for
anything from this terminal session:
Enter the following commands as the oracle user on both
nodes.
$ ssh ocvmrh2011 "date;hostname"
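A loop variant of the same test covers every name at once. This is a sketch (the node list is this walkthrough's); `BatchMode=yes` makes ssh fail rather than prompt, so a broken setup reports itself instead of hanging:

```shell
# Try each hostname; a correct setup prints the date and hostname
# for every entry with no password or passphrase prompt.
n=0
for h in ocvmrh2011 ocvmrh2013 ocvmrh2011-priv ocvmrh2013-priv; do
    n=$((n + 1))
    echo "== $h =="
    ssh -o BatchMode=yes -o ConnectTimeout=5 "$h" "date; hostname" \
        || echo "no passwordless ssh to $h"
done
```

Run the loop from both nodes; the installer later performs the same checks, and any prompt here means user equivalence is not yet established.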
The shared disks must be configured using one of the following
methods. Note that you cannot use a "standard" filesystem such as
ext3 for shared disk volumes, since such filesystems are not
cluster-aware.
For Clusterware:

1. OCFS (Release 1 or 2)
2. raw devices
3. a third-party cluster filesystem such as GPFS or Veritas

For RAC database storage:

1. OCFS (Release 1 or 2)
2. ASM
3. raw devices
4. a third-party cluster filesystem such as GPFS or Veritas
This guide covers installations using OCFS2 and ASM. If you have a
small number of shared disks, you may wish to use OCFS2 for both
Oracle Clusterware and the Oracle RAC database files. If you have
more than a few shared disks, consider using ASM for Oracle RAC
database files for the performance benefits ASM provides. Note that
ASM cannot be used to store Oracle Clusterware files since
Clusterware must be installed before ASM (ASM depends upon the
services of Oracle Clusterware). This guide uses OCFS2 for Oracle
Clusterware files, and ASM for RAC database files.
Partition the Disks
In order to use either OCFS2 or ASM, you must have unused disk
partitions available. This section describes how to create the
partitions that will be used for OCFS2 and for ASM.
WARNING: Improperly partitioning a disk is one of the surest and
fastest ways to wipe out everything on your hard disk. If you are
unsure how to proceed, stop and get help, or you will risk losing data.
You have three empty SCSI disks set up for you to use for this
cluster install. These disks are configured as shared disks and are
visible to all nodes. They are:
/dev/sdc
/dev/sdd
/dev/sde
http://arjudba.blogspot.com/2008/08/configureshared-storage-in-oracle-rac.html

Note: you can run the /sbin/sfdisk -s command as the root user to
see all the disks.
In this example we will use /dev/sdc for OCFS2, and /dev/sdd
and /dev/sde for ASM.
You will now use /dev/sdc (an empty SCSI disk with no existing
partitions) to create a single partition for the entire disk (10 GB) to be
used for OCFS2.
As the root user on Node1, run the following command:
# fdisk /dev/sdc
Device contains neither a valid DOS partition table, nor Sun, SGI
or OSF disklabel
Building a new DOS disklabel. Changes will remain in memory only,
until you decide to write them. After that, of course, the previous
content won't be recoverable.

The number of cylinders for this disk is set to 1305.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
   (e.g., DOS FDISK, OS/2 FDISK)
Warning: invalid flag 0x0000 of partition table 4 will be
corrected by w(rite)
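The interactive keystrokes are not shown above; the usual sequence for one primary partition spanning the disk is n, p, 1, Enter, Enter, w. A scripted sketch of that sequence follows — it is destructive, so the `CONFIRM` guard (an illustrative addition, not from the guide) keeps it from running until you have verified the device:

```shell
# Scripted equivalent of answering n, p, 1, <Enter>, <Enter>, w in
# interactive fdisk: one primary partition covering the whole disk.
# DESTRUCTIVE: double-check DISK before setting CONFIRM=yes.
DISK=/dev/sdc
CONFIRM=   # set to "yes" only after verifying $DISK is the right disk
if [ "$CONFIRM" = yes ]; then
    printf 'n\np\n1\n\n\nw\n' | fdisk "$DISK"
else
    echo "refusing to touch $DISK (set CONFIRM=yes to proceed)"
fi
```

After writing the table, re-read it on the other node (e.g. with `partprobe` or a reboot) so both nodes see /dev/sdc1.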
Note: This application is slow, and it may take a few seconds
before you see a new window.
Click on Add on the next window, and enter the Name and IP
Address of each node in the cluster.
Note: Use the node name exactly as returned by the hostname
command.
Ex: ocvmrh2011 (the short name, without us.oracle.com)
Click Apply, then Close the window.
Once all of the nodes have been added, click on Cluster
--> Propagate Configuration.
This will copy the OCFS2 configuration file to each node in the
cluster. You may be prompted for root passwords, as ocfs2console
uses ssh to propagate the configuration file. Answer yes if you see
the following prompt.
When you see Finished!, click on Close, and leave the OCFS2
console by clicking on File --> Quit.
After exiting the ocfs2console, you will have a
/etc/ocfs2/cluster.conf similar to the following on all nodes. This
OCFS2 configuration file should be exactly the same on all of the
nodes:
node:
        ip_port = 7777
        ip_address = 140.87.222.133
        number = 0
        name = ocvmrh2011
        cluster = ocfs2

node:
        ip_port = 7777
        ip_address = 140.87.222.135
        number = 1
        name = ocvmrh2013
        cluster = ocfs2

cluster:
        node_count = 2
        name = ocfs2
If you don't see this file on a node, follow the steps below to copy
it to the missing nodes as the root user.
Create the /etc/ocfs2 directory if it is missing.
# mkdir /etc/ocfs2
Copy the cluster.conf file from node1 (where it exists) to the
other node (where it is missing). You will be prompted for the root
password.
# scp /etc/ocfs2/cluster.conf ocvmrh2013:/etc/ocfs2/cluster.conf
Password:
cluster.conf                  100%  240   0.2KB/s   00:00
Configure O2CB to Start on Boot and Adjust O2CB Heartbeat
Threshold
Oracle RAC O2CB Cluster Service
Before we can do anything with OCFS2, such as formatting or mounting the
file system, we first need OCFS2's cluster stack, O2CB, running
(which it will be as a result of the configuration process performed
above). The stack includes the following services:

NM: Node Manager, which keeps track of all the nodes in cluster.conf
HB: Heartbeat service, which issues up/down notifications when nodes join or leave
the cluster
TCP: Handles communication between the nodes
DLM: Distributed lock manager, which keeps track of all locks, their owners, and
their status

All of the above cluster services are packaged in the o2cb
system service (/etc/init.d/o2cb). Here is a short listing of some of the
more useful commands and options for the o2cb system service.
http://www.rampantbooks.com/art_hunter_rac_oracle_o2cb_cluster_service.htm
You now need to configure the on-boot properties of the O2CB driver
so that the cluster stack services will start on each boot. You will also
be adjusting the OCFS2 Heartbeat Threshold from its default setting
of 7 to 601. All the tasks within this section will need to be performed
on both nodes in the cluster as root user.
Set the on-boot properties as follows:
# /etc/init.d/o2cb offline ocfs2
# /etc/init.d/o2cb unload
# /etc/init.d/o2cb configure
Configuring the O2CB driver.
This will configure the on-boot properties of the O2CB driver.
The following questions will determine whether the driver is loaded on
boot. The current values will be shown in brackets ('[]'). Hitting
<ENTER> without typing an answer will keep that current value. Ctrl-C
will abort.
Load O2CB driver on boot (y/n) [n]: y
Cluster to start on boot (Enter "none" to clear) [ocfs2]: ocfs2
Specify heartbeat dead threshold (>=7) [31]: 601
Specify network idle timeout in ms (>=5000) [30000]: <enter>
Specify network keepalive delay in ms (>=1000) [2000]: <enter>
Specify network reconnect delay in ms (>=2000) [2000]: <enter>
Writing O2CB configuration: OK
Loading module "configfs": OK
Mounting configfs filesystem at /config: OK
Loading module "ocfs2_nodemanager": OK
Loading module "ocfs2_dlm": OK
Loading module "ocfs2_dlmfs": OK
Mounting ocfs2_dlmfs filesystem at /dlm: OK
Starting O2CB cluster ocfs2: OK
We can now check again to make sure the settings took effect for
the o2cb cluster stack:
# cat /proc/fs/ocfs2_nodemanager/hb_dead_threshold
601
The default value was 7, but what does this value represent? It
is used in the formula below to determine the fence time (in seconds):

[fence time in seconds] = (O2CB_HEARTBEAT_THRESHOLD - 1) * 2

So, with an O2CB heartbeat threshold of 7, we would have a fence
time of:

(7 - 1) * 2 = 12 seconds

In our case, I wanted a larger fence time (of 1200 seconds), so I
adjusted O2CB_HEARTBEAT_THRESHOLD to 601 as shown below:

(601 - 1) * 2 = 1200 seconds
It is important to note that the value of 601 I used for the O2CB
heartbeat threshold is the maximum you can use to prevent OCFS2
from panicking the kernel.
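The arithmetic above can be captured in a tiny shell function, which is handy when experimenting with other threshold values (the function name is illustrative):

```shell
# Fence time from the O2CB heartbeat threshold:
#   fence_seconds = (threshold - 1) * 2
fence() { echo $(( ($1 - 1) * 2 )); }

echo "threshold 7   -> $(fence 7) s"     # 12 s, the old default
echo "threshold 601 -> $(fence 601) s"   # 1200 s, this guide's value
```

To go the other way, pick the fence time you want and solve for the threshold: threshold = fence_seconds / 2 + 1.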
Verify that ocfs2 and o2cb are started at boot time. Check this on
both nodes as the root user:
# chkconfig --list |egrep "ocfs2|o2cb"
ocfs2 0:off 1:off 2:on 3:on 4:on 5:on 6:off
oracleasmlib-2.0.2-1.i386.rpm
oracleasm-support-2.0.3-2.i386.rpm
Optionally, you can download ASMLib packages from OTN:
1. Point your Web browser to http://www.oracle.com/technology/tech/linux/asmlib/index.html
2. Select the link for your version of Linux under Downloads.
3. Download the oracleasmlib and oracleasm-support packages for your version of Linux.
4. Download the oracleasm package corresponding to your kernel.
Since we have all the needed packages in the /u01/stage/asm
directory on each node, next install the packages by executing the
following command as root on each node:
# cd /u01/stage/asm
# rpm -ivh oracleasm-2.6.9-55.0.2.6.2.ELsmp-2.0.4-2.el4.i686.rpm \
> oracleasmlib-2.0.2-1.i386.rpm \
> oracleasm-support-2.0.3-2.i386.rpm
Preparing...                ########################################### [100%]
   1:oracleasm-support      ########################################### [ 33%]
   2:oracleasm-2.6.9-55.0.2.########################################### [ 67%]
   3:oracleasmlib           ########################################### [100%]
Configuring ASMLib
Next you tell the ASM driver which disks you want it to use. Oracle
recommends that each disk contain a single partition for the entire
disk.
We will use the partitions /dev/sdd1 and /dev/sde1 that we created in
the section Partition the Disks.
You mark disks for use by ASMLib by running the following command
as root from one of the cluster nodes:
/etc/init.d/oracleasm createdisk [DISK_NAME] [device_name]
Tip: Enter the DISK_NAME in UPPERCASE letters, and give each
disk a unique name, e.g. VOL1, VOL2.
# /etc/init.d/oracleasm createdisk VOL1 /dev/sdd1
Marking disk "/dev/sdd1" as an ASM disk:                   [  OK  ]
# /etc/init.d/oracleasm createdisk VOL2 /dev/sde1
Marking disk "/dev/sde1" as an ASM disk:                   [  OK  ]
On all other cluster nodes, run the following command as root to
scan for configured ASMLib disks:
# /etc/init.d/oracleasm scandisks
Scanning system for ASM disks:                             [  OK  ]
Verify that ASMLib has marked the disks on each node:
# /etc/init.d/oracleasm listdisks
VOL1
VOL2
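To confirm each volume name maps back to the intended partition, the same init script offers a `querydisk` action. A sketch with a guard so it degrades gracefully on machines without ASMLib (the guard and `vols` variable are illustrative additions):

```shell
# Map each ASMLib volume name back to its block device so VOL1/VOL2
# can be checked against /dev/sdd1 and /dev/sde1.
ASMSCRIPT=/etc/init.d/oracleasm
vols="VOL1 VOL2"
if [ -x "$ASMSCRIPT" ]; then
    for vol in $vols; do
        "$ASMSCRIPT" querydisk "$vol"
    done
else
    echo "oracleasm not installed on this machine; would query: $vols"
fi
```

Run it on every node after `scandisks`; a volume that lists on one node but not another usually means the partition table change has not been re-read there.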
   o Name: OraCrs10g_home
   o Path: /u01/app/oracle/product/10.2.0/crs
   o Click on Next
4. Product-Specific Prerequisite Checks
   o Correct any problems found before proceeding.
   o Click on Next
1. Specify Cluster Configuration
   o Enter the cluster name (or accept the default of "crs");
Click on OK, then click on Add and enter node names for
other cluster nodes.

Click on OK

Click on Next.
2. Specify Network Interface Usage - Specify the Interface Type
(public, private, or "do not use") for each interface

Select eth2, and click on Edit. Change eth2 to Private as below:

Click OK. The final screen should look like this:

Click Next.
3. Specify Oracle Cluster Registry (OCR) Location
   o Choose External Redundancy and enter the full pathname of the OCR file (/u03/oracrs/ocr.crs).
   o Click on Next
4. Specify Voting Disk Location
   o Choose External Redundancy and enter the full pathname of the voting disk file (/u03/oracrs/vote.crs)
   o Click on Next
5. Summary
   o Click on Install
6. Execute Configuration Scripts
   o Execute the scripts as root on each node, one at a time, starting with the installation node.
   o Do not run the scripts simultaneously. Wait for one to finish before starting another.
   o Click on OK to dismiss the window when done.
   o Configuration Assistant will run automatically
   o Click Exit on the End of Installation screen to exit the installer.
End of Clusterware Installation
Verify that the installation succeeded by running olsnodes from the
$ORACLE_BASE/product/10.2.0/crs/bin directory;
for example:
$ /u01/app/oracle/product/10.2.0/crs/bin/olsnodes
ocvmrh2011
ocvmrh2013
Once Oracle Clusterware is installed and operating, it's time to install
the rest of the Oracle RAC software.
Create the ASM Instance
If you are planning to use OCFS2 for database storage, skip this
section and continue with Create the RAC Database. If you plan to
use Automatic Storage Management (ASM) for database storage,
follow the instructions below to create an ASM instance on each
cluster node. Be sure you have installed the ASMLib software as
described earlier in this guide before proceeding.
If you have not already done so, login as oracle user on node1 and
set the ORACLE_BASE environment variable as in previous steps.
Start the installation using "runInstaller" from the "database" directory:
$ cd /nfs/stage/linux/oracle/10g-R210.2.0.1.0/database
$ ./runInstaller
1. Welcome
   o Click on Next
2. Select Installation Type
   o Select Enterprise Edition
   o Click on Next
3. Specify Home Details
   o Name: Ora10gASM
   o Path: /u01/app/oracle/product/10.2.0/asm
     Note: Oracle recommends using a different ORACLE_HOME for ASM than the ORACLE_HOME used for the database, for ease of administration.
   o Click on Next
4. Specify Hardware Cluster Installation Mode
   o Select Cluster Installation
   o Click on Select All
   o Click on Next
5. Product-specific Prerequisite Checks
   o If you've been following the steps in this guide, all the checks should pass without difficulty. If one or more checks fail, correct the problem before proceeding.
   o Click on Next
6. Select Configuration Option
   o Select Configure Automatic Storage Management (ASM)
   o Enter the ASM SYS password and confirm
   o Click on Next
7. Configure Automatic Storage Management
   o Disk Group Name: DATA
   o Redundancy
     - High mirrors data twice.
     - Normal mirrors data once. Select this default.
     - External does not mirror data within ASM. This is typically used if an external RAID array is providing redundancy.
   o Add Disks
     The disks you configured for use with ASMLib are listed as Candidate Disks (VOL1, VOL2). Select both disks to include in the disk group.
   o Click on Next
8. Summary
   o A summary of the products being installed is presented.
   o Click on Install.
9. Execute Configuration Scripts
   o At the end of the installation, a pop-up window will appear indicating scripts that need to be run as root. Log in as root and run the indicated scripts on all nodes as directed.
   o Click on OK when finished.
10. End of Installation
   o Make note of the URLs presented in the summary, and click on Exit when ready.
11. Congratulations! Your new Oracle ASM Instance is up and ready for use.
Create the RAC Database
   o Select Create a Database
   o Click on Next
7. Select Database Configuration
   o Select General Purpose
   o Click on Next
8. Specify Database Configuration Options
   o Database Naming: Enter the Global Database Name and SID: racdb
   o Database Character Set: Accept the default
   o Database Examples: Select Create database with sample schemas
   o Click on Next
9. Select Database Management Option
   o Select Use Database Control for Database Management
   o Click on Next
10. Specify Database Storage Option
   o If you are using OCFS2 for database storage
     - Select File System
     - Specify Database file location: Enter the path name to the OCFS2 filesystem directory you wish to use, ex: /u03/oradata/racdb
   o If you are using ASM for database storage (for this exercise we are going to select ASM)
     - Select Automatic Storage Management (ASM)
   o Click on Next
11. Specify Backup and Recovery Options
   o Select Do not enable Automated backups
   o Click on Next
12. For ASM Installations Only:
   o Select ASM Disk Group
     - Select the DATA disk group created in the previous section
     - Click on Next
13. Specify Database Schema Passwords
   o Select Use the same password for all the accounts
   o Enter the password and confirm
   o Click on Next
14. Summary
   o A summary of the products being installed is presented.
   o Click on Install.
15. Configuration Assistants
   o The Oracle Net, Oracle Database, and iSQL*Plus configuration assistants will run automatically (you may have to click on the vnc desktop to see the new window and to start the Assistant.)
16. Execute Configuration Scripts
   o At the end of the installation, a pop-up window will appear indicating scripts that need to be run as root. Log in as root and run the indicated scripts.
   o Click on OK when finished.
17. End of Installation
   o Make note of the URLs presented in the summary, and click on Exit when ready.
18. Congratulations! Your new Oracle Database is up and ready for use.
Useful Commands
Following are some commands, for your reference, that you can
run as the oracle user.

Add $ORACLE_HOME/bin to your PATH before using
these commands.

Note: Any srvctl command can be run from one node to
perform an action on another node, since srvctl is a cluster-wide tool.

To stop everything, follow this sequence of commands.

Check the status of the entire database:
srvctl status database -d {db name} (ex: racdb)

Stop a database instance:
srvctl stop instance -d {db name} -i {instance name} (ex: racdb1)

Stop the entire database on all nodes:
srvctl stop database -d {db name}

Stop the ASM instance on one node:
srvctl stop asm -n {node name}

Stop the nodeapps/clusterware on one node:
srvctl stop nodeapps -n {node name}

To start everything, follow the above sequence of commands in
reverse order with the start option.
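The stop sequence above can be sketched as a small script using this guide's names (racdb, ocvmrh2011, ocvmrh2013). The `run`/`DRY_RUN` wrapper is an illustrative addition so the ordering can be previewed before anything is actually stopped:

```shell
# Full-stack shutdown in the order the guide describes:
# database first, then per-node ASM, then per-node nodeapps.
# DRY_RUN=1 prints the commands; clear it to execute them.
DB=racdb
NODES="ocvmrh2011 ocvmrh2013"
DRY_RUN=1
run() { if [ -n "$DRY_RUN" ]; then echo "+ $*"; else "$@"; fi; }

run srvctl stop database -d "$DB"
for node in $NODES; do
    run srvctl stop asm -n "$node"
    run srvctl stop nodeapps -n "$node"
done
```

Reversing the loop and swapping `stop` for `start` (nodeapps, then ASM, then the database) gives the corresponding startup script.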