
Hands-on Lab on Sun Solaris

What we are using in this lab ….

 Solaris 10
 RAW Device
 Oracle Cluster File System Rel 2
 Automatic Storage Management
 Oracle Database 10g Enterprise Edition Rel2
 Oracle Clusterware 10g Rel2
What else you need besides the software

 4 IP addresses in the same segment: 2 for the public network and 2 for the VIPs
 2 IP addresses for the private interconnect
 A gigabit switch is recommended for the private interconnect
Infrastructure View

Database Server Layer
• Two database servers, orcl1 and orcl2, each running Solaris 10 with Oracle 10g Database, Oracle Clusterware, and Oracle ASM, forming one RAC (ORACLE_SID=orcl)
• Public LAN and a private interconnect (the 192.10.2.x and 10.10.1.x segments in the original diagram)

Storage Layer
• SE3510 array presenting LUN0/1 through LUN4 as raw devices
• The LUNs hold the OCR/voting disk plus the ASM disks for the ORCL_DATA1 and FLASH_RECOVERY_AREA disk groups
How do you login to the environment
• Open IE and go to this URL: http://dg-ssc.demo.sun.com/

• Click Log In (First Option)

• You will see a screen that says "Sun Secure Global Desktop"

• You will also see a popup window that says "Security Warning", Click
Accept.

• From the IE, you will get a Sun Secure Global Desktop login screen.

• Enter oracle0x as the Username and abc123 as the Password. This logs you in to the portal.

• You will see a web page titled "Welcome to Sun Microsystems Asia
South Sun Solution Center"

• On the left panel, you will see a series of terminal sessions that can be
launched.
• Scroll down to the option that says Terminal X (Terminal X launches a
telnet session to c-lab-nodeX).

• Hold the Shift key and click on this option; you will be prompted with a
Unix login window. The Unix login is root and the password is root.
Be sure to uncheck "Save Password".

• A Telnet session will be launched.

• From the telnet session you can perform most Solaris administration
tasks, such as creating the oracle user. The public interface is bge0 and
the private interface is bge2.

• To end the telnet session, type exit at the command line; this closes
the telnet application.

• To get an X display for the oracle user, you must re-login to
Terminal X. End all root telnet sessions, if any, then Shift-click on
Terminal X again and log in with the oracle id and the password you used earlier.
• To test the X display, you may start Mozilla by typing mozilla
at the $ prompt.

• If you need to run a script as root while the current telnet session is
logged in as oracle, start another telnet session by clicking on the
top-left menu of the telnet application. At the oracle telnet login, do a
su to root.

• To log out, close all applications and type exit in every open telnet
session.

• Click "Logout" in the top right-hand corner of IE to log out of
Sun Secure Global Desktop.
Flow of Installation

1. System Preparation
2. Install Clusterware
3. Patch Clusterware
4. Install ASM
5. Patch ASM
6. Prepare Disk for ASM
7. Create ASM Instance
8. Install 10g Database
9. Patch 10g Database
10. Create Clustered Database
11. Create Listener


Preparing Oracle Installation
• Plumb the private interconnect & assign IP addresses on both nodes
Node 1
# ifconfig bge2 plumb
# ifconfig bge2 10.0.0.1 netmask 255.255.255.0
# ifconfig bge2 up

• Create /etc/hostname.bge2
# gedit /etc/hostname.bge2
Put this entry in the file: c-lab-node1-priv

• Update /etc/hosts
# gedit /etc/hosts

127.0.0.1 localhost
192.168.200.11 c-lab-node1 c-lab-node1.lab.sg loghost
192.168.200.21 c-lab-node2
192.168.200.12 c-lab-node1-vip
192.168.200.22 c-lab-node2-vip
10.0.0.1 c-lab-node1-priv
10.0.0.4 c-lab-node2-priv
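A quick sanity check on the entries above: every name the Oracle installer will resolve (public, VIP, and private) should appear in /etc/hosts on both nodes. The sketch below uses a stand-in file named hosts.demo; on the lab machines you would run the same loop against /etc/hosts itself.

```shell
# Stand-in copy of the lab's /etc/hosts for this sketch.
cat > hosts.demo <<'EOF'
127.0.0.1      localhost
192.168.200.11 c-lab-node1 c-lab-node1.lab.sg loghost
192.168.200.21 c-lab-node2
192.168.200.12 c-lab-node1-vip
192.168.200.22 c-lab-node2-vip
10.0.0.1       c-lab-node1-priv
10.0.0.4       c-lab-node2-priv
EOF

# Every cluster name must be present, or the installer will fail later.
for h in c-lab-node1 c-lab-node2 c-lab-node1-vip c-lab-node2-vip \
         c-lab-node1-priv c-lab-node2-priv; do
  grep -qw "$h" hosts.demo && echo "$h ok" || echo "$h MISSING"
done
```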
• Plumb the private interconnect & assign IP addresses on both nodes
Node 2
# ifconfig bge2 plumb
# ifconfig bge2 10.0.0.4 netmask 255.255.255.0
# ifconfig bge2 up

• Create /etc/hostname.bge2
# gedit /etc/hostname.bge2
Put this entry in the file: c-lab-node2-priv

• Update /etc/hosts on host c-lab-node2
# gedit /etc/hosts
127.0.0.1 localhost
192.168.200.21 c-lab-node2 c-lab-node2.lab.sg loghost
192.168.200.11 c-lab-node1
192.168.200.12 c-lab-node1-vip
192.168.200.22 c-lab-node2-vip
10.0.0.1 c-lab-node1-priv
10.0.0.4 c-lab-node2-priv

• Update /etc/netmasks
10.12.21.0 255.255.255.0
10.0.0.0 255.255.255.0
• Update /etc/system for both nodes
Oracle 10gRel2 Parameters

set noexec_user_stack=1
set shmsys:shminfo_shmmax=4294967295
set semsys:seminfo_semmap=1024
set semsys:seminfo_semmni=2048
* set semsys:seminfo_semmns=2048 (Obsolete in Solaris 10)
set semsys:seminfo_semmsl=2048
set semsys:seminfo_semmnu=2048
set semsys:seminfo_semume=200
* set shmsys:shminfo_shmmin=200 (Obsolete in Solaris 10)
set shmsys:shminfo_shmmni=200
* set shmsys:shminfo_shmseg=200 (Obsolete in Solaris 10)
set semsys:seminfo_semvmx=32767
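The shmmax value above is not arbitrary: 4294967295 is the 32-bit maximum (4 GB minus one byte), the largest shared-memory segment size a 32-bit field can describe. A quick arithmetic check:

```shell
# 4294967295 = 2^32 - 1 = 4 GB minus one byte.
SHMMAX=4294967295
[ "$SHMMAX" -eq $((4 * 1024 * 1024 * 1024 - 1)) ] && echo "shmmax is 4 GB - 1"
```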

• Disable the automounter (autofs) on both nodes

# svcs -a | grep autofs
# svcadm disable autofs

• Update /etc/auto_master and comment out the /home entry on both nodes



• Reboot the machine


#reboot

• Create Oracle user/group on both nodes


# groupadd -g 101 dba
# mkdir -p /export/home
# chmod 755 /export/home
# useradd -u 1001 -g dba -d /export/home/oracle -s /bin/ksh -m oracle
# passwd oracle
# su - oracle

• Create ~/.rhosts and put a + sign in the file

• Test rlogin to each node


From c-lab-node1
rlogin c-lab-node2-priv
From c-lab-node2
rlogin c-lab-node1-priv
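The step above can be sketched as follows. A lone + in ~/.rhosts trusts every host and every user, which is acceptable only in a closed lab like this one, never in production. The .rhosts.demo filename is a stand-in so the sketch does not touch a real home directory:

```shell
# Lab-only: "+" in .rhosts means trust everyone.
RHOSTS=./.rhosts.demo          # stand-in for ~/.rhosts in this sketch
echo '+' > "$RHOSTS"
chmod 600 "$RHOSTS"            # rlogind ignores a group/world-writable .rhosts
cat "$RHOSTS"                  # prints: +
```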

• Create set_oracle_env.sh in user oracle home dir


$ cp /mnt/oracle/10gSol64/OracleRAC-ASM-Setup/set_oracle_env.sh set_oracle_env.sh
$ chmod 755 set_oracle_env.sh

• Create crsstat.sh in user oracle home dir


$ cp /mnt/oracle/10gSol64/OracleRAC-ASM-Setup/crsstat.sh crsstat.sh
$ chmod 755 crsstat.sh

Whenever you log in as the oracle user, execute the following to set the environment:
$ . ./set_oracle_env.sh

Check that the environment variables are set


$ env
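The lab supplies set_oracle_env.sh ready-made and its exact contents are not shown here, so the following is only a hypothetical stand-in illustrating the kind of variables such a script must export (the script is sourced with `. ./set_oracle_env.sh` precisely so these exports persist in the login shell):

```shell
# Hypothetical sketch of a set_oracle_env.sh -- the real lab script is copied
# from /mnt/oracle and may differ. Paths match the /app/oracle ORACLE_HOME
# created later in this lab.
export ORACLE_BASE=/app/oracle
export ORACLE_HOME=$ORACLE_BASE/product/10.2.0/db
export ORACLE_SID=orcl1                    # orcl2 on the second node
export PATH=$ORACLE_HOME/bin:$PATH
echo "ORACLE_HOME=$ORACLE_HOME"
```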
• Prepare OCR & Voting
Note: per Metalink Doc 367715.1, the first 1 MB of the device must not be used. To be
safe, do not use the first 10 cylinders for the OCR and voting disk.
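The 10-cylinder safety margin is easy to verify from the geometry format reports for these SE3510 LUNs (64 heads, 64 sectors per track, 512-byte sectors):

```shell
# Each cylinder = heads * sectors/track * bytes/sector
#               = 64 * 64 * 512 = 2 MB,
# so skipping 10 cylinders skips 20 MB -- well clear of the protected 1 MB.
BYTES_PER_CYL=$((64 * 64 * 512))
SKIP=$((10 * BYTES_PER_CYL))
echo "bytes per cylinder: $BYTES_PER_CYL"     # 2097152
echo "bytes skipped:      $SKIP"              # 20971520
```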

• Do this on one node
# format
e.g. available disks:

AVAILABLE DISK SELECTIONS:


0. c1t32d0 <SUN-StorEdge3510-327R cyl 34873 alt 2 hd 64 sec 64>
/pci@1e,600000/SUNW,qlc@2/fp@0,0/ssd@w216000c0ff07bb0c,0
1. c1t42d0 <SUN-StorEdge3510-327R cyl 34873 alt 2 hd 64 sec 64>
/pci@1e,600000/SUNW,qlc@2/fp@0,0/ssd@w216000c0ffa7bb0c,0
2. c2t0d0 <SUN36G cyl 24620 alt 2 hd 27 sec 107>
/pci@1c,600000/scsi@2/sd@0,0
3. c2t1d0 <SUN36G cyl 24620 alt 2 hd 27 sec 107>
/pci@1c,600000/scsi@2/sd@1,0
Specify disk (enter its number): 0

Format> partition

Partition> print
Use c1t32d0s0 and c1t32d0s1 for the OCR and voting disk (typically 120 MB each
is sufficient).

Partition Layout of c1t32d0

Current partition table (original):


Total disk cylinders available: 34873 + 2 (reserved cylinders)

Part Tag Flag Cylinders Size Blocks


0 root wm 50 - 561 1024.00MB (512/0/0) 2097152
1 unassigned wm 562 - 1073 1024.00MB (512/0/0) 2097152
2 backup wu 0 - 34872 68.11GB (34873/0/0) 142839808
3 unassigned wm 0 0 (0/0/0) 0
4 unassigned wm 0 0 (0/0/0) 0
5 unassigned wm 0 0 (0/0/0) 0
6 unassigned wm 0 0 (0/0/0) 0
7 unassigned wm 0 0 (0/0/0) 0

Partition> quit

Format> quit

• Initialize the disks (optional; needed only if the disks were previously used)


# dd if=/dev/zero of=/dev/rdsk/c1t32d0s0 bs=125829120 count=1
# dd if=/dev/zero of=/dev/rdsk/c1t32d0s1 bs=125829120 count=1
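The block size passed to dd is not arbitrary either: 125829120 bytes is exactly 120 MB, the size quoted earlier as sufficient for the OCR and voting devices, so a single count=1 write zeroes the whole region in one pass:

```shell
# 125829120 = 120 * 1024 * 1024, i.e. dd zeroes exactly 120 MB in one block.
BS=$((120 * 1024 * 1024))
echo "$BS"    # 125829120
```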

• Change ownership for the disk on both nodes


# chown oracle:dba /dev/rdsk/c1t32d0s0
# chmod 755 /dev/rdsk/c1t32d0s0
# chown oracle:dba /dev/rdsk/c1t32d0s1
# chmod 755 /dev/rdsk/c1t32d0s1

• Create ORACLE_HOME dir on both nodes


# mkdir -p /app/oracle
# chown oracle:dba /app/oracle
# chmod 755 /app/oracle

• Log in as oracle (select the CRS home) and set the display

$ . ./set_oracle_env.sh
Installing the Oracle Clusterware
Run installer
>> c-lab-node1
$ /mnt/oracle/10gSol64/10gRel2-Dvd2/clusterware/runInstaller
Run the orainstRoot.sh and root.sh script
>> c-lab-node1, c-lab-node2
/app/oracle/oraInventory/orainstRoot.sh
/app/oracle/product/10.2.0/crs/root.sh
Installing the Oracle Clusterware - vipca
Patch the CRS
• Install the Oracle 10.2.0.3 patch into the CRS home (ensure all Oracle services are down)
Stop crs services
# /etc/init.d/init.crs stop

$ /mnt/oracle/10gSol64/10gRel2-10.2.03Patch/runInstaller
Preparing disk for ASM
# format

Layout of c1t42d0
Current partition table (original):
Total disk cylinders available: 34873 + 2 (reserved cylinders)

Part Tag Flag Cylinders Size Blocks


0 root wm 10 - 521 1024.00MB (512/0/0) 2097152
1 unassigned wm 0 0 (0/0/0) 0
2 backup wu 0 - 34872 68.11GB (34873/0/0) 142839808
3 unassigned wm 522 - 10761 20.00GB (10240/0/0) 41943040
4 unassigned wm 10762 - 31241 40.00GB (20480/0/0) 83886080
5 unassigned wm 0 0 (0/0/0) 0
6 unassigned wm 0 0 (0/0/0) 0
7 unassigned wm 0 0 (0/0/0) 0
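Assuming the same 64-head by 64-sector geometry shown earlier for these LUNs (2 MB per cylinder), the slice sizes in the table can be cross-checked against their cylinder counts:

```shell
# 2 MB per cylinder (64 heads * 64 sectors * 512 bytes).
MB_PER_CYL=2
echo "slice 0: $((512 * MB_PER_CYL)) MB"            # 512 cyl  -> 1024 MB
echo "slice 3: $((10240 * MB_PER_CYL / 1024)) GB"   # 10240 cyl -> 20 GB
echo "slice 4: $((20480 * MB_PER_CYL / 1024)) GB"   # 20480 cyl -> 40 GB
```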
• Change ownership for the disk on both nodes
# chown oracle:dba /dev/rdsk/c1t42d0s3
# chmod 755 /dev/rdsk/c1t42d0s3
# chown oracle:dba /dev/rdsk/c1t42d0s4
# chmod 755 /dev/rdsk/c1t42d0s4

• Set ORACLE_HOME for ASM and Install ASM Home


$ . ./set_oracle_env.sh
$ /mnt/oracle/10gSol64/10gRel2-Dvd1/database/runInstaller
Installing the ASM
Run installer
>> c-lab-node1
Run the root.sh script
>> c-lab-node1, c-lab-node2
/app/oracle/product/10.2.0/asm/root.sh
Patch the ASM
• Install the Oracle 10.2.0.3 patch into the ASM home (ensure all Oracle services
are down)
$ /mnt/oracle/10gSol64/10gRel2-10.2.03Patch/runInstaller
Create ASM Instance
Run the Database Configuration Assistant (DBCA)
• Start crs services (If it is not started)
# /etc/init.d/init.crs start
• Log in as oracle and select the ASM home
$ . ./set_oracle_env.sh
$ dbca
Installing Oracle 10g Database
Run installer
• Set ORACLE_HOME for DB and Install DB Home
$ . ./set_oracle_env.sh
$ /mnt/oracle/10gSol64/10gRel2-Dvd1/database/runInstaller
Run the root.sh script on both nodes
>> c-lab-node1, c-lab-node2
/app/oracle/product/10.2.0/db/root.sh
Patch the Database software

• Install the Oracle 10.2.0.3 patch into the DB home (ensure all Oracle services are down)
Stop all oracle processes
$ srvctl stop asm -n c-lab-node1
$ srvctl stop asm -n c-lab-node2
$ srvctl stop nodeapps -n c-lab-node1
$ srvctl stop nodeapps -n c-lab-node2

# /etc/init.d/init.crs stop (for both nodes)

$ /mnt/oracle/10gSol64/10gRel2-10.2.03Patch/runInstaller
Create a Clustered Database
Run the Database Configuration Assistant (DBCA)
• Start all Oracle processes if they are not already started
# /etc/init.d/init.crs start (for both nodes)
• Login to oracle and select DB home
# su - oracle
$ dbca
For this test configuration, click Add and enter orcltest as the Service Name. Leave both instances
set to Preferred, and select Basic for the TAF Policy.
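On the client side, the orcltest service is normally reached through a TAF-aware tnsnames.ora entry along these lines. The VIP host names come from the /etc/hosts entries earlier in this lab; the RETRIES and DELAY values here are illustrative assumptions, not lab-mandated settings:

```
ORCLTEST =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = c-lab-node1-vip)(PORT = 1521))
    (ADDRESS = (PROTOCOL = TCP)(HOST = c-lab-node2-vip)(PORT = 1521))
    (LOAD_BALANCE = yes)
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = orcltest)
      (FAILOVER_MODE =
        (TYPE = SELECT)(METHOD = BASIC)(RETRIES = 180)(DELAY = 5)
      )
    )
  )
```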
Create Listener
Run the Network Configuration Assistant (netca)
>> c-lab-node1
/app/oracle/product/10.2.0/db/bin/netca
TAF Demo
From a Windows machine (or other non-RAC client machine), log in to the clustered database as the
SYSTEM user through the orcltest service:

C:\> sqlplus system/manager@orcltest

COLUMN instance_name FORMAT a13
COLUMN host_name FORMAT a9
COLUMN failover_method FORMAT a15
COLUMN failed_over FORMAT a11

SELECT
instance_name
, host_name
, NULL AS failover_type
, NULL AS failover_method
, NULL AS failed_over
FROM v$instance
UNION
SELECT
NULL
, NULL
, failover_type
, failover_method
, failed_over
FROM v$session
WHERE username = 'SYSTEM';

INSTANCE_NAME HOST_NAME   FAILOVER_TYPE FAILOVER_METHOD FAILED_OVER
------------- ----------- ------------- --------------- -----------
orcl1         c-lab-node1
                          SELECT        BASIC           NO
DO NOT log out of the above SQL*Plus session!
Now that we have run the query above, shut down the orcl1 instance on c-lab-node1
using the abort option. To perform this operation, use the srvctl command-line
utility as follows:

# su - oracle
$ srvctl status database -d orcl
Instance orcl1 is running on node c-lab-node1
Instance orcl2 is running on node c-lab-node2

$ srvctl stop instance -d orcl -i orcl1 -o abort

$ srvctl status database -d orcl


Instance orcl1 is not running on node c-lab-node1
Instance orcl2 is running on node c-lab-node2
Now let's go back to our SQL session and rerun the SQL statement in the buffer:

COLUMN instance_name FORMAT a13
COLUMN host_name FORMAT a9
COLUMN failover_method FORMAT a15
COLUMN failed_over FORMAT a11

SELECT
instance_name
, host_name
, NULL AS failover_type
, NULL AS failover_method
, NULL AS failed_over
FROM v$instance
UNION
SELECT
NULL
, NULL
, failover_type
, failover_method
, failed_over
FROM v$session
WHERE username = 'SYSTEM';

INSTANCE_NAME HOST_NAME   FAILOVER_TYPE FAILOVER_METHOD FAILED_OVER
------------- ----------- ------------- --------------- -----------
orcl2         c-lab-node2
                          SELECT        BASIC           YES

SQL> exit
