
Author: Joshua Ehiguese

Date: Nov 3rd 2014


Title: Steps to delete a cluster node that currently doesn't have a database.
Ref: Oracle Clusterware Administration and Deployment Guide 12c Release 1 (12.1) E17886-14

Document Change Control

Revision    Date          Author             Description of change
1.0         2014-11-03    Joshua Ehiguese    Document creation.

Note: Ensure the node that is going to be dropped has no database instances or other services running. If
any exist, either drop them or relocate them to other nodes in the cluster. The following steps outline
the procedure to remove a node with no database instance running from the existing cluster.
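A quick way to confirm this, run as the database or Grid Infrastructure owner, is sketched below; the database name MYDB, service MYSVC and instance names are placeholders, not taken from this environment:

$ crsctl stat res -t                                            # review which resources currently run on servern2
$ srvctl status database -d MYDB                                # lists each instance and the node it runs on
$ srvctl status service -d MYDB                                 # shows where each service is currently running
$ srvctl relocate service -d MYDB -s MYSVC -i MYDB2 -t MYDB1    # move a service off servern2 if one is still there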

STEP 1
Check the cluster status. The node to be removed is servern2 (shown below, already reporting CRS errors).

-bash-4.1$ crsctl check cluster -all


**************************************************************

servern1:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
servern2:
CRS-4535: Cannot communicate with Cluster Ready Services
CRS-4529: Cluster Synchronization Services is online
CRS-4534: Cannot communicate with Event Manager
**************************************************************
servern3:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online

**************************************************************

STEP 2
Check whether the node is pinned. Run the following as the grid user from node 1.
-bash-4.1$ hostname
servern1.domain.com
-bash-4.1$ whoami
grid
-bash-4.1$ . oraenv
ORACLE_SID = [+ASM1] ?
The Oracle base remains unchanged with value /oracle/grid/GI_base
-bash-4.1$ olsnodes -t -s
servern1 Active Unpinned
servern2 Active Unpinned
servern3 Active Unpinned
-bash-4.1$

Note: if the node to be removed is pinned, you have to unpin it first.

For example, execute the following command as the root user (or via sudo) from any node if the node that is going to be removed is pinned:

$ crsctl unpin css -n servern2
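After unpinning, the node should report as Unpinned again, for example:

$ olsnodes -t -s        # servern2 is expected to show Active Unpinned after the unpin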

STEP 3
Run the following command on the node that is going to be removed:
$GRID_HOME/deinstall/deinstall -local
Note: running this through sudo caused issues, so it was run directly as the grid user (the Grid home owner), as shown below.

-bash-4.1$ ./deinstall -local


Checking for required files and bootstrapping ...
Please wait ...
Location of logs /tmp/deinstall2014-10-28_04-10-04PM/logs/
############ ORACLE DECONFIG TOOL START ############
######################### DECONFIG CHECK OPERATION START #########################
## [START] Install check configuration ##
Checking for existence of the Oracle home location /oracle/grid/product/12.1.0.2_grid
Oracle Home type selected for deinstall is: Oracle Grid Infrastructure for a Cluster
Oracle Base selected for deinstall is: /oracle/grid/GI_base
Checking for existence of central inventory location /home/grid/oraInventory
Checking for existence of the Oracle Grid Infrastructure home /oracle/grid/product/12.1.0.2_grid
The following nodes are part of this cluster: servern2
Checking for sufficient temp space availability on node(s) : 'servern2'
## [END] Install check configuration ##
Traces log file: /tmp/deinstall2014-10-28_04-10-04PM/logs//crsdc_2014-10-28_04-10-12PM.log
Network Configuration check config START
Network de-configuration trace file location: /tmp/deinstall2014-10-28_04-10-04PM/logs/netdc_check2014-10-28_04-10-15-PM.log
Network Configuration check config END
Asm Check Configuration START
ASM de-configuration trace file location: /tmp/deinstall2014-10-28_04-10-04PM/logs/asmcadc_check2014-10-28_04-10-15-PM.log
Database Check Configuration START

Database de-configuration trace file location: /tmp/deinstall2014-10-28_04-10-04PM/logs/databasedc_check2014-10-28_04-10-15-PM.log


Database Check Configuration END
######################### DECONFIG CHECK OPERATION END #########################
####################### DECONFIG CHECK OPERATION SUMMARY #######################
Oracle Grid Infrastructure Home is: /oracle/grid/product/12.1.0.2_grid
The following nodes are part of this cluster: servern2
The cluster node(s) on which the Oracle home deinstallation will be performed are:servern2
Oracle Home selected for deinstall is: /oracle/grid/product/12.1.0.2_grid
Inventory Location where the Oracle home registered is: /home/grid/oraInventory
Option -local will not modify any ASM configuration.
Do you want to continue (y - yes, n - no)? [n]: y
A log of this session will be written to: '/tmp/deinstall2014-10-28_04-10-04PM/logs/deinstall_deconfig2014-10-28_04-10-12-PM.out'
Any error messages from this session will be written to: '/tmp/deinstall2014-10-28_04-10-04PM/logs/deinstall_deconfig2014-10-28_04-10-12-PM.err'
######################## DECONFIG CLEAN OPERATION START ########################
Database de-configuration trace file location: /tmp/deinstall2014-10-28_04-10-04PM/logs/databasedc_clean2014-10-28_04-10-40-PM.log
ASM de-configuration trace file location: /tmp/deinstall2014-10-28_04-10-04PM/logs/asmcadc_clean2014-10-28_04-10-40-PM.log
ASM Clean Configuration END
Network Configuration clean config START
Network de-configuration trace file location: /tmp/deinstall2014-10-28_04-10-

STEP 4
Run the following command as the root user (or via sudo) from an active node in the cluster:
$ crsctl delete node -n servern2
-bash-4.1$ hostname
servern1.domain.com
-bash-4.1$ whoami
grid
-bash-4.1$ . oraenv
ORACLE_SID = [grid] ? +ASM1
The Oracle base has been set to /oracle/grid/GI_base
-bash-4.1$ which crsctl
/oracle/grid/product/12.1.0.2_grid/bin/crsctl
-bash-4.1$ sudo /oracle/grid/product/12.1.0.2_grid/bin/crsctl delete node -n servern2
CRS-4661: Node servern2 successfully deleted.

-bash-4.1$
STEP 5
From any active node, execute the following command as the grid user to update the Oracle inventory for the GI home across the remaining nodes (an RDBMS home, if present, is handled separately below):
$GRID_HOME/oui/bin/runInstaller -updateNodeList ORACLE_HOME=$GRID_HOME "cluster_nodes={servern1,servern3}" CRS=TRUE -silent

-bash-4.1$ hostname
servern1.domain.com
-bash-4.1$ . oraenv
ORACLE_SID = [+ASM1] ?
The Oracle base remains unchanged with value /oracle/grid/GI_base
-bash-4.1$ whoami
grid
-bash-4.1$ pwd
/oracle/grid/product/12.1.0.2_grid/oui/bin
-bash-4.1$ ls
addLangs.sh detachHome.sh filesList.properties lsnodes runConfig.sh runInstaller.sh
attachHome.sh filesList.bat filesList.sh resource runInstaller runSSHSetup.sh
-bash-4.1$ ./runInstaller -updateNodeList ORACLE_HOME=/oracle/grid/product/12.1.0.2_grid
"cluster_nodes={servern1,servern3}" CRS=TRUE -silent
Starting Oracle Universal Installer...
Checking swap space: must be greater than 500 MB. Actual 16387 MB Passed
The inventory pointer is located at /etc/oraInst.loc
Checking swap space: must be greater than 500 MB. Actual 16387 MB Passed
The inventory pointer is located at /etc/oraInst.loc
'UpdateNodeList' was successful.

-bash-4.1$

(The following is only applicable if there is an Oracle Database (RDBMS) software home to be removed from the inventory as well; substitute the remaining cluster node names:)

$GRID_HOME/oui/bin/runInstaller -updateNodeList ORACLE_HOME=$ORACLE_HOME "cluster_nodes={rac1,rac2}" -silent

STEP 6
Verify the node removal:
$ cluvfy stage -post nodedel -n servern2 -verbose

-bash-4.1$ which cluvfy


/oracle/grid/product/12.1.0.2_grid/bin/cluvfy
-bash-4.1$ cd
-bash-4.1$ cluvfy stage -post nodedel -n servern2 -verbose
Performing post-checks for node removal
Checking CRS integrity...
The Oracle Clusterware is healthy on node "servern1"
CRS integrity check passed
Clusterware version consistency passed.
Result:
Node removal check passed
Post-check for node removal was successful.

-bash-4.1$

-bash-4.1$ olsnodes -t -s
servern1 Active Unpinned
servern3 Active Unpinned

-bash-4.1$ crsctl check cluster -all


**************************************************************
servern1:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
servern3:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
-bash-4.1$

STEP 7
Clean up the following files and directories manually on the node that was just dropped:
/etc/oraInst.loc, /etc/oratab, /etc/oracle/, /tmp/.oracle, /opt/ORCLfmap
-bash-4.1$ cd /etc
-bash-4.1$ ls -ltr | grep -i oraIn

-bash-4.1$ ls -ltr | grep -i orat
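A minimal cleanup sketch, assuming only the default locations listed above are in use on the dropped node and nothing else depends on them (run as root and verify each path before removing):

$ sudo rm -f /etc/oraInst.loc /etc/oratab              # central inventory pointer and oratab file
$ sudo rm -rf /etc/oracle /tmp/.oracle /opt/ORCLfmap   # Oracle configuration, socket and file-mapping directories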


Reboot the node if possible.
