Cluster High Availability IP

Contents:
1. Check the cluster interconnect
   a. Checking in /etc/hosts
   b. Check the interconnect using oifcfg
2. Fetching extra NIC information using oifcfg
   a. Checking the same in ifconfig
   b. Checking the interconnect detail in the alert log
   c. Checking the detail in the database and ASM
3. Adding the Interconnect
4. Checking the new IP detail using oifcfg
5. Adding the new IP detail to the cluster interconnect with oifcfg
6. Checking the new IP detail with oifcfg
7. Checking the added detail with ASM
8. Check the ASM alert logs to confirm new interconnect usage

1. Check the cluster interconnect.


[oracle@lnx1 ~]$ crsctl stat res -t -init| grep -1 haip| tail -2
ora.cluster_interconnect.haip
1 ONLINE ONLINE lnx1
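
For a fuller view, crsctl can also query the HAIP resource by name. A minimal sketch, assuming crsctl is run from the grid home's bin directory or is on the PATH:

# Show the full status of the HAIP resource on this node:
crsctl status resource ora.cluster_interconnect.haip -init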

a. Checking in /etc/hosts


[oracle@lnx1 ~]$ cat /etc/hosts| grep priv
# eth1 internal private
192.168.2.1 lnx1-priv.localdomain lnx1-priv
192.168.2.2 lnx2-priv.localdomain lnx2-priv
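
Name resolution for the private hostnames can be double-checked with getent, which queries the same sources the resolver uses (a minimal check, reusing the hostnames above):

# Resolve both private names through the system resolver:
getent hosts lnx1-priv lnx2-priv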

b. Check the interconnect using oifcfg


[oracle@lnx1 bin]$ oifcfg getif
eth0 192.168.1.0 global public
eth1 192.168.2.1 global cluster_interconnect

2. Fetching extra NIC information using oifcfg


[oracle@lnx1 bin]$ oifcfg iflist -p -n |grep eth1
eth1 192.168.2.1 UNKNOWN 255.255.255.0
eth1 169.254.0.0 UNKNOWN 255.255.0.0

Note: the 169.254.0.0 (link-local) subnet on eth1 is brought up by the ora.cluster_interconnect.haip resource.


a. Checking the same in ifconfig

[oracle@lnx1 bin]$ /sbin/ifconfig -a
eth1 Link encap:Ethernet HWaddr 08:00:27:41:15:1E
inet addr:192.168.2.1 Bcast:192.168.2.255 Mask:255.255.255.0
inet6 addr: fe80::a00:27ff:fe41:151e/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:42470 errors:0 dropped:0 overruns:0 frame:0
TX packets:54153 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:26503598 (25.2 MiB) TX bytes:37994628 (36.2 MiB)
eth1:1 Link encap:Ethernet HWaddr 08:00:27:41:15:1E
inet addr:169.254.246.67 Bcast:169.254.255.255 Mask:255.255.0.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
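
On systems where ifconfig is deprecated or absent, the iproute2 tools report the same addresses; the HAIP address appears as a secondary address labelled eth1:1 (an equivalent sketch, assuming the iproute2 package is installed):

# List the IPv4 addresses on eth1, including the HAIP alias:
/sbin/ip -4 addr show eth1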

b. Checking the interconnect detail in the alert log.


[oracle@lnx1 grid]$ grep eth1 /u01/app/oracle/diag/asm/+asm/+ASM1/trace/alert_+ASM1.log | tail -2
Private Interface 'eth1:1' configured from GPnP for use as a private
interconnect.
[name='eth1:1', type=1, ip=169.254.246.67, mac=08-00-27-41-15-1e,
net=169.254.0.0/16, mask=255.255.0.0,
use=haip:cluster_interconnect/62]
Note: the interconnect uses the virtual private IP 169.254.xxx.xxx instead of the real private IP.

c. Checking the detail in the database and ASM


SQL> SELECT a.INSTANCE_NAME, b.NAME, b.IP_ADDRESS
     from gv$cluster_interconnects b, gv$instance a
     where a.inst_id = b.inst_id;

INSTANCE_NAME    NAME            IP_ADDRESS
---------------- --------------- ----------------
+ASM1            eth1:1          169.254.246.67
+ASM2            eth1:1          169.254.157.181

SQL> select a.INSTANCE_NAME, b.NAME, b.IP_ADDRESS
     from gv$cluster_interconnects b, gv$instance a
     where a.inst_id = b.inst_id;

INSTANCE_NAME    NAME            IP_ADDRESS
---------------- --------------- ----------------
ORCL1            eth1:1          169.254.246.67
ORCL2            eth1:1          169.254.157.181

3. Adding the Interconnect.


Add to /etc/hosts on both nodes:
# eth1 internal private
192.168.2.1 lnx1-priv.localdomain lnx1-priv
192.168.2.2 lnx2-priv.localdomain lnx2-priv
30.30.30.1 lnx1-privb.localdomain lnx1-privb
30.30.30.2 lnx2-privb.localdomain lnx2-privb

Restart the clusterware on both nodes, then check /var/log/messages. A typical restart sequence, run as root on each node (a sketch using the grid home visible in the log below):
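
# Stop and restart Grid Infrastructure on this node:
/u01/app/11.2.0/grid/bin/crsctl stop crs
/u01/app/11.2.0/grid/bin/crsctl start crs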


Dec 22 18:14:44 lnx1 avahi-daemon[3731]: Withdrawing address record
for 169.254.246.67 on eth1.
Dec 22 18:15:03 lnx1 logger: exec /u01/app/11.2.0/grid/perl/bin/perl
-I/u01/app/11.2.0/grid/perl/lib
/u01/app/11.2.0/grid/bin/crswrapexece.pl
/u01/app/11.2.0/grid/crs/install/s_crsconfig_lnx1_env.txt
/u01/app/11.2.0/grid/bin/ohasd.bin "reboot"
Dec 22 18:15:51 lnx1 avahi-daemon[3731]: Registering new address
record for 169.254.208.10 on eth3.
Dec 22 18:15:51 lnx1 avahi-daemon[3731]: Withdrawing address record
for 169.254.208.10 on eth3.
Dec 22 18:15:51 lnx1 avahi-daemon[3731]: Registering new address
record for 169.254.208.10 on eth3.
Dec 22 18:15:52 lnx1 avahi-daemon[3731]: Registering new address
record for 169.254.74.145 on eth1.
Dec 22 18:15:52 lnx1 avahi-daemon[3731]: Withdrawing address record
for 169.254.74.145 on eth1.
Dec 22 18:15:52 lnx1 avahi-daemon[3731]: Registering new address
record for 169.254.74.145 on eth1.
Dec 22 18:15:52 lnx1 avahi-daemon[3731]: Withdrawing address record
for 169.254.74.145 on eth1.
Dec 22 18:15:52 lnx1 avahi-daemon[3731]: Registering new address
record for 169.254.74.145 on eth1.

4. Checking the new IP detail using oifcfg

[root@lnx2 bin]# ./oifcfg iflist -p -n
eth0 192.168.1.0 PRIVATE 255.255.255.0
eth1 192.168.2.1 UNKNOWN 255.255.255.0
eth1 169.254.0.0 UNKNOWN 255.255.128.0
eth3 30.30.30.0 UNKNOWN 255.255.255.0
eth3 169.254.128.0 UNKNOWN 255.255.128.0

Note: with two interconnect interfaces, HAIP now splits the 169.254.0.0/16 link-local range into two /17 subnets (mask 255.255.128.0), one per interface.

Check connectivity, and repeat on the second node:


[root@lnx1 ~]# ping lnx1-privb
PING lnx1-privb.localdomain (30.30.30.1) 56(84) bytes of data.
64 bytes from lnx1-privb.localdomain (30.30.30.1): icmp_seq=1 ttl=64
time=0.039 ms
64 bytes from lnx1-privb.localdomain (30.30.30.1): icmp_seq=2 ttl=64
time=0.056 ms
64 bytes from lnx1-privb.localdomain (30.30.30.1): icmp_seq=3 ttl=64
time=0.039 ms
[root@lnx2 ~]# ping lnx2-privb
PING lnx2-privb.localdomain (30.30.30.2) 56(84) bytes of data.
64 bytes from lnx2-privb.localdomain (30.30.30.2): icmp_seq=1 ttl=64
time=0.048 ms
64 bytes from lnx2-privb.localdomain (30.30.30.2): icmp_seq=2 ttl=64
time=0.152 ms
64 bytes from lnx2-privb.localdomain (30.30.30.2): icmp_seq=3 ttl=64
time=0.063 ms
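
The pings above only confirm that each node can reach its own new address; interconnect traffic flows between nodes, so cross-node reachability is worth checking too (same style of test, from lnx1 to lnx2):

# From lnx1, confirm the other node's new private address answers:
ping -c 3 lnx2-privb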

5. Adding the new IP detail to the cluster interconnect with oifcfg

[root@lnx1 bin]# ./oifcfg setif -global eth3/30.30.30.0:cluster_interconnect
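
The setif change is recorded in the OCR and generally takes effect only after the clusterware is restarted on all nodes. Should the new interface ever need to be backed out, oifcfg can remove it the same way (a sketch; run only once the interface is no longer needed):

# Remove the eth3 subnet from the cluster_interconnect definition:
./oifcfg delif -global eth3/30.30.30.0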

6. Checking the new IP detail with oifcfg

[root@lnx1 bin]# ./oifcfg getif
eth0 192.168.1.0 global public
eth1 192.168.2.1 global cluster_interconnect
eth3 30.30.30.0 global cluster_interconnect

7. Checking the added detail with ASM


SQL> SELECT a.INSTANCE_NAME, b.NAME, b.IP_ADDRESS
     from gv$cluster_interconnects b, gv$instance a
     where a.inst_id = b.inst_id;

INSTANCE_NAME    NAME            IP_ADDRESS
---------------- --------------- ----------------
+ASM1            eth1:1          169.254.74.145
+ASM1            eth3:1          169.254.208.10
+ASM2            eth1:1          169.254.85.212
+ASM2            eth3:1          169.254.232.77

SQL> SELECT a.INSTANCE_NAME, b.NAME, b.IP_ADDRESS
     from gv$cluster_interconnects b, gv$instance a
     where a.inst_id = b.inst_id;

INSTANCE_NAME    NAME            IP_ADDRESS
---------------- --------------- ----------------
ORCL2            eth3:1          169.254.232.77
ORCL2            eth1:1          169.254.85.212
ORCL1            eth3:1          169.254.208.10
ORCL1            eth1:1          169.254.74.145

8. Check the ASM alert logs to confirm new interconnect usage

ASM:

Private Interface 'eth1:1' configured from GPnP for use as a private
interconnect.
[name='eth1:1', type=1, ip=169.254.74.145, mac=08-00-27-41-15-1e,
net=169.254.0.0/17,
mask=255.255.128.0, use=haip:cluster_interconnect/62]
Private Interface 'eth3:1' configured from GPnP for use as a private
interconnect.
[name='eth3:1', type=1, ip=169.254.208.10, mac=08-00-27-6b-69-56,
net=169.254.128.0/17, mask=255.255.128.0,
use=haip:cluster_interconnect/62]
Cluster communication is configured to use the following
interface(s) for this instance
169.254.74.145
169.254.208.10
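
As a final quick check on each node, the HAIP-plumbed subnets can be listed directly; with two interconnect NICs there should be one 169.254.x.x entry per interface (a minimal sketch reusing the commands already shown):

# List the link-local subnets HAIP has brought up on this node:
./oifcfg iflist -p -n | grep 169.254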