
VCS Heartbeats must be on separate VLANs

Jan 7, 2009 Solaris, UNIX, vcs

It's not entirely clear from the documentation, but Veritas Cluster heartbeat links need to be on separate VLANs. The docs mention the requirement for different switches, but say nothing about VLANs. Do not use one big VLAN for all your private heartbeat links; you need two. Different clusters can share these two VLANs, but the two heartbeat connections within a single cluster need to be isolated from each other, either in hardware or in VLANs. If you do put them on the same VLAN, or cross your links so they can see each other, you'll get something like:

Dec 11 16:39:20 server llt: [ID 525299 kern.notice] LLT WARNING V-14-1-10497 crossed links? link 0 and link 1 of node 0 on the same network
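The two links in question are defined in /etc/llttab. A minimal sketch, assuming Solaris e1000g interfaces; the node name, cluster ID, and device instances below are made-up examples, so substitute your own:

```
set-node server1
set-cluster 101
link e1000g1 /dev/e1000g:1 - ether - -
link e1000g2 /dev/e1000g:2 - ether - -
```

As long as the first link of every node lands on one VLAN and the second link on the other, LLT will not warn about crossed links.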

Veritas Cluster I/O Fencing, Part 1


Dec 29, 2008 vcs

So you've got your Veritas Cluster up and running. There's one advanced feature that really puts the icing on the cake in my mind: I/O fencing.

Here's a scenario: a partial power outage, an IOS upgrade failure, a spanning tree loop, or anything else causes multiple network switch failures in your data center. Because of this, your cluster nodes can no longer communicate using ANY of their heartbeat links or the public network. Without I/O fencing enabled, each node would believe all the other nodes were down, and would try to perform a failover and run all the defined service groups in the cluster. Multiple nodes trying to read/write the same storage can corrupt your data, and this is your worst-case scenario.

I/O fencing helps here. There is a SCSI-3 feature called SCSI-3 Persistent Reservation, which allows cluster nodes to write keys to shared disks, effectively locking a disk for exclusive use by a node. Ask your storage administrator to enable SCSI-3 Persistent Reservation on each LUN you are assigned. On some arrays this is the default behavior, but others require you to turn on the feature per LUN.

All the cluster nodes must be assigned three small coordinator disks, which serve as a locking mechanism for the shared storage. Just three shared disks per cluster, and all nodes must have access to them. In the scenario above, when the heartbeats go down and nodes are thought to be offline, each surviving node races for control of the coordinator disks, ejecting the other nodes' keys and writing its own key to the disks, locking the other nodes out. If there is more than one surviving node in the cluster, the loser of the race will actually panic and reboot. That's not a typo: the node will kernel panic and reboot. This is the only sure way to guarantee the node will not proceed and potentially corrupt data on the shared storage.

Consult your VCS documentation for the setup steps to enable I/O fencing. I will be posting a part 2 as well, with my abbreviated version of getting it up and running.

Veritas Cluster I/O Fencing, Part 2


To follow up on part 1, this post goes into a bit more detail on how to set up I/O fencing for your cluster. This example uses Solaris 10 and VCS 5.0. Before you continue, you need to have your three coordinator disks and all your data disks visible to all the cluster nodes, and all of these disks should have the SCSI-3 Persistent Reservation bit set. Put your three coordinator disks into their own disk group. I call mine dgvxfencoord.

1. Testing your shared disks

VCS includes a tool to test whether your disks have the SCSI-3 PR bit set, called vxfentsthdw. You only need to run this on one node in the cluster; using ssh/rsh, the tool will perform the tests for all the nodes. You'll need to set up ssh keys (or .rhosts) so root on your first cluster node can log into the other cluster nodes with no password. This is just temporary so our testing will work. If you use rsh, just add the -n flag to vxfentsthdw.

The easiest way to use this tool is to specify disk groups to test. For example:

# /opt/VRTSvcs/vxfen/bin/vxfentsthdw -g dgvxfencoord

This will test every disk in the dgvxfencoord disk group, reading/writing keys to the disks and locking them for exclusive use by each member of the cluster. You should see PASSED after each test. While testing your data disks, you may want to use the -r flag for non-destructive testing, if you already have data on your disks:

# /opt/VRTSvcs/vxfen/bin/vxfentsthdw -r -g datadg

Once your tests indicate that SCSI-3 Persistent Reservations are working, you're ready to move on. The disk group holding the coordinator disks never needs to stay imported, since there are no file systems or other data on it:

vxdg deport dgvxfencoord
vxdg -t import dgvxfencoord    (the -t flag turns off automatic importing when the system starts)
vxdg -g dgvxfencoord set coordinator=on
vxdg deport dgvxfencoord

2. Perform these steps on each node of the cluster to set up I/O fencing

I'm a big fan of copy and paste from online documentation, so here you go.
This will tell the vx fencing kernel driver to use the dgvxfencoord disk group for the coordinator disks, tell it to use SCSI-3 with dynamic multipathing, and then restart the vx fencing service:

echo dgvxfencoord > /etc/vxfendg
cp /etc/vxfen.d/vxfenmode_scsi3_dmp /etc/vxfenmode
/etc/init.d/vxfen stop
/etc/init.d/vxfen start

3. VCS configuration

Once you have the vx fencing driver set up, you have to tell your cluster to use it. First, stop your cluster and resources:

haconf -dump -makero
hastop -all

Then hand-edit the main.cf file in /etc/VRTSvcs/conf/config. Insert one line within the cluster definition block. Here's an example:

cluster BIG-CLUSTER4 (
UserNames = { admin = cERpdxPmHpzS. }
Administrators = { admin }
ClusterAddress = "192.168.65.144"
UseFence = SCSI3
)

Once you insert your line, it's a good idea to check the syntax of main.cf:

hacf -verify /etc/VRTSvcs/conf/config

Then copy the updated main.cf file from this node to the other nodes using your preferred method: rcp, scp, ftp, whatever. Then run hastart on each node. You can verify the fencing configuration with this:

# /sbin/vxfenadm -d

I/O Fencing Cluster Information:
================================
Fencing Protocol Version: 201
Fencing Mode: SCSI3
Fencing SCSI3 Disk Policy: dmp
Cluster Members:
* 0 (server1)
1 (server2)
RFSM State Information:
node 0 in state 8 (running)
node 1 in state 8 (running)

4. Testing your fencing setup

You'll want to test this before going into production. I have used a few methods to test it; these are the easiest.

1. If you have physical access to your server: unplug the two heartbeat links and the public network link. Fencing should kick in and the nodes will all race for the coordinator disks. The winner will take control, and the other cluster nodes will panic and reboot. Have a console connection on the nodes to verify.

2. If you have switch access, or access to someone who does: an easy thing to do is to disable the switch ports corresponding to the heartbeats and the public network link. Almost the same as #1.

3. If you want to perform the test by yourself and have no physical access: set up scripts to change the speed/duplex on the NICs running your heartbeats and public network. Do this from the serial console so you don't lose access, obviously (I've done similar things quite a few times). With your switch still at auto/auto and your NIC forced to 10-half with no auto-negotiation, communication will be impossible, and you've effectively severed your links.

Happy clustering!

Tagged coordinator disks, fencing, high availability, scsi-3, vcs

Creating a firedrill service group for Veritas Cluster


Feb 14, 2008 vcs

fdsetup-srdf2

We use SRDF replication here, and the fd-srdf script provided with the SRDF agent only copies over a small percentage of the resources in our service groups (no zones, no IPs, no Oracle resources, etc.). I modified it to grab all of these things and copy them over. The special thing about this script is that it renames all the disk group and mount resources (and their mount points) to diskgroup_fd-style names for the firedrill service group. That helps if you have more than 50 mount points like me.

Quick checklist for Veritas Cluster Server


Posted by savas on 9 April, 2009. No comments yet. This item was filed under [ Tips, Veritas Cluster Tips ]

1. Install the VCS software on the servers. You can install each node separately, or, if you configure ssh/rsh, you can install from a single server to every node.
2. Check that you have installed a valid license. Run vxlicense -p.
3. Decide on the primary server. The other servers will act as standby servers. Keep in mind that VCS will not run the services on every node at the same time; it will simply swap to a standby server if the primary fails.
4. Check the log files for any errors. The default location for VCS logs is /var/VRTSvcs/log.
5. Check the cluster configuration file (/etc/VRTSvcs/conf/config/main.cf). Check this file whenever a new server is added to the cluster.
6. Perform a failover test. This can be done simply by shutting down the primary server. After the server shuts down, verify that the backup system in the cluster comes up with minimal downtime.
7. Boot up the primary server; after it is up, shut down the secondary server and ensure that the service is taken over by the primary with minimal downtime.
8. Boot up the secondary system; after it is up, check logs on every node to make sure
