CFS allows the same file system to be mounted simultaneously on multiple nodes in the cluster. CFS uses a master/slave architecture: any node can initiate an operation to create, delete, or resize data, but the master node carries out the actual operation. CFS caches metadata in memory, typically in the buffer cache or the vnode cache, and a distributed locking mechanism called GLM (Group Lock Manager) keeps metadata and caches coherent across the nodes.
3. Make sure you have a Veritas CFS license installed on all nodes.
4. Make sure the vxfencing driver is active on all nodes, even if it is in disabled mode (see the checks sketched below).
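To verify both prerequisites, something like the following should work (a sketch; the exact vxlicrep feature names vary between releases):

# vxlicrep | grep -i cfs
# vxfenadm -d
# gabconfig -a

vxlicrep prints the installed license report; vxfenadm -d displays the I/O fencing mode (it also reports disabled mode); gabconfig -a shows GAB port memberships, where port b indicates the fencing driver is loaded.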
Here are some ways to check the status of your cluster. In the examples below, CVM/CFS are not yet configured.
# cfscluster status
  NODE         CLUSTER MANAGER STATE        CVM STATE
serverA        running                      not-running
serverB        running                      not-running
serverC        running                      not-running
serverD        running                      not-running
# vxdctl -c mode
mode: enabled: cluster inactive
# /etc/vx/bin/vxclustadm nidmap
Out of cluster: No mapping information available
# /etc/vx/bin/vxclustadm -v nodestate
state: out of cluster
# hastatus -sum
-- SYSTEM STATE
-- System        State          Frozen

A  serverA       RUNNING        0
A  serverB       RUNNING        0
A  serverC       RUNNING        0
A  serverD       RUNNING        0
During configuration, Veritas picks up the information already defined in your cluster configuration and activates CVM on all the nodes.
# cfscluster config

    Cluster Name : MyCluster
    Nodes        : serverA serverB serverC serverD
    Transport    : gab
Now let's check the status of the cluster again; notice that there is now a new service group, cvm. CVM must be online before any clustered filesystem can be brought up on the nodes.
# cfscluster status
  Node      : serverA
  CVM state : running

  Node      : serverB
  CVM state : running

  Node      : serverC
  CVM state : running

  Node      : serverD
  CVM state : running
# vxdctl -c mode
mode: enabled: cluster active - MASTER
master: serverA
# /etc/vx/bin/vxclustadm nidmap
Name        State
serverA     Joined: Master
serverB     Joined: Slave
serverC     Joined: Slave
serverD     Joined: Slave
# /etc/vx/bin/vxclustadm -v nodestate
state: cluster member
nodeId=0
masterId=1
neighborId=1
members=0xf
joiners=0x0
leavers=0x0
reconfig_seqnum=0xf0a810
vxfen=off
# hastatus -sum
-- SYSTEM STATE
-- System        State          Frozen

A  serverA       RUNNING        0
A  serverB       RUNNING        0
A  serverC       RUNNING        0
A  serverD       RUNNING        0

-- GROUP STATE
-- Group    System     Probed    AutoDisabled    State

B  cvm      serverA    Y         N               ONLINE
B  cvm      serverB    Y         N               ONLINE
B  cvm      serverC    Y         N               ONLINE
B  cvm      serverD    Y         N               ONLINE
This procedure creates a shared disk group for use in a cluster environment. Disks must be placed in disk groups before the Volume Manager can use them. When you place a disk under Volume Manager control, the disk is initialized; initialization destroys any existing data on the disk. Before you begin, make sure every disk you add to the shared disk group is directly attached to all the cluster nodes.

First, initialize the disks you want to use (you may optionally specify the disk format), then create a shared disk group from the disks you just initialized, as sketched below.
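A minimal sketch of both steps, assuming two example disks c1t1d0 and c1t2d0 and the disk group name mysharedg used through the rest of this section; note that shared disk groups must be created from the CVM master node:

# /etc/vx/bin/vxdisksetup -i c1t1d0 format=cdsdisk
# /etc/vx/bin/vxdisksetup -i c1t2d0 format=cdsdisk
# vxdg -s init mysharedg mydisk01=c1t1d0 mydisk02=c1t2d0

vxdisksetup -i initializes a disk (format= is the optional disk format), and the -s flag to vxdg init makes the new disk group shared.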
Now let's add the new disk group to our cluster configuration, giving every node in the cluster the shared-write (sw) activation mode.
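A sketch of that step with cfsdgadm, assuming the disk group name mysharedg from the previous step; all=sw requests the shared-write activation mode on every node, and cfsdgadm display prints the result shown below:

# cfsdgadm add mysharedg all=sw
# cfsdgadm display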
NODE NAME      DISK GROUP     ACTIVATION MODE
serverA        mysharedg      sw
serverB        mysharedg      sw
serverC        mysharedg      sw
serverD        mysharedg      sw
We can now create volumes and filesystems within the shared disk group, then add those volumes/filesystems to the cluster configuration so they can be mounted on any or all nodes (a sketch follows below). Mount points are created automatically.
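A sketch of those steps; the 10g sizes are placeholders, the volume, disk group, and mount point names match the configuration shown below, and the all=rw node/options argument to cfsmntadm may vary by release. mkfs takes -F vxfs on Solaris/HP-UX and -t vxfs on Linux:

# vxassist -g mysharedg make mysharevol1 10g
# vxassist -g mysharedg make mysharevol2 10g
# mkfs -F vxfs /dev/vx/rdsk/mysharedg/mysharevol1
# mkfs -F vxfs /dev/vx/rdsk/mysharedg/mysharevol2
# cfsmntadm add mysharedg mysharevol1 /mountpoint1 all=rw
# cfsmntadm add mysharedg mysharevol2 /mountpoint2 all=rw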
MOUNT POINT     TYPE       SHARED VOLUME    DISK GROUP    STATUS         MOUNT OPTIONS
/mountpoint1    Regular    mysharevol1      mysharedg     NOT MOUNTED
/mountpoint2    Regular    mysharevol2      mysharedg     NOT MOUNTED
That's it. Check your cluster configuration and try to ONLINE the filesystems on your nodes (a sketch follows after the status output below).
# hastatus -sum
-- SYSTEM STATE
-- System        State          Frozen

A  serverA       RUNNING        0
A  serverB       RUNNING        0
A  serverC       RUNNING        0
A  serverD       RUNNING        0

-- GROUP STATE
-- Group                         System     Probed    AutoDisabled    State

B  cvm                           serverA    Y         N               ONLINE
B  cvm                           serverB    Y         N               ONLINE
B  cvm                           serverC    Y         N               ONLINE
B  cvm                           serverD    Y         N               ONLINE
B  vrts_vea_cfs_int_cfsmount1    serverA    Y         N               OFFLINE
B  vrts_vea_cfs_int_cfsmount1    serverB    Y         N               OFFLINE
B  vrts_vea_cfs_int_cfsmount1    serverC    Y         N               OFFLINE
B  vrts_vea_cfs_int_cfsmount1    serverD    Y         N               OFFLINE
B  vrts_vea_cfs_int_cfsmount2    serverA    Y         N               OFFLINE
B  vrts_vea_cfs_int_cfsmount2    serverB    Y         N               OFFLINE
B  vrts_vea_cfs_int_cfsmount2    serverC    Y         N               OFFLINE
B  vrts_vea_cfs_int_cfsmount2    serverD    Y         N               OFFLINE
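To bring the mounts online, either of the following should work (a sketch, using the group and mount point names from the output above):

# cfsmount /mountpoint1
# cfsmount /mountpoint2

cfsmount mounts the clustered filesystem on every configured node; alternatively, bring the service group online one node at a time through VCS:

# hagrp -online vrts_vea_cfs_int_cfsmount1 -sys serverA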
Details:

The following is the procedure to create a volume and file system and put them under VERITAS Cluster Server (VCS) control. The example shows how to create a raid-5 volume with a VxFS file system and bring it under VCS control.
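The volume and file system creation itself is not spelled out below; a sketch of that step, assuming the disk group datadg from the main.cf fragment and a placeholder 10g size:

# vxassist -g datadg make vol01 10g layout=raid5
# mkfs -F vxfs /dev/vx/rdsk/datadg/vol01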
# mkdir /vol01
# haconf -makerw
# hagrp -add newgroup
# hagrp -modify newgroup SystemList <sysa> 0 <sysb> 1
# hagrp -modify newgroup AutoStartList <sysa>
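From here the resources can also be added online with hares, instead of editing main.cf; a sketch mirroring the resource names and attributes in the main.cf fragment below (the leading dash in FsckOpt is escaped with %):

# hares -add data_dg DiskGroup newgroup
# hares -modify data_dg DiskGroup datadg
# hares -add vol01_mnt Mount newgroup
# hares -modify vol01_mnt MountPoint "/vol01"
# hares -modify vol01_mnt BlockDevice "/dev/vx/dsk/datadg/vol01"
# hares -modify vol01_mnt FSType vxfs
# hares -modify vol01_mnt FsckOpt %-y
# hares -link vol01_mnt data_dg
# haconf -dump -makero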
Otherwise, stop VCS and add the resources by editing main.cf directly:

# hastop -all
# cd /etc/VRTSvcs/conf/config
# vi main.cf
group newgroup (
    SystemList = { sysA = 0, sysB = 1 }
    AutoStartList = { sysA }
    )

    DiskGroup data_dg (
        DiskGroup = datadg
        )

    Mount vol01_mnt (
        MountPoint = "/vol01"
        BlockDevice = "/dev/vx/dsk/datadg/vol01"
        FSType = vxfs
        FsckOpt = "-y"
        )

    vol01_mnt requires data_dg
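After saving main.cf, it's standard practice to verify the configuration and restart VCS; a sketch:

# hacf -verify /etc/VRTSvcs/conf/config
# hastart

hacf -verify checks main.cf for syntax errors; hastart is then run on each node to restart the cluster with the new configuration.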
------------------------------------------------------------------------------------
# umount /backup/pdpd415
# haconf -makerw