
=~=~=~=~=~=~=~=~=~=~=~= PuTTY log 2012.07.05 11:39:12 =~=~=~=~=~=~=~=~=~=~=~=
login as: root
Password:
Last login: Tue Jul 3 16:52:44 from 192.168.1.61
Oracle Corporation SunOS 5.10 Generic Patch January 2005
#
#
# bash
bash-3.2#
bash-3.2# export PATH=$PATH:/usr/cluster/bin/
bash-3.2#
bash-3.2#
bash-3.2# scconf -pvv | more
Cluster name: clustergrp
Cluster ID: 0x4FF2D43C
Cluster install mode: disabled
Cluster private net: 172.16.0.0
Cluster private netmask: 255.255.248.0
Cluster maximum nodes: 64
Cluster maximum private networks: 10
Cluster new node authentication: unix
Cluster authorized-node list: <. - Exclude all nodes>
Cluster transport heart beat timeout: 10000
Cluster transport heart beat quantum: 1000
Round Robin Load Balancing UDP session timeout: 480
Cluster nodes: cluster1 cluster2
Cluster node name: cluster1
(cluster1) Node ID: 1
(cluster1) Node enabled: yes
(cluster1) Node private hostname: clusternode1-priv
(cluster1) Node quorum vote count: 1
(cluster1) Node reservation key: 0x4FF2D43C00000001
(cluster1) Node zones: <NULL>
(cluster1) CPU shares for global zone: 1
(cluster1) Minimum CPU requested for global zone: 1
(cluster1) Node transport adapters: e1000g2 e1000g3
(cluster1) Node transport adapter: e1000g2
(cluster1:e1000g2) Adapter enabled: yes
(cluster1:e1000g2) Adapter transport type: dlpi
(cluster1:e1000g2) Adapter property: device_name=e1000g
(cluster1:e1000g2) Adapter property: device_instance=2
(cluster1:e1000g2) Adapter property: lazy_free=1
(cluster1:e1000g2) Adapter property: dlpi_heartbeat_timeout=10000
(cluster1:e1000g2) Adapter property: dlpi_heartbeat_quantum=1000
(cluster1:e1000g2) Adapter property: nw_bandwidth=80
(cluster1:e1000g2) Adapter property: bandwidth=70
(cluster1:e1000g2) Adapter property: ip_address=172.16.0.129
(cluster1:e1000g2) Adapter property: netmask=255.255.255.128
(cluster1:e1000g2) Adapter port names: 0
(cluster1:e1000g2) Adapter port: 0
(cluster1:e1000g2@0) Port enabled: yes
(cluster1) Node transport adapter: e1000g3
(cluster1:e1000g3) Adapter enabled: yes
(cluster1:e1000g3) Adapter transport type: dlpi
(cluster1:e1000g3) Adapter property: device_name=e1000g
(cluster1:e1000g3) Adapter property: device_instance=3
(cluster1:e1000g3) Adapter property: lazy_free=1
(cluster1:e1000g3) Adapter property: dlpi_heartbeat_timeout=10000
(cluster1:e1000g3) Adapter property: dlpi_heartbeat_quantum=1000
(cluster1:e1000g3) Adapter property: nw_bandwidth=80
(cluster1:e1000g3) Adapter property: bandwidth=70
(cluster1:e1000g3) Adapter property: ip_address=172.16.1.1
(cluster1:e1000g3) Adapter property: netmask=255.255.255.128
(cluster1:e1000g3) Adapter port names: 0
(cluster1:e1000g3) Adapter port: 0
(cluster1:e1000g3@0) Port enabled: yes
Cluster node name: cluster2
(cluster2) Node ID: 2
(cluster2) Node enabled: yes
(cluster2) Node private hostname: clusternode2-priv
(cluster2) Node quorum vote count: 1
(cluster2) Node reservation key: 0x4FF2D43C00000002
(cluster2) Node zones: <NULL>
(cluster2) CPU shares for global zone: 1
(cluster2) Minimum CPU requested for global zone: 1
(cluster2) Node transport adapters: e1000g2 e1000g3
(cluster2) Node transport adapter: e1000g2
(cluster2:e1000g2) Adapter enabled: yes
(cluster2:e1000g2) Adapter transport type: dlpi
(cluster2:e1000g2) Adapter property: device_name=e1000g
(cluster2:e1000g2) Adapter property: device_instance=2
(cluster2:e1000g2) Adapter property: lazy_free=1
(cluster2:e1000g2) Adapter property: dlpi_heartbeat_timeout=10000
(cluster2:e1000g2) Adapter property: dlpi_heartbeat_quantum=1000
(cluster2:e1000g2) Adapter property: nw_bandwidth=80
(cluster2:e1000g2) Adapter property: bandwidth=70
(cluster2:e1000g2) Adapter property: ip_address=172.16.0.130
(cluster2:e1000g2) Adapter property: netmask=255.255.255.128
(cluster2:e1000g2) Adapter port names: 0
(cluster2:e1000g2) Adapter port: 0
(cluster2:e1000g2@0) Port enabled: yes
(cluster2) Node transport adapter: e1000g3
(cluster2:e1000g3) Adapter enabled: yes
(cluster2:e1000g3) Adapter transport type: dlpi
(cluster2:e1000g3) Adapter property: device_name=e1000g
(cluster2:e1000g3) Adapter property: device_instance=3
(cluster2:e1000g3) Adapter property: lazy_free=1
(cluster2:e1000g3) Adapter property: dlpi_heartbeat_timeout=10000
(cluster2:e1000g3) Adapter property: dlpi_heartbeat_quantum=1000
(cluster2:e1000g3) Adapter property: nw_bandwidth=80
(cluster2:e1000g3) Adapter property: bandwidth=70
(cluster2:e1000g3) Adapter property: ip_address=172.16.1.2
(cluster2:e1000g3) Adapter property: netmask=255.255.255.128
(cluster2:e1000g3) Adapter port names: 0
(cluster2:e1000g3) Adapter port: 0
(cluster2:e1000g3@0) Port enabled: yes
Cluster transport switches: switch1 switch2
Cluster transport switch: switch1
(switch1) Switch enabled: yes
(switch1) Switch type: switch
(switch1) Switch port names: 1 2
(switch1) Switch port: 1
(switch1@1) Port enabled: yes
(switch1) Switch port: 2
(switch1@2) Port enabled: yes
Cluster transport switch: switch2
(switch2) Switch enabled: yes
(switch2) Switch type: switch
(switch2) Switch port names: 1 2
(switch2) Switch port: 1
(switch2@1) Port enabled: yes
(switch2) Switch port: 2
(switch2@2) Port enabled: yes
Cluster transport cables
Endpoint Endpoint State
-------- -------- -----
Transport cable: cluster1:e1000g2@0 switch1@1 Enabled
Transport cable: cluster1:e1000g3@0 switch2@1 Enabled
Transport cable: cluster2:e1000g2@0 switch1@2 Enabled
Transport cable: cluster2:e1000g3@0 switch2@2 Enabled
Quorum devices: d3
Quorum device name: d3
(d3) Quorum device votes: 1
(d3) Quorum device enabled: yes
(d3) Quorum device name: /dev/did/rdsk/d3s2
(d3) Quorum device type: shared_disk
(d3) Quorum device hosts (enabled): cluster1 cluster2
(d3) Quorum device hosts (disabled):
(d3) Quorum device access mode: scsi2
Device group name: dsk/d5
(dsk/d5) Device group type: Local_Disk
(dsk/d5) Device group failback enabled: no
(dsk/d5) Device group node list: cluster2
(dsk/d5) Device group ordered node list: no
(dsk/d5) Device group desired number of secondaries: 1
(dsk/d5) Device group device names: /dev/did/rdsk/d5s2
Device group name: dsk/d4
(dsk/d4) Device group type: Disk
(dsk/d4) Device group failback enabled: no
(dsk/d4) Device group node list: cluster2
(dsk/d4) Device group ordered node list: no
(dsk/d4) Device group desired number of secondaries: 1
(dsk/d4) Device group device names: /dev/did/rdsk/d4s2
Device group name: dsk/d3
(dsk/d3) Device group type: Disk
(dsk/d3) Device group failback enabled: no
(dsk/d3) Device group node list: cluster1, cluster2
(dsk/d3) Device group ordered node list: no
(dsk/d3) Device group desired number of secondaries: 1
(dsk/d3) Device group device names: /dev/did/rdsk/d3s2
Device group name: dsk/d2
(dsk/d2) Device group type: Local_Disk
(dsk/d2) Device group failback enabled: no
(dsk/d2) Device group node list: cluster1
(dsk/d2) Device group ordered node list: no
(dsk/d2) Device group desired number of secondaries: 1
(dsk/d2) Device group device names: /dev/did/rdsk/d2s2
Device group name: dsk/d1
(dsk/d1) Device group type: Disk
(dsk/d1) Device group failback enabled: no
(dsk/d1) Device group node list: cluster1
(dsk/d1) Device group ordered node list: no
(dsk/d1) Device group desired number of secondaries: 1
(dsk/d1) Device group device names: /dev/did/rdsk/d1s2
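Annotation (not part of the captured session): each transport adapter above carries netmask=255.255.255.128, i.e. a /25 prefix per private interconnect subnet. A quick local arithmetic sketch confirms how many usable addresses that leaves per transport subnet:

```shell
# /25 prefix -> usable hosts = 2^(32-25) - 2 (network and broadcast excluded)
prefix=25
hosts=$(( (1 << (32 - prefix)) - 2 ))
echo "$hosts"   # 126 usable addresses per transport subnet
```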
bash-3.2#
bash-3.2#
bash-3.2# Broadcast Message from root (???) on cluster2 Thu Jul 5 12:01:15...
The cluster clustergrp will be shutdown in 1 minute

Broadcast Message from root (???) on cluster2 Thu Jul 5 12:01:45...
The cluster clustergrp will be shutdown in 30 seconds

=~=~=~=~=~=~=~=~=~=~=~= PuTTY log 2012.07.05 12:12:27 =~=~=~=~=~=~=~=~=~=~=~=
login as: root
Using keyboard-interactive authentication.
Password:
Last login: Thu Jul 5 11:39:23 2012 from 192.168.1.61
Oracle Corporation SunOS 5.10 Generic Patch January 2005
#
#
# bash
bash-3.2#
bash-3.2# export PATH=$PATH:/usr/cluster/bin/
bash-3.2# echo PATH
PATH
bash-3.2#
bash-3.2#
bash-3.2# ls
authorized_keys2 export net usr
bin global opt var
boot home platform vol
cdrom kernel proc wipro
dev lib sbin
devices lost+found system
etc mnt tmp
bash-3.2# sc
sccheck
scconf
scdidadm
scdpm
scdsbuilder
scdsconfig
scdscreate
sceventmib
scgdevs
scha_cluster_get
scha_control
scha_resource_get
scha_resource_setstatus
scha_resourcegroup_get
scha_resourcetype_get
scinstall
scnas
scnasdir
sconadm
scp
scprivipadm
screensaver-properties-capplet
scrgadm
bash-3.2# scdidadm -L
1 cluster1:/dev/rdsk/c0t0d0 /dev/did/rdsk/d1
2 cluster1:/dev/rdsk/c1t0d0 /dev/did/rdsk/d2
3 cluster1:/dev/rdsk/c2t0d0 /dev/did/rdsk/d3
3 cluster2:/dev/rdsk/c2t0d0 /dev/did/rdsk/d3
4 cluster2:/dev/rdsk/c0t0d0 /dev/did/rdsk/d4
5 cluster2:/dev/rdsk/c1t0d0 /dev/did/rdsk/d5
bash-3.2# echo | format
Searching for disks...done
AVAILABLE DISK SELECTIONS:
0. c1t0d0 <DEFAULT cyl 2607 alt 2 hd 255 sec 63>
/pci@0,0/pci15ad,7a0@15/pci15ad,1976@0/sd@0,0
1. c2t0d0 <DEFAULT cyl 1020 alt 2 hd 64 sec 32>
/pci@0,0/pci15ad,7a0@16/pci15ad,1976@0/sd@0,0
2. c3t0d0 <DEFAULT cyl 1020 alt 2 hd 64 sec 32>
/pci@0,0/pci15ad,7a0@18/pci15ad,1976@0/sd@0,0
3. c4t0d0 <DEFAULT cyl 1020 alt 2 hd 64 sec 32>
/pci@0,0/pci15ad,7a0@17/pci15ad,1976@0/sd@0,0
Specify disk (enter its number): Specify disk (enter its number):
bash-3.2#
bash-3.2#
bash-3.2# df -h
Filesystem size used avail capacity Mounted on
/dev/dsk/c1t0d0s0 9.6G 5.6G 3.9G 59% /
/devices 0K 0K 0K 0% /devices
ctfs 0K 0K 0K 0% /system/contract
proc 0K 0K 0K 0% /proc
mnttab 0K 0K 0K 0% /etc/mnttab
swap 3.3G 1.2M 3.3G 1% /etc/svc/volatile
objfs 0K 0K 0K 0% /system/object
sharefs 0K 0K 0K 0% /etc/dfs/sharetab
/usr/lib/libc/libc_hwcap1.so.1
9.6G 5.6G 3.9G 59% /lib/libc.so.1
fd 0K 0K 0K 0% /dev/fd
/dev/dsk/c1t0d0s4 2.9G 134M 2.7G 5% /var
swap 3.3G 76K 3.3G 1% /tmp
swap 3.3G 32K 3.3G 1% /var/run
/dev/dsk/c1t0d0s3 2.9G 91M 2.7G 4% /opt
/dev/did/dsk/d5s6 486M 5.4M 432M 2% /global/.devices/node@2
/dev/did/dsk/d2s6 486M 5.4M 432M 2% /global/.devices/node@1
/vol/dev/dsk/c0t0d0/sol_10_811_x86
2.1G 2.1G 0K 100% /cdrom/sol_10_811_x86
bash-3.2#
bash-3.2#
bash-3.2# scdidadm -L
1 cluster1:/dev/rdsk/c0t0d0 /dev/did/rdsk/d1
2 cluster1:/dev/rdsk/c1t0d0 /dev/did/rdsk/d2
3 cluster1:/dev/rdsk/c2t0d0 /dev/did/rdsk/d3
3 cluster2:/dev/rdsk/c2t0d0 /dev/did/rdsk/d3
4 cluster2:/dev/rdsk/c0t0d0 /dev/did/rdsk/d4
5 cluster2:/dev/rdsk/c1t0d0 /dev/did/rdsk/d5
bash-3.2# echo | format
Searching for disks...done
AVAILABLE DISK SELECTIONS:
0. c1t0d0 <DEFAULT cyl 2607 alt 2 hd 255 sec 63>
/pci@0,0/pci15ad,7a0@15/pci15ad,1976@0/sd@0,0
1. c2t0d0 <DEFAULT cyl 1020 alt 2 hd 64 sec 32>
/pci@0,0/pci15ad,7a0@16/pci15ad,1976@0/sd@0,0
2. c3t0d0 <DEFAULT cyl 1020 alt 2 hd 64 sec 32>
/pci@0,0/pci15ad,7a0@18/pci15ad,1976@0/sd@0,0
3. c4t0d0 <DEFAULT cyl 1020 alt 2 hd 64 sec 32>
/pci@0,0/pci15ad,7a0@17/pci15ad,1976@0/sd@0,0
Specify disk (enter its number): Specify disk (enter its number):
bash-3.2#
bash-3.2#
bash-3.2#
bash-3.2# scdidadm -r
DID subpath "/dev/rdsk/c3t0d0s2" created for instance "6".
DID subpath "/dev/rdsk/c4t0d0s2" created for instance "7".
bash-3.2#
bash-3.2#
bash-3.2# scdidadm -L
1 cluster1:/dev/rdsk/c0t0d0 /dev/did/rdsk/d1
2 cluster1:/dev/rdsk/c1t0d0 /dev/did/rdsk/d2
3 cluster1:/dev/rdsk/c2t0d0 /dev/did/rdsk/d3
3 cluster2:/dev/rdsk/c2t0d0 /dev/did/rdsk/d3
4 cluster2:/dev/rdsk/c0t0d0 /dev/did/rdsk/d4
5 cluster2:/dev/rdsk/c1t0d0 /dev/did/rdsk/d5
6 cluster1:/dev/rdsk/c3t0d0 /dev/did/rdsk/d6
6 cluster2:/dev/rdsk/c3t0d0 /dev/did/rdsk/d6
7 cluster1:/dev/rdsk/c4t0d0 /dev/did/rdsk/d7
7 cluster2:/dev/rdsk/c4t0d0 /dev/did/rdsk/d7
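Annotation: in the `-L` listing above, an instance number repeats once per node path, so DID devices listed under more than one node are the multi-hosted (shared) disks and hence the quorum-device candidates. A small awk sketch over a sample copied from the output above illustrates the idea; this is local text processing, not a cluster command:

```shell
# Sample of `scdidadm -L` output (fields: instance, node:path, DID device)
listing='1 cluster1:/dev/rdsk/c0t0d0 /dev/did/rdsk/d1
2 cluster1:/dev/rdsk/c1t0d0 /dev/did/rdsk/d2
3 cluster1:/dev/rdsk/c2t0d0 /dev/did/rdsk/d3
3 cluster2:/dev/rdsk/c2t0d0 /dev/did/rdsk/d3
4 cluster2:/dev/rdsk/c0t0d0 /dev/did/rdsk/d4
5 cluster2:/dev/rdsk/c1t0d0 /dev/did/rdsk/d5
6 cluster1:/dev/rdsk/c3t0d0 /dev/did/rdsk/d6
6 cluster2:/dev/rdsk/c3t0d0 /dev/did/rdsk/d6
7 cluster1:/dev/rdsk/c4t0d0 /dev/did/rdsk/d7
7 cluster2:/dev/rdsk/c4t0d0 /dev/did/rdsk/d7'

# A DID device that appears under more than one node is shared storage
echo "$listing" | awk '{seen[$3]++} END {for (d in seen) if (seen[d] > 1) print d}' | sort
# prints the d3, d6 and d7 DID paths
```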
bash-3.2#
bash-3.2#
bash-3.2#
bash-3.2#
bash-3.2# scdidadm -Lr
bash-3.2# init 5
updating /platform/i86pc/boot_archive
login as: root
Using keyboard-interactive authentication.
Password:
Last login: Thu Jul 5 12:12:42 2012 from 192.168.1.61
Oracle Corporation SunOS 5.10 Generic Patch January 2005
# bash
bash-3.2#
bash-3.2#
bash-3.2# Jul 5 12:50:06 cluster2 sendmail[835]: [ID 702911 mail.alert] unable to qualify my own domain name (cluster2) -- using short name
bash-3.2#
bash-3.2#
bash-3.2#
bash-3.2#
bash-3.2#
bash-3.2#
bash-3.2#
bash-3.2#
bash-3.2# cd /usr/cluster/bin
bash-3.2# ls
cl_pnmd clsetup sceventmib
claccess clsnmphost scgdevs
cldev clsnmpmib scha_cluster_get
cldevice clsnmpuser scha_control
cldevicegroup clta scha_resource_get
cldg cltelemetryattribute scha_resource_setstatus
clinterconnect cluster scha_resourcegroup_get
clintr clvxvm scha_resourcetype_get
clnas clzc scinstall
clnasdevice clzonecluster scnas
clnode hactl scnasdir
clq haget scprivipadm
clquorum halockrun scrgadm
clreslogicalhostname hasp_check scsetup
clresource hatimerun scshutdown
clresourcegroup pmfadm scsnapshot
clresourcetype sccheck scstat
clressharedaddress scconf scswitch
clrg scdidadm sctelemetry
clrs scdpm scversions
clrslh scdsbuilder scvxinstall
clrssa scdsconfig scwtadm
clrt scdscreate
bash-3.2# ./scdidadm -C
./scdidadm: Unable to remove driver instance "6" - No such device or address.
./scdidadm: Unable to remove driver instance "7" - No such device or address.
Updating shared devices on node 1
Updating shared devices on node 2
bash-3.2# ./scdidadm -l
3 cluster2:/dev/rdsk/c2t0d0 /dev/did/rdsk/d3
4 cluster2:/dev/rdsk/c0t0d0 /dev/did/rdsk/d4
5 cluster2:/dev/rdsk/c1t0d0 /dev/did/rdsk/d5
bash-3.2#
bash-3.2#
bash-3.2#
bash-3.2# format
Searching for disks...done
AVAILABLE DISK SELECTIONS:
0. c1t0d0 <DEFAULT cyl 2607 alt 2 hd 255 sec 63>
/pci@0,0/pci15ad,7a0@15/pci15ad,1976@0/sd@0,0
1. c2t0d0 <DEFAULT cyl 1020 alt 2 hd 64 sec 32>
/pci@0,0/pci15ad,7a0@16/pci15ad,1976@0/sd@0,0
2. c3t0d0 <DEFAULT cyl 1020 alt 2 hd 64 sec 32>
/pci@0,0/pci15ad,7a0@18/pci15ad,1976@0/sd@0,0
3. c4t0d0 <DEFAULT cyl 1021 alt 2 hd 64 sec 32>
/pci@0,0/pci15ad,7a0@17/pci15ad,1976@0/sd@0,0
Specify disk (enter its number): 2
selecting c3t0d0
[disk formatted]
FORMAT MENU:
disk - select a disk
type - select (define) a disk type
partition - select (define) a partition table
current - describe the current disk
format - format and analyze the disk
fdisk - run the fdisk program
repair - repair a defective sector
label - write label to the disk
analyze - surface analysis
defect - defect list management
backup - search for backup labels
verify - read and display labels
save - save new disk/partition definitions
inquiry - show vendor, product and revision
volname - set 8-character volume name
!<cmd> - execute <cmd>, then return
quit
format> p
PARTITION MENU:
0 - change `0' partition
1 - change `1' partition
2 - change `2' partition
3 - change `3' partition
4 - change `4' partition
5 - change `5' partition
6 - change `6' partition
7 - change `7' partition
select - select a predefined table
modify - modify a predefined partition table
name - name the current table
print - display the current table
label - write partition map and label to the disk
!<cmd> - execute <cmd>, then return
quit
partition> p
Current partition table (original):
Total disk cylinders available: 1020 + 2 (reserved cylinders)
Part Tag Flag Cylinders Size Blocks
0 unassigned wm 0 0 (0/0/0) 0
1 unassigned wm 0 0 (0/0/0) 0
2 backup wu 0 - 1019 1020.00MB (1020/0/0) 2088960
3 unassigned wm 0 0 (0/0/0) 0
4 unassigned wm 0 0 (0/0/0) 0
5 unassigned wm 0 0 (0/0/0) 0
6 unassigned wm 0 0 (0/0/0) 0
7 unassigned wm 0 0 (0/0/0) 0
8 boot wu 0 - 0 1.00MB (1/0/0) 2048
9 unassigned wm 0 0 (0/0/0) 0
partition> q
FORMAT MENU:
disk - select a disk
type - select (define) a disk type
partition - select (define) a partition table
current - describe the current disk
format - format and analyze the disk
fdisk - run the fdisk program
repair - repair a defective sector
label - write label to the disk
analyze - surface analysis
defect - defect list management
backup - search for backup labels
verify - read and display labels
save - save new disk/partition definitions
inquiry - show vendor, product and revision
volname - set 8-character volume name
!<cmd> - execute <cmd>, then return
quit
format> q
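Annotation: the partition table above reports the backup slice (slice 2) as 1020.00MB spanning 2088960 blocks. Assuming the traditional 512-byte disk block, the two figures agree, as a quick local check shows:

```shell
blocks=2088960
bytes_per_block=512          # classic Solaris disk block size (assumption)
mb=$(( blocks * bytes_per_block / 1024 / 1024 ))
echo "$mb"   # 1020 -> matches the 1020.00MB shown for slice 2
```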
bash-3.2# format
Searching for disks...done
AVAILABLE DISK SELECTIONS:
0. c1t0d0 <DEFAULT cyl 2607 alt 2 hd 255 sec 63>
/pci@0,0/pci15ad,7a0@15/pci15ad,1976@0/sd@0,0
1. c2t0d0 <DEFAULT cyl 1020 alt 2 hd 64 sec 32>
/pci@0,0/pci15ad,7a0@16/pci15ad,1976@0/sd@0,0
2. c3t0d0 <DEFAULT cyl 1020 alt 2 hd 64 sec 32>
/pci@0,0/pci15ad,7a0@18/pci15ad,1976@0/sd@0,0
3. c4t0d0 <DEFAULT cyl 1020 alt 2 hd 64 sec 32>
/pci@0,0/pci15ad,7a0@17/pci15ad,1976@0/sd@0,0
Specify disk (enter its number): 3
selecting c4t0d0
[disk formatted]
FORMAT MENU:
disk - select a disk
type - select (define) a disk type
partition - select (define) a partition table
current - describe the current disk
format - format and analyze the disk
fdisk - run the fdisk program
repair - repair a defective sector
label - write label to the disk
analyze - surface analysis
defect - defect list management
backup - search for backup labels
verify - read and display labels
save - save new disk/partition definitions
inquiry - show vendor, product and revision
volname - set 8-character volume name
!<cmd> - execute <cmd>, then return
quit
format> p
PARTITION MENU:
0 - change `0' partition
1 - change `1' partition
2 - change `2' partition
3 - change `3' partition
4 - change `4' partition
5 - change `5' partition
6 - change `6' partition
7 - change `7' partition
select - select a predefined table
modify - modify a predefined partition table
name - name the current table
print - display the current table
label - write partition map and label to the disk
!<cmd> - execute <cmd>, then return
quit
partition> p
Current partition table (original):
Total disk cylinders available: 1020 + 2 (reserved cylinders)
Part Tag Flag Cylinders Size Blocks
0 unassigned wm 0 0 (0/0/0) 0
1 unassigned wm 0 0 (0/0/0) 0
2 backup wu 0 - 1019 1020.00MB (1020/0/0) 2088960
3 unassigned wm 0 0 (0/0/0) 0
4 unassigned wm 0 0 (0/0/0) 0
5 unassigned wm 0 0 (0/0/0) 0
6 unassigned wm 0 0 (0/0/0) 0
7 unassigned wm 0 0 (0/0/0) 0
8 boot wu 0 - 0 1.00MB (1/0/0) 2048
9 unassigned wm 0 0 (0/0/0) 0
partition> q
FORMAT MENU:
disk - select a disk
type - select (define) a disk type
partition - select (define) a partition table
current - describe the current disk
format - format and analyze the disk
fdisk - run the fdisk program
repair - repair a defective sector
label - write label to the disk
analyze - surface analysis
defect - defect list management
backup - search for backup labels
verify - read and display labels
save - save new disk/partition definitions
inquiry - show vendor, product and revision
volname - set 8-character volume name
!<cmd> - execute <cmd>, then return
quit
format> q
bash-3.2#
bash-3.2#
bash-3.2#
bash-3.2# export PATH=$PATH:/usr/cluster/bin/
bash-3.2#
bash-3.2#
bash-3.2#
bash-3.2# export PATH=$PATH:/usr/cluster/bin/
bash-3.2# scdidadm -l
3 cluster2:/dev/rdsk/c2t0d0 /dev/did/rdsk/d3
4 cluster2:/dev/rdsk/c0t0d0 /dev/did/rdsk/d4
5 cluster2:/dev/rdsk/c1t0d0 /dev/did/rdsk/d5
bash-3.2# scdidadm -l
3 cluster2:/dev/rdsk/c2t0d0 /dev/did/rdsk/d3
4 cluster2:/dev/rdsk/c0t0d0 /dev/did/rdsk/d4
5 cluster2:/dev/rdsk/c1t0d0 /dev/did/rdsk/d5
bash-3.2#
bash-3.2#
bash-3.2#
bash-3.2# devfsadm -Cv
devfsadm[2292]: verbose: symlink /dev/cfg/c4 -> ../../devices/pci@0,0/pci15ad,7a0@17/pci15ad,1976@0:scsi
devfsadm[2292]: verbose: symlink /dev/cfg/c3 -> ../../devices/pci@0,0/pci15ad,7a0@18/pci15ad,1976@0:scsi
bash-3.2#
bash-3.2#
bash-3.2# devfsadm -Cv
bash-3.2# scdidadm -l
3 cluster2:/dev/rdsk/c2t0d0 /dev/did/rdsk/d3
4 cluster2:/dev/rdsk/c0t0d0 /dev/did/rdsk/d4
5 cluster2:/dev/rdsk/c1t0d0 /dev/did/rdsk/d5
bash-3.2#
bash-3.2#
bash-3.2# scdidadm -lr
DID subpath "/dev/rdsk/c3t0d0s2" created for instance "6".
DID subpath "/dev/rdsk/c4t0d0s2" created for instance "7".
bash-3.2#
bash-3.2#
bash-3.2#
bash-3.2# scdidadm -rl
3 cluster2:/dev/rdsk/c2t0d0 /dev/did/rdsk/d3
4 cluster2:/dev/rdsk/c0t0d0 /dev/did/rdsk/d4
5 cluster2:/dev/rdsk/c1t0d0 /dev/did/rdsk/d5
6 cluster2:/dev/rdsk/c3t0d0 /dev/did/rdsk/d6
7 cluster2:/dev/rdsk/c4t0d0 /dev/did/rdsk/d7
bash-3.2#
bash-3.2#
bash-3.2# scdidadm -lL
1 cluster1:/dev/rdsk/c0t0d0 /dev/did/rdsk/d1
2 cluster1:/dev/rdsk/c1t0d0 /dev/did/rdsk/d2
3 cluster1:/dev/rdsk/c2t0d0 /dev/did/rdsk/d3
3 cluster2:/dev/rdsk/c2t0d0 /dev/did/rdsk/d3
4 cluster2:/dev/rdsk/c0t0d0 /dev/did/rdsk/d4
5 cluster2:/dev/rdsk/c1t0d0 /dev/did/rdsk/d5
6 cluster1:/dev/rdsk/c3t0d0 /dev/did/rdsk/d6
6 cluster2:/dev/rdsk/c3t0d0 /dev/did/rdsk/d6
7 cluster1:/dev/rdsk/c4t0d0 /dev/did/rdsk/d7
7 cluster2:/dev/rdsk/c4t0d0 /dev/did/rdsk/d7
bash-3.2#
bash-3.2#
bash-3.2# clsetup
*** Main Menu ***
Please select from one of the following options:
1) Quorum
2) Resource groups
3) Data Services
4) Cluster interconnect
5) Device groups and volumes
6) Private hostnames
7) New nodes
8) Other cluster tasks
?) Help with menu options
q) Quit
Option: 1
*** Quorum Menu ***
Please select from one of the following options:
1) Add a quorum device
2) Remove a quorum device
?) Help
q) Return to the Main Menu
Option: 1
>>> Add a Quorum Device <<<
This option is used to add a quorum device to the cluster
configuration. Quorum devices are necessary to protect the cluster
from split brain and amnesia situations. Each quorum device must be
connected to at least two nodes. You can use a device containing user
data.
Adding a quorum device automatically configures node-to-device paths
for the nodes attached to the device. Later, if you add more nodes to
the cluster, you might need to update these paths by removing then
adding back the quorum device. For more information on supported
quorum device topologies, see the Sun Cluster documentation.
Is it okay to continue (yes/no) [yes]?
What is the type of device you want to use?
1) Directly attached shared disk
2) Network Attached Storage (NAS) from Network Appliance
3) Quorum Server
q) Return to the quorum menu
Option: 1
>>> Add a Shared Disk Quorum Device <<<
If you are using a dual-ported disk, by default, Sun Cluster uses
SCSI-2. If you are using disks that are connected to more than two
nodes, or if you manually override the protocol from SCSI-2 to SCSI-3,
by default, Sun Cluster uses SCSI-3.
If you turn off SCSI fencing for disks, Sun Cluster uses software
quorum, which is Sun Cluster software that emulates a form of SCSI
Persistent Group Reservations (PGR).
Warning: If you are using disks that do not support SCSI, such as
Serial Advanced Technology Attachment (SATA) disks, turn off SCSI
fencing.
For more information about supported quorum device topologies, see the
Sun Cluster documentation.
Is it okay to continue (yes/no) [yes]?
Which global device do you want to use (d<N>)? d6
Is it okay to proceed with the update (yes/no) [yes]?
/usr/cluster/bin/clquorum add d6
clquorum: (C192716) I/O error.
Command failed.

Press Enter to continue: q
*** Quorum Menu ***
Please select from one of the following options:
1) Add a quorum device
2) Remove a quorum device
?) Help
q) Return to the Main Menu
Option: q
*** Main Menu ***
Please select from one of the following options:
1) Quorum
2) Resource groups
3) Data Services
4) Cluster interconnect
5) Device groups and volumes
6) Private hostnames
7) New nodes
8) Other cluster tasks
?) Help with menu options
q) Quit
Option: q
bash-3.2# clsetup scdidadm -Lli
bash-3.2#
bash-3.2#
bash-3.2#
bash-3.2#
bash-3.2# cluster show
=== Cluster ===
Cluster Name: clustergrp
clusterid: 0x4FF2D43C
installmode: disabled
heartbeat_timeout: 10000
heartbeat_quantum: 1000
private_netaddr: 172.16.0.0
private_netmask: 255.255.240.0
max_nodes: 64
max_privatenets: 10
num_zoneclusters: 12
udp_session_timeout: 480
global_fencing: pathcount
Node List: cluster1, cluster2
=== Host Access Control ===
Cluster name: clustergrp
Allowed hosts: None
Authentication Protocol: sys
=== Cluster Nodes ===
Node Name: cluster1
Node ID: 1
Enabled: yes
privatehostname: clusternode1-priv
reboot_on_path_failure: disabled
globalzoneshares: 1
defaultpsetmin: 1
quorum_vote: 1
quorum_defaultvote: 1
quorum_resv_key: 0x4FF2D43C00000001
Transport Adapter List: e1000g2, e1000g3
Node Name: cluster2
Node ID: 2
Enabled: yes
privatehostname: clusternode2-priv
reboot_on_path_failure: disabled
globalzoneshares: 1
defaultpsetmin: 1
quorum_vote: 1
quorum_defaultvote: 1
quorum_resv_key: 0x4FF2D43C00000002
Transport Adapter List: e1000g2, e1000g3
=== Transport Cables ===
Transport Cable: cluster1:e1000g2,switch1@1
Endpoint1: cluster1:e1000g2
Endpoint2: switch1@1
State: Enabled
Transport Cable: cluster1:e1000g3,switch2@1
Endpoint1: cluster1:e1000g3
Endpoint2: switch2@1
State: Enabled
Transport Cable: cluster2:e1000g2,switch1@2
Endpoint1: cluster2:e1000g2
Endpoint2: switch1@2
State: Enabled
Transport Cable: cluster2:e1000g3,switch2@2
Endpoint1: cluster2:e1000g3
Endpoint2: switch2@2
State: Enabled
=== Transport Switches ===
Transport Switch: switch1
State: Enabled
Type: switch
Port Names: 1 2
Port State(1): Enabled
Port State(2): Enabled
Transport Switch: switch2
State: Enabled
Type: switch
Port Names: 1 2
Port State(1): Enabled
Port State(2): Enabled
=== Quorum Devices ===
Quorum Device Name: d3
Enabled: yes
Votes: 1
Global Name: /dev/did/rdsk/d3s2
Type: shared_disk
Access Mode: scsi2
Hosts (enabled): cluster1, cluster2
=== Device Groups ===
=== Registered Resource Types ===
Resource Type: SUNW.LogicalHostname:3
RT_description: Logical Hostname Resource Type
RT_version: 3
API_version: 2
RT_basedir: /usr/cluster/lib/rgm/rt/hafoip
Single_instance: False
Proxy: False
Init_nodes: All potential masters
Installed_nodes: <All>
Failover: True
Pkglist: SUNWscu
RT_system: True
Global_zone: True
Resource Type: SUNW.SharedAddress:2
RT_description: HA Shared Address Resource Type
RT_version: 2
API_version: 2
RT_basedir: /usr/cluster/lib/rgm/rt/hascip
Single_instance: False
Proxy: False
Init_nodes: <Unknown>
Installed_nodes: <All>
Failover: True
Pkglist: SUNWscu
RT_system: True
Global_zone: True
=== Resource Groups and Resources ===
=== DID Device Instances ===
DID Device Name: /dev/did/rdsk/d1
Full Device Path: cluster1:/dev/rdsk/c0t0d0
Replication: none
default_fencing: global
DID Device Name: /dev/did/rdsk/d2
Full Device Path: cluster1:/dev/rdsk/c1t0d0
Replication: none
default_fencing: global
DID Device Name: /dev/did/rdsk/d3
Full Device Path: cluster2:/dev/rdsk/c2t0d0
Full Device Path: cluster1:/dev/rdsk/c2t0d0
Replication: none
default_fencing: global
DID Device Name: /dev/did/rdsk/d4
Full Device Path: cluster2:/dev/rdsk/c0t0d0
Replication: none
default_fencing: global
DID Device Name: /dev/did/rdsk/d5
Full Device Path: cluster2:/dev/rdsk/c1t0d0
Replication: none
default_fencing: global
DID Device Name: /dev/did/rdsk/d6
Full Device Path: cluster2:/dev/rdsk/c3t0d0
Full Device Path: cluster1:/dev/rdsk/c3t0d0
Replication: none
default_fencing: global
DID Device Name: /dev/did/rdsk/d7
Full Device Path: cluster2:/dev/rdsk/c4t0d0
Full Device Path: cluster1:/dev/rdsk/c4t0d0
Replication: none
default_fencing: global
=== NAS Devices ===
=== Zone Clusters ===
bash-3.2# init 6
updating /platform/i86pc/boot_archive
bash-3.2#
=~=~=~=~=~=~=~=~=~=~=~= PuTTY log 2012.07.05 13:09:23 =~=~=~=~=~=~=~=~=~=~=~=
login as: root
Using keyboard-interactive authentication.
Password:
Last login: Thu Jul 5 12:49:50 2012 from 192.168.1.61
Oracle Corporation SunOS 5.10 Generic Patch January 2005
# bash
bash-3.2#
bash-3.2# export PATH=$PATH:/usr/cluster/bin/
bash-3.2#
bash-3.2#
bash-3.2#
bash-3.2#
bash-3.2# scdidadm -l
3 cluster2:/dev/rdsk/c2t0d0 /dev/did/rdsk/d3
4 cluster2:/dev/rdsk/c0t0d0 /dev/did/rdsk/d4
5 cluster2:/dev/rdsk/c1t0d0 /dev/did/rdsk/d5
6 cluster2:/dev/rdsk/c3t0d0 /dev/did/rdsk/d6
7 cluster2:/dev/rdsk/c4t0d0 /dev/did/rdsk/d7
bash-3.2#
bash-3.2# /usr/cluster/lib/sc/scsi -c inkeys -d /dev/did/rdsk/d6
Reservation keys(0):
bash-3.2#
bash-3.2#
bash-3.2#
bash-3.2# /usr/cluster/lib/sc/p
pgre            pmfd_debug      pmmd_adm         postconfigccr   print-show
pmfctl          pmmd            pnm_mod_serverd  print-list      process_cmd_log
bash-3.2# /usr/cluster/lib/sc/pgre -c pgre_inresrv -d /dev/did/rdsk/d6
/usr/cluster/lib/sc/pgre: unrecognized option 'pgre_inresrv'.
Usage:
/usr/cluster/lib/sc/pgre -c [pgre_scrub | pgre_inkeys | pgre_inresv] -d disk_path.
bash-3.2# /usr/cluster/lib/sc/pgre -c pgre_inresv -d /dev/did/rdsk/d6
resv[0]: key=0x4ff2d43c00000001.
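Annotation: the key returned by pgre matches the node-reservation-key convention visible in the earlier `scconf -pvv` output: the 32-bit cluster ID (0x4FF2D43C) followed by the node ID zero-padded to eight hex digits. A local reconstruction of the key for node 1 (pure formatting, not a cluster command):

```shell
cluster_id=4ff2d43c   # from "Cluster ID: 0x4FF2D43C" in scconf -pvv
node_id=1
key=$(printf '0x%s%08x' "$cluster_id" "$node_id")
echo "$key"   # 0x4ff2d43c00000001 -> the key pgre reported
```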
bash-3.2#
bash-3.2#
bash-3.2#
bash-3.2#
bash-3.2#
bash-3.2#
bash-3.2#
login as: root
Using keyboard-interactive authentication.
Password:
Last login: Thu Jul 5 13:11:26 2012 from 192.168.1.61
Oracle Corporation SunOS 5.10 Generic Patch January 2005
#
#
# bash
bash-3.2#
bash-3.2#
bash-3.2# uname -a
SunOS cluster2 5.10 Generic_147441-01 i86pc i386 i86pc
bash-3.2#
bash-3.2#
bash-3.2#
bash-3.2# df -h
Filesystem size used avail capacity Mounted on
/dev/dsk/c1t0d0s0 9.6G 5.6G 3.9G 59% /
/devices 0K 0K 0K 0% /devices
ctfs 0K 0K 0K 0% /system/contract
proc 0K 0K 0K 0% /proc
mnttab 0K 0K 0K 0% /etc/mnttab
swap 3.3G 1.2M 3.3G 1% /etc/svc/volatile
objfs 0K 0K 0K 0% /system/object
sharefs 0K 0K 0K 0% /etc/dfs/sharetab
/usr/lib/libc/libc_hwcap1.so.1
9.6G 5.6G 3.9G 59% /lib/libc.so.1
fd 0K 0K 0K 0% /dev/fd
/dev/dsk/c1t0d0s4 2.9G 135M 2.7G 5% /var
swap 3.3G 76K 3.3G 1% /tmp
swap 3.3G 32K 3.3G 1% /var/run
/dev/dsk/c1t0d0s3 2.9G 91M 2.7G 4% /opt
/dev/did/dsk/d2s6 486M 5.5M 432M 2% /global/.devices/node@1
/dev/did/dsk/d5s6 486M 5.5M 432M 2% /global/.devices/node@2
/vol/dev/dsk/c0t0d0/sol_10_811_x86
2.1G 2.1G 0K 100% /cdrom/sol_10_811_x86
bash-3.2#
bash-3.2#
bash-3.2#
bash-3.2#
bash-3.2#
bash-3.2#
bash-3.2#
bash-3.2#
bash-3.2#
bash-3.2# export PATH=$PATH:/usr/cluster/bin/
bash-3.2#
bash-3.2#
bash-3.2#
bash-3.2#
bash-3.2#
bash-3.2# cd /usr/cluster/bin/
bash-3.2# ls
cl_pnmd         clquorum              clsnmphost            hasp_check    scha_cluster_get         scshutdown
claccess        clreslogicalhostname  clsnmpmib             hatimerun     scha_control             scsnapshot
cldev           clresource            clsnmpuser            pmfadm        scha_resource_get        scstat
cldevice        clresourcegroup       clta                  sccheck       scha_resource_setstatus  scswitch
cldevicegroup   clresourcetype        cltelemetryattribute  scconf        scha_resourcegroup_get   sctelemetry
cldg            clressharedaddress    cluster               scdidadm      scha_resourcetype_get    scversions
clinterconnect  clrg                  clvxvm                scdpm         scinstall                scvxinstall
clintr          clrs                  clzc                  scdsbuilder   scnas                    scwtadm
clnas           clrslh                clzonecluster         scdsconfig    scnasdir
clnasdevice     clrssa                hactl                 scdscreate    scprivipadm
clnode          clrt                  haget                 sceventmib    scrgadm
clq             clsetup               halockrun             scgdevs       scsetup
bash-3.2#
bash-3.2#
bash-3.2# clquorum
clquorum: (C961689) Not enough arguments.
clquorum: (C101856) Usage error.
Usage: clquorum <subcommand> [<options>] [+ | <devicename> ...]
clquorum [<subcommand>] -? | --help
clquorum -V | --version
Manage cluster quorum
SUBCOMMANDS:
add Add a quorum device to the cluster configuration
disable Put quorum devices into maintenance state
enable Take quorum devices out of maintenance state
export Export quorum configuration
list List quorum devices
remove Remove a quorum device from the cluster configuration
reset Reset the quorum configuration
show Show quorum devices and their properties
status Display the status of the cluster quorum
bash-3.2# clquorum list
d3
d6
d7
cluster1
cluster2
bash-3.2#
bash-3.2#
bash-3.2#
bash-3.2#
bash-3.2#
bash-3.2#
bash-3.2#
bash-3.2# clquorum list ?
clquorum: (C952776) Device "?" does not exist.
bash-3.2#
bash-3.2#
bash-3.2# scversions
Upgrade commit is NOT needed. All versions match.
bash-3.2#
bash-3.2#
bash-3.2#
bash-3.2# clresource
clresource: (C961689) Not enough arguments.
clresource: (C101856) Usage error.
Usage: clresource <subcommand> [<options>] [+ | <resource> ...]
clresource [<subcommand>] -? | --help
clresource -V | --version
Manage cluster resources
SUBCOMMANDS:
clear Clear resource error flags
create Create resources
delete Delete resources
disable Disable resources
enable Enable resources
export Export resource configuration
list List resources
list-props List resource properties
monitor Enable monitoring of resources
set Set resource properties
show Show resources and their properties
status Display the status of resources
unmonitor Disable monitoring of resources
bash-3.2# clresource list
bash-3.2#
bash-3.2# ls
cl_pnmd         clquorum              clsnmphost            hasp_check   scha_cluster_get         scshutdown
claccess        clreslogicalhostname  clsnmpmib             hatimerun    scha_control             scsnapshot
cldev           clresource            clsnmpuser            pmfadm       scha_resource_get        scstat
cldevice        clresourcegroup       clta                  sccheck      scha_resource_setstatus  scswitch
cldevicegroup   clresourcetype        cltelemetryattribute  scconf       scha_resourcegroup_get   sctelemetry
cldg            clressharedaddress    cluster               scdidadm     scha_resourcetype_get    scversions
clinterconnect  clrg                  clvxvm                scdpm        scinstall                scvxinstall
clintr          clrs                  clzc                  scdsbuilder  scnas                    scwtadm
clnas           clrslh                clzonecluster         scdsconfig   scnasdir
clnasdevice     clrssa                hactl                 scdscreate   scprivipadm
clnode          clrt                  haget                 sceventmib   scrgadm
clq             clsetup               halockrun             scgdevs      scsetup
bash-3.2#
bash-3.2#
bash-3.2# clnasdevice
clnasdevice: (C961689) Not enough arguments.
clnasdevice: (C101856) Usage error.
Usage: clnasdevice <subcommand> [<options>] [+ | <nasdevice> ...]
clnasdevice [<subcommand>] -? | --help
clnasdevice -V | --version
Manage NAS devices
SUBCOMMANDS:
add Add a NAS device to the cluster configuration
add-dir Add NAS directories to the cluster configuration
export Export the NAS device configuration
list List NAS devices
remove Remove a NAS device from the cluster configuration
remove-dir Remove NAS directories from the cluster configuration
set Set NAS device properties
show Show NAS devices and their properties
bash-3.2# clnasdevice list
bash-3.2#
bash-3.2#
bash-3.2# clq
clq: (C961689) Not enough arguments.
clq: (C101856) Usage error.
Usage: clq <subcommand> [<options>] [+ | <devicename> ...]
clq [<subcommand>] -? | --help
clq -V | --version
Manage cluster quorum
SUBCOMMANDS:
add Add a quorum device to the cluster configuration
disable Put quorum devices into maintenance state
enable Take quorum devices out of maintenance state
export Export quorum configuration
list List quorum devices
remove Remove a quorum device from the cluster configuration
reset Reset the quorum configuration
show Show quorum devices and their properties
status Display the status of the cluster quorum
bash-3.2# clq status
=== Cluster Quorum ===
--- Quorum Votes Summary from latest node reconfiguration ---
Needed Present Possible
------ ------- --------
3 5 5
--- Quorum Votes by Node (current status) ---
Node Name Present Possible Status
--------- ------- -------- ------
cluster1 1 1 Online
cluster2 1 1 Online
--- Quorum Votes by Device (current status) ---
Device Name Present Possible Status
----------- ------- -------- ------
d3 1 1 Online
d6 1 1 Online
d7 1 1 Online
bash-3.2#
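The Needed/Present/Possible summary above is plain majority arithmetic: each node contributes one vote, each quorum disk one vote, and the cluster needs a strict majority of the possible votes to form. A minimal sketch of that math (plain POSIX shell, numbers copied from the table above):

```shell
# Majority quorum: with N total configured votes, floor(N/2) + 1 votes
# are needed for the cluster to form. From the table above:
# 2 node votes + 3 quorum-device votes = 5 possible.
node_votes=2
device_votes=3
possible=$(( node_votes + device_votes ))
needed=$(( possible / 2 + 1 ))
echo "Needed=$needed Possible=$possible"   # prints Needed=3 Possible=5
```

With an odd total of 5 votes this two-node cluster survives the loss of any single node plus one disk, which is why three quorum disks were configured rather than one.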
bash-3.2# clq list
d3
d6
d7
cluster1
cluster2
bash-3.2#
bash-3.2# ls
cl_pnmd         clquorum              clsnmphost            hasp_check   scha_cluster_get         scshutdown
claccess        clreslogicalhostname  clsnmpmib             hatimerun    scha_control             scsnapshot
cldev           clresource            clsnmpuser            pmfadm       scha_resource_get        scstat
cldevice        clresourcegroup       clta                  sccheck      scha_resource_setstatus  scswitch
cldevicegroup   clresourcetype        cltelemetryattribute  scconf       scha_resourcegroup_get   sctelemetry
cldg            clressharedaddress    cluster               scdidadm     scha_resourcetype_get    scversions
clinterconnect  clrg                  clvxvm                scdpm        scinstall                scvxinstall
clintr          clrs                  clzc                  scdsbuilder  scnas                    scwtadm
clnas           clrslh                clzonecluster         scdsconfig   scnasdir
clnasdevice     clrssa                hactl                 scdscreate   scprivipadm
clnode          clrt                  haget                 sceventmib   scrgadm
clq             clsetup               halockrun             scgdevs      scsetup
bash-3.2#
bash-3.2# scdidadm
Usage: scdidadm [disk]
-c check the configuration file against disks
-R perform repair procedure against the disks
-G Set or show the global default fencing status
-F change the fencing protocol for the specified disk(s).
-u upload the path information to the driver
-i initialize the did driver
-r reconfigure did, search for all disks
-C clean up DID mappings to nonexistent devices
-T remote_node -e replication_type configure replicated devices
-t x:y move DID instance x to DID instance y
-t x:y -e replication_type -g replication_group combine DID instance x
with DID instance y
-b x split a combined DID instance x
-U Upgrade from did.conf format to CCR format
-v Print program version
-l list device mappings for this host
-L list device mappings for all hosts
-h Print header for device listing
-o format where format is one of the following:
inst instance path fullpath host name fullname diskid asciidiskid replication detectedfencing defaultfencing
bash-3.2# scdidadm -l
3 cluster2:/dev/rdsk/c2t0d0 /dev/did/rdsk/d3
4 cluster2:/dev/rdsk/c0t0d0 /dev/did/rdsk/d4
5 cluster2:/dev/rdsk/c1t0d0 /dev/did/rdsk/d5
6 cluster2:/dev/rdsk/c3t0d0 /dev/did/rdsk/d6
7 cluster2:/dev/rdsk/c4t0d0 /dev/did/rdsk/d7
bash-3.2#
bash-3.2#
bash-3.2# scdidadm -lL
1 cluster1:/dev/rdsk/c0t0d0 /dev/did/rdsk/d1
2 cluster1:/dev/rdsk/c1t0d0 /dev/did/rdsk/d2
3 cluster1:/dev/rdsk/c2t0d0 /dev/did/rdsk/d3
3 cluster2:/dev/rdsk/c2t0d0 /dev/did/rdsk/d3
4 cluster2:/dev/rdsk/c0t0d0 /dev/did/rdsk/d4
5 cluster2:/dev/rdsk/c1t0d0 /dev/did/rdsk/d5
6 cluster1:/dev/rdsk/c3t0d0 /dev/did/rdsk/d6
6 cluster2:/dev/rdsk/c3t0d0 /dev/did/rdsk/d6
7 cluster1:/dev/rdsk/c4t0d0 /dev/did/rdsk/d7
7 cluster2:/dev/rdsk/c4t0d0 /dev/did/rdsk/d7
bash-3.2#
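The all-hosts `scdidadm -L` view above is also a quick way to spot which DID instances are multi-hosted, and therefore candidates for quorum duty. A small sketch that filters the listing (copied verbatim from the output above) for instances reported by more than one node:

```shell
# Find DID instances visible from more than one node; those are the
# shared disks -- d3, d6, d7 here, the same three used as quorum devices.
shared=$(awk '{seen[$1]++; dev[$1]=$3}
              END {for (i in seen) if (seen[i] > 1) print dev[i]}' <<'EOF' | sort | xargs
1        cluster1:/dev/rdsk/c0t0d0      /dev/did/rdsk/d1
2        cluster1:/dev/rdsk/c1t0d0      /dev/did/rdsk/d2
3        cluster1:/dev/rdsk/c2t0d0      /dev/did/rdsk/d3
3        cluster2:/dev/rdsk/c2t0d0      /dev/did/rdsk/d3
4        cluster2:/dev/rdsk/c0t0d0      /dev/did/rdsk/d4
5        cluster2:/dev/rdsk/c1t0d0      /dev/did/rdsk/d5
6        cluster1:/dev/rdsk/c3t0d0      /dev/did/rdsk/d6
6        cluster2:/dev/rdsk/c3t0d0      /dev/did/rdsk/d6
7        cluster1:/dev/rdsk/c4t0d0      /dev/did/rdsk/d7
7        cluster2:/dev/rdsk/c4t0d0      /dev/did/rdsk/d7
EOF
)
echo "$shared"
```

The single-node instances (d1, d2, d4, d5) are boot and local disks, which is why they appear as `Disk`/`Local_Disk` device groups with a one-node node list in the `scconf -pvv` output.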
bash-3.2# ls
cl_pnmd         clquorum              clsnmphost            hasp_check   scha_cluster_get         scshutdown
claccess        clreslogicalhostname  clsnmpmib             hatimerun    scha_control             scsnapshot
cldev           clresource            clsnmpuser            pmfadm       scha_resource_get        scstat
cldevice        clresourcegroup       clta                  sccheck      scha_resource_setstatus  scswitch
cldevicegroup   clresourcetype        cltelemetryattribute  scconf       scha_resourcegroup_get   sctelemetry
cldg            clressharedaddress    cluster               scdidadm     scha_resourcetype_get    scversions
clinterconnect  clrg                  clvxvm                scdpm        scinstall                scvxinstall
clintr          clrs                  clzc                  scdsbuilder  scnas                    scwtadm
clnas           clrslh                clzonecluster         scdsconfig   scnasdir
clnasdevice     clrssa                hactl                 scdscreate   scprivipadm
clnode          clrt                  haget                 sceventmib   scrgadm
clq             clsetup               halockrun             scgdevs      scsetup
bash-3.2# scdidadm -Lh
Instance Physical Path Pseudo Path
1 cluster1:/dev/rdsk/c0t0d0 /dev/did/rdsk/d1
2 cluster1:/dev/rdsk/c1t0d0 /dev/did/rdsk/d2
3 cluster1:/dev/rdsk/c2t0d0 /dev/did/rdsk/d3
3 cluster2:/dev/rdsk/c2t0d0 /dev/did/rdsk/d3
4 cluster2:/dev/rdsk/c0t0d0 /dev/did/rdsk/d4
5 cluster2:/dev/rdsk/c1t0d0 /dev/did/rdsk/d5
6 cluster1:/dev/rdsk/c3t0d0 /dev/did/rdsk/d6
6 cluster2:/dev/rdsk/c3t0d0 /dev/did/rdsk/d6
7 cluster1:/dev/rdsk/c4t0d0 /dev/did/rdsk/d7
7 cluster2:/dev/rdsk/c4t0d0 /dev/did/rdsk/d7
bash-3.2#
bash-3.2#
bash-3.2#
bash-3.2# scconf -pvv
Cluster name: clustergrp
Cluster ID: 0x4FF2D43C
Cluster install mode: disabled
Cluster private net: 172.16.0.0
Cluster private netmask: 255.255.248.0
Cluster maximum nodes: 64
Cluster maximum private networks: 10
Cluster new node authentication: unix
Cluster authorized-node list: <. - Exclude all nodes>
Cluster transport heart beat timeout: 10000
Cluster transport heart beat quantum: 1000
Round Robin Load Balancing UDP session timeout: 480
Cluster nodes: cluster1 cluster2
Cluster node name: cluster1
(cluster1) Node ID: 1
(cluster1) Node enabled: yes
(cluster1) Node private hostname: clusternode1-priv
(cluster1) Node quorum vote count: 1
(cluster1) Node reservation key: 0x4FF2D43C00000001
(cluster1) Node zones: <NULL>
(cluster1) CPU shares for global zone: 1
(cluster1) Minimum CPU requested for global zone: 1
(cluster1) Node transport adapters: e1000g2 e1000g3
(cluster1) Node transport adapter: e1000g2
(cluster1:e1000g2) Adapter enabled: yes
(cluster1:e1000g2) Adapter transport type: dlpi
(cluster1:e1000g2) Adapter property: device_name=e1000g
(cluster1:e1000g2) Adapter property: device_instance=2
(cluster1:e1000g2) Adapter property: lazy_free=1
(cluster1:e1000g2) Adapter property: dlpi_heartbeat_timeout=10000
(cluster1:e1000g2) Adapter property: dlpi_heartbeat_quantum=1000
(cluster1:e1000g2) Adapter property: nw_bandwidth=80
(cluster1:e1000g2) Adapter property: bandwidth=70
(cluster1:e1000g2) Adapter property: ip_address=172.16.0.129
(cluster1:e1000g2) Adapter property: netmask=255.255.255.128
(cluster1:e1000g2) Adapter port names: 0
(cluster1:e1000g2) Adapter port: 0
(cluster1:e1000g2@0) Port enabled: yes
(cluster1) Node transport adapter: e1000g3
(cluster1:e1000g3) Adapter enabled: yes
(cluster1:e1000g3) Adapter transport type: dlpi
(cluster1:e1000g3) Adapter property: device_name=e1000g
(cluster1:e1000g3) Adapter property: device_instance=3
(cluster1:e1000g3) Adapter property: lazy_free=1
(cluster1:e1000g3) Adapter property: dlpi_heartbeat_timeout=10000
(cluster1:e1000g3) Adapter property: dlpi_heartbeat_quantum=1000
(cluster1:e1000g3) Adapter property: nw_bandwidth=80
(cluster1:e1000g3) Adapter property: bandwidth=70
(cluster1:e1000g3) Adapter property: ip_address=172.16.1.1
(cluster1:e1000g3) Adapter property: netmask=255.255.255.128
(cluster1:e1000g3) Adapter port names: 0
(cluster1:e1000g3) Adapter port: 0
(cluster1:e1000g3@0) Port enabled: yes
Cluster node name: cluster2
(cluster2) Node ID: 2
(cluster2) Node enabled: yes
(cluster2) Node private hostname: clusternode2-priv
(cluster2) Node quorum vote count: 1
(cluster2) Node reservation key: 0x4FF2D43C00000002
(cluster2) Node zones: <NULL>
(cluster2) CPU shares for global zone: 1
(cluster2) Minimum CPU requested for global zone: 1
(cluster2) Node transport adapters: e1000g2 e1000g3
(cluster2) Node transport adapter: e1000g2
(cluster2:e1000g2) Adapter enabled: yes
(cluster2:e1000g2) Adapter transport type: dlpi
(cluster2:e1000g2) Adapter property: device_name=e1000g
(cluster2:e1000g2) Adapter property: device_instance=2
(cluster2:e1000g2) Adapter property: lazy_free=1
(cluster2:e1000g2) Adapter property: dlpi_heartbeat_timeout=10000
(cluster2:e1000g2) Adapter property: dlpi_heartbeat_quantum=1000
(cluster2:e1000g2) Adapter property: nw_bandwidth=80
(cluster2:e1000g2) Adapter property: bandwidth=70
(cluster2:e1000g2) Adapter property: ip_address=172.16.0.130
(cluster2:e1000g2) Adapter property: netmask=255.255.255.128
(cluster2:e1000g2) Adapter port names: 0
(cluster2:e1000g2) Adapter port: 0
(cluster2:e1000g2@0) Port enabled: yes
(cluster2) Node transport adapter: e1000g3
(cluster2:e1000g3) Adapter enabled: yes
(cluster2:e1000g3) Adapter transport type: dlpi
(cluster2:e1000g3) Adapter property: device_name=e1000g
(cluster2:e1000g3) Adapter property: device_instance=3
(cluster2:e1000g3) Adapter property: lazy_free=1
(cluster2:e1000g3) Adapter property: dlpi_heartbeat_timeout=10000
(cluster2:e1000g3) Adapter property: dlpi_heartbeat_quantum=1000
(cluster2:e1000g3) Adapter property: nw_bandwidth=80
(cluster2:e1000g3) Adapter property: bandwidth=70
(cluster2:e1000g3) Adapter property: ip_address=172.16.1.2
(cluster2:e1000g3) Adapter property: netmask=255.255.255.128
(cluster2:e1000g3) Adapter port names: 0
(cluster2:e1000g3) Adapter port: 0
(cluster2:e1000g3@0) Port enabled: yes
Cluster transport switches: switch1 switch2
Cluster transport switch: switch1
(switch1) Switch enabled: yes
(switch1) Switch type: switch
(switch1) Switch port names: 1 2
(switch1) Switch port: 1
(switch1@1) Port enabled: yes
(switch1) Switch port: 2
(switch1@2) Port enabled: yes
Cluster transport switch: switch2
(switch2) Switch enabled: yes
(switch2) Switch type: switch
(switch2) Switch port names: 1 2
(switch2) Switch port: 1
(switch2@1) Port enabled: yes
(switch2) Switch port: 2
(switch2@2) Port enabled: yes
Cluster transport cables
Endpoint Endpoint State
-------- -------- -----
Transport cable: cluster1:e1000g2@0 switch1@1 Enabled
Transport cable: cluster1:e1000g3@0 switch2@1 Enabled
Transport cable: cluster2:e1000g2@0 switch1@2 Enabled
Transport cable: cluster2:e1000g3@0 switch2@2 Enabled
Quorum devices: d3 d6 d7
Quorum device name: d3
(d3) Quorum device votes: 1
(d3) Quorum device enabled: yes
(d3) Quorum device name: /dev/did/rdsk/d3s2
(d3) Quorum device type: shared_disk
(d3) Quorum device hosts (enabled): cluster1 cluster2
(d3) Quorum device hosts (disabled):
(d3) Quorum device access mode: scsi2
Quorum device name: d6
(d6) Quorum device votes: 1
(d6) Quorum device enabled: yes
(d6) Quorum device name: /dev/did/rdsk/d6s2
(d6) Quorum device type: shared_disk
(d6) Quorum device hosts (enabled): cluster1 cluster2
(d6) Quorum device hosts (disabled):
(d6) Quorum device access mode: scsi2
Quorum device name: d7
(d7) Quorum device votes: 1
(d7) Quorum device enabled: yes
(d7) Quorum device name: /dev/did/rdsk/d7s2
(d7) Quorum device type: shared_disk
(d7) Quorum device hosts (enabled): cluster1 cluster2
(d7) Quorum device hosts (disabled):
(d7) Quorum device access mode: scsi2
Device group name: dsk/d7
(dsk/d7) Device group type: Disk
(dsk/d7) Device group failback enabled: no
(dsk/d7) Device group node list: cluster1, cluster2
(dsk/d7) Device group ordered node list: no
(dsk/d7) Device group desired number of secondaries: 1
(dsk/d7) Device group device names: /dev/did/rdsk/d7s2
Device group name: dsk/d6
(dsk/d6) Device group type: Disk
(dsk/d6) Device group failback enabled: no
(dsk/d6) Device group node list: cluster1, cluster2
(dsk/d6) Device group ordered node list: no
(dsk/d6) Device group desired number of secondaries: 1
(dsk/d6) Device group device names: /dev/did/rdsk/d6s2
Device group name: dsk/d5
(dsk/d5) Device group type: Local_Disk
(dsk/d5) Device group failback enabled: no
(dsk/d5) Device group node list: cluster2
(dsk/d5) Device group ordered node list: no
(dsk/d5) Device group desired number of secondaries: 1
(dsk/d5) Device group device names: /dev/did/rdsk/d5s2
Device group name: dsk/d4
(dsk/d4) Device group type: Disk
(dsk/d4) Device group failback enabled: no
(dsk/d4) Device group node list: cluster2
(dsk/d4) Device group ordered node list: no
(dsk/d4) Device group desired number of secondaries: 1
(dsk/d4) Device group device names: /dev/did/rdsk/d4s2
Device group name: dsk/d3
(dsk/d3) Device group type: Disk
(dsk/d3) Device group failback enabled: no
(dsk/d3) Device group node list: cluster1, cluster2
(dsk/d3) Device group ordered node list: no
(dsk/d3) Device group desired number of secondaries: 1
(dsk/d3) Device group device names: /dev/did/rdsk/d3s2
Device group name: dsk/d2
(dsk/d2) Device group type: Local_Disk
(dsk/d2) Device group failback enabled: no
(dsk/d2) Device group node list: cluster1
(dsk/d2) Device group ordered node list: no
(dsk/d2) Device group desired number of secondaries: 1
(dsk/d2) Device group device names: /dev/did/rdsk/d2s2
Device group name: dsk/d1
(dsk/d1) Device group type: Disk
(dsk/d1) Device group failback enabled: no
(dsk/d1) Device group node list: cluster1
(dsk/d1) Device group ordered node list: no
(dsk/d1) Device group desired number of secondaries: 1
(dsk/d1) Device group device names: /dev/did/rdsk/d1s2
bash-3.2#
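The three shared-disk quorum devices in the dump above (d3, d6, d7) are administered with `clquorum`, using the subcommands shown in its usage text earlier (status, disable, enable). A guarded sketch of a typical maintenance cycle; the guard makes it a no-op on hosts without the cluster tooling installed:

```shell
# Typical quorum-device maintenance cycle (sketch, Sun Cluster 3.2).
# Guarded so the snippet does nothing on hosts without /usr/cluster/bin.
CLBIN=/usr/cluster/bin
if [ -x "$CLBIN/clquorum" ]; then
    "$CLBIN/clquorum" status          # current vote summary
    "$CLBIN/clquorum" disable d3      # put d3 into maintenance state
    "$CLBIN/clquorum" enable d3       # bring it back into service
    msg="quorum commands issued"
else
    msg="cluster tools not installed; skipping"
fi
echo "$msg"
```

Disabling a quorum device drops its vote from the Possible count, so on a two-node cluster this should only be done while both nodes are up.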
bash-3.2# ls
cl_pnmd         clquorum              clsnmphost            hasp_check   scha_cluster_get         scshutdown
claccess        clreslogicalhostname  clsnmpmib             hatimerun    scha_control             scsnapshot
cldev           clresource            clsnmpuser            pmfadm       scha_resource_get        scstat
cldevice        clresourcegroup       clta                  sccheck      scha_resource_setstatus  scswitch
cldevicegroup   clresourcetype        cltelemetryattribute  scconf       scha_resourcegroup_get   sctelemetry
cldg            clressharedaddress    cluster               scdidadm     scha_resourcetype_get    scversions
clinterconnect  clrg                  clvxvm                scdpm        scinstall                scvxinstall
clintr          clrs                  clzc                  scdsbuilder  scnas                    scwtadm
clnas           clrslh                clzonecluster         scdsconfig   scnasdir
clnasdevice     clrssa                hactl                 scdscreate   scprivipadm
clnode          clrt                  haget                 sceventmib   scrgadm
clq             clsetup               halockrun             scgdevs      scsetup
bash-3.2#
bash-3.2# scconf -pvv
Cluster name: clustergrp
Cluster ID: 0x4FF2D43C
Cluster install mode: disabled
Cluster private net: 172.16.0.0
Cluster private netmask: 255.255.248.0
Cluster maximum nodes: 64
Cluster maximum private networks: 10
Cluster new node authentication: unix
Cluster authorized-node list: <. - Exclude all nodes>
Cluster transport heart beat timeout: 10000
Cluster transport heart beat quantum: 1000
Round Robin Load Balancing UDP session timeout: 480
Cluster nodes: cluster1 cluster2
Cluster node name: cluster1
(cluster1) Node ID: 1
(cluster1) Node enabled: yes
(cluster1) Node private hostname: clusternode1-priv
(cluster1) Node quorum vote count: 1
(cluster1) Node reservation key: 0x4FF2D43C00000001
(cluster1) Node zones: <NULL>
(cluster1) CPU shares for global zone: 1
(cluster1) Minimum CPU requested for global zone: 1
(cluster1) Node transport adapters: e1000g2 e1000g3
(cluster1) Node transport adapter: e1000g2
(cluster1:e1000g2) Adapter enabled: yes
(cluster1:e1000g2) Adapter transport type: dlpi
(cluster1:e1000g2) Adapter property: device_name=e1000g
(cluster1:e1000g2) Adapter property: device_instance=2
(cluster1:e1000g2) Adapter property: lazy_free=1
(cluster1:e1000g2) Adapter property: dlpi_heartbeat_timeout=10000
(cluster1:e1000g2) Adapter property: dlpi_heartbeat_quantum=1000
(cluster1:e1000g2) Adapter property: nw_bandwidth=80
(cluster1:e1000g2) Adapter property: bandwidth=70
(cluster1:e1000g2) Adapter property: ip_address=172.16.0.129
(cluster1:e1000g2) Adapter property: netmask=255.255.255.128
(cluster1:e1000g2) Adapter port names: 0
(cluster1:e1000g2) Adapter port: 0
(cluster1:e1000g2@0) Port enabled: yes
(cluster1) Node transport adapter: e1000g3
(cluster1:e1000g3) Adapter enabled: yes
(cluster1:e1000g3) Adapter transport type: dlpi
(cluster1:e1000g3) Adapter property: device_name=e1000g
(cluster1:e1000g3) Adapter property: device_instance=3
(cluster1:e1000g3) Adapter property: lazy_free=1
(cluster1:e1000g3) Adapter property: dlpi_heartbeat_timeout=10000
(cluster1:e1000g3) Adapter property: dlpi_heartbeat_quantum=1000
(cluster1:e1000g3) Adapter property: nw_bandwidth=80
(cluster1:e1000g3) Adapter property: bandwidth=70
(cluster1:e1000g3) Adapter property: ip_address=172.16.1.1
(cluster1:e1000g3) Adapter property: netmask=255.255.255.128
(cluster1:e1000g3) Adapter port names: 0
(cluster1:e1000g3) Adapter port: 0
(cluster1:e1000g3@0) Port enabled: yes
Cluster node name: cluster2
(cluster2) Node ID: 2
(cluster2) Node enabled: yes
(cluster2) Node private hostname: clusternode2-priv
(cluster2) Node quorum vote count: 1
(cluster2) Node reservation key: 0x4FF2D43C00000002
(cluster2) Node zones: <NULL>
(cluster2) CPU shares for global zone: 1
(cluster2) Minimum CPU requested for global zone: 1
(cluster2) Node transport adapters: e1000g2 e1000g3
(cluster2) Node transport adapter: e1000g2
(cluster2:e1000g2) Adapter enabled: yes
(cluster2:e1000g2) Adapter transport type: dlpi
(cluster2:e1000g2) Adapter property: device_name=e1000g
(cluster2:e1000g2) Adapter property: device_instance=2
(cluster2:e1000g2) Adapter property: lazy_free=1
(cluster2:e1000g2) Adapter property: dlpi_heartbeat_timeout=10000
(cluster2:e1000g2) Adapter property: dlpi_heartbeat_quantum=1000
(cluster2:e1000g2) Adapter property: nw_bandwidth=80
(cluster2:e1000g2) Adapter property: bandwidth=70
(cluster2:e1000g2) Adapter property: ip_address=172.16.0.130
(cluster2:e1000g2) Adapter property: netmask=255.255.255.128
(cluster2:e1000g2) Adapter port names: 0
(cluster2:e1000g2) Adapter port: 0
(cluster2:e1000g2@0) Port enabled: yes
(cluster2) Node transport adapter: e1000g3
(cluster2:e1000g3) Adapter enabled: yes
(cluster2:e1000g3) Adapter transport type: dlpi
(cluster2:e1000g3) Adapter property: device_name=e1000g
(cluster2:e1000g3) Adapter property: device_instance=3
(cluster2:e1000g3) Adapter property: lazy_free=1
(cluster2:e1000g3) Adapter property: dlpi_heartbeat_timeout=10000
(cluster2:e1000g3) Adapter property: dlpi_heartbeat_quantum=1000
(cluster2:e1000g3) Adapter property: nw_bandwidth=80
(cluster2:e1000g3) Adapter property: bandwidth=70
(cluster2:e1000g3) Adapter property: ip_address=172.16.1.2
(cluster2:e1000g3) Adapter property: netmask=255.255.255.128
(cluster2:e1000g3) Adapter port names: 0
(cluster2:e1000g3) Adapter port: 0
(cluster2:e1000g3@0) Port enabled: yes
Cluster transport switches: switch1 switch2
Cluster transport switch: switch1
(switch1) Switch enabled: yes
(switch1) Switch type: switch
(switch1) Switch port names: 1 2
(switch1) Switch port: 1
(switch1@1) Port enabled: yes
(switch1) Switch port: 2
(switch1@2) Port enabled: yes
Cluster transport switch: switch2
(switch2) Switch enabled: yes
(switch2) Switch type: switch
(switch2) Switch port names: 1 2
(switch2) Switch port: 1
(switch2@1) Port enabled: yes
(switch2) Switch port: 2
(switch2@2) Port enabled: yes
Cluster transport cables
Endpoint Endpoint State
-------- -------- -----
Transport cable: cluster1:e1000g2@0 switch1@1 Enabled
Transport cable: cluster1:e1000g3@0 switch2@1 Enabled
Transport cable: cluster2:e1000g2@0 switch1@2 Enabled
Transport cable: cluster2:e1000g3@0 switch2@2 Enabled
Quorum devices: d3 d6 d7
Quorum device name: d3
(d3) Quorum device votes: 1
(d3) Quorum device enabled: yes
(d3) Quorum device name: /dev/did/rdsk/d3s2
(d3) Quorum device type: shared_disk
(d3) Quorum device hosts (enabled): cluster1 cluster2
(d3) Quorum device hosts (disabled):
(d3) Quorum device access mode: scsi2
Quorum device name: d6
(d6) Quorum device votes: 1
(d6) Quorum device enabled: yes
(d6) Quorum device name: /dev/did/rdsk/d6s2
(d6) Quorum device type: shared_disk
(d6) Quorum device hosts (enabled): cluster1 cluster2
(d6) Quorum device hosts (disabled):
(d6) Quorum device access mode: scsi2
Quorum device name: d7
(d7) Quorum device votes: 1
(d7) Quorum device enabled: yes
(d7) Quorum device name: /dev/did/rdsk/d7s2
(d7) Quorum device type: shared_disk
(d7) Quorum device hosts (enabled): cluster1 cluster2
(d7) Quorum device hosts (disabled):
(d7) Quorum device access mode: scsi2
Device group name: dsk/d7
(dsk/d7) Device group type: Disk
(dsk/d7) Device group failback enabled: no
(dsk/d7) Device group node list: cluster1, cluster2
(dsk/d7) Device group ordered node list: no
(dsk/d7) Device group desired number of secondaries: 1
(dsk/d7) Device group device names: /dev/did/rdsk/d7s2
Device group name: dsk/d6
(dsk/d6) Device group type: Disk
(dsk/d6) Device group failback enabled: no
(dsk/d6) Device group node list: cluster1, cluster2
(dsk/d6) Device group ordered node list: no
(dsk/d6) Device group desired number of secondaries: 1
(dsk/d6) Device group device names: /dev/did/rdsk/d6s2
Device group name: dsk/d5
(dsk/d5) Device group type: Local_Disk
(dsk/d5) Device group failback enabled: no
(dsk/d5) Device group node list: cluster2
(dsk/d5) Device group ordered node list: no
(dsk/d5) Device group desired number of secondaries: 1
(dsk/d5) Device group device names: /dev/did/rdsk/d5s2
Device group name: dsk/d4
(dsk/d4) Device group type: Disk
(dsk/d4) Device group failback enabled: no
(dsk/d4) Device group node list: cluster2
(dsk/d4) Device group ordered node list: no
(dsk/d4) Device group desired number of secondaries: 1
(dsk/d4) Device group device names: /dev/did/rdsk/d4s2
Device group name: dsk/d3
(dsk/d3) Device group type: Disk
(dsk/d3) Device group failback enabled: no
(dsk/d3) Device group node list: cluster1, cluster2
(dsk/d3) Device group ordered node list: no
(dsk/d3) Device group desired number of secondaries: 1
(dsk/d3) Device group device names: /dev/did/rdsk/d3s2
Device group name: dsk/d2
(dsk/d2) Device group type: Local_Disk
(dsk/d2) Device group failback enabled: no
(dsk/d2) Device group node list: cluster1
(dsk/d2) Device group ordered node list: no
(dsk/d2) Device group desired number of secondaries: 1
(dsk/d2) Device group device names: /dev/did/rdsk/d2s2
Device group name: dsk/d1
(dsk/d1) Device group type: Disk
(dsk/d1) Device group failback enabled: no
(dsk/d1) Device group node list: cluster1
(dsk/d1) Device group ordered node list: no
(dsk/d1) Device group desired number of secondaries: 1
(dsk/d1) Device group device names: /dev/did/rdsk/d1s2
bash-3.2#
bash-3.2# ls
cl_pnmd         clquorum              clsnmphost            hasp_check   scha_cluster_get         scshutdown
claccess        clreslogicalhostname  clsnmpmib             hatimerun    scha_control             scsnapshot
cldev           clresource            clsnmpuser            pmfadm       scha_resource_get        scstat
cldevice        clresourcegroup       clta                  sccheck      scha_resource_setstatus  scswitch
cldevicegroup   clresourcetype        cltelemetryattribute  scconf       scha_resourcegroup_get   sctelemetry
cldg            clressharedaddress    cluster               scdidadm     scha_resourcetype_get    scversions
clinterconnect  clrg                  clvxvm                scdpm        scinstall                scvxinstall
clintr          clrs                  clzc                  scdsbuilder  scnas                    scwtadm
clnas           clrslh                clzonecluster         scdsconfig   scnasdir
clnasdevice     clrssa                hactl                 scdscreate   scprivipadm
clnode          clrt                  haget                 sceventmib   scrgadm
clq             clsetup               halockrun             scgdevs      scsetup
bash-3.2#
bash-3.2# sccheck
bash-3.2#
bash-3.2# cd /etc/cluster/
bash-3.2# ls
ccr   eventlog  nodeid    qd_userd_door  release              syncsa.conf  zone_cluster
clpl  locale    original  ql             remoteconfiguration  vp
bash-3.2#
bash-3.2# more ccr
*** ccr: directory ***
bash-3.2# cd ccr
bash-3.2# ls
cluster_directory cluster_directory.bak global
bash-3.2# more cluster_directory
ccr_gennum 0
ccr_checksum D775CC069EDB1FF4A8A4DEB7EA34BB77
global 0
bash-3.2#
bash-3.2#
bash-3.2# more cluster_directory.bak
ccr_gennum -2
ccr_checksum D775CC069EDB1FF4A8A4DEB7EA34BB77
global 0
bash-3.2#
bash-3.2# ls
cluster_directory cluster_directory.bak global
bash-3.2# ls -la
total 12
drwxr-xr-x 3 root sys 512 Jul 3 16:50 .
drwxr-xr-x 10 root sys 512 Jul 5 13:08 ..
-rw------- 1 root root 68 Jul 3 16:50 cluster_directory
-rw------- 1 root root 69 Oct 22 2009 cluster_directory.bak
drwxr-xr-x 2 root sys 1536 Jul 5 14:51 global
bash-3.2#
bash-3.2# cd global/
bash-3.2# ls
cz_network_ccr     dcs_service_6.bak        did_types.bak         infrastructure.bak
dcs_service_1      dcs_service_7            directory             infrastructure.new
dcs_service_2      dcs_service_7.bak        directory.bak         postconfig
dcs_service_3      dcs_service_classes      dpm_autoreboot_table  postconfig.bak
dcs_service_3.bak  dcs_service_classes.bak  dpm_status_table      privip_ccr
dcs_service_4      dcs_service_keys         dpm_status_table.bak  rgm_rt_SUNW.LogicalHostname:3
dcs_service_4.bak  dcs_service_keys.bak     epoch                 rgm_rt_SUNW.LogicalHostname:3.bak
dcs_service_5      did_instances            epoch.bak             rgm_rt_SUNW.SharedAddress:2
dcs_service_5.bak  did_instances.bak        global_fencing        rgm_rt_SUNW.SharedAddress:2.bak
dcs_service_6      did_types                infrastructure
bash-3.2#
bash-3.2# more infrastructure
ccr_gennum 22
ccr_checksum DB94FD99B8799288E7BBC5272950700C
cluster.name clustergrp
cluster.state enabled
cluster.properties.cluster_id 0x4FF2D43C
cluster.properties.installmode disabled
cluster.properties.private_net_number 172.16.0.0
cluster.properties.cluster_netmask 255.255.240.0
cluster.properties.private_netmask 255.255.248.0
cluster.properties.private_subnet_netmask 255.255.255.128
cluster.properties.private_user_net_number 172.16.4.0
cluster.properties.private_user_netmask 255.255.254.0
cluster.properties.private_maxnodes 64
cluster.properties.private_maxprivnets 10
cluster.properties.zoneclusters 12
cluster.properties.auth_joinlist_type sys
cluster.properties.auth_joinlist_hostslist .
cluster.properties.transport_heartbeat_timeout 10000
cluster.properties.transport_heartbeat_quantum 1000
cluster.properties.udp_session_timeout 480
cluster.properties.cmm_version 1
cluster.nodes.1.name cluster1
cluster.nodes.1.state enabled
cluster.nodes.1.properties.private_hostname clusternode1-priv
cluster.nodes.1.properties.quorum_vote 1
cluster.nodes.1.properties.quorum_resv_key 0x4FF2D43C00000001
cluster.nodes.1.adapters.1.name e1000g2
cluster.nodes.1.adapters.1.state enabled
cluster.nodes.1.adapters.1.properties.device_name e1000g
cluster.nodes.1.adapters.1.properties.device_instance 2
cluster.nodes.1.adapters.1.properties.transport_type dlpi
cluster.nodes.1.adapters.1.properties.lazy_free 1
cluster.nodes.1.adapters.1.properties.dlpi_heartbeat_timeout 10000
cluster.nodes.1.adapters.1.properties.dlpi_heartbeat_quantum 1000
cluster.nodes.1.adapters.1.properties.nw_bandwidth 80
cluster.nodes.1.adapters.1.properties.bandwidth 70
cluster.nodes.1.adapters.1.properties.ip_address 172.16.0.129
cluster.nodes.1.adapters.1.properties.netmask 255.255.255.128
cluster.nodes.1.adapters.1.ports.1.name 0
cluster.nodes.1.adapters.1.ports.1.state enabled
cluster.nodes.1.adapters.2.name e1000g3
cluster.nodes.1.adapters.2.state enabled
cluster.nodes.1.adapters.2.properties.device_name e1000g
cluster.nodes.1.adapters.2.properties.device_instance 3
cluster.nodes.1.adapters.2.properties.transport_type dlpi
cluster.nodes.1.adapters.2.properties.lazy_free 1
cluster.nodes.1.adapters.2.properties.dlpi_heartbeat_timeout 10000
cluster.nodes.1.adapters.2.properties.dlpi_heartbeat_quantum 1000
cluster.nodes.1.adapters.2.properties.nw_bandwidth 80
cluster.nodes.1.adapters.2.properties.bandwidth 70
cluster.nodes.1.adapters.2.properties.ip_address 172.16.1.1
cluster.nodes.1.adapters.2.properties.netmask 255.255.255.128
cluster.nodes.1.adapters.2.ports.1.name 0
cluster.nodes.1.adapters.2.ports.1.state enabled
cluster.nodes.1.cmm_version 1
cluster.nodes.2.name cluster2
cluster.nodes.2.state enabled
cluster.nodes.2.properties.quorum_vote 1
cluster.nodes.2.properties.quorum_resv_key 0x4FF2D43C00000002
cluster.nodes.2.properties.private_hostname clusternode2-priv
cluster.nodes.2.adapters.1.name e1000g2
cluster.nodes.2.adapters.1.properties.device_name e1000g
cluster.nodes.2.adapters.1.properties.device_instance 2
cluster.nodes.2.adapters.1.properties.transport_type dlpi
cluster.nodes.2.adapters.1.properties.lazy_free 1
cluster.nodes.2.adapters.1.properties.dlpi_heartbeat_timeout 10000
cluster.nodes.2.adapters.1.properties.dlpi_heartbeat_quantum 1000
cluster.nodes.2.adapters.1.properties.nw_bandwidth 80
cluster.nodes.2.adapters.1.properties.bandwidth 70
cluster.nodes.2.adapters.1.properties.ip_address 172.16.0.130
cluster.nodes.2.adapters.1.properties.netmask 255.255.255.128
cluster.nodes.2.adapters.1.state enabled
cluster.nodes.2.adapters.1.ports.1.name 0
cluster.nodes.2.adapters.1.ports.1.state enabled
cluster.nodes.2.adapters.2.name e1000g3
cluster.nodes.2.adapters.2.properties.device_name e1000g
cluster.nodes.2.adapters.2.properties.device_instance 3
cluster.nodes.2.adapters.2.properties.transport_type dlpi
cluster.nodes.2.adapters.2.properties.lazy_free 1
cluster.nodes.2.adapters.2.properties.dlpi_heartbeat_timeout 10000
cluster.nodes.2.adapters.2.properties.dlpi_heartbeat_quantum 1000
cluster.nodes.2.adapters.2.properties.nw_bandwidth 80
cluster.nodes.2.adapters.2.properties.bandwidth 70
cluster.nodes.2.adapters.2.properties.ip_address 172.16.1.2
cluster.nodes.2.adapters.2.properties.netmask 255.255.255.128
cluster.nodes.2.adapters.2.state enabled
cluster.nodes.2.adapters.2.ports.1.name 0
cluster.nodes.2.adapters.2.ports.1.state enabled
cluster.nodes.2.cmm_version 1
cluster.blackboxes.1.name switch1
cluster.blackboxes.1.state enabled
cluster.blackboxes.1.properties.type switch
cluster.blackboxes.1.ports.1.name 1
cluster.blackboxes.1.ports.1.state enabled
cluster.blackboxes.1.ports.2.name 2
cluster.blackboxes.1.ports.2.state enabled
cluster.blackboxes.2.name switch2
cluster.blackboxes.2.state enabled
cluster.blackboxes.2.properties.type switch
cluster.blackboxes.2.ports.1.name 1
cluster.blackboxes.2.ports.1.state enabled
cluster.blackboxes.2.ports.2.name 2
cluster.blackboxes.2.ports.2.state enabled
cluster.cables.1.properties.end1 cluster.nodes.1.adapters.1.ports.1
cluster.cables.1.properties.end2 cluster.blackboxes.1.ports.1
cluster.cables.1.state enabled
cluster.cables.2.properties.end1 cluster.nodes.1.adapters.2.ports.1
cluster.cables.2.properties.end2 cluster.blackboxes.2.ports.1
cluster.cables.2.state enabled
cluster.cables.3.properties.end1 cluster.nodes.2.adapters.1.ports.1
cluster.cables.3.properties.end2 cluster.blackboxes.1.ports.2
cluster.cables.3.state enabled
cluster.cables.4.properties.end1 cluster.nodes.2.adapters.2.ports.1
cluster.cables.4.properties.end2 cluster.blackboxes.2.ports.2
cluster.cables.4.state enabled
cluster.quorum_devices.1.name d3
cluster.quorum_devices.1.state enabled
cluster.quorum_devices.1.properties.votecount 1
cluster.quorum_devices.1.properties.gdevname /dev/did/rdsk/d3s2
cluster.quorum_devices.1.properties.path_1 enabled
cluster.quorum_devices.1.properties.path_2 enabled
cluster.quorum_devices.1.properties.access_mode scsi2
cluster.quorum_devices.1.properties.type shared_disk
cluster.quorum_devices.2.name d6
cluster.quorum_devices.2.state enabled
cluster.quorum_devices.2.properties.votecount 1
cluster.quorum_devices.2.properties.gdevname /dev/did/rdsk/d6s2
cluster.quorum_devices.2.properties.path_1 enabled
cluster.quorum_devices.2.properties.path_2 enabled
cluster.quorum_devices.2.properties.access_mode scsi2
cluster.quorum_devices.2.properties.type shared_disk
cluster.quorum_devices.3.name d7
cluster.quorum_devices.3.state enabled
cluster.quorum_devices.3.properties.votecount 1
cluster.quorum_devices.3.properties.gdevname /dev/did/rdsk/d7s2
cluster.quorum_devices.3.properties.path_1 enabled
cluster.quorum_devices.3.properties.path_2 enabled
cluster.quorum_devices.3.properties.access_mode scsi2
cluster.quorum_devices.3.properties.type shared_disk
bash-3.2#
bash-3.2#
bash-3.2#
bash-3.2#
bash-3.2#
bash-3.2# ps -ef
UID PID PPID C STIME TTY TIME CMD
root 0 0 0 13:07:56 ? 20:36 sched
root 1 0 0 13:07:58 ? 0:00 /sbin/init
root 2 0 0 13:07:58 ? 0:00 pageout
root 3 0 0 13:07:58 ? 0:10 fsflush
root 4 0 0 13:07:58 ? 0:06 cluster
root 5 0 0 13:07:58 ? 0:00 vmtasks
root 833 1 0 13:08:50 ? 0:00 /usr/sfw/sbin/snmpd
root 9 1 0 13:08:00 ? 0:03 /lib/svc/bin/svc.startd
root 11 1 0 13:08:00 ? 0:07 /lib/svc/bin/svc.configd
root 277 1 0 13:08:14 ? 0:00 /usr/lib/inet/in.mpathd
root 383 1 0 13:08:30 ? 0:03 /usr/sbin/in.routed
root 586 9 0 13:08:42 ? 0:00 /usr/lib/saf/sac -t 300
smmsp 2101 1 0 13:09:48 ? 0:00 /usr/lib/sendmail -Ac -q15m
root 623 586 0 13:08:42 ? 0:00 /usr/lib/saf/ttymon
root 751 1 0 13:08:44 ? 0:00 /usr/sbin/vold -f /etc/vold.conf
daemon 529 1 0 13:08:41 ? 0:00 /usr/lib/nfs/statd
daemon 520 1 0 13:08:41 ? 0:00 /usr/sbin/rpcbind
root 186 1 0 13:08:09 ? 0:00 /usr/lib/sysevent/syseventd
root 390 1 0 13:08:33 ? 0:00 /usr/cluster/lib/sc/qd_userd
root 449 1 0 13:08:40 ? 0:00 /lib/svc/method/iscsi-initiator
root 421 1 0 13:08:40 ? 0:00 /usr/cluster/lib/sc/failfastd
daemon 525 1 0 13:08:41 ? 0:00 /usr/lib/nfs/nfs4cbd
daemon 217 1 0 13:08:12 ? 0:00 /usr/lib/crypto/kcfd
root 426 1 0 13:08:40 ? 0:00 /usr/cluster/lib/sc/cl_execd
root 249 1 0 13:08:13 ? 0:00 /usr/lib/picl/picld
root 252 1 0 13:08:13 ? 0:00 devfsadmd
daemon 526 1 0 13:08:41 ? 0:00 /usr/lib/nfs/nfsmapid
root 427 426 0 13:08:40 ? 0:00 /usr/cluster/lib/sc/cl_execd
root 456 455 0 13:08:40 ? 0:00 /usr/cluster/lib/sc/clexecd
root 1093 1 0 13:09:08 ? 0:00 /usr/cluster/lib/sc/sc_zonesd
root 455 1 0 13:08:40 ? 0:00 /usr/cluster/lib/sc/clexecd
root 624 1 0 13:08:42 ? 0:00 /usr/lib/inet/inetd start
root 600 1 0 13:08:42 ? 0:00 /usr/lib/utmpd
root 461 1 0 13:08:40 ? 0:00 /usr/cluster/lib/sc/pmmd
root 1127 1 0 13:09:08 ? 0:00 /usr/cluster/bin/cl_pnmd
root 463 1 0 13:08:40 ? 0:01 /usr/sbin/nscd
root 488 1 0 13:08:41 ? 0:00 /usr/sbin/cron
root 2102 1 0 13:09:48 ? 0:00 /usr/lib/sendmail -bl -q15m
root 1711 1 0 13:09:13 ? 0:00 /usr/lib/cacao/lib/tools/launch -w /var/cacao/instances/default -L 16384 -A /us
root 538 1 0 13:08:41 ? 0:00 /usr/cluster/lib/sc/ifconfig_proxy_serverd
root 1030 994 0 13:09:05 ? 0:00 /usr/dt/bin/dtlogin -daemon
root 949 1 0 13:09:00 ? 0:01 /usr/lib/inet/xntpd -c /etc/inet/ntp.conf.cluster
root 558 1 0 13:08:42 ? 0:00 /usr/cluster/lib/sc/rtreg_proxy_serverd
daemon 553 1 0 13:08:42 ? 0:00 /usr/lib/nfs/lockd
root 596 9 0 13:08:42 console 0:00 /usr/lib/saf/ttymon -g -d /dev/console -l console -m ldterm,ttcompat -h -p clus
root 648 1 0 13:08:43 ? 0:00 /usr/sadm/lib/smc/bin/smcboot
root 649 648 0 13:08:43 ? 0:00 /usr/sadm/lib/smc/bin/smcboot
root 651 648 0 13:08:43 ? 0:00 /usr/sadm/lib/smc/bin/smcboot
root 755 1 0 13:08:44 ? 0:00 /usr/lib/autofs/automountd
root 756 755 0 13:08:44 ? 0:00 /usr/lib/autofs/automountd
root 998 994 0 13:09:04 ? 0:00 /usr/openwin/bin/fbconsole -n -d :0
root 1228 461 0 13:09:11 ? 0:00 /usr/cluster/lib/sc/rgmd -z global
noaccess 882 1 0 13:08:57 ? 0:18 /usr/java/bin/java -server -Xmx128m -XX:+UseParallelGC -XX:ParallelGCThreads=4
root 786 1 0 13:08:47 ? 0:00 /usr/sbin/syslogd
root 777 1 0 13:08:46 ? 0:01 /usr/lib/fm/fmd/fmd
root 2172 2171 0 14:37:38 ? 0:00 /usr/lib/ssh/sshd
root 776 1 0 13:08:45 ? 0:00 /usr/lib/ssh/sshd
root 2178 2172 0 14:37:41 pts/3 0:00 -sh
root 994 1 0 13:09:03 ? 0:00 /usr/dt/bin/dtlogin -daemon
root 822 1 0 13:08:48 ? 0:00 /usr/lib/snmp/snmpdx -y -c /etc/snmp/conf
root 830 1 0 13:08:50 ? 0:00 /usr/lib/dmi/snmpXdmid -s cluster2
root 829 1 0 13:08:49 ? 0:00 /usr/lib/dmi/dmispd
root 999 994 0 13:09:04 ? 0:04 /usr/X11/bin/Xorg :0 -depth 24 -nobanner -auth /var/dt/A:0-o_aq8b
root 1117 1 0 13:09:08 ? 0:00 /usr/cluster/lib/sc/rpc.pmfd
root 1886 1030 0 13:09:13 ? 0:02 dtgreet -display :0
root 1185 1 0 13:09:10 ? 0:00 /usr/cluster/lib/sc/scprivipd
root 12082 624 0 17:00:43 ? 0:00 /usr/cluster/lib/sc/rpc.scadmd
root 1184 1 0 13:09:10 ? 0:00 /usr/cluster/lib/sc/cznetd
root 12098 2182 0 17:02:02 pts/3 0:00 ps -ef
root 1090 1 0 13:09:08 ? 0:00 /usr/cluster/lib/sc/scqdmd
root 1134 1 0 13:09:08 ? 0:00 /usr/cluster/lib/sc/cl_eventlogd
root 1188 1 0 13:09:10 ? 0:00 /usr/cluster/lib/sc/rpc.fed
root 1144 1 0 13:09:09 ? 0:00 /usr/cluster/lib/sc/pnm_mod_serverd
root 1082 1 0 13:09:08 ? 0:00 /usr/cluster/lib/sc/cl_eventd
root 1081 1 0 13:09:08 ? 0:00 /usr/cluster/lib/sc/scdpmd
root 1092 1 0 13:09:08 ? 0:00 /usr/cluster/lib/sc/cl_ccrad
root 1224 1 0 13:09:11 ? 0:00 /usr/cluster/lib/sc/rgmd
root 2171 776 0 14:37:38 ? 0:00 /usr/lib/ssh/sshd
root 1721 1711 0 13:09:13 ? 0:10 /usr/jdk/jdk1.6.0_26/bin/java -Xmx128M -Dcom.sun.management.jmxremote -Dfile.en
root 1318 1 0 13:09:12 ? 0:00 /usr/cluster/lib/sc/syncsa_serverd
root 2182 2178 0 14:37:43 pts/3 0:00 bash
bash-3.2# ps -ef | grep clus
root 4 0 0 13:07:58 ? 0:06 cluster
root 390 1 0 13:08:33 ? 0:00 /usr/cluster/lib/sc/qd_userd
root 421 1 0 13:08:40 ? 0:00 /usr/cluster/lib/sc/failfastd
root 426 1 0 13:08:40 ? 0:00 /usr/cluster/lib/sc/cl_execd
root 427 426 0 13:08:40 ? 0:00 /usr/cluster/lib/sc/cl_execd
root 456 455 0 13:08:40 ? 0:00 /usr/cluster/lib/sc/clexecd
root 1093 1 0 13:09:08 ? 0:00 /usr/cluster/lib/sc/sc_zonesd
root 455 1 0 13:08:40 ? 0:00 /usr/cluster/lib/sc/clexecd
root 461 1 0 13:08:40 ? 0:00 /usr/cluster/lib/sc/pmmd
root 1127 1 0 13:09:08 ? 0:00 /usr/cluster/bin/cl_pnmd
root 538 1 0 13:08:41 ? 0:00 /usr/cluster/lib/sc/ifconfig_proxy_serverd
root 949 1 0 13:09:00 ? 0:01 /usr/lib/inet/xntpd -c /etc/inet/ntp.conf.cluster
root 558 1 0 13:08:42 ? 0:00 /usr/cluster/lib/sc/rtreg_proxy_serverd
root 596 9 0 13:08:42 console 0:00 /usr/lib/saf/ttymon -g -d /dev/console -l console -m ldterm,ttcompat -h -p clus
root 1228 461 0 13:09:11 ? 0:00 /usr/cluster/lib/sc/rgmd -z global
root 830 1 0 13:08:50 ? 0:00 /usr/lib/dmi/snmpXdmid -s cluster2
root 1117 1 0 13:09:08 ? 0:00 /usr/cluster/lib/sc/rpc.pmfd
root 1185 1 0 13:09:10 ? 0:00 /usr/cluster/lib/sc/scprivipd
root 12082 624 0 17:00:43 ? 0:00 /usr/cluster/lib/sc/rpc.scadmd
root 1184 1 0 13:09:10 ? 0:00 /usr/cluster/lib/sc/cznetd
root 1090 1 0 13:09:08 ? 0:00 /usr/cluster/lib/sc/scqdmd
root 1134 1 0 13:09:08 ? 0:00 /usr/cluster/lib/sc/cl_eventlogd
root 1188 1 0 13:09:10 ? 0:00 /usr/cluster/lib/sc/rpc.fed
root 1144 1 0 13:09:09 ? 0:00 /usr/cluster/lib/sc/pnm_mod_serverd
root 1082 1 0 13:09:08 ? 0:00 /usr/cluster/lib/sc/cl_eventd
root 1081 1 0 13:09:08 ? 0:00 /usr/cluster/lib/sc/scdpmd
root 1092 1 0 13:09:08 ? 0:00 /usr/cluster/lib/sc/cl_ccrad
root 1224 1 0 13:09:11 ? 0:00 /usr/cluster/lib/sc/rgmd
root 1318 1 0 13:09:12 ? 0:00 /usr/cluster/lib/sc/syncsa_serverd
root 12100 2182 0 17:02:07 pts/3 0:00 grep clus
bash-3.2#
bash-3.2#
bash-3.2#
bash-3.2# svcs -a | grep clus
legacy_run 13:09:00 lrc:/etc/rc2_d/S74xntpd_cluster
disabled 13:08:03 svc:/system/cluster/sc_restarter:default
disabled 13:08:03 svc:/system/cluster/sc_ifconfig_proxy:default
disabled 13:08:03 svc:/system/cluster/sc_ng_zones:default
online 13:08:05 svc:/system/cluster/cl_boot_check:default
online 13:08:07 svc:/system/cluster/scmountdev:default
online 13:08:12 svc:/system/cluster/scslm:default
online 13:08:14 svc:/network/multipath:cluster
online 13:08:32 svc:/system/cluster/loaddid:default
online 13:08:39 svc:/system/cluster/bootcluster:default
online 13:08:39 svc:/system/cluster/initdid:default
online 13:08:39 svc:/system/cluster/sc_failfast:default
online 13:08:40 svc:/system/cluster/sc_pmmd:default
online 13:08:40 svc:/system/cluster/scvxinstall:default
online 13:08:40 svc:/system/cluster/cl_execd:default
online 13:08:41 svc:/system/cluster/clexecd:default
online 13:08:41 svc:/system/cluster/sc_ifconfig_server:default
online 13:08:41 svc:/system/cluster/zc_cmd_log_replay:default
online 13:08:41 svc:/system/cluster/sc_rtreg_server:default
online 13:08:42 svc:/system/cluster/globaldevices:default
online 13:08:58 svc:/system/cluster/gdevsync:default
online 13:09:07 svc:/system/cluster/cl-svc-enable:default
online 13:09:07 svc:/system/cluster/scdpm:default
online 13:09:07 svc:/system/cluster/cl-event:default
online 13:09:07 svc:/system/cluster/scqdm:default
online 13:09:07 svc:/system/cluster/cl-ccra:default
online 13:09:07 svc:/system/cluster/ql_upgrade:default
online 13:09:08 svc:/system/cluster/mountgfs:default
online 13:09:08 svc:/system/cluster/pnm:default
online 13:09:08 svc:/system/cluster/rpc-pmf:default
online 13:09:08 svc:/system/cluster/clusterdata:default
online 13:09:08 svc:/system/cluster/cl-eventlog:default
online 13:09:08 svc:/system/cluster/sc_zc_member:default
online 13:09:08 svc:/system/cluster/ql_rgm:default
online 13:09:08 svc:/system/cluster/sc_pnm_proxy_server:default
online 13:09:09 svc:/system/cluster/sc_zones:default
online 13:09:09 svc:/system/cluster/cznetd:default
online 13:09:10 svc:/system/cluster/scprivipd:default
online 13:09:10 svc:/system/cluster/rpc-fed:default
online 13:09:11 svc:/system/cluster/rgm-starter:default
online 13:09:11 svc:/system/cluster/cl-svc-cluster-milestone:default
online 13:09:11 svc:/system/cluster/scslmclean:default
online 13:09:11 svc:/system/cluster/sc_svtag:default
online 13:09:12 svc:/system/cluster/sc_syncsa_server:default
online 13:09:17 svc:/system/cluster/sckeysync:default
offline 13:08:03 svc:/system/cluster/scsymon-srv:default
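Each entry above is an SMF service; a single one can be inspected with the standard SMF commands. A sketch, using FMRIs taken from the listing (this assumes a live Solaris 10 system; output will vary):

```shell
# Detailed view of one cluster service from the list above.
svcs -l svc:/system/cluster/rgm-starter:default

# Explain why a service is not online -- scsymon-srv is shown offline above.
svcs -x svc:/system/cluster/scsymon-srv:default
```

`svcs -x` also prints the path of the service log file, which is the usual next stop when a cluster service fails to come online.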
bash-3.2#
bash-3.2#
bash-3.2#
bash-3.2# exit
exit
# exit
=~=~=~=~=~=~=~=~=~=~=~= PuTTY log 2012.07.06 10:58:37 =~=~=~=~=~=~=~=~=~=~=~=
login as: root
Using keyboard-interactive authentication.
Password:
Last login: Thu Jul 5 14:37:41 2012 from 192.168.1.61
Oracle Corporation SunOS 5.10 Generic Patch January 2005
# bash
bash-3.2#
bash-3.2#
=~=~=~=~=~=~=~=~=~=~=~= PuTTY log 2012.07.06 11:08:10 =~=~=~=~=~=~=~=~=~=~=~=
login as: root
Using keyboard-interactive authentication.
Password:
Last login: Fri Jul 6 10:59:20 2012 from 192.168.1.61
Oracle Corporation SunOS 5.10 Generic Patch January 2005
# bash
bash-3.2#
bash-3.2#
bash-3.2#
bash-3.2# vi /.Profile
"/.Profile" [New file]
:wq! "/.Profile" [New file] 1 line, 28 characters
bash-3.2#
bash-3.2#
bash-3.2#
bash-3.2# . /.Profile
bash-3.2#
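The /.Profile file created and sourced above was never displayed, but vi reported "1 line, 28 characters", which is consistent with the PATH export typed at the start of the session. A sketch of its presumed contents (an assumption, not shown in the log):

```shell
# Presumed contents of /.Profile -- inferred from the PATH export typed
# earlier in the session; exactly 27 characters plus the trailing newline.
PATH=$PATH:/usr/cluster/bin
```

Sourcing it with `.` makes the cluster administration commands under /usr/cluster/bin available without full paths.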
bash-3.2#
bash-3.2#
bash-3.2# cluster show
=== Cluster ===
Cluster Name: clustergrp
clusterid: 0x4FF2D43C
installmode: disabled
heartbeat_timeout: 10000
heartbeat_quantum: 1000
private_netaddr: 172.16.0.0
private_netmask: 255.255.240.0
max_nodes: 64
max_privatenets: 10
num_zoneclusters: 12
udp_session_timeout: 480
global_fencing: pathcount
Node List: cluster1, cluster2
=== Host Access Control ===
Cluster name: clustergrp
Allowed hosts: None
Authentication Protocol: sys
=== Cluster Nodes ===
Node Name: cluster1
Node ID: 1
Enabled: yes
privatehostname: clusternode1-priv
reboot_on_path_failure: disabled
globalzoneshares: 1
defaultpsetmin: 1
quorum_vote: 1
quorum_defaultvote: 1
quorum_resv_key: 0x4FF2D43C00000001
Transport Adapter List: e1000g2, e1000g3
Node Name: cluster2
Node ID: 2
Enabled: yes
privatehostname: clusternode2-priv
reboot_on_path_failure: disabled
globalzoneshares: 1
defaultpsetmin: 1
quorum_vote: 1
quorum_defaultvote: 1
quorum_resv_key: 0x4FF2D43C00000002
Transport Adapter List: e1000g2, e1000g3
=== Transport Cables ===
Transport Cable: cluster1:e1000g2,switch1@1
Endpoint1: cluster1:e1000g2
Endpoint2: switch1@1
State: Enabled
Transport Cable: cluster1:e1000g3,switch2@1
Endpoint1: cluster1:e1000g3
Endpoint2: switch2@1
State: Enabled
Transport Cable: cluster2:e1000g2,switch1@2
Endpoint1: cluster2:e1000g2
Endpoint2: switch1@2
State: Enabled
Transport Cable: cluster2:e1000g3,switch2@2
Endpoint1: cluster2:e1000g3
Endpoint2: switch2@2
State: Enabled
=== Transport Switches ===
Transport Switch: switch1
State: Enabled
Type: switch
Port Names: 1 2
Port State(1): Enabled
Port State(2): Enabled
Transport Switch: switch2
State: Enabled
Type: switch
Port Names: 1 2
Port State(1): Enabled
Port State(2): Enabled
=== Quorum Devices ===
Quorum Device Name: d3
Enabled: yes
Votes: 1
Global Name: /dev/did/rdsk/d3s2
Type: shared_disk
Access Mode: scsi2
Hosts (enabled): cluster1, cluster2
Quorum Device Name: d6
Enabled: yes
Votes: 1
Global Name: /dev/did/rdsk/d6s2
Type: shared_disk
Access Mode: scsi2
Hosts (enabled): cluster1, cluster2
Quorum Device Name: d7
Enabled: yes
Votes: 1
Global Name: /dev/did/rdsk/d7s2
Type: shared_disk
Access Mode: scsi2
Hosts (enabled): cluster1, cluster2
=== Device Groups ===
=== Registered Resource Types ===
Resource Type: SUNW.LogicalHostname:3
RT_description: Logical Hostname Resource Type
RT_version: 3
API_version: 2
RT_basedir: /usr/cluster/lib/rgm/rt/hafoip
Single_instance: False
Proxy: False
Init_nodes: All potential masters
Installed_nodes: <All>
Failover: True
Pkglist: SUNWscu
RT_system: True
Global_zone: True
Resource Type: SUNW.SharedAddress:2
RT_description: HA Shared Address Resource Type
RT_version: 2
API_version: 2
RT_basedir: /usr/cluster/lib/rgm/rt/hascip
Single_instance: False
Proxy: False
Init_nodes: <Unknown>
Installed_nodes: <All>
Failover: True
Pkglist: SUNWscu
RT_system: True
Global_zone: True
=== Resource Groups and Resources ===
=== DID Device Instances ===
DID Device Name: /dev/did/rdsk/d1
Full Device Path: cluster1:/dev/rdsk/c0t0d0
Replication: none
default_fencing: global
DID Device Name: /dev/did/rdsk/d2
Full Device Path: cluster1:/dev/rdsk/c1t0d0
Replication: none
default_fencing: global
DID Device Name: /dev/did/rdsk/d3
Full Device Path: cluster2:/dev/rdsk/c2t0d0
Full Device Path: cluster1:/dev/rdsk/c2t0d0
Replication: none
default_fencing: global
DID Device Name: /dev/did/rdsk/d4
Full Device Path: cluster2:/dev/rdsk/c0t0d0
Replication: none
default_fencing: global
DID Device Name: /dev/did/rdsk/d5
Full Device Path: cluster2:/dev/rdsk/c1t0d0
Replication: none
default_fencing: global
DID Device Name: /dev/did/rdsk/d6
Full Device Path: cluster2:/dev/rdsk/c3t0d0
Full Device Path: cluster1:/dev/rdsk/c3t0d0
Replication: none
default_fencing: global
DID Device Name: /dev/did/rdsk/d7
Full Device Path: cluster2:/dev/rdsk/c4t0d0
Full Device Path: cluster1:/dev/rdsk/c4t0d0
Replication: none
default_fencing: global
=== NAS Devices ===
=== Zone Clusters ===
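The `cluster show` dump above aggregates several views. The same pieces can be queried individually with the object-oriented Sun Cluster commands, assuming /usr/cluster/bin is on the PATH (a sketch; requires a live cluster):

```shell
# Narrower views of the configuration sections shown above.
clnode list -v          # node names and IDs (cluster1, cluster2)
clquorum list           # quorum devices (d3, d6, d7 above)
clinterconnect show     # transport cables and switches
cldevice list -v        # DID device instances (d1..d7)
```

The per-object commands are usually more convenient for scripting than paging through the full `cluster show` output.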
bash-3.2#
bash-3.2#
bash-3.2# zfs import /nfspool
unrecognized command 'import'
usage: zfs command args ...
where 'command' is one of the following:
create [-p] [-o property=value] ... <filesystem>
create [-ps] [-b blocksize] [-o property=value] ... -V <size> <volume>
destroy [-rRf] <filesystem|volume>
destroy [-rRd] <snapshot>
snapshot [-r] [-o property=value] ... <filesystem@snapname|volume@snapname>
rollback [-rRf] <snapshot>
clone [-p] [-o property=value] ... <snapshot> <filesystem|volume>
promote <clone-filesystem>
rename <filesystem|volume|snapshot> <filesystem|volume|snapshot>
rename -p <filesystem|volume> <filesystem|volume>
rename -r <snapshot> <snapshot>
list [-rH][-d max] [-o property[,...]] [-t type[,...]] [-s property] ...
[-S property] ... [filesystem|volume|snapshot] ...
set [-r] <property=value> <filesystem|volume|snapshot> ...
get [-rHp] [-d max] [-o "all" | field[,...]] [-s source[,...]]
<"all" | property[,...]> [filesystem|volume|snapshot] ...
inherit [-rS] <property> <filesystem|volume|snapshot> ...
upgrade [-v]
upgrade [-r] [-V version] <-a | filesystem ...>
userspace [-hniHp] [-o field[,...]] [-sS field] ... [-t type[,...]]
<filesystem|snapshot>
groupspace [-hniHpU] [-o field[,...]] [-sS field] ... [-t type[,...]]
<filesystem|snapshot>
mount
mount [-vO] [-o opts] <-a | filesystem>
unmount [-f] <-a | filesystem|mountpoint>
share <-a | filesystem>
unshare <-a | filesystem|mountpoint>
send [-RDpb] [-[iI] snapshot] <snapshot>
receive [-vnFu] [[-o property=value] | [-x property]] ... <filesystem|volume|snapshot>
receive [-vnFu] [[-o property=value] | [-x property]] ... [-d | -e] <filesystem>
allow <filesystem|volume>
allow [-ldug] <"everyone"|user|group>[,...] <perm|@setname>[,...]
<filesystem|volume>
allow [-ld] -e <perm|@setname>[,...] <filesystem|volume>
allow -c <perm|@setname>[,...] <filesystem|volume>
allow -s @setname <perm|@setname>[,...] <filesystem|volume>
unallow [-rldug] <"everyone"|user|group>[,...]
[<perm|@setname>[,...]] <filesystem|volume>
unallow [-rld] -e [<perm|@setname>[,...]] <filesystem|volume>
unallow [-r] -c [<perm|@setname>[,...]] <filesystem|volume>
unallow [-r] -s @setname [<perm|@setname>[,...]] <filesystem|volume>
hold [-r] <tag> <snapshot> ...
holds [-r] <snapshot> ...
release [-r] <tag> <snapshot> ...
diff [-FHt] <snapshot> [snapshot|filesystem]
Each dataset is of the form: pool/[dataset/]*dataset[@name]
For the property list, run: zfs set|get
For the delegated permission list, run: zfs allow|unallow
bash-3.2# zfs import /nfspool
unrecognized command 'import'
bash-3.2#
bash-3.2#
bash-3.2# zpool import /nfspool
cannot import '/nfspool': no such pool available
bash-3.2#
bash-3.2# zpool import nfspool
cannot import 'nfspool': pool may be in use from other system, it was last accessed by cluster1 (hostid: 0x3700493a) on Fri Jul 6 13:06:59 2012
use '-f' to import anyway
bash-3.2#
bash-3.2#
bash-3.2#
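The failed attempts above mix up the two ZFS utilities: `import` is a zpool subcommand, not a zfs one, and it takes a bare pool name rather than a path. Because the pool was last accessed by the other node (cluster1), `-f` is required. A sketch of the intended sequence (requires a live system with the pool visible):

```shell
# 'import' belongs to zpool, and the pool name carries no leading slash.
# -f forces the import since cluster1 accessed the pool last.
zpool import -f nfspool

# Verify the pool and its datasets after import.
zpool status nfspool
zfs list -r nfspool
```

Note that forcing an import of a pool that may still be in use on another node risks corruption; on a cluster the pool is normally handed over by the resource group framework rather than imported manually.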
bash-3.2# df -h
Filesystem size used avail capacity Mounted on
/dev/dsk/c1t0d0s0 9.6G 5.6G 3.9G 59% /
/devices 0K 0K 0K 0% /devices
ctfs 0K 0K 0K 0% /system/contract
proc 0K 0K 0K 0% /proc
mnttab 0K 0K 0K 0% /etc/mnttab
swap 3.2G 1.3M 3.2G 1% /etc/svc/volatile
objfs 0K 0K 0K 0% /system/object
sharefs 0K 0K 0K 0% /etc/dfs/sharetab
/usr/lib/libc/libc_hwcap1.so.1
9.6G 5.6G 3.9G 59% /lib/libc.so.1
fd 0K 0K 0K 0% /dev/fd
/dev/dsk/c1t0d0s4 2.9G 142M 2.7G 5% /var
swap 3.2G 76K 3.2G 1% /tmp
swap 3.2G 32K 3.2G 1% /var/run
/dev/dsk/c1t0d0s3 2.9G 125M 2.7G 5% /opt
/dev/did/dsk/d2s6 486M 5.5M 432M 2% /global/.devices/node@1
/dev/did/dsk/d5s6 486M 5.5M 432M 2% /global/.devices/node@2
/vol/dev/dsk/c0t0d0/sol_10_811_x86
2.1G 2.1G 0K 100% /cdrom/sol_10_811_x86
bash-3.2#
bash-3.2#
bash-3.2#
bash-3.2#
bash-3.2# vi /etc/hosts
"/etc/hosts" [Read only] 7 lines, 114 characters
#
# Internet host table
#
::1 localhost
127.0.0.1 localhost
192.168.1.63 cluster2 loghost
192.168.1.62 cluster1
-- Resource Groups and Resources --
Group Name Resources
---------- ---------
Resources: nfs-rg nfs-lh-rs nfs-hast-rst
-- Resource Groups --
Group Name Node Name State Suspended
---------- --------- ----- ---------
Group: nfs-rg cluster1 Online No
Group: nfs-rg cluster2 Offline No
-- Resources --
Resource Name Node Name State Status Message
------------- --------- ----- --------------
Resource: nfs-lh-rs cluster1 Online Online - LogicalHostname online.
Resource: nfs-lh-rs cluster2 Offline Offline
Resource: nfs-hast-rst cluster1 Online Online
Resource: nfs-hast-rst cluster2 Offline Offline
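The status above shows the nfs-rg resource group mastered by cluster1 and offline on cluster2. Assuming the standard Sun Cluster 3.2 CLI is available, a manual failover could be exercised with something like (a sketch; requires a live cluster):

```shell
# Move nfs-rg to cluster2 and re-check where it is online.
clresourcegroup switch -n cluster2 nfs-rg
clresourcegroup status nfs-rg
```

A successful switch would show the group Online on cluster2 and Offline on cluster1, confirming that the NFS service and its logical hostname fail over cleanly.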
bash-3.2#
bash-3.2#
bash-3.2#