
Installing SUN Cluster 3.1
==========================

bash-3.00# ifconfig -a
lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
        inet 127.0.0.1 netmask ff000000
eri0: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2
        inet 192.168.0.240 netmask ffffff00 broadcast 192.168.0.255
        ether 0:3:ba:29:8a:ac

bash-3.00# ifconfig qfe0 plumb
bash-3.00# snoop -d qfe0

* shouldn't see any output *
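A fuller link check is to put a temporary address on each interconnect NIC and ping the peer, as the note below describes. A minimal sketch, assuming test addresses 192.168.100.1/.2 (hypothetical, any unused subnet works):

# Hypothetical link test for one interconnect NIC; repeat for qfe1 with a
# different test subnet. The 192.168.100.x addresses are assumptions.
# On ms1:
bash-3.00# ifconfig qfe0 plumb 192.168.100.1 netmask 255.255.255.0 up
# On ms2:
bash-3.00# ifconfig qfe0 plumb 192.168.100.2 netmask 255.255.255.0 up
# Back on ms1 -- a reply proves the cable/switch path works:
bash-3.00# ping 192.168.100.2 5
# Then return both sides to the unplumbed state scinstall expects:
bash-3.00# ifconfig qfe0 unplumb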

Note: Assign a temporary IP address to each interconnect interface and test it
on each node. After that, unplumb the interface.

bash-3.00# ifconfig qfe0 unplumb

bash-3.00# vi /etc/hosts
# Internet host table
#
#::1            localhost               * comment out the IPv6 loopback entry *
127.0.0.1       localhost
192.168.0.240   ms1.singapore.sun.com   ms1
192.168.0.241   ms2.singapore.sun.com   ms2

bash-3.00# view /etc/hostname.eri0
ms1

bash-3.00# ls /etc/notrouter
/etc/notrouter: No such file or directory

bash-3.00# vi /etc/default/login
CONSOLE=/dev/console

bash-3.00# vi /etc/profile
PATH=$PATH:/usr/cluster/bin
MANPATH=$MANPATH:/usr/cluster/man:/usr/share/man
export PATH MANPATH

bash-3.00# view /etc/nsswitch.conf
ipnodes:    files

bash-3.00# mkdir /globaldevices
bash-3.00# df -h
Filesystem             size   used  avail capacity  Mounted on
/dev/dsk/c1t0d0s0       14G   3.7G    11G    26%    /
/devices                 0K     0K     0K     0%    /devices
ctfs                     0K     0K     0K     0%    /system/contract
proc                     0K     0K     0K     0%    /proc
mnttab                   0K     0K     0K     0%    /etc/mnttab
swap                   4.1G   1.3M   4.1G     1%    /etc/svc/volatile
objfs                    0K     0K     0K     0%    /system/object
/platform/sun4u-us3/lib/libc_psr/libc_psr_hwcap1.so.1
                        14G   3.7G    11G    26%    /platform/sun4u-us3/lib/libc_psr.so.1
/platform/sun4u-us3/lib/sparcv9/libc_psr/libc_psr_hwcap1.so.1
                        14G   3.7G    11G    26%    /platform/sun4u-us3/lib/sparcv9/libc_psr.so.1
fd                       0K     0K     0K     0%    /dev/fd
swap                   4.1G    32K   4.1G     1%    /tmp
swap                   4.1G    32K   4.1G     1%    /var/run
/dev/dsk/c1t0d0s3       17G   1.2G    16G     8%    /globaldevices

bash-3.00# view /etc/vfstab
"/etc/vfstab" [Read only] 12 lines, 426 characters
#device         device          mount           FS      fsck    mount   mount
#to mount       to fsck         point           type    pass    at boot options
#
fd      -       /dev/fd fd      -       no      -
/proc   -       /proc   proc    -       no      -
/dev/dsk/c1t0d0s1       -       -       swap    -       no      -
/dev/dsk/c1t0d0s0       /dev/rdsk/c1t0d0s0      /       ufs     1       no      -
/dev/dsk/c1t0d0s3       /dev/rdsk/c1t0d0s3      /globaldevices  ufs     2       yes     -
/devices        -       /devices        devfs   -       no      -
ctfs    -       /system/contract        ctfs    -       no      -
objfs   -       /system/object  objfs   -       no      -
swap    -       /tmp    tmpfs   -       yes     -
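The same preparation has to be repeated on the second node. A minimal pre-flight sketch that rechecks the items from this walkthrough on both nodes (the ssh loop and node names are assumptions; the session could equally use rsh):

# Hypothetical pre-flight check, run from ms1; assumes root ssh to both nodes.
for NODE in ms1 ms2; do
    echo "--- $NODE ---"
    ssh root@$NODE "grep globaldevices /etc/vfstab"   # dedicated slice listed?
    ssh root@$NODE "df -h /globaldevices"             # ...and mounted
    ssh root@$NODE "grep ipnodes /etc/nsswitch.conf"  # 'files' present
    ssh root@$NODE "eeprom local-mac-address?"        # should be true
done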

*** vfstab on MS2 ***

bash-3.00# view /etc/vfstab
#device         device          mount           FS      fsck    mount   mount
#to mount       to fsck         point           type    pass    at boot options
#
fd      -       /dev/fd fd      -       no      -
/proc   -       /proc   proc    -       no      -
/dev/dsk/c1t0d0s1       -       -       swap    -       no      -
/dev/dsk/c1t0d0s0       /dev/rdsk/c1t0d0s0      /       ufs     1       no      -
/dev/dsk/c1t0d0s3       /dev/rdsk/c1t0d0s3      /globaldevices  ufs     2       yes     -
/devices        -       /devices        devfs   -       no      -
ctfs    -       /system/contract        ctfs    -       no      -
objfs   -       /system/object  objfs   -       no      -
swap    -       /tmp    tmpfs   -       yes     -

*** df -h on MS2 ***

bash-3.00# df -h
Filesystem             size   used  avail capacity  Mounted on
/dev/dsk/c1t0d0s0       14G   3.7G    11G    26%    /
/devices                 0K     0K     0K     0%    /devices
ctfs                     0K     0K     0K     0%    /system/contract
proc                     0K     0K     0K     0%    /proc
mnttab                   0K     0K     0K     0%    /etc/mnttab
swap                   2.2G   632K   2.2G     1%    /etc/svc/volatile
objfs                    0K     0K     0K     0%    /system/object
/platform/sun4u-us3/lib/libc_psr/libc_psr_hwcap1.so.1
                        14G   3.7G    11G    26%    /platform/sun4u-us3/lib/libc_psr.so.1
/platform/sun4u-us3/lib/sparcv9/libc_psr/libc_psr_hwcap1.so.1
                        14G   3.7G    11G    26%    /platform/sun4u-us3/lib/sparcv9/libc_psr.so.1
fd                       0K     0K     0K     0%    /dev/fd
swap                   2.2G   136K   2.2G     1%    /tmp
swap                   2.2G    48K   2.2G     1%    /var/run
/dev/dsk/c1t0d0s3       17G    18M    17G     1%    /globaldevices

bash-3.00# pwd
/multipackID3/JES5U1/Solaris_sparc
bash-3.00# ./installer
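The Java ES installer lays down the Sun Cluster framework packages. Before moving on to scinstall, a quick query of the package database confirms they landed; a minimal sketch (SUNWscr is the usual core-framework package name, but treat the exact list as an assumption for your media):

# Hypothetical sanity check after the Java ES installer completes.
bash-3.00# pkginfo | grep -i cluster          # list installed cluster packages
bash-3.00# pkginfo -l SUNWscr | grep VERSION  # core framework package version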

bash-3.00# /usr/cluster/bin/scinstall

  *** Main Menu ***

    Please select from one of the following (*) options:

      * 1) Install a cluster or cluster node
        2) Configure a cluster to be JumpStarted from this install server
        3) Add support for new data services to this cluster node
        4) Upgrade this cluster node
        5) Print release information for this cluster node

      * ?) Help with menu options
      * q) Quit

    Option:  1

  *** Install Menu ***

    Please select from any one of the following options:

        1) Install all nodes of a new cluster
        2) Install just this machine as the first node of a new cluster
        3) Add this machine as a node in an existing cluster

        ?) Help with menu options
        q) Return to the Main Menu

    Option:  1

  *** Installing all Nodes of a New Cluster ***

    This option is used to install and configure a new cluster.

    If either remote shell (see rsh(1)) or secure shell (see ssh(1)) root
    access is enabled to all of the new member nodes from this node, the
    Sun Cluster framework software will be installed on each node.
    Otherwise, the Sun Cluster software must already be pre-installed on
    each node with the "remote configuration" option enabled. The Java
    Enterprise System installer can be used to install the Sun Cluster
    framework software with the "remote configuration" option enabled.

    Since the installation wizard does not yet include support for cluster
    configuration, you must still use scinstall to complete the
    configuration process.

    Press Control-d at any time to return to the Main Menu.

    Do you want to continue (yes/no) [yes]?

  >>> Type of Installation <<<

    There are two options for proceeding with cluster installation. For
    most clusters, a Typical installation is recommended. However, you
    might need to select the Custom option if not all of the Typical
    defaults can be applied to your cluster.

    For more information about the differences between the Typical and
    Custom installation methods, select the Help option from the menu.

    Please select from one of the following options:

        1) Typical
        2) Custom

        ?) Help
        q) Return to the Main Menu

    Option [1]:  1

  >>> Cluster Name <<<

    Each cluster has a name assigned to it. The name can be made up of any
    characters other than whitespace. Each cluster name should be unique
    within the namespace of your enterprise.

    What is the name of the cluster you want to establish?  ms
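Because this path can use rsh or ssh root access from the driving node, it is worth confirming that access before scinstall tries to contact the other members. A one-line sketch (assumes ssh; rsh works equally for this session):

# Hypothetical access test from ms1:
bash-3.00# ssh root@ms2 hostname    # should print "ms2" with no password prompt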

  >>> Cluster Nodes <<<

    This Sun Cluster release supports a total of up to 16 nodes.

    Please list the names of the other nodes planned for the initial
    cluster configuration. List one node name per line. When finished,
    type Control-D:

    Node name (Control-D to finish):  ms1
    Node name (Control-D to finish):  ms2
    Node name (Control-D to finish):  ^D

    This is the complete list of nodes:

        ms2
        ms1

    Is it correct (yes/no) [yes]?

    Attempting to contact "ms1" ... done

    Searching for a remote install method ... done

    The Sun Cluster framework software is already installed on each of the
    new nodes of this cluster. And, it is able to complete the
    configuration process without remote shell access.

    Press Enter to continue:

  >>> Cluster Transport Adapters and Cables <<<

    You must identify the two cluster transport adapters which attach this
    node to the private cluster interconnect.

    Select the first cluster transport adapter for "ms2":

        1) qfe0
        2) qfe1
        3) qfe2
        4) qfe3
        5) Other

    Option:  1

    Searching for any unexpected network traffic on "qfe0" ... done
    Verification completed. No traffic was detected over a 10 second
    sample period.

    Select the second cluster transport adapter for "ms2":

        1) qfe0
        2) qfe1
        3) qfe2
        4) qfe3
        5) Other

    Option:  2

    Searching for any unexpected network traffic on "qfe1" ... done
    Verification completed. No traffic was detected over a 10 second
    sample period.

  >>> Quorum Configuration <<<

    Every two-node cluster requires at least one quorum device. By
    default, scinstall will select and configure a shared SCSI quorum disk
    device for you.

    This screen allows you to disable the automatic selection and
    configuration of a quorum device.

    The only time that you must disable this feature is when ANY of the
    shared storage in your cluster is not qualified for use as a Sun
    Cluster quorum device. If your storage was purchased with your
    cluster, it is qualified. Otherwise, check with your storage vendor to
    determine whether your storage device is supported as a Sun Cluster
    quorum device.

    If you disable automatic quorum device selection now, or if you intend
    to use a quorum device that is not a shared SCSI disk, you must
    instead use scsetup(1M) to manually configure quorum once both nodes
    have joined the cluster for the first time.

    Do you want to disable automatic quorum device selection (yes/no) [no]?
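As an aside: if automatic selection is disabled, quorum can be added later, either interactively through scsetup or directly with scconf. A minimal sketch (d4 is taken from the scdidadm listing at the end of this walkthrough; treat the exact device and syntax as assumptions to verify against scconf(1M)):

# Hypothetical manual quorum setup, run once both nodes have joined:
bash-3.00# scsetup                       # menu-driven; pick the quorum option
# ...or non-interactively, naming a shared DID device:
bash-3.00# scconf -a -q globaldev=d4     # add d4 as a quorum device
bash-3.00# scstat -q                     # verify the quorum votes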

    Is it okay to begin the installation (yes/no) [yes]?

    During the installation process, sccheck(1M) is run on each of the new
    cluster nodes. If sccheck(1M) detects problems, you can either
    interrupt the installation process or check the log files after
    installation has completed.

    Interrupt the installation for sccheck errors (yes/no) [no]?

  Installation and Configuration

    Log file - /var/cluster/logs/install/scinstall.log.2267

    Testing for "/globaldevices" on "ms2" ... done
    Testing for "/globaldevices" on "ms1" ... done

    Starting discovery of the cluster transport configuration.

    The following connections were discovered:

        ms2:qfe0  switch1  ms1:qfe0
        ms2:qfe1  switch2  ms1:qfe1

    Completed discovery of the cluster transport configuration.

    Started sccheck on "ms2".
    Started sccheck on "ms1".

    sccheck completed with no errors or warnings for "ms2".
    sccheck completed with no errors or warnings for "ms1".

    Configuring "ms1" ... done
    Rebooting "ms1" ... done

    Configuring "ms2" ... done
    Rebooting "ms2" ...

    Log file - /var/cluster/logs/install/scinstall.log.2267

    Rebooting ...

Note: ms1 will be rebooted first, and then ms2 will be rebooted.

------------ check on MS1 ------------

bash-3.00# scstat
------------------------------------------------------------------

-- Cluster Nodes --
                    Node name           Status
                    ---------           ------
  Cluster node:     ms1                 Online
  Cluster node:     ms2                 Online

------------------------------------------------------------------

-- Cluster Transport Paths --
                    Endpoint            Endpoint            Status
                    --------            --------            ------
  Transport path:   ms1:qfe1            ms2:qfe1            Path online
  Transport path:   ms1:qfe0            ms2:qfe0            Path online

------------------------------------------------------------------

-- Quorum Summary --

  Quorum votes possible:      3
  Quorum votes needed:        2
  Quorum votes present:       3

-- Quorum Votes by Node --
                    Node Name           Present Possible Status
                    ---------           ------- -------- ------
  Node votes:       ms1                 1        1       Online
  Node votes:       ms2                 1        1       Online

-- Quorum Votes by Device --
                    Device Name         Present Possible Status
                    -----------         ------- -------- ------
  Device votes:     /dev/did/rdsk/d4s2  1        1       Online

------------------------------------------------------------------

-- Device Group Servers --
                         Device Group        Primary             Secondary
                         ------------        -------             ---------

-- Device Group Status --
                              Device Group        Status
                              ------------        ------

-- Multi-owner Device Groups --
                              Device Group        Online Status
                              ------------        -------------

------------------------------------------------------------------

-- IPMP Groups --
               Node Name      Group       Status    Adapter   Status
               ---------      -----       ------    -------   ------
  IPMP Group:  ms1            sc_ipmp0    Online    eri0      Online
  IPMP Group:  ms2            sc_ipmp0    Online    eri0      Online

------------------------------------------------------------------
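scstat with no arguments dumps every section shown above; per-section flags are handier for scripted checks. A short sketch (these are the common flags on Sun Cluster 3.1; verify against scstat(1M) on your release):

bash-3.00# scstat -n     # cluster node status only
bash-3.00# scstat -W     # cluster transport path status
bash-3.00# scstat -q     # quorum summary and votes
bash-3.00# scstat -i     # IPMP group status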

bash-3.00# ifconfig -a
lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
        inet 127.0.0.1 netmask ff000000
eri0: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2
        inet 192.168.0.240 netmask ffffff00 broadcast 192.168.0.255
        groupname sc_ipmp0
        ether 0:3:ba:29:8a:ac
qfe0: flags=1008843<UP,BROADCAST,RUNNING,MULTICAST,PRIVATE,IPv4> mtu 1500 index 4
        inet 172.16.0.129 netmask ffffff80 broadcast 172.16.0.255
        ether 0:3:ba:22:d4:36
qfe1: flags=1008843<UP,BROADCAST,RUNNING,MULTICAST,PRIVATE,IPv4> mtu 1500 index 6
        inet 172.16.1.1 netmask ffffff80 broadcast 172.16.1.127
        ether 0:3:ba:22:d4:37
clprivnet0: flags=1009843<UP,BROADCAST,RUNNING,MULTICAST,MULTI_BCAST,PRIVATE,IPv4> mtu 1500 index 5
        inet 172.16.193.1 netmask ffffff00 broadcast 172.16.193.255
        ether 0:0:0:0:0:1

bash-3.00# view /etc/hostname.eri0
ms1 netmask + broadcast + group sc_ipmp0 up

bash-3.00# df -h
Filesystem             size   used  avail capacity  Mounted on
/dev/dsk/c1t0d0s0       14G   3.9G    10G    28%    /
/devices                 0K     0K     0K     0%    /devices
ctfs                     0K     0K     0K     0%    /system/contract
proc                     0K     0K     0K     0%    /proc
mnttab                   0K     0K     0K     0%    /etc/mnttab
swap                   3.8G   1.6M   3.8G     1%    /etc/svc/volatile
objfs                    0K     0K     0K     0%    /system/object
fd                       0K     0K     0K     0%    /dev/fd
swap                   3.8G    64K   3.8G     1%    /tmp
swap                   3.8G   104K   3.8G     1%    /var/run
/dev/did/dsk/d3s3       17G    20M    17G     1%    /global/.devices/node@1
/dev/did/dsk/d7s3       17G    20M    17G     1%    /global/.devices/node@2

bash-3.00# view /etc/vfstab
#device         device          mount           FS      fsck    mount   mount
#to mount       to fsck         point           type    pass    at boot options
#
fd      -       /dev/fd fd      -       no      -
/proc   -       /proc   proc    -       no      -
/dev/dsk/c1t0d0s1       -       -       swap    -       no      -
/dev/dsk/c1t0d0s0       /dev/rdsk/c1t0d0s0      /       ufs     1       no      -
#/dev/dsk/c1t0d0s3      /dev/rdsk/c1t0d0s3      /globaldevices  ufs     2       yes     -
#/dev/dsk/c2t11d0s6     /dev/rdsk/c2t11d0s6     /multipack3     ufs     2       yes     -
/devices        -       /devices        devfs   -       no      -
ctfs    -       /system/contract        ctfs    -       no      -
objfs   -       /system/object  objfs   -       no      -
swap    -       /tmp    tmpfs   -       yes     -
/dev/did/dsk/d3s3       /dev/did/rdsk/d3s3      /global/.devices/node@1 ufs     2       no      global
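The trailing "global" mount option is what makes /global/.devices/node@1 visible cluster-wide. The same pattern works for shared data filesystems on DID devices; a minimal sketch (the d4 device, slice s0, and the /global/data mount point are hypothetical):

# Hypothetical global filesystem on a shared DID device (newfs from one node):
bash-3.00# newfs /dev/did/rdsk/d4s0
bash-3.00# mkdir -p /global/data            # create the mount point on BOTH nodes
bash-3.00# vi /etc/vfstab                   # add the same line on BOTH nodes:
/dev/did/dsk/d4s0  /dev/did/rdsk/d4s0  /global/data  ufs  2  yes  global,logging
bash-3.00# mount /global/data               # mount from one node; visible on both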

bash-3.00# ls /etc/notrouter
/etc/notrouter

bash-3.00# view /etc/nsswitch.conf
#passwd:    files
passwd:     files
#group:     files
group:      files
#hosts:     files
hosts:      cluster files
#ipnodes:   files
ipnodes:    files

bash-3.00# view /etc/inet/ntp.conf.cluster
peer clusternode1-priv prefer
peer clusternode2-priv

bash-3.00# eeprom local-mac-address?
local-mac-address?=true

------------------------------------ check on MS1 ------------------------------------

------------ check on MS2 ------------

bash-3.00# ifconfig -a
lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
        inet 127.0.0.1 netmask ff000000
eri0: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2
        inet 192.168.0.241 netmask ffffff00 broadcast 192.168.0.255
        groupname sc_ipmp0
        ether 0:3:ba:29:8a:27
qfe0: flags=1008843<UP,BROADCAST,RUNNING,MULTICAST,PRIVATE,IPv4> mtu 1500 index 4
        inet 172.16.0.130 netmask ffffff80 broadcast 172.16.0.255
        ether 0:3:ba:22:d1:2
qfe1: flags=1008843<UP,BROADCAST,RUNNING,MULTICAST,PRIVATE,IPv4> mtu 1500 index 3
        inet 172.16.1.2 netmask ffffff80 broadcast 172.16.1.127
        ether 0:3:ba:22:d1:3
clprivnet0: flags=1009843<UP,BROADCAST,RUNNING,MULTICAST,MULTI_BCAST,PRIVATE,IPv4> mtu 1500 index 5
        inet 172.16.193.2 netmask ffffff00 broadcast 172.16.193.255
        ether 0:0:0:0:0:2
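After the reboot, each node also resolves the per-node private hostnames on the clprivnet0 network, the same names that appear in ntp.conf.cluster. A quick end-to-end check of the private interconnect (the -priv names are the cluster defaults; output is illustrative):

# Confirm the private interconnect in both directions:
bash-3.00# ping clusternode2-priv       # from ms1
clusternode2-priv is alive
bash-3.00# ping clusternode1-priv       # from ms2
clusternode1-priv is alive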

bash-3.00# df -h
Filesystem             size   used  avail capacity  Mounted on
/dev/dsk/c1t0d0s0       14G   3.9G    10G    28%    /
/devices                 0K     0K     0K     0%    /devices
ctfs                     0K     0K     0K     0%    /system/contract
proc                     0K     0K     0K     0%    /proc
mnttab                   0K     0K     0K     0%    /etc/mnttab
swap                   2.1G   1.6M   2.1G     1%    /etc/svc/volatile
objfs                    0K     0K     0K     0%    /system/object
fd                       0K     0K     0K     0%    /dev/fd
swap                   2.1G    64K   2.1G     1%    /tmp
swap                   2.1G   104K   2.1G     1%    /var/run
/dev/did/dsk/d3s3       17G    20M    17G     1%    /global/.devices/node@1
/dev/did/dsk/d7s3       17G    20M    17G     1%    /global/.devices/node@2

bash-3.00# view /etc/hostname.eri0
ms2 netmask + broadcast + group sc_ipmp0 up

bash-3.00# view /etc/vfstab
#device         device          mount           FS      fsck    mount   mount
#to mount       to fsck         point           type    pass    at boot options
#
fd      -       /dev/fd fd      -       no      -
/proc   -       /proc   proc    -       no      -
/dev/dsk/c1t0d0s1       -       -       swap    -       no      -
/dev/dsk/c1t0d0s0       /dev/rdsk/c1t0d0s0      /       ufs     1       no      -
#/dev/dsk/c1t0d0s3      /dev/rdsk/c1t0d0s3      /globaldevices  ufs     2       yes     -
#/dev/dsk/c2t11d0s6     /dev/rdsk/c2t11d0s6     /multipack3     ufs     2       yes     -
/devices        -       /devices        devfs   -       no      -
ctfs    -       /system/contract        ctfs    -       no      -
objfs   -       /system/object  objfs   -       no      -
swap    -       /tmp    tmpfs   -       yes     -
/dev/did/dsk/d7s3       /dev/did/rdsk/d7s3      /global/.devices/node@2 ufs     2       no      global

bash-3.00# ls /etc/notrouter
/etc/notrouter

bash-3.00# eeprom local-mac-address?
local-mac-address?=true

bash-3.00# view /etc/nsswitch.conf
#passwd:    files
passwd:     files
#group:     files
group:      files
#hosts:     files
hosts:      cluster files
#ipnodes:   files
ipnodes:    files

bash-3.00# view /etc/inet/ntp.conf.cluster
peer clusternode1-priv prefer
peer clusternode2-priv

------------------------------------ check on MS2 ------------------------------------

bash-3.00# scdidadm -L
1        ms1:/dev/rdsk/c0t6d0           /dev/did/rdsk/d1
2        ms1:/dev/rdsk/c1t1d0           /dev/did/rdsk/d2
3        ms1:/dev/rdsk/c1t0d0           /dev/did/rdsk/d3
4        ms1:/dev/rdsk/c2t11d0          /dev/did/rdsk/d4
4        ms2:/dev/rdsk/c2t11d0          /dev/did/rdsk/d4
5        ms1:/dev/rdsk/c2t14d0          /dev/did/rdsk/d5
5        ms2:/dev/rdsk/c2t14d0          /dev/did/rdsk/d5
6        ms2:/dev/rdsk/c0t6d0           /dev/did/rdsk/d6
7        ms2:/dev/rdsk/c1t0d0           /dev/did/rdsk/d7
8        ms2:/dev/rdsk/c1t1d0           /dev/did/rdsk/d8
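The scdidadm listing shows how shared disks (d4 and d5 here) get a single DID name visible from both nodes, while local disks get per-node entries. When storage is added later the DID namespace has to be refreshed; a short sketch (flags per scdidadm(1M)/scgdevs(1M) on 3.1, verify on your release):

# Hypothetical refresh after cabling in a new shared disk:
bash-3.00# scdidadm -l        # DID mappings from this node only
bash-3.00# scdidadm -r        # reconfigure: discover and number new devices
bash-3.00# scgdevs            # populate the /global/.devices namespace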

--------------------------------- Installing SUN Cluster 3.1 ------------------------

Knowledge of SUN Cluster 3.1
============================

*** Prepare the Cluster for Additional Cluster Nodes ***

On any node, become superuser, then:

phys-schost# clsetup                  * scsetup in 3.1 *
phys-schost# clinterconnect show      * at least two cables or two adapters must be
                                        configured before you can add a node *
phys-schost# cluster show-netprops    * display the maximum number of nodes and
                                        private networks that the current
                                        private-network configuration supports *

/usr/cluster/bin/scinstall -pv        * check the version of the cluster *
scshutdown -y -g 0                    * shut down the cluster in 3.1 (on one
                                        node, this will shut down both) *
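To actually join a prepared node, the same scinstall menu path from this walkthrough applies, using option 3 of the Install Menu. A minimal sketch of the sequence on the new node (the ms3 name is hypothetical; the sponsoring node and cluster name follow this document's setup):

# On the machine being added (e.g. a hypothetical ms3), after the framework
# packages are installed:
bash-3.00# /usr/cluster/bin/scinstall
# Main Menu    -> 1) Install a cluster or cluster node
# Install Menu -> 3) Add this machine as a node in an existing cluster
# ...then name a sponsoring node (ms1) and the cluster ("ms"), and identify
# this node's two transport adapters when prompted.
bash-3.00# scstat -n        # afterwards, confirm the new node shows Online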
