
Sun Solaris System Admin Notes 2

CONFIGURING, CONTROLLING & MONITORING THE NETWORK INTERFACES


To identify the instance name of the interface:
# grep network /etc/path_to_inst
This displays output only on SPARC Sun hardware.
# dladm show-dev will also display the instance name and status of each interface
# dladm show-dev
nge0    link: up        speed: 100 Mbps    duplex: full
nge1    link: unknown   speed: 0 Mbps      duplex: unknown
bge0    link: unknown   speed: 0 Mbps      duplex: unknown
bge1    link: unknown   speed: 0 Mbps      duplex: unknown
NOTE:
nge - NVIDIA Gigabit Ethernet
bge - Broadcom Gigabit Ethernet
rtls - RealTek Ethernet
hme - "happy meal" Ethernet
qfe - quad fast Ethernet
To view the MAC address:
ok banner        (at the OpenBoot PROM prompt, SPARC only)
# ifconfig -a    (from the running OS)
# ifconfig -a will provide the following:
a. ipaddress of the machine
b. mac address of the machine
c. status flag of the interface
d. instance name of the interface
e. broadcast id
# ifconfig -a
lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
inet 127.0.0.1 netmask ff000000
nge0: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2
inet 192.168.0.120 netmask ffffff00 broadcast 192.168.0.255
ether 0:1b:24:5b:d8:d6
bge1: flags=1000803<UP,BROADCAST,MULTICAST,IPv4> mtu 1500 index 3
inet 192.168.0.145 netmask ff000000 broadcast 192.255.255.255
ether 0:1b:24:5b:d8:d5

To assign the ipaddress to the interface:


1. Make sure the interface is plumbed. Plumbing makes the kernel recognize the interface.
# ifconfig bge1 plumb
# ifconfig bge1 unplumb
To assign the ip to the bge1 interface and set the status as up.
# ifconfig bge1 192.168.0.100 up
To logically bring down the specified interface:
# ifconfig bge1 down
To bring the interface up once again. It is not necessary to specify the IP again, as long as the
interface has not been unplumbed.
# ifconfig bge1 up
To view the mac & ip of the particular interface:
# ifconfig bge1
bge1: flags=1000803<UP,BROADCAST,MULTICAST,IPv4> mtu 1500 index 4

Ravi Mishra



inet 192.168.0.100 netmask ffffff00 broadcast 192.168.0.255
ether 0:1b:24:5b:d8:d5

# ifconfig
1. Used to assign and view the IP address of the system.
2. An IP address assigned using the ifconfig command persists only for the current session.
Once the system is restarted, the IP address assigned to the interface is lost.
To assign the ip address permanently to the interface:
Edit the file
/etc/hostname.XXn where
XXn - logical name of the interface
For eg:
# cat > /etc/hostname.nge0
192.168.0.120
Save this file.
This file may have the hostname of the system or the ip.
To assign a virtual IP to the interface:
1. Plumb the interface
2. Assign the ip to the interface
3. Create a file /etc/hostname.XXn and add entry to the file
1. # ifconfig nge0:1 plumb
2. # ifconfig nge0:1 192.168.0.170 up
3. # cat > /etc/hostname.nge0:1
192.168.0.170
Ctrl+d => to save
# ifconfig nge0:1
nge0:1: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 3
inet 10.0.0.10 netmask ffc00000 broadcast 10.63.255.255

To assign the broadcast address when the network is subnetted, specify the prefix length:


# ifconfig nge0:1 10.0.0.10/10 up
# ifconfig nge0:1 10.0.0.10 up
# ifconfig nge0:1
nge0:1: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 3
inet 10.0.0.10 netmask ff000000 broadcast 10.255.255.255
# ifconfig nge0:1 10.0.0.10/10 up
# ifconfig nge0:1
nge0:1: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 3
inet 10.0.0.10 netmask ffc00000 broadcast 10.63.255.255
Note the difference in the broadcast address.
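The netmask and broadcast values shown for the /8 and /10 cases can be reproduced with plain shell arithmetic. A portable sketch (the ip_info helper name is invented for illustration; ifconfig performs this derivation internally):

```shell
# Derive netmask and broadcast from an address and prefix length.
# ip_info is a made-up helper; the values mirror the ifconfig output above.
ip_info() {  # usage: ip_info A.B.C.D prefix_length
    ip=$1 plen=$2
    # Split the dotted quad into octets
    oldIFS=$IFS; IFS=.; set -- $ip; IFS=$oldIFS
    addr=$(( ($1 << 24) | ($2 << 16) | ($3 << 8) | $4 ))
    # Netmask: top plen bits set; broadcast: host bits forced to 1
    mask=$(( (0xffffffff << (32 - plen)) & 0xffffffff ))
    bcast=$(( addr | (~mask & 0xffffffff) ))
    printf 'netmask %08x broadcast %d.%d.%d.%d\n' "$mask" \
        $(( (bcast >> 24) & 255 )) $(( (bcast >> 16) & 255 )) \
        $(( (bcast >> 8) & 255 )) $(( bcast & 255 ))
}

ip_info 10.0.0.10 10   # netmask ffc00000 broadcast 10.63.255.255
ip_info 10.0.0.10 8    # netmask ff000000 broadcast 10.255.255.255
```

This matches the two ifconfig nge0:1 displays: /8 gives ff000000 with broadcast 10.255.255.255, while /10 gives ffc00000 with broadcast 10.63.255.255.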
/etc/hosts
/etc/inet/hosts
1. Both files are linked.
2. Both files have the same entries.
3. The file is used to resolve IP addresses to hostnames locally on the network.



NOTE:
It is not necessary for every /etc/hosts file on the network to contain identical mappings.
# cat /etc/hosts

# Internet host table


#
127.0.0.1 localhost
192.168.0.120 accel loghost
192.168.0.170 bge1
192.168.0.121 virtual1
192.168.0.122 virtual2

# cat /etc/inet/hosts

# Internet host table


#
127.0.0.1 localhost
192.168.0.120 accel loghost
192.168.0.170 bge1
192.168.0.121 virtual1
192.168.0.122 virtual2

/etc/nodename
This file holds the node name. It is read at every boot/reboot, and the hostname is set
accordingly.
# hostname <new_name>
For eg:
# hostname sun10 -- will change the hostname only for the current session; once the system is
rebooted, the new hostname will not persist.
To make the hostname permanent, edit the file /etc/nodename
# cat > /etc/nodename
sun10
/etc/services
/etc/inet/services
Both files are linked. They provide information about services and their corresponding static port numbers.
To configure DHCP in Solaris-10: Client side configuration:
# touch /etc/dhcp.nge0
where
nge0 = name of the physical interface
# touch /etc/hostname.nge0
# touch /etc/notrouter
# cp /dev/null /etc/defaultrouter
# cp /etc/nsswitch.dns /etc/nsswitch.conf
# cp /dev/null /etc/resolv.conf
# ifconfig -a
# vi /etc/resolv.conf
nameserver 192.163.0.1
# svcadm restart physical
# svcadm restart network
# touch /etc/dhcp.nge0
# touch /etc/hostname.nge0
# ifconfig nge0 dhcp drop



# ifconfig nge0 dhcp start
# ifconfig nge0 dhcp status
# ifconfig nge0 dhcp release
# ifconfig nge0 auto-dhcp
The /etc/default/dhcpagent file holds the configuration parameters that the client requests from
DHCP servers for automatic IP assignment.
NOTE: If you don't want your client host to request the hostname from the DHCP server, make sure
to remove argument 12 (hostname) from the PARAM_REQUEST_LIST in the /etc/default/dhcpagent file;
otherwise, every time you boot up your server the hostname will change to "unknown".
# By default, a parameter request list requesting a subnet mask (1),
# router (3), DNS server (6), hostname (12), DNS domain (15), broadcast
# address (28), and encapsulated vendor options (43), is sent to the DHCP
# server when the DHCP agent sends requests. However, if desired, this
# can be changed by altering the following parameter-value pair. The
# numbers correspond to the values defined in the IANA bootp-dhcp-parameters
# registry at the time of this writing.
#
PARAM_REQUEST_LIST=1,3,6,12,15,28,43
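Removing argument 12 from that list can be sketched as a one-line sed substitution. It is shown here against a sample line rather than editing the live /etc/default/dhcpagent, and the pattern only handles a mid-list ",12," as in the default shown above:

```shell
# Drop option 12 (hostname) from the DHCP parameter request list
line='PARAM_REQUEST_LIST=1,3,6,12,15,28,43'
printf '%s\n' "$line" | sed 's/,12,/,/'
# -> PARAM_REQUEST_LIST=1,3,6,15,28,43
```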

REMOVE ALL SYSTEM CONFIGURATION [DONT RUN THIS COMMAND]


# sys-unconfig - undoes a system's configuration. It does the following:
1. Saves current /etc/inet/hosts file information in /etc/inet/hosts.saved.
2. Saves current /etc/vfstab file to /etc/vfstab.orig.
3. Restores the default /etc/inet/hosts file.
4. Removes the default domainname in /etc/defaultdomain.
5. Restores the timezone to PST8PDT in /etc/TIMEZONE.
6. Disables NIS and NIS+ if either of them was configured.
7. Removes the file /etc/inet/netmasks.
8. Removes the file /etc/defaultrouter.
9. Removes the password set for root in /etc/shadow.
SNOOP: Monitoring network activity to and from hosts
# snoop
Monitors network traffic between particular machines on a specified interface.
Without any options, the snoop command captures on the system's primary network interface.
# snoop

fire1 -> accel TELNET C port=32890
accel -> fire1 TELNET R port=32890 basic_commands
fire1 -> accel TELNET C port=32890
solaris-remote -> (broadcast) ARP C Who is 192.168.0.1, 192.168.0.1 ?
solaris-remote -> (broadcast) ARP C Who is 192.168.0.1, 192.168.0.1 ?
solaris-remote -> virtual1 TELNET C port=32869 l
virtual1 -> solaris-remote TELNET R port=32869 l
solaris-remote -> virtual1 TELNET C port=32869
solaris-remote -> virtual1 TELNET C port=32869 s
virtual1 -> solaris-remote TELNET R port=32869 s
solaris-remote -> virtual1 TELNET C port=32869
solaris-remote -> virtual1 TELNET C port=32869
virtual1 -> solaris-remote TELNET R port=32869
virtual1 -> solaris-remote TELNET R port=32869 Desktop day

# snoop -d <interface>
For eg:
# snoop -d nge0

will monitor only to the specified interface

Using device /dev/nge0 (promiscuous mode)


fire1 -> accel TELNET C port=32890
accel -> fire1 TELNET R port=32890 ^C

Ravi Mishra

fire1
accel
fire1
fire1
accel
fire1

->
->
->
->
->
->

accel
fire1
accel
accel
fire1
accel

TELNET
TELNET
TELNET
TELNET
TELNET
TELNET

Sun Solaris System Admin Notes 2

C
R
C
C
R
C

port=32890
port=32890 \r\n-bash-3.00#
port=32890
port=32890 c
port=32890 c
port=32890

# snoop -D -d nge0
where
-D = used to monitor the dropped packet information
-d = used to monitor for the specified interface
# snoop -D -d nge0
fire1 -> 224.0.0.22 drops: 0 IGMP v3 membership report
fire1 -> 192.168.0.255 drops: 0 RIP C (1 destinations)
fire1 -> 224.0.0.2 drops: 0 ICMP Router solicitation
fire1 -> 224.0.0.22 drops: 0 IGMP v3 membership report
fire1 -> 192.168.0.255 drops: 0 RIP C (1 destinations)
fire1 -> 224.0.0.2 drops: 0 ICMP Router solicitation
100.0.0.2 -> (broadcast) drops: 0 ARP C Who is 100.0.0.2, 100.0.0.2 ?
fire1 -> 224.0.0.2 drops: 0 ICMP Router solicitation
fire1 -> 224.0.0.2 drops: 0 ICMP Router solicitation
fire1 -> 224.0.0.22 drops: 0 IGMP v3 membership report
fire1 -> 224.0.0.22 drops: 0 IGMP v3 membership report
fire1 -> (broadcast) drops: 0 ARP C Who is 192.168.0.120, accel ?
accel -> fire1 drops: 0 ARP R 192.168.0.120, accel is 0:1b:24:5b:d8:d6
fire1 -> accel drops: 0 TELNET C port=32890
accel -> fire1 drops: 0 TELNET R port=32890
fire1 -> accel drops: 0 TELNET C port=32890 swap - l\r\0s\3swassssss
accel -> fire1 drops: 0 TELNET R port=32890 ^Cswap -l\r\nsswasssss
fire1 -> accel drops: 0 TELNET C port=32890
accel -> fire1 drops: 0 TELNET R port=32890 \r\n\r\n-bash-3.00#

# snoop -S -d nge0
-S = to monitor the size of the packets
Using device /dev/nge0 (promiscuous mode)
fire1 -> accel  length:  60  TELNET C port=32891 \33[A
accel -> fire1  length:  67  TELNET R port=32891 cd /class_doc
fire1 -> accel  length:  60  TELNET C port=32891
fire1 -> accel  length:  60  TELNET C port=32891 \33[D
accel -> fire1  length:  55  TELNET R port=32891 \33[D
fire1 -> accel  length:  60  TELNET C port=32891
fire1 -> accel  length:  60  TELNET C port=32891 \33[D
accel -> fire1  length:  55  TELNET R port=32891 \33[D
fire1 -> accel  length:  60  TELNET C port=32891
fire1 -> accel  length:  60  TELNET C port=32891
accel -> fire1  length:  55  TELNET R port=32891
fire1 -> accel  length:  60  TELNET C port=32891
fire1 -> accel  length:  60  TELNET C port=32891

# snoop -a
Listens to packets on the audio device (each received packet produces a click).
# snoop accel fire1
will monitor the transmission only between the specified machine
# snoop accel fire1
Using device /dev/nge0 (promiscuous mode)
fire1 -> accel TELNET C port=32891 s
accel -> fire1 TELNET R port=32891 s
fire1 -> accel TELNET C port=32891
fire1 -> accel TELNET C port=32891 i
accel -> fire1 TELNET R port=32891 i
fire1 -> accel TELNET C port=32891
fire1 -> accel TELNET C port=32891 c
accel -> fire1 TELNET R port=32891 c
fire1 -> accel TELNET C port=32891
fire1 -> accel TELNET C port=32891 _
accel -> fire1 TELNET R port=32891 _
fire1 -> accel TELNET C port=32891
fire1 -> accel TELNET C port=32891 c
accel -> fire1 TELNET R port=32891 c
fire1 -> accel TELNET C port=32891
fire1 -> accel TELNET C port=32891 o
accel -> fire1 TELNET R port=32891 o


# snoop -V
Displays the information in verbose summary mode
# snoop -V -d nge0

Using device /dev/nge0 (promiscuous mode)


fire1 -> accel ETHER Type=0800 (IP), size = 60 bytes
fire1 -> accel IP D=192.168.0.120 S=192.168.0.150 LEN=43,
ID=4610, TOS=0x0, TTL=64
fire1 -> accel TCP D=23 S=32891 Push Ack=2427569954 Seq=1197333170 Len=3 Win=49640
fire1 -> accel TELNET C port=32891 \33[A
accel -> fire1 ETHER Type=0800 (IP), size = 85 bytes
accel -> fire1 IP D=192.168.0.150 S=192.168.0.120 LEN=71, ID=20202, TOS=0x0, TTL=60
accel -> fire1 TCP D=32891 S=23 Push Ack=1197333173 Seq=2427569954 Len=31 Win=49639
accel -> fire1 TELNET R port=32891 cat basic_commands

# snoop -v

Displays the detailed information


IP: .... ..0. = not ECN capable transport
IP: .... ...0 = no ECN congestion experienced
IP: Total length = 124 bytes
IP: Identification = 30333
IP: Flags = 0x4
IP: .1.. .... = do not fragment
IP: ..0. .... = last fragment
IP: Fragment offset = 0 bytes
IP: Time to live = 1 seconds/hops
IP: Protocol = 17 (UDP)
IP: Header checksum = 39f3
IP: Source address = 100.0.0.2, 100.0.0.2
IP: Destination address = 100.255.255.255, 100.255.255.255
IP: No options
IP:
UDP: ----- UDP Header -----
UDP:
UDP: Source port = 32768
UDP: Destination port = 111 (Sun RPC)
UDP: Length = 104
UDP: Checksum = 9376
UDP:
RPC: ----- SUN RPC Header -----

# snoop -o /Desktop/snoop_test -d nge0


This command will redirect the output of the command to the specified file.
# snoop -o /Desktop/snoop_test -d nge0
Using device /dev/nge0 (promiscuous mode)
78

# snoop -i /Desktop/snoop_test
- Used to read the entries of the file.
NOTE:
Format of the file is different, hence we used # snoop -i to read the entries of the file.
# file /Desktop/snoop_test
/Desktop/snoop_test: Snoop capture file - version 2
SWAP CONFIGURATION
Swap is virtual memory space allocated from the hard disk drive to supplement physical memory and
improve system performance. In Solaris, swap space can be added either permanently or temporarily.
At the same time, the swap space can be a file or a dedicated slice.
By default the swap slice is slice 1.
# swap -s
Will display a summary of the swap space: total allocated, used, and free.



# swap -s
total: 263440k bytes allocated + 42452k reserved = 305892k used, 23162412k available
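The swap -s figures are internally consistent: allocated plus reserved equals used. A quick arithmetic check, using the numbers from the output above:

```shell
# allocated + reserved = used, all in KB, from the swap -s line above
allocated=263440
reserved=42452
echo $(( allocated + reserved ))   # -> 305892
```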
# swap -l
will display information about the swap files and slices, along with their sizes in blocks.
# swap -l
swapfile dev swaplo blocks free
/dev/dsk/c1d0s1 102,1 8 42700760 42700760
/swap_file - 8 1023992 1023992
# mkfile <size> <name_of_the_file> will create a file with the specified size.
NOTE: Whenever a file is created with defined size using #mkfile command, the file will be with
Sticky bit permission by default.
Eg:
# mkfile 200m /swap_file1 -- will create a new file named 'swap_file1' with size 200 MB.
# ls -lh / | grep swap_file1
-rw------T 1 root root 200M Aug 14 12:32 swap_file1
To add the file to swap memory:
# swap -a <file_name>
Eg: # swap -a /swap_file1
# swap -l
/dev/dsk/c1d0s1 102,1 8 42700760 42700760
/swap_file - 8 1023992 1023992
/swap_file1 - 8 409592 409592
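The 409592 block figure for /swap_file1 follows from the file size: swap blocks are 512 bytes, and the first swaplo blocks (8 here) are not usable for swapping. A quick check:

```shell
# A 200 MB mkfile expressed in 512-byte blocks, minus the swaplo offset (8)
size_mb=200
swaplo=8
blocks=$(( size_mb * 1024 * 1024 / 512 - swaplo ))
echo "$blocks"   # -> 409592
```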
To remove the file from swap memory:
# swap -d <file_name>
Eg:
# swap -d /swap_file1
# swap -l
swapfile dev swaplo blocks free
/dev/dsk/c1d0s1 102,1 8 42700760 42700760
/swap_file - 8 1023992 1023992
To add a slice to the swap memory:
1. Create slice using format utility
2. Create the file system for the slice
3. Add the slice to the swap memory by # swap -a
For eg:
# swap -a /dev/dsk/c1d0s5
To make the swap file & slice permanently available edit the file /etc/vfstab
Eg:
bash-3.00# cat /etc/vfstab
#device            device             mount             FS      fsck  mount    mount
#to mount          to fsck            point             type    pass  at boot  options
#
fd                 -                  /dev/fd           fd      -     no       -
/proc              -                  /proc             proc    -     no       -
/dev/dsk/c1t0d0s1  -                  -                 swap    -     no       -
/swap_file         -                  -                 swap    -     no       -
/swap_file1        -                  -                 swap    -     no       -
/dev/dsk/c1t0d0s5  -                  -                 swap    -     no       -
/dev/md/dsk/d3     /dev/md/rdsk/d3    /                 ufs     1     no       -
/dev/dsk/c1t0d0s3  /dev/rdsk/c1t0d0s3 /usr              ufs     1     no       -
/dev/dsk/c1t0d0s4  /dev/rdsk/c1t0d0s4 /var              ufs     1     no       -
/dev/dsk/c1t0d0s5  /dev/rdsk/c1t0d0s5 /opt              ufs     2     yes      -
/devices           -                  /devices          devfs   -     no       -
sharefs            -                  /etc/dfs/sharetab sharefs -     no       -
ctfs               -                  /system/contract  ctfs    -     no       -
objfs              -                  /system/object    objfs   -     no       -
swap               -                  /tmp              tmpfs   -     yes      -

SOLARIS MANAGEMENT CONSOLE: SMC


# smc &
will open a Graphical tool to do administration task
The following tasks can be performed through smc.
1. Storage
-- Disks, Mounts and Shares, and Enhanced Storage Tools
2. Devices and Hardware -- Serial Ports (also launches a terminal window)
3. System Status -- Processes, Log viewer, System Information, and Performance
4. System configuration -- Users, Projects, Computer and Networks, and Patches
5. Services -- Scheduled Jobs
To determine if the SMC server is running:
# /etc/init.d/init.wbem status

To start the SMC server:


# /etc/init.d/init.wbem start
To stop the SMC server:
# /etc/init.d/init.wbem stop
CRASH/CORE & DUMP ADMINISTRATION
CRASH DUMP: OS generates a crash dump by writing some of the contents of its physical memory to a
predetermined dump device, which must be a local disk slice.
/var/crash/`uname -n`/vmcore.x
where
x = integer identifying the dump
/var/crash/`uname -n`/unix.x
NOTE:
Within the crash dump directory a file named bounds is created. The bounds file holds a number that
is used as a suffix for the next dump to be saved.
The configuration file for crash dumps is /etc/dumpadm.conf
1. Editing this file directly is not recommended.
2. This file provides the following information:
a. Which slice is dedicated for the dump; by default the swap slice (slice 1) is dedicated for this
purpose.
b. Whether the crash dump facility is enabled or disabled.
c. What contents are to be dumped; by default kernel pages are dumped.
d. The savecore directory.
# dumpadm
This command reads the file /etc/dumpadm.conf and the output will be displayed accordingly.
Eg:
Dump content: kernel pages
Dump device: /dev/dsk/c0d1s1
Savecore directory: /var/crash/server
Savecore enabled: yes



# dumpadm -d /dev/dsk/c0d1s5
Will change the default (/dev/dsk/c0d1s1) dump device to /dev/dsk/c0d1s5
Dump content: kernel pages
Dump device: /dev/dsk/c0d1s5 (dedicated)
Savecore directory: /var/crash/server
Savecore enabled: yes

Here the dump device is changed.


# dumpadm -n
will disable savecore.

Dump content: kernel pages


Dump device: /dev/dsk/c0d1s5 (dedicated)
Savecore directory: /var/crash/server
Savecore enabled: no

Here save core is disabled.


# dumpadm -y

will enable the save core.

Dump content: kernel pages


Dump device: /dev/dsk/c0d1s5 (dedicated)
Savecore directory: /var/crash/server
Savecore enabled: yes

Here save core is enabled.


NOTE:
1. savecore is enabled by default.
Only if savecore is enabled will the dumped contents be saved to the specified directory.
2. The dumpadm command updates the file /etc/dumpadm.conf, so the configuration remains permanent.
# dumpadm -s /var/crash/Unix
This command changes the savecore directory.
Dump content: kernel pages
Dump device: /dev/dsk/c0d1s5 (dedicated)
Savecore directory: /var/crash/Unix/
Savecore enabled: yes
Here savecore directory is changed.

# dumpadm -c all
This will ask the system to dump all the pages from the physical memory.
The default dump contents are kernel pages.
Dump content: all pages
Dump device: /dev/dsk/c0d1s5 (dedicated)
Savecore directory: /var/crash/Unix/
Savecore enabled: yes

Here the default dump content is changed to "all pages"


Coreadm:
When a process terminates abnormally it typically produces a core file.
1. A core file is a point-in-time copy of RAM allocated to a process.
2. The copy is written to a more permanent medium - hard disk drive.
3. A core file is also a disk copy of the address space of a process at a certain point-in-time.
4. A core file will have the following information:
i. task name
ii. task owner
iii. priority at the time of execution
5. The OS generates two possible copies of the core file, based on the configuration.
6. GLOBAL CORE FILE:
i. created with mode 600
ii. owned by the superuser
iii. non-privileged users are not permitted to examine it



7. ORDINARY PER-PROCESS CORE FILE:
i. created with mode 600
ii. owned by the owner of the process
NOTE:

If the directory defined in the global core file path does not exist, it must be created manually.
The configuration file is /etc/coreadm.conf
Editing this file directly is not recommended; updates can instead be performed using the coreadm command.

# coreadm -- reads the entries of the file /etc/coreadm.conf and the configuration is displayed.
coreadm patterns:
%m = machine name (uname -m)
%n = system node name (uname -n)
%p = process ID
%t = decimal value of time (seconds since the Epoch)
%u = effective user ID
%z = name of the zone in which the process executed
%g = effective group ID
%f = executable file name
-d = disable
-e = enable
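The effect of these patterns can be illustrated by expanding a sample pattern by hand. The pattern string and the values below are invented for illustration; on a live system coreadm itself performs the expansion when a process dumps core:

```shell
# Hypothetical core-file name pattern and sample values
pattern='core.%f.%p.%n'
f=bash        # %f: executable file name (sample value)
p=1234        # %p: process ID (sample value)
n=sun10       # %n: system node name (sample value)
printf '%s\n' "$pattern" | sed -e "s/%f/$f/" -e "s/%p/$p/" -e "s/%n/$n/"
# -> core.bash.1234.sun10
```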
# coreadm option argument
MISCELLANEOUS COMMANDS:
Troubleshooting information will be available at
# cat /lib/svc/share/README
To remount a read-only file system as read/write:
# mount -o rw,remount /
To view the release of the operating system:
# cat /etc/release
# cat /var/sadm/softinfo/INST_RELEASE
To assign the gateway:
# route add default <ip>
# route add default 192.168.0.150
To view the assigned gateway:
# netstat -rn
Routing Table: IPv4
Destination          Gateway              Flags  Ref    Use   Interface
-------------------- -------------------- ----- ----- ------ ---------
192.168.0.0          192.168.0.120        U        1     20   nge0
192.168.0.0          192.168.0.121        U        1      0   nge0:1
192.168.0.0          192.168.0.122        U        1      0   nge0:2
192.168.0.0          192.168.0.170        U        1      0   bge1
224.0.0.0            192.168.0.120        U        1      0   nge0
default              192.168.0.150        UG       1      0
127.0.0.1            127.0.0.1            UH       4   1110   lo0

U - indicates the route is up.
G - the route is to a gateway.



To gather the processor status:
# psrinfo
0 on-line since 08/18/2009 12:43:45
1 on-line since 08/18/2009 12:43:54
To bring the processor off-line:
# psradm -f <processor-id>
# psradm -f 1
# psrinfo
0 on-line since 08/18/2009 12:43:45
1 off-line since 08/18/2009 16:19:39

To bring back the processor on-line:


# psradm -n <processor-id>
# psradm -n 1
# psrinfo
0 on-line since 08/18/2009 12:43:45
1 on-line since 08/18/2009 16:21:50
ACL = ACCESS CONTROL LIST
# setfacl = to assign or modify ACL permissions on a file/directory
# getfacl = to view the ACL entries assigned to a file/directory
Note:
A file "new" is created and ACL is assigned to the file
# getfacl new
# getfacl -a new
Will display the ACL & other permissions to specified file
NOTE: Output of above commands remains same.
bash-3.00# getfacl new
# file: new
# owner: root
# group: root
user::rwx
user:ryan:rwx           #effective:rwx
group::rw-              #effective:rw-
group:baby:rw-          #effective:rw-
mask:rwx
other:r--

# getfacl -d new -- will display only the owner/group of the file specified
bash-3.00# getfacl -d new
# file: new
# owner: root
# group: root

Syntax:
# setfacl -s u::<perm>,g::<perm>,o:<perm>,m:<perm>,u:<name>:<perm>,g:name:<perm>
<name_of_file_dir>
where
u = user
g = group
o = other
m = ACL mask
Note:
u,g,o can be replaced with user, group,others respectively
m can be replaced with mask



Here the first u and g refer to the owner of the file and the group the file/dir belongs to.
for eg:
# setfacl -s u::rwx,g::rw-,o:r--,m:rwx,u:ryan:rwx,g:baby:rw- new
# getfacl new
# file: new
# owner: root
# group: root
user::rwx
user:ryan:rwx           #effective:rwx
group::rw-              #effective:rw-
group:baby:rw-          #effective:rw-
mask:rwx
other:r--

# setfacl -m u::rwx,g::rw-,o:r--,m:rwx,u:anup:rwx,g:baby:rw- new
-m = to modify
bash-3.00# getfacl new
# file: new
# owner: root
# group: root
user::rwx
user:ryan:rwx           #effective:rwx
user:anup:rwx           #effective:rwx
group::rw-              #effective:rw-
group:baby:rw-          #effective:rw-
mask:rwx
other:r--

To copy the ACL entries of one file/dir to another file/dir:


# getfacl new | setfacl -f - old
# getfacl old

NFS - NETWORK FILE SYSTEM

- Comes under the distributed file systems.
- Enables computers of different architectures, running different operating systems, to share files.
- Works in a heterogeneous environment (for example, it can integrate with Linux).

Advantages of NFS:
Allows multiple computers to use the same files, all users on the network can access the same
data (based on the permission).
Reduces storage costs by sharing applications on computers instead of allocating local disk
space for each user
Provides data reliability & consistency
Reduces system administration activity
Note:
1. In Solaris-10 NFS version 4 is used by default.



2. Version-related checks are applied whenever a client host attempts to access a server's file share.
3. NFSv4 provides firewall support since it uses a well-known port (2049).
NFS server files:
1. /etc/dfs/dfstab
- lists the local resources to be shared permanently at boot time
- file is editable by the root user
Output: ( Along with manually added shares)
bash-3.00# cat /etc/dfs/dfstab
# Place share(1M) commands here for automatic execution
# on entering init state 3.
#
# Issue the command 'svcadm enable network/nfs/server' to
# run the NFS daemon processes and the share commands, after adding
# the very first entry to this file.
#
# share [-F fstype] [ -o options] [-d "<text>"] <pathname> [resource]
# .e.g,
# share -F nfs -o rw=engineering -d "home dirs" /export/home2
share -F nfs -o rw /export/home
share -F nfs /share
share -F nfs -o ro /nfs/share_test
share -F nfs -o rw=natra,ro=solaris -d "test" /source/open
share -F nfs -o rw=natra,ro=192.168.0.0/32 /unix_share

2. /etc/dfs/sharetab
- Not recommended to edit
- File will be updated through "share" , "shareall" , "unshare", "unshareall" commands
- lists the locally and currently shared resources in the system
Output: (With manually edited entries)
bash-3.00# cat /etc/dfs/sharetab
/Desktop/ppt - nfs rw
/export/home - nfs rw
/share - nfs rw
/nfs/share_test - nfs ro
/source/open - nfs rw=natra,ro=solaris test
/unix_share - nfs rw=natra,ro=192.168.0.0/32

3. /etc/dfs/fstypes
- lists the default file system types for remote file systems.
Output:
bash-3.00# cat /etc/dfs/fstypes
nfs NFS Utilities
autofs AUTOFS Utilities
cachefs CACHEFS Utilities

Here,
nfs - used to share the resources across the network
autofs - used to mount the shared resource at client side on demand
cachefs - used to sync the updates performed to the shared resources.
(This is responsible for maintaining the reliability & consistency)
4. /etc/rmtab
- lists file systems remotely mounted by NFS clients.
- do not edit this file
- contains a table of file systems remotely mounted by NFS clients



- after a client successfully completes an NFS mount request, the mountd daemon on the server makes
an entry in the /etc/rmtab file
- the file also contains a line entry for each remotely mounted directory that has been successfully
unmounted, except that the mountd daemon replaces the first character in the entry with a (#)
character.
Output:
bash-3.00# cat /etc/rmtab
solaris:/nfs/share_test
5. /etc/nfs/nfslog.conf
- lists information defining the location of configuration logs used for NFS server logging
Output:
bash-3.00# cat /etc/nfs/nfslog.conf
#ident "@(#)nfslog.conf 1.5 99/02/21 SMI"
#
# Copyright (c) 1999 by Sun Microsystems, Inc.
# All rights reserved.
#
# NFS server log configuration file.
#
# <tag> [ defaultdir=<dir_path> ] \
# [ log=<logfile_path> ] [ fhtable=<table_path> ] \
# [ buffer=<bufferfile_path> ] [ logformat=basic|extended ]
#
global defaultdir=/var/nfs \
log=nfslog fhtable=fhtable buffer=nfslog_workbuffer

6. /etc/default/nfslogd
- list configuration information describing the behaviour of the nfslogd daemon for NFS v2 and v3.
Output:
bash-3.00# cat /etc/default/nfslogd

#
#ident "@(#)nfslogd.dfl 1.8 99/02/27 SMI"
#
# Copyright (c) 1999 by Sun Microsystems, Inc.
# All rights reserved.
#
# Specify the maximum number of logs to preserve.
#
# MAX_LOGS_PRESERVE=10
# Minimum size buffer should reach before processing.
#
# MIN_PROCESSING_SIZE=524288
# Number of seconds the daemon should sleep waiting for more work.
#
# IDLE_TIME=300
# CYCLE_FREQUENCY specifies the frequency (in hours) with which the
# log buffers should be cycled.
#
# CYCLE_FREQUENCY=24
# Use UMASK for the creation of logs and file handle mapping tables.
#
# UMASK=0137

7. /etc/default/nfs
- contains parameter values for NFS protocols & NFS daemons.
Output: (Only selected parameters are displayed)
#NFSD_MAX_CONNECTIONS=
NFSD_LISTEN_BACKLOG=32

#NFS_CLIENT_VERSMIN=2

8. /etc/nfssec.conf
- to enable the necessary security mode.
- can be performed through # nfssec
Output:
# cat /etc/nfssec.conf
#
#ident  "@(#)nfssec.conf        1.11    01/09/30 SMI"
#
# The NFS Security Service Configuration File.
#
# Each entry is of the form:
#
#       <NFS_security_mode_name> <NFS_security_mode_number> \
#       <GSS_mechanism_name> <GSS_quality_of_protection> <GSS_services>
#
# The "-" in <GSS_mechanism_name> signifies that this is not a GSS mechanism.
# A string entry in <GSS_mechanism_name> is required for using RPCSEC_GSS
# services. <GSS_quality_of_protection> and <GSS_services> are optional.
# White space is not an acceptable value.
#
# default security mode is defined at the end. It should be one of
# the flavor numbers defined above it.
#
none    0       -                               # AUTH_NONE
sys     1       -                               # AUTH_SYS
dh      3       -                               # AUTH_DH
#
# Uncomment the following lines to use Kerberos V5 with NFS
#
#krb5   390003  kerberos_v5     default -       # RPCSEC_GSS
#krb5i  390004  kerberos_v5     default integrity       # RPCSEC_GSS
#krb5p  390005  kerberos_v5     default privacy # RPCSEC_GSS
default 1                                       # default is AUTH_SYS

Note:
1. If the svc:/network/nfs/server service does not find any 'share' commands in the /etc/dfs/dfstab
file, it does not start the NFS server daemons.
2. The features provided by the mountd and lockd daemons are integrated into the NFSv4 protocol.
3. In NFSv2 and NFSv3, the mount protocol is implemented by the separate mountd daemon, which
did not use an assigned, well-known port number, making it very hard to use NFS through a firewall.
4. The nfsd and mountd daemons are started if there is an uncommented share statement in the
system's /etc/dfs/dfstab file.
5. Manually create the /var/nfs/public directory before starting NFS server logging. (Please refer to
the file /etc/nfs/nfslog.conf)
To start/stop the nfs-server:
Solaris-10:
To start/enable:
bash# svcadm -v enable nfs/server
svc:/network/nfs/server:default enabled.
To stop/disable:
bash# svcadm -v disable nfs/server
svc:/network/nfs/server:default disabled.



Earlier versions of Solaris:
/etc/init.d/nfs.server start - to start the service
/etc/init.d/nfs.server stop - to stop the service
NFS server side daemons:
1. statd
2. lockd
3. mountd
4. nfsmapid
5. nfslogd
NFS client side daemons:
1. statd - works with the lockd daemon to provide crash recovery functions for the lock manager
2. lockd - supports record-locking operations on NFS files
3. nfs4cbd - NFSv4 callback daemon
Note: the statd and lockd daemons run on both server and client.
Daemons and their purposes:
1. mountd:
- NOT available in NFSv4
- Available in NFSv2 and NFSv3
- the mountd functionality is integrated into the NFSv4 protocol by default
- handles file system mount requests from remote systems and provides access control
- started by: svc:/network/nfs/server service.
Steps involved:
1. mountd daemon checks the /etc/dfs/sharetab file to determine whether a particular file or
directory is shared and whether the requesting client has permission to access the shared resources.
2. when an NFS client issues an NFS mount request, the client's mount command contacts the
mountd daemon on the server. The mountd daemon provides the service.
2. nfsd daemon:
- handles client file system requests
- started by: svc:/network/nfs/server
- only root user can start the nfsd daemon
- when a client process attempts to access a remote file resource, the nfsd daemon on NFS server
receives the request and then performs the requested operation.
3. statd daemon:
- works with the lockd daemon to provide crash recovery functions for the lock manager
- the server's statd daemon tracks the clients that are holding locks on an NFS server. When the NFS
server reboots after a crash, the statd daemon on the server contacts the statd daemon on each client,
which informs the lockd daemon to reclaim any locks on the server.
- not used in NFSv4
- started by: svc:/network/nfs/status service
4. lockd daemon:
- integrated with NFSv4
- supports record-locking operations on NFS files
- started by: svc:/network/nfs/lockmgr
5. nfslogd daemon:



provides operational logging for NFSv2 and NFSv3
NFS logging is enabled, when the share is made available
All FS where logging is enabled, NFS kernel module records all operations in a buffer file
operations are performed based on the config file /etc/default/nfslogd
started by: svc:/network/nfs/server service

6. nfsmapid:
- implemented in NFSv4
- maps the owner and group identification that both the NFSv4 client & server use
- started by: svc:/network/nfs/mapid
- no interface to the daemon, but parameters can be assigned in the file /etc/default/nfs
Commands:
# share
- makes a local directory on an NFS server available for mounting
- also displays the contents of the file /etc/dfs/sharetab
# share -- displays the shared resources on the local system
Output:
bash-3.00# share
-    /export/home   rw   ""
-    /share   rw   ""
-    /nfs/share_test   ro   ""
-    /source/open   rw=natra,ro=solaris   "test"
-    /unix_share   rw=natra,ro=192.168.0.0/32   ""

To share the resources using # share command:


Note: Sharing done using the # share command will not persist across system reboots.
# share -F <file_sys> <directory>
- will share the specified directory without any Access list to all the clients in the network.
- will update the file /etc/dfs/sharetab
For eg:
# share -F nfs /data_share
Output:
# mkdir /data_share
# share -F nfs /data_share
# cat /etc/dfs/sharetab
/export/home    -   nfs   rw
/share          -   nfs   rw
/nfs/share_test -   nfs   ro
/source/open    -   nfs   rw=natra,ro=solaris   test
/unix_share     -   nfs   rw=natra,ro=192.168.0.0/32
/data_share     -   nfs   rw
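The sharetab columns (path, resource, fstype, options, description) can be pulled apart with standard tools. A minimal sketch, using invented sample data in /tmp instead of the real /etc/dfs/sharetab (which only exists on an NFS server):

```shell
# Create a stand-in for /etc/dfs/sharetab with two space-separated entries.
cat > /tmp/sharetab.sample <<'EOF'
/export/home - nfs rw
/data_share - nfs rw=solaris,ro=fire1
EOF

# Column 1 is the shared path, column 4 the share options.
awk '{ printf "%s -> %s\n", $1, $4 }' /tmp/sharetab.sample
```

Running this prints each shared path with its options, e.g. `/export/home -> rw`.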

Options-1:
# share -F nfs -d "Comment-description" /data_share
here
-F = specifies the file system
-d = description or comment about the shared directory
Output:
# share -F nfs -d "Comment-description" /data_share/
# share
-    /export/home   rw   ""
-    /share   rw   ""
-    /nfs/share_test   ro   ""
-    /source/open   rw=natra,ro=solaris   "test"
-    /unix_share   rw=natra,ro=192.168.0.0/32   ""
-    /data_share   rw   "Comment-description"

Options-2:
# share -F nfs -d "comment" -o rw=solaris,ro=fire2 /data_share
here
-o = specifies the option
ro = read only to the listed clients
rw = read write to the listed clients
# share -F nfs -d "comment" -o rw=solaris,ro=fire2:192.168.0.14 /data_share
Note: Client names or IP addresses can be given in the access list, separated by : (colons)
Output:
# share -F nfs -d "comment" -o rw=solaris,ro=fire1 /data_share/
# share
-    /export/home   rw   ""
-    /share   rw   ""
-    /nfs/share_test   ro   ""
-    /source/open   rw=natra,ro=solaris   "test"
-    /unix_share   rw=natra,ro=192.168.0.0/32   ""
-    /data_share   rw=solaris,ro=fire1   "comment"

# share -F nfs -d "comment" -o rw=solaris,ro=fire1:192.168.0.14 /data_share


# share
-    /export/home   rw   ""
-    /share   rw   ""
-    /nfs/share_test   ro   ""
-    /source/open   rw=natra,ro=solaris   "test"
-    /unix_share   rw=natra,ro=192.168.0.0/32   ""
-    /data_share   rw=solaris,ro=fire1:192.168.0.14   "comment"

Option-3:
# share -F nfs -d "comment" -o root=solaris,rw=fire2,ro=192.168.0.14 /data_share
Output:
# share -F nfs -d "comment" -o root=solaris,rw=fire2,ro=192.168.0.14 /data_share
# share
- /export/home rw ""
- /share rw ""
- /nfs/share_test ro ""
- /source/open rw=natra,ro=solaris "test"
- /unix_share rw=natra,ro=192.168.0.0/32 ""
- /data_share root=solaris,rw=fire2,ro=192.168.0.14 "comment"
here
root=<client_name_or_ip> (e.g. root=solaris)
- grants the root user on the specified client system or systems the ability to perform superuser
privilege requests on the shared resource
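The rw=/ro= access lists shown above are colon-separated. A small sketch of how membership in such a list can be checked with plain string handling (the client and list values here are invented examples, no NFS involved):

```shell
# Return success if the client ($1) appears in the colon-separated
# access list ($2), e.g. the rw=/ro= lists from the share command.
in_access_list() {
    echo "$2" | tr ':' '\n' | grep -qx "$1"
}

in_access_list fire2 "solaris:fire2:192.168.0.14" && echo "fire2 allowed"
in_access_list natra "solaris:fire2" || echo "natra not in list"
```

This prints "fire2 allowed" and "natra not in list".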
Option-4:
# share -F nfs -d "comment" -o rw=@192.168.0.* /data_share
Output:
# share -F nfs -d "comment" -o rw=@192.168.0.* /data_share/
# share
-    /export/home   rw   ""
-    /share   rw   ""
-    /nfs/share_test   ro   ""
-    /source/open   rw=natra,ro=solaris   "test"
-    /unix_share   rw=natra,ro=192.168.0.0/32   ""
-    /data_share   rw=@192.168.0.*   "comment"

This shares the resource to the specified network.


Option-5:
# share -F nfs -d "comment" -o ro=aita.com /data_share
Output:
# share -F nfs -d "comment" -o ro=aita.com /data_share/
# share
-    /export/home   rw   ""
-    /share   rw   ""
-    /nfs/share_test   ro   ""
-    /source/open   rw=natra,ro=solaris   "test"
-    /unix_share   rw=natra,ro=192.168.0.0/32   ""
-    /data_share   ro=aita.com   "comment"

To share the resource only for that domain.


2. # unshare
- makes a previously available directory unavailable for client side mount operations
# unshare /data_share
Output:
# share
-    /export/home   rw   ""
-    /share   rw   ""
-    /nfs/share_test   ro   ""
-    /source/open   rw=natra,ro=solaris   "test"
-    /unix_share   rw=natra,ro=192.168.0.0/32   ""
-    /data_share   rw   "Comment-description"

# unshare /data_share/
# share
-    /export/home   rw   ""
-    /share   rw   ""
-    /nfs/share_test   ro   ""
-    /source/open   rw=natra,ro=solaris   "test"
-    /unix_share   rw=natra,ro=192.168.0.0/32   ""

3. # shareall
- reads & executes the share statements from the file /etc/dfs/dfstab
NOTE: All the above discussed share options can be edited to the file /etc/dfs/dfstab and the syntax
remains same.
NOTE: A few entries from /etc/dfs/dfstab:
share -F nfs -o rw /export/home
share -F nfs /share
share -F nfs -o ro /nfs/share_test
share -F nfs -o rw=natra,ro=solaris -d "test" /source/open
share -F nfs -o rw=natra,ro=192.168.0.0/32 /unix_share
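Conceptually, shareall just re-executes every non-comment, non-blank share line from /etc/dfs/dfstab. A sketch of that loop over an invented sample file (echoing instead of really sharing, so it runs anywhere):

```shell
# Stand-in for /etc/dfs/dfstab; comments and blank lines are ignored,
# everything else would be executed as a shell command by shareall.
cat > /tmp/dfstab.sample <<'EOF'
# comment lines and blanks are ignored
share -F nfs -o ro /nfs/share_test

share -F nfs -o rw /export/home
EOF

grep -v '^#' /tmp/dfstab.sample | grep -v '^ *$' | while read -r line; do
    echo "would run: $line"
done
```

On a real server the loop body would be `eval "$line"` (or simply `shareall`); here we only print the two share commands.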

4. # unshareall
- makes previously shared resources unavailable
Output:
# share
-    /export/home   rw   ""
-    /share   rw   ""
-    /nfs/share_test   ro   ""
-    /source/open   rw=natra,ro=solaris   "test"
-    /unix_share   rw=natra,ro=192.168.0.0/32   ""
-    /data_share   rw   "Comment-description"

# unshareall
# share
#
5. # dfshares
- lists available shared resources from the remote/local NFS server
# dfshares 192.168.0.252
Output:
# dfshares 192.168.0.252
RESOURCE                      SERVER          ACCESS    TRANSPORT
192.168.0.252:/export/home    192.168.0.252   -         -

# dfmounts
- displays a list of NFS server directories that are currently mounted at the clients
- reads the entry from the file /etc/rmtab
At client side:
To make the resource permanently available edit the file /etc/vfstab.
eg entry from the client:
fire2:/nfs/share_test  -  /mnt/point3  nfs  -  yes  ro,nosuid
fire2:/share           -  /mnt/point1  nfs  -  yes  -
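The vfstab fields are: device-to-mount, device-to-fsck, mount-point, fstype, fsck-pass, mount-at-boot, options. A sketch that turns one such NFS entry (the sample entry above) into the equivalent mount command, using plain word splitting:

```shell
# One NFS vfstab entry; "-" marks fields that do not apply to NFS.
entry="fire2:/share - /mnt/point1 nfs - yes ro,nosuid"

# Word-split the entry into positional parameters $1..$7.
set -- $entry

# $4 = fstype, $7 = options, $1 = remote resource, $3 = mount point.
echo "mount -F $4 -o $7 $1 $3"
```

This prints `mount -F nfs -o ro,nosuid fire2:/share /mnt/point1`, i.e. the command mountall would effectively issue for this line.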

AUTOFS
- It's a client-side service that makes shared resources available at the client on demand.
- Autofs is initialized by the automount command;
the /lib/svc/method/svc-autofs script starts the automountd daemon.
NOTE:
The automountd daemon is completely independent of the automount command. Because of this
separation, we can add/modify/delete map information without having to stop and start the
automountd daemon process.
Autofs types:
1. Master map
2. Direct map
3. Indirect map
4. Special map
Master map:
1. Lists the other maps used for establishing the autofs file system.
2. The automount command reads this map at boot time.
/etc/auto_master is the configuration file which has the list of directly & indirectly automounted
resources.
Output: (With default entry to the file /etc/auto_master)

# Copyright 2003 Sun Microsystems, Inc. All rights reserved.


# Use is subject to license terms.
#
# ident "@(#)auto_master 1.8 03/04/28 SMI"
#
# Master map for automounter
#
+auto_master
/net     -hosts      -nosuid,browse
/home    auto_home   -nobrowse

Direct map:
Lists the mount points as ABSOLUTE PATH names.
This map explicitly indicates the mount point on the client.
Usually /usr/share/man directory is a good example for direct mapping.
/- mount point is a pointer that informs the automount facility that full path names are defined in the
file specified by MAP_NAME (for eg: here its /etc/direct_map).
NOTE:
1. /- is NOT an entry in the default master map file (/etc/auto_master)
2. The automount facility by default automatically searches for all map-related files in the /etc directory.
Output: ( After adding a manual entry to the file)

# Copyright 2003 Sun Microsystems, Inc. All rights reserved.


# Use is subject to license terms.
#
# ident "@(#)auto_master 1.8 03/04/28 SMI"
#
# Master map for automounter
#
+auto_master
/net     -hosts      -nosuid,browse
/home    auto_home   -nobrowse
/-       direct
/-       /direct

Note-1:
Here
1. "direct" is the file name; the file has to reside under the /etc directory. This is mandatory.
This file will have the absolute path of the shared resource & mount point at the client.
2. This file has to be manually created.
3. The name of the file can be anything.
# cat /etc/direct
/usr/share/man 192.168.0.150:/usr/share/man
Note-2:
Here
1. "/direct" is the file name that is residing under / directory.
If the direct mapping file is NOT residing under the /etc dir, the full path of the file has to be specified.
2. This file will have the absolute path of the shared resources & mount point at the client.
3. Again the name of the file can be anything
The entry of the file /direct:
# cat /direct
/usr/share/man 192.168.0.150:/usr/share/man
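In a direct map every key (first field) must be an absolute path. A quick sanity check for a direct-map file can be written with awk; the file name and the bad second line here are invented for illustration:

```shell
# Stand-in for a direct map file such as /etc/direct; the second
# line is deliberately broken (relative key) to show the check.
cat > /tmp/direct.sample <<'EOF'
/usr/share/man 192.168.0.150:/usr/share/man
share/man 192.168.0.150:/usr/share/man
EOF

# Flag any map entry whose key does not start with "/".
awk '$1 !~ /^\// { print "bad key on line " NR ": " $1 }' /tmp/direct.sample
```

This reports only line 2, since `/usr/share/man` is a valid absolute key.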
Indirect map:
Are simplest and most useful autofs.


Lists the mount points as relative path names. This map uses a relative path to establish the
mount point on the client.
/export/home - is a good example for indirect map while implementing NIS.
An indirect map uses a key substitute value to establish the association between a mount point on the
client and a directory on the server. Indirect maps are useful for accessing specific
file systems, such as home directories, from anywhere in the network.
Special map:
Provides access to NFS service by using their host names.
By default special maps are enabled.
/net directory is a good example for special map.
This directory has the list of the hosts connected in the network.
If we open the directory bearing the name of a host, it displays the shared resources of that
host. It's similar to Network Neighborhood in Windows.
# cd /net
# ls
fire1 localhost loghost natra solaris sunfire2
# cd fire1
# ls
usr
NOTE:
+ symbol at the beginning of the
+auto_master line in the /etc/auto_master file directs the automountd daemon to look at the
NIS, NIS+ or LDAP databases before it reads the rest of the map.
If this line is commented out, only the local files are searched unless the /etc/nsswitch.conf files
specifies that NIS, NIS+ or LDAP should be searched.
auto_home
This map provides the mechanism that allows users to access their centrally located $HOME directories.
-hosts map
Provides access to all resources shared by NFS servers. The resources are mounted below the
/net/hostname directory or, if only the server's IP address is known, below the /net/ipaddress
directory. The server does not have to be listed in the hosts database for this mechanism to work.
To view the status of the autofs:
# svcs autofs
online 11:51:59 svc:/system/filesystem/autofs:default
To start/stop the autofs:
# svcadm enable svc:/system/filesystem/autofs:default - to start
# svcadm disable svc:/system/filesystem/autofs:default - to stop
EG: for Direct Maps:

SERVER SIDE configuration:


For sharing the man pages from the server 192.168.1.51 to clients.


1. Edit the file /etc/dfs/dfstab
share -F nfs -o ro /usr/share/man
2. Save the file
CLIENT SIDE configuration:
1. Edit the file /etc/auto_master
/- direct_map
2. Save the file
3. Create a file /etc/direct_map file with the following contents
edit:
# vi /etc/direct_map
/usr/share/man 192.168.1.51:/usr/share/man
4. Save the file
5. Make sure autofs service is running
# svcs -a | grep autofs
Start the service if it's offline.
# svcadm enable autofs
6. Then automount the shared resources.
# automount -v
here
-v = provides the detailed information about the automounted resources.
# automount -v
automount: /usr/share/man mounted
automount: no unmounts
SYSTEM MESSAGING
/etc/syslog.conf file is responsible for sending or redirecting the messages to the log file or console or
user or loghost.
Note:
1. By default every system will be its own loghost
2. Before going to do any configuration, make sure whether the packages related to the services are
completely installed along with its dependencies.
3. As a precaution, keep a backup of the default configuration file.
Message sources:
a. Daemon
b. User process
c. Kernel
d. logger (the only command to generate test messages, used to check the
configuration performed in the file /etc/syslog.conf)
The above four can generate messages to files, a loghost, a user, or the console.
Level of messages:
emerg - 0 (first priority) = Panic conditions that would normally be broadcasted to all users.
alert - 1 = Conditions that should be corrected immediately, such as a corrupted system database.
crit - 2 = Warnings about critical conditions, such as hard device errors
err - 3 = Other errors
warning - 4 = Warning messages
notice - 5 = Conditions that are not error conditions but that might require special handling, such as
failed login attempt. A failed login attempt is considered a notice and not an error.
info - 6 = Information messages


debug - 7 = Messages that are normally used only when debugging a program.
none - 8 = Does not send messages from the indicated facility to the selected file.
NOTE:
When we specify a syslog level, it means that the specified level and all higher levels. For eg, if we
specify err level, then it includes crit, alert and emerg level too.
To compile:
# /usr/ccs/bin/m4 /etc/syslog.conf
Solaris-10:
# svcadm enable system/system-log
Starts the syslogd daemon
# svcadm disable system/system-log
Stops the syslogd deamon
# svcadm refresh system/system-log
Makes the Operating System to re-read the configuration file /etc/syslog.conf
This command is must if the changes are done to the configuration file.
Solaris-9:
# /etc/init.d/syslog start
# /etc/init.d/syslog stop
Output:(With default entry to the file)

# Copyright (c) 1991-1998 by Sun Microsystems, Inc.


# All rights reserved.
#
# syslog configuration file.
#
# This file is processed by m4 so be careful to quote (`') names
# that match m4 reserved words. Also, within ifdef's, arguments
# containing commas must be quoted.
#
*.err;kern.notice;auth.notice /dev/sysmsg
*.err;kern.debug;daemon.notice;mail.crit /var/adm/messages
*.alert;kern.err;daemon.err operator
*.alert root
*.emerg *
# if a non-loghost machine chooses to have authentication messages
# sent to the loghost machine, un-comment out the following line:
#auth.notice ifdef(`LOGHOST', /var/log/authlog, @fire1)
mail.debug ifdef(`LOGHOST', /var/log/syslog, @fire1)
#
# non-loghost machines will use the following lines to cause "user"
# log messages to be logged locally.
#
ifdef(`LOGHOST', ,
user.err /dev/sysmsg
user.err /var/adm/messages
user.alert `root, operator'
user.emerg *
)

Eg entry from the file /etc/syslog.conf


*.err;kern.debug;daemon.notice;mail.crit        /var/adm/messages
   (A)      (B)        (C)         (D)                 (E)
where
A = *.err means all sources (user process, kernel, daemon, logger) generating error messages
B = kern.debug means only the kernel generating debug messages
C = daemon.notice means only daemons generating notice messages
D = mail.crit means only the mail facility generating critical messages


E = /var/adm/messages, the file to which all the above messages are logged
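A syslog.conf line is "selector action" (in the real file the separator must be a tab), and the selector is a semicolon-separated list of facility.level pairs. A sketch that splits the example line apart with shell parameter expansion:

```shell
# The example line from above (a space stands in for the tab separator).
line='*.err;kern.debug;daemon.notice;mail.crit /var/adm/messages'

selector=${line%% *}   # everything before the first space
action=${line##* }     # everything after the last space

echo "action: $action"
# One facility.level pair per line:
echo "$selector" | tr ';' '\n'
```

This prints the action `/var/adm/messages` followed by the four facility.level pairs.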
Note:
To Test the logger functioning:
1. Edit the file /etc/syslog.conf
*.notice /var/log/logs-test
2. Save the file
3. Create a empty file under /var/log
# touch /var/log/logs-test
4. Refresh the system-log
# svcadm refresh system-log
5. To test the configuration
eg: # logger -p local0.notice "Notice level test message"
eg: # logger -p local0.crit "Crit level test message"
Note:
If the same message is generated several times in succession, each copy is not logged to the
specified file; syslogd records a "last message repeated N times" summary instead.
Now customizing the file /etc/syslog.conf
Option-1: By editing the above file, with the following line
*.err;kern.debug;daemon.notice;mail.crit /var/adm/test_log
By this entry, we understand that, the log will be sent to the file /var/adm/test_log
Note:
1. Make sure that the file /var/adm/test_log exists.
2. Compile the file
3. Refresh the service
Option-2: By editing the above file, with the following line
*.err;kern.debug;daemon.notice;mail.crit *
Option-3: By editing the above file, with the following line
*.err;kern.debug;daemon.notice;mail.crit ravim
By this entry, the messages will be sent only to the user ravim (as specified)
Example entry of the file /var/adm/messages:
Sep 11 04:08:52 fire3 in.routed[185]: [ID 300549 daemon.warning] interface nge0 to 10.8.0.20
restored
Here
A = Date & time when the message was generated (Sep 11 04:08:52)
B = System name (here, the local system name: fire3)
C = Process name/PID number (in.routed[185])
D = Message ID and facility.level information ([ID 300549 daemon.warning])
E = Incoming request (here, through the interface nge0)
F = PPID number (NOTE: NOT seen in the above output line)
G = IP address (10.8.0.20)
H = Port number (NOTE: NOT seen in the above output line)
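The labeled fields can be pulled out of such a line mechanically, since the first few columns are fixed. A sketch with awk over the sample line above:

```shell
# Sample /var/adm/messages line from the text above.
line='Sep 11 04:08:52 fire3 in.routed[185]: [ID 300549 daemon.warning] interface nge0 to 10.8.0.20 restored'

echo "$line" | awk '{
    printf "date/time: %s %s %s\n", $1, $2, $3   # A
    printf "host:      %s\n", $4                 # B
    printf "process:   %s\n", $5                 # C (name[PID]:)
    printf "msgid:     %s %s %s\n", $6, $7, $8   # D (ID, number, facility.level)
}'
```

Everything after field 8 is the free-form message text (E/G live inside it).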
# /usr/sbin/syslogd -d
- Used to debug the configuration file
- This command reads the configuration file
- NOTE: Only the root user can run this command in multi-user mode


Output truncated:
# /usr/sbin/syslogd -d
main(1): Started at time Fri Sep 11 04:19:46 2009
hnc_init(1): hostname cache configured 2037 entry ttl:1200
getnets(1): found 1 addresses, they are: 0.0.0.0.2.2
amiloghost(1): testing 192.168.0.100.2.2
cfline(1): (*.err;kern.notice;auth.notice /dev/sysmsg)
cfline(1): (*.err;kern.debug;daemon.notice;mail.crit /var/adm/messages)
cfline(1): (*.err;kern.debug;daemon.notice;mail.crit /Desktop/log_file)
cfline(1): (*.err;kern.debug;daemon.notice;mail.crit /Desktop/log_test)
cfline(1): (*.err;kern.debug;daemon.notice;mail.crit india )
cfline(1): (*.err;kern.debug;daemon.notice;mail.crit /dev/console )
logerror(1): syslogd: /dev/console : No such file or directory
logerror_to_console(1): syslogd: /dev/console : No such file or directory
cfline(1): (*.alert;kern.err;daemon.err operator)
cfline(1): (*.alert root)

RBAC - ROLE BASED ACESS CONTROL:


RBAC is an alternative method to assign special privileges to a non-root user as an authorization,
a role, or a profile.
NOTE:
In Linux, a similar implementation is known as sudo.
Configuration files:
/etc/user_attr:
- Extended user attributes Database
- Associates users and roles with authorizations and profiles
NOTE:
When creating a new user with no rights profiles, authorizations or roles, nothing is added to the file.
/etc/security/auth_attr:
- Authorization attributes Database
- Defines authorizations and their attributes and identifies the associated help file
/etc/security/prof_attr:
- Rights profile attributes database
- Defines profiles, lists the profile's assigned authorizations, and identifies the associated help
/etc/security/exec_attr:
- Profile attributes database
- Defines the privileged operations assigned to a profile
Roles:
- Will have an entry to the file /etc/passwd and /etc/shadow
- Similar to user account
- Collection of profiles
Profiles:
- Will have a dedicated shell
- Profile shells will be assigned
- The Bourne shell & Korn shell have profile shell variants
- pfsh (Bourne profile shell), pfksh (Korn profile shell)
- Is collection of number of commands.


NOTE:
1. If the user/role changes from the specified profile shell then they are not permitted to execute the
authorized commands
2. It's not possible to login to the system directly using role.
A role can only be used by switching the user to the role with "su" command.
3. We can also set up the "root" user as a role through a manual process. This approach prevents
users from logging in directly as the root user. Therefore, they must log in as themselves first, and
then use the su command to assume the role.
We can perform RBAC for a user in four ways:
1. Directly adding the authorization to the user account
2. Creating a profile, and adding the profile to the user account
3. Creating a profile, adding it to role, then adding the role to the user account.
4. Adding authorization to role and adding the role to an user
I. Adding authorization to an existing user account:
# useradd -m -d /export/home/ryan -s /usr/bin/pfsh \
-A solaris.admin.usermgr.pswd,solaris.system.shutdown,solaris.admin.fsmgr.write ryan
# passwd ryan
Here, we have added existing authorizations to the user account using the -A option of the #useradd
command.
NOTE: The shell assigned is profile shell.
# su ryan
sunfire1% echo $SHELL
/usr/bin/pfsh
sunfire1% auths
solaris.admin.usermgr.pswd,solaris.system.shutdown,solaris.admin.fsmgr.write,solaris.device.cdrw,solaris.profmgr.read,solaris.jobs.users,solaris.mail.mailq,solaris.admin.usermgr.read,solaris.admin.logsvc.read,solaris.admin.fsmgr.read,solaris.admin.serialmgr.read,solaris.admin.diskmgr.read,solaris.admin.procmgr.user,solaris.compsys.read,solaris.admin.printer.read,solaris.admin.prodreg.read,solaris.admin.dcmgr.read,solaris.snmp.read,solaris.project.read,solaris.admin.patchmgr.read,solaris.network.hosts.read,solaris.admin.volmgr.read
sunfire1% profiles
Basic Solaris User
All
sunfire1% profiles -l
All: *
sunfire1% roles
No roles
# roles -- returns information about which roles the user is authorized to assume
# profiles -- returns information about which profiles the user is authorized to use
# profiles -l -- returns detailed info about the permitted commands that can be executed by the user
# auths -- returns information about the authorizations mapped to the user account


When a user is created with additional information like authorizations, profiles, or roles, the # useradd
command updates the entry in the file
/etc/user_attr
Output: (Relevant to the topic)
ryan::::type=normal;auths=solaris.admin.usermgr.pswd,solaris.system.shutdown,solaris.admin.fsmgr.write
NOTE:
We cannot see an entry to the file for a normal user.
II. Creating a profile and adding it to an existing user account:
WTD: (What To Do)
1. Determine the name of the profile
2. Determine what commands has to be added to the profile
3. Edit the file /etc/security/prof_attr accordingly
4. Edit the file /etc/security/exec_attr file by providing the list of commands to the profile
5. Map the profile to the user
HTD: (How To Do)
Example-1:
Profile name=testprofile
Commands added to the profile=shutdown,format,useradd,passwd
Step-1: Adding/Creating a profile
# vi /etc/security/prof_attr
testprofile:::This is a test profile to test RBAC
Here,
testprofile = Name of the profile
"This is a test profile to test RBAC" = Comment about the profile (Optional)
Step-2: Mapping the list of commands to the created profile
# vi /etc/security/exec_attr
testprofile:suser:cmd:::/usr/sbin/shutdown:uid=0
testprofile:suser:cmd:::/usr/sbin/format:uid=0
testprofile:suser:cmd:::/usr/sbin/useradd:uid=0
testprofile:suser:cmd:::/usr/bin/passwd:uid=0
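Because exec_attr is colon-separated (profile:policy:type:::path:attributes), the commands mapped to a profile can be listed with a one-line awk filter. A sketch over invented sample data standing in for /etc/security/exec_attr:

```shell
# Stand-in for /etc/security/exec_attr with two profiles.
cat > /tmp/exec_attr.sample <<'EOF'
testprofile:suser:cmd:::/usr/sbin/shutdown:uid=0
testprofile:suser:cmd:::/usr/sbin/format:uid=0
other:suser:cmd:::/usr/bin/ls:uid=0
EOF

# Field 1 is the profile name, field 6 the command path.
awk -F: '$1 == "testprofile" { print $6 }' /tmp/exec_attr.sample
```

This prints only the two commands belonging to "testprofile", much like what `profiles -l` shows for a user holding that profile.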
Step-3: Mapping the profile to the user account
# useradd -m -d /export/home/accel -s /usr/bin/pfksh -P testprofile accel
Here we have added the profile named "testprofile" to the user.
Output:
# su - accel
sunfire1% echo $SHELL
/usr/bin/pfksh
sunfire1% roles
No roles
sunfire1% profiles
testprofile
Basic Solaris User


All
sunfire1% profiles -l
testprofile:
/usr/sbin/shutdown uid=0
/usr/sbin/format uid=0
/usr/sbin/useradd uid=0
/usr/bin/passwd uid=0
All:
*
Example-2
Profile name: complete
List of commands added: Creating a profile with all root privileges
Step-1: Adding/Creating a profile
# vi /etc/security/prof_attr
complete:::This is to test the duplication of root profile
Here,
complete = Name of the profile
"This is to test the duplication of root profile" = Comment about the profile (Optional)
Step-2: Mapping the list of commands to the created profile
# vi /etc/security/exec_attr
complete:suser:cmd:::*:uid=0
Step-3: Mapping the user to the profile
# useradd -m -d /export/home/anup -s /usr/bin/pfsh -P complete anup
Output:
# su - anup
sunfire1# echo $USER
root
sunfire1# roles
No roles
sunfire1# profiles
Web Console Management
All
Basic Solaris User
sunfire1# profiles -l | more
Web Console Management:
/usr/share/webconsole/private/bin/smcwebstart uid=noaccess,
gid=noaccess,
privs=proc_audit
All:
*
NOTE:
1. The output of the commands
# profiles
# profiles -l
will be similar for the root user.
2. From the above output, we can also observe the change in the shell prompt of the user. Normally
the user's prompt is $, but since all privileges are given to the user, the prompt is #.
III. Creating a role, profile and mapping it to the user account.
WTD:
1. Determine the name of the user
2. Create the role
3. Assign the password to the role
Note:
a. A role should have a password.
b. Without a password it's not possible to assume that role.
4. Create a profile
5. Add the list of commands to the profile
6. Add the profile to the role
7. Add the role to the user
Note:
This method adds another layer of security by assigning a password to the role.
HTD:
Step-1: Create a role
# roleadd -m -d /export/home/policy -s /usr/bin/pfsh policy
1. This command will update the following files
a. /etc/passwd
b. /etc/shadow
c. /etc/user_attr
Output:
# roleadd -m -d /export/home/policy -s /usr/bin/pfsh policy
80 blocks
# passwd policy
New Password:
Re-enter new Password:
passwd: password successfully changed for policy
# grep policy /etc/passwd
policy:x:112:1::/export/home/policy:/usr/bin/pfsh
# grep policy /etc/shadow
policy:xXuxPLl/Wt13Q:14512::::::
# grep policy /etc/user_attr
policy::::type=role;profiles=All
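The user_attr format is also colon-separated (user::::attr=value;attr=value), with all the interesting attributes packed into field 5. A sketch that reads the attributes of one account out of invented sample data standing in for /etc/user_attr:

```shell
# Stand-in for /etc/user_attr: a role and a normal user that holds it.
cat > /tmp/user_attr.sample <<'EOF'
policy::::type=role;profiles=All
nokia::::type=normal;roles=policy
EOF

# Field 5 holds the semicolon-separated attribute list; print one per line.
awk -F: '$1 == "nokia" {
    n = split($5, attrs, ";")
    for (i = 1; i <= n; i++) print attrs[i]
}' /tmp/user_attr.sample
```

For the user "nokia" this prints `type=normal` and `roles=policy`, i.e. the same facts the `roles` command reports from inside the session.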
Step-2: Creating a profile
Note: To create a profile, refer to section II, Creating a profile.
Let's make use of the above existing profile.
For eg, let's take the profile "testprofile"
Step-3: Adding the profile to the role
# rolemod -P testprofile,All policy
Adds the profile named "testprofile" to the existing role "policy".
Now we can observe the changes to the file /etc/user_attr. Output:
policy::::type=role;profiles=testprofile,All


Step-4: Mapping the role to the user:
# useradd -m -d /export/home/nokia -R policy -s /bin/bash nokia
Adding a role to the user.
Output:
# useradd -m -d /export/home/nokia -R policy -s /bin/bash nokia
80 blocks
# passwd nokia
New Password:
Re-enter new Password:
passwd: password successfully changed for nokia
# su nokia
sunfire1% auths
solaris.device.cdrw,solaris.profmgr.read,solaris.jobs.users,solaris.mail.mailq,solaris.admin.usermgr.read,solaris.admin.logsvc.read,solaris.admin.fsmgr.read,solaris.admin.serialmgr.read,solaris.admin.diskmgr.read,solaris.admin.procmgr.user,solaris.compsys.read,solaris.admin.printer.read,solaris.admin.prodreg.read,solaris.admin.dcmgr.read,solaris.snmp.read,solaris.project.read,solaris.admin.patchmgr.read,solaris.network.hosts.read,solaris.admin.volmgr.read
sunfire1% profiles
Basic Solaris User
All
sunfire1% profiles -l
All:
*
sunfire1% roles
policy
sunfire1% su policy
Password:
sunfire1% profiles
testprofile
All
Basic Solaris User
sunfire1% profiles -l
testprofile:
/usr/sbin/shutdown uid=0
/usr/sbin/format uid=0
/usr/sbin/useradd uid=0
/usr/bin/passwd uid=0
All:
*
Note:
Authorized activities can be performed by the user only after switching to the role.
A role account CANNOT be logged into directly.
Output:
bash-3.00# su nokia
sunfire1% su policy
Password:


$ /usr/sbin/shutdown -g 180 -i 5
Shutdown started. Fri Sep 25 17:26:01 IST 2009
Broadcast Message from root (pts/3) on sunfire1 Fri Sep 25 17:26:01...
The system sunfire1 will be shut down in 3 minutes
NOTE:
The default auths assigned to a user are defined in the file /etc/security/policy.conf
# grep -i auths /etc/security/policy.conf
AUTHS_GRANTED=solaris.device.cdrw

NAMING SERVICE | NETWORK INFORMATION SERVICE (NIS)


NIS provides centralized user administration and works within a single LAN environment.
NIS has 3 components
a. NIS Master
b. NIS Slave
c. NIS Client
NIS Master Server:
1. The first system to be prepared in the domain
2. Has the source file
3. Has NIS maps which are built from the source files
4. Provides single point of control
5. Only one master server for a domain
6. Daemons (Runs on NIS Master Server):
a. ypserv
b. ypbind
c. ypxfrd
d. rpc.yppasswd
e. rpc.ypupdated
NIS Slave Server:
1. An optional system in the domain
2. Doesn't have source files for that domain
3. But has maps which are received from the master server
4. Provides load balancing when the master server is busy
5. Provides redundancy when the master server fails
6. Daemons (run on the NIS slave server):
a. ypserv
b. ypbind
NIS Client:
1. Doesn't have source files or maps
2. Binds to the slave server dynamically when the master server is either busy or down.
3. Daemons (run on the NIS client):
a. ypbind
DNS -> Domain Name System; works across the WAN
LDAP -> Lightweight Directory Access Protocol; works with other environments too
How NIS Works
1. When a user tries to log in to the system by issuing the user login name and password, the system
first checks the entries in the file /etc/nsswitch.conf -> the Name Service Switch configuration file.
It informs the system of the search preference, i.e. how user logins need to be looked up.
For eg: first with the NIS server, then with local files, and so on.
We are provided with number of templates for the naming service.
nsswitch.nis
nsswitch.nisplus
nsswitch.dns
nsswitch.ldap
2. After reading the entry in that file, the system moves on and reads the file /etc/hosts
3. Then it reaches the NIS server
4. It reads the NIS server's database to permit the user login. /etc/passwd, /etc/shadow and
related files are checked; if the issued login name exists in the NIS server's database, the
server responds positively.
5. The response is redirected to the client to authenticate the login.
NIS Server Configuration Steps:
1. # cp /etc/nsswitch.nis /etc/nsswitch.conf
2. # domainname rk.com
3. # domainname > /etc/defaultdomain
4. # cd /etc
5. # touch ethers bootparams locale timezone netgroup netmasks
6. # ypinit -m
7. # /usr/lib/netsvc/yp/ypstart
[Solaris 9, for Sol10 use svcadm enable nis/server ]
8. # /usr/lib/netsvc/yp/ypstop
[Solaris 9, for Sol10 use svcadm disable nis/server ]
9. # ypcat hosts
10. # ypcat user5 passwd
11. # ypwhich
NIS Client Configuration Steps:
1. # cp /etc/nsswitch.nis /etc/nsswitch.conf
2. # domainname rk.com
3. # domainname > /etc/defaultdomain
4. # ypinit -c
5. # /usr/lib/netsvc/yp/ypstart
6. # ypwhich -m
To Update User Account Modifications:
1. # cd /var/yp
2. # /usr/ccs/bin/make
# ypwhich -- Displays the name of the NIS Master server
# ypmatch -k <user_name> passwd
# ypmatch -k shivam passwd

Ravi Mishra

33

Sun Solaris System Admin Notes 2


Displays entry for user "shivam" in the passwd database
# ypcat hosts -- will display the hosts database
# ypinit -m -- Initiates the NIS master server
# ypinit -c -- Initiates the system as a client; when prompted for a list of servers, provide the server name.
NIS search status codes:
SUCCESS - requested entry was found
UNAVAIL - source was unavailable
NOTFOUND - source contains no such entry
TRYAGAIN - source returned an "I'm busy, try later" message
ACTIONS:
Continue - try the next source
Return - stop looking for an entry
Default Actions:
SUCCESS = return
UNAVAIL = continue
NOTFOUND = continue
TRYAGAIN = continue
NOTE:
NOTFOUND = return
The next source in the list will only be searched if NIS is down or has been disabled
Normally, a success indicated that the search is over and an unsuccessful result indicates that the
next source should be queried. There are occasions, however when you want to stop searching when
an unsuccessful search result is returned.
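That "stop searching on an unsuccessful result" override is written in nsswitch.conf as a bracketed action after the source, e.g. `passwd: files nis [NOTFOUND=return]`. A sketch that picks such a line apart with shell and sed (the line itself is a typical example, not copied from a real system):

```shell
# Example nsswitch.conf entry with an action override.
line='passwd: files nis [NOTFOUND=return]'

# The database name is everything before the first colon.
echo "database: ${line%%:*}"

# The override is whatever sits between the square brackets.
echo "$line" | sed -n 's/.*\[\(.*\)\].*/override: \1/p'
```

This prints `database: passwd` and `override: NOTFOUND=return`, showing how the STATUS=ACTION pair rides along with the source list.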
Information handled by a name service includes,
1. System (host) names and address
2. User names
3. Passwords
4. Groups
5. Automounter configuration files (auto.master, auto.home)
6. Access permissions & RBAC database files
NOTE:
YP to NIS,
1. NIS was formerly known as Sun Yellow Pages (YP). The functionality remains the
same; only the name has changed.
2. NIS administration database are called MAPS.
3. NIS stored information about workstation names and addresses, users, the network itself, and
network services. This collection of network information is referred to as the NIS NAMESPACE.
4. Any system can be an NIS client, but only system with disks should be NIS servers, whether master
or slave.
5. Servers are also clients of themselves.
6. The master copies of the maps are located on the NIS master server, in the directory
/var/yp/domain_name
7. Under <domain-name> directory each map is stored as 2 files.
a. mapname.dir
b. mapname.pag
/etc/bootparams:


1. Contains the path names that clients need during startup: root, swap and possibly others.
/etc/ethers:
1. ethers.byname - contains system names and Ethernet addresses. The system name is the key.
2. ethers.byaddr - contains system names and Ethernet addresses. The Ethernet address is the key.
/etc/netgroup:
1. netgroup - contains groupname, username and system name. The groupname is the key.
2. netgroup.byhost - contains group name, user name and system name. The system name is the key.
3. netgroup.byuser - contains group name, user name and system name. The username is the key.
/etc/netmasks:
netmasks.byaddr - contains network masks to be used with YP subnetting. The address is the key.
/etc/timezone:
timezone.byname - contains the default timezone database. The timezone name is the key.
/etc/shadow - ageing.byname
/etc/security/auth_attr - authorization attributes for RBAC. Contains the authorization description
database, part of RBAC.
/etc/auto_home - auto.home -- Automounter file for home directory.
/etc/auto_master - auto.master -- Master automounter map
/etc/security/exec_attr
-- Contains execution profiles, part of RBAC
/etc/hosts
hosts.byaddr
hosts.byname
/etc/group
group.byname
group.bygid
/etc/user_attr
Contains the extended user attributes database, part of RBAC
/etc/security/prof_attr
Contains profile descriptions, part of RBAC
/etc/passwd
/etc/shadow
passwd.byname
passwd.byuid
map.key.pag and map.key.dir
map - base name of the map
(hosts, passwd and so on...)
key - the map's sort key
(byname, byaddr and so on...)
pag - the map's data
dir - an index to the *.pag file
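The dbm files themselves are binary, but makedbm can dump a map back to text. A hedged sketch (the domain name rk.com comes from the earlier client example):

```shell
cd /var/yp/rk.com
/usr/sbin/makedbm -u hosts.byname   # -u "un-makes" the map: prints its key/value pairs
```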
The above are some of the databases and files referenced once NIS is active; there are additional
files and directories as well.
To construct a NIS slave server:
# ypinit -s <master_server_name>
To delete the NIS server configuration:
1. Replace the file
# cp /etc/nsswitch.files /etc/nsswitch.conf
2. Remove the binding directory


# cd /var/yp
# rm -rf binding
NOTE:
Make sure that the yp services are stopped.
# /usr/lib/netsvc/yp/ypstop
In short:
NIS Creation:
* /etc/hosts
* domainname
* domainname > /etc/defaultdomain
* cd /etc;touch ethers timezone netmasks netgroup bootparams locale
* cp /etc/nsswitch.nis /etc/nsswitch.conf
* ypinit -m ypinit -s ypinit -c
* svcadm enable nis/server svcadm enable nis/client
NIS Deletion:
* rm /etc/defaultdomain
* rm -r /var/yp/<domainname>
* rm -r /var/yp/binding/<domainname>
* svcadm disable nis/server svcadm disable nis/client
* cp /etc/nsswitch.files /etc/nsswitch.conf
Configuring Master Server: (sunfire1)
# hostname
sunfire1
# cp /var/yp/Makefile /var/yp/Makefile.orig
# domainname solaris.com
# domainname
solaris.com
# domainname > /etc/defaultdomain
# cd /etc
# touch ethers bootparams locale netgroup netmasks timezone
# vi /etc/hosts
"/etc/hosts" [Read only] 6 lines, 125 characters
#
# Internet host table
#
#192.168.0.157 sunfire103 master
127.0.0.1 localhost loghost sunfire1
192.168.0.100 sunfire1
192.168.0.150 sunfire2
192.168.0.200 sunfire3
~
~
"/etc/hosts" 8 lines, 171 characters
#
# ypinit -m

In order for NIS to operate successfully, we have to construct a list of the NIS servers. Please continue
to add the names for YP servers in order of preference, one per line. When you are done with the list,
type a <control D> or a return on a line by itself.
next host to add: sunfire1
next host to add: ^D
The current list of yp servers looks like this:
sunfire1
Is this correct? [y/n: y] y
Installing the YP database will require that you answer a few questions.
Questions will all be asked at the beginning of the procedure.
Do you want this procedure to quit on non-fatal errors? [y/n: n]
OK, please remember to go back and redo manually whatever fails. If you don't, some part of the system
(perhaps the yp itself) won't work.
The yp domain directory is /var/yp/solaris.com


There will be no further questions. The remainder of the procedure should take 5 to 10 minutes.
Building /var/yp/solaris.com/ypservers...
Running /var/yp/Makefile...
updated passwd
updated group
updated hosts
updated ipnodes
updated ethers
updated networks
updated rpc
updated services
updated protocols
updated netgroup
updated bootparams
updated publickey
updated netid
/usr/sbin/makedbm /etc/netmasks /var/yp/`domainname`/netmasks.byaddr;
updated netmasks
updated timezone
updated auto.master
updated auto.home
updated ageing
updated auth_attr
updated exec_attr
updated prof_attr
updated user_attr
updated audit_user
sunfire1 has been set up as a yp master server with errors. Please remember to figure out what went
wrong, and fix it.
If there are running slave yp servers, run yppush now for any databases which have been changed. If there
are no running slaves, run ypinit on those hosts which are to be slave servers.
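As the message above suggests, changed maps can be propagated to slaves with yppush. A hedged example (map and domain names are illustrative):

```shell
# On the master, push one rebuilt map to all configured slave servers:
/usr/lib/netsvc/yp/yppush -d solaris.com passwd.byname
```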

# cp /etc/nsswitch.nis /etc/nsswitch.conf
# svcadm enable nis/server
# svcs nis/server
online 12:30:21 svc:/network/nis/server:default
Configuring NIS Slave Server: (sunfire2)
# hostname
sunfire2
# ypinit -c
In order for NIS to operate successfully, we have to construct a list of the NIS servers. Please continue
to add the names for YP servers in order of preference, one per line. When you are done with the list,
type a <control D> or a return on a line by itself.
next host to add: sunfire1
next host to add: ^D
The current list of yp servers looks like this:
sunfire1
Is this correct? [y/n: y]

# svcadm enable nis/client


# svcs nis/client
online 10:28:28 svc:/network/nis/client:default
# ypinit -s sunfire1

Installing the YP database will require that you answer a few questions. Questions will all be asked at
the beginning of the procedure. Do you want this procedure to quit on non-fatal errors? [y/n: n]
OK, please remember to go back and redo manually whatever fails. If you don't, some part of the system
(perhaps the yp itself) won't work.
The yp domain directory is /var/yp/solaris.com
There will be no further questions. The remainder of the procedure should take
a few minutes, to copy the data bases from sunfire1.
sunfire2's nis data base has been set up

# ypwhich
sunfire1
Configuring NIS Client: (sunfire3)
# hostname
sunfire3
# ypinit -c


In order for NIS to operate successfully, we have to construct a list of the NIS servers. Please continue
to add the names for YP servers in order of preference, one per line. When you are done with the list,
type a <control D> or a return on a line by itself.
next host to add: sunfire1
next host to add: sunfire2
next host to add: ^D
The current list of yp servers looks like this:
sunfire1
sunfire2
Is this correct? [y/n: y]

# svcadm enable nis/client


# ypwhich
sunfire1
Client side service:
To create an automount facility for users' home directories on demand, through indirect mapping:
Edit the file /etc/auto_master
#+auto_master
.
.
.
.
/export/home home-indirect
:wq!
Now create a file /etc/home-indirect
Note:
This file will NOT be present by default and has to be created (it can have any name, but make sure
the file name is entered in /etc/auto_master).
Contents to the file:
# vi /etc/home-indirect
* sunfire1:/export/home/&
:wq!
Note:
Here sunfire1 is the name of the NIS Master Server.
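Once the maps are rebuilt and the client is bound, simply referencing a path under /export/home should trigger the indirect map. A hedged sketch (the login user5 is an example):

```shell
# On an NIS client: the first access triggers automountd
cd /export/home/user5        # mounts sunfire1:/export/home/user5 on demand
df -h /export/home/user5     # should show the NFS mount from sunfire1
```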
NIS Server-list: In the server-list mode, the ypbind process queries the
/var/yp/binding/domain/ypservers list for the names of all of the NIS servers in the domain. The
ypbind process binds only to servers in this file. The file is created by running ypinit -c.
JUMPSTART
RARP - Reverse Address Resolution Protocol
ARP - Address Resolution Protocol
Custom Jumpstart:
1. Requires up-front work
2. The most efficient way to centralize and automate the operating system installation in a large
enterprise
3. A way to install groups of similar systems automatically and identically.
Jumpstart:
1. Automatically install the Solaris software on SPARC based system just by inserting the Solaris CD
and powering on the system.
2. For new SPARC systems shipped from Sun Microsystems, this is the default method of installing the
operating system.



Commands:
# ./setup_install_server
Sets up an install server to provide the OS to the client during the jumpstart installation. This command
is also used to set up a boot-only server when the -b option is specified.
# ./add_to_install_server
A script that copies additional packages within a product tree on the Solaris 10 Software and Solaris
10 Languages CDs to the local disk on an existing install server.
#./add_install_client
A command that adds network installation information about a system to an install or boot server's
/etc files so that the system can install over the network.
# ./rm_install_client
Removes jumpstart clients that were previously set up for network installation.
#./check
Validates the information in the rules file.
Components of Jumpstart server:
1. Boot & Client identification service:
These services are provided by a networked boot server and provide the information that a jumpstart
client needs to boot using the network.
2. Installation services:
These are provided by a networked install server, which provides an image of the Solaris OS
environment the jumpstart client uses as its source of data to install.
3. Configuration services:
These are provided by networked configuration server and provide information that a jumpstart client
uses to partition disks and create file systems, add/remove Solaris packages and perform other
configuration task.
The Boot Server:
1. Also called the start-up server; this is where the client systems access the startup files.
2. When a client is first turned on, it does not have an OS installed or an IP address assigned,
therefore, when the client is first started, the boot server provides this information.
3. The boot server, running the RARP daemon in.rarpd, looks up the Ethernet address in the
/etc/ethers file, checks for a corresponding name in its /etc/hosts file, and passes the IP address back
to the client.
Important files which boot server will lookup:
/etc/ethers
/etc/bootparams
/etc/dfs/dfstab
/etc/hosts
/tftpboot
/etc/ethers:
1. When the jumpstart client boots, it has no IP address; so it broadcasts its Ethernet address to the
network using RARP.
2. Boot server receives this request and attempts to match the client's ethernet address with an entry
in the local /etc/ethers file.
3. If a match is found, the client name is matched to an entry in the /etc/hosts file. In response to the
RARP request from the client, the boot server sends the IP address from the /etc/hosts file back to the
client. The client continues the boot process using the assigned IP address.
4. An entry for the jumpstart client must be created by editing the /etc/ethers file or by using the
add_install_client script.



/etc/bootparams:
1. Contains entries that network clients use for booting.
2. Jumpstart clients retrieve the information from this file by issuing requests to a server running the
rpc.bootparamd program.
/tftpboot:
1. When booting over the network, the jumpstart client's boot PROM makes a RARP request, and
when it receives a reply the PROM broadcasts a TFTP request to fetch the inetboot file from any server
that responds & executes it.
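The inetboot file is placed in /tftpboot by the add_install_client script, linked under the client's IP address written in hexadecimal. A hedged illustration (the file names shown are typical, not copied from a real system):

```shell
ls -l /tftpboot
# 192.168.1.123 in hex is C0A8017B, so an entry such as
#   C0A8017B -> inetboot.SUN4U.Solaris_10-1
# would serve that client's TFTP boot request
```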
The Install server:
1. The boot server and the install server are typically the same system.
2. The install server is a networked system that provides Solaris 10 DVD/CD images from which we
can install Solaris 10 on another system on the network.
The Configuration server:
1. The server that contains a jumpstart configuration directory is called a configuration server. It is
usually the same system as the install and boot server, although it can be a completely different server.
2. The configuration directory on the configuration server should be owned by root and should have
permissions set to 755 (by default).
3. The configuration directory holds the rules file, the rules.ok file, class (profile) files, the check
script, and the optional begin and finish scripts.
Begin and Finish scripts:
1. A begin script is a user-defined Bourne shell script, located in the jumpstart configuration
directory on the configuration server and specified within the rules file, that performs tasks before the
Solaris software is installed on the system.
2. Output from the begin script goes to
/var/sadm/system/logs/begin.log
3. Begin script should be owned by root with default permission.
4. Output from a finish script goes to
/var/sadm/system/logs/finish.log
Procedure to initiate the jumpstart configuration:
Installation Service:
1. Create a slice with at least 5 GB of space for holding the OS image.
In this example, a slice (c0d1s5) of 6 GB was created.
2. Create the file system for the created slice.
# newfs /dev/rdsk/c0d1s5
3. Create a mount point and mount the slice.
# mkdir /jump_image
# mount /dev/dsk/c0d1s5 /jump_image
Note:
The slice can also be mounted as permanently, by editing the file /etc/vfstab.
4. Now, mount the cdrom/dvd (OS) either manually or using volume management.
# /etc/init.d/volmgt start
5. Move to the location
# cd /cdrom/Solaris_10/Tools
6. Run the following script from that location.
# ./setup_install_server /jump_image
This command will do the following functions:
a. Check for the mount point, /jump_image



b. check for the available space
c. Copy the OS image from the CD/DVD to the hard disk drive
Identification Service:
WTD? - What to do?
1. Create a dir /jump_image/config
Note: It can be with any name.
2. Create a directory in name of the jumpstart client under the above created directory.
/jump_image/config/client1 [Optional]
3. Create a file 'sysidcfg'
Note: File name should be sysidcfg
4. Share the directory.
HTD - How to do?
1. # mkdir /jump_image/config
2. # mkdir /jump_image/config/client1
3. # cd /jump_image/config/client1
4. # vi sysidcfg
Edit the file with the following contents:
network_interface=PRIMARY
{ hostname=client1
netmask=255.255.255.0
protocol_ipv6=no
default_route=none }
name_service=NONE
system_locale=en_US
timezone=Asia/Calcutta
timeserver=localhost
root_password=<copy_and_paste_from_the_/etc/shadow_of_the_server>
:wq!
5. # cat >> /etc/dfs/dfstab
share -F nfs -o ro /jump_image/config/client1
ctrl+d
6. # shareall
7. # svcadm enable nfs/server
8. # share
Only to check whether the resources are shared properly.
Configuration server:
Controls how the installation proceeds on the jumpstart clients.
Provides information about:
a. Installation type
b. System type
c. Disk partitions or file system
d. Cluster selection
e. Software package addition/deletion
WTD:
1. Create a profile under /jump_image/config/client1 directory in any name.
Note: Profile file is also known as CLASS file.
2. Create rules file to choose the right profile for the client in the same directory.
3. Run the check script to get rules.ok file
HTD:
1. # vi /jump_image/config/client1/profile



edit the file with the following contents:
install_type initial_install
system_type standalone
partitioning explicit
filesys c0t0d0s0 7000 /
filesys c0t0d0s1 1000 swap
cluster SUNWCall
package SUNWman delete
:wq!
NOTE:
partitioning explicit -- It means manual partition layout.
package SUNWman delete -- will not install the package SUNWman
In the case of X86 client
fdisk all solaris all
2. # vi /jump_image/config/client1/rules
edit the file with following contents
#hostname <jumpstart_client> <pre_script> <profile_name> <post_script>
hostname client1 - profile -
:wq!
3. # cd /cdrom/Solaris_10/Misc/jumpstart_sample
4. # cp check /jump_image/config/client1
Copy the file check from DVD to the above specified location
5. # cd /jump_image/config/client1
6. # ./check
It will verify the rules file. If the syntax is correct it creates the rules.ok file.
Boot server:
1. # vi /etc/ethers
edit the file with client's mac address and its proposed hostname. for eg
8:0:20:a9:bc:36 client1
:wq!
2. # vi /etc/inet/hosts
edit the file with proposed ipaddress and proposed hostname of the client.
eg:
192.168.1.123 client1
:wq!
3. # cd /cdrom/Solaris_10/Tools
# ./add_install_client -c <js_server_name_or_ip>:<config_dir_path> \
-p <js_server_name>:<sysidcfg_path> \
<client_name> <platform_group>
eg:
# ./add_install_client -c 192.168.1.51:/jump_image/config/client1 \
-p 192.168.1.51:/jump_image/config/client1 \
client1 sun4u
On client side at Sun machine:
OK boot net - install



ZONE ADMINISTRATION
Zone Types:
1. Global zone
2. Non-global zone
Global zones:
1. Has 2 functions
2. Is both the default zone for the system and the zone used for system-wide administrative control.
3. Is the only zone from which a non-global zone can be configured, installed, managed, or uninstalled.
4. Only global zone is bootable from the system hardware.
5. Administration of the system infrastructure, such as physical devices, routing, or dynamic
reconfiguration, is ONLY possible in the global zone.
6. Contains a complete installation of the Solaris system software packages.
7. Provides a complete database containing information about all installed components. It also holds
configuration information specific to the global zone only, such as the global zone hostname and the
file system table.
8. Is the only zone that is aware of all devices and all file systems.
9. Always has the name global.
Note:
1. Each zone is also given a unique numeric identifier, which is assigned by the system when the zone
is booted.
2. The global zone is always mapped to zone id 0.
3. The system assigns non-zero IDs to non-global zones when they boot. The number can change
when the zone reboots.
Non-global zones:
1. Can also contain Solaris software packages shared from the global zone and additional installed
software packages not shared from the global zone.
2. Is not aware of the existence of any other zones. It CANNOT install, manage or uninstall itself or any
other zones.

Zone daemons:
Zones use 2 daemons to control their operation:
a. zoneadmd
b. zsched
Note:
The zoneadmd daemon is the primary process for managing the zone's virtual platform. There is one
zoneadmd process running for each active (ready, running or shutting down) zone on the system.
Unless the zoneadmd daemon is already running, it is automatically started by the zoneadm
command.
zoneadmd:
Responsible for:
1. Managing zone booting and shutting down
2. Allocating the zone ID and starting the zsched system process
3. Setting zone-wide resource control (rctl)
4. Preparing the zones devices as specified in the zone configuration
5. Plumbing virtual network interfaces



6. Mounting loopback and conventional file systems
Zsched:
Every active zone has an associated kernel process, zsched. The zsched process enables the zones
subsystem to keep track of per-zone kernel threads. Kernel threads doing work on behalf of the zone
are owned by zsched.
Zone file system:
There are 2 models for installing root file systems in non-global zones.
a. Sparse zone
b. Whole root zone
Sparse Zone:
1. Installs minimal number of files from the global zone when a non-global zone is installed.
2. Only certain root packages are installed in the non-global zone. These include a subset of the
required root packages that are normally installed in the global zone, as well as any additional root
packages that the global administrator might have selected.
3. Any files that need to be shared between a non-global zone and the global zone can be mounted as
read-only loopback file systems. By default /lib, /usr, /platform and /sbin are mounted in this
manner.
4. Once a zone is installed it is no longer dependent on the global zone unless a file system is mounted
using a loopback file system.
5. A non-global zone CANNOT be an NFS server.
Whole root zone:
1. All of required & any selected optional packages are installed into the private file systems of the
zone.
2. Provides the maximum flexibility.
3. Advantages of this model include the capability for global zone administrators to customize their
zone's file system layout.
Zone states:
Undefined: The zone's configuration has not been completed and committed to stable storage.
This state also occurs when a zone's configuration has been deleted.
Configured: The zone's configuration is complete and committed to stable storage. However, those
elements of the zone's application environment that must be specified after initial boot are not yet
present.
Incomplete: This is a transitional state. During an install or uninstall operation, zoneadm sets the
state of the target zone to incomplete. Upon successful completion of the operation, the state is set to
the correct state. However, a zone that is unable to complete the install process will stop in this state.
Installed: In this state, the zone's configuration is instantiated on the system. The zoneadm
command is used to verify that the configuration can be successfully used on the designated Solaris
system. Packages are installed under the zone's root path. In this state, the zone has no associated
virtual platform.
Ready: In this state, the virtual platform for the zone is established. The kernel creates the zsched
process, network interfaces are plumbed, file systems are mounted, and devices are configured. A
unique zone ID is assigned by the system. At this stage, no processes associated with the zone have
been started.
Running: In this state, the user processes associated with the zone application environment are
running. The zone enters the running state as soon as the first user process associated with the
application environment is created.



Shutting down and Down: These are transitional states that are visible while the zone is being
halted. However, a zone that is unable to shut down for any reason will stop in one of these states.
Allocating file system space:
1. About 100 Mb of disk space per non-global zone is required when the global zone has been installed
with all of the standard Solaris packages.
2. By default, any additional packages installed in the global zone also populate the non-global zones.
The amount of disk space required must be increased accordingly. The directory location in the
non-global zone for these additional packages is specified through the inherit-pkg-dir resource.
3. An additional 40 Mb of RAM per zone is suggested, but not required on a machine with sufficient
swap space.
Usage of # zonecfg command:
1. Create or delete a zone configuration
2. Set properties for resources added to a configuration
3. Query or verify a configuration
4. Commit to a configuration
5. Revert to a previous configuration
6. Exit from a zonecfg session.
Usage of # zoneadm command:
1. Verify a zones configuration
2. Install a zone
3. Boot a zone
4. Reboot a zone
5. Display information about a running zone
6. Move a zone
7. Uninstall a zone
8. Remove a zone using the zonecfg command
In a nutshell:
Before configuring the zones:
Create the zone using the zonecfg -z <zonename> command [undefined state]
1. Create the zone path directory manually; the permissions on that directory should be 700
2. Configure the zone using the zonecfg command [configured state]
3. Install the zone after configuration to change its state from configured to installed [incomplete
during installation].
4. Boot the zone after installing it [running state - before this state it goes to the ready state, where all
the n/w interfaces are plumbed, file systems are mounted, devices are configured, and a unique zone
id is assigned by the system]. In the ready state no processes associated with the zone are started.
5. The state then goes to running, where all the processes are started.
Output: Zone configuration steps:
bash-3.00# zonecfg -z zones1

zones1: No such zone configured


Use 'create' to begin configuring a new zone.
zonecfg:zones1> create
zonecfg:zones1> set zonepath=/etc/zones/zonepractice
zonecfg:zones1> set autoboot=true
zonecfg:zones1> add fs
zonecfg:zones1:fs> set dir=/mnt/zones
zonecfg:zones1:fs> set special=c1t0d0s4
zonecfg:zones1:fs> set raw=/dev/rdsk/c1t0d0s4
zonecfg:zones1:fs> set type=ufs
zonecfg:zones1:fs> end


zonecfg:zones1> add net


zonecfg:zones1:net> set physical=eri0
zonecfg:zones1:net> set address=10.2.3.5
zonecfg:zones1:net> end
zonecfg:zones1> add attr
zonecfg:zones1:attr> set name=zones
zonecfg:zones1:attr> set type=string
zonecfg:zones1:attr> set value=uint
zonecfg:zones1:attr> end
zonecfg:zones1> add inherit-pkg-dir
zonecfg:zones1:inherit-pkg-dir> set dir=/opt/sfw
zonecfg:zones1:inherit-pkg-dir> end
zonecfg:zones1> add rctl
zonecfg:zones1:rctl> set name=zone.cpu-shares
zonecfg:zones1:rctl> add value(priv=privileged,limit=10,action=none)
zonecfg:zones1:rctl> end
zonecfg:zones1> verify
zonecfg:zones1> commit
zonecfg:zones1> exit

Output: To find zone configuration information:


bash-3.00# zonecfg -z zones1 info
zonename: zones1
zonepath: /etc/zones/zonepractice
brand: native
autoboot: true
bootargs:
pool:
limitpriv:
scheduling-class:
ip-type: shared
[cpu-shares: 10]
inherit-pkg-dir:
dir: /lib
inherit-pkg-dir:
dir: /platform
inherit-pkg-dir:
dir: /sbin
inherit-pkg-dir:
dir: /usr
inherit-pkg-dir:
dir: /opt/sfw
fs:
dir: /mnt/zones
special: c1t0d0s4
raw: /dev/rdsk/c1t0d0s4
type: ufs
options: []
net:
address: 10.2.3.5
physical: eri0
defrouter not specified
attr:
name: zones
type: string
value: uint
rctl:
name: zone.cpu-shares
value: (priv=privileged,limit=10,action=none)

Output: To know the configured zone status:


# zoneadm list -cp

0:global:running:/::native:shared
-:zones1:configured:/etc/zones/zonepractice::native:shared

bash-3.00# zoneadm -z zones1 install


bash-3.00# zoneadm -z zones1 boot
bash-3.00# zoneadm list -cp

0:global:running:/::native:shared
1:zones1:running:/etc/zones/zonepractice:f84ec383-bfe3-c890-8a7f-f74970d40c96:native:shared

bash-3.00# zlogin -C zones1

Ravi Mishra

46

Sun Solaris System Admin Notes 2


[Connected to zone 'zones1' console]
To halt a zone: # zoneadm -z zones1 halt
To uninstall a zone: # zoneadm -z zones1 uninstall
To delete a zone: # zonecfg -z zones1 delete
ZFS
ZFS has been designed to be robust, scalable and simple to administer.
ZFS pool storage features:
ZFS eliminates the volume management altogether. Instead of forcing us to create virtualized
volumes, ZFS aggregates devices into a storage pool. The storage pool describes the physical
characteristics of the storage (device layout, data redundancy, and so on) and acts as an arbitrary
data store from which file systems can be created.
File systems grow automatically within the space allocated to the storage pool.
ZFS is a transactional file system, which means that the file system state is always consistent
on disk. With a transactional file system, data is managed using copy-on-write semantics.
ZFS supports storage pools with varying levels of data redundancy, including mirroring and a
variation on RAID-5. When a bad data block is detected, ZFS fetches the correct data from
another replicated copy, and repairs the bad data, replacing it with the good copy.
The file system itself is 128-bit, allowing for 256 quadrillion zettabytes of storage.
Directories can have up to 2^48 (256 trillion) entries, and no limit exists on the
number of file systems or number of files that can be contained within a file system.
A snapshot is a read-only copy of a file system or volume. Snapshots can be created quickly
and easily. Initially, snapshots consume no additional space within the pool.
Clone - A file system whose initial contents are identical to the contents of a snapshot.
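The snapshot and clone commands listed below can be combined as follows; this is a hedged sketch using the testpool/homedir dataset from the later examples, with an illustrative snapshot name:

```shell
zfs snapshot testpool/homedir@before_patch    # instant, initially consumes no space
zfs list -t snapshot                          # list existing snapshots
zfs rollback testpool/homedir@before_patch    # revert the dataset to the snapshot
zfs clone testpool/homedir@before_patch testpool/homecopy   # writable copy
```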
ZFS component Naming requirements:
Each ZFS component must be named according to the following rules:
1. Empty components are not allowed.
2. Each component can only contain alphanumeric characters in addition to the following 4 special
characters:
a. Underscore (_)
b. Hyphen (-)
c. Colon (: )
d. Period (.)
3. Pool names must begin with a letter, except that the beginning sequence c(0-9) is not allowed (this is
because of the physical naming convention). In addition, pool names that begin with mirror, raidz, or
spare are not allowed as these names are reserved.
4. Dataset names must begin with an alphanumeric character.
ZFS Hardware and Software requirements and recommendations:
1. A SPARC or X86 system that is running the Solaris 10 6/06 release or later release.
2. The minimum disk size is 128 Mbytes. The minimum amount of disk space required for a storage
pool is approximately 64 Mb.
3. The minimum amount of memory recommended to install a Solaris system is 512 Mb. However, for
good ZFS performance, at least 1 GB or more of memory is recommended.
4. Whilst creating a mirrored disk configuration, multiple controllers are recommended.
ZFS Steps:



zpool create <poolname> <type of pool(mirror/raidz)> <slice1 slice2 .../disk1
disk2 ...>
zpool add
zpool remove
zpool attach
zpool detach
zpool destroy
zpool list
zpool status
zpool replace
zfs create
zfs destroy
zfs snapshot
zfs rollback
zfs clone
zfs list
zfs set
zfs get
zfs mount
zfs unmount
zfs share
zfs unshare
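Of the commands above, attach/detach/replace have no worked example later, so here is a hedged sketch (the device names are illustrative):

```shell
zpool attach testpool c2d0s7 c2d0s6   # mirror c2d0s7 onto c2d0s6
zpool status testpool                 # watch the resilver progress
zpool replace testpool c2d0s6 c2d0s5  # swap a (failed) device for a new one
zpool detach testpool c2d0s5          # break the mirror again
```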
Output: Creating a zpool:
bash-3.00# zpool create testpool c2d0s7
bash-3.00# zpool list
NAME SIZE USED AVAIL CAP HEALTH ALTROOT
testpool 2G 77.5K 2.00G 0% ONLINE -

Output: Creating a directory under a zpool:


bash-3.00# zfs create testpool/homedir
bash-3.00# df -h

testpool 2.0G 25K 2.0G 1% /testpool


testpool/homedir 2.0G 24K 2.0G 1% /testpool/homedir

bash-3.00# mkfile 100m testpool/homedir/newfile


bash-3.00# df -h

testpool 2.0G 25K 1.9G 1% /testpool


testpool/homedir 2.0G 100M 1.9G 5% /testpool/homedir

Mirror:
bash-3.00# zpool create testmirrorpool mirror c2d0s3 c2d0s4
bash-3.00# df -h
swap 3.1G 32K 3.1G 1% /var/run
testpool 2.0G 25K 1.9G 1% /testpool
testpool/homedir 2.0G 100M 1.9G 5% /testpool/homedir
testmirrorpool 4.9G 24K 4.9G 1% /testmirrorpool

bash-3.00# cat /etc/mnttab


#device device mount FS fsck mount mount
#to mount to fsck point type pass at boot options
#
testpool /testpool zfs rw,devices,setuid,exec,atime,dev=2d50002 1258087961
testpool/homedir /testpool/homedir zfs rw,devices,setuid,exec,atime,dev=2d50003 1258088096
testmirrorpool /testmirrorpool zfs rw,devices,setuid,exec,atime,dev=2d50004 1258089634

DESTROYING A POOL:
bash-3.00# zpool destroy testmirrorpool
bash-3.00# zpool list



NAME SIZE USED AVAIL CAP HEALTH ALTROOT
testpool 2G 100M 1.90G 4% ONLINE -
MANAGING ZFS PROPERTIES:
bash-3.00# zfs get all testpool/homedir
bash-3.00# zfs set quota=500m testpool/homedir
bash-3.00# zfs set compression=on testpool/homedir
bash-3.00# zfs set mounted=no testpool/homedir
cannot set mounted property: read only property
bash-3.00# zfs get all testpool/homedir
INHERITING ZFS PROPERTIES:
bash-3.00# zfs get -r compression testpool
NAME PROPERTY VALUE SOURCE
testpool compression off default
testpool/homedir compression on local
testpool/homedir/nesteddir compression on local

bash-3.00# zfs inherit compression testpool/homedir


bash-3.00# zfs get -r compression testpool
NAME PROPERTY VALUE SOURCE
testpool compression off default
testpool/homedir compression off default
testpool/homedir/nesteddir compression on local

bash-3.00# zfs inherit -r compression testpool/homedir


bash-3.00# zfs get -r compression testpool
NAME PROPERTY VALUE SOURCE
testpool compression off default
testpool/homedir compression off default
testpool/homedir/nesteddir compression off default

QUERYING ZFS PROPERTIES:


bash-3.00# zfs get checksum testpool/homedir
NAME PROPERTY VALUE SOURCE
testpool/homedir checksum on default

bash-3.00# zfs get all testpool/homedir


bash-3.00# zfs get -s local all testpool/homedir
NAME PROPERTY VALUE SOURCE
testpool/homedir quota 500M local

RAID-Z POOL:
bash-3.00# zpool create testraid5pool raidz c2d0s3 c2d0s4 c2d0s5
bash-3.00# zpool list
bash-3.00# df -h

testpool 2.0G 25K 1.9G 1% /testpool


testpool/homedir 2.0G 100M 1.9G 5% /testpool/homedir
testraid5pool 9.8G 32K 9.8G 1% /testraid5pool

DOUBLE PARITY RAID-Z POOL:


bash-3.00# zpool create doubleparityraid5pool raidz2 c2d0s3 c2d0s4 c2d0s5
bash-3.00# zpool list
NAME SIZE USED AVAIL CAP HEALTH ALTROOT
doubleparityraid5pool 14.9G 158K 14.9G 0% ONLINE -
testpool 2G 100M 1.90G 4% ONLINE -

DRY RUN OF STORAGE POOL CREATION:


bash-3.00# zpool create -n testmirrorpool mirror c2d0s3 c2d0s4
would create 'testmirrorpool' with the following layout:


testmirrorpool
mirror
c2d0s3
c2d0s4


bash-3.00# zpool list

NAME SIZE USED AVAIL CAP HEALTH ALTROOT


testpool 2G 100M 1.90G 4% ONLINE -

bash-3.00# df
/ (/dev/dsk/c1d0s0 ):19485132 blocks 2318425 files
/devices (/devices ): 0 blocks 0 files
/system/contract (ctfs ): 0 blocks 2147483612 files
/lib/libc.so.1 (/usr/lib/libc/libc_hwcap2.so.1):19485132 blocks 2318425 files
/dev/fd (fd ): 0 blocks 0 files
/tmp (swap ): 6598720 blocks 293280 files
/var/run (swap ): 6598720 blocks 293280 files
/testpool (testpool ): 3923694 blocks 3923694 files
/testpool/homedir (testpool/homedir ): 3923694 blocks 3923694 files

Note: Here the -n option is used not to create a zpool but just to check whether it is
possible to create it. If it is possible, it'll give the above output; otherwise it'll
print the error that would occur when actually creating the zpool.
LISTING THE POOLS AND ZFS:
bash-3.00# zpool list
NAME SIZE USED AVAIL CAP HEALTH ALTROOT
testmirrorpool 4.97G 52.5K 4.97G 0% ONLINE -
testpool 2G 100M 1.90G 4% ONLINE -

bash-3.00# zpool list -o name,size,health


NAME SIZE HEALTH
testmirrorpool 4.97G ONLINE
testpool 2G ONLINE

bash-3.00# zpool status -x


all pools are healthy

bash-3.00# zpool status -x testmirrorpool


pool 'testmirrorpool' is healthy
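Because `zpool status -x` prints the single line `all pools are healthy` when nothing is wrong, a simple monitoring wrapper can just compare against that string. This is a hypothetical sketch, not a documented tool: on a live host the canned variable would be replaced by the command substitution shown in the comment.

```shell
# Hypothetical pool health check. On a live Solaris host you would use:
#   status=$(zpool status -x)
# The string is canned here so the logic can be shown anywhere.
status="all pools are healthy"
case "$status" in
  "all pools are healthy") echo "OK" ;;
  *)                       echo "ALERT: $status" ;;
esac
```

Anything other than the healthy summary (e.g. `pool 'testmirrorpool' is degraded`) falls through to the alert branch.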

bash-3.00# zpool status -v


pool: testmirrorpool
state: ONLINE
scrub: none requested
config:
NAME STATE READ WRITE CKSUM
testmirrorpool ONLINE 0 0 0
mirror ONLINE 0 0 0
c2d0s3 ONLINE 0 0 0
c2d0s4 ONLINE 0 0 0
errors: No known data errors
pool: testpool
state: ONLINE
scrub: none requested
config:
NAME STATE READ WRITE CKSUM
testpool ONLINE 0 0 0
c2d0s7 ONLINE 0 0 0
errors: No known data errors

bash-3.00# zpool status -v testmirrorpool


pool: testmirrorpool
state: ONLINE
scrub: none requested
config:
NAME STATE READ WRITE CKSUM
testmirrorpool ONLINE 0 0 0
mirror ONLINE 0 0 0
c2d0s3 ONLINE 0 0 0
c2d0s4 ONLINE 0 0 0

errors: No known data errors

bash-3.00# zfs list -H

testmirrorpool 75.5K 4.89G 24.5K /testmirrorpool


testpool 114K 1.97G 26.5K /testpool
testpool/homedir_old 24.5K 1.97G 24.5K /testpool/homedir_old

bash-3.00# zfs list -o name,sharenfs,mountpoint


NAME                  SHARENFS  MOUNTPOINT
testmirrorpool        off       /testmirrorpool
testpool              off       /testpool
testpool/homedir_old  off       /testpool/homedir_old

bash-3.00# zfs create testpool/homedir_old/nesteddir


bash-3.00# zfs list testpool/homedir_old
NAME USED AVAIL REFER MOUNTPOINT
testpool/homedir_old 52K 1.97G 27.5K /testpool/homedir_old

bash-3.00# zfs list -r testpool/homedir_old

NAME USED AVAIL REFER MOUNTPOINT


testpool/homedir_old 52K 1.97G 27.5K /testpool/homedir_old
testpool/homedir_old/nesteddir 24.5K 1.97G 24.5K /testpool/homedir_old/nesteddir

bash-3.00# zfs get -r compression testpool


NAME PROPERTY VALUE SOURCE
testpool compression off default
testpool/homedir compression off default
testpool/homedir/nesteddir compression off default

bash-3.00# zfs set compression=on testpool/homedir/nesteddir


bash-3.00# zfs get -r compression testpool
NAME PROPERTY VALUE SOURCE
testpool compression off default
testpool/homedir compression off default
testpool/homedir/nesteddir compression on local

bash-3.00# zfs inherit compression testpool/homedir/nesteddir


bash-3.00# zfs get -r compression testpool
NAME PROPERTY VALUE SOURCE
testpool compression off default
testpool/homedir compression off default
testpool/homedir/nesteddir compression off default


DESTROYING A ZFS:
bash-3.00# zfs list

NAME USED AVAIL REFER MOUNTPOINT


testmirrorpool 75.5K 4.89G 24.5K /testmirrorpool
testpool 100M 1.87G 25.5K /testpool
testpool/homedir 100M 1.87G 100M /testpool/homedir

bash-3.00# ls -l testpool/homedir/
total 4
drwxr-xr-x 2 root root 2 Nov 13 11:36 newdir
-rw-r--r-- 1 root root 0 Nov 13 11:36 newfile
bash-3.00# pwd
/testpool/homedir/newdir
bash-3.00# zfs destroy testpool/homedir
cannot unmount '/testpool/homedir': Device busy
bash-3.00# zfs destroy -f testpool/homedir
bash-3.00# zfs list

NAME USED AVAIL REFER MOUNTPOINT


testmirrorpool 75.5K 4.89G 24.5K /testmirrorpool


testpool 82K 1.97G 24.5K /testpool

bash-3.00# zfs create testpool/homedir/nesteddir


bash-3.00# zfs list
NAME USED AVAIL REFER MOUNTPOINT
testmirrorpool 75.5K 4.89G 24.5K /testmirrorpool
testpool 144K 1.97G 26.5K /testpool
testpool/homedir 49K 1.97G 24.5K /testpool/homedir
testpool/homedir/nesteddir 24.5K 1.97G 24.5K /testpool/homedir/nesteddir

bash-3.00# zfs destroy testpool/homedir


cannot destroy 'testpool/homedir': filesystem has children
use '-r' to destroy the following datasets: testpool/homedir/nesteddir

bash-3.00# zfs destroy -r testpool/homedir


RENAMING ZFS:
bash-3.00# zfs list

NAME USED AVAIL REFER MOUNTPOINT


testmirrorpool 75.5K 4.89G 24.5K /testmirrorpool
testpool 144K 1.97G 26.5K /testpool
testpool/homedir 49K 1.97G 24.5K /testpool/homedir

bash-3.00# zfs rename testpool/homedir testpool/homedir_old


bash-3.00# zfs list
NAME USED AVAIL REFER MOUNTPOINT
testmirrorpool 75.5K 4.89G 24.5K /testmirrorpool
testpool 144K 1.97G 26.5K /testpool
testpool/homedir_old 49K 1.97G 24.5K /testpool/homedir_old

bash-3.00# zfs create testpool/homedir_old/nesteddir


bash-3.00# zfs list testpool/homedir_old

NAME USED AVAIL REFER MOUNTPOINT


testpool/homedir_old 52K 1.97G 27.5K /testpool/homedir_old

bash-3.00# zfs list -r testpool/homedir_old


NAME                            USED   AVAIL  REFER  MOUNTPOINT
testpool/homedir_old            52K    1.97G  27.5K  /testpool/homedir_old
testpool/homedir_old/nesteddir  24.5K  1.97G  24.5K  /testpool/homedir_old/nesteddir

MOUNTING AND UNMOUNTING ZFS FILESYSTEMS:


bash-3.00# zfs get mountpoint testpool/homedir
NAME PROPERTY VALUE SOURCE
testpool/homedir mountpoint /testpool/homedir default

bash-3.00# zfs get mounted testpool/homedir


NAME PROPERTY VALUE SOURCE
testpool/homedir mounted yes -

bash-3.00# zfs set mountpoint=/mnt/altloc testpool/homedir

bash-3.00# zfs get mountpoint testpool/homedir
NAME PROPERTY VALUE SOURCE
testpool/homedir mountpoint /mnt/altloc local
Legacy filesystems must be managed through mount and umount commands and the
/etc/vfstab file. Unlike normal zfs filesystems, zfs doesn't automatically mount
legacy filesystems on boot.
bash-3.00# zfs set mountpoint=legacy testpool/additionaldir
bash-3.00# mount -F zfs testpool/additionaldir /mnt/legacy
MOUNTING ZFS FILESYSTEMS:
bash-3.00# umountall
bash-3.00# zfs mount
bash-3.00# zfs mount -a
bash-3.00# zfs mount
testpool                    /testpool
testpool/homedir            /mnt/altloc
testpool/homedir/nesteddir  /mnt/altloc/nesteddir

Note:
1. The zfs mount -a command doesn't mount legacy filesystems.
2. To force a mount on top of a non-empty directory, use the -O option.
3. To specify options like ro or rw, use the -o option.

UNMOUNTING ZFS FILESYSTEMS:


bash-3.00# zfs mount
testpool
testpool/homedir
testpool/homedir/nesteddir

/testpool
/testpool/homedir
/testpool/homedir/nesteddir

bash-3.00# zfs unmount /testpool/homedir


bash-3.00# zfs mount
testpool
/testpool
bash-3.00# zfs mount -a
bash-3.00# zfs umount /testpool/homedir
bash-3.00# zfs mount
testpool
/testpool
bash-3.00# pwd
/testpool/homedir
bash-3.00# zfs unmount /testpool/homedir
cannot unmount '/testpool/homedir': Device busy
bash-3.00# zfs unmount -f /testpool/homedir
bash-3.00# zfs mount
testpool /testpool
Note: The subcommand works both ways - unmount and umount. This is to provide
backward compatibility.
ZFS WEB-BASED MANAGEMENT:
bash-3.00# /usr/sbin/smcwebserver start
Starting Sun Java(TM) Web Console Version 3.0.2 ...
The console is running
bash-3.00# /usr/sbin/smcwebserver enable
The enable sub command enables the server to run automatically when the system
boots.
ZFS SNAPSHOTS:
bash-3.00# zfs list -r

NAME USED AVAIL REFER MOUNTPOINT


testmirrorpool 75.5K 4.89G 24.5K /testmirrorpool
testpool 146K 1.97G 26.5K /testpool
testpool/homedir_old 52K 1.97G 27.5K /testpool/homedir_old
testpool/homedir_old/nesteddir 24.5K 1.97G 24.5K /testpool/homedir_old/nesteddir

bash-3.00# zfs snapshot testpool/homedir_old@snap1


bash-3.00# zfs list -t snapshot
NAME USED AVAIL REFER MOUNTPOINT
testpool/homedir_old@snap1 0 - 27.5K -



bash-3.00# zfs snapshot -r testpool/homedir_old@snap2
bash-3.00# zfs list -t snapshot
NAME USED AVAIL REFER MOUNTPOINT
testpool/homedir_old@snap1 0 - 27.5K -
testpool/homedir_old@snap2 0 - 27.5K -
testpool/homedir_old/nesteddir@snap2 0 - 24.5K -


PROPERTIES OF SNAPSHOTS:
bash-3.00# zfs get all testpool/homedir_old@snap1
NAME                        PROPERTY       VALUE                  SOURCE
testpool/homedir_old@snap1  type           snapshot               -
testpool/homedir_old@snap1  creation       Fri Nov 13 16:26 2009  -
testpool/homedir_old@snap1  used           0                      -
testpool/homedir_old@snap1  referenced     27.5K                  -
testpool/homedir_old@snap1  compressratio  1.00x                  -

bash-3.00# zfs set compressratio=2.00x testpool/homedir_old@snap1


cannot set compressratio property: read only property
bash-3.00# zfs set compression=on testpool/homedir_old@snap1
cannot set compression property for 'testpool/homedir_old@snap1': snapshot
properties cannot be modified
RENAMING ZFS SNAPSHOTS:
bash-3.00# zfs rename testpool/homedir_old@snap1 additionalpool/homedir@snap3
cannot rename to 'additionalpool/homedir@snap3': snapshots must be part of same
dataset
bash-3.00# zfs rename testpool/homedir_old@snap1 testpool/homedir_old@snap3
bash-3.00# zfs list -t snapshot
NAME USED AVAIL REFER MOUNTPOINT
testpool/homedir_old@snap3 0 - 27.5K -
testpool/homedir_old@snap2 0 - 27.5K -
testpool/homedir_old/nesteddir@snap2 0 - 24.5K -

DISPLAYING AND ACCESSING ZFS SNAPSHOTS:


bash-3.00# ls /testpool/homedir_old/.zfs/snapshot
snap2 snap3
bash-3.00# zfs list -r -t snapshot -o name,creation testpool/homedir_old
NAME CREATION
testpool/homedir_old@snap3 Fri Nov 13 16:26 2011
testpool/homedir_old@snap2 Fri Nov 13 16:31 2011
testpool/homedir_old/nesteddir@snap2 Fri Nov 13 16:31 2011

ROLLING BACK TO A ZFS SNAPSHOT:


bash-3.00# zfs rollback testpool/homedir_old@snap3
cannot rollback to 'testpool/homedir_old@snap3': more recent snapshots exist
use '-r' to force deletion of the following snapshots: testpool/homedir_old@snap2

bash-3.00# zfs rollback -r testpool/homedir_old@snap3


DESTROYING A ZFS SNAPSHOT:
bash-3.00# zfs destroy testpool/homedir_old@snap3
cannot destroy 'testpool/homedir_old@snap3': snapshot has dependent clones



use '-R' to destroy the following datasets: testpool/additionaldir/testclone
bash-3.00# zfs destroy -R testpool/homedir_old@snap3
CREATING ZFS CLONES:
bash-3.00# zfs clone testpool/homedir_old@snap3 testpool/additionaldir/testclone
bash-3.00# zfs list
NAME USED AVAIL REFER MOUNTPOINT
additionalpool 104K 4.89G 25.5K /additionalpool
additionalpool/homedir 24.5K 4.89G 24.5K /additionalpool/homedir
testmirrorpool 75.5K 4.89G 24.5K /testmirrorpool
testpool 185K 1.97G 27.5K /testpool
testpool/additionaldir 25.5K 1.97G 25.5K /testpool/additionaldir
testpool/additionaldir/testclone 0 1.97G 27.5K /testpool/additionaldir/testclone
testpool/homedir_old 52K 1.97G 27.5K /testpool/homedir_old
testpool/homedir_old@snap3 0 - 27.5K -
testpool/homedir_old/nesteddir 24.5K 1.97G 24.5K /testpool/homedir_old/nesteddir
testpool/homedir_old/nesteddir@snap2 0 - 24.5K -

SETTING CLONE PROPERTIES:


bash-3.00# zfs get all testpool/additionaldir/testclone
NAME PROPERTY VALUE SOURCE
testpool/additionaldir/testclone  type           filesystem             -
testpool/additionaldir/testclone  creation       Fri Nov 13 16:51 2009  -
testpool/additionaldir/testclone  used           22.5K                  -
testpool/additionaldir/testclone  available      1.97G                  -
testpool/additionaldir/testclone  referenced     27.5K                  -
testpool/additionaldir/testclone  compressratio  1.00x                  -
testpool/additionaldir/testclone  mounted        yes                    -

<Output Truncated>
bash-3.00# zfs set sharenfs=on testpool/additionaldir/testclone
bash-3.00# zfs set quota=500m testpool/additionaldir/testclone
bash-3.00# zfs get sharenfs,quota testpool/additionaldir/testclone
NAME PROPERTY VALUE SOURCE
testpool/additionaldir/testclone sharenfs on local
testpool/additionaldir/testclone quota 500M local

REPLACING A ZFS FILESYSTEM WITH A ZFS CLONE:


bash-3.00# zfs list -r testpool/homedir_old
NAME USED AVAIL REFER MOUNTPOINT
testpool/homedir_old 74.5K 1.97G 27.5K /testpool/homedir_old
testpool/homedir_old@snap3 22.5K - 27.5K -
testpool/homedir_old/nesteddir 24.5K 1.97G 24.5K /testpool/homedir_old/nesteddir
testpool/homedir_old/nesteddir@snap2 0 - 24.5K

bash-3.00# zfs list -r testpool/additionaldir


NAME USED AVAIL REFER MOUNTPOINT
testpool/additionaldir 48K 1.97G 25.5K /testpool/additionaldir
testpool/additionaldir/testclone 22.5K 500M 27.5K /testpool/additionaldir/testclone

bash-3.00# zfs promote testpool/additionaldir/testclone


bash-3.00# zfs list -r testpool/homedir_old

NAME USED AVAIL REFER MOUNTPOINT


testpool/homedir_old 47K 1.97G 27.5K /testpool/homedir_old
testpool/homedir_old/nesteddir 24.5K 1.97G 24.5K /testpool/homedir_old/nesteddir
testpool/homedir_old/nesteddir@snap2 0 - 24.5K -

bash-3.00# zfs list -r testpool/additionaldir

NAME USED AVAIL REFER MOUNTPOINT


testpool/additionaldir 75.5K 1.97G 25.5K /testpool/additionaldir
testpool/additionaldir/testclone 50K 500M 27.5K /testpool/additionaldir/testclone
testpool/additionaldir/testclone@snap3 22.5K - 27.5K -



bash-3.00# zfs list -r testpool/homedir_old

NAME USED AVAIL REFER MOUNTPOINT


testpool/homedir_old 50K 500M 27.5K /testpool/homedir_old
testpool/homedir_old@snap3 22.5K - 27.5K -

bash-3.00# zfs list -r testpool/additionaldir

NAME USED AVAIL REFER MOUNTPOINT


testpool/additionaldir 24.5K 1.97G 24.5K /testpool/additionaldir

bash-3.00# zfs list -r testpool/homedir_old_old


NAME USED AVAIL REFER MOUNTPOINT
testpool/homedir_old_old 47K 1.97G 27.5K /testpool/homedir_old_old
testpool/homedir_old_old/nesteddir 24.5K 1.97G 24.5K
/testpool/homedir_old_old/nesteddir
testpool/homedir_old_old/nesteddir@snap2 0 - 24.5K -

DESTROYING ZFS CLONE:
bash-3.00# zfs destroy testpool/homedir_old@snap3
VOLUME MANAGER | SOLARIS VOLUME MANAGER | SOLSTICE DISK SUITE 4.0
Advantages: Provides 3 major functionalities
1. Overcome the disk size limitation by providing for joining of multiple disk slices to form a bigger
volume.
2. Fault tolerance by allowing mirroring of data from one disk to another and keeping parity
information in Raid-5.
3. Performance enhancements by allowing spreading of the data.
Disk suite packages:
1. Format of the package is datastream.
2. Packages can be manually added by executing the command # pkgadd.
3. SUNWmd - Solstice Disk Suite
4. SUNWmdg - Solstice Disk Suite Tool
5. SUNWmdn - Solstice Disk Suite log daemon
Terminology:
A. Metadevice:
1. A virtual device composed of several physical devices-slices/disks.
2. Provide increased capacity, higher availability & better performance.
3. Standard metadevice name begins with "d" and is followed by a number.
for eg: dnn
d10
i. By default 128 unique metadevices in the range between 0 and 127 can be created.
ii. Additional metadevices can be added by updating a file /kernel/drv/md.conf. But it may degrade
the performance of the system.
4. Metadevice names are located in /dev/md/dsk and /dev/md/rdsk
B. State database/Meta database /md/ Replica:
1. Provides non-volatile storage necessary to keep track of configuration & status information for all
metadevices, meta mirrors.
2. Also keep track of error conditions that have occurred.
3. When the state database is updated, each replica is modified one at a time.
4. Needs a dedicated disk slice.

5. Has to be created before the logical devices are created.
6. Minimum 3 databases have to be created.
7. N/2 replicas are required for the running system.
8. N/2+1 replicas are required when the system reboots.
9. Size of 1 replica is 4Mb.

Note:
512 bytes = 1 sector
1 block = 1 sector
1 block = 1/2 kb
32 block = 16kb
64 block = 32kb
128 block = 64kb
By default interlace = 64kb
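The block-to-kilobyte table above follows from the 512-byte sector: size in kb = blocks * 512 / 1024 = blocks / 2. A one-loop sketch:

```shell
# 1 block = 1 sector = 512 bytes, so size in kb = blocks / 2.
for blocks in 32 64 128; do
  echo "$blocks blocks = $((blocks / 2))kb"
done
```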

RAID-0: Concatenation and Striping:


1. Joining of 2 or more disk slices to add up the disk space.
2. Serial in nature, i.e. sequential data operations are performed on the first disk, then the second
disk and so on.
3. Due to this serial nature, new slices can be added without having to take a backup of the entire
concatenated volume.
4. The address space is contiguous - data will be stored volume by volume.
5. No fault tolerance.
6. Size of the volume = sum of all the physical components in that volume
Note:
We can use a concatenated/striped metadevice for any file system with the exception of / (root),
swap, /usr, /var, /opt or any file system accessed during a Solaris upgrade or install.
Striping:
1. Spreading of data over multiple disk drives mainly to enhance the performance by distributing data.
2. Data is divided into equal sized chunks. By default 16kb.
Chunks=interlace
3. Interlace value tells disk suite how much data is placed on a component before moving to the next
component of the stripe.
4. Because the data is spread across a stripe, we gain increased performance as read/write is spread
across multiple disks.
5. Size of the volume = N * smallest size of the physical component in that volume
6. No fault tolerance
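Point 5 above (size = N * smallest physical component) can be checked with shell arithmetic; the three slice sizes here are made-up values in MB:

```shell
# Stripe capacity = number of components * smallest component,
# because every column contributes only as much as the smallest slice.
s1=1024 s2=2048 s3=1536   # hypothetical slice sizes in MB
smallest=$s1
for s in $s2 $s3; do
  if [ "$s" -lt "$smallest" ]; then smallest=$s; fi
done
echo "stripe size = $((3 * smallest)) MB"
```

With 1024/2048/1536 MB slices the stripe yields 3072 MB; the extra space on the larger slices is wasted, which is why equal-sized components are preferred for stripes.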
Raid-1 Mirroring:
1. Write performance is slow.
2. Provides fault tolerance
3. Provides data redundancy by simultaneously writing data on to two sub-mirrors.
Note:
a. Meta mirror is a special type of meta device made up of one or more other meta devices. Each
meta device within a meta mirror is called a sub-mirror.
b. Meta mirror can be defined by using metainit.
i. additional sub-mirrors can be added at later stage without bringing the system down or disrupting
the read and write to existing meta mirror.
ii. "metatach" used, to which attaches a sub-mirror to a meta mirror.



iii. When attached, all the data from another sub-mirror in the metamirror is automatically written to
the newly attached sub-mirror. This is called "resyncing".
iv. Once metattach is performed, the sub-mirror remain attached even when the system is rebooted.
Raid-5:
1. Provides fault tolerance
2. Data redundancy
3. Uses less space when compared with mirroring
4. Data is divided into stripes and the parity is calculated from the data, then they are stored in such
a manner parity is distributed or rotated.
5. Size of the volume = (N-1)*smallest physical volume
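Point 5 (size = (N-1) * smallest physical volume) reflects the one component's worth of space consumed by the distributed parity; a sketch with made-up numbers:

```shell
# RAID-5 usable capacity = (N - 1) * smallest component: the equivalent
# of one component is taken up by the rotating parity.
N=4
smallest=2048   # hypothetical smallest slice, in MB
usable=$(( (N - 1) * smallest ))
raw=$(( N * smallest ))
echo "usable $usable MB out of $raw MB raw"
```

Four 2048 MB components give 6144 MB usable out of 8192 MB raw; compare this to mirroring, which would deliver only half the raw space.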
System files associated with the disk suite:
1. 3 system files
2. a. /etc/lvm/md.tab
b. /etc/lvm/md.cf
c. /etc/lvm/mddb.cf
3. a. /etc/lvm/md.tab
i. used by metainit and metadb commands as a workspace
ii. each meta device may have a unique entry
iii. used only when creating metadevices, hot spares/database replicas
iv. not automatically updated by disk suite utilities
v. have little or no correspondence with actual meta devices, hot spare or replicas.
vi. Input file used by metainit, metadb,metahs
vii. The content of this file is similar to the output displayed by # metastat -p
viii. # metainit -a activates all the meta devices defined in this file
3. b. /etc/lvm/md.cf
i. automatically updated whenever the configuration is changed
ii. basically a disaster recovery file and should never be edited.
iii. the md.cf file does not get updated when hot sparing occurs.
iv. should never be used blindly after a disaster. Be sure to examine the file first.
# cat /etc/lvm/md.cf
# metadevice configuration file
# do not hand edit
d100 1 1 c1d0s4

3.c. /etc/lvm/mddb.cf
i. Created whenever the 'metadb' command is run and is used by command 'metainit' to find locations
of the meta device state database.
ii. never edit this file
iii. each meta device state database replica has a unique entry in the file.
# cat /etc/lvm/mddb.cf
#metadevice database location file do not hand edit
#driver minor_t daddr_t device id checksum
cmdk 7 16 id1,cmdk@ASEAGATE_ST32500NSSUN250G_0732B4B31T=5QE4B31T/h-4269
cmdk 7 8208 id1,cmdk@ASEAGATE_ST32500NSSUN250G_0732B4B31T=5QE4B31T/h-12461
cmdk 7 16400 id1,cmdk@ASEAGATE_ST32500NSSUN250G_0732B4B31T=5QE4B31T/h-20653

4. /kernel/drv/md.conf
a. used by the metadisk driver, when it is initially loaded.
b. the only modified field in the file is "nmd"
nmd = represents the number of meta devices supported by the driver
c. If the field is modified, perform reconfiguration boot to build meta devices. (OK boot -r)



d. If "nmd" is lowered, any meta device existing between the old number and the new number MAY
NOT PERSISTENT.
e. default:128
f. supports upto 1024
g. if larger number of meta devices are added, performance degradation will happen.
h. if larger metadevices are added, replica/state database has to be increased.
HOT SPARES:
1. Disk suite's hot spare facility automatically replaces failed sub-mirror/Raid components, provided
that a spare component is available and reserved.
2. Are temporary fixes, used until failed components are either repaired or replaced.
Hot spare pool:
1. Is a collection of slices reserved by Disk suite to be automatically substituted in case of a slice
failure in either a sub-mirror or Raid-5 meta device.
2. May be allocated, relocated or reassigned at any time unless a slice in that hot spare pool is being
used to replace damaged slice of its associated meta devices.
Operations performed on meta device:
1. mount the meta device in a directory
2. unmount the meta device
3. copy the files to the meta device
4. read and write the files from and to the meta device
5. ufsdump & ufsrestore the meta device

COMMANDS USED IN SVM:


1. # metarecover
Recover soft partition information. It scans a specified component to look for soft partition
configuration information and to regenerate the configuration.
2. # metareplace
enable/replace components of sub-mirrors/Raid-5 meta devices
3. # metaroot
sets up system files for / (root).
edits the files /etc/system and /etc/vfstab
4. # metastat
display status for meta devices or hot spare pool
5. # metasync
handle meta device resync during reboot.
6. # metattach or # metadetach
attach/detach a meta device
7. # metainit
configure the meta device
8. # metaparam
modify the parameters of the meta devices
9. # growfs
non-destructively expand a UFS file system
10. # metaclear
delete active meta devices and hot spare pools
11. # metadb



create & delete replicas of the meta device state database
12. # metahs
manage hot spares and hot spare pools
To create a replica/meta state database/metadb:
Remember:
1. Before creating a replica, make sure the existence of the dedicated slice to the replica.
2. we can create a slice with 50mb or 100mb.
3. No file system is required in that slice.
4. Once the slice is dedicated for the replica, it can't be deleted unless all the metadevices are removed.
5. We cannot remove a single replica from the slice. All the replicas available in the slice can be
deleted.
# metadb -afc3 c0d1s7
Here metadb = is the command to create the replicas
-a = to add the replica to the slice c0d1s7
-f = force, needed since we are creating the replicas for the first time
-c = specifies the count of the replicas (here 3)
Note:
1. Minimum requirement is 3 replica.
# metadb
# metadb -i
will display the information about the existing replica and its status.
# metadb -d c0d1s5
will delete all the replicas created at the slice c0d1s5.
# metadb -d -f c0d1s7
will forcefully delete all the replicas in the slice c0d1s7.
This is done when the last remaining replicas are going to be removed.
Note:
Before deleting the last replica, make sure no meta device is existing.
bash-3.00# metadb
        flags           first blk       block count
     a m  p  luo        16              8192            /dev/dsk/c1t1d0s5
     a    p  luo        8208            8192            /dev/dsk/c1t1d0s5
     a    p  luo        16              8192            /dev/dsk/c1t1d0s4
     a    p  luo        8208            8192            /dev/dsk/c1t1d0s4

How to create meta devices:


# metainit <name_of_meta_device> <no_of_stripe> <no_slice> <physical_component>
Eg:
1. # metainit d0 1 1 c0d1s4
here
metainit = to create a meta device
d0 = name of the meta device
1 = number of stripes
1 = number of physical components/slices
c0d1s4 = is the physical component
2. # metainit d1 1 2 c0t1d0s4 c0t2d0s4
here
d1 = name of the meta device
1 = number of stripes
2 = number of physical components



c0t1d0s4, c0t2d0s4 = are the physical components
so, a meta device d1 is going to have 1 stripe with 2 slices.
Note:
Make sure that the slices exist before creating a meta device.
3. # metainit d3 3 1 c0t0d0s4 1 c0t2d0s2 2 c0t3d0s1 c0t3d0s3
here
d3 = is the name of the meta device
3 = number of stripes (the three groups that follow are labelled A, B and C)
A = first stripe (1 c0t0d0s4) has 1 physical component c0t0d0s4
B = second stripe (1 c0t2d0s2) has 1 physical component c0t2d0s2
Note:
A complete hard disk connected to the target t2 is dedicated to stripe 2.
C = third stripe (2 c0t3d0s1 c0t3d0s3) has 2 physical components c0t3d0s1, c0t3d0s3
To remove the meta device:
Note:
Before clearing the meta device make sure that the meta device is umounted.
# metaclear <meta_device_name>
eg:
# metaclear d0
will remove the meta device d0
To view the status of the meta devices:
# metastat
# metastat -p
will display the status of the meta device
Outputs:
Note: To create a metadevice, displaying the meta device status, clearing the meta device
bash-3.00# metainit d0 1 1 c1d0s3

d0: Concat/Stripe is setup


bash-3.00# metastat
d0: Concat/Stripe
Size: 2104515 blocks (1.0 GB)
Stripe 0:
Device Start Block Dbase Reloc
c1d0s3 0 No Yes
Device Relocation Information:
Device Reloc Device ID
c1d0 Yes id1,cmdk@ASEAGATE_ST32500NSSUN250G_0732B4ES6R=5QE4ES6R

bash-3.00# metastat -p
d0 1 1 c1d0s3
bash-3.00# metastat -p
d10 1 3 c1d0s4 c1d0s5 c1d0s6 -i 32b
d0 1 1 c1d0s3
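For the single-stripe form shown above, a `metastat -p` line reads "name stripes slices components... [-i interlace]", which is easy to pick apart in shell. The sketch uses the saved d10 line rather than a live metastat:

```shell
# Parse a saved `metastat -p` line (single-stripe form): fields are
# name, stripe count, slice count, then the components (and optional -i).
line="d10 1 3 c1d0s4 c1d0s5 c1d0s6 -i 32b"
set -- $line
echo "$1: $2 stripe(s), $3 slice(s)"
```

Multi-stripe devices (like d20 below) repeat the per-stripe slice count, so a full parser would loop over the remaining fields.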

bash-3.00# metainit d20 2 2 c2d0s0 c2d0s1 -i 32b 2 c2d0s3 c2d0s4 -i 64b


d20: Concat/Stripe is setup
bash-3.00# metastat
d20: Concat/Stripe
Size: 8388608 blocks (4.0 GB)
Stripe 0: (interlace: 32 blocks)


Device Start Block Dbase Reloc


c2d0s0 0 No Yes
c2d0s1 0 No Yes
Stripe 1: (interlace: 64 blocks)
Device Start Block Dbase Reloc
c2d0s3 0 No Yes
c2d0s4 0 No Yes
d10: Concat/Stripe
Size: 6297480 blocks (3.0 GB)
Stripe 0: (interlace: 32 blocks)
Device Start Block Dbase Reloc
c1d0s4 0 No Yes
c1d0s5 0 No Yes
c1d0s6 0 No Yes
d0: Concat/Stripe
Size: 2104515 blocks (1.0 GB)
Stripe 0:
Device Start Block Dbase Reloc
c1d0s3 0 No Yes
Device Relocation Information:
Device Reloc Device ID
c2d0 Yes id1,cmdk@ASEAGATE_ST32500NSSUN250G_0732B4B3Q1=5QE4B3Q1
c1d0 Yes id1,cmdk@ASEAGATE_ST32500NSSUN250G_0732B4ES6R=5QE4ES6R

bash-3.00# newfs /dev/md/rdsk/d0

newfs: construct a new file system /dev/md/rdsk/d0: (y/n)? y


Warning: 12286 sector(s) in last cylinder unallocated
/dev/md/rdsk/d0: 2104514 sectors in 140 cylinders of 240 tracks, 63sectors
1027.6MB in 24 cyl groups (6 c/g, 44.30MB/g, 10688 i/g)
super-block backups (for fsck -F ufs -o b=#) at:
32, 90816, 181600, 272384, 363168, 453952, 544736, 635520, 726304, 817088,
1271008, 1361792, 1452576, 1543360, 1634144, 1724928, 1815712, 1906496,
1997280, 2088064

To Mount the metadeivce:


bash-3.00# mount /dev/md/dsk/d0 /mnt/meta0
Example entry to mount the meta device permanently, by editing the file /etc/vfstab:
/dev/md/dsk/d0 /dev/md/rdsk/d0 /mnt/meta0 ufs - yes -
Output:
bash-3.00# metastat -p

d20 2 2 c2d0s0 c2d0s1 -i 32b \


2 c2d0s3 c2d0s4 -i 64b
d10 1 3 c1d0s4 c1d0s5 c1d0s6 -i 32b
d0 1 1 c1d0s3

To clear the meta device:


bash-3.00# metaclear d10
d10: Concat/Stripe is cleared
bash-3.00# metastat -p

d20 2 2 c2d0s0 c2d0s1 -i 32b \


2 c2d0s3 c2d0s4 -i 64b
d0 1 1 c1d0s3

Creating a mirror:
1. A mirror is a meta device composed of one or more sub-mirrors.
Sub-mirror:
a. Is made up of one or more striped or concatenated meta deices.
b. Each meta device within a meta mirror is called a sub mirror.
2. Mirroring data provides us with maximum data availability by maintaining multiple copies of our
data.
3. The system must contain at least 3 state database replica before creating mirrors.
4. Any file system including / (root), swap and /usr or any application such as database can use a
mirror.



5. An error on a component does not cause the entire mirror to fail.
6. To get maximum protection & performance, place mirrored (mirrors) meta devices on different
physical components (disks) & on different disk controllers. Since the primary purpose of mirroring is
to maintain availability of data, defining mirrored meta devices on the same disk is NOT
RECOMMENDED.
7. When mirroring existing file system/data, be sure that the existing data is contained in the submirror. When second sub-mirror is subsequently attached, data from the initial sub-mirror is copied
on to the attached sub-mirror.
What to do?
1. Create a simple meta device ( 1 stripe with 1 slice)
2. Create another simple meta device ( 1 stripe with 1 slice)
3. Create a mirror meta device and associate with one meta device (adding first way sub-mirror)
4. Attach another meta device with mirror meta device (adding second sub-mirror)
5. Mount the mirrored meta device
6. Access the mount point.
How to do?
1. # metainit d10 1 1 c0t1d0s3
2. # metainit d20 1 1 c0t2d0s3
3. # metainit d30 -m d10
here d30 = main mirror, d10 = sub-mirror
This converts d10 into the first sub-mirror of the new mirror d30.
4. # metattach d30 d20
attaching d20 to d30.
d20 is the second sub-mirror.
5. # metastat | grep %
will display the sync status.
6. # newfs /dev/md/rdsk/d30
# mkdir /mirror
# mount /dev/md/dsk/d30 /mirror
# cd /mirror
Note:
1. Sync will happen after attaching to the mirror.
2. Slices should have the same size & geometry; if not, a size greater than the source is recommended.
How to break the mirror:
1. Detach the sub-mirror from the mirror, which must be unmounted.
2. Clear the mirror and sub-mirror meta devices.
3. Mount the individual slice, the same data will be available in both the physical components.
How to do?
1. # metadetach d30 d20
# metadetach <main-mirror> <sub-mirror>
will remove d20 from the meta device d30.
2. # metaclear d20
Clears/removes the meta device d20



3. # metaclear -r d30
removes both the meta devices d30 and d10.
4. # mkdir /d1
# mkdir /d2
# mount /dev/dsk/c0t1d0s3 /d1
# mount /dev/dsk/c0t2d0s3 /d2
# ls /d1 ; ls /d2
Displays the contents of /d1 and /d2 respectively. The contents remain the same in both slices and
mount points.
Outputs:
bash-3.00# metainit d30 1 1 c1d0s4
d30: Concat/Stripe is setup
bash-3.00# metainit d40 1 1 c2d0s6
d40: Concat/Stripe is setup
bash-3.00# metainit d35 -m d30
d35: Mirror is setup
bash-3.00# metattach d35 d40
d35: submirror d40 is attached
bash-3.00# metastat | grep %
Resync in progress: 45 %
Done
bash-3.00# metastat | grep %
bash-3.00#
bash-3.00# metastat
d35: Mirror
Submirror 0: d30
State: Okay
Submirror 1: d40
State: Okay
Pass: 1
Read option: roundrobin (default)
Write option: parallel (default)
Size: 2104515 blocks (1.0 GB)
d30: Submirror of d35
State: Okay
Size: 2104515 blocks (1.0 GB)
Stripe 0:
Device Start Block Dbase State Reloc Hot Spare
c1d0s4 0 No Okay Yes

Replacing failed hard disk drives:


1. If any sub-mirror fails, data can still be accessed through the mirror device.
2. Even if we remove the disk which contains the 2nd sub-mirror, we can still access the data.
3. But the state of the 2nd sub-mirror will remain OKAY in the output of the 'metastat' command until
a new file is created or a modification is performed in the mirror. Only after changes will the state change to
'MAINTENANCE'.
Replacing the failed disk with different target:
# metareplace <mirror_device> <failed_slice> <new_slice>
# metareplace d30 c0t1d0s3 c0t3d0s5
Replacing the failed disk in the same target/destination:
# metareplace -e d30 c0t1d0s3


Soft partition:
1. Dividing one logical component (meta device) into many soft partitions. It can be laid out over
physical disk/slices.
# metainit d5 1 1 c0t11d0s6
Consider that the size of c0t11d0s6 is 10 GB;
then the size of the meta device d5 is also 10 GB.
# metainit d61 -p d5 1g
metainit = to create a soft partition
-p = to create a soft partition
d61 = the new meta device going to be created
d5 = the existing meta device with 10 GB of size
1g = the size of the new meta device d61
# metainit d62 -p d5 1g
# metaclear d61
Removes the soft partition d61 only.
# metaclear -p d5
Will remove all soft partitions from d5.
Soft partition is a means of dividing a disk or volume into as many partitions as needed, overcoming
the traditional limit of eight slices (0-7) per disk. This is done by creating logical partitions within
physical disk slices or logical volumes. Soft partitions differ from hard disk slices created using the
'format' command because soft partitions can be non-contiguous, whereas hard disk slices are
contiguous. This non-contiguity is why soft partitions can cause I/O performance degradation.
Note:
1. No automatic problem detection is available in SVM.
2. The SVM s/w does not detect problems with state database/replica until there is a change to an
existing SVM configuration and an update to the database replicas is required. If insufficient state
database replicas are available, you'll need to boot to single-user mode and delete/replace enough of
the corrupted/missing database replicas to achieve a quorum.
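The quorum rule above is simple majority arithmetic: with N state database replicas, more than half must be intact for the system to boot cleanly. A quick sketch using plain shell arithmetic (no SVM commands needed):

```shell
# Replica quorum arithmetic: a majority (more than half) of the state
# database replicas must survive. With 6 replicas (e.g. metadb -afc3
# on two slices), losing 3 leaves exactly half -- not a majority --
# and single-user repair is needed.
replicas=6
quorum=$(( replicas / 2 + 1 ))   # smallest majority
echo "replicas=$replicas quorum=$quorum"
```

This is why the notes recommend spreading an odd-ish, generous number of replicas across several disks and controllers.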
bash-3.00# cat /etc/lvm/md.cf

# metadevice configuration file


# do not hand edit
d5 -m d0 d10 1
d0 1 1 c2t2d0s0
d10 1 1 c2t14d0s0
d1000 -r c2t4d0s1 c2t14d0s1 c2t12d0s1 -k -i 32b
d20 -r c2t4d0s0 c2t14d0s3 c2t12d0s0 -k -i 32b
d502 -p d500 -o 2097216 -b 204800
d500 1 7 c2t2d0s1 c2t2d0s3 c2t2d0s4 c2t2d0s5 c2t14d0s4 c2t14d0s5 c2t14d0s6 -i32b
d501 -p d500 -o 32 -b 2097152

Raid-5:
# metainit <meta_device_name> -r <component1> <component2> <component3>
# metainit d100 -r c0t0d0s4 c0t1d0s4 c0t2d0s4
-r = specifies that the configuration is RAID level 5.
# metastat | grep %



# metastat
Raid-5:
d1000: RAID
State: Okay
Interlace: 32 blocks
Size: 2031616 blocks (992 MB)
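The 992 MB figure matches the standard RAID-5 capacity formula, usable = (N - 1) × smallest component, since one component's worth of space holds the distributed parity. Assuming three 1015808-block (~496 MB) slices, which is what the size implies:

```shell
# RAID-5 usable capacity: (N - 1) x smallest component, in 512-byte
# blocks. Component size of 1015808 blocks is inferred from the
# metastat output above, not stated directly in the notes.
components=3
component_blocks=1015808
usable=$(( (components - 1) * component_blocks ))
mb=$(( usable * 512 / 1024 / 1024 ))
echo "$usable blocks ($mb MB)"
```

This reproduces the "2031616 blocks (992 MB)" metastat reports for d1000.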

bash-3.00# cat /etc/lvm/md.cf


# metadevice configuration file
# do not hand edit
d5 -m d0 d10 1
d0 1 1 c2t2d0s0
d10 1 1 c2t14d0s0
d1000 -r c2t4d0s1 c2t14d0s1 c2t12d0s1 -k -i 32b
d20 -r c2t4d0s0 c2t14d0s3 c2t12d0s0 -k -i 32b
d502 -p d500 -o 2097216 -b 204800
d500 1 7 c2t2d0s1 c2t2d0s3 c2t2d0s4 c2t2d0s5 c2t14d0s4 c2t14d0s5 c2t14d0s6 -i32b
d501 -p d500 -o 32 -b 2097152
Expanding a file system:
Note:
1. Once a file system is expanded it cannot be shrunk.
2. Aborting a 'growfs' command may cause temporary loss of free space. The space can be recovered
using 'fsck' command after the file system is unmounted using 'umount'.
3. The 'growfs' command non-destructively expands a file system up to the size of the file system's physical
device or meta device.
4. 'growfs' write locks the file system when expanding a mounted file system. Access times are not kept
while the file system is write-locked. The 'lockfs' command can be used to check the file system lock
status and unlock the file system in the unlikely event that 'growfs' aborts without unlocking the FS.
5. We can perform,
a. expanding a non-metadevice component
b. expanding a mounted file system
c. expanding a mounted file system to an existing meta mirror
d. expanding an un-mounted file system
e. expanding a mounted file system using stripes.
6. 'growfs'
a. attach the disk space
b. grow the space
TYPE 1.
# newfs /dev/rdsk/c0t1d0s3
# mkdir /expand
# mount /dev/dsk/c0t1d0s3 /expand
# metainit -f d100 1 1 c0t1d0s3
# umount /expand
# mount /dev/md/dsk/d100 /expand
# metattach d100 c0t10d0s6
New slice6 is attached to d100
# growfs -M /expand /dev/md/rdsk/d100
The file system is now expanded to the size of d100.
Growing a mirror:
1. Attach each individual component to each sub-mirror
2. Grow the mirror



# metainit d21 1 1 c0t10d0s3 => 400mb
# metainit d22 1 1 c0t11d0s3 => 400mb
# metainit d23 -m d21 => one-way mirror
# metattach d23 d22 => two-way mirror
# newfs /dev/md/rdsk/d23
# mkdir /mirror
# mount /dev/md/dsk/d23 /mirror
# metattach d21 c0t10d0s4 => attaching disk space 400mb to sub-mirror
# metattach d22 c0t11d0s4 => attaching disk space 400mb to sub-mirror
# growfs -M /mirror /dev/md/rdsk/d23
# df -h
=> mirror will be expanded to 800 MB.
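The 800 MB figure is just concat arithmetic: each sub-mirror grows to 400 + 400 MB when its slice is attached, and the mirror can only be as large as its smallest sub-mirror. A sketch:

```shell
# Mirror growth bookkeeping (sizes in MB, from the example above).
d21=$(( 400 + 400 ))   # sub-mirror after metattach d21 c0t10d0s4
d22=$(( 400 + 400 ))   # sub-mirror after metattach d22 c0t11d0s4
# The mirror is limited by its smallest sub-mirror.
mirror=$(( d21 < d22 ? d21 : d22 ))
echo "d23 mirror: ${mirror} MB"
```

If one sub-mirror were grown with a smaller slice, the mirror (and hence growfs) would be limited to that smaller total.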
Growing the RAID-5 device:
# metainit d75 -r c0t10d0s3 c0t10d0s4 c0t11d0s3 => each slice with 400mb
# newfs /dev/md/rdsk/d75
# mkdir /raid5
# mount /dev/md/dsk/d75 /raid5
# metattach d75 c0t11d0s6
The slice size is 500 MB, but only 400 MB of it will be used.
# growfs -M /raid5 /dev/md/rdsk/d75
# df -h
Note:
The newly attached slice will hold only data; it won't be used for storing parity information.
-M = (directory name)
The file system to be expanded is mounted on directory name.
File system locking will be used.
Outputs:
Growing the file system:
bash-3.00# df -h
Filesystem size used avail capacity Mounted on

/dev/md/dsk/d5 466M 1.1M 419M 1% /mnt/mirror


/dev/md/dsk/d50 935M 1.0M 887M 1% /mnt/concat_grow

bash-3.00# pwd
/mnt/concat_grow
bash-3.00# growfs -M /mnt/concat_grow /dev/md/rdsk/d50

/dev/md/rdsk/d50: Unable to find Media type. Proceeding with system determined parameters.
/dev/md/rdsk/d50: 3047424 sectors in 93 cylinders of 128 tracks, 256 sectors
1488.0MB in 47 cyl groups (2 c/g, 32.00MB/g, 15040 i/g)
super-block backups (for fsck -F ufs -o b=#) at:
32, 65824, 131616, 197408, 263200, 328992, 394784, 460576, 526368, 592160,
2434336, 2500128, 2565920, 2631712, 2697504, 2763296, 2829088, 2894880,
2960672, 3026464

bash-3.00# pwd
/mnt/concat_grow

Growing the size for raid-5 component while it's mounted, w/o loss of data:
bash-3.00# df -h

/dev/md/dsk/d5 466M 1.1M 419M 1% /mnt/mirror


/dev/md/dsk/d50 1.4G 1.5M 1.3G 1% /mnt/concat_grow


/dev/md/dsk/d1000 935M 1.0M 877M 1% /mnt/raid5-2

# metastat -p
d5 -m d0 d10 1
d0 1 1 c2t2d0s0
d10 1 1 c2t14d0s0

# metattach d1000 c2t11d0s1


d1000: component is attached
# growfs -M /mnt/raid5-2 /dev/md/rdsk/d1000

/dev/md/rdsk/d1000: Unable to find Media type. Proceeding with system determined parameters.
/dev/md/rdsk/d1000: 3047424 sectors in 93 cylinders of 128 tracks, 256 sectors
1488.0MB in 47 cyl groups (2 c/g, 32.00MB/g, 15040 i/g)
super-block backups (for fsck -F ufs -o b=#) at:
32, 65824, 131616, 197408, 263200, 328992, 394784, 460576, 526368, 592160,
2434336, 2500128, 2565920, 2631712, 2697504, 2763296, 2829088, 2894880,
2960672, 3026464

bash-3.00# df -h

/dev/md/dsk/d5 466M 1.1M 419M 1% /mnt/mirror


/dev/md/dsk/d50 1.4G 1.5M 1.3G 1% /mnt/concat_grow
/dev/md/dsk/d1000 1.4G 1.5M 1.3G 1% /mnt/raid5-2

Growing the space to a mirror:


bash-3.00# df -h

/dev/md/dsk/d5 466M 1.1M 419M 1% /mnt/mirror


/dev/md/dsk/d50 1.4G 1.5M 1.3G 1% /mnt/concat_grow
/dev/md/dsk/d1000 1.4G 1.5M 1.3G 1% /mnt/raid5-2

bash-3.00# metastat -p

d5 -m d0 d10 1
d0 1 1 c2t2d0s0
d10 1 1 c2t14d0s0
d1000 -r c2t4d0s1 c2t14d0s1 c2t12d0s1 c2t11d0s1 -k -i 32b -o 3
d20 -r c2t4d0s0 c2t14d0s3 c2t12d0s0 -k -i 32b
d50 3 1 c2t4d0s3 \
1 c2t4d0s4 \
1 c2t4d0s5
d502 -p d500 -o 2097216 -b 204800
d500 1 7 c2t2d0s1 c2t2d0s3 c2t2d0s4 c2t2d0s5 c2t14d0s4 c2t14d0s5 c2t14d0s6 -i32b
d501 -p d500 -o 32 -b 2097152

# metattach d0 c2t11d0s3
d0: component is attached
# metattach d10 c2t11d0s4
d10: component is attached
# growfs -M /mnt/mirror /dev/md/rdsk/d5

/dev/md/rdsk/d5: Unable to find Media type. Proceeding with system determined parameters.
/dev/md/rdsk/d5: 2031616 sectors in 62 cylinders of 128 tracks, 256 sectors
992.0MB in 31 cyl groups (2 c/g, 32.00MB/g, 15040 i/g)
super-block backups (for fsck -F ufs -o b=#) at:
32, 65824, 131616, 197408, 263200, 328992, 394784, 460576, 526368, 592160,
1381664, 1447456, 1513248, 1579040, 1644832, 1710624, 1776416, 1842208,
1908000, 1973792

bash-3.00# df -h

Filesystem size used avail capacity Mounted on

/dev/md/dsk/d5 935M 1.1M 887M 1% /mnt/mirror


/dev/md/dsk/d50 1.4G 1.5M 1.3G 1% /mnt/concat_grow


/dev/md/dsk/d1000 1.4G 1.5M 1.3G 1% /mnt/raid5-2

ROOT MIRRORING:
WHAT TO DO?
1. Ensure that the alternate disk has equal geometry & size.
2. Take backup of /etc/system and /etc/vfstab file.
3. Copy VTOC from root (booting) disk to the alternate disk.
4. Ensure that the state database is created.
5. Convert the root slice as a logical component forcefully.
6. Create another metadevice for duplicating root slice.
7. Convert the swap slice as a logical component forcefully.
8. Create another metadevce for duplicating the swap slice.
9. Associate first sub-mirror (for root) to mirror root.
10. Associate first sub-mirror (for swap) to mirror swap.
11. Update the system & vfstab file by running 'metaroot' command.
12. Reboot the system.
13. Associate the second sub-mirror to mirror root.
14. Associate the second sub-mirror to mirror swap.
15. Install boot block or grub in the alternate root slice
16. See the physical path for the alternate disk
17. Set an alias name at the OK prompt.
18. Set boot sequence in OK prompt.
How to do?
1. # format
To create the slices manually.
2. # cp /etc/system /etc/system.orig
# cp /etc/vfstab /etc/vfstab.orig
3. # prtvtoc /dev/rdsk/c0t0d0s2 | fmthard -s - /dev/rdsk/c0t12d0s2
Note: fmthard -> populates the label (VTOC) on the new hard disk drive
4. # metadb -afc3 c0t8d0s7 c0t10d0s7 c0t12d0s7
(if the replicas are existing, this step can be avoided)
5. # metainit -f d5 1 1 c0t0d0s0
Converting forcefully the root slice as a metadevice (primary root slice)
6. # metainit d10 1 1 c0t12d0s0
Creating another metadevice for root (secondary)
7. # metainit -f d25 1 1 c0t0d0s1
Forcefully converting the swap slice as a metadevice (primary swap slice)
8. # metainit d30 1 1 c0t12d0s1
Creating another metadevice for swap (secondary)
9. # metainit d15 -m d5
Associating d5 with d15
Here d15 = main mirror for root
10. # metainit d35 -m d25
Associate d35 with d25
Here d35 = main mirror for swap
11. # metaroot d15
a) 'metaroot' edits the file /etc/system and /etc/vfstab so that the system may be booted with the
root filesystem on a meta device.
b) 'metaroot' may also be used to edit the files so that the system may be booted with root file
system on a conventional disk device.



c) Observe the changes to the files /etc/vfstab and /etc/system.
12. # init 6
Note:
Make sure the sync is completed before rebooting the system by executing the command
# metastat | grep %
13. # metattach d15 d10
For root adding sub-mirror
14. # metattach d35 d30
For swap adding sub-mirror
15. # cd /usr/platform/`uname -m`/lib/fs/ufs
# installboot bootblk /dev/rdsk/c0t12d0s0
Installing the boot block to the SPARC machine
# installgrub -fm /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1d0s0
Installing the grub in the X86 machine.
16. # ls -l /dev/dsk/c0t12d0s0
Will display the physical path of the logical device.
Please make a note of the physical path.
17. OK nvalias <alias_name> <physical_path>
18. OK setenv boot-device <root_disk_alias_name> <alternate_root_disk_alias_name>
BREAKING THE MIRROR (ROOT):
# metadetach d15 d10
d15: submirror d10 is detached
# metaroot /dev/dsk/c0t0d0s0
/dev/dsk/c0t0d0s0 = raw disk of source disk which is running with OS.
will revert the /etc/system & /etc/vfstab to the default status.
# init 6
# metaclear -r d15
d15: Mirror is cleared
d5: Concat/Stripe is cleared
# metaclear d10
d10: Concat/Stripe is cleared
RAID-5:
# metainit <meta_device_name> -r <component1> <component2> <component3>
# metainit d100 -r c0t0d0s4 c0t1d0s4 c0t2d0s4
-r = specifies that the configuration is RAID level 5.
# metastat | grep %
HOT SPARE:
1. Hot spare facility included with Disk Suite allows automatic replacement of failed sub-mirror/
RAID-5 components, provided spare components are available & reserved.
2. Component replacement & re-syncing of failed components is automatic, requiring no administrator intervention.
3. A hot spare is a component that is running (but not being used) which can be substituted for a
broken component in a sub-mirror of two or three way meta mirror or RAID-5 device.
4. Failed components in a one-way meta mirror cannot be replaced by a hot spare.
5. Components designated as hot spares cannot be used in sub-mirrors or another meta device in the
'md.tab' file. They must remain ready for immediate use in the event of a component failure.



Hot spare states:
1. Has 3 states
a. Available
b. In-use
c. Broken
a. Available: These hot spares are running and ready to accept data, but are not currently being
written to or read from.
b. In-use: 'In-use' hot spares are currently being written to and read from.
c. Broken:
i. 'Broken' hot spares are out of service.
ii. A hot spare is placed in the broken state when an I/O error occurs.
2. The number of hot spare pools is limited to 1000.
Defining Hot spare:
1. Hot spare pools are named as 'hspnnn'
where 'nnn' is a number in the range 000-999
2. A metadevice cannot be configured as a hot spare.
3. Once the hot spare pools are defined and associated with a sub-mirror, the hot spares are
"available" for use. If a component failure occurs, DiskSuite searches through the list of hot spares in
the assigned pool and selects the first "available" component that is equal or greater in disk capacity.
4. If a hot spare of adequate size is found, the hot spare state changes to "in-use" and a resync
operation is automatically performed. The resync operation brings the hot spare into sync with other
sub-mirror or RAID-5 components.
5. If a component of adequate size is "not found" in the list of hot spares, the sub-mirror that failed is
considered "erred" and that portion of the sub-mirror no longer replicates the data.
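The selection rule in points 3-5 (first "Available" spare of equal or greater capacity) can be sketched in shell. The pool listing is simulated here; on a real system it would come from `metahs -i`:

```shell
# Simulated hot spare pool (device, status, length in blocks) and a
# failed component of 1015808 blocks. Pick the first Available spare
# that is large enough -- the rule described in the notes.
failed_blocks=1015808
pick=$(printf '%s\n' \
    'c0t9d0s0 Broken    1027216' \
    'c0t9d0s1 Available 204800'  \
    'c0t9d0s3 Available 1027216' \
    'c0t9d0s4 Available 1027216' |
  awk -v need="$failed_blocks" \
    '$2 == "Available" && $3+0 >= need+0 { print $1; exit }')
echo "selected hot spare: $pick"
```

Note the broken spare and the too-small spare are both skipped, even though they appear earlier in the pool order.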
Hot spare conditions to avoid:
1. Associating hot spares of the wrong size with sub-mirror. This condition occurs when hot spare
pools are defined and associated with a sub-mirror & none of the hot spares in the hot spare pool are
equal to or greater than the smallest component in the sub-mirror.
2. Having every hot spare within the hot spare pool in use. In this case immediate action is
required:
Below two possible solutions or actions can be taken to correct this issue:
i. Add additional hot spares
ii. Repair some of the components that have been hot-spare replaced
Note:
If all hot spares are in use and a sub-mirror fails due to errors, that portion of the mirror will no longer
be replicated.
Manipulating hot spare pools:
1. # metahs
= adding hot spares to hot spare pools
= deleting hot spares from hot spare pool
= replacing hot spares in hot spare pools
= enabling hot spare
= checking the status of the hot spare
Adding a hot spare / Creating a hot spare pool:
1. # metainit hsp000 c0t2d0s5
Creates a hot spare device with the name 'hsp000'
2. # metainit <hot_spare_pool_name> <component1> <component2>



# metainit hsp001 c0t1d0s4 c0t11d0s4
(or)
# metahs -a hsp001 c0t1d0s4 c0t11d0s4
-a = to add a hot spare
-i = to obtain the information
Deleting hot spare:
1. Hot spares can be deleted from any or all the hot spare pools to which they have been associated.
2. When a hot spare is deleted from a hot spare pool, the positions of the remaining hot spares change
to reflect the new order. For example, if the second of 3 hot spares in a hot spare pool is deleted,
the 3rd hot spare moves to the second position.
3. # metahs -d hsp000 c0t11d0s4
Removes the slice from the hot spare pool
-d = to delete
4. Removing hot spare pool:
Note:
Before removing a hot spare pool, remove all hot spares from the pool using 'metahs' with the -d
option, providing the hot spare names.
# metahs -d <pool_name> <spare_name>
-d = deletes only the spare
# metahs -d <hsp_name>
To delete the hot spare pool
Replacing hot spare:
Note:
1. Hot spares that are in the 'In-use' state cannot be replaced by other hot spare.
2. The order of hot spares in the hot spare pools is NOT CHANGED when replacement occurs.
3. # metahs -r <hsp_pool_name> <old_slice> <new_slice>
# metahs -r hsp000 c0t10d0s4 c0t11d0s4
c0t11d0s4 replaces c0t10d0s4
Associating the hot spare pool with sub-mirror/Raid-5 metadevice:
1. # metaparam
modifies the parameters of the meta devices.
# metaparam -h <hot_spare_pool_name> <sub-mirror/raid-5>
# metaparam -h hsp000 d101
# metaparam -h hsp000 d102
Note:
Where d101, d102 sub-mirrors of d103 mirror.
where
-h = specifies the hot spare pool to be used by a meta device
Disassociating the hot spare pool with sub-mirror/raid-5 metadevice:
# metaparam -h none <sub-mirror/raid-5 meta device>
# metaparam -h none d101
# metaparam -h none d102
where,
'none' disassociates the meta device from the hot spare pool associated with it.
# metahs -d hsp000 c0t2d0s5 c0t2d0s6
# metahs -d hsp000
# metaclear d100



# metadetach d15 d12
# metaclear d12
# metaclear -r d15
To view the status for hot spare pool:
# metahs -i
Note:
Suppose the failed disk is going to be replaced, to free up the hot spare:
# metadevadm
updates the meta device information
-u = updates the device ID recorded for the specified disk.
This option is used when a disk drive has had its device ID changed during a firmware upgrade or due
to changing the storage controller.
-v = execute in verbose mode. Has no effect when used with the -u option; verbose is the default.
# metadevadm -v -u <hot_spare_component>
Updating the device information.
# metadevadm -v -u c0t11d0s4
# metareplace -e d103 c0t10d0s3
To replace in the same location
1. Now hot spare will be available
2. Status of the spare disk will change from 'inuse' to 'available'
Outputs:
bash-3.00# metahs -a hsp001 c0t9d0s0 c0t9d0s1 c0t9d0s3 c0t9d0s4
hsp001: Hotspares are added
bash-3.00# metahs -i
hsp001: 4 hot spares
Device Status Length Reloc
c0t9d0s0 Available 1027216 blocks Yes
c0t9d0s1 Available 1027216 blocks Yes
c0t9d0s3 Available 1027216 blocks Yes
c0t9d0s4 Available 1027216 blocks Yes
Device Relocation Information:
Device Reloc Device ID
c0t9d0 Yes id1,sd@SFUJITSU_MAG3182L_SUN18G_01534930
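The Length column in `metahs -i` is in 512-byte disk blocks, so 1027216 blocks works out to about 501 MB — the same size metastat reports elsewhere in these notes for slices of this geometry. A quick conversion:

```shell
# Convert a 512-byte-block count (as shown by metahs -i / metastat)
# to megabytes with integer arithmetic.
blocks=1027216
mb=$(( blocks * 512 / 1024 / 1024 ))
echo "${blocks} blocks = ${mb} MB"
```

The same conversion explains every "blocks (NNN MB)" pair in the metastat listings.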

bash-3.00# metastat -p
d5 -m d0 d10 1
d0 1 1 c0t8d0s0
d10 1 1 c0t10d0s0
d15 1 1 c0t12d0s0
hsp001 c0t9d0s0 c0t9d0s1 c0t9d0s3 c0t9d0s4
hsp001: 4 hot spares
Device Status Length Reloc
c0t9d0s0 Available 1027216 blocks Yes
c0t9d0s1 Available 1027216 blocks Yes
c0t9d0s3 Available 1027216 blocks Yes
c0t9d0s4 Available 1027216 blocks Yes

bash-3.00# metahs -a hsp001 c0t9d0s5


hsp001: Hotspare is added
bash-3.00# metastat -p

d5 -m d0 d10 1
d0 1 1 c0t8d0s0
d10 1 1 c0t10d0s0
d15 1 1 c0t12d0s0
hsp001 c0t9d0s0 c0t9d0s1 c0t9d0s3 c0t9d0s4 c0t9d0s5


bash-3.00# metahs -d hsp001 c0t9d0s5


hsp001: Hotspare is deleted
bash-3.00# metahs -i
hsp001: 4 hot spares
Device Status Length Reloc
c0t9d0s0 Available 1027216 blocks Yes
c0t9d0s1 Available 1027216 blocks Yes
c0t9d0s3 Available 1027216 blocks Yes
c0t9d0s4 Available 1027216 blocks Yes
Device Relocation Information:
Device Reloc Device ID
c0t9d0 Yes id1,sd@SFUJITSU_MAG3182L_SUN18G_01534930_

bash-3.00# metahs -r hsp001 c0t9d0s3 c0t9d0s5


hsp001: Hotspare c0t9d0s3 is replaced with c0t9d0s5
bash-3.00# metahs -i
hsp001: 4 hot spares
Device Status Length Reloc
c0t9d0s0 Available 1027216 blocks Yes
c0t9d0s1 Available 1027216 blocks Yes
c0t9d0s5 Available 1027216 blocks Yes
c0t9d0s4 Available 1027216 blocks Yes
Device Relocation Information:
Device Reloc Device ID
c0t9d0 Yes id1,sd@SFUJITSU_MAG3182L_SUN18G_01534930

bash-3.00# metahs -d hsp001


metahs: ent250: hsp001: hotspare pool is busy
bash-3.00# metahs -d hsp001 c0t9d0s0 c0t9d0s1 c0t9d0s5 c0t9d0s4
hsp001: Hotspares are deleted
bash-3.00# metahs -d hsp001
hsp001: Hotspare pool is cleared
bash-3.00# metahs -i
metahs: ent250: no hotspare pools found
bash-3.00# metaparam -h hsp005 d10
bash-3.00# metastat -p
d5 -m d0 d10 1
d0 1 1 c0t8d0s0 -h hsp005
d10 1 1 c0t10d0s0 -h hsp005
d15 1 1 c0t12d0s0
hsp005 c0t9d0s0 c0t9d0s1 c0t9d0s3 c0t9d0s4

bash-3.00# metainit d100 -r c0t8d0s1 c0t10d0s1 c0t12d0s1


d100: RAID is setup
bash-3.00# metaparam -h hsp005 d100
bash-3.00# metastat -p

d5 -m d0 d10 1
d0 1 1 c0t8d0s0 -h hsp005
d10 1 1 c0t10d0s0 -h hsp005
d100 -r c0t8d0s1 c0t10d0s1 c0t12d0s1 -k -i 32b -h hsp005
d15 1 1 c0t12d0s0
hsp005 c0t9d0s0 c0t9d0s1 c0t9d0s3 c0t9d0s4

bash-3.00# metastat | more

d5: Mirror
Submirror 0: d0
State: Okay
Submirror 1: d10
State: Okay
Pass: 1
Read option: roundrobin (default)
Write option: parallel (default)

Size: 1015808 blocks (496 MB)

d0: Submirror of d5
State: Okay
Hot spare pool: hsp005
Size: 1015808 blocks (496 MB)
Stripe 0:
Device Start Block Dbase State Reloc Hot Spare
c0t8d0s0 0 No Okay Yes
d10: Submirror of d5
State: Okay
Hot spare pool: hsp005
Size: 1015808 blocks (496 MB)
Stripe 0:
Device Start Block Dbase State Reloc Hot Spare
c0t10d0s0 0 No Okay Yes
d100: RAID
State: Okay
Hot spare pool: hsp005
Interlace: 32 blocks
Size: 2031616 blocks (992 MB)
(Output truncated)

bash-3.00# metaparam -h none d100


bash-3.00# metastat -p
d5 -m d0 d10 1
d0 1 1 c0t8d0s0 -h hsp005
d10 1 1 c0t10d0s0 -h hsp005
d100 -r c0t8d0s1 c0t10d0s1 c0t12d0s1 -k -i 32b
d15 1 1 c0t12d0s0
hsp005 c0t9d0s0 c0t9d0s1 c0t9d0s3 c0t9d0s4
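The `metastat -p` format makes it easy to answer questions like "which metadevices still reference hsp005?" with awk. A sketch using the listing above as simulated input (on Solaris you would pipe `metastat -p` directly):

```shell
# Simulated `metastat -p` output, matching the listing above
# (after d100 was disassociated from hsp005).
metastat_p() {
  cat <<'EOF'
d5 -m d0 d10 1
d0 1 1 c0t8d0s0 -h hsp005
d10 1 1 c0t10d0s0 -h hsp005
d100 -r c0t8d0s1 c0t10d0s1 c0t12d0s1 -k -i 32b
d15 1 1 c0t12d0s0
hsp005 c0t9d0s0 c0t9d0s1 c0t9d0s3 c0t9d0s4
EOF
}

# Print the metadevices carrying "-h hsp005", space-separated.
users=$(metastat_p | awk '/-h hsp005/ { printf "%s%s", sep, $1; sep=" " }')
echo "devices using hsp005: $users"
```

Only d0 and d10 match, confirming that `metaparam -h none d100` removed the pool from the RAID-5 device.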
Output - truncated:

# metastat
d0: Submirror of d5
State: Resyncing
Hot spare pool: hsp005
Size: 1015808 blocks (496 MB)
Stripe 0:
Device Start Block Dbase State Reloc Hot Spare
c0t8d0s0 0 No Resyncing Yes c0t9d0s1
d10: Submirror of d5
State: Okay
Hot spare pool: hsp005
Size: 1015808 blocks (496 MB)
Stripe 0:
Device Start Block Dbase State Reloc Hot Spare
c0t10d0s0 0 No Okay Yes

Additional Information on SVM:


1. Example entries to the file /etc/lvm/md.tab
## for raid-0 striping
d80 1 3 c0t6d0s7 c0t4d0s7 c0t3d0s7
or
d80 1 3 /dev/dsk/c0t6d0s7 /dev/dsk/c0t4d0s7 /dev/dsk/c0t3d0s7
## for raid-1 mirroring
d0 1 1 c0t4d0s5
d10 1 1 c0t6d0s5
d5 -m d0
## for raid-5 striping with parity
d100 -r c0t2d0s3 c0t3d0s3 c0t5d0s3



## for hot spare
d0 1 1 c0t4d0s5 -h hsp001
d10 1 1 c0t6d0s5 -h hsp001
### for creating replicas
mddb01 -c3 c0t0d0s7
2. To update the file using a command:
a. for replica:
# metadb -af mddb01
b. for all meta devices:
# metainit -a
c. Only for a selective meta device
# metainit d10
3. To delete the root mirroring:
for eg:
# metadetach d120 d100
# metaroot c0t0d0s0
(Will change the entries to the file /etc/vfstab and /etc/system)
# init 6
# metaclear <clear_the_metadevices>
4. Soft partition:
a. Is a means of dividing a disk or volume into as many partitions as needed, overcoming the
traditional limit of eight slices (0-7). This is done by creating logical partitions within physical disk slices or logical
volumes.
b. No automatic problem detection
c. The SVM s/w does not detect problems with the state database/replicas until there is a change to an existing
SVM configuration and an update to the database replicas is required. If insufficient state database
replicas are available, you'll need to boot in single-user mode and delete/replace enough of the
corrupted/missing database replicas to achieve a quorum.
d. Soft partitions differ from hard slices created using the 'format' command because soft partitions can
be non-contiguous, whereas a hard slice is contiguous. Therefore soft partitions can cause I/O
performance degradation.
Outputs: Examples for editing the file /etc/lvm/md.tab
CREATING THE MIRROR BY EDITING THE FILE /ETC/LVM/MD.TAB
"/etc/lvm/md.tab" 57 lines, 1453 characters
#
# Copyright 2003 Sun Microsystems, Inc. All rights reserved.
# Use is subject to license terms.
#
# ident "@(#)md.tab 2.5 03/09/11 SMI"
#
# md.tab
#
# metainit utility input file.
#
# The following examples show the format for local metadevices, and a
# similar example for a shared metadevice, where appropriate. The shared
# metadevices are in the diskset named "blue":
#
# Metadevice database entry:
#



# mddb01 /dev/dsk/c0t2d0s0 /dev/dsk/c0t0d0s0
#
# Concatenation of devices:
#
# d10 2 1 /dev/dsk/c0t2d0s0 1 /dev/dsk/c0t0d0s0
# blue/d10 2 1 /dev/dsk/c2t2d0s0 1 /dev/dsk/c2t0d0s0
#
# Stripe of devices:
#
# d11 1 2 /dev/dsk/c0t2d0s1 /dev/dsk/c0t0d0s1
# blue/d11 1 2 /dev/dsk/c2t2d0s1 /dev/dsk/c2t0d0s1
#
# Concatenation of stripes (with a hot spare pool):
#
# d13 2 2 /dev/dsk/c0t2d0s0 /dev/dsk/c0t0d0s0 \
# 2 /dev/dsk/c0t2d0s1 /dev/dsk/c0t0d0s1 -h hsp001
# blue/d13 2 2 /dev/dsk/c2t2d0s0 /dev/dsk/c2t0d0s0 \
# 2 /dev/dsk/c2t2d0s1 /dev/dsk/c2t0d0s1 -h blue/hsp001
"/etc/lvm/md.tab" 57 lines, 1453 characters
# RAID of devices
#
# d15 -r /dev/dsk/c1t0d0s0 /dev/dsk/c1t1d0s0 \
# /dev/dsk/c1t2d0s0 /dev/dsk/c1t3d0s0
# blue/d15 -r /dev/dsk/c2t0d0s0 /dev/dsk/c2t1d0s0 \
# /dev/dsk/c2t2d0s0 /dev/dsk/c2t3d0s0
#
# Hot Spare Pool of devices
#
# hsp001 /dev/dsk/c1t0d0s0
# blue/hsp001 /dev/dsk/c2t0d0s0
#
# 100MB Soft Partition
#
# d1 -p /dev/dsk/c1t0d0s1 100M
# blue/d1 -p /dev/dsk/c2t0d0s1 100M
# -------------- ADD BELOW LINES --------------
# create a replica
mddb01 -c6 c0t8d0s0
# creating a metadevice
d0 1 1 c0t8d0s3
d10 1 1 c0t9d0s3
~
"/etc/lvm/md.tab" 61 lines, 1545 characters
Execute below commands, it will fetch necessary details from the file /etc/lvm/md.tab
bash-3.00# metadb -af mddb01
bash-3.00# metainit -a
d10: Concat/Stripe is setup
d0: Concat/Stripe is setup
bash-3.00# metastat
d0: Concat/Stripe
Size: 1027216 blocks (501 MB)
Stripe 0:
Device Start Block Dbase Reloc
c0t8d0s3 0 No Yes
d10: Concat/Stripe
Size: 1027216 blocks (501 MB)
Stripe 0:
Device Start Block Dbase Reloc
c0t9d0s3 0 No Yes


Device Relocation Information:
Device Reloc Device ID
c0t8d0 Yes id1,sd@SSEAGATE_ST318203LSUN18G_LR901376000010210UDS
c0t9d0 Yes id1,sd@SSEAGATE_ST318203LSUN18G_LRA609240000W0270ZT6

bash-3.00# vi /etc/lvm/md.cf
"/etc/lvm/md.cf" 7 lines, 140 characters
# metadevice configuration file
# do not hand edit
# create a replica
mddb01 -c6 c0t8d0s0
# creating a metadevice
d0 1 1 c0t8d0s3
d10 1 1 c0t9d0s3
Disk Set
The disk set feature lets us set up groups of host machines and disk drives in which all of the hosts in
the set are connected to all of the drives in the set.
Types of diskset:
a. Local diskset
b. Shared diskset
Local Diskset:
1. Each host in a disk set must have a local disk set.
2. Local disk set for a host consists of all drives which are not part of a shared diskset.
3. The host's local metadevice configuration is contained within this local diskset in the local
metadevice state database/replica.
Shared Diskset:
1. Is a grouping of 2 hosts and disk drives in which all the drives are accessible by both hosts.
Condition:
Disk suite requires that the device name be identical on each host in the disk set.
2. There is one meta device state database per shared diskset.
Note:
1. Drives in a shared diskset must not be in any other diskset.
2. None of the partitions on any of the drives in a diskset can be mounted on, swapped on or part of a
local metadevice.
3. All the drives in a shared diskset must be accessible by both hosts in the diskset.
4. Metadevices & hot spare pools in any diskset must consist of drives within that diskset.
Likewise, metadevices & hot spare pools in the local diskset must be made up of drives from within
the local diskset.
Naming convention:
1. Metadevices within the local diskset use the standard disk suite naming conventions.
2. Metadevices within the shared diskset use the following conventions.
/dev/md/setname/(r)dsk/dnumber
(usually 0 to 127)
Eg:
/dev/md/dataset/(r)dsk/d10
3. Hotspare:
setname/hsp000



As usual, 0-999
Note:
-s = option used with the standard DiskSuite commands to create, remove and administer
metadevices / hot spare pools in a named diskset.
If the -s option is NOT used, the command affects only the local diskset.
Defining disksets:
Note:
1. Before administering the diskset, make sure,
a. The installation of the disk suite software on each host
b. Each host must have local database replicas setup.
2. All disk that we plan to share between hosts in the diskset must be connected to each host and
must have the same name on the each host.
3. 2-basic operations involved in defining disksets.
a. Adding hosts (adding the first host defines the disk set)
b. Adding drives
Syn:
# metaset -s <set_name> -a -h <first_owner_host> <second_host>
Eg:
# metaset -s dataset -a -h node1 node2
Where
-a = to add
-h = to specify the host
NOTE:
1. Adding the first host creates the diskset.
2. The last host cannot be deleted until all of the drives within the set have been deleted.
3. A host name is not accepted if all the drives within the diskset cannot be found on each specified
host. In addition, a drive is not accepted if it cannot be found on all the hosts in the diskset.
# metaset
- Displays the status of the diskset
Adding drives to the diskset:
Syn:
# metaset -s <disk_set_name> -a <disks>
Eg:
# metaset -s dataset -a c2t1d0 c2t2d0 c2t3d0 c2t4d0
NOTE:
1. A drive name is not accepted if it cannot be found on all hosts specified as part of the disk set.
# metaset
Now we can observe the difference, since disks are added to the diskset. The first host (here node1) is the
owner of the diskset 'dataset'.
2. Drives are repartitioned when they are added to the diskset, only if slice 7 is not set up properly.
A small portion of each drive is reserved in slice7 for use by disksuite software.
3. The disk suite software tries to balance a reasonable number of replicas across all drives in a
diskset.
4. Each drive in the diskset is probed once every second to determine that it is still reserved.



Administering disksets:
1. Reserving a diskset
2. Releasing a diskset
Reserving a diskset:
1. Before a host can use drives in a diskset, the host must reserve the diskset.
2. a. Safely:
'metaset' checks to see if another host currently has the set reserved. If another host has the diskset
reserved, this host will not be allowed to reserve the set.
Syn:
# metaset -s <disk_set_name> -t
Eg:
# metaset -s dataset -t
b. Forcefully:
Will not check with the other hosts
Syn:
# metaset -s <disk_set_name> -t -f
Eg:
# metaset -s dataset -t -f
Releasing a diskset:
1. When a diskset is released, it cannot be accessed by the host.
Syn:
# metaset -s <disk_set_name> -r
Eg:
# metaset -s dataset -r
# metaset
Observe the changes
Removing hosts & drives from disksets:
NOTE:
1. When drives are removed from/added to the diskset, DiskSuite balances the metadevice state
database replicas across the remaining drives.
Syn:
# metaset -s <disk_set_name> -d <disks>
Eg:
# metaset -s dataset -d c2t3d0
2. The -f option must be used when deleting the last drive in a set, since this drive implicitly
contains the last state database replica.
3. The last host can be removed from a diskset only after all drives in the diskset have been removed.
Removing the last host from the diskset destroys the diskset.
Eg:
# metaset -s dataset -d -h node2
Here, the diskset will be removed from the host node2
# metaset
Observe the changes
Adding drives or hosts to the diskset:
# metaset -s dataset -a c2t5d0



To add the drives
# metaset -s dataset -a -h node2
To add hosts
NOTE:
Disk suite supports a maximum of 2 hosts per diskset.
