
Managing Your ZFS Root Pool

The following sections provide information about installing and updating a ZFS root pool and configuring a mirrored root pool.

Installing a ZFS Root Pool

The Oracle Solaris 11 Live CD installation method installs a default ZFS root pool on a single disk. With the Oracle Solaris 11 automated installation (AI) method, you can create an AI manifest to identify the disk or mirrored disks for the ZFS root pool.

The AI installer provides the flexibility of installing a ZFS root pool on the default boot disk or on a target disk that you identify. You can specify the logical device, such as c1t0d0s0, or the physical device path. In addition, you can use the MPxIO identifier or the device ID for the device to be installed.

After the installation, review your ZFS storage pool and file system information, which can vary by installation type and customizations. For example:

# zpool status
  pool: rpool
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        rpool       ONLINE       0     0     0
          c1t3d0s0  ONLINE       0     0     0

errors: No known data errors

# zfs list
NAME                      USED  AVAIL  REFER  MOUNTPOINT
rpool                    6.49G  60.4G    40K  /rpool
rpool/ROOT               3.46G  60.4G    31K  legacy
rpool/ROOT/solaris       3.46G  60.4G  3.16G  /
rpool/ROOT/solaris/var    303M  60.4G   216M  /var
rpool/dump               2.00G  60.5G  1.94G  -
rpool/export             96.5K  60.4G    32K  /rpool/export
rpool/export/home        64.5K  60.4G    32K  /rpool/export/home
rpool/export/home/admin  32.5K  60.4G  32.5K  /rpool/export/home/admin
rpool/swap               1.03G  60.5G  1.00G  -

Review your ZFS BE information. For example:

# beadm list
BE      Active Mountpoint Space Policy Created
--      ------ ---------- ----- ------ -------
solaris NR     /          3.85G static 2011-09-26 08:37

In the above output, the Active field indicates whether the BE is active now (N), active on reboot (R), or both (NR).


How to Update Your ZFS Boot Environment

The default ZFS boot environment (BE) is named solaris. You can identify your BEs by using the beadm list command. For example:

# beadm list
BE      Active Mountpoint Space Policy Created
--      ------ ---------- ----- ------ -------
solaris NR     /          8.41G static 2011-01-13 15:31

In the above output, NR means the BE is active now and will be the active BE on reboot.

You can use the pkg update command to update your ZFS boot environment. When you update the BE in this way, a new BE is created and activated automatically, unless the updates to the existing BE are very minimal.

1. Update your ZFS BE.

# pkg update

DOWNLOAD                             PKGS       FILES    XFER (MB)
Completed                         707/707 10529/10529  194.9/194.9

.

.

.

A new BE, solaris-1, is created automatically and activated.

2. Reboot the system to complete the BE activation. Then, confirm your BE status.

# init 6
.
.
.
# beadm list
BE        Active Mountpoint Space Policy Created
--        ------ ---------- ----- ------ -------
solaris   -      -          6.25M static 2011-09-26 08:37
solaris-1 NR     /          3.92G static 2011-09-26 09:32

3. If an error occurs when booting the new BE, activate and boot back to the previous BE.

# beadm activate solaris

# init 6

How to Mount an Alternate BE

You might need to copy or access a file from another BE for recovery purposes.

1. Become an administrator.

2. Mount the alternate BE.

# beadm mount solaris-1 /mnt

3. Access the BE.

# ls /mnt

bin       export   media   pkg        rpool    tmp
boot      home     mine    platform   sbin     usr
dev       import   mnt     proc       scde     var
devices   java     net     project    shared
doe       kernel   nfs4    re         src
etc       lib      opt     root       system


4. Unmount the alternate BE when you're finished with it.

# beadm umount solaris-1

How to Configure a Mirrored Root Pool

If you do not configure a mirrored root pool during an automatic installation, you can easily configure a mirrored root pool after the installation.

For information about replacing a disk in a root pool, see How to Replace a Disk in a ZFS Root Pool.

1. Display your current root pool status.

# zpool status rpool
  pool: rpool
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        rpool       ONLINE       0     0     0
          c2t0d0s0  ONLINE       0     0     0

errors: No known data errors

2. Prepare a second disk for attachment to the root pool, if necessary.
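For example, a minimal way to prepare the second disk, assuming the existing root pool disk is c2t0d0 and the new disk is c2t1d0 (adjust the device names for your system), is to copy the existing disk's SMI (VTOC) slice layout to the new disk:

# prtvtoc /dev/rdsk/c2t0d0s2 | fmthard -s - /dev/rdsk/c2t1d0s2

If the new disk carries an EFI label, relabel it with an SMI (VTOC) label first by using format -e.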

3. Attach a second disk to configure a mirrored root pool.

# zpool attach rpool c2t0d0s0 c2t1d0s0

Make sure to wait until resilver is done before rebooting.

4. View the root pool status to confirm that resilvering is complete.

# zpool status rpool
  pool: rpool
 state: ONLINE
status: One or more devices is currently being resilvered.  The pool will
        continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
  scan: resilver in progress since Thu Sep 29 18:09:09 2011
    1.55G scanned out of 5.36G at 36.9M/s, 0h1m to go
    1.55G resilvered, 28.91% done
config:

        NAME          STATE     READ WRITE CKSUM
        rpool         ONLINE       0     0     0
          mirror-0    ONLINE       0     0     0
            c2t0d0s0  ONLINE       0     0     0
            c2t1d0s0  ONLINE       0     0     0  (resilvering)

errors: No known data errors

In the above output, the resilvering process is not complete. Resilvering is complete when you see messages similar to the following:


resilvered 5.36G in 0h10m with 0 errors on Thu Sep 29 18:19:09 2011

5. Verify that you can boot successfully from the new disk.
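For example, on a SPARC based system you can boot the newly attached disk explicitly from the ok prompt; the device path below is illustrative, so substitute the path of your second disk:

ok boot /pci@1f,700000/scsi@2/disk@1,0

On an x86 based system, select the second disk from the BIOS boot device menu.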

6. Set up the system to boot automatically from the new disk.

SPARC: Set up the system to boot automatically from the new disk, either by using the eeprom command or the setenv command from the boot PROM.

x86: Reconfigure the system BIOS.
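As a minimal sketch on a SPARC based system, you might set the default boot device with the eeprom command; the device path shown is illustrative, so use the path reported for your new disk:

# eeprom boot-device=/pci@1f,700000/scsi@2/disk@1,0

Equivalently, from the boot PROM:

ok setenv boot-device /pci@1f,700000/scsi@2/disk@1,0
ok boot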

How to Replace a Disk in a ZFS Root Pool

You might need to replace a disk in the root pool for the following reasons:

The root pool is too small and you want to replace it with a larger disk.

The root pool disk is failing. In a non-redundant pool, if the disk is failing so that the system won't boot, you'll need to boot from alternate media, such as a CD or the network, before you replace the root pool disk.

In a mirrored root pool configuration, you might be able to attempt a disk replacement without having to boot from alternate media. You can replace a failed disk by using the zpool replace command, or, if you have an additional disk, you can use the zpool attach command. See the steps below for an example of attaching an additional disk and detaching a root pool disk.

Systems with SATA disks require that you offline and unconfigure a disk before attempting the zpool replace operation to replace a failed disk. For example:

# zpool offline rpool c1t0d0s0

# cfgadm -c unconfigure c1::dsk/c1t0d0

<Physically remove failed disk c1t0d0>
<Physically insert replacement disk c1t0d0>

# cfgadm -c configure c1::dsk/c1t0d0

<Confirm that the new disk has an SMI label and a slice 0>

# zpool replace rpool c1t0d0s0

# zpool online rpool c1t0d0s0

# zpool status rpool

<Let disk resilver before installing the boot blocks>
SPARC# installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c1t0d0s0
x86# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t0d0s0

On some hardware, you do not have to online or reconfigure the replacement disk after it is inserted.

1. Physically connect the replacement disk.

2. Confirm that the replacement (new) disk has an SMI (VTOC) label and a slice 0.
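For example, a quick check, assuming the replacement disk is c2t1d0, is to print its slice table and confirm that an SMI (VTOC) label and a slice 0 are present:

# prtvtoc /dev/rdsk/c2t1d0s2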

3. Attach the new disk to the root pool.

For example:

# zpool attach rpool c2t0d0s0 c2t1d0s0

Make sure to wait until resilver is done before rebooting.

4. Confirm the root pool status.

For example:

# zpool status rpool
  pool: rpool
 state: ONLINE
  scan: resilvered 5.36G in 0h2m with 0 errors on Thu Sep 29 18:11:53 2011
config:

        NAME          STATE     READ WRITE CKSUM
        rpool         ONLINE       0     0     0
          mirror-0    ONLINE       0     0     0
            c2t0d0s0  ONLINE       0     0     0
            c2t1d0s0  ONLINE       0     0     0

errors: No known data errors

5. Verify that you can boot from the new disk after resilvering is complete.

For example, on a SPARC based system:

ok boot /pci@1f,700000/scsi@2/disk@1,0

Identify the boot device pathnames of the current and new disks so that you can test booting from the replacement disk, and so that you can boot manually from the existing disk if the replacement disk fails. In the example below, the current root pool disk (c2t0d0s0) is:

/pci@1f,700000/scsi@2/disk@0,0

In the example below, the replacement boot disk (c2t1d0s0) is:

boot /pci@1f,700000/scsi@2/disk@1,0
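One way to map a disk name to its physical device path, as a sketch assuming the replacement disk is c2t1d0s0, is to list its device link:

# ls -l /dev/dsk/c2t1d0s0

The link target under /devices, minus the /devices prefix and the trailing minor-name suffix (for example, :a), is the path that you pass to the boot command at the ok prompt.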

6. If the system boots from the new disk, detach the old disk.

For example:

# zpool detach rpool c2t0d0s0

7. Set up the system to boot automatically from the new disk.

SPARC: Set up the system to boot automatically from the new disk, either by using the eeprom command or the setenv command from the boot PROM.

x86: Reconfigure the system BIOS.

How to Create a BE in Another Root Pool

If you want to re-create your existing BE in another root pool, follow the steps below. You can modify the steps based on whether you want two root pools with similar BEs that have independent swap and dump devices or whether you just want a BE in another root pool that shares the swap and dump devices.

After you activate and boot from the new BE in the second root pool, it will have no information about the previous BE in the first root pool. If you want to boot back to the original BE, you will need to boot the system manually from the original root pool's boot disk.

1. Create a second root pool with an SMI (VTOC)-labeled disk. For example:

# zpool create rpool2 c4t2d0s0

2. Create the new BE in the second root pool. For example:

# beadm create -p rpool2 solaris2

3. Set the bootfs property on the second root pool. For example:

# zpool set bootfs=rpool2/ROOT/solaris2 rpool2

4. Activate the new BE. For example:


# beadm activate solaris2

5. Boot from the new BE. You must boot specifically from the second root pool's boot device.

ok boot disk2

Your system should be running under the new BE.

 

6. Re-create the swap volume. For example:

# zfs create -V 4g rpool2/swap

7. Update the /etc/vfstab entry for the new swap device. For example:

/dev/zvol/dsk/rpool2/swap  -  -  swap  -  no  -

8. Re-create the dump volume. For example:

# zfs create -V 4g rpool2/dump

9. Reset the dump device. For example:

# dumpadm -d /dev/zvol/dsk/rpool2/dump

10. Reset your default boot device to boot from the second root pool's boot disk.

SPARC: Set up the system to boot automatically from the new disk, either by using the eeprom command or the setenv command from the boot PROM.

x86: Reconfigure the system BIOS.

 

11. Reboot to clear the original root pool's swap and dump devices.

# init 6