
To Migrate a UFS Root File System to a ZFS Root File System

# zpool create mpool mirror c1t2d0s0 c2t1d0s0
# lucreate -c ufs1009BE -n zfs1009BE -p mpool
Analyzing system configuration.
No name for current boot environment.
Current boot environment is named <ufs1009BE>.
Creating initial configuration for primary boot environment <ufs1009BE>.
The device </dev/dsk/c1t0d0s0> is not a root device for any boot environment; cannot get BE ID.
PBE configuration successful: PBE name <ufs1009BE> PBE Boot Device </dev/dsk/c1t0d0s0>.
Comparing source boot environment <ufs1009BE> file systems with the file
system(s) you specified for the new boot environment. Determining which
file systems should be in the new boot environment.
Updating boot environment description database on all BEs.
Updating system configuration files.
The device </dev/dsk/c1t2d0s0> is not a root device for any boot environment; cannot get BE ID.
Creating configuration for boot environment <zfs1009BE>.
Source boot environment is <ufs1009BE>.
Creating boot environment <zfs1009BE>.
Creating file systems on boot environment <zfs1009BE>.
Creating <zfs> file system for </> in zone <global> on <mpool/ROOT/zfs1009BE>.
Populating file systems on boot environment <zfs1009BE>.
Checking selection integrity.
Integrity check OK.
Populating contents of mount point </>.
Copying.
Creating shared file system mount points.
Creating compare databases for boot environment <zfs1009BE>.
Creating compare database for file system </mpool/ROOT>.
Creating compare database for file system </>.
Updating compare databases on boot environment <zfs1009BE>.
Making boot environment <zfs1009BE> bootable.
Creating boot_archive for /.alt.tmp.b-qD.mnt
updating /.alt.tmp.b-qD.mnt/platform/sun4u/boot_archive

Population of boot environment <zfs1009BE> successful.
Creation of boot environment <zfs1009BE> successful.

# lustatus
Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
ufs1009BE                  yes      yes    yes       no     -
zfs1009BE                  yes      no     no        yes    -

# zfs list
NAME                   USED  AVAIL  REFER  MOUNTPOINT
mpool                 7.17G  59.8G  95.5K  /mpool
mpool/ROOT            4.66G  59.8G    21K  /mpool/ROOT
mpool/ROOT/zfs1009BE  4.66G  59.8G  4.66G  /
mpool/dump               2G  61.8G    16K  -
mpool/swap             517M  60.3G    16K  -

# luactivate zfs1009BE
A Live Upgrade Sync operation will be performed on startup of boot environment <zfs1009BE>.

**********************************************************************

The target boot environment has been activated. It will be used when you
reboot. NOTE: You MUST NOT USE the reboot, halt, or uadmin commands. You
MUST USE either the init or the shutdown command when you reboot. If you
do not use either init or shutdown, the system will not boot using the
target BE.

**********************************************************************
.
.
.
Modifying boot archive service
Activation of boot environment <zfs1009BE> successful.

# init 6
# lustatus
Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
ufs1009BE                  yes      no     no        yes    -
zfs1009BE                  yes      yes    yes       no     -

If necessary, you can boot the ZFS boot environment explicitly from the SPARC OpenBoot prompt, or boot it in failsafe mode:

ok boot -Z mpool/ROOT/zfs1009BE
ok boot -Z mpool/ROOT/zfs1009BE -F failsafe

How to Replace a Disk in the ZFS Root Pool


# zpool offline rpool c1t0d0s0
# cfgadm -c unconfigure c1::dsk/c1t0d0
<Physically remove failed disk c1t0d0>
<Physically insert replacement disk c1t0d0>
# cfgadm -c configure c1::dsk/c1t0d0
# zpool replace rpool c1t0d0s0
# zpool online rpool c1t0d0s0
# zpool status rpool
<Let disk resilver before installing the boot blocks>

SPARC# installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c1t0d0s0
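The wait for resilvering to finish can be scripted rather than watched by hand. Below is a minimal sketch under stated assumptions: the `resilver_done` helper is hypothetical (not part of Solaris), and it assumes the `zpool status` text format shown in this document ("scrub: resilver in progress, ...").

```shell
#!/bin/sh
# Hypothetical helper: succeeds once the given `zpool status` text
# no longer reports a resilver in progress.
resilver_done() {
    # $1 is captured `zpool status <pool>` output
    echo "$1" | grep -q 'resilver in progress' && return 1
    return 0
}

# Sketch of a polling loop; the zpool/installboot calls are commented
# out so this snippet stays self-contained:
# while ! resilver_done "$(zpool status rpool)"; do sleep 30; done
# installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c1t0d0s0
```

The design choice of passing the status text in as an argument keeps the check testable without a live pool.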

# zpool attach rpool c1t10d0s0 c1t9d0s0
# zpool status rpool
  pool: rpool
 state: ONLINE
status: One or more devices is currently being resilvered. The pool will
        continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
 scrub: resilver in progress, 25.47% done, 0h4m to go
config:

        NAME           STATE     READ WRITE CKSUM
        rpool          ONLINE       0     0     0
          mirror       ONLINE       0     0     0
            c1t10d0s0  ONLINE       0     0     0
            c1t9d0s0   ONLINE       0     0     0

errors: No known data errors

How to Create Root Pool Snapshots

Create root pool snapshots for recovery purposes. The best way to create root pool
snapshots is to do a recursive snapshot of the root pool.
The procedure below creates a recursive root pool snapshot and stores the snapshot as a
file in a pool on a remote system. In the case of a root pool failure, the remote dataset can
be mounted by using NFS and the snapshot file received into the recreated pool. You can
also store root pool snapshots as the actual snapshots in a pool on a remote system.
Sending and receiving the snapshots from a remote system is a bit more complicated
because you must configure ssh or use rsh while the system to be repaired is booted
from the Solaris OS miniroot.
Validating remotely stored snapshots, whether they are stored as files or as actual snapshots, is an important step in root pool recovery. With either method, snapshots should be recreated on a routine basis, such as when the pool configuration changes or when the Solaris OS is upgraded.
In the following example, the system is booted from the zfs1009BE boot environment.
1. Create space on a remote system to store the snapshots. For example:

   remote# zfs create rpool/snaps

2. Share the space to the local system. For example:

   remote# zfs set sharenfs='rw=local-system,root=local-system' rpool/snaps
   # share
   -@rpool/snaps   /rpool/snaps   sec=sys,rw=local-system,root=local-system   ""

3. Create a recursive snapshot of the root pool.

   local# zfs snapshot -r rpool@0804
   local# zfs list
   NAME                        USED  AVAIL  REFER  MOUNTPOINT
   rpool                      6.17G  60.8G    98K  /rpool
   rpool@0804                     0      -    98K  -
   rpool/ROOT                 4.67G  60.8G    21K  /rpool/ROOT
   rpool/ROOT@0804                0      -    21K  -
   rpool/ROOT/zfs1009BE       4.67G  60.8G  4.67G  /
   rpool/ROOT/zfs1009BE@0804   386K      -  4.67G  -
   rpool/dump                 1.00G  60.8G  1.00G  -
   rpool/dump@0804                0      -  1.00G  -
   rpool/swap                  517M  61.3G    16K  -
   rpool/swap@0804                0      -    16K  -

4. Send the root pool snapshots to the remote system. For example:

   local# zfs send -Rv rpool@0804 > /net/remote-system/rpool/snaps/rpool.0804
   sending from @ to rpool@0804
   sending from @ to rpool/swap@0804
   sending from @ to rpool/ROOT@0804
   sending from @ to rpool/ROOT/zfs1009BE@0804
   sending from @ to rpool/dump@0804
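Since the procedure recommends recreating these snapshots routinely, the snapshot-and-send steps above can be sketched as a small script. This is a minimal dry-run sketch, not Oracle's tooling: `POOL` and `SNAPDIR` are assumptions for illustration, the snapshot name is derived from the date (matching the `0804` naming in the example), and the commands are only printed so they can be reviewed before running.

```shell
#!/bin/sh
# Sketch: derive a date-based snapshot name like rpool@0804 and print
# the recursive-snapshot and send commands from the procedure above.
POOL=rpool                                 # assumption: pool to back up
SNAPDIR=/net/remote-system/rpool/snaps     # assumption: NFS-backed target

# e.g. rpool@0804 for August 4, matching the example's naming
SNAP="$POOL@$(date +%m%d)"

# Dry run: print the commands; drop the echo to actually execute them.
echo "zfs snapshot -r $SNAP"
echo "zfs send -Rv $SNAP > $SNAPDIR/$POOL.$(date +%m%d)"
```

Run from cron, a script like this keeps the remote copy current whenever the pool configuration changes.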

How to Recreate a ZFS Root Pool and Restore Root Pool Snapshots

In this scenario, assume the following conditions:

- The ZFS root pool cannot be recovered.
- The ZFS root pool snapshots are stored on a remote system and are shared over NFS.
- All steps below are performed on the local system.


1. Boot from CD/DVD or the network. On a SPARC based system, select one of the following boot methods:

   ok boot net -s
   ok boot cdrom -s

   If you don't use the -s option, you'll need to exit the installation program.
   On an x86 based system, select the option for booting from the DVD or the network. Then, exit the installation program.
2. Mount the remote snapshot dataset. For example:

   # mount -F nfs remote-system:/rpool/snaps /mnt

   If your network services are not configured, you might need to specify the remote system's IP address.
3. If the root pool disk is replaced and does not contain a disk label that is usable by ZFS, you will have to relabel the disk. For more information about relabeling the disk, go to the following site:

   http://www.solarisinternals.com/wiki/index.php/ZFS_Troubleshooting_Guide

4. Recreate the root pool. For example:

   # zpool create -f -o failmode=continue -R /a -m legacy \
     -o cachefile=/etc/zfs/zpool.cache rpool c1t1d0s0

   Here -R /a mounts the pool under the alternate root /a, -m legacy keeps the top-level dataset from being auto-mounted, and failmode=continue lets the pool keep operating (returning errors) rather than suspending on I/O failure.

5. Restore the root pool snapshots. This step might take some time. For example:

   # cat /mnt/rpool.0804 | zfs receive -Fdu rpool

6. Verify that the root pool datasets are restored. For example:

   # zfs list
   NAME                        USED  AVAIL  REFER  MOUNTPOINT
   rpool                      6.17G  60.8G    98K  /a/rpool
   rpool@0804                     0      -    98K  -
   rpool/ROOT                 4.67G  60.8G    21K  /legacy
   rpool/ROOT@0804                0      -    21K  -
   rpool/ROOT/zfs1009BE       4.67G  60.8G  4.67G  /a
   rpool/ROOT/zfs1009BE@0804   398K      -  4.67G  -
   rpool/dump                 1.00G  60.8G  1.00G  -
   rpool/dump@0804                0      -  1.00G  -
   rpool/swap                  517M  61.3G    16K  -
   rpool/swap@0804                0      -    16K  -

7. Set the bootfs property on the root pool BE. For example:

   # zpool set bootfs=rpool/ROOT/zfs1009BE rpool

8. Install the boot blocks on the new disk.

   On a SPARC based system:

   # installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c1t5d0s0

   On an x86 based system:

   # installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t5d0s0

9. Reboot the system.

   # init 6
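As a recap, the recovery flow above can be sketched as a dry-run script. Every value below is the example value from this procedure (disk c1t1d0s0, boot disk c1t5d0s0, BE zfs1009BE, snapshot file rpool.0804) and should be treated as a placeholder for your own configuration; the `run` wrapper only prints each command.

```shell
#!/bin/sh
# Dry-run recap of steps 2-9; values are placeholders from the example.
POOL=rpool
DISK=c1t1d0s0                  # new root pool disk
BOOTDISK=/dev/rdsk/c1t5d0s0    # disk to receive the boot blocks
BE=$POOL/ROOT/zfs1009BE
SNAPFILE=/mnt/$POOL.0804

run() { echo "$@"; }           # print only; change body to "$@" to execute

run mount -F nfs remote-system:/rpool/snaps /mnt
run zpool create -f -o failmode=continue -R /a -m legacy \
    -o cachefile=/etc/zfs/zpool.cache "$POOL" "$DISK"
run "zfs receive -Fdu $POOL < $SNAPFILE"
run zpool set "bootfs=$BE" "$POOL"
run installboot -F zfs '/usr/platform/`uname -i`/lib/fs/zfs/bootblk' "$BOOTDISK"
run init 6
```

Reviewing the printed commands before enabling execution is a cheap safeguard during a stressful recovery.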