
$ man zpool
$ man zfs
$ su
Password:
# cd /
# mkfile 100m disk1 disk2 disk3 disk5
# mkfile 50m disk4
# ls -l disk*

-rw------T   1 root     root     104857600 Sep 11 12:15 disk1
-rw------T   1 root     root     104857600 Sep 11 12:15 disk2
-rw------T   1 root     root     104857600 Sep 11 12:15 disk3
-rw------T   1 root     root      52428800 Sep 11 12:15 disk4
-rw------T   1 root     root     104857600 Sep 11 12:15 disk5
# zpool create myzfs /disk1 /disk2
# zpool list
NAME    SIZE   USED  AVAIL  CAP  HEALTH  ALTROOT
myzfs   191M    94K   191M   0%  ONLINE  -
# zpool status -v
  pool: myzfs
 state: ONLINE
 scrub: none requested
config:

        NAME      STATE     READ WRITE CKSUM
        myzfs     ONLINE       0     0     0
          /disk1  ONLINE       0     0     0
          /disk2  ONLINE       0     0     0

errors: No known data errors

Get familiar with command structure and options

Create some "virtual devices" or vdevs as described in the zpool documentation. These can also be real disk slices if you have them available.

Create a storage pool and check the size and usage.

Get more detailed status of the zfs storage pool.

# zpool destroy myzfs
# zpool list
no pools available

Destroy a ZFS storage pool.

An attempt to create a ZFS pool with vdevs of different sizes fails. The -f option forces the pool to be created, but only the space allowed by the smallest device is used.

Create a mirrored storage pool; in this case, a three-way mirror.

# zpool create myzfs mirror /disk1 /disk4
invalid vdev specification
use '-f' to override the following errors:
mirror contains devices of different sizes
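If the size mismatch were acceptable, the override mentioned above would look like the following (a sketch, not actually run in this walkthrough); the resulting mirror would be limited to the capacity of the 50 MB disk4:

# zpool create -f myzfs mirror /disk1 /disk4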

# zpool create myzfs mirror /disk1 /disk2 /disk3
# zpool list
NAME    SIZE   USED  AVAIL  CAP  HEALTH  ALTROOT
myzfs  95.5M   112K  95.4M   0%  ONLINE  -
# zpool status -v
  pool: myzfs
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        myzfs       ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            /disk1  ONLINE       0     0     0
            /disk2  ONLINE       0     0     0
            /disk3  ONLINE       0     0     0

errors: No known data errors
# zpool detach myzfs /disk3
# zpool status -v
  pool: myzfs
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        myzfs       ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            /disk1  ONLINE       0     0     0
            /disk2  ONLINE       0     0     0

errors: No known data errors

Detach a device from a mirrored pool.

# zpool attach myzfs /disk1 /disk3
# zpool status -v
  pool: myzfs
 state: ONLINE
 scrub: resilver completed with 0 errors on Tue Sep 11 13:31:49 2007
config:

        NAME        STATE     READ WRITE CKSUM
        myzfs       ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            /disk1  ONLINE       0     0     0
            /disk2  ONLINE       0     0     0
            /disk3  ONLINE       0     0     0

errors: No known data errors

Attach a device to a pool. This creates a two-way mirror if the pool is not already a mirror; otherwise it adds another side to the existing mirror, in this case making it a three-way mirror.

# zpool remove myzfs /disk3
cannot remove /disk3: only inactive hot spares can be removed
# zpool detach myzfs /disk3
# zpool add myzfs spare /disk3
# zpool status -v
  pool: myzfs
 state: ONLINE
 scrub: none requested
config:

Attempt to remove a device from a pool. In this case it's a mirror, so we must use "zpool detach".

Add a hot spare to a storage pool.

        NAME        STATE     READ WRITE CKSUM
        myzfs       ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            /disk1  ONLINE       0     0     0
            /disk2  ONLINE       0     0     0
        spares
          /disk3    AVAIL

errors: No known data errors
# zpool remove myzfs /disk3
# zpool status -v
  pool: myzfs
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        myzfs       ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            /disk1  ONLINE       0     0     0
            /disk2  ONLINE       0     0     0

errors: No known data errors

Remove a hot spare from a pool.

# zpool offline myzfs /disk1
# zpool status -v
  pool: myzfs
 state: DEGRADED
status: One or more devices has been taken offline by the administrator.
        Sufficient replicas exist for the pool to continue functioning in a
        degraded state.
action: Online the device using 'zpool online' or replace the device with
        'zpool replace'.
 scrub: resilver completed with 0 errors on Tue Sep 11 13:39:25 2007
config:

        NAME        STATE     READ WRITE CKSUM
        myzfs       DEGRADED     0     0     0
          mirror    DEGRADED     0     0     0
            /disk1  OFFLINE      0     0     0
            /disk2  ONLINE       0     0     0

errors: No known data errors

Take the specified device offline. No attempt to read or write to the device will take place until it's brought back online. Use the -t option to temporarily offline a device. A reboot will bring the device back online.
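For illustration only (this command is not part of the recorded session), the temporary form mentioned above would look like this; the device returns to service after a reboot:

# zpool offline -t myzfs /disk1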

# zpool online myzfs /disk1
# zpool status -v
  pool: myzfs
 state: ONLINE
 scrub: resilver completed with 0 errors on Tue Sep 11 13:47:14 2007
config:

        NAME        STATE     READ WRITE CKSUM
        myzfs       ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            /disk1  ONLINE       0     0     0
            /disk2  ONLINE       0     0     0

errors: No known data errors

Bring the specified device online.

# zpool replace myzfs /disk1 /disk3
# zpool status -v
  pool: myzfs
 state: ONLINE
 scrub: resilver completed with 0 errors on Tue Sep 11 13:25:48 2007
config:

        NAME        STATE     READ WRITE CKSUM
        myzfs       ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            /disk3  ONLINE       0     0     0
            /disk2  ONLINE       0     0     0

errors: No known data errors

Replace a disk in a pool with another disk, for example when a disk fails.
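As a hedged aside (not part of this session): when the replacement disk physically takes the failed disk's slot, zpool replace also accepts a single device argument, replacing the device with itself:

# zpool replace myzfs /disk1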

# zpool scrub myzfs

Perform a scrub of the storage pool to verify that it checksums correctly. On mirror or raidz pools, ZFS will automatically repair any damage. WARNING: scrubbing is I/O intensive.

# zpool export myzfs
# zpool list
no pools available

Export a pool from the system for importing on another system.

# zpool import -d / myzfs
# zpool list
NAME    SIZE   USED  AVAIL  CAP  HEALTH  ALTROOT
myzfs  95.5M   114K  95.4M   0%  ONLINE  -

Import a previously exported storage pool. If -d is not specified, this command searches /dev/dsk. As we're using files in this example, we need to specify the directory of the files used by the storage pool.

# zpool upgrade
This system is currently running ZFS pool version 8.

All pools are formatted using this version.

# zpool upgrade -v
This system is currently running ZFS pool version 8.

The following versions are supported:

VER  DESCRIPTION
---  --------------------------------------------------------
 1   Initial ZFS version
 2   Ditto blocks (replicated metadata)
 3   Hot spares and double parity RAID-Z
 4   zpool history
 5   Compression using the gzip algorithm
 6   pool properties
 7   Separate intent log devices
 8   Delegated administration

For more information on a particular version, including supported releases, see:
http://www.opensolaris.org/os/community/zfs/version/N
Where 'N' is the version number.

Display the pool format version. The -v flag shows the features supported by the current version. Use the -a flag to upgrade all pools to the latest on-disk version. Pools that are upgraded will no longer be accessible to systems running older versions.

# zpool iostat 5
               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
myzfs        112K  95.4M      0      4     26  11.4K
myzfs        112K  95.4M      0      0      0      0
myzfs        112K  95.4M      0      0      0      0

Get I/O statistics for the pool.
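As a hedged aside (output not shown in the original), the -v flag breaks the statistics down per vdev:

# zpool iostat -v myzfs 5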

# zfs create myzfs/colin
# df -h
Filesystem             kbytes    used   avail capacity  Mounted on
...
myzfs/colin               64M     18K     63M     1%    /myzfs/colin

Create a file system and check it with the standard df -h command. File systems are automatically mounted by default under a directory named after the pool (/myzfs in this example). See the Mountpoints section of the zfs man page for more details.
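As a hedged illustration of those mountpoint options (the path below is just an example, not from the recorded session), the default mountpoint can be overridden with the mountpoint property:

# zfs set mountpoint=/export/colin myzfs/colin
# zfs get mountpoint myzfs/colin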
# zfs list
NAME          USED  AVAIL  REFER  MOUNTPOINT
myzfs         139K  63.4M    19K  /myzfs
myzfs/colin    18K  63.4M    18K  /myzfs/colin

List current ZFS file systems.

# zpool add myzfs /disk1
invalid vdev specification
use '-f' to override the following errors:
mismatched replication level: pool uses mirror and new vdev is file

An attempt to add a single vdev to a mirrored set fails.

# zpool add myzfs mirror /disk1 /disk5
# zpool status -v
  pool: myzfs
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        myzfs       ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            /disk3  ONLINE       0     0     0
            /disk2  ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            /disk1  ONLINE       0     0     0
            /disk5  ONLINE       0     0     0

errors: No known data errors

Add a mirrored set of vdevs.

# zfs create myzfs/colin2
# zfs list
NAME           USED  AVAIL  REFER  MOUNTPOINT
myzfs          172K   159M    21K  /myzfs
myzfs/colin     18K   159M    18K  /myzfs/colin
myzfs/colin2    18K   159M    18K  /myzfs/colin2

# zfs set reservation=20m myzfs/colin
# zfs list -o reservation
RESERV
  none
   20M
  none
# zfs set quota=20m myzfs/colin2
# zfs list -o quota myzfs/colin myzfs/colin2
QUOTA
 none
  20M
# zfs set compression=on myzfs/colin2
# zfs list -o compression
COMPRESS
     off
     off
      on
# zfs set sharenfs=on myzfs/colin2
# zfs get sharenfs myzfs/colin2
NAME          PROPERTY  VALUE  SOURCE
myzfs/colin2  sharenfs  on     local

Create a second file system. Note that both file systems show 159M available because no quotas are set; each could grow to fill the pool.

Reserve a specified amount of space for a file system, ensuring that other users don't take up all the space.

Set and view quotas.

Turn on and verify compression.

Share a filesystem over NFS. There is no need to modify /etc/dfs/dfstab, as the filesystem will be shared automatically on boot.

Share a filesystem over CIFS/SMB. This will make your ZFS filesystem accessible to Windows users.

# zfs set sharesmb=on myzfs/colin2
# zfs get sharesmb myzfs/colin2
NAME          PROPERTY  VALUE  SOURCE
myzfs/colin2  sharesmb  on     local
# zfs snapshot myzfs/colin@test
# zfs list
NAME               USED  AVAIL  REFER  MOUNTPOINT
myzfs             20.2M   139M    21K  /myzfs
myzfs/colin         18K   159M    18K  /myzfs/colin
myzfs/colin@test      0      -    18K  -
myzfs/colin2        18K  20.0M    18K  /myzfs/colin2

Create a snapshot called test.
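As a hedged aside (not part of the recorded session), snapshots can also be listed on their own with the -t option of zfs list:

# zfs list -t snapshot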

# zfs rollback myzfs/colin@test
# zfs clone myzfs/colin@test myzfs/colin3
# zfs list
NAME               USED  AVAIL  REFER  MOUNTPOINT
myzfs             20.2M   139M    21K  /myzfs
myzfs/colin         18K   159M    18K  /myzfs/colin
myzfs/colin@test      0      -    18K  -
myzfs/colin2        18K  20.0M    18K  /myzfs/colin2
myzfs/colin3          0   139M    18K  /myzfs/colin3

Rollback to a snapshot.

A snapshot is not directly addressable; a clone must be made. The target dataset can be located anywhere in the ZFS hierarchy, and will be created as the same type as the original.

# zfs destroy myzfs/colin2
# zfs list
NAME               USED  AVAIL  REFER  MOUNTPOINT
myzfs             20.1M   139M    22K  /myzfs
myzfs/colin         18K   159M    18K  /myzfs/colin
myzfs/colin@test      0      -    18K  -
myzfs/colin3          0   139M    18K  /myzfs/colin3

Destroy a filesystem

# zfs destroy myzfs/colin
cannot destroy 'myzfs/colin': filesystem has children
use '-r' to destroy the following datasets:
myzfs/colin@test

Attempt to destroy a filesystem that has a child, in this case the snapshot. We must either remove the snapshot, or make a clone and promote the clone.

Promote a clone filesystem so that it is no longer dependent on its "origin" snapshot. This makes the snapshot a child of the cloned filesystem, after which we can delete the original filesystem.

# zfs promote myzfs/colin3
# zfs list
NAME                USED  AVAIL  REFER  MOUNTPOINT
myzfs              20.1M   139M    21K  /myzfs
myzfs/colin            0   159M    18K  /myzfs/colin
myzfs/colin3         18K   139M    18K  /myzfs/colin3
myzfs/colin3@test      0      -    18K  -
# zfs destroy myzfs/colin
# zfs list
NAME                USED  AVAIL  REFER  MOUNTPOINT
myzfs               147K   159M    21K  /myzfs
myzfs/colin3         18K   159M    18K  /myzfs/colin3
myzfs/colin3@test      0      -    18K  -
# zfs rename myzfs/colin3 myzfs/bob
# zfs list
NAME              USED  AVAIL  REFER  MOUNTPOINT
myzfs             153K   159M    21K  /myzfs
myzfs/bob          18K   159M    18K  /myzfs/bob
myzfs/bob@test       0      -    18K  -
# zfs rename myzfs/bob@test myzfs/bob@newtest
# zfs list
NAME                 USED  AVAIL  REFER  MOUNTPOINT
myzfs                146K   159M    20K  /myzfs
myzfs/bob             18K   159M    18K  /myzfs/bob
myzfs/bob@newtest       0      -    18K  -
# zfs get all
NAME   PROPERTY    VALUE                  SOURCE
myzfs  type        filesystem             -
myzfs  creation    Tue Sep 11 14:21 2007  -
myzfs  used        146K                   -
myzfs  available   159M                   -
myzfs  referenced  20K                    -
[...]

Rename a filesystem, and separately rename the snapshot.

Display properties for the given datasets. This can be refined further using options.
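For example (a hedged sketch, not from the recorded session), specific properties can be requested by name instead of asking for all of them:

# zfs get compression,sharenfs myzfs/colin2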

# zpool destroy myzfs
cannot destroy 'myzfs': pool is not empty
use '-f' to force destruction anyway

Can't destroy a pool with active filesystems.

# zfs unmount myzfs/bob
# df -h
myzfs                  159M    20K    159M     1%    /myzfs

Unmount a ZFS file system.

# zfs mount myzfs/bob
# df -h
myzfs                  159M    20K    159M     1%    /myzfs
myzfs/bob              159M    18K    159M     1%    /myzfs/bob

Mount a ZFS filesystem. This is usually done automatically on boot.

# zfs send myzfs/bob@newtest | ssh localhost zfs receive myzfs/backup
# zfs list
NAME                   USED  AVAIL  REFER  MOUNTPOINT
myzfs                  172K   159M    20K  /myzfs
myzfs/backup            18K   159M    18K  /myzfs/backup
myzfs/backup@newtest      0      -    18K  -
myzfs/bob               18K   159M    18K  /myzfs/bob
myzfs/bob@newtest         0      -    18K  -

Create a stream representation of the snapshot and redirect it to zfs receive. In this example I've redirected to the localhost for illustration purposes. This can be used to back up to a remote host, or even to a local file.
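Hedged sketches of those two variants (the host name, pool name, and file path below are hypothetical, not from this session):

# zfs send myzfs/bob@newtest | ssh backuphost zfs receive backuppool/bob
# zfs send myzfs/bob@newtest > /var/tmp/bob-newtest.zfs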

# zpool history
History for 'myzfs':
2007-09-11.15:35:50 zpool create myzfs mirror /disk1 /disk2 /disk3
2007-09-11.15:36:00 zpool detach myzfs /disk3
2007-09-11.15:36:10 zpool attach myzfs /disk1 /disk3
2007-09-11.15:36:53 zpool detach myzfs /disk3
2007-09-11.15:36:59 zpool add myzfs spare /disk3
2007-09-11.15:37:09 zpool remove myzfs /disk3
2007-09-11.15:37:18 zpool offline myzfs /disk1
2007-09-11.15:37:27 zpool online myzfs /disk1
2007-09-11.15:37:37 zpool replace myzfs /disk1 /disk3
2007-09-11.15:37:47 zpool scrub myzfs
2007-09-11.15:37:57 zpool export myzfs
2007-09-11.15:38:05 zpool import -d / myzfs
2007-09-11.15:38:52 zfs create myzfs/colin
2007-09-11.15:39:27 zpool add myzfs mirror /disk1 /disk5
2007-09-11.15:39:38 zfs create myzfs/colin2
2007-09-11.15:39:50 zfs set reservation=20m myzfs/colin
2007-09-11.15:40:18 zfs set quota=20m myzfs/colin2
2007-09-11.15:40:35 zfs set compression=on myzfs/colin2
2007-09-11.15:40:48 zfs snapshot myzfs/colin@test
2007-09-11.15:40:59 zfs rollback myzfs/colin@test
2007-09-11.15:41:11 zfs clone myzfs/colin@test myzfs/colin3
2007-09-11.15:41:25 zfs destroy myzfs/colin2
2007-09-11.15:42:12 zfs promote myzfs/colin3
2007-09-11.15:42:26 zfs rename myzfs/colin3 myzfs/bob
2007-09-11.15:42:57 zfs destroy myzfs/colin
2007-09-11.15:43:23 zfs rename myzfs/bob@test myzfs/bob@newtest
2007-09-11.15:44:30 zfs receive myzfs/backup

Display the command history of all storage pools. This can be limited to a single pool by specifying its name on the command line. The history is only stored for existing pools; once you've destroyed a pool, you no longer have access to its history.
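For example, restricting the history to this one pool:

# zpool history myzfs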

# zpool destroy -f myzfs
# zpool status -v
no pools available

Use the -f option to destroy a pool that still has file systems created in it.
