Symantec Storage Foundation 6.x for UNIX: Administration Fundamentals
100-002840
COURSE DEVELOPERS
Pranab Koch
Raj Kiran Prasad Thota

LEAD SUBJECT MATTER EXPERTS
Brad Willer
Gaurav Dong

TECHNICAL CONTRIBUTORS AND REVIEWERS
Margy Cassidy
Steve Evans
Joe Gallagher
Freddie Gilyard
Graeme Gofton
Tony Griffiths
Gene Henriksen
Kleber Saldanha
Kalyan Subramaniyam
Anand Raj Vengadassalam
Stephen Williams
Randal Williams

Copyright 2014 Symantec Corporation. All rights reserved. Symantec, the Symantec Logo, and VERITAS are trademarks or registered trademarks of Symantec Corporation or its affiliates in the U.S. and other countries. Other names may be trademarks of their respective owners.

THIS PUBLICATION IS PROVIDED AS IS AND ALL EXPRESS OR IMPLIED CONDITIONS, REPRESENTATIONS AND WARRANTIES, INCLUDING ANY IMPLIED WARRANTY OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE OR NON-INFRINGEMENT, ARE DISCLAIMED, EXCEPT TO THE EXTENT THAT SUCH DISCLAIMERS ARE HELD TO BE LEGALLY INVALID. SYMANTEC CORPORATION SHALL NOT BE LIABLE FOR INCIDENTAL OR CONSEQUENTIAL DAMAGES IN CONNECTION WITH THE FURNISHING, PERFORMANCE, OR USE OF THIS PUBLICATION. THE INFORMATION CONTAINED HEREIN IS SUBJECT TO CHANGE WITHOUT NOTICE.

No part of the contents of this book may be reproduced or transmitted in any form or by any means without the written permission of the publisher.

Symantec Storage Foundation 6.x for UNIX: Administration Fundamentals
Symantec Corporation
World Headquarters
350 Ellis Street
Mountain View, CA 94043
United States
http://www.symantec.com
Course Introduction
Capacity
VxVM, VxFS, and DMP provide consistent management across Solaris, HP-
UX, AIX, and Linux platforms.
Storage Foundation provides additional benefits for array environments, such
as inter-array mirroring and hardware independent dynamic multipathing.
Hosts can be replaced without modifying storage.
Hosts with different operating systems can access the same storage.
Storage devices can be spanned.
Performance
I/O throughput can be maximized by measuring and modifying volume layouts
while storage remains online.
Extent-based allocation of space for files minimizes file level access time.
Read-ahead buffering dynamically tunes itself to the volume layout.
Aggressive caching of writes greatly reduces the number of disk accesses.
Direct I/O performs file I/O directly into and out of user buffers.
With VxFS, certain features are available for maximizing performance in a
database environment.
With VxFS, you can create a multi-tier storage environment where you benefit
from using a mixture of high-end disk arrays, solid state disks, low-end disk
arrays, and JBODs.
Availability
Management of storage and the file system is performed online in real time,
eliminating the need for planned downtime.
Online volume and file system management can be centralized through an
Arrays can be of different manufacture or type; that is, one array can be a
RAID array and the other a JBOD.
VxVM facilitates data reorganization and maximizes available resources.
VxVM improves overall performance by making I/O activity parallel for a
volume through more than one I/O path to and within the array.
You can use snapshots with mirrors in different locations, which is beneficial
for disaster recovery and off-host processing.
If you include Veritas Volume Replicator (VVR) or Veritas File Replicator
(VFR) in your environment, VVR and VFR can be used to provide hardware-
independent replication services.
Objectives
After completing the Administration Fundamentals training, you will be able to:
Identify VxVM virtual storage objects and volume layouts.
Install and configure Storage Foundation.
Administer the SF environment from a centralized Web console using Veritas Operations Manager.
sdx[N]
hdx[N]
In the syntax:
sd refers to a SCSI disk, and hd refers to an EIDE disk.
x is a letter that indicates the order of disks detected by the operating system.
For example, sda refers to the first SCSI disk, sdb refers to the second SCSI
disk, and so on.
N is an optional parameter that represents a partition number in the range 1
through 16. For example, sda7 references partition 7 on the first SCSI disk.
Primary partitions on a disk are numbered 1 through 4; logical partitions are numbered 5 and up.
If the partition number is omitted, the device name indicates the entire disk.
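To see how these device names decompose, here is a small shell sketch; the parsing function is illustrative, not a system utility, and operates purely on name strings rather than real devices:

```shell
#!/bin/sh
# Illustrative parser for Linux disk device names such as sda7:
# strip the sd/hd prefix, then split the disk-order letter from
# the optional partition number.
parse_dev() {
  name=$1
  rest=${name#sd}; [ "$rest" = "$name" ] && rest=${name#hd}
  disk=$(printf '%s' "$rest" | cut -c1)     # disk order letter (a, b, ...)
  part=$(printf '%s' "$rest" | cut -c2-)    # partition number, if present
  if [ -n "$part" ]; then
    echo "disk $disk partition $part"
  else
    echo "disk $disk (entire disk)"
  fi
}
parse_dev sda7   # disk a partition 7
parse_dev sdb    # disk b (entire disk)
```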
CONFIDENTIAL - NOT FOR DISTRIBUTION

Lesson 1: Virtual Objects
Disk arrays
Reads and writes on unmanaged physical disks can be a relatively slow process,
because disks are physical devices that require time to move the heads to the
correct position on the disk before reading or writing. If all of the read and write
operations are performed to individual disks, one at a time, the read-write time can
become unmanageable.
A disk array is a collection of physical disks. Performing I/O operations on
multiple disks in a disk array can improve I/O speed and throughput.
Hardware arrays present disk storage to the host operating system as LUNs. A
LUN can be made up of a single physical disk, a collection of physical disks, or
even a portion of a physical disk. From the operating system point of view, a LUN
corresponds to a single storage device.
Multipathing
Some disk arrays provide multiple ports to access disk devices. These ports,
coupled with the host bus adaptor (HBA) controller and any data bus or I/O
processor local to the array, compose multiple hardware paths to access the disk
devices. This is called multipathing.
In a multipathing environment, a single storage device may appear to the operating
system as multiple storage devices. Special multipathing software is usually
required to administer multipathed storage devices. The Veritas Dynamic Multi-Pathing (DMP) product, which is part of the Storage Foundation software, provides seamless management of multiple access paths to storage devices in heterogeneous operating system and storage environments.
What is a volume?
A volume is a virtual object, created by Volume Manager, that stores data. A
volume consists of space from one or more physical disks on which the data is
physically stored.
All users and applications access volumes as a contiguous address space using special device files, in a manner similar to accessing a disk partition.
Volumes have block and character device nodes in the /dev tree. You can supply
the name of the path to a volume in your commands and programs, in your file
system and database configuration files, and in any other context where you would
otherwise use the path to a physical disk partition.
Public region: The public region consists of the remainder of the space on the
disk. The public region represents the available space that Volume Manager
can use to assign to volumes and is where an application stores data. Volume
Manager never overwrites this area unless specifically instructed to do so.
Subdisks
A VxVM disk can be divided into one or more subdisks. A subdisk is a set of
contiguous disk blocks that represent a specific portion of a VxVM disk, which is
mapped to a specific region of a physical disk. A subdisk is a subsection of a disk's public region. A subdisk is the smallest unit of storage in Volume Manager.
Therefore, subdisks are the building blocks for Volume Manager objects.
A subdisk is defined by an offset and a length in sectors on a VxVM disk.
Default subdisk name: DMname-##
A VxVM disk can contain multiple subdisks, but subdisks cannot overlap or share
the same portions of a VxVM disk. Any VxVM disk space that is not reserved or
that is not part of a subdisk is free space. You can use free space to create new
subdisks.
Conceptually, a subdisk is similar to a partition. Both a subdisk and a partition
divide a disk into pieces defined by an offset address and length. Each of those
pieces represent a reservation of contiguous space on the physical disk. However,
while the maximum number of partitions to a disk is limited by some operating
systems, there is no theoretical limit to the number of subdisks that can be attached
to a single plex. This number has been limited by default to a value of 4096. If
required, this default can be changed, using the vol_subdisk_num tunable
parameter. For more information on tunable parameters, see the Veritas Storage
Foundation and High Availability Solutions Tuning Guide.
Plexes
Volume Manager uses subdisks to build virtual objects called plexes. A plex is a
structured or ordered collection of subdisks that represents one copy of the data in
a volume. A plex consists of one or more subdisks located on one or more physical
disks. The length of a plex is determined by the last block that can be read or written on the last subdisk in the plex.
Volumes
A volume is a virtual storage device that is used by applications in a manner
similar to a physical disk. Due to its virtual nature, a volume is not restricted by the
physical size constraints that apply to a physical disk. A VxVM volume can be as
large as the total of available, unreserved free physical disk space in the disk
group. A volume consists of one or more plexes.
Volume layouts
RAID levels correspond to volume layouts. A volume's layout refers to the organization of plexes in a volume. Volume layout is the way plexes are
configured to remap the volume address space through which I/O is redirected at
run-time. Volume layouts are based on the concepts of disk spanning, redundancy,
and resilience.
Disk spanning
Disk spanning is the combining of disk space from multiple physical disks to form
one logical drive. Disk spanning has two forms: concatenation and striping.
Data redundancy
To protect data against disk failure, the volume layout must provide some form of
data redundancy. Redundancy is achieved in two ways:
Mirroring: Mirroring is maintaining two or more copies of volume data.
A mirrored volume uses multiple plexes to duplicate the information contained
in a volume. Although a volume can have a single plex, at least two are
required for true mirroring (redundancy of data). Each of these plexes should
contain disk space from different disks for the redundancy to be useful.
Resilience: A resilient volume, also called a layered volume, is a volume that
is built on one or more other volumes. Resilient volumes enable the mirroring
of data at a more granular level. For example, a resilient volume can be
concatenated or striped at the top level and then mirrored at the bottom level.
A layered volume is a virtual Volume Manager object that nests other virtual
objects inside of itself. Layered volumes provide better fault tolerance by
mirroring data at a more granular level.
Parity: Parity is a calculated value used to reconstruct data after a failure by
doing an exclusive OR (XOR) procedure on the data. Parity information can be
stored on a disk. If part of a volume fails, the data on that portion of the failed
volume can be re-created from the remaining data and parity information.
A RAID-5 volume uses striping to spread data and parity evenly across
multiple disks in an array. Each stripe contains a parity stripe unit and data
stripe units. Parity can be used to reconstruct data if one of the disks fails. In
comparison to the performance of striped volumes, write throughput of RAID-5 volumes decreases, because parity information must be updated each time data is written. However, in comparison to mirroring, the use of parity reduces the amount of space required.
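The parity reconstruction described above reduces to XOR arithmetic. This toy sketch uses two single-byte stripe units with arbitrary values to show that XOR of the parity with the surviving data recovers the lost data:

```shell
#!/bin/sh
# Two data stripe units (single bytes) and their parity.
d1=165   # 0xA5
d2=60    # 0x3C
parity=$(( d1 ^ d2 ))
# If the disk holding d1 fails, XOR the parity with the surviving data:
recovered=$(( parity ^ d2 ))
echo "$recovered"   # 165
```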
Labs and solutions for this lesson are located on the following pages:
Appendix A provides complete lab instructions and solutions.
Lab 1: VMware Workstation Introduction, page A-8
Note: The VRTSvlic package can coexist with previous licensing packages, such
as VRTSlic. If you have old license keys installed in /etc/vx/elm,
leave this directory on your system. The old and new license utilities can
coexist.
Note: The example on the slide is from a Linux platform. You may have other
products available on other platforms.
4 If the licensing utilities are installed, the product status page is displayed. This
list displays the Veritas products on the installation media and the installation
and licensing status of each product. If the licensing utilities are not installed,
you receive a message indicating that the installation utility could not
determine product status.
5 Type I to install a product. Follow the instructions to select the product that
you want to install. Installation begins automatically.
When you add Storage Foundation packages by using the installer utility, all
packages are installed. If you want to add a specific package only, for example,
only the VRTSob package, then you must add the package manually from the
command line.
Installation input
The interactive installation prompts the user for information, such as the package
set to be installed, system names, licensing selection, license keys (if traditional
licensing is selected), and other configuration information, such as the product
mode or additional options. These answers are then stored in the
installer-timestamp+3characters.response file in the installation
log directory:
/opt/VRTS/install/logs/installer-timestamp+3characters
The .response file can then be used to install other systems non-interactively
using the ./installer -responsefile filename option. For details on
using a response file during installation, refer to Veritas Storage Foundation
Installation Guide.
Note: SF 5.1 and later provide a Web user interface to the installation utilities.
The Web installer is explained in more detail later in this lesson.
Note: If you want to install more than one system using the installer utility,
provide the system names separated by space when prompted.
You can use the installation utilities in the /opt/VRTS/install directory to verify the version of the SF product installed on your
system using the -version option as shown on the slide. This option finds out
which packages are installed on the system and attempts to connect to the SORT
Web site to get the latest version and patch information about the product installed
on the system.
If you want to verify which packages are installed on the system, you can also
view information about installed packages by using OS-specific commands to list
package information.
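The listing command differs per platform (pkginfo on Solaris, rpm -qa on Linux, lslpp -L on AIX, swlist on HP-UX, all standard OS tooling), but the filtering idea is the same. This sketch pipes canned sample output through the kind of filter you would use on a live system; the package lines are invented examples, not real system state:

```shell
#!/bin/sh
# Filter a package listing for Veritas packages, which share the VRTS prefix.
# On a live Solaris host you would pipe the output of pkginfo instead of printf.
printf '%s\n' \
  'system      VRTSvxvm   Veritas Volume Manager' \
  'system      VRTSvxfs   Veritas File System' \
  'system      SUNWcsr    Core Solaris, (Root)' |
grep VRTS
```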
Solaris
To list all installed packages on the system, use the pkginfo command:
pkginfo
and configured.
When you run the webinstaller command to start the Web server, the URL is
displayed so you can connect from a browser. On some browsers, you must accept
a security exception and authenticate using the root account and password for the
system running the Web server. After you connect to the Web server, you can
select tasks, products, and systems to start installing and configuring the target
systems.
Reports
Use this tab to view and manage uploaded reports.
Notifications
This tab enables you to set e-mail alerts so that you are notified about the latest information released about the Symantec enterprise products in your environment.
Support
Use this tab to access detailed information about all Symantec resources from
product support to Symantec forums, from documentation to product training.
components of SF products are installed and running. A typical data center can
have thousands of such hosts using some or all of the SF products.
Optional external authentication brokers (ABs) for additional domain support.
An AB is a system with Symantec Product Authentication Services (SPAS)
installed that provides access to user authentication with public domains, such
as Active Directory, NIS, or NIS+.
In a centrally managed deployment, managed hosts relay information about
storage resources and applications to the MS. The Management Server then
coalesces the data it receives from the managed hosts within its database.
server itself) when you connect to the MS console. You need to add other SF
servers as managed hosts to populate the database and start the discovery process.
Note: If you are using pop-up blockers (including Yahoo Toolbar or Google
Toolbar), either disable them or configure them to accept pop-ups from the
Web server to which you will connect.
vxdiskadm: The vxdiskadm utility provides a menu-driven, text-based interface that you can use for disk and disk group administration functions. The vxdiskadm interface has a main menu from which you can select storage management tasks.
Veritas Enterprise Administrator (VEA): Veritas Enterprise Administrator
(VEA) is a graphical user interface to Volume Manager and other Veritas
products. VEA provides access to Storage Foundation functionality through
visual elements, such as icons, menus, wizards, and dialog boxes. Using VEA,
you can manipulate Volume Manager objects and also perform common file
system operations. A single VEA task may perform multiple command-line
tasks.
and details on how to use them are located in VxVM and VxFS manual pages.
Manual pages are installed by default in /opt/VRTS/man. Add this directory to
the MANPATH environment variable, if it is not already added.
To access a manual page, type man command_name.
Examples:
man vxassist
man mount_vxfs
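The MANPATH setup described above can be scripted. This is a sketch in Bourne-shell syntax; the path is the documented default, and the duplicate check is an added convenience rather than anything the product requires:

```shell
#!/bin/sh
# Append the Storage Foundation manual page directory to MANPATH,
# skipping the append if the directory is already present.
VRTS_MAN=/opt/VRTS/man
case ":${MANPATH}:" in
  *":${VRTS_MAN}:"*) ;;   # already in MANPATH; nothing to do
  *) MANPATH="${MANPATH:+${MANPATH}:}${VRTS_MAN}"; export MANPATH ;;
esac
echo "$MANPATH"
```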
providing you with information and prompts. Default answers are provided for
many questions, so you can select common answers.
The menu also contains options for listing disk information, displaying help
information, and quitting the menu interface.
The tasks listed in the main menu are covered throughout this training. Options
available in the menu differ somewhat by platform. See the vxdiskadm(1m)
manual page for more details on how to use vxdiskadm.
Note: Only one instance of vxdiskadm can run on a host at a time. A lock file prevents multiple instances from running: /var/spool/locks/.DISKADD.LOCK.
On the Linux platform, you also need to execute the vxsvcctrl start
command to start the server process after activating it.
The VEA client can provide simultaneous access to multiple host machines. Each
host machine must be running the VEA server.
Note: Entries for your user name and password must exist in the password file or
corresponding Network Information Name Service table on the machine to
be administered. Your user name must also be included in the Veritas
administration group (vrtsadm, by default) in the group file or NIS group
table. If the vrtsadm entry does not exist, only root can run VEA.
Labs and solutions for this lesson are located on the following pages:
Appendix A provides complete lab instructions and solutions.
Lab 2: Installing SF and Accessing SF Interfaces, page A-37
If you set the use_avid option to yes, the LUNs are numbered based on the
array volume ID instead of the traditional indexing method.
You can also change the device naming scheme using the Change the disk
naming scheme option in the vxdiskadm menu.
removed.
These disks are under Volume Manager control but cannot be used by Volume
Manager until they are added to a disk group.
Note: Encapsulation is another method of placing a disk under VxVM control in
which existing data on the disk is preserved.
such as /dev/[r]dsk/device_name.
The free space in a disk group refers to the space on all disks within the disk group
that has not been allocated as subdisks. When you place a disk into a disk group,
its space becomes part of the free space pool of the disk group.
Notes
The definitions of bootdg and defaultdg are written to the volboot file. The
definition of bootdg results in a symbolic link from the named bootdg in
/dev/vx/dsk and /dev/vx/rdsk.
The rootdg disk group name is no longer a reserved name for VxVM versions
after 4.0. If you are upgrading from a version of Volume Manager earlier than
4.0 where the system disk is encapsulated in the rootdg disk group, the bootdg
is assigned the value of rootdg automatically.
Adding disks
To add a disk to a disk group, you select an uninitialized disk or a free disk. If the
disk is uninitialized, you must initialize the disk before you can add it to a disk
group.
Disk naming
When you add a disk to a disk group, the disk is assigned a disk media name. The
disk media name is a logical name used for VxVM administrative purposes.
on the same disk that other plexes of the same volume are using.
To create a volume from the command line, you use the vxassist command. In
the syntax:
Use the -g option to specify the disk group in which to create the volume.
make is the keyword for volume creation.
volume_name is a name you give to the volume. Specify a meaningful name that is unique within the disk group.
length specifies the size of the volume, in sectors by default. You can specify the length in kilobytes, megabytes, gigabytes, or terabytes by adding a k, m, g, or t suffix to the length.
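As a sketch of how those suffixes scale the length, here is a small helper; the function name and the kilobyte output unit are illustrative, not part of any VxVM utility:

```shell
#!/bin/sh
# Expand a vxassist-style length such as 5m or 2g into kilobytes.
# A value with no recognized suffix is echoed unchanged (sectors).
expand_len() {
  num=${1%?}                 # all but the last character
  unit=${1#"$num"}           # the last character
  case "$unit" in
    k) echo "$num" ;;
    m) echo $(( num * 1024 )) ;;
    g) echo $(( num * 1024 * 1024 )) ;;
    t) echo $(( num * 1024 * 1024 * 1024 )) ;;
    *) echo "$1" ;;          # plain number: interpreted as sectors
  esac
}
expand_len 5m   # 5120
expand_len 2g   # 2097152
```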
AIX
In AIX, you can use the following commands when working with the file system
table file, /etc/filesystems:
To view entries: lsfs mount_point
To change details of an entry, use chfs. For example, to turn off mount at
boot: chfs -A no mount_point
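For reference, an /etc/filesystems stanza for a VxFS file system typically looks like the following sketch. The mount point, disk group, and volume names are illustrative; mount = true corresponds to mounting at boot, which is what chfs -A no turns off:

```
/appfs:
        dev       = /dev/vx/dsk/appdg/appvol
        vfs       = vxfs
        mount     = true
        check     = false
```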
In the output:
A status of online, in addition to entries in the Disk and Group columns
indicates that the disk has been initialized or encapsulated, assigned a disk
media name, and added to a disk group. The disk is under Volume Manager
control and is available for creating volumes.
A status of online without entries in the Disk and Group columns indicates that
the drive has been initialized or encapsulated but is not currently assigned to a
disk group. Note that if there is a disk group name in parentheses without any
disk media name, it indicates that the disk belongs to a deported disk group.
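The status logic above can be applied mechanically. This sketch runs the kind of filter you might use over canned sample output; the column layout follows the DEVICE/TYPE/DISK/GROUP/STATUS header, and the device and group names are invented:

```shell
#!/bin/sh
# From canned vxdisk list output, print devices that are online
# but not yet assigned to a disk group (DISK and GROUP are "-").
printf '%s\n' \
  'DEVICE       TYPE            DISK      GROUP     STATUS' \
  'emc0_dd1     auto:cdsdisk    appdg01   appdg     online' \
  'emc0_dd2     auto:cdsdisk    -         -         online' |
awk 'NR > 1 && $3 == "-" && $5 == "online" { print $1 }'
```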
TRANSPORT : iSCSI
ENCLOSURE_NAME : emc0
NUM_PATHS : 2
Notes:
The disk name and the disk group name are changeable. The disk ID and disk
group ID are never changed as long as the disk group exists or the disk is
initialized.
The detailed information displayed by the vxdisk list command is discussed in more detail in Lesson 7.
plexes
TY NAME TYPE STATUS
plex appvol-01 simple attached
The vxinfo command prints the accessibility and the usability information on
VxVM volumes. The -p option with vxinfo also reports the name and status of
each plex within the volume.
Evacuating a disk
Evacuating a disk moves the contents of the volumes on a disk to another disk. The
contents of a disk can be evacuated only to disks in the same disk group that have
sufficient free space.
To evacuate to any disk except for appdg03:
vxevac -g appdg appdg02 !appdg03
Note: Use the -f option to force a shred operation on a Solid State Drive (SSD)
disk.
command.
Note: You can bring back a destroyed disk group by importing it with its dgid if its disks have not been reused for other purposes.
Labs and solutions for this lesson are located on the following pages:
Appendix A provides complete lab instructions and solutions.
Lab 3: Creating a Volume and File System, page A-71
Concatenated layout
A concatenated volume layout maps data in a linear manner onto one or more
subdisks in a plex. Subdisks do not have to be physically contiguous and can
belong to more than one VM disk. Storage is allocated completely from one
subdisk before using the next subdisk in the span. Data is accessed in the
remaining subdisks sequentially until the end of the last subdisk.
For example, if you have 12 GB of data, a concatenated volume can logically map the volume address space across subdisks on different disks. The addresses 0 GB through 8 GB of the volume address space map to the first 8-gigabyte subdisk, and addresses 8 GB through 12 GB map to the second 4-gigabyte subdisk. An address offset of 10 GB, therefore, maps to an address offset of 2 GB in the second subdisk.
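The address mapping in that example can be sketched with shell arithmetic; the subdisk sizes follow the example (an 8 GB subdisk followed by a 4 GB subdisk), and the function is purely illustrative:

```shell
#!/bin/sh
# Map a volume address offset (in GB) onto the subdisks of a
# concatenated volume built from an 8 GB and then a 4 GB subdisk.
sd1_len=8
map_offset() {
  off=$1
  if [ "$off" -lt "$sd1_len" ]; then
    echo "subdisk 1, offset ${off} GB"
  else
    echo "subdisk 2, offset $(( off - sd1_len )) GB"
  fi
}
map_offset 10   # subdisk 2, offset 2 GB
map_offset 3    # subdisk 1, offset 3 GB
```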
stripe unit size can be in units of sectors, kilobytes, megabytes, or gigabytes. The
default stripe unit size is 64K, which provides adequate performance for most
general purpose volumes. Performance of an individual volume may be improved
by matching the stripe unit size to the I/O characteristics of the application using
the volume.
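To make the stripe-unit arithmetic concrete, this sketch shows which column serves a given volume offset, assuming the default 64 KB stripe unit and a hypothetical three-column stripe:

```shell
#!/bin/sh
# Column selection for a striped layout: offsets are divided into
# stripe units, which are assigned to columns round-robin.
su=64    # stripe unit size in KB (the default)
ncol=3   # number of columns (hypothetical)
for off in 0 64 128 192; do
  echo "offset ${off}K -> column $(( (off / su) % ncol ))"
done
```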
Concatenation: Advantages
Better utilization of free space: Concatenation removes the restriction on size
of storage devices imposed by physical disk size. It also enables better
utilization of free space on disks by providing for the ordering of available
discrete disk space on multiple disks into a single addressable volume.
Simplified administration: System administration complexity is reduced because snapshots and mirrors can be created from free space of any size, and volumes can be increased in size by any available amount.
Concatenation: Disadvantages
No protection against disk failure: Concatenation does not protect against disk
failure. A single disk failure results in the failure of the entire volume.
Striping: Advantages
Improved performance through parallel data transfer: Improved
performance is obtained by increasing the effective bandwidth of the I/O path
to the data. This may be achieved by a single volume I/O operation spanning
across a number of disks or by multiple concurrent volume I/O operations to
more than one disk at the same time.
Load-balancing: Striping is also helpful in balancing the I/O load from
multiuser applications across multiple disks.
Mirroring: Advantages
Improved availability: With concatenation or striping, failure of any one disk
makes the entire plex unusable. With mirroring, data is protected against the
failure of any one disk. Mirroring improves the availability of a striped or
concatenated volume.
Improved read performance: Reads benefit from having multiple places
from which to read the data.
Mirroring: Disadvantages
Requires more disk space: Mirroring requires twice as much disk space, which can be costly for large configurations. Each mirrored plex requires enough space for a complete copy of the volume's data.
Slightly slower write performance: Writing to volumes is slightly slower,
because multiple copies have to be written in parallel. The overall time the
write operation takes is determined by the time needed to write to the slowest
disk involved in the operation.
The slower write performance of a mirrored volume is not generally significant
enough to decide against its use. The benefit of the resilience that mirrored
volumes provide outweighs the performance reduction.
RAID-5: Advantages
Redundancy through parity: With a RAID-5 volume layout, data can be re-
created from remaining data and parity in case of the failure of one disk.
Requires less space than mirroring: RAID-5 stores parity information, rather
than a complete copy of the data.
Improved read performance: RAID-5 provides similar improvements in read
performance as in a normal striped layout.
The following additional attributes are used with the striped volume layout:
ncol=n designates the number of stripes, or columns, across which the volume
is created. This attribute has many aliases. For example, you can also use
nstripe=n or stripes=n.
limit, the volume is created using the upper limit as the volume length. If the
maximum possible size is smaller than this limit, the volume is created with the
maximum possible size.
When you create a mirrored volume, you can add a dirty region log by adding the
logtype=drl attribute:
vxassist -g diskgroup [-b] make volume_name length \
layout=mirror-concat logtype=drl [nlog=n]
A log plex that consists of a single subdisk is created.
If you plan to mirror the log, you can add more than one log plex by specifying
a number of logs using the nlog=n attribute, where n is the number of logs.
vxassist -g appdg make appvol 5m layout=mirror-concat \
logtype=drl
Note: Dirty region logs are covered in a later lesson.
Allocating storage for volumes
Specifying storage attributes for volumes
VxVM selects the disks on which each volume resides automatically, unless you
specify otherwise. To create a volume on specific disks, you can designate those
disks when creating a volume. By specifying storage attributes when you create a
volume, you can:
Include specific disks, controllers, enclosures, targets, or trays to be used for
the volume.
Exclude specific disks, controllers, enclosures, targets, or trays from being
used for the volume.
Mirror volumes across specific controllers, enclosures, targets, or trays. (By
default, VxVM does not permit mirroring on the same disk.)
Note: When creating a volume, all storage attributes that you specify for use must
belong to the same disk group. Otherwise, VxVM does not use these
storage attributes to create a volume.
Labs and solutions for this lesson are located on the following pages:
Appendix A provides complete lab instructions and solutions.
Lab 4: Working with Volumes with Different Layouts, page A-93
mirror. Unless you specify the disks to be used for the mirror, VxVM
automatically locates and uses available disk space to create the mirror.
A volume can contain up to 32 plexes (mirrors); however, the practical limit is 31.
One plex should be reserved for use by VxVM for background repair operations.
Removing a mirror
When a mirror (plex) is no longer needed, you can remove it. You can remove a mirror to provide free space, to reduce the number of mirrors, or to remove a temporary mirror.
Caution: Removing a mirror results in loss of data redundancy. If a volume only
has two plexes, removing one of them leaves the volume unmirrored.
Lesson 5: Making Configuration Changes
Migrating data to a new array
Without Storage Foundation, moving data from one array to another requires
downtime. Using Storage Foundation, you can mirror to a new array, ensure it is
stable, and then remove the plexes from the old array. No downtime is necessary.
This is useful in many situations, for example, if a company purchases a new array.
The high level steps for migrating data using Storage Foundation are listed on the
slide. Note that if you have multiple volumes on the old array, you would need to
repeat steps 6 to 9 for each volume. The following steps illustrate the commands
you need to use to perform the migration using a simple example where the appvol
volume in the appdg disk group is moved from the emc0 enclosure to the emc1
enclosure. To keep the example simple, only one LUN is used to mirror the simple
volume.
1 Set up LUNs on the new array.
2 Get the OS to detect the LUNs. For example, type devfsadm on a Solaris
system.
3 vxdisk scandisks new (for VxVM to recognize LUNs from the new
emc1 enclosure)
4 vxdisksetup -i emc1_dd1 (Repeat for each new LUN to be used in the
volume.)
5 vxdg -g appdg adddisk appdg02=emc1_dd1
6 vxassist -g appdg mirror appvol appdg02
7 Wait for the synchronization to complete.
restored to a consistent state by copying the full contents of the volume between its
mirrors. This process can be lengthy and I/O intensive.
When you enable logging on a mirrored volume, one log plex is created by default.
The log plex uses space from disks already used for that volume, or you can
specify which disk to use. To enhance performance, you should consider placing
the log plex on a disk that is not already in use by the volume.
Note: Before configuring the siteread policy, the Site Awareness feature must be
configured by assigning hosts and LUNs to different sites. Note that setting
the siteread policy on a volume has no impact if the site name has not been
set for the host.
You can also use the vxprint command to observe the read policy of a mirrored
volume as shown in the following output extracts. Note that the fields related to the
read policy are displayed in bold font for emphasis:
When you resize a volume, you can specify the new length in sectors,
kilobytes, megabytes, or gigabytes. The unit of measure is added as a suffix to the
length (s, k, m, or g). If no unit is specified, the default unit is sectors.
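As an aside, the suffix arithmetic can be sketched as a small shell function; the 512-byte sector size is an assumption here (typical, but platform dependent):

```shell
# Convert a VxVM length specification (s, k, m, or g suffix; default is
# sectors) to sectors, assuming 512-byte sectors.
to_sectors() {
  case "$1" in
    *g) echo $(( ${1%g} * 2097152 )) ;;  # 1 GB = 2097152 sectors
    *m) echo $(( ${1%m} * 2048 )) ;;     # 1 MB = 2048 sectors
    *k) echo $(( ${1%k} * 2 )) ;;        # 1 KB = 2 sectors
    *s) echo "${1%s}" ;;
    *)  echo "$1" ;;                     # no suffix: already sectors
  esac
}
to_sectors 1g   # prints 2097152
```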
Note that this command does not change the size of the volume.
The ability to expand or shrink a file system depends on the file system type and
whether the file system is mounted or unmounted. The following table provides
some examples:
When deporting a disk group, you can use the -h option to specify a new host to
which the disk group is imported at reboot. If you know the name of the host to
which the disk group will be imported, then you should specify the new host during
the deport operation. If you do not specify the new host, then the disks could
accidentally be added to another disk group, resulting in data loss.
You cannot specify a new host using the vxdiskadm utility.
When a disk group is imported, host locks are written on its disks. If the
system crashes, the locks stored on the disks remain, and if you try to import a disk
group containing those disks, the import fails.
Importing as temporary
A temporary import does not persist across reboots. A temporary import can be
useful, for example, if you need to perform administrative operations on the
temporarily imported disk group.
Forcing an import
A disk group import fails if the VxVM configuration daemon cannot find all of the
disks in the disk group. If the import fails because a disk has failed, you can force
the import. Forcing an import should always be performed with caution.
You must manually start the volumes after you import a disk group from the command line.
A disk group must be deported from its previous system before it can be imported
to the new system. During the import operation, the system checks for host import
locks. If any locks are found, you are prompted to clear the locks.
To temporarily import a disk group, you use the -t option. This option does not
set the autoimport flag, which means that the import cannot survive a reboot.
To display all disk groups, including deported disk groups:
vxdisk -o alldgs list
DEVICE TYPE DISK GROUP STATUS
emc0_dd1 auto:cdsdisk appdg01 appdg online
emc0_dd2 auto:cdsdisk - (oradg) online
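As a sketch, the deport and import operations discussed above might look like the following; the disk group and host names reuse earlier examples, and -C (clear import locks) is a standard vxdg option to be used with care:

```shell
# Deport appdg and name the host that is expected to import it next:
vxdg -h sym2 deport appdg
# On the new host, import it temporarily (-t: the autoimport flag is
# not set, so the import does not survive a reboot):
vxdg -t import appdg
# Clear stale host import locks during import, for example after a crash:
vxdg -C import appdg
```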
Symantec Storage Foundation 6.x for UNIX: Administration Fundamentals
Renaming VxVM objects
Changing the disk media name
VxVM creates a unique disk media name for a disk when you add a disk to a disk
group. Sometimes you may need to change a disk name to reflect changes of
ownership or use of the disk. Renaming a disk does not change the physical disk
device name. The new disk name must be unique within the disk group.
Volumes are not affected when subdisks are named differently from the disks.
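A minimal sketch of the rename operation, assuming the appdg disk group from earlier examples and a hypothetical new name appdg03:

```shell
# Rename a disk media record; the physical device name is unchanged:
vxedit -g appdg rename appdg01 appdg03
```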
Labs and solutions for this lesson are located on the following pages:
Appendix A provides complete lab instructions and solutions.
Lab 5: Making Configuration Changes, page A-103
Extents allow disk I/O to take place in units of multiple blocks if storage is
allocated in consecutive blocks. This topic is analyzed in more detail in the
following pages.
Extent attributes
Extent attributes are the extent allocation policies associated with a file.
Online administration
Many file system administration tasks, such as backing up or resizing the
file system, can be performed while the file system is still
mounted. Online file system defragmentation is discussed later in this lesson.
VxFS attempts to allocate each file in one extent of blocks. If this is not possible,
VxFS attempts to allocate all extents for a file close to each other.
Each file is associated with an index block, called an inode. In an inode, an extent
is represented as an address-length pair, which identifies the starting block address
and the length of the extent in logical blocks. This enables the file system to
directly access any block of the file.
VxFS automatically selects an extent size by using a default allocation policy that
is based on the size of I/O write requests. The default allocation policy attempts to
balance two goals:
Optimum I/O performance through large allocations
With a traditional file system, the time required to check a file system after a
failure is proportional to the size of the file system. For large disk configurations,
running fsck is a time-consuming process that checks, verifies, and corrects the
entire file system.
The VxFS version of the fsck utility performs an intent log replay to recover a
file system without completing a full structural check of the entire file system. The
time required for log replay is proportional to the log size, not the file system size.
Therefore, the file system can be recovered and mounted seconds after a system
failure. Intent log recovery is not readily apparent to users or administrators, and
the intent log can be replayed multiple times with no adverse effects.
Note: Replaying the intent log may not completely recover the damaged file
system structure if the disk suffers a hardware failure. Such situations may require
a complete file system check using the VxFS fsck utility.
Maintaining file system consistency
You use the VxFS-specific version of the fsck command to check the consistency
of and repair a VxFS file system. The fsck utility replays the intent log by
default, instead of performing a full structural file system check, which is usually
sufficient to set the file system state to CLEAN. You can also use the fsck utility
to perform a full structural recovery in the unlikely event that the log is unusable.
The syntax for the fsck command is:
fsck [fstype] [generic_options] [-y|-Y] [-n|-N] \
[-o full,nolog] special
For a complete list of generic options, see the fsck(1m) manual page. Some of
the generic options include:
Option Description
The -o p option can be used only with an intent log replay (log fsck), not with a full fsck.
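A hedged sketch of the two modes, reusing the appdg/appvol device path from earlier examples (the -t vxfs form is Linux syntax; Solaris, HP-UX, and AIX use -F vxfs):

```shell
# Default: replay the intent log only
fsck -t vxfs /dev/vx/rdsk/appdg/appvol
# Unusable log: force a full structural check, answering yes to prompts
fsck -t vxfs -o full,nolog -y /dev/vx/rdsk/appdg/appvol
```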
Lesson 6: Administering File Systems
Controlling file system fragmentation
In a Veritas file system, when free resources are initially allocated to files, they are
aligned in the most efficient order possible to provide optimal performance. On an
active file system, the original order is lost over time as files are created, removed,
and resized. As space is allocated and deallocated from files, the available free
space becomes broken up into fragments. This means that space has to be assigned
to files in smaller and smaller extents. This process is known as fragmentation.
Fragmentation leads to degraded performance and availability.
VxFS provides online reporting and optimization utilities to enable you to monitor
and defragment a mounted file system. These utilities are accessible through the
file system administration command, fsadm.
Types of fragmentation
A file system is considered to have bad fragmentation if it has:
More than 50 percent of free space in extents of less than 64 blocks in length
Less than 5 percent of the total file system size available as free extents in
lengths of 64 or more blocks
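The two rule-of-thumb figures above can be expressed as a small shell predicate (a sketch for illustration only, not an SF utility):

```shell
# Bad fragmentation rule of thumb:
#   $1 = percent of free space in extents shorter than 64 blocks
#   $2 = percent of total file system size free in extents of 64+ blocks
is_badly_fragmented() {
  [ "$1" -gt 50 ] || [ "$2" -lt 5 ]
}
is_badly_fragmented 60 10 && echo "badly fragmented"   # prints: badly fragmented
```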
Fragmentation can also be determined based on the fragmentation index. The
fragmentation report displays fragmentation indices for both the free space and the
files in the file system. A value of 0 for the fragmentation index means that the file
system has no fragmentation, and a value of 100 means that the file system has the
highest level of fragmentation. The fragmentation index is new with SF 6.x and
enables you to determine whether you should perform extent defragmentation or
free space defragmentation.
If you use the -D and -E options together with the -d and -e options,
fragmentation reports are produced both before and after the reorganization.
You can use the -t and -p options to control the amount of work performed by
fsadm, either in a specified time or by a number of passes. By default, fsadm
runs five passes. If both -t and -p are specified, fsadm exits if either of the
terminating conditions is reached.
Note: On the Linux platform, the -T time option is used instead of the
-t time option, because the -t switch is used for the file system type
switchout mechanism.
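Putting the options together, a typical invocation might look like this (Linux syntax; /mnt1 is a hypothetical mount point, and the -T value is in seconds):

```shell
# Report fragmentation, reorganize extents and directories, then report
# again, stopping after at most 1800 seconds:
fsadm -t vxfs -D -E -d -e -T 1800 /mnt1
```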
To determine an appropriate defragmentation schedule, start with
what you think is an appropriate interval for running extent reorganization and run
the fragmentation reports both before and after the reorganization. If the degree of
fragmentation is approaching the bad fragmentation figures, reduce the interval
between fsadm runs. If the degree of fragmentation is low, the interval between
fsadm runs can be increased.
You should schedule directory reorganization for file systems when the extent
reorganization is scheduled. The fsadm utility can run on demand and can be
scheduled regularly as a cron job.
The defragmentation process can take some time. You receive an alert when the
process is complete.
When using a thin provisioning capable array, a virtual container (virtual volume)
is created for the full requested capacity (1 TB in the slide example). The array then
creates or resizes LUNs as actual data is written to the virtual container. The
administrator is not involved after the initial virtual container is created, unless the
actual physical storage is used up.
To truly benefit from thin storage, you need the right stack on all hosts:
A multi-pathing driver that supports the thin hardware
A file system optimized not to waste storage on thin volumes
A stack to reclaim space as you migrate to thin storage
A stack to continually optimize utilization of thin storage
SF unlocks thin provisioning's full potential with DMP and VxFS, which is the
only cross-platform thin storage-friendly file system.
Displaying information on thin disks
SF automatically controls the applicability of features such as SmartMove and thin
reclamation based on known device attributes. If SmartMove is enabled only for
thin LUNs and a device is known to be thin by Storage Foundation, then mirroring
operations are optimized to keep the device thin. If a device is known to be
thinrclm, then SF allows thin reclamation commands to be issued to it.
SF 5.0 MP3 and later automatically discover thin LUNs and their attributes. If a
thin LUN is not automatically discovered as thin, you can use the following
command to manually inform SF that the LUN is thin or thin reclaim:
vxdisk -g diskgroup set dm_name thin=[on|reclaim]
The vxdisk -e list command prints the extended device attributes
(EXT_ATTR) as the last column to indicate the type of the device.
To display properties of the devices that support thin provisioning, use the
vxdisk -o thin list command. This command also indicates whether the
LUN supports thin reclamation. Thin reclamation is the process of reclaiming
unused storage that is a result of deleted files and volumes back to the available
free pool of the thin provisioning capable array. Not all thin provisioning arrays
support thin reclamation. Use the vxdisk -o thin,fssize list command
to display and compare the physically allocated storage size to the storage size
used by the file system. If there is a big difference between the two sizes, it is time
to initiate a thin reclamation process on the corresponding device.
The vxdisk -p list command displays the discovered properties of the
disks including the attributes related to thin provisioning and thin reclamation.
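The display commands described above can be summarized as:

```shell
vxdisk -e list              # EXT_ATTR column flags thin / thinrclm devices
vxdisk -o thin list         # thin LUNs and whether they support reclamation
vxdisk -o thin,fssize list  # physical allocation vs. file system usage
vxdisk -p list              # all discovered device properties
```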
To enable the SmartMove feature only for volumes that contain thin LUNs, you need to specify
usefssmartmove=thinonly in the /etc/default/vxsf file. This
tunable is system-wide and persistent, so it only needs to be set once per server.
Setting this tunable parameter to none completely disables the SmartMove
feature. Note that with SF 5.1 and later, you can also use the vxdefault
command to change the value of this tunable parameter. The vxdefault
command is explained in more detail later in this topic.
Note: The Veritas file system must be mounted to get the benefits of the
SmartMove feature.
This feature can be used for faster plex creation and faster array migration.
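A sketch of displaying and setting the tunable with vxdefault (SF 5.1 and later), as discussed above:

```shell
vxdefault list                         # display current tunable values
vxdefault set usefssmartmove thinonly  # SmartMove only for thin LUNs
```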
Administering thin provisioning parameters
In SF 5.1 and later, the vxdefault command is used to modify and display the
tunable parameters that are stored in the /etc/vx/vxsf file as shown on the
slide.
The sharedminorstart tunable parameter is used with the dynamic disk group
reminoring feature. This feature is used to allocate minor numbers dynamically to
disk groups based on their private or shared status. Shared disk groups are used
with Cluster Volume Manager and are not covered in this course.
The fssmartmovethreshold parameter defines a threshold value: the SmartMove
feature is used only if the file system usage percentage is less than this threshold.
By default, fssmartmovethreshold is set to 100, which means that SmartMove is
used with all VxFS file systems with less than 100% usage.
The autostartvolumes tunable parameter turns automatic volume recovery on or
off. If this parameter is set to on, VxVM automatically recovers and starts
disabled volumes when you import, join, move, or split a disk group.
Thin reclamation can be triggered on one or more disks, enclosures or disk groups,
or at the file system level on a mounted VxFS file system as displayed on the slide.
When you reclaim at the file system level, the command goes through all the free
extents in the file system and issues the storage level reclaim on the regions which
are free. Every time the command is run, the complete file system is scanned.
VxVM is optimized to issue the reclaim only to the TP LUNs in the file system.
When you reclaim at the VxVM level, the reclaim command goes through the list
of all TP LUN-backed mounted file systems associated to the specified object, and
issues the reclaim on all the file systems. The output displays the list of volumes
skipped and the list of volumes reclaimed.
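As a sketch, reclamation can be triggered at each of these levels; the object names reuse earlier examples and /mnt1 is a hypothetical mount point:

```shell
vxdisk reclaim emc1_dd1   # a single disk
vxdisk reclaim emc1       # an enclosure
vxdisk reclaim appdg      # a disk group
fsadm -t vxfs -R /mnt1    # a mounted VxFS file system (Linux syntax)
```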
Aggressive reclamation compacts the data in the file system in order to free
additional space. This is an additional option for reclamation that can only be triggered at the
file system level using the fsadm -R -A mount_point command. Note that
you can use the -o analyze option first to determine whether you should perform a
normal reclaim operation or an aggressive reclaim operation.
Notes:
Aggressive reclamation can only be performed on file systems that are known
to use thin reclaim capable storage.
Aggressive reclamation can increase the thin storage usage temporarily during
the data compaction process.
reclaim_on_delete_start_time=[00:00-23:59]
The vxdg destroy diskgroup command does not reclaim any storage
automatically. The thin provision reclaimable LUNs belonging to the destroyed
disk group must be reclaimed manually using the vxdisk reclaim disk
command.
Volumes prepared for instant snapshots are created with a DCO volume to hold the
FastResync maps as well as the DRL recovery maps and other special maps used
with instant snapshot operations on disk.
Note that you cannot remove a mirrored volume using the vxassist remove
volume command if it has an associated DCO log. To remove a mirrored volume
with a DCO log, use the following vxedit command:
vxedit -g diskgroup -rf rm volume_name
Labs and solutions for this lesson are located on the following pages:
Appendix A provides complete lab instructions and solutions.
Lab 6: Administering File Systems, page A-123
VxVM daemons
VxVM relies on the following constantly running daemons for its operation:
vxconfigd: The VxVM configuration daemon maintains disk and disk group
configurations, communicates configuration changes to the kernel, and
modifies configuration information stored on disks. When a system is booted,
the vxdctl enable command is automatically executed to start vxconfigd.
By default, VxVM monitors five or more copies of the configuration database for each disk group.
VxVM balances their locations based on the number of controllers, targets, and
disks in the disk group.
VxVM configuration copies are placed across the enclosures spanned by a disk
group to ensure maximum redundancy across enclosures.
The vxconfigd configuration daemon is the process that updates the
configuration through the vxconfig device. The vxconfigd daemon was
designed to be the sole and exclusive owner of this device.
By default, for each disk group, VxVM maintains a minimum of five active
database copies on the same controller. In most cases, VxVM also attempts to
alternate active copies with inactive copies. In the example, the copies on c1t3d0
and c1t9d0 are disabled. If different controllers are represented on the disks in the
same disk group, VxVM maintains a minimum of two active copies per controller.
In the output on the slide, the Configuration database size (permlen=) is next to a
field named free=. The free= field can be used to check how fast the configuration
database is filling up so that action can be taken before the disk group runs out of
database space.
Term Description
device: Full UNIX device name of the disk
devicetag: Device name used by VxVM to refer to the physical disk
type: Method of placing the disk under VxVM control
hostid: Name of the system that manages the disk group (if blank, no host is
currently controlling this group)
disk: VM disk media name and internal ID
group: Disk group name and internal ID
info: Disk format, private region offset, and partition numbers for the
public and private regions
flags: Settings that describe status and options for the disk
pubpaths: Paths for the block and character device files of the public region of
the disk
version: Version number of the header format
iosize: The I/O size range that the disk accepts
public, private: Partition (slice) number, offset from the beginning of the partition,
length of the partition, and disk offset
to data on disk.
Note: With SF 5.1 and later, the vxconfigd daemon is able to process the
following query requests while it is performing disk group import
operations:
vxdctl mode
vxdg list
vxdisk list
vxprint
Disabled
In the disabled mode, vxconfigd does not retain configuration information
for the imported disk groups and does not
maintain the volume and plex device directories. Certain failures, most
commonly the loss of all disks or configuration copies in the boot disk group,
cause vxconfigd to enter the disabled state automatically.
Booted
The booted mode is part of normal system startup, prior to checking the root
file system. The booted mode imports the boot disk group and waits for a
request to enter the enabled mode. Volume device node directories are not
maintained, because it may not be possible to write to the root file system.
vxdctl list
Volboot file
version: 3/1
seqno: 0.1
cluster protocol version: 110
hostid: train1
...
With Dynamic Multi-Pathing, you can create LVM volumes and volume groups on
DMP metadevices. Veritas Dynamic Multi-Pathing can be licensed separately from
Storage Foundation products. Veritas Volume Manager and Veritas File System
functionality is not provided with a DMP license.
Active/active disk arrays permit several paths to be used concurrently for I/O.
With these arrays, DMP provides greater I/O throughput by balancing the I/O load
uniformly across the multiple paths to the disk devices. If one connection to an
array is lost, DMP automatically routes I/O over the other available connections to
the array.
paths.
minimumq sends I/O on paths that have the minimum number of I/O requests
in the queue. This is the default policy for all types of arrays.
priority assigns the path with the highest load carrying capacity as the
priority path.
round-robin sets a simple round-robin policy for I/O.
singleactive channels I/O through the single active path.
To display the current I/O policy:
vxdmpadm getattr enclosure enclosure_name iopolicy
Note: Marking a path as a preferred path does not change its I/O load
balancing policy.
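For example (the enclosure name emc0 follows earlier examples):

```shell
# Display the current I/O policy for an enclosure:
vxdmpadm getattr enclosure emc0 iopolicy
# Change it to a simple round-robin policy:
vxdmpadm setattr enclosure emc0 iopolicy=round-robin
```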
When you disable I/O to a controller, disk, or path, you override the DMP path
restoration thread's ability to reset the path to ENABLED; the status of the manually
disabled path is displayed as DISABLED(M) or disabled(m).
When you enable I/O to a controller:
For active/active disk arrays, the controller is used again for load balancing.
For active/passive disk arrays, the operation results in failback of I/O to the
primary path.
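A minimal sketch, assuming a hypothetical controller name c2:

```shell
vxdmpadm disable ctlr=c2   # path status becomes DISABLED(M)
vxdmpadm enable ctlr=c2    # load balancing or failback resumes
```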
Labs and solutions for this lesson are located on the following pages:
Appendix A provides complete lab instructions and solutions.
Lab 7: Managing Devices Within the VxVM Architecture, page A-141
Disk failure
Data availability and reliability are ensured through most failures if you are using
VxVM redundancy features, such as mirroring or RAID-5. If the volume layout is
not redundant, loss of a drive may result in loss of data and may require recovery
from backup.
After you run this command, the drive status changes to error for the failed drive,
and the disk media record changes to failed. The disk is immediately marked as
being in the error state when the public region is not accessible.
Note that the example on the slide shows three failed disks, one of which was in a
disk group and was assigned a disk media name. Failed disks that were not part of
a disk group also change their status to error, but they have no disk media records
to show as failed.
Disk replacement: The new physical disk is attached to the existing disk
media name. For temporary failures, this is the same physical disk that was
used for the disk media before.
Volume recovery: When a disk fails and is removed for replacement, the plex
on the failed disk is disabled, until the disk is replaced. Volume recovery
involves starting disabled volumes, resynchronizing mirrors, and
resynchronizing RAID-5 parity.
After successful recovery, the volume is available for use again. Redundant
(mirrored or RAID-5) volumes can be recovered by VxVM. With permanent
failures, nonredundant (unmirrored) volumes must be restored from backup.
3 Get VxVM to recognize that a failed disk is now working again. Although you
can use the vxdctl enable command to get VxVM to recognize a new or
recovered disk, this command causes VxVM to reread all of the configuration
information on all of the existing devices. In large configurations this can be
time consuming. You can use the vxdisk scandisks commands
displayed on the slide to limit the discovery operation to a subset of disks.
4 Verify that VxVM recognizes the disk:
vxdisk -o alldgs list
After the operating system and VxVM recognize the new disk or the recovered
disk, you can then attach the disk to the failed disk media record.
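The discovery-limiting forms of vxdisk scandisks referred to above can be sketched as follows (the device names are hypothetical):

```shell
vxdisk scandisks new               # scan only newly added devices
vxdisk scandisks device=sdc,sdd    # scan only the named devices
```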
Attaching the VxVM disk
The vxreattach command attaches the disk media record to the new
disk access record. If you use the -r option with the vxreattach command,
volume recovery is also initiated and there is no need to perform volume recovery
separately.
Note that the -s option of the vxrecover command starts all disabled volumes
that can be started. However, if a non-redundant volume does not have a clean or
active plex, the vxrecover -s command will not succeed in starting it. In this
case, you may need to start the non-redundant volume forcibly using the vxvol
-f start command as shown in the slide. Starting a volume is necessary before
you can perform any I/O on the volume, for example to restore data from a backup.
CAUTION You must never start redundant volumes forcibly. If you do so, you
may cause data corruption.
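A hedged sketch of the recovery sequence, reusing the appdg/appvol example names:

```shell
# Reattach recovered disks to their disk media records and
# initiate volume recovery in one step:
vxreattach -r
# Or start all startable disabled volumes in the disk group:
vxrecover -g appdg -s
# Last resort for a non-redundant volume only (never force-start
# a redundant volume):
vxvol -g appdg -f start appvol
```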
Note: If the failing flag is set on a disk, it is not turned off until the administrator
executes the following command:
vxedit -g diskgroup set failing=off dm_name
Note: VxVM hot relocation is applicable when working with both physical disks
and hardware arrays. For example, even with hardware arrays if you mirror
a volume across LUN arrays, and one array becomes unusable, it is better
Labs and solutions for this lesson are located on the following pages:
Appendix A provides complete lab instructions and solutions.
Lab 8: Resolving Hardware Problems, page A-159
Note: The sym3 and sym4 systems are not used for the Storage Foundation labs.
Note: In the following exercises, the virtual machines are identified by the system
names in the preceding table.
by the instructor
winclient User: administrator
Password: train
In this exercise, you start the virtual machines and display the existing snapshots
for each virtual machine.
b Ensure that VMware Workstation opens and that the following tabs are
present - mgt, util1, winclient, sym1 and sym2. If these tabs are not
present, do not proceed until notifying the instructor.
End of Solution
Solution
b Use the Summary view to locate the Devices tab and review the
information showing the virtual machine configuration.
c Click each of the remaining tabs and review the Devices pane information
for each virtual machine.
End of Solution
Solution
c While the virtual machine is starting, proceed to the next virtual machine.
End of Solution
Solution
End of Solution
CAUTION Do not proceed to the next step until the login screen is visible on
both mgt and util1. The mgt server will show a typical RHEL
graphical logon screen, while the util1 server will stop at a CLI
login prompt.
Note: The first two virtual machines must be turned on at all times during all lab
testing. Failure to start the mgt and util1 virtual machines results in
missing files and missing shared LUNs.
winclient
Solution
c While the virtual machine is starting, proceed to the next virtual machine.
End of Solution
sym1
Solution
c While the virtual machine is starting, proceed to the next virtual machine.
End of Solution
sym2
Solution
End of Solution
Log on to each virtual machine to become familiar with the logon procedures for
each system type.
Note: Do not log onto the mgt and util1 virtual machines unless the instructor
requests you to do so.
winclient
b On the login screen of the winclient system, type the username and
password, and then press Enter.
sym1
2 Log on to the first Storage Foundation Server (sym1) as the root user.
Solution
b On the login screen of the sym1 server, type the username and press
Enter.
c When prompted, type the password for this system and press Enter.
Password: train
End of Solution
sym2
3 Log on to the second Storage Foundation Server (sym2) as the root user.
Solution
b On the login screen of the sym2 server, type the username and press
Enter.
c When prompted, type the password for this system and press Enter.
Password: train
End of Solution
4 Press Ctrl+Alt to release keyboard and mouse controls from the virtual
machine.
sym1
Solution
On the desktop, right-click and select Konsole.
End of Solution
Solution
b Locate the entries for the eth1, eth2, eth3, and eth4 interfaces.
End of Solution
Solution
ping 10.10.2.1
ping 10.10.3.1
8 Use the nslookup command to view the fully qualified host name of the
second Storage Foundation Server (sym2).
Solution
nslookup sym2
End of Solution
9 Ensure that iSCSI LUNs are available using the fdisk -l command.
Solution
From one of the open terminal windows, type fdisk -l.
End of Solution
sym2
Solution
On the desktop, right-click and select Konsole.
End of Solution
Solution
b Locate the entries for the eth1, eth2, eth3 and eth4 interfaces.
End of Solution
Solution
ping 10.10.2.1
ping 10.10.3.1
ping 10.10.4.1
13 Use the nslookup command to view the fully qualified host name of the first
Storage Foundation Server (sym1).
Solution
nslookup sym1
End of Solution
14 Ensure that iSCSI LUNs are available using the fdisk -l command.
Solution
From one of the open terminal windows, type fdisk -l.
End of Solution
Note: The mgt and util1 virtual machines must be running to have access to the
iSCSI LUNs. If only the sda and sdb disks are visible, contact the
instructor to isolate the issue.
End of lab
Note: These exercises are to be used only if the class is using the hosted Hatsize
platform to access the lab environment. Exercises for other environments,
such as VMware Workstation, are located elsewhere in this document.
Hatsize interface
The screen shot in the slide shows the Hatsize interface used to access the virtual
machines. Instead of using tabs, such as the tabs in VMware Workstation, you
access Hatsize virtual machines from the System Access menu. Other key
interface elements include:
Machine Commands: Indicates the currently connected machine and whether you
have control of the machine or are in view-only mode.
System Access: Used for the Power on, Power off, revert, and save options.
Note: The sym3 and sym4 systems are not used for the Storage Foundation labs.
Log on to Hatsize and connect to the first system. For each lab environment in
Hatsize, a particular virtual machine is marked as a primary machine. All other
machines are marked as secondary machines. When you connect to the Hatsize
interface, you are initially connected to the primary virtual machine.
1 Locate the Hatsize portal URL and login credentials from your registration
e-mail. Record your credentials here:
Hatsize username:
Hatsize password:
2 Your student number is the number at the end of your Hatsize username
recorded in the previous step.
Note: When you use the Hatsize environment, all of the virtual machines
assigned to you are prefixed with a letter and your student number. For
example, if your student number is 8, the virtual machine named vom is
named something like k8-vom or s8-vom. Because the prefix is
different for each student, the lab exercises refer only to the system
name without the prefix.
Solution
The logon screen in the browser is similar to this:
End of Solution
4 After logging in, find your class in the Current Classes table and click Enter
Lab. Note that the name of your class will be different from the sample shown
here.
Sample:
Note: In this exercise, you are going to perform these steps on sym1. You can
repeat the same steps for the other virtual machines, sym2 and
winclient, in the Hatsize environment.
sym1
End of Solution
Note: The screen displayed above (the Java console) might differ between
VMs. You can open a VM by single-clicking the thumbnail
screen image or by clicking the System pull-down control (located in
the bottom-right corner of the thumbnail) and selecting Open.
5 Clicking the white triangle in the green square icon at the top of the
VM desktop provides access to the VM control functions, such as
keyboard entry and power management.
Note: In this exercise, you perform these steps on sym1. You can repeat the
same steps for the other virtual machine, sym2, in the Hatsize
environment.
sym1
Solution
b Locate the entries for the eth1, eth2, eth3 and eth4 interfaces.
End of Solution
Solution
ping 10.10.2.12
ping 10.10.3.12
ping 10.10.4.12
ping 10.10.5.12
End of Solution
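The four connectivity checks above can be collected into a small loop. This is a sketch of our own, not part of the lab scripts: it only builds the address list (the 10.10.N.12 addresses come from the steps above) and leaves the actual ping as a comment, so the list can be reviewed before any packets are sent.

```shell
# Build the list of sym2 peer addresses used in the ping steps above.
peer_ips=""
for net in 2 3 4 5; do
  peer_ips="$peer_ips 10.10.$net.12"
done
echo "peers:$peer_ips"
# On the lab system, test each address with a single packet:
#   for ip in $peer_ips; do ping -c 1 -W 2 "$ip" || echo "FAILED $ip"; done
```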
4 Use the nslookup command to view the fully qualified host name of the
second Storage Foundation Server (sym2).
Solution
nslookup sym2
End of Solution
5 Ensure that iSCSI LUNs are available using the fdisk -l command.
Solution
From one of the open terminal windows, type fdisk -l.
End of Solution
sym2
Solution
b Locate the entries for the eth1, eth2, eth3 and eth4 interfaces.
End of Solution
Solution
ping 10.10.2.11
ping 10.10.3.11
ping 10.10.4.11
CONFIDENTIAL - NOT FOR DISTRIBUTION
241 Lab 1: Hatsize Introduction Copyright 2014 Symantec Corporation. All rights reserved.
ping 10.10.5.11
End of Solution
4 Use the nslookup command to view the fully qualified host name of the
second Storage Foundation Server (sym1).
Solution
nslookup sym1
End of Solution
5 Ensure that iSCSI LUNs are available using the fdisk -l command.
Solution
From one of the open terminal windows, type fdisk -l.
End of Solution
Note: The mgt and scst virtual machines must be running to have access to the
iSCSI LUNs. If only the sda, sdc, and sdb disks are visible, contact the
instructor to isolate the issue.
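One hedged way to spot-check that the iSCSI LUNs are visible is to count the disk headers that fdisk prints. The count_disks helper below is our own name, not a lab or OS command; with the LUNs presented, sym1 should report more than the three local disks.

```shell
# count_disks counts the "Disk /dev/..." header lines in fdisk output.
# With the iSCSI LUNs presented, the count should be well above the three
# local disks (sda, sdb, sdc).
count_disks() { grep -c '^Disk /dev/'; }
# On the lab system:
#   fdisk -l 2>/dev/null | count_disks
```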
If the lab steps instruct you to restart a VMware machine, you must preserve the
system state during the process. Otherwise, the machine is restored to its initial
state and loses any changes you have made. Only discard the state of the machine
after consulting with your instructor. There are two methods: within the
operating system on the virtual machine, or in the console System Control menu.
sym1
CAUTION Do not perform any of these steps without receiving permission
or notice from the instructor.
2 From a terminal window, use the shutdown -ry now command to restart
the virtual machine.
Solution
shutdown -ry now
End of Solution
3 The power management machine commands are available in two places: from
the white-on-green triangle at the top of the VM desktop, and from the
System pull-down control in the thumbnail view. The two power
management commands that are currently available in the VA lab are:
CAUTION Do not perform any of these steps without receiving permission
or notice from the instructor.
b Power Cycle and revert to last saved state: This operation returns
the VM to its first-day-of-class condition. Any and all work, including
software installations and configurations, performed since the beginning of
the class is lost. Use this choice only as a last resort when
a VM has become unusable and a total refresh is necessary. Do not
choose this option without instructor direction.
4 After finishing all the lab exercises, remember to formally disconnect the VA
session. A Disconnect option to close the VA session is in the top right
corner.
Note: If you simply close the web browser window, your access takes a
few minutes to time out. Attempts to reconnect result
in a User is already logged in message in the portal until that
timeout.
End of lab
sym1
/etc/grub.conf
/etc/modprobe.conf
Solution
a cp /etc/grub.conf /etc/grub.conf.preVM
b cp /etc/modprobe.conf /etc/modprobe.conf.preVM
c fdisk -l /dev/sda
Note: This lab section shows the steps for one lab system. Repeat these steps
for all systems on which SF 6.x will be installed, for example
sym2.
End of Solution
2 If you have access to the Internet, start a Web browser and navigate to the
Symantec Operations Readiness Tools (SORT) Web site at
https://sort.symantec.com. In the Download section, click the link
for SORT Data Collectors. If a Read and accept Terms of Service page
appears, click Accept. Select the link for the Linux (x86-64) operating system.
Save the SORT data collector sharball to a local directory, such as /var/
tmp, or to the Desktop.
If you do not have access to the Internet, copy the SORT data collector sharball
located in the /student/software/sf/sort directory to a local
directory, such as /var/tmp.
Solution
a cp /student/software/sf/sort/sort_linux_x64.sh \
/var/tmp
b cd /var/tmp
End of Solution
3 Decompress the SORT data collector sharball you copied to the local directory.
Note that you may need to change file permissions to execute the sharball or
run it using sh. When the Would you like to run the data
collector now? prompt is displayed, enter n.
Solution
sh ./sort_linux_x64.sh
Would you like to run the data collector now? [y,n] (y) n
End of Solution
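The step above notes that you may need to change file permissions before executing the sharball directly. A minimal sketch of that fix follows; the make_executable helper name is ours, not part of SORT.

```shell
# Make a file executable only if it is not already (an assumption: the
# sharball was copied without the execute bit set).
make_executable() {
  [ -x "$1" ] || chmod +x "$1"
}
# On the lab system:
#   make_executable /var/tmp/sort_linux_x64.sh
#   /var/tmp/sort_linux_x64.sh      # or: sh /var/tmp/sort_linux_x64.sh
```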
4 Run the SORT data collector and verify completion using the displayed text output.
a Start the SORT utility. If you need to install Storage Foundation on more
than one system, start the SORT utility to check all systems.
Note: If your system has access to the Internet and a more recent version of the
SORT utility is available than the version you are running, you are
prompted to download the latest version.
Solution
./sort/sortdc
End of Solution
Solution
Press [Return] to indicate your acceptance of the terms and
conditions as indicated in the /var/tmp/sort/advanced/terms.txt
file or q to decline: (y) y
End of Solution
Solution
Main Menu:
End of Solution
Solution
Main Menu ->Storage Foundation and HA Solutions:
g When prompted on which system to run the report, select option 2) One
or more remote systems and press Enter to continue.
Solution
Main Menu -> Installation/Upgrade report -> SFProductFamily:
h When prompted, type the names of the systems that you desire to test.
Note: SORT data collector uses the same code base as the CPI installer, so
you can specify multiple systems of the same OS and the utility
includes all specified systems in the test. A single XML file is
created that includes all systems.
Solution
Enter one or more system names separated by space,
or the full-qualified path to a file containing a
list of system names: [b,q,?] sym1 sym2
End of Solution
j After the SORT data collector checks for partial clusters and performs
some basic data collection, choose the Symantec enterprise product you
want to install or upgrade to. Select option 1)Storage Foundation
and press Enter to continue.
Solution
Choose the Symantec enterprise product you want to
install or upgrade to. If you are installing or
upgrading multiple products, run the data
collector for each one.
1) Storage Foundation
2) Storage Foundation for Oracle
3) Cluster Server
4) Storage Foundation HA
5) Storage Foundation Cluster File System
6) Storage Foundation for Cluster File
System/HA
7) Storage Foundation for Oracle RAC
8) Storage Foundation for Sybase
9) Storage Foundation for DB2
10) Storage Foundation Cluster File System for Oracle RAC
11) Storage Foundation Sybase ASE CE
b) Back to previous menu
Solution
Choose the product version you want to install or
upgrade to on the system(s) in your environment.
Storage Foundation
Solution
Analyzing systems: 100%
Solution
Your tasks are completed. Would you like to exit
the data collector? [y,n,q](y) y
End of Solution
Solution
more /var/tmp/sort/reports/[report_name].txt
End of Solution
5 If you have access to the Internet, upload the SORT .xml output file to the
SORT Web site. Otherwise, skip steps 5 and 6.
Note: Uploading the SORT .xml report to the SORT Web site requires that
there be access to the Internet from the classroom lab. If an external
connection is not available, the .xml file can be saved to a USB drive
and these steps can be performed at a later date.
Solution
b Under the SORT section, click the My SORT link. Under the Custom
Reports Using Data Collectors section, select the Upload Report tab.
Click the Choose File button and then browse to the /var/tmp/sort/
reports directory, select the SORT .xml file, and click the Open
button. Click the Upload button to continue.
End of Solution
Solution
Mark the checkbox next to Passed in the Filter View By section at the top of
the page. In the Summary for this server section, ensure that each section
displays a green icon. If any of the sections display an orange or red icon,
record the steps that need to be taken before the installation can be performed
on the following lines.
_______________________________________________________________
______________________________________________________________
End of Solution
sym1
1 Open a terminal window and navigate to the directory that contains the Storage
Foundation 6.x installer script.
Solution
cd /student/software/sf/sf61/rhel/rhel5_x86_64
End of Solution
Solution
./installer
End of Solution
d Type y to agree to the terms of the End User License Agreement (EULA).
f Type the names of your two systems when prompted. The server where the
installer script was executed is the default value.
Solution
sym1 sym2
End of Solution
System communications
Release compatibility
Installed product
Prerequisite patches and rpms
Platform version
File system free space
Product licensing
Product prechecks
If you discover any issues, report them at this time.
h Press Enter to scroll through the list of packages and start the package
installation.
i Select 2 for the Enable keyless licensing and complete system licensing
later option.
Notes:
Your lab systems are already configured with the PATH and MANPATH
environment variables.
Solution
echo $MANPATH
End of Solution
Solution
rpm -qa | grep VRTS
VRTSvlic-3.02.61.010-0
VRTSfsadv-6.1.000.000-GA_RHEL5
VRTSob-3.4.678-0
VRTSsfmh-6.0.0.0-0
VRTSspt-6.1.000.000-GA
VRTSlvmconv-6.1.000.000-GA_RHEL5
VRTSdbed-6.1.000.000-GA_RHEL
VRTSvxvm-6.1.000.000-GA_RHEL5
VRTSvxfs-6.1.000.000-GA_RHEL5
VRTSodm-6.1.000.000-GA_RHEL5
VRTSaslapm-6.1.000.000-GA_RHEL5
VRTSsfcpi60-6.1.000.000-GA_GENERIC
VRTSperl-5.16.1.6-RHEL5.5
VRTSfssdk-6.1.000.000-GA_RHEL5
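A quick sanity check on the output above is to confirm that a few key packages are present. The check_pkgs helper below is a sketch of our own; the package names are taken from the rpm listing shown in the solution.

```shell
# check_pkgs takes the output of "rpm -qa" as its single argument and
# reports whether each key SF package is installed.
check_pkgs() {
  for p in VRTSvxvm VRTSvxfs VRTSvlic VRTSperl; do
    if echo "$1" | grep -q "^$p-"; then
      echo "OK $p"
    else
      echo "MISSING $p"
    fi
  done
}
# On the lab system:
#   check_pkgs "$(rpm -qa)"
```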
5 View the log files from the installation using the location specified at the end
of the installation. The log file directory is located in
/opt/VRTS/install/logs.
Solution
cd /opt/VRTS/install/logs/
ls
cd installer-unique_string/
ls
installer-unique_string.summary
installer-unique_string.response
installer-unique_string.tunables
installer-unique_string.log#
install.package.system
start.SFprocess.system
End of Solution
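Because the log directory name contains a unique string, it can be handy to locate the most recent one without typing it by hand. This is a sketch; latest_log_dir is our own helper name, and the default path is the one shown in the solution above.

```shell
# Return the most recently modified installer-<unique_string> directory
# under the given log root (default: the path used by the SF installer).
latest_log_dir() {
  ls -dt "${1:-/opt/VRTS/install/logs}"/installer-* 2>/dev/null | head -1
}
# On the lab system:
#   cd "$(latest_log_dir)"
#   ls
```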
Solution
vxlicrep | more
-----------------***********************-----------------
Features :=
Reserved = 0
CPU Count = Not Restricted
sym1
1 Open a terminal window and navigate to the directory that contains the Storage
Foundation 6.x installer script.
Solution
cd /student/software/sf/sf61/rhel/rhel5_x86_64
End of Solution
Solution
./installer
End of Solution
d Type the names of your two systems when prompted. The server where the
installer script was executed is the default value.
Solution
sym1 sym2
End of Solution
System communications
Release compatibility
Installed product
Platform version
Product prechecks
If you discover any issues, report them at this time.
Note: Notice that the post-installation check displays a warning because the
sda, sdb, and sdc disks are not in an online state. This warning can
be ignored.
3 Perform a version check of the installed Storage Foundation 6.x systems. Start
the installer script with the -version option. Specify the sym1 and
sym2 system names.
Solution
Product:
Symantec Storage Foundation - 6.1
Summary:
Packages:
7 of 7 required Symantec Storage Foundation
6.1 packages installed
7 of 7 optional Symantec Storage Foundation
6.1 packages installed
Product:
Symantec Storage Foundation - 6.1
Summary:
Packages:
7 of 7 required Symantec Storage Foundation
6.1 packages installed
7 of 7 optional Symantec Storage Foundation
6.1 packages installed
Note: If required, perform steps 3c and 3d; otherwise, skip them and
move to the next step.
Note: The installer script attempts to contact the SORT Web site to
check for product updates.
sym1
Note: Check with your instructor to see if you have more time to complete the
optional lab exercises. You do not need to perform the optional lab
exercises unless you have extra time. The optional exercises do not have
any impact on further labs.
The VEA GUI client package has been removed from the Storage Foundation
installed packages, although the object bus (VRTSob) is still present. This section
covers how to enable the server and install the VEA GUI client if desired.
Solution
chkconfig --list |grep isisd
isisd 0:off 1:off 2:off 3:on
Note: The chkconfig command is used to list and maintain the
/etc/rc[0-6].d directories.
End of Solution
Solution
ps -ef | grep vxsvc
/opt/VRTS/bin/vxsvcctrl start
/opt/VRTS/bin/vxsvcctrl status
End of Solution
4 Attempt to start the VEA GUI using the vea command. Observe the message
displayed. Press Enter.
Solution
vea &
VEA GUI is no longer packaged. Symantec recommends that you use VOM to
manage, monitor, and report on multi-host environments. You can download
this utility at no charge from http://go.symantec.com/vom. If you
wish to continue using VEA GUI, you can download it from the same Web
site.
End of Solution
Solution
cd /student/software/sf/vea_gui
End of Solution
Solution
rpm -ivh VRTSobgui-3.4.15.0-0.i686.rpm
End of Solution
Solution
rm /opt/VRTS/bin/vea
ln -s /opt/VRTSob/bin/vea /opt/VRTS/bin/vea
End of Solution
8 Verify that you can start the VEA GUI and connect to the local host.
Solution
/opt/VRTS/bin/vea &
End of Solution
9 In the Select Profile window, click the Manage Profiles button and configure
VEA to always start with the Default profile.
Solution
Set the Start VEA using profile option to Default, click Close, and then
click OK to continue.
End of Solution
10 Click the Connect to a Host or Domain link and connect to your system as
root.
Solution
Hostname: (For example, sym1)
Username: root
Password: train
End of Solution
11 On the left pane (object tree) view, navigate the system and observe the various
categories of VxVM objects.
12 Select the Assistant perspective on the quick access bar and view tasks for
systemname.
CONFIDENTIAL - NOT FOR DISTRIBUTION
269 Lab 2: Installing SF and Accessing SF Interfaces Copyright 2014 Symantec Corporation. All rights reserved.
A59
13 Using the System perspective, determine which disks are available to the OS.
Solution
In the System perspective object tree, expand your host and then select the
Disks node. Examine the Device column in the grid.
End of Solution
14 Execute the Disk Scan command and check if any messages are displayed on
the Console view.
Solution
In the VEA System perspective object tree, select your host. Select Actions >
Rescan.
End of Solution
Solution
Navigate to the Log perspective. Select the Task Log tab in the right pane and
double-click the Scan for new disks task.
End of Solution
Solution
In the VEA main window, select File > Exit. Confirm when prompted.
End of Solution
17 Create a root-equivalent administrative account named admin1 for use with VEA.
Solution
useradd admin1
passwd admin1
c Modify the /etc/group file to add the vrtsadm group and specify the
root and admin1 users by using the vi editor, as follows:
vi /etc/group
d In the file, navigate to the location where you want to insert the vrtsadm
entry, change to insert mode by typing o, and then add the line:
vrtsadm::99:root,admin1
e When you are finished editing, press Esc to leave insert mode.
:wq
End of Solution
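The vi edit above can also be done non-interactively. This is a sketch of our own (add_vrtsadm is a hypothetical helper name); the group entry itself, vrtsadm::99:root,admin1, is taken verbatim from the step above, and the file path is parameterized so the logic can be tried against a copy before touching /etc/group.

```shell
# Append the vrtsadm group entry to the given group file (default
# /etc/group) unless a vrtsadm entry already exists.
add_vrtsadm() {
  f=${1:-/etc/group}
  grep -q '^vrtsadm:' "$f" || echo 'vrtsadm::99:root,admin1' >> "$f"
}
# On the lab system (as root):
#   add_vrtsadm /etc/group
```

The grep guard makes the helper idempotent: running it twice does not add a duplicate entry.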
18 Test the new account. After you have tested the new account, exit VEA.
Solution
vea &
Hostname: sym1
c Select the Connect using a different user account option and click
Connect.
d Type the username and password for the new user, as follows:
User: admin1
Password: (Type the password that you created for admin1.)
End of Solution
sym1
Note: Check with your instructor to see if you have more time to complete the
optional lab exercises. You do not need to perform the optional lab
exercises unless you have extra time. The optional exercises do not have
any impact on further labs.
1 From the command line, invoke the text-based VxVM menu interface using
the vxdiskadm command.
Solution
vxdiskadm
End of Solution
Solution
Type ? at any of the prompts within the interface.
End of Solution
Solution
End of Solution
Solution
Type q at the prompts until you exit vxdiskadm.
End of Solution
sym1
Note: Check with your instructor to see if you have more time to complete the
optional lab exercises. You do not need to perform the optional lab
exercises unless you have extra time. The optional exercises do not have
any impact on further labs.
1 From the command line, invoke the VxVM manual pages as follows and then
read about the vxassist command.
Solution
man vxassist
End of Solution
Solution
3 From the command line, invoke the VxVM manual pages to read about the
vxdisk command.
Solution
man vxdisk
End of Solution
Solution
vxdisk -o alldgs list
All the available disks are displayed in the list.
End of Solution
5 From the command line, invoke the VxVM manual pages to read about the
vxdg command.
Solution
man vxdg
End of Solution
Solution
vxdg list
Note: Because you have not created any disk groups yet, the command output
shows only the header statement at this stage in the labs.
End of Solution
7 From the command line, invoke the VxVM manual pages to read about the
vxprint command.
Solution
man vxprint
End of Solution
Solution
ps -ef | grep -i vx
This section requires an additional system that has SF 6.x pre-installed but is not
already configured in the VOM MS. For this lab section, use the sym1 and sym2
virtual machines.
sym1
1 Open a terminal window and use the ps -ef command to determine if the
vxsvc daemon is running. If it is not, use vxsvcctrl activate followed
by vxsvcctrl start to start the daemon.
Note: You are enabling the vxsvc service now for ease of use later in the lab.
Solution
b /opt/VRTS/bin/vxsvcctrl activate
c /opt/VRTS/bin/vxsvcctrl start
End of Solution
2 Verify that the service (isisd) is online (enabled) on the system to be added
as a managed host on the MS server.
Solution
chkconfig --list | grep isisd
isisd 0:off 1:off 2:off 3:on 4:off 5:on 6:off
End of Solution
sym2
3 Open a terminal window and use the ps -ef command to determine if the
vxsvc daemon is running. If it is not, use vxsvcctrl activate followed
by vxsvcctrl start to start the daemon.
Note: You are enabling the vxsvc service now for ease of use later in the lab.
Solution
b /opt/VRTS/bin/vxsvcctrl activate
c /opt/VRTS/bin/vxsvcctrl start
End of Solution
4 Verify that the service (isisd) is online (enabled) on the system to be added
as a managed host on the MS server.
Solution
End of Solution
Solution
End of Solution
Note: If required, manually change the resolution of the web page using the
IE settings option.
9 In the Host Name field, type the fully qualified host name of the first system to
be added as a managed host (sym1.example.com).
10 In the User Name field, type root, and in the Password field type the root
password.
13 Click Finish to have the VOM server add the hosts, and then click OK.
Note: The VOM MS server and the managed host must be time synchronized.
Check the system times using the date command to ensure that the
time difference between the two systems is not greater than five
minutes.
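The five-minute comparison in the note above can be sketched as a small helper. check_skew is our own name; on the lab systems, the second timestamp could come from something like ssh to the MS host (the exact host name and ssh setup are assumptions, not part of the lab).

```shell
# check_skew compares two epoch timestamps (seconds) and reports whether
# the absolute difference is within the five-minute (300 s) limit.
check_skew() {   # $1 = local epoch, $2 = remote epoch
  skew=$(( $1 - $2 ))
  [ "$skew" -lt 0 ] && skew=$(( -skew ))
  if [ "$skew" -le 300 ]; then
    echo "OK: skew ${skew}s"
  else
    echo "WARN: skew ${skew}s exceeds 5 minutes"
  fi
}
# On the lab system, for example (remote host name is an assumption):
#   check_skew "$(date +%s)" "$(ssh mgt date +%s)"
```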
14 Verify that your hosts have been added. You can do this by going to the Server
perspective and viewing the Hosts tab. (Click on Home > Server> Hosts.)
End of lab
CAUTION In this lab, do not include the boot disk in any of the tasks.
Prerequisite setup
To perform this lab, you need a lab system with Storage Foundation pre-installed,
configured, and licensed. In addition, you also need four empty and unused
external disks to be used during the labs.
Note: If there are multiple paths to each disk, the fdisk -l output shows a
higher number of devices than are actually present.
Lab information
In preparation for this lab, you need the following information about your lab
environment.
Object Value
root password train
Host names of lab systems winclient
sym1
Shared data disks: emc0_dd7 - emc0_dd12
3pardata0_49 - 3pardata0_60
sym1
1 View all the disk devices on the system. What is the status of the disks
assigned to you for the labs?
Solution
vxdisk -o alldgs list
End of Solution
2 Choose a disk (emc0_dd7) and initialize it, if necessary, using the CLI. Using
the vxdisk -o alldgs list command observe the change in the Status
column. What is the status of the disk now?
CAUTION Do not initialize the sda device. This is the system boot disk.
Do not initialize the sdb device. This disk has Oracle binaries.
Solution
vxdisksetup -i emc0_dd7
vxdisk -o alldgs list
The TYPE field should change to auto:cdsdisk and the STATUS of the
disk should change to online but the DISK and GROUP columns should still
be empty.
End of Solution
Solution
vxdg init appdg appdg01=emc0_dd7
vxdisk -o alldgs list
The TYPE and STATUS of the disk are the same but the DISK and GROUP
columns now show the new disk media name and the disk group name
respectively.
End of Solution
Solution
vxassist -g appdg make appvol 1g
End of Solution
5 Create a Veritas file system on the appvol volume, mount the file system to the
/app directory. Create the directory if it does not exist.
Solution
mkfs -t vxfs /dev/vx/rdsk/appdg/appvol
mkdir /app
mount -t vxfs /dev/vx/dsk/appdg/appvol /app
End of Solution
Solution
a Edit the fstab file: vi /etc/fstab
b Add an entry for the /app file system similar to the following (the
device and mount point come from the previous steps):
/dev/vx/dsk/appdg/appvol /app vxfs defaults 0 2
c Save and close the file:
:wq
7 Unmount the /app file system, verify the unmount, and remount using the
mount -a command to mount all file systems in the file system table.
Solution
umount /app
/bin/mount | grep app
/bin/mount -a
/bin/mount | grep app
End of Solution
8 Identify the amount of free space in the appdg disk group. Try to create a
volume in this disk group named largevol with a size slightly larger than the
available free space, for example 2g on standard Symantec classroom systems.
What happens?
Solution
vxdg -g appdg free
The free space is displayed in sectors in the LENGTH column.
Note: You can use the vxdg -g appdg -u H free command to display the
free space in the appdg disk group.
End of Solution
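Because the LENGTH column is reported in 512-byte sectors, a quick conversion to megabytes can help when comparing against the volume size you plan to request. This is a sketch; the 4186112-sector figure is an example value of our own, not taken from the lab output.

```shell
# Convert a sector count (512-byte sectors) to whole megabytes.
sectors_to_mb() { echo $(( $1 * 512 / 1024 / 1024 )); }
sectors_to_mb 4186112    # example value -> 2044 (MB)
```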
9 Choose a second disk (emc0_dd8), initialize it, if necessary, and add it to the
appdg disk group. Observe the change in free space.
Solution
vxdisksetup -i emc0_dd8
vxdg -g appdg adddisk appdg02=emc0_dd8
vxdg -g appdg free
End of Solution
10 Create the same volume, largevol, in the appdg disk group using the same size
as in step 8.
Solution
vxassist -g appdg make largevol 2g
Note: The 2g volume size is used as an example here. You may need to use a
value more suitable to your lab environment if you are not working in a
standard Symantec classroom.
End of Solution
Solution
vxprint -g appdg -htr
End of Solution
Solution
vxdg list
If you have followed the labs so far, you should have one disk group listed:
appdg.
End of Solution
13 Display disk property information for each disk in the appdg disk group using
the vxdisk -p list command. From the output record the following
information:
End of Solution
14 Display the OS native names for all the disks using the vxdisk -e list
command.
Solution
vxdisk -e list
End of Solution
sym1
1 Unmount the /app file system and remove it from the file system table.
Solution
umount /app
vi /etc/fstab
Navigate to the line with the entry corresponding to the /app file system and
type dd to delete the line.
Type :wq to save and close the file.
End of Solution
2 Remove the largevol volume in the appdg disk group. Observe the disk group
configuration information using the vxprint -g appdg -htr command.
Solution
vxassist -g appdg remove volume largevol
vxprint -g appdg -htr
There should be only the appvol volume, and the second disk, appdg02, should
be unused.
End of Solution
3 Remove the second disk (appdg02) from the appdg disk group. Observe the
change in its status.
Solution
vxdg -g appdg rmdisk appdg02
vxdisk -o alldgs list
Note that the disk is still in online state; it is initialized.
End of Solution
sym1
1 Mount the appvol volume to the /app directory. Do not add the entry to the
file system table.
Solution
mount -t vxfs /dev/vx/dsk/appdg/appvol /app
End of Solution
2 Copy some data files into the /app file system. For this test, use the files
located in the /etc/default directory. List the contents of the /app
directory after the copy has completed.
Solution
cp /etc/default/* /app
ls -al /app
End of Solution
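For the destroy-and-recreate test that follows, checksumming the copied files gives a stronger data-survival check than simply listing the directory. This is an optional sketch of our own; snapshot_sums and verify_sums are hypothetical helper names, and the paths are parameterized so the idea can be tried anywhere.

```shell
# Record checksums of every file in a directory, and later verify them.
snapshot_sums() { md5sum "$1"/* > "$2"; }              # $1=dir $2=sum file
verify_sums()   { md5sum --quiet -c "$2" && echo "data intact in $1"; }
# On the lab system:
#   snapshot_sums /app /tmp/app.md5     # before vxdg destroy
#   verify_sums   /app /tmp/app.md5     # after the disk group is recreated
#                                       # and /app is remounted
```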
Solution
umount /app
/bin/mount | grep app
End of Solution
Solution
vxdg destroy appdg
End of Solution
Solution
vxdisk -o alldgs list
End of Solution
6 To prove that the data files have not been destroyed, recreate the appdg disk
group and the appvol volume using the exact same steps as used in Exercise 1 -
steps 3-4. DO NOT create a new file system on the appvol volume.
Solution
vxdg init appdg appdg01=emc0_dd7
vxdisk -o alldgs list
vxassist -g appdg make appvol 1g
End of Solution
7 Mount the appvol volume to the /app directory and list the contents of the
directory. The data files should still exist.
Solution
mount -t vxfs /dev/vx/dsk/appdg/appvol /app
ls -al /app
End of Solution
Solution
umount /app
Solution
vxdg destroy appdg
End of Solution
Solution
vxdisksetup -i emc0_dd7
End of Solution
12 Recreate the appdg disk group and the appvol volume using the exact same
steps as used in Exercise 1 - steps 3-4. DO NOT create a new file system on the
appvol volume.
Solution
vxdg init appdg appdg01=emc0_dd7
vxdisk -o alldgs list
vxassist -g appdg make appvol 1g
End of Solution
13 Attempt to mount the appvol volume to the /app directory. What happened?
Solution
mount -t vxfs /dev/vx/dsk/appdg/appvol /app
The volume will not mount because the file system information was shredded.
End of Solution
Solution
vxdg destroy appdg
End of Solution
winclient
Note: Check with your instructor to see if you have more time to complete the
optional lab exercises. You do not need to perform the optional lab
exercises unless you have extra time. The optional exercises do not have
any impact on further labs.
1 If you are already logged on to VOM, proceed to Step 2. Otherwise, start the
VOM GUI in the IE browser and log on as the root user.
Solution
a https://mgt.example.com:14161
b A warning message may appear advising that the site has an invalid security
certificate.
End of Solution
2 View all the disk devices on the system. What is the status of the disks
assigned to you for the labs?
Solution
a Click Home on the upper right corner and then navigate to the Server
perspective. On the navigation tree, expand Data Center > Uncategorized
Hosts. Click the sym1.example.com host link, and choose the Disks
tab.
Solution
c Ensure that CDS format is selected. Click OK. Click OK again after the
message displays successful completion of the operation.
4 Create a new disk group using the disk you initialized in the previous step.
Name the new disk group appdg. Observe the change in the disk status.
Solution
c In the Create Disk Group screen, type the name of the disk group
(appdg).
d In the Change internal disk name screen, select the Custom Name
option. In the New Name field, type appdg01 as the disk media name
and click Next.
e On the summary page verify the details and click Finish. Click OK.
5 Using VOM, create a new 1g volume in the appdg disk group. Name the new
volume appvol. Create a file system on it and make sure that the file system is
mounted at boot time to /app directory.
Solution
a On the navigation tree, expand Data Center > Uncategorized Hosts >
sym1.example.com. Click Disk Groups.
c Let the Symantec Storage Foundation (VxVM) decide what disks to use
from the disk group and click Next to continue.
e On the File System Options screen, select Create file system to create a
VxFS file system. Select Mount options and enter the mount point /app.
Ensure that Add to file system table is checked. Click Next.
End of Solution
6 Check if the file system is mounted and verify that there is an entry for this file
system in the file system table.
Solution
a On the navigation tree, expand Data Center > Uncategorized Hosts >
sym1.example.com > Disk Groups. Select the appdg disk group. The
disk group details are displayed.
b Click the Disks tab to view the details of disks in the disk group.
7 Go back to the Disks tab and view the properties of the disk in the appdg disk
group and note the Total Size and the Free Size fields.
8 Try to create a second volume, largevol, in the appdg disk group and specify a
size slightly larger than the unallocated space on the existing disk in the disk
group, for example 2g in the standard Symantec classroom systems. Do not
create a file system on the volume. What happens?
Solution
a On the navigation tree, expand Data Center > Uncategorized Hosts >
sym1.example.com. Click Disk Groups.
c Let the Symantec Storage Foundation (VxVM) decide what disks to use
from the disk group and click Next to continue.
Solution
a On the navigation tree, expand Data Center > Uncategorized Hosts >
sym1.example.com. Click Disk Groups.
c Choose the emc0_dd8 disk and click Next. You may need to refresh the
page if the disks don't display promptly.
d In the Change internal disk name screen, select the Custom Name
option. In the New Name field, type appdg02 as the disk media name
and click Next.
e On the summary page verify the details and click Finish. Click OK.
End of Solution
10 Create the same volume, largevol, in the appdg disk group using the same size
as in step 8. Do not create a file system.
Solution
a On the navigation tree, expand Data Center > Uncategorized Hosts >
sym1.example.com. Click Disk Groups.
c Let the Symantec Storage Foundation (VxVM) decide what disks to use
from the disk group and click Next to continue.
e On the File System Options screen, select Do not create a file system.
Click Next.
CONFIDENTIAL - NOT FOR DISTRIBUTION
296 A86 Symantec Storage Foundation 6.x for UNIX: Administration Fundamentals
f On the summary page verify the details and click Finish. Click OK.
11 Observe the volumes displayed in the Volumes in Disk Group table. Can you
tell which volume has a mounted file system?
Solution
Double-click on the appdg disk group. View the details of the volumes in the
disk group from the Volumes tab. You should notice that the FS Type and
Mount Point columns have file system information for appvol and not for
largevol.
End of Solution
Solution
a On the navigation tree, expand Data Center > Uncategorized Hosts >
sym1.example.com > Disk Groups.
b Select the appdg disk group and click the Volumes tab.
c Right-click the largevol volume and select File System > Create File
System.
d Verify the disk group and volume names and that the file system type is
vxfs. Click Next.
e Enter the mount point as /large. Ensure that the Add to file system
table option is not selected, and click Next.
g View the details of the volumes in the appdg disk group from the Volumes
tab. You should notice that the FS Type and Mount Point columns have file
system information now for largevol with a mount point of /large.
The /large file system should show as mounted but there should be no
change in the file system table.
h You can also use the command line on sym1 to verify the changes as
follows:
mount
cat /etc/fstab
End of Solution
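The same check can be scripted rather than read by eye. A minimal sketch (not part of the lab) that tests whether a given mount point appears in the kernel mount table; it is demonstrated here with /, which is always mounted, but on the lab system you would pass /large:

```shell
# Sketch: check whether a mount point appears in the kernel mount table.
# The lab verifies /large; we demonstrate with /, which always exists.
mp=/
if awk -v m="$mp" '$2 == m' /proc/mounts | grep -q .; then
  echo "mounted"
else
  echo "not mounted"
fi
```

This reads /proc/mounts directly, which reflects the kernel's current view even if /etc/mtab is stale.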
winclient
Note: Check with your instructor to see if you have more time to complete the
optional lab exercises. You do not need to perform the optional lab
exercises unless you have extra time. The optional exercises do not have
any impact on further labs.
1 Unmount both the /app and /large file systems using VOM. When prompted,
accept removing the file systems from the file system table. Check that the
file systems are unmounted and verify that any corresponding entries have
been removed from the file system table.
Solution
a On the navigation tree, expand Data Center > Uncategorized Hosts >
sym1.example.com > Disk Groups.
b Select the appdg disk group and click the Volumes tab.
c Right-click the appvol volume and select File System > Unmount.
End of Solution
Solution
End of Solution
3 View the Disks tab for appdg disk group. Can you identify which disk is
empty?
Solution
Click on the Disks tab of appdg disk group. Viewing the properties should
show that the second disk in the disk group emc0_dd8 (appdg02) is empty.
End of Solution
4 Remove the disk you identified as empty from the appdg disk group.
Solution
Right-click on the empty disk emc0_dd8 and select the Remove From Disk
Group option. Click OK, and OK.
End of Solution
5 Observe all the disks on the system. What is the status of the disk you removed
from the disk group?
Solution
On the navigation tree, expand Data Center > Uncategorized Hosts. Select
sym1.example.com and click on the Disks tab to view all disks.
The disk removed in step 4 should be in Free (Initialized) state.
End of Solution
Solution
a On the navigation tree, expand Data Center > Uncategorized Hosts >
sym1.example.com > Disk Groups.
End of Solution
7 Verify that the appdg disk group is no longer present.
Solution
On the navigation tree, expand Data Center > Uncategorized Hosts >
sym1.example.com > Disk Groups. The appdg disk group should not be
displayed.
End of Solution
End of lab
Prerequisite setup
To perform this lab, you need a lab system with Storage Foundation pre-installed,
configured and licensed. In addition to this, you also need four empty and unused
external disks to be used during the labs.
Lab information
In preparation for this lab, you need the following information about your lab
environment.
Object Value
root password train
Host names of lab systems sym1
Shared data disks: emc0_dd7 - emc0_dd12
3pardata0_49 - 3pardata0_60
sym1
a If you have completed the Creating a Volume and File System lab
(lab 4), you should already have some initialized disks. You will need four
disks for this lab. If four disks are not initialized, then initialize the needed
disks from the same enclosure for use in Volume Manager (all disks on the
EMC array).
Solution
vxdisksetup -i emc0_dd9
vxdisksetup -i emc0_dd10
Perform the above command for any disks that have not been initialized for
Volume Manager use and that will be used in this lab.
End of Solution
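When several disks need the same treatment, a loop keeps the commands consistent. A sketch (the device names are the lab's EMC devices; the commands are only echoed here so they can be reviewed before being run, or piped to sh, on a real Storage Foundation host):

```shell
# Sketch: generate a vxdisksetup command per disk for review.
# Echoing first is a safe dry run; pipe the output to sh to execute.
for d in emc0_dd9 emc0_dd10; do
  echo "vxdisksetup -i $d"
done
```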
Solution
vxdg init appdg appdg01=emc0_dd7 \
appdg02=emc0_dd8 appdg03=emc0_dd9 \
appdg04=emc0_dd10
Alternatively, you can also create the disk group using a single disk device
and then add each additional disk as follows:
vxdg init appdg appdg01=emc0_dd7
vxdg -g appdg adddisk appdg02=emc0_dd8
vxdg -g appdg adddisk appdg03=emc0_dd9
vxdg -g appdg adddisk appdg04=emc0_dd10
Solution
vxassist -g appdg make appvol 50m
End of Solution
3 Display the volume layout. What names have been assigned to the plex and
subdisks?
Solution
To view the assigned names, view the volume using:
vxprint -g appdg -htr | more
End of Solution
Solution
vxassist -g appdg remove volume appvol
End of Solution
5 Create a 50-MB striped volume on two disks in appdg and specify which two
disks to use in creating the volume. Name the volume stripevol.
Solution
vxassist -g appdg make stripevol 50m layout=stripe \
appdg01 appdg02
End of Solution
Solution
vxassist -g appdg make strmirvol 20m \
layout=mirror-stripe ncol=2 stripeunit=256k
End of Solution
7 Create a 20-MB, two-column striped volume with a mirror. Set the stripe unit
size to 128K. Select at least one disk that you should not use. Name the volume
2colstrvol.
Solution
vxassist -g appdg make 2colstrvol 20m \
layout=mirror-stripe ncol=2 stripeunit=128k \
\!appdg03
Note: As you are using bash as your shell environment, you must use the
escape character before the exclamation mark; for example
\!appdg03.
End of Solution
Solution
vxassist -g appdg make 3colstrvol 20m \
layout=mirror-stripe ncol=3 appdg01 appdg02 \
appdg03
End of Solution
9 Create the same volume specified in the previous step using the same three
disks, but without the mirror. However, this time first determine what the
maximum size that the volume can be based on the remaining free space.
Create the volume with the maximum possible size for this layout.
Solution
vxassist -g appdg maxsize layout=stripe ncol=3 \
appdg01 appdg02 appdg03
Maximum volume size: 12128256 (5922Mb)
End of Solution
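The maxsize figure is reported in 512-byte sectors; dividing by 2048 converts it to megabytes, which matches the (5922Mb) shown in the sample output:

```shell
# Convert a vxassist maxsize value (512-byte sectors) to megabytes.
sectors=12128256   # value from the sample output above
mb=$((sectors / 2048))
echo "${mb} MB"    # 5922 MB
```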
Solution
vxassist -g appdg make 3colstrvol maxsize \
layout=stripe ncol=3 appdg01 appdg02 appdg03
End of Solution
Solution
vxassist -g appdg remove volume stripevol
vxassist -g appdg remove volume strmirvol
vxassist -g appdg remove volume 3colstrvol
End of Solution
Note: Only perform the remaining step if you do not intend to complete the
optional exercise. Otherwise, skip the next step.
Solution
vxdg destroy appdg
End of Solution
Note: Check with your instructor to see if you have more time to complete the
optional lab exercises. You do not need to perform the optional lab
exercises unless you have extra time. The optional exercises do not have
any impact on further labs.
sym1
a Using the vi editor, create a file called /etc/default/vxassist that
includes the following:
nmirror=3
Solution
vi /etc/default/vxassist
# When mirroring create three mirrors
nmirror=3
End of Solution
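The same defaults file can be created non-interactively with a here-document instead of vi. A sketch (it writes to a temporary path rather than /etc/default so it is safe to run anywhere; on the lab system you would target /etc/default/vxassist):

```shell
# Sketch: create a vxassist defaults file non-interactively.
# /tmp path used for a safe demo; the real file lives in /etc/default.
cat > /tmp/vxassist.demo <<'EOF'
# When mirroring create three mirrors
nmirror=3
EOF
grep -c '^nmirror=3' /tmp/vxassist.demo
```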
b Using the vi editor, create a file called /etc/default/alt_vxassist that
includes the following:
Solution
vxassist -g appdg make mirrorvol 100m \
layout=mirror
End of Solution
Solution
vxassist -g appdg -d /etc/default/alt_vxassist \
make 2colstrvol 100m layout=stripe
End of Solution
4 View the layout of these volumes using vxprint -g appdg -htr. What
do you notice?
Solution
The first volume should show three plexes rather than the standard two.
End of Solution
5 Remove any vxassist default files that you created in this optional lab
section. The presence of these files can impact subsequent labs where default
behavior is assumed.
Solution
rm -f /etc/default/vxassist
rm -f /etc/default/alt_vxassist
End of Solution
Solution
vxassist -g appdg remove volume mirrorvol
vxassist -g appdg remove volume 2colstrvol
End of Solution
End of lab
Prerequisite setup
To perform this lab, you need two lab systems with Storage Foundation pre-
installed, configured and licensed. In addition to this, you also need four external
shared disks to be used during the labs.
At the beginning of this lab, you should have a disk group called appdg that has
four external disks and no volumes in it.
sym1
Note: In order to perform the tasks in this exercise, you should have at least four
disks in the disk group that you are using.
1 Ensure that you have a disk group called appdg with four disks in it. If not,
create the disk group using four disks.
Note: If you have completed the previous lab steps you should already have
the appdg disk group with four disks and no volumes.
Solution
vxdg init appdg appdg01=emc0_dd7 \
appdg02=emc0_dd8 appdg03=emc0_dd9 \
appdg04=emc0_dd10
Solution
vxassist -g appdg make appvol 50m layout=stripe \
ncol=2
End of Solution
3 Display the volume layout. How are the disks allocated in the volume? Note
the disk devices used for the volume.
Solution
vxprint -g appdg -htr
End of Solution
4 Add a mirror to appvol, and display the volume layout. What is the layout of
the second plex? Which disks are used for the second plex?
Solution
vxassist -g appdg mirror appvol
vxprint -g appdg -htr
Note the disk devices used for the second plex. Note that the default layout
used for the second plex is the same as the first plex.
End of Solution
5 Add a dirty region log to appvol and specify the disk to use for the DRL.
Display the volume layout.
Solution
vxassist -g appdg addlog appvol logtype=drl \
appdg01
vxprint -g appdg -htr
End of Solution
6 Add a second dirty region log to appvol and specify another disk to use for the
DRL. Display the volume layout.
Solution
vxassist -g appdg addlog appvol logtype=drl \
appdg02
vxprint -g appdg -htr
End of Solution
7 Remove the first dirty region log that you added to the volume. Display the
volume layout. Can you control which log was removed?
Solution
vxassist -g appdg remove log appvol \!appdg01
vxprint -g appdg -htr
End of Solution
8 Find out what the current volume read policy for appvol is. Change the volume
read policy to round robin, and display the volume layout.
Solution
vxprint -g appdg -htr
You should observe that the read policy shows as SELECT, which is the value
used for layout-based read policy selection.
vxvol -g appdg rdpol round appvol
vxprint -g appdg -htr
End of Solution
9 Remove the original mirror (appvol-01) from appvol, and display the volume
layout.
Solution
vxassist -g appdg remove mirror appvol \
\!disk_used_by_original_mirror
vxprint -g appdg -htr
Note: As you are using bash as your shell environment, you must use the
escape character before the exclamation mark; for example
\!appdg01. The appdg01 disk was used by the original plex.
Note that the DRL log will also be removed automatically with this command
because the volume is no longer mirrored.
End of Solution
Solution
vxassist -g appdg remove volume appvol
End of Solution
sym1
2 View the layout of the volume and display the size of the file system.
Solution
vxprint -g appdg -htr
df -k /app
End of Solution
3 Add data to the volume by creating a file in the file system and verify that the
file has been added.
Solution
Solution
vxresize -g appdg appvol 100m
vxprint -g appdg -htr
df -k /app
End of Solution
sym1
2 Add data to the webvol volume by copying the /etc/group file to the
/web file system. Verify that the file has been added.
Solution
cp /etc/group /web
ls -l /web
End of Solution
3 Try to deport and rename the appdg disk group to webdg while the /app and
/web file systems are still mounted. Can you do it?
Solution
vxdg -n webdg deport appdg
You receive an error message indicating that the volumes in the disk group are
in use.
End of Solution
Solution
ls -lR /dev/vx/rdsk
This directory contains a subdirectory for each imported disk group, which
contains the character devices for the volumes in that disk group.
ls -lR /dev/vx/dsk
This directory contains a subdirectory for each imported disk group, which
contains the block devices for the volumes in that disk group.
End of Solution
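The block/character distinction can also be verified with the shell's -b and -c file tests. A sketch using a generic device node (on a Storage Foundation host you would point it at the /dev/vx/dsk and /dev/vx/rdsk paths above):

```shell
# Sketch: classify a device node as block or character.
devtype() {
  if [ -b "$1" ]; then echo "block"
  elif [ -c "$1" ]; then echo "character"
  else echo "not a device"
  fi
}
devtype /dev/null   # character
```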
5 Unmount all the mounted file systems in the appdg disk group.
Solution
umount /app
umount /web
End of Solution
6 Deport and rename the appdg disk group to webdg. Then import the newly
renamed webdg disk group.
Solution
vxdg -n webdg deport appdg
vxdg import webdg
End of Solution
Solution
ls -lR /dev/vx/rdsk
ls -lR /dev/vx/dsk
8 Observe the disk media names. Is there any change?
Solution
vxdisk -o alldgs list
vxprint -g webdg -htr
End of Solution
9 Mount the /app and /web file systems, and observe their contents.
Solution
mount -t vxfs /dev/vx/dsk/webdg/appvol /app
mount -t vxfs /dev/vx/dsk/webdg/webvol /web
ls -l /app
ls -l /web
End of Solution
sym1
1 Copy new data to the /app and /web file systems. For example, copy the file
/etc/group to /app and the file /etc/hosts to /web.
Solution
cp /etc/group /app
cp /etc/hosts /web
End of Solution
Solution
vxdisk -o alldgs list
End of Solution
3 Unmount all file systems in the webdg disk group and deport the disk group.
Do not assign it to a new host. View all the disk devices on the system.
Solution
umount /app
umount /web
vxdg deport webdg
vxdisk -o alldgs list
End of Solution
Solution
vxdg import webdg
vxprint -g webdg -htr
vxdisk -o alldgs list
End of Solution
5 Mount the /app and /web file systems. Note that you will need to create the
mount directories on the other system before mounting the file systems.
Observe the data in the file systems.
Solution
mkdir /app
mkdir /web
mount -t vxfs /dev/vx/dsk/webdg/appvol /app
mount -t vxfs /dev/vx/dsk/webdg/webvol /web
ls -l /app
ls -l /web
Solution
umount /app
umount /web
End of Solution
Solution
vxdg -h sym1 deport webdg
End of Solution
sym1
8 Import the disk group and change its name back to appdg. View all the disk
devices on the system.
Note: Because the hostname of the sym1 system is assigned to the disk group
during the deport operation, the disk group can be automatically
imported if you execute the vxdctl enable command on your
system.
Solution
vxdg -n appdg import webdg
vxdisk -o alldgs list
End of Solution
9 Deport the disk group appdg by assigning the ownership to a system called
anotherhost. View all the disk devices on the system. Why would you do this?
Solution
vxdg -h anotherhost deport appdg
You would do this to ensure that the disks are not imported accidentally by any
system other than the one whose name you assigned to the disks.
End of Solution
Solution
vxdg import appdg
The import fails because the disk group is locked by another host
(anotherhost).
End of Solution
12 Now import appdg and overwrite the disk group lock. What did you have to do
to import it and why?
Solution
vxdg -C import appdg
You had to forcefully clear the host lock using the -C option because the disks
in the disk group were locked to anotherhost.
End of Solution
13 Display detailed information about the same disk in the disk group as you did
in step 10. Note the change in the hostid field in the output.
Solution
vxdisk list emc0_dd7
End of Solution
Solution
vxassist -g appdg remove volume appvol
vxassist -g appdg remove volume webvol
End of Solution
sym1
Note: Check with your instructor to see if you have more time to complete the
optional lab exercises. You do not need to perform the optional lab
exercises unless you have extra time. The optional exercises do not have
any impact on further labs.
1 Create a 50-MB concatenated volume named appvol in the appdg disk group.
Solution
vxassist -g appdg make appvol 50m
End of Solution
2 Create a Veritas file system on the volume by using the mkfs command.
Specify the file system size as 40 MB.
Solution
mkfs -t vxfs /dev/vx/rdsk/appdg/appvol 40m
End of Solution
3 Create a mount point /app on which to mount the file system, if it does not
already exist.
Solution
mkdir /app (if necessary)
End of Solution
Solution
mount -t vxfs /dev/vx/dsk/appdg/appvol /app
End of Solution
5 Verify disk space using the df command. Observe that the total size of the file
system is smaller than the size of the volume.
Solution
df -k
End of Solution
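To compare the file system size with the volume size programmatically rather than reading the df output by eye, the kilobyte total can be extracted with awk. A sketch, demonstrated on / since /app exists only on the lab system:

```shell
# Sketch: extract the total size in KB of the file system at a mount point.
# -P forces POSIX single-line output so awk sees consistent columns.
fs_kb=$(df -Pk / | awk 'NR==2 {print $2}')
echo "$fs_kb"
```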
6 Expand the file system to the full size of the underlying volume using the
fsadm -b newsize command.
Note: On Linux there is more than one fsadm command; you must use the
command located in /opt/VRTS/bin.
Solution
/opt/VRTS/bin/fsadm -b 50m -r \
/dev/vx/rdsk/appdg/appvol /app
End of Solution
Solution
df -k
End of Solution
Solution
dd if=/dev/zero of=/app/25_mb bs=1024k count=25
End of Solution
9 Shrink the file system to 50 percent of its current size. What happens?
Solution
/opt/VRTS/bin/fsadm -b 25m -r \
/dev/vx/rdsk/appdg/appvol /app
The command fails. You cannot shrink the file system because blocks are
currently in use.
End of Solution
10 Unmount the /app file system and remove the appvol volume in appdg.
Solution
umount /app
vxassist -g appdg remove volume appvol
End of Solution
End of lab
Prerequisite setup
To perform this lab, you need a lab system with Storage Foundation pre-installed,
configured and licensed. In addition to this, you also need four external disks and
the third (sdc) internal disk to be used during the labs. If you do not have a third
(sdc) internal disk or if you cannot use the third (sdc) internal disk, you need five
external disks to complete the labs.
At the beginning of this lab, you should have a disk group called appdg that has
four external disks and no volumes in it. The third (sdc) internal disk should be
empty and unused.
sym1
1 Identify the device name for the third (sdc) internal disk on your lab system.
Solution
df -k
vxdisk -o alldgs list
Solution
vxdisksetup -i sdc format=sliced
Note: If an error occurs when initializing the sdc disk, use the -f option to
force the initialization.
For example: vxdisksetup -f -i sdc format=sliced
End of Solution
3 Create a non-cds disk group called testdg using the internal disk you initialized
in step 2.
Solution
vxdg init testdg testdg01=sdc cds=off
End of Solution
Solution
vxassist -g testdg make testvol 1g init=zero
End of Solution
Solution
mkfs -t vxfs /dev/vx/rdsk/testdg/testvol
End of Solution
This script restores a fragmented file system onto the volume and performs a
file system check so that the volume can be mounted. Whatever files are in the
existing file system will be lost.
Solution
cd /student/labs/sf/sf61
./extentfrag_vxfs.pl
End of Solution
7 Mount the file system on /test. Note that you may need to perform a file
system check before mounting the file system.
Solution
fsck -t vxfs /dev/vx/rdsk/testdg/testvol (if necessary)
mkdir /test
mount -t vxfs /dev/vx/dsk/testdg/testvol /test
End of Solution
sym1
1 In the appdg disk group create a 1-GB concatenated volume called appvol.
Solution
vxassist -g appdg make appvol 1g
End of Solution
Solution
mkfs -t vxfs /dev/vx/rdsk/appdg/appvol
mount -t vxfs /dev/vx/dsk/appdg/appvol /app
End of Solution
Solution
/opt/VRTS/bin/fsadm -D -E /app
End of Solution
Solution
A fragmented file system is a file system where the free space and/or file data
is in relatively small extents scattered throughout different allocation units
within the file system.
End of Solution
5 If you were shown the following extent fragmentation report about a file
system, what would you conclude?
Solution
A high total in the Dirs to Reduce column indicates that the directories are not
optimized. This file system's directories should be optimized by directory
defragmentation.
End of Solution
Solution
umount /app
vxassist -g appdg remove volume appvol
End of Solution
Solution
/opt/VRTS/bin/fsadm -D -E /test
8 Defragment /test and gather summary statistics after each pass through the
file system. After the defragmentation completes, determine whether /test is
still fragmented. Why or why not?
Solution
/opt/VRTS/bin/fsadm -e -E -s /test
/opt/VRTS/bin/fsadm -D -E /test
End of Solution
Solution
A fragmented file system has free space scattered throughout the file system in
relatively small extents whereas an unfragmented file system has free space in
just a few relatively large extents.
End of Solution
Solution
Yes, volatile environments wherein files are grown, shrunk, erased, moved,
and so on, especially where the file systems do not have much free space, are
prone to fragmentation.
Stable environments, such as Oracle databases and logs, have very little impact
on the supporting file system and so require infrequent defragmentation.
End of Solution
sym1
In this lab section, you make a larger volume so that you can see the time
difference when using the SmartMove feature.
Solution
mkfs -t vxfs /dev/vx/rdsk/appdg/appvol
mkdir /app (if necessary)
mount -t vxfs /dev/vx/dsk/appdg/appvol /app
End of Solution
Solution
cp /etc/hosts /app
End of Solution
Solution
umount /app
End of Solution
5 Mirror the appvol volume. Record the time it takes to complete the mirror
operation.
Solution
time -p vxassist -g appdg mirror appvol
Time to create mirror _____________________________________
End of Solution
Solution
vxassist -g appdg remove mirror appvol
End of Solution
Solution
mount -t vxfs /dev/vx/dsk/appdg/appvol /app
End of Solution
8 Mirror the appvol volume. Record the time it takes to complete the mirror
operation.
Solution
time -p vxassist -g appdg mirror appvol
Note: The mirroring operation should not take as long as it did the first time
the mirror was created. This is because it is only mirroring the used
data in the file system and not the whole volume by using SmartMove,
as the file system is now mounted.
End of Solution
Solution
umount /app
End of Solution
Solution
vxassist -g appdg remove volume appvol
End of Solution
sym1
1 View all the disk devices on the system. Then use the vxdisk -o thin
list command to list only the thin provisioning capable devices.
Solution
vxdisk -o alldgs list
vxdisk -o thin list
End of Solution
2 Locate the thin provisioning and thin reclamation capable devices from the
output in the previous step. The TYPE column in the output of the
vxdisk -o thin list command should display thinrclm. Choose
two thin reclamation capable devices (3pardata0_49 and 3pardata0_50); if they
are uninitialized, use the vxdisksetup command to initialize them.
Note: If you do not see any thin provisioning and thin reclamation capable
devices in the vxdisk list output, contact your instructor. You
must have thin provisioning and thin reclamation capable devices to
complete this lab section.
Solution
vxdisksetup -i 3pardata0_49
vxdisksetup -i 3pardata0_50
vxdisk -o alldgs list
vxdisk -o thin list
The TYPE field in the output of the vxdisk -o alldgs list command
should change to auto:cdsdisk and the STATUS of the disk should change
to online thinrclm but the DISK and GROUP columns should still be
empty.
End of Solution
Solution
The TYPE and STATUS of the disks are the same but the DISK and GROUP
columns now show the new disk media name and the disk group name
respectively.
End of Solution
Solution
vxassist -g thindg make thinvol 3g
End of Solution
5 Create a Veritas file system on the volume and mount it to /thin. Do not add
the file system to the file system table.
Solution
mkfs -t vxfs /dev/vx/rdsk/thindg/thinvol
mkdir /thin (if necessary)
mount -t vxfs /dev/vx/dsk/thindg/thinvol /thin
End of Solution
6 Display the size of the file system using the df -k /thin command.
Solution
df -k /thin
End of Solution
Solution
vxdisk -o thin,fssize list
End of Solution
8 Use the dd command to create several 400-MB files in the file system mounted
at /thin, so that the free space drops below 10 percent of the total file system
size. Use the df -k /thin command to monitor the file system free space.
Solution
dd if=/dev/zero of=/thin/file1 bs=1024k count=400
dd if=/dev/zero of=/thin/file2 bs=1024k count=400
dd if=/dev/zero of=/thin/file3 bs=1024k count=400
dd if=/dev/zero of=/thin/file4 bs=1024k count=400
dd if=/dev/zero of=/thin/file5 bs=1024k count=400
dd if=/dev/zero of=/thin/file6 bs=1024k count=400
dd if=/dev/zero of=/thin/file7 bs=1024k count=400
df -k /thin
End of Solution
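The seven dd commands can be collapsed into a loop. A sketch (it writes small 1-MB files under /tmp instead of 400-MB files under /thin, so it is safe to run outside the lab; on the lab system you would substitute of=/thin/file$i and count=400):

```shell
# Sketch: create numbered fill files in a loop.
# Small files in /tmp for a safe demo; the lab uses 400-MB files in /thin.
mkdir -p /tmp/thindemo
for i in 1 2 3 4 5 6 7; do
  dd if=/dev/zero of=/tmp/thindemo/file$i bs=1024k count=1 2>/dev/null
done
ls /tmp/thindemo | wc -l   # 7
```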
Solution
vxdisk -o thin,fssize list
End of Solution
Solution
rm -f /thin/file*
End of Solution
Solution
df -k /thin
vxdisk -o thin,fssize list
End of Solution
12 Use the vxdisk reclaim command on the thindg disk group to reclaim
the space on the LUNS.
Solution
vxdisk reclaim thindg
End of Solution
Solution
vxdisk -o thin,fssize list
End of Solution
14 Unmount the /thin file system and destroy the thindg disk group.
Solution
umount /thin
vxdg destroy thindg
End of Solution
Solution
umount /test
vxdg destroy testdg
vxdg destroy appdg
End of Solution
End of lab
Prerequisite setup
To perform this lab, you need a lab system with Storage Foundation pre-installed,
configured and licensed. In addition to this, you also need a minimum of three
external disks to be used during the labs.
Before you begin this lab, destroy any data disk groups that are left from previous
labs:
vxdg destroy diskgroup
sym1
2 List all the enclosures connected to your system using the vxdmpadm
listenclosure all command. Does Volume Manager recognize the disk
array you are using in your lab environment? What is the name of the
enclosure? Note the enclosure name here.
Volume Manager recognizes the disk array if it is among the supported disk
arrays you listed in step 1. Any internal disks will show with an enclosure
name of OTHER_DISKS or DISK.
Note: Volume Manager 6.x does not require that the all option be used. If it
is left out of the command, all is assumed.
End of Solution
Solution
vxddladm get namingscheme
NAMING_SCHEME PERSISTENCE LOWERCASE USE_AVID
======================================================
Enclosure Based Yes Yes Yes
End of Solution
Solution
vxdiskadm (only if EBN is not set)
Select the option, Change the disk-naming scheme and complete the
prompts to select enclosure-based naming.
End of Solution
5 Display the disks attached to your system and note the changes.
Solution
vxdisk -o alldgs list
End of Solution
Solution
vxdmpadm setattr enclosure emc0 name=emc_disk
End of Solution
7 Display the disks attached to your system and note the changes.
Solution
vxdisk -o alldgs list
The disks should now contain the new name that you entered in the previous
step, for example emc_disk_dd1.
End of Solution
sym1
In the virtual lab environment, you will observe two controllers listed for the
enclosure you renamed to emc_disk.
End of Solution
2 Using one of the controller names discovered in the previous step, display all
paths connected to the controller using the vxdmpadm getsubpaths
ctlr=controller command. Compare the NAME and the
DMPNODENAME columns in the output.
Solution
vxdmpadm getsubpaths ctlr=controller
The NAME column lists all of the disk devices that the operating system sees
whereas the DMPNODENAME column provides the corresponding DMP node
name used for that disk device. If you have not switched to enclosure based
naming, these names will be the same. Note that the DMP node names are the
ones displayed by the vxdisk -o alldgs list command.
Example Output
vxdmpadm getsubpaths ctlr=c1
Solution
vxdmpadm getsubpaths dmpnodename=emc_disk_dd7
End of Solution
4 View DDL extended attributes for the dmpnodename used in the previous step
using the vxdisk -p list command.
Solution
vxdisk -p list emc_disk_dd7
You should see extended attributes such as cabinet serial number, array type,
transport, and so on.
End of Solution
5 Determine the Port ID (PID) for all devices attached to the system using the
vxdisk -p list command and the -x option.
Solution
vxdisk -x PID -p list
Selecting a specific attribute is useful when you wish to see that attribute for all
devices attached to a system.
End of Solution
6 Determine the DDL_DEVICE_ATTR for all disks attached to the system using
the vxdisk -p list command and the -x option. If no attributes are set
the attribute displays a NULL.
Solution
vxdisk -x DDL_DEVICE_ATTR -p list
End of Solution
End of Solution
sym1
1 Create a disk group called appdg that contains two disks (emc_disk_dd7 and
emc_disk_dd8).
Solution
vxdisksetup -i emc_disk_dd7 (if necessary)
vxdisksetup -i emc_disk_dd8 (if necessary)
vxdg init appdg appdg01=emc_disk_dd7 \
appdg02=emc_disk_dd8
End of Solution
Solution
vxassist -g appdg make appvol 1g
End of Solution
3 Determine the device used for the appvol volume. This device name will be
used as the dmpnodename in step 10.
Solution
vxprint -g appdg -htr
v  appvol      -          ENABLED  ACTIVE  2097152  SELECT    -        fsgen
pl appvol-01   appvol     ENABLED  ACTIVE  2097152  CONCAT    -        RW
sd appdg01-01  appvol-01  appdg01  0       2097152  0         emc_disk_dd7 ENA
In the above example, the emc_disk_dd7 device is used for the appvol volume.
End of Solution
Solution
mkfs -t vxfs /dev/vx/rdsk/appdg/appvol
End of Solution
5 Create a mount point for the appvol volume called /app and mount the file
system created in the previous step to the mount point.
Solution
mkdir /app (if necessary)
mount -t vxfs /dev/vx/dsk/appdg/appvol /app
End of Solution
Solution
vxdmpadm iostat start
End of Solution
Solution
vxdmpadm iostat reset
End of Solution
8 Next, use the dmpiotest script to generate I/O on the disk used by the appdg
disk group. The dmpiotest script uses the vxbench utility, which is a part
of the VRTSspt package and is installed as a part of the SF installation. Change
to the directory containing lab scripts and execute the script:
./dmpiotest
Solution
cd /student/labs/sf/sf61
./dmpiotest /app
/opt/VRTSspt/FS/VxBench/vxbench_rhel5_x86_64 \
-w rand_mixed -i \
iosize=8,iocount=65536,maxfilesize=102400,nreps=100 \
/app/test1 /app/test2 /app/test3 /app/test4 \
/app/test5 &
Note: Note that the script is using a version of the vxbench program
specific to your platform.
End of Solution
Solution
vxdmpadm iostat show all
End of Solution
10 Display I/O statistics for the DMP node that corresponds to the device used by
appvol. Display statistics every two seconds, eight times.
Note: You can use the vxprint -g appdg -htr appvol command to
identify the dmp node name of the device used by appvol.
Solution
vxdmpadm iostat show dmpnodename=emc_disk_dd7 \
interval=2 count=8
End of Solution
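To compare path activity at a glance, you can total the per-path counters from the iostat output. The column layout below is illustrative and simplified (real vxdmpadm iostat output varies by version and includes more columns), so treat this as a sketch of the parsing pattern only:

```shell
# Sum read and write operation counts per path from a captured
# (simplified, hypothetical) vxdmpadm iostat listing.
iostat_out='
PATHNAME   READS  WRITES  RBLOCKS  WBLOCKS
sda        120    80      2400     1600
sdb        118    82      2360     1640'
totals=$(echo "$iostat_out" | awk 'NR > 2 { print $1, $2 + $3 }')
echo "$totals"
```

A roughly even split between paths indicates load balancing; after switching the I/O policy to singleactive you would expect one path to carry all the operations.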
sym1
1 Use the dmpiotest script to generate I/O on the disk used by the appdg disk
group. The dmpiotest script uses the vxbench utility, which is a part of
the VRTSspt package and is installed as a part of the SF installation. Change to
the directory containing lab scripts and execute the script:
./dmpiotest
Solution
cd /student/labs/sf/sf61
./dmpiotest /app
This script creates some test files in the /app directory (if they do not exist)
and starts several invocations of the vxbench program as follows:
/opt/VRTSspt/FS/VxBench/vxbench_rhel5_x86_64 \
-w rand_mixed -i \
iosize=8,iocount=65536,maxfilesize=102400,nreps=100 \
/app/test1 /app/test2 /app/test3 /app/test4 \
/app/test5 &
Note: The script uses a version of the vxbench program specific to your
platform.
End of Solution
2 Display I/O statistics for the DMP node that corresponds to the device used by
appvol. Display statistics every two seconds, one thousand times. This ensures
that the output continues as you enable and disable paths. I/O should be
present for both paths to the device.
Solution
vxdmpadm iostat show dmpnodename=emc_disk_dd7 \
interval=2 count=1000
End of Solution
3 Open a new terminal and use the vxdmpadm disable command to disable
one of the paths shown in the vxdmpadm iostat output. Go back to the
original terminal and note that I/O for that path stops.
Solution
vxdmpadm disable path=path_name
End of Solution
4 Switch to the new terminal and use the vxdmpadm enable command to
enable the path that was disabled in the previous step. Go back to the original
terminal and note that I/O for that path resumes.
Solution
vxdmpadm enable path=path_name
End of Solution
5 Switch to the new terminal and use the vxdmpadm disable command to
disable the other path shown in the vxdmpadm iostat output. Go back to
the original terminal and note that I/O for that path stops.
Solution
vxdmpadm disable path=path_name
End of Solution
sym1
1 Display the current I/O policy for the enclosure you are using.
Solution
vxdmpadm getattr enclosure emc_disk iopolicy
The default I/O policy is MinimumQ for the array used in the virtual lab
environment.
End of Solution
2 Change the current I/O policy for the enclosure to stop load-balancing and only
use multipathing for high availability.
Solution
vxdmpadm setattr enclosure emc_disk \
iopolicy=singleactive
End of Solution
Solution
vxdmpadm getattr enclosure emc_disk iopolicy
End of Solution
Solution
vxdmpadm iostat reset
End of Solution
5 Next, use the dmpiotest script to generate I/O on the disk used by the appdg
disk group. The dmpiotest script uses the vxbench utility, which is a part
of the VRTSspt package and is installed as a part of the SF installation. Change
to the directory containing lab scripts and execute the script:
./dmpiotest
Solution
cd /student/labs/sf/sf61
./dmpiotest /app
This script creates some test files in the /app directory (if they do not exist)
and starts several invocations of the vxbench program as follows:
/opt/VRTSspt/FS/VxBench/vxbench_rhel5_x86_64 \
-w rand_mixed -i \
iosize=8,iocount=65536,maxfilesize=102400,nreps=100 \
/app/test1 /app/test2 /app/test3 /app/test4 \
/app/test5 &
Note: The script uses a version of the vxbench program specific to your
platform.
End of Solution
6 Display I/O statistics for the DMP node that corresponds to the device used by
appvol. Display statistics every two seconds, eight times. Compare the
output to the output you observed before changing the DMP policy to
singleactive. Note that a single path is now used.
Note: You can use the vxprint -g appdg -htr appvol command to
identify the dmp node name of the device used by appvol.
Solution
vxdmpadm iostat show dmpnodename=emc_disk_dd7 \
interval=2 count=8
End of Solution
Solution
vxdmpadm setattr enclosure emc_disk iopolicy=minimumq
End of Solution
8 Next, use the dmpiotest script to generate I/O on the disk used by the appdg
disk group. The dmpiotest script uses the vxbench utility, which is a part
of the VRTSspt package and is installed as a part of the SF installation. Change
to the directory containing lab scripts and execute the script:
./dmpiotest
Solution
cd /student/labs/sf/sf61
./dmpiotest /app
This script creates some test files in the /app directory (if they do not exist)
and starts several invocations of the vxbench program as follows:
/opt/VRTSspt/FS/VxBench/vxbench_rhel5_x86_64 \
-w rand_mixed -i \
iosize=8,iocount=65536,maxfilesize=102400,nreps=100 \
/app/test1 /app/test2 /app/test3 /app/test4 \
/app/test5 &
Note: The script uses a version of the vxbench program specific to your
platform.
End of Solution
9 Display I/O statistics for the DMP node again. Compare the output to the
output you observed when changing the DMP policy to singleactive. Note that
both paths are now used again.
Solution
vxdmpadm iostat show dmpnodename=emc_disk_dd7 \
interval=2 count=8
End of Solution
Solution
umount /app
End of Solution
Note: If the unmount of /app fails because the device is busy, it is because
the vxbench commands started by the dmpiotest script are still
running. Either let them complete, or kill each running command
(ps -ef | grep vxbench). Use the pkill vxbench command to
stop the dmpiotest script.
11 Rename the enclosure back to its original name (emc0) using the vxdmpadm
setattr command.
Note: The original name of the enclosure was displayed by the vxdmpadm
listenclosure all command that you used in step 2 of
Exercise 1.
Solution
vxdmpadm setattr enclosure emc_disk name=emc0
End of Solution
Solution
vxdg destroy appdg
End of Solution
End of lab
Lab information
In preparation for this lab, you need the following information about your lab
environment.
Object Value
Shared data disks: emc0_dd7 - emc0_dd12
3pardata0_49 - 3pardata0_54
Location of lab scripts: /student/labs/sf/sf61
sym1
1 Create a disk group called appdg that contains one disk (emc0_dd7).
Solution
vxdisksetup -i emc0_dd7 (if necessary)
vxdg init appdg appdg01=emc0_dd7
End of Solution
Solution
vxassist -g appdg make appvol 1g
End of Solution
Solution
mkfs -t vxfs /dev/vx/rdsk/appdg/appvol
mkdir /app (if required)
mount -t vxfs /dev/vx/dsk/appdg/appvol /app
End of Solution
Solution
cp -r /etc/default /app
ls -lR /app
End of Solution
Solution
tail -f /var/log/messages
End of Solution
Notes:
The faildg_temp.pl script disables the paths to the disk in the disk
group to simulate a hardware failure. This is just a simulation and not a real
failure; therefore, the operating system will still be able to see the disk after
the failure. The script will prompt you for the disk group name and then it
will create the failure by disabling the paths to the disk, performing some
I/O and then re-enabling the paths.
All lab scripts are located in the /student/labs/sf/sf61 directory.
Note: You may have to run the script two or three times before the error occurs.
Solution
/student/labs/sf/sf61/faildg_temp.pl
What is the name of the disk group would you like to
temporarily disable? [appdg]: appdg
Checking to make sure appdg is enabled . . . done.
Creating failure, please be patient
Note: You will see a dd error because I/O will be stopped as soon as the
failure is recognized.
End of Solution
Solution
vxdisk -o alldgs list
vxdg list
The disk group should show as disabled and the disk status should change to
online dgdisabled.
End of Solution
Solution
df -k /app
The file system is also disabled.
End of Solution
9 Assuming that the failure was due to a temporary fiber disconnection and that
the data is still intact, recover the disk group and start the volume using the first
terminal window. Verify the disk and disk group status using the vxdisk -o
alldgs list and vxdg list commands.
Solution
umount /app
vxdg deport appdg
vxdg import appdg
vxdisk -o alldgs list
vxdg list
The disk group should now be enabled and the disk status should change back
to online.
End of Solution
10 Remount the file system and verify that the contents are still there.
Solution
mount -t vxfs /dev/vx/dsk/appdg/appvol /app
ls -lR /app
Lab 8: Resolving Hardware Problems
It is not necessary to run an fsck on the file system.
End of Solution
Solution
umount /app
End of Solution
Solution
vxdg destroy appdg
End of Solution
sym1
Overview
The following sections use an interactive script to simulate a variety of disk failure
scenarios. Your goal is to recover from the problem as described in each scenario.
Use your knowledge of VxVM administration, in addition to the VxVM recovery
tools and concepts described in the lesson, to determine which steps to take to
ensure recovery. After you recover the test volumes, the script verifies your
solution and provides you with the result. You succeed when you recover the
volumes without corrupting the data.
For most of the recovery problems, you can use any of the VxVM interfaces: the
command line interface, the Veritas Operations Manager (VOM) Web console, or
the vxdiskadm menu interface. Lab solutions are provided for only one method.
If you have questions about recovery using interfaces not covered in the solutions,
see your instructor.
Setup
Due to the way in which the lab scripts work, it is important to set up your
environment as described in this setup section:
1 Create a disk group named testdg and add three disks (emc0_dd7,
emc0_dd8, and emc0_dd9) to the disk group. Assign the following disk
media names to the disks: testdg01, testdg02, and testdg03.
Solution
vxdg init testdg testdg01=emc0_dd7 testdg02=emc0_dd8 \
testdg03=emc0_dd9
End of Solution
2 In the first terminal window, navigate to the directory that contains the lab
scripts. Note that the lab scripts are located in the
/student/labs/sf/sf61 directory.
Solution
cd /student/labs/sf/sf61
End of Solution
sym1
In this lab exercise, a temporary disk failure is simulated. Your goal is to recover
all of the redundant and nonredundant volumes that were on the failed drive. The
lab script disk_failures.pl sets up the test volume configuration and
simulates a disk failure. You must then recover and validate the volumes.
1 From the first terminal window (from the directory that contains the lab
scripts), run the script disk_failures.pl, answer the initial configuration
questions and then select option 1, Exercise 3 - Recovering from temporary
disk failure. Note that the initial configuration questions will only be asked the
first time you run the script. Use test as the prefix for disk group and volume
names.
Solution
./disk_failures.pl
Initial Configuration File Check
What prefix should be used for the disk group name and
volume names? [app]: test <ENTER>
This script sets up two volumes (test1 and test2), creates a vxfs file system on
each, and then copies duplicate files to each file system. Both file systems are
then mounted.
test1 with a mirrored layout
test2 with a concatenated layout
Solution
vxprint -g testdg -htr
vxdisk -o alldgs list
End of Solution
3 Attempt to view the files that were copied to mount points /test1 and
/test2. What did you see?
Solution
ls /test1
ls /test2
Because the test1 volume is mirrored, the files in the /test1 mount point are
still accessible. When trying to view the files in /test2, you should see the
following error:
/test2: I/O error
End of Solution
Solution
vxprint -g testdg -htr
a If you are using enclosure based naming, identify the OS native name of
the disk that has temporarily failed. You will use this OS disk name while
verifying that the operating system recognizes the device.
Solution
vxdisk -e list ebn_of_failed_disk
DEVICE              TYPE          DISK GROUP  STATUS  OS_NATIVE_NAME      ATTR
ebn_of_failed_disk  auto:cdsdisk  testdg      online  osn_of_failed_disk  lun
End of Solution
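When several disks are involved, the enclosure-based to OS-native mapping can be extracted from the vxdisk -e list output with awk. The device names in the sample below (emc0_dd7, sdk) are hypothetical placeholders, and the listing is abbreviated:

```shell
# Map an enclosure-based device name to its OS-native name using the
# next-to-last column of captured vxdisk -e list output.
vxdisk_e='
DEVICE    TYPE          DISK  GROUP   STATUS  OS_NATIVE_NAME  ATTR
emc0_dd7  auto:cdsdisk  -     testdg  online  sdk             lun'
osn=$(echo "$vxdisk_e" | awk '$1 == "emc0_dd7" { print $(NF-1) }')
echo "$osn"
```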
b Ensure that the operating system recognizes the device using the
appropriate OS commands. Ignore any warning message about disk
geometry mismatch, if displayed.
Solution
partprobe /dev/osn_of_failed_disk
End of Solution
c Verify that the operating system recognizes the device using the
appropriate OS commands.
Solution
fdisk -l /dev/osn_of_failed_disk
End of Solution
d Force the VxVM configuration daemon to reread all of the drives in the
system.
Solution
vxdctl enable
End of Solution
Solution
vxreattach
End of Solution
Solution
vxrecover
End of Solution
Solution
vxvol -g testdg -f start test2
End of Solution
5 Because this is a temporary failure, the files in the test2 volume (and file
system) are still available. Recover the mount point by performing the
following:
Solution
umount /test2
End of Solution
Solution
fsck -t vxfs /dev/vx/rdsk/testdg/test2
End of Solution
Solution
mount -t vxfs /dev/vx/dsk/testdg/test2 /test2
End of Solution
Solution
diff /test1 /test2
If the files are the same, diff produces no output; only differences are
displayed. You should, however, see output for common subdirectories:
Common subdirectories: /test1/lost+found and /test2/lost+found
Note: There is a potential for file system corruption in the test2 volume
since it has no redundancy.
End of Solution
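The diff behavior described above can be reproduced outside the lab with two throwaway directories: identical files produce no output, while the lost+found subdirectories trigger only the Common subdirectories message:

```shell
# Self-contained illustration of diff on two identical directory
# trees that each contain a lost+found subdirectory.
tmp=$(mktemp -d)
mkdir -p "$tmp/test1/lost+found" "$tmp/test2/lost+found"
echo same > "$tmp/test1/file"
echo same > "$tmp/test2/file"
result=$(diff "$tmp/test1" "$tmp/test2")   # exit 0: contents match
rm -rf "$tmp"
echo "$result"
```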
7 Unmount the file systems and delete the test1 and test2 volumes.
Solution
umount /test1
umount /test2
vxassist -g testdg remove volume test1
vxassist -g testdg remove volume test2
End of Solution
sym1
In this lab exercise, a permanent disk failure is simulated. Your goal is to replace
the failed drive and recover the volumes as needed. The lab script
disk_failures.pl sets up the test volume configuration and simulates a disk
failure. You must recover the failure and validate the volumes.
1 In the first terminal window (from the directory that contains the lab scripts),
run the script disk_failures.pl, and select option 2, Exercise 4 -
Recovering from permanent disk failure.
Solution
./disk_failures.pl
This script can be used to test any specific exercise
for the SF 6.1 Disk Failure Labs.
This script sets up two volumes (test1 and test2), creates a vxfs file system on
each, and then copies duplicate files to each file system. Both file systems are
then mounted.
test1 with a mirrored layout
test2 with a concatenated layout
Solution
vxprint -g testdg -htr
vxdisk -o alldgs list
End of Solution
3 Attempt to view the files that were copied to mount points /test1 and
/test2. What did you see?
Solution
ls /test1
ls /test2
Because the test1 volume is mirrored, the files in the /test1 mount point are
still accessible. When trying to view the files in /test2, you should see the
following error:
/test2: I/O error
End of Solution
4 Replace the permanently failed drive with a new disk at another SCSI location.
Then, recover the volumes.
Solution
vxdisksetup -i emc0_d10 (if necessary)
End of Solution
Solution
vxdg -g testdg -k adddisk testdg02=emc0_d10
End of Solution
Solution
vxrecover
End of Solution
Solution
vxvol -g testdg -f start test2
End of Solution
Note: You can also use the vxdiskadm menu interface to correct the failure.
Select Replace a failed or removed disk option and select the desired
drive when prompted.
5 Because this is a permanent failure, the files in the test2 volume (and file
system) are no longer available. Recover the mount point by performing the
following:
Solution
umount /test2
End of Solution
b Attempt to mount the test2 volume to /test2. You will see an error
because the file system has been lost during the recovery.
Solution
mount -t vxfs /dev/vx/dsk/testdg/test2 /test2
UX:vxfs mount: ERROR: V-3-20012: not a valid vxfs file
system
UX:vxfs mount: ERROR: V-3-24996: Unable to get disk
layout version
End of Solution
c Create a new file system on the test2 volume and then mount the test2
volume to /test2.
Solution
mkfs -t vxfs /dev/vx/rdsk/testdg/test2
mount -t vxfs /dev/vx/dsk/testdg/test2 /test2
End of Solution
6 List the contents of /test2. In a real failure scenario, the files in this file
system would need to be restored from a backup.
Solution
ls /test2
lost+found
End of Solution
7 Unmount the file systems and delete the test1 and test2 volumes.
Solution
umount /test1
umount /test2
vxassist -g testdg remove volume test1
vxassist -g testdg remove volume test2
End of Solution
8 When you have completed this exercise, the disk device that was originally
used during the disk failure simulation is in an online invalid state.
Reinitialize the disk to prepare for later labs.
Solution
vxdisk -o alldgs list
vxdisksetup -i accessname
End of Solution
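If more than one device is left in the online invalid state, the candidates for vxdisksetup can be filtered out of the vxdisk list output. The sample listing below is abbreviated and the device names are placeholders:

```shell
# Print access names of devices whose status is 'online invalid',
# i.e. the ones that still need vxdisksetup -i. Sample output only;
# on sym1 you would pipe 'vxdisk -o alldgs list' into awk instead.
vxdisk_list='
DEVICE     TYPE          DISK  GROUP  STATUS
emc0_dd7   auto:cdsdisk  -     -      online invalid
emc0_dd8   auto:cdsdisk  -     -      online'
invalid=$(echo "$vxdisk_list" | awk '/online invalid/ { print $1 }')
echo "$invalid"
```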
sym1
Note: Check with your instructor to see if you have more time to complete the
optional lab exercises. You do not need to perform the optional lab
exercises unless you have extra time. The optional exercises do not have
any impact on further labs.
In this optional lab exercise, a temporary disk failure is simulated. Your goal is to
recover all of the volumes that were on the failed drive. The lab script
disk_failures.pl sets up the test volume configuration and simulates a disk
failure. You must recover the failure and validate the volumes.
1 Use the vxdg command with the adddisk option to add a fourth disk
(emc0_d11) called testdg04 to the testdg disk group. If necessary,
initialize a new disk before adding it to the disk group.
Solution
vxdisksetup -i emc0_d11 (if necessary)
vxdg -g testdg adddisk testdg04=emc0_d11
End of Solution
2 From the directory that contains the lab scripts, run the script
disk_failures.pl, and select option 3, Exercise 5 - Optional Lab:
Recovering from temporary disk failure - Layered volume.
Solution
./disk_failures.pl
This script can be used to test any specific exercise
for the SF 6.1 Disk Failure Labs.
This script sets up two volumes (test1 and test2), creates a vxfs file system on
each, and then copies duplicate files to each file system. Both file systems are
then mounted.
test1 with a stripe-mirror layout on 4 disks
test2 with a concatenated layout
Solution
vxprint -g testdg -htr
vxdisk -o alldgs list
End of Solution
4 Attempt to view the files that were copied to mount points /test1 and
/test2. What did you see?
Solution
ls /test1
ls /test2
Because the test1 volume is layered and mirrored, the files in the /test1
mount point are still accessible even though two disks have failed. When trying
to view the files in /test2, you should see the following error:
/test2: I/O error
End of Solution
5 Assume that the failure was temporary. In a second terminal window, attempt
to recover the volumes.
A
Solution
vxprint -g testdg -htr
vxdisk -o alldgs list
End of Solution
a If you are using enclosure based naming, identify the OS native names of
the disks that have temporarily failed. You will use these OS disk names
while verifying that the operating system recognizes the devices.
Solution
vxdisk -e list
DEVICE               TYPE          DISK GROUP  STATUS  OS_NATIVE_NAME       ATTR
ebn_of_failed_disk1  auto:cdsdisk  testdg      online  osn_of_failed_disk1  lun
ebn_of_failed_disk2  auto:cdsdisk  testdg      online  osn_of_failed_disk2  lun
End of Solution
b Ensure that the operating system recognizes the devices using the
appropriate OS commands. Ignore any warning message about disk
geometry mismatch, if displayed.
Solution
partprobe /dev/osn_of_first_failed_disk
partprobe /dev/osn_of_second_failed_disk
End of Solution
c Verify that the operating system recognizes the devices using the
appropriate OS commands.
Solution
fdisk -l /dev/osn_of_first_failed_disk
fdisk -l /dev/osn_of_second_failed_disk
End of Solution
d Force the VxVM configuration daemon to reread all of the drives in the
system.
Solution
vxdctl enable
End of Solution
e Reattach the devices to the disk media records using the vxreattach
command.
Solution
vxreattach
End of Solution
Solution
vxrecover
End of Solution
Solution
vxvol -g testdg -f start test2
End of Solution
6 Because this is a temporary failure, the files in the test2 volume (and file
system) are still available. Recover the mount point by performing the
following:
Solution
umount /test2
End of Solution
Solution
fsck -t vxfs /dev/vx/rdsk/testdg/test2
End of Solution
Solution
mount -t vxfs /dev/vx/dsk/testdg/test2 /test2
End of Solution
Solution
diff /test1 /test2
If the files are the same, diff produces no output; only differences are
displayed. You should, however, see output for common subdirectories:
Common subdirectories: /test1/lost+found and /test2/lost+found
End of Solution
Note: There is a potential for file system corruption in the test2 volume since
it has no redundancy.
8 Unmount the file systems and delete the test1 and test2 volumes.
Solution
umount /test1
umount /test2
vxassist -g testdg remove volume test1
vxassist -g testdg remove volume test2
End of Solution
sym1
Note: Check with your instructor to see if you have more time to complete the
optional lab exercises. You do not need to perform the optional lab
exercises unless you have extra time. The optional exercises do not have
any impact on further labs.
In this optional lab exercise, a permanent disk failure is simulated. Your goal is to
replace the failed drive and recover the volumes as needed. The lab script
disk_failures.pl sets up the test volume configuration and simulates a disk
failure. You must recover the failure and validate the volumes.
1 From the directory that contains the lab scripts, run the script
disk_failures.pl, and select option 4, Exercise 6 - Optional Lab:
Recovering from permanent disk failure - Layered volume:
Solution
./disk_failures.pl
This script can be used to test any specific exercise
for the SF 6.1 Disk Failure Labs.
This script sets up two volumes (test1 and test2), creates a vxfs file system on
each, and then copies duplicate files to each file system. Both file systems are
then mounted.
Note: You can ignore a warning message such as Disk destroy failed
if the above script displays one, and skip steps 4a and 4b for
initializing and adding the failed disk. You can still recover the data
using the vxrecover command as shown in step 4c.
Solution
vxprint -g testdg -htr
vxdisk -o alldgs list
End of Solution
3 Attempt to view the files that were copied to mount points /test1 and
/test2. What did you see?
Solution
ls /test1
ls /test2
Because the test1 volume is layered and mirrored, the files in the /test1
mount point are still accessible even though two disks have failed. When trying
to view the files in /test2, you should see the following error:
/test2: I/O error
End of Solution
4 Replace the permanently failed drive with either a new disk at the same SCSI
location or another disk at a different SCSI location. Then, recover the
volumes.
Note: If you are unable to initialize and add the failed disk, skip steps
4a and 4b, then continue with step 4c to finish the exercise.
a In the second terminal window, initialize the drive that failed. In a real
failure scenario this drive would have been replaced with a new drive.
Solution
vxdisksetup -i accessname
End of Solution
Solution
vxdg -g testdg -k adddisk testdg02=accessname
End of Solution
Solution
vxrecover
End of Solution
Solution
vxvol -g testdg -f start test2
End of Solution
Note: You can also use the vxdiskadm menu interface to correct the failure.
Select Replace a failed or removed disk option and select the desired
drive when prompted.
5 Because this is a permanent failure, the files in the test2 volume (and file
system) are no longer available. Recover the mount point and file system by
performing the following:
Solution
umount /test2
End of Solution
Solution
mkfs -t vxfs /dev/vx/rdsk/testdg/test2
End of Solution
c Mount the test2 volume to /test2 and list the contents. The mount point
should only contain a lost+found directory.
Solution
mount -t vxfs /dev/vx/dsk/testdg/test2 /test2
ls /test2
End of Solution
6 Unmount the file systems and delete the test1 and test2 volumes.
Solution
umount /test1
umount /test2
vxassist -g testdg remove volume test1
vxassist -g testdg remove volume test2
End of Solution
Solution
vxdg destroy testdg
End of Solution
sym1
Note: Check with your instructor to see if you have more time to complete the
optional lab exercises. You do not need to perform the optional lab
exercises unless you have extra time. The optional exercises do not have
any impact on further labs.
Note: If you have not already done so, destroy the testdg disk group before you
start this section.
1 Create a disk group called appdg that contains four disks (emc0_dd7 -
emc0_d10).
Solution
vxdg init appdg appdg01=emc0_dd7 appdg02=emc0_dd8 \
appdg03=emc0_dd9 appdg04=emc0_d10
End of Solution
2 Create a 100-MB mirrored volume called appvol in the appdg disk group, add
a VxFS file system to the volume, and mount the file system at the mount point
/app.
Solution
vxassist -g appdg make appvol 100m layout=mirror
mkfs -t vxfs /dev/vx/rdsk/appdg/appvol
mkdir /app (if necessary)
mount -t vxfs /dev/vx/dsk/appdg/appvol /app
End of Solution
Solution
ps -ef | grep vxrelocd
kill -9 pid (if necessary)
ps -ef | grep vxrelocd
End of Solution
4 Next, simulate disk failure by writing over the private region using the
overwritepr.pl script.
While using the script, substitute the appropriate disk device name for one of
the disks in use by appvol, for example enter emc0_dd7.
Solution
cd /student/labs/sf/sf61
./overwritepr.pl
Enter a device used in appvol when prompted.
End of Solution
5 When the error occurs, view the status of the disks from the command line.
Solution
vxdisk -o alldgs list
The physical device is no longer associated with the disk media name and the
disk group.
End of Solution
Solution
vxprint -g appdg -htr
End of Solution
7 Recover the disk by replacing the private and public regions on the disk. In the
command, substitute the appropriate disk device name, for example use
emc0_dd7.
Solution
vxdisksetup -i accessname
Note: This step is only necessary when you replace the failed disk with a
brand new one. If it were a temporary failure, this step would not be
necessary.
End of Solution
Solution
vxdg -g appdg -k adddisk dm_name=accessname
where dm_name is the disk media name of the failed disk and accessname
is the enclosure-based name of the disk device used to replace the failed one.
End of Solution
9 Check the status of the disks and the volume. The disk should now be a part of
the disk group, but the volume still has a failure.
Solution
vxdisk -o alldgs list
vxprint -g appdg -htr
End of Solution
Solution
vxrecover
End of Solution
11 Check the status of the disks and the volume to ensure that the disk and volume
are fully recovered.
Solution
vxdisk -o alldgs list
vxprint -g appdg -htr
End of Solution
12 Unmount the /app file system and remove the appvol volume.
Solution
umount /app
vxassist -g appdg remove volume appvol
End of Solution
sym1
Note: Check with your instructor to see if you have more time to complete the
optional lab exercises. You do not need to perform the optional lab
exercises unless you have extra time. The optional exercises do not have
any impact on further labs.
2 Create a 100-MB mirrored volume called appvol in the appdg disk group, add
a VxFS file system to the volume, and mount the file system at the mount point
/app.
Solution
vxassist -g appdg make appvol 100m layout=mirror
mkfs -t vxfs /dev/vx/rdsk/appdg/appvol
mkdir /app (if necessary)
mount -t vxfs /dev/vx/dsk/appdg/appvol /app
End of Solution
3 Next, simulate disk failure by writing over the private region using the
overwritepr.pl script.
4 When the error occurs, view the status of the disks and volume from the
command line using the vxdisk list and vxprint commands. Allow
sufficient time for the vxrelocd daemon to relocate the failed device.
Solution
vxdisk -o alldgs list
vxprint -g appdg -htr
The physical device is no longer associated with the disk media name and the
disk group. The failed device in the volume should be relocated to a different
device within the disk group.
End of Solution
5 Recover the disk by replacing the private and public regions on the disk. In the
command, substitute the appropriate disk device name, for example use
emc0_dd7.
Solution
vxdisksetup -i accessname
Note: This step is only necessary when you replace the failed disk with a
brand new one. If it were a temporary failure, this step would not be
necessary.
End of Solution
Solution
vxdg -g appdg -k adddisk dm_name=accessname
End of Solution
7 Check the status of the disks and the volume. The failed disk should now be a
part of the disk group, but the plex that used to be on the failed disk is now
relocated to another disk in the disk group.
Solution
vxdisk -o alldgs list
vxprint -g appdg -htr
End of Solution
8 Use the vxunreloc command to return the plex back to the original device.
Solution
vxunreloc -g appdg appdg01
vxprint -g appdg -htr
Note: This solution assumes that the failed and then recovered disk was
appdg01. Depending on which disk you failed in step 3, you may need
to use a different disk media name with the vxunreloc command.
End of Solution
9 Unmount the /app file system and remove the appvol volume.
Solution
umount /app
vxassist -g appdg remove volume appvol
End of Solution
sym1
Note: Check with your instructor to see if you have more time to complete the
optional lab exercises. You do not need to perform the optional lab
exercises unless you have extra time. The optional exercises do not have
any impact on further labs.
2 Create a 100-MB mirrored volume called appvol in the appdg disk group, add
a VxFS file system to the volume, and mount the file system at the mount point
/app.
Solution
vxassist -g appdg make appvol 100m layout=mirror
mkfs -t vxfs /dev/vx/rdsk/appdg/appvol
mkdir /app (if necessary)
mount -t vxfs /dev/vx/dsk/appdg/appvol /app
End of Solution
3 Determine the first device used by the appvol volume. This device should be
emc0_dd7. Use the vxdisk list command to determine all paths to the
device.
Solution
vxprint -g appdg -htr
vxdisk list emc0_dd7
End of Solution
4 Use the vxdmpadm disable command to forcibly disable all paths to the
device, simulating a device failure. Then check the status of the device.
Solution
vxdmpadm -f disable path=path1,path2
vxdisk list emc0_dd7
End of Solution
5 Verify that the file system is still writable by using dd to create a small
test file in /app.
Solution
dd if=/dev/zero of=/app/test1 bs=1 count=10
End of Solution
6 Use the vxdmpadm enable command to enable all paths to the failed
device. Monitor the vxdisk list and vxprint outputs until the
vxattachd daemon senses that the device is back online and reattaches the
device and recovers the failed plexes.
Note: If the vxrelocd daemons are running, then the plex on the failed disk
will first be relocated to another disk in the disk group. Then the failed
plex and disk will be recovered.
Solution
vxdmpadm enable path=path1,path2
vxdisk -o alldgs list
vxprint -g appdg -htr
End of Solution
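The monitoring in this step can also be scripted rather than run by hand. The sketch below is a minimal polling loop; the check command is passed in as a parameter because the real check, a filter over the vxdisk list output for the device state, needs a live VxVM system. The function name and retry limit are our own illustration, not part of the lab.

```shell
#!/bin/sh
# Poll a status command until its output contains "online", up to a limit.
# check_cmd is a stand-in: in the lab it would be a filter over
# "vxdisk list emc0_dd7" (device name from this lab; adjust to yours).
poll_until_online() {
    check_cmd=$1
    max_tries=${2:-10}
    i=0
    while [ "$i" -lt "$max_tries" ]; do
        if $check_cmd | grep -q online; then
            echo "online after $i retries"
            return 0
        fi
        i=$((i + 1))
        sleep 1
    done
    echo "still offline after $max_tries tries" >&2
    return 1
}
```

On a lab system you might invoke it as `poll_until_online "vxdisk list emc0_dd7"` and then rerun vxprint once it returns.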
7 Unmount the /app file system and remove the appvol volume.
Solution
umount /app
vxassist -g appdg remove volume appvol
End of Solution
Note: Check with your instructor to see if you have more time to complete the
optional lab exercises. You do not need to perform the optional lab
exercises unless you have extra time. The optional exercises do not have
any impact on further labs.
1 You should have four disks (appdg01 through appdg04) in the disk group
appdg. Set all disks to have the spare flag on.
Solution
vxedit -g appdg set spare=on appdg01
vxedit -g appdg set spare=on appdg02
vxedit -g appdg set spare=on appdg03
vxedit -g appdg set spare=on appdg04
End of Solution
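Because the same vxedit command runs once per disk, the four commands above can also be written as a loop. In this sketch the commands are only echoed by default so the loop can be previewed without a live VxVM system; the function name and the preview mechanism are our own illustration.

```shell
#!/bin/sh
# Print (or run) "vxedit set spare=on" for each disk in the appdg group.
# By default the commands are only echoed; call set_spares "" on a real
# system to execute them.
set_spares() {
    run=${1-echo}
    for dm in appdg01 appdg02 appdg03 appdg04; do
        $run vxedit -g appdg set spare=on "$dm"
    done
}
set_spares    # preview mode: prints the four vxedit commands
```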
No, the volume is not created, and you receive the error:
...Cannot allocate space for size block volume ...
The volume is not created because all disks are set as spares, and vxassist
does not use spare disks unless they are explicitly specified.
End of Solution
Solution
vxassist -g appdg make sparevol 100m layout=mirror \
appdg03 appdg04
Notice that VxVM overrides its default and applies the two spare disks to the
volume because the two disks were specified by the administrator.
End of Solution
Solution
vxassist -g appdg remove volume sparevol
End of Solution
5 Verify that the relocation daemon (vxrelocd) is running. If not, start it.
Solution
ps -ef |grep vxrelocd
vxrelocd root & (if necessary)
ps -ef |grep vxrelocd
End of Solution
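A common refinement of the ps -ef | grep check above is to bracket the first character of the pattern so that the grep process does not match its own command line in the ps output. A small sketch; the function name is ours, not a VxVM command:

```shell
#!/bin/sh
# is_running NAME: succeed if a process whose ps listing matches NAME
# exists. Wrapping the first character in [] stops grep from matching
# its own command line in the ps -ef output.
is_running() {
    name=$1
    first=$(printf '%s' "$name" | cut -c1)
    rest=$(printf '%s' "$name" | cut -c2-)
    ps -ef | grep "[$first]$rest" > /dev/null
}
```

In the lab you would use `is_running vxrelocd` instead of the plain grep, avoiding a false positive from the grep process itself.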
Solution
vxassist -g appdg make sparevol 100m layout=mirror
End of Solution
8 Save the vxprint output to a file so that you can compare the configuration
before and after the disk failure.
Solution
vxprint -g appdg -htr > /tmp/savedvxprint
End of Solution
9 Display the properties of the sparevol volume. In the table, record the device
and disk media name of the disks used in this volume. You are going to
simulate disk failure on one of the disks. Decide which disk you are going to
fail.
10 Next, simulate disk failure by writing over the private region using the
overwritepr.pl script. In the standard virtual lab environment, this script
is located in the /student/labs/sf/sf61 directory.
While using the script, substitute the appropriate disk device name for one of
the disks in use by sparevol, for example, enter emc0_dd7.
Solution
cd /student/labs/sf/sf61
./overwritepr.pl
Enter a device used in sparevol when prompted.
End of Solution
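What the overwritepr.pl script does amounts to zeroing the start of the device, where the VxVM private region lives. The sketch below illustrates the idea safely on a throwaway scratch file rather than a real disk; the file name and sizes are illustrative only and are not taken from the lab script.

```shell
#!/bin/sh
# Create a 1-MB scratch file standing in for a disk, then zero its first
# 512 KB, the way overwriting a private region destroys VxVM metadata.
# /tmp/fakedisk is an illustrative name; never run dd like this against
# a real device outside a disposable lab.
disk=/tmp/fakedisk
dd if=/dev/urandom of="$disk" bs=1024 count=1024 2>/dev/null
dd if=/dev/zero of="$disk" bs=1024 count=512 conv=notrunc 2>/dev/null
# Count non-zero bytes left in the clobbered region (should be none)
head -c 524288 "$disk" | tr -d '\0' | wc -c
```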
Note: You may need to wait a minute or two for the hot relocation to
complete.
Solution
Hot relocation has taken place. The failed disk has a status of NODEVICE.
VxVM has relocated the mirror of the failed disk onto the designated spare
disk.
End of Solution
Solution
This disk is displayed as a failed disk.
End of Solution
13 In the VOM console, view the status of the disks and the volume.
Solution
https://mgt.example.com:14161/
Navigate to the Server perspective. On the navigation tree, expand Data
Center > Uncategorized Hosts. Click the sym1.example.com host link.
End of Solution
14 Recover the disk by replacing the private and public regions on the disk.
Solution
vxdisksetup -i accessname
End of Solution
15 Bring the disk back under VxVM control and into the disk group to replace the
failed disk media name.
Solution
vxdg -g appdg -k adddisk dm_name=accessname
End of Solution
16 Use the vxunreloc command to return the relocated plex back to the
original disk.
Solution
vxunreloc -g appdg dm_name
where dm_name is the disk media name of the failed and replaced disk.
End of Solution
17 Wait until the volume is fully recovered before continuing. Check to ensure
that the disk and the volume are fully recovered.
Solution
vxdisk -o alldgs list
vxprint -g appdg -htr
Note: The vxprint command shows the subdisk with the UR tag.
End of Solution
18 Rename the unrelocated subdisk to remove the UR tag from its name.
Solution
vxedit -g appdg rename appdg01-UR-001 appdg01-01
End of Solution
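If more than one subdisk was relocated, the -UR- names can be mapped back mechanically. The sed rule below encodes the naming convention shown in this step (appdg01-UR-001 back to appdg01-01); it is an assumption generalized from this single example, so verify the real subdisk names with vxprint before renaming anything.

```shell
#!/bin/sh
# Map an unrelocated subdisk name back to its original form, e.g.
# appdg01-UR-001 -> appdg01-01. The "-UR-0NN" -> "-NN" rule is inferred
# from the one example in this lab, not from VxVM documentation.
orig_name() {
    printf '%s\n' "$1" | sed 's/-UR-0/-/'
}
orig_name appdg01-UR-001
```

The result could then feed a vxedit rename loop on a live system.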
19 Remove the sparevol volume.
Solution
vxassist -g appdg remove volume sparevol
End of Solution
20 Turn the spare flag off on the appdg04 disk.
Solution
vxedit -g appdg set spare=off appdg04
End of Solution
Note: Check with your instructor to see if you have more time to complete the
optional lab exercises. You do not need to perform the optional lab
exercises unless you have extra time. The optional exercises do not have
any impact on further labs.
Note: If you do not have access to the Internet from the classroom systems, skip
this optional lab section.
Solution
Select the Find a product manual link. This redirects to the SORT website.
In the Linux column, find the 6.1 row, and then select the Product guides
link.
Look for the Release Notes documents. Download and open the PDF file
for the Symantec Storage Foundation Release Notes.
End of Solution
3 Where would you locate the latest patch for Veritas Storage Foundation and
High Availability?
Solution
Go to the Symantec Operations Readiness Tools (SORT) Web site at
http://sort.symantec.com/.
Select the Downloads tab.
Click the Patches link.
Select the Product, Product version, and Platform.
Available patches are displayed for download.
End of Solution
End of lab
E
ENABLED state 8-17
enabling I/O to a controller 7-20
encapsulation 3-5
enclosure-based naming
  benefits 3-4
error disk status 8-6
error status 3-16
evacuating a disk 3-25
exclusive OR 4-6
EXT2 6-8
EXT3 6-8
G
group name 3-18
H
HFS 6-8
Hierarchical File System 6-8
high availability 5-17
host locks
  clearing 5-19
hostid 3-17, 7-7
hot relocation
  definition 8-20
I
ioscan 8-12
J
JFS 6-8
JFS2 6-8
Journaled File System 6-8
journaling 6-12
M
mirrored volume 1-14, 4-5
  creating 4-11
mirroring 1-14
  advantages 4-8
  disadvantages 4-8
mirrors
  adding 4-10
mkdir 3-12
mkfs 3-12
mkfs options 6-10
V
volumes
  allocating storage for 4-12
VOM
  architecture 2-25
  support for virtual environments 2-26
vrtsadm 2-32
VRTSvxfs 2-16
VxFS
  file system switchout mechanisms 6-9
  file system type 6-11
  fragmentation types 6-14
  identifying free space 6-11
  intent log 6-12
  maintaining consistency 6-13
  resizing 5-15
  using by default 6-9
vxinfo 3-23
vxinstall 2-12
vxiod 7-3
X
XOR 1-14, 4-6
xprtlwid 2-18
• You invest a considerable amount of time, expense, and expertise to prepare for
and complete a Symantec technical exam, which is undermined by those who
engage in exam misconduct.
• Exam misconduct enables less qualified individuals to compete for the jobs and
benefits YOU deserve.
• Exam misconduct erodes confidence in both Symantec programs and your skills
as a certified IT professional, and can lead to security and liability risks for your
customers and/or employer.
• To confidentially report suspected cases of misconduct, please contact
global_exams@symantec.com.
Symantec is committed to maintaining the security and integrity of its brand and
certification and accreditation exams. This ensures that our products are installed and
maintained by qualified IT Professionals and provides end users with the confidence
that their system software is operating at maximum efficiency. Symantec actively
investigates and takes corrective action against individuals and organizations who
attempt to compromise the security of our exams or engage in any form of exam
misconduct. To learn more about Symantec Testing Policies and Exam Security, visit
http://www.symantec.com/business/training/certification/path.jsp?pathID=policies
To learn more about the Symantec Certification Program and exams, visit
http://go.symantec.com/certification