
CONFIDENTIAL - NOT FOR DISTRIBUTION

Symantec Storage
Foundation 6.x for
UNIX: Administration
Fundamentals

100-002840
COURSE DEVELOPERS
Pranab Koch
Raj Kiran Prasad Thota

LEAD SUBJECT MATTER EXPERTS
Brad Willer
Gaurav Dong

TECHNICAL CONTRIBUTORS AND REVIEWERS
Margy Cassidy
Steve Evans
Joe Gallagher
Freddie Gilyard
Graeme Gofton
Tony Griffiths
Gene Henriksen
Kleber Saldanha
Kalyan Subramaniyam
Anand Raj Vengadassalam
Stephen Williams
Randal Williams

Copyright 2014 Symantec Corporation. All rights reserved. Symantec, the Symantec Logo, and VERITAS are trademarks or registered trademarks of Symantec Corporation or its affiliates in the U.S. and other countries. Other names may be trademarks of their respective owners.

THIS PUBLICATION IS PROVIDED AS IS AND ALL EXPRESS OR IMPLIED CONDITIONS, REPRESENTATIONS AND WARRANTIES, INCLUDING ANY IMPLIED WARRANTY OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE OR NON-INFRINGEMENT, ARE DISCLAIMED, EXCEPT TO THE EXTENT THAT SUCH DISCLAIMERS ARE HELD TO BE LEGALLY INVALID. SYMANTEC CORPORATION SHALL NOT BE LIABLE FOR INCIDENTAL OR CONSEQUENTIAL DAMAGES IN CONNECTION WITH THE FURNISHING, PERFORMANCE, OR USE OF THIS PUBLICATION. THE INFORMATION CONTAINED HEREIN IS SUBJECT TO CHANGE WITHOUT NOTICE.

No part of the contents of this book may be reproduced or transmitted in any form or by any means without the written permission of the publisher.

Symantec Storage Foundation 6.x for UNIX: Administration Fundamentals

Symantec Corporation
World Headquarters
350 Ellis Street
Mountain View, CA 94043
United States
http://www.symantec.com

Table of Contents
Course Introduction
What is storage virtualization? ................................................................ Intro-2
Introducing Symantec Storage Foundation ............................................. Intro-5
Symantec Storage Foundation curriculum .............................................. Intro-9

Lesson 1: Virtual Objects


Operating system storage devices and virtual data storage ........................ 1-3
Volume Manager storage objects............................................................... 1-11
VxVM volume layouts and RAID levels ...................................................... 1-13

Lesson 2: Installing Storage Foundation and Accessing SF Interfaces


Preparing to install Storage Foundation ....................................................... 2-3
Installing Storage Foundation..................................................................... 2-11
Storage Foundation resources ................................................................... 2-20
Storage Foundation user interfaces ........................................................... 2-24

Lesson 3: Creating a Volume and File System


Preparing disks and disk groups for volume creation................................... 3-3
Creating a volume and adding a file system .............................................. 3-11
Displaying disk and disk group information ................................................ 3-15
Displaying volume configuration information.............................................. 3-21
Removing volumes, disks, and disk groups ............................................... 3-24

Lesson 4: Working with Volumes with Different Layouts


Volume layouts............................................................................................. 4-3
Creating volumes with various layouts ......................................................... 4-9
Allocating storage for volumes ................................................................... 4-12

Lesson 5: Making Configuration Changes


Administering mirrored volumes................................................................... 5-3
Resizing a volume and a file system .......................................................... 5-12
Moving data between systems ................................................................... 5-17

Renaming VxVM objects ............................................................................ 5-21

Lesson 6: Administering File Systems


Benefits of using Veritas File System........................................................... 6-3
Using Veritas File System commands.......................................................... 6-8
Logging in VxFS ......................................................................................... 6-12
Controlling file system fragmentation ......................................................... 6-14
Using thin provisioning disk arrays............................................................. 6-19

Lesson 7: Managing Devices Within the VxVM Architecture


Managing components in the VxVM architecture......................................... 7-3
Discovering disk devices ............................................................................ 7-11
Managing multiple paths to disk devices .................................................... 7-17

Lesson 8: Resolving Hardware Problems


How does VxVM interpret failures in hardware?........................................... 8-3
Recovering disabled disk groups.................................................................. 8-8
Resolving disk failures ................................................................................ 8-10
Managing hot relocation at the host level ................................................... 8-20

Appendix A: Lab Solutions


Lab 1: VMware Workstation Introduction..................................................... A-8
Exercise 1: Starting virtual machines (VMware Workstation)............... A-11
Exercise 2: Logging on to virtual machines (VMware Workstation) ..... A-15
Exercise 3: Running basic commands (VMware Workstation)............. A-17
Lab 1: Hatsize Introduction ........................................................................ A-21
Exercise 1: Connecting to the lab environment (Hatsize) .................... A-24
Exercise 2: Connecting to virtual machines (Hatsize) .......................... A-26
Exercise 3: Running basic commands (Hatsize).................................. A-29
Exercise 4: Restarting virtual machines (Hatsize)................................ A-33
Lab 2: Installing SF and Accessing SF Interfaces ..................................... A-37
Exercise 1: Verifying that the system meets installation requirements A-39
Exercise 2: Installing Symantec Storage Foundation........................... A-48
Exercise 3: Performing post-installation and version checks ............... A-52
Exercise 4: Optional lab: Setting up Veritas Enterprise Administrator . A-57
Exercise 5: Optional lab: Text-based VxVM menu interface ................ A-62
Exercise 6: Optional lab: Accessing CLI commands............................ A-64
Exercise 7: Optional lab: Adding managed hosts to the
VOM Management Server.................................................................... A-67
Lab 3: Creating a Volume and File System ............................................... A-71
Exercise 1: Creating a volume and file system: VOM .......................... A-72
Exercise 1: Creating disk groups, volumes and file systems: CLI........ A-73
Exercise 2: Removing volumes and disks: CLI .................................... A-78
Exercise 3: Destroying disk data using disk shredding: CLI................. A-79
Exercise 4: Optional lab: Creating disk groups, volumes,
and file systems: VOM ......................................................................... A-82
Exercise 5: Optional lab: Removing volumes, disks,
and disk groups: VOM.......................................................................... A-89

Lab 4: Working with Volumes with Different Layouts................................. A-93


Exercise 1: Creating volumes with different layouts: CLI ..................... A-94
Exercise 2: Optional lab: Creating volumes with user defaults: CLI..... A-99
Lab 5: Making Configuration Changes .................................................... A-103
Exercise 1: Administering mirrored volumes ...................................... A-105
Exercise 2: Resizing a volume and file system .................................. A-109
Exercise 3: Renaming a disk group..................................................... A-111
Exercise 4: Moving data between systems ......................................... A-114
Exercise 5: Optional lab: Resizing a file system only .......................... A-119
Lab 6: Administering File Systems .......................................................... A-123
Exercise 1: Preparation for defragmenting a Veritas File System lab A-125

Exercise 2: Defragmenting a Veritas File System............................... A-127
Exercise 3: SmartMove....................................................................... A-133
Exercise 4: Thin reclamation............................................................... A-136
Lab 7: Managing Devices Within the VxVM Architecture ......................... A-141
Exercise 1: Administering the Device Discovery Layer....................... A-143
Exercise 2: Displaying DMP information............................................. A-145
Exercise 3: Displaying DMP statistics................................................. A-148
Exercise 4: Enabling and disabling DMP paths .................................. A-151
Exercise 5: Managing array policies ................................................... A-154
Lab 8: Resolving Hardware Problems...................................................... A-159
Exercise 1: Recovering a temporarily disabled disk group ................. A-161
Exercise 2: Preparing for disk failure labs........................................... A-165
Exercise 3: Recovering from temporary disk failure ........................... A-166
Exercise 4: Recovering from permanent disk failure .......................... A-171
Exercise 5: Optional lab: Recovering from temporary
disk failure - Layered volume .............................................................. A-175
Exercise 6: Optional lab: Recovering from permanent
disk failure - Layered volume .............................................................. A-180
Exercise 7: Optional lab: Replacing physical drives
(without hot relocation)........................................................................ A-184
Exercise 8: Optional lab: Replacing physical drives
(with hot relocation)............................................................................. A-188
Exercise 9: Optional lab: Recovering from temporary
disk failure with vxattachd daemon.................................................. A-191
Exercise 10: Optional lab: Exploring spare disk behavior................... A-193
Exercise 11: Optional lab: Using the Support Web Site...................... A-199

Appendix B: Using the VEA


Creating a disk group and a volume and adding a file system.................... B-3
Displaying disk, disk group and volume information ................................... B-6
Removing volumes, disks, and disk groups ............................................... B-10
Performing basic administration tasks on volumes and file systems ......... B-11

Index

Course Introduction



What is storage virtualization?
Storage virtualization is the process of taking multiple physical storage devices
and combining them into logical (virtual) storage devices that are presented to the
operating system, applications, and users. Storage virtualization builds a layer of
abstraction above the physical storage so that data is not restricted to specific
hardware devices, creating a flexible storage environment. Storage virtualization
simplifies management of storage and potentially reduces cost through improved
hardware utilization and consolidation.
With storage virtualization, the physical aspects of storage are masked to users.
Administrators can concentrate less on physical aspects of storage and more on
delivering access to necessary data.
Benefits of storage virtualization include:
- Greater IT productivity through the automation of manual tasks and simplified administration of heterogeneous environments
- Increased application return on investment through improved throughput and increased uptime
- Lower hardware costs through the optimized use of hardware resources
How is storage virtualization used in your environment?
The way in which you use storage virtualization, and the benefits derived from
storage virtualization, depend on the nature of your IT infrastructure and your
specific application requirements. Three main types of storage virtualization used
today are:
- Storage-based
- Host-based
- Network-based
Most companies use a combination of these three types of storage virtualization
solutions to support their chosen architecture and application needs.
The type of storage virtualization that you use depends on factors such as the:
- Heterogeneity of deployed enterprise storage arrays
- Need for applications to access data contained in multiple storage devices
- Importance of uptime when replacing or upgrading storage
- Need for multiple hosts to access data within a single storage device
- Value of the maturity of technology
- Investments in a SAN architecture
- Level of security required
- Level of scalability needed
Storage-based storage virtualization
Storage-based storage virtualization refers to disks within an individual array that
are presented virtually to multiple servers. Storage is virtualized by the array itself.
For example, RAID arrays virtualize the individual disks (that are contained within
the array) into logical LUNs, which are accessed by host operating systems using
the same method of addressing as a directly-attached physical disk.
This type of storage virtualization is useful under these conditions:
- You need to have data in an array accessible to servers of different operating systems.
- All of a server's data needs are met by storage contained in the physical box.
- You are not concerned about disruption to data access when replacing or upgrading the storage.
The main limitation to this type of storage virtualization is that data cannot be
shared between arrays, creating islands of storage that must be managed.

Host-based storage virtualization


Host-based storage virtualization refers to disks within multiple arrays and from
multiple vendors that are presented virtually to a single host server. For example,
software-based solutions, such as Veritas Storage Foundation, provide host-based
storage virtualization. Using Veritas Storage Foundation to administer host-based
storage virtualization is the focus of this training.
Host-based storage virtualization is useful under these conditions:
- A server needs to access data stored in multiple storage devices.
- You need the flexibility to access data stored in arrays from different vendors.
- Additional servers do not need to access the data assigned to a particular host.
- Maturity of technology is a highly important factor to you in making IT decisions.
Note: By combining Veritas Storage Foundation with clustering technologies,
such as Veritas Cluster Volume Manager, storage can be virtualized to multiple
hosts of the same operating system.

Network-based storage virtualization

Network-based storage virtualization refers to disks from multiple arrays and multiple vendors that are presented virtually to multiple servers. Network-based storage virtualization is useful under these conditions:
- You need to have data accessible across heterogeneous servers and storage devices.
- You require central administration of storage across all Network Attached Storage (NAS) systems or Storage Area Network (SAN) devices.
- You want to ensure that replacing or upgrading storage does not disrupt data access.
- You want to virtualize storage to provide block services to applications.
Introducing Symantec Storage Foundation
Symantec storage management solutions address the increasing costs of managing
mission-critical data and disk resources in Direct Attached Storage (DAS) and
Storage Area Network (SAN) environments.
At the heart of these solutions is Symantec Storage Foundation, which includes
Veritas Volume Manager (VxVM), Veritas File System (VxFS), and Veritas
Dynamic Multi-Pathing (DMP) products. Independently, these components
provide key benefits. When used together as an integrated solution, they deliver
the highest possible levels of performance, availability, and manageability for
heterogeneous storage environments.

What is Veritas Volume Manager?

Veritas Volume Manager, the industry leader in storage virtualization, is an easy-to-use, online storage management solution for organizations that require uninterrupted, consistent access to mission-critical data. VxVM enables you to apply business policies to configure, share, and manage storage without worrying about the physical limitations of disk storage. VxVM reduces the total cost of ownership by enabling administrators to easily build storage configurations that improve performance and increase data availability.
Working in conjunction with Veritas File System, Veritas Volume Manager creates a foundation for other value-added technologies, such as SAN environments, clustering and failover, automated management, backup and HSM, and remote browser-based management.
What is Veritas File System?
A file system is a collection of directories organized into a structure that enables
you to locate and store files. The main purposes of a file system are to:
- Provide shared access to data storage.
- Provide structured access to data.
- Control access to data.
- Provide a common, portable application interface.
- Enable the manageability of data storage.
The value of a file system depends on its integrity and performance. Veritas File
System is an extent-based, intent logging file system. It is designed for use in
operating environments that require high performance and availability and deal
with large amounts of data.
What is Veritas Dynamic Multi-Pathing?

Veritas Dynamic Multi-Pathing is designed to seamlessly manage multiple access paths to a single storage device. It provides improved storage I/O performance and availability across heterogeneous server and storage platforms using intelligent algorithms and load balancing for faster throughput and path failover.

Benefits of Symantec Storage Foundation
Commercial system availability now requires continuous uptime in many
implementations. Systems must be available 24 hours a day, 7 days a week, and
365 days a year. Symantec Storage Foundation reduces the cost of ownership by
providing capacity, availability, and performance enhancements for these
enterprise computing environments.

Capacity
- VxVM, VxFS, and DMP provide consistent management across Solaris, HP-UX, AIX, and Linux platforms.
- Storage Foundation provides additional benefits for array environments, such as inter-array mirroring and hardware-independent dynamic multipathing.
- Hosts can be replaced without modifying storage.
- Hosts with different operating systems can access the same storage.
- Storage devices can be spanned.

Performance
- I/O throughput can be maximized by measuring and modifying volume layouts while storage remains online.
- Extent-based allocation of space for files minimizes file-level access time.
- Read-ahead buffering dynamically tunes itself to the volume layout.
- Aggressive caching of writes greatly reduces the number of disk accesses.
- Direct I/O performs file I/O directly into and out of user buffers.
- With VxFS, certain features are available for maximizing performance in a database environment.
- With VxFS, you can create a multi-tier storage environment where you benefit from using a mixture of high-end disk arrays, solid-state disks, low-end disk arrays, and JBODs.

Availability
- Management of storage and the file system is performed online in real time, eliminating the need for planned downtime.
- Online volume and file system management can be centralized through an intuitive, easy-to-use Web console that is implemented using Veritas Operations Manager.
- Through software RAID techniques, storage remains available in the event of hardware failure.
- Recovery time is minimized with logging and background mirror resynchronization.
- Logging of file system changes enables fast file system recovery.
- A snapshot of a file system provides an internally consistent, read-only image for backup, and file system checkpoints provide read-writable snapshots.
Benefits of VxVM and RAID arrays
RAID arrays virtualize individual disks into logical LUNs, which are accessed by host operating systems as physical devices, that is, using the same method of addressing as a directly-attached physical disk.
VxVM virtualizes both the physical disks and the logical LUNs presented by a RAID array. Modifying the configuration of a RAID array may result in changes in SCSI addresses of LUNs, requiring modification of application configurations. VxVM provides an effective method of reconfiguring and resizing storage across the logical devices presented by a RAID array.
When using VxVM with RAID arrays, you can leverage the strengths of both technologies:
- You can use VxVM to mirror between arrays to improve disaster recovery protection against the failure of an array, particularly if one array is remote.
- Arrays can be of different manufacture or type; that is, one array can be a RAID array and the other a JBOD.
- VxVM facilitates data reorganization and maximizes available resources.
- VxVM improves overall performance by making I/O activity parallel for a volume through more than one I/O path to and within the array.
- You can use snapshots with mirrors in different locations, which is beneficial for disaster recovery and off-host processing.
- If you include Veritas Volume Replicator (VVR) or Veritas File Replicator (VFR) in your environment, VVR and VFR can be used to provide hardware-independent replication services.
Symantec Storage Foundation curriculum
Symantec Storage Foundation 6.x for UNIX: Administration Fundamentals training is designed to provide you with basic instruction on making the most of Symantec Storage Foundation. This is a base course. Difference courses for newer releases of Storage Foundation 6.x are built on top of this course.
Symantec Storage Foundation for UNIX: Administration
Fundamentals overview
The Administration training provides comprehensive instruction on operating the
file and disk management foundation products: Veritas Volume Manager (VxVM)
and Veritas File System (VxFS). In this training, you learn how to combine file
system and disk management technology to ensure easy management of all storage
and maximum availability of essential data.

Objectives
After completing the Administration Fundamentals training, you will be able to:
- Identify VxVM virtual storage objects and volume layouts.
- Install and configure Storage Foundation.
- Administer the SF environment from a centralized Web console using Veritas Operations Manager (VOM).
- Configure and manage disks and disk groups.
- Create concatenated, striped, mirrored, and layered volumes.
- Configure volumes by adding mirrors and logs and resizing volumes and file systems.
- Perform file system administration.
- Manage the dynamic multipathing feature.
- Resolve hardware problems that result in disk and disk group failures.
Additional course resources

Appendix A: Lab Solutions
This section contains detailed solutions to the lab exercises for each lesson.

Appendix B: Using the VEA
This section contains instructions on how to perform administrative tasks from the Veritas Enterprise Administrator Graphical User Interface.

Typographic conventions used in this course
The following tables describe the typographic conventions used in this course.
Typographic conventions in text and commands

Convention: Courier New, bold
Element: Command input, both syntax and examples
Examples:
  To display the robot and drive configuration:
  tpconfig -d
  To display disk information:
  vxdisk -o alldgs list

Convention: Courier New, plain
Element: Command output; also command names, directory names, file names, path names, and URLs when used within regular text paragraphs
Examples:
  In the output:
  protocol_minimum: 40
  protocol_maximum: 60
  protocol_current: 0
  Locate the altnames directory.
  Go to http://www.symantec.com.
  Enter the value 300.

Convention: Courier New, Italic, bold or plain
Element: Variables in command syntax and examples. Variables in command input are Italic, plain. Variables in command output are Italic, bold.
Examples:
  To install the media server:
  /cdrom_directory/install
  To access a manual page:
  man command_name
  To display detailed information for a disk:
  vxdisk -g disk_group list dm_name

Typographic conventions in graphical user interface descriptions

Convention: Greater than (>) sign and bold font
Element: Menu navigation paths
Examples: Select File > Save.

Convention: Initial capitalization and bold font
Element: Buttons, menus, windows, options, and other interface elements
Examples: Select the Next button. Open the Task Status window. Remove the checkmark from the Print File check box.
Lesson 1
Virtual Objects

Operating system storage devices and virtual data storage


The different UNIX flavors supported by Storage Foundation each have their own
unique way of detecting and using storage devices. Some platforms, such as
Solaris and Linux, use a partition table and disk partitions to organize data on the
physical disks and others, such as AIX and HP-UX, use OS-native logical volume
management software to detect disks as physical volumes.
Storage Foundation hides the complexity of the device management layer by
introducing a virtual data layer that works the same on all of these UNIX
platforms. The way Volume Manager uses disks to organize data is explained in
detail later in this lesson.
However, the key point to note is that Volume Manager can only use a device if it is recognized by the operating system on the Storage Foundation host. Therefore, if a disk device is not visible in Volume Manager, you first have to ensure that the operating system detects it correctly.
Use the following OS-specific commands to list storage devices on individual platforms. Refer to manual pages for specific command syntax.

Operating system    Command to use

Solaris             format
Linux               fdisk
HP-UX               ioscan
AIX                 lsdev
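The table above can be folded into a small dispatcher that picks the right tool for whatever platform a script is running on. The sketch below is purely illustrative (it is not part of Storage Foundation): it selects a commonly used invocation of each command based on uname, and only echoes the command rather than executing it, because most of these tools require root privileges. Verify the exact flags against your platform's manual pages.

```shell
#!/bin/sh
# Select the native device-listing command for the running platform.
# Illustrative sketch: echoes the command instead of executing it.
case "$(uname -s)" in
    SunOS)  cmd="format" ;;            # Solaris
    Linux)  cmd="fdisk -l" ;;          # Linux
    HP-UX)  cmd="ioscan -fnC disk" ;;  # HP-UX
    AIX)    cmd="lsdev -Cc disk" ;;    # AIX
    *)      cmd="" ;;                  # platform not covered above
esac
echo "Device listing command: ${cmd:-none known}"
```

Running the command it prints (as root) is the first step whenever a disk that should be visible does not appear in Volume Manager.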

Operating system disk naming
Solaris
You locate and access the data on a physical disk by using a device name that
specifies the controller, target ID, and disk number. A typical device name uses the
format: c#t#d#.
c# is the controller number.
t# is the target ID.
d# is the logical unit number (LUN) of the drive attached to the target.
If a disk is divided into partitions, you also specify the partition number in the
device name:
s# is the partition (slice) number.
For example, the device name c0t0d0s1 is connected to controller number 0 in the system, with a target ID of 0, physical disk number 0, and partition number 1 on the disk.
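The decomposition just described is mechanical, which the following POSIX shell sketch illustrates (it is not a Storage Foundation tool; the device name is simply the example from the text):

```shell
# Decompose the Solaris device name c0t0d0s1 (the example above)
# into its fields using POSIX parameter expansion.
dev="c0t0d0s1"
rest=${dev#c};  ctrl=${rest%%[!0-9]*}   # digits after 'c': controller number
rest=${dev#*t}; tgt=${rest%%[!0-9]*}    # digits after 't': target ID
rest=${dev#*d}; disk=${rest%%[!0-9]*}   # digits after 'd': LUN/disk number
slice=${dev##*s}                        # digits after final 's': slice number
echo "controller=$ctrl target=$tgt disk=$disk slice=$slice"
```

For c0t0d0s1 this reports controller 0, target 0, disk 0, and slice 1, matching the reading given above.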
HP-UX
Traditionally, you locate and access the data on a physical disk by using a device
name that specifies the controller, target ID, and disk number. A typical traditional
device name uses the format: c#t#d#.
c# is the controller number.
t# is the target ID.
d# is the logical unit number (LUN) of the drive attached to the target.

For example, the c0t0d0 device name is connected to the controller number 0 in
the system, with a target ID of 0, and the physical disk number 0.
With HP-UX 11iv3, a new method called agile view has been introduced. The new
convention uses the /dev/[r]disk/diskN name where N is the decimal
instance number for the disk. This is called a persistent device special file name.
The persistent device special file names are not available before HP-UX 11iv3.
AIX
Every device in AIX is assigned a location code that describes its connection to the
system. The general format of this identifier is AB-CD-EF-GH, where the letters
represent decimal digits or uppercase letters. The first two characters represent the
bus, the second pair identify the adapter, the third pair represent the connector, and
the final pair uniquely represent the device. For example, a SCSI disk drive might
have a location identifier of 04-01-00-6,0. In this example, 04 means the PCI bus,
01 is the slot number on the PCI bus occupied by the SCSI adapter, 00 means the
only or internal connector, and the 6,0 means SCSI ID 6, LUN 0.
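The AB-CD-EF-GH location code format splits cleanly on its dashes. The following shell sketch (illustrative only, not an AIX command) breaks up the example code from the text:

```shell
# Split the example AIX location code 04-01-00-6,0 into its four fields.
loc="04-01-00-6,0"
IFS=- read -r bus adapter connector device <<< "$loc"
echo "bus=$bus adapter=$adapter connector=$connector device=$device"
# -> bus=04 adapter=01 connector=00 device=6,0
```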
However, this data is used internally by AIX to locate a device. The device name
that a system administrator or software uses to identify a device is less hardware
dependent. The system maintains a special database called the Object Data
Manager (ODM) that contains essential definitions for most objects in the system,
including devices. Through the ODM, a device name is mapped to the location
identifier. The device names are referred to by special files found in the /dev
directory. For example, the SCSI disk identified previously might have the device
name hdisk3 (the fourth hard disk identified by the system). The device named
hdisk3 is accessed by the file name /dev/hdisk3.
If a device is moved so that it has a different location identifier, the ODM is
updated so that it retains the same device name, and the move is transparent to
users. This is facilitated by the physical volume identifier stored in the first sector
of a physical volume. This unique 128-bit number is used by the system to
recognize the physical volume wherever it may be attached because it is also
associated with the device name in the ODM.
Linux
On Linux, device names are displayed in the format:
sdx[N]
hdx[N]
In the syntax:
sd refers to a SCSI disk, and hd refers to an EIDE disk.
x is a letter that indicates the order of disks detected by the operating system.
For example, sda refers to the first SCSI disk, sdb refers to the second SCSI
disk, and so on.
N is an optional parameter that represents a partition number in the range 1
through 16. For example, sda7 references partition 7 on the first SCSI disk.
Primary partitions on a disk are 1, 2, 3, 4; logical partitions have numbers 5 and up.
If the partition number is omitted, the device name indicates the entire disk.
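A short shell sketch of the naming rule just described, separating the base disk name from the optional partition number. The name sda7 is the example from the text; this is illustration only, not how the kernel itself parses names:

```shell
# Split a Linux device name into disk name and optional partition number.
name=sda7
base=${name%%[0-9]*}   # strip trailing digits -> "sda" (the whole disk)
part=${name#"$base"}   # what remains          -> "7" (empty if no partition)
echo "disk=$base partition=${part:-<entire disk>}"
```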
Disk arrays
Reads and writes on unmanaged physical disks can be a relatively slow process,
because disks are physical devices that require time to move the heads to the
correct position on the disk before reading or writing. If all of the read and write
operations are performed to individual disks, one at a time, the read-write time can
become unmanageable.
A disk array is a collection of physical disks. Performing I/O operations on
multiple disks in a disk array can improve I/O speed and throughput.
Hardware arrays present disk storage to the host operating system as LUNs. A
LUN can be made up of a single physical disk, a collection of physical disks, or
even a portion of a physical disk. From the operating system point of view, a LUN
corresponds to a single storage device.
Multipathing
Some disk arrays provide multiple ports to access disk devices. These ports,
coupled with the host bus adaptor (HBA) controller and any data bus or I/O
processor local to the array, compose multiple hardware paths to access the disk
devices. This is called multipathing.
In a multipathing environment, a single storage device may appear to the operating
system as multiple storage devices. Special multipathing software is usually
required to administer multipathed storage devices. The Veritas Dynamic
Multi-Pathing (DMP) product, which is part of the Storage Foundation software,
provides seamless management of multiple access paths to storage devices in
heterogeneous operating system and storage environments.
Example array structure


In an array, the LUNs are a virtual presentation. Therefore, you cannot know
where in the array the actual data is placed, which means you have no control
over its physical location.
The array in the slide contains slots for 14 physical disks, and the configuration
places 12 physical disks in the array. These physical disks are paired together into
6 mirrored RAID groups. In each RAID group, 12 logical units, or LUNs, are
created. These LUNs appear to hosts as SAN-based SCSI disks. The remaining
two disks are used as spares in case one of the active disks fails.
Virtual storage management
Veritas Volume Manager creates a virtual level of storage management above the
physical device level by creating virtual storage objects. The virtual storage object
that is visible to users and applications is called a volume.

What is a volume?
A volume is a virtual object, created by Volume Manager, that stores data. A
volume consists of space from one or more physical disks on which the data is
physically stored.

How do you access a volume?


Volumes created by VxVM appear to the operating system as physical disks, and
applications that interact with volumes work in the same way as with physical
disks. All users and applications access volumes as contiguous address space using
special device files in a manner similar to accessing a disk partition.
Volumes have block and character device nodes in the /dev tree. You can supply
the name of the path to a volume in your commands and programs, in your file
system and database configuration files, and in any other context where you would
otherwise use the path to a physical disk partition.


Volume Manager-controlled disks


With Volume Manager, you enable virtual data storage by bringing a disk under
Volume Manager control. By default, Volume Manager uses a cross-platform data
sharing (CDS) disk layout. A CDS disk is consistently recognized by all VxVM-
supported UNIX platforms and consists of:
OS-reserved area: To accommodate platform-specific disk usage, 128K is
reserved for disk labels, platform blocks, and platform-coexistence labels.
Private region: The private region stores information, such as disk headers,
configuration copies, and kernel logs, in addition to other platform-specific
management areas that VxVM uses to manage virtual objects. The private
region represents a small management overhead:

Operating System    Default Block/Sector Size    Default Private Region Size
Solaris             512 bytes                    65536 sectors (32M)
HP-UX               1024 bytes                   32768 sectors (32M)
AIX                 512 bytes                    65536 sectors (32M)
Linux               512 bytes                    65536 sectors (32M)

Public region: The public region consists of the remainder of the space on the
disk. The public region represents the available space that Volume Manager
can use to assign to volumes and is where an application stores data. Volume
Manager never overwrites this area unless specifically instructed to do so.
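As a quick arithmetic check on the table above, all four defaults work out to the same 32M private region, because sector count times bytes per sector gives the size in bytes:

```shell
# Private region size in MB = sectors * bytes-per-sector / 1024 / 1024.
echo "Solaris/AIX/Linux: $(( 65536 * 512 / 1024 / 1024 ))M"   # -> 32M
echo "HP-UX:             $(( 32768 * 1024 / 1024 / 1024 ))M"  # -> 32M
```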

Comparing CDS and other VxVM disk formats
In addition to the default CDS disk format, Volume Manager supports other
platform-specific disk formats. These disk formats are used for bringing the boot
disk under VxVM control on operating systems that support that capability.
On platforms that support bringing the boot disk under VxVM control, CDS disks
cannot be used for boot disks. CDS disks have specific disk layout requirements
that enable a common disk layout across different platforms, and these
requirements are not compatible with the particular platform-specific requirements
of boot disks. Therefore, when placing a boot disk under VxVM control, you must
use a non-default disk format (sliced on Solaris and Linux, hpdisk on HP-UX).
For nonboot disks, you can convert CDS disks to other disk layout formats and
vice versa by using VxVM utilities.

Volume Manager storage objects


Disk groups
A disk group is a collection of VxVM disks that share a common configuration.
You group disks into disk groups for management purposes, such as to hold the
data for a specific application or set of applications. For example, data for
accounting applications can be organized in a disk group called acctdg. A disk
group configuration is a set of records with detailed information about related
Volume Manager objects in a disk group, their attributes, and their connections.
Volume Manager objects cannot span disk groups. For example, a volume's
subdisks, plexes, and disks must be derived from the same disk group as the
volume. You can create additional disk groups as necessary. Disk groups enable
you to group disks into logical collections. Disk groups and their components can
be moved as a unit from one host machine to another.
Volume Manager disks


A Volume Manager (VxVM) disk represents the public region of a physical disk
that is under Volume Manager control. Each VxVM disk corresponds to one
physical disk. Each VxVM disk has a unique virtual disk name called a disk media
name. The disk media name is a logical name used for Volume Manager
administrative purposes. Volume Manager uses the disk media name when
assigning space to volumes. A VxVM disk is given a disk media name when it is
added to a disk group.
Default disk media name: diskgroup##

You can supply the disk media name or allow Volume Manager to assign a default
name. The disk media name is stored with a unique disk ID to avoid name
collision. After a VxVM disk is assigned a disk media name, the disk is no longer
referred to by its physical address. The physical address (for example, c#t#d# or
hdisk#) becomes known as the disk access record.

Subdisks
A VxVM disk can be divided into one or more subdisks. A subdisk is a set of
contiguous disk blocks that represent a specific portion of a VxVM disk, which is
mapped to a specific region of a physical disk. A subdisk is a subsection of a disk's
public region. A subdisk is the smallest unit of storage in Volume Manager.
Therefore, subdisks are the building blocks for Volume Manager objects.
A subdisk is defined by an offset and a length in sectors on a VxVM disk.
Default subdisk name: DMname-##
A VxVM disk can contain multiple subdisks, but subdisks cannot overlap or share
the same portions of a VxVM disk. Any VxVM disk space that is not reserved or
that is not part of a subdisk is free space. You can use free space to create new
subdisks.
Conceptually, a subdisk is similar to a partition. Both a subdisk and a partition
divide a disk into pieces defined by an offset address and length. Each of those
pieces represents a reservation of contiguous space on the physical disk. However,
while the maximum number of partitions on a disk is limited by some operating
systems, there is no theoretical limit to the number of subdisks that can be attached
to a single plex. This number has been limited by default to a value of 4096. If
required, this default can be changed, using the vol_subdisk_num tunable
parameter. For more information on tunable parameters, see the Veritas Storage
Foundation and High Availability Solutions Tuning Guide.

Plexes
Volume Manager uses subdisks to build virtual objects called plexes. A plex is a
structured or ordered collection of subdisks that represents one copy of the data in
a volume. A plex consists of one or more subdisks located on one or more physical
disks. The length of a plex is determined by the last block that can be read or
written on the last subdisk in the plex.


Default plex name: volume_name-##

Volumes
A volume is a virtual storage device that is used by applications in a manner
similar to a physical disk. Due to its virtual nature, a volume is not restricted by the
physical size constraints that apply to a physical disk. A VxVM volume can be as
large as the total of available, unreserved free physical disk space in the disk
group. A volume consists of one or more plexes.


VxVM volume layouts and RAID levels


RAID
RAID is an acronym for Redundant Array of Independent Disks. RAID is a
storage management approach in which an array of disks is created, and part of the
combined storage capacity of the disks is used to store duplicate information about
the data in the array. By maintaining a redundant array of disks, you can regenerate
data in the case of disk failure.
RAID configuration models are classified in terms of RAID levels, which are
defined by the number of disks in the array, the way data is spanned across the
disks, and the method used for redundancy. Each RAID level has specific features
and performance benefits that involve a trade-off between performance and
reliability.
Volume layouts
RAID levels correspond to volume layouts. A volume's layout refers to the
organization of plexes in a volume. Volume layout is the way plexes are
configured to remap the volume address space through which I/O is redirected at
run-time. Volume layouts are based on the concepts of disk spanning, redundancy,
and resilience.

Disk spanning
Disk spanning is the combining of disk space from multiple physical disks to form
one logical drive. Disk spanning has two forms:

Concatenation: Concatenation is the mapping of data in a linear manner
across two or more disks.
In a concatenated volume, subdisks are arranged both sequentially and
contiguously within a plex. Concatenation allows a volume to be created from
multiple regions of one or more disks if there is not enough space for an entire
volume on a single region of a disk.
Striping: Striping is the mapping of data in equally-sized chunks alternating
across multiple disks. Striping is also called interleaving.
In a striped volume, data is spread evenly across multiple disks. Stripes are
equally-sized fragments that are allocated alternately and evenly to the
subdisks of a single plex. There must be at least two subdisks in a striped plex,
each of which must exist on a different disk. Configured properly, striping not
only helps to balance I/O but also to increase throughput.
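The address mapping behind striping can be sketched with simple arithmetic. The column count and stripe unit size below are illustrative values chosen for the example, not VxVM defaults:

```shell
# Which column does volume offset 200 KB land in, with 3 columns and a
# 64 KB stripe unit?
offset_kb=200; unit_kb=64; columns=3
su=$(( offset_kb / unit_kb ))       # stripe unit index    -> 3
col=$(( su % columns ))             # column (disk) number -> 0
within=$(( offset_kb % unit_kb ))   # KB into that unit    -> 8
echo "column=$col stripe_unit=$su offset_in_unit=${within}KB"
```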

Data redundancy
To protect data against disk failure, the volume layout must provide some form of
data redundancy. Redundancy is achieved in the following ways:
Mirroring: Mirroring is maintaining two or more copies of volume data.
A mirrored volume uses multiple plexes to duplicate the information contained
in a volume. Although a volume can have a single plex, at least two are
required for true mirroring (redundancy of data). Each of these plexes should
contain disk space from different disks for the redundancy to be useful.
Resilience: A resilient volume, also called a layered volume, is a volume that
is built on one or more other volumes. Resilient volumes enable the mirroring
of data at a more granular level. For example, a resilient volume can be
concatenated or striped at the top level and then mirrored at the bottom level.
A layered volume is a virtual Volume Manager object that nests other virtual
objects inside of itself. Layered volumes provide better fault tolerance by
mirroring data at a more granular level.
Parity: Parity is a calculated value used to reconstruct data after a failure by
doing an exclusive OR (XOR) procedure on the data. Parity information can be
stored on a disk. If part of a volume fails, the data on that portion of the failed
volume can be re-created from the remaining data and parity information.
A RAID-5 volume uses striping to spread data and parity evenly across
multiple disks in an array. Each stripe contains a parity stripe unit and data
stripe units. Parity can be used to reconstruct data if one of the disks fails. In
comparison to the performance of striped volumes, write throughput of RAID-5
volumes decreases, because parity information needs to be updated each time
data is written. However, in comparison to mirroring, the use of parity
reduces the amount of space required.
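The XOR reconstruction described above can be demonstrated with two toy byte values: compute the parity, "lose" one data value, and rebuild it from the survivor and the parity:

```shell
# RAID-5 style parity with two toy data bytes.
d1=$(( 0xA5 )); d2=$(( 0x3C ))
parity=$(( d1 ^ d2 ))        # stored on the parity stripe unit
rebuilt=$(( d1 ^ parity ))   # recover d2 after losing its disk
printf 'parity=0x%02X recovered_d2=0x%02X\n' "$parity" "$rebuilt"
# -> parity=0x99 recovered_d2=0x3C
```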

Labs and solutions for this lesson are located on the following pages:
Appendix A provides complete lab instructions and solutions.
Lab 1: VMware Workstation Introduction, page A-8

Lesson 2
Installing Storage Foundation and Accessing
SF Interfaces
Preparing to install Storage Foundation


OS version compatibility
Before installing Storage Foundation, ensure that the version of Storage
Foundation that you are installing is compatible with the version of the operating
system that you are running. You may need to upgrade your operating system
before you install the latest Storage Foundation version.
Check the Veritas Storage Foundation Release Notes for additional operating
system requirements.
Storage Foundation packaging
For the UNIX and Linux platforms, Storage Foundation 6.x is available in three
different product levels:
Storage Foundation Basic:
Intended for smaller systems, Storage Foundation Basic provides the same
robust storage management features of Storage Foundation Standard, but is
designed for system workloads that do not exceed four volumes and/or four
file systems, and/or two processors/sockets in a single physical system. This is
a free-license version of Storage Foundation with support available by subscription.
Storage Foundation Standard:
This option includes Volume Manager, File System, Veritas Operations
Manager (VOM), Dynamic Multi-Pathing (DMP), Cross-platform Data
Sharing (CDS, also known as portable data containers: PDC), SmartTier
(previously known as Dynamic Storage Tiering), and file system data
compression.
Storage Foundation Enterprise:
This option includes everything from the SF Standard version including
Volume Manager, File System, VOM, DMP, PDC, SmartTier, and data
compression and adds support for FlashSnap, storage checkpoints, importing
LUN snapshots, the site awareness feature, FileSnap, and file system
deduplication. These features are described in detail later in the course.


Other products with Storage Foundation


Storage Foundation licenses are also available in combination with other products:
Storage Foundation Standard High Availability license includes the Veritas
Cluster Server product with the database and application agents as well as
Storage Foundation Standard. The enterprise version of this product is also
available.
The disaster recovery version enables global clustering with VCS and also
includes the replication agents with automatic fire drill capability.
The Storage Foundation Cluster File System product provides cluster volume
manager and cluster file system capability for concurrent access to data from
multiple systems.
The Storage Foundation for Oracle RAC product is used in parallel Oracle
database environments.
The Symantec VirtualStore product provides datastore capability for VMware
environments with very high storage optimizations.
Veritas Replicator is an additional option that includes both Veritas Volume
Replicator (VVR) and Veritas File Replicator (VFR) options. Veritas Volume
Replicator enables synchronous or asynchronous data replication across
multiple SF sites.
Veritas File Replicator is also available as a separate option on its own. VFR is
new with the 6.x release and provides periodic file system replication support
on Linux platforms.

Licensing selection
During SF installation, after you select the product to install, you are prompted to
agree with the End User License Agreement. The agreement is provided in the
EULA.pdf file on the distribution media. The installation utility quits the
installation if you do not reply y to this question.
The diagram in the slide shows the two possible paths for licensing SF 6.x. The
boxes at the lower part of the slide show the traditional licensing methodology
based on the same keys used in 5.0 and earlier versions of Storage Foundation.
The path at the top of the slide shows how the installer handles the keyless
licensing option.
When keyless licensing is selected, the user is not required to type a license key. If
the system being installed is immediately configured as a managed host connected
to a Veritas Operations Manager management server, the license is considered
valid and no future action is necessary.

Note: Veritas Operations Manager is an additional free centralized management
solution that you can download and install to manage multiple SF servers
from a single management console.

Adding license keys


If you do not want to use the keyless licensing capability available with SF 6.x,
you must have your license key before you begin installation, because you are
prompted for the license key during the installation process.
License keys are non-node locked.
In a non-node locked model, one key can unlock a product on different servers
regardless of Host ID and architecture type.
In a node locked model, a single license is tied to a single specific server. For
each server, you need a different key.

Generating license keys


The Symantec licensing Web site (http://licensing.symantec.com) is a
self-service online license management system. The licensing Web site supports
production license keys only.

Note: The VRTSvlic package can coexist with previous licensing packages, such
as VRTSlic. If you have old license keys installed in /etc/vx/elm,
leave this directory on your system. The old and new license utilities can
coexist.

Administering license keys


To add a license key after product installation, type:
vxlicinst
License keys are installed in the /etc/vx/licenses/lic directory. To view
installed license key information, type:
vxlicrep
Displayed information includes:
License key number
Name of the product that the key enables
Type of license
Features enabled by the key
Installation and Upgrade service at Symantec Operations Readiness
Tools (SORT) Web site
The Symantec Operations Readiness Tools (SORT) Web site
(https://sort.symantec.com) is designed specifically for Symantec
enterprise products. It automates and simplifies some of the administrator tasks
associated with these products.
You can use this site to:
Determine if your systems are ready to install or upgrade
Download, search, and set up notifications for patches
Search for UMI code descriptions and solutions
Check your product and system configurations for upgrade readiness or risk
exposure
Gather licensing information


Get the latest information about your SFHA and VCS products
The Installation and Upgrade service at the SORT Web site intends to help SF
administrators analyze their environment for suitability to install or upgrade SF.
This service can either be used to create a preinstallation checklist based on the
information provided by the user or to perform a set of checks on the SF server to
create a detailed custom report.

How to use the Installation and Upgrade service from SORT


The data collector utility combines the various services provided by SORT that
require data collection from the live systems into a single interface and is built
using the same code base (Perl) as the SF installation scripts. As a result, it has
the following benefits:
Can be installed on any UNIX machine on the network and is supported on all
UNIX / Linux versions
Can test multiple systems at the same time
Can test multiple operating systems
Follow the steps on the slide to execute the data collector utility. Note that you can
use the utility to test multiple systems at the same time. By default, the data
collector utility uses passwordless ssh to access remote systems. If you want to
use remote shell, you need to start the data collector utility with the -rsh option.
What does the utility check?
The Installation and Upgrade service provides complete checking for fresh
installations as well as for existing installations on the following items:
Operating system
Server name, version, and architecture
File system free space
Disk Space
OS patch levels
Number of CPUs, CPU type, CPU speed, and memory
NICs and link speed
Correct OS packages installed
Correct SAN hardware
Upgradability (current Symantec packages and patches)


Symantec licenses
When the utility completes, it creates two files in the ./sort/reports
directory:
hostname_IAS_date_time.txt
This file is for immediate review on the server.
hostname_IAS_date_time.xml
This file can be uploaded to the SORT Web site to display detailed reports as
shown on the slide.
Other reports may also be generated if you choose different services while running
the data collector utility.

Installing Storage Foundation


The Installer is a menu-based installation utility that you can use to install any
product released as part of the Veritas Storage Solutions. This utility acts as a
wrapper for existing product installation scripts and is most useful when you are
installing multiple Veritas products or bundles, such as Veritas Storage Foundation
or Veritas Storage Foundation High Availability.

Note: The example on the slide is from a Linux platform. You may have other
products available on other platforms.

Note: The Veritas Storage Solutions installation media contains an installation
guide that describes how to use the installer utility. Symantec also
recommends reading all product installation guides and release notes even
if you are using the installer utility.

To add the Storage Foundation packages using the installer utility:


1 Log on as superuser.
2 Mount the Veritas Storage Solutions installation media.
3 Locate and invoke the installer script:
cd /installation_media_location
./installer

Note: If you are planning to perform non-local installs on remote systems, ensure
that the remote systems are configured for passwordless access using either
ssh or rsh. The installer first tries ssh and then rsh to access remote
systems. If you want to use rsh specifically, start the installer script
using the -rsh option.

4 If the licensing utilities are installed, the product status page is displayed. This
list displays the Veritas products on the installation media and the installation
and licensing status of each product. If the licensing utilities are not installed,
you receive a message indicating that the installation utility could not
determine product status.
5 Type I to install a product. Follow the instructions to select the product that
you want to install. Installation begins automatically.
When you add Storage Foundation packages by using the installer utility, all
packages are installed. If you want to add a specific package only, for example,
only the VRTSob package, then you must add the package manually from the
command line.

Methods for adding Storage Foundation packages


A first-time installation of Storage Foundation involves adding the software
packages and starting Storage Foundation processes for first-time use. You can
add Veritas product packages by using one of three methods:

Method                 Command               Notes
Veritas Installation   installer             Installs multiple Veritas products
Menu                                         interactively. Installs packages and
                                             starts Storage Foundation for
                                             first-time use.
Product installation   installvm             Install individual Veritas products
scripts                installfs             interactively. Installs packages and
                       installsf             configures SF for first-time use.
                       installdmp
Native operating       pkgadd (Solaris)      Install individual packages, for
system package         swinstall (HP-UX)     example, when using your own
installation           installp (AIX)        custom installation scripts.
commands               rpm (Linux)           First-time Storage Foundation
                       Then, to start SF     configuration must be run as a
                       processes:            separate step.
                       ./installer -start


Installation input
The interactive installation prompts the user for information, such as the package
set to be installed, system names, licensing selection, license keys (if traditional
licensing is selected), and other configuration information, such as the product
mode or additional options. These answers are then stored in the
installer-timestamp+3characters.response file in the installation
log directory:
/opt/VRTS/install/logs/installer-timestamp+3characters
The .response file can then be used to install other systems non-interactively
using the ./installer -responsefile filename option. For details on
using a response file during installation, refer to the Veritas Storage
Foundation Installation Guide.
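A hedged sketch of reusing a saved response file on another system; the file name below is a placeholder, not a real timestamp:

```shell
# Re-run the installation non-interactively, feeding the answers
# recorded during a previous interactive run (placeholder file name).
./installer -responsefile \
    /opt/VRTS/install/logs/installer-<timestamp>/installer-<timestamp>.response
```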

Note: SF 5.1 and later provide a Web user interface to the installation utilities.
The Web installer is explained in more detail later in this lesson.

Note: If you want to install more than one system using the installer utility,
provide the system names separated by spaces when prompted.

Viewing installation results
The best way to find out exactly what the installation utilities have done on
the SF servers is to review the installation log files. Each time you invoke the
installer utility, it creates a corresponding log directory under
/opt/VRTS/install/logs and stores detailed log files in this directory as
shown on the slide. Note that the timestamp used with the log directory and file
names is YYYYMMDDhhmm.
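The timestamp format can be reproduced with date(1); this small sketch prints what a log directory name created now would look like (the trailing three characters are random in practice and shown here as a placeholder):

```shell
# Build a log-directory-style name using the YYYYMMDDhhmm timestamp
# format that the installer uses; "abc" stands in for the random suffix.
ts=$(date +%Y%m%d%H%M)
echo "/opt/VRTS/install/logs/installer-${ts}abc"
```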
In addition to the key log files mentioned on the slide, individual log files exist for
the installation of a software package (the install.software.system log
files) and the starting of a Storage Foundation process or daemon (the
start.SFprocess.system log files).
During the installation, the related installation utilities are copied to the
/opt/VRTS/install directory on the SF hosts. You can use the installation
utilities in this directory to verify the version of the SF product installed on your
system using the -version option as shown on the slide. This option finds out
which packages are installed on the system and attempts to connect to the SORT
Web site to get the latest version and patch information about the product installed
on the system.
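A sketch of the version check described above; the exact script name copied under /opt/VRTS/install varies by product, so the generic installer name is an assumption here:

```shell
# Query the installed SF version; this also attempts to contact the
# SORT site for the latest version and patch information.
cd /opt/VRTS/install
./installer -version
```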
If you want to verify which packages are installed on the system, you can also
view information about installed packages by using OS-specific commands to list
package information.
Solaris
To list all installed packages on the system:

pkginfo
To restrict the list to installed Veritas packages:
pkginfo | grep VRTS
To display detailed information about a package:
pkginfo -l VRTSvxvm
HP-UX
To list all installed packages on the system:
swlist -l product
To restrict the list to installed Veritas packages:
swlist -l product | grep VRTS
To display detailed information about a package:
swlist -l product VRTSvxvm
AIX
To list all installed packages on the system:
lslpp
To restrict the list to installed Veritas packages, type:
lslpp -l 'VRTS*'
To verify that a particular fileset has been installed, use its name, for example:
lslpp -l VRTSvxvm
Linux
To verify package installation on the system:
rpm -qa | grep VRTS
To verify a specific package installation on the system:
rpm -q[i] package_name
For example, to verify that the VRTSvxvm package is installed:
rpm -q VRTSvxvm
The -i option lists detailed information about the package.
Other installation script options
The installer utility can be invoked using a variety of options, some of which
are displayed on the slide. To get detailed usage information, type
./installer -help.
Solaris Note
VxFS often requires more than the default 8K kernel stack size, so entries are
added to the /etc/system file. This increases the kernel thread stack size of the
system to 24K. The original /etc/system file is copied to
/etc/fs/vxfs/system.preinstall. If the /etc/system file is
modified during installation, the installation utility does not start SF processes and
prompts you for a reboot. If you receive a message to reboot at the end of the
installation, reboot your system and when the system boots back up, start SF
processes using the -start option to the installation utility.

Support for native operating system installation methods


SF 6.x supports product installation through the native operating system
provisioning tools, such as jumpstart and flash archives on Solaris, kickstart and
yum on Linux, ignite on HP-UX, and NIM on AIX.
You can create installation scripts for these OS-native methods, using options
specific to each platform. For example, to create a custom Solaris jumpstart finish
script, use the -jumpstart option. Note that the resulting finish scripts are not
complete and must be modified before being used for system installation
operations. For more information on using OS-native methods for installation,
refer to the Veritas Storage Foundation Installation Guide on the specific
platform.
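For example, on Solaris, generating skeleton JumpStart finish scripts might look like the following sketch (the output directory is hypothetical, and as noted above the generated scripts still need editing before use):

```shell
# Generate sample JumpStart finish scripts into a directory of your
# choice (hypothetical path); edit them before using them to install.
./installer -jumpstart /export/jumpstart/finish_scripts
```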

Using the Web installer
SF 5.1 and later include a Web-based interface to the CPI installer. The key
components of the Web installer architecture are shown in the diagram in the slide.
The Web browser can be run on any platform that supports the browser
requirements and can connect securely to the Web server.
The supported browsers are Firefox 3.x and later, Internet Explorer 6, 7, and 8.
The Web server runs the xprtlwid daemon, which is started using the
webinstaller command on the distribution media. The Web installer uses
the CPI installer scripts and the software packages. Therefore, the system
acting as the Web server must have access to the SF distribution media.
The Web server must be able to connect to the installation target systems using
rsh or ssh.
The installation targets are the systems on which the SF software is installed
and configured.
When you run the webinstaller command to start the Web server, the URL is
displayed so you can connect from a browser. On some browsers, you must accept
a security exception and authenticate using the root account and password for the
system running the Web server. After you connect to the Web server, you can
select tasks, products, and systems to start installing and configuring the target
systems.

Note: The webinstaller command is located in the root directory of the
software distribution media.
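Starting and stopping the Web server might look like this sketch, run from the root of the distribution media (subcommand names assumed from the description; verify against your release):

```shell
# Start the xprtlwid Web server; the command prints the URL to open
# from a browser. Stop the server when installation is complete.
./webinstaller start
./webinstaller stop
```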


Administering keyless licenses


You can manage license keys using the CPI installer script with the
-license option or using the vxkeyless utility. Examples of some common
tasks are shown in the slide.
Use vxkeyless to change the current licensing selections. For example, if you
want to upgrade to an additional product level, say from SF Standard to SF
Enterprise, use the vxkeyless set SFENT command.
If you want to remove keyless licensing, use the vxkeyless set NONE
command to clear all keyless licenses from the system. This operation may disable
the Veritas products unless a valid license key is installed. Use the vxlicinst
command to install valid traditional license keys for the Veritas products that you
want to continue to use.
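A few of these licensing operations, sketched as commands; the SFENT level name follows the text above, and the license key string is a placeholder:

```shell
# Show the currently selected keyless license levels.
vxkeyless display
# Move up to the SF Enterprise product level.
vxkeyless set SFENT
# Clear all keyless licenses, then install a traditional key
# (the key string below is a placeholder, not a valid key).
vxkeyless set NONE
vxlicinst -k XXXX-XXXX-XXXX-XXXX-XXXX
```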
Files used by keyless licensing include:
- Most recent license keys added to the system:
  /etc/vx/licenses/lic/.keyless.diff
  This file is removed if you remove keyless licensing.
- License keys: /etc/vx/licenses/dat/licenses.dat
- License log: /var/vx/licenses/.keyless.log
- Veritas Operations Manager (VOM) configuration files (used for verifying
  managed host configuration):
  /etc/default/sfm_resolve.conf
  /var/opt/VRTSsfmh/scheduler.conf

Storage Foundation resources
Version release differences
With each new release of the Storage Foundation software, changes are made that
may affect the installation or operation of Storage Foundation in your
environment. By reading version release notes and installation documentation that
are included with the product, you can stay informed of any changes.
For more information about specific releases of Veritas Storage Foundation, visit
the Symantec Support Web site at:
http://www.symantec.com/business/support
and select Storage Foundation for UNIX/Linux in the Product Finder.
This site contains product information, a searchable knowledge base of technical
notes, access to product-specific news groups and e-mail notification services, and
other information about contacting technical support staff.


Other services available from Symantec Operations Readiness Tools


The SORT Web site provides other services in addition to the Installation and
Upgrade service:
Downloads
This tab enables you to find and download patches; array-specific modules,
such as Array Support Libraries (ASLs) and Array Policy Modules (APMs) for
UNIX servers; product documentation; high availability agents for the Veritas
Cluster Server product; and VOM add-ons and documentation.
Systems
This tab lets administrators view and manage their Symantec enterprise
product configurations, track configuration changes, and share that information
with others. Note that before you can use SORT's Systems features, you must
create at least one custom report and upload it to SORT.
Reports
Use this tab to view and manage uploaded reports.
Notifications
This tab enables you to set e-mail alerts so that you are notified about the
latest information released for the Symantec enterprise products in your
environment.
Support
Use this tab to access detailed information about all Symantec resources from
product support to Symantec forums, from documentation to product training.

Accessing product documentation
The SORT Web site makes it easy to find product information from compatibility
lists to manual pages or product guides. Navigate to the Downloads > Documents
page on the SORT Web site and use the filtering capabilities provided to narrow
your search as shown on the slide.
Note that other Web sites that used to provide product documentation, such as the
Storage Foundation DocCentral site
(http://sfdoccentral.symantec.com/), are being deprecated in favor
of the additional filtering and search capabilities of the SORT Web site.

SORT patch services


The Patch Finder service, accessed by selecting Downloads > Patches on the
SORT Web site, aims to make finding patches easy by providing filtering
capabilities for product version, platform, and the Symantec product itself. When
you find the patch you need, you can display detailed information about that patch
or you can download it using ftp.
Note that not all patches may be available at this site. If you cannot find the patch
you are looking for, contact Symantec support services.
Storage Foundation user interfaces
Veritas Operations Manager
Storage Foundation provides central management capability by introducing a
Veritas Operations Manager (VOM) Management Server (MS).
Veritas Operations Manager is a comprehensive management platform for Storage
Foundation and Cluster Server environments that helps you optimize your data
center assets by centralizing visibility and control, ensuring availability,
scaling operations, increasing storage utilization, and maintaining
compliance. It is available as a free download for SF customers from the
http://go.symantec.com/vom Web site.
An introduction to VOM is provided in the Getting Started with Veritas
Operations Manager lesson in this course. For more information, refer to the
Veritas Operations Manager Administrator's Guide.


Veritas Operations Manager architecture


VOM is based on a distributed client-server architecture. It consists of the
following:
Management server (MS)
Components of the MS are:
Management server
Authentication broker (for OS/public domain-based user authentication)
A database server
Web server
One or more managed hosts, each consisting of an agent
An agent is a process that collects status information from network resources
and relays that information to VOM.
Typically, a managed host is a production server on which different
components of SF products are installed and running. A typical data center can
have thousands of such hosts using some or all of the SF products.
Optional external authentication brokers (ABs) for additional domain support.
An AB is a system with Symantec Product Authentication Services (SPAS)
installed that provides access to user authentication with public domains, such
as Active Directory, NIS, or NIS+.
In a centrally managed deployment, managed hosts relay information about
storage resources and applications to the MS. The Management Server then
coalesces the data it receives from the managed hosts within its database.

VOM support for virtual environments
Veritas Operations Manager supports the following virtualization technologies:
VMware virtualization technology
Solaris Zones
Solaris Logical Domains (LDom).
With VMware virtualization technology, a designated Control Host discovers the
VirtualCenter servers in the datacenter. This discovery displays the ESX servers
that the VirtualCenter server manages and the virtual machines that are configured
on the ESX servers. Veritas Operations Manager can also discover the ESX
servers that VirtualCenter servers do not manage.
With the Solaris zones virtualization technology, the Zone agentlet in the
VRTSsfmh package, which is installed on a Solaris managed host, discovers the
Global Zones that are configured on the host. This discovery displays the
non-global zones that are configured on the Global Zone.


With the Solaris LDom virtualization technology, the LDom agentlet in the
VRTSsfmh package, which is installed on a Solaris managed host, discovers the
LDom Server that is configured on the host. This discovery displays the LDoms
that are configured on the LDom Server.


Connecting to the VOM management server


To connect to the VOM management server, use a supported Web browser and
type:
https://fully_qualified_systemname_or_IP_address:14161
When you connect to the MS, you are presented with a summary view called the
Dashboard. The dashboard displays status information for the entire managed
environment organized into application groups, servers and storage. It also
provides a list of faulted applications and the most recent alerts with critical or
error status.
The menu at the top of the main page provides access to other parts of the
management server. Note that there are also in-context links on the page that
allow you to browse immediately to the associated information.
After the initial configuration, you see only one host (the management
server itself) when you connect to the MS console. You need to add other SF
servers as managed hosts to populate the database and start the discovery
process.

Note: If you are using pop-up blockers (including Yahoo Toolbar or Google
Toolbar), either disable them or configure them to accept pop-ups from the
Web server to which you will connect.

Storage Foundation user interfaces for single host administration
Storage Foundation supports three user interfaces which can be used to administer
one host at a time. Volume Manager objects created by one interface are
compatible with those created by the other interfaces.
Command-Line Interface (CLI): The command-line interface (CLI) consists
of UNIX utilities that you invoke from the command line to perform Storage
Foundation and standard UNIX tasks. You can use the CLI not only to
manipulate Volume Manager objects, but also to perform scripting and
debugging functions. Most of the CLI commands require superuser or other
appropriate privileges. The CLI commands perform functions that range from
the simple to the complex, and some require detailed user input.
Volume Manager Support Operations (vxdiskadm): The Volume
Manager Support Operations interface, commonly called vxdiskadm, is a
menu-driven, text-based interface that you can use for disk and disk group
administration functions. The vxdiskadm interface has a main menu from
which you can select storage management tasks.
Veritas Enterprise Administrator (VEA): Veritas Enterprise Administrator
(VEA) is a graphical user interface to Volume Manager and other Veritas
products. VEA provides access to Storage Foundation functionality through
visual elements, such as icons, menus, wizards, and dialog boxes. Using VEA,
you can manipulate Volume Manager objects and also perform common file
system operations. A single VEA task may perform multiple command-line
tasks.


Using the command-line interface


The Storage Foundation command-line interface (CLI) provides commands used
for administering Storage Foundation from the shell prompt on a UNIX system.
CLI commands can be executed individually for specific tasks or combined into
scripts.
The Storage Foundation command set ranges from commands requiring minimal
user input to commands requiring detailed user input. Many of the Storage
Foundation commands require an understanding of Storage Foundation concepts.
Most Storage Foundation commands require superuser or other appropriate access
privileges.

Accessing manual pages for CLI commands


Detailed descriptions of VxVM and VxFS commands, the options for each utility,
and details on how to use them are located in VxVM and VxFS manual pages.
Manual pages are installed by default in /opt/VRTS/man. Add this directory to
the MANPATH environment variable, if it is not already added.
To access a manual page, type man command_name.
Examples:
man vxassist
man mount_vxfs

Using the vxdiskadm interface
The vxdiskadm command is a CLI command that you can use to launch the
Volume Manager Support Operations menu interface. You can use the Volume
Manager Support Operations interface, commonly referred to as vxdiskadm, to
perform common disk management tasks. The vxdiskadm interface is restricted
to managing disk objects and does not provide a means of handling all other
VxVM objects.
Each option in the vxdiskadm interface invokes a sequence of CLI commands.
The vxdiskadm interface presents disk management tasks to the user as a series
of questions, or prompts.
To start, you type vxdiskadm at the command line. The vxdiskadm main menu
contains a selection of main tasks that you can use to manipulate Volume Manager
objects. Each entry in the main menu leads you through a particular task by
providing you with information and prompts. Default answers are provided for
many questions, so you can select common answers.
The menu also contains options for listing disk information, displaying help
information, and quitting the menu interface.
The tasks listed in the main menu are covered throughout this training. Options
available in the menu differ somewhat by platform. See the vxdiskadm(1m)
manual page for more details on how to use vxdiskadm.

Note: vxdiskadm can be run only once per host. A lock file prevents multiple
instances from running: /var/spool/locks/.DISKADD.LOCK.


Using the VEA interface


The Veritas Enterprise Administrator (VEA) is the graphical user interface for
Storage Foundation and other Veritas products. You can use the Storage
Foundation features of VEA to administer disks, volumes, and file systems on
local or remote machines.
VEA is a Java-based interface that consists of a server and a client. You must
install the VEA server on a UNIX machine that is running Veritas Volume
Manager. The VEA client can run on any machine that supports the Java (1.4 or
later) Runtime Environment, which can be Solaris, HP-UX, AIX, Linux, or
Windows.
Some Storage Foundation features of VEA include:
- Remote administration
- Security
- Multiple host support
- Multiple views of objects

Setting VEA preferences


You can customize general VEA environment attributes through the Preferences
window (Select Tools > Preferences).

Installing the VEA server and client on UNIX
If you install Storage Foundation by using the installer utility and select
either the recommended or the all package sets, the VEA server package
(VRTSob) is
automatically installed. The VEA client package (VRTSobgui) must be
downloaded from the Symantec Web site displayed on the slide and installed on a
server with Java Runtime Environment. Use the native OS installation commands,
such as pkgadd on the Solaris platform, to install the VEA client software.

Starting the VEA server and client


In order to use VEA, the VEA server must be running on the UNIX machine to be
administered. Only one instance of the VEA server should be running at a time.
With SF 5.1 and later, the VEA server is no longer started automatically during
installation. You must use the /opt/VRTSob/bin/vxsvcctrl activate
command to configure the VEA server to start up automatically at system boot up.
On the Linux platform, you also need to execute the vxsvcctrl start
command to start the server process after activating it.
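The activation and startup steps above can be sketched as follows (behavior assumed from the description, and the status subcommand is an assumption; verify on your platform):

```shell
# Configure the VEA server to start automatically at system boot.
/opt/VRTSob/bin/vxsvcctrl activate
# On Linux, also start the server process now.
/opt/VRTSob/bin/vxsvcctrl start
# Confirm the server state (subcommand assumed).
/opt/VRTSob/bin/vxsvcctrl status
```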
The VEA client can provide simultaneous access to multiple host machines. Each
host machine must be running the VEA server.

Note: Entries for your user name and password must exist in the password file or
corresponding Network Information Name Service table on the machine to
be administered. Your user name must also be included in the Veritas
administration group (vrtsadm, by default) in the group file or NIS group
table. If the vrtsadm entry does not exist, only root can run VEA.

Labs and solutions for this lesson are located on the following pages:
Appendix A provides complete lab instructions and solutions.
Lab 2: Installing SF and Accessing SF Interfaces, page A-37

Lesson 3
Creating a Volume and File System

Preparing disks and disk groups for volume creation


What is enclosure-based naming?
An enclosure, or disk enclosure, is an intelligent disk array that permits
hot-swapping of disks. With Storage Foundation, disk devices can be named for
enclosures rather than for the controllers through which they are accessed as with
standard disk device naming (for example, c0t0d0 or hdisk2).
Enclosure-based naming allows Storage Foundation to access enclosures as
separate physical entities. By configuring redundant copies of your data on
separate enclosures, you can safeguard against failure of one or more enclosures.
This is especially useful in a storage area network (SAN) that uses Fibre Channel
hubs or fabric switches and when managing the dynamic multipathing (DMP)
feature of Storage Foundation. For example, if two paths (c1t99d0 and
c2t99d0) exist to a single disk in an enclosure, VxVM can use a single DMP
metanode, such as enc0_0, to access the disk.

Here are some examples of naming schemes:

Naming Scheme               Example
--------------------------  -------------------------------------
OS-based                    Solaris: /dev/[r]dsk/c1t9d0s2
                            HP-UX: /dev/[r]dsk/c3t2d0 (no slice)
                            HP-UX: /dev/[r]disk/disk32 (11iv3)
                            AIX: /dev/hdisk2
                            Linux: /dev/sda, /dev/hda
Enclosure-based             sena0_1, sena0_2, sena0_3, ...
Enclosure-based customized  englab2, hr1, boston3
Note: With SF 5.1 and later, the enclosure-based naming scheme is the default
naming scheme for all fresh SF installations. Upgrades to a new version of
SF preserve the naming scheme set by the user in previous versions of SF.

Benefits of enclosure-based naming include:


Easier fault isolation: Storage Foundation can more effectively place data and
metadata to ensure data availability.
Device-name independence: Storage Foundation is independent of arbitrary
device names used by third-party drivers.
Improved SAN management: Storage Foundation can create better location
identification information about disks in large disk farms and SANs.
Improved cluster management: In a cluster environment, disk array names
on all hosts in a cluster can be the same. Storage Foundation 5.0 MP3 and later
provide consistent enclosure-based device names across systems in a cluster;
that is, each LUN in a disk array is indexed using the same number on
different systems sharing storage in the same cluster.
Improved dynamic multipathing (DMP) management: With multipathed
disks, the name of a disk is independent of the physical communication paths,
avoiding confusion and conflict.
You can use the vxddladm command to determine the current naming scheme as
follows:
vxddladm get namingscheme
NAMING_SCHEME  PERSISTENCE  LOWERCASE  USE_AVID
===============================================
OS Native      No           Yes        Yes
You can change the naming scheme from the command line using the vxddladm
set command with the following options:
vxddladm set namingscheme=<osn|ebn> \
[persistent=<yes|no>] [lowercase=<yes|no>] \
[use_avid=<yes|no>]

If you set the use_avid option to yes, the LUNs are numbered based on the
array volume ID instead of the traditional indexing method.
You can also change the device naming scheme using the Change the disk
naming scheme option in the vxdiskadm menu.
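For example, switching to persistent enclosure-based names with array-volume-ID indexing would look like this, using the options from the syntax above:

```shell
# Switch the device naming scheme to enclosure-based (ebn), keep the
# names persistent across reboots, and index LUNs by array volume ID.
vxddladm set namingscheme=ebn persistent=yes use_avid=yes
# Verify the change.
vxddladm get namingscheme
```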


Before configuring a disk for use by VxVM


In order to use the space of a physical disk to build VxVM volumes, you must
place the disk under Volume Manager control. Before a disk can be placed under
Volume Manager control, the disk media must be formatted outside of VxVM
using standard operating system formatting methods. SCSI disks are usually
preformatted. After a disk is formatted, the disk can be initialized for use by
Volume Manager. In other words, disks must be detected by the operating system
before VxVM can detect them.

Stage one: Initialize a disk


A formatted physical disk is considered uninitialized until it is initialized for use
by VxVM. When a disk is initialized, the public and private regions are created,
and VM disk header information is written to the private region. Any partitions
(other than slice 2 on the Solaris platform) that may have existed on the disk are
removed.
These disks are under Volume Manager control but cannot be used by Volume
Manager until they are added to a disk group.
Note: Encapsulation is another method of placing a disk under VxVM control in
which existing data on the disk is preserved.

Changing the disk layout


To display or change the default values that are used for initializing disks, select
the Change/display the default disk layouts option in vxdiskadm:

For disk initialization, you can change the default format and the default length
of the private region. If the attribute settings for initializing disks are stored in
the user-created file /etc/default/vxdisk, they apply to all disks to be
initialized.
On Solaris for disk encapsulation, you can additionally change the offset
values for both the private and public regions. To make encapsulation
parameters different from the default VxVM values, create the user-defined
/etc/default/vxencap file and place the parameters in this file.
On HP-UX when converting LVM disks, you can change the default format
and the default private region length. The attribute settings are stored in the
/etc/default/vxencap file.

Stage two: Assign a disk to a disk group


When you add a disk to a disk group, VxVM assigns a disk media name to the disk
and maps this name to the disk access name.
Disk media name: A disk media name is the logical disk name assigned to a
drive by VxVM. VxVM uses this name to identify the disk for volume
operations, such as volume creation and mirroring.
Disk access name: A disk access name represents all paths to the device. A
disk access record maps the physical location to the logical name and
represents the link between the disk media name and the disk access name.
Disk access records are dynamic and can be re-created when vxdctl enable
is run.
The disk media name and disk access name, in addition to the host name, are
written to the private region of the disk. The disk name field in the private region
is used to hold the disk media name and the devicetag field is used to hold the disk
access name. Space in the public region is made available for assignment to
volumes. Whenever the VxVM configuration daemon is started (or vxdctl
enable is run), the system reads the private region on every disk and establishes
the connections between disk access names and disk media names.
After disks are placed under Volume Manager control, storage is managed in terms
of the logical configuration. File systems mount to logical volumes, not to physical
partitions. Logical names, such as
/dev/vx/[r]dsk/diskgroup/volume_name, replace physical locations,
such as /dev/[r]dsk/device_name.
The free space in a disk group refers to the space on all disks within the disk group
that has not been allocated as subdisks. When you place a disk into a disk group,
its space becomes part of the free space pool of the disk group.

Stage three: Assign disk space to volumes


When you create volumes, space in the public region of a disk is assigned to the
volumes. Some operations, such as removal of a disk from a disk group, are
restricted if space on a disk is in use by a volume.


What is a disk group?


A disk group is a collection of physical disks, volumes, plexes, and subdisks that
are used for a common purpose. A disk group is created when you place at least
one disk in the disk group. When you add a disk to a disk group, a disk group entry
is added to the private region header of that disk. Because a disk can only have one
disk group entry in its private region header, one disk group does not know
about other disk groups, and therefore disk groups cannot share resources, such as
disk drives, plexes, and volumes.
A volume with a plex can belong to only one disk group, and subdisks and plexes
of a volume must be stored in the same disk group. You can never have an
empty disk group, because a disk group with no disks would have no private
region available in which to store the disk group definition. Therefore, you cannot
remove all disks from a disk group without destroying the disk group.
Why are disk groups needed?


Disk groups assist disk management in several ways:
Disk groups enable the grouping of disks into logical collections for a
particular set of users or applications.
Disk groups enable data, volumes, and disks to be easily moved from one host
machine to another.
Disk groups ease the administration of high availability environments. Disk
drives can be shared by two or more hosts, but they can be accessed by only
one host at a time. If one host crashes, the other host can take over its disk
groups and therefore its disks.
A disk group provides the configuration boundary for VxVM objects.
System-wide reserved disk groups
VxVM has reserved three disk group names that are used to provide boot disk
group and default disk group functionality. The names bootdg, defaultdg, and
nodg are system-wide reserved disk group names and cannot be used as names
for any of the disk groups that you set up.
If you choose to place your boot disk under VxVM control, VxVM assigns bootdg
as an alias for the name of the disk group that contains the volumes that are used to
boot the system.
The main benefit of creating a default disk group is that SF commands default to
that disk group if you do not specify a disk group on the command line.
defaultdg is an alias for the disk group name that should be assumed if the -g
option is not specified to a command. You can set defaultdg when you install
Veritas Volume Manager (pre-SF 5.1) or anytime after installation.
By default, both bootdg and defaultdg are set to nodg.

Notes
The definitions of bootdg and defaultdg are written to the volboot file. The
definition of bootdg results in a symbolic link named bootdg in
/dev/vx/dsk and /dev/vx/rdsk.
The rootdg disk group name is no longer a reserved name for VxVM versions
after 4.0. If you are upgrading from a version of Volume Manager earlier than
4.0 where the system disk is encapsulated in the rootdg disk group, the bootdg
is assigned the value of rootdg automatically.


Creating a disk group


A disk must be placed into a disk group before it can be used by VxVM. A disk
group cannot exist without having at least one associated disk. When you create a
new disk group, you specify a name for the disk group and at least one disk to add
to the disk group. The disk group name must be unique for the host machine.

Adding disks
To add a disk to a disk group, you select an uninitialized disk or a free disk. If the
disk is uninitialized, you must initialize the disk before you can add it to a disk
group.

Disk naming
When you add a disk to a disk group, the disk is assigned a disk media name. The
disk media name is a logical name used for VxVM administrative purposes.

Notes on disk naming


You can change disk media names after the disks have been added to disk groups.
However, if you must change a disk media name, it is recommended that you make
the change before using the disk for any volumes. Renaming a disk does not
rename the subdisks on the disk, which may be confusing.
Assign logical media names, rather than use the device names, to facilitate
transparent logical replacement of failed disks.

Creating a disk group: vxdiskadm
From the vxdiskadm main menu, select the Add or initialize one or more disks
option. Specify the disk group to which the disk should be added. To add the disk
to a new disk group, you type a name for the new disk group. You use this same
menu option to add additional disks to the disk group.
To verify that the disk group was created, you can use vxdg list.
When you add a disk to a disk group, the disk group configuration is copied onto
the disk, and the disk is stamped with the system host ID.

Creating a volume and adding a file system


Creating a volume
When you create a volume, you indicate the desired volume characteristics, and
VxVM creates the underlying plexes and subdisks automatically. The VxVM
interfaces require minimal input if you use default settings. For experienced users,
the interfaces also enable you to enter more detailed specifications regarding all
aspects of volume creation.
Before you create a volume
Before you create a volume, ensure that you have enough disks to support the
layout type.
A striped volume requires at least two disks.
A mirrored volume requires at least one disk for each plex. A mirror cannot be
on the same disk that other plexes of the same volume are using.
To create a volume from the command line, you use the vxassist command:
vxassist -g diskgroup make volume_name length
In the syntax:
Use the -g option to specify the disk group in which to create the volume.
make is the keyword for volume creation.
volume_name is a name you give to the volume. Specify a meaningful name
which is unique within the disk group.
length specifies the size of the volume, by default in sectors. You can specify
the length in other units by appending m, k, g, or t to the number.
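As a rough model of how such a length string translates to a sector count (an illustrative sketch, not the actual vxassist parsing code; it assumes 512-byte sectors, and the function name is invented for this example):

```python
SECTOR_BYTES = 512  # assumed sector size; platforms may differ
UNITS = {"k": 1024, "m": 1024**2, "g": 1024**3, "t": 1024**4}

def length_to_sectors(length):
    """Convert a vxassist-style length string ("2g", "500m", "4194304") to sectors."""
    length = length.strip().lower()
    if length and length[-1] in UNITS:
        # Suffixed value: scale to bytes, then divide by the sector size.
        return int(length[:-1]) * UNITS[length[-1]] // SECTOR_BYTES
    return int(length)  # bare number: already a sector count

print(length_to_sectors("2g"))  # a 2 GB volume expressed in 512-byte sectors
```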

Adding a file system to a volume
A file system provides an organized structure to facilitate the storage and retrieval
of files. You can add a file system to a volume when you create a volume or any
time after you create the volume initially.
When a file system has been mounted on a volume, the data is accessed
through the mount point directory.
When data is written to files, it is actually written to the block device file:
/dev/vx/dsk/diskgroup/volume_name.
When fsck is run on the file system, the raw device file is checked:
/dev/vx/rdsk/diskgroup/volume_name.
To add a file system to a volume from the command line, you must create the file
system, create a mount point for the file system, and then mount the file system.
Solaris
To create and mount a VxFS file system:


mkfs -F vxfs /dev/vx/rdsk/datadg/datavol
mkdir /data
mount -F vxfs /dev/vx/dsk/datadg/datavol /data
To create and mount a UFS file system:
newfs /dev/vx/rdsk/datadg/datavol
mkdir /data
mount /dev/vx/dsk/datadg/datavol /data

HP-UX
To create and mount a VxFS file system:
mkfs -F vxfs /dev/vx/rdsk/datadg/datavol
mkdir /data
mount -F vxfs /dev/vx/dsk/datadg/datavol /data
To create and mount an HFS file system:
newfs -F hfs /dev/vx/rdsk/datadg/datavol
mkdir /data
mount -F hfs /dev/vx/dsk/datadg/datavol /data
AIX
To create and mount a VxFS file system using mkfs:
mkfs -V vxfs /dev/vx/rdsk/datadg/datavol
mkdir /data
mount -V vxfs /dev/vx/dsk/datadg/datavol /data
To create and mount a VxFS file system using crfs:
crfs -v vxfs -d /dev/vx/rdsk/datadg/datavol -m /data -A yes
Notes:
An uppercase V is used with mkfs; a lowercase v is used with crfs (to avoid
conflict with another crfs option).
crfs creates the file system, creates the mount point, and updates the file
systems file (/etc/filesystems). The -A yes option requests mount at
boot.
If the file system already exists in /etc/filesystems, you can mount the
file system by simply using the syntax: mount mount_point.
Linux
To create and mount a VxFS file system using mkfs:
mkfs -t vxfs /dev/vx/rdsk/datadg/datavol


mkdir /data
mount -t vxfs /dev/vx/dsk/datadg/datavol /data

Mounting a file system at boot
When using the CLI, if you want the file system to be mounted at every system boot, you
must edit the file system table file by adding an entry for the file system. If you
later decide to remove the volume, you must remove the entry in the file system
table file.

Platform File System Table File


Solaris /etc/vfstab
HP-UX /etc/fstab
AIX /etc/filesystems
Linux /etc/fstab
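As an illustration, a Solaris /etc/vfstab entry for the hypothetical datavol volume used earlier might look like the following (the fsck pass and mount-at-boot values shown are typical choices, not requirements):

```
#device to mount            device to fsck               mount   FS    fsck  mount    mount
#                                                        point   type  pass  at boot  options
/dev/vx/dsk/datadg/datavol  /dev/vx/rdsk/datadg/datavol  /data   vxfs  2     yes      -
```

Note that the block device is listed as the device to mount and the raw device as the device to fsck, matching the device files described earlier in this lesson.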
AIX
In AIX, you can use the following commands when working with the file system
table file, /etc/filesystems:
To view entries: lsfs mount_point
To change details of an entry, use chfs. For example, to turn off mount at
boot: chfs -A no mount_point


Displaying disk and disk group information


Displaying basic disk information
By viewing disk information, you can determine if a disk has been initialized and
added to a disk group, verify the changes that you make to disks, and keep track of
the status and configuration of your disks.
You use the vxdisk -o alldgs list command to display basic information
about all disks attached to the system. The vxdisk list command displays the:
Device names for all recognized disks
Type of disk, that is, how a disk is placed under VxVM control
Disk names
Disk group names associated with each disk
Status of each disk
In the output:
A status of online, in addition to entries in the Disk and Group columns
indicates that the disk has been initialized or encapsulated, assigned a disk
media name, and added to a disk group. The disk is under Volume Manager
control and is available for creating volumes.
A status of online without entries in the Disk and Group columns indicates that
the drive has been initialized or encapsulated but is not currently assigned to a
disk group. Note that if there is a disk group name in parentheses without any
disk media name, it indicates that the disk belongs to a deported disk group.

A status of online invalid indicates that the disk has neither been initialized
nor encapsulated by VxVM. The disk is not under VxVM control.
A status of error (not shown on the slide) indicates that Volume Manager can
no longer access the disk device, possibly due to a failure.
Notes:
On the HP-UX platform, LVM disks have a type of auto:LVM and a status of
LVM.
With SF 5.1 on the Solaris platform, ZFS/SVM disks have a type of auto:ZFS
or auto:SVM and a status of ZFS or SVM respectively.

Viewing detailed disk information


To display detailed information about a disk, you use the vxdisk list
command with the name of the disk. With this command, you can either use the
disk access name or the disk media name together with the disk group name as
shown in the following syntax:
vxdisk -g diskgroup list dm_name
vxdisk -g appdg list appdg01
Device: emc0_dd5
devicetag: emc0_dd5
type: auto
hostid: train12
disk: name=appdg01 id=1000753057.1114.train12


group: name=appdg id=1000753077.1117.train12
...
In the example output:
Device is the VxVM name for the device path.
devicetag is the name used by VxVM to refer to the physical disk.
type is how a disk was discovered by VxVM. auto is the default type.
hostid is the name of the system that currently manages the disk group to
which the disk belongs; if blank, no host is currently controlling this group.
disk is the VM disk media name and internal ID.
group is the disk group name and internal ID.
To view a summary of information for all disks, you use the -s option with the
vxdisk list command.
To display discovered properties of a disk, such as vendor ID, array port ID/WWN
and so on, you use the -p option with the vxdisk list command.
vxdisk -p list emc0_dd1
DISK : emc0_dd1
DISKID : 1320849045.92.sym1
VID : EMC
UDID : EMC%5FSYMMETRIX%5F313635323300%5FDD0DD1
SCSI_VERSION : 3
SCSI3_VPD_ID : 5123456000000000
REVISION : 5671
PORT_SERIAL_NO : 1a-a
PID : SYMMETRIX
PHYS_CTLR_NAME : c3
NR_DEVICE : Y
MEDIA_TYPE : hdd
LUN_TYPE : std
LUN_SNO_ORDER : 0
LUN_SERIAL_NO : DD0DD1
LIBNAME : libvxemc.so
HARDWARE_MIRROR: no
DMP_DEVICE : emc0_dd1
DDL_DEVICE_ATTR: lun
CAB_SERIAL_NO : 313635323300
ATYPE : A/A
ARRAY_VOLUME_ID: DD1
ARRAY_PORT_PWWN: 10.10.5.3:3260
ANAME : EMC
TRANSPORT : iSCSI
ENCLOSURE_NAME : emc0
NUM_PATHS : 2
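Because the -p output is a simple series of "KEY : VALUE" lines, it is easy to post-process in a script. For example, a small helper (hypothetical, not part of Storage Foundation) could collect the attributes into a dictionary:

```python
def parse_vxdisk_p(output):
    """Collect 'KEY : VALUE' lines from vxdisk -p list output into a dict.

    Splitting on the first colon only keeps values such as
    '10.10.5.3:3260' intact.
    """
    attrs = {}
    for line in output.splitlines():
        key, sep, value = line.partition(":")
        if sep:
            attrs[key.strip()] = value.strip()
    return attrs

# Abbreviated sample of the output shown above.
sample = """DISK : emc0_dd1
VID : EMC
ARRAY_PORT_PWWN : 10.10.5.3:3260
NUM_PATHS : 2"""
info = parse_vxdisk_p(sample)
print(info["VID"])  # EMC
```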
Notes:
The disk name and the disk group name are changeable. The disk ID and disk
group ID are never changed as long as the disk group exists or the disk is
initialized.
The detailed information displayed by the vxdisk list command is
discussed in more detail in Lesson 7.


Displaying disk group information


To display disk group information:
Use vxdg list to display disk group names, states, and IDs for all imported
disk groups in the system.
Use vxdg free to display free space on each disk. This command displays
free space on all disks in all disk groups that the host can detect. Add -g
diskgroup to restrict the output to a specific disk group.
Note: This command does not show space on spare disks. Reserved disks are
displayed with an r in the FLAGS column.
Use vxdisk -o alldgs list to display all disk groups, including
deported disk groups. For example:
vxdisk -o alldgs list
DEVICE TYPE DISK GROUP STATUS


Disk_1 auto:cdsdisk appdg01 appdg online
Disk_7 auto:cdsdisk - (oradg) online

Using vxlist to display disk and disk group information
The vxlist command is a new display command that provides a consolidated
view of the SF configuration.
To display the vxlist command output, the vxdclid daemon must be running.
If this daemon is not running, execute
/opt/VRTSsfmh/adm/dclisetup.sh as the root user.
For more information on using the vxlist command, refer to the manual pages.

Displaying volume configuration information


Displaying volume layout information

The vxprint command


You can use the vxprint command to display information about how a volume is
configured. This command displays records from the VxVM configuration
database.
vxprint -g diskgroup [options]
The vxprint command can display information about disk groups, disk media,
volumes, plexes, and subdisks. You can specify a variety of options with the
command to expand or restrict the information displayed. Only some of the
options are presented in this training. For more information about additional
options, see the vxprint(1m) manual page.
321
Displaying information for all volumes
To display the volume, plex, and subdisk record information for a disk group:
vxprint -g diskgroup -htr -u h
In the output, the top few lines indicate the headers that match each type of output
line that follows. Each volume is listed along with its associated plexes and
subdisks and other VxVM objects.
dg is a disk group.
st is a storage pool (used in Intelligent Storage Provisioning).
dm is a disk.
rv is a replicated volume group (used in Veritas Volume Replicator).
rl is an rlink (used in Veritas Volume Replicator).
co is a cache object.
vt is a volume template (used in Intelligent Storage Provisioning).


v is a volume.
pl is a plex.
sd is a subdisk.
sv is a subvolume.
sc is a storage cache.
dc is a data change object.
sp is a snap object.
For more information, see the vxprint(1m) manual page.

3

Using vxlist and vxinfo for volume information


The vxlist command is useful in summarizing the volume information on the
system. You can also use this command to display the disks and the plexes
associated with a specific volume, using the following command options:
vxlist -s disk vol volume_name
vxlist -s disk vol appvol
disks
TY DEVICE DISK NPATH ENCLR_NAME ENCLR_SNO STATUS
disk emc0_dd1 appdg01 2 emc0 ... imported
vxlist -s plexes vol volume_name
vxlist -s plexes vol appvol
plexes
TY NAME TYPE STATUS
plex appvol-01 simple attached
The vxinfo command prints the accessibility and the usability information on
VxVM volumes. The -p option with vxinfo also reports the name and status of
each plex within the volume.

323
Removing volumes, disks, and disk groups
Removing a volume
Only remove a volume if you are sure that you do not need the data in the volume,
or if the data is backed up elsewhere. A volume must be closed before it can be
removed. For example, if the volume contains a file system, the file system must
be unmounted. You must edit the OS-specific file system table file manually in
order to remove the entry for the file system and avoid errors at boot. If the volume
is used as a raw device, the application, such as a database, must close the device.
3

Evacuating a disk
Evacuating a disk moves the contents of the volumes on a disk to another disk. The
contents of a disk can be evacuated only to disks in the same disk group that have
sufficient free space.
To evacuate to any disk except for appdg03:
vxevac -g appdg appdg02 !appdg03
325
Removing a disk
You can verify the removal by using the vxdisk list command to display disk
information. A disk that has been taken out of a disk group no longer has a disk
media name or disk group assignment but still shows a status of online.
Before the disk is taken out of the disk group:
vxdisk -o alldgs list
DEVICE TYPE DISK GROUP STATUS
emc0_dd1 auto:cdsdisk appdg01 appdg online
...
After the disk is taken out of the disk group using the vxdg -g appdg
rmdisk appdg01 command:
vxdisk -o alldgs list


DEVICE TYPE DISK GROUP STATUS
emc0_dd1 auto:cdsdisk - - online
...

3

Uninitializing and shredding a disk


After the disk has been removed from its disk group, you can remove it from
Volume Manager control completely by using the vxdiskunsetup command.
This command reverses the configuration of a disk by removing the public and
private regions that were created by the vxdisksetup command. The
vxdiskunsetup command does not operate on disks that are active members of
an imported disk group. This command does not usually operate on disks that
appear to be imported by some other host, for example, a host that shares access
to the disk. You can use the -C option to force deconfiguration of the disk,
removing host locks that may be detected.
Before the disk is uninitialized:
vxdisk -o alldgs list
DEVICE TYPE DISK GROUP STATUS
emc0_dd1 auto:cdsdisk - - online


...
After the disk is uninitialized using the vxdiskunsetup emc0_dd1
command:
vxdisk -o alldgs list
DEVICE TYPE DISK GROUP STATUS
emc0_dd1 auto:none - - online invalid
...

327
With SF 6.x, you can use the -o shred option with the vxdiskunsetup
command to shred a disk. Shredding a disk destroys the data stored on the disk by
overwriting the disk with a digital pattern in one of three ways:
One-pass: VxVM overwrites the disk with a randomly selected pattern. This
option takes the least amount of time. This is the default behavior if the user
does not specify a type.
Three-pass: The disk is overwritten a total of 3 times. In the first pass, it is
overwritten with a pre-selected digital pattern. The second time, it is
overwritten with the binary complement of the pattern. In the last pass, the disk
is overwritten with a randomly selected digital pattern. This algorithm is based
on the US DoD 5200.22-M standard for the sanitization of sensitive data.
Seven-pass: Disk is overwritten a total of 7 times. Each pass consists of
overwriting the disk with a randomly selected digital pattern or with the binary
complement of the previous pattern. This algorithm is based on the US
DoD 5200.28-STD standard.
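The three-pass scheme can be illustrated on an in-memory buffer (a sketch only; the real operation overwrites the physical disk, and the fixed pattern below is an arbitrary stand-in for the pre-selected pattern, which is not documented here):

```python
import os

PATTERN = 0xB5  # stand-in for the pre-selected digital pattern

def three_pass_shred(buf):
    """Simulate the three-pass shred scheme on a bytearray (sketch only)."""
    n = len(buf)
    buf[:] = bytes([PATTERN]) * n          # pass 1: pre-selected digital pattern
    buf[:] = bytes([PATTERN ^ 0xFF]) * n   # pass 2: binary complement of the pattern
    buf[:] = os.urandom(n)                 # pass 3: randomly selected pattern
    return buf

buf = bytearray(b"sensitive payroll data")
three_pass_shred(buf)
```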

Note: Use the -f option to force a shred operation on a Solid State Drive (SSD)
disk.
3

Destroying a disk group


Destroying a disk group permanently removes a disk group from Volume Manager
control, and the disk group ceases to exist. When you destroy a disk group, all of
the disks in the disk group are made available as empty disks. Volumes and
configuration information including the automatic configuration backups of the
disk group are removed. Disk group configuration backups are discussed later in
this course. Because you cannot remove the last disk in a disk group, destroying a
disk group is the only method to free the last disk in a disk group for reuse. A disk
group cannot be destroyed if any volumes in that disk group are in use or contain
mounted file systems. The bootdg disk group cannot be destroyed.
Caution: Destroying a disk group can result in data loss. Only destroy a disk
group if you are sure that the volumes and data in the disk group are not needed.
To destroy a disk group from the command line, use the vxdg destroy
command.

Note: You can bring back a destroyed disk group by importing it with its dgid if
its disks have not been reused for other purposes.


Labs and solutions for this lesson are located on the following pages:
Appendix A provides complete lab instructions and solutions.
Lab 3: Creating a Volume and File System, page A-71

Lesson 4
Working with Volumes with Different Layouts
Volume layouts
Each volume layout has different advantages and disadvantages. For example, a
volume can be extended across multiple disks to increase capacity, mirrored on
another disk to provide data redundancy, or striped across multiple disks to
improve I/O performance. The layouts that you choose depend on the levels of
performance and availability required by your system.

Concatenated layout
A concatenated volume layout maps data in a linear manner onto one or more
subdisks in a plex. Subdisks do not have to be physically contiguous and can
belong to more than one VM disk. Storage is allocated completely from one
subdisk before using the next subdisk in the span. Data is accessed in the
remaining subdisks sequentially until the end of the last subdisk.
For example, a 12 GB concatenated volume can logically map its address space
across subdisks on different disks. Addresses 0 GB to 8 GB of the volume address
space map to the first, 8-gigabyte subdisk, and addresses 8 GB to 12 GB map to
the second, 4-gigabyte subdisk. An address offset of 10 GB therefore maps to an
address offset of 2 GB in the second subdisk.
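This address arithmetic can be sketched in a few lines of Python (illustrative only; sizes and offsets are in GB, and the function name is invented for this example):

```python
def concat_map(volume_offset, subdisk_sizes):
    """Map a concatenated-volume offset to (subdisk index, offset in subdisk).

    Storage is allocated completely from one subdisk before the next,
    so we walk the subdisks in order until the offset falls inside one.
    """
    base = 0
    for i, size in enumerate(subdisk_sizes):
        if volume_offset < base + size:
            return i, volume_offset - base
        base += size
    raise ValueError("offset beyond end of volume")

# The 12 GB example from the text: an 8 GB subdisk followed by a 4 GB subdisk.
print(concat_map(10, [8, 4]))  # -> (1, 2): offset 2 GB in the second subdisk
```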

Striped layout
A striped volume layout maps data so that the data is interleaved, or allocated in
stripes, among two or more subdisks on two or more physical disks. Data is
allocated alternately and evenly to the subdisks of a striped plex.
The subdisks are grouped into columns. Each column contains one or more
subdisks and can be derived from one or more physical disks. To obtain the
maximum performance benefits of striping, you should not use a single disk to
provide space for more than one column.
All columns must be the same size. The size of a column is equal to the size of the
volume divided by the number of columns. The default number of columns in a
striped volume is based on the number of disks in the disk group.
Data is allocated in equal-sized units, called stripe units, that are interleaved
between the columns. Each stripe unit is a set of contiguous blocks on a disk. The
stripe unit size can be in units of sectors, kilobytes, megabytes, or gigabytes. The
default stripe unit size is 64K, which provides adequate performance for most
general purpose volumes. Performance of an individual volume may be improved
by matching the stripe unit size to the I/O characteristics of the application using
the volume.
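The column and stripe-unit arithmetic described above can be sketched as follows (an illustrative Python model, not VxVM code; the 128-block stripe unit, equivalent to 64K with 512-byte blocks, and the 3-column layout are example values):

```python
def stripe_map(block, stripe_unit, ncols):
    """Map a striped-volume block number to (column, block offset in column).

    stripe_unit is in blocks. Consecutive stripe units are interleaved
    round-robin across the columns.
    """
    stripe_no, within = divmod(block, stripe_unit)
    full_stripes, col = divmod(stripe_no, ncols)
    return col, full_stripes * stripe_unit + within

# Column size equals volume size divided by the number of columns.
volume_size_gb = 12
ncols = 3
column_size_gb = volume_size_gb // ncols  # 4 GB per column

print(stripe_map(384, 128, 3))  # -> (0, 128): second stripe unit of column 0
```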

Mirrored layout
By adding a mirror to a concatenated or striped volume, you create a mirrored
layout. A mirrored volume layout consists of two or more plexes that duplicate the
information contained in a volume. Each plex in a mirrored layout contains an
identical copy of the volume data. In the event of a physical disk failure and when
the plex on the failed disk becomes unavailable, the system can continue to operate
using the unaffected mirrors.
Although a volume can have a single plex, at least two plexes are required to
provide redundancy of data. Each of these plexes must contain disk space from
different disks to achieve redundancy.
Volume Manager uses true mirrors, which means that all copies of the data are the
same at all times. When a write occurs to a volume, all plexes must receive the
write before the write is considered complete.
Distribute mirrors across controllers to eliminate the controller as a single point of failure.
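The true-mirror write semantics can be sketched as a toy model (illustrative only): every plex receives a write before the write call returns, so afterwards a read can be satisfied from any plex.

```python
class MirroredVolume:
    """Toy model of a true mirror: every plex holds identical data,
    and a write is complete only when all plexes have received it."""
    def __init__(self, size, nplexes=2):
        self.plexes = [bytearray(size) for _ in range(nplexes)]

    def write(self, offset, data):
        for plex in self.plexes:  # all copies updated before returning
            plex[offset:offset + len(data)] = data

    def read(self, offset, length, plex_index=0):
        return bytes(self.plexes[plex_index][offset:offset + length])

vol = MirroredVolume(16, nplexes=2)
vol.write(4, b"DATA")
# Both plexes return the same bytes, so either can satisfy the read.
print(vol.read(4, 4, plex_index=0), vol.read(4, 4, plex_index=1))
```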

RAID-5 layout
A RAID-5 volume layout has the same attributes as a striped plex, but one column
in each stripe is used for parity. Parity provides redundancy.
Parity is a calculated value used to reconstruct data after a failure. While data is
being written to a RAID-5 volume, parity is calculated by performing an exclusive
OR (XOR) procedure on the data. The resulting parity is then written to the
volume. If a portion of a RAID-5 volume fails, the data that was on that portion of
the failed volume can be re-created from the remaining data and parity
information.
RAID-5 volumes keep a copy of the data and calculated parity in a plex that is
striped across multiple disks. Parity is spread equally across columns. Given a
five-column RAID-5 where each column is 1 GB in size, the RAID-5 volume size
is 4 GB. An amount of space equivalent to one column is devoted to parity; the
remaining space is used for data.


The default stripe unit size for a RAID-5 volume is 16K. Each column must be the
same length but may be made from multiple subdisks of variable length. Subdisks
used in different columns must not be located on the same physical disk.
RAID-5 requires a minimum of three disks for data and parity. When implemented
as recommended, an additional disk is required for the log.
RAID-5 cannot be mirrored.
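The XOR parity calculation and reconstruction described above can be demonstrated directly. This is a sketch of the parity principle, not Storage Foundation code; the four data blocks stand in for the four data columns of the five-column example:

```python
from functools import reduce

def xor_blocks(blocks):
    """XOR equal-length byte blocks together (the RAID-5 parity operation)."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)

data = [b"AAAA", b"BBBB", b"CCCC", b"DDDD"]  # four data columns
parity = xor_blocks(data)                     # written to the parity column

# Lose one column; rebuild it from the surviving data plus parity.
surviving = data[:2] + data[3:]
rebuilt = xor_blocks(surviving + [parity])
print(rebuilt)  # -> b'CCCC'
```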

Comparing volume layouts

Concatenation: Advantages
Better utilization of free space: Concatenation removes the restriction on size
of storage devices imposed by physical disk size. It also enables better
utilization of free space on disks by providing for the ordering of available
discrete disk space on multiple disks into a single addressable volume.
Simplified administration: System administration complexity is reduced
because making snapshots and mirrors uses any size space, and volumes can be
increased in size by any available amount.

Concatenation: Disadvantages
No protection against disk failure: Concatenation does not protect against disk
failure. A single disk failure results in the failure of the entire volume.
Striping: Advantages
Improved performance through parallel data transfer: Improved
performance is obtained by increasing the effective bandwidth of the I/O path
to the data. This may be achieved by a single volume I/O operation spanning
across a number of disks or by multiple concurrent volume I/O operations to
more than one disk at the same time.
Load-balancing: Striping is also helpful in balancing the I/O load from
multiuser applications across multiple disks.

Striping: Disadvantages
No redundancy: Striping alone offers no redundancy or recovery features.
Disk failure: Striping a volume increases the chance that a disk failure results
in failure of that volume. For example, if you have three volumes striped
across two disks, and one of the disks is used by two of the volumes, then if
that one disk goes down, both volumes go down.

Mirroring: Advantages
Improved availability: With concatenation or striping, failure of any one disk
makes the entire plex unusable. With mirroring, data is protected against the
failure of any one disk. Mirroring improves the availability of a striped or
concatenated volume.
Improved read performance: Reads benefit from having multiple places
from which to read the data.

Mirroring: Disadvantages
Requires more disk space: Mirroring requires twice as much disk space,
which can be costly for large configurations. Each mirrored plex requires
enough space for a complete copy of the volume's data.
Slightly slower write performance: Writing to volumes is slightly slower,
because multiple copies have to be written in parallel. The overall time the
write operation takes is determined by the time needed to write to the slowest
disk involved in the operation.
The slower write performance of a mirrored volume is not generally significant
enough to decide against its use. The benefit of the resilience that mirrored
volumes provide outweighs the performance reduction.
RAID-5: Advantages
Redundancy through parity: With a RAID-5 volume layout, data can be re-
created from remaining data and parity in case of the failure of one disk.
Requires less space than mirroring: RAID-5 stores parity information, rather
than a complete copy of the data.
Improved read performance: RAID-5 provides similar improvements in read
performance as in a normal striped layout.
Fast recovery through logging: RAID-5 logging minimizes recovery time in case of disk failure.
RAID-5: Disadvantages
Slow write performance: The performance overhead for writes can be
substantial, because a write can involve much more than simply writing to a
data block. A write can involve reading the old data and parity, computing the
new parity, and writing the new data and parity. If you have more than twenty
percent writes, do not use RAID-5.
Very poor performance after a disk failure: After one column fails, all I/O
performance goes down. This is not the case with mirroring, where a disk
failure does not have any significant effect on performance.
Creating volumes with various layouts

Using CLI to create volumes with various layouts


To specify different volume layouts while creating a volume from the command
line using the vxassist make command, you use the layout attribute. If you do
not specify the layout attribute, by default, vxassist creates a concatenated
volume that uses one or more sections of disk space. The layout=striped attribute
designates a striped layout and the layout=mirror-concat or the
layout=mirror-stripe attributes designate a mirrored volume layout. Note that you
can also use the layout=mirror attribute to create a mirrored volume. However,
layout=mirror may result in the creation of layered volumes. Layered volumes are
covered in detail later in this lesson.

Note: To guarantee that a concatenated volume is created, include the
layout=nostripe attribute in the vxassist make command. Without the
layout attribute, the default layout is used, which may have been changed
by the creation of the /etc/default/vxassist file.

The following additional attributes are used with the striped volume layout:
ncol=n designates the number of stripes, or columns, across which the volume
is created. This attribute has many aliases. For example, you can also use
nstripe=n or stripes=n.

The minimum number of stripes in a volume is 2 and the maximum is 8. You
can edit these minimum and maximum values in
/etc/default/vxassist using the min_columns and max_columns
attributes.
stripeunit=size specifies the size of the stripe unit to be used. The default is
64K.
The following additional attributes are used with the mirrored volume layout:
To specify more than two mirrors, you add the nmirror attribute.
When creating a mirrored volume, the volume initialization process requires
that the mirrors be synchronized. The vxassist command normally waits for
the mirrors to be synchronized before returning to the system prompt. To run
the process in the background, you add the -b option.

Estimating volume size


The vxassist maxsize command can determine the largest possible size for a
volume that can currently be created with a given set of attributes. This command
does not create the volume but returns an estimate of the maximum volume size.
The output value is displayed in sectors, by default.
vxassist -g appdg maxsize layout=stripe ncol=2
Maximum volume size: 14389248 (7026Mb)
If the volume with the specified attributes cannot be created, an error message is
returned:
VxVM vxassist ERROR V-5-1-752 No volume can be created
within the given constraints
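The maxsize output above reports sectors, with a megabyte figure in parentheses. Assuming the common 512-byte sector (the actual sector size is platform-dependent), the conversion can be checked with simple arithmetic:

```python
SECTOR_BYTES = 512  # assumption: 512-byte sectors, as on Solaris

def sectors_to_mb(sectors):
    """Convert a sector count to whole megabytes."""
    return sectors * SECTOR_BYTES // (1024 * 1024)

print(sectors_to_mb(14389248))  # -> 7026, matching "(7026Mb)" above
```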

Creating volumes with the maximum possible size


With SF 6.x, you can also create a volume of the maximum possible size using a
single command:
vxassist -g diskgroup make volume_name maxsize[=length] \
attributes
You can also provide an upper limit for the maximum size by specifying the
maxsize=length parameter. If the maximum possible size is higher than this upper
limit, the volume is created using the upper limit as the volume length. If the
maximum possible size is smaller than this limit, the volume is created with the
maximum possible size.

Volume creation examples using the CLI
While creating a concatenated volume, the vxassist command attempts to
locate sufficient contiguous space on one disk for the volume. However, if
necessary, the volume is spanned across multiple disks. VxVM selects the disks on
which to create the volume unless you designate the disks by adding the disk
media names to the end of the command.
To stripe the volume across specific disks, you can specify the disk media names at
the end of the command. The order in which disks are listed on the command line
does not imply any ordering of disks within the volume layout.
To exclude a disk or list of disks, add an exclamation point (!) before the disk
media names. For example, !appdg01 specifies that the disk appdg01 should
not be used to create the volume.
Creating a mirrored and logged volume
When you create a mirrored volume, you can add a dirty region log by adding the
logtype=drl attribute:
vxassist -g diskgroup [-b] make volume_name length \
layout=mirror-concat logtype=drl [nlog=n]
A log plex that consists of a single subdisk is created.
If you plan to mirror the log, you can add more than one log plex by specifying
a number of logs using the nlog=n attribute, where n is the number of logs.
vxassist -g appdg make appvol 5m layout=mirror-concat \
logtype=drl
Note: Dirty region logs are covered in a later lesson.
Allocating storage for volumes
Specifying storage attributes for volumes
VxVM selects the disks on which each volume resides automatically, unless you
specify otherwise. To create a volume on specific disks, you can designate those
disks when creating a volume. By specifying storage attributes when you create a
volume, you can:
Include specific disks, controllers, enclosures, targets, or trays to be used for
the volume.
Exclude specific disks, controllers, enclosures, targets, or trays from being
used for the volume.
Mirror volumes across specific controllers, enclosures, targets, or trays. (By
default, VxVM does not permit mirroring on the same disk.)
By specifying storage attributes, you can ensure a high availability environment.
For example, you can permit mirroring of a volume only on disks connected to
different controllers and eliminate the controller as a single point of failure.
To exclude a disk, controller, enclosure, target, or tray, you add the exclusion
symbol (!) before the storage attribute. For example, to exclude appdg02 from
volume creation, you use the format: !appdg02.

Note: When creating a volume, all storage attributes that you specify for use must
belong to the same disk group. Otherwise, VxVM does not use these
storage attributes to create a volume.

Labs and solutions for this lesson are located on the following pages:
Appendix A provides complete lab instructions and solutions.
Lab 4: Working with Volumes with Different Layouts, page A-93

Lesson 5: Making Configuration Changes
Administering mirrored volumes
Adding a mirror to a volume
If a volume was not originally created as a mirrored volume, or if you want to add
additional mirrors, you can add a mirror to an existing volume.
By default, a mirror is created with the same plex layout as the plex already in the
volume. For example, assume that a volume is composed of a single striped plex.
If you add a mirror to the volume, VxVM makes that plex striped, as well.
However, you can specify a different layout.
A mirrored volume requires at least two disks. You cannot add a mirror to a disk
that is already being used by the volume. A volume can have multiple mirrors, as
long as each mirror resides on separate disks.
Only disks in the same disk group as the volume can be used to create the new
mirror. Unless you specify the disks to be used for the mirror, VxVM
automatically locates and uses available disk space to create the mirror.
A volume can contain up to 32 plexes (mirrors); however, the practical limit is 31.
One plex should be reserved for use by VxVM for background repair operations.

Removing a mirror
When a mirror (plex) is no longer needed, you can remove it. You can remove a
mirror to provide free space, to reduce the number of mirrors, or to remove a
temporary mirror.
Caution: Removing a mirror results in loss of data redundancy. If a volume only
has two plexes, removing one of them leaves the volume unmirrored.
Migrating data to a new array
Without Storage Foundation, moving data from one array to another requires
downtime. Using Storage Foundation, you can mirror to a new array, ensure it is
stable, and then remove the plexes from the old array. No downtime is necessary.
This is useful in many situations, for example, if a company purchases a new array.
The high level steps for migrating data using Storage Foundation are listed on the
slide. Note that if you have multiple volumes on the old array, you would need to
repeat steps 6 to 9 for each volume. The following steps illustrate the commands
you need to use to perform the migration using a simple example where the appvol
volume in the appdg disk group is moved from the emc0 enclosure to the emc1
enclosure. To keep the example simple, only one LUN is used to mirror the simple
volume.
1 Set up LUNs on the new array.
2 Get the OS to detect the LUNs. For example, type devfsadm on a Solaris
system.
3 vxdisk scandisks new (for VxVM to recognize LUNs from the new
emc1 enclosure)
4 vxdisksetup -i emc1_dd1 (Repeat for each new LUN to be used in the
volume.)
5 vxdg -g appdg adddisk appdg02=emc1_dd1
6 vxassist -g appdg mirror appvol appdg02
7 Wait for the synchronization to complete.

8 vxvol -g appdg rdpol prefer appvol appvol-02 (appvol-02
is the new plex in the volume that is configured on the emc1 enclosure. Note
that setting read policies for mirrored volumes is explained in more detail later
in this lesson.)
9 After the testing period: vxplex -g appdg -o rm dis appvol-01
(appvol-01 is the original plex in the volume that was configured on the emc0
enclosure.)
10 vxdg -g appdg rmdisk appdg01 (appdg01 is the disk media name of
the old LUN from the emc0 enclosure.)
Note that the steps after you get Storage Foundation to recognize the LUNs in the
new array can be automated using the Move Volumes functionality that is
available with the Veritas Operations Manager Storage Provisioning add-on. This
wizard moves all VxVM volumes from one enclosure to another in a single
operation.

Adding a mirror: CLI
To add a mirror onto a specific disk, you specify the disk name in the command:
vxassist -g appdg mirror appvol appdg03
Removing a mirror: CLI
To remove a mirror, use vxassist remove mirror as shown on the slide. If
you specify a disk media name with an exclamation mark in front, the plex that
contains a subdisk on that disk is removed. To remove a specific plex, you can also
use the following vxplex command specifying the name of the plex you want to
remove:
vxplex -g diskgroup -o rm dis plex_name
Logging in VxVM
By enabling logging, VxVM tracks changed regions of a volume. Log information
can then be used to reduce plex synchronization times and speed the recovery of
volumes after a system failure. Logging is an optional feature, but is highly
recommended, especially for large volumes.
Dirty region logging
Dirty region logging (DRL) is used with mirrored volume layouts. DRL keeps
track of the regions that have changed due to I/O writes to a mirrored volume.
Prior to every write, a bit is set in a log to record the area of the disk that is being
changed. In case of system failure, DRL uses this information to recover only the
portions of the volume that need to be recovered.
If DRL is not used and a system failure occurs, all mirrors of the volumes must be
restored to a consistent state by copying the full contents of the volume between its
mirrors. This process can be lengthy and I/O intensive.
When you enable logging on a mirrored volume, one log plex is created by default.
The log plex uses space from disks already used for that volume, or you can
specify which disk to use. To enhance performance, you should consider placing
the log plex on a disk that is not already in use by the volume.

How does DRL work?


In the dirty region log:
A small number of bytes of the DRL are reserved for internal use. The
remaining bytes are used for the DRL bitmap.
The bytes are divided into two bitmaps: an active bitmap and a recovery
bitmap.
Each bit in the active bitmap maps to a single region of the volume.
A maximum of 2048 dirty regions per system is allowed by default.

How the bitmaps are used in dirty region logging


Both bitmaps are zeroed when the volume is started initially, after a clean
shutdown. As regions transition to dirty, the corresponding bits in the active
bitmap are set before the writes to the volume occur.
If the system crashes, the active map is ORed with the recovery map.
Mirror resynchronization is now limited to the dirty bits in the recovery map.
The active map is simultaneously reset, and normal volume I/O is permitted.
Usage of two bitmaps in this way allows VxVM to handle multiple system crashes.
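The two-bitmap mechanism above can be sketched as a toy model (illustrative only, with bitmaps stored in Python integers): bits are set in the active map before writes, and a crash ORs the active map into the recovery map, which then limits resynchronization to the dirty regions.

```python
class DirtyRegionLog:
    """Toy model of DRL's active and recovery bitmaps."""
    def __init__(self):
        self.active = 0     # regions with writes in flight
        self.recovery = 0   # regions needing resync after a crash

    def before_write(self, region):
        self.active |= 1 << region    # bit set before the write occurs

    def crash(self):
        self.recovery |= self.active  # active map is ORed into recovery map
        self.active = 0               # active map reset; normal I/O resumes

    def regions_to_resync(self):
        return [r for r in range(self.recovery.bit_length())
                if self.recovery >> r & 1]

log = DirtyRegionLog()
log.before_write(3)
log.before_write(7)
log.crash()
print(log.regions_to_resync())  # -> [3, 7]: only dirty regions are resynced
```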
Adding a log to a volume
To create a volume that is mirrored and logged:
vxassist -g appdg make appvol 5m layout=mirror-concat \
logtype=drl

Dirty region log considerations:


Multiple logs can be added to mirror the DRL, up to a maximum of one log per
data plex in the volume.
The size of the DRL is determined by Volume Manager based on the length of
the volume.
DRL adds a small I/O overhead for most write access patterns.
DRL should not be used for:
Mirrored boot disks
Volumes that have a data change object (DCO)


Data change objects are used with the FastResync feature.
Data volumes for databases that support the SmartSync feature of Volume
Manager
Redo log volumes and other volumes that are used primarily for sequential
writes may benefit from using a sequential DRL instead of a standard DRL
(logtype=drlseq).

Volume read policies with mirroring
One of the benefits of mirrored volumes is that you have more than one copy of the
data from which to satisfy read requests. The read policy for a volume determines
the order in which plexes are accessed during read I/O operations.
Round robin: VxVM reads each plex in turn in a round-robin manner for
each nonsequential I/O detected. Sequential access causes only one plex to be
accessed in order to take advantage of drive or controller read-ahead caching
policies. If a read is within 256K of the previous read, then the read is sent to
the same plex.
Preferred plex: VxVM reads first from a plex that has been named as the
preferred plex. Read requests are satisfied from one specific plex, presumably
the plex with the highest performance. If the preferred plex fails, another plex
is accessed. For example, if you are mirroring across disk arrays with
significantly different performance specifications, setting the plex on the faster array as the preferred plex would increase performance.
Selected plex: This is the default read policy. Under the selected plex policy,
Volume Manager chooses an appropriate read policy based on the plex
configuration to achieve the greatest I/O throughput. If the mirrored volume
has exactly one enabled striped plex, the read policy defaults to that plex;
otherwise, it defaults to a round-robin read policy.
Siteread: VxVM reads preferentially from plexes at the locally defined site.
This is the default policy for volumes in disk groups where site consistency has
been enabled.
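The "selected plex" decision described above can be sketched as follows. This is an illustrative model of the documented rule, not VxVM code; the plex names are hypothetical:

```python
def selected_policy(plexes):
    """Sketch of the 'selected plex' default: if the mirrored volume has
    exactly one enabled striped plex, prefer it; otherwise read round-robin.
    Each plex is described as (name, layout, enabled)."""
    striped = [name for name, layout, enabled in plexes
               if layout == "striped" and enabled]
    if len(striped) == 1:
        return ("PREFER", striped[0])
    return ("ROUND", None)

print(selected_policy([("appvol-01", "striped", True),
                       ("appvol-02", "concat", True)]))  # -> ('PREFER', 'appvol-01')
print(selected_policy([("appvol-01", "concat", True),
                       ("appvol-02", "concat", True)]))  # -> ('ROUND', None)
```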

Changing the volume read policy: CLI
vxvol -g diskgroup rdpol round volume_name
vxvol -g diskgroup rdpol prefer volume_name preferred_plex
vxvol -g diskgroup rdpol select volume_name

Note: Before configuring the siteread policy, the Site Awareness feature must be
configured by assigning hosts and LUNs to different sites. Note that setting
the siteread policy on a volume has no impact if the site name has not been
set for the host.

You can also use the vxprint command to observe the read policy of a mirrored
volume as shown in the following output extracts. Note that the fields related to the
read policy are displayed in bold font for emphasis:
The vxprint output with the default read policy:


V NAME   RVG/VSET/CO KSTATE  STATE  LENGTH  READPOL PREFPLEX  UTYPE
v appvol -           ENABLED ACTIVE 2097152 SELECT  -         fsgen
The vxprint output after the read policy is changed to preferred plex:
v appvol -           ENABLED ACTIVE 2097152 PREFER  appvol-02 fsgen

Resizing a volume and a file system
Resizing a volume
If users require more space on a volume, you can increase the size of the volume.
If a volume contains unused space that you need to use elsewhere, you can shrink
the volume.
When the volume size is increased, sufficient disk space must be available in the
disk group to support extending the existing volume layout. A volume with
concatenated layout can be grown by any amount on any disk within the disk
group, whereas a volume with a striped layout can be grown only if the subdisks
remain the same length and as many disks as there are stripes are available. When
increasing the size of a volume, VxVM assigns the necessary new space from
available disks. By default, VxVM uses space from any disk in the disk group,
unless you define specific disks.
Resizing a volume with a file system


Volumes and file systems are separate virtual objects. When a volume is resized,
the size of the raw volume is changed. If a file system exists that uses the volume,
the file system must also be resized. When you resize a volume using VOM or the
vxresize command, the file system is also resized.

Resizing volumes with other types of data


For volumes containing data other than file systems, such as raw database data,
you must ensure that the data manager application can support the resizing of the
data device with which it has been configured.
Resizing a volume and file system: Methods
To resize a volume from the command line, you can use either the vxassist
command or the vxresize command. Both commands can expand or reduce a
volume to a specific size or by a specified amount of space, with one significant
difference:
vxresize automatically resizes a volume's file system.
vxassist does not resize a volume's file system.
When using vxassist, you must resize the file system separately by using the
fsadm command.
When you expand a volume, both commands automatically locate available disk
space unless you designate specific disks to use. When you shrink a volume, the
unused space becomes free space in the disk group.
When you resize a volume, you can specify the length of a new volume in sectors,
kilobytes, megabytes, or gigabytes. The unit of measure is added as a suffix to the
length (s, k, m, or g). If no unit is specified, the default unit is sectors.
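The length-argument conventions (unit suffixes, and the relative +/- forms described in the next topic) can be modeled with a small parser. This is a sketch of the documented argument syntax only, not the vxresize implementation, and it assumes 512-byte sectors:

```python
# Sectors per unit, assuming 512-byte sectors (platform-dependent).
UNITS = {"s": 1, "k": 2, "m": 2048, "g": 2097152}

def new_length(current_sectors, spec):
    """Sketch of vxresize-style length arguments: a leading + or - makes the
    change relative; a trailing s/k/m/g selects the unit (default: sectors)."""
    sign = 0
    if spec[0] in "+-":
        sign = 1 if spec[0] == "+" else -1
        spec = spec[1:]
    unit = UNITS.get(spec[-1])
    value = int(spec[:-1]) * unit if unit else int(spec)
    if sign:
        return current_sectors + sign * value
    return value

GB = UNITS["g"]
print(new_length(1 * GB, "5g") == 5 * GB)   # grow to 5 GB
print(new_length(5 * GB, "+1g") == 6 * GB)  # grow by 1 GB
print(new_length(5 * GB, "-1g") == 4 * GB)  # shrink by 1 GB
```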

Resizing a volume and file system: CLI
The new_length operand can begin with a plus sign (+) to indicate that the new
length is added to the current volume length. A minus sign (-) indicates that the
new length is subtracted. -b runs the process in the background. The -x switch
restricts the change to an expand operation and the -s switch restricts the change
to a shrink operation.
The vxassist maxgrow command can be used to get an estimate of how much
an existing volume can be expanded. The output indicates the amount by which the
volume can be increased and the total size to which the volume can grow. The
output is displayed in sectors, by default.
vxassist -g datadg maxgrow datavol
Volume datavol can be extended by 366592 to 1677312 (819Mb)
Note that this command does not change the size of the volume.
The ability to expand or shrink a file system depends on the file system type and
whether the file system is mounted or unmounted. The following table provides
some examples:

File System Type   Mounted FS          Unmounted FS
VxFS               Expand and shrink   Not allowed
UFS (Solaris)      Expand only         Expand only
HFS (HP-UX)        Not allowed         Expand only

Example: The size of the volume myvol is 1 GB. To extend myvol to 5 GB:
vxresize -g mydg myvol 5g
To extend myvol by an additional 1 GB:
vxresize -g mydg myvol +1g
To shrink myvol back to a length of 4 GB:
vxresize -g mydg myvol 4g
To shrink myvol by an additional 1 GB:
vxresize -g mydg myvol -1g

Resizing a volume only: vxassist


The vxassist command can be used to resize a volume only as follows:
vxassist -g diskgroup {growto|growby|shrinkto|shrinkby} \
volume_name size
growto Increases volume to specified length
growby Increases volume by specified amount
shrinkto Reduces volume to specified length
shrinkby Reduces volume by specified amount
You should use this command only if the volume does not include a file system or
if you are resizing the volume and the file system separately for a specific purpose.
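The to/by distinction among the four operations reduces to the following arithmetic on the volume length (an illustrative sketch of the documented semantics, not vxassist code):

```python
def vxassist_resize(current, op, size):
    """Sketch of the four vxassist resize operations on a volume length."""
    if op == "growto":
        return size               # grow to an absolute length
    if op == "growby":
        return current + size     # grow by a relative amount
    if op == "shrinkto":
        return size               # shrink to an absolute length
    if op == "shrinkby":
        return current - size     # shrink by a relative amount
    raise ValueError(op)

print(vxassist_resize(100, "growby", 20))    # -> 120
print(vxassist_resize(100, "shrinkto", 80))  # -> 80
```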

Resizing a file system only: fsadm


You may need to resize a file system to accommodate a change in use, for
example, when there is an increased need for space in the file system. You may
also need to resize a file system as part of a general reorganization of disk
usage, for example, when a large file system is subdivided into several smaller
file systems. You can resize a VxFS file system while the file system remains
mounted by using the fsadm command:
fsadm [-b newsize] [-r rawdev] mount_point
Using fsadm to resize a file system does not automatically resize the underlying
volume. When you expand a file system, the underlying device must be large
enough to contain the new larger file system.

Resizing a Volume Manager disk to match a resized LUN
When you resize a LUN in the hardware, you should resize the VxVM disk
corresponding to that LUN. You can use vxdisk resize to update disk headers
and other VxVM structures to match a new LUN size. This command does not
resize the underlying LUN itself.
Moving data between systems
Example: Disk groups and high availability
The example in the diagram represents a high availability environment.
In the example, Computer sym1 and Computer sym2 each have their own bootdg
on their own private SCSI bus. The two hosts are also on a shared SCSI bus. On
the shared bus, each host has a disk group, and each disk group has a set of VxVM
disks and volumes. There are additional disks on the shared SCSI bus that have not
been added to a disk group.
If Computer sym1 fails, then Computer sym2, which is on the same SCSI bus as
the appdg disk group, can take ownership or control of the disk group and all of its
components.
Deporting a disk group
A deported disk group is a disk group over which management control has been
surrendered. The objects within the disk group cannot be accessed, its volumes are
unavailable, and the disk group configuration cannot be changed. (You cannot
access volumes in a deported disk group because the directory containing the
device nodes for the volumes is deleted upon deport.) To resume management of
the disk group, it must be imported.
A disk group cannot be deported if any volumes in that disk group are in use.
Before you deport a disk group, you must unmount file systems and stop any
application using the volumes in the disk group.

Deporting and specifying a new host


When you deport a disk group using VOM or CLI commands, you have the option
to specify a new host to which the disk group is imported at reboot. If you know
the name of the host to which the disk group will be imported, then you should
specify the new host during the operation. If you do not specify the new host, then
the disks could accidentally be added to another disk group, resulting in data loss.
You cannot specify a new host using the vxdiskadm utility.
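As a sketch, reusing the sym1/sym2 host names from the earlier diagram:

```shell
# Deport appdg and designate host sym2 as the host that imports it at reboot
vxdg -h sym2 deport appdg
```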

Deporting and renaming


When you deport a disk group using VOM or CLI commands, you also have the
option to rename the disk group when you deport it. You cannot rename a disk
group when deporting using the vxdiskadm utility.

Importing a deported disk group
During a disk group import operation, the volume device files
(/dev/vx/[r]dsk/diskgroup/volume_name) are created and, with SF
versions 5.1 SP1 and later, the volumes are automatically started.
Importing and renaming
A deported disk group cannot be imported if another disk group with the same
name has been created since the disk group was deported. You can import and
rename a disk group at the same time.
Importing and clearing host locks
When a disk group is created, the system writes a lock on all disks in the disk
group. The lock ensures that dual-ported disks (disks that can be accessed
simultaneously by two systems) are not used by both systems at the same time. If a
system crashes, the locks stored on the disks remain, and if you try to import a disk
group containing those disks, the import fails.
Importing as temporary
A temporary import does not persist across reboots. A temporary import can be
useful, for example, if you need to perform administrative operations on the
temporarily imported disk group.
Forcing an import
A disk group import fails if the VxVM configuration daemon cannot find all of the
disks in the disk group. If the import fails because a disk has failed, you can force
the import. Forcing an import should always be performed with caution.
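The import variants described above can be summarized with the appdg example (use the lock-clearing and force options with the cautions given in the text):

```shell
# Standard import
vxdg import appdg

# Clear stale host locks left by a crashed system
vxdg -C import appdg

# Temporary import: the autoimport flag is not set, so the import
# does not survive a reboot
vxdg -t import appdg

# Force the import when a failed disk prevents a normal import
vxdg -f import appdg
```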

How to deport a disk group
Before deporting a disk group, unmount all file systems used within the disk group
that is to be deported. You can also stop all volumes in the disk group to verify that
they are not being used:
umount mount_point
vxvol -g diskgroup stopall
After you deport a disk group, disks that were in the disk group have a state of
Deported. If the disk group was deported to another host, the disk state is Foreign.

How to import a disk group


With SF 5.1 SP1 and later, all volumes in the disk group are started automatically
during a disk group import by default. However, with earlier versions of SF, or if
the autostartvolumes parameter is set to off, you must manually start all
volumes after you import a disk group from the command line.
A disk group must be deported from its previous system before it can be imported
to the new system. During the import operation, the system checks for host import
locks. If any locks are found, you are prompted to clear the locks.
To temporarily import a disk group, you use the -t option. This option does not
set the autoimport flag, which means that the import cannot survive a reboot.
To display all disk groups, including deported disk groups:
vxdisk -o alldgs list
DEVICE TYPE DISK GROUP STATUS
emc0_dd1 auto:cdsdisk appdg01 appdg online
emc0_dd2 auto:cdsdisk - (oradg) online
Renaming VxVM objects
Changing the disk media name
VxVM creates a unique disk media name for a disk when you add a disk to a disk
group. Sometimes you may need to change a disk name to reflect changes of
ownership or use of the disk. Renaming a disk does not change the physical disk
device name. The new disk name must be unique within the disk group.

Before you rename a VxVM object


Before you rename a VxVM object, you should carefully consider the change. For
example, VxVM names subdisks based on the disks on which they are located. A
disk named appdg01 contains subdisks that are named appdg01-01,
appdg01-02, and so on. Renaming a disk does not automatically rename its
subdisks. Similarly, renaming a volume does not automatically rename its plexes.
Volumes are not affected when subdisks are named differently from the disks.
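For example, to rename the disk appdg01 (the new name is illustrative):

```shell
# Rename the VxVM disk appdg01 within its disk group; the physical device
# name is unchanged, and existing subdisks (appdg01-01, ...) keep their names
vxedit -g appdg rename appdg01 appdg99
```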

Renaming a disk group
You cannot import or deport a disk group when the target system already has a
disk group of the same name. To avoid name collision or to provide a more
appropriate name for a disk group, you can rename a disk group.
To rename a disk group when moving it from one system to another, you
specify the new name during the deport or during the import operations.
To rename a disk group without moving the disk group, you must still deport
and reimport the disk group on the same system.
Note that renaming a disk group:
does not change the disk group ID (dgid).
may require modifying the file system table (for example, /etc/vfstab on
Solaris).
may require modifying applications, such as databases, using the volumes.


Using the CLI, for example, to rename the disk group appdg to oradg, you can
rename during the deport:
vxdg -n oradg deport appdg
vxdg import oradg
or rename during the import:
vxdg deport appdg
vxdg -n oradg import appdg
From the command line, if you need to restart all volumes in the disk group:
vxvol -g new_dg_name startall
For example:
vxvol -g oradg startall

Labs and solutions for this lesson are located on the following pages:
Appendix A provides complete lab instructions and solutions.
Lab 5: Making Configuration Changes, page A-103

Lesson 6
Administering File Systems
Benefits of using Veritas File System
A file system is simply a method for storing and organizing computer files and the
data they contain to make it easy to find and access them.
Veritas File System includes the following features:
Intent log
Veritas File System (VxFS) was the first commercial journaling file
system. With journaling, metadata changes are first written to a log (or
journal) and then to disk. Since changes do not need to be written in
multiple places, throughput is much faster, as the metadata is written
asynchronously.
VxFS provides fast recovery of a file system from system failure because
the recovery usually involves only a log replay.
Extent-based allocation
Extents allow disk I/O to take place in units of multiple blocks if storage is
allocated in consecutive blocks. This topic is analyzed in more detail in the
following pages.
Extent attributes
Extent attributes are the extent allocation policies associated with a file.
Online administration
Many file system administration tasks, such as backing up or resizing the
file system, can be performed while the file system is still mounted. Online
file system defragmentation is discussed later in this lesson.

Storage checkpoints
Backup and restore applications can leverage Storage Checkpoint, a disk- and
I/O-efficient copying technology for creating periodic frozen images of a file
system.
Multi-volume file system support
The multi-volume support feature allows several volumes to be represented by
a single logical object. This feature is used with the SmartTier feature.
SmartTier (previously known as Dynamic Storage Tiering)
The SmartTier feature allows you to configure policies that automatically
allocate storage from specific volumes for certain files, or relocate files by
running file relocation commands, which can improve performance for
applications that access specific types of files.
Improved database performance
Databases can be created on the character devices to achieve the same
performance as databases created on raw disks.


Performance tuning options
The VxFS file system supports extended mount options to specify
enhanced data integrity modes, enhanced performance modes, and temporary
file system modes. For more information on these modes of operation,
refer to the Veritas Storage Foundation Administrator's Guide.
VxFS provides superior performance for synchronous write applications.
VxFS supports files larger than two gigabytes and large file systems up to
256 terabytes.
Cross-platform data sharing

Cross-platform data sharing allows data to be serially shared among
heterogeneous systems where each system has direct access to the physical
devices that hold the data.
Access control lists (ACLs)
An access control list (ACL) stores a series of entries that identify specific
users or groups and their access privileges for a directory or file.
Quotas
VxFS supports quotas, which allocate per-user and per-group quotas and limit
the use of two principal resources: files and data blocks.
File change log
The VxFS file change log tracks changes to files and directories in a file
system.
The SmartMove feature
The information stored by Veritas File System about used and unused blocks is
used by Veritas Volume Manager to optimize mirror synchronization
operations.
Storage Foundation thin reclamation
The thin reclamation feature allows you to release free data blocks of a VxFS
file system to the free storage pool of a thin storage LUN. This feature is only
supported on file systems mounted on a VxVM volume.
Note: The Storage Foundation thin reclamation feature is not supported on the
Solaris x64 operating environment.
File system data compression
The file system data compression feature with SF 6.x aims to reduce the space
used by files, while retaining the accessibility of the files and being transparent
to applications.
File system deduplication
The Veritas file system deduplication feature is another new feature with SF
6.x that aims to maximize storage utilization. This feature scans the file
system, identifies the duplicate data and eliminates it without any continuous
cost.
File replication
Veritas File Replicator (VFR), which is available as an option to Storage
Foundation and included in the Veritas Replicator license, supports file-level
replication of application data. VFR tracks all updates to the file system and
periodically replicates these updates at the end of a configured time interval.
Cluster File System
Clustered file systems are an extension of VxFS that support concurrent direct
media access from multiple systems.

VxFS extent-based allocation
Similar to other file systems on UNIX platforms, VxFS uses index tables to store
location information about the blocks used for files. VxFS allocation
is extent-based as opposed to block-based.
Block-based allocation: File systems that use block-based allocation assign
disk space to a file one block at a time.
Extent-based allocation: File systems that use extent-based allocation assign
disk space in groups of contiguous blocks, called extents.
Veritas File System selects a contiguous range of file system blocks, called an
extent, for inclusion in a file. The number of blocks in an extent varies and is based
on either the I/O pattern of the application, or explicit requests by the user or
programmer. Extent-based allocation enables larger I/O operations to be passed to
the underlying drivers.
VxFS attempts to allocate each file in one extent of blocks. If this is not possible,
VxFS attempts to allocate all extents for a file close to each other.
Each file is associated with an index block, called an inode. In an inode, an extent
is represented as an address-length pair, which identifies the starting block address
and the length of the extent in logical blocks. This enables the file system to
directly access any block of the file.
VxFS automatically selects an extent size by using a default allocation policy that
is based on the size of I/O write requests. The default allocation policy attempts to
balance two goals:
Optimum I/O performance through large allocations

Minimal file system fragmentation through allocation from space available in
the file system that best fits the data
The first extent allocated is large enough for the first write to the file. Typically,
the first extent is the smallest power of 2 that is larger than the size of the first
write, with a minimum extent allocation of 8K. Additional extents are
progressively larger, doubling the size of the file with each new extent. This
method reduces the total number of extents used by a single file.
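The sizing policy above can be sketched in a few lines of shell (an illustration of the description, not VxFS code; it assumes the common reading that the first extent is the smallest power of 2 at least as large as the first write, with the stated 8K floor):

```shell
#!/bin/sh
# First-extent size in KB for a given first-write size in KB:
# the smallest power of 2 >= the write size, with a minimum of 8K.
first_extent_kb() {
  write_kb=$1
  ext=8                          # 8K minimum extent allocation
  while [ "$ext" -lt "$write_kb" ]; do
    ext=$((ext * 2))             # each candidate doubles the previous one
  done
  echo "$ext"
}

first_extent_kb 3    # a 3K first write still gets the 8K minimum
first_extent_kb 100  # a 100K first write gets a 128K first extent
```

Additional extents then continue doubling, which is why a single large file ends up with relatively few extents.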

Using Veritas File System commands
You can generally use Veritas File System (VxFS) as an alternative to other disk-
based, OS-specific file systems, except for the file systems used to boot the
system. File systems used to boot the system are mounted read-only in the boot
process, before the VxFS driver is loaded.
VxFS can be used in place of:
UNIX File System (UFS) on Solaris, except for root, /usr, /var, and /opt.
Hierarchical File System (HFS) on HP-UX, except for /stand.
Journaled File System (JFS) and Enhanced Journaled File System (JFS2) on
AIX, except for root and /usr.
Extended File System Version 2 (EXT2) and Version 3 (EXT3) on Linux,
except for root, /boot, /etc, /lib, /var, and /usr.
Location of VxFS commands


Most Veritas file system commands are located in /opt/VRTS/bin, which must
be included in the PATH environment variable. Other locations where Veritas file
system commands can be found are listed in the following table:

Platform Location of VxFS Commands


Solaris /opt/VRTSvxfs/sbin, /usr/lib/fs/vxfs, /etc/fs/vxfs
HP-UX /sbin/fs
AIX /opt/VRTSvxfs/sbin, /usr/lib/fs/vxfs, /etc/fs/vxfs
Linux /usr/lib/fs/vxfs
General file system command syntax
To access VxFS-specific versions, or wrappers, of standard commands, you use
the Virtual File System switchout mechanism followed by the file system type,
vxfs. The switchout mechanism directs the system to search the appropriate
directories for VxFS-specific versions of commands.

Platform File System Switchout


Solaris -F vxfs
HP-UX -F vxfs
AIX -V vxfs (or -v vxfs when used with crfs)
Linux -t vxfs

Note: The Linux platform includes a native fsadm command in the


/usr/sbin directory. If this path is listed before the /opt/VRTS/bin
directory in the PATH environment variable, provide the full pathname of
the fsadm command (/opt/VRTS/bin/fsadm) to use the VxFS-
specific version of this command.
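Putting the switchout to use, for example on Linux (the appdg/appvol names and /data mount point are assumptions):

```shell
# Create and mount a VxFS file system using the Linux switchout;
# use -F vxfs on Solaris and HP-UX, -V vxfs on AIX
mkfs -t vxfs /dev/vx/rdsk/appdg/appvol
mount -t vxfs /dev/vx/dsk/appdg/appvol /data
```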

Using VxFS commands by default


If you do not use the switchout mechanism, then the file system type is taken from
the default specified in the OS-specific default file system file. If you want Veritas
File System to be your default file system type, then you change the default file
system file to contain vxfs.

Platform Default File System File


Solaris /etc/default/fs
HP-UX /etc/default/fs
AIX /etc/vfs
Linux /etc/default/fs
Using mkfs command options
You can set a variety of file system properties when you create a Veritas file
system by adding VxFS-specific options to the mkfs command.
Here are some example outputs from a Linux platform before and after a Veritas
file system is created:
mkfs -t vxfs -o N /dev/vx/rdsk/appdg/appvol
version 9 layout
4194304 sectors, 2097152 blocks of size 1024, log size
16384 blocks
rcq size 1024 blocks
largefiles supported
mkfs -t vxfs /dev/vx/rdsk/appdg/appvol


mkfs -m /dev/vx/rdsk/appdg/appvol
mkfs -t vxfs -o
bsize=1024,version=9,inosize=256,logsize=16384,rcqsize=10
24,largefiles /dev/vx/rdsk/appdg/appvol 4194304

Identifying file system type
If you do not know the file system type of a particular file system, you can
determine the file system type by using the fstyp command. You can use the
fstyp command to describe either a mounted or unmounted file system.

Identifying free space


To report the number of free disk blocks and inodes for a VxFS File System, you
use the df command. The df command displays the number of free blocks and
free inodes in a file system or directory by examining the counts kept in the
superblocks. Extents smaller than 8K may not be usable for all types of allocation,
so the df command does not count free blocks in extents below 8K when reporting
the total number of free blocks.
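For example, assuming PATH includes /opt/VRTS/bin and the appdg/appvol volume used earlier is mounted at /data:

```shell
# Report the file system type of a mounted or unmounted device
fstyp /dev/vx/dsk/appdg/appvol

# Report free blocks and free inodes for the VxFS file system
df -t vxfs /data          # Linux switchout; use -F vxfs on Solaris/HP-UX
```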
Logging in VxFS
Role of the intent log
A file system may be left in an inconsistent state after a system failure. Recovery
of structural consistency requires examination of file system metadata
structures. Veritas File System provides fast file system recovery after a system
failure by using a tracking feature called intent logging, or journaling. Intent
logging is the process by which intended changes to file system metadata are
written to a log before changes are made to the file system structure. Once the
intent log has been written, the other updates to the file system can be written in
any order. In the event of a system failure, the VxFS fsck utility replays the intent
log to nullify or complete file system operations that were active when the system
failed.
Traditionally, the length of time taken for recovery using fsck was proportional to
the size of the file system. For large disk configurations, running fsck is a time-
consuming process that checks, verifies, and corrects the entire file system.
The VxFS version of the fsck utility performs an intent log replay to recover a
file system without completing a full structural check of the entire file system. The
time required for log replay is proportional to the log size, not the file system size.
Therefore, the file system can be recovered and mounted seconds after a system
failure. Intent log recovery is not readily apparent to users or administrators, and
the intent log can be replayed multiple times with no adverse effects.
Note: Replaying the intent log may not completely recover the damaged file
system structure if the disk suffers a hardware failure. Such situations may require
a complete system check using the VxFS fsck utility.
Maintaining file system consistency
You use the VxFS-specific version of the fsck command to check the consistency
of and repair a VxFS file system. The fsck utility replays the intent log by
default, instead of performing a full structural file system check, which is usually
sufficient to set the file system state to CLEAN. You can also use the fsck utility
to perform a full structural recovery in the unlikely event that the log is unusable.
The syntax for the fsck command is:
fsck [fstype] [generic_options] [-y|-Y] [-n|-N] \
[-o full,nolog] special
For a complete list of generic options, see the fsck(1m) manual page. Some of
the generic options include:

Option Description
-m Checks, but does not repair, a file system before mounting


-n|N Assumes a response of no to all prompts by fsck (This option
does not replay the intent log and performs a full fsck.)
-V Echoes the expanded command line but does not execute the
command
-y|Y Assumes a response of yes to all prompts by fsck (If the file
system requires a full fsck after the log replay, then a full fsck is
performed.)

The -o p option can be used only with a log-replay fsck, not with a full fsck.
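For example, on an unmounted file system (Linux switchout shown; device name assumed):

```shell
# Replay the intent log, the default and usually sufficient check
fsck -t vxfs /dev/vx/rdsk/appdg/appvol

# Force a full structural check, answering yes to all prompts
fsck -t vxfs -y -o full,nolog /dev/vx/rdsk/appdg/appvol
```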
Controlling file system fragmentation
In a Veritas file system, when free resources are initially allocated to files, they are
aligned in the most efficient order possible to provide optimal performance. On an
active file system, the original order is lost over time as files are created, removed,
and resized. As space is allocated and deallocated from files, the available free
space becomes broken up into fragments. This means that space has to be assigned
to files in smaller and smaller extents. This process is known as fragmentation.
Fragmentation leads to degraded performance and availability.
VxFS provides online reporting and optimization utilities to enable you to monitor
and defragment a mounted file system. These utilities are accessible through the
file system administration command, fsadm.

Types of fragmentation
VxFS addresses two types of fragmentation:


Directory fragmentation: As files are created and removed, gaps are left in
directory inodes. This is known as directory fragmentation. Directory
fragmentation causes directory lookups to become slower.
Extent fragmentation: As files are created and removed, the free extent map
for an allocation unit changes from having one large free area to having many
smaller free areas. Extent fragmentation occurs when files cannot be allocated
in contiguous chunks and more extents must be referenced to access a file. In a
case of extreme fragmentation, a file system may have free space, none of
which can be allocated.

Running fragmentation reports
You can monitor fragmentation in a Veritas file system by running reports that
describe fragmentation levels. You use the fsadm command to run reports on
both directory and extent fragmentation. The df command, which reports on file
system free space, also provides information useful in monitoring fragmentation.

Interpreting fragmentation reports


In general, for optimum performance, the percentage of free space in a file system
should not fall below 10 percent. A file system with 10 percent or more free space
has less fragmentation and better extent allocation.
A badly fragmented file system will have one or more of the following
characteristics:
Greater than 5 percent of free space in extents of less than 8 blocks in length
More than 50 percent of free space in extents of less than 64 blocks in length
Less than 5 percent of the total file system size available as free extents in
lengths of 64 or more blocks
Fragmentation can also be determined based on the fragmentation index. The
fragmentation report displays fragmentation indices for both the free space and the
files in the file system. A value of 0 for the fragmentation index means that the file
system has no fragmentation, and a value of 100 means that the file system has the
highest level of fragmentation. The fragmentation index is new with SF 6.x and
enables you to determine whether you should perform extent defragmentation or
free space defragmentation.
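The three "badly fragmented" thresholds above can be captured in a small helper (an illustration of the guidelines, not part of VxFS; the percentages would come from an fsadm fragmentation report):

```shell
#!/bin/sh
# Args: percentage of free space in extents of < 8 blocks,
#       percentage of free space in extents of < 64 blocks,
#       free extents of >= 64 blocks as a percentage of total fs size.
# Prints "bad" if any guideline threshold is crossed, "ok" otherwise.
fragmentation_check() {
  small=$1 medium=$2 large=$3
  if [ "$small" -gt 5 ] || [ "$medium" -gt 50 ] || [ "$large" -lt 5 ]; then
    echo "bad"
  else
    echo "ok"
  fi
}

fragmentation_check 2 30 10   # within all guidelines
fragmentation_check 7 30 10   # >5% of free space in extents under 8 blocks
```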

VxFS defragmentation
You can use the online administration utility fsadm to defragment, or reorganize,
file system directories and extents. The fsadm utility defragments a file system
mounted for read/write access by:
Removing unused space from directories
Making all small files contiguous
Consolidating free blocks for file system use
Sorting entries by the time of last access
Only a privileged user can reorganize a file system.
The fsadm defragmentation options
If you specify both -d and -e, directory reorganization is always completed
before extent reorganization.
If you use the -D and -E with the -d and -e options, fragmentation reports are
produced both before and after the reorganization.
You can use the -t and -p options to control the amount of work performed by
fsadm, either in a specified time or by a number of passes. By default, fsadm
runs five passes. If both -t and -p are specified, fsadm exits if either of the
terminating conditions is reached.

Note: On the Linux platform, the -T time option is used instead of the
-t time option because the -t switch is used for the file system switchout
mechanism.
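Combining these options, for a file system mounted at /data (an assumed mount point):

```shell
# Report fragmentation, reorganize directories then extents, report again
/opt/VRTS/bin/fsadm -D -d -E -e /data

# Limit an extent reorganization to one hour (-T in seconds on Linux;
# -t on other platforms)
/opt/VRTS/bin/fsadm -e -T 3600 /data
```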

Free space defragmentation for a file system
The free space defragmentation option is new with SF 6.x. It attempts to get bigger
chunks of free space in the file system by:
Freeing as many fragmented allocation units as possible
Filling as many allocation units completely as possible
Never breaking any file extent during data movement to ensure that file extent
fragmentation does not get worse during the process
Note that you can observe the available free extents by size using the VxFS-
specific df -os command as shown on the slide.

Scheduling defragmentation
The best way to ensure that fragmentation does not become a problem is to
defragment the file system on a regular basis. The frequency of defragmentation
depends on file system usage, activity patterns, and the importance of file system
performance. In general, follow these guidelines:
Schedule defragmentation during a time when the file system is relatively idle.
For frequently used file systems, you should schedule defragmentation daily or
weekly.
For infrequently used file systems, you should schedule defragmentation at
least monthly.
Full file systems tend to fragment and are difficult to defragment. You should
consider expanding the file system.
To determine the defragmentation schedule that is best for your system, select
what you think is an appropriate interval for running extent reorganization and run
the fragmentation reports both before and after the reorganization. If the degree of
fragmentation is approaching the bad fragmentation figures, then the interval
between fsadm runs should be reduced. If the degree of fragmentation is low, then
the interval between fsadm runs can be increased.
You should schedule directory reorganization for file systems when the extent
reorganization is scheduled. The fsadm utility can run on demand and can be
scheduled regularly as a cron job.
The defragmentation process can take some time. You receive an alert when the
process is complete.
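A regular run can be scheduled as a cron job, for example (schedule, mount point, and log path are all illustrative):

```shell
# crontab entry: defragment /data every Sunday at 02:00, keeping the
# before-and-after fragmentation reports in a log for trend analysis
0 2 * * 0 /opt/VRTS/bin/fsadm -d -D -e -E /data > /var/log/fsadm.data.log 2>&1
```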

Using thin provisioning disk arrays
What is a thin provisioning disk array?
Thin provisioning is a hardware-based storage solution that enables system
administrators to configure storage space for a server without preallocating the
space on the storage array. A thin provisioning disk array creates virtual disk drives
(LUNs) that appear to be one size, but whose actual physical storage only covers a
fraction of their claimed size. If a LUN needs more storage, the storage array
allocates more physical storage to it, without changing the presented size.
So for example, a project may require up to 1 TB of storage space over the life of
the project. The actual data that currently exists may be only 100 GB. In the
standard method of provisioning, 1 TB of space needs to be preallocated. A
majority of that space may never be used and is therefore wasted space.
When using a thin provisioning capable array, a virtual container (virtual volume)
is created for the 1TB. The array then creates/resizes LUNs as actual data is
written to the virtual container. The administrator is not involved after the initial
virtual container is created unless the amount of actual physical storage is used up.
To truly benefit from thin storage, you need the right stack on all hosts:
A multi-pathing driver that supports the thin hardware
A file system optimized not to waste storage on thin volumes
A stack to reclaim space as you migrate to thin storage
A stack to continually optimize utilization of thin storage
SF unlocks thin provisioning's full potential with DMP and VxFS, which is the
only cross-platform thin-storage-friendly file system.
Lesson 6 Administering File Systems
Displaying information on thin disks
SF automatically controls the applicability of features such as SmartMove and thin
reclamation based on known device attributes. If SmartMove is enabled only for
thin LUNs and a device is known to be thin by Storage Foundation, then mirroring
operations are optimized to keep the device thin. If a device is known to be
thinrclm, then SF allows thin reclamation commands to be issued to it.
SF 5.0 MP3 and later automatically discover thin LUNs and their attributes. If a
thin LUN is not automatically discovered as thin, you can use the following
command to manually inform SF that the LUN is thin or thin reclaim:
vxdisk -g diskgroup set dm_name thin=[on|reclaim]
The vxdisk -e list command prints the extended device attributes
(EXT_ATTR) as the last column to indicate the type of the device.
To display properties of the devices that support thin provisioning, use the
vxdisk -o thin list command. This command also indicates whether the
LUN supports thin reclamation. Thin reclamation is the process of reclaiming
unused storage that is a result of deleted files and volumes back to the available
free pool of the thin provisioning capable array. Not all thin provisioning arrays
support thin reclamation. Use the vxdisk -o thin,fssize list command
to display and compare the physically allocated storage size to the storage size
used by the file system. If there is a big difference between the two sizes, it is time
to initiate a thin reclamation process on the corresponding device.
The vxdisk -p list command displays the discovered properties of the
disks including the attributes related to thin provisioning and thin reclamation.
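As a sketch of how the size comparison described above might be automated, the following filters out devices whose physical allocation exceeds file system usage by more than 10 GB, marking them as reclamation candidates. The column layout of the sample is an assumption made for illustration only; check the actual vxdisk -o thin,fssize list output on your release before adapting the filter.

```shell
# Simplified, assumed output of 'vxdisk -o thin,fssize list':
# device name, physically allocated MB, file-system-used MB.
vxdisk_thin_sample() {
cat <<'EOF'
DEVICE          PHYS_ALLOC(MB)  FS_USED(MB)
thinarray0_01   51200           10240
thinarray0_02   8192            8000
EOF
}

# Print devices with more than 10 GB (10240 MB) of reclaimable space.
candidates=$(vxdisk_thin_sample |
    awk 'NR > 1 && $2 - $3 > 10240 { print $1 }')
echo "$candidates"
# In real use, pipe 'vxdisk -o thin,fssize list' into the awk filter instead.
```

A device flagged by a filter like this is a candidate for the thin reclamation process described above.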

Introducing the SmartMove feature
When you mirror a volume, the whole volume content is normally copied to the
newly added plex. This is because the volume has no knowledge of the data stored
in it and is not aware of which blocks are in use. Therefore, if you mirror a
1 TB volume where only 10 GB is in use, the copy operation still copies all
of that 1 TB of volume content.
With the SmartMove feature, Volume Manager can use Veritas File System
information to identify the free blocks and skip copying them. So, in the previous
example, only 10 GB would be copied to complete the mirror
synchronization. Note that the SmartMove feature is only available when the
volume has a Veritas File System on it. If you are using raw volumes with other
applications, such as databases, you still need to copy the whole mirror content.
By default, the SmartMove feature is turned on for all LUNs. To enable the
SmartMove feature only for volumes that contain thin LUNs, you need to specify
usefssmartmove=thinonly in the /etc/default/vxsf file. This
tunable is system-wide and persistent, so it only needs to be set once per server.
Setting this tunable parameter to none completely disables the SmartMove
feature. Note that with SF 5.1 and later, you can also use the vxdefault
command to change the value of this tunable parameter. The vxdefault
command is explained in more detail later in this topic.

Note: The Veritas file system must be mounted to get the benefits of the
SmartMove feature.

This feature can be used for faster plex creation and faster array migration.
Administering thin provisioning parameters
In SF 5.1 and later, the vxdefault command is used to modify and display the
tunable parameters that are stored in the /etc/default/vxsf file as shown on the
slide.
The sharedminorstart tunable parameter is used with the dynamic disk group
reminoring feature. This feature is used to allocate minor numbers dynamically to
disk groups based on their private or shared status. Shared disk groups are used
with Cluster Volume Manager and are not covered in this course.
The fssmartmovethreshold tunable parameter defines a threshold value: the
SmartMove feature is used only if the file system usage percentage is less than
this threshold. By default, fssmartmovethreshold is set to 100, which means that
SmartMove is used with all VxFS file systems with less than 100% usage.
The autostartvolumes tunable parameter turns on or off automatic volume
recovery. If this parameter is set to on, VxVM automatically recovers and starts
disabled volumes when you import, join, move or split a disk group.
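The tunable parameters described above are typically inspected and changed with the vxdefault command, for example (the value shown is illustrative):

```shell
# List the current values of the system-wide tunable parameters.
vxdefault list
# Turn on automatic volume recovery at disk group import.
vxdefault set autostartvolumes on
```

Because the parameters are persistent, a setting like this survives reboots and only needs to be made once per server.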

Migrating to thin provisioning using SmartMove
The example steps provided on the slide for migrating from a traditional disk array
to a disk array that supports thin provisioning assume that the total space provided
by the thin provisioning array is larger in size than the traditional LUNs used to
build the volume and file system. Here is an example implementation:
1 Turn the SmartMove feature on if necessary.
vxdefault list
vxdefault set usefssmartmove all (if necessary)
2 Add the new, thin LUN, called thinarray0_01 in this example, to the
existing disk group. Note that you can use multiple LUNs although this
example shows only one.
vxdisksetup -i thinarray0_01
vxdg -g appdg adddisk thinarray0_01
3 Add the new, thin LUN as a new plex to the volume.
vxassist -g appdg mirror appvol thinarray0_01
4 Test the performance of the new LUN.
You can optionally direct all read requests to the plex on the new LUN and
then use benchmarking tools or statistic commands to test performance.
vxvol -g appdg rdpol prefer appvol appvol-02
5 Remove the original mirror and the original LUN.
vxplex -g appdg -o rm dis appvol-01
6 Optionally, grow the file system and the volume to use all of the larger thin
LUN.
vxresize -g appdg -x appvol newsize thinarray0_01

Reclaiming storage with thin provisioning
Thin provisioning (TP) capable arrays allocate actual physical storage only when
the applications using the LUNs write data. However, when portions of this data
are deleted, storage is not normally reclaimed back to the available free pool of the
thin provisioning capable array.
Storage Foundation uses the VxFS knowledge of used and unused blocks at the
file system level to reclaim that unused space. This process must be manually
started by the system administrator.
Thin reclamation can only be performed on volumes with mounted VxFS file
systems. Volumes without a VxFS file system or volumes that are not currently
mounted are not reclaimed. If the volume consists of a mix of thin-provisioning
disks and regular disks, the reclamation is only performed on the thin-provisioning
disks.
Thin reclamation can be triggered on one or more disks, enclosures or disk groups,
or at the file system level on a mounted VxFS file system as displayed on the slide.
When you reclaim at the file system level, the command goes through all the free
extents in the file system and issues the storage level reclaim on the regions which
are free. Every time the command is run, the complete file system is scanned.
VxVM is optimized to issue the reclaim only to the TP LUNs in the file system.
When you reclaim at the VxVM level, the reclaim command goes through the list
of all TP LUN-backed mounted file systems associated to the specified object, and
issues the reclaim on all the file systems. The output displays the list of volumes
skipped and the list of volumes reclaimed.
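The different reclamation levels described above can be triggered as in the following sketch; the disk, enclosure, disk group, and mount point names are placeholders, and option details vary by SF release:

```shell
# Reclaim at the VxVM level: a single disk, a whole enclosure, or a disk group.
vxdisk reclaim thinarray0_01
vxdisk reclaim thinarray0
vxdisk reclaim appdg
# Reclaim at the file system level on a mounted VxFS file system.
fsadm -R /app
```

Remember that only mounted VxFS file systems on thin reclaim capable LUNs are processed; other volumes are skipped.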

Aggressive reclamation in SF 5.1 and later
Thin reclamation as implemented in SF 5.0 MP3 (and as described on the previous
page) is a best effort in the sense that it takes any existing contiguous free space in
the file system and reports it to Volume Manager for reclamation. If that
contiguous free space is large enough to be reclaimed in the array (based on chunk
size and chunk alignment on the LUN), the space is effectively reclaimed.
Otherwise, the free space is not reclaimed.
The core benefit of this approach is that it either returns storage to the array free
pool, or it does not; the operation never triggers additional storage usage.
The main drawback is that if the free space is fragmented into small contiguous
areas, it may not get reclaimed.
SF 5.1 and later have the capability to perform more aggressive reclamation by
moving data around in the file system to maximize the size of the contiguous free
space. This is an additional option for reclamation that can only be triggered at the
file system level using the fsadm -R -A mount_point command. Note that
you can use the -o analyze option first to determine if you should perform a
normal reclaim operation or an aggressive reclaim operation.
Notes:
Aggressive reclamation can only be performed on file systems that are known
to use thin reclaim capable storage.
Aggressive reclamation can increase the thin storage usage temporarily during
the data compaction process.
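A possible sequence, following the description above (the mount point is a placeholder, and the exact placement of the analyze option may differ on your release):

```shell
# First analyze whether a normal or an aggressive reclaim is worthwhile.
fsadm -R -o analyze /app
# Then run the aggressive reclamation, which compacts data to maximize
# contiguous free space before reclaiming it; thin storage usage may
# temporarily increase while data is moved.
fsadm -R -A /app
```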

Automatic reclamation for volumes
In SF 5.1 and later, commands such as vxassist remove volume and vxedit
-rf rm volume, as well as the volume shrink operation, can trigger automatic
reclamation if the released storage is on thin provision reclaimable LUNs.
The reclaim operation is asynchronous, which allows the delete or shrink
operations themselves to complete more quickly.
The reclamation of the storage released due to volume delete or shrink is
performed by the vxrelocd daemon and can be controlled by the following
tunable parameters:
reclaim_on_delete_wait_period=[-1 366]
A value of -1 indicates immediate reclamation and a value of 366 indicates that
no reclamation will be performed by the vxrelocd daemon.
reclaim_on_delete_start_time=[00:00-23:59]
The vxdg destroy diskgroup command does not reclaim any storage
automatically. The thin provision reclaimable LUNs belonging to the destroyed
disk group must be reclaimed manually using the vxdisk reclaim disk
command.
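For example, after destroying a disk group you would reclaim its thin LUNs by hand; the disk group and disk names below are placeholders:

```shell
vxdg destroy appdg
# The thin reclaimable LUNs that belonged to appdg must now be
# reclaimed manually.
vxdisk reclaim thinarray0_01 thinarray0_02
```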

Default VxVM behavior with thin LUNs and SF Enterprise license
The SF Enterprise license enables the FastResync feature of Veritas Volume
Manager. The FastResync feature is used for fast resynchronization of the plexes
of a mirrored volume. This feature is mostly used with instant volume snapshots.
However, it is also used for resynchronization of plexes that become stale with
respect to the contents of the volume due to failures.
Without FastResync, when a plex of a mirrored volume becomes stale, the
resynchronization involves an entire atomic copy from the active plexes to the
stale plex. With FastResync, Volume Manager keeps track of the changed regions
of the volume and synchronizes only those regions. This behavior helps with
optimizing thin LUN usage. Therefore, FastResync is automatically enabled on
mirrored volumes if the disk group contains thin LUNs and the feature is licensed.

When FastResync is enabled on a mirrored volume, a data change object (DCO) is
created with a DCO volume to hold the FastResync maps as well as the DRL
recovery maps and other special maps used with instant snapshot operations on
disk.
Note that you cannot remove a mirrored volume using the vxassist remove
volume command if it has an associated DCO log. To remove a mirrored volume
with a DCO log, use the following vxedit command:
vxedit -g diskgroup -rf rm volume_name

Labs and solutions for this lesson are located on the following pages:
Appendix A provides complete lab instructions and solutions.
Lab 6: Administering File Systems, page A-123

Lesson 7
Managing Devices Within the VxVM
Architecture
Managing components in the VxVM architecture
VxVM architecture
VxVM is a device driver that is placed between the UNIX operating system and
the SCSI device drivers. When VxVM is running, UNIX invokes the VxVM
device drivers instead of the SCSI device drivers. VxVM determines which SCSI
drives are involved in the requested I/O and delivers the I/O request to the drives.

VxVM daemons
VxVM relies on the following constantly running daemons for its operation:
vxconfigd: The VxVM configuration daemon maintains disk and disk group
configurations, communicates configuration changes to the kernel, and
modifies configuration information stored on disks. When a system is booted,
the vxdctl enable command is automatically executed to start
vxconfigd. VxVM reads the /etc/vx/volboot file to determine disk
ownership and automatically imports disk groups owned by the host.
vxiod: The VxVM I/O daemon provides extended I/O operations without
blocking calling processes. Several vxiod daemons are usually started at boot
time, and they continue to run at all times.
vxrelocd: The hot-relocation daemon monitors events that affect data
redundancy. If redundancy failures are detected, vxrelocd automatically
relocates affected data from mirrored or RAID-5 subdisks to spare disks or
other free space within the disk group.
vxsvc: The VEA server process is used for the graphical user interface.

vxconfigbackupd: The configuration backup daemon takes automatic backups
of disk group configurations whenever there are metadata changes in the disk
group configurations.
vxcached: This daemon monitors the cache objects used by space-optimized
snapshots and automatically grows cache volumes when necessary.
Space-optimized snapshots and cache objects are described in the Using
Copy-on-Write SF Snapshots lesson.
vxattachd: This daemon monitors local disk detachment events and
automatically recovers temporarily failed disks that become available again. It
also monitors site detachment events in remote mirroring environments and
automatically recovers detached sites when failed disks become available at
the site. Remote mirroring is covered in detail in the Using Site Awareness for
Mirroring lesson.
vxnotify: The notification daemon is started by other daemons such as
vxrelocd and vxattachd to detect changes in Volume Manager object
states such as detached sites, plexes, and disks.
vxesd: The event source daemon logs DMP events and handles OS device
reconfiguration events.
vxdclid: The distributed command line daemon is used by Veritas
Operations Manager on managed hosts.
vxdbd: This daemon handles database edition requests. Note that with SF 5.1
and later, the Veritas Storage Foundation for Oracle (VRTSdbed) package is
installed as part of the Storage Foundation installation if the recommended or
all packages option is selected.
Note that some of these daemons, such as vxcached, may not be started if you
have a Storage Foundation Standard license.
VxVM configuration database
The VxVM configuration database stores all disk, volume, plex, and subdisk
configuration records. The vxconfig device (/dev/vx/config) is the
interface through which all changes to the volume driver state are performed. This
device can only be opened by one process at a time, and the initial volume
configuration is downloaded into the kernel through this device.
The configuration database is stored in the private region of a VxVM disk. Each
disk that has a private region holds an entire copy of the configuration database for
the disk group. The size of the configuration database for a disk group is limited by
the size of the smallest copy of the configuration database on any of its member
disks.
The VxVM configuration is replicated within the disk group to protect against loss
of the configuration in case of physical disk failure. vxconfigd actively
monitors five or more copies of the configuration database for each disk group.
VxVM balances their locations based on the number of controllers, targets and
disks in the disk group.
VxVM configuration copies are placed across the enclosures spanned by a disk
group to ensure maximum redundancy across enclosures.
The vxconfigd configuration daemon is the process that updates the
configuration through the vxconfig device. The vxconfigd daemon was
designed to be the sole and exclusive owner of this device.

Displaying disk group configuration data
To display the status of the configuration database for a disk group:
vxdg list diskgroup
If no disk group is specified, information from all disk groups is displayed in an
abbreviated format. When you specify a disk group, a longer format is used to
display the status of the disk group and its configuration.
In the example, five disks have active configuration databases (online), and two
disks do not have an active copy of the data (disabled). The configuration database
for a disk group is the size of the smallest private region in the disk group.
The log is used by the VxVM kernel to keep the state of the drives accurate if the
database cannot be kept accurate (for example, if the configuration daemon is
stopped). The log entries are also enabled on some disks and disabled on others.
By default, for each disk group, VxVM maintains a minimum of five active
database copies on the same controller. In most cases, VxVM also attempts to
alternate active copies with inactive copies. In the example, the copies on c1t3d0
and c1t9d0 are disabled. If different controllers are represented on the disks in the
same disk group, VxVM maintains a minimum of two active copies per controller.
In the output on the slide, the Configuration database size (permlen=) is next to a
field named free=. The free= field can be used to check how fast the configuration
database is filling up so that action can be taken before the disk group runs out of
database space.
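As an illustration of monitoring the free= field described above, the following extracts permlen and free from a config: line and reports how full the configuration database is. The sample line mimics typical vxdg list output; treat the parsing as a sketch to adapt to the exact format printed by your release.

```shell
# Sample 'config:' line as printed by 'vxdg list <diskgroup>'.
vxdg_list_sample() {
cat <<'EOF'
config: seqno=0.1 permlen=51360 free=50351 templen=122 loglen=7777
EOF
}

# Split on spaces and '=' so each field name is followed by its value.
usage=$(vxdg_list_sample | awk -F'[ =]' '/^config:/ {
    for (i = 1; i <= NF; i++) {
        if ($i == "permlen") perm = $(i + 1)
        if ($i == "free")    fr   = $(i + 1)
    }
    printf "config DB %.1f%% full", (perm - fr) * 100 / perm
}')
echo "$usage"
# In real use: vxdg list appdg | awk ... (same filter)
```

Tracking this percentage over time shows how quickly the configuration database is filling, so that action can be taken before the disk group runs out of database space.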

Displaying disk header information
The terms displayed in the output of vxdisk list include:

Term Description
Device: Full UNIX device name of the disk
devicetag: Device name used by VxVM to refer to the physical disk
type: Method of placing the disk under VxVM control
hostid: Name of the system that manages the disk group (if blank, no host is
currently controlling this group)
disk: VM disk media name and internal ID
group: Disk group name and internal ID
info: Disk format, private region offset, and partition numbers for the public
and private regions
flags: Settings that describe status and options for the disk
pubpaths: Paths for block and character device files of the public region of
the disk
version: Version number of the header format
iosize: The iosize range that the disk accepts
public, private: Partition (slice) number, offset from the beginning of the
partition, length of the partition, and disk offset

Controlling the VxVM configuration daemon
The VxVM configuration daemon must be running in order for configuration
changes to be made to the VxVM configuration database. If vxconfigd is not
running, volume I/O is unaffected, but configuration changes and queries of the
database are not possible.
The vxconfigd daemon synchronizes multiple requests and incorporates
configuration changes based on a database transaction model:
All utilities make changes through vxconfigd.
Utilities must identify all resources needed at the start of a transaction.
Transactions are serialized, as needed.
Changes are immediately reflected in all copies of the configuration database.
The vxconfigd daemon does not interfere with user or operating system access
to data on disk.

Note: With SF 5.1 and later, the vxconfigd daemon is able to process the
following query requests while it is performing disk group import
operations:

vxdctl mode
vxdg list
vxdisk list
vxprint

vxconfigd modes
vxconfigd reads the kernel log to determine current states of VxVM
components and updates the configuration database. Kernel logs are updated even
if vxconfigd is not running. For example, upon startup, vxconfigd reads the
kernel log and determines that a volume needs to be resynchronized.
vxconfigd operates in one of three modes:
Enabled
Enabled is the normal operating mode in which configuration operations are
allowed. Disk groups are imported, and VxVM begins to manage device nodes
stored in /dev/vx/dsk and /dev/vx/rdsk.
Disabled
In the disabled mode, most operations are not allowed. vxconfigd does not
retain configuration information for the imported disk groups and does not
maintain the volume and plex device directories. Certain failures, most
commonly the loss of all disks or configuration copies in the boot disk group,
cause vxconfigd to enter the disabled state automatically.
Booted
The booted mode is part of normal system startup, prior to checking the root
file system. The booted mode imports the boot disk group and waits for a
request to enter the enabled mode. Volume device node directories are not
maintained, because it may not be possible to write to the root file system.
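You can check which of these modes vxconfigd is currently in, and request the transition to the enabled mode, with vxdctl:

```shell
# Report the current vxconfigd mode: enabled, disabled, or booted.
vxdctl mode
# Request the transition to the enabled mode; this also rescans disk devices.
vxdctl enable
```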

Managing the volboot file
The hostid field in the /etc/vx/volboot file is used to ensure that two or
more hosts that can access disks on a shared SCSI bus do not interfere with each
other in their use of those disks. This hostid is important in the generation of
unique ID strings that are used internally for stamping disks and disk groups.
The volboot file also contains the name of the system-wide default disk group if
this has been configured. If the boot disk is under VxVM control, the volboot
file also contains the name of the boot disk group to which the boot disk belongs.
Caution: Never edit the volboot file manually. If you do so, its checksum is
invalidated.

Viewing the contents of volboot
To view the decoded contents of the volboot file:
vxdctl list
Volboot file
version: 3/1
seqno: 0.1
cluster protocol version: 110
hostid: train1
...

Discovering disk devices
What is device discovery?
Device discovery is the process of locating and identifying the disks that are
accessible to a host. VxVM features, such as dynamic multipathing (DMP),
depend on device discovery. Device discovery enables you to dynamically add
support for disk arrays from a variety of vendors without rebooting the system.

Discovering and configuring disk devices


To dynamically discover new devices, use the vxdiskconfig utility on the
Solaris platform. This utility scans for disks that were added since VxVM's
configuration daemon was last started and dynamically configures the disks to be
recognized by VxVM. The vxdiskconfig utility invokes OS utilities, such as
devfsadm on Solaris, to ensure that the OS recognizes the disks.
vxdiskconfig then invokes vxdctl enable, which rebuilds volume node
directories and the DMP internal database to reflect the new state of the system.
Note: The vxdiskconfig utility exists on the Solaris platform only.
On other UNIX platforms, you must first use OS-specific methods to get the OS to
recognize any changes, and then you can execute the vxdisk scandisks or
vxdctl enable commands to get VxVM to recognize the new devices.
The Device Discovery Layer (DDL) enables VxVM to use more descriptive names
when using enclosure-based naming, for example, emc0_1 rather than Disk_1.

What Is dynamic multipathing?
Dynamic multipathing is the method that VxVM uses to manage two or more
hardware paths directing I/O to a single drive. VxVM arbitrarily selects one of the
two names and creates a single device entry, and then transfers data across both
paths to spread the I/O.
VxVM detects multipath systems by using the universal world-wide device
identifiers (WWD IDs) and manages multipath targets, such as disk arrays, which
define policies for using more than one path.
The dynamic multipathing (DMP) feature of VxVM provides greater reliability
and performance for your system by enabling path failover and load balancing.

Note: DMP is also available as a stand-alone product, which extends DMP
metadevices to support the OS native logical volume manager (LVM). You
can create LVM volumes and volume groups on DMP metadevices. Veritas
Dynamic Multi-Pathing can be licensed separately from Storage
Foundation products. Veritas Volume Manager and Veritas File System
functionality is not provided with a DMP license.

Benefits of DMP
The advantages of using the dynamic multipathing feature of Volume Manager
over third-party multipathing solutions are listed on the slide. Additional benefits
of using a multipathing solution include:
High availability
DMP provides greater reliability using a path failover mechanism. When one
connection to a disk is lost, the system continues to access the critical data over
the other sound connections to the disk until you replace the failed path.
Improved performance
DMP provides greater I/O throughput by balancing the I/O load uniformly
across multiple I/O paths to the disk device.
DMP architecture
The slide displays where DMP is located in the host I/O stack; DMP is on top of
the operating system SCSI drivers, which sit on top of the host bus adapter (HBA)
drivers. This means that DMP becomes aware of an I/O failure only after the SCSI
driver gives up that I/O.
DDL discovers all the devices that are connected to the DMP host. It polls for the
serial number of the device at the end of each path; if multiple paths share the same
serial number, DMP deduces that they are multiple paths to the same device and
that they can be aggregated into a DMP node.
Array support libraries (ASLs) allow DMP to properly claim devices and identify
what array is serving the devices to DMP.
After the array is identified, array policy modules (APMs) provide array-specific
procedures for any function that may have to be array-specific, such as LUN
trespassing, load balancing, and so on.

Displaying information about ASLs and APMs
For fully optimized support of any array and for support of more complicated array
types, DMP requires the use of array-specific array support libraries (ASLs),
possibly coupled with array policy modules (APMs). ASLs and APMs are
effectively array-specific plug-ins that allow close tie-in of DMP with any specific
array
model.
You can display information on available ASLs and APMs on the system using the
vxddladm listsupport and vxdmpadm listapm all commands
respectively. The /etc/vx/diag.d/vxcheckasl utility can be used to
provide more detailed information about which devices are claimed by which
array support libraries.
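As an illustration of the commands described above, an administrator might run the following; the exact output depends on the platform and release, and the vxcheckasl invocation shown is an assumption based on the path given above:

```shell
# List the array support libraries known to the device discovery layer.
vxddladm listsupport

# List the array policy modules that are currently available.
vxdmpadm listapm all

# Diagnostic utility mentioned above; reports which ASL claims which
# devices (location and arguments may vary by release).
/etc/vx/diag.d/vxcheckasl
```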
Partial device discovery
VxVM supports partial device discovery where you can include or exclude sets of
disks or disks attached to controllers from the discovery process. Partial device
discovery reduces redundant discovery operations by scanning only a part of the
OS device tree.
The vxdisk scandisks command rescans the devices in the OS device tree
and triggers a DMP reconfiguration. You can specify parameters to vxdisk
scandisks to implement partial device discovery. Some examples are provided
on the slide. Refer to manual pages for detailed usage information.
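As a sketch of partial device discovery, the following shows typical invocations; controller names such as c1 and c2 are placeholders for the platform-specific names on your host:

```shell
# Discover only devices that are newly added to the OS device tree.
vxdisk scandisks new

# Restrict discovery to fabric-attached (Fibre Channel) devices.
vxdisk scandisks fabric

# Scan only the devices attached to the listed controllers
# (c1 and c2 are placeholder controller names).
vxdisk scandisks ctlr=c1,c2
```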
Managing multiple paths to disk devices
What is a multiported disk array?
A multiported disk array is an array that can be connected to host systems through
multiple paths. The two basic types of multiported disk arrays are:
Active/active disk arrays
Active/passive disk arrays
For each supported array type, VxVM uses a multipathing policy that is based on
the characteristics of the disk array. For a description of all the different types of arrays supported by DMP, refer to the Veritas Storage Foundation Administrator's Guide.

Active/active disk arrays



Active/active disk arrays permit several paths to be used concurrently for I/O.
With these arrays, DMP provides greater I/O throughput by balancing the I/O load
uniformly across the multiple paths to the disk devices. If one connection to an
array is lost, DMP automatically routes I/O over the other available connections to
the array.

Active/passive disk arrays


Active/passive disk arrays permit only one path at a time to be used for I/O. The
path that is used for I/O is called the active path, or primary path. An alternate
path, or secondary path, is configured for use in the event that the primary path
fails. If the primary path to the array is lost, DMP automatically routes I/O over the
secondary path or other available primary paths.
Setting the I/O policy for an enclosure
After analyzing statistics, you can use the vxdmpadm setattr command with
the iopolicy option to change the I/O policy for balancing the I/O load across
multiple paths to a disk array or enclosure.
You can set policies for an enclosure (for example, HDS01), for all enclosures of a
particular type (for example, HDS), or for all enclosures of a particular array type
(A/A for active/active, or A/P for active/passive).
adaptive automatically determines the paths that have the least delay and
schedules I/O on paths that are expected to carry a higher load.
adaptiveminq is similar to the adaptive policy except that the I/O is
scheduled according to the length of the I/O queue on each path. The path with
the shortest queue is assigned the highest priority.
balanced takes the track cache into consideration when balancing I/O across paths.
minimumq sends I/O on paths that have the minimum number of I/O requests
in the queue. This is the default policy for all types of arrays.
priority assigns the path with the highest load carrying capacity as the
priority path.
round-robin sets a simple round-robin policy for I/O.
singleactive channels I/O through the single active path.
To display the current I/O policy:
vxdmpadm getattr enclosure enclosure_name iopolicy
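As a sketch of this workflow, assuming a hypothetical enclosure named emc0:

```shell
# Display the current I/O policy for the enclosure.
vxdmpadm getattr enclosure emc0 iopolicy

# Switch the enclosure to a simple round-robin policy.
vxdmpadm setattr enclosure emc0 iopolicy=round-robin

# Policies can also be applied to every enclosure of a given array type.
vxdmpadm setattr arraytype A/A iopolicy=minimumq
```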

Setting path attributes
You can set the following attributes of the paths to an enclosure or disk array by
using the command:
vxdmpadm setattr path path_name pathtype=type
active changes a standby path to active.
nomanual restores the original primary or secondary attributes of a path.
nopreferred restores the normal priority of the path.
preferred [priority=N] specifies a preferred path and optionally
assigns a priority value to it. This indicates a path that is able to carry a higher
I/O load. The priority value must be an integer greater than or equal to 1.
Larger priority values indicate a greater load carrying capacity.

Note: Marking a path as a preferred path does not change its I/O load
balancing policy.

primary assigns a primary path for an Active/Passive disk array.


secondary assigns a secondary path for an Active/Passive disk array.
standby marks a path as not available for normal I/O scheduling. This path
is only invoked if there are no active paths available for I/O.
The changes are not persistent across reboots with SF versions before 5.1. With SF
5.1 and later, if the command returns successfully, the changes are saved in the
/etc/vx/dmppolicy.info file. During vxconfigd startup, the path
attributes are read from this file and the corresponding paths are updated.
See the Veritas Dynamic Multi-Pathing Administrator's Guide and the
vxdmpadm(1m) manual page for more information.
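A brief sketch of setting path attributes follows; the path names are placeholders, and the real path names on a host can be listed with vxdmpadm getsubpaths:

```shell
# Mark a path as standby so it is used only when no other path is available.
vxdmpadm setattr path c2t0d0s2 pathtype=standby

# Designate a preferred path and assign it a priority of 2.
vxdmpadm setattr path c1t0d0s2 pathtype=preferred priority=2

# Restore the normal priority of the path later.
vxdmpadm setattr path c1t0d0s2 pathtype=nopreferred
```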
Enabling or disabling I/O to a controller
By disabling I/O to a host disk controller, you can prevent DMP from issuing I/O
through a specified controller. You can disable I/O to a controller to perform
maintenance on disk arrays or controllers attached to the host. For example, when
replacing a system board, you can stop all I/O to the disk controllers connected to
the board before you detach the board.
For active/active disk arrays, when you disable I/O to one active path, all I/O
shifts to other active paths.
For active/passive disk arrays, when you disable I/O to one active path, all I/O
shifts to a secondary path or to an active primary path on another controller.
You cannot disable the last enabled path to the root disk or to any other disk. On HP-UX, however, you can disable the last enabled path to any disk other than the root disk, even without using the -f (force) option.
When you disable I/O to a controller, disk, or path, you override the DMP path restoration thread's ability to reset the path to ENABLED; the status of the manually disabled path is displayed as DISABLED(M) or disabled(m).
When you enable I/O to a controller:
For active/active disk arrays, the controller is used again for load balancing.
For active/passive disk arrays, the operation results in failback of I/O to the
primary path.
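For example, to take a controller out of service before maintenance and return it afterwards (the controller name c2 is a placeholder; vxdmpadm listctlr all shows the real names):

```shell
# Stop DMP from issuing I/O through the controller before maintenance.
vxdmpadm disable ctlr=c2

# Re-enable the controller; on A/P arrays this triggers failback of I/O
# to the primary path.
vxdmpadm enable ctlr=c2
```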

Labs and solutions for this lesson are located on the following pages:
Appendix A provides complete lab instructions and solutions.
Lab 7: Managing Devices Within the VxVM Architecture, page A-141

Lesson 8
Resolving Hardware Problems
How does VxVM interpret failures in hardware?
VxVM interprets failures in hardware in a variety of ways, depending on the type
of failure.
Identifying I/O failure

Disk failure
Data availability and reliability are ensured through most failures if you are using
VxVM redundancy features, such as mirroring or RAID-5. If the volume layout is
not redundant, loss of a drive may result in loss of data and may require recovery
from backup.

Disk failure handling


When a drive becomes unavailable during an I/O operation or experiences
uncorrectable I/O errors, the operating system (or the HBA) detects SCSI failures
and reports them to VxVM. The method that VxVM uses to process the SCSI
failure depends on which VxVM objects the failure impacts.
Failing vs. failed disks


Volume Manager differentiates between failing and failed drives:
Failing: If there are uncorrectable I/O failures on the public region of the drive,
but VxVM can still access the private region of the drive, the disk is marked as
failing.
Failed: If VxVM cannot access the private region or the public region, the disk
is marked as failed.

Identifying disabled disk groups
When disk groups are disabled, the status changes to dgdisabled.
Identifying failed disks
When VxVM detaches the disk, it breaks the mapping between the VxVM disk media record (appdg02) and the disk drive (emc0_dd2).
However, information on the disk media record, such as the disk media name, the
disk group, the volumes, plexes, and subdisks on the VxVM disk, and so on, is
maintained in the configuration database in the active private regions of the disk
group.
The output of vxdisk list displays the failed drive as online until the VxVM
configuration daemon is forced to reread all the drives in the system and to reset its
tables.
To force the VxVM configuration daemon to reread all the drives in the system:
vxdctl enable
After you run this command, the drive status changes to error for the failed drive, and the disk media record changes to failed. The disk is marked as being in the error state immediately when the public region is not accessible.
Note that the example on the slide shows three failed disks, one of which was in a disk group and was assigned a disk media name. Failed disks that were not part of a disk group also change their status to error, but they have no disk media records to show as failed.
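The sequence described above can be sketched as follows; the device and disk group names are taken from the example on the slide:

```shell
# Force the configuration daemon to reread all drives and reset its tables.
vxdctl enable

# The failed drive now shows an "error" status, and a disk that belonged
# to a disk group also shows its disk media record as failed.
vxdisk -o alldgs list
```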

Disk failure types
The basic types of disk failure are permanent and temporary.
Permanent disk failures are failures in which the data on the drive can no
longer be accessed for any reason (that is, uncorrectable). In this case, the data
on the disk is lost.
Temporary disk failures are disk devices that have failures that are repaired
some time later. This type of failure includes a drive that is powered off and
back on, or a drive that has a loose SCSI connection that is fixed later. In these
cases, the data is still on the disk, but it may not be synchronized with the other
disks being actively used in a volume.
Recovering disabled disk groups
Device recovery
As soon as the hardware problem is resolved, the OS recognizes the disk array and
the disks. DMP automatically detects the change, adds the disk array to the
configuration, and enables the DMP paths.
Relevant messages are logged to the system log.
Recovering from temporary disk group failures
The disks still have their private regions, so there is no need to recover the disk
group configuration data.
Recover the disk group as described in the slide.
To ensure that the DMP paths are enabled, view the output of
vxdisk -o alldgs list.
If necessary, to start the volumes in the disk group, use vxvol -g diskgroup
startall. Note that by default, volumes are automatically started when a disk
group is imported with SF 5.1 SP1 and later.
Also note that when you start the volumes in the disk group, mirrored volumes may go through a synchronization process in the background if they were open at the time of the failure.
Resolving disk failures
Volume states after the failure
As soon as VxVM detects the disk failure, it detaches the disk media record from the disk access record, and the corresponding plex states change to NODEVICE, as shown on the slide. At this point, VxVM does not differentiate between a permanent failure and a temporary failure.
Disk recovery tasks
Recovering a permanently or temporarily failed disk involves both physically
recovering the hardware problem and then logically replacing the disk and
recovering volumes in Volume Manager:
Physical disk recovery: When a disk fails permanently, you replace the
corrupt disk with a new disk. The replacement disk cannot already be in a disk
group. If you want to use a disk that exists in another disk group, then you must
remove the disk from the disk group before you can use it as the replacement
disk.
If the disk failure was temporary, you first need to resolve the physical
problem; for example, power the disk back up or reconnect the cable.
Volume Manager disk recovery: After the physical problem is resolved, you
need to tell Volume Manager which disk is to be attached to the failed disk media name. For temporary failures, this is the same physical disk that was used for the disk media before.
Volume recovery: When a disk fails and is removed for replacement, the plex
on the failed disk is disabled, until the disk is replaced. Volume recovery
involves starting disabled volumes, resynchronizing mirrors, and
resynchronizing RAID-5 parity.
After successful recovery, the volume is available for use again. Redundant
(mirrored or RAID-5) volumes can be recovered by VxVM. With permanent
failures, nonredundant (unmirrored) volumes must be restored from backup.

Recovering the physical disk
1 Connect the new disk or resolve the hardware problem.
2 Get the operating system to recognize the disk:

Platform   OS-Specific Commands to Recognize a Disk
Solaris    devfsadm
           prtvtoc /dev/dsk/device_name
HP-UX      ioscan -fC disk
           insf -e
AIX        cfgmgr
           lsdev -C -l device_name
Linux      blockdev --rereadpt /dev/xxx
3 Get VxVM to recognize that a failed disk is now working again. Although you
can use the vxdctl enable command to get VxVM to recognize a new or
recovered disk, this command causes VxVM to reread all of the configuration
information on all of the existing devices. In large configurations this can be
time consuming. You can use the vxdisk scandisks commands
displayed on the slide to limit the discovery operation to a subset of disks.
4 Verify that VxVM recognizes the disk:
vxdisk -o alldgs list
After the operating system and VxVM recognize the new disk or the recovered
disk, you can then attach the disk to the failed disk media record.
Attaching the VxVM disk: Permanent failures

Replacing a Failed Disk: vxdiskadm


To replace a disk that has already failed or that has already been removed, you
select the Replace a failed or removed disk option. This process creates a public
and private region on the new disk and populates the private region with the disk
media name of the failed disk.

Replacing a Disk: CLI


The -k switch forces VxVM to take the disk media name of the failed disk and
assign it to the new disk. For example, if the failed disk appdg02 in the appdg disk
group was removed, and you want to add the new device emc0_dd2 as the
replacement disk:
vxdg -k -g appdg adddisk appdg02=emc0_dd2


Note that the disk needs to be initialized first.
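Putting the steps together, a minimal sketch of the CLI replacement, using the device and disk names from the example above (the location of vxdisksetup may vary by platform):

```shell
# Initialize the replacement device first (creates the public and
# private regions).
/etc/vx/bin/vxdisksetup -i emc0_dd2

# Attach the new device to the failed disk media record appdg02.
vxdg -k -g appdg adddisk appdg02=emc0_dd2
```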

Attaching the VxVM disk: Temporary failures
If the disk failure was temporary, the disk still has the private region that would
enable VxVM to recognize it.
Storage Foundation 5.1 and later include a daemon called the vxattachd
daemon. This daemon detects when a temporarily failed disk becomes available
again and performs all the necessary recovery actions including the recovery of
redundant or startable volumes automatically.
If the vxattachd daemon is not available, for example with previous versions of
SF, you need to attach the failed disk media record to the temporarily failed disk
manually.
The vxreattach utility reattaches temporarily failed disks to their disk media
records that are in the failed state. This command attempts to find the name of the drive in the private region and to match it to a disk media record that is missing a disk access record. If you use the -r option with the vxreattach command, volume recovery is also initiated and there is no need to perform volume recovery separately.

Note: The vxreattach command is the equivalent of the vxdg -k adddisk command for temporarily failed disks.
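A minimal sketch of manual reattachment, for versions without the vxattachd daemon:

```shell
# Reattach temporarily failed disks to their disk media records and
# initiate volume recovery in the same step.
vxreattach -r
```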

Volume states after attaching the disk media
After reattaching the disk, volume and plex states are as displayed in the slide.
Notice the different states of vol01 and vol02. The vol01 volume can still receive
I/O and contains a plex in the IOFAIL state. This indicates that there was a
hardware failure underneath the plex while the plex was online.
Also notice that the only plex of vol02 has a state of IOFAIL. To display more
information about this plex, type:
vxprint -g appdg -l vol02-01
Plex: vol02-01
info: len=204800
type: layout=CONCAT
state: state=ACTIVE kernel=DISABLED io=read-write
assoc: vol=vol02 sd=appdg02-02
flags: complete recover iofail
The recover flag means that VxVM believes that the data in this plex will need
to be recovered. In a temporary disk failure, where the disk may have been turned
off during an I/O stream, the data on that disk may still be valid. Therefore, do not
always interpret the recover and iofail flags in terms of bad data on the disk.

Recovering a volume
To perform volume recovery operations from the command line, you use the
vxrecover command. The vxrecover program performs plex attach, RAID-5
subdisk recovery, and resynchronize operations for specified volumes
(volume_name), or for volumes residing on specified disks (dm_name). You can
run vxrecover any time to resynchronize mirrors.
For example, after replacing the failed disk appdg02 in the appdg disk group, and
adding the new disk in its place, you can attempt to recover the appvol volume:
vxrecover -bs -g appdg appvol
To recover, in the background, any detached subdisks or plexes that resulted from
replacement of the disk appdg02 in the appdg disk group:
vxrecover -b -g appdg appdg02
Note that the -s option of the vxrecover command starts all disabled volumes
that can be started. However, if a non-redundant volume does not have a clean or
active plex, the vxrecover -s command will not succeed in starting it. In this
case, you may need to start the non-redundant volume forcibly using the vxvol -f start command as shown in the slide. Starting a volume is necessary before
you can perform any I/O on the volume, for example to restore data from a backup.

CAUTION You must never start redundant volumes forcibly. If you do so, you
may cause data corruption.
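As a sketch of the recovery commands discussed above, using the volume names from the earlier examples:

```shell
# Recover and start appvol in the background.
vxrecover -bs -g appdg appvol

# If a non-redundant volume such as vol02 will not start cleanly, force
# it to start before restoring data from backup. Never force-start a
# redundant volume; doing so may cause data corruption.
vxvol -g appdg -f start vol02
```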

Volume states during recovery
When you start the recovery on redundant volumes, the plex that is not
synchronized with the mirrored volume has a state of ENABLED and STALE.
During the period of synchronization, the stale plex is write-only (WO). After the
synchronization is complete, the plex state changes to ENABLED and ACTIVE,
and it becomes read-write (RW).
Resolving disk failures: Summary flowchart
Disk failures can be resolved by following the process described in the slide.
Note that to recover the user data on non-redundant volumes, you will need to
either perform a file system check and mount the file system if the failure was
temporary or create a new file system to restore from backup if the failure was
permanent.
Intermittent disk failures
Intermittent disk failures are failures that occur off and on and involve problems
that cannot be consistently reproduced. Therefore, these types of failures are the
most difficult for the operating system to handle and can cause the system to slow
down considerably while the operating system attempts to determine the nature of
the problem. If you encounter intermittent failures, you should move data off of the
disk and remove the disk from the system to avoid an unexpected failure later.
However, intermittent disk failures are also very rare. With intermittent disk
failures, you can sometimes observe disks being labeled by VxVM as failing as
shown on the slide.
If Volume Manager experiences occasional I/O failures on a disk but can still
access the private region of the disk, it marks the disk as failing.
Note: If the failing flag is set on a disk, it is not turned off until the administrator
executes the following command:
vxedit -g diskgroup set failing=off dm_name

Managing hot relocation at the host level
What is hot relocation?
Hot relocation is a feature of VxVM that enables a system to automatically react to
I/O failures on redundant (mirrored or RAID-5) VxVM objects and restore
redundancy and access to those objects. VxVM detects I/O failures on objects and
relocates the affected subdisks. The subdisks are relocated to disks designated as
spare disks or to free space within the disk group. VxVM then reconstructs the
objects that existed before the failure and makes them redundant and accessible
again.

Note: VxVM hot relocation is applicable when working with both physical disks
and hardware arrays. For example, even with hardware arrays if you mirror
a volume across LUN arrays, and one array becomes unusable, it is better to reconstruct a new mirror using the remaining array than to do nothing.

Note: Hot relocation is only performed for redundant (mirrored or RAID-5) subdisks on a failed disk. Nonredundant subdisks on a failed disk are not relocated, but the system administrator is notified of the failure.

How does hot relocation work?


The hot-relocation feature is enabled by default. No system administrator action is
needed to start hot relocation when a failure occurs.

The vxrelocd daemon starts during system startup and monitors VxVM for
failures involving disks, plexes, or RAID-5 subdisks. When a failure occurs,
vxrelocd triggers a hot-relocation attempt and notifies the system administrator,
through e-mail, of failures and any relocation and recovery actions.
The vxrelocd daemon is started by a VxVM start-up script during system boot
up. The argument to vxrelocd is the list of people to e-mail notice of a relocation
(default is root). To disable vxrelocd, you can place a # in front of the line in
the corresponding start-up file.
A successful hot-relocation process involves:
1 Failure detection: Detecting the failure of a disk, plex, or RAID-5 subdisk
(The affected Volume Manager objects are identified and the system
administrator and other designated users are notified.)
2 Relocation: Determining which subdisks can be relocated, finding space for
those subdisks, and relocating the subdisks (The system administrator and
other designated users are notified of the success or failure of these actions.
Hot relocation does not guarantee the same layout of data or the same
performance after relocation.)
3 Recovery: Initiating recovery procedures, if necessary, to restore the volumes
and data (Again, the system administrator and other designated users are
notified of the recovery attempt.)

How is space selected for relocation?


A spare disk must be initialized and placed in a disk group as a spare before it can
be used for replacement purposes.
Hot relocation attempts to move all subdisks from a failing drive to a single
spare destination disk, if possible.
If no disks have been designated as spares, VxVM automatically uses any
available free space in the disk group not currently on a disk used by the
volume.
If there is not enough spare disk space, a combination of spare disk space and
free space is used. Free space that you exclude from hot relocation is not used.
In all cases, hot relocation attempts to relocate subdisks to a spare in the same disk
group, which is physically closest to the failing or failed disk. Note that if there is not enough free space, it is possible for a subdisk to be relocated as multiple subdisks scattered across different disks.
When hot relocation occurs, the failed subdisk is removed from the configuration
database. The disk space used by the failed subdisk is not recycled as free space.

Managing spare disks
When you add a disk to a disk group, you can specify that the disk be added to the
pool of spare disks available to the hot relocation feature of VxVM. Any disk in
the same disk group can use the spare disk. Try to provide at least one hot-relocation spare disk per disk group. While designated as a spare, a disk is not used in creating volumes unless you specifically name the disk on the command line.
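For example, assuming hypothetical disk media names appdg03 and appdg04 in the appdg disk group:

```shell
# Designate appdg03 as a hot-relocation spare for the disk group.
vxedit -g appdg set spare=on appdg03

# Alternatively, exclude a disk's free space from hot-relocation use
# (appdg04 is also a placeholder name).
vxedit -g appdg set nohotuse=on appdg04
```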
Unrelocating a disk

The vxunreloc utility


The hot-relocation feature detects I/O failures in a subdisk, relocates the subdisk,
and recovers the plex associated with the subdisk.
VxVM also provides a utility that unrelocates a disk, that is, moves relocated
subdisks back to their original disk. After hot relocation moves subdisks from a
failed disk to other disks, you can return the relocated subdisks to their original
disk locations after the original disk is repaired or replaced.
Unrelocation is performed using the vxunreloc utility, which restores the system
to the same configuration that existed before a disk failure caused subdisks to be
relocated.
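A minimal sketch, reusing the failed-disk name from the earlier examples:

```shell
# Move subdisks that were hot-relocated away from appdg02 back to it,
# after the original disk has been repaired or replaced.
vxunreloc -g appdg appdg02
```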
Labs and solutions for this lesson are located on the following pages:
Appendix A provides complete lab instructions and solutions.
Lab 8: Resolving Hardware Problems, page A-159

Appendix A
Lab Solutions
Lab environment overview


The labs for this course are performed within a virtual environment consisting of
five virtual machines. This virtual environment has been developed specifically for
this course and is detailed in the next sections.
Virtual machine configuration
These labs use the five virtual machines shown in the slide. The introductory page
of each lab shows which virtual machines are being used. The labs contain icons,
such as those shown in the slide, to indicate which steps are performed on which
virtual machines.

System name   Description                          Fully qualified host name
mgt           NFS Server, DNS Server,              mgt.example.com
              iSCSI Array, Veritas Operations
              Manager Management Server
util1         iSCSI Array                          util1.example.com
winclient     Veritas Operations Manager console   winclient.example.com

System name Description Fully qualified host name


sym1 Storage Foundation sym1.example.com
Server
sym2 Storage Foundation sym2.example.com
Server

Note: The sym3 and sym4 systems are not used for the Storage Foundation labs.

Note: In the following exercises, the virtual machines are identified by the system
names in the preceding table.
Required parameters
The software and scripts required for the labs throughout the course exist in
subdirectories under the /student directory on each user system, as displayed on
the slide.

Virtual machine account information


Refer to the following table for the logon credentials to the various virtual
machines:

System      Logon information
mgt         The user does not log into this system unless instructed to by the instructor
util1       The user does not log into this system unless instructed to by the instructor
winclient   User: administrator   Password: train
sym1        User: root   Password: train
sym2        User: root   Password: train
sym3        User: root   Password: train
sym4        User: root   Password: train

Accessing virtual machines


Depending on how and where you attend this class, you access the virtual
machines in the lab environment in one of several ways. Your instructor will direct
you to the appropriate set of lab procedures for the environment you are using in
this class.
If you are working with VMware Workstation, continue to the VMware
Workstation Introduction section on the next page.
If you are working with Hatsize, skip to the Hatsize Introduction section that
starts after all the VMware Workstation exercises.
Lab 1: VMware Workstation Introduction
In this lab, you become familiar with the lab environment used with the Symantec
Storage Foundation 6.x for UNIX: Administration Fundamentals course, as well
as the method for accessing systems.
The hands-on portion of this lab enables you to perform basic operations on virtual
machines, as shown in the slide overview. Adopting the best practice guidelines
provided in this lab enables you to perform the remaining labs more efficiently.

Note: The exercises in this section apply to VMware Workstation lab
environments. Exercises for other environments, such as Hatsize, are
located elsewhere in this document.

VMware Workstation lab environment


In this lab environment, each of the virtual machines is connected to a virtual
network (10.10.2.0) residing on the host. This network can be used to access the
Web server on the vom virtual machine from a Web browser on the host machine
(if available). However, lab instructions use a Web browser on the vom virtual
machine itself.
Note that other virtual networks exist in the environment for accessing multiple
disk arrays and for the purpose of multi-pathing.
VMware Workstation interface
The screen shot in the slide shows the VMware Workstation interface used to
access the virtual machines. Virtual machines are referred to as guest systems,
which are running their own guest operating systems. The physical system running
the VMware Workstation application is referred to as the host system running the
host operating system. Virtual machines are accessed by clicking on the tab with
the appropriate system name.
Exercise 1: Starting virtual machines (VMware Workstation)

In this exercise, you start the virtual machines and display the existing snapshots
for each virtual machine.

1 If VMware is not already open, start VMware Workstation.


Solution

a On the desktop of your host system, double-click the Load Environment icon.

b Ensure that VMware Workstation opens and that the following tabs are
present: mgt, util1, winclient, sym1, and sym2. If any of these tabs are
missing, do not proceed; notify the instructor.

End of Solution

2 Review the current settings for each virtual machine.

Solution

a To select a virtual machine, click a tab.

b Use the Summary view to locate the Devices tab and review the
information showing the virtual machine configuration.

c Click each of the remaining tabs and review the Devices pane information
for each virtual machine.

End of Solution

mgt

3 Start the mgt virtual machine.

Solution

a In VMware, click the mgt tab.

b From the toolbar, click the green Power On button.

c While the virtual machine is starting, proceed to the next virtual machine.

End of Solution

util1
4 Start the util1 virtual machine.

Solution

a In VMware, click the util1 tab.

b From the toolbar, click the green Power On button.

c Wait until the login window is displayed.

End of Solution

CAUTION Do not proceed to the next step until the login screen is visible on
both mgt and util1. The mgt server will show a typical RHEL
graphical logon screen, while the util1 server will stop at a CLI
login prompt.

Note: The first two virtual machines must be turned on at all times during all lab
testing. Failure to start the mgt and util1 virtual machines results in
missing files and missing shared LUNs.

winclient

5 Start the winclient virtual machine.

Solution

a In VMware, click the winclient tab.

b From the toolbar, click the green Power On button.

c While the virtual machine is starting, proceed to the next virtual machine.

End of Solution

sym1

6 Start the sym1 virtual machine.

Solution

a In VMware, click the sym1 tab.

b From the toolbar, click the green Power On button.

c While the virtual machine is starting, proceed to the next virtual machine.

End of Solution

sym2

7 Start the sym2 virtual machine.



Solution

a In VMware, click the sym2 tab.

b From the toolbar, click the green Power On button.

End of Solution

Exercise 2: Logging on to virtual machines (VMware Workstation)

Log on to each virtual machine to become familiar with the logon procedures for
each system type.

Note: Do not log on to the mgt and util1 virtual machines unless the instructor
requests you to do so.

winclient

1 Log on to the Windows Server (winclient) as the administrator user.


Solution

a Click the winclient tab.

b On the login screen of the winclient system, type the username and
password, and then press Enter.

User name: administrator


Password: train
End of Solution

sym1

2 Log on to the first Storage Foundation Server (sym1) as the root user.

Solution

a Click the sym1 tab.

b On the login screen of the sym1 server, type the username and press
Enter.

User name: root

c When prompted, type the password for this system and press Enter.

Password: train

d Wait until all startup scripts have completed.

End of Solution

sym2

3 Log on to the second Storage Foundation Server (sym2) as the root user.

Solution

a Click the sym2 tab.

b On the login screen of the sym2 server, type the username and press
Enter.

User name: root

c When prompted, type the password for this system and press Enter.

Password: train

d Wait until all startup scripts have completed.

End of Solution

4 Press Ctrl+Alt to release keyboard and mouse controls from the virtual
machine.

Exercise 3: Running basic commands (VMware Workstation)

Determine whether the virtual machines can communicate by way of TCP/IP on
the virtual network.

sym1

5 On the first Storage Foundation Server (sym1), open a terminal window if
none is already open.

Solution
On the desktop, right-click and select Konsole.
End of Solution

6 Record the IP addresses assigned to this system.

Solution

a From one of the open terminal windows, type ip addr.

b Locate the entries for the eth1, eth2, eth3, and eth4 interfaces.

c Record the IP addresses on the following lines.

End of Solution

sym1 Server IP address - eth1: ___________________________________
sym1 Server IP address - eth2: ___________________________________
sym1 Server IP address - eth3: ___________________________________
sym1 Server IP address - eth4: ___________________________________
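If you prefer not to scan the full ip addr listing by eye, the interface/address pairs can be pulled out with a short awk filter. This is a sketch against sample `ip -4 -o addr` style output; the addresses shown are hypothetical, not necessarily the ones assigned in your lab.

```shell
# Parse interface names and IPv4 addresses from 'ip -4 -o addr' style output.
# The sample text is hypothetical; on a live lab system you would pipe the
# real command through the same filter:  ip -4 -o addr | awk '...'
sample='2: eth1    inet 10.10.2.11/24 brd 10.10.2.255 scope global eth1
3: eth2    inet 10.10.3.11/24 brd 10.10.3.255 scope global eth2'

# Field 2 is the interface; field 4 is address/prefix -- strip the prefix.
echo "$sample" | awk '{split($4, a, "/"); print $2, a[1]}'
```

The same filter works for all four lab interfaces (eth1 through eth4).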

7 From the terminal window, ping default gateways.

Solution
ping 10.10.2.1
ping 10.10.3.1

ping 10.10.4.1
ping 10.10.5.1
End of Solution

Did you receive a reply, indicating that systems are communicating?


Solution
Yes, the output shows a reply has been received from the gateway IP address.
End of Solution

Note: If a ping command reports unknown host or timeout errors, verify
the command syntax, and then contact the instructor for assistance.
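The four gateway pings can also be wrapped in a small loop that prints a per-host status and a summary, which makes a single failing network easier to spot. This is a sketch, assuming the gateway addresses used in this lab; check_hosts is a hypothetical helper, and -c 1 -W 2 sends one probe with a two-second timeout (Linux ping options).

```shell
# Ping each host once and summarize reachability.
check_hosts() {
  ok=0
  fail=0
  for h in "$@"; do
    if ping -c 1 -W 2 "$h" > /dev/null 2>&1; then
      echo "OK   $h"
      ok=$((ok + 1))
    else
      echo "FAIL $h"
      fail=$((fail + 1))
    fi
  done
  echo "reachable: $ok, unreachable: $fail"
}

# Gateway addresses from this lab environment.
check_hosts 10.10.2.1 10.10.3.1 10.10.4.1 10.10.5.1
```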

8 Use the nslookup command to view the fully qualified host name of the
second Storage Foundation Server (sym2).

Solution
nslookup sym2
End of Solution

What is the fully qualified host name of sym2?


Solution
sym2.example.com
End of Solution
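nslookup queries DNS directly. As a cross-check, getent hosts asks the system resolver (DNS plus /etc/hosts), which can help distinguish a DNS problem from a local configuration problem. A minimal sketch; fqhn is a hypothetical helper, and in the lab you would call it as fqhn sym2.

```shell
# Print the canonical name the resolver returns for a host.
# 'getent hosts' output format: <address> <canonical name> [aliases...]
fqhn() {
  getent hosts "$1" | awk '{print $2; exit}'
}

# Demonstrated on localhost; in the lab, 'fqhn sym2' would be expected to
# print sym2.example.com if DNS is configured as described.
fqhn localhost
```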

9 Ensure that iSCSI LUNs are available using the fdisk -l command.

Solution

From one of the open terminal windows, type fdisk -l.


End of Solution

Note: The mgt and util1 virtual machines must be running to have access to the
iSCSI LUNs. If only the sda and sdb disks are visible, contact the
instructor to isolate the issue.
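A quick way to confirm which disks are visible is to count the "Disk /dev/..." header lines in the fdisk -l output. The sketch below runs the filter over sample text (hypothetical devices and sizes) because fdisk itself needs root; on a lab system you would pipe `fdisk -l 2>/dev/null` through the same awk filter.

```shell
# Extract device names from 'fdisk -l' style output. The sample is hypothetical.
sample='Disk /dev/sda: 21.5 GB, 21474836480 bytes
Disk /dev/sdb: 10.7 GB, 10737418240 bytes
Disk /dev/sdc: 10.7 GB, 10737418240 bytes'

# Match the header lines and print the device name without the trailing colon.
echo "$sample" | awk '/^Disk \/dev\//{sub(":", "", $2); print $2}'
```

If the filter prints only the local disks on a live system, the iSCSI LUNs have not appeared yet.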

sym2

10 On the second Storage Foundation Server (sym2), open a terminal window if
none is already open.

Solution
On the desktop, right-click and select Konsole.
End of Solution

11 Record the IP addresses assigned to this system.

Solution

a From one of the open terminal windows, type ip addr.

b Locate the entries for the eth1, eth2, eth3 and eth4 interfaces.

c Record the IP addresses on the following lines.

End of Solution

sym2 Server IP address - eth1: ___________________________________
sym2 Server IP address - eth2: ___________________________________
sym2 Server IP address - eth3: ___________________________________
sym2 Server IP address - eth4: ___________________________________

12 From the terminal window, ping default gateways.

Solution
ping 10.10.2.1
ping 10.10.3.1
ping 10.10.4.1

ping 10.10.5.1
End of Solution

Did you receive a reply, indicating that systems are communicating?


Solution
Yes, the output shows a reply has been received from the gateway IP address.
End of Solution

Note: If a ping command reports unknown host or timeout errors, verify
the command syntax, and then contact the instructor for assistance.

13 Use the nslookup command to view the fully qualified host name of the first
Storage Foundation Server (sym1).

Solution
nslookup sym1
End of Solution

What is the fully qualified host name of sym1?


Solution
sym1.example.com
End of Solution

14 Ensure that iSCSI LUNs are available using the fdisk -l command.

Solution
From one of the open terminal windows, type fdisk -l.

End of Solution

Note: The mgt and util1 virtual machines must be running to have access to the
iSCSI LUNs. If only the sda and sdb disks are visible, contact the
instructor to isolate the issue.

End of lab


Lab 1: Hatsize Introduction


In this lab, you become familiar with the lab environment used with the Symantec
Storage Foundation 6.x for UNIX: Administration Fundamentals course, as well
as the way to access the systems in it.
The hands-on portion of this lab enables you to perform basic operations on virtual
machines, as shown in the slide overview. Adopting the best practice guidelines
provided in this lab enables you to perform the remaining labs more efficiently.

Note: These exercises are to be used only if the class is using the hosted Hatsize
platform to access the lab environment. Exercises for other environments,
such as VMware Workstation, are located elsewhere in this document.

Hatsize lab environment
The following table provides a translation of the virtual machine system names
referred to in the lab guide to the corresponding system names in the Hatsize
interface. Each system name is prefixed with S# (your student number) in Hatsize.

System Name   Description                     Hatsize Name
mgt           NFS Server                      S#.mgt
              DNS Server
              iSCSI Array
              Veritas Operations Manager
              Management Server
scst          iSCSI Array                     S#.scst
winclient     Veritas Operations Manager      S#.winclient
              console
sym1          Storage Foundation Server       S#.sym1
sym2          Storage Foundation Server       S#.sym2
sym3          VCS cluster                     S#.sym3
sym4          VCS cluster                     S#.sym4


Hatsize interface
The screen shot in the slide shows the Hatsize interface used to access the virtual
machines. Instead of using tabs, such as the tabs in VMware Workstation, you
access Hatsize virtual machines from the System Access menu. Other key
interface elements include:
Machine Commands: Indicates the currently connected machine and whether you
have control of the machine or are in view-only mode.
System Access: Used for the power on, power off, revert, and save options.

Note: The sym3 and sym4 systems are not used for the Storage Foundation labs.
Exercise 1: Connecting to the lab environment (Hatsize)

Log on to Hatsize and connect to the first system. For each lab environment in
Hatsize, a particular virtual machine is marked as a primary machine. All other
machines are marked as secondary machines. When you connect to the Hatsize
interface, you are initially connected to the primary virtual machine.

1 Locate the Hatsize portal URL and login credentials from your registration
e-mail. Record your credentials here:

Hatsize username:

Hatsize password:

2 Your student number is the number at the end of your Hatsize username
recorded in the previous step.

Record your student number here:

Note: When you use the Hatsize environment, all of the virtual machines
assigned to you are prefixed with a letter and your student number. For
example, if your student number is 8, the virtual machine named vom is
named something like k8-vom or s8-vom. Because the prefix is
different for each student, the lab exercises refer only to the system
name without the prefix.
3 In Internet Explorer, open the Hatsize portal URL and log in with your
assigned user name and password recorded in a previous step.

Solution
The logon screen in the browser is similar to this:

End of Solution

4 After logging in, find your class in the Current Classes table and click Enter
Lab. Note that the name of your class will be different than the sample shown
here.

Sample:
Exercise 2: Connecting to virtual machines (Hatsize)

Connect to additional virtual machines to become familiar with the Hatsize
environment.

Note: In this exercise, you perform these steps on sym1. You can repeat the
same steps for the other virtual machines, sym2 and winclient, in the
Hatsize environment.

sym1

1 Use the System Access menu to connect to sym1.


Solution
Select System > System Access > Open.

Note: The sample screenshot displayed in this solution shows different
system names than those you will observe in your environment.

End of Solution

2 Run the Java application to view sym1 in a new Java console.

3 The sym1 VM is displayed in a new Java console.



Note: The screen displayed (Java console) might differ for different VMs.
You can open a VM by single-clicking the thumbnail screen image or by
clicking the System pull-down control (located in the bottom-right
corner of the thumbnail) and selecting Open.

Other VMs in the kit can also be opened, each displayed as a new Java-based
window. The number of VMs you can have open at the same time is limited only
by your Internet bandwidth.

4 Logging on to a Linux system is a straightforward username/password entry,
whereas Windows systems require a Ctrl+Alt+Del key sequence to allow
user login.

5 Clicking the white triangle in the green square icon at the top of the
VM desktop provides access to the VM control functions, such as
keyboard entry and power management.
Exercise 3: Running basic commands (Hatsize)

Determine whether the virtual machines can communicate by way of TCP/IP on
the virtual networks.

Note: In this exercise, you perform these steps on sym1. You can repeat the
same steps for the other virtual machine, sym2, in the Hatsize
environment.

sym1

1 On the first Storage Foundation Server (sym1), open a terminal window if
none is already open.
Solution
On the desktop, right-click and select Konsole.
End of Solution

2 Record the IP addresses assigned to this system.

Solution

a From one of the open terminal windows, type ip addr.

b Locate the entries for the eth1, eth2, eth3 and eth4 interfaces.

c Record the IP addresses on the following lines.

End of Solution

sym1 Server IP address - eth1: ___________________________________


sym1 Server IP address - eth2: ___________________________________
sym1 Server IP address - eth3: ___________________________________
sym1 Server IP address - eth4: ___________________________________

3 From the terminal window, ping the sym2 IP addresses corresponding to each of
these networks. Note that the sym2 system IP addresses have .12 in the last octet on
each network; for example, 10.10.2.12 on the 10.10.2.0 network.

Solution
ping 10.10.2.12
ping 10.10.3.12
ping 10.10.4.12
ping 10.10.5.12
End of Solution

Did you receive a reply, indicating that systems are communicating?


Solution
Yes, the output shows a reply has been received from the sym2 IP addresses.
End of Solution

Note: If a ping command reports unknown host or timeout errors, verify
the command syntax, and then contact the instructor for assistance.

4 Use the nslookup command to view the fully qualified host name of the
second Storage Foundation Server (sym2).

Solution
nslookup sym2
End of Solution

What is the fully qualified host name of sym2?


Solution
sym2.example.com

End of Solution

5 Ensure that iSCSI LUNs are available using the fdisk -l command.

Solution
From one of the open terminal windows, type fdisk -l.
End of Solution

Note: The mgt and scst virtual machines must be running to have access to the
iSCSI LUNs. If only the sda, sdb, and sdc disks are visible, contact the
instructor to isolate the issue.

sym2

1 On the second Storage Foundation Server (sym2), open a terminal window if
none is already open.
Solution
On the desktop, right-click and select Konsole.
End of Solution

2 Record the IP addresses assigned to this system.

Solution

a From one of the open terminal windows, type ip addr.

b Locate the entries for the eth1, eth2, eth3 and eth4 interfaces.

c Record the IP addresses on the following lines.

End of Solution

sym2 Server IP address - eth1: ___________________________________
sym2 Server IP address - eth2: ___________________________________
sym2 Server IP address - eth3: ___________________________________
sym2 Server IP address - eth4: ___________________________________

3 From the terminal window, ping the sym1 IP addresses corresponding to each of
these networks. Note that the sym1 system IP addresses have .11 in the last octet on
each network; for example, 10.10.2.11 on the 10.10.2.0 network.

Solution
ping 10.10.2.11
ping 10.10.3.11
ping 10.10.4.11
ping 10.10.5.11
End of Solution

Did you receive a reply, indicating that systems are communicating?


Solution
Yes, the output shows a reply has been received from the sym1 IP addresses.
End of Solution

Note: If a ping command reports unknown host or timeout errors, verify
the command syntax, and then contact the instructor for assistance.

4 Use the nslookup command to view the fully qualified host name of the
first Storage Foundation Server (sym1).

Solution
nslookup sym1
End of Solution

What is the fully qualified host name of sym1?


Solution
sym1.example.com
End of Solution

5 Ensure that iSCSI LUNs are available using the fdisk -l command.

Solution
From one of the open terminal windows, type fdisk -l.

End of Solution

Note: The mgt and scst virtual machines must be running to have access to the
iSCSI LUNs. If only the sda, sdb, and sdc disks are visible, contact the
instructor to isolate the issue.

Exercise 4: Restarting virtual machines (Hatsize)

If the lab steps instruct you to restart a virtual machine, you must preserve the
system state during the process. Otherwise, the machine is restored to its initial
state and loses any changes you have made. Only discard the state of the machine
after consulting with your instructor. There are two methods: either within the
operating system on the virtual machine, or in the console System Control menu.

sym1

CAUTION Do not perform any of these steps without receiving permission
or notice from the instructor.

1 On the first Storage Foundation Server (sym1), open a terminal window if
none is already open.
Solution
On the desktop, right-click and select Konsole.
End of Solution

2 From a terminal window, use the shutdown -ry now command to restart
the virtual machine.

Solution
shutdown -ry now

End of Solution

Note: Using this method preserves system state.

Note: In this portion of the lab, you do not actually restart the virtual machine;
you stop at the screen where you can restart.

3 The power management machine commands are available in two places: from
the white-on-green triangle at the top of the VM desktop, and from the
System pull-down control in the thumbnail view. The two power
management machine commands currently available in the VA lab are:

CAUTION Do not perform any of these steps without receiving permission
or notice from the instructor.

a Power Cycle (power-off followed by power-on): This is like pulling the
plug on a real system, which will probably result in an automatic file
system integrity scan on reboot (like CHKDSK). Power Cycle should only
be used when the VM is totally unresponsive and cannot be shut down or
rebooted normally.

b Power Cycle and revert to last saved state: This operation returns the
VM to its first-day-of-class condition. Any and all work, including
software installations and configurations, performed since the beginning of
the class will be lost. This choice should only be used as a last resort when
a VM has become unusable and a total refresh is necessary. Do not
choose this option without instructor direction.
Note: The other machine command choices, such as Power Off with manual
power on, are typically not used during VA labs. Power Off and
overwrite saved state is an unimplemented snapshot feature in the
Hatsize lab.

4 After finishing all the lab exercises, remember to formally disconnect the VA
session. In the top-right corner, you will see a Disconnect option to close the
VA session.

Note: If you simply close the web browser window, your access takes a
few minutes to time out. Attempts to reconnect result in a User is
already logged in message in the portal until that timeout.

End of lab


Lab 2: Installing SF and Accessing SF Interfaces


In this lab, you verify that your lab system is ready for SF 6.x installation. You
then install Veritas Storage Foundation 6.x on your lab systems.
This lab contains the following exercises:
Verifying that the system meets installation requirements
Installing Veritas Storage Foundation
Performing post-installation and version checks
Optional lab: Setting up Veritas Enterprise Administrator
Optional lab: Text-based VxVM menu interface
Optional lab: Accessing CLI commands
Optional lab: Adding managed hosts to the VOM Management Server
Lab information
In preparation for this lab, you need the following information about your lab
environment.
Object                                        Value
root password                                 train
Host names of lab systems                     sym1, sym2
Domain name                                   example.com
Fully qualified hostnames (FQHN)              sym1.example.com, sym2.example.com
Boot disk on lab systems                      sda
Location of Storage Foundation 6.x software   /student/software/sf/sf61/rhel/rhel5_x86_64
Location of SORT data collector               /student/software/sf/sort
Location of lab scripts                       /student/labs/sf/sf61

The exercises for this lab start on the next page.


Exercise 1: Verifying that the system meets installation requirements

sym1

1 Before installing Storage Foundation, save the following important system
files into backup files named with a .preVM extension. Also, save your boot
disk information to a file for later use (do not store the file in /tmp). You may
need the boot disk information when you bring the boot disk under VxVM
control in a later optional lab.

/etc/grub.conf
/etc/modprobe.conf
Solution

a cp /etc/grub.conf /etc/grub.conf.preVM

b cp /etc/modprobe.conf /etc/modprobe.conf.preVM

c fdisk -l /dev/sda

Note: This lab section shows the steps for one lab system. These steps should
be repeated for all systems that SF 6.x will be installed on, for example
sym2.

End of Solution
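The cp commands in the solution follow one pattern (copy file to file.preVM), which can be wrapped in a small function if you have several files to save. This is a sketch, not part of the official lab procedure: backup_pre_vm is a hypothetical helper, demonstrated on a throwaway temporary file rather than the real /etc files.

```shell
# Copy each existing file to <file>.preVM and report what was saved.
backup_pre_vm() {
  for f in "$@"; do
    if [ -f "$f" ]; then
      cp "$f" "$f.preVM" && echo "saved $f.preVM"
    else
      echo "skipped $f (not found)"
    fi
  done
}

# In the lab this would be:
#   backup_pre_vm /etc/grub.conf /etc/modprobe.conf
# Demonstrated here on a temporary file:
d=$(mktemp -d)
echo "sample config" > "$d/grub.conf"
backup_pre_vm "$d/grub.conf" "$d/missing.conf"
```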

2 If you have access to the Internet, start a Web browser and navigate to the
Symantec Operations Readiness Tools (SORT) Web site at
https://sort.symantec.com. In the Download section, click the link
for SORT Data Collectors. If a Read and accept Terms of Service page
appears, click Accept. Select the link for the Linux (x86-64) operating system.
Save the SORT data collector sharball to a local directory, such as
/var/tmp, or to the Desktop.

If you do not have access to the Internet, copy the SORT data collector sharball
located in the /student/software/sf/sort directory to a local
directory, such as /var/tmp.

Note: The SORT data collector is updated for each release. You can download
the latest version from https://sort.symantec.com.

Solution

a cp /student/software/sf/sort/sort_linux_x64.sh \
/var/tmp

b cd /var/tmp

End of Solution

3 Decompress the SORT data collector sharball you copied to the local directory.
Note that you may need to change file permissions to execute the sharball or
run it using sh. When the Would you like to run the data
collector now? prompt is displayed, enter n.

Solution
sh ./sort_linux_x64.sh
Would you like to run the data collector now? [y,n] (y) n
End of Solution

4 Run the SORT data collector and verify completion using the displayed text output.

a Start the SORT utility. If you need to install Storage Foundation on more
than one system, start the SORT utility to check all systems.

Note: If your system has access to the Internet and a more recent version of the
SORT utility is available than the version you are running, you are
prompted that a newer version is available. Symantec recommends that you
always use the latest version of the SORT utility.
You may also be prompted to download the latest tier sheet. Type n and
proceed.

Solution
./sort/sortdc
End of Solution

b When prompted to accept the terms and conditions, enter y.


Note: If you are running the updated SORT data collector, the subsequent
menu options may vary.

c If required, press ENTER (RETURN) to continue.

d Type y to accept the terms and conditions.

Solution
Press [Return] to indicate your acceptance of the
terms and conditions as indicated in the
/var/tmp/sort/advanced/terms.txt file or q to decline: (y)
y
End of Solution

e When prompted to choose a Symantec enterprise product family, select
option 2) Storage Foundation and HA Solutions and press
Enter to continue.

Solution
Main Menu:

Choose the Symantec enterprise product family:


1) NetBackup
2) Storage Foundation and HA Solutions

Choose your option: [1-2,q] (1) 2



End of Solution

f When prompted for which task to accomplish, select option 1)
Installation/Upgrade report and press Enter to continue.

Solution
Main Menu ->Storage Foundation and HA Solutions:

What task do you want to accomplish?


1) Installation/Upgrade report

2) Risk Assessment report
3) License/Deployment report
4) VxExplorer report
5) Other tasks
b) Back to previous menu

Choose your option (separate multiple selections
with commas): [1-5,b,q] (1,2,3) 1
End of Solution

g When prompted on which system to run the report, select option 2) One
or more remote systems and press Enter to continue.

Solution
Main Menu->Installation/Upgrade report->SFProductFamily:

On which systems do you want to run and report?


1) This system only(<sym1>)
2) One or more remote systems
3) IP Address Range
b) Back to previous menu

Choose your option: [1-3,b,q] (1) 2


End of Solution

h When prompted, type the names of the systems that you want to test.

Note: SORT data collector uses the same code base as the CPI installer, so
you can specify multiple systems of the same OS and the utility
includes all specified systems in the test. A single XML file is
created that includes all systems.

Solution
Enter one or more system names separated by space,
or the full-qualified path to a file containing a
list of system names: [b,q,?] sym1 sym2
End of Solution

i Provide the user name for accessing remote systems if you entered a
system other than the system that contains the SORT utility. Accept the
default user name (root) and press Enter to continue.

Solution

Enter a user name to access the remote system(s): [q,?] (root)
End of Solution

j After the SORT data collector checks for partial clusters and performs
some basic data collection, choose the Symantec enterprise product you
want to install or upgrade to. Select option 1)Storage Foundation
and press Enter to continue.

Solution
Choose the Symantec enterprise product you want to
install or upgrade to. If you are installing or
upgrading multiple products, run the data
collector for each one.

1) Storage Foundation
2) Storage Foundation for Oracle
3) Cluster Server
4) Storage Foundation HA
5) Storage Foundation Cluster File System
6) Storage Foundation for Cluster File
System/HA
7) Storage Foundation for Oracle RAC
8) Storage Foundation for Sybase
9) Storage Foundation for DB2
10) Storage Foundation Cluster File System for
Oracle RAC
11) Storage Foundation Sybase ASE CE
b) Back to previous menu

Choose the product: [1-11,b,q] (1) 1


End of Solution

k Choose the product version to which you want to install or upgrade. Select
option 1) and press Enter to continue.

Solution
Choose the product version you want to install or
upgrade to on the system(s) in your environment.

Storage Foundation

1) 6.1 (AIX, Linux_x86_64, SunOS_sparc)


2) 6.0.4 (Linux_x86_64)
3) 6.0.1 (AIX, HP-UX, Linux_x86_64, SunOS_sparc,
SunOS_x86_64)
4) 6.0 (AIX, HP-UX, Linux_x86_64, SunOS_sparc,
SunOS_x86_64)
5) 5.1SP1 (AIX, HP-UX, Linux_x86_64, SunOS_sparc,
SunOS_x86_64)
6) 5.1 (AIX, Linux_x86_64, SunOS_sparc,
SunOS_x86_64)
7) 5.0MP4 (Linux_ppc64, Linux_x86_64)
8) 5.0RU4 (Linux_ppc64, Linux_x86_64)
9) 5.0RU1 (Linux_x86_64)
10) 5.0MP3 (AIX, Linux_x86_64, SunOS_sparc,
SunOS_x86_64)
11) 5.0MP2 (HP-UX, Linux_x86_64)
12) 5.0MP1 (AIX, HP-UX, Linux_x86_64,
SunOS_sparc)
13) 5.0 (AIX, HP-UX, Linux_x86_64, SunOS_sparc,
SunOS_x86_64)
b) Back to previous menu
Choose the product version: [1-13,b,q] (1) 1


End of Solution

l The SORT data collector collects data and generates XML and TXT report
files. If the system has access to the SORT Web site, you are prompted to
upload the file. Otherwise, a message is displayed stating that the SORT
Web site cannot be accessed and describing how to manually upload the
file.

Solution
Analyzing systems: 100%
Estimated time remaining: 00:00:00                               5 of 5

Detecting the server tier ......................................... Done
Detecting the processor tier ...................................... Done
Detecting installed Storage Foundation products ................... Done
Running a pre-installation assessment ............................. Done
Detecting installed Storage Foundation patches .................... Done

Generated XML and text files based on the systems and the time you ran
the data collector.
Created /var/tmp/sort/reports/sym1andothers_IAS_20111101_111659.xml
Created /var/tmp/sort/reports/sym1andothers_IAS_20111101_111659.txt

The system cannot access the SORT Web site now, you can manually upload
the XML file to the SORT Web site (https://sort.symantec.com/) to view
your custom server report that contains documentation and links related
to your environment. The text file does not contain this additional
information.
End of Solution

m When all tasks have been completed, you are prompted to exit the data
collector; select y.

Solution
Your tasks are completed. Would you like to exit
the data collector? [y,n,q](y) y
End of Solution

n If desired, view the report .txt file.

Solution
more /var/tmp/sort/reports/[report_name].txt
End of Solution

5 If you have access to the Internet, upload the SORT .xml output file to the
SORT Web site. Otherwise, skip steps 5 and 6.

Note: Uploading the SORT .xml report to the SORT Web site requires that
there be access to the Internet from the classroom lab. If an external
connection is not available, the .xml file can be saved to a USB drive
and these steps can be performed at a later date.

Solution

a Open a Web browser and navigate to https://sort.symantec.com.

b Under the SORT section, click the My SORT link. Under the Custom
Reports Using Data Collectors section, select the Upload Report tab.
Click the Choose File button and then browse to the /var/tmp/sort/reports
directory, select the SORT .xml file, and click the Open button. Click
the Upload button to continue.

End of Solution

6 Using the displayed output, determine if the system is ready for installation.

Solution

Mark the checkbox next to Passed in the Filter View By section at the top of
the page. In the Summary for this server section, ensure that each section
displays a green icon. If any of the sections display an orange or red icon,
record the steps that need to be taken before the installation can be performed
on the following lines.
_______________________________________________________________
______________________________________________________________
End of Solution
Exercise 2: Installing Symantec Storage Foundation

sym1

1 Open a terminal window and navigate to the directory that contains the Storage
Foundation 6.x installer script.
Solution
cd /student/software/sf/sf61/rhel/rhel5_x86_64
End of Solution

2 Perform a CPI installation of Storage Foundation 6.x.

a Start the installer script.

Note: The SF installer script is designed to check for SSH
communications first and then RSH communications if SSH is not
available.

Solution
./installer
End of Solution

b Select I for the Install a Product option.

c Select 3 for the Symantec Storage Foundation (SF) option.



d Type y to agree to the terms of the End User License Agreement (EULA).

e Select 3 for Install all rpms.

f Type the names of your two systems when prompted. The server where the
installer script was executed is the default value.

Solution
sym1 sym2
End of Solution

g Observe that the following checks complete successfully:

System communications
Release compatibility
Installed product
Prerequisite patches and rpms
Platform version
File system free space
Product licensing
Product prechecks

If you discover any issues, report them at this time.

h Press Enter to scroll through the list of packages and start the package
installation.

i Select 2 for the Enable keyless licensing and complete system licensing
later option.

j Select 2 for SF Enterprise product mode to license.

k Type n to not enable replication.

l Observe that the Storage Foundation startup completes successfully.

m Type n when asked to send the information about this installation.

n View the summary file, if desired.

3 Check to ensure that the Storage Foundation path (/opt/VRTS/bin) is
present in the profile.

Notes:
Your lab systems are already configured with the PATH and MANPATH
environment variable settings. However, in a real-life environment, you
must modify ~/.bash_profile or /etc/profile yourself.
The VxVM commands in the /opt/VRTS/bin directory are linked to
the same commands in /usr/lib/vxvm/bin.
Solution
echo $PATH

echo $MANPATH
End of Solution
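In a real-life environment, the profile change described in the notes above can be sketched as follows. The SF_BIN variable is illustrative; the lines mirror what you would add to /etc/profile yourself.

```shell
# Hedged sketch: append the SF binary directory to PATH only when it is
# not already present -- the kind of lines you would add to /etc/profile.
SF_BIN=/opt/VRTS/bin
case ":$PATH:" in
    *":$SF_BIN:"*) ;;                       # already on PATH; do nothing
    *) PATH="$PATH:$SF_BIN"; export PATH ;;
esac
```

The case guard keeps the entry from being appended twice when the profile is sourced repeatedly.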

4 Verify that Storage Foundation 6.x packages have been properly installed.

Solution
rpm -qa | grep VRTS
VRTSvlic-3.02.61.010-0
VRTSfsadv-6.1.000.000-GA_RHEL5
VRTSob-3.4.678-0
VRTSsfmh-6.0.0.0-0
VRTSspt-6.1.000.000-GA
VRTSlvmconv-6.1.000.000-GA_RHEL5
VRTSdbed-6.1.000.000-GA_RHEL
VRTSvxvm-6.1.000.000-GA_RHEL5
VRTSvxfs-6.1.000.000-GA_RHEL5
VRTSodm-6.1.000.000-GA_RHEL5
VRTSaslapm-6.1.000.000-GA_RHEL5
VRTSsfcpi61-6.1.000.000-GA_GENERIC
VRTSperl-5.16.1.6-RHEL5.5
VRTSfssdk-6.1.000.000-GA_RHEL5

rpm -qi VRTSvxvm


End of Solution
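The package check above can also be scripted. This is a hedged sketch: check_packages is a hypothetical helper, and the query command is a parameter, so on the lab systems you would pass rpm -q.

```shell
# Hedged sketch: report any packages missing from a required-package
# list. The query command is a parameter (on the lab systems you would
# pass "rpm -q"); the function name check_packages is illustrative.
check_packages() {
    query=$1; shift
    missing=0
    for p in "$@"; do
        $query "$p" >/dev/null 2>&1 || { echo "MISSING: $p"; missing=1; }
    done
    return $missing
}

# Lab usage: check_packages "rpm -q" VRTSperl VRTSvlic VRTSvxfs VRTSvxvm
```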

5 View the log files from the installation using the location specified at the end
of the installation. The log file directory is located in
/opt/VRTS/install/logs.

Solution
cd /opt/VRTS/install/logs/
ls
cd installer-unique_string/

ls
installer-unique_string.summary
installer-unique_string.response
installer-unique_string.tunables
installer-unique_string.log#
install.package.system
start.SFprocess.system
End of Solution
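Because each installer log directory name contains a unique string, locating the most recent one can be scripted. The helper below, newest_log_dir, is an illustrative sketch and not a Storage Foundation command; it relies only on ls -td, which sorts directories newest-first by modification time.

```shell
# Hedged sketch: print the newest "installer-<unique_string>" directory
# under a logs directory (defaults to the SF location). The function
# name is illustrative, not part of the product.
newest_log_dir() {
    dir=${1:-/opt/VRTS/install/logs}
    ls -td "$dir"/installer-* 2>/dev/null | head -1
}

# Lab usage: cd "$(newest_log_dir)"
```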

6 Use the vxlicrep command to view the keys that were installed during the
installation.

Solution
vxlicrep | more

Symantec License Manager vxlicrep utility version 3.02.61.010
Copyright (C) 1996-2013 Symantec Corporation. All rights reserved.

Creating a report on all VERITAS products installed on this system

-----------------***********************-----------------

License Key = XXXX-XXXX-XXXX-XXXX-XXXX-XXXX-XXXX-XXXX-X
Product Name = VERITAS Storage Foundation Enterprise
Serial Number = 16689
License Type = PERMANENT
OEM ID = 2006
Site License = YES
Editions Product = YES

Features :=
Reserved = 0
CPU Count = Not Restricted

VxVM#VERITAS Volume Manager = Enabled
VXFS#VERITAS File System = Enabled
QLOG#VERITAS File System = Enabled
PGR#VERITAS Volume Manager = Enabled
VERITAS Storage Foundation Enterprise = Enabled
. . . (more)
End of Solution

Exercise 3: Performing post-installation and version checks

sym1

1 Open a terminal window and navigate to the directory that contains the Storage
Foundation 6.x installer script.
Solution
cd /student/software/sf/sf61/rhel/rhel5_x86_64
End of Solution

2 Perform a CPI post-installation check of the Storage Foundation 6.x systems.

a Start the installer script.

Note: The SF installer script is designed to check for SSH
communications first and then RSH communications if SSH is not
available.

Solution
./installer
End of Solution

b Select O for the Perform a Post-Installation Check option.

c Select 3 for the Symantec Storage Foundation (SF) option.



d Type the names of your two systems when prompted. The server where the
installer script was executed is the default value.

Solution
sym1 sym2
End of Solution

e Observe that the following checks complete successfully:

System communications
Release compatibility
Installed product
Platform version
Product prechecks

If you discover any issues, report them at this time.

f Observe that the Storage Foundation postcheck completes successfully.

Note: Notice that the post-installation check displays a warning because
the sda, sdb, and sdc disks are not in an online state. This warning
can be ignored.

g View the summary file, if desired.

3 Perform a version check of the installed Storage Foundation 6.x systems. Start
the installer script with the -version option. Specify the sym1 and
sym2 system names.

Note: The SF installer script is designed to check for SSH
communications first and then RSH communications if SSH is not
available. Perform the following steps 3a, 3b, 3c, and 3d as shown
below.

Solution

a ./installer -version sym1 sym2


CPI WARNING v-9030-2323 The version checker cannot
connect to SORT website. Due to lack of access to
release matrix files on the SORT website, the results
returned from the version checker may not be up to
date.

b Do you want to continue [y,n,q] (n) y


....
Installed product(s) on sym1:
Symantec Storage Foundation - 6.1

Product:
Symantec Storage Foundation - 6.1

Packages:
Installed Required packages for Symantec
Storage Foundation 6.1:
#PACKAGE #VERSION
VRTSaslapm 6.1.000.000
VRTSfsadv 6.1.000.000
VRTSperl 5.16.1.6
VRTSsfcpi61 6.1.000.000
VRTSvlic 3.02.61.010
VRTSvxfs 6.1.000.000
VRTSvxvm 6.1.000.000

Installed Optional packages for Symantec
Storage Foundation 6.1:
#PACKAGE #VERSION
VRTSdbed 6.1.000.000
VRTSfssdk 6.1.000.000
VRTSlvmconv 6.1.000.000
VRTSob 3.4.678
VRTSodm 6.1.000.000
VRTSsfmh 6.0.0.0
VRTSspt 6.1.000.000

Summary:

Packages:
7 of 7 required Symantec Storage Foundation
6.1 packages installed
7 of 7 optional Symantec Storage Foundation
6.1 packages installed
Installed Public and private Hot Fixes for Symantec
Storage Foundation 6.1: None

Installed product(s) on sym2:
Symantec Storage Foundation - 6.1

Product:
Symantec Storage Foundation - 6.1

Packages:
Installed Required packages for Symantec
Storage Foundation 6.1:
#PACKAGE #VERSION
VRTSaslapm 6.1.000.000
VRTSfsadv 6.1.000.000
VRTSperl 5.16.1.6
VRTSsfcpi61 6.1.000.000
VRTSvlic 3.02.61.010
VRTSvxfs 6.1.000.000
VRTSvxvm 6.1.000.000

Installed Optional packages for Symantec
Storage Foundation 6.1:
#PACKAGE #VERSION
VRTSdbed 6.1.000.000
VRTSfssdk 6.1.000.000
VRTSlvmconv 6.1.000.000
VRTSob 3.4.678
VRTSodm 6.1.000.000
VRTSsfmh 6.0.0.0
VRTSspt 6.1.000.000

Summary:

Packages:
7 of 7 required Symantec Storage Foundation
6.1 packages installed
7 of 7 optional Symantec Storage Foundation
6.1 packages installed
Installed Public and private Hot Fixes for Symantec
Storage Foundation 6.1: None

Note: If required, perform the following steps 3c and 3d; otherwise, skip
them and move to the next step.

c Would you like to view Available Upgrade
Options [y,n,q] (y) n

d Would you like to version check additional
systems [y,n,q] (y) n
Please visit https://sort.symantec.com for more
information.
End of Solution

Note: The installer script attempts to contact the SORT Web site to
check for product updates.

Exercise 4: Optional lab: Setting up Veritas Enterprise
Administrator

sym1

Note: Check with your instructor to see if you have more time to complete the
optional lab exercises. You do not need to perform the optional lab
exercises unless you have extra time. The optional exercises do not have
any impact on further labs.

The VEA GUI client package has been removed from the Storage Foundation
installed packages, although the object bus (VRTSob) is still present. This section
covers how to enable the server and install the VEA GUI client if desired.

1 From a terminal window, use the vxsvcctrl command to activate vxsvc.


Solution
/opt/VRTS/bin/vxsvcctrl activate
End of Solution

2 Verify that the service is online (enabled).

Solution
chkconfig --list | grep isisd
isisd           0:off   1:off   2:off   3:on    4:off   5:on    6:off

Note: The chkconfig command is used to list and maintain the
/etc/rc[0-6].d directories.
End of Solution

3 Use the ps -ef command to determine if the vxsvc daemon is running. If it
is not, use vxsvcctrl start to start the daemon.

Solution
ps -ef | grep vxsvc
/opt/VRTS/bin/vxsvcctrl start
/opt/VRTS/bin/vxsvcctrl status
End of Solution

4 Attempt to start the VEA GUI using the vea command. Observe the message
displayed. Press Enter.

Solution
vea &

VEA GUI is no longer packaged. Symantec recommends that you use VOM to
manage, monitor, and report on multi-host environments. You can download
this utility at no charge from http://go.symantec.com/vom. If you
wish to continue using VEA GUI, you can download it from the same Web
site.
End of Solution

5 Navigate to the directory that contains the VEA GUI package.

Solution
cd /student/software/sf/vea_gui
End of Solution

6 Install the VEA GUI package using the appropriate OS commands.

Solution
rpm -ivh VRTSobgui-3.4.15.0-0.i686.rpm
End of Solution

7 Re-create the symbolic link from /opt/VRTS/bin/vea to
/opt/VRTSob/bin/vea. By default, vea is symbolically linked to
/opt/VRTS/bin/vea.sh.

Solution

rm /opt/VRTS/bin/vea
ln -s /opt/VRTSob/bin/vea /opt/VRTS/bin/vea
End of Solution

8 Verify that you can start the VEA GUI and connect to the local host.

Solution
/opt/VRTS/bin/vea &
End of Solution

9 In the Select Profile window, click the Manage Profiles button and configure
VEA to always start with the Default profile.

Solution
Set the Start VEA using profile option to Default, click Close, and then
click OK to continue.
End of Solution

10 Click the Connect to a Host or Domain link and connect to your system as
root.

Solution
Hostname: (For example, sym1)
Username: root
Password: train
End of Solution

11 On the left pane (object tree) view, navigate the system and observe the various
categories of VxVM objects.

12 Select the Assistant perspective on the quick access bar and view tasks
for systemname.
13 Using the System perspective, determine which disks are available to the OS.

Solution
In the System perspective object tree, expand your host and then select the
Disks node. Examine the Device column in the grid.
End of Solution

14 Execute the Disk Scan command and check if any messages are displayed on
the Console view.

Solution
In the VEA System perspective object tree, select your host. Select Actions >
Rescan.
End of Solution

15 Which commands were executed by the Disk Scan task?

Solution
Navigate to the Log perspective. Select the Task Log tab in the right pane and
double-click the Scan for new disks task.
End of Solution

16 Exit the VEA graphical interface.

Solution
In the VEA main window, select File > Exit. Confirm when prompted.
End of Solution

17 Create a root-equivalent administrative account named admin1 for use with VEA.

Solution

a Create a new administrative account named admin1:

useradd admin1
passwd admin1

b Type a password for admin1. If a BAD PASSWORD message appears,
ignore it and retype the same password.

c Modify the /etc/group file to add the vrtsadm group and specify the
root and admin1 users by using the vi editor, as follows:

vi /etc/group

d In the file, navigate to the location where you want to insert the vrtsadm
entry, change to insert mode by typing o, and then add the line:

vrtsadm::99:root,admin1

e When you are finished editing, press Esc to leave insert mode.

f Then, save the file and quit, as follows:

:wq
End of Solution
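As an alternative sketch to hand-editing with vi, the vrtsadm entry can be appended non-interactively. The helper below is hypothetical: it matches the lab entry (gid 99, members root and admin1) and takes the group file as a parameter, defaulting to /etc/group, so it can be tried safely on a copy first.

```shell
# Hedged sketch: append the vrtsadm group entry only if it is not
# already present. The function name and the file-path parameter are
# illustrative; the entry itself matches the lab's vi edit.
add_vrtsadm_entry() {
    gf=${1:-/etc/group}
    grep -q '^vrtsadm:' "$gf" || echo 'vrtsadm::99:root,admin1' >> "$gf"
}

# Lab usage (run as root): add_vrtsadm_entry
```

Because of the grep guard, running the helper twice does not duplicate the entry.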

18 Test the new account. After you have tested the new account, exit VEA.

Solution

a Launch VEA, as follows:

vea &

b Select Connect, and specify the host name, as follows:

Hostname: sym1

c Select the Connect using a different user account option and click
Connect.

d Type the username and password for the new user, as follows:

User: admin1
Password: (Type the password that you created for admin1.)

e After confirming the account, select File > Exit.

End of Solution

Exercise 5: Optional lab: Text-based VxVM menu interface

sym1

Note: Check with your instructor to see if you have more time to complete the
optional lab exercises. You do not need to perform the optional lab
exercises unless you have extra time. The optional exercises do not have
any impact on further labs.

1 From the command line, invoke the text-based VxVM menu interface using
the vxdiskadm command.
Solution
vxdiskadm
End of Solution

2 Display information about the menu or about specific commands.

Solution
Type ? at any of the prompts within the interface.
End of Solution

3 Which disks are available to the OS?



Solution

a Type list at the main menu.

b Type all at the next prompt.

End of Solution

4 Exit the vxdiskadm interface.

Solution
Type q at the prompts until you exit vxdiskadm.
End of Solution

Exercise 6: Optional lab: Accessing CLI commands

sym1

Note: Check with your instructor to see if you have more time to complete the
optional lab exercises. You do not need to perform the optional lab
exercises unless you have extra time. The optional exercises do not have
any impact on further labs.

This exercise introduces several commonly used VxVM commands. These
commands and associated concepts are explained in detail throughout this course.
If you have used Volume Manager before, you may already be familiar with these
commands. If you are new to Volume Manager, this exercise aims to show you the
amount of information you can get from the manual pages. Note that you do not
need to read all of the manual pages for this exercise.

1 From the command line, invoke the VxVM manual pages as follows and then
read about the vxassist command.
Solution
man vxassist
End of Solution

2 Which vxassist command parameter creates a VxVM volume?

Solution
The make parameter is used in creating a volume.


End of Solution

3 From the command line, invoke the VxVM manual pages to read about the
vxdisk command.

Solution
man vxdisk
End of Solution

4 Which disks are available to VxVM?

Solution
vxdisk -o alldgs list
All the available disks are displayed in the list.
End of Solution

5 From the command line, invoke the VxVM manual pages to read about the
vxdg command.

Solution
man vxdg
End of Solution

6 How do you list locally imported disk groups?

Solution
vxdg list
Note: Because you have not created any disk groups yet, the command output
shows only the header statement at this stage in the labs.
End of Solution

7 From the command line, invoke the VxVM manual pages to read about the
vxprint command.

Solution
man vxprint

End of Solution

8 Determine which Storage Foundation daemons are running on the system
using the ps -ef command.

Solution
ps -ef | grep -i vx

vxconfigd, vxrelocd, vxnotify, vxesd, vxdclid,
vxconfigbackupd, vxsvc, vxattachd, vxdbd.
End of Solution

Exercise 7: Optional lab: Adding managed hosts to the VOM
Management Server

This section requires an additional system that has SF 6.x pre-installed but is not
already configured in the VOM MS. For this lab section, use the sym1 and sym2
virtual machines.

sym1

1 Open a terminal window and use the ps -ef command to determine if the
vxsvc daemon is running. If it is not, use vxsvcctrl activate followed
by vxsvcctrl start to start the daemon.

Note: You are enabling the vxsvc service now for ease of use later in the lab.

Solution

a ps -ef | grep vxsvc

b /opt/VRTS/bin/vxsvcctrl activate

c /opt/VRTS/bin/vxsvcctrl start

End of Solution
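Steps 1 and 3 in this exercise use the same check-then-start pattern. The sketch below captures it in a hypothetical helper; on the lab systems you would pass the daemon name vxsvc and the vxsvcctrl start command as shown in the usage comment.

```shell
# Hedged sketch of the "check, then start if needed" pattern. The daemon
# name and the start command are parameters; the function name is
# illustrative, not part of Storage Foundation.
ensure_running() {
    name=$1; shift
    if ps -ef | grep -v grep | grep -q "$name"; then
        :                               # already running; nothing to do
    else
        "$@"                            # run the supplied start command
    fi
}

# Lab usage: ensure_running vxsvc /opt/VRTS/bin/vxsvcctrl start
```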

2 Verify that the service (isisd) is online (enabled) on the system to be added
as a managed host on the MS server.

Solution
chkconfig --list | grep isisd
isisd           0:off   1:off   2:off   3:on    4:off   5:on    6:off

Note: The chkconfig command is used to list and maintain the
/etc/rc[0-6].d directories.

End of Solution

sym2

3 Open a terminal window and use the ps -ef command to determine if the
vxsvc daemon is running. If it is not, use vxsvcctrl activate followed
by vxsvcctrl start to start the daemon.

Note: You are enabling the vxsvc service now for ease of use later in the lab.

Solution

a ps -ef | grep vxsvc

b /opt/VRTS/bin/vxsvcctrl activate

c /opt/VRTS/bin/vxsvcctrl start

End of Solution

4 Verify that the service (isisd) is online (enabled) on the system to be added
as a managed host on the MS server.

Solution

chkconfig --list | grep isisd
isisd 0:off 1:off 2:off 3:on 4:off 5:on 6:off

Note: The chkconfig command is used to list and maintain the /etc/rc[0-6].d directories.

End of Solution

winclient
5 If the Web console is already open, skip this step. Otherwise, open a Web
browser and type the URL for the VOM MS Web console on the address line.
You may receive an error about the Web site's certificate; click the buttons necessary to continue to this Web site. Log on using the root username and password for your server.

Solution

a From the browser window, in the address field, type: https://mgt.example.com:14161/

b In the Username field, type root

c In the Password field, type train

End of Solution

Note: If required, manually change the resolution of the web page using the IE settings option.

6 On the Home page, under the different perspectives, click on Settings.

7 Click on the Host icon.

8 Click Add Hosts > Agent.



9 In the Host Name field, type the fully qualified host name of the first system to
be added as a managed host (sym1.example.com).

10 In the User Name field, type root, and in the Password field type the root
password.

11 Click the Add Entry button.

12 In the new row that is displayed, type the fully qualified hostname of the
second system to be added as a managed host (sym2.example.com). Type
root for the username, and the appropriate password.

13 Click Finish to have the VOM server add the hosts, and then click OK.

Note: The VOM MS server and the managed host must be time synchronized.
Check the system times using the date command to ensure that the
time difference between the two systems is not greater than five
minutes.
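As an optional check (not a lab step), the skew between the two clocks can be computed from epoch seconds. The sym1 hostname and passwordless ssh access are assumptions from this lab environment; if the remote host cannot be reached, the sketch reuses the local time and reports zero skew.

```shell
# Optional sanity check (not a lab step): compare the local clock with a
# managed host's clock using epoch seconds. "sym1" and passwordless ssh
# are assumptions from this lab environment; if the remote host cannot
# be reached, the local time is reused and the skew reads as zero.
t_ms=$(date +%s)
t_mh=$(ssh -o BatchMode=yes -o ConnectTimeout=2 sym1 date +%s 2>/dev/null || echo "$t_ms")
skew=$(( t_ms > t_mh ? t_ms - t_mh : t_mh - t_ms ))
if [ "$skew" -gt 300 ]; then
  echo "clock skew ${skew}s exceeds the five-minute limit"
else
  echo "clock skew ${skew}s is within tolerance"
fi
```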

14 Verify that your hosts have been added. You can do this by going to the Server
perspective and viewing the Hosts tab. (Click on Home > Server> Hosts.)

End of lab

Lab 3: Creating a Volume and File System


In this lab, you create new disk groups, simple volumes, and file systems, mount
and unmount the file systems, and observe the volume and disk properties. The
first exercises use the command line interface. The optional exercises use the
VOM interface.
This lab contains the following exercises:
Creating disk groups, volumes and file systems: CLI
Removing volumes and disks: CLI
Destroying disk data using disk shredding: CLI
Optional lab: Creating disk groups, volumes, and file systems: VOM
Optional lab: Removing volumes, disks, and disk groups: VOM
If you use object names other than the ones provided, substitute the names accordingly in the commands.

CAUTION In this lab, do not include the boot disk in any of the tasks.

Prerequisite setup
To perform this lab, you need a lab system with Storage Foundation pre-installed,
configured and licensed. In addition to this, you also need four empty and unused
external disks to be used during the labs.

Although you should not have to perform disk labeling, here are some tips that
may help if your disks are not properly formatted:
On Linux, if you have problems initializing a disk, you may need to run this
command: fdisk /dev/disk.
Within fdisk, use the o and w commands to create and write a new DOS partition table.
If you are unsure of the device name, you can use the fdisk -l command to
list all devices that are visible to the Linux Operating System.

Note: If there are multiple paths to each disk, the fdisk -l output will show more devices than are actually present.

Lab information
In preparation for this lab, you need the following information about your lab
environment.
Object Value
root password train
Host names of lab systems winclient
sym1
Shared data disks: emc0_dd7 - emc0_dd12
3pardata0_49 - 3pardata0_60

Exercise 1: Creating disk groups, volumes and file systems: CLI

sym1

1 View all the disk devices on the system. What is the status of the disks
assigned to you for the labs?
Solution
vxdisk -o alldgs list
End of Solution

2 Choose a disk (emc0_dd7) and initialize it, if necessary, using the CLI. Using the vxdisk -o alldgs list command, observe the change in the Status column. What is the status of the disk now?

CAUTION Do not initialize the sda device. This is the system boot disk.
Do not initialize the sdb device. This disk has Oracle binaries.

Note: If any of the emc0_dd7 through emc0_dd12 disks have a udid_mismatch error status, then reinitialize the disks using vxdisksetup -i disk_name.

Solution
vxdisksetup -i emc0_dd7
vxdisk -o alldgs list

The TYPE field should change to auto:cdsdisk and the STATUS of the
disk should change to online but the DISK and GROUP columns should still
be empty.
End of Solution
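As an optional illustration (not a lab step), output in the vxdisk list format can be filtered for initialized disks that are not yet in a disk group, i.e. the state described above. The sample lines below are illustrative, not captured from a live system.

```shell
# Optional illustration (not a lab step): filter vxdisk-style output for
# disks that are initialized (TYPE auto:cdsdisk, STATUS online) but not
# yet in a disk group (empty DISK and GROUP columns, shown as "-").
# The sample lines are illustrative, not captured from a live system.
free_disks=$(printf '%s\n' \
  "emc0_dd7 auto:cdsdisk - - online" \
  "emc0_dd8 auto:none - - online invalid" |
  awk '$2 == "auto:cdsdisk" && $3 == "-" && $4 == "-" && $5 == "online" {print $1}')
echo "$free_disks"
```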

3 Create a new disk group using the disks you initialized in the previous step
(emc0_dd7). Name the new disk group appdg. Observe the change in the disk
status.

Solution
vxdg init appdg appdg01=emc0_dd7
vxdisk -o alldgs list

The TYPE and STATUS of the disk are the same but the DISK and GROUP
columns now show the new disk media name and the disk group name
respectively.
End of Solution

4 Using the vxassist command, create a new volume of size 1g in appdg. Name the new volume appvol.

Solution
vxassist -g appdg make appvol 1g
End of Solution

5 Create a Veritas file system on the appvol volume, mount the file system to the
/app directory. Create the directory if it does not exist.

Solution
mkfs -t vxfs /dev/vx/rdsk/appdg/appvol
mkdir /app
mount -t vxfs /dev/vx/dsk/appdg/appvol /app
End of Solution
6 Make sure that the file system is mounted at boot time.

Solution

a Edit the fstab file: vi /etc/fstab

b Add the following line in the fstab file, then save and exit with :wq:

/dev/vx/dsk/appdg/appvol /app vxfs rw,largefiles,delaylog 0 2

End of Solution
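As a quick sanity check (not a lab step), the fields of the fstab entry added in this step can be verified with plain shell. The entry is checked as a local string rather than against the live /etc/fstab.

```shell
# Optional sanity check (not a lab step): verify the device, mount point,
# and file system type fields of the fstab entry added in this step. The
# entry is checked as a local string rather than against the live /etc/fstab.
entry="/dev/vx/dsk/appdg/appvol /app vxfs rw,largefiles,delaylog 0 2"
set -- $entry
if [ "$1" = "/dev/vx/dsk/appdg/appvol" ] && [ "$2" = "/app" ] && [ "$3" = "vxfs" ]; then
  echo "fstab entry looks well formed"
else
  echo "fstab entry has unexpected fields"
fi
```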

7 Unmount the /app file system, verify the unmount, and remount using the
mount -a command to mount all file systems in the file system table.

Solution
umount /app
/bin/mount | grep app
/bin/mount -a
/bin/mount | grep app
End of Solution

8 Identify the amount of free space in the appdg disk group. Try to create a
volume in this disk group named largevol with a size slightly larger than the
available free space, for example 2g on standard Symantec classroom systems.
What happens?

Solution
vxdg -g appdg free
The free space is displayed in sectors in the LENGTH column.

Note: You can use vxdg -g appdg -u H free command to display the
free space in the appdg disk group.

vxassist -g appdg make largevol 2g

You should receive an error indicating that Volume Manager cannot allocate
the requested space for the volume, and the volume is not created.
End of Solution
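As an optional illustration (not a lab step), the sector counts in the LENGTH column can be converted to more familiar units by hand: the sectors are 512 bytes each. The sector value below is a made-up example, not output from this lab.

```shell
# Optional illustration (not a lab step): the LENGTH column of vxdg free
# is in 512-byte sectors. This converts a sector count to mebibytes; the
# value below is a made-up example, not output from this lab.
sectors=4194304
mib=$(( sectors * 512 / 1024 / 1024 ))
echo "${sectors} sectors = ${mib} MiB"
```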

9 Choose a second disk (emc0_dd8), initialize it, if necessary, and add it to the
appdg disk group. Observe the change in free space.

Solution
vxdisksetup -i emc0_dd8
vxdg -g appdg adddisk appdg02=emc0_dd8
vxdg -g appdg free
End of Solution

10 Create the same volume, largevol, in the appdg disk group using the same size
as in step 8.

Solution

Note: The 2g volume size is used as an example here. You may need to use a
value more suitable to your lab environment if you are not working in a
standard Symantec classroom.

vxassist -g appdg make largevol 2g


This time the volume creation should complete successfully.
End of Solution

11 Display volume information for the appdg disk group using the vxprint -g appdg -htr command. Can you identify which disks are used for which volumes?

Solution
vxprint -g appdg -htr
End of Solution

12 List the disk groups on your system using the vxdg list command.

Solution
vxdg list
If you have followed the labs so far, you should have one disk group listed: appdg.
End of Solution

13 Display disk property information for each disk in the appdg disk group using
the vxdisk -p list command. From the output record the following
information:

DISK _______________ _______________


SCSI_VERSION _______________ _______________
LUN_SERIAL_NO _______________ _______________
ATYPE _______________ _______________
Solution
vxdisk -p list -g appdg appdg01

DISK _______________ _______________


SCSI_VERSION _______________ _______________
LUN_SERIAL_NO _______________ _______________
ATYPE _______________ _______________

vxdisk -p list -g appdg appdg02

DISK _______________ _______________


SCSI_VERSION _______________ _______________
LUN_SERIAL_NO _______________ _______________
ATYPE _______________ _______________

End of Solution

14 Display the OS native names for all the disks using the vxdisk -e list
command.

Solution
vxdisk -e list
End of Solution

Exercise 2: Removing volumes and disks: CLI

sym1

1 Unmount the /app file system and remove it from the file system table.
Solution
umount /app
vi /etc/fstab
Navigate to the line with the entry corresponding to the /app file system and
type dd to delete the line.
Type :wq to save and close the file.
End of Solution

2 Remove the largevol volume in the appdg disk group. Observe the disk group
configuration information using the vxprint -g appdg -htr command.

Solution
vxassist -g appdg remove volume largevol
vxprint -g appdg -htr
There should be only the appvol volume, and the second disk, appdg02, should be unused.
End of Solution

3 Remove the second disk (appdg02) from the appdg disk group. Observe the
change in its status.

Solution
vxdg -g appdg rmdisk appdg02
vxdisk -o alldgs list
Note that the disk is still in online state; it is initialized.
End of Solution

Exercise 3: Destroying disk data using disk shredding: CLI

sym1

1 Mount the appvol volume to the /app directory. Do not add the entry to the
file system table.
Solution
mount -t vxfs /dev/vx/dsk/appdg/appvol /app
End of Solution

2 Copy some data files into the /app file system. For this test, use the files
located in the /etc/default directory. List the contents of the /app
directory after the copy has completed.

Solution
cp /etc/default/* /app
ls -al /app
End of Solution

3 Unmount the /app file system and verify the unmount.

Solution
umount /app
/bin/mount | grep app

End of Solution

4 Destroy the appdg disk group.

Solution
vxdg destroy appdg
End of Solution

5 Observe the status of the disk devices on the system.

Solution
vxdisk -o alldgs list
End of Solution

6 To prove that the data files have not been destroyed, recreate the appdg disk
group and the appvol volume using the exact same steps as used in Exercise 1 -
steps 3-4. DO NOT create a new file system on the appvol volume.

Solution
vxdg init appdg appdg01=emc0_dd7
vxdisk -o alldgs list
vxassist -g appdg make appvol 1g
End of Solution

7 Mount the appvol volume to the /app directory and list the contents of the
directory. The data files should still exist.

Solution
mount -t vxfs /dev/vx/dsk/appdg/appvol /app
ls -al /app
End of Solution

8 Unmount the /app file system and verify the unmount.

Solution
umount /app

/bin/mount | grep app


End of Solution

9 Destroy the appdg disk group.

Solution
vxdg destroy appdg
End of Solution

10 Use the vxdiskunsetup command with the -o shred option to shred the disk that was used in the appdg disk group for the appvol volume. This command may take a while to complete.

Solution

vxdiskunsetup -o shred emc0_dd7


End of Solution

11 Re-initialize the disk using the vxdisksetup command.

Solution
vxdisksetup -i emc0_dd7
End of Solution

12 Recreate the appdg disk group and the appvol volume using the exact same
steps as used in Exercise 1 - steps 3-4. DO NOT create a new file system on the
appvol volume.

Solution
vxdg init appdg appdg01=emc0_dd7
vxdisk -o alldgs list
vxassist -g appdg make appvol 1g
End of Solution

13 Attempt to mount the appvol volume to the /app directory. What happened?

Solution
mount -t vxfs /dev/vx/dsk/appdg/appvol /app

The volume will not mount because the file system information was shredded.
End of Solution

14 Destroy the appdg disk group.

Solution
vxdg destroy appdg
End of Solution

Exercise 4: Optional lab: Creating disk groups, volumes, and file
systems: VOM

winclient

Note: Check with your instructor to see if you have more time to complete the
optional lab exercises. You do not need to perform the optional lab
exercises unless you have extra time. The optional exercises do not have
any impact on further labs.

1 If you are already logged on to VOM, proceed to Step 2. Otherwise, start the VOM GUI in the IE browser and log on as the root user.
Solution

a https://mgt.example.com:14161

b A warning message may appear to advise that this site has an invalid security certificate. If so, you can add an exception: add the exception, get the certificate, and confirm the security exception.

c Log on using root, with a password of train.

End of Solution

2 View all the disk devices on the system. What is the status of the disks
assigned to you for the labs?

Solution

a Click Home on the upper right corner and then navigate to the Server
perspective. On the navigation tree, expand Data Center > Uncategorized
Hosts. Click the sym1.example.com host link, and choose the Disks
tab.

b View the disks in the table.


Normally the disks should be in the Free (Uninitialized) state unless they have already been initialized for Volume Manager use, in which case the state is Free (Initialized).
End of Solution

3 Select an uninitialized disk (emc0_dd7) and initialize it, if necessary, using VOM. Observe the change in the Status column. What is the status of the disk now?

Solution

a Select the disk to be initialized (emc0_dd7).

b Right-click on the disk and click Initialize.

c Ensure that CDS format is selected. Click OK. Click OK again after the message displays successful completion of the operation.

The state of the disk should display Free (Initialized).


End of Solution

4 Create a new disk group using the disk you initialized in the previous step.
Name the new disk group appdg. Observe the change in the disk status.

Solution

a Select the disk (emc0_dd7) to be used for creating a disk group.

b Right-click the disk and click Create Disk Group.

c In the Create Disk Group screen, type the name of the disk group (appdg). Ensure that Enable Cross-platform Data Sharing (CDS) remains checked and click Next.

d In the Change internal disk name screen, select the Custom Name option. In the New Name field, type appdg01 as the disk media name and click Next.

e On the summary page verify the details and click Finish. Click OK.

The state of the disk should change to In Use. Right-click the disk and select Properties to view the disk media name (VxVM Name) and the Disk Group name. Click OK.
End of Solution

5 Using VOM, create a new 1g volume in the appdg disk group. Name the new
volume appvol. Create a file system on it and make sure that the file system is
mounted at boot time to /app directory.

Solution

a On the navigation tree, expand Data Center > Uncategorized Hosts >
sym1.example.com. Click Disk Groups.

b Right-click on the appdg disk group and select Create Volume.

c Let the Symantec Storage Foundation (VxVM) decide what disks to use
from the disk group and click Next to continue.

d On the Volume Attributes screen, enter the volume name appvol and specify the volume size as 1 GB. Leave the other options at their default values and click Next to continue.

e On the File System Options screen, select Create file system to create a
VxFS file system. Select Mount options and enter the mount point /app.
Ensure that Add to file system table is checked. Click Next.

f Verify the summary information, and click Finish and OK.

End of Solution

6 Check if the file system is mounted and verify that there is an entry for this file system in the file system table.

Solution

a On the navigation tree, expand Data Center > Uncategorized Hosts >
sym1.example.com > Disk Groups. Select the appdg disk group. The
disk group details are displayed.

b Click the Disks tab to view the details of disks in the disk group.

c Click the Volumes tab to view the details of the volumes in the disk group.
The /app file system should be listed here. Note the Mount Point
column.
d You can also use the command line on sym1 to verify the changes as follows:
mount
cat /etc/fstab
The /app file system should show as mounted and there should be a line
in the file system table to ensure that it is mounted at boot time.
End of Solution

7 Go back to the Disks tab and view the properties of the disk in the appdg disk
group and note the Total Size and the Free Size fields.

Total Size ___________________________


Free Size ____________________________
Solution
Right-click the emc0_dd7 disk appearing under Disks tab and select
Properties.
End of Solution

8 Try to create a second volume, largevol, in the appdg disk group and specify a
size slightly larger than the unallocated space on the existing disk in the disk
group, for example 2g in the standard Symantec classroom systems. Do not
create a file system on the volume. What happens?

Solution

a On the navigation tree, expand Data Center > Uncategorized Hosts > sym1.example.com. Click Disk Groups.

b Right-click on the appdg disk group and select Create Volume.

c Let the Symantec Storage Foundation (VxVM) decide what disks to use
from the disk group and click Next to continue.

d On the Volume Attributes screen, enter the volume name largevol and specify the volume size as 2 GB. Leave the other options at their default values and click Next to continue.

You should receive an error indicating that the specified volume size is more than the available size. Click Cancel to exit the page.
End of Solution

9 Add a second disk (emc0_dd8) to the appdg disk group.

Solution

a On the navigation tree, expand Data Center > Uncategorized Hosts >
sym1.example.com. Click Disk Groups.

b Right-click on the appdg disk group and select Add Disk.

c Choose the emc0_dd8 disk and click Next. You may need to refresh the page if the disks don't display promptly.

d In the Change internal disk name screen, select the Custom Name option. In the New Name field, type appdg02 as the disk media name and click Next.

e On the summary page verify the details and click Finish. Click OK.

End of Solution

10 Create the same volume, largevol, in the appdg disk group using the same size
as in step 8. Do not create a file system.

Solution

a On the navigation tree, expand Data Center > Uncategorized Hosts >
sym1.example.com. Click Disk Groups.

b Right-click on the appdg disk group and select Create Volume.

c Let the Symantec Storage Foundation (VxVM) decide what disks to use
from the disk group and click Next to continue.

d On the Volume Attributes screen, enter the volume name largevol and specify the volume size as 2 GB. Leave the other options at their default values and click Next to continue.

e On the File System Options screen, select Do not create a file system.
Click Next.
f On the summary page verify the details and click Finish. Click OK.

This time the volume creation should complete successfully.


End of Solution

11 Observe the volumes displayed in the Volumes in Disk Group table. Can you
tell which volume has a mounted file system?

Solution
Double-click on the appdg disk group. View the details of the volumes in the
disk group from the Volumes tab. You should notice that the FS Type and
Mount Point columns have file system information for appvol and not for
largevol.
End of Solution

12 Create a VxFS file system on largevol and mount it to the /large directory. Ensure that the file system is not mounted at boot time. Check if the /large file system is currently mounted and verify that it has not been added to the file system table.

Solution

a On the navigation tree, expand Data Center > Uncategorized Hosts >
sym1.example.com > Disk Groups.

b Select the appdg disk group and click the Volumes tab.

c Right-click the largevol volume and select File System > Create File
System.

d Verify the disk group and volume names and that the file system type is vxfs. Ensure that Mount Options is selected.

e Enter the mount point as /large. Ensure that the Add to file system
table option is not selected, and click Next.

f View the summary and click Finish, and OK.

g View the details of the volumes in the appdg disk group from the Volumes
tab. You should notice that the FS Type and Mount Point columns have file
system information now for largevol with a mount point of /large.

The /large file system should show as mounted but there should be no change in the file system table.

h You can also use the command line on sym1 to verify the changes as
follows:

mount
cat /etc/fstab
End of Solution
Exercise 5: Optional lab: Removing volumes, disks, and disk
groups: VOM

winclient

Note: Check with your instructor to see if you have more time to complete the
optional lab exercises. You do not need to perform the optional lab
exercises unless you have extra time. The optional exercises do not have
any impact on further labs.

1 Unmount both /app and /large file systems using VOM. Accept to remove
the file systems from the file system table if prompted. Check if the file
systems are unmounted and verify that any corresponding entries have been
removed from the file system table.
Solution

a On the navigation tree, expand Data Center > Uncategorized Hosts >
sym1.example.com > Disk Groups.

b Select the appdg disk group and click the Volumes tab.

c Right-click the appvol volume and select File System > Unmount.

d Ensure that Remove from fstab? option is selected. Click OK.

e Follow similar steps to unmount the file system on the largevol


volume.

The volumes should no longer have mounted file systems.



End of Solution

2 Remove the largevol volume in the appdg disk group.

Solution

a Right-click on the largevol volume and select Delete.

b Click OK and then OK.

c If the delete operation does not complete successfully, repeat the steps by
selecting the Force (delete if enabled) option on the Delete Volume
screen.

End of Solution

3 View the Disks tab for appdg disk group. Can you identify which disk is
empty?

Solution
Click on the Disks tab of appdg disk group. Viewing the properties should
show that the second disk in the disk group emc0_dd8 (appdg02) is empty.
End of Solution

4 Remove the disk you identified as empty from the appdg disk group.

Solution
Right-click on the empty disk emc0_dd8 and select the Remove From Disk
Group option. Click OK, and OK.
End of Solution

5 Observe all the disks on the system. What is the status of the disk you removed
from the disk group?

Solution
On the navigation tree, expand Data Center > Uncategorized Hosts. Select
sym1.example.com and click on the Disks tab to view all disks.
The disk removed in step 4 should be in Free (Initialized) state.

End of Solution

6 Destroy the appdg disk group.

Solution

a On the navigation tree, expand Data Center > Uncategorized Hosts >
sym1.example.com > Disk Groups.

b Right-click appdg disk group and select Destroy option.


c Read the warning message and click OK, and OK.

End of Solution

7 Verify that the appdg disk group is no longer present.

Solution
On the navigation tree, expand Data Center > Uncategorized Hosts >
sym1.example.com > Disk Groups. The appdg disk group should not be
displayed.
End of Solution

End of lab

Lab 4: Working with Volumes with Different Layouts


In this lab, you create simple concatenated volumes, striped volumes, and mirrored
volumes. Optionally, you practice creating volumes with user defaults using CLI.
This lab contains the following exercises:
Creating volumes with different layouts: CLI
Optional lab: Creating volumes with user defaults: CLI

Prerequisite setup
To perform this lab, you need a lab system with Storage Foundation pre-installed,
configured and licensed. In addition to this, you also need four empty and unused
external disks to be used during the labs.

Lab information

In preparation for this lab, you need the following information about your lab
environment.
Object Value
root password train
Host names of lab systems sym1
Shared data disks: emc0_dd7 - emc0_dd12
3pardata0_49 - 3pardata0_60

Exercise 1: Creating volumes with different layouts: CLI

sym1

1 Add four initialized disks (emc0_dd7 - emc0_dd10) to a disk group called appdg. Verify your action using vxdisk -o alldgs list.

a If you have completed the Creating a Volume and File System lab (Lab 3), you should already have some initialized disks. You will need four disks for this lab. If four disks are not initialized, then initialize the needed disks from the same enclosure for use in Volume Manager (all disks on the EMC array).

Solution
vxdisksetup -i emc0_dd9
vxdisksetup -i emc0_dd10

Perform the above command for any disks that have not been initialized for
Volume Manager use and that will be used in this lab.
End of Solution

b Create a new disk group and add disks:

Solution
vxdg init appdg appdg01=emc0_dd7 \
appdg02=emc0_dd8 appdg03=emc0_dd9 \
appdg04=emc0_dd10

Alternatively, you can also create the disk group using a single disk device
and then add each additional disk as follows:
vxdg -g appdg adddisk appdg##=accessname


where accessname is the enclosure-based name for the disk as displayed
in the DEVICE column of the output of the vxdisk list command in
SF 6.x.
End of Solution
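As an optional illustration (not a lab step), the appdg##=accessname assignments used above can be generated mechanically. This sketch only formats the name pairs for the disks used in this lab; it is not a Volume Manager command.

```shell
# Optional illustration (not a lab step): print the appdg##=accessname
# disk media name assignments used in this lab. This only formats the
# pairs; it is not a Volume Manager command.
i=0
for dev in emc0_dd7 emc0_dd8 emc0_dd9 emc0_dd10; do
  i=$((i + 1))
  pair=$(printf 'appdg%02d=%s' "$i" "$dev")
  echo "$pair"
done
```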

2 Create a 50-MB concatenated volume in the appdg disk group called appvol using one drive.

Solution
vxassist -g appdg make appvol 50m
End of Solution

3 Display the volume layout. What names have been assigned to the plex and
subdisks?

Solution
To view the assigned names, view the volume using:
vxprint -g appdg -htr | more
End of Solution

4 Remove the volume.

Solution
vxassist -g appdg remove volume appvol
End of Solution

5 Create a 50-MB striped volume on two disks in appdg and specify which two
disks to use in creating the volume. Name the volume stripevol.

Solution
vxassist -g appdg make stripevol 50m layout=stripe \
appdg01 appdg02

End of Solution

What names have been assigned to the plex and subdisks?


Solution
To view the assigned names, view the volume using:
vxprint -g appdg -htr | more
End of Solution

6 Create a 20-MB, two-column striped volume with a mirror in appdg. Set the
stripe unit size to 256K. Name the volume strmirvol.

Solution
vxassist -g appdg make strmirvol 20m \
layout=mirror-stripe ncol=2 stripeunit=256k
End of Solution

What do you notice about the plexes?


Solution
View the volume using vxprint -g appdg -htr | more.
Notice that you now have a second plex.
End of Solution

7 Create a 20-MB, two-column striped volume with a mirror. Set the stripe unit
size to 128K. Specify at least one disk that should not be used. Name the
volume 2colstrvol.

Solution
vxassist -g appdg make 2colstrvol 20m \
layout=mirror-stripe ncol=2 stripeunit=128k \
\!appdg03

Note: As you are using bash as your shell environment, you must use the
escape character before the exclamation mark; for example
\!appdg03.

End of Solution

Was the volume created?


Solution
This operation should fail because there are not enough disks available in the
disk group. A two-column striped mirror requires at least four disks.
VxVM vxassist ERROR V-5-1-15315 Cannot allocate space
for 40960 block volume: Not enough HDD devices
available for allocation.
End of Solution

8 Create a 20-MB 3-column striped volume with a mirror. Specify three disks to
be used during volume creation. Name the volume 3colstrvol.

Solution
vxassist -g appdg make 3colstrvol 20m \
layout=mirror-stripe ncol=3 appdg01 appdg02 \
appdg03
End of Solution

Was the volume created?


Solution
Again, this operation should fail because there are not enough disks allocated
on the command line. At least six disks are required for this type of volume
configuration.
VxVM vxassist ERROR V-5-1-15315 Cannot allocate space
for 40960 block volume: Not enough HDD devices
available for allocation.
End of Solution
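The failures in steps 7 and 8 follow from the same arithmetic: a striped volume with a mirror needs ncol disks per plex, and vxassist defaults to two plexes. A quick sanity check in plain shell arithmetic (a sketch, not a VxVM command) is:

```shell
# Disks required for a mirror-stripe layout: columns per plex
# multiplied by the number of plexes (vxassist mirrors default to 2).
ncol=2; nmirror=2
echo $((ncol * nmirror))    # 4 -- more disks than the 3 left after excluding appdg03

ncol=3
echo $((ncol * nmirror))    # 6 -- more disks than the 3 named on the command line
```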

9 Create the same volume specified in the previous step using the same three
disks, but without the mirror. However, this time first determine the
maximum size that the volume can be, based on the remaining free space.
Then create the volume with the maximum possible size for this layout.

a Determine the maximum size that the volume could be.

Solution
vxassist -g appdg maxsize layout=stripe ncol=3 \
appdg01 appdg02 appdg03
Maximum volume size: 12128256 (5922Mb)

Note: The example solution provided here shows a maximum volume size that
will differ from what is seen on your system. It is for example
purposes only.

End of Solution
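vxassist reports the maximum size in 512-byte sectors followed by the megabyte equivalent in parentheses. As a sketch of the conversion (using the example figure above, not what your system will report):

```shell
# Convert a sector count (512-byte sectors) to megabytes:
# divide by 2 for KB, then by 1024 for MB.
sectors=12128256    # example value from the solution output above
echo "$((sectors / 2 / 1024))Mb"    # prints 5922Mb, matching the example
```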

b Create a new volume with the maximum possible volume size for the
layout specified in the previous step.

Solution
vxassist -g appdg make 3colstrvol maxsize \
layout=stripe ncol=3 appdg01 appdg02 appdg03
End of Solution

What names have been assigned to the plex and subdisks?


Solution
To view the assigned names, view the volume using:
vxprint -g appdg -htr | more
End of Solution

10 Remove the volumes created in this exercise.

Solution
vxassist -g appdg remove volume stripevol
vxassist -g appdg remove volume strmirvol
vxassist -g appdg remove volume 3colstrvol
End of Solution

Note: Only perform the remaining step if you do not intend to complete the
optional exercise. Otherwise, skip the next step.

11 Remove the disk group that was used in this exercise.



Solution
vxdg destroy appdg
End of Solution

Exercise 2: Optional lab: Creating volumes with user defaults: CLI

Note: Check with your instructor to see if you have more time to complete the
optional lab exercises. You do not need to perform the optional lab
exercises unless you have extra time. The optional exercises do not have
any impact on further labs.

This optional guided practice illustrates how to use the following files to
create volumes with defaults specified by the user:
/etc/default/vxassist
/etc/default/alt_vxassist

sym1

1 Navigate to the /etc/default directory.


Solution
cd /etc/default
End of Solution

2 Create two files in /etc/default:

a Using the vi editor, create a file called vxassist that includes the
following:

nmirror=3

Solution
vi vxassist
# When mirroring create three mirrors
nmirror=3
End of Solution

b Using the vi editor, create a file called alt_vxassist that includes the
following:

stripeunit=256k
Solution
vi alt_vxassist
# use 256K as the default stripe unit size for
# regular volumes
stripeunit=256k
End of Solution

3 Use these files when creating the following volumes:

a Create a 100-MB volume called mirrorvol using layout=mirror.

Solution
vxassist -g appdg make mirrorvol 100m \
layout=mirror
End of Solution

b Create a 100-MB, two-column striped volume called 2colstrvol using
-d /etc/default/alt_vxassist so that Volume Manager uses the
specified default file.

Note: The -d option of the vxassist command specifies the file
containing custom values for specific attributes related to volume
creation and space allocation. If the -d option is not specified,
the command defaults to /etc/default/vxassist.

Solution
vxassist -g appdg -d /etc/default/alt_vxassist \
make 2colstrvol 100m layout=stripe

End of Solution

4 View the layout of these volumes using vxprint -g appdg -htr. What
do you notice?

Solution
The first volume should show three plexes rather than the standard two.

The second volume should show a stripe size of 256K instead of the
standard 64K. Note that 256K is displayed as 512 sectors on the Linux
platform.
End of Solution
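The 512-sector figure is simply the 256K stripe unit expressed in the 512-byte sectors that vxprint uses on Linux, which you can confirm with shell arithmetic:

```shell
# 256 KB stripe unit expressed in 512-byte sectors.
echo $((256 * 1024 / 512))    # prints 512
```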

5 Remove any vxassist default files that you created in this optional lab
section. The presence of these files can impact subsequent labs where default
behavior is assumed.

Solution
rm -f /etc/default/vxassist
rm -f /etc/default/alt_vxassist
End of Solution

6 Remove all of the volumes in the appdg disk group.

Solution
vxassist -g appdg remove volume mirrorvol
vxassist -g appdg remove volume 2colstrvol
End of Solution

End of lab

Lab 5: Making Configuration Changes


This lab provides practice in making configuration changes. In this lab, you add
mirrors and logs to existing volumes. You also change the volume read policy,
resize volumes, rename disk groups, and move data between systems.
This lab contains the following exercises:
Administering mirrored volumes
Resizing a volume and file system
Renaming a disk group
Moving data between systems
Optional lab: Resizing a file system only

Prerequisite setup

To perform this lab, you need two lab systems with Storage Foundation pre-
installed, configured and licensed. In addition to this, you also need four external
shared disks to be used during the labs.
At the beginning of this lab, you should have a disk group called appdg that has
four external disks and no volumes in it.

Lab information
In preparation for this lab, you will need the following information about your lab
environment.
Object Value
root password train
Host name of the main lab system sym1
Host name of the system sharing disks sym2
Shared data disks: emc0_dd7 - emc0_dd12
3pardata0_49 - 3pardata0_60

The exercises for this lab start on the next page.


Exercise 1: Administering mirrored volumes

sym1

Note: In order to perform the tasks in this exercise, you should have at least four
disks in the disk group that you are using.

1 Ensure that you have a disk group called appdg with four disks in it. If not,
create the disk group using four disks.

Note: If you have completed the previous lab steps you should already have
the appdg disk group with four disks and no volumes.

Solution
vxdg init appdg appdg01=emc0_dd7 \
appdg02=emc0_dd8 appdg03=emc0_dd9 \
appdg04=emc0_dd10

vxdisk -o alldgs list


End of Solution

2 Create a 50-MB, two-column striped volume called appvol in appdg.

Solution
vxassist -g appdg make appvol 50m layout=stripe \
ncol=2
End of Solution

3 Display the volume layout. How are the disks allocated in the volume? Note
the disk devices used for the volume.

Solution
vxprint -g appdg -htr
End of Solution

Notice which two disks are allocated to the first plex and record your
observation.

4 Add a mirror to appvol, and display the volume layout. What is the layout of
the second plex? Which disks are used for the second plex?

Solution
vxassist -g appdg mirror appvol
vxprint -g appdg -htr

Note the disk devices used for the second plex. Note that the default layout
used for the second plex is the same as the first plex.
End of Solution

5 Add a dirty region log to appvol and specify the disk to use for the DRL.
Display the volume layout.

Solution
vxassist -g appdg addlog appvol logtype=drl \
appdg01
vxprint -g appdg -htr
End of Solution

6 Add a second dirty region log to appvol and specify another disk to use for the
DRL. Display the volume layout.

Solution
vxassist -g appdg addlog appvol logtype=drl \
appdg02

vxprint -g appdg -htr


End of Solution

7 Remove the first dirty region log that you added to the volume. Display the
volume layout. Can you control which log was removed?

Solution
vxassist -g appdg remove log appvol \!appdg01

Note: As you are using bash as your shell environment, you must use the
escape character before the exclamation mark; for example
\!appdg01.
vxprint -g appdg -htr
End of Solution

8 Find out what the current volume read policy for appvol is. Change the volume
read policy to round robin, and display the volume layout.

Solution
vxprint -g appdg -htr

You should observe that the read policy shows as SELECT, which means the
policy is selected automatically based on the plex layouts.

vxvol -g appdg rdpol round appvol


vxprint -g appdg -htr

The value of the attribute will change to ROUND.


End of Solution

9 Remove the original mirror (appvol-01) from appvol, and display the volume
layout.

Solution
vxassist -g appdg remove mirror appvol \
\!disk_used_by_original_mirror
vxprint -g appdg -htr

Note: As you are using bash as your shell environment, you must use the
escape character before the exclamation mark; for example
\!appdg01. The appdg01 disk was used by the original plex.

Note that the DRL log will also be removed automatically with this command
because the volume is no longer mirrored.
End of Solution

10 Remove appvol.

Solution
vxassist -g appdg remove volume appvol
End of Solution
Exercise 2: Resizing a volume and file system

sym1

1 Create a 20-MB concatenated mirrored volume called appvol in appdg. Create
a Veritas file system on the volume and mount it to /app. Make sure that the
file system is not added to the file system table.
Solution
vxassist -g appdg make appvol 20m layout=mirror
mkfs -t vxfs /dev/vx/rdsk/appdg/appvol
mkdir /app (if necessary)
mount -t vxfs /dev/vx/dsk/appdg/appvol /app
End of Solution

2 View the layout of the volume and display the size of the file system.

Solution
vxprint -g appdg -htr
df -k /app
End of Solution

3 Add data to the volume by creating a file in the file system and verify that the
file has been added.

Solution

echo "hello app" > /app/hello


cat /app/hello
End of Solution

4 Expand the file system and volume to 100 MB. Observe the volume layout to
see the change in size. Display file system size.

Solution
vxresize -g appdg appvol 100m
vxprint -g appdg -htr
df -k /app
End of Solution
Exercise 3: Renaming a disk group

sym1

1 Create a 100-MB concatenated volume called webvol in appdg. Create a
Veritas file system on the volume and mount it to /web. Make sure that the
file system is not added to the file system table.
Solution
vxassist -g appdg make webvol 100m layout=concat
mkfs -t vxfs /dev/vx/rdsk/appdg/webvol
mkdir /web (if necessary)
mount -t vxfs /dev/vx/dsk/appdg/webvol /web
End of Solution

2 Add data to the webvol volume by copying the /etc/group file to the
/web file system. Verify that the file has been added.

Solution
cp /etc/group /web
ls -l /web
End of Solution

3 Try to deport and rename the appdg disk group to webdg while the /app and
/web file systems are still mounted. Can you do it?

Solution
vxdg -n webdg deport appdg
You receive an error message indicating that the volumes in the disk group are
in use.
End of Solution

4 Observe the contents of the /dev/vx/rdsk and /dev/vx/dsk directories
and their subdirectories. What do you see?

Solution
ls -lR /dev/vx/rdsk

This directory contains a subdirectory for each imported disk group, which
contains the character devices for the volumes in that disk group.

ls -lR /dev/vx/dsk

This directory contains a subdirectory for each imported disk group, which
contains the block devices for the volumes in that disk group.
End of Solution

5 Unmount all the mounted file systems in the appdg disk group.

Solution
umount /app
umount /web
End of Solution

6 Deport and rename the appdg disk group to webdg. Then import the newly
renamed webdg disk group.

Solution
vxdg -n webdg deport appdg
vxdg import webdg
End of Solution

7 Observe the contents of the /dev/vx/rdsk and /dev/vx/dsk directories
and their subdirectories. What has changed?

Solution
ls -lR /dev/vx/rdsk
ls -lR /dev/vx/dsk

The device subdirectories are rebuilt with the new name of the disk group.
End of Solution

8 Observe the disk media names. Is there any change?

Solution
vxdisk -o alldgs list
vxprint -g webdg -htr

There should be no change in disk media names.


End of Solution

9 Mount the /app and /web file systems, and observe their contents.

Solution
mount -t vxfs /dev/vx/dsk/webdg/appvol /app
mount -t vxfs /dev/vx/dsk/webdg/webvol /web

ls -l /app
ls -l /web
End of Solution
Exercise 4: Moving data between systems

sym1

1 Copy new data to the /app and /web file systems. For example, copy the file
/etc/group to /app and the file /etc/hosts to /web.
Solution
cp /etc/group /app
cp /etc/hosts /web
End of Solution

2 View all the disk devices on the system.

Solution
vxdisk -o alldgs list
End of Solution

3 Unmount all file systems in the webdg disk group and deport the disk group.
Do not assign it to a new host. View all the disk devices on the system.

Solution
umount /app
umount /web
vxdg deport webdg
vxdisk -o alldgs list

End of Solution

sym2
4 Import the webdg disk group on the other system (sym2), ensure that the
volumes in the imported disk group are all started, and view all the disk
devices on the system.

Solution
vxdg import webdg
vxprint -g webdg -htr
vxdisk -o alldgs list
End of Solution

5 Mount the /app and /web file systems. Note that you will need to create the
mount directories on the other system before mounting the file systems.
Observe the data in the file systems.

Solution
mkdir /app
mkdir /web
mount -t vxfs /dev/vx/dsk/webdg/appvol /app
mount -t vxfs /dev/vx/dsk/webdg/webvol /web

ls -l /app
ls -l /web

The data should be the same as it was on the first system.


End of Solution

6 Unmount the file systems.

Solution
umount /app
umount /web
End of Solution

7 Deport webdg and assign the original machine name (sym1) as the new host.

Solution
vxdg -h sym1 deport webdg
End of Solution

sym1

8 Import the disk group and change its name back to appdg. View all the disk
devices on the system.

Note: Because the hostname of the sym1 system is assigned to the disk group
during the deport operation, the disk group can be automatically
imported if you execute the vxdctl enable command on your
system.

Solution
vxdg -n appdg import webdg
vxdisk -o alldgs list
End of Solution

9 Deport the disk group appdg by assigning the ownership to a system called
anotherhost. View all the disk devices on the system. Why would you do this?

Solution
vxdg -h anotherhost deport appdg

vxdisk -o alldgs list

You would do this to ensure that the disks are not imported accidentally by any
system other than the one whose name you assigned to the disks.
End of Solution

10 Display detailed information about one of the disks in the disk group
(emc0_dd7) using the vxdisk list command. Note the hostid field in
the output.

Solution

vxdisk list emc0_dd7


End of Solution

11 Import appdg. Were you successful?

Solution
vxdg import appdg

This operation should fail, because appdg belongs to another host.


End of Solution

12 Now import appdg and overwrite the disk group lock. What did you have to do
to import it and why?

Solution
vxdg -C import appdg

You had to forcefully clear the host lock using the -C option because the disks
in the disk group were locked to anotherhost.
End of Solution

13 Display detailed information about the same disk in the disk group as you did
in step 10. Note the change in the hostid field in the output.

Solution
vxdisk list emc0_dd7
End of Solution

14 Remove the appvol and webvol volumes.

Solution
vxassist -g appdg remove volume appvol
vxassist -g appdg remove volume webvol
End of Solution
Exercise 5: Optional lab: Resizing a file system only

sym1

Note: Check with your instructor to see if you have more time to complete the
optional lab exercises. You do not need to perform the optional lab
exercises unless you have extra time. The optional exercises do not have
any impact on further labs.

1 Create a 50-MB concatenated volume named appvol in the appdg disk group.
Solution
vxassist -g appdg make appvol 50m
End of Solution

2 Create a Veritas file system on the volume by using the mkfs command.
Specify the file system size as 40 MB.

Solution
mkfs -t vxfs /dev/vx/rdsk/appdg/appvol 40m

End of Solution

3 Create a mount point /app on which to mount the file system, if it does not
already exist.

Solution
mkdir /app (if necessary)
End of Solution

4 Mount the newly created file system on the mount point /app.

Solution
mount -t vxfs /dev/vx/dsk/appdg/appvol /app
End of Solution

5 Verify disk space using the df command. Observe that the total size of the file
system is smaller than the size of the volume.

Solution
df -k
End of Solution

6 Expand the file system to the full size of the underlying volume using the
fsadm -b newsize command.

Note: On Linux there is more than one fsadm command; you must use the
command located in /opt/VRTS/bin.

Solution
/opt/VRTS/bin/fsadm -b 50m -r \
/dev/vx/rdsk/appdg/appvol /app
End of Solution

7 Verify disk space using the df command.

Solution

df -k
End of Solution

8 Make a file on the file system mounted at /app, so that the free space is less
than 50 percent of the total file system size.

Solution
dd if=/dev/zero of=/app/25_mb bs=1024k count=25
End of Solution

9 Shrink the file system to 50 percent of its current size. What happens?

Solution
/opt/VRTS/bin/fsadm -b 25m -r \
/dev/vx/rdsk/appdg/appvol /app

The command fails. You cannot shrink the file system because blocks are
currently in use.
End of Solution

10 Unmount the /app file system and remove the appvol volume in appdg.

Solution
umount /app
vxassist -g appdg remove volume appvol
End of Solution

End of lab

Lab 6: Administering File Systems


In this lab, you defragment a fragmented file system. You also observe the
time-to-creation improvement when using the SmartMove feature with mirroring
and try the thin reclamation feature.
This lab contains the following exercises:
Preparation for defragmenting a Veritas File System lab
Defragmenting a Veritas File System
SmartMove
Thin reclamation

Prerequisite setup
To perform this lab, you need a lab system with Storage Foundation pre-installed,
configured and licensed. In addition to this, you also need four external disks and
the third (sdc) internal disk to be used during the labs. If you do not have a third
(sdc) internal disk or if you cannot use the third (sdc) internal disk, you need five
external disks to complete the labs.
At the beginning of this lab, you should have a disk group called appdg that has
four external disks and no volumes in it. The third (sdc) internal disk should be
empty and unused.

Lab information
In preparation for this lab, you need the following information about your lab
environment.
Object Value
Host name of the lab system sym1
Shared data disks: emc0_dd7 - emc0_dd12
3pardata0_49 - 3pardata0_60
3rd internal disk: sdc
Location of Lab Scripts (if any): /student/labs/sf/sf61

The exercises for this lab start on the next page.


Exercise 1: Preparation for defragmenting a Veritas File System lab

sym1

1 Identify the device name for the third (sdc) internal disk on your lab system.
Solution
df -k
vxdisk -o alldgs list

The third internal disk should be sdc.


End of Solution

2 Initialize the sdc disk using a non-cds disk format.

Solution
vxdisksetup -i sdc format=sliced

Note: If an error occurs when initializing the sdc disk, then use -f option to
initialize the disk.
For example: vxdisksetup -f -i sdc format=sliced

End of Solution

3 Create a non-cds disk group called testdg using the internal disk you
initialized in step 2.

Solution
vxdg init testdg testdg01=sdc cds=off
End of Solution

4 In the testdg disk group, create a 1-GB concatenated volume called testvol,
initializing the volume space with zeros using the init=zero option to
vxassist.

Solution
vxassist -g testdg make testvol 1g init=zero
End of Solution

5 Create a VxFS file system on testvol.

Solution
mkfs -t vxfs /dev/vx/rdsk/testdg/testvol
End of Solution

6 Change into the /student/labs/sf/sf61 directory and run the
extentfrag_vxfs.pl script.

This script restores a fragmented file system onto the volume and performs a
file system check so that the volume can be mounted. Whatever files are in the
existing file system will be lost.
Solution
cd /student/labs/sf/sf61
./extentfrag_vxfs.pl
End of Solution

7 Mount the file system on /test. Note that you may need to perform a file
system check before mounting the file system.

Solution
fsck -t vxfs /dev/vx/rdsk/testdg/testvol (if necessary)
mkdir /test
mount -t vxfs /dev/vx/dsk/testdg/testvol /test
End of Solution

Exercise 2: Defragmenting a Veritas File System

sym1

The purpose of this section is to examine the structure of a fragmented and an
unfragmented file system and compare the file system's throughput in each case.
The general steps in this exercise are:
Make and mount a file system.
Examine the structure of the new file system for extents allocated.
Then examine a fragmented file system and report the degree of fragmentation
in the file system.
De-fragment the file system, reporting the degree of fragmentation.
Compare the total throughput before and after the defragmentation process.

1 In the appdg disk group create a 1-GB concatenated volume called appvol.
Solution
vxassist -g appdg make appvol 1g
End of Solution

2 Create a VxFS file system on appvol and mount it on /app.

Solution
mkfs -t vxfs /dev/vx/rdsk/appdg/appvol
mount -t vxfs /dev/vx/dsk/appdg/appvol /app
End of Solution

3 Run a fragmentation report on /app to analyze directory and extent
fragmentation. Is a newly created, empty file system considered fragmented?
In the report, what percentages indicate a file system's fragmentation?

Note: On Linux there is more than one fsadm command; you must use the
command located in /opt/VRTS/bin.

Solution
/opt/VRTS/bin/fsadm -D -E /app

Directory Fragmentation Report


Dirs Total Immed Immeds Dirs to Blocks to
Searched Blocks Dirs to Add Reduce Reduce
total 2 0 2 0 0 0

File System Extent Fragmentation Report

Free Space Fragmentation Index : 5


File Fragmentation Index : 0

# Files Fragmented by Fragmentation Index


0 1-25 26-50 51-75 76-100
2 0 0 0 0

Total Average Average Total


Files File Blks # Extents Free Blks
0 0 0 1030827
blocks used for indirects: 0
% Free blocks in extents smaller than 64 blks: 0.01
% Free blocks in extents smaller than 8 blks: 0.00
% blks allocated to extents 64 blks or larger: 0.00
Free Extents By Size
1: 1 2: 1 4: 2
8: 2 16: 1 32: 2
64: 1 128: 2 256: 1
512: 2 1024: 1 2048: 0
4096: 1 8192: 1 16384: 0
32768: 1 65536: 1 131072: 1
262144: 1 524288: 1 1048576: 0
2097152: 0 4194304: 0 8388608: 0
16777216: 0 33554432: 0 67108864: 0
134217728: 0 268435456: 0 536870912: 0
1073741824: 0 2147483648: 0
A newly created file system with no files or directories cannot be fragmented.

The following table displays the percentages you should be observing in the
output of the fragmentation report to determine if a file system with files and
directories is fragmented.

Percentage                                             Unfragmented  Badly fragmented
% of free blocks in extents smaller than 64 blocks     < 5%          > 50%
% of free blocks in extents smaller than 8 blocks      < 1%          > 5%
% of blocks allocated to extents 64 blocks or larger   > 5%          < 5%

End of Solution
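As an illustrative sketch only (the function name and the integer-truncated percentages are inventions for this example, not part of the lab), the thresholds in the table can be expressed as a small shell helper:

```shell
# Hypothetical helper applying the fsadm thresholds from the table above.
# Arguments (integer percentages):
#   $1  % free blocks in extents smaller than 64 blocks
#   $2  % free blocks in extents smaller than 8 blocks
#   $3  % blocks allocated to extents of 64 blocks or larger
classify_frag() {
  if [ "$1" -gt 50 ] || [ "$2" -gt 5 ] || [ "$3" -lt 5 ]; then
    echo "badly fragmented"
  elif [ "$1" -lt 5 ] && [ "$2" -lt 1 ] && [ "$3" -gt 5 ]; then
    echo "unfragmented"
  else
    echo "somewhere in between"
  fi
}
classify_frag 30 17 19    # prints "badly fragmented"
```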

4 What is a fragmented file system?

Solution
A fragmented file system is a file system where the free space and/or file data
is in relatively small extents scattered throughout different allocation units
within the file system.
End of Solution

5 If you were shown the following directory fragmentation report about a file
system, what would you conclude?

Directory Fragmentation Report

           Dirs    Total    Immed  Immeds  Dirs to  Blocks to
       Searched   Blocks     Dirs  to Add   Reduce     Reduce
total    199185    85482   115118    5407     5473       5655

Solution
A high total in the Dirs to Reduce column indicates that the directories are not
optimized. This file system's directories should be optimized by directory
defragmentation.
End of Solution

6 Unmount /app and remove appvol in the appdg disk group.

Solution
umount /app
vxassist -g appdg remove volume appvol
End of Solution

7 Run a fragmentation report on /test to analyze directory and extent
fragmentation. Is /test fragmented? Why or why not? What should be done?

Solution
/opt/VRTS/bin/fsadm -D -E /test

Directory Fragmentation Report

           Dirs    Total    Immed  Immeds  Dirs to  Blocks to
       Searched   Blocks     Dirs  to Add   Reduce     Reduce
total         2        2        1       0        0          0

File System Extent Fragmentation Report

  Free Space Fragmentation Index : 38
  File Fragmentation Index       : 48

  # Files Fragmented by Fragmentation Index
        0   1-25  26-50  51-75  76-100
       21      0      0     35       0

    Total    Average    Average      Total
    Files  File Blks  # Extents  Free Blks
       56       4457        778     830117
  blocks used for indirects: 560
  % Free blocks in extents smaller than 64 blks: 30.22
  % Free blocks in extents smaller than 8 blks: 17.08
  % blks allocated to extents 64 blks or larger: 19.94
  Free Extents By Size
           1:  16949          2:  11484          4:  25466
           8:  10812         16:   1405         32:      3
          64:      1        128:      1        256:      0
         512:      1       1024:      1       2048:      0
        4096:      1       8192:      0      16384:      1
       32768:      1      65536:      0     131072:      0
      262144:      0     524288:      1    1048576:      0
     2097152:      0    4194304:      0    8388608:      0
    16777216:      0   33554432:      0   67108864:      0
   134217728:      0  268435456:      0  536870912:      0
  1073741824:      0 2147483648:      0

The Dirs to Reduce column is 0; therefore, the directories do not need to be
optimized. But the extents need to be optimized, because:
% Free blocks in extents smaller than 64 blks: 30.22 (<50%) - OK
% Free blocks in extents smaller than 8 blks: 17.08 (>5%) - Not OK
% blks allocated to extents 64 blks or larger: 19.94 (>5%) - OK
Therefore, the file system's extents should be defragmented.
End of Solution
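The three threshold checks applied in this solution can be scripted. The sketch below hard-codes the values from the report above and the "badly fragmented" thresholds from the earlier table; substitute the numbers from your own fsadm -D -E output.

```shell
# Values taken from the report above (hypothetical for any other run).
small64=30.22   # % Free blocks in extents smaller than 64 blks
small8=17.08    # % Free blocks in extents smaller than 8 blks
large64=19.94   # % blks allocated to extents 64 blks or larger

need_defrag=no
# Badly fragmented when small-extent free space is high, or when little
# space is allocated in large extents (thresholds from the table).
if awk "BEGIN{exit !($small64 > 50)}"; then need_defrag=yes; fi
if awk "BEGIN{exit !($small8 > 5)}";   then need_defrag=yes; fi
if awk "BEGIN{exit !($large64 < 5)}";  then need_defrag=yes; fi
echo "extent defragmentation needed: $need_defrag"
```

With these values the second check fires (17.08% > 5%), so the script reports that defragmentation is needed, matching the conclusion above.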

8 Defragment /test and gather summary statistics after each pass through the
file system. After the defragmentation completes, determine whether /test is
still fragmented. Why or why not?

Note: The defragmentation can take about 5 minutes to complete.

Solution

/opt/VRTS/bin/fsadm -e -E -s /test

The file system no longer needs to be defragmented, because:


% Free blocks in extents smaller than 64 blks: (<50%) - OK
% Free blocks in extents smaller than 8 blks: (<1%) - OK
% blks allocated to extents 64 blks or larger: (>5%) - OK
Copyright 2014 Symantec Corporation. All rights reserved.

Note: Run a fragmentation report on /test to analyze the directory and
check the file system fragmentation.

/opt/VRTS/bin/fsadm -D -E /test
End of Solution

9 What is the difference between an unfragmented and a fragmented file system?

Solution
A fragmented file system has free space scattered throughout the file system in
relatively small extents whereas an unfragmented file system has free space in
just a few relatively large extents.
End of Solution

10 Is any one environment more prone to needing defragmentation than another?

Solution
Yes, volatile environments wherein files are grown, shrunk, erased, moved,
and so on, especially where the file systems do not have much free space, are
prone to fragmentation.
Stable environments, such as Oracle databases and logs, have very little impact
on the supporting file system, so they require infrequent defragmentation.
End of Solution

Exercise 3: SmartMove

sym1

In this lab section, you make a larger volume so that you can see the time
difference when using the SmartMove feature.

1 Create a 1-GB volume called appvol in appdg.


Solution
vxassist -g appdg make appvol 1g
End of Solution

2 Create a VxFS file system on appvol and mount it to /app.

Solution
mkfs -t vxfs /dev/vx/rdsk/appdg/appvol
mkdir /app (if necessary)
mount -t vxfs /dev/vx/dsk/appdg/appvol /app
End of Solution

3 Copy /etc/hosts to /app.

Solution
cp /etc/hosts /app
End of Solution

4 Unmount the /app file system.

Solution
umount /app
End of Solution

5 Mirror the appvol volume. Record the time it takes to complete the mirror
operation.

Note: Although SmartMove is enabled by default in Storage Foundation 6.x,
it is not used when the file system is unmounted.

Solution
time -p vxassist -g appdg mirror appvol
Time to create mirror _____________________________________
End of Solution

6 Delete the mirror that you added to appvol.

Solution
vxassist -g appdg remove mirror appvol
End of Solution

7 Mount the /app file system.

Solution
mount -t vxfs /dev/vx/dsk/appdg/appvol /app
End of Solution

8 Mirror the appvol volume. Record the time it takes to complete the mirror
operation.

Solution
time -p vxassist -g appdg mirror appvol

Time to create mirror _____________________________________



Note: The mirroring operation should not take as long as it did the first time
the mirror was created. Because the file system is now mounted, SmartMove
mirrors only the used data in the file system rather than the whole
volume.

End of Solution
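To compare the two recorded times numerically, the elapsed value printed by time -p can be captured in a variable. This is a generic bash sketch; sleep 1 stands in here for the vxassist mirror command:

```shell
# time -p writes "real", "user" and "sys" lines to stderr; redirect the
# braced group's stderr and keep only the "real" (elapsed) seconds.
# "sleep 1" is a placeholder for: vxassist -g appdg mirror appvol
elapsed=$( { time -p sleep 1; } 2>&1 | awk '/^real/ {print $2}' )
echo "elapsed seconds: $elapsed"
```

Capturing both runs this way lets you subtract the two values instead of filling in the blanks by hand.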

9 Unmount /app using the umount command.

Solution
umount /app
End of Solution

10 Remove appvol in the appdg disk group.

Solution
vxassist -g appdg remove volume appvol
End of Solution

Exercise 4: Thin reclamation

sym1

1 View all the disk devices on the system. Then use the vxdisk -o thin
list command to list only the thin provisioning capable devices.
Solution
vxdisk -o alldgs list
vxdisk -o thin list
End of Solution

2 Locate the thin provisioning and thin reclamation capable devices from the
output in the previous step. The TYPE column in the output of the
vxdisk -o thin list command should display thinrclm. Choose
two thin reclamation capable devices (3pardata0_49 and 3pardata0_50); if they
are uninitialized, use the vxdisksetup command to initialize them.

Note: If you do not see any thin provisioning and thin reclamation capable
devices in the vxdisk list output, contact your instructor. You
must have thin provisioning and thin reclamation capable devices to
complete this lab section.

Solution
vxdisksetup -i 3pardata0_49
vxdisksetup -i 3pardata0_50
vxdisk -o alldgs list
vxdisk -o thin list

The TYPE field in the output of the vxdisk -o alldgs list command
should change to auto:cdsdisk and the STATUS of the disk should change
to online thinrclm but the DISK and GROUP columns should still be
empty.
End of Solution

3 Create a new disk group using the two disks (3pardata0_49 and 3pardata0_50)
you initialized in the previous step. Name the new disk group thindg. Observe
the change in the disk status.

Solution
vxdg init thindg thindg01=3pardata0_49 \
    thindg02=3pardata0_50
vxdisk -o alldgs list

The TYPE and STATUS of the disks are the same but the DISK and GROUP
columns now show the new disk media name and the disk group name
respectively.
End of Solution

4 Using the vxassist command, create a new volume of size 3g in thindg.
Name the new volume thinvol.

Solution
vxassist -g thindg make thinvol 3g
End of Solution

5 Create a Veritas file system on the volume and mount it to /thin. Do not add
the file system to the file system table.

Solution
mkfs -t vxfs /dev/vx/rdsk/thindg/thinvol
mkdir /thin (if necessary)
mount -t vxfs /dev/vx/dsk/thindg/thinvol /thin
End of Solution

6 Display the size of the file system using the df -k /thin command.

Solution
df -k /thin
End of Solution

7 Use the vxdisk -o thin,fssize list command to view the size of
the disks compared to the physically allocated space.

Solution
vxdisk -o thin,fssize list
End of Solution

8 Use the dd command to make some 400MB files on the file system mounted at
/thin, so that the free space is less than 10 percent of the total file system
size. Use the df -k /thin command to monitor the file system free space.

Solution
dd if=/dev/zero of=/thin/file1 bs=1024k count=400
dd if=/dev/zero of=/thin/file2 bs=1024k count=400
dd if=/dev/zero of=/thin/file3 bs=1024k count=400
dd if=/dev/zero of=/thin/file4 bs=1024k count=400
dd if=/dev/zero of=/thin/file5 bs=1024k count=400
dd if=/dev/zero of=/thin/file6 bs=1024k count=400
dd if=/dev/zero of=/thin/file7 bs=1024k count=400
df -k /thin
End of Solution
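The same pattern can be tried anywhere with a scaled-down, self-contained sketch: 4-MB files in a temporary directory instead of 400-MB files on /thin, plus a df-based computation of the usage percentage that the step asks you to monitor:

```shell
# Create seven files with the same dd pattern as above, but smaller.
dir=$(mktemp -d)
for i in 1 2 3 4 5 6 7; do
    dd if=/dev/zero of="$dir/file$i" bs=1024k count=4 2>/dev/null
done

size=$(wc -c < "$dir/file1")
echo "file1 size: $size bytes"    # 4 * 1024 * 1024 = 4194304

# df -P keeps each file system on one line; field 5 of the second
# line is the Use% column.
used_pct=$(df -kP "$dir" | awk 'NR==2 {gsub("%", "", $5); print $5}')
echo "file system usage: ${used_pct}%"
rm -rf "$dir"
```

On a thin LUN the interesting part is that this usage grows while the physically allocated space, shown by vxdisk -o thin,fssize list, grows with it.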

9 Use the vxdisk -o thin,fssize list command again to view the
increase in the physically allocated space.

Solution
vxdisk -o thin,fssize list
End of Solution

10 Delete the files created in step 8.

Solution
rm -f /thin/file*
End of Solution

11 Use the df -k and vxdisk -o thin,fssize list commands again.
Note that the usage has decreased back to the starting point, but the physically
allocated space remains the same.

Solution

df -k /thin
vxdisk -o thin,fssize list
End of Solution

12 Use the vxdisk reclaim command on the thindg disk group to reclaim
the space on the LUNs.

Solution
vxdisk reclaim thindg
End of Solution

13 Use the vxdisk -o thin,fssize list command again to view the
decrease in the physically allocated space.

Solution
vxdisk -o thin,fssize list
End of Solution

14 Unmount the /thin file system and destroy the thindg disk group.

Solution
umount /thin
vxdg destroy thindg

End of Solution

15 Unmount the /test file system and destroy the testdg and appdg disk groups.

Solution
umount /test
vxdg destroy testdg
vxdg destroy appdg
End of Solution

End of lab


Lab 7: Managing Devices Within the VxVM Architecture


In this lab, you explore the VxVM tools used to manage the device discovery layer
(DDL) and dynamic multipathing (DMP). The objective of this exercise is to make
you familiar with the commands used to administer multipathed disks.
This lab contains the following exercises:
Administering the Device Discovery Layer
Displaying DMP information
Displaying DMP statistics
Enabling and disabling DMP paths
Managing array policies

Prerequisite setup

To perform this lab, you need a lab system with Storage Foundation pre-installed,
configured and licensed. In addition to this, you also need a minimum of three
external disks to be used during the labs.
Before you begin this lab, destroy any data disk groups that are left from previous
labs:
vxdg destroy diskgroup

Lab information
In preparation for this lab, you need the following information about your lab
environment.
Object                            Value
root password                     train
Host name of the lab system       sym1
Shared data disks:                emc0_dd7 - emc0_dd12
                                  3pardata0_49 - 3pardata0_60
Location of lab scripts:          /student/labs/sf/sf61
Location of the vxbench program:  /opt/VRTSspt/FS/VxBench




Exercise 1: Administering the Device Discovery Layer

sym1

1 List all currently supported disk arrays.


Solution
vxddladm listsupport
End of Solution

2 List all the enclosures connected to your system using the vxdmpadm
listenclosure all command. Does Volume Manager recognize the disk
array you are using in your lab environment? What is the name of the
enclosure? Note the enclosure name here.

Original enclosure name:__________________________________________


Solution
vxdmpadm listenclosure all

Volume Manager recognizes the disk array if it is among the supported disk
arrays you listed in step 1. Any internal disks will show with an enclosure
name of OTHER_DISKS or DISK.

Note: Volume Manager 6.x does not require that the all option be used. If it
is left out of the command, all is assumed.

End of Solution

3 Use the vxddladm command to determine if enclosure based naming is set on
your system.

Solution
vxddladm get namingscheme
NAMING_SCHEME    PERSISTENCE  LOWERCASE  USE_AVID
=================================================
Enclosure Based  Yes          Yes        Yes
End of Solution

4 If enclosure based naming (EBN) is not set on your system, set it using the
vxdiskadm command.

Solution
vxdiskadm (only if EBN is not set)

Select the option, Change the disk-naming scheme and complete the
prompts to select enclosure-based naming.
End of Solution

5 Display the disks attached to your system and note the changes.

Solution
vxdisk -o alldgs list
End of Solution

6 Rename the emc0 enclosure to emc_disk using the vxdmpadm setattr
command. To find the exact command syntax, check the manual pages for the
vxdmpadm command.

Note: The original name of the enclosure is displayed by the vxdmpadm
listenclosure all command that you used in step 2.

Solution
vxdmpadm setattr enclosure emc0 name=emc_disk
End of Solution

7 Display the disks attached to your system and note the changes.

Solution
vxdisk -o alldgs list

The disks should now contain the new name that you entered in the previous
step, for example emc_disk_dd1.
End of Solution

Exercise 2: Displaying DMP information

sym1

1 List all controllers on your system using the vxdmpadm listctlr
command. How many controllers are listed for the disk array your system is
connected to?
Solution
vxdmpadm listctlr

In the virtual lab environment, you will observe two controllers listed for the
enclosure you renamed to emc_disk.
End of Solution

2 Using one of the controller names discovered in the previous step, display all
paths connected to the controller using the vxdmpadm getsubpaths
ctlr=controller command. Compare the NAME and the
DMPNODENAME columns in the output.

Solution
vxdmpadm getsubpaths ctlr=controller
The NAME column lists all of the disk devices that the operating system sees
whereas the DMPNODENAME column provides the corresponding DMP node
name used for that disk device. If you have not switched to enclosure based
naming, these names will be the same. Note that the DMP node names are the
ones displayed by the vxdisk -o alldgs list command.

Example Output
vxdmpadm getsubpaths ctlr=c1
NAME  STATE[A]    PATH-TYPE[M]  DMPNODENAME   ENCLR-TYPE  ENCLR-NAME  ATTRS
===========================================================================
sde   ENABLED(A)  -             emc_disk_dd1  EMC         emc_disk    -
sdg   ENABLED(A)  -             emc_disk_dd2  EMC         emc_disk    -
sdi   ENABLED(A)  -             emc_disk_dd3  EMC         emc_disk    -
sdk   ENABLED(A)  -             emc_disk_dd4  EMC         emc_disk    -
. . . (rest of output omitted)
End of Solution

3 In the displayed list of paths, use the DMP node name of one of the paths to
display information about paths that lead to the particular LUN. How many
paths can you see?

Solution
vxdmpadm getsubpaths dmpnodename=emc_disk_dd7
End of Solution

4 View DDL extended attributes for the dmpnodename used in the previous step
using the vxdisk -p list command.

Solution
vxdisk -p list emc_disk_dd7

You should see extended attributes such as cabinet serial number, array type,
transport, and so on.
End of Solution

5 Determine the Port ID (PID) for all devices attached to the system using the
vxdisk -p list command and the -x option.

Solution
vxdisk -x PID -p list

Selecting a specific attribute is useful when you wish to see that attribute for all
devices attached to a system.
End of Solution

6 Determine the DDL_DEVICE_ATTR for all disks attached to the system using
the vxdisk -p list command and the -x option. If no attributes are set,
the attribute displays NULL.

Solution
vxdisk -x DDL_DEVICE_ATTR -p list
End of Solution

7 Choose an attribute from the following list and view the attribute for all disks
using vxdisk -x attribute -p list. Not all attributes will have
values set.

The supported attributes:
DGID VID
PID ANAME
ATYPE TPD_SUPPRESSED
NR_DEVICE CAB_SERIAL_NO
LUN_SERIAL_NO PORT_SERIAL_NO
CUR_OWNER LIBNAME
LUN_OWNER LUN_TYPE
SCSI_VERSION REVISION
TPD_META_DEVNO TPD_META_NAME
TPD_LOGI_CTLR TPD_PHY_CTLR
TPD_SUBPATH TPD_DEVICES
ASL_CACHE ASL_VERSION
UDID ECOPY_DISK
ECOPY_TARGET_ID ECOPY_OPER_PARM
DEVICE_TYPE DYNAMIC
TPD_HIDDEN_DEVS LOG_CTLR_NAME
PHYS_CTLR_NAME DISK_GEOMETRY
MT_SAFE FC_PORT_WWN
FC_LUN_NO HARDWARE_MIRROR
TPD_CONTROLLED TPD_PARTITION_MAP
DMP_SINGLE_PATH DMP_VMDISK_IOPOLICY
DDL_DEVICE_ATTR DDL_THIN_DISK
Solution
vxdisk -x attribute -p list

End of Solution

Exercise 3: Displaying DMP statistics

sym1

1 Create a disk group called appdg that contains two disks (emc_disk_dd7 and
emc_disk_dd8).
Solution
vxdisksetup -i emc_disk_dd7 (if necessary)
vxdisksetup -i emc_disk_dd8 (if necessary)
vxdg init appdg appdg01=emc_disk_dd7 \
appdg02=emc_disk_dd8
End of Solution

2 Create a 1-GB volume called appvol in the appdg disk group.

Solution
vxassist -g appdg make appvol 1g
End of Solution

3 Determine the device used for the appvol volume. This device name will be
used as the dmpnodename in step 10.

Solution
vxprint -g appdg -htr

v  appvol      -          ENABLED  ACTIVE  2097152  SELECT  -             fsgen
pl appvol-01   appvol     ENABLED  ACTIVE  2097152  CONCAT  -             RW
sd appdg01-01  appvol-01  appdg01  0       2097152  0       emc_disk_dd7  ENA

In the above example, the emc_disk_dd7 device is used for the appvol volume.
End of Solution

4 Create a VxFS file system on the appvol volume using the mkfs command.

Solution
mkfs -t vxfs /dev/vx/rdsk/appdg/appvol
End of Solution

5 Create a mount point for the appvol volume called /app and mount the file
system created in the previous step to the mount point.

Solution
mkdir /app (if necessary)
mount -t vxfs /dev/vx/dsk/appdg/appvol /app
End of Solution

6 Enable the gathering of I/O statistics for DMP.

Solution
vxdmpadm iostat start
End of Solution

7 Reset the DMP I/O statistics counters to zero.

Solution
vxdmpadm iostat reset
End of Solution

8 Next, use the dmpiotest script to generate I/O on the disk used by the appdg
disk group. The dmpiotest script uses the vxbench utility, which is a part
of the VRTSspt package and is installed as a part of the SF installation. Change
to the directory containing lab scripts and execute the script:

./dmpiotest

Solution
cd /student/labs/sf/sf61
./dmpiotest /app

This script creates some test files in the /app directory (if they do not exist)
and starts several invocations of the vxbench program as follows:

/opt/VRTSspt/FS/VxBench/vxbench_rhel5_x86_64 \
    -w rand_mixed -i \
    iosize=8,iocount=65536,maxfilesize=102400,nreps=100 \
    /app/test1 /app/test2 /app/test3 /app/test4 \
    /app/test5 &

Note: The script uses a version of the vxbench program specific to your
platform.

End of Solution

9 Display I/O statistics for all controllers.

Solution
vxdmpadm iostat show all
End of Solution

10 Display I/O statistics for the DMP node that corresponds to the device used by
appvol. Display statistics every two seconds, eight times.

Note: You can use the vxprint -g appdg -htr appvol command to
identify the dmp node name of the device used by appvol.

Solution
vxdmpadm iostat show dmpnodename=emc_disk_dd7 \
    interval=2 count=8
End of Solution

Exercise 4: Enabling and disabling DMP paths

sym1

1 Use the dmpiotest script to generate I/O on the disk used by the appdg disk
group. The dmpiotest script uses the vxbench utility, which is a part of
the VRTSspt package and is installed as a part of the SF installation. Change to
the directory containing lab scripts and execute the script:
./dmpiotest

Solution
cd /student/labs/sf/sf61
./dmpiotest /app

This script creates some test files in the /app directory (if they do not exist)
and starts several invocations of the vxbench program as follows:

/opt/VRTSspt/FS/VxBench/vxbench_rhel5_x86_64 \
    -w rand_mixed -i \
    iosize=8,iocount=65536,maxfilesize=102400,nreps=100 \
    /app/test1 /app/test2 /app/test3 /app/test4 \
    /app/test5 &

Note: The script uses a version of the vxbench program specific to your
platform.

End of Solution

2 Display I/O statistics for the DMP node that corresponds to the device used by
appvol. Display statistics every two seconds, one thousand times. This ensures
that the output continues as you enable and disable paths. I/O should be
present for both paths to the device.

Note: You can use the vxprint -g appdg -htr appvol command to
identify the dmp node name of the device used by appvol.

Solution
vxdmpadm iostat show dmpnodename=emc_disk_dd7 \
interval=2 count=1000
End of Solution

3 Open a new terminal and use the vxdmpadm disable command to disable
one of the paths shown in the vxdmpadm iostat output. Go back to the
original terminal and note that I/O for that path stops.

Solution
vxdmpadm disable path=path_name
End of Solution

4 Switch to the new terminal and use the vxdmpadm enable command to
enable the path that was disabled in the previous step. Go back to the original
terminal and note that I/O for that path resumes.

Solution
vxdmpadm enable path=path_name
End of Solution

5 Switch to the new terminal and use the vxdmpadm disable command to
disable the other path shown in the vxdmpadm iostat output. Go back to
the original terminal and note that I/O for that path stops.

Solution
vxdmpadm disable path=path_name
End of Solution

6 Switch to the new terminal and use the vxdmpadm enable command to
enable the path that was disabled in the previous step. Go back to the original
terminal and note that I/O for that path resumes.

Solution
vxdmpadm enable path=path_name
End of Solution

Exercise 5: Managing array policies

sym1

1 Display the current I/O policy for the enclosure you are using.
Solution
vxdmpadm getattr enclosure emc_disk iopolicy

The default I/O policy is MinimumQ for the array used in the virtual lab
environment.
End of Solution

2 Change the current I/O policy for the enclosure to stop load-balancing and only
use multipathing for high availability.

Solution
vxdmpadm setattr enclosure emc_disk \
iopolicy=singleactive
End of Solution

3 Display the new I/O policy attribute.

Solution
vxdmpadm getattr enclosure emc_disk iopolicy
End of Solution

4 Reset the DMP I/O statistics counters to zero.

Solution
vxdmpadm iostat reset
End of Solution

5 Next, use the dmpiotest script to generate I/O on the disk used by the appdg
disk group. The dmpiotest script uses the vxbench utility, which is a part
of the VRTSspt package and is installed as a part of the SF installation. Change
to the directory containing lab scripts and execute the script:

./dmpiotest

Solution
cd /student/labs/sf/sf61
./dmpiotest /app

This script creates some test files in the /app directory (if they do not exist)
and starts several invocations of the vxbench program as follows:

/opt/VRTSspt/FS/VxBench/vxbench_rhel5_x86_64 \
    -w rand_mixed -i \
    iosize=8,iocount=65536,maxfilesize=102400,nreps=100 \
    /app/test1 /app/test2 /app/test3 /app/test4 \
    /app/test5 &

Note: The script uses a version of the vxbench program specific to your
platform.

End of Solution

6 Display I/O statistics for the DMP node that corresponds to the device used by
appvol. Display statistics every two seconds, eight times. Compare the
output to the output you observed before changing the DMP policy to
singleactive. Note that a single path is now used.

Note: You can use the vxprint -g appdg -htr appvol command to
identify the dmp node name of the device used by appvol.

Solution
vxdmpadm iostat show dmpnodename=emc_disk_dd7 \
interval=2 count=8
End of Solution

7 Change the DMP I/O policy back to its default value (MinimumQ).

Solution
vxdmpadm setattr enclosure emc_disk iopolicy=minimumq
End of Solution

8 Next, use the dmpiotest script to generate I/O on the disk used by the appdg
disk group. The dmpiotest script uses the vxbench utility, which is a part
of the VRTSspt package and is installed as a part of the SF installation. Change
to the directory containing lab scripts and execute the script:

./dmpiotest

Solution
cd /student/labs/sf/sf61
./dmpiotest /app

This script creates some test files in the /app directory (if they do not exist)
and starts several invocations of the vxbench program as follows:

/opt/VRTSspt/FS/VxBench/vxbench_rhel5_x86_64 \
    -w rand_mixed -i \
    iosize=8,iocount=65536,maxfilesize=102400,nreps=100 \
    /app/test1 /app/test2 /app/test3 /app/test4 \
    /app/test5 &

Note: The script uses a version of the vxbench program specific to your
platform.

End of Solution

9 Display I/O statistics for the DMP node again. Compare the output to the
output you observed when changing the DMP policy to singleactive. Note that
both paths are now used again.

Solution
vxdmpadm iostat show dmpnodename=emc_disk_dd7 \
interval=2 count=8
End of Solution

10 Unmount /app.

Solution
umount /app
End of Solution

Note: If the unmount of /app fails because the device is busy, it is because
the vxbench commands started by the dmpiotest script are still
running. Either let them complete, or kill each running command
(ps -ef | grep vxbench). Use the pkill vxbench command to
stop the dmpiotest script.
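The cleanup described in this note can be scripted. The following generic sketch uses a background sleep process as a placeholder for vxbench, since the pattern (find with pgrep, stop with pkill) is the same:

```shell
# Start a placeholder background process (stands in for vxbench).
sleep 30 &
bgpid=$!

# Look for it, then stop it -- the equivalent of: pkill vxbench
if pgrep -f 'sleep 30' >/dev/null; then echo "process running"; fi
pkill -f 'sleep 30' || true
wait "$bgpid" 2>/dev/null || true
if ! pgrep -f 'sleep 30' >/dev/null; then echo "process stopped"; fi
```

Once no matching processes remain, the umount of /app should succeed.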

11 Rename the enclosure back to its original name (emc0) using the vxdmpadm
setattr command.

Note: The original name of the enclosure was displayed by the vxdmpadm
listenclosure all command that you used in step 2 of
Exercise 1.

Solution
vxdmpadm setattr enclosure emc_disk name=emc0
End of Solution

12 Destroy the appdg disk group.

Solution
vxdg destroy appdg

End of Solution

End of lab


Lab 8: Resolving Hardware Problems


In this lab, you practice recovering from a variety of hardware failure scenarios,
resulting in disabled disk groups and failed disks. First you recover a temporarily
disabled disk group and then you use a set of interactive lab scripts to investigate
and practice recovery techniques. Each interactive lab script:
Sets up the required volumes
Simulates and describes a failure scenario
Prompts you to fix the problem
This lab contains the following exercises:
Recovering a temporarily disabled disk group
Preparing for disk failure labs
Recovering from temporary disk failure

Recovering from permanent disk failure


Optional lab: Recovering from temporary disk failure - Layered volume
Optional lab: Recovering from permanent disk failure - Layered volume
Optional lab: Replacing physical drives (without hot relocation)
Optional lab: Replacing physical drives (with hot relocation)
Optional lab: Recovering from temporary disk failure with vxattachd
daemon
Optional lab: Exploring spare disk behavior
Optional lab: Using the Support Web Site

Prerequisite setup
To perform this lab, you need a lab system with Storage Foundation pre-installed,
configured and licensed. In addition to this, you also need four external disks to be
used during the labs.

Lab information
In preparation for this lab, you need the following information about your lab
environment.
Object Value
Shared data disks: emc0_dd7 - emc0_dd12
3pardata0_49 - 3pardata0_54
Location of lab scripts: /student/labs/sf/sf61

The exercises for this lab start on the next page.


Exercise 1: Recovering a temporarily disabled disk group

sym1

This lab section requires two terminal windows to be open.

1 Create a disk group called appdg that contains one disk (emc0_dd7).
Solution
vxdisksetup -i emc0_dd7 (if necessary)
vxdg init appdg appdg01=emc0_dd7
End of Solution

2 Create a 1g concatenated volume called appvol in appdg disk group.

Solution
vxassist -g appdg make appvol 1g
End of Solution

3 Create a Veritas file system on appvol and mount it to /app.

Solution
mkfs -t vxfs /dev/vx/rdsk/appdg/appvol
mkdir /app (if required)
mount -t vxfs /dev/vx/dsk/appdg/appvol /app
End of Solution

4 Copy the contents of the /etc/default directory to /app and display the
contents of the file system.

Solution
cp -r /etc/default /app
ls -lR /app
End of Solution

5 If you want to observe the error messages displayed in the system log while the
failure is being created, open a second terminal window and use the tail -f
command to view the system log. Exit the output using CTRL-C when you are
satisfied.

Solution
tail -f /var/log/messages
End of Solution

6 Change into the directory containing the faildg_temp.pl script and
execute the script to create a failure in the appdg disk group.

Notes:
The faildg_temp.pl script disables the paths to the disk in the disk
group to simulate a hardware failure. This is just a simulation and not a real
failure; therefore, the operating system will still be able to see the disk after
the failure. The script will prompt you for the disk group name and then it
will create the failure by disabling the paths to the disk, performing some
I/O and then re-enabling the paths.
All lab scripts are located in the /student/labs/sf/sf61 directory.

Note: You may have to run the script two or three times before the error occurs.

Solution
/student/labs/sf/sf61/faildg_temp.pl
What is the name of the disk group would you like to
temporarily disable? [appdg]: appdg
Checking to make sure appdg is enabled . . . done.
Creating failure, please be patient

dd: opening `/app/testfile': Input/output error



Finished creating failure!

Note: You will see a dd error because I/O will be stopped as soon as the
failure is recognized.

End of Solution

7 Use the vxdisk -o alldgs list and vxdg list commands to
determine the status of the disk group and the disk.

Solution
vxdisk -o alldgs list
vxdg list

The disk group should show as disabled and the disk status should change to
online dgdisabled.
End of Solution

8 What happened to the file system?

Solution
df -k /app
The file system is also disabled.
End of Solution

9 Assuming that the failure was due to a temporary fiber disconnection and that
the data is still intact, recover the disk group and start the volume using the first
terminal window. Verify the disk and disk group status using the vxdisk -o
alldgs list and vxdg list commands.

Solution
umount /app
vxdg deport appdg
vxdg import appdg
vxdisk -o alldgs list
vxdg list

The disk group should now be enabled and the disk status should change back
to online.
End of Solution

10 Remount the file system and verify that the contents are still there.

Solution
mount -t vxfs /dev/vx/dsk/appdg/appvol /app
ls -lR /app
It is not necessary to run an fsck on the file system.
End of Solution

11 Unmount the file system.

Solution
umount /app
End of Solution

12 Destroy the appdg disk group.

Solution
vxdg destroy appdg
End of Solution
Exercise 2: Preparing for disk failure labs

sym1

Overview
The following sections use an interactive script to simulate a variety of disk failure
scenarios. Your goal is to recover from the problem as described in each scenario.
Use your knowledge of VxVM administration, in addition to the VxVM recovery
tools and concepts described in the lesson, to determine which steps to take to
ensure recovery. After you recover the test volumes, the script verifies your
solution and provides you with the result. You succeed when you recover the
volumes without corrupting the data.
For most of the recovery problems, you can use any of the VxVM interfaces: the
command line interface, the Veritas Operations Manager (VOM) Web console, or
the vxdiskadm menu interface. Lab solutions are provided for only one method.
If you have questions about recovery using interfaces not covered in the solutions,
see your instructor.

Setup
Due to the way in which the lab scripts work, it is important to set up your
environment as described in this setup section:

1 Create a disk group named testdg and add three disks (emc0_dd7,
emc0_dd8, and emc0_dd9) to the disk group. Assign the following disk
media names to the disks: testdg01, testdg02, and testdg03.
Solution
vxdg init testdg testdg01=emc0_dd7 testdg02=emc0_dd8 \
testdg03=emc0_dd9
End of Solution

2 In the first terminal window, navigate to the directory that contains the lab
scripts. Note that the lab scripts are located at the
/student/labs/sf/sf61 directory.

Solution
cd /student/labs/sf/sf61
End of Solution

Exercise 3: Recovering from temporary disk failure

sym1

In this lab exercise, a temporary disk failure is simulated. Your goal is to recover
all of the redundant and nonredundant volumes that were on the failed drive. The
lab script disk_failures.pl sets up the test volume configuration and
simulates a disk failure. You must then recover and validate the volumes.

Note: The lab scripts are located at the /student/labs/sf/sf61 directory.

1 From the first terminal window (from the directory that contains the lab
scripts), run the script disk_failures.pl, answer the initial configuration
questions and then select option 1, Exercise 3 - Recovering from temporary
disk failure. Note that the initial configuration questions will only be asked the
first time you run the script. Use test as the prefix for disk group and volume
names.
Solution
./disk_failures.pl
Initial Configuration File Check

What prefix should be used for the disk group name and
volume names? [app]: test <ENTER>

What is the path to the SF 6.1 software? [/student/software/sf/sf61]: <ENTER>

What is the path to the SF 6.1 lab scripts? [/student/labs/sf/sf61]: <ENTER>

This script can be used to test all or any specific exercises for the
SF 6.1 Disk Failure Labs.

Choose the desired lab from the list below.


1. Exercise 3 - Recovering from temporary disk failure
2. Exercise 4 - Recovering from permanent disk failure
3. Exercise 5 - Optional Lab: Recovering from temporary
disk failure - Layered volume
4. Exercise 6 - Optional Lab: Recovering from permanent
disk failure - Layered volume

Which setup do you wish to run? Enter 1 - 4: 1
End of Solution

This script sets up two volumes (test1 and test2), creates a vxfs file system on
each, and then copies duplicate files to each file system. Both file systems are
then mounted.
test1 with a mirrored layout
test2 with a concatenated layout

2 In a second terminal window, view the failure using the vxdisk -o
alldgs list and vxprint -g testdg -htr commands.

Solution
vxprint -g testdg -htr
vxdisk -o alldgs list
End of Solution

3 Attempt to view the files that were copied to mount points /test1 and
/test2. What did you see?

Solution
ls /test1
ls /test2

Because the test1 volume is mirrored, the files in the /test1 mount point are
still accessible. When trying to view the files in /test2, you should see the
following error:
/test2: I/O error
End of Solution

4 Attempt to recover the volumes.

Note: When performing recovery procedures, run vxprint and vxdisk
list often to see what is changing after issuing recovery commands.

Solution
vxprint -g testdg -htr

vxdisk -o alldgs list
End of Solution

To recover from the temporary failure:

a If you are using enclosure based naming, identify the OS native name of
the disk that has temporarily failed. You will use this OS disk name while
verifying that the operating system recognizes the device.

Solution
vxdisk -e list ebn_of_failed_disk
DEVICE TYPE DISK GROUP STATUS
OS_NATIVE_NAME ATTR
ebn_of_failed_disk auto:cdsdisk testdg online
osn_of_failed_disk lun
End of Solution

b Ensure that the operating system recognizes the device using the
appropriate OS commands. Ignore any warning message about disk
geometry mismatch, if displayed.

Solution
partprobe /dev/osn_of_failed_disk
End of Solution

c Verify that the operating system recognizes the device using the
appropriate OS commands.

Solution
fdisk -l /dev/osn_of_failed_disk

End of Solution

d Force the VxVM configuration daemon to reread all of the drives in the
system.

Solution
vxdctl enable
End of Solution

e Reattach the device to the disk media record using the vxreattach
command.

Solution
vxreattach
End of Solution

f Recover the volumes using the vxrecover command.

Solution
vxrecover
End of Solution

g Use the vxvol command to start the nonredundant volume.

Solution
vxvol -g testdg -f start test2
End of Solution

5 Because this is a temporary failure, the files in the test2 volume (and file
system) are still available. Recover the mount point by performing the
following:

a Unmount the /test2 mount point.

Solution
umount /test2
End of Solution

b Perform an fsck on the file system.

Solution
fsck -t vxfs /dev/vx/rdsk/testdg/test2
End of Solution

c Mount the test2 volume to /test2.

Solution
mount -t vxfs /dev/vx/dsk/testdg/test2 /test2
End of Solution

6 Compare the two mount points using the diff command.

Solution
diff /test1 /test2
If the files are identical, diff produces no output for them; only differences
are displayed. You should, however, see a line listing the common subdirectories:
Common subdirectories: /test1/lost+found and /test2/lost+found

Note: There is a potential for file system corruption in the test2 volume
since it has no redundancy.

End of Solution
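The diff behavior described above can be reproduced on any two identical directory trees. The paths below are illustrative temporaries, not the lab mount points:

```shell
# Two identical directory trees, each with a subdirectory, mimicking
# the /test1 and /test2 mount points (paths are illustrative only)
mkdir -p /tmp/diffdemo/test1/lost+found /tmp/diffdemo/test2/lost+found
echo "same data" > /tmp/diffdemo/test1/file1
echo "same data" > /tmp/diffdemo/test2/file1
# Identical files produce no output; a "Common subdirectories" line is
# printed for subdirectories present on both sides
diff /tmp/diffdemo/test1 /tmp/diffdemo/test2
```

Because identical content yields no difference lines, an empty diff (apart from the common-subdirectories note) is the success indicator after recovery.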

7 Unmount the file systems and delete the test1 and test2 volumes.

Solution
umount /test1
umount /test2
vxassist -g testdg remove volume test1
vxassist -g testdg remove volume test2
End of Solution
Exercise 4: Recovering from permanent disk failure

sym1

In this lab exercise, a permanent disk failure is simulated. Your goal is to replace
the failed drive and recover the volumes as needed. The lab script
disk_failures.pl sets up the test volume configuration and simulates a disk
failure. You must recover the failure and validate the volumes.

Note: The lab scripts are located at the /student/labs/sf/sf61 directory.

1 In the first terminal window (from the directory that contains the lab scripts),
run the script disk_failures.pl, and select option 2, Exercise 4 -
Recovering from permanent disk failure.
Solution
./disk_failures.pl
This script can be used to test any specific exercise
for the SF 6.1 Disk Failure Labs.

Choose the desired lab from the list below.

1. Exercise 3 - Recovering from temporary disk failure


2. Exercise 4 - Recovering from permanent disk failure
3. Exercise 5 - Optional Lab: Recovering from temporary
disk failure - Layered volume
4. Exercise 6 - Optional Lab: Recovering from permanent
disk failure - Layered volume

Which setup do you wish to run? Enter 1 - 4: 2


End of Solution

This script sets up two volumes (test1 and test2), creates a vxfs file system on
each, and then copies duplicate files to each file system. Both file systems are
then mounted.
test1 with a mirrored layout
test2 with a concatenated layout

2 In a second terminal window, view the failure using the vxdisk -o
alldgs list and vxprint -g testdg -htr commands.

Solution
vxprint -g testdg -htr
vxdisk -o alldgs list
End of Solution

3 Attempt to view the files that were copied to mount points /test1 and
/test2. What did you see?

Solution
ls /test1
ls /test2

Because the test1 volume is mirrored, the files in the /test1 mount point are
still accessible. When trying to view the files in /test2, you should see the
following error:
/test2: I/O error
End of Solution

4 Replace the permanently failed drive with a new disk at another SCSI location.
Then, recover the volumes.

Note: When performing recovery procedures, run vxprint and vxdisk
list often to see what is changing after issuing recovery commands.

To recover from the permanent failure:



a In the second terminal window, initialize a new drive (emc0_dd10).

Solution
vxdisksetup -i emc0_dd10 (if necessary)
End of Solution

b Attach the failed disk media name (testdg02) to the new drive.

Solution
vxdg -g testdg -k adddisk testdg02=emc0_dd10
End of Solution

c Recover the volumes using the vxrecover command.

Solution
vxrecover
End of Solution

d Use the vxvol command to start the nonredundant volume.

Solution
vxvol -g testdg -f start test2
End of Solution

Note: You can also use the vxdiskadm menu interface to correct the failure.
Select Replace a failed or removed disk option and select the desired
drive when prompted.

5 Because this is a permanent failure, the files in the test2 volume (and file
system) are no longer available. Recover the mount point by performing the
following:

a Unmount the /test2 mount point.

Solution

umount /test2
End of Solution

b Attempt to mount the test2 volume to /test2. You will see an error
because the file system has been lost during the recovery.

Solution
mount -t vxfs /dev/vx/dsk/testdg/test2 /test2
UX:vxfs mount: ERROR: V-3-20012: not a valid vxfs file
system
UX:vxfs mount: ERROR: V-3-24996: Unable to get disk
layout version
End of Solution

c Create a new file system on the test2 volume and then mount the test2
volume to /test2.

Solution
mkfs -t vxfs /dev/vx/rdsk/testdg/test2
mount -t vxfs /dev/vx/dsk/testdg/test2 /test2
End of Solution

6 List the contents of /test2. In a real failure scenario, the files in this file
system would need to be restored from a backup.

Solution
ls /test2
lost+found
End of Solution

7 Unmount the file systems and delete the test1 and test2 volumes.

Solution
umount /test1
umount /test2
vxassist -g testdg remove volume test1
vxassist -g testdg remove volume test2
End of Solution

8 When you have completed this exercise, the disk device that was originally
used during the disk failure simulation is in an online invalid state.
Reinitialize the disk to prepare for later labs.

Solution
vxdisk -o alldgs list
vxdisksetup -i accessname
End of Solution

Exercise 5: Optional lab: Recovering from temporary disk failure -
Layered volume

sym1

Note: Check with your instructor to see if you have more time to complete the
optional lab exercises. You do not need to perform the optional lab
exercises unless you have extra time. The optional exercises do not have
any impact on further labs.

In this optional lab exercise, a temporary disk failure is simulated. Your goal is to
recover all of the volumes that were on the failed drive. The lab script
disk_failures.pl sets up the test volume configuration and simulates a disk
failure. You must recover the failure and validate the volumes.

Note: The lab scripts are located at the /student/labs/sf/sf61 directory.

1 Use the vxdg command with the adddisk option to add a fourth disk
(emc0_dd11) called testdg04 to the testdg disk group. If necessary,
initialize the new disk before adding it to the disk group.
Solution
vxdisksetup -i emc0_dd11 (if necessary)
vxdg -g testdg adddisk testdg04=emc0_dd11
End of Solution

2 From the directory that contains the lab scripts, run the script
disk_failures.pl, and select option 3, Exercise 5 - Optional Lab:
Recovering from temporary disk failure - Layered volume.

Solution
./disk_failures.pl
This script can be used to test any specific exercise
for the SF 6.1 Disk Failure Labs.

Choose the desired lab from the list below.

1. Exercise 3 - Recovering from temporary disk failure


2. Exercise 4 - Recovering from permanent disk failure

3. Exercise 5 - Optional Lab: Recovering from temporary
disk failure - Layered volume
4. Exercise 6 - Optional Lab: Recovering from permanent
disk failure - Layered volume

Which setup do you wish to run? Enter 1 - 4: 3


End of Solution

This script sets up two volumes (test1 and test2), creates a vxfs file system on
each, and then copies duplicate files to each file system. Both file systems are
then mounted.
test1 with a stripe-mirror layout on 4 disks
test2 with a concatenated layout

3 In a second terminal window, view the failure using the vxdisk -o
alldgs list and vxprint -g testdg -htr commands. Notice that
there are two disks that have failed.

Solution
vxprint -g testdg -htr
vxdisk -o alldgs list
End of Solution

4 Attempt to view the files that were copied to mount points /test1 and
/test2. What did you see?

Solution
ls /test1
ls /test2

Because the test1 volume is layered and mirrored, the files in the /test1
mount point are still accessible even though two disks have failed. When trying
to view the files in /test2, you should see the following error:
/test2: I/O error
End of Solution

5 Assume that the failure was temporary. In a second terminal window, attempt
to recover the volumes.

Note: When performing recovery procedures, run vxprint and vxdisk
list often to see what is changing after issuing recovery commands.

Solution
vxprint -g testdg -htr
vxdisk -o alldgs list
End of Solution

To recover from the temporary failure:

a If you are using enclosure based naming, identify the OS native names of
the disks that have temporarily failed. You will use these OS disk names
while verifying that the operating system recognizes the devices.

Solution
vxdisk -e list
DEVICE TYPE DISK GROUP STATUS
OS_NATIVE_NAME ATTR
ebn_of_failed_disk1 auto:cdsdisk testdg online
osn_of_failed_disk1 lun
ebn_of_failed_disk2 auto:cdsdisk testdg online
osn_of_failed_disk2 lun
End of Solution

b Ensure that the operating system recognizes the devices using the
appropriate OS commands. Ignore any warning message about disk
geometry mismatch, if displayed.

Solution
partprobe /dev/osn_of_first_failed_disk
partprobe /dev/osn_of_second_failed_disk
End of Solution

c Verify that the operating system recognizes the devices using the
appropriate OS commands.

Solution
fdisk -l /dev/osn_of_first_failed_disk

fdisk -l /dev/osn_of_second_failed_disk
End of Solution

d Force the VxVM configuration daemon to reread all of the drives in the
system.

Solution
vxdctl enable
End of Solution

e Reattach the devices to the disk media records using the vxreattach
command.

Solution
vxreattach
End of Solution

f Recover the volumes using the vxrecover command.

Solution
vxrecover
End of Solution

g Use the vxvol command to start the nonredundant volume.

Solution
vxvol -g testdg -f start test2
End of Solution

6 Because this is a temporary failure, the files in the test2 volume (and file
system) are still available. Recover the mount point by performing the
following:

a Unmount the /test2 mount point.

Solution
umount /test2
End of Solution

b Perform an fsck on the file system.

Solution
fsck -t vxfs /dev/vx/rdsk/testdg/test2
End of Solution

c Mount the test2 volume to /test2.

Solution
mount -t vxfs /dev/vx/dsk/testdg/test2 /test2
End of Solution

7 Compare the two mount points using the diff command.

Solution
diff /test1 /test2
If the files are identical, diff produces no output for them; only differences
are displayed. You should, however, see a line listing the common subdirectories:
Common subdirectories: /test1/lost+found and /test2/lost+found
End of Solution

Note: There is a potential for file system corruption in the test2 volume since
it has no redundancy.

8 Unmount the file systems and delete the test1 and test2 volumes.

Solution
umount /test1
umount /test2
vxassist -g testdg remove volume test1
vxassist -g testdg remove volume test2
End of Solution

Exercise 6: Optional lab: Recovering from permanent disk failure -
Layered volume

sym1

Note: Check with your instructor to see if you have more time to complete the
optional lab exercises. You do not need to perform the optional lab
exercises unless you have extra time. The optional exercises do not have
any impact on further labs.

In this optional lab exercise, a permanent disk failure is simulated. Your goal is to
replace the failed drive and recover the volumes as needed. The lab script
disk_failures.pl sets up the test volume configuration and simulates a disk
failure. You must recover the failure and validate the volumes.

Note: The lab scripts are located at the /student/labs/sf/sf61 directory.

1 From the directory that contains the lab scripts, run the script
disk_failures.pl, and select option 4, Exercise 6 - Optional Lab:
Recovering from permanent disk failure - Layered volume:
Solution
./disk_failures.pl
This script can be used to test any specific exercise
for the SF 6.1 Disk Failure Labs.

Choose the desired lab from the list below.

1. Exercise 3 - Recovering from temporary disk failure


2. Exercise 4 - Recovering from permanent disk failure
3. Exercise 5 - Optional Lab: Recovering from temporary
disk failure - Layered volume
4. Exercise 6 - Optional Lab: Recovering from permanent
disk failure - Layered volume

Which setup do you wish to run? Enter 1 - 4: 4


End of Solution

This script sets up two volumes (test1 and test2), creates a vxfs file system on
each, and then copies duplicate files to each file system. Both file systems are
then mounted.

test1 with a stripe-mirror layout
test2 with a concatenated layout

Note: You can ignore a warning message such as Disk destroy failed
if the above script displays it, and skip steps 4a and 4b for
initializing and adding the failed disk. You can still recover the data
using the vxrecover command as shown in step 4c.

2 In a second terminal window, view the failure using the vxdisk -o
alldgs list and vxprint -g testdg -htr commands. Notice that
there are two disks that have failed.

Solution
vxprint -g testdg -htr
vxdisk -o alldgs list
End of Solution

3 Attempt to view the files that were copied to mount points /test1 and
/test2. What did you see?

Solution
ls /test1
ls /test2

Because the test1 volume is layered and mirrored, the files in the /test1
mount point are still accessible even though two disks have failed. When trying
to view the files in /test2, you should see the following error:
/test2: I/O error
End of Solution

4 Replace the permanently failed drive with either a new disk at the same SCSI
location or another disk at a different SCSI location. Then, recover the
volumes.

Note: When performing recovery procedures, run vxprint and vxdisk
list often to see what is changing after issuing recovery commands.

To recover from the permanent failure:

Note: If you are unable to initialize and add the failed disk, skip steps
4a and 4b, then continue with step 4c to finish the exercise.

a In the second terminal window, initialize the drive that failed. In a real
failure scenario this drive would have been replaced with a new drive.

Solution
vxdisksetup -i accessname
End of Solution

b Attach the failed disk media name to the new drive.

Solution
vxdg -g testdg -k adddisk testdg02=accessname
End of Solution

c Recover the volumes using the vxrecover command.

Solution
vxrecover
End of Solution

d Use the vxvol command to start the nonredundant volume.

Solution
vxvol -g testdg -f start test2
End of Solution

Note: You can also use the vxdiskadm menu interface to correct the failure.
Select Replace a failed or removed disk option and select the desired
drive when prompted.

5 Because this is a permanent failure, the files in the test2 volume (and file
system) are no longer available. Recover the mount point and file system by
performing the following:

a Unmount the /test2 mount point.

Solution
umount /test2
End of Solution

b Create a new file system.

Solution
mkfs -t vxfs /dev/vx/rdsk/testdg/test2
End of Solution

c Mount the test2 volume to /test2 and list the contents. The mount point
should only contain a lost+found directory.

Solution
mount -t vxfs /dev/vx/dsk/testdg/test2 /test2
ls /test2
End of Solution

6 Unmount the file systems and delete the test1 and test2 volumes.

Solution
umount /test1
umount /test2
vxassist -g testdg remove volume test1
vxassist -g testdg remove volume test2
End of Solution

7 Destroy the testdg disk group.

Solution
vxdg destroy testdg
End of Solution

Exercise 7: Optional lab: Replacing physical drives (without hot
relocation)

sym1

Note: Check with your instructor to see if you have more time to complete the
optional lab exercises. You do not need to perform the optional lab
exercises unless you have extra time. The optional exercises do not have
any impact on further labs.

Note: If you have not already done so, destroy the testdg disk group before you
start this section.

1 Create a disk group called appdg that contains four disks (emc0_dd7 -
emc0_dd10).
Solution
vxdg init appdg appdg01=emc0_dd7 appdg02=emc0_dd8 \
appdg03=emc0_dd9 appdg04=emc0_dd10
End of Solution

2 Create a 100-MB mirrored volume called appvol in the appdg disk group, add
a VxFS file system to the volume, and mount the file system at the mount point
/app.

Solution
vxassist -g appdg make appvol 100m layout=mirror

mkfs -t vxfs /dev/vx/rdsk/appdg/appvol


mkdir /app (if necessary)
mount -t vxfs /dev/vx/dsk/appdg/appvol /app
End of Solution

3 If the vxrelocd daemon is running, stop it using ps and kill, in order to
stop hot relocation from taking place. Verify that the vxrelocd processes are
killed before you continue.

Note: If you have executed the disk_failures.pl script in the previous
lab sections, the vxrelocd daemon may already be killed.

Solution
ps -ef | grep vxrelocd
kill -9 pid (if necessary)
ps -ef | grep vxrelocd
End of Solution

4 Next, simulate disk failure by writing over the private region using the
overwritepr.pl script.

Note: The lab scripts are located in the /student/labs/sf/sf61 directory.

While using the script, substitute the appropriate disk device name for one of
the disks in use by appvol, for example enter emc0_dd7.
Solution
cd /student/labs/sf/sf61
./overwritepr.pl
Enter a device used in appvol when prompted.
End of Solution

5 When the error occurs, view the status of the disks from the command line.

Solution
vxdisk -o alldgs list
The physical device is no longer associated with the disk media name and the
disk group.
End of Solution

6 View the status of the volume from the command line.

Solution
vxprint -g appdg -htr

The plex displays a status of DISABLED NODEVICE.
End of Solution

7 Recover the disk by replacing the private and public regions on the disk. In the
command, substitute the appropriate disk device name, for example use
emc0_dd7.

Solution
vxdisksetup -i accessname

Note: This step is only necessary when you replace the failed disk with a
brand new one. If it were a temporary failure, this step would not be
necessary.

End of Solution

8 Bring the disk back under VxVM control.

Solution
vxdg -g appdg -k adddisk dm_name=accessname

where dm_name is the disk media name of the failed disk and accessname
is the enclosure-based name of the disk device used to replace the failed one.
End of Solution
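Steps 7 through 10 form one logical replace-and-reattach sequence, sketched below as a dry run that echoes the commands rather than executing them, since the vx* tools need the lab hosts; `replace_disk` and `run` are illustrative names.

```shell
#!/bin/sh
# Dry-run sketch of the replace-and-reattach sequence (steps 7-10):
# echoes the commands rather than executing them, since the vx*
# tools need the lab hosts. replace_disk and run are illustrative.
run() { echo "$@"; }

replace_disk() {
    dg=$1 dm_name=$2 accessname=$3
    run vxdisksetup -i "$accessname"                     # rewrite private/public regions
    run vxdg -g "$dg" -k adddisk "$dm_name=$accessname"  # keep the old disk media name
    run vxrecover                                        # resynchronize the stale plexes
}

replace_disk appdg appdg01 emc0_dd7
```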

9 Check the status of the disks and the volume. The disk should now be a part of
the disk group, but the volume still has a failure.
Solution
vxdisk -o alldgs list
vxprint -g appdg -htr
End of Solution

10 From the command line, recover the volume.

Solution
vxrecover
End of Solution

11 Check the status of the disks and the volume to ensure that the disk and volume
are fully recovered.

Solution
vxdisk -o alldgs list
vxprint -g appdg -htr
End of Solution

12 Unmount the /app file system and remove the appvol volume.

Solution
umount /app
vxassist -g appdg remove volume appvol
End of Solution
Exercise 8: Optional lab: Replacing physical drives (with hot
relocation)

sym1

Note: Check with your instructor to see if you have more time to complete the
optional lab exercises. You do not need to perform the optional lab
exercises unless you have extra time. The optional exercises do not have
any impact on further labs.

1 Verify that the relocation daemon (vxrelocd) is running. If not, start it as
follows:
Solution
ps -ef |grep vxrelocd
vxrelocd root & (if necessary)
ps -ef |grep vxrelocd
End of Solution

2 Create a 100-MB mirrored volume called appvol in the appdg disk group, add
a VxFS file system to the volume, and mount the file system at the mount point
/app.

Solution
vxassist -g appdg make appvol 100m layout=mirror
mkfs -t vxfs /dev/vx/rdsk/appdg/appvol
mkdir /app (if necessary)
mount -t vxfs /dev/vx/dsk/appdg/appvol /app
End of Solution

3 Next, simulate disk failure by writing over the private region using the
overwritepr.pl script.

Note: The lab scripts are located in the /student/labs/sf/sf61 directory.

While using the script, substitute the appropriate disk device name for one of
the disks in use by appvol, for example enter emc0_dd7.
Solution
cd /student/labs/sf/sf61
./overwritepr.pl
Enter a device used in appvol when prompted.
End of Solution

4 When the error occurs, view the status of the disks and volume from the
command line using the vxdisk list and vxprint commands. Allow
sufficient time for the vxrelocd daemon to relocate the failed device.

Solution
vxdisk -o alldgs list
vxprint -g appdg -htr

The physical device is no longer associated with the disk media name and the
disk group. The failed device in the volume should be relocated to a different
device within the disk group.
End of Solution

5 Recover the disk by replacing the private and public regions on the disk. In the
command, substitute the appropriate disk device name, for example use
emc0_dd7.

Solution
vxdisksetup -i accessname

Note: This step is only necessary when you replace the failed disk with a
brand new one. If it were a temporary failure, this step would not be
necessary.

End of Solution

6 Bring the disk back under VxVM control.

Solution
vxdg -g appdg -k adddisk dm_name=accessname

where dm_name is the disk media name of the failed disk and accessname
is the enclosure-based name of the disk device used to replace the failed one.
End of Solution

7 Check the status of the disks and the volume. The failed disk should now be a
part of the disk group, but the plex that used to be on the failed disk is now
relocated to another disk in the disk group.

Solution
vxdisk -o alldgs list
vxprint -g appdg -htr
End of Solution

8 Use the vxunreloc command to return the plex back to the original device.

Solution
vxunreloc -g appdg appdg01
vxprint -g appdg -htr

Note: This solution assumes that the failed and then recovered disk was
appdg01. Depending on which disk you failed in step 3, you may need
to use a different disk media name with the vxunreloc command.

End of Solution

9 Unmount the /app file system and remove the appvol volume.

Solution
umount /app
vxassist -g appdg remove volume appvol
End of Solution

Exercise 9: Optional lab: Recovering from temporary disk failure
with vxattachd daemon

sym1

Note: Check with your instructor to see if you have more time to complete the
optional lab exercises. You do not need to perform the optional lab
exercises unless you have extra time. The optional exercises do not have
any impact on further labs.

1 Enable the vxattachd daemon if it is not already running.


Solution
ps -ef | grep vxattachd
vxattachd & (if necessary)
ps -ef | grep vxattachd
End of Solution

2 Create a 100-MB mirrored volume called appvol in the appdg disk group, add
a VxFS file system to the volume, and mount the file system at the mount point
/app.

Solution
vxassist -g appdg make appvol 100m layout=mirror
mkfs -t vxfs /dev/vx/rdsk/appdg/appvol
mkdir /app (if necessary)
mount -t vxfs /dev/vx/dsk/appdg/appvol /app
End of Solution

3 Determine the first device used by the appvol volume. This device should be
emc0_dd7. Use the vxdisk list command to determine all paths to the
device.

Solution
vxprint -g appdg -htr
vxdisk list emc0_dd7
End of Solution

4 Use the vxdmpadm -f disable command to disable all paths to the
device.

Solution
vxdmpadm -f disable path=path1,path2
vxdisk list emc0_dd7
End of Solution

5 Use the dd command to write to the /app directory to produce a failure.

Solution
dd if=/dev/zero of=/app/test1 bs=1 count=10
End of Solution
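The dd invocation itself runs anywhere: ten 1-byte blocks from /dev/zero yield a file of exactly 10 bytes. The sketch below writes to a scratch file, with /tmp/test1 standing in for /app/test1, which needs the mounted VxFS volume.

```shell
#!/bin/sh
# The same dd invocation against a scratch file: ten 1-byte blocks,
# so exactly 10 bytes land in the file. /tmp/test1 stands in for
# /app/test1, which needs the mounted VxFS volume.
dd if=/dev/zero of=/tmp/test1 bs=1 count=10 2>/dev/null
wc -c < /tmp/test1    # the file is exactly 10 bytes
```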

6 Use the vxdmpadm enable command to enable all paths to the failed
device. Monitor the vxdisk list and vxprint outputs until the
vxattachd daemon senses that the device is back online and reattaches the
device and recovers the failed plexes.

Note: If the vxrelocd daemons are running, then the plex on the failed disk
will first be relocated to another disk in the disk group. Then the failed
plex and disk will be recovered.

Solution
vxdmpadm enable path=path1,path2
vxdisk -o alldgs list
vxprint -g appdg -htr
End of Solution
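Monitoring the vxdisk and vxprint outputs amounts to a polling loop. The generic sketch below shows the pattern with a file appearing as a stand-in for the plexes returning to an ENABLED ACTIVE state; `wait_for` is an illustrative helper, not a Storage Foundation command.

```shell
#!/bin/sh
# Generic polling loop of the kind used to watch vxdisk/vxprint
# until vxattachd reattaches the device. A file appearing stands in
# for the plexes coming back; wait_for is an illustrative helper.
wait_for() {                 # wait_for <max-seconds> <command...>
    tries=$1; shift
    while [ "$tries" -gt 0 ]; do
        "$@" && return 0
        tries=$((tries - 1))
        sleep 1
    done
    return 1
}

rm -f /tmp/reattached
(sleep 2; touch /tmp/reattached) &       # the "device" comes back after 2 seconds
wait_for 10 test -f /tmp/reattached && echo "device back online"
```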

7 Unmount the /app file system and remove the appvol volume.

Solution
umount /app
vxassist -g appdg remove volume appvol
End of Solution

Exercise 10: Optional lab: Exploring spare disk behavior

sym1

Note: Check with your instructor to see if you have more time to complete the
optional lab exercises. You do not need to perform the optional lab
exercises unless you have extra time. The optional exercises do not have
any impact on further labs.

1 You should have four disks (appdg01 through appdg04) in the disk group
appdg. Set all disks to have the spare flag on.
Solution
vxedit -g appdg set spare=on appdg01
vxedit -g appdg set spare=on appdg02
vxedit -g appdg set spare=on appdg03
vxedit -g appdg set spare=on appdg04
End of Solution
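The four vxedit invocations collapse into one loop over the disk media names. The sketch below echoes the commands as a dry run, because vxedit needs the lab disk group; `set_spares` and `run` are illustrative names.

```shell
#!/bin/sh
# Dry-run loop over the disk media names instead of four repeated
# vxedit calls; echoed because vxedit needs the lab disk group.
# set_spares and run are illustrative names, not SF commands.
run() { echo "$@"; }

set_spares() {
    dg=$1; shift
    for dm in "$@"; do
        run vxedit -g "$dg" set spare=on "$dm"
    done
}

set_spares appdg appdg01 appdg02 appdg03 appdg04
```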

2 Create a 100-MB mirrored volume called sparevol.

Is the volume successfully created? Why or why not?


Solution
vxassist -g appdg make sparevol 100m layout=mirror

No, the volume is not created, and you receive the error:
...Cannot allocate space for size block volume ...
The volume is not created because all disks are set as spares, and vxassist
does not find enough free space to create the volume.
End of Solution

3 Attempt to create the same volume again, but this time specify two disks to
use. Do not clear any spare flags on the disks.

Solution
vxassist -g appdg make sparevol 100m layout=mirror \
appdg03 appdg04

Notice that VxVM overrides its default and applies the two spare disks to the
volume because the two disks were specified by the administrator.
End of Solution

4 Remove the sparevol volume.

Solution
vxassist -g appdg remove volume sparevol
End of Solution

5 Verify that the relocation daemon (vxrelocd) is running. If not, start it.

Solution
ps -ef |grep vxrelocd
vxrelocd root & (if necessary)
ps -ef |grep vxrelocd
End of Solution

6 Remove the spare flags from three of the four disks.

Solution
vxedit -g appdg set spare=off appdg01
vxedit -g appdg set spare=off appdg02
vxedit -g appdg set spare=off appdg03
End of Solution

7 Create a 100-MB concatenated mirrored volume called sparevol.

Solution
vxassist -g appdg make sparevol 100m layout=mirror
End of Solution

8 Save the output of vxprint -g appdg -htr to a file.

Solution
vxprint -g appdg -htr > /tmp/savedvxprint
End of Solution

9 Display the properties of the sparevol volume. In the table, record the device
and disk media name of the disks used in this volume. You are going to
simulate disk failure on one of the disks. Decide which disk you are going to
fail.

For example, the volume sparevol uses appdg01 and appdg02:

          Device Name    Disk Media Name
Disk 1    emc0_dd7       appdg01
Disk 2    emc0_dd8       appdg02

10 Next, simulate disk failure by writing over the private region using the
overwritepr.pl script. In the standard virtual lab environment, this script
is located in the /student/labs/sf/sf61 directory.

While using the script, substitute the appropriate disk device name for one of
the disks in use by sparevol, for example enter emc0_dd7.
Solution
cd /student/labs/sf/sf61
./overwritepr.pl
Enter a device used in sparevol when prompted.
End of Solution

11 Run vxprint -g appdg -htr and compare the output to the vxprint
output that you saved earlier. What has occurred?

Note: You may need to wait a minute or two for the hot relocation to
complete.

Solution
Hot relocation has taken place. The failed disk has a status of NODEVICE.
VxVM has relocated the mirror of the failed disk onto the designated spare
disk.
End of Solution

12 Run vxdisk -o alldgs list. What do you notice?

Solution
This disk is displayed as a failed disk.
End of Solution

winclient

13 In the VOM console, view the status of the disks and the volume.

Solution
https://mgt.example.com:14161/
Navigate to the Server perspective. On the navigation tree, expand Data
Center > Uncategorized Hosts. Click the sym1.example.com host link, and
choose the Disks tab. View the status of the disks.


In VOM, the disk does not show any disk group association and is in the
Free (Uninitialized) state.
Expand sym1.example.com > Volumes and click on sparevol. Click the
Disks tab and view the devices used.
VxVM has relocated the mirror of the failed disk onto the designated spare
disk.
End of Solution

sym1
14 Recover the disk by replacing the private and public regions on the disk. In the
command, substitute the appropriate disk device name, for example use
emc0_dd7.

Solution
vxdisksetup -i accessname
End of Solution

15 Bring the disk back under VxVM control and into the disk group to replace the
failed disk media name.

Solution
vxdg -g appdg -k adddisk dm_name=accessname
End of Solution

16 Undo hot relocation for the disk.

Solution
vxunreloc -g appdg dm_name

where dm_name is the disk media name of the failed and replaced disk.
End of Solution
17 Wait until the volume is fully recovered before continuing. Check to ensure
that the disk and the volume are fully recovered.

Solution
vxdisk -o alldgs list
vxprint -g appdg -htr

Note: The vxprint command shows the subdisk with the UR tag.

End of Solution

18 Rename the unrelocated subdisk to its original name.

Solution
vxedit -g appdg rename appdg01-UR-001 appdg01-01
End of Solution

19 Remove the sparevol volume.

Solution
vxassist -g appdg remove volume sparevol
End of Solution

20 Remove the spare flag from the last disk.

Solution
vxedit -g appdg set spare=off appdg04
End of Solution
Exercise 11: Optional lab: Using the Support Web Site

sym1

Note: Check with your instructor to see if you have more time to complete the
optional lab exercises. You do not need to perform the optional lab
exercises unless you have extra time. The optional exercises do not have
any impact on further labs.

Note: If you do not have access to the Internet from the classroom systems, skip
this optional lab section.

1 Access the latest information on Veritas Storage Foundation from the
Symantec Support Web site.
Solution
Go to the Symantec Technical Support Web site at
http://support.symantec.com.
On the Support & Communities tab, under the Product Support section
select Supported Products A-Z.
From the list of Symantec products select Storage Foundation for UNIX/
Linux.
On the next window, click the Review critical product alerts link. This
shows the latest alerts about the Storage Foundation product. Go back
to the previous screen when finished.
End of Solution
2 Which Linux platform is supported for Storage Foundation 6.1?

Solution
Select the Find a product manual link. This redirects you to the SORT Web site.
In the Linux column, find the 6.1 row and then select the Product guides
link.
Look for the Release Notes documents. Download and open the PDF file
for the Symantec Storage Foundation Release Notes.

Under the System Requirements section you will see the supported Linux
operating system versions.
End of Solution

3 Where would you locate the latest patch for Veritas Storage Foundation and
High Availability?

Solution
Go to the Symantec Operations Readiness Tools (SORT) Web site at
http://sort.symantec.com/.
Select the Downloads tab.
Click the Patches link.
Select the Product, Product version and Platform.
Available patches are displayed for download.
End of Solution

End of lab
Appendix B
Using the VEA
Creating a disk group and a volume and adding a file system

Displaying disk, disk group and volume information
Removing volumes, disks, and disk groups
Performing basic administration tasks on volumes and file systems

Index
Files and Directories
/dev/vx/config 7-5
/dev/vx/dsk 3-8, 7-9
/dev/vx/rdsk 3-8, 7-9
/etc/default/fs 6-9
/etc/default/vxassist 4-9
/etc/default/vxsf 6-21
/etc/filesystems 3-13, 3-14
/etc/fs/vxfs 6-8
/etc/fstab 3-14
/etc/system 2-16
/etc/vfs 6-9
/etc/vfstab 3-14
/etc/vx/elm 2-7
/etc/vx/volboot 7-3
/opt/VRTS/bin 6-8
/opt/VRTS/install/logs 2-14
/opt/VRTS/man 2-29
/opt/VRTSvxfs/sbin 6-8
/sbin 6-8
/sbin/fs 6-8
/usr/lib/fs/vxfs 6-8

A
active path attribute 7-19
active/active disk arrays 7-17
active/passive disk array 7-17
adaptive I/O policy 7-18
adaptiveminq I/O policy 7-18
address-length pair 6-6
aixdisk 1-10
APM 7-14, 7-15
architecture of VxVM 7-3
array 1-6
  active/active 7-17
  active/passive 7-17
array policy module 7-14
array support library 7-14
ASL 7-14, 7-15
authentication broker 2-25

B
balanced I/O policy 7-18
block device file 3-12
block-based allocation 6-6
bootdg 3-8, 3-29

C
CDS 1-9
CDS disk 1-10
CDS disk layout 1-9
cfgmgr 8-12
chfs 3-14
CLI 2-28
cluster management 3-4
column 4-4
command line interface 2-28, 2-29
concatenated volume 1-14, 4-3
  creating 4-9
concatenation 1-14
  advantages 4-7
  disadvantages 4-7
configuration daemon 7-5
  controlling 7-8
configuration database 7-5, 8-6
  copies 7-6
  disk group status 7-6
  size 7-6
controller 1-4
  enabling or disabling I/O to 7-20
creating a volume 3-11
crfs 3-13
cron 6-18
cross-platform data sharing 1-9

D
daemons of VxVM 7-3
data change object 3-22, 6-27
data redundancy 1-14
DDL 7-14
defaultdg 3-8
defragmentation
  scheduling 6-18
defragmenting a file system 6-16
deporting a disk group
  and renaming 5-18
  to new host 5-18
destroying a disk group 3-29
devfsadm 8-12
device discovery
  partial 7-16
device name 7-7
device path 3-17
device tag 7-7
devicetag 3-17
directory fragmentation 6-14
dirty region logging 5-7
disabling I/O to a controller 7-20
disaster recovery Intro-8
disk
  configuring for VxVM 3-5
  displaying summary information 3-18
  failing 8-4
  naming 1-4
  recognizing by operating system 8-12
  removing 3-26
  removing from a disk group 3-26
  replacing failed in vxdiskadm 8-13
  replacing in CLI 8-13
  shredding 3-27, 3-28
  uninitializing 3-27
  unrelocating 8-23
  viewing in CLI 3-15
  viewing information about 3-15
disk access name 3-6
disk access record 1-12, 3-6
disk array 1-6
  active/active 7-17
  active/passive 7-17
disk failure 8-4
  permanent 8-7
  resolving intermittent failure 8-19
  temporary 8-7
disk failure handling 8-4
disk format 7-7
disk group
  clearing host locks 5-19
  configuration database data 7-6
  creating 3-9
  creating in vxdiskadm 3-10
  definition 1-11
  deporting 5-18, 5-20
  destroying 3-29
  destroying in CLI 3-29
  displaying deported 3-19
  displaying free space in 3-19
  displaying properties for 3-19
  forcing an import 5-19
  high availability 3-7
  importing 5-20
  importing and renaming 5-19
  importing as temporary in CLI 5-20
  purpose 1-11, 3-7
  reserved names 3-8
disk group configuration 1-11
disk group ID 3-18
disk group name 7-7
disk header
  displaying 7-7
disk header version 7-7
disk ID 3-18
disk initialization 3-5
disk layout 1-9
  changing 3-5
disk media name 1-11, 3-6, 3-9, 7-7, 8-6
  default 1-11
disk media record 8-6
disk name 3-17
disk naming 3-9
  AIX 1-5
  HP-UX 1-4
  Linux 1-5
  Solaris 1-4
disk replacement 8-11
disk spanning 1-13

disk status
  online 3-15
  online invalid 3-16
disk status flags 7-7
disks
  adding to a disk group 3-9
  displaying detailed information 3-17
  evacuating data 3-25
  renaming 5-21
  uninitialized 3-5
DMP 7-12
  benefits 7-13
  setting path attributes 7-19
  setting the I/O policy 7-18
dynamic LUN
  resizing 5-16
dynamic multipathing 3-4, 7-12
  benefits 7-13
  setting path attributes 7-19
  setting the I/O policy 7-18

E
ENABLED state 8-17
enabling I/O to a controller 7-20
encapsulation 3-5
enclosure-based naming
  benefits 3-4
error disk status 8-6
error status 3-16
evacuating a disk 3-25
exclusive OR 4-6
EXT2 6-8
EXT3 6-8
Extended File System 6-8
extent 6-6
extent fragmentation 6-14
extent-based allocation 6-6

F
file system
  adding to a volume 3-12
  adding to a volume in CLI 3-12
  consistency checking 6-13
  defragmenting 6-16
  fragmentation 6-14
  fragmentation reports 6-15
  fragmentation types 6-14
  intent log 6-12
  mounting at boot 3-14
  resizing 5-15
  resizing methods 5-13
file system free space
  identifying 6-11
file system type 6-11
fragmentation 6-14
  directory 6-14
  extent 6-14
  free space 6-17
fragmentation index 6-15
free space pool 3-6
fsadm 5-15, 6-14, 6-15
fsck 6-12, 6-13

G
group name 3-18

H
HFS 6-8
Hierarchical File System 6-8
high availability 5-17
host locks
  clearing 5-19
hostid 3-17, 7-7
hot relocation
  definition 8-20
  failure detection 8-21
  notification 8-21
  process 8-20
  recovery 8-21
  selecting space 8-21
  unrelocating a disk 8-23
hpdisk 1-10

I
I/O

  enabling and disabling to a controller 7-20
I/O failure
  identifying 8-4
I/O policy
  setting for DMP 7-18
importing a disk group
  and renaming 5-19
  forcing 5-19
inode 6-6
insf 8-12
installation menu 2-11
installer 2-11, 2-12
installfs 2-12
Installing SF
  Web installer 2-18
installing SF 2-11
  assessment service 2-8
  installation logs 2-14
  verifying on AIX 2-15
  verifying on HP-UX 2-15
  verifying on Linux 2-15
  verifying on Solaris 2-14
  verifying package installation 2-14
installp 2-12
installsf 2-12
installvm 2-12
Intelligent Storage Provisioning 3-22
intent logging 6-12
interfaces 2-28
  command line interface 2-28
  vxdiskadm 2-28
intermittent disk failure
  resolving 8-19
iopolicy 7-18
ioscan 8-12

J
JFS 6-8
JFS2 6-8
Journaled File System 6-8
journaling 6-12

K
kernel issues
  and VxFS 2-16
kernel log 7-9
keyless licensing 2-6, 2-19

L
layered volume 1-14
licensing 2-6
  generating a license key 2-7
  Web site 2-7
listing installed packages 2-15
load balancing 4-7
location code 1-5
logging 5-7
  for mirrored volumes 5-7
logical unit number 1-4
logtype 4-11
lsdev 8-12
lsfs 3-14
lslpp 2-15
LUN 1-4
  and resizing VxVM structures 5-16

M
man 2-29
manual pages 2-29
minimumq I/O policy 7-18
mirror
  adding in CLI 5-6
  removing 5-3
mirrored volume 1-14, 4-5
  creating 4-11
mirroring 1-14
  advantages 4-8
  disadvantages 4-8
mirrors
  adding 4-10
mkdir 3-12
mkfs 3-12
mkfs options 6-10

mount 3-12
moving a disk
  vxdiskadm 5-13, 5-16
multipathed disk array 1-6
multiported disk array 7-17

N
naming disks
  defaults 3-9
ncol 4-9
newfs 3-12
nlog 4-11
nmirror 4-10
nodg 3-8
nomanual path attribute 7-19
nopreferred path attribute 7-19
nostripe 4-9

O
Object Data Manager 1-5
online disk status 8-6
online invalid status 3-16
online status 3-15
operating system versions 2-3

P
packages
  listing 2-15
parity 1-14, 4-6
partial device discovery 7-16
partition 1-4
partition numbers 7-7
permanent disk failure 8-7
physical disk
  naming 1-4
pkgadd 2-12
pkginfo 2-15
plex 1-12, 4-5
  definition 1-12
  naming 1-12
plex name
  default 1-12
preferred path attribute 7-19
primary path attribute 7-19
priority I/O policy 7-18
private region 1-9, 3-5, 8-4
private region offset 7-7
private region size 1-9
  AIX 1-9
  HP-UX 1-9
  Linux 1-9
  Solaris 1-9
prtvtoc 8-12
public region 1-9, 1-11, 8-4
pubpaths 7-7

R
RAID 1-13
RAID array
  benefits with VxVM Intro-8
RAID levels 1-13
RAID-5 column 4-6
RAID-5 volume 1-14, 4-6
raw device file 3-12
read policy 5-10
  changing in CLI 5-11
  preferred plex 5-10
  round robin 5-10
  selected plex 5-10
  siteread 5-10
recovering a volume 8-16
redundancy 1-14
removing a disk 3-26
removing a volume 3-24
renaming a disk 5-21
renaming a disk group 5-22
replacing a disk 8-11
  CLI 8-13
replacing a failed disk
  vxdiskadm 8-13
replicated volume group 3-22
resilience 1-14
resilient volume 1-14

resizing a dynamic LUN 5-16
resizing a file system 5-15
resizing a volume 5-12
  with vxassist 5-15
  with vxresize 5-14
resizing a volume and file system 5-13
resizing a volume with a file system 5-12
rlink 3-22
round-robin I/O policy 7-18
rpm 2-12, 2-15

S
SAN management 3-4
secondary path attribute 7-19
selected plex read policy 5-10
singleactive I/O policy 7-18
slice 1-4
sliced disk 1-10
SmartMove 6-21
snap object 3-22
spare disks
  managing 8-22
STALE state 8-17
standby path attribute 7-19
storage
  allocating for volumes 4-12
storage attributes
  specifying for volumes 4-12
storage cache 3-22
stripe unit 4-4, 4-6
striped volume 1-14, 4-4
stripeunit 4-10
striping 1-14
  advantages 4-7
  disadvantages 4-8
subdisk 1-12
  definition 1-12
subdisk name
  default 1-12
support for SF 2-20
swinstall 2-12
swlist 2-15
Symantec Operations Readiness Tools 2-8
  Patch Services 2-23

T
target 1-4
technical support for SF 2-20
temporary disk failure 8-7
thin provisioning 6-19
  displaying thin LUN information 6-20
  fssmartmovethreshold 6-22
  migration to 6-23
  overview 6-19
  parameters 6-22
  SmartMove 6-21
  thin reclamation 6-24, 6-25, 6-26
  usefssmartmove 6-21
true mirror 4-5
true mirroring 1-14
type 3-17

U
UFS 6-8
uninitialized disks 3-5
UNIX File System 6-8
unrelocating a disk 8-23
user interfaces 2-28

V
VEA 2-28
  installing the server and client 2-32
  multiple views of objects 2-31
  Preferences window 2-31
  remote administration 2-31
  security 2-31
  setting preferences 2-31
  starting 2-32
Veritas Operations Manager 2-24
Veritas Volume Replicator 3-22
virtual storage objects 1-8
vol_subdisk_num 1-12
volboot 3-8, 7-10
  viewing contents 7-10

volume 1-8, 3-6
  accessing 1-8
  adding a file system in CLI 3-12
  adding a mirror 5-3
  creating 3-11
  creating in CLI 3-11
  creating mirrored and logged 4-11
  definition 1-8, 1-12
  disk requirements 3-11
  estimating size 4-10
  expanding the size 5-12
  recovering 8-16
  reducing the size 5-12
  removing 3-24
  removing a mirror 5-3
  resizing 5-12
  resizing methods 5-13
  resizing with vxassist 5-15
  resizing with vxresize 5-14
  starting manually 5-20
volume layout 1-13
  concatenated 1-14
  displaying in CLI 3-21
  layered 1-14
  mirrored 1-14
  RAID-5 1-14
  selecting 4-3
  striped 1-14
Volume Manager control 1-9
Volume Manager disk 1-11
  naming 1-11
Volume Manager Support Operations 2-28, 2-30
volume read policy 5-10
  changing in CLI 5-11
volume recovery 8-11
volume states
  after attaching disk media 8-15
  after recovering volumes 8-17
  after running vxreattach 8-10
  after temporary disk failure 8-10
volumes
  allocating storage for 4-12
VOM
  architecture 2-25
  support for virtual environments 2-26
vrtsadm 2-32
VRTSvxfs 2-16
vxassist 3-11, 5-13, 5-15
vxassist growby 5-15
vxassist growto 5-15
vxassist shrinkby 5-15
vxassist shrinkto 5-15
vxattachd 8-14
vxcached 7-4
vxconfig 7-5
vxconfigbackupd 7-4
vxconfigd 7-3, 7-5, 7-8
vxconfigd modes 7-9
vxdclid 7-4
vxdctl enable 3-6, 7-3, 8-6, 8-12
vxdctl list 7-10
vxdefault 6-22
vxdg destroy 3-29
vxdg list 7-6
vxdisk list 3-10, 3-15, 3-17, 3-18, 8-6, 8-12
vxdisk resize 5-16
vxdisk scandisks 7-16
vxdiskadm 2-28, 2-30, 3-5
  creating a disk group 3-10
  replacing a failed disk 8-13
vxdiskunsetup 3-27
vxdmpadm getattr 7-18
vxdmpadm setattr 7-18
vxesd 7-4
VxFS 6-8
  allocation 6-6
  command locations 6-8
  command syntax 6-9
  defragmenting 6-16
  features 6-3, 6-4
  file system switchout mechanisms 6-9
  file system type 6-11
  fragmentation types 6-14
  identifying free space 6-11
  intent log 6-12
  maintaining consistency 6-13
  resizing 5-15
  using by default 6-9
vxinfo 3-23
vxinstall 2-12
vxiod 7-3

CONFIDENTIAL - NOT FOR DISTRIBUTION


433 Symantec Storage Foundation 6.x for UNIX: Administration Fundamentals Copyright 2014 Symantec Corporation. All rights reserved.
Index-23
vxlist 3-23
vxnotify 7-4
vxpal 7-4
vxprint 3-21
vxreattach 8-14
vxrecover 8-16
vxrelocd 7-3, 8-21
vxresize 5-13
vxsited 7-4
vxsvc 7-3
vxunreloc 8-23
VxVM
  architecture 7-3
  configuration database 7-5
  daemons 7-3
  user interfaces 2-28
VxVM and RAID arrays Intro-8
VxVM configuration daemon 3-6
vxvol rdpol prefer 5-11
vxvol rdpol round 5-11
vxvol rdpol select 5-11
vxvol stopall 5-20

X
XOR 1-14, 4-6
xprtlwid 2-18
Symantec IT certification holders are highly valued IT Professionals. Customers,
colleagues, and employers are confident that Symantec IT certification holders have the
knowledge and expertise to effectively install, configure, deploy, administer, or provide
consulting services on Symantec products. Protecting this value benefits you, as well
as Symantec.

• You invest a considerable amount of time, expense, and expertise to prepare for
  and complete a Symantec technical exam, which is undermined by those who
  engage in exam misconduct.
• Exam misconduct enables less qualified individuals to compete for the jobs and
  benefits you deserve.
• Exam misconduct erodes confidence in both Symantec programs and your skills
  as a certified IT professional, and can lead to security and liability risks for your
  customers and/or employer.
• To confidentially report suspected cases of misconduct, please contact
  global_exams@symantec.com.
Symantec is committed to maintaining the security and integrity of its brand and
certification and accreditation exams. This ensures that our products are installed and
maintained by qualified IT Professionals and provides end users with the confidence
that their system software is operating at maximum efficiency. Symantec actively
investigates and takes corrective action against individuals and organizations who
attempt to compromise the security of our exams or engage in any form of exam
misconduct. To learn more about Symantec Testing Policies and Exam Security, visit
http://www.symantec.com/business/training/certification/path.jsp?pathID=policies

To learn more about the Symantec Certification Program and exams,
visit http://go.symantec.com/certification