COPYRIGHT
© 2008 NetApp. All rights reserved. Printed in the U.S.A. Specifications subject to change
without notice.
No part of this book covered by copyright may be reproduced in any form or by any means—graphic,
electronic, or mechanical, including photocopying, recording, taping, or storage in an electronic retrieval
system—without prior written permission of the copyright owner.
NetApp reserves the right to change any products described herein at any time and without notice.
NetApp assumes no responsibility or liability arising from the use of products or materials described
herein, except as expressly agreed to in writing by NetApp. The use or purchase of this product or
materials does not convey a license under any patent rights, trademark rights, or any other intellectual
property rights of NetApp.
The product described in this manual may be protected by one or more U.S. patents, foreign patents,
or pending applications.
TRADEMARK INFORMATION
NetApp, the NetApp logo, and Go further, faster, FAServer, NearStore, NetCache, WAFL, DataFabric,
FilerView, SecureShare, SnapManager, SnapMirror, SnapRestore, SnapVault, Spinnaker Networks,
the Spinnaker Networks logo, SpinAccess, SpinCluster, SpinFS, SpinHA, SpinMove, SpinServer, and
SpinStor are registered trademarks of Network Appliance, Inc. in the United States and other countries.
Network Appliance, Data ONTAP, ApplianceWatch, BareMetal, Center-to-Edge, ContentDirector, gFiler,
MultiStore, SecureAdmin, Smart SAN, SnapCache, SnapDrive, SnapMover, Snapshot, vFiler, Web
Filer, SpinAV, SpinManager, SpinMirror, and SpinShot are trademarks of NetApp, Inc. in the United
States and/or other countries.
Apple is a registered trademark and QuickTime is a trademark of Apple Computer, Inc. in the United
States and/or other countries.
Microsoft is a registered trademark and Windows Media is a trademark of Microsoft Corporation in the
United States and/or other countries.
RealAudio, RealNetworks, RealPlayer, RealSystem, RealText, and RealVideo are registered
trademarks and RealMedia, RealProxy, and SureStream are trademarks of RealNetworks, Inc. in the
United States and/or other countries.
All other brands or products are trademarks or registered trademarks of their respective holders and
should be treated as such.
NetApp is a licensee of the CompactFlash and CF Logo trademarks.
© 2008 NetApp. This material is intended for training use only. Not authorized for reproduction purposes.
NetApp University - Do Not Distribute
TABLE OF CONTENTS
WELCOME
MODULE 1: OVERVIEW
MODULE 2: INSTALLATION AND CONFIGURATION
MODULE 3: BASIC ADMINISTRATION
MODULE 4: ADMINISTRATION SECURITY
MODULE 5: NETWORKING
MODULE 6: PHYSICAL STORAGE MANAGEMENT
MODULE 7: LOGICAL STORAGE MANAGEMENT
MODULE 8: CIFS
MODULE 9: NFS
MODULE 10: QTREES AND SECURITY STYLES
MODULE 11: SAN
MODULE 12: SNAPSHOT COPIES
MODULE 13: WRITE AND READ REQUEST PROCESSING
MODULE 14: SYSTEM DATA COLLECTION
MODULE 15: FLEXSHARE
MODULE 16: NDMP FUNDAMENTALS
MODULE 17: ACTIVE-ACTIVE CONTROLLER CONFIGURATION
MODULE 18: FINAL WORDS
Welcome
Logistics
Introductions
Schedule (start time, breaks, lunch, close)
Telephones and messages
Food and drinks
Restrooms
LOGISTICS
Safety
Alarm signal
Evacuation route
Assembly area
Electrical safety
SAFETY
Course Objectives
COURSE OBJECTIVES
Course Objectives (Cont.)
Course Agenda
Day 1
– Data ONTAP Fundamentals
– Installation and Configuration
– Basic Administration
Day 2
– Administration Security
– Networking
– Physical Storage Management
COURSE AGENDA
Course Agenda (Cont.)
Day 3
– Logical Storage Management
– Common Internet File System
– Network File System
– Qtrees and Security Styles
Day 4
– Storage Area Networks
– Snapshot Copies
– Write and Read Request Processing
– System Data Collection
Course Agenda (Cont.)
Day 5
– FlexShare™
– Active-Active Controller Configuration
– NDMP Fundamentals
– Final Words
Information Sources
NetApp University
– http://www.netapp.com/us/services/university/
INFORMATION SOURCES
Typographic Conventions
Convention: Italic font
Used for:
– Book titles
– Words or characters that require special attention
– Variable names or placeholders for information you must supply, for example:
  Enter the following command:
  ifstat [-z] {-a interface}
  Here, interface is the name of the interface for which you want to view statistics.
TYPOGRAPHIC CONVENTIONS
Overview
Module 1
Data ONTAP® 7.3 Fundamentals
OVERVIEW
Module Objectives
MODULE OBJECTIVES
Products
PRODUCTS
Storage System
STORAGE SYSTEM
STORAGE-SYSTEM ARCHITECTURES
The primary function of a storage system is to store data. NetApp storage systems have
integrated disks that store data in a variety of network and storage environments, including
SAN and NAS.
The two main protocols used in SAN are Fibre Channel Protocol (FCP) and Internet Small
Computer System Interface (iSCSI). The two main protocols used in NAS are Network File
System (NFS) and Common Internet File System (CIFS). NetApp storage systems use the
Data ONTAP operating system to ensure simple operation, speed, and reliability.
Unified Storage
[Figure: unified storage — a single NetApp FAS system serves SAN block data over FC and
iSCSI, and NAS file data over NFS and CIFS on the corporate Ethernet LAN]
UNIFIED STORAGE
SAN
• Is a block-based storage system
• Makes data available over the network
• Uses FC and iSCSI protocols
NAS
• Is a file-based storage system
• Makes data available over the network
• Uses NFS and CIFS protocols
The NetApp SAN and unified storage architecture provides an outstanding level of
investment protection and flexibility. The fabric-attached storage (FAS) system at the bottom
of the figure implies one "box." However, the actual storage environment includes small and
large FAS systems, as well as NearStore® systems.
NetApp Data ONTAP 7G Products
[Figure: Data ONTAP 7G product positioning — data center storage, remote office/department
storage, near-line storage, and storage virtualization]
NETAPP HARDWARE
NETAPP FAS
NetApp FAS systems comprise some of the largest families of compatible storage systems in
the storage industry today. NetApp FAS systems integrate easily into complex enterprise
environments and provide shared access to UNIX, Microsoft® Windows®, Linux®, and
Web data while simultaneously supporting FC SAN, IP SAN, iSCSI, and NAS. FAS systems
are designed to consolidate and serve data for e-mail, Enterprise Content Management
(ECM), technical applications, files, home directories, and Web content.
NEARSTORE
NearStore is a near-line storage system that uses disk drives organized in disk shelves.
NearStore systems store data that requires faster access than tape storage but sees less
activity than primary data. The product requires a software license to allow ATA and FC
disks on the same system. NearStore combines the Data ONTAP operating system with
inexpensive Serial Advanced Technology Attachment (SATA) disks to provide disk-storage
performance and flexibility at near-tape storage costs.
V-SERIES SYSTEMS
V-Series systems are virtual storage systems for a multiprotocol, multivendor storage
environment. V-Series enables simultaneous NAS and SAN access to existing FC SAN
infrastructures.
NETAPP SOFTWARE
The NetApp manageability software family consists of four suites that provide software tools
for effective data management.
The NetApp Application Suite delivers increased productivity and flexibility across the
entire enterprise. The various NetApp SnapManager® software products enable you to
improve data availability, reduce unexpected data loss, and increase storage management
flexibility by leveraging the power of integrated NetApp storage systems.
The NetApp Server Suite includes the SnapDrive® and ApplianceWatch™ product families.
SnapDrive provides a server-aware alternative to maintaining manual host connections to
underlying NetApp storage systems. ApplianceWatch products integrate with third-party
system management tools from HP, IBM, and Microsoft. ApplianceWatch allows
administrators to view, monitor, and manage NetApp storage systems from within their
respective system-management environments.
The NetApp Data Suite, consisting of Protection Manager and VFM® (Virtual File Manager,
Enterprise Edition and Migration Edition), provides effective tools for abstracting storage and
enables administrators and users to think in terms of data and data management rather than
the underlying storage.
With the NetApp Storage Suite of products, including Operations Manager, File Storage
Resource Manager, SAN Manager, and Command Central Storage, you will be able to do
more with less. Instead of managing separate physical storage systems, you can view and
manage multiple devices from central consoles.
NETAPP STORAGE SUITE:
FILE STORAGE RESOURCE MANAGER
File Storage Resource Manager (FSRM) enables you to better understand the types of files in
your storage environment. Data is classified in terms of file size, file age, modification
history, access history, file type usage, and file owner usage. This data-classification
information gives you a clear picture of data within your organization. It allows you to put
policies in place that eliminate stale data, unused data, and data stored by personnel who are
no longer employed by your company.
FSRM also provides quota management. Administrators can set soft thresholds and hard
quotas. The system then issues policy violation notices. These FSRM features all contribute
to better storage utilization.
OPERATIONS MANAGER
No other single management application provides the same level of NetApp monitoring and
management for NetApp FAS systems and NearStore near-line storage systems. The detailed
performance and health monitoring tools available through Operations Manager give
administrators proactive information to help resolve potential problems before they occur,
and to troubleshoot problems faster when they do occur.
PROTECTION MANAGER
PROVISIONING MANAGER
Provisioning Manager provides automated, policy-based provisioning for NetApp NAS and
SAN environments. The software automates manual and repetitive provisioning processes,
increasing the productivity of administrators and improving the availability of data by
providing policy compliance for provisioned storage.
FAS2000 Series Overview
FAS2020 FAS2050
High-performance Serial-Attached SCSI (SAS) infrastructure
Each controller has dual GigE ports and dual 4Gb FC ports
You can use the FAS2000 series to manage your dispersed, expanding, and complex data
requirements, and leverage the common operating system, management tools, backup and
restore functions, and disaster-recovery solutions to support your particular business needs.
The FAS2000 series also helps reduce system costs through high data availability and less
downtime.
FAS3000 Series Overview
The FAS3000 Series features:
A modular system with integrated I/O
– Eight 4Gb FC and eight GbE onboard
– Meets the needs of most
configurations
– Occupies six rack units (RU)
Superior scalability
– Up to 504 TB storage capacity
– Up to 504 FC or SATA spindles
– Six PCI slots
– Up to 32 FC ports
– Up to 32 GbE ports
Built-in, enterprise-class manageability
with enhanced capabilities through RLM
Use the FAS3000 to manage up to twice as much data. The FAS3000 can use the Data
ONTAP FlexShare™ software to dynamically adjust workload priorities so that important
applications always get a fast response. Other key benefits include:
• A single platform to satisfy multiple requirements for SAN, NAS, primary storage, and secondary
storage
• Consistent, stable performance when creating Snapshot copies
• Easy upgrades and expansion within the FAS3000 family, and to the high-end FAS6000 family
Storage capacity for the FAS3000 is dependent on the number of spindles and the per-disk
storage density. The 504 TB maximum capacity assumes 504 SATA drives (with 1 TB each).
Enhanced manageability is achieved through the addition of the optional Remote LAN
Module (RLM). Some of the key features offered by RLM include failure alert through e-mail
notifications, real-time monitor and event logging, console redirection, and remote power-
cycling.
FAS6000 Series Overview
The FAS6000 Series features:
A modular system with integrated I/O
– Sixteen 4Gb FC* and twelve GbE
onboard***
– Occupies twelve RU***
Superior scalability
– Up to 1176 TB storage capacity**
– Up to 1176 FC or SATA spindles**
– Up to 56 FC ports***
– Up to 52 GbE ports***
Built-in, enterprise-class manageability
with enhanced capabilities through RLM
Data can be stored either in file or block format. Supported protocols include FCP, NFS,
CIFS, HTTP, and iSCSI.
Shelf Compatibility
[Table: shelf compatibility matrix — checkmarks (extracted here as "9") indicate supported
disk-shelf families: the FAS6000, FAS3000, and FAS2000 platforms each support four shelf
families, while the R200 and FAS2XX each support two. The shelf-family column headings
were lost in extraction.]
SHELF COMPATIBILITY
DISK SHELVES
NetApp storage systems use disk shelves as an alternative to sequential-access tape drives.
Each storage system has several FC ports at the rear of the system where you can attach disk
shelves. The number of ports varies by storage system.
NAS Versus SAN
NAS Versus SAN Topology
[Figure: NAS versus SAN topology — the same NetApp FAS system serves SAN block data
over FC and iSCSI, and NAS file data over NFS and CIFS on the corporate Ethernet LAN]
SAN, which is block-based, lowers TCO and increases the performance and availability of
corporate storage resources. Because of these benefits, SANs that are based on FC technology
have become standard in many corporate data centers.
NetApp IP SAN (iSCSI) encapsulates SCSI block-storage commands into Ethernet packets
for transport over IP networks, enabling companies to leverage standard, familiar Ethernet
networking infrastructures to create affordable SANs.
Data ONTAP Supported Protocols
NAS (over the LAN, TCP/IP):
– CIFS
– NFS
– FTP
– HTTP
– WebDAV
SAN:
– FCP (over FC)
– iSCSI
CIFS: The CIFS protocol supports Windows 2000, Windows for Workgroups, and Windows
NT 4.0.
HTTP: HTTP enables Web browsers to display files that are stored on the storage
appliance.
FTP: FTP enables UNIX clients to remotely transfer files to and from the storage
appliance.
FCP or iSCSI: The FCP and iSCSI protocols enable a storage device to communicate with one
or more hosts running operating systems such as Solaris™ or Windows in a SAN
environment. You can also configure logical units (LUNs) for block access, for file access,
or for both (multiprotocol access).
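As a sketch of how a LUN is configured for block access, the following commands create a LUN inside a volume and map it to an initiator group. The path, size, ostype, and igroup name are placeholders for illustration; confirm the exact syntax in the Data ONTAP manual pages for your release:

system> lun create -s 10g -t windows /vol/vol1/lun0
system> lun map /vol/vol1/lun0 win_hosts

Hosts whose initiators belong to the win_hosts igroup would then see the LUN as a local disk.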
Architecture
ARCHITECTURE
Data ONTAP Architecture
[Figure: Data ONTAP architecture — clients connect over the network to the storage system,
where system memory and NVRAM sit in front of the physical disks]
NetApp on the Web
The NOW knowledge database provides a source for support, information, and
documentation. NOW is a NetApp customer- and employee-driven knowledge base
accessible at either:
http://www.netapp.com
or
http://now.netapp.com
After you have logged into the NOW database, the Service and Support page is displayed.
From this page, you can access the following administrative support:
• Technical assistance
  – Submit or check the status of a technical assistance case
  – Submit or check the status of a Return Materials Authorization (RMA)
  – Find bug reports
• Documentation
• Downloads
• Your product information
• Troubleshooting solutions
Simulate ONTAP Benefits
NOTE: Support for the Data ONTAP Simulator is on a best-effort volunteer basis; therefore,
support is not guaranteed. Please do not call the Global Support Centers for Simulator support
issues.
SIMULATOR REQUIREMENTS
• Hardware
Intel® processor-based PC
Network card
Recommended 256 MB main memory
Minimum 250 MB free hard drive space
• Linux installed, running, and networked
• Tested on Red Hat® Linux 7.1 through 9.0, and SUSE 8.1 and 8.2
• Must log in as root for installation
Summary
MODULE SUMMARY
Exercise
Module 1: Data ONTAP
Fundamentals
Estimated Time: 45 minutes
EXERCISE
Please refer to your Exercise Guide for more instruction.
Installation and
Configuration
Module 2
Data ONTAP® 7.3 Fundamentals
Module Objectives
By the end of this module, you should be able to:
Access the NOW site for the following documents:
– NetApp Configuration Guide
– Data ONTAP System Administration Guide
Locate hardware components using Parts Finder
Collect data for installation using a configuration
worksheet
Interpret the network interface configuration
Set up console access for a storage system
Configure a storage system using the setup
command
Describe how to perform Data ONTAP software
upgrades and reboots
MODULE OBJECTIVES
Documentation
DOCUMENTATION
NOW Support
NOW SUPPORT
PRODUCT DOCUMENTATION
Product documentation and additional information about your new storage system is available
online on the NOW site at http://now.netapp.com. From the NOW site, you can view a list of
all licenses purchased for your storage appliance.
ADDITIONAL INFORMATION
For the latest information about your version of Data ONTAP, see the Data ONTAP Release
Notes and Read Me First documents.
SOFTWARE
The system software is preinstalled, so you do not need CDs or system boot diskettes to
install or configure a new storage system. If for any reason you need to reinstall the
system software, you can obtain it from the NOW site.
Parts Finder
PARTS FINDER
The Parts Finder Web site allows you to search the parts database for spare parts and view
details about the part. There are currently three methods to search for a part:
• By part number
• By the description provided in the sysconfig output
• By category
System Administration Guide
The System Administration Guide describes how to configure, operate, and manage NetApp
storage systems running Data ONTAP 7.3 software. This guide provides information about
all storage system models.
Software Setup Guide
The Software Setup Guide describes how to set up and configure storage systems running
Data ONTAP 7.3 software. This guide provides information about all supported storage
system models.
System Configuration Guide
The System Configuration Guide Web site provides configuration information for all NetApp
storage systems running multiple versions of Data ONTAP. It also provides a table of
component compatibilities for both normal environments and high-availability configurations.
Console Access
CONSOLE ACCESS
Console Access
[Figure: DB9-to-RJ45 cable connecting a terminal to the storage system console port]
CONSOLE ACCESS
For console access, you can connect a terminal (or terminal server) to the storage system
console port through a standard RS-232 connection, such as a DB9-to-DB9 null-modem serial
cable, with the following settings for the serial communication port:
• Bits per second: 9600
• Data bits: 8
• Parity: None
• Stop bits: 1
• Flow control: None
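On a UNIX or Linux administration host, those settings correspond to a terminal-emulator invocation such as the following. The device path is a placeholder that depends on which serial port or USB adapter you use:

  screen /dev/ttyS0 9600,cs8,-parenb,-cstopb

Here 9600 sets the speed, cs8 selects 8 data bits, -parenb disables parity, and -cstopb selects one stop bit, matching the settings listed above.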
Console Access from a Terminal Server
[Figure: several storage system consoles connected via RS-232 to a terminal server, which
administrators reach over the TCP/IP network]
To avoid the sometimes difficult task of connecting several consoles in the lab, you can
instead connect a terminal server. A terminal server is a specialized computer with several
console ports that allows administrators to access a storage system console through the
network, which is important when rebooting a system or debugging hardware.
You can access the storage system from the console by using the IP address of the terminal
server with the port number for the storage system.
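For example, if a terminal server at the illustrative address 10.10.10.50 maps its TCP port 7001 to the storage system's console line, you could open the console with a Telnet client. Both the address and the port mapping here are hypothetical; your terminal server's documentation defines the actual port-to-line mapping:

  telnet 10.10.10.50 7001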
Storage System Administration Interfaces
[Figure: storage system administration interfaces — each of the following reaches the Data
ONTAP CLI or a management GUI]
A. Console (direct RS-232 connection)
B. Telnet
C. Terminal server (RS-232 to the console port, reached over the network)
D. SSH (Secure Shell)
E. rsh (non-interactive)
F. FilerView (Web-based GUI)
G. Operations Manager (Web-based GUI, over HTTP)
Data ONTAP Command Format
Example:
system> aggr create -t raid_dp <aggrname>
Command Levels
Level            Prompt   Used for
Administrative   >        Administration
Advanced         *>       Special tasks:
                          – Troubleshooting
                          – System tuning
                          – Testing
                          – Displaying statistics
COMMAND LEVELS
Data ONTAP provides two separate sets of commands based on privilege level, either
administrative or advanced. You can set the privilege level using the priv command.
The administrative level provides access to commands that are sufficient for managing your
storage system. The advanced level provides access to these same administrative commands
as well as additional troubleshooting commands.
Advanced level commands should only be used with the guidance of NetApp technical
support. When using Advanced level commands, the following warning is displayed:
“Warning: These advanced commands are potentially dangerous; use them only when
directed to do so by Network Appliance personnel.”
Changing Command Levels
priv set Command
Use the priv set command to change
command levels:
priv set level
Change to the advanced command level:
system> priv set advanced
system*>
Change to the administrative command level:
system> priv set admin
system>
The initial privilege level for the console and for each rsh session is administrative. Use the
priv set command to change the privilege level. When you change to the advanced level,
the prompt includes an asterisk.
Viewing Manual (man) Pages From the CLI
When using the command line interface (CLI), you can get CLI syntax help by entering the
name of the command followed by help or the question mark (?).
For a list of all commands available at the current privilege level (administrative or
advanced), at the CLI prompt, type the question mark (?).
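For example, using the vol command (the annotations describe what each form returns; actual output is abbreviated here):

system> vol help      (syntax summary for the vol command)
system> ?             (all commands available at the current privilege level)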
Command References on
Data ONTAP FilerView Menu
Reboot and Installation
Halting a Storage System
Use the halt command to perform an orderly shutdown that flushes file system updates to
disk and clears the NVRAM.
IMPORTANT
Always warn CIFS users in advance when you halt the storage system. This gives users a
chance to save changes and avoid losing data when the CIFS service is interrupted.
USING THE REBOOT COMMAND
Using the reboot command is the same as halting and then booting the storage system.
During a reboot, the contents of the storage system NVRAM are flushed to disk and the
storage system sends a warning message to CIFS clients.
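For example (the -t option delays the shutdown by the given number of minutes; confirm the exact flags in the manual pages for your Data ONTAP release):

system> halt -t 5     (orderly shutdown in five minutes, giving CIFS users warning)
system> reboot        (flush NVRAM to disk, then restart)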
Boot Sequence
BOOT SEQUENCE
Boot Sequence
BOOT SEQUENCE
The special boot menu or maintenance menu (1-5 menu) is displayed either when a boot
variable is set or Ctrl-C is pressed during boot.
You can also enter one of the following boot options at the boot environment prompt:
• CFE> for FAS200, FAS3020, and FAS3050 systems
• LOADER> for FAS3040, FAS3070, and FAS6000 series systems
BOOT_ONTAP
The boot_ontap option boots the current Data ONTAP software release stored on the
CompactFlash card. By default, the storage system automatically boots this release if you do
not select another option from the basic menu.
BOOT_PRIMARY
The boot_primary option boots the Data ONTAP release stored on the CompactFlash card
as the primary kernel. This option overrides the firmware AUTOBOOT_FROM environment
variable if it is set to a value other than PRIMARY. By default, the boot_ontap and
boot_primary commands load the same kernel.
BOOT_BACKUP
The boot_backup option boots the backup Data ONTAP release from the CompactFlash
card. The backup release is created during the first software upgrade to preserve the kernel
that was preinstalled on the storage system. It provides a "known good" release from which
you can boot the storage system if it fails to automatically boot the primary image.
NETBOOT
The netboot option boots from a Data ONTAP version stored on a remote HTTP or TFTP
(Trivial File Transfer Protocol) server. The netboot option enables you to:
• Boot an alternative kernel if the CompactFlash card becomes damaged
• Upgrade the boot kernel for several devices from a single server
To enable netboot, you must configure networking for the storage system (using Dynamic Host Configuration Protocol or a static IP address) and place the boot image on a configured server.
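As a rough sketch, a netboot from the boot environment prompt might look like the following console session. The prompt, interface name, addresses, and image URL are illustrative assumptions, not values from this course:

```
LOADER> ifconfig e0a -addr=10.10.10.21 -mask=255.255.255.0 -gw=10.10.10.1
LOADER> netboot http://10.10.10.20/netboot/kernel
```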
Accessing the Flash Boot Commands
Selection (1-5)?
The menu choices available from the special boot menu allow you to continue booting the
storage appliance under normal or special conditions.
Menu selections 2 and 5 are used for troubleshooting. Selection 4 (or 4a) is usually performed at the beginning of a system installation. To choose a selection, enter the option number at the command line.
SPECIAL BOOT MENU OPTIONS
(1) Normal boot. This option allows the system to boot as normal.
(2) Boot without /etc/rc. This option performs a normal boot but bypasses execution of the /etc/rc file. Following this boot, the system runs normally, but without the configuration normally provided to it by the /etc/rc file and system daemons. To make the system fully operational, you can enter the commands in the /etc/rc file manually. As a general rule, use this option when something in the /etc/rc file is causing the storage appliance to misbehave. Often, only the ifconfig, nfs on, and exportfs -a commands are executed manually, allowing NFS to become operational. The /etc/rc file is then edited to remove any offending lines, and the system is rebooted. In this scenario, CIFS is disabled and cannot be restarted until the system is rebooted.
(3) Change password. This option allows you to change the root password of the filer. It is usually used when you forget the current password and cannot use the online passwd command.
(4) Initialize all disks. (4a) Same as option 4, but creates a flexible root volume. This option zeroes all the disks in the storage appliance and re-enters the setup menu. It is typically used only once, during system reinstallation, and first prompts you to confirm your choice. After confirming, there is no way to retrieve data that was previously on the disks. Zeroing the disks can take time (sometimes hours), depending on the number of disks and the capacity of each disk. NOTE: Do not use this option unless you are certain you want to initialize your disks.
(5) Maintenance mode boot. This option enters a special system mode with only a small subset of commands available. It is usually used to diagnose hardware problems (often disk-related). In maintenance mode, WAFL volumes are recognized but not used, the /etc/rc file is not interpreted, and few system services are started. NFS and CIFS cannot be used. Disk reconstructions do not occur. No file system upgrade occurs, even if the Data ONTAP release is newer than the one previously installed.
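After an option 2 boot (without /etc/rc), the minimal recovery commands described above might be entered as follows. The interface name and addresses are illustrative only:

```
system> ifconfig e0 10.10.10.100 netmask 255.255.255.0
system> nfs on
system> exportfs -a
```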
Installation
INSTALLATION
1. Familiarize yourself with the requirements for the release you are installing.
2. Consider the following:
• Requirements for upgrading to Data ONTAP from your existing software
• Potential changes to your system following the upgrade
• Appropriate upgrade method for storage systems in an active-active configuration
• If you run the SnapMirror® software, you must identify storage systems with destination
and source volumes
3. Perform any necessary upgrades before upgrading to Data ONTAP. These upgrades
might include:
• Storage system firmware
• Disk firmware
4. Obtain the Data ONTAP system files from the NOW site at http://now.netapp.com/.
5. Install the Data ONTAP system files, and then download them to your storage
system(s).
Using the Setup Script
The setup command can be run from the storage system CLI at any time; however, it is
usually run during initial system configuration when it is invoked automatically.
Do not run the setup command unless you want to reconfigure your system.
Configuration
CONFIGURATION
Configuration Worksheet
[Slide: a completed configuration worksheet with sample values — host name NetApp1, Windows domain OurDomain, time zone GMT, location Bldg. 1, language en_US, adminhost at 10.10.10.20, interface e0 with IP address 10.10.10.100 and subnet mask 255.255.255.0, and a four-link virtual interface vif1 (e4a,e4b,e4c,e4d)]
CONFIGURATION WORKSHEET
The setup script requires information specific to your network environment. A configuration
worksheet is provided in the Software Setup Guide. You can use the worksheet to gather the
necessary configuration information.
NOTE: You may not need to complete every field on this worksheet, depending on your
specific installation requirements.
HOST NAME
The host name is the name by which the storage appliance is identified on the network. If the
storage appliance is licensed for the NFS protocol, the host name can be no longer than 32
characters. If the storage system is licensed for the CIFS protocol, the host name can be no
longer than 15 characters. The host name must be unique for each storage appliance in a
cluster.
PASSWORD
The storage system requires a password before granting administrative access on the console,
through a telnet session, or through the remote shell protocol.
TIME ZONE
For a list of valid time zones, see the Setup Guide. The time zone must be identical on both
storage appliances in a clustered system.
LOCATION
The location is a description of the physical location of the storage system. This information
sets the SNMP location information.
LANGUAGE
Language refers to the language used for multiprotocol storage systems when both the CIFS
and NFS protocols are licensed. For a list of supported languages and language abbreviations,
see the Setup Guide. The language must be identical on both storage appliances in a cluster.
ADMINISTRATION HOST
The administration host is a client computer that is allowed to access the storage appliance
through a telnet session or through the remote shell protocol. In /etc/exports, adminhost
is granted root access to / so that it can access and modify the configuration files in /etc. All
other NFS clients are granted access only to /home. If no adminhost is specified, all clients
are granted root access to the root directory (not recommended for sites where security is a
concern).
ETHERNET
If your network uses standard Ethernet or Gigabit Ethernet (GbE) interfaces, you must gather
the following information for each interface:
• Network interface name―The name of the Ethernet (or GbE) interface, which depends on the slot in which the adapter is installed. Examples include: e0 (for single-port Ethernet); e1 (for GbE); and e3a, e3b, e3c, e3d (for a quad-port Ethernet card). Data ONTAP automatically assigns network interface names as it discovers the interfaces.
• IP address―A unique address for each network interface.
• Subnet mask (network mask)―The subnet mask for the network to which each network
interface is attached. Example: 255.255.255.0
• Partner IP address (interface to take over)―If your storage system is licensed for cluster
takeover, record the interface name or IP address belonging to the partner that this interface should
take over.
• Jumbo frames―Jumbo frames are packets that are longer than the standard Ethernet (IEEE
802.3) frame size of 1,518 bytes. Because jumbo frames are not part of the IEEE standard, the
frame size definition for jumbo frames is vendor-specific. The most commonly used jumbo frame
sizes are 9,018 bytes and higher.
VIRTUAL INTERFACE
Specify the interface name rather than the interface IP address.
DNS DOMAIN
Enter the name of your Domain Name System (DNS) domain. The DNS domain name must be identical on both storage systems in an active-active configuration. Record the IP addresses of your DNS servers.
NIS SERVERS
Enter the IP addresses or host names of your preferred NIS servers.
WINDOWS DOMAIN
If your site uses Windows servers, it has one or more Windows domains. Record the name of
the Windows domain to which the storage appliance should belong.
WINS SERVERS
Identify the servers that handle your Windows Internet Name Service (WINS) name
registrations, queries, and releases. If you choose to make the storage appliance visible
through WINS, you should record up to four WINS IP addresses.
ACTIVE DIRECTORY
This is the container for the storage appliance accounts, which can be either the default Computers container or a previously created organizational unit (OU) that you specify. The path for the OU must be specified in reverse order, separated by commas. Example: If the path is eng\dev\mgmt, the Active Directory distinguished name is: ou=mgmt, ou=dev, ou=eng.
NOTE: A user in the Windows 2000 domain can create the account in advance, and then
Data ONTAP updates that account.
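The path-reversal rule can be illustrated with a small shell pipeline. This is only a sketch of the ordering, not a Data ONTAP command; it assumes GNU tools (tac), and the sample path comes from the example above:

```shell
# Convert a Windows-style OU path (eng\dev\mgmt) into the reversed,
# comma-separated distinguished-name form (ou=mgmt,ou=dev,ou=eng).
path='eng\dev\mgmt'
printf '%s\n' "$path" |
  tr '\\' '\n' |   # one path component per line
  tac |            # reverse the order (deepest OU first; GNU coreutils)
  sed 's/^/ou=/' | # prefix each component with ou=
  paste -sd, -     # rejoin the lines with commas
```

Running the pipeline prints ou=mgmt,ou=dev,ou=eng, matching the distinguished name in the example above.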
Network Interface Configuration
For storage systems with PCI buses, the network interface naming is as follows:
• Ethernet interface names start with the letter e, followed by the slot number (for example, e0) and, on multiport cards, a port letter. For a quad-port Ethernet card, examples of interface names are e1a and e1b.
• PCI-based storage systems automatically create host names for network interfaces by appending the interface name to the storage system host name. For example, if the storage system is named toaster, the host name for e0 is toaster-e0, and the host name for e1a is toaster-e1a.
Administration Host
ADMINISTRATION HOST
Setup
SETUP
The setup Script
Check Software Version & License Status
system> version
NetApp Release 7.3RC1: Wed Mar 5 02:17:31 PST 2008
system> sysconfig -v
NetApp Release 7.3RC1: Wed Mar 5 02:17:31 PST 2008
System ID: 0084166726 (NetApp1)
System Serial Number: 3003908 (NetApp1)
slot 0: System Board 599 MHz (TSANTSA D0)
Model Name: FAS250
Part Number: 110-00016
Revision: D0
Serial Number: 280646
Firmware release: CFE 1.2.0
Processors: 2
Processor revision: B2
Processor type: 1250
Memory Size: 510 MB
NVMEM Size: 64 MB of Main Memory Used
system> license
nfs site ABCDEFG
cifs site BCDEFGH
http site CDEFGHI
cluster not licensed
snapmirror not licensed
snaprestore not licensed
SOFTWARE VERSION
You can verify your software version by using the sysconfig or sysconfig -v command,
or by using the version command. One way to verify the software version on the disks is to
change to the /etc/boot directory and then view the link to which that directory points.
FIRMWARE VERSION
There are two primary ways to verify your firmware version: use sysconfig -v, or halt the
storage system and enter version at the firmware prompt. Ensure that the firmware version on
your system is what it should be and that you have the most current version of the firmware
for your platform.
LICENSES
Use the license command to verify that all licenses are listed for your storage appliance.
The licenses that are displayed in the autosupport log are encrypted versions of the actual
licenses. If all authorized licenses are not displayed using this command, contact NetApp.
NOTE: You can also access the license information using FilerView. Under Filer, select
Manage Licenses.
Message Logging
Message logging is done by a syslogd daemon
The /etc/syslog.conf configuration file on the
storage system's root volume determines how system
messages are logged
Messages can be sent to:
– The console
– A file
– A remote system
By default, all system messages are sent to the
console and logged in the /etc/messages file
You can access the /etc/messages file via
– An NFS or CIFS client (discussed later in this course)
– The FilerView administration tool
MESSAGE LOGGING
The syslog contains information and error messages that the storage system displays on the
console and logs in the /etc/messages file.
To specify the types of messages that the storage system logs, use the Syslog Configuration
page in FilerView to edit the /etc/syslog.conf file. This file specifies which types of
messages are logged by the syslogd daemon. (A daemon is a process that runs in the
background, rather than under the direct control of a user.)
The /etc/syslog.conf File
By default, the /etc/syslog.conf file does not exist; however, there is a sample
/etc/syslog.conf file. To view a manual page, enter the man syslog.conf command.
The facility parameter uses one of the following keywords: kern, daemon, auth, cron,
or local7.
The level parameter is a keyword from the following ordered list (higher to lower):
emerg, alert, crit, err, warning, notice, info, debug.
The action parameter can be in one of three forms:
• A pathname (beginning with a leading slash)―Selected messages will be appended to the
specified log file.
• A hostname (preceded by ‘@’)―Selected messages will be forwarded to the syslogd daemon on
the named host.
• /dev/console―Selected messages are written to the console.
For more information about /etc/syslog.conf settings, see the System Administration
Guide.
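Putting the three action forms together, a hypothetical /etc/syslog.conf might look like the following. The host name loghost is an assumption, and the fields are separated by tabs:

```
# facility.level        action
*.err                   /dev/console
kern.warning            /etc/messages
*.info                  @loghost
```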
Upgrades
UPGRADES
Options to Upgrade
NetApp periodically releases new versions of Data
ONTAP
A release family is identified by the major version number
– The first two digits of the release number are the version number
Examples: 7.0, 7.1, 7.2
– Point releases add a third digit
Examples: 7.2.1.1, 7.2.2L1
Download upgrades from the NOW site
Install an upgrade using either:
– CLI
software update <subcommand>
– A Windows or UNIX client to unzip or untar to the
shared or mounted /etc directory
OPTIONS TO UPGRADE
The process of upgrading a storage system can vary from release to release. Because disk
firmware is usually tied to the Data ONTAP version, there may be additional considerations
besides the ones identified here. For more information about the specific release you are
upgrading to, including instructions for downgrading to a previous version of Data ONTAP,
see the Upgrade Guide.
NOTE: The information presented in this course represents a generalized approach to
upgrading the operating system. Because some specifics may change between releases, be
sure to consult the appropriate guide for more information.
The software update Command
To upgrade Data ONTAP, enter the command: software update
This command:
• Copies the Data ONTAP release file into /etc/software
• Unzips the Data ONTAP release file
• Updates /etc/boot (to be loaded to CompactFlash)
• Places other files in /etc (such as FilerView HTML files and disk firmware)
NOTE: The system continues running the previous version of Data ONTAP until rebooted.
[Slide diagram: the root volume tree — /etc, with /etc/software, /etc/disk_fw, /etc/boot, and other system files]
The software update command allows you to upgrade Data ONTAP from the console
without using CIFS or NFS.
Before using this command, you must first establish an HTTP host and download the
appropriate software (the Windows executable file for the specific platform, for example,
XX_setup_i.exe) from the NOW site to the host. After the .exe files are downloaded to an
HTTP server, you can install them on any storage appliance by using the software
update command.
The software update command requires the fewest steps and the least amount of time to
upgrade Data ONTAP. To upgrade Data ONTAP using the software update command:
1. Verify the HTTP host and source URL.
2. Execute the software update command.
3. Reboot Data ONTAP.
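For example, the steps might look like the following console session. The server name and file name are illustrative assumptions:

```
system> software update http://webserver/software/733_setup_i.exe
system> reboot
```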
Module Summary
MODULE SUMMARY
Exercise
Module 2: Installation and
Configuration
Estimated Time: 45 minutes
EXERCISE
Basic Administration
Module 3
Data ONTAP® 7.3 Fundamentals
BASIC ADMINISTRATION
Module Objectives
MODULE OBJECTIVES
Graphical Interfaces
GRAPHICAL INTERFACES
Administration Options
ADMINISTRATION OPTIONS
FilerView Administration Tool
The FilerView interface is a Web-based graphical management interface that enables you to
manage most storage system functions from a Web browser rather than from the console, a
telnet session, or the rsh command. FilerView supports Windows, UNIX, Solaris, Linux,
and HP-UX® environments.
To access a storage system from a client using FilerView, complete the following steps:
1. Start your Web browser.
2. Enter the following URL:
http://filername/na_admin
where filername is the fully qualified name, the short name, or the IP address of your storage system.
3. Click FilerView.
A new browser window is displayed. If you are running SecureAdmin™ 2.1.1 or later,
click Secure FilerView to start an encrypted browser session.
4. Select a management function. If prompted, supply an administrative user name and
password.
Operations Manager
OPERATIONS MANAGER
Alternative GUIs
ALTERNATIVE GUIS
Microsoft Windows 2000 Server and later, as well as client operating systems such as
Microsoft Windows XP and later, provide Computer Management, a management console
that can connect to a storage system. The Microsoft Management Console (MMC) can also
be used to remotely administer a storage system.
Command Line Interface
Command Line Interface
system>
Telnet to the Console
Login>
Password>
The ability to telnet to a console is useful because it allows remote access to that console.
However, it is limited by the fact that only one telnet session at a time is allowed.
The storage system can be configured to allow telnet access only by certain hosts using one
of the following commands:
options trusted.hosts
or
options telnet.hosts (deprecated)
The storage system can be configured to terminate an unused telnet session automatically
using the following commands:
options autologout.telnet.enable
options autologout.telnet.timeout
To terminate a telnet session immediately without waiting for the timeout period, you can
also use the logout telnet command from RSH or the console.
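For example, restricting telnet access and enabling automatic logout might look like the following; the host name and timeout value are illustrative:

```
system> options trusted.hosts adminhost
system> options autologout.telnet.enable on
system> options autologout.telnet.timeout 60
```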
Remote LAN Module
RLM Port
REMOTE LAN MODULE
The RLM provides secure, out-of-band access to the system that can be used regardless of the
system state. The RLM offers a number of remote management capabilities for NetApp
systems including remote access, monitoring, troubleshooting, logging, and alerting.
Some additional features of the RLM:
• Remote access to the storage system console without using a serial terminal or a terminal
concentrator
• Remote access to control system power if you need to power off, power on, or power cycle the
system remotely without using a LAN-based power strip
• Remote initiation of a core dump without requiring the use of the Non-Maskable Interrupt (NMI)
button on the system
• Remote access to hardware system event logs even when the system is down
The RLM also extends the AutoSupport capabilities of the NetApp system by sending alerts
or "down-filer" notifications through an AutoSupport message when the system goes down,
regardless of whether or not the system is able to send AutoSupport messages. These
AutoSupport messages provide proactive alerts to NetApp to help provide you with faster
service.
FilerView CLI
FILERVIEW CLI
FilerView allows you to access a telnet session to administer the storage system from a CLI.
NOTE: Only one telnet session at a time is allowed, regardless of whether you are using the
FilerView CLI editor or an alternative telnet emulator.
CLI Session Limitations
The FilerView interface allows only one telnet session at a time. If you try to open a telnet
session in FilerView or the CLI directly, and there is already an active session open, you will
receive an error message.
When you close the Use Command Line window in FilerView, the telnet session is also
closed.
Remote Shell
REMOTE SHELL
NOTE: Be sure to add an empty line (new line) at the end of the hosts.equiv file. In
Windows, the RSH "username" is required in the /etc/hosts.equiv file. In UNIX, if
the RSH "username" is omitted, only "root" may access RSH from the host(s) listed in the
/etc/hosts.equiv file. Listing a Windows host together with user1, for example,
allows user1 to run RSH from that host.
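A hypothetical /etc/hosts.equiv illustrating these rules (the host and user names are assumptions; remember the trailing empty line):

```
adminhost
winhost user1

```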
Secure Shell
SECURE SHELL
Common Commands
COMMON COMMANDS
Basic Administration Commands
system> ?
? halt nfs snapvault
aggr help nfsstat snmp
backup hostname nis software
cf httpstat options source
cifs ifconfig orouted storage
config ifstat partner sysconfig
dafs igroup passwd sysstat
date ipsec ping timezone
df ipspace priv traceroute
disk iscsi qtree ups
disk_fw_update iswt quota uptime
dns license rdate useradmin
download lock reboot version
dump logger restore vfiler
echo logout rmc vif
ems lun route vlan
environment man routed vol
exportfs maxfiles rsm vscan
fcp mt savecore wcc
fcstat nbtstat secureadmin ypcat
file ndmpcopy setup ypgroup
filestats ndmpd shelfchk ypmatch
fpolicy netdiag snap ypwhich
ftp netstat snapmirror
system>
At the normal administration privilege level, entering a question mark (?) at the command
line displays the commands available to a system administrator for disk management,
networking, system management, physical and virtual interface configuration, and related
tasks.
Some of these commands are simple; some use arguments; some perform obvious functions
such as backup, ping, or help. To display a brief description of a command, enter help
followed by the command name. To display the full syntax of a command, including
associated arguments, enter the command name by itself on the command line.
You can also use FilerView to access the manual pages for each command.
Advanced Privilege Commands
system> priv set advanced
Warning: These advanced commands are potentially dangerous; use
them only when directed to do so by Network Appliance personnel.
df              led_on            quota        sysstat
disk            led_on_all        rdate        test_lcd
disk            led_on_off        rdfile       timezone
disk_fw_update  led_test          reboot       toe
disk_list       led_test_one      registry     traceroute
disk_stat       license           remote       ups
dns             lmem_stat         restore      uptime
download        lock              result       useradmin
dump            log               revert_to    version
echo            logger            rm           vfiler
ems             logout            rmc          vif
environ         ls                rmt          vlan
environment     lun               rod          vol
exit            man               route        vscan
exportfs        maxfiles          routed       wafl
fcadmin         mbstat            rsm          wafl_susp
fcp             mem_scrub_stats   rtag         wcc
fcstat          mt                savecore     wrfile
file            mv                scsi         ypcat
filestats       nbtstat           secureadmin  ypgroup
fpolicy         ndmpcopy          setup        ypmatch
ftp             ndmpd             sh           ypwhich
getXXbyYY
system*> priv set admin
Advanced privilege commands are additional commands that provide more control and access
to the storage system. In some cases, these commands are simply normal commands with
additional arguments or options available.
CAUTION: Because advanced privilege commands are potentially dangerous, they should
only be used by knowledgeable personnel.
You can access advanced privilege commands using the priv set advanced command.
This command changes the command-line prompt by embedding an asterisk (*) in the prompt
when advanced privileges are enabled. To return to basic administration mode, use the priv
set admin command.
There are additional administration commands that are considered advanced but are available
in the basic administration mode. However, these commands are hidden and do not appear
when you enter help while in the basic administration mode.
Basic System Configuration
Many console commands provide storage system configuration information. You can use
these commands to:
• Check your system configuration
• Monitor system status
• Verify the correct system
Configuring Your System
CLI Commands
System options:
options [option name] [value]
Example: options rsh.enable on
NOTE: If no value is entered, the current value is displayed.
Volume options:
vol options volname [option name] [value]
Aggregate options:
aggr options aggrname [option name] [value]
CLI COMMANDS
The options or vol options commands are used to change configurable storage system
software options.
If no options are specified, then the current value of all available options is printed. If an
option is specified with no value, then the current value of that option is printed. If only a part
of an option is specified with no value, then the list of all options that start with the partial-
option string is printed. This is similar to the UNIX grep command.
The default value for most options is off, which means that the option is not set.
Changing the value to on enables the option. For most options, the only valid values are:
• On (also expressed as yes, true, or 1) in any combination of upper and lower case
• Off (also expressed as no, false, or 0) in any combination of upper and lower case
If the default is not set to off, the description of an option indicates the default setting along
with allowable values (if it is not an on-or-off option).
For options that accept string values, use a double quote ("") as the option argument if you
want to set the option to be the null string. Normally, arguments are limited to 255 characters
in length.
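For example, the partial-option matching described above might produce output like the following. The option values shown are illustrative, not guaranteed defaults:

```
system> options autologout
autologout.console.enable    on
autologout.console.timeout   60
autologout.telnet.enable     on
autologout.telnet.timeout    60
```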
Registry Files
File Usage
/etc/registry            Current registry
/etc/registry.lastgood   Copy of the registry after the last successful boot
/etc/registry.bck        First-level backup
/etc/registry.default    Default registry
REGISTRY FILES
REGISTRY DATABASE
Persistent configuration information and other data is stored in a registry database.
There are several backups of the registry database that are automatically used if the original
registry becomes unusable. The /etc/registry.lastgood file is a copy of the registry
as it existed after the last successful boot.
The /etc/registry is edited by Data ONTAP and should not be manually edited.
Configuration commands such as the network interface configuration (ifconfig) must
remain in the /etc/rc file.
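The fallback order implied by the table above can be sketched as a first-usable-file search (illustrative only; Data ONTAP selects a backup registry automatically, and its exact recovery logic is internal):

```python
# File paths as documented in the table above; the ordering below
# is an assumption for illustration.
REGISTRY_FILES = [
    "/etc/registry",           # current registry
    "/etc/registry.lastgood",  # copy after last successful boot
    "/etc/registry.bck",       # first-level backup
    "/etc/registry.default",   # default registry
]

def pick_registry(usable):
    """Return the first registry file that is usable, or None."""
    for path in REGISTRY_FILES:
        if path in usable:
            return path
    return None
```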
Editing Configuration Files
Editing Configurations from Host
To edit a configuration file from a host, the storage system must be configured with a NAS
protocol such as CIFS or NFS. An adminhost can then access the /etc directory to edit the
configuration file.
ADMINHOST
The term adminhost is used to describe an NFS or CIFS client machine that is able to view
and modify configuration files stored in the /etc directory of the storage system’s root
volume.
The storage system grants root permissions to the adminhost after the setup procedure is
complete.
Console Editing
CONSOLE EDITING
The console command rdfile displays the present contents of an ASCII text file. If the file
doesn’t exist or is empty, this command returns nothing.
For example, to display the /etc/hosts file from the CLI, enter rdfile /etc/hosts.
The console command wrfile creates or re-creates a file when executed.
For example, to re-create the /etc/hosts file from the CLI, enter wrfile /etc/hosts,
and then enter the contents of the file and press Ctrl-C to commit the file. Issue the command
rdfile to verify the contents of the file.
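The create-then-verify workflow resembles the following Python analogue (the wrfile/rdfile names here are local stand-ins, not the console commands themselves):

```python
import os
import tempfile

def wrfile(path, text):
    """Create or re-create a file with the given contents, as the
    wrfile console command replaces a file's contents."""
    with open(path, "w") as f:
        f.write(text)

def rdfile(path):
    """Return a file's contents, or '' if it does not exist --
    mirroring rdfile, which displays nothing in that case."""
    if not os.path.exists(path):
        return ""
    with open(path) as f:
        return f.read()

# Re-create a hosts-style file, then read it back to verify.
hosts = os.path.join(tempfile.mkdtemp(), "hosts")
wrfile(hosts, "192.168.1.1 filer1\n")
```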
System Configuration Using FilerView
FilerView has a variety of different menu items that display and allow configuration of a
storage system.
AutoSupport
AUTOSUPPORT
AutoSupport Mail Host
Email Server
AutoSupport is a call home feature included in the Data ONTAP and NetCache software for
all NetApp systems. AutoSupport is an integrated and efficient monitoring and reporting tool
that constantly monitors the health of your system.
AutoSupport allows storage systems to send messages to the NetApp Technical Support team
and to other designated addressees when specific events occur. The AutoSupport message
contains useful information for Technical Support to identify and solve problems quickly and
proactively.
You can also subscribe to the abbreviated version of urgent AutoSupport messages through
alphanumeric pages, or you can customize the type of message alerts you want to receive.
The AutoSupport Message Matrices list all the current AutoSupport messages in order of
software version.
STAYING AHEAD OF POTENTIAL PROBLEMS
Not all AutoSupport messages lead to immediate actions. Warning messages in the syslog can
point to suspect components. Often, this initiates corrective action that can prevent unplanned
disruption. Our AutoSupport analysis tools also monitor syslog messages for known
configuration issues. A config alert notifies our support team of a configuration issue that
could lead to system instability. Finally, being connected through AutoSupport means that
NetApp already has details about your current system configuration.
AutoSupport
AUTOSUPPORT
AUTOSUPPORT DAEMON
NetApp storage systems use an AutoSupport daemon to control how messages are sent to
NetApp Technical Support. The AutoSupport daemon is enabled by default on a storage
system.
AutoSupport E-Mail Events
To read descriptions of some of the AutoSupport messages you might receive, access the NOW site and search for AutoSupport Message Matrices. You can view either the online version or the version in the Data ONTAP guide.
AutoSupport E-mail Contents
AutoSupport messages also contain additional information specific to your storage system.
This information helps identify crucial parameters required for follow-up handling of the
triggering event.
To control the detail level of event messages and weekly reports, use the options command
to specify the value of autosupport.content as complete or minimal. Complete
AutoSupport messages are required for normal technical support. Minimal AutoSupport
messages omit sections and values that might be considered sensitive information, and reduce
the amount of information sent. However, keep in mind that choosing minimal greatly affects
the level of support NetApp is able to provide.
AutoSupport Configuration Options
AutoSupport commands:
options autosupport.support.enable
[on|off]
options autosupport.mailhost
[host1,…,host5]
options autosupport.to
[address1,…,address5]
options autosupport.from
options autosupport.content
options autosupport.noteto
options autosupport.doit [message]
options autosupport.enable [on|off]
The commands listed above are some of the AutoSupport configuration commands available
from the console.
The following table is an abbreviated version of the AutoSupport options list. See the
command reference for a full list of options and descriptions.
EXAMPLE RESULT
options autosupport.enable off
    Disables the AutoSupport daemon. The default is on.
options autosupport.support.enable off
    Disables the AutoSupport notification to NetApp. The default is on.
    This option is superseded (overridden) by the value of
    autosupport.enable.
options autosupport.mailhost maildev1,mailengr1
    Specifies two mail host names: maildev1 and mailengr1. (You can
    enter up to five mail host names.) hostname is the hostname of the
    SMTP mail host(s) that will receive AutoSupport e-mail messages. The
    default is the hostname of the adminhost specified during setup.
options autosupport.to jjandar@netapp.com,ssmith@netapp.com
    Specifies two recipients (jjandar and ssmith) of AutoSupport e-mail
    messages. address is an SMTP e-mail address. You can specify up to
    five addresses. NOTE: Do not enter autosupport@netapp.com if
    autosupport.support.enable is on.
options autosupport.from techsupport
    Defines the user techsupport as the sender of the notification.
Testing AutoSupport
TESTING AUTOSUPPORT
Module Summary
MODULE SUMMARY
Exercise
Module 3: Basic Administration
Estimated Time: 60 minutes
EXERCISE
Admin Security
Administration
Security
Module 4
Data ONTAP® 7.3 Fundamentals
ADMINISTRATION SECURITY
Module Objectives
MODULE OBJECTIVES
Storage System Access
To manage a storage system, you can use the default system administration account, or root.
You can also create additional administrator user accounts using the useradmin command.
Administrator accounts are beneficial because:
• You can give administrators and groups of administrators differing levels of administrative access
to your storage systems.
• You can limit an individual administrator's access to specific storage systems by giving him or her
an administrative account only on those systems.
• Having different administrative users allows you to display information about who is performing
what commands on a storage system.
The auditlog file keeps a record of every administrative operation performed on a storage system, the administrator who performed it, and any operations that failed due to insufficient capabilities.
• You can assign each administrator to one or more groups whose assigned roles (sets of
capabilities) determine what operations they are authorized to carry out on the storage system.
• If a storage system running CIFS is a member of a domain or a Windows workgroup, domainuser
accounts authenticated on the Windows domain can use any available method to access the storage
system.
THE AUDIT LOG
An audit log is a record of commands executed at the console through a telnet shell, an SSH
shell, or by using the rsh command. All commands executed in a source-file script are also
recorded in the audit log. Audit log data is stored in the /etc/log directory in the auditlog
file. Administrative HTTP operations, such as those resulting from the use of FilerView, are
also logged. The maximum size of the auditlog file is specified using the
auditlog.max_file_size option. By default, Data ONTAP is configured to save an audit
log.
Role-Based Access Control
Role-Based Access Control (RBAC) is a mechanism for managing a
set of actions (capabilities) that a user or administrator can perform
on a storage system.
A role is created.
Capabilities are granted to the role.
Groups are assigned to one or more roles.
Users are assigned to groups.
Role-based access control (RBAC) specifies how users and administrators can use a
particular computing environment.
Most organizations have multiple system administrators, some of whom require more
privileges than others. By selectively granting or revoking privileges for each user, you can
customize the degree of access that each administrator has to the system.
RBAC allows you to define sets of capabilities (roles) that apply to one or more users. Users
are assigned to groups based on their job functions, and each group is granted a set of roles to
perform those functions.
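The capability lookup implied by this chain (user → groups → roles → capabilities) can be sketched as follows; all role, group, and capability names below are hypothetical, not Data ONTAP's predefined ones:

```python
# Hypothetical RBAC data: roles grant capabilities, groups are
# assigned roles, and users belong to groups.
roles = {
    "audit": {"login-telnet", "cli-snap*"},
    "admin": {"login-*", "cli-*", "security-*", "api-*"},
}
groups = {"operators": ["audit"], "administrators": ["admin"]}
users = {"jdoe": ["operators"], "root": ["administrators"]}

def capabilities(user):
    """Union of the capabilities a user holds through group/role
    membership."""
    caps = set()
    for group in users.get(user, []):
        for role in groups.get(group, []):
            caps |= roles.get(role, set())
    return caps
```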
Capabilities and Roles
Capabilities
CAPABILITIES
Roles
Admin Role
Capabilities
Login capability
Security capability
CLI capability
API capability
ROLES
Predefined Administrative Roles
Groups and Users
Groups
A group is:
– A collection of users
– Associated with one or more roles
Groups have defined permissions and access levels
that are defined by roles
Admin Role
GROUPS
A group is a collection of users or domain users. It is important to remember that the groups
defined in Data ONTAP are separate from other groups such as groups defined in the
Microsoft Active Directory server or an NIS environment. This is true even if the groups
defined in the Microsoft Active Directory and the groups defined in Data ONTAP have the
same name.
When creating new users or domain users, Data ONTAP requires that you specify a group.
Therefore, you should create appropriate groups before defining users or domain users.
Predefined Groups
PREDEFINED GROUPS
To create or modify a group, start by giving the group capabilities associated with one or
more predefined or customized roles.
Users
A user is:
An individual account that may or may not have
capabilities defined for the storage system
Part of a group
All administrative users have a unique login name and
password.
Admin Role
USERS
User Creation Requirements—Role
User Creation Requirements—Group
Purpose of Local Users
Local users are often used to delegate configuration duties to other administrators. However, local users are also created if the storage system is configured to perform local authentication with CIFS or NFS protocols (for example, when the storage system’s CIFS server is configured for Windows workgroup authentication).
Security Administration
User accounts are managed from the CLI only (no
FilerView interface) using the following command:
useradmin
– This command allows you to list, add, and delete users.
– The user account is maintained in the /etc/registry
file.
User authentication is performed locally on the storage
system.
Admin Role
SECURITY ADMINISTRATION
Security Administration (Cont.)
SECURITY OPTIONS
SECURITY OPTIONS (CONT.)
User Access
USER ACCESS
User Creation Requirements - User
Administration Host
Access
Administration Host
ADMINISTRATION HOST
The term adminhost is used to describe an NFS or CIFS client machine that has the ability to
view and modify configuration files stored in the /etc directory of the storage system’s root
volume.
When you designate a workstation as an administration host, the storage system's root file
system (/vol/vol0 by default) is accessible only to the specified workstation in the
following ways:
• If the storage system is licensed for the CIFS protocol, the root file system is accessible as a share named C$.
• If the storage system is licensed for the NFS protocol, the root file system is accessible by NFS
mounting.
You can designate additional administration hosts after setup by modifying the storage
system's NFS exports and CIFS shares.
Restricting Access
RESTRICTING ACCESS
Module Summary
MODULE SUMMARY
Exercise
Module 4: Administration Security
Estimated Time: 30 minutes
EXERCISE
Networking
Networking
Module 5
Data ONTAP® 7.3 Fundamentals
NETWORKING
Module Objectives
MODULE OBJECTIVES
Interface Configuration
INTERFACE CONFIGURATION
Interface Configuration
INTERFACE CONFIGURATION
From the CLI, the ifconfig command displays and configures network interfaces for a
storage system.
The following are ifconfig command examples:
• Display network interface configurations:
ifconfig -a
• Change an interface IP address:
ifconfig interface 10.10.10.XX
• Bring down an interface:
ifconfig interface down
• Bring up an interface:
ifconfig interface up
The /etc/rc file configures the interface settings during boot. You can edit this
configuration on the storage system using the wrfile command, from FilerView, or from an
adminhost using CIFS/NFS.
Example: Using the ifconfig command in the /etc/rc file:
ifconfig interface 10.10.10.XX netmask 255.255.252.0 up
Interface Configuration (Cont.)
The slide shows the physical interface naming scheme: 1G Ethernet and 10G Ethernet (Data ONTAP 7.2 or later) interfaces are named by slot number (1, 2, 3, 4) and port letter (a, b, c, d).
Your storage system also supports the following virtual network interface types:
• Virtual interface (VIF)
• Virtual local area network (VLAN)
• Virtual hosting (VH)
Interface Naming Example
For physical interfaces, interface names are assigned automatically based on the slot where
the network adapter is installed.
VLAN interfaces are named in the interface-vlan_id format, where interface identifies the physical interface (based on the slot where the network adapter is installed), and vlan_id is the identifier of the VLAN configured on that interface. For example, e8-2, e8-3, and e8-4 are three VLAN interfaces for VLANs 2, 3, and 4, configured on interface e8.
You can assign names to vifs and emulated LAN interfaces.
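A name such as e8-2 can be split into its physical interface and VLAN ID with a simple parser (an illustrative sketch, not part of Data ONTAP):

```python
import re

def parse_interface(name):
    """Split a VLAN interface name such as 'e8-2' into the physical
    interface and the VLAN ID; plain names return a VLAN ID of None."""
    m = re.fullmatch(r"(e\d+[a-z]?)(?:-(\d+))?", name)
    if not m:
        raise ValueError(f"unrecognized interface name: {name!r}")
    iface, vlan = m.groups()
    return iface, int(vlan) if vlan else None
```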
Managing Interfaces: ifconfig
NETWORK PARAMETER DESCRIPTIONS
• IP address―Standard format is used for IP addresses (for example, 192.168.23.10). IP addresses
are mapped to host names in the /etc/hosts file.
• Netmask and broadcast address―Standard format is used for netmask and broadcast addresses
(for example, 255.255.255.0 for netmask, and 192.168.1.255 for broadcast address).
• Media type and speed―The following media types can be configured:
[ mediatype { tp | tp-fd | 100tx | 100tx-fd | 1000fx | auto } ]
• MTU―Use a smaller interface MTU value if a bridge or router on the attached network cannot
break large packets into fragments.
• Flow control for the GbE II controller―The original GbE controller supports only full duplex,
not flow control. The GbE Controller II negotiates flow control with an attached device that
supports autonegotiation. However, if autonegotiation fails on either device, the flow control
setting that was entered using the ifconfig command is used. The following flow control
settings can be configured:
[ flowcontrol { none | receive | send | full } ]
• Up or down state―The state of any interface can be configured up or down.
NetApp> ifconfig
usage: ifconfig [ -a | [ <interface>
[ [ alias | -alias ] <address> ] [ up | down ]
[ netmask <mask> ] [ broadcast <address> ]
[ mtusize <size> ]
[ mediatype { tp | tp-fd | 100tx | 100tx-fd | 1000fx | auto } ]
[ flowcontrol { none | receive | send | full } ]
[ trusted | untrusted ]
[ wins | -wins ]
[ [ partner { <address> | <interface> } ] | [ -partner ] ] ] ]
Managing Interfaces: FilerView
Managing Interfaces: FilerView (Cont.)
FilerView provides the same level of control over a storage system’s interfaces as the
ifconfig command in the CLI.
Managing Interfaces: CLI
Name Resolution
NAME RESOLUTION
Host-Name Resolution
HOST-NAME RESOLUTION
Host-Name Resolution (Cont.)
Data ONTAP stores and maintains host information in
the following locations:
/etc/hosts file
DNS server
Network Information Service (NIS) server
In host-name resolution:
The /etc/nsswitch.conf file controls the order in
which these three locations are checked.
Data ONTAP stops checking locations when a valid IP
address is returned.
NOTE: For convenience, you can use the Host Name Resolution
Policy Wizard in FilerView.
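The stop-at-first-valid-answer behavior can be sketched like this (the source names mirror typical nsswitch.conf keywords; the lookup tables are made up):

```python
def resolve(hostname, order, sources):
    """Try each source in the order given by /etc/nsswitch.conf and
    stop at the first source that returns a valid IP address."""
    for source_name in order:
        ip = sources[source_name].get(hostname)
        if ip:
            return ip, source_name
    return None, None

# Hypothetical lookup tables for illustration.
sources = {
    "files": {"filer1": "192.168.1.10"},   # /etc/hosts
    "dns":   {"filer2": "192.168.1.20"},   # DNS server
    "nis":   {},                           # NIS server
}
order = ["files", "dns", "nis"]
```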
/etc/hosts Configuration
/ETC/HOSTS CONFIGURATION
/etc/hosts Configuration: FilerView
You can also manage the /etc/hosts file using the FilerView Web application. To access
and edit the file from FilerView, from the main menu, select Manage Hosts File from the
Network node.
DNS Configuration
DNS CONFIGURATION
EXAMPLE RESULT
options dns.domainname dns_campus2 Sets the DNS domain name to dns_campus2
options dns.enable on Enables DNS
NIS
In UNIX environments, NIS provides:
A centralized mechanism for host-name resolution
User authentication
To configure NIS:
In FilerView, use the Host Name Resolution Policy
Wizard
In the CLI, use:
– setup command
– options nis.*
– nis command
NIS
Host Name Resolution Policy Wizard:
FilerView
To ease configuration, use the FilerView Host
Name Resolution Policy Wizard:
Host Resolution Policy Wizard:
FilerView (Cont.)
Choose a resolution method:
Host Resolution Policy Wizard:
FilerView (Cont.)
Provide DNS parameters:
Host Resolution Policy Wizard:
FilerView (Cont.)
List DNS server address(es):
DNS SETTINGS
DNS server parameters can also be configured through the CLI using the setup command.
Host Resolution Policy Wizard:
FilerView (Cont.)
Specify NIS information:
Host Resolution Policy Wizard:
FilerView (Cont.)
Specify NIS Group Parameters:
Host Resolution Policy Wizard:
FilerView (Cont.)
Specify the order for the Name Service
Configuration:
Host Resolution Policy Wizard:
FilerView (Cont.)
Commit the changes:
Route Resolution
ROUTE RESOLUTION
Route Information
FilerView
ROUTE INFORMATION
ROUTING
A storage appliance does not function as a router for other network hosts, even if it has multiple network interfaces. However, the storage appliance does route its own packets.
To display the defaults and explicit routes your storage appliance uses to route its own
packets, use the netstat -r command to view the current routing table. The netstat
command displays network-related data structures.
COMMAND RESULT
route add default 10.10.10.1 1
    Adds a default route through 10.10.10.1 with a metric (hop count) of 1.
route delete 193.20.8.173 193.20.4.254
    Deletes the route to destination 193.20.8.173 through gateway
    193.20.4.254.
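How a host picks between an explicit route and the default route can be illustrated with a longest-prefix match (a general routing sketch, not Data ONTAP's internal implementation):

```python
import ipaddress

def next_hop(routes, default_gateway, dest):
    """Pick the gateway for dest: the most specific matching explicit
    route wins; otherwise the default route is used."""
    addr = ipaddress.ip_address(dest)
    best = None
    for network, gateway in routes:
        net = ipaddress.ip_network(network)
        if addr in net and (best is None or net.prefixlen > best[0].prefixlen):
            best = (net, gateway)
    return best[1] if best else default_gateway

# Explicit route matching the example in the table above.
routes = [("193.20.8.0/24", "193.20.4.254")]
```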
The netstat Command
The netstat command symbolically displays the contents of various network-related data
structures. There are a number of output formats, depending on the options chosen. Use the
man page to see all available options.
The route Command
The route command allows you to manually manipulate the network routing table for a
specific host or network specified by destination.
Virtual Interfaces
VIRTUAL INTERFACES
Virtual Interfaces
Load Balancing
VIRTUAL INTERFACES
A vif is a group of Ethernet interfaces working together as a logical unit. You can group up to
16 Ethernet interfaces into a single logical interface.
The following are some advantages of vifs over single-network interfaces:
• Higher throughput―Multiple interfaces work as one
• Fault tolerance―If one vif interface goes down, the remaining interfaces maintain the connection
to the network
• Protection against a switch port becoming a single point of failure
Single-Mode VIF
SINGLE-MODE VIF
SINGLE-MODE TRUNK
Vifs are also known as trunks, virtual aggregations, link aggregations, or EtherChannel
virtual interfaces.
Trunks can be single-mode or multimode. In a single-mode trunk, one interface is active while the other interface is on standby.
NOTE: If the active interface fails, the standby interface takes over and maintains the connection with the switch.
Multimode VIF
MULTIMODE VIF
MULTIMODE TRUNK
In a multimode trunk, all interfaces are active, providing greater speed when multiple hosts access the storage appliance. Because the switch determines how the load is balanced among the interfaces, the switch must support manually configurable trunking.
In the figure above, three active interfaces comprise the multimode trunk. If any one of the three interfaces fails, the storage appliance remains connected to the network.
Not all switches provide this capability. Check with your switch manufacturer for more
information.
Second-Level VIF
Switch X Switch Y
Vif_XA Vif_YA
Vif_YB
Vif_XB “Super” Vif_B
SECOND-LEVEL VIF
A second-level vif is a group of multimode vifs. If the primary multimode vif fails, the second-level vif provides a standby multimode vif. You can use second-level vifs on a single storage system or in a cluster.
You can set up your storage system with two double-link multimode vifs where each vif is
connected to a different switch that is capable of link aggregation over multiple ports. You
can then set up a second-level single-mode vif that contains both of the multimode vifs.
When you configure the second-level vif using the vif create command, only one of the
two multimode vifs is brought up as the active link. If all the underlying interfaces in the
active vif fail, the second-level vif activates the link corresponding to the other vif.
In the example above, “Quad” is an Ethernet card with four Ethernet ports.
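The failover decision (use the first multimode vif that still has a live interface) can be sketched as follows; the vif and interface names are taken loosely from the figure and are illustrative only:

```python
def active_link(vifs, failed):
    """Return the first multimode vif that still has a live interface,
    in the spirit of a second-level single-mode vif activating the
    standby link when every interface in the active vif is down."""
    for name, interfaces in vifs:
        if any(iface not in failed for iface in interfaces):
            return name
    return None

# Two multimode vifs, each attached to a different switch.
vifs = [("Vif_XA", ["e1a", "e1b"]),
        ("Vif_YA", ["e2a", "e2b"])]
```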
Load Balancing
Load balancing is supported for multimode VIFs only:
IP-based (default)
MAC-based
Round-robin (not recommended)
LOAD BALANCING
Load balancing ensures that all the interfaces in a multimode vif are equally utilized for
outbound traffic. Load balancing is supported for multimode trunks only, and assumes a nice
distribution of hosts. There are three methods of load balancing using the IP-based default:
• IP-based―The outgoing interface is selected based on the storage system and client’s IP address.
• MAC-based―The outgoing interface is selected on the basis of the storage system and client’s
MAC (Media Access Control) address.
• Round robin―All the interfaces are selected on a rotating basis.
Both the IP-based and MAC-based methods use a formula to determine which interface to
use for outgoing frames. The formula takes the exclusive OR (XOR) of the last four bits of
the source and destination addresses to select the interface on which to return data.
NOTE: The round-robin method provides true load balancing, but may cause out-of-order
packet delivery and retransmissions due to overruns.
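As a rough illustration, the XOR selection can be modeled in Python. This is a simplified
sketch, not the actual Data ONTAP algorithm; the function name and the IPv4-only parsing
of the last octet are assumptions made for the example:

```python
def select_interface(src_ip: str, dst_ip: str, num_links: int) -> int:
    """Pick an outgoing trunk interface from the XOR of the last four
    bits of the source and destination IPv4 addresses (simplified model)."""
    src_bits = int(src_ip.split(".")[-1]) & 0x0F  # last 4 bits of source
    dst_bits = int(dst_ip.split(".")[-1]) & 0x0F  # last 4 bits of destination
    return (src_bits ^ dst_bits) % num_links      # map onto one trunk link

# The same host pair always maps to the same interface, which is why an
# even distribution of client addresses is assumed:
print(select_interface("172.17.200.201", "172.17.200.15", 4))
```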
Creating a VIF from the CLI:
Single-level Example
The named virtual interface is treated as a
single interface: ifconfig vif_name
Entries created on the command line are not
permanent
system> vif create single SingVif1 e3a e3b
system> ifconfig SingVif1 172.17.200.201 netmask
255.255.255.0 mediatype 100tx-fd up
system> vif favor e3a
system> ifconfig SingVif1
SingVif1:flags=1148043<UP,BROADCAST,RUNNING,MULTICAST,TCPCKSUM> mtu 1500
inet 172.17.200.201 netmask 0xffffff00 broadcast
172.17.200.255
ether 02:a0:98:03:28:8e (Disabled virtual interface)
You create and modify a trunk using vif commands. A trunk name must be unique, must
begin with a letter, must contain no spaces, and must not exceed 15 characters. After you
create the trunk, configure it like a regular network interface using the ifconfig command.
STEP 1
Enter the command:
vif create single vif_name [interface_list]
where vif_name is the name of the vif
and interface_list is a list of the interfaces you want the vif to consist
of.
NOTE: You must ensure that all interfaces to be included in the vif are configured down.
You can use the ifconfig command to configure an interface down.
Example:
To create a single-mode vif with the name SingleTrunk1:
vif create single SingleTrunk1 e0 e1
STEP 2
Enter the command:
ifconfig vifname IP_address netmask mask
where vifname is the name of the vif
IP_address is the IP address for this interface
and mask is the network mask for this interface.
Example:
To configure an IP address of 10.120.5.74 and a netmask of 255.255.255.0 on the single-
mode vif SingleTrunk1 that was created in the previous example:
ifconfig SingleTrunk1 10.120.5.74 netmask 255.255.255.0
STEP 3
To change the active interface in a single-mode vif, enter the command:
vif favor interface
where interface is the name of the interface you want to be active.
Example:
To specify the interface e1 as preferred:
vif favor e1
STEP 4
To check the status of your new interface, enter the command:
ifconfig vifname
where vifname is the name of the new interface.
STEP 5
To make a new vif permanent, update the /etc/rc file by entering the command:
wrfile -a /etc/rc
ifconfig SingleTrunk1 10.120.5.74 netmask 255.255.255.0
vif favor e1
STEP 6
Verify your changes to the /etc/rc file by entering one of the following commands:
rdfile /etc/rc
source /etc/rc
Creating a VIF from the CLI:
Multimode Example
system> vif create multi multiVif2 e3a e3b e3c e3d
system> ifconfig multiVif2 172.17.200.202 netmask
255.255.255.0 mediatype 100tx-fd up
system> ifconfig multiVif2
multiVif2:flags=1148043<UP,BROADCAST,RUNNING,MULTICAST,TCPCKSUM> mtu 1500
inet 172.17.200.202 netmask 0xffffff00 broadcast
172.17.200.255
ether 02:a0:98:03:28:8e (Disabled virtual interface)
This procedure enables you to create a static or dynamic multimode vif on your storage
system. By default, the IP-based load-balancing method is used for a multimode vif.
However, you can select another method when creating the vif. Once a load-balancing
method has been assigned to a vif, it cannot be changed.
STEP 1
To create a static multimode vif, enter the command:
vif create multi vif_name -b {rr|mac|ip} [interface_list]
Or, to create a dynamic multimode vif, enter the command:
vif create lacp vif_name -b {rr|mac|ip} [interface_list]
where -b specifies one of the following load-balancing methods:
• rr―round robin
• mac―based on MAC address
• ip―based on IP address (default)
NOTE: For dynamic multimode vifs, use the ip load-balancing method.
vif_name is the name of the vif
and interface_list is a list of the interfaces that make up the vif.
NOTE: You must ensure that all interfaces to be included in the vif are configured down.
You can use the ifconfig command to configure an interface down.
Example
To create a multimode vif made up of interfaces e0, e1, e2, and e3 using MAC-based load
balancing:
vif create multi MultiTrunk1 -b mac e0 e1 e2 e3
STEP 2
Enter the command:
ifconfig vifname IP_address netmask mask
where vifname is the name of the vif
IP_address is the IP address for this interface
and mask is the network mask for this interface.
Creating a VIF from the CLI:
Second-Level VIF Example
system> vif create multi multiVif1 e3a e3b
system> vif create multi multiVif2 e3c e3d
system> vif create single L2vif multiVif1 multiVif2
system> ifconfig L2vif 172.17.200.206 netmask
255.255.255.0 mediatype 100tx-fd up
system> ifconfig L2vif
L2vif:flags=1148043<UP,BROADCAST,RUNNING,MULTICAST,TCPCKSUM> mtu 1500
inet 172.17.200.206 netmask 0xffffff00 broadcast
172.17.200.255
ether 02:a0:98:03:28:8c (Disabled virtual
interface)
This procedure creates a second-level vif called vif_name on a single storage system with
two multimode vifs called vif_name1 and vif_name2. The vif_name1 is composed of
two physical interfaces, if1 and if2, and vif_name2 is composed of two physical
interfaces, if3 and if4.
STEP 1
To create two multimode interfaces, enter the commands:
vif create multi vif_name1 -b {rr|mac|ip} if1 if2
vif create multi vif_name2 -b {rr|mac|ip} if3 if4
where -b specifies one of the following load-balancing methods:
• rr―round robin
• mac―based on MAC address
• ip―based on IP address (default)
NOTE: You must ensure that all interfaces to be included in the vif are configured down.
You can use the ifconfig command to configure an interface down.
vif Commands
VIF COMMANDS
vif create single SingleTrunk e1 e2
    Creates a single-mode trunk vif on interfaces e1 and e2. Enter this
    command in the /etc/rc file to make it persistent over reboots.
vif stat SingleTrunk1 10
    Displays the number of packets received and transmitted on each
    interface. You can specify the time interval (in seconds) at which the
    statistics are displayed. If no number is entered, statistics are
    displayed by default at two-second intervals.
Creating a VIF with FilerView
After a vif is created using either the CLI vif command or FilerView, you must assign an
address to the vif. To configure the vif as if it were a single interface, use the ifconfig
command.
Virtual LANs
VIRTUAL LANS
Virtual LANs
[Figure: switch ports on Floor 1 and Floor 2 assigned to VLAN 1 and VLAN 2]
VIRTUAL LANS
A virtual local area network (VLAN) is a switched network that is logically segmented by
function, project team, or applications. End stations can be grouped by department, by
project, or by security level. End stations can be geographically dispersed and still be part of
the broadcast domain in a switched network.
ADVANTAGES OF VLANS
• Ease of administration―VLANs enable a logical grouping of users who are physically
dispersed. Moving to a new location does not interrupt membership in a VLAN. Similarly,
changing job functions does not require moving the end station because it can be reconfigured into
a different VLAN.
• Confinement of broadcast domains―VLANs reduce the need for routers on the network to
contain broadcast traffic. Packet flooding is limited to the switch ports on the VLAN.
• Reduction of network traffic―Because the broadcast domains are confined to the VLAN, traffic
on the network is significantly reduced.
• Enforcement of security―End stations on one VLAN cannot communicate with end stations on
another VLAN unless a router is connected between them.
Creating a VLAN from the CLI
Use the vlan create and the ifconfig commands to create and configure a VLAN.
After creating the VLAN interface with the vlan command, you can configure it using the
ifconfig command.
vlan Commands
VLAN COMMANDS
A VLAN is created using the vlan create command in the CLI or in FilerView. After
creating the VLAN, you can configure it like any other regular network interface using the
ifconfig command.
vlan create -g on e4 2 3 4
    Creates three VLANs on interface e4 named e4-2, e4-3, and e4-4. The
    -g on option enables GVRP on the VLANs. Enter this command in
    the /etc/rc file to make it persistent over reboots.
vlan delete -q e8 2
    Removes VLAN e8-2. If the interface was configured up, a message
    appears asking you to confirm the deletion.
vlan add e8 3
    Adds e8-3 to the VLAN. Enter this command in the /etc/rc file to
    make it persistent over reboots.
vlan stat e4 10
    Displays the number of packets received and transmitted on each
    interface. You can specify the time interval (in seconds) at which the
    statistics are displayed. If no number is entered, statistics are displayed
    by default at two-second intervals.
vlan modify -g off e8
    Excludes interface e8 from participating in GVRP. Enter this
    command in the /etc/rc file to make it persistent over reboots.
Module Summary
In this module, you should have learned to:
Use the ifconfig command to configure interfaces
Identify host-name resolution methods:
– /etc/hosts file
– DNS
– NIS
Explain how a VIF is a single virtual interface created
from multiple physical interfaces
Identify trunking modes supported on the storage
system:
– Single mode―failover
– Multimode―increased bandwidth
Explain how VLANs increase IP network security by
tagging specific packets with the appropriate VLAN ID
MODULE SUMMARY
Exercise
Module 5: Networking
Estimated Time: 45 minutes
EXERCISE
Physical Storage
Physical Storage
Management
Module 6
Data ONTAP® 7.3 Fundamentals
Module Objectives
MODULE OBJECTIVES
Disks
DISKS
Supported Disk Topologies
FC-AL
SERIAL ATA
Serial ATA (SATA) is a successor to the Advanced Technology Attachment (ATA) standard.
NetApp uses this topology to connect supported storage controllers and disk shelves over a
high-speed serial link.
Disk Qualification
DISK QUALIFICATION
NetApp storage systems only support disks qualified by NetApp. Disks must be purchased
from NetApp or an approved reseller.
UNQUALIFIED DISKS
Data ONTAP automatically detects unqualified disks. If you attempt to use an unqualified
disk, Data ONTAP responds by issuing a "delayed forced shutdown" warning, giving you 72
hours to remove and replace the unqualified disk before a forced system shutdown occurs.
In addition, when Data ONTAP detects an unqualified disk it takes the following actions:
• Provides notification through syslog entries, console messages, and AutoSupport
• Generates an automatic error message and delayed forced shutdown if the
/etc/qual_devices file is modified
• Marks unsupported drives as “unqualified.”
DISK QUALIFICATION
If you install a new disk drive into your disk shelf and the storage system responds with an
unqualified disk error message, you must remove the disk and replace it with a qualified disk.
To correct an unqualified disk error and avoid a forced shutdown, complete the following
steps:
1. Remove any disk drives not provided by NetApp or an authorized NetApp vendor or reseller.
2. To update your list of qualified disks, download and install the most recent
/etc/qual_devices file from http://now.netapp.com/NOW/download/tools/diskqual/.
3. If the unqualified disk error message persists after installing an up-to-date
/etc/qual_devices file, try reinstalling the /etc/qual_devices file.
4. If the reinstallation fails, remove the unqualified disk and contact NetApp Technical Support.
The /etc/disk_fw directory should now contain all the current disk firmware images.
Disk Ownership
DISK OWNERSHIP
HARDWARE-BASED OWNERSHIP
In hardware-based disk ownership, disk ownership and pool membership are determined by
the slot position of the HBA or onboard port, and by the shelf module port to which the HBA
is connected.
SOFTWARE-BASED OWNERSHIP
In software-based disk ownership, disk ownership and pool membership are determined by
the storage system administrator. Data ONTAP might also set disk ownership and pool
membership automatically, depending on the initial configuration. Slot position and shelf
module port do not affect disk ownership.
Disk Ownership
system> sysconfig -r
Volume vol0 (online, normal) (block checksums)
Plex /vol0/plex0 (online, normal, active)
RAID group /vol0/plex0/rg0 (normal)
Disk ID = Loop_id.Device_id
DEVICE OWNERSHIP
DISK ID
Disks are numbered in all storage systems. Disk numbering allows you to:
• Interpret messages displayed on your screen such as command output or error messages
• Quickly locate a disk associated with a displayed message
To determine a disk ID, use the sysconfig -r, vol status -r, or aggr status -r
commands.
Disk Ownership: Loop_id
[Figure: rear views of two controllers showing PCI slots 1-4, console port, onboard Ethernet
ports e0a-e0d, RLM, onboard FC ports 0a-0d with link indicators, LVD SCSI port 0e, and
AC power inputs]
Disk Ownership: Device_id
[Figure: DS14 shelf bay numbers, 13 through 0, and the shelf ID used to derive each disk's
FC loop ID]
FC LOOP IDS
The table above shows the numbering system for FC loop IDs.
For DS14 Series shelves, the following IDs are reserved (not used): 0-15, 30-31, 46-47,
62-63, 78-79, 94-95, and 110-111.
The numbering system in the table above can be summarized by the following formula:
DS14 Disk/Loop ID = DS14 Shelf ID * 16 + Bay Number
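The formula can be checked with a small Python helper. The function name and the
bay-range validation are illustrative additions, not part of Data ONTAP:

```python
def ds14_loop_id(shelf_id: int, bay: int) -> int:
    """FC loop ID for a disk in a DS14 shelf: shelf ID * 16 + bay number.

    Bays are numbered 13 down to 0; loop IDs 0-15 are reserved, so
    usable shelf IDs start at 1.
    """
    if not 0 <= bay <= 13:
        raise ValueError("DS14 bay numbers run from 0 to 13")
    return shelf_id * 16 + bay

print(ds14_loop_id(1, 0))   # first bay of shelf 1 -> loop ID 16
print(ds14_loop_id(2, 13))  # last bay of shelf 2 -> loop ID 45
```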
The fcstat device_map Command
Matching Disk Speeds
If disks with different speeds are present on a NetApp system (for example, 10,000 RPM and
15,000 RPM disks), Data ONTAP attempts to avoid mixing them in one aggregate or
traditional volume.
By default, Data ONTAP selects disks:
• With the same speed when creating an aggregate or traditional volume in response to the following
commands:
• aggr create
• vol create
• That match the speed of existing disks in the aggregate or traditional volume that requires
expansion or mirroring in response to the following commands:
• aggr add
• aggr mirror
• vol add
• vol mirror
If you use the -d option to specify a list of disks for commands that add disks, the operation
fails if disk speeds differ from each other or differ from the speed of disks already included in
the aggregate or traditional volume. The commands for which the -d option will fail in this
case are aggr create, aggr add, aggr mirror, vol create, vol add, and vol
mirror. For example, if you enter aggr create vol4 -d 9b.25 9b.26 9b.27 and two
of the disks have different speeds, the operation fails.
When using the aggr create or vol create commands, you can use the -R rpm option to
specify the type of disk used based on its speed. The -R rpm option is necessary only for
systems using disks with different speeds. Typical values for rpm are 5400, 7200,
10000, and 15000. The -R option cannot be used with the -d option.
If you are going to specify a disk speed and you are not sure of its actual speed, use the
sysconfig -r command to first determine actual disk speed.
NOTE: It is possible to use the -f option to override the RPM check, but NetApp does not
recommend this practice. Using the -f option in this situation can produce an aggregate or
traditional volume that does not meet performance expectations.
Data ONTAP periodically checks to see if adequate spares are available for the storage
system. Only disks with matching speeds are considered acceptable spares. However, if a disk
fails and a spare with matching speed is not available, Data ONTAP may use a spare with a
different speed for RAID reconstruction.
NOTE: If an aggregate or traditional volume includes disks with different speeds, and
adequate spares are present, you can use the disk replace command to replace
mismatched disks. Data ONTAP uses Rapid RAID Recovery to copy these disks to more
appropriate replacements.
Using Multiple Disk Types in an Aggregate
Spare Disks
SPARE DISKS
ADDING SPARES
You can add spare disks to an aggregate to increase its capacity. If a spare is larger than the
existing data disks, it becomes the parity disk, but its excess capacity remains unused unless
another disk of similar size is added. That second larger disk can then make full use of its
capacity.
Sizing
SIZING
Disk Sizing
DISK SIZING
Right-Sizing
RIGHT-SIZING
Disk drives in the same size category can differ slightly in capacity, depending on the
manufacturer and model. Data ONTAP "right-sizes" these disks so that the usable disk space
is the same for all of them. Right-sizing ensures that disks are compatible regardless of
manufacturer.
When you add a new disk, Data ONTAP reduces the amount of space available for user data
on that disk by rounding down. This maintains compatibility across disks from different
manufacturers. This means that the available disk space displayed using an informational
command such as sysconfig is less than each disk’s rated capacity. The table above,
reprinted from the Storage Management Guide, shows how Data ONTAP rounds down
available disk space.
NOTE: Existing disks in an upgraded system are not automatically right-sized. Right-sizing
is applied only to disks that are added to the storage system. To compare physical space and
usable space, and to determine if disks are right-sized, use sysconfig -r.
FC VERSUS ATA
FC drives have 520 bytes per sector, while ATA drives have 512 bytes per sector. When data
is written to an FC disk, the checksum can be saved within each sector. On ATA drives,
however, the checksum must be stored in separate sectors: one checksum sector for every
eight data sectors. As a result, a disk with 512 bytes per sector provides only 8/9 of the space
of an equivalent disk with 520 bytes per sector, a capacity loss of about 11%.
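A short Python sketch of the capacity arithmetic. The function name is hypothetical; it
assumes the one-checksum-sector-per-eight-data-sectors layout described above:

```python
def ata_usable_bytes(raw_bytes: int, sector: int = 512) -> int:
    """Usable space on a 512-byte-per-sector drive when one sector in
    every nine is consumed by block checksums (8 data + 1 checksum)."""
    sectors = raw_bytes // sector
    data_sectors = sectors * 8 // 9  # 8 of every 9 sectors hold data
    return data_sectors * sector

nine_sectors = 9 * 512
print(ata_usable_bytes(nine_sectors))  # 4096 bytes usable of 4608 raw
```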
Usable Disk Space
Similar to the UNIX FFS (Fast File System), the storage system reserves 10% of its total
disk space for efficiency. The df command does not count this 10% as part of the file system
space.
Disk Space Allocation: Aggregates
Aggregates with a Traditional Volume―Each aggregate has 10% allocated for WAFL.
Traditional Volumes―Each volume has 20% allocated for Snapshot reserve. The remainder
is used for client data.
Snapshot Reserve―The amount of space allocated for Snapshot reserve is adjustable. To use
this space for data (not recommended), you must manually override the 20% allocation used
for Snapshot copies.
[Figure: aggregate space bar showing 10% WAFL overhead and 90% WAFL aggregate
space, of which 80% is for data and 20% is the adjustable Snapshot reserve]
AGGREGATES
The size of an aggregate depends on the number and size of disks allocated to it. In an
aggregate, 10% is allocated for WAFL.
TRADITIONAL VOLUMES
An aggregate can include only one traditional volume. A traditional volume has 20%
allocated for Snapshot reserve, with no aggregate overhead.
SNAPSHOT RESERVE
Like flexible volumes, the space used for the Snapshot reserve in a traditional volume can be
expanded into user space as required by the system. This expansion could occur, for example,
if numerous changes are made to the active file system. If necessary, the Snapshot reserve
expands into user space as Snapshot copies are made, regardless of the designated Snapshot
reserve percentage.
You can manually reallocate disk space using the snap reserve command. However,
unless you specifically readjust the user data space on a volume, it will never exceed 80% of
the usable disk space.
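The allocation above is simple arithmetic. The following Python sketch is illustrative only;
the function name is invented, and it assumes the default percentages are taken in sequence
(10% WAFL off the top, then 20% Snapshot reserve from the remainder):

```python
def traditional_volume_space(raw_gb: float, snap_reserve: float = 0.20) -> dict:
    """Split raw aggregate capacity for a traditional volume: 10% WAFL
    overhead, then a Snapshot reserve (default 20%) out of the remainder."""
    wafl = raw_gb * 0.10
    usable = raw_gb - wafl
    snapshot = usable * snap_reserve
    return {"wafl": wafl, "snapshot_reserve": snapshot,
            "user_data": usable - snapshot}

# 100 GB raw -> 10 GB WAFL, 18 GB Snapshot reserve, 72 GB user data
print(traditional_volume_space(100.0))
```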
Disk Space Allocation: Flexible Volumes
Aggregates―Each aggregate has 5% allocated for Snapshot reserve and 10% allocated for
WAFL.
Flexible Volumes―Each volume has 20% allocated for Snapshot reserve. The remainder is
used for client data.
Snapshot Reserve―The amount of space allocated for Snapshot reserve is adjustable. To use
this space for data (not recommended), you must manually override the allocation used for
Snapshot copies.
[Figure: aggregate space bar showing 10% WAFL overhead, FlexVol space (each FlexVol
split 80% data / 20% .snapshot reserve), and a 5% adjustable aggregate Snapshot reserve]
AGGREGATES
The size of an aggregate depends on the number and size of disks allocated to it. Five percent
of the aggregate is allocated as Snapshot reserve for aggregate Snapshot copies, while 10%
of the aggregate is allocated for WAFL.
FLEXIBLE VOLUMES
An aggregate can include more than one flexible volume. However, each flexible volume
allocates 20% for Snapshot reserve. To use the Snapshot reserve space for data (not
recommended), you must manually override the allocation used for Snapshot copies, allowing
the remainder to be used for client data.
SNAPSHOT RESERVE
The Snapshot reserve for aggregates does not automatically expand into the WAFL aggregate
space. When space is needed for Snapshot copies, by default, the older aggregate Snapshot is
deleted to accommodate a new Snapshot. You can adjust the Snapshot reserve size in an
aggregate using the snap reserve -A command.
In volumes, the space used for the Snapshot reserve expands into user space as required by
the system. This expansion could occur, for example, if numerous changes are made to the
active file system. If necessary, the Snapshot reserve expands into the user space as Snapshot
copies are taken, regardless of the designated Snapshot reserve percentage. You can manually
reallocate disk space using the snap reserve command.
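A comparable sketch for flexible volumes, again illustrative only: the function name is
invented, and it assumes the 10% WAFL overhead and 5% aggregate Snapshot reserve are
both taken from raw capacity, with the remainder split evenly across volumes:

```python
def flexvol_space(raw_gb: float, num_volumes: int = 1) -> dict:
    """Illustrative split of raw aggregate capacity for flexible volumes.

    Each volume keeps 20% of its share as its own Snapshot reserve and
    uses the remaining 80% for client data.
    """
    wafl = raw_gb * 0.10
    aggr_snap = raw_gb * 0.05
    per_volume = (raw_gb - wafl - aggr_snap) / num_volumes
    return {
        "wafl": wafl,
        "aggr_snap_reserve": aggr_snap,
        "per_volume_user_data": per_volume * 0.80,
        "per_volume_snap_reserve": per_volume * 0.20,
    }

print(flexvol_space(100.0))
```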
Disk Protection
DISK PROTECTION
RAID Groups
RAID GROUPS
A RAID group includes several disks linked together in a storage system. While there are
different implementations of RAID, Data ONTAP supports only RAID 4 and RAID-DP. To
understand how to manage disks and volumes, it is important to first understand the concept
of RAID.
In Data ONTAP, each RAID 4 group consists of one parity disk and one or more data disks.
The storage system assigns the role of parity disk to the largest disk in the RAID group.
When a data disk fails, the storage system identifies the data on the failed disk and rebuilds a
hot spare with that data.
RAID-DP provides double-parity protection against a single- or double-disk failure within a
RAID group. The minimum number of disks in a RAID-DP group is three—one data disk,
one parity disk, and one double-parity (DP) disk.
NOTE: If a parity disk fails, it can be rebuilt from data on the data disks.
RAID 4 Technology
RAID 4 TECHNOLOGY
RAID 4 protects against data loss due to a single-disk failure within a RAID group.
Each RAID 4 group contains the following:
• One parity disk (assigned to the largest disk in the RAID group)
• One or more data disks
Using RAID 4, if one disk block goes bad, the parity disk in that disk's RAID group is used to
recalculate the data in the failed block, and then the block is mapped to a new location on the
disk. If an entire disk fails, the parity disk prevents any data from being lost. When the failed
disk is replaced, the parity disk is used to automatically recalculate its contents. This is
sometimes referred to as row parity.
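Row parity is simple XOR arithmetic, which a short Python demonstration makes concrete.
The block values are illustrative; real parity operates on entire disk blocks:

```python
from functools import reduce

def parity(blocks):
    """Row parity: bitwise XOR across the data blocks in a stripe."""
    return reduce(lambda a, b: a ^ b, blocks)

def reconstruct(surviving_blocks, parity_block):
    """Rebuild a lost block: XOR the parity with the surviving blocks."""
    return reduce(lambda a, b: a ^ b, surviving_blocks, parity_block)

stripe = [0b0011, 0b0101, 0b0111]  # data blocks in one stripe
p = parity(stripe)                 # value written to the parity disk
# The disk holding 0b0101 fails; recompute its contents:
print(reconstruct([0b0011, 0b0111], p) == 0b0101)  # True
```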
RAID-DP Technology
RAID-DP protects against data loss that results from
double-disk failures in a RAID group
A RAID-DP group requires a minimum of three disks:
– One parity disk
– One double-parity disk
– One data disk
RAID-DP TECHNOLOGY
RAID-DP technology protects against data loss due to a double-disk failure within a RAID
group.
Each RAID-DP group contains the following:
• One data disk
• One parity disk
• One double-parity disk
RAID-DP employs the traditional RAID 4 horizontal row parity. However, in RAID-DP, a
diagonal parity stripe is calculated and committed to the disks when the row parity is written.
For more information about RAID-DP processes, see Technical Report 3298 at
http://www.netapp.com/library/tr/3298.pdf.
RAID Group Size
RAID-DP
NetApp Platform                          Minimum      Maximum      Default
                                         Group Size   Group Size   Group Size
All storage systems (with SATA disks)    3            16           14
All storage systems (with FC disks)      3            28           16

RAID 4
NetApp Platform                          Minimum      Maximum      Default
                                         Group Size   Group Size   Group Size
FAS270                                   2            14           7
All other storage systems (with SATA)    2            7            7
All other storage systems (with FC)      2            14           8
RAID groups can include anywhere from 2 to 28 disks, depending on the platform and RAID
type. For best performance and reliability, NetApp recommends using the default RAID
group size.
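The relationship between group size and usable data disks can be captured in a small Python
helper (the function name is hypothetical): RAID-DP reserves two disks per group for parity,
and RAID 4 reserves one.

```python
def raid_data_disks(group_size: int, raid_type: str = "raid_dp") -> int:
    """Data disks in a RAID group: RAID-DP reserves two parity disks
    (parity + double-parity), RAID 4 reserves one."""
    parity_disks = 2 if raid_type == "raid_dp" else 1
    if group_size <= parity_disks:
        raise ValueError("group too small to hold any data disks")
    return group_size - parity_disks

print(raid_data_disks(16))          # default FC RAID-DP group -> 14
print(raid_data_disks(8, "raid4"))  # default FC RAID 4 group -> 7
```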
Data Reliability
DATA RELIABILITY
Media scrubbing checks disk blocks for physical errors. Disk scrubbing checks disk blocks on
all disks in the storage system for media errors and logical parity errors.
If Data ONTAP identifies media errors or inconsistencies, it repairs them by reconstructing
the data from parity data, and then rewriting the data back to the data disk. Disk scrubbing
reduces the chance of data loss from media errors that occur during reconstruction.
RAID Checksums
RAID CHECKSUMS
Comparing Media and RAID Scrubs
A media scrub:
• Is always running in the background when the storage system is not busy
• Looks for unreadable blocks at the lowest level (0s and 1s)
• Is unaware of the data stored in a block
• Takes corrective action when it finds too many unreadable blocks on a disk (sends
warnings or fails a disk, depending on findings)

A RAID scrub:
• Is enabled by default
• Can be scheduled or disabled (disabling is not recommended)
• Uses RAID checksums
• Reads a block and then checks the data
• If it finds a discrepancy between the RAID checksum and the data read, re-creates the
data from parity and writes it back to the block
• Ensures that data has not become stale by reading every block in an aggregate, even
when users haven’t accessed the data
DISK SCRUB
Storage systems use disk scrubbing to protect data from media errors or bad sectors on a disk.
Each disk in a RAID group is scanned for errors. If errors are identified, they are repaired by
reconstructing data from parity and rewriting the data. Without this process, a disk media
error occurring while the group is running in degraded mode could cause a multiple-disk failure.
Automatic RAID scrub is enabled by default. If you prefer to control the timing of RAID
scrubs, you can turn off the automatic scrubs. You can also manually start and stop disk
scrubbing regardless of the current value (on or off) of the raid.scrub.enable option.
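The options described above can be sketched as a short console session (commands as described in this section; timing and output omitted):

```
# Disable the automatically scheduled RAID scrub (not recommended):
system> options raid.scrub.enable off

# Manually start and stop a disk scrub, regardless of that setting:
system> disk scrub start
system> disk scrub stop
```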
About Disk Scrubbing
RAID Group Options
options raid.timeout
options raid.reconstruct.perf_impact
options raid.scrub.enable
options raid.scrub.perf_impact
EXAMPLE                              RESULT
options raid.timeout 36              Changes the amount of time the system operates in
                                     degraded mode from the default (24 hours) to 36 hours.
aggr options aggr0 raidtype raid4    Changes the RAID type of RAID groups for aggr0 to
                                     RAID 4. The default RAID type is RAID-DP.
disk Commands
DISK COMMANDS
EXAMPLE                      RESULT
disk fail 4a.16              Fails the file system disk 4a.16.
disk remove 4a.17            Removes the spare disk 4a.17.
disk swap                    Prepares (quiets) the external SCSI bus for a swap (not required for FC-AL loops).
disk unswap                  Undoes a disk swap (not required for FC-AL loops).
disk scrub stop              Stops disk scrubbing.
disk replace start 4a.16     Replaces disk 4a.16 with a hot spare.
disk zero spares             Zeros all unzeroed RAID spare disks.
disk sanitize start 4a.18    Starts removal of all disk data by overwriting disk 4a.18 several times.
Disk Failures
DISK FAILURES
If you have a disk failure, you can use the sysconfig -r command to determine which disk
has failed. You can also obtain the same information using the vol status –r or aggr
status –r commands.
Ideally, your system is equipped with appropriate hot spare disks. In the figure above, hot
spares are part of the storage system, but are not part of a RAID group. When a disk fails in
this configuration, the storage system automatically rebuilds data or parity on an available hot
spare disk.
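The three equivalent status commands mentioned above can be sketched as follows (each reports RAID status, including any failed or reconstructing disks):

```
# Identify a failed disk from the console:
system> sysconfig -r
system> vol status -r
system> aggr status -r
```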
REPLACING DISKS
In addition to using hot spares, you can also replace a failed disk by hot swapping it, which
means that the disk is removed or installed while the storage system is running. Hot swapping
allows new disks to be added with minimal interruption to the file system.
If two disks are removed at the same time from a RAID 4 group, a double-disk failure occurs
and data loss results. If the volume uses RAID-DP, the data is protected.
If a volume contains more than one RAID 4 group, two disks in a volume can fail as long as
the disks are not in the same RAID group.
Degraded Mode
Degraded mode occurs when:
– A single disk fails in a RAID 4 group with no spares
– Two disks fail in a RAID-DP group with no spares
Degraded mode operates for 24 hours, during which time:
– Data is still available
– Performance is less-than-optimal
Data must be recalculated from the parity until the failed disk
is replaced
CPU usage increases to calculate from parity
System shuts down after 24 hours
To change time interval, use the options
raid.timeout command
If an additional disk in the RAID group fails during
degraded mode, the result will be data loss
DEGRADED MODE
Replacing a Failed Disk by Hot Swapping
Replacing Failed Disks
When replacing a failed disk, the size of the new disk must be equal to or larger than the
usable space of the replaced disk to accommodate all the data blocks on the failed disk.
If the usable space on the replacement disk is larger than the failed disk, the replacement disk
is right-sized to the capacity of the failed disk. The extra space on the disk is not usable.
Aggregates
AGGREGATES
Aggregates
AGGREGATES
Naming Rules for Aggregates
AGGREGATE NAMES
Aggregate names must follow the naming conventions shown above. The same rules apply to
naming volumes.
Adding an Aggregate
ADDING AN AGGREGATE
Creating an Aggregate Using the CLI
The following command creates the aggregate newfastaggr, with 20 disks, the default RAID
group size, and all disks with 15,000 RPM:
aggr create newfastaggr -R 15000 20
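Other common forms of the command can be sketched as follows (the aggregate name and disk IDs here are illustrative, not from the course example):

```
# Let Data ONTAP select 16 disks automatically:
system> aggr create aggr2 16

# Or name the exact disks to use with -d:
system> aggr create aggr2 -d 0b.16 0b.17 0b.18
```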
Common Aggregate Commands
aggr create <aggrname> [options]
<disklist>
aggr add <aggrname> [options] <disklist>
aggr status <aggrname> [options]
aggr rename <aggrname> <new-aggrname>
aggr show_space [-b] <aggrname>
aggr offline {<aggrname> | <plexname>}
aggr online {<aggrname> | <plexname>}
aggr destroy {<aggrname> | <plexname>}
Aggregate commands are similar to vol commands except that they are performed on an
aggregate. In fact, many aggr commands work on traditional volumes, and many vol
commands work on aggregates. For a complete list of commands, see your product
documentation.
Creating an Aggregate Using
the FilerView Aggregate Wizard
Aggregate Size
AGGREGATE SIZE
Module Summary
MODULE SUMMARY
Exercise
Module 6: Physical Storage
Management
Estimated Time: 60 minutes
EXERCISE
Logical Storage
Logical Storage
Management
Module 7
Data ONTAP® 7.3 Fundamentals
Module Objectives
MODULE OBJECTIVES
Storage Concepts
STORAGE CONCEPTS
WAFL File System
To recap, an aggregate:
– Is a collection of disks
– Represents physical storage
A flexible volume is a collection of stored data
(including the directory) within an aggregate
WAFL keeps track of:
– The aggregate
– The flexible volumes in the aggregate
– All the data in the flexible volumes
In Data ONTAP, the file system is called Write
Anywhere File Layout (WAFL)
Volumes
VOLUMES
Volumes are file systems that contain user data accessible through one or more access
protocols supported by Data ONTAP, including NFS, CIFS, HTTP, WebDAV, FTP, FCP,
and iSCSI. To maintain multiple, space-efficient, point-in-time data images for the purpose of
backup and recovery, you can create one or more Snapshot copies of the data in a volume.
Data ONTAP limits a storage system to 100 aggregates, but within those aggregates you can
create up to 500 traditional and flexible volumes.
Naming Rules for Volumes
Root Volumes
ROOT VOLUMES
The storage system contains a root volume that was created when the system was initially set
up. The default root volume name is /vol/vol0.
Storage systems with Data ONTAP 7.0 or later preinstalled have a FlexVol volume for a root
volume. Systems running earlier versions of Data ONTAP have a traditional root volume.
Root Volumes: Example
/cheryl Directory
Each storage system has only one root volume, although the designated root volume can be
changed. The root volume is used to start up the storage system. It is the only volume with
root attributes, meaning that its /etc directory is used for configuration information.
Volume path names begin with /vol. For example:
/vol/vol0
where vol0 is the name of the volume
/vol/users/cheryl
where cheryl is a directory on the users volume
NOTE: The /vol path is not a directory. It is a special virtual root path that the storage
system uses to mount other directories. You cannot mount /vol to view all of the volumes on
the storage system. You must mount each volume separately.
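The note above can be sketched from a UNIX NFS client (the client prompt, storage system host name, and mount-point paths are illustrative):

```
# Each volume must be mounted separately; /vol itself cannot be mounted:
client# mount system:/vol/vol0 /mnt/vol0
client# mount system:/vol/users /mnt/users
```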
Traditional Volumes
TRADITIONAL VOLUMES
Traditional Volumes
TRADITIONAL VOLUMES
Aggregates and FlexVol Volumes
In FlexVol® volumes:
– The primary unit of data storage and management is still the WAFL volume
– The aggregate contains the physical storage
– Volumes are no longer tied to physical storage
– There can be multiple FlexVol volumes per aggregate
– Storage space can be dynamically reallocated
Flexible volumes are logical data containers that can be sized, resized, managed, and moved
independently from the underlying physical storage without disrupting normal operations.
As shown in the figure above, an aggregate is defined as a pool of many disks from which
space is allocated to volumes (volumes are shown as FlexVol and FlexClone entities). From
an administrator’s perspective, volumes remain the primary unit of data management. But
transparently to the administrator, flexible volumes now refer to logical entities, not (directly)
to physical storage.
Flexible volumes are volumes that are no longer bound by the limitations of the disks on
which they reside. A FlexVol volume is simply a “pool” of storage that can be sized based on
how much data you want to store in the volume, not the physical disk capacity. You can
increase or decrease a FlexVol volume on the fly without any downtime. In a flexible
volume, all spindles in the aggregate are available at all times. Flexible volumes can run I/O-bound
applications much faster than traditional volumes of the same size.
Flexible volumes provide these additional benefits while preserving the familiar semantics of
volumes and the current set of volume-specific data management and space-allocation
capabilities.
How Aggregates and FlexVol Volumes
Work
Create aggregate
– RAID groups are created as a result

Create FlexVol 1
– Only metadata space is used
– There is no preallocation of blocks to a specific volume

Create FlexVol 2
– WAFL allocates aggregate space as data is written

Populate volumes
Aggregates and FlexVol Volume
Components
FlexVol volumes are logical storage containers that:
– Can grow or shrink nondisruptively
– Can be just a few MBs in size or as large as (or larger than) the aggregate
– Use physical storage space like qtrees
– Preserve all other volume-level properties

Aggregates are a physical storage pool.
FLEXIBLE VOLUMES
A flexible volume (also called a FlexVol volume) is a volume that is loosely coupled to its
container aggregate. Because the volume is managed separately from the aggregate, you can
create small FlexVol volumes (20 MB or larger), and then increase or decrease the size of
FlexVol volumes in increments as small as 4 kB.
Advantages of flexible volumes:
• You can create flexible volumes almost instantaneously. These volumes:
• Can be as small as 20 MB
• Are limited to aggregate capacity (if guaranteed)
• Can be as large as the volume capacity supported for your storage system (not guaranteed)
• You can increase or decrease the size of a flexible volume while it is online, allowing you to:
• Resize without disruption
• Resize in any increment (as small as 4 kB)
• Resize quickly
Flexible Volumes
FLEXIBLE VOLUMES
Flexible Volumes
FLEXIBLE VOLUMES
BACKUP
You can size your flexible volumes for convenient, volume-wide data backup.
Increasing I/O Performance With FlexVol
Volumes
Regular volumes:
– Volume performance is limited by the number of disks in the volume
– "Hot" volumes can't be helped by disks on other volumes

FlexVol volumes:
– Spindle sharing makes total aggregate performance available to all volumes
Improving Space Utilization With FlexVol
Volumes
Traditional volumes:
– Free space is scattered across volumes
– Free space is not available to other volumes

FlexVol volumes:
– No preallocation of free space
– Free space is available for use by other volumes or new volumes
Creating a Flexible Volume
When you create a FlexVol volume, you must provide the following information:
• A name for the volume
• The name of the container aggregate
The size of a FlexVol volume must be at least 20 MB and no more than 16 TB (or whatever is
the largest size your system configuration supports).
In addition, you can provide the following optional FlexVol volume values:
• Language (the default language is the language of the root volume)
• Space-guarantee setting for the new volume
Creating a Flexible Volume Using the CLI
EXAMPLE                          RESULT
vol create vol2 2                Creates the new volume, vol2, from spares. You can specify disks of
                                 a certain size, enter a specific list of disks, or specify how many to add.
vol create vol2 -n 3             Displays the command that the system will execute without actually
                                 making any changes. In this example, vol create vol2 -d
                                 0b.28 0b.27 0b.26 is returned. The -n option is useful for
                                 displaying automatically selected disks.
vol create flexvol aggr1 20G     Creates the new 20 GB volume, flexvol, on aggr1.
vol add vol1 3                   Adds three disks to the existing traditional volume, vol1.
vol status vol1                  Displays the volume size, options, and so on, for vol1.
vol rename vol2 vol3             Changes the name of volume vol2 to vol3.
vol options vol3                 Displays current option settings for vol3.
vol offline vol3                 Removes volume vol3 from active use without restarting.
vol size flexvol 30G             Changes the size of the flexvol volume to 30 GB.
vol size flexvol +10g            Increases the size of the flexvol volume by 10 GB.
Creating a Flexible Volume Using FilerView
Resizing a Flexible Volume
Use the vol size command to resize a flexible volume.
Syntax:
vol size <vol-name> [[+|-]<size>[k|m|g|t]]
Command                    Result
vol size flexvol 50m       FlexVol volume size is changed to 50 MB
vol size flexvol +50m      FlexVol volume size is increased by 50 MB
vol size flexvol -25m      FlexVol volume size is decreased by 25 MB
FlexClone Volume Clones
How Volume Cloning Works
Volume cloning:
– Starts with a volume
– Takes a Snapshot copy of the volume
– Creates a clone (a new volume based on the Snapshot copy)
– Modifies the original volume
– Modifies the cloned volume

Result: Independent volume copies are efficiently stored.
FlexClone volumes are managed similarly to regular FlexVol volumes, with a few key
differences.
The following is a list of important facts about FlexClone volumes:
• FlexClone volumes are a point-in-time, writable copy of the parent volume. Changes made to the
parent volume after the FlexClone volume is created are not reflected in the FlexClone volume.
• You can only clone FlexVol volumes. To create a copy of a traditional volume, you must use the
vol copy command, which creates a distinct copy with its own storage.
• Before you create FlexClone volumes, you must install the FlexClone license.
• FlexClone volumes are fully functional volumes managed just like the parent volume using the
vol command.
• FlexClone volumes always exist in the same aggregate as parent volumes.
• FlexClone volumes can be cloned.
• FlexClone volumes and parent volumes share the same disk space for common data. This means
that creating a FlexClone volume is instantaneous and requires no additional disk space (until
changes are made to the clone or parent).
• A FlexClone volume is created with the same space guarantee as the parent.
• While a FlexClone volume exists, there are some operations on the parent that are not allowed.
• You can sever the connection between the parent and the clone. This is called splitting the
FlexClone volume. Splitting removes all restrictions on the parent volume and causes the
FlexClone to use its own storage.
IMPORTANT: Splitting a FlexClone volume from its parent volume deletes all existing Snapshot
copies of the FlexClone volume and disables the creation of new Snapshot copies while the
splitting operation is in progress.
• Quotas applied to a parent volume are not automatically applied to the clone.
• When a FlexClone volume is created, existing LUNs in the parent volume are also present in the
FlexClone volume, but are unmapped and offline.
Flexible Volume Clone Syntax
Use the vol clone create command to create a flexible
volume clone.
Syntax:
vol clone create <vol-name> [-s none | file |
volume] -b <parent_flexvol> [<parent_snapshot>]
The following is an example of a CLI entry used to create a flexible
volume clone:
vol clone create clone1 –b flexvol1
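A clone can also be based on a specific existing Snapshot copy of the parent, per the syntax above. A sketch (the Snapshot name nightly.0 is illustrative):

```
# Create clone1 from an existing Snapshot copy of flexvol1,
# with a volume space guarantee:
system> vol clone create clone1 -s volume -b flexvol1 nightly.0
```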
Splitting Volumes
SPLITTING VOLUMES
Splitting a FlexClone volume from its parent removes any space optimizations currently
employed by the FlexClone volume. After the split, both the FlexClone volume and the
parent volume require the full space allocation specified by their space guarantees. After the
split, the FlexClone volume becomes a normal FlexVol volume.
When splitting clones, keep in mind the following:
• When you split a FlexClone volume from its parent, all existing Snapshot copies of the FlexClone
volume are deleted.
• During the split operation, no new Snapshot copies of the FlexClone volume can be created.
• Because the clone-splitting operation is a copy operation that could take some time to complete,
Data ONTAP provides the vol clone split stop and vol clone split status
commands to stop clone-splitting or check the status of a clone-splitting operation.
• The clone-splitting operation executes in the background and does not interfere with data access to
either the parent or the clone volume.
• If you take the FlexClone volume offline while clone-splitting is in progress, the operation is
suspended. When you bring the FlexClone volume back online, the splitting operation resumes.
• Once a FlexClone volume and its parent volume have been split, they cannot be rejoined.
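The split-management commands described above can be sketched as a short session (the clone name is illustrative):

```
system> vol clone split start clone1    # begin copying shared blocks
system> vol clone split status clone1   # check progress of the split
system> vol clone split stop clone1     # abort the split if needed
```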
vol clone split Command
Destroying a Volume
DESTROYING A VOLUME
Qtrees
QTREES
Qtrees
QTREES
CREATING QTREES
When you want to group files without creating a volume, you can create qtrees instead. When
creating qtrees, you can group files using any combination of the following criteria:
• Security style
• Oplocks setting
• Quota limit
• Backup unit
QTREE LIMITATIONS
The primary limitation of qtrees is that a maximum of 4,995 qtrees are allowed per
volume on a storage system.
NOTE: When you enter a df command with a qtree path name on a UNIX client, the
command displays the smaller of the client file system limit and the storage system disk
space, which can make the qtree look fuller than it actually is.
Adding a Qtree
ADDING A QTREE
QTREE ADVANTAGES
BACKING UP QTREES
You can back up individual qtrees to:
• Add flexibility to your backup schedules
• Modularize backups by backing up only one set of qtrees at a time
• Limit the size of each backup to one tape
Many NetApp software products (such as SnapMirror and SnapVault) are "qtree-aware."
Because a qtree is a smaller increment than the entire volume, working at the qtree level
lets you perform backup and recovery of files quickly.
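Creating and inspecting qtrees can be sketched as follows (the volume and qtree names are illustrative):

```
# Create a qtree in an existing volume, then list the volume's
# qtrees with their security styles and oplocks settings:
system> qtree create /vol/vol1/eng
system> qtree status vol1
```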
Module Summary
MODULE SUMMARY
Exercise
Module 7: Logical Storage
Management
Estimated Time: 40 minutes
EXERCISE
CIFS
CIFS
Module 8
Data ONTAP Fundamentals
CIFS
Module Objectives
MODULE OBJECTIVES
CIFS Overview
CIFS OVERVIEW
CIFS Definition
CIFS DEFINITION
The Common Internet File System (CIFS) is a Microsoft network file-sharing protocol that
evolved from the Server Message Block (SMB) protocol.
When using CIFS, any application that processes network I/O can access and manipulate files
and folders (directories) on remote servers similar to the way it accesses and manipulates files
and folders on the local system.
User Authentication
USER AUTHENTICATION
For information about methods of authenticating users other than Active Directory, see the
Data ONTAP CIFS Administration course.
Storage System Joining a Domain
MEMBER SERVER
The storage system joins the Windows topology as a member server with member-server
privileges in the Active Directory environment.
User Authentication on a
Storage System in a Domain
For storage systems in a domain, domain users browse the storage system for available shares
and then request access to a share.
User authentication is performed centrally on the domain controller, which establishes a user
session with the storage system.
For user authentication on storage systems in a domain:
• Users must be authorized to access a share and its resources
• Data access on the storage system requires a network login to the storage system
Setting Up and
Configuring CIFS
Preparing for CIFS
Step 1: License CIFS
USING FILERVIEW
To license CIFS from FilerView:
1. From the left navigation pane, click Filer and then click Manage Licenses.
2. Enter the CIFS license number.
3. Click Apply.
When the CIFS license is preinstalled, the cifs setup script runs immediately after the
setup script.
Step 2: Set Up CIFS
CIFS Setup Wizard
CIFS Setup Wizard (Cont.)
FILER NAME
The name of the storage system appears on the Filer Name screen of the CIFS Setup Wizard.
You can add a description of the storage system here. This description is available from the
CLI by typing cifs comment.
NOTE: Older environments might use WINS instead of DNS to resolve names.
CIFS Setup Wizard (Cont.)
AUTHENTICATION
On the Authentication screen of the CIFS Setup Wizard, select the type of Windows
authentication your system will use.
NOTE: This module covers Windows Active Directory Domain only. For information about
other user authentication methods, see the Data ONTAP CIFS Administration course, and the
File Access and Protocols Management Guide.
CIFS Setup Wizard (Cont.)
When entering an administrator name on the CIFS Setup Wizard Domain screen, the
Windows Administrator must have domain privileges to create a new computer account in
Active Directory.
CIFS Setup Wizard (Cont.)
NOTE: If your storage system supports CIFS clients only, set the
Security style to NTFS Only. Otherwise, use the default
Multi-Protocol.
SECURITY STYLE
The Security Style screen of the CIFS Setup Wizard is where you specify the type of security
to be used as the default on the storage system.
If FilerView is used to configure CIFS, the default security style is none.
If the CLI is used to configure CIFS:
• The default security style is NTFS only if CIFS is licensed.
• The default security style is Multi-Protocol if CIFS and NFS are licensed.
NOTE: Changing the default security style does not change existing files and directories,
only newly created files and directories.
CIFS Setup Wizard (Cont.)
CIFS Services
CIFS SERVICES
EXAMPLE                            RESULT
cifs terminate -t 10 gloriaswan    Terminates the session for the host gloriaswan in 10 minutes.
                                   Alerts are sent periodically to the affected host(s).
cifs terminate -t 0                Terminates all CIFS sessions immediately for all clients.
cifs restart                       Reconnects the storage system to the domain controller, and then
                                   restarts the CIFS service.
Reconfiguring CIFS
RECONFIGURING CIFS
To reconfigure CIFS, you must run the cifs setup program again, and then enter new
configuration settings. You can use cifs setup to change the following CIFS settings:
• WINS server addresses
• Security style (multiprotocol or NTFS-only)
• Authentication (Windows domain, Windows workgroup, or UNIX password)
• File system used by the storage system
• Domain or workgroup to which the storage system belongs
• Storage system name
• Your storage system and the domain controllers in the same domain must be synchronized with
the same time source. If the time on the storage system and the time on the domain controllers are
not synchronized, the following error message is displayed:
Clock skew too great
For a detailed description of how to set up time synchronization services, see the Storage
Management Guide.
CIFS Sessions
CIFS SESSIONS
Displaying CIFS Sessions
Using the CLI for CIFS Sessions
system> cifs sessions
Server Registers as ' NetApp1 ' in Windows 2000 domain 'EDSVCS'
Filer is using en_US for DOS users
Selected domain controller \\DEVDC for authentication
========================================
PC (user) #shares #files
TPILLON2-L2K (EDSVCS\administrator - root)
1 0
system> cifs sessions -s
users
Security Information
TPILLON2-L2K (EDSVCS\administrator - root)
***************
UNIX uid = 0
user is a member of group daemon (1)
NT membership
EDSVCS\Administrator
EDSVCS\Domain Users
EDSVCS\Domain Admins
BUILTIN\Users
BUILTIN\Administrators
User is also a member of Everyone, Network Users,
Authenticated Users
***************
To display a summary of information about the storage system and connected users, use the
cifs sessions command without arguments. For information about a single connected
user, you can specify the user, machine name, or IP address, or use the -s option to obtain
security information about one or all connected users.
EXAMPLE RESULT
cifs sessions                Displays a summary of all connected users.
cifs sessions growe          Displays information about the user, files opened
                             by the user, and the access level of the open
                             files.
cifs sessions growe_NT       Displays information about the host, files opened
cifs sessions 192.168.33.3   by the host, and the access level of the open
                             files.
cifs sessions -s growe_NT    Displays security information about the connected
                             machine.
Using FilerView for CIFS Sessions
To obtain CIFS session information using FilerView, complete the following steps:
1. From the FilerView main menu, select CIFS > Session Report.
2. Enter a user name or PC name.
3. To view session information, click Sessions or Security.
NOTE: If you leave the name field blank and select one of the option buttons, a full session
or security report on all connected users is displayed. Current session status is displayed at the
bottom of the CIFS Session Report screen.
CIFS Shares
CIFS SHARES
Creating and Managing Shares
FilerView
SHARES
When creating CIFS shares, there is a limitation with Windows Computer Management. For
more information, see the Data ONTAP CIFS Administration course, and the File Access and
Protocols Management Guide.
The cifs shares Command
Display shares
cifs shares [share_name]
Add shares
cifs shares -add <share_name> <path>
[-comment description] [-forcegroup
name] [-maxusers n]
Change shares
cifs shares -change <share_name>
<path> [-comment description]
[-forcegroup name] [-maxusers n]
Delete shares
cifs shares -delete <share_name>
You can use the CLI or FilerView to create and modify shares.
PARAMETER WHAT IT DOES
sharename         Name of the share that CIFS users will use to access the
                  directory on the storage system. If the sharename already
                  exists, this command with the add option fails.
description       Describes the purpose of the share and contains only
                  characters in the current code page. It is required by the
                  CIFS protocol and is displayed in the share list in Network
                  Neighborhood. If the description contains spaces, enclose it
                  in single quotes.
-nocomment        Specifies that there is no description.
-forcegroup name  The name of the group in the UNIX group database. This group
                  will own all files created in the share.
-noforcegroup     Specifies that no particular UNIX group owns files that are
                  created in the share. Files that are created belong to the
                  same group as the owner of the file.
-nomaxusers       Specifies no maximum number of users who can have
                  simultaneous access.
The cifs shares Command: Example
Managing Share Permissions
FilerView
EXAMPLE RESULT
cifs access webfinal tuxedo     Gives full Windows NT access to the group
Full Control                    tuxedo on the webfinal share.
cifs access webfinal            Gives read/write access to the user
engineering\jbrown rw           engineering\jbrown on the webfinal share.
The cifs access Command
user      Specifies the user for the ACL entry. Can be a Windows NT user (if
          the storage system uses NT domain authentication) or the special
          group, everyone.
group     Specifies the group for the ACL entry. Can be a Windows NT group (if
          the storage system uses NT domain authentication) or the special
          group, everyone.
rights    Assigns either Windows NT or UNIX-style rights. Windows NT rights
          are: No Access, Read, Change, and Full Control.
-delete   Removes the ACL entry for the named user on the share.
The cifs access Command: Example
Client Access
CLIENT ACCESS
Mapping the Share
Mapping the Share (Cont.)
Other CIFS Administration Resources
Module Summary
MODULE SUMMARY
Exercise
Module 8: CIFS
Estimated Time: 20 minutes
EXERCISE
NFS
NFS
Module 9
Data ONTAP® 7.3 Fundamentals
NFS
Module Objectives
MODULE OBJECTIVES
NFS Overview
NFS OVERVIEW
NFS Overview
NFS OVERVIEW
The Network File System (NFS) is a protocol originally developed by Sun Microsystems in
1984 that allows users on a client computer to access files over a network as easily as if the
files were stored on the client's local disks. NFS, like many other protocols, builds on
the Open Network Computing Remote Procedure Call (ONC RPC) system. The NFS protocol
is specified in RFC 1094, RFC 1813, and RFC 3530.
Exported Resources Overview
[Diagram: storage system SS1 with volume vol0 (containing etc and home) and flexible volume flexvol1 (containing data_files, eng_files, and misc_files), connected over a network to Client1]
In the diagram above, SS1 contains resources that many users need such as data_files,
eng_files, and misc_files.
To use a resource, SS1 must export the resource, and Client1 must mount it. A user
on Client1 can then change to the directory (cd) that contains the mounted
resource and access it as if it were stored locally (assuming that permissions are set
appropriately).
Setting Up and
Configuring NFS
Setting up NFS
SETTING UP NFS
Configuring NFS Using the CLI
When you license NFS on a storage system, it starts the daemons (rpc.mountd and nfsd)
that handle the NFS RPC protocol.
The following are NFS configurable options:
• nfs.v3.enable
• nfs.v4.enable
• nfs.tcp.enable
• nfs.udp.xfersize
Configuring NFS Using FilerView
The Configure NFS screen in FilerView enables you to configure NFS for use on the storage
system.
The following are NFS configurable parameters:
• NFS License
• NFS Enable
• PCNFS Enabled
• PCNFS umask
• WebNFS Enable
• Client statistics
• NFS Over TCP
• NFS Version 3
• Report Maximum
Exporting Resources
EXPORTING RESOURCES
Exporting Resources
EXPORTING RESOURCES
Adding an Export: /etc/exports
An export entry specifies the full path to the directory that is exported. The first
option is listed following a dash; additional options are separated by commas. Host
names within an option are separated by colons.
/vol/vol0/pubs -rw=host1:host2,root=host1
/vol/vol1 -rw=host2
/vol/vol0/home
In the first entry, the -rw option allows host1 and host2 to mount the pubs
directory. In the second entry, the rw option gives read-write permissions to host2
only; all other hosts have no access. The third entry specifies no options, so all
hosts can mount the /vol/vol0/home directory as read-write.
System administrators must control how NFS clients access files and directories on a storage
system. Exported resources are resources made available to hosts. NFS clients can only
mount resources that have been exported from a storage system licensed for NFS.
To export directories, add an entry for each directory to the /etc/exports file, using the
full path to the directory and options. The full path name must include /vol.
Export specifications use the following options to restrict access:
• root = list of hosts, netgroup names, and subnets
• rw = list of hosts, netgroup names, and subnets
• ro = list of hosts, netgroup names, and subnets
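The entry format described above can be illustrated with a small parser. This is a hedged sketch of the syntax shown (path, dash-prefixed options, comma-separated options, colon-separated host lists), not NetApp code; the function name is hypothetical.

```python
# Illustrative sketch: split one /etc/exports line into its path and options.
# Option names (rw, ro, root) come from the text above; a bare flag such as
# rw (all hosts) is recorded as True, and rw=host1:host2 as a host list.
def parse_export_line(line):
    path, _, opts = line.partition(" -")
    options = {}
    for opt in opts.split(","):
        if not opt:
            continue  # line had no options at all
        name, _, value = opt.partition("=")
        options[name] = value.split(":") if value else True
    return path.strip(), options

# Example from the slide: host1 and host2 may mount pubs read-write,
# and host1 also gets root access.
path, opts = parse_export_line("/vol/vol0/pubs -rw=host1:host2,root=host1")
```

A line with no options, such as `/vol/vol0/home`, parses to an empty option set, matching the "all hosts, read-write" default described above.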
Test Your Knowledge
Exporting
EXPORTING
The exportfs Command
system> exportfs
/vol/flexvol/qtree -sec=sys,rw=10.254.232.12
/vol/vol0/home -sec=sys,rw,root=10.254.232.12,nosuid
system>
To specify which file system paths Data ONTAP automatically exports when NFS starts up,
add export entries to (or remove them from) the /etc/exports file. To manually export or
unexport file system paths, use the exportfs command in the storage system CLI.
Adding an Export Using FilerView
OPTION DESCRIPTION
Root Access        The root option specifies that root on the client has root
                   permissions for the resource when it is mounted from the
                   storage system.
Read-Write Access  The rw option gives read-write access to specific hosts. If
                   no host is specified, all hosts have read-write access.
Read-Only Access   The ro option gives read-only access to specific hosts. If
                   no host is specified, all hosts have read-only access.
Anonymous User ID  The anon option determines the UID of the root user on the
                   client.
Adding an Export Using FilerView (Cont.)
EXPORT PATH
You must specify the full path name of the exported resource as the export path.
Example:
/vol/vol0
Adding an Export Using FilerView (Cont.)
Managing NFS Exports
When Data ONTAP receives a mount request from a client, it compares the path name in the
mount request to the path names of the exported resources contained in the /etc/exports
file. If Data ONTAP finds a match, the NFS client is allowed to mount the resource.
FilerView enables you to insert and modify export lines in the file, add options for specific
hosts to an existing export line, and delete an existing export line. After adding export lines,
you can then export all resources for client access.
The FilerView Manage NFS Exports screen enables you to manage NFS exports on the
storage system. NFS clients can only mount resources after they have been exported
(meaning that they have been made available for mounting).
You can export a resource to the following targets:
• Hosts―Hosts are individual computers.
• Netgroups―When exporting a resource, if you do not want to list names of individual hosts as
targets, you can specify a predefined netgroup as the target.
• Subnets―Exporting to a subnet has the same effect as exporting to individual hosts on the subnet
without having to list individual host names. If all hosts on the same subnet should mount the same
resource, export that resource to the subnet.
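The three target types above (hosts, netgroups aside, subnets) can be sketched as a simple membership check. This is an illustrative model only, not Data ONTAP's implementation, and it assumes subnet targets are written in CIDR notation:

```python
# Illustrative sketch: decide whether a requesting client matches an export
# target list. Exact host names match directly; a subnet target (CIDR form,
# an assumption here) matches any client IP address inside that subnet.
import ipaddress

def client_allowed(client, targets):
    for target in targets:
        if "/" in target:  # subnet target
            try:
                if ipaddress.ip_address(client) in ipaddress.ip_network(target):
                    return True
            except ValueError:
                continue  # client is a host name, not an IP address
        elif client == target:  # individual host target
            return True
    return False
```

Exporting to `192.4.1.0/24` thus behaves like listing every host on that subnet individually, which is the effect described above.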
When exporting a resource, keep in mind the following:
• You must specify the complete path name for a resource to be exported.
• You cannot export /vol, which is not a path name to a file, directory, or volume. If you want to
export all volumes on the storage system, you must export each volume separately.
• Each line in the /etc/exports file can contain up to 4,096 noncommented characters. The
number of characters allowed for comments is unlimited.
• The /etc/exports file contains a list of resources that can be exported. When the storage
system is rebooted, Data ONTAP exports all resources in this file.
Temporary Exports
TEMPORARY EXPORTS
EXAMPLE RESULT
exportfs -a Exports all entries in the /etc/exports file.
Common exportfs Options
To export a file system path and add a corresponding export entry to the /etc/exports file,
enter the following command:
exportfs -p [options] path
NOTE: If you do not specify an export option, Data ONTAP automatically exports
the file system path with the rw and sec=sys export options.
To export all file system paths specified in the /etc/exports file and unexport all file
system paths not specified in the /etc/exports file, enter the following command:
exportfs -r
To unexport all file system paths without removing the corresponding export entries from the
/etc/exports file, enter the following command:
exportfs -uav
To unexport a file system path without removing the corresponding export entry from the
/etc/exports file, enter the following command:
exportfs -u path
To unexport a file system path and remove the corresponding export entry from the
/etc/exports file, enter the following command:
exportfs -z path
Mounting
MOUNTING
Mounting From a Client
To mount an export from a client:
1. Telnet or log in to the host.
2. Create a directory as a mountpoint for the storage appliance.
3. Mount the exported directory in the host directory you just
created.
4. Change directories to the mounted export.
5. Enter ls –l to verify that the storage appliance is mounted and
accessible.
Use the mount command to mount an exported NFS directory from another machine.
An alternate way to mount an NFS export is to add a line to the /etc/fstab file (called
/etc/vfstab on some UNIX systems). This line must specify the NFS server host name,
the exported directory on the server, and the local machine directory where the NFS share is
to be mounted. For more information, see the NFS documentation for your client.
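As a sketch, an /etc/fstab entry carrying the three pieces of information named above might look like the following (the server name system1 and the mountpoint /mnt/home are hypothetical; consult your client's NFS documentation for the exact option set):

```
# server:exported_directory   mountpoint   type  options        dump  pass
system1:/vol/vol0/home        /mnt/home    nfs   rw,hard,intr   0     0
```

The first field names the NFS server and the exported directory on it, and the second field names the local directory where the share is to be mounted.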
Other NFS Administration Resources
Module Summary
MODULE SUMMARY
Exercise
Module 9: NFS
Estimated Time: 45 minutes
EXERCISE
Qtrees
Module Objectives
MODULE OBJECTIVES
Qtrees
QTREES
Qtrees
QTREES
Qtrees are similar to flexible volumes, but have the following unique characteristics:
• Qtrees allow you to set security styles.
• Qtrees allow you to set oplocks for CIFS clients.
• Qtrees allow you to set up and apply quotas.
• Qtrees are used as a backup unit for SnapMirror and SnapVault.
Qtree Advantages
QTREE ADVANTAGES
CIFS OPLOCKS
CIFS oplocks (opportunistic locks) enable the CIFS client in certain file-sharing scenarios to
perform client-side caching of read-ahead, write-behind, and lock information. A client can
then work with a file (read or write it) without regularly reminding the server that it needs
access to the file. This improves performance by reducing network traffic. For more
information about CIFS oplocks, see the Data ONTAP CIFS Administration course.
Security Styles
SECURITY STYLES
Security Styles
Security   Hosts that can      CIFS Client Access      NFS Client Access
Style      Change Security/    Determined by           Determined by
           Permissions
unix       NFS clients         UNIX permissions        UNIX permissions
                               (Windows user names
                               mapped to a UNIX
                               account)
mixed      NFS and CIFS        Depends on the last client to set security
           clients             settings (permissions)
ntfs       CIFS clients        Windows NT ACLs         Windows NT ACLs
                                                       (UNIX user names
                                                       mapped to a Windows
                                                       account)
SECURITY STYLES
SECURITY STYLES
• UNIX―Files and directories have UNIX-style permissions.
• Mixed―Both NTFS and UNIX security styles are allowed. A file or directory can have either
Windows NT permissions or UNIX permissions, and the security style is determined on a
file-by-file basis.
• NTFS―Files and directories have Windows NT file-level permissions through access control
lists (ACLs).
Adding a Qtree
ADDING A QTREE
You can set the security style and oplocks state when you create the qtree, or you can modify
them later by completing the following steps:
1. From the FilerView main menu, select Volumes > Qtrees.
2. From the Qtrees menu, select Manage.
3. Open the Modify menu by selecting the name of the qtree you want to change.
4. After you modify the settings, click Apply.
The new qtree is listed and the changes are updated on the FilerView screen.
Qtree Commands
system> qtree create /vol/vol2/updates
system> qtree security /vol/vol2/updates mixed
QTREE COMMANDS
EXAMPLE RESULT
qtree create pubs            Creates the qtree pubs. If the qtree path name
                             does not begin with a slash (/), the qtree is
                             created in the root volume.
qtree security               Applies mixed security to the files and
/vol/projects/ mixed         directories in the projects volume.
qtree oplocks                Enables oplocks for files and directories in the
/vol/projects/engr enable    engr qtree.
qtree oplocks                Disables oplocks for files and directories in the
/vol/projects/ disable       projects volume.
Managing Qtrees
MANAGING QTREES
Multiprotocols
MULTIPROTOCOLS
Multiprotocols
MULTIPROTOCOLS
NetApp storage systems support both NFS-style and CIFS-style file permissions. NFS-style
file permissions are widely used in UNIX systems, while CIFS-style file permissions are used
in Windows when communicating over networks. Because the ACL security model for CIFS
is richer than the NFS file security model used in UNIX, you cannot perform one-to-one
mapping between them. This issue has led vendors of multiprotocol file storage products to
develop nonmathematical strategies to blend the two systems and make them compatible.
This section explains the NetApp approach to this issue of incompatibility.
NAS File Access: Four Scenarios
MULTIPROTOCOL SCENARIOS
For the purpose of this section, the assumption is that all UNIX files have no ACLs. This is
not always true, as ACLs are preserved when changing qtree styles. But ACLs on files in a
UNIX qtree are ignored when performing permissions checks, so the net effect is as if no file
in a UNIX qtree ever has an ACL.
The /etc/usermap.cfg File
The /etc/usermap.cfg file allows you to map a Windows user to a UNIX user, or a
UNIX user to a Windows user.
The /etc/usermap.cfg File (Cont.)
/etc/usermap.cfg
"Bob Garg" == bobg
mktg\Roy => nobody
engr\Tom => ""
uguest <= *
IP_QUALIFIER FIELD
The IP_qualifier field contains an IP address that narrows a match. For example,
192.4.1.0/24 narrows possible matches to the 192.4.1.0 class C subnet. The IP_qualifier can
be a host name or a network name (for example, corpnet/255.255.255.0 specifies the corpnet
subnet). The IP_qualifier is an optional parameter.
NTDOMAIN\NTUSER FIELD
This field contains a user name and an optional domain name. If the Windows NT user name
is empty or specified as “” on the destination side of the map entry, the matching UNIX name
is denied access. If the storage system uses local accounts for authentication, the domain
name is the storage system name. On the source side of the map entry, using the domain name
specifies the domain in which the user resides. On the destination side of the entry, the
domain specifies the domain used for the mapped UNIX entry. If an NT user name contains
spaces, enclose the name in quotation marks.
DIRECTION FIELD
This field specifies whether the entry maps Windows to UNIX (=>), maps UNIX to Windows
(<=), or maps in both directions (==).
NOTE: Omitting the direction field is the same as using ==.
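The three direction operators can be sketched as a small classifier. This is an illustrative parser built from the field descriptions above, not Data ONTAP's implementation, and the function name is hypothetical:

```python
# Illustrative sketch: classify an /etc/usermap.cfg entry by its direction
# operator. "==" (or no operator) maps both ways, "=>" maps Windows to
# UNIX, and "<=" maps UNIX to Windows, per the field description above.
def entry_direction(entry):
    for op, direction in (("==", "both"),
                          ("=>", "win_to_unix"),
                          ("<=", "unix_to_win")):
        if op in entry:
            left, right = (s.strip() for s in entry.split(op, 1))
            return left, direction, right
    # Omitting the direction field is the same as using ==
    win, unix = entry.split(None, 1)
    return win.strip(), "both", unix.strip()
```

Applied to the sample file above, `"Bob Garg" == bobg` is bidirectional, `mktg\Roy => nobody` maps Windows to UNIX, and `uguest <= *` maps UNIX to Windows.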
The /etc/usermap.cfg File (Cont.)
UNIX User Accessing an NTFS Qtree
[Slide: authentication flow for a UNIX user accessing an NTFS qtree. The UNIX user name is mapped through /etc/usermap.cfg (Domain\user <= UNIX User) or passed as DOMAIN_of_filer\Username to the domain, which accepts or rejects the request]
When a UNIX client requests access to a file without UNIX permissions, the ACL on the file
is used to determine whether access is granted. The UID of the requester is validated in the
/etc/passwd file (or NIS or LDAP). If CIFS domain authentication is configured, the valid
user is looked up in the /etc/usermap.cfg file, mapped to a Windows user or SID, and
then passed to the domain for authentication. If CIFS domain authentication is configured,
but there is no entry in /etc/usermap.cfg, then the
Domain Of The Storage System\UNIX name is passed to the domain for authentication. If the
user is rejected, the storage system allows authentication for the guest user, if configured,
or the default user (sometimes referred to as the generic user). If the user authentication is
valid, the user is granted the share- and file-level access assigned to that account.
Otherwise, the user is denied access.
Windows NT User Accessing a UNIX Qtree
[Slide: authentication flow for a Windows user accessing a UNIX qtree. The user authenticates with the domain (or as Guest), is mapped through /etc/usermap.cfg (Domain\user <= UNIX User) or to wafl.default_nt_user, and the resulting UNIX user ID is resolved in /etc/passwd or NIS]
When a user on a PC client requests access to a file without an ACL, access is granted or
denied based on the UNIX permissions on the file. The NTFS SID of the requester is looked
up on the domain controller (or local user database) to acquire a user name. The user name is
then mapped to a UNIX user name using the name-mapping feature. The mapped user name is
looked up in the /etc/passwd file (or NIS or LDAP) to acquire the mapped UID and GIDs.
These are then compared to the UID, GID, and associated permissions for the file, exactly as
if the requester were the mapped UNIX user.
To troubleshoot authentication, you can enable the following option:
options cifs.trace_login
NOTE: This option should be disabled when not troubleshooting authentication.
Multiprotocol Security Administration
MAPPING
By default, Windows names map to identical user names in the UNIX space. For example,
the Windows user bob maps to the UNIX user bob. You can use the user mapping file
/etc/usermap.cfg to map PC users to UNIX users that are named differently, and to
handle other special or generic users such as root, guest, administrator, nobody, and so on.
Quotas
QUOTAS
Quotas
QUOTAS
Quotas are important tools for managing the use of disk space on your storage system. A
quota is a limit that is set to control or monitor the number of files or amount of disk space an
individual or group can consume. Quotas allow you to manage and track the use of disk space
by clients on your system.
A quota is used to:
• Limit the amount of disk space or the number of files that can be used
• Track the amount of disk space or number of files used, without imposing a limit
• Warn users when disk space or file usage is high
Managing Quotas in FilerView
Adding a Quota Using the Quota Rule Wizard
Step 1: Quota Type
QUOTA TYPE
The type of a quota limit can be a:
• User—Indicated by a UNIX or Windows ID
• Group—Indicated by UNIX GIDs
• Qtree—Represented by the qtree path name
User quotas, group quotas, and tree quotas are stored in the /etc/quotas file. You can edit
this file at any time.
Quotas are based on a Windows account name, a UNIX UID, or a GID in both NFS and CIFS environments.
The CIFS system administrator must maintain the /etc/passwd file for CIFS users to obtain
UIDs (if those users are going to create UNIX files), and the /etc/group file for CIFS users
to obtain GIDs or use an NIS server to implement CIFS quotas.
Qtree quotas do not require UIDs or GIDs. If you only implement qtree quotas, it is not
necessary to maintain the /etc/passwd and /etc/group files (or NIS services).
Adding a Quota Using the Quota Rule Wizard (Cont.)
Step 2: Limits
LIMITS
DISK COLUMN
The Disk Space Hard Limit field lists the maximum disk space allocated to the quota target. This hard limit cannot be exceeded. If the limit is reached, messages are sent to the user and to the console, and SNMP traps are generated.
Use the abbreviations G for gigabytes, M for megabytes, and K for kilobytes. This field
accepts either uppercase or lowercase letters. If you omit the size abbreviation, the system
assumes K (kilobytes). Do not leave this field blank. If you want to track usage without
imposing a limit, enter a dash (-).
Maximum values for the Disk Space Hard Limit are:
• 4,294,967,295 K
• 4,194,303 M
• 4,095 G
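The Disk field rules above (K/M/G suffixes in either case, bare numbers meaning kilobytes, a dash for tracking-only) can be captured in a small helper. This is a sketch of the rules as described, not NetApp code.

```python
def parse_quota_size(field):
    """Parse a Disk field value: '-' tracks only; no suffix implies K."""
    if field == "-":
        return None                                  # track usage, no limit
    units = {"k": 1, "m": 1024, "g": 1024 ** 2}      # result in kilobytes
    suffix = field[-1].lower()
    if suffix in units:
        return int(field[:-1]) * units[suffix]
    return int(field)                                # bare number: kilobytes

print(parse_quota_size("4095G"))   # 4293918720 (kilobytes)
print(parse_quota_size("100"))     # 100
print(parse_quota_size("-"))       # None
```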
FILES COLUMN
The Files Hard Limit field specifies the maximum number of files the quota target can use.
To track the number of files used without imposing a quota, enter a blank or a dash (-) in
this field. You can use size abbreviations (uppercase or lowercase), or you can enter an absolute
value, such as 15000.
Maximum value for the Files Hard Limit: 4,294,967,295
NOTE: The value for the Files Hard Limit field must be on the same line in your quotas file
as the value for the Disk field; otherwise, the Files field is ignored.
THRESHOLD COLUMN
The Threshold field specifies the limit at which write requests trigger messages to the
console. If the threshold is exceeded, the write still succeeds, but a warning is logged to the
console.
The Threshold field uses the same format as the Disk field.
Do not leave this field blank. The value following Files is always assigned to the Threshold
field. If you do not want to specify a threshold limit, enter a dash (-) here.
Maximum values for the Threshold:
• 4,294,967,295 K
• 4,194,303 M
• 4,095 G
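Putting the Disk and Threshold rules together: a hard limit rejects the write, while a threshold only logs a console warning and lets the write succeed. The sketch below models that decision; the message strings are illustrative.

```python
def check_write(used_kb, write_kb, hard_kb=None, threshold_kb=None):
    """Return (allowed, warnings) for a proposed write, per the rules above."""
    new = used_kb + write_kb
    warnings = []
    if hard_kb is not None and new > hard_kb:
        # Hard limit: write fails, user and console are notified.
        return False, ["hard limit exceeded"]
    if threshold_kb is not None and new > threshold_kb:
        # Threshold: write still succeeds, console is warned.
        warnings.append("threshold exceeded: write succeeds, console warned")
    return True, warnings

print(check_write(900, 200, hard_kb=2000, threshold_kb=1000))
# (True, ['threshold exceeded: write succeeds, console warned'])
```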
Adding a Quota Using the Quota Rule Wizard (Cont.)
Step 3: Commit
Changes in the /etc/quotas file are not persistent until you click Commit.
Turning Quotas On or Off
QUOTA COMMANDS
The table below lists commands you can use to manage quotas in each volume. If there is
only one volume on the system, you can omit the volume name on all of these commands.
Quota commands are persistent across reboots.
USING FILERVIEW
You can also manage quotas using FilerView. To access the Quota functions, enter the
storage system address in your browser and click the Volumes option. Select Quotas >
Manage.
EXAMPLE                            RESULT
quota on vol1                      Activates quotas on vol1 based on the contents of the /etc/quotas file.
quota resize vol1                  Activates changes on vol1 based on the contents of the /etc/quotas file.
qtree create /vol/vol2/techpubs    Creates a special directory (qtree) at the root of vol2 named techpubs.
Qtree Statistics
QTREE STATISTICS
To help you determine what qtrees are incurring the most traffic, the qtree stats
command enables you to display statistics about user accesses to files in the qtrees on your
system. This information can identify traffic patterns to help with qtree-based load balancing.
The storage system maintains counters for each quota tree in each of the storage system’s
volumes. These counters are not persistent.
To reset the qtree counters, use the -z flag.
The values displayed by the qtree stats command correspond to the operations on the
qtrees that have occurred since the volume (where the qtrees exist) was created, or since it
was made online on the storage system (either through a vol online command or a reboot),
or since the counters were last reset, whichever occurred most recently.
If you do not specify a name in the qtree stats command, statistics for all qtrees on
the storage system are displayed. Otherwise, statistics for the qtrees in the named volume are
displayed.
Similarly, if you do not specify a name with the -z flag, the counters are reset on all qtrees
and all volumes.
The qtree stats command displays the number of NFS and CIFS accesses on the
designated qtrees since the counters were last reset. The qtree stats counters are reset
when one of the following actions occurs:
• System is booted
• Volume containing the qtree is brought online
• Counters are explicitly reset using the qtree stats -z command
Quota Errors
QUOTA ERRORS
EXCEEDING QUOTAS
Quotas are set to warn you that limits are being approached, allowing you to act before users
are affected.
For all quota types, Data ONTAP sends console messages when the quota is exceeded and
when it returns to normal. SNMP traps for quota events are also initiated. Additional
messages are sent to the client when hard quota limits are exceeded.
NOTE: Threshold quotas in CIFS are the same as soft quotas in NFS.
MESSAGES FOR NFS CLIENTS
The client OS version and application determine what messages the user will see. If a UNIX
client mounts a storage system without the noquota option, every time the user logs in, the
client login program checks to see if the user has reached disk and file quotas. If a hard quota
has already been reached, the client displays a message to alert the user before displaying the
system prompt.
Not all versions of UNIX perform this quota check, and messages vary depending on version.
Editing Quota Rules
Quota Rules
QUOTA RULES
TARGET COLUMN
The Target column identifies what the quota is applied against. In the example above,
there are multiple equivalent ways to specify the target. These entries provide target
UIDs (for users) or GIDs (for groups) to the storage system. The ID numbers must not be
0. The system checks quotas every time it receives a write request, so it is important to use a
target that will not change over time unless you account for the change in the quotas file.
NOTE: Do not use the backslash (\) or “at” sign (@) in UNIX quota targets. Data ONTAP
interprets these characters as part of Windows names.
DEFAULT QUOTAS
You can create a default quota for users, groups, or qtrees. A default quota applies to quota
targets that are not explicitly referenced in the /etc/quotas file.
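The effect of a default quota can be modeled simply: an explicit entry for the target wins, and the default (shown here as a `*` rule, an illustrative stand-in for the quotas-file default syntax) applies to every target not explicitly listed.

```python
def effective_limit(rules, user):
    """Explicit rule wins; otherwise fall back to the '*' default, if any."""
    if user in rules:
        return rules[user]
    return rules.get("*")            # default quota, or None if none defined

rules = {"*": "100M", "jsmith": "500M"}   # target -> disk hard limit
print(effective_limit(rules, "jsmith"))   # 500M
print(effective_limit(rules, "newhire"))  # 100M
```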
Quota Report
QUOTA REPORT
Resizing Quotas
RESIZING QUOTAS
Resizing Quotas (Cont.)
USING FILERVIEW
You can also use FilerView to view reports showing quota usage for volumes. To access the
Reports function, enter your storage system address into your browser, and select Volumes.
Then click Quotas > Report.
Quota Information
QUOTA INFORMATION
Module Summary
MODULE SUMMARY
Exercise
Module 10: Qtrees and Security Styles
Estimated Time: 60 minutes
EXERCISE
SAN
SAN
Module 11
Data ONTAP® 7.3 Fundamentals
SAN
Module Objectives
MODULE OBJECTIVES
SAN Overview
SAN OVERVIEW
What is a SAN?
[Diagram: a NetApp FAS system provides both SAN (blocks, over FC or iSCSI) and NAS (files, over Ethernet) access to the corporate LAN.]
WHAT IS A SAN?
The Storage Networking Industry Association (SNIA) defines a Storage Area Network (SAN)
as “a network whose primary purpose is the transfer of data between computer systems and
storage elements and among storage elements.”
There are usually three parts to a SAN: host, fabric network, and storage system. SANs may
also be directly connected (host to storage system).
SAN Protocols
[Diagram: the WAFL architecture provides block services through the SAN protocols FCP and iSCSI; both carry encapsulated SCSI, FCP over FC network interfaces and iSCSI over Ethernet.]
SAN PROTOCOLS
Network access to LUNs on a NetApp storage system can be either through an FC network or
a TCP/IP-based network. Both of these protocols carry encapsulated SCSI commands as the
data transport mechanism.
SAN Components
SAN COMPONENTS
SAN Components
Hosts
– Supported platforms are Windows, Solaris, AIX®, HP-UX, Linux, NetWare®, and VMware®
– Referred to as “initiators”
Connectivity
– Direct-attach
– Network—Network can be fabric (FCP) or IP
(iSCSI)
Storage system
– Allocates blocks to an initiator group (igroup)
– Referred to as “targets”
SAN COMPONENTS
STORAGE DEVICES
The storage devices in a SAN are the NetApp storage systems.
FABRIC OR NETWORK
Fabrics and networks provide any-to-any connectivity between servers and storage devices.
• FC fabrics use FC switches.
• Ethernet networks use standard Ethernet switches.
HOSTS
Hosts are connected to the fabric or network using:
• FC SAN—Host bus adapters (HBAs) in an FC environment
• IP SAN—Standard NICs or HBAs in an IP SAN environment
FC Components
[Slide: an FC HBA with example WWPNs 21:00:00:2b:34:26:a6:54 and 22:00:00:2b:34:26:a6:54.]
FC COMPONENTS
iSCSI Components
ISCSI COMPONENTS
Initiator/Target Relationship
[Diagram: a Windows or UNIX host (initiator) with FC HBAs connects through a fabric or network, carrying SCSI over FC, to the storage system (target); direct-attached storage is also shown.]
INITIATOR/TARGET RELATIONSHIP
An initiator is a host in a SAN. The initiator communicates directly with a target, or over a
network. The network can be either IP-based with iSCSI, or a fabric with FCP. The controller
or target receives calls from the initiator and allows it access to LUNs.
LUN Overview
LUN OVERVIEW
Setting Up a SAN
SETTING UP A SAN
To set up a SAN:
1. License the appropriate SAN protocol on the
storage system.
2. Create a volume or qtree where the LUN will
reside (apply quotas when appropriate).
3. Verify the SAN protocol driver is on.
4. Configure the host initiator.
5. Create the LUN and igroup, and then
associate the igroup to the LUN.
SETTING UP A SAN
Review Questions
REVIEW QUESTIONS
Managing FCP or iSCSI
After the protocols are licensed, you must start the services. For FCP, this requires a reboot. For
iSCSI, you can issue the iscsi start command.
You can administer FCP and iSCSI from both the CLI and FilerView.
Configuring the Initiator
This module focuses on Windows platforms using the iSCSI Software Initiator from
Microsoft. For more information about configuring iSCSI and FCP LUNs on other platforms,
see the Data ONTAP SAN Administration course.
iSCSI Software Initiator
iSCSI Software Initiator (Cont.)
iSCSI Software Initiator (Cont.)
6. In the Log On to Target window, if you select “Automatically restore this connection when the
system reboots,” then the connection appears in the Persistent Targets tab.
Creating LUNs
CREATING LUNS
The lun setup Command
When creating an iSCSI LUN for Windows, the LUN will have the following attributes:
LUN ID 0
The lun setup Command (Cont.)
The lun setup Command (Cont.)
Creating a LUN Using FilerView
To create a LUN using the FilerView LUN Wizard, complete the following steps:
1. In the FilerView left navigation pane, click LUNs > Wizard.
2. To open the Specify LUN Parameters window, click Next.
3. Follow the Wizard instructions and enter all requested information.
4. To open the Commit Changes window, click Next.
5. If the information displayed is correct, click Commit.
Accessing a LUN
ACCESSING A LUN
After creating a LUN with the lun setup command, use Windows Disk Management on the
host to prepare the LUN for use. The new LUN should be visible as a local disk. If it is not,
click the Action button in the toolbar, and then click Rescan Disks.
Disk Management will:
1. Write the disk signature
2. Partition the disk
3. Format the disk
Accessing a LUN (Cont.)
To open the Create Partition Wizard, right-click the bar that represents the unallocated disk
space, and then select Create Partition. Or, from the Action dropdown menu in the Computer
Management window, you can click All Tasks > Create Partition.
Accessing a LUN (Cont.)
Accessing a LUN (Cont.)
Create a Primary partition no larger than the maximum size available. Choose the partition
size and drive letter.
Accessing a LUN (Cont.)
Accept the default drive assignment or use the drop-down list to select a different drive.
Partition the drive using the settings shown, but change the Volume Label to an appropriate
Windows volume name that represents the LUN you are creating.
Review the settings specified and then click Finish.
Accessing a LUN (Cont.)
Verify that the LUN appears as a local drive in Disk Management. If it appears as a local
drive, you can then copy files to the new disk and treat it like any other local disk.
SnapDrive
SNAPDRIVE
SnapDrive
SNAPDRIVE
SnapDrive is management software for Windows 2000, Windows 2003, and Windows 2008
systems that provides virtual-disk and Snapshot management on the client side. Use
SnapDrive to create FCP or iSCSI LUNs on a Windows host.
SnapDrive includes three main components:
1. Windows 2000 service
2. Microsoft Management Console (MMC) plug-in
3. CLI
SnapDrive includes the same features as the lun setup command on the storage system, but
also includes the ability to add LUNs to the Windows host and integrates the use of LUNs with
other NetApp applications such as SnapManager.
SnapDrive for Windows
SnapDrive for Windows provides an interface for Windows to interact with LUNs directly.
SnapDrive also:
• Enables online storage configuration, LUN expansion, and streamlined management
• Integrates Snapshot technology to create point-in-time images of data stored in LUNs
Other SAN Administration Resources
For more information about SAN administration, see the
SAN Administration on Data ONTAP 7.3 course.
This advanced course covers:
• Creating FCP and iSCSI LUNs from the CLI
• Creating FCP and iSCSI LUNs from SnapDrive with Windows
• Creating FCP and iSCSI LUNs from SnapDrive with Solaris
• Configuring Solaris hosts for FCP and iSCSI
• Configuring Windows hosts for FCP and iSCSI
• Configuring other hosts, such as Linux, HP-UX, and AIX
• SAN in a clustered storage system environment
• SAN performance tuning
• SAN troubleshooting
Module Summary
MODULE SUMMARY
Exercise
Module 11: SAN
Estimated Time: 45 minutes
EXERCISE
Snapshot Copies
Snapshot Copies
Module 12
Data ONTAP® 7.3 Fundamentals
SNAPSHOT COPIES
Module Objectives
MODULE OBJECTIVES
Overview
OVERVIEW
Snapshot Technology
SNAPSHOT TECHNOLOGY
The Snapshot technology is a key element in the implementation of the WAFL (Write
Anywhere File Layout) file system.
• A Snapshot copy is a read-only, space-efficient, point-in-time image of data in a volume or
aggregate
• A Snapshot copy is only a “picture” of the file system and does not contain any data file content
• Snapshot copies are used for backup and error recovery
Data ONTAP automatically creates and deletes Snapshot copies of data in volumes to support
commands related to Snapshot technology.
Snapshot and WAFL
Inodes
INODES
ROOT INODES
The most important metadata file is the inode file, which contains the inodes that describe all
other files in the file system. The inode that describes the inode file itself is the root inode.
The root inode is stored at a fixed location on disk.
Snapshot Copies and Inodes
Managing Inodes
MANAGING INODES
LEVEL 4 INODES
For file sizes between 64 GB and 8 TB, the single-indirect blocks in Level 3 inodes become
double-indirect blocks. These double-indirect blocks reference 1,024 single-indirect blocks,
each of which in turn references up to 1,024 4-KB data blocks.
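The arithmetic behind these indirect blocks is easy to check: with 1,024 pointers per indirect block (as stated above) and 4-KB data blocks, one single-indirect block addresses 4 MB of data and one double-indirect block addresses 4 GB.

```python
KB = 1024
BLOCK = 4 * KB                        # WAFL data block size, in bytes
PTRS = 1024                           # pointers per indirect block (per the text)

single_indirect = PTRS * BLOCK        # data addressed by one single-indirect block
double_indirect = PTRS * single_indirect

print(single_indirect // (1024 * KB))         # 4 -> 4 MB per single-indirect block
print(double_indirect // (1024 * 1024 * KB))  # 4 -> 4 GB per double-indirect block
```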
DF -I
The df -i command displays the amount of inodes in a volume. For more information about
this command, see the manual pages.
MAXFILES
The maxfiles command increases the number of inodes designated in a volume. For more
information about this command, see the manual pages.
How Snapshot Works
[Diagram: the active file system's file X points to disk blocks A, B, and C.]
Before a Snapshot copy is created, there is a file system tree pointing to data blocks that
contain content. When the Snapshot copy is created, a copy of the file structure metadata is
created. The Snapshot copy points to the same data blocks.
How Snapshot Works (Cont.)
[Diagram: after the Snapshot copy is created, the active file system and the Snapshot copy both point to the same blocks for file X, with the exception of the 4-KB replicated root inode block that defines the Snapshot copy.]
There is no significant impact on disk space when a Snapshot copy is created. Because the
file structure takes up little space, and no data blocks must be copied to disk, a new Snapshot
copy consumes almost no additional disk space. In this case, the phrase “Consumes no space”
really means no appreciable space. The so-called “top-level root inode,” which is necessary
to define the Snapshot copy, is 4 kB.
How Snapshot Works (Cont.)
[Diagram: a client sends new data for block C; WAFL writes it to a new block, C', while the Snapshot copy still points to blocks A, B, and C.]
Snapshot copies begin to use space when data is deleted or modified. WAFL writes the new
data to a new block (C’) on the disk and changes the root structure for the active file system
to point to the new block.
Meanwhile, the Snapshot copy still references the original block C. As long as there is a
Snapshot copy referencing a data block, the block remains unavailable for other uses. This
means that Snapshot copies start to consume disk space only as the file system changes after a
Snapshot copy is created.
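The copy-on-write behavior above can be modeled with a toy block map: after the overwrite, the active file system and the Snapshot copy still share blocks A and B, while the Snapshot copy alone keeps block C alive. The dictionaries are illustrative, not WAFL structures.

```python
# Toy model of Snapshot copy-on-write block sharing.
blocks = {"A": "a", "B": "b", "C": "c"}      # on-disk block contents
active = {"X": ["A", "B", "C"]}              # active file system: file X -> blocks
snapshot = {"X": list(active["X"])}          # Snapshot: metadata copy, same blocks

# Client overwrites block C: WAFL writes new block C' and repoints the
# active file system; the Snapshot copy still references the original C.
blocks["C'"] = "c-new"
active["X"] = ["A", "B", "C'"]

shared = set(active["X"]) & set(snapshot["X"])
print(sorted(shared))       # ['A', 'B']  -- still shared, no extra space used
print(snapshot["X"])        # ['A', 'B', 'C'] -- C stays allocated for the Snapshot
```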
How Snapshot Works (Cont.)
[Diagram: file X in the active file system now points to blocks A, B, and C'; file X in the Snapshot copy still points to A, B, and C.]
Creating Snapshot Copies
Taking a Snapshot Copy
VOLUMES
Snapshot copies for traditional and flexible volumes are stored in special subdirectories that
can be made accessible to Windows and UNIX clients so that users can access and recover
their own files without assistance. The maximum number of Snapshot copies per volume is
255.
AGGREGATES
In an aggregate, 5% of space is reserved for Snapshot copies. In normal, day-to-day
operations, aggregate Snapshot copies are not actively managed by a system administrator.
For example, Data ONTAP automatically creates Snapshot copies of aggregates to support
commands related to the SnapMirror software, which provides volume-level mirroring.
NOTE: Even if the Snapshot reserve is 0%, you can still create Snapshot copies. If there is no
Snapshot reserve, Snapshot copies take their blocks from the active file system.
Snapshot Reserve
[Diagram: within the aggregate space, a portion is reserved for WAFL. Within the volume space, each flexible volume has 20% allocated for Snapshot reserve; the remaining 80%, the active file system, is used for client data.]
SNAPSHOT RESERVE
AGGREGATES
The size of an aggregate depends on the number and size of disks allocated to it. Ten percent
of the aggregate is allocated for WAFL. Five percent of the aggregate is allocated for the
Snapshot reserve.
FLEXIBLE VOLUMES
By default, a flexible volume has 20% of its space allocated for the Snapshot reserve. More
than one flexible volume can exist in an aggregate. Each flexible volume, however, has 20%
allocated for Snapshot reserve. To use the Snapshot reserve space for data (not
recommended), you must manually override the allocation for Snapshot copies. The
remainder of space can be used for client data.
SNAPSHOT RESERVE
The Snapshot reserve for an aggregate does not automatically expand into the WAFL
aggregate space. When space is needed for Snapshot copies, by default, older aggregate
Snapshot copies are replaced by new Snapshot copies. The size of the aggregate Snapshot
reserve is adjustable using the snap reserve -A command.
On volumes, the Snapshot reserve space is expanded into user space as required by the
system (for example, if numerous changes are made to the active file system). If necessary, as
new Snapshot copies are created, the Snapshot reserve expands into user space regardless of
the designated Snapshot reserve amount. You can manually reallocate disk space using the
snap reserve command.
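The default percentages described above are simple to compute. The sketch below assumes the stated defaults (10% of the aggregate for WAFL, 5% for the aggregate Snapshot reserve, 20% Snapshot reserve per flexible volume) and uses integer kilobyte math.

```python
def aggregate_layout(aggr_kb):
    """Default carve-up: 10% for WAFL, 5% aggregate Snapshot reserve."""
    wafl = aggr_kb * 10 // 100
    aggr_snap = aggr_kb * 5 // 100
    usable = aggr_kb - wafl - aggr_snap
    return wafl, aggr_snap, usable

def volume_layout(vol_kb, snap_pct=20):
    """Flexible volume: 20% Snapshot reserve by default, rest for client data."""
    snap = vol_kb * snap_pct // 100
    return snap, vol_kb - snap

print(aggregate_layout(1000))   # (100, 50, 850)
print(volume_layout(500))       # (100, 400)
```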
Creating Snapshot Copies
SNAPSHOT COMMANDS
In the snap command, the -A option is used for aggregates and the -V option is used for
volumes. If neither -A nor -V is specified, volume is the default.
The following table lists the commands used to create and manage Snapshot copies. If you
omit the volume name from any of these commands, the command will apply to the root
volume.
EXAMPLE                                          RESULT
snap create engineering test                     Creates the Snapshot copy test in the engineering volume.
snap list engineering                            Lists all available Snapshot copies in the engineering volume.
snap delete engineering test                     Deletes the Snapshot copy test in the engineering volume.
snap delete -a vol2                              Deletes all Snapshot copies in vol2.
snap rename engineering nightly.0 firstnight.0   Renames the Snapshot copy nightly.0 to firstnight.0 in the engineering volume.
snap reserve vol2 25                             Changes the Snapshot reserve to 25% on vol2.
snap sched vol2 0 2 6@8,12,16,20                 Sets the automatic schedule on vol2: 0 weekly, 2 nightly, and 6 hourly Snapshot copies, taken at 8 a.m., 12 p.m., 4 p.m., and 8 p.m.
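The schedule argument of snap sched ("weekly nightly hourly@hours") can be parsed as sketched below. This models the documented argument shape only; it is not the actual Data ONTAP parser.

```python
def parse_snap_sched(args):
    """Parse 'weekly nightly hourly[@h,h,...]', e.g. '0 2 6@8,12,16,20'."""
    weekly, nightly, hourly = args.split()
    hours = []
    if "@" in hourly:
        hourly, at = hourly.split("@")
        hours = [int(h) for h in at.split(",")]   # hours of day for hourly copies
    return {"weekly": int(weekly), "nightly": int(nightly),
            "hourly": int(hourly), "hours": hours}

print(parse_snap_sched("0 2 6@8,12,16,20"))
# {'weekly': 0, 'nightly': 2, 'hourly': 6, 'hours': [8, 12, 16, 20]}
```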
Restoring Snapshot Copies
Recovering Data
RECOVERING DATA
Snapshot Options
SNAPSHOT OPTIONS
The following table lists the options available for controlling the creation of Snapshot copies
and access to those copies and Snapshot directories on a volume:
• Disable automatic Snapshot copies. Setting the nosnap option to on disables automatic Snapshot
creation. You can still create Snapshot copies manually at any time.
• Make the .snapshot directory invisible to clients and turn off access to the .snapshot
directory. Setting the nosnapdir option to on disables access to the Snapshot directory that is
present at client mountpoints and the root of CIFS directories, and makes the Snapshot directories
invisible. (NFS uses .snapshot for directories, while CIFS uses ~snapshot.) By default, the
nosnapdir option is off (directories are visible).
• Make the ~snapshot directory visible to CIFS clients by completing the following steps:
1. Turn the cifs.show_snapshot option on.
2. Turn the nosnapdir option off for each volume on which you want the directories to be visible.
NOTE: You must also ensure that Show Hidden Files and Folders is enabled on your
Windows system.
SNAPSHOT OPTIONS
EXAMPLE                          RESULT
vol options vol2 nosnap on       Disables automatic Snapshot copies for vol2.
vol options vol2 nosnapdir on    Makes the .snapshot (or ~snapshot) directory invisible to clients.
options cifs.show_snapshot on    Makes the ~snapshot directory visible to CIFS clients.
The .snapshot Directory
[Diagram: a volume tree (/, system, vol0) showing the .snapshot directory under the mountpoint.]
Snapshot View from a UNIX Client
# pwd
/system/.snapshot
# ls -l
total 240
drwxrwxrwx 9 root other 12288 Jan 29 16:19 hourly.0
drwxrwxrwx 9 root other 12288 Jan 29 16:19 hourly.1
drwxrwxrwx 9 root other 12288 Jan 29 16:19 hourly.2
drwxrwxrwx 9 root other 12288 Jan 29 16:19 hourly.3
drwxrwxrwx 9 root other 12288 Jan 29 16:19 hourly.4
drwxrwxrwx 9 root other 12288 Jan 29 16:19 hourly.5
drwxrwxrwx 9 root other 12288 Jan 29 16:19 nightly.0
drwxrwxrwx 9 root other 12288 Jan 29 16:19 nightly.1
drwxrwxrwx 9 root other 12288 Jan 29 16:19 weekly.1
drwxrwxrwx 9 root other 12288 Jan 29 16:19 weekly.2
#
SNAPSHOT DIRECTORIES
Every volume in your file system contains a special Snapshot subdirectory that allows you to
access earlier versions of the file system to recover lost or damaged files.
Snapshot View from a Windows Client
Snapshot directories are hidden on Windows clients. To view them, you must first configure the file manager to display hidden files, and then navigate to the root of the CIFS share to find the ~snapshot folder.
The subdirectory for Snapshot copies appears to CIFS clients as ~snapshot. The files displayed here are the Snapshot copies created automatically at the specified intervals. Manually created Snapshot copies are also listed here.
RESTORING A FILE
To restore a file from the ~snapshot directory, rename or move the original file, then copy
the file from the ~snapshot directory to the original directory.
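The manual procedure (set the original aside, then copy the read-only version back from the Snapshot directory) can be sketched in Python against an ordinary directory tree standing in for the share; the paths and file names here are invented for illustration:

```python
import shutil
import tempfile
from pathlib import Path

def restore_from_snapshot(share_root: Path, snapshot: str, rel_path: str) -> Path:
    """Restore one file from a Snapshot directory into the active file system.

    Mirrors the manual procedure: set the damaged original aside with a
    .bak name, then copy the read-only Snapshot version back into place.
    """
    original = share_root / rel_path
    snapshot_copy = share_root / "~snapshot" / snapshot / rel_path
    if original.exists():
        original.rename(original.parent / (original.name + ".bak"))
    shutil.copy2(snapshot_copy, original)
    return original

# Demonstration against a throwaway directory tree standing in for a CIFS share.
root = Path(tempfile.mkdtemp())
(root / "~snapshot" / "hourly.0").mkdir(parents=True)
(root / "~snapshot" / "hourly.0" / "report.txt").write_text("good data")
(root / "report.txt").write_text("corrupted")
restore_from_snapshot(root, "hourly.0", "report.txt")
print((root / "report.txt").read_text())  # good data
```

On a real share the ~snapshot copies are read-only, which is why the restore is a copy back into the active tree rather than a rename.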
Scheduling Snapshot Copies
Scheduling Snapshot Copies
Default schedule:
– Once nightly, Monday through Saturday, at midnight (12 a.m.)
– Four times hourly, at 8 a.m., 12 p.m., 4 p.m., and 8 p.m.
Retains:
– Two most recent nightly copies
– Six most recent hourly copies
First in, first out; the first deleted are:
– Oldest nightly Snapshot copy
– Oldest hourly Snapshot copy
© 2008 NetApp. All rights reserved. 28
Snapshot Schedule
Entering the snap sched command with no arguments prints the current Snapshot schedule for all volumes in the system. Entering the command with just a volume name argument prints the schedule for the specified volume.

Snapshot Command Syntax
snap sched [volume_name [weeks [days [hours[@list]]]]]
Example: snap sched vol2 0 2 6@8,12,16,20

The weeks, days, and hours values specify how many Snapshot copies are saved for each interval (weekly, nightly, and hourly). In the example, the zero means that no weekly Snapshot copy is made or saved. The hours variable takes an optional list specifying the times at which hourly Snapshot copies are taken; if you do not enter a list, copies are taken on the hour.
The Snapshot schedule above keeps the following Snapshot copies for vol2:
No weekly Snapshot copies
Two nightly Snapshot copies
Six hourly Snapshot copies taken at 8 a.m., 12 p.m., 4 p.m., and 8 p.m.
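The first-in, first-out retention behind such a schedule can be sketched with a small rotation function. This is an illustrative Python model of the naming and rotation behavior, not the storage system's implementation:

```python
def take_snapshot(existing: list[str], interval: str, keep: int) -> list[str]:
    """Rotate Snapshot copy names for one interval (e.g. 'hourly').

    The newest copy is always <interval>.0; older copies are renumbered
    upward, and once `keep` copies exist the oldest is dropped (first in,
    first out).
    """
    same = [n for n in existing if n.startswith(interval + ".")]
    other = [n for n in existing if not n.startswith(interval + ".")]
    # Renumber: hourly.0 -> hourly.1, hourly.1 -> hourly.2, ...
    rotated = [f"{interval}.0"] + [f"{interval}.{i + 1}" for i in range(len(same))]
    return other + rotated[:keep]

snaps: list[str] = []
# snap sched vol2 0 2 6@8,12,16,20 -> keep six hourly copies
for _ in range(8):                       # eight hourly triggers
    snaps = take_snapshot(snaps, "hourly", keep=6)
print(snaps)  # ['hourly.0', 'hourly.1', 'hourly.2', 'hourly.3', 'hourly.4', 'hourly.5']
```

After eight triggers only six copies remain, and the highest-numbered name is always the oldest copy.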
SNAPSHOT SCHEDULE
Using FilerView to Schedule Snapshot Copies
The Snapshot copy feature is turned on by default and uses a preset schedule until an administrator changes it with the snap sched command or the FilerView graphical interface.
Space Usage
SPACE USAGE
Using the CLI to Monitor Space Used
The snap list Command
Snapshot List Example

system> snap list
Volume vol0
working...
  %used       %total      date          name
----------  ----------  ------------  --------
  0% ( 0%)    0% ( 0%)  Apr 20 12:00  hourly.0
 17% (20%)    1% ( 1%)  Apr 20 10:00  hourly.1
 33% (20%)    2% ( 1%)  Apr 20 08:00  hourly.2
The snap list command displays a single line of information for each Snapshot copy in a volume. In the Snapshot List Example above, a list of Snapshot copies is displayed for the vol0 volume.
The following is a description of each column in the list:
• %used—Shows the relationship between accumulated Snapshot copies and the total disk space
consumed by the active file system. Values in parentheses show the contribution of this individual
Snapshot copy.
• %total—Shows the relationship between accumulated Snapshot copies and the total disk space
consumed by the volume. Values in parentheses show the contribution of this individual Snapshot
copy.
• date—Shows the date and time the Snapshot copy was taken. Time is indicated on the 24-hour
clock, and in this example, reflects the hours set in the automatic Snapshot copy schedule.
• name—Lists the names of each of the saved Snapshot copies. Scheduled Snapshot copies are
automatically renumbered as new ones are created so that the most recent copy is always .0. This
also ensures that the file with the highest number (in this case, hourly.2) is always the oldest
Snapshot copy.
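As a rough model of the arithmetic behind these columns, the following Python sketch computes both percentages from assumed per-copy block usage. The exact denominators and rounding that Data ONTAP uses may differ, so this will not reproduce the sample output above digit for digit; the input sizes are invented for illustration:

```python
def snap_list_percentages(snap_kb: list[int], afs_kb: int, volume_kb: int) -> list[str]:
    """Model the %used and %total columns of `snap list`.

    snap_kb lists each copy's unique space in kB, ordered newest first
    (hourly.0, hourly.1, ...).  %used relates Snapshot space to the
    active file system; %total relates it to the whole volume.  The
    cumulative figure comes first, the individual copy's share in
    parentheses.
    """
    rows = []
    cumulative = 0
    for kb in snap_kb:
        cumulative += kb
        used = round(100 * cumulative / (cumulative + afs_kb))
        used_one = round(100 * kb / (kb + afs_kb))
        total = round(100 * cumulative / volume_kb)
        total_one = round(100 * kb / volume_kb)
        rows.append(f"{used}% ({used_one}%)  {total}% ({total_one}%)")
    return rows

# A 100-MB active file system in a 2-GB volume; two older copies each hold
# 20 MB of deleted blocks, while the newest copy holds none.
rows = snap_list_percentages([0, 20_480, 20_480], afs_kb=102_400, volume_kb=2_097_152)
for line in rows:
    print(line)
```

The key point the model captures is that the cumulative figures grow down the list while each copy's own contribution stays fixed.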
EXAMPLE 1: NO CHANGES HAVE BEEN MADE TO THE VOLUME SINCE THE CREATION OF
SNAPSHOT COPIES
On the NetApp storage system, /vol/vol0 has 100 MB in use. No data has changed since the Snapshot copy was taken. The snap list command output shows:
NetApp> snap list
Volume vol0
working...
%used %total date name
---------- ---------- -------- --------
0% ( 0%) 0% ( 0%) Apr 20 08:00 hourly.0
The space used by the hourly.0 snapshot is 0%. No changes have been made to the files in
/vol/vol0, so no blocks have changed between the Snapshot copy and the active file
system.
EXAMPLE 2: CHANGES HAVE BEEN MADE TO THE VOLUME SINCE THE CREATION OF
SNAPSHOT COPIES
At 9:30 a.m., a 20-MB file is deleted and a new 20-MB file is created. At 10 a.m., a new hourly Snapshot copy is taken. The snap list command output now shows:
NetApp> snap list
Volume vol0
working...
%used %total date name
-------- -------- --------- --------
0% ( 0%) 0% ( 0%) Apr 20 10:00 hourly.0
20% ( 20%) 1% ( 1%) Apr 20 08:00 hourly.1
The hourly.1 Snapshot copy now consumes space because it holds the blocks for the 20-
MB file that was deleted from the active file system. The hourly.0 Snapshot copy
consumes no space because no changes were made to the volume after this Snapshot copy
was created.
EXAMPLE 3: CHANGES HAVE BEEN MADE TO THE VOLUME BETWEEN THE SNAPSHOT
CREATIONS
At 11:30 a.m., the 20-MB file created at 9:30 a.m. is deleted. At 12 noon, the hourly.0 Snapshot copy is created. The snap list command output now shows:
NetApp> snap list
Volume vol0
working...
%used %total date name
---------- ---------- ---------- --------
0% ( 0%) 0% ( 0%) Apr 20 12:00 hourly.0
17% ( 20%) 1% ( 1%) Apr 20 10:00 hourly.1
33% ( 20%) 2% ( 1%) Apr 20 08:00 hourly.2
In this list, hourly.2 and hourly.1 each hold 20 MB of data that no longer exists in the active file system (AFS). However, each Snapshot copy references different blocks on the system's disks.
The snap reclaimable and snap delta Commands
snap reclaimable
system> snap reclaimable vol0 hourly.0 nightly.0
Processing (Press Ctrl-C to exit) ...........................
snap reclaimable: Approximately 47108 Kbytes would be freed.
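Conceptually, snap reclaimable reports the space held only by the named Snapshot copies: a block is freed only if no other Snapshot copy and not the active file system still references it. A toy Python model of that bookkeeping, where the block-ownership sets are invented for illustration:

```python
def reclaimable_kb(block_refs: dict[int, set[str]], doomed: set[str]) -> int:
    """Estimate how much space deleting a set of Snapshot copies frees.

    block_refs maps each 4-KB block number to the set of owners still
    referencing it ('AFS' for the active file system, or a Snapshot
    copy name).  A block is freed only when every remaining reference
    belongs to the copies being deleted.
    """
    freed_blocks = sum(
        1 for owners in block_refs.values() if owners and owners <= doomed
    )
    return freed_blocks * 4  # 4-KB WAFL blocks -> kilobytes

# Hypothetical volume: block 1 is shared with the live file system,
# block 2 belongs only to hourly.0, block 3 only to nightly.0, and
# block 4 is shared by nightly.0 and a copy we intend to keep.
refs = {
    1: {"AFS", "hourly.0"},
    2: {"hourly.0"},
    3: {"nightly.0"},
    4: {"nightly.0", "weekly.0"},
}
print(reclaimable_kb(refs, {"hourly.0", "nightly.0"}))  # 8 (blocks 2 and 3)
```

This is why deleting two copies rarely frees the sum of their individual %used figures: shared blocks survive as long as any other owner remains.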
THE SNAP DELTA COMMAND
If your storage system is running Data ONTAP 7.0 or later, the snap delta command
provides an easy method to determine the rate of data change between Snapshot copies on a
volume. This command can be run for a single Snapshot copy, multiple Snapshot copies, or
all volumes on the storage system.
A possible application for this command could be in planning SnapMirror updates. For
example, if you are planning to implement SnapMirror and need to know the approximate
rate of change between Snapshot copy intervals (to estimate the size of the SnapMirror
transfers), the snap delta command can be used to display this rate:
NetApp> snap list
Volume vol0
working...
%used %total date name
-------- -------- ------------ --------
4% ( 4%) 0% ( 0%) Apr 20 00:00 nightly.0
5% ( 1%) 0% ( 0%) Apr 19 00:00 nightly.1
5% ( 0%) 0% ( 0%) Apr 18 00:00 nightly.2
NetApp> snap delta vol0
Volume vol0
working...
From Snapshot To kB changed Time Rate (kB/hour)
---------- ------ ---------- -------- ---------
nightly.0 AFS 46,932 0d 23:00 3,911.000
nightly.1 nightly.0 16,952 1d 00:00 4,237.705
nightly.2 nightly.1 16,952 1d 00:00 4,237.705
In this example, the rate of change is about 16,952 kB per day, assuming that one Snapshot copy per day is created.
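The Rate column is simply the kB changed divided by the elapsed time between the two Snapshot copies. A minimal sketch of that arithmetic (note that the sample output above is rounded, so its figures may not divide out exactly):

```python
def change_rate_kb_per_hour(kb_changed: int, days: int, hh: int, mm: int) -> float:
    """Compute a snap delta-style rate: kB changed divided by the
    elapsed time between the two Snapshot copies, in hours."""
    hours = days * 24 + hh + mm / 60
    return round(kb_changed / hours, 3)

# nightly.1 -> nightly.0 in the example: 16,952 kB over exactly one day.
print(change_rate_kb_per_hour(16_952, 1, 0, 0))  # 706.333
```

For SnapMirror sizing you would typically multiply such a rate by the planned update interval to estimate transfer size.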
The snap autodelete Command
The snap autodelete command provides a way to automatically manage Snapshot copies.
EXAMPLES
To enable autodelete on volume vol1:
snap autodelete vol1 on
The snap autodelete Command: commitment Option
What Snapshot copies can autodelete remove?
The user can protect certain kinds of Snapshot copies from deletion. The commitment option defines what may be removed:
– try
  Deletes only Snapshot copies that are not locked by any data mover, recovery, or clone operation (not locked).
– disrupt
  Also deletes Snapshot copies locked by applications that move, dump, or restore data (such as SnapMirror); the affected mirror and dump operations are aborted.
The snap autodelete Command: commitment Option (Cont.)
snap autodelete <vol-name> <option> <value>...
Supported options and corresponding values:
commitment try, disrupt
trigger volume, snap_reserve, space_reserve
target_free_space 1-100
delete_order oldest_first, newest_first
defer_delete scheduled, user_created, prefix, none
prefix <string>
The snap autodelete Command: trigger Option
When does snap autodelete occur?
When the trigger criterion is nearly full:
– volume
  The volume is nearly full (98%)
– snap_reserve
  The Snapshot reserve is nearly full
– space_reserve
  The reserved space is nearly full (useful for volumes with fractional_reserve < 100)
The snap autodelete Command: target_free_space Option
When does snap autodelete stop?
Snap autodelete stops when the free space in the trigger criterion reaches a user-specified percentage:
– The percentage is controlled by the value of target_free_space
– The default percentage is 80%
The snap autodelete Command: order Options
In what order are Snapshot copies deleted?
The delete_order option defines the age order. If the value is set to:
– oldest_first
  Delete the oldest Snapshot copies first
– newest_first
  Delete the newest Snapshot copies first
The snap autodelete Command: order Options (Cont.)
The defer_delete option defines which Snapshot copies are deleted last. If the value is set to:
– scheduled
  Delete the scheduled Snapshot copies (identified by the scheduled Snapshot naming convention) last
– user_created
  Delete the administrator-created Snapshot copies last
– prefix
  Delete the Snapshot copies whose names match the prefix string last
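Taken together, delete_order and defer_delete determine the order in which candidates are considered. The following Python sketch models one plausible interpretation of that ordering; the scheduled-name test and the (name, age) representation are assumptions made for illustration, not Data ONTAP internals:

```python
def autodelete_order(snaps, delete_order="oldest_first", defer_delete="none", prefix=""):
    """Order Snapshot copies for deletion per delete_order and defer_delete.

    snaps is a list of (name, age) pairs; a higher age means an older
    copy.  Deferred copies (scheduled names, user-created names, or
    names matching a prefix) sort after everything else, i.e. they are
    deleted last.
    """
    scheduled = ("hourly.", "nightly.", "weekly.")

    def deferred(name: str) -> bool:
        if defer_delete == "scheduled":
            return name.startswith(scheduled)
        if defer_delete == "user_created":
            return not name.startswith(scheduled)
        if defer_delete == "prefix":
            return name.startswith(prefix)
        return False

    newest_first = delete_order == "newest_first"
    key = lambda s: (deferred(s[0]), s[1] if newest_first else -s[1])
    return [name for name, age in sorted(snaps, key=key)]

snaps = [("hourly.0", 1), ("hourly.1", 2), ("mysnap", 3), ("backup", 4)]
print(autodelete_order(snaps, "oldest_first", "scheduled"))
# ['backup', 'mysnap', 'hourly.1', 'hourly.0']
```

With defer_delete scheduled, the two user-created copies go first (oldest first), and the scheduled hourly copies are only considered afterward.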
The snap autodelete Command: order Options (Cont.)
In what order are Snapshot copies deleted?
The prefix value is considered only when defer_delete is set to prefix. Otherwise, it is ignored.
Using FilerView to Monitor Space Used
Module Summary
MODULE SUMMARY
Exercise
Module 12: Snapshot Copies
Estimated Time: 45 minutes
EXERCISE
Write and Read Request Processing
Module 13
Data ONTAP® 7.3 Fundamentals
13-1 Data ONTAP® 7.3 Fundamentals: Write and Read Request Processing
Module Objectives
MODULE OBJECTIVES
Data ONTAP Simplified
[Diagram: clients connect over the network to the storage system, which uses memory and NVRAM to stage writes to the physical disks.]
Data ONTAP is the operating system that all NetApp storage systems use. Data ONTAP,
which simplifies storage management and helps ensure business continuity, is built on three
fundamental elements that provide speed, reliability, and safety for NetApp storage systems:
• Real-time mechanism for process execution
• WAFL file system with NVRAM support
• RAID manager
DATA FLOW
Client systems interact with Data ONTAP through the OS networking layer, with the protocol
layer providing appropriate protocol interfaces. Read and write requests are processed by the
WAFL layer and its associated memory. NVRAM is used to create a backup copy of the
WAFL buffers to prevent data loss. The WAFL determines where data is read from or written
to, and forwards this information to the RAID manager. The RAID manager calculates the
parity value required to protect the stored data.
With the WAFL data placement and RAID information computed, the storage layer writes the
blocks to the appropriate disks, and then Data ONTAP determines the new consistency point.
Write Requests
WRITE REQUESTS
Write Request Data Flow: Write Buffer
[Diagram: a write request from a SAN host, UNIX client, or Windows client enters through the HBA or NIC, passes the network stack and protocol services (NFS, CIFS, SAN, RS-232), and is held in the memory buffer while being logged to NVRAM (NVLOG) before the WAFL, RAID, and storage layers process it. When NVRAM is full, a consistency point is triggered.]
Consistency Point
CONSISTENCY POINT
Consistency Point (Cont.)
At least once every 10 seconds, the WAFL generates a CP (an internal Snapshot copy) so that
disks contain a completely self-consistent version of the file system. When the storage system
boots, the WAFL always uses the most recent CP on the disks. This means you don’t have to
spend time checking the file system, even after power loss or hardware failure. The storage
system boots in a minute or two, with most of the boot time devoted to spinning up disk
drives and checking system memory.
The storage system uses battery-backed NVRAM to avoid losing data write requests that
might have occurred after the most recent CP. During a normal system shutdown, the storage
system turns off protocol services, flushes all cached operations to disk. and turns off the
NVRAM. When the storage system restarts after a power loss or hardware failure, it replays
into system RAM any protocol requests stored in NVRAM that are not on the disk.
To view the CP types that the storage system is currently using, use the sysstat -x 1 command.
CPs triggered by the timer, a Snapshot copy, or internal synchronization are normal. Other
types of CPs can occur from time to time.
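The interplay between the NVRAM journal, consistency points, and replay after a failure can be modeled with a toy Python class. This is a conceptual sketch, not Data ONTAP internals:

```python
class Filer:
    """Toy model of the NVRAM write journal and consistency points.

    Writes are acknowledged once they are in memory and the NVLOG; a
    consistency point flushes memory to disk and clears the journal.
    After a crash, replaying the journal recovers acknowledged writes
    that never reached a consistency point.
    """

    def __init__(self):
        self.disk = {}      # last consistent on-disk image
        self.memory = {}    # WAFL buffers
        self.nvlog = []     # battery-backed journal of requests

    def write(self, path, data):
        self.memory[path] = data
        self.nvlog.append((path, data))   # logged before acknowledging

    def consistency_point(self):
        self.disk.update(self.memory)     # flush buffers as one atomic update
        self.nvlog.clear()                # journal entries no longer needed

    def crash_and_reboot(self):
        self.memory = dict(self.disk)     # RAM contents are lost
        for path, data in self.nvlog:     # replay uncommitted requests
            self.memory[path] = data

filer = Filer()
filer.write("/vol/vol0/a", "one")
filer.consistency_point()
filer.write("/vol/vol0/b", "two")         # acknowledged, but no CP yet
filer.crash_and_reboot()
print(filer.memory)                       # both writes survive the crash
```

The second write never reached disk, yet it survives the reboot because the NVLOG replay restores it; the disk image itself never holds a partially applied consistency point.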
ATOMIC OPERATIONS
An atomic operation in computer science refers to a set of operations that can be combined so
that they appear to the rest of the system to be a single operation, with only two possible
outcomes: success or failure.
To accomplish an atomic operation, the following conditions must be met:
1. Until the entire set of operations is complete, no other process can be “aware” of the changes being
made.
2. If any one operation fails, then the entire set of operations fails, and the system state is restored to its state prior to the start of any operation.
Source: http://en.wikipedia.org/wiki/Atomic_operation
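The two conditions can be illustrated with a small Python sketch that applies a batch of operations to a private copy of the state and discards that copy on any failure:

```python
def apply_atomically(state: dict, operations) -> dict:
    """Apply a set of operations so the caller sees all of them or none.

    Work happens on a private copy, so no other reader observes partial
    changes (condition 1); if any step raises, the untouched original
    state is returned (condition 2).
    """
    working = dict(state)          # changes stay invisible until complete
    try:
        for op in operations:
            op(working)
    except Exception:
        return state               # failure: prior state is preserved
    return working

balances = {"checking": 100, "savings": 0}

def debit(s): s["checking"] -= 40
def credit(s): s["savings"] += 40
def fail(s): raise RuntimeError("disk error")

print(apply_atomically(balances, [debit, credit]))        # all steps applied
print(apply_atomically(balances, [debit, fail, credit]))  # unchanged original
```

A WAFL consistency point follows the same pattern at the file-system level: the on-disk image switches from one self-consistent state to the next in a single step.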
Write Request Data Flow: WAFL to RAID
[Diagram: the write request data flow as above, highlighting the hand-off of buffered writes from the WAFL layer to the RAID layer.]
WAFL
The WAFL provides shorter response times to write requests by saving a copy of each write request in system memory and battery-backed NVRAM and then immediately acknowledging the request. This process differs from that of traditional servers, which must write requests to disk before acknowledging them. The WAFL delays writing data to disk, which provides more time to collect multiple write requests and determine how to optimize storing the data across multiple disks in a RAID group. Because NVRAM is battery-backed, you don't have to worry about losing data.
The following are some key facts about WAFL:
• There is no fixed location for data except the superblock.
• Metadata is stored in files.
• Everything is a file.
• Always free to optimize layout.
Consistency Point: WAFL to RAID
Write Request Data Flow: RAID to Storage
[Diagram: the write request data flow as above; the RAID layer passes 4-KB blocks to the storage layer.]
RAID LAYER
Storage drivers move data between system memory and the storage adapters, and ultimately to the disks. The disk driver component reassembles writes into larger I/Os and monitors which disks have failed. The SCSI driver creates the appropriate SCSI commands for the reads and writes it receives.
Consistency Point: RAID to Storage
Write Request Data Flow: Storage Writes
[Diagram: the write request data flow as above; the storage layer commits the blocks to the physical disks.]
STORAGE LAYER
The storage layer transfers data to physical disks. After data is written to the disks, a new root
inode is updated, a CP is created, and the NVRAM bank is cleared.
NVRAM
NVRAM
Read Requests
READ REQUESTS
READ CACHE
Data ONTAP includes several built-in, read-ahead algorithms. These algorithms are based on
patterns of usage, which helps ensure the read-ahead cache is used efficiently.
Read Request Data Flow: Cache
[Diagram: read request data flow; the requested data is served directly from the memory buffer cache.]
Read Request Data Flow: Read from Disk
[Diagram: read request data flow; on a cache miss, the data is read from disk up through the storage, RAID, and WAFL layers.]
Module Summary
MODULE SUMMARY
Exercise
Module 13: Write and Read Request Processing
Estimated Time: 10 minutes
EXERCISE
System Data Collection
Module 14
Data ONTAP® 7.3 Fundamentals
Module Objectives
MODULE OBJECTIVES
System Health
SYSTEM HEALTH
Disk Status
DISK STATUS
Disk Status
Monitor disks:
– shelfchk
– led_on diskid and led_off diskid
Storage Health Monitor:
– Simple storage system management service
– Automatically initiates during system boot
– Provides background monitoring of individual
disk performance
– Detects impending disk problems before they
actually occur
– disk shm_stats
DISK STATUS
STORAGE HEALTH MONITOR
The Storage Health Monitor (SHM) is a simple storage system management service that is
automatically initiated during system startup. It provides background monitoring of individual
disk performance.
Instead of detecting problems when a disk failure occurs, the SHM detects impending disk
problems before they occur, giving you the opportunity to replace the disk before any client
data problems occur.
SHM messages are written to two text files in the /etc directory and can then be reported
through SNMP, AutoSupport, and syslog, depending on what error metrics you specify.
The SHM provides three message levels:
• Urgent (current problem)
• Non-urgent (potential problem)
• Informational (general status information)
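A sketch of how such messages might be routed by severity when post-processing syslog output; the prefix-to-level table below is a hypothetical mapping built from the message descriptions that follow, not an SHM internal:

```python
# Hypothetical mapping from shm syslog message prefixes to the three
# SHM severity levels, based on the message descriptions in this module.
SHM_SEVERITY = {
    "disk has reported a predicted failure": "urgent",
    "link failure detected": "urgent",
    "disk I/O completion times too long": "non-urgent",
    "possible link errors on disk": "non-urgent",
    "disk returns excessive recovered errors": "non-urgent",
}

def classify(message: str) -> str:
    """Classify an shm: syslog line as urgent, non-urgent, or informational."""
    body = message.removeprefix("shm: ")
    for pattern, level in SHM_SEVERITY.items():
        if body.startswith(pattern):
            return level
    return "informational"

print(classify("shm: link failure detected, upstream from disk: id 12"))  # urgent
```

A monitoring script built along these lines could page an operator on urgent messages and merely log the rest.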
Syslog Messages
shm: disk has reported a predicted
failure (PFA) event: disk XX,
serial_number XXXX
shm: link failure detected, upstream
from disk: id XX, serial_number XXXXX
shm: disk I/O completion times too long:
disk XX, serial number XXXXX
shm: possible link errors on disk: id
XX, serial number XXXXX
shm: disk returns excessive recovered
errors: disk XX, serial number XXXXX
shm: intermittent instability on the
loop that is attached to Fibre Channel
adapter: id XXX, name XXXXX
SYSLOG MESSAGES
shm: disk has reported a predicted failure (PFA) event: disk XX,
serial_number XXXX
Description: The disk's internal error processing and logging algorithm computation
results are exceeding an internally set threshold. The disk will likely fail in a matter
of hours.
Category: Urgent
Action required: Replace the disk
shm: link failure detected, upstream from disk: id XX, serial_number
XXXXX
Description: An FC disk (or cable, if disks are in different disk shelves) might be
malfunctioning, causing an open loop condition. This results in a sync loss of more
than 100 milliseconds for a downstream disk that reported it as a link failure.
Category: Urgent
Action required: Shut down the storage appliance. Use disk scrub on each disk, and
remove the disks and cables one at a time to determine which component is
malfunctioning. Replace the malfunctioning disk or cable.
shm: disk I/O completion times too long: disk XX, serial number XXXXX
Description: Either the disk is old and slow, or it is internally recovering errors and
taking too long to complete an I/O. This message also indicates that there are too
many I/O timeouts and retries on a disk. The disk could also be frequently returning
the Command Aborted status. All these issues can produce a low data-throughput rate
for this specific disk and a reduction in overall system performance.
Category: Non-urgent
Action required: The disk is prone to failure and should be replaced.
shm: possible link errors on disk: id XX, serial number XXXXX
Description: One of a group of four FC disks in a disk shelf (or any connecting
cable) might be malfunctioning. This results in a large number of invalid CRC frames
and data under-runs on the loop. The invalid CRC and under-run count has crossed
the specified threshold several times.
Category: Non-urgent
Action required: Shut down the storage appliance and remove the disks and cables
one at a time to determine which component is malfunctioning. Replace the
malfunctioning disk or cable.
shm: disk returns excessive recovered errors: disk XX, serial number
XXXXX
Description: Either the disk has found media or hardware errors (unrecovered
errors), or it has internally recovered a large number of errors. The disk might also be
returning a Command Aborted status. The errors returned have exceeded the bit error
rate specified by the disk vendor.
Category: Non-urgent
Action required: The disk is failure prone; you should replace it.
shm: intermittent instability on the loop that is attached to Fibre
Channel adapter: id XXX, name XXXXX
Description: An FC adapter, attached disk shelf, disk, cable, or connector might have
caused instability on the FC-AL loop, which resulted in I/O completion rates below a
set threshold.
Category: Informational
Action required: None
Write Performance
WRITE PERFORMANCE
Write Performance Commands
Write Performance: sysstat Command
system> sysstat -c 10 -s 5
CPU NFS CIFS HTTP Net kB/s Disk kB/s Tape kB/s Cache
in out read write read write age
2% 0 0 0 0 0 9 23 0 0 >60
0% 0 0 0 0 0 0 0 0 0 >60
5% 0 0 0 0 0 21 27 0 0 >60
1% 0 0 0 0 0 0 0 0 0 >60
5% 0 0 0 0 0 20 28 0 0 >60
1% 0 0 0 0 0 0 0 0 0 >60
4% 0 0 0 0 0 21 26 0 0 >60
1% 0 0 0 0 0 0 0 0 0 >60
5% 0 0 0 0 0 22 27 0 0 >60
0% 0 0 0 0 0 0 0 0 0 >60
--
Summary Statistics (10 samples 5.0 secs/sample)
CPU NFS CIFS HTTP Net kB/s Disk kB/s Tape kB/s Cache
in out read write read write age
Min
0% 0 0 0 0 0 0 0 0 0 >60
Avg
2% 0 0 0 0 0 9 13 0 0 >60
Max
5% 0 0 0 0 0 22 28 0 0 >60
system*>
• Disk kB/s reads and writes—Shows disk activity
Disk reads occur when the requested data is not in cache. Ideally, disk writes occur about every
10 seconds, at each consistency point.
• Cache age—Displays the age, in minutes, of the oldest read-only blocks in the buffer cache (not
necessarily information that is relevant to diagnosing performance)
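As a quick sanity check, the summary statistics at the bottom of the sysstat output can be recomputed from the per-sample rows. The sketch below uses the ten CPU values from the example above:

```python
# Recompute sysstat's summary line from the ten CPU samples shown above.
cpu = [2, 0, 5, 1, 5, 1, 4, 1, 5, 0]  # percent busy, one value per 5-second sample

minimum = min(cpu)
maximum = max(cpu)
average = sum(cpu) // len(cpu)  # sysstat reports whole percentages

print(minimum, average, maximum)  # 0 2 5, matching the Min/Avg/Max rows
```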
The stats Command:
System Performance
The stats command displays statistical data about
every aspect of the storage system
Statistics returned using the stats command are
based on the following hierarchy:
– Objects—Any entity in the system is an object (physical
or logical, including volumes, aggregates, qtrees, disks,
and NICs)
– Instances—An object such as a volume called nfsflex,
or an aggregate called aggr1, or a disk identified as
0b.17
– Counters—The counters associated with particular
objects and instances
Data ONTAP has a layer built into its architecture that collects data from several of its
subsystems. The stats command provides access (through the CLI or scripts) to a set of
predefined data-collection tools in Data ONTAP known as counters. These counters provide
you with information about your storage system, either instantaneously or over a period of
time. You can use the stats command and other tools such as the Microsoft Windows utility
(perfmon) to gather statistics from this layer.
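The object:instance:counter hierarchy described above can be illustrated with a short sketch. The names volume, nfsflex, and read_ops below are hypothetical examples, not output from a real system:

```python
from dataclasses import dataclass

# Model the three-level hierarchy used by the stats command:
# object -> instance -> counter, addressed as "object:instance:counter".
@dataclass
class CounterPath:
    object_name: str
    instance: str
    counter: str

def parse_counter_path(path: str) -> CounterPath:
    object_name, instance, counter = path.split(":", 2)
    return CounterPath(object_name, instance, counter)

p = parse_counter_path("volume:nfsflex:read_ops")
print(p.object_name, p.instance, p.counter)  # volume nfsflex read_ops
```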
The stats Command:
Examples of Objects, Instances, and Counters
Examples of objects:
– aggregate
– volume
– qtree
– disk
– cifs
– nfs
– lun
Examples of instances:
– /vol/vol0, /vol/nfstree, 0b.18
– /vol/flex1/lun_test
Examples of counters:
– cifs_ops, cifs_latency, cifs_read_ops
Use the list and show options of the stats command to view current objects and instances. The
following are some examples of stats commands for objects and instances:
stats list objects
Displays the names of objects active in the system for which data is available.
stats list instances
Displays the list of active instance names for a specific object.
stats list counters [ object_name ]
Displays a list of all counters associated with an object.
stats explain counters [ object_name [ counter_name ] ]
Displays an explanation for specific counter(s) in a specific object, or all counters in
all objects if no object_name or counter_name is provided.
stats show
Shows all or selected statistics in various formats.
For more information about the stats command, see the manual page for the command.
The stats Command (Cont.)
For more information about counters, see Chapter 8 of the Performance Advisor
Administration Guide.
GATHERING COUNTER VALUES OVER A PERIOD OF TIME
In addition to using the stats command to view the current system state, you can also use it
to gather system information over a period of time.
The following examples show how to use the stats command to gather counter
values over a period of time.
• Start collecting system information:
> stats start -I processor:processor0
• Display interim results without stopping the background stats command:
> stats show -I processor:processor0
• Stop collecting system information and output the final results:
> stats stop -I processor:processor0
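The start/show/stop life cycle above can be sketched in a few lines. This is a toy model of the collection semantics, not the real Data ONTAP implementation:

```python
# Toy model of a background stats collection: counters accumulate between
# start and stop; "show" reports interim values without ending collection.
class StatsCollection:
    def __init__(self):
        self.active = False
        self.samples = 0

    def start(self):
        self.active = True
        self.samples = 0

    def record(self, n=1):   # stand-in for the background sampler
        if self.active:
            self.samples += n

    def show(self):          # interim results; collection continues
        return self.samples

    def stop(self):          # final results; collection ends
        self.active = False
        return self.samples

c = StatsCollection()
c.start()
c.record(3)
print(c.show())   # 3 (interim result; still collecting)
c.record(2)
print(c.stop())   # 5 (final result)
```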
The stats Command (Cont.)
Use stats list counters to see what is available.
The statistics available through the stats
infrastructure are also available through other tools,
such as perfmon, perfstat, and Operations Manager.
The following is an example of a stats command:
system> stats show cifs:cifs:cifs_latency
cifs:cifs:cifs_latency:1.92m
Client-Side Tools: Windows Command
The Windows perfmon utility:
– Connects to the storage system from Windows
– Requires CIFS to be licensed and running on the storage system
– Receives output from the stats command and graphs the data
The perfmon performance monitoring tool is integrated into the Microsoft Windows
operating system. If you use storage systems in a Windows environment, you can use
perfmon to access many of the counters and objects available through the Data ONTAP
stats command.
Read Performance
READ PERFORMANCE
WAFL is optimized for write performance. To enhance performance, WAFL does the
following:
• Writes adjacent blocks in files to adjacent locations on the disk (whenever possible). As the file
system grows, blocks may not be written to immediately adjacent locations, but the blocks will
still be close together.
• Reserves 10% of the disk space to increase the probability of blocks being available at or near
optimal locations.
• Handles interleaved writes much better than other file systems because WAFL does not
immediately write-allocate data. By holding the write data in system memory until a
consistency point (CP) is generated, WAFL can write-allocate a lot of data from a particular
file into contiguous blocks.
• Minimizes disk seeks for writes with the "write anywhere" allocation scheme.
The write optimizations can lead to decreased file and LUN read performance as the file
system ages because files are written to the best place on the disks for write performance. As
the file system expands and WAFL has fewer options for writing blocks, it may have to write
blocks that are not immediately adjacent on the disk. Using flexible volumes and the
autosize volume option can help prevent problems. In addition, WAFL uses built-in
read-ahead and caching algorithms to offset any potential performance degradation.
RAID Configuration
RAID CONFIGURATION
RAID Groups
rg1
RAID GROUPS
RAID Group Size and Composition
Initial RAID Group Configuration
Adding Disks to Existing RAID Groups
Monitoring
Connectivity
MONITORING CONNECTIVITY
Connectivity
Use the following to monitor connectivity:
MAC
– ifconfig
– ifstat
– arp
TCP/IP
– /etc/rc and /etc/hosts
– ping
– netstat -r
Protocols
– nfsstat
– cifs stat
– nbtstat
CONNECTIVITY
Performance Measures
PERFORMANCE MEASURES
Measuring NFS Performance
options nfs.per_client_stats.enable [on|off]
It is recommended to disable this option when you are not using nfsstat -l.
Data ONTAP NFS Output - Command: nfsstat -l
/n/homesystem from homesystem.corp.com:/home
Flags: vers=2,proto=udp,auth=unix,hard,intr,dynamic,rsize=8192,wsize=8192,retrans=5
Lookups: srtt=7(17ms), dev=4(20ms), cur=2(40ms)
Reads: srtt=12(30ms), dev=4(20ms), cur=3(40ms)
Writes: srtt=21(52ms), dev=5(25ms), cur=5(100ms)
All: srtt=7(7ms), dev=4(20ms), cur=2(40ms)
This display shows the breakdown, on this mount point, of lookups, reads, writes, and all
operations. The output includes the server name and address, mount flags, current read and
write sizes, the retransmission count, and the timers used for dynamic retransmission. The
average deviation and the settings for retransmissions of each type are also displayed.
Round-trip response times for specific NFS operations are displayed.
You can track the performance of each NFS server by routinely collecting statistics in the
background across all subnets. One of the most important ways to measure performance is to
capture response times for each NFS operation such as writes, reads, lookups, and get
attributes, so the data can be analyzed by the server and the file system.
You can obtain statistics for NFS operations by server (where the storage system is the NFS
server) by enabling the per-client stats option and running nfsstat -l. Once you
establish site-specific baseline measurements, you can compare your system’s performance
against optimum benchmark configurations, or against its own performance at different times.
Any changes from the baseline can indicate problems that require further analysis.
To measure NFS performance, use the sysstat and nfsstat commands.
• To display real-time NFS operations every second on your console, enter sysstat 1, or you
can view the output using FilerView.
• To focus the output on counters related to response times on Solaris NFS clients, run
nfsstat -m.
• To reset statistics and counters to zero, use nfsstat -z.
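To track response times across many collections, the per-operation lines can be parsed with a small script. This is a sketch only: the exact field layout of nfsstat output varies by client, and the sample text below is hypothetical:

```python
import re

# Parse lines such as "Reads: srtt=12(30ms), dev=4(20ms), cur=3(40ms)"
# into millisecond timer values, keyed by operation name.
LINE = re.compile(
    r"(?P<op>\w+):\s*srtt=\d+\((?P<srtt>\d+)ms\),\s*"
    r"dev=\d+\((?P<dev>\d+)ms\),\s*cur=\d+\((?P<cur>\d+)ms\)"
)

def parse_nfs_timers(text):
    timers = {}
    for line in text.splitlines():
        m = LINE.search(line)
        if m:
            timers[m.group("op")] = {
                "srtt_ms": int(m.group("srtt")),
                "dev_ms": int(m.group("dev")),
                "cur_ms": int(m.group("cur")),
            }
    return timers

sample = """Lookups: srtt=7(17ms), dev=4(20ms), cur=2(40ms)
Reads: srtt=12(30ms), dev=4(20ms), cur=3(40ms)
Writes: srtt=21(52ms), dev=5(25ms), cur=5(100ms)"""

print(parse_nfs_timers(sample)["Writes"]["srtt_ms"])  # 52
```

Collecting these parsed values over time gives the baseline against which later measurements can be compared.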
Measuring CIFS Performance
Every other row displays the number of operations that took place in the interval in the row
above it. In this example, 13,715 operations happened in less than .5 ms. The time interval
window lies halfway between the values for adjacent columns. In this example, 165 operations
occurred in the 36-ms to 44-ms window.
To measure CIFS performance, you can use the sysstat and smb_hist commands.
To display CIFS operations per second on the console, enter the sysstat 1 command, or
use FilerView.
To view CIFS throughput statistics, complete the following steps:
1. Set the command privileges to advanced.
2. To zero the counters, enter smb_hist -z.
3. Wait long enough to get a good sample.
4. To view CIFS statistics generated since the reset, enter smb_hist.
5. Review the first section of the output.
In the example in the figure above, the first part of the smb_hist output shows there were
13,175 operations that occurred in less than .5 milliseconds (ms), 17,752 operations that
occurred in the window between 0.5 ms and 1.5 ms, and 5,111 operations that occurred in the
window between 1.5 ms and 2.5 ms, and so on. In normal situations, as the interval window
gets larger, the number of operations that take that long decreases to zero.
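The bucketing described above can be modeled in a few lines. This is an illustration of the halfway-window idea, not the actual smb_hist implementation:

```python
# Each histogram column is labeled with a whole number of milliseconds, and
# its window lies halfway between adjacent labels: the "1 ms" column counts
# operations from 0.5 ms to 1.5 ms, and so on.
def histogram(latencies_ms):
    counts = {}
    for lat in latencies_ms:
        col = round(lat)              # nearest whole-ms column label
        counts[col] = counts.get(col, 0) + 1
    return counts

h = histogram([0.2, 0.4, 0.7, 1.4, 1.6, 2.1])
print(h)  # {0: 2, 1: 2, 2: 2}
```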
Obtaining Statistics
OBTAINING STATISTICS
Using the statit Command
to Obtain Statistics
To obtain statistics using the statit command,
complete the following steps:
1. To enter advanced privilege mode, enter:
priv set advanced
2. To begin collecting statistics, enter:
statit -b
3. After 30 seconds (or as necessary), to end statistics collection and include NFS
statistics, enter:
statit -e -n
4. To return to normal admin privilege mode, enter:
priv set admin
Obtaining Statistics
The report generated is divided into the following
statistics sections:
CPU
Multiprocessor
CSMP domain switches
Miscellaneous
WAFL
RAID
Network interface
Disk
Aggregate
Spares and other disks
FCP
iSCSI
Tape
OBTAINING STATISTICS
CPU Statistics
CPU Statistics
506.934263 time (seconds) 100 %
275.044317 system time 54 %
23.412966 rupt time 5 % (7022 rupts x 0 usec/rupt)
251.466451 non-rupt system time 50 %
271.837944 idle time 44 %
439.543653 time in CP 92 % 100 %
21.837230 rupt time in CP 5 % (132 rupts x 0 usec/rupt)
CPU STATISTICS
Multiprocessor Statistics
Multiprocessor Statistics (per second)
cpu0 cpu1 total
sk switches 1378.09 46.82 1424.91
hard switches 1175.27 29.15 1204.42
domain switches 103.89 16.08 119.96
CP rupts 0.00 0.00 0.00
nonCP rupts 100.00 0.00 100.00
nonCP rupt usec 0.00 0.00 0.00
Idle 1000000.00 1000000.00 2000000.00
kahuna 0.00 0.00 0.00
network 0.00 0.00 0.00
storage 0.00 0.00 0.00
exempt 0.00 0.00 0.00
raid 0.00 0.00 0.00
target 0.00 0.00 0.00
netcache 0.00 0.00 0.00
netcache2 0.00 0.00 0.00
MULTIPROCESSOR STATISTICS
The second section of the report includes multiprocessor statistics for multiple CPUs.
Miscellaneous Statistics
MISCELLANEOUS STATISTICS
The miscellaneous section of the statistics report includes rates (or counts) for many
operations. The statistics from this section most commonly viewed are:
• NFS, CIFS, and HTTP operations
• Network KB transmitted and received
• Disk KB read and written
• FCP and iSCSI operations
WAFL Rates
WAFL Statistics (per second) 0.00 blocks over-written
5.96 name cache hits ( 62%) 0.28 wafl_timer generated CP
3.69 name cache misses ( 38%) 0.00 snapshot generated CP
19.30 inode cache hits ( 100%) 0.00 wafl_avail_bufs generated CP
0.00 inode cache misses ( 0%) 0.00 dirty_blk_cnt generated CP
55.06 buf cache hits ( 100%) 0.00 full NV-log generated CP
0.00 buf cache misses ( 0%) 0.00 back-to-back CP
0.00 blocks read 0.00 flush generated CP
0.00 blocks read-ahead 0.00 sync generated CP
0.00 chains read-ahead 0.00 wafl_avail_vbufs generated CP
0.00 blocks speculative read-ahead 55.06 non-restart messages
5.11 blocks written 0.00 IOWAIT suspends
0.57 stripes written 604852 buffers
WAFL RATES
The WAFL section of the statistics report displays WAFL rates (or counts). The statistics
from this section most commonly viewed are:
• All cache hits and misses
• Inode cache hits and misses
• Per second rates for all the CP types
All cache hits and misses and inode cache hits and misses provide information about read
performance. Generally, it is considered good to have more hits than misses. However, there
are many factors to consider when analyzing these numbers, such as the fact that a file that is
only read once does not reside in cache. This would be true for most backup applications.
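The hit-rate arithmetic is straightforward; for example, the name-cache rates in the sample report above (5.96 hits and 3.69 misses per second) give the 62% shown:

```python
# Compute a cache hit ratio from per-second hit and miss rates.
def hit_ratio(hits, misses):
    total = hits + misses
    return hits / total if total else 0.0

# Name-cache rates from the sample statit report above.
print(round(hit_ratio(5.96, 3.69) * 100))  # 62
```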
Network Interface Statistics
The Network Interface section of the statistics report provides network interface statistics,
including rates for:
• Packets and bytes transmitted and received
• Transmit and receive errors
• Collisions
Disk Statistics
Disk Statistics (per second)
ut% is the percent of time the disk was busy.
xfers is the number of data transfer commands issued per second.
xfers = ureads + writes + cpreads + greads + gwrites
DISK STATISTICS
The Disk section of the statistics report provides statistics for each drive. Some of the column
headings are defined at the top of the screen.
Beginning with the fourth column of data, the report uses hyphens in the column headings to
group related information. For example, user reads and the associated chain and round-trip
times are linked in the heading ureads--chain--usecs.
The following list defines some of the column headings on the Disk statistics report:
• disk—Indicates which drives are included in the statistics.
• ut%—Shows the drive utilization averaged per second; that is, the percentage of elapsed time
that the drive had a request outstanding.
Utilization rates of more than 80% might suggest an I/O bottleneck.
• xfers—Shows the total number of transfers (reads and writes) averaged per second. Most
drives are capable of 50 to 100 input/output operations per second (IOPS).
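The transfer-count identity from the report header can be written directly; the 80% utilization rule of thumb from the note above is included as a helper:

```python
# xfers = ureads + writes + cpreads + greads + gwrites, per the header of
# the Disk Statistics section of the statit report.
def xfers(ureads, writes, cpreads, greads, gwrites):
    return ureads + writes + cpreads + greads + gwrites

def possible_bottleneck(ut_percent):
    # Utilization above 80% might suggest an I/O bottleneck.
    return ut_percent > 80

print(xfers(40, 25, 10, 0, 0))  # 75 transfers per second (hypothetical values)
print(possible_bottleneck(85))  # True
```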
Aggregate, Spares, and Disk Statistics
Aggregate statistics:
Minimum 0 0.00 0.00 0.00 0.00 0.00 0.00
Mean 1 0.28 0.00 0.28 0.00 0.00 0.00
Maximum 5 3.69 0.57 3.12 0.00 0.00 0.00
This section of the report displays aggregate, spares, and other disk statistics.
FCP, iSCSI, and Tape Operations
The last three sections of the report display statistics for FCP, iSCSI, and tape operations.
Other Resources: Data Collection and
Performance
For more information about data collection and performance, see the
Fundamentals of Performance Analysis course.
This advanced course shows you how to:
Analyze data using recommended methodology to correlate
performance data into performance analysis information
Monitor performance using performance tools and establish a
baseline of expected throughput and response times for storage
systems under planned and increasing workloads
Perform capacity planning by monitoring performance and
comparing baseline information over time to determine when a
storage system will reach maximum capacity
Perform tuning for optimal performance for protocols such as
CIFS, NFS, and SAN (including locating resources with tuning
guidelines for database scenarios)
Perform bottleneck analysis
Module Summary
MODULE SUMMARY
Exercise
Module 14: System Data Collection
Estimated Time: 30 minutes
EXERCISE
FlexShare
FlexShare
Module 15
Data ONTAP® 7.3 Fundamentals
FLEXSHARE
Module Objectives
MODULE OBJECTIVES
FlexShare
FLEXSHARE
FlexShare is a tool provided by Data ONTAP that enables you to use priorities and hints to
increase your control over how your storage system resources are used, as follows:
• Priorities are assigned to volumes to establish relative priorities between the following:
• Different volumes
For example, you could specify that operations on /vol/db are more important than
operations on /vol1/test.
• Client data accesses and system operations
For example, you could specify that client accesses are more important than SnapMirror
operations.
• Hints are used to affect the way cache buffers are handled for a particular volume
FlexShare Application Scenarios
Scenario 1―A mission-critical database is on the
same storage system as user home directories
– Use FlexShare to ensure that database accesses
are assigned a higher priority than accesses to
home directories
Scenario 2―System operations are negatively
impacting client accesses
– Use FlexShare to ensure that client accesses are
assigned a higher priority than system operations
Scenario 3―Volumes have different caching
requirements
– A database log volume does not need to be
cached after writing
– Use the cache buffer policy hint to help Data
ONTAP determine how to manage the cache buffers
for volumes with different caching requirements
FlexShare Characteristics
No performance guarantees
Priority levels are relative
Both nodes in an active-active configuration must
have the priority feature enabled
FLEXSHARE CHARACTERISTICS
NO PERFORMANCE GUARANTEES
FlexShare enables you to construct a priority policy that helps Data ONTAP manage system
resources optimally for your application environment, but does not guarantee performance.
Effects of Volume Operations on FlexShare
Priorities
Volume Operation: Effect on FlexShare Settings
• Deletion: FlexShare settings removed
• Rename: FlexShare settings unchanged
• FlexClone volume creation: Parent volume settings unchanged; FlexShare settings
for the new FlexClone volume are unset (as for a newly created volume)
• Copy: Source volume settings unchanged; FlexShare settings for the destination
volume are unset (as for a newly created volume)
• Offline or online: FlexShare settings preserved
Global I/O Concurrency Option
Disks have a maximum number of concurrent I/O operations they can support, which varies
according to disk type. FlexShare limits the number of concurrent I/O operations per volume
based on multiple values, including volume priority and disk type.
For most customers, the default io_concurrency value is correct and should not be
changed. If you have nonstandard disks or loads, your system performance could be improved
by changing the value of the io_concurrency option.
NOTE: Because the io_concurrency option affects the entire system, use caution when
changing its value, and monitor system performance to ensure that this option actually does
improve performance.
For more information about FlexShare, see the na_priority(1) man page or the NOW site at
http://now.netapp.com/NOW.
Assigning Priorities to Volume Data
Access
To assign priorities to volume data access:
1. Ensure that FlexShare is enabled:
priority on
2. Specify the priority for a volume:
priority set volume vol_name
level=priority_level
3. (Optional) Verify the priority level:
priority show volume [-v] vol_name
Assign Priorities to System and User
Operations
To assign priorities to system and user
operations:
1. Ensure that FlexShare is enabled:
priority on
2. Specify the priority:
priority set volume vol_name system
3. (Optional) Verify the volume priority levels:
priority show volume -v vol_name
PROCEDURE
To assign a priority to system operations relative to user operations for a specific volume,
complete the following steps:
1. Ensure that FlexShare is enabled for your storage system by entering the following
command:
priority on
2. Specify the priority for system operations for the volume by entering the following
command:
priority set volume vol_name system=priority_level
where vol_name is the name of the volume for which you want to set the priority of
system operations, and priority_level is one of the following values: VeryHigh,
High, Medium, Low, VeryLow, or a number from 1 to 100.
The number indicates the priority of system operations. When both user and system
operations are requested, the system operations are selected over the user operations
that percentage of the time, and the user operations are selected the remaining
percentage of the time.
NOTE: Setting the priority of system operations to 30 does not mean that 30 percent of
storage system resources are devoted to system operations. Rather, when both user and
system operations are requested, the system operations will be selected over the user
operations 30 percent of the time, and the other 70 percent of the time the user operation
is selected.
3. You can optionally verify the priority levels of the volume by entering the following
command:
priority show volume [-v] vol_name
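The selection behavior described in the note can be simulated with a toy model (an illustration only, not the real FlexShare scheduler):

```python
import random

# With a system priority of 30, system operations win a contended selection
# about 30% of the time and user operations win the remaining 70%.
def select_op(system_priority, rng):
    return "system" if rng.random() * 100 < system_priority else "user"

rng = random.Random(42)  # seeded for reproducibility
picks = [select_op(30, rng) for _ in range(10_000)]
share = picks.count("system") / len(picks)
print(0.27 < share < 0.33)  # True: close to the configured 30%
```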
Buffer Cache Policy
Settings:
– keep
– reuse
– default
Setting the policy:
– priority on
– priority set volume vol_name
cache=policy
You can use FlexShare to give Data ONTAP a hint about how to manage the buffer cache of
a volume.
NOTE: While this capability provides direction in the form of a hint to Data ONTAP,
ultimately Data ONTAP determines how a buffer is reused based on multiple factors,
including the hint.
The following table lists possible values for the buffer cache policy:
Value Description
keep This value instructs Data ONTAP to wait as long as possible before reusing the cache
buffers. This value can improve performance for a volume that is accessed frequently and
has a high incidence of multiple accesses to the same cache buffers.
reuse This value instructs Data ONTAP to make buffers for this volume available for reuse
quickly. You can use this value for volumes that are written but rarely read, such as database
log volumes, or volumes that have a data set so large that keeping the cache buffers probably
won’t increase the hit rate.
default This value instructs Data ONTAP to use the default system cache buffer policy for this
volume.
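A toy cache illustrates how a reuse hint might influence eviction. This is purely illustrative: Data ONTAP weighs many factors beyond the hint, and the buffer keys below are made up:

```python
from collections import OrderedDict

# Toy buffer cache: buffers from "reuse" volumes are evicted before buffers
# from "keep" volumes, regardless of recency.
class HintedCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.buffers = OrderedDict()   # key -> policy, in LRU order

    def access(self, key, policy="default"):
        if key in self.buffers:
            self.buffers.move_to_end(key)      # refresh recency
            return
        if len(self.buffers) >= self.capacity:
            self._evict()
        self.buffers[key] = policy

    def _evict(self):
        # Prefer the least-recently-used "reuse" buffer; otherwise plain LRU.
        for key, policy in self.buffers.items():
            if policy == "reuse":
                del self.buffers[key]
                return
        self.buffers.popitem(last=False)

cache = HintedCache(capacity=2)
cache.access("db:block1", policy="keep")
cache.access("log:block1", policy="reuse")
cache.access("db:block2", policy="keep")   # evicts log:block1, not db:block1
print(list(cache.buffers))  # ['db:block1', 'db:block2']
```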
SETTING THE VOLUME BUFFER CACHE POLICY
You can use FlexShare to influence how Data ONTAP determines when to reuse buffers. To
set the buffer cache policy for a specific volume, complete the following steps.
1. If you haven’t already done so, ensure that FlexShare is enabled for your storage system
by entering the following command:
priority on
2. Specify the cache buffer policy for the volume by entering the following command:
priority set volume vol_name cache=policy
Example: The following command sets the cache buffer policy for the testvol1 volume
to keep, which instructs Data ONTAP to avoid reusing buffers for this volume
whenever possible.
priority set volume testvol1 cache=keep
3. You can optionally verify priority levels of the volume by entering the following
command:
priority show volume [-v] vol_name
Removing or Disabling FlexShare Policies
Default Volume Priority
Module Summary
MODULE SUMMARY
Exercise
Module 15: FlexShare
Estimated Time: 30 minutes
EXERCISE
NDMP Fundamentals
Module 16
Data ONTAP® 7.3 Fundamentals
NDMP FUNDAMENTALS
Module Objectives
MODULE OBJECTIVES
INTRODUCTION
This module describes how to use Network Data Management Protocol (NDMP) services on
your storage system to enable network-based backup and recovery using NDMP-enabled
commercial backup applications. It also explains how to monitor NDMP services running on
the storage system and to use ndmpcopy to migrate data efficiently within or between storage
systems.
NDMP Overview
NDMP OVERVIEW
NDMP is an open standard for centralized control of data management across the
enterprise. NDMP enables backup software vendors to provide support for NetApp storage
systems without having to port client code.
An NDMP-compliant solution separates the flow of backup and restore control information
from the flow of data to and from the backup media. These solutions invoke the Data ONTAP
operating system's native dump and restore to back up data from, and restore data to, a
NetApp storage system.
NDMP also provides low-level control of tape devices and media changers.
Using data protection services through backup applications that support NDMP offers a
number of advantages:
• Provides sophisticated scheduling of data protection operations across multiple storage systems.
• Provides media management and tape inventory management services to eliminate or minimize
manual tape handling during data protection operations.
• Supports data catalogue services that simplify the process of locating specific recovery data. Direct
Access Recovery optimizes the access of specific data from large backup tape sets.
• Supports multiple topology configurations, allowing efficient sharing of secondary storage
resources (tape library) through the use of three-way network data connections.
NDMP Support Matrix
Partner: Data ONTAP 6.2 / 6.3 / 6.4 / 6.5 / 7.0
Atempo® Time Navigator™: 3.6 / 3.6 / 3.7 / TBD / TBD
BakBone® NetVault®: 6.5.2 / 6.5.2 / 7.0 / 7.0, 7.1 / 7.1.1
CommVault® Galaxy®: 4.1 / 4.1 / 4.2 / TBD / 5.9
BrightStor® ARCserve®: 9 / 9 / 9 / TBD / 11.1
HP® Data Protector: 5.0 / 5.0 / 5.0, 5.1 / TBD / 5.5
Legato® NetWorker™: 6.2 / 6.1.3, 6.2 / 6.1.3, 6.2, 7.0 / TBD / 7.2
Syncsort® Backup Express: 2.1.4, 2.1.5 / 2.1.4, 2.1.5 / 2.1.5 / TBD / 2.3
IBM® Tivoli® Storage Manager: 5.0, 5.1 / 5.0, 5.1 / 5.2 / TBD / 5.3
Veritas® NetBackup™: 3.4, 3.4.1, 4.5 / 4.5 (Data ONTAP 6.3.3 or later only) / 4.5, 5.0 (Data ONTAP 6.4.2 or later only) / 4.5, 5.0 / 4.5, 5.0, 5.1
Gigabit Ethernet Tape to SAN
[Figure: UNIX and NT application servers and a backup host on an Ethernet LAN, with NetApp storage connected over Gigabit Ethernet to a tape SAN]
NetApp delivers both certified FC fabric Tape-to-SAN backup solutions and Gigabit Ethernet
(GbE) Tape-to-SAN solutions. These solutions are made possible through our joint
partnerships with industry leaders in the fields of tape automation, fabric switches, and
backup software. They offer significant benefits for enterprise customers over tape devices
attached (through SCSI) directly to NetApp storage systems. Specifically, these two Tape-to-
SAN solutions offer the following benefits:
• Tape sharing and amortization of tape resources
• Extended distances from data to centralized tape backup libraries
• Minimized impact of backups on servers on the network
• Tape drive hot-swapping
• Dynamic tape configuration changes without shutting down the NetApp storage system
The GbE Tape-to-SAN configurations allow multiple NetApp storage systems to
concurrently transfer data over GbE to one or more tape libraries that support NDMP. This
architecture allows each drive inside the tape library to be seen as a shared resource and an
NDMP server. A clear advantage of this configuration is the demonstrated interoperability of
Ethernet-based components.
Fibre Channel Tape to SAN
[Figure: UNIX and NT application servers and a backup host on an Ethernet LAN, with NetApp storage connected over Fibre Channel to a tape SAN]
Together with a third-party NDMP-based data protection solution that supports technology
known as dynamic drive sharing, both FC and GbE Tape-to-SAN solutions enable you to
dynamically allocate tape drives in a larger library to NetApp storage systems as needed for
backup or recovery operations. This eliminates the need to dedicate expensive tape devices to
each system.
These solutions help provide essential elements to enterprise customers seeking to maximize
the availability of their NetApp storage. You can replace or upgrade tape devices with no
impact on the system's ability to serve data to clients. Drives can be dynamically added or
removed without requiring any downtime.
For information about certified backup solutions, see the NOW site.
NDMP Terminology and Components
NDMP client
– The backup application is the NDMP client
– NDMP clients submit requests to an NDMP
server, and then receive replies and status back
from the NDMP server
NDMP server
– A process or service that runs on the NetApp
storage system
– The NDMP server processes requests from
NDMP clients, and then returns reply and status
information back to the NDMP client
In the following definitions, the primary storage is the system that performs the NDMP Data
Service and the secondary system is the one that performs the NDMP Tape Service.
• Data Management Application (DMA)—Also called the backup application. The DMA controls
the NDMP session. Veritas NetBackup and Legato Networker are examples of backup
applications.
• NDMP Service—Provides Data Service, Tape Service, and SCSI Service.
• Control Connection—Bidirectional TCP/IP connection that carries External Data
Representation (XDR)-encoded NDMP messages between the DMA and the NDMP server. The Control
Connection is analogous to an NDMP session on the storage system.
• Data Connection—The connection between the two NDMP services that carries the data
stream; it is either internal to the NetApp storage system (local) or a TCP/IP connection (remote).
• Data Service—NDMP service that transfers data between the primary storage system (where the
data on disks resides) and the Data Connection.
• Tape Service—NDMP service that transfers data between the secondary storage and the Data
Connection, allowing the DMA to manipulate and access secondary storage.
Typical NDMP Backup Session
[Figure: the DMA host exchanges NDMP control messages over an IP network with the NDMP Data Service on the primary storage system and the NDMP Tape Service on the secondary storage system, and receives notifications, file history, and log messages in return; a separate data connection (TCP/IP or IPC) carries the payload data between the two services, and the DMA maintains a content index]
The figure above represents a storage system (primary) to storage system (secondary) to tape
data protection topology, where the backup operation is driven by a DMA host (NDMP
client).
The DMA opens connections to, and activates NDMP services in, both storage systems.
Control messages to the services configure the services and create a data connection between
them. More control messages initiate and start the backup; the data service creates the
payload (backup image) and writes it to the data connection, where the tape service receives
it.
Log messages and notifications are sent from the services to the DMA.
CONTROL CONNECTION
• XDR encoded
• DMA-server exchanges
• Well-known, registered TCP port (10000)
• DMA “manages” servers through request/reply control exchanges
• Server-initiated log and notification posts (short, unidirectional)
• Server-initiated file history transfers (bulk data, unidirectional)
DATA CONNECTIONS
• Opaque byte stream with no XDR encoding
• Server-server exchanges
• DMA “manages” data connection between peer servers
• Non-reserved TCP ports are assigned by the “listening” server
• Server-initiated backup-stream transfers (bulk data, unidirectional)
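The control connection's XDR framing can be made concrete with a short packing sketch. The six-field header layout (sequence, time stamp, message type, message code, reply sequence, error) and the NDMP_CONNECT_OPEN code follow the public NDMP v3/v4 specifications, but this is an illustration, not a working NDMP client:

```python
import struct

# XDR encodes unsigned integers as 4-byte big-endian values. The NDMP
# message header is six such integers: sequence, time_stamp,
# message_type, message code, reply_sequence, and error.
NDMP_MESSAGE_REQUEST = 0
NDMP_CONNECT_OPEN = 0x900  # message code from the NDMP specification
NDMP_NO_ERR = 0

def pack_ndmp_header(sequence, timestamp, message_code,
                     message_type=NDMP_MESSAGE_REQUEST,
                     reply_sequence=0, error=NDMP_NO_ERR):
    """Pack an NDMP header as XDR: six unsigned 32-bit big-endian ints."""
    return struct.pack(">6I", sequence, timestamp, message_type,
                       message_code, reply_sequence, error)

header = pack_ndmp_header(sequence=1, timestamp=0,
                          message_code=NDMP_CONNECT_OPEN)
print(len(header))  # 24 bytes: 6 fields x 4 bytes each
```

The data connection, by contrast, is an opaque byte stream with no such framing.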
NDMP Connection Information
NDMP Tape Backup
Topologies
NDMP Tape Backup Topologies
NDMP supports a number of topologies and configurations between backup applications and
storage systems or other NDMP servers providing data (file systems) and tape services.
The NDMP protocol specification allows the following backup configurations:
• Local backup from a NetApp storage system to a direct-attached tape device
• Three-way backup from a NetApp storage system to a network-attached tape library
• Three-way backup from a NetApp storage system through the network to another NetApp storage
system with a local tape device
• Backup from a NetApp storage system through the network to a UNIX or Windows NT backup
server with a local tape device
• Backup from a UNIX or Windows NT server through the network to a NetApp storage system
with a local tape device
Storage System to Local Tape (Direct-
Attached)
The storage system to local tape topology
provides the best performance.
The distance between the storage system and
the tape device is limited by SCSI or FC.
SCSI-attached tape drives are dedicated to a
single storage system.
[Figure: file history and the NDMP control connection cross the LAN boundary from the backup application; backup data flows directly from the storage system to its attached tape device]
In the simplest configuration, a backup application backs up data from a storage system to a
tape subsystem attached to the storage system. The NDMP control connection exists across
the network boundary. The NDMP data connection that exists within the storage system
between the data and tape services is called an NDMP local configuration.
Storage System to Network-Attached Tape
Library
Dynamic drive sharing without additional
software
No distance limit between the source storage
system and the tape library
Performance is dependent on network
architecture and storage system resources
[Figure: file history and NDMP control from the backup application; backup data flows over the network from the storage system to the NDMP-enabled tape library]
NDMP-enabled tape libraries provide a variation of the three-way configuration. In this case,
the tape library attaches directly to the TCP/IP network, and then communicates with the
backup application and the storage system through an internal NDMP server.
Storage System to Storage System to Tape
[Figure: file history and NDMP control from the backup application; backup data flows over the network from the source storage system to a second storage system with an attached tape library]
A backup application can also back up data from a storage system to a tape library (a media
changer with one or more tape drives) attached to another storage system. In this case, the
NDMP data connection between the data and tape services is provided by a TCP/IP network
connection. This configuration is referred to as an NDMP three-way storage-system-to-storage-system configuration.
Storage System to Server to Tape
[Figure: file history and NDMP control from the backup application; backup data flows from the storage system over the network to a backup server with a local tape device]
Server to Storage System to Tape
[Figure: file history and NDMP control from the backup application; backup data flows from the server over the network to a storage system with a local tape device]
Using Tape Devices with NDMP
When using NDMP, the storage system can read from or write to the following devices:
• Stand-alone tape drives or tapes in a tape library that is attached to the storage system
• Tape drives or tape libraries attached to the workstation that runs the backup application
• Tape drives or tape libraries attached to a workstation or storage system on your network
• NDMP-enabled tape libraries attached to your network
NOTE: To use NDMP to manage your tape library, you must set the tape stacker autoload
setting to off. Otherwise, the system won’t allow media-changer operations to be controlled
by the NDMP backup application.
Enabling and
Configuring NDMP
Enabling and Configuring NDMP
NDMP is disabled by default. To enable:
ndmpd on or options ndmpd.enable on
The version must match the version configured on the
NDMP backup application. To configure:
ndmpd version { 2 | 3 | 4 }
By default, there is no host access control configured. To
configure:
ndmpd.access
Configuring NDMP authorization methods:
ndmpd.authtype
– Combination of challenge, plaintext, or both
– SnapVault and SnapMirror management requires challenge
To enable a storage system for basic management by an NDMP backup application, you must
enable the storage system’s NDMP support, and specify the configured NDMP version of the
backup application, host IP address, and authentication method.
To prepare a storage system for NDMP management, complete the following steps.
1. Enable the NDMP service:
system>options ndmpd.enable on
When disabling ndmpd, the storage system continues processing all requests for sessions already
established, but rejects new sessions.
2. Specify the NDMP version to support on the storage system. This version must match the version
configured on the NDMP backup application server:
system>ndmpd version {2|3|4}
Data ONTAP supports NDMP versions 2, 3, and 4 (4 is the default value).
The storage system and the backup application must agree on a version of NDMP to be used for
each NDMP session. When the backup application connects to the storage system, the storage
system sends the default version back. The application can choose to use that default version and
continue with the session. However, if the backup application uses an earlier version, it begins
version negotiation, asking if each version is supported, to which the storage system responds with
a yes or no.
Because some backup applications do not support version negotiation, the ndmpd version
command controls the maximum and default NDMP version allowed. If you know your backup
application does not support NDMP version 4, and does not negotiate versions, you can use this
command to define the maximum version Data ONTAP supports so that the application can
operate correctly.
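The negotiation sequence described above can be modeled as a small function. This is a behavioral sketch under the assumptions stated in the text (the storage system offers its configured maximum first, then answers yes or no for each version the application proposes); it is not the Data ONTAP implementation:

```python
def negotiate_ndmp_version(server_max, client_versions, supported=(2, 3, 4)):
    """Model the version negotiation described above.

    server_max is the maximum/default version set with `ndmpd version`;
    client_versions is the set of versions the backup application
    supports. Returns the agreed version, or None if the session fails.
    """
    if server_max in client_versions:
        return server_max  # the client accepts the server's default
    for version in sorted(client_versions, reverse=True):
        # The client asks about each version; the server says yes only
        # for versions it supports and that do not exceed its maximum.
        if version in supported and version <= server_max:
            return version
    return None

# A backup application that only speaks NDMP v2 against a storage
# system configured with `ndmpd version 4`:
print(negotiate_ndmp_version(4, {2}))  # 2
```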
3. If you want to specify a restricted set of NDMP backup-application hosts that can connect to the
storage system, set the following option:
system>options ndmpd.access {all|legacy|host[!]=hosts|if[!]=interfaces}
Where:
• all is the default value, which permits NDMP sessions with any host
• legacy restores the values in effect before a Data ONTAP version upgrade; in the
case of Data ONTAP 6.2, the legacy value is equal to all
• host=hosts allows a specified host or a comma-separated list of hosts to run NDMP
sessions on this storage system; the hosts can be specified by either host name or IP address
• host!=hosts blocks a specified host or comma-separated list of hosts from running
NDMP sessions on this storage system; the hosts can be specified by either host name or IP
address
• if=interfaces allows NDMP sessions through a specified interface or comma-separated
list of interfaces on this storage system
• if!=interfaces blocks NDMP sessions through a specified interface or comma-separated
list of interfaces on this storage system
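For illustration, the accepted value forms can be parsed into a simple allow/block rule. The helper below is hypothetical; Data ONTAP performs this parsing internally:

```python
def parse_ndmpd_access(value):
    """Parse an ndmpd.access setting string into a simple rule.

    Recognizes the documented forms: all, legacy, host=..., host!=...,
    if=..., and if!=... (hypothetical helper for illustration).
    """
    if value in ("all", "legacy"):
        return {"kind": value}
    # Check the "!=" forms first so "host!=" is not mistaken for "host=".
    for prefix, kind in (("host!=", "host-block"), ("host=", "host-allow"),
                         ("if!=", "if-block"), ("if=", "if-allow")):
        if value.startswith(prefix):
            items = [s.strip() for s in value[len(prefix):].split(",")
                     if s.strip()]
            return {"kind": kind, "items": items}
    raise ValueError("unrecognized ndmpd.access value: %r" % value)

print(parse_ndmpd_access("host=10.0.0.5,backup1"))
# {'kind': 'host-allow', 'items': ['10.0.0.5', 'backup1']}
```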
4. Specify the authentication method through which users are allowed to start NDMP sessions with
the storage system. This setting must include an authentication type supported by the NDMP
backup application:
system>options ndmpd.authtype {challenge|plaintext|challenge,plaintext}
The challenge authentication method is generally the preferred, and more secure,
authentication method. Challenge is the default type.
With the plaintext authentication method, the login password is transmitted as clear text.
Enabling and Configuring NDMP (Cont.)
Creating local account:
useradmin useradd <backupuser>
Setting the NDMP password length:
By default, Data ONTAP generates a 16-character
password. If DMA does not support this, reduce the
password length to 8 characters. To set the password
length:
ndmpd.password_length { 8 | 16 }
Generating an encoded NDMP password:
ndmpd password <backupuser>
Enabling the NDMP connection log:
ndmpd.connectlog.enabled { off | on }
Including or excluding files with ctime changed from
incremental dumps:
ndmpd.ignore_ctime.enabled { on | off }
5. If operators without root privileges on the storage system will carry out tape-backup
operations through the NDMP backup application, add a new backup user to the Backup
Operators useradmin group:
system>useradmin user add backupuser -g "Backup Operators"
6. Specify an 8- or 16-character NDMP password length (the default value is 16):
system>options ndmpd.password_length { 8 | 16 }
7. Generate an NDMP password for the new user:
system>ndmpd password backupuser
NOTE: If you change the password to your regular storage system account, repeat this procedure
to obtain your new system-generated, NDMP-specific password.
8. Enable logging of NDMP connection attempts with the storage system:
system>options ndmpd.connectlog.enabled on
This enables Data ONTAP to log NDMP connection attempts in the /etc/messages file.
These entries can help you determine if and when authorized or unauthorized users are attempting
to start NDMP sessions. The default for this option is off.
Log entries for attempted NDMP connections or operations include the following fields:
• Time
• Thread
• NDMP request and action (allow or refuse)
• NDMP version
• Session ID
• Source IP (address where the NDMP request originated)
• Destination IP (address of the storage system receiving the NDMP request)
• Source port (port through which the NDMP request was transmitted)
• Storage system port through which the NDMP request was received
Example:
Fri Aug 25 16:45:17 GMT ndmpd.access allowed for version=4, sessid=34,
from src ip=172.29.19.40, dst ip=172.29.19.95, src port=63793,
dst port=10000.
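An entry of this form can be picked apart with a regular expression. The parser below is hypothetical and deliberately tolerant of spacing around the equals signs, since the exact /etc/messages formatting may vary:

```python
import re

# Hypothetical parser for NDMP connection-log entries like the example
# above; the regex tolerates variable whitespace around '='.
LOG_RE = re.compile(
    r"ndmpd\.access\s+(?P<action>allowed|refused).*?"
    r"version\s*=\s*(?P<version>\d+).*?"
    r"sessid\s*=\s*(?P<sessid>\d+).*?"
    r"src ip\s*=\s*(?P<src_ip>[\d.]+).*?"
    r"dst ip\s*=\s*(?P<dst_ip>[\d.]+).*?"
    r"src port\s*=\s*(?P<src_port>\d+).*?"
    r"dst port\s*=\s*(?P<dst_port>\d+)")

entry = ("Fri Aug 25 16:45:17 GMT ndmpd.access allowed for version=4, "
         "sessid=34, from src ip=172.29.19.40, dst ip=172.29.19.95, "
         "src port=63793, dst port=10000.")
match = LOG_RE.search(entry)
print(match.group("action"), match.group("dst_port"))  # allowed 10000
```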
9. Include or exclude files whose ctime has changed from incremental dumps, according to
your backup requirements:
system>options ndmpd.ignore_ctime.enabled { on | off }
When this option is on, files whose only change is an altered ctime are excluded from
storage system incremental dumps, because other processes (such as virus scanning) often
alter the ctime of files. When this option is off, a backup on the storage system includes
all files with a change or modified time later than the last dump in the previous level dump.
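The selection rule can be sketched as follows; the tuple-based file representation and the helper name are invented for illustration:

```python
def select_for_incremental(files, last_dump_time, ignore_ctime=False):
    """Pick files for an incremental dump per the rule described above.

    files is a list of (name, mtime, ctime) tuples. With ignore_ctime
    True (the option set to on), only a modified time later than the
    last dump qualifies a file, so ctime-only changes made by processes
    such as virus scanners are skipped.
    """
    picked = []
    for name, mtime, ctime in files:
        changed = mtime > last_dump_time or (
            not ignore_ctime and ctime > last_dump_time)
        if changed:
            picked.append(name)
    return picked

files = [("a.txt", 100, 100),   # untouched since the last dump (t=200)
         ("b.txt", 150, 250),   # ctime bumped by a scanner, data unchanged
         ("c.txt", 300, 300)]   # genuinely modified
print(select_for_incremental(files, 200))                     # ['b.txt', 'c.txt']
print(select_for_incremental(files, 200, ignore_ctime=True))  # ['c.txt']
```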
NDMP Status and Session Information
In the following example, the command displays detailed status information for session 4:
system>ndmpd probe 4
ndmpd ON.
Session: 4
isActive: TRUE
protocol version: 3
effHost: Local
authorized: FALSE
client addr: 10.10.10.12.47154
spt.device_id: none
spt.ha: -1
spt.scsi_id: -1
spt.scsi_lun: -1
tape.device: rst0a
tape.mode: Read/Write
mover.state: Active
mover.mode: Read
mover.pauseReason N/A
mover.haltReason N/A
mover.recordSize: 10240
mover.recordNum: 315620
mover.dataWritten: 3231948800
mover.seekPosition: 0
mover.bytesLeftToRead: 0
mover.windowOffset: 0
mover.windowLength: -1
mover.position: 0
mover.connect.addr_type:LOCAL
data.operation: Backup
data.state: Active
data.haltReason: N/A
data.connect.addr_type: LOCAL
data.bytesProcessed: 3231989760
TERMINATING NDMP SESSIONS
To terminate a specific session:
system>ndmpd kill session#
where session# is the number of the NDMP session you want to terminate, from 0 to 99
To terminate all NDMP sessions:
system>ndmpd killall
These kill commands allow nonresponding sessions to be cleared without the need for a
reboot, because the ndmpd off command waits until all sessions are inactive before turning
off the NDMP service.
NDMP Dump and Restore Format
The Data ONTAP dump adheres to the Solaris ufsdump
format.
Dump format:
– Phase I and II: Build the map of files and directories, and
collect file history and attribute information
– Phase III: Dump data to tape, specifically directory
entries
– Phase IV: Dump files
– Phase V: Dump ACLs
Restore format:
– Phase I: Restore directories
– Phase II: Restore files
Phase III writes the entire directory structure for what is being backed up to the tape. Phase
III includes two subphases:
• Phase IIIa is the early ACLs phase. This phase dumps ACLs for the data set to the tape. This step
could take more time if a lot of files in the data set have ACLs.
• Phase IIIb was introduced in Data ONTAP 6.4. This phase is executed only for NDMP backups
that have File History turned on. The output of this phase is the offset map. For each file on any
given backup, the offset map contains the physical address on the tape that marks the beginning of
the file in the backup image.
Phase IV dumps the actual file data to tape. This phase operates in inode order; a smaller
inode number is guaranteed to be found before a larger inode number.
Phase V is a duplicate of Phase IIIa. This is what traditionally existed in the NetApp native
dump, and this phase is retained for backward compatibility.
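The Phase IIIb offset map can be sketched as a mapping from each file to the tape position where its data begins; the helper and its flat, size-based layout are illustrative only:

```python
def build_offset_map(files_in_inode_order, start_offset=0):
    """Sketch of the Phase IIIb offset map: for each file in the backup
    image, record the physical offset where the file's data begins.
    Illustrative only; real tape addressing is more involved.
    """
    offset_map = {}
    offset = start_offset
    for name, size in files_in_inode_order:
        offset_map[name] = offset  # the file's data starts here
        offset += size
    return offset_map

# Files are dumped in inode order, so offsets accumulate in that order:
print(build_offset_map([("a", 100), ("b", 50)]))  # {'a': 0, 'b': 100}
```

This is what lets Direct Access Recovery seek straight to one file in a large backup image instead of reading the whole tape.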
Dump and Restore Event Logs
Data ONTAP automatically logs significant events and the times at which they occur during
dump and restore operations. You might want to view event log files to verify if a backup was
successful, to gather statistics on backup operations, or to use information contained in past
event-log files to help diagnose problems with dump and restore operations.
Event logging is turned off by default. To enable event logging:
system>options backup.log.enable on
All dump and restore events are recorded in a log file named backup in the /etc/log directory.
Once a week, log files are rotated. The /etc/log/backup file is copied to
/etc/log/backup.0, the /etc/log/backup.0 file is copied to /etc/log/backup.1,
and so on. The system saves the log files for up to six weeks. This means you can have up to
seven message files (/etc/log/backup.0 through /etc/log/backup.5, plus the current
/etc/log/backup file).
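The weekly rotation scheme can be modeled with ordinary files. This is a sketch of the documented behavior, not the Data ONTAP implementation:

```python
import os

def rotate_backup_logs(log_dir, keep=6):
    """Rotate logs the way described above: backup.5 is discarded,
    each backup.N becomes backup.N+1, and the current backup file
    becomes backup.0 (sketch of the documented scheme)."""
    oldest = os.path.join(log_dir, "backup.%d" % (keep - 1))
    if os.path.exists(oldest):
        os.remove(oldest)                      # the oldest week falls off
    for n in range(keep - 2, -1, -1):          # backup.4 -> backup.5, ...
        src = os.path.join(log_dir, "backup.%d" % n)
        if os.path.exists(src):
            os.rename(src, os.path.join(log_dir, "backup.%d" % (n + 1)))
    current = os.path.join(log_dir, "backup")
    if os.path.exists(current):
        os.rename(current, os.path.join(log_dir, "backup.0"))
    open(current, "w").close()                 # start a fresh current log
```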
Each log message begins with one of the following type indicators:

log: Logging event
dmp: Dump event
rst: Restore event
The timestamp field shows the date and time of the event.
The identifier field for a dump event includes the dump path and the unique ID for the
dump. The identifier field for a restore event uses only the restore destination path name
as a unique identifier. Logging-related event messages do not include an identifier field.
DUMP EVENTS
The event field for a dump event contains an event type followed by event-specific
information in parentheses. The following list describes the events, their meanings, and the
related event information that might be recorded for a dump operation.

Start: A dump or NDMP dump begins. Information: dump level and the type of dump.
Restart: A dump restarts. Information: dump level.
End: The dump completed successfully. Information: amount of data processed.
Abort: The operation aborts. Information: amount of data processed.
Options: Specified options are listed. Information: all options and their associated values, including NDMP options.
Tape_open: The tape is open for read/write. Information: the new tape device name.
Tape_close: The tape is closed for read/write. Information: the tape device name.
Phase_change: The dump is entering a new processing phase. Information: the new phase name.
Error: The dump encounters an unexpected event. Information: error message.
Snapshot: A snapshot is created or located. Information: the name and time of the snapshot.
Base_dump: A base dump entry in the /etc/dumpdates file has been located (incremental dumps only). Information: the level and time of the base dump.
The log file for a dump operation begins with either a Start or Restart event and ends
with either an End or Abort event.
The following is an example of the output for a dump operation:
dmp Fri Aug 25 01:11:22 GMT /vol/vol0/(1) Start (Level 0)
dmp Fri Aug 25 01:11:22 GMT /vol/vol0/(1) Options (b=63, B=1000000, u)
dmp Fri Aug 25 01:11:22 GMT /vol/vol0/(1) Snapshot (snapshot_for_backup.6, Sep 20 01:11:21 GMT)
dmp Fri Aug 25 01:11:22 GMT /vol/vol0/(1) Tape_open (nrst0a)
dmp Fri Aug 25 01:11:22 GMT /vol/vol0/(1) Phase_change (I)
dmp Fri Aug 25 01:11:24 GMT /vol/vol0/(1) Phase_change (II)
dmp Fri Aug 25 01:11:24 GMT /vol/vol0/(1) Phase_change (III)
dmp Fri Aug 25 01:11:26 GMT /vol/vol0/(1) Phase_change (IV)
dmp Fri Aug 25 01:14:19 GMT /vol/vol0/(1) Tape_close (nrst0a)
dmp Fri Aug 25 01:14:20 GMT /vol/vol0/(1) Tape_open (nrst0a)
dmp Fri Aug 25 01:14:54 GMT /vol/vol0/(1) Phase_change (V)
dmp Fri Aug 25 01:14:54 GMT /vol/vol0/(1) Tape_close (nrst0a)
dmp Fri Aug 25 01:14:54 GMT /vol/vol0/(1) End (1224 MB)
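A dump log begins with a Start or Restart event and ends with an End or Abort event; that invariant can be checked by pulling the event name out of each line. The field positions below follow the example output rather than a formal grammar:

```python
def summarize_dump_log(lines):
    """Return (first_event, last_event) for a dump event log of the
    form shown above, checking that it opens with Start/Restart and
    closes with End/Abort. Illustrative parser only.
    """
    events = []
    for line in lines:
        if not line.startswith("dmp"):
            continue
        # The event name is the first token after the "/vol/.../(id)"
        # identifier field, i.e. after the first closing parenthesis.
        events.append(line.split(")", 1)[1].split()[0])
    first, last = events[0], events[-1]
    assert first in ("Start", "Restart"), "log must open with Start/Restart"
    assert last in ("End", "Abort"), "log must close with End/Abort"
    return first, last

log = ["dmp Fri Aug 25 01:11:22 GMT /vol/vol0/(1) Start (Level 0)",
       "dmp Fri Aug 25 01:14:54 GMT /vol/vol0/(1) End (1224 MB)"]
print(summarize_dump_log(log))  # ('Start', 'End')
```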
RESTORE EVENTS
The event field for a restore event contains an event type followed by event-specific
information in parentheses. The following list describes the events, their meanings, and the
related event information that might be recorded for a restore operation.

Start: A restore or NDMP restore begins. Information: restore level and the type of restore.
Restart: A restore restarts. Information: restore level.
End: The restore completed successfully. Information: number of files and amount of data processed.
Abort: The operation aborts. Information: number of files and amount of data processed.
Options: Specified options are listed. Information: all options and their associated values, including NDMP options.
Tape_open: The tape is open for read/write. Information: the new tape device name.
Tape_close: The tape is closed for read/write. Information: the tape device name.
Phase_change: The restore is entering a new processing phase. Information: the new phase name.
Error: The restore encounters an unexpected event. Information: error message.
The log file for a restore operation begins with either a Start or Restart event and ends
with either an End or Abort event.
The following is an example of the output for an aborted restore operation:
rst Fri Aug 25 02:13:54 GMT /rst_vol/ Start (Level 0)
rst Fri Aug 25 02:13:54 GMT /rst_vol/ Options (r)
rst Fri Aug 25 02:13:54 GMT /rst_vol/ Tape_open (nrst0a)
rst Fri Aug 25 02:13:55 GMT /rst_vol/ Phase_change (Dirs)
rst Fri Aug 25 02:13:56 GMT /rst_vol/ Phase_change (Files)
rst Fri Aug 25 02:23:40 GMT /vol/rst_vol/ Error (Interrupted)
rst Fri Aug 25 02:23:40 GMT /vol/rst_vol/ Tape_close (nrst0a)
rst Fri Aug 25 02:23:40 GMT /vol/rst_vol/ Abort (3516 files, 598 MB)
Using the ndmpcopy Command to
Copy Data
The ndmpcopy command:
– Used to transfer data between storage systems
that support NDMP v3 or v4
– Can carry out full and incremental transfers
– Limits incremental transfers to a maximum of
two levels (one full and up to two incremental)
– Applies NetApp to NetApp only
– Syntax:
ndmpcopy [options] source_hostname:source_path
destination_hostname:destination_path
The ndmpcopy command enables you to transfer file system data between storage systems
that support NDMP v3 or v4, and the UNIX file system (UFS) dump format.
Using the ndmpcopy command, you can carry out both full and incremental data transfers.
However, incremental transfers are limited to two levels (a level 0 full transfer followed
by no more than two incremental transfers). You can transfer full or partial volumes,
qtrees, or directories, but not individual files.
To copy data within a storage system or between storage systems using ndmpcopy, use the
following command from the source or the destination system, or from a storage system that
is not the source or the destination:
system>ndmpcopy [options] source_hostname:source_path
destination_hostname:destination_path
where source_hostname and destination_hostname can be host names or IP
addresses. If destination_path does not specify a volume (or specifies a
nonexistent volume), the root volume is used.
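The syntax and the documented dump-level limit can be captured in a small command builder; the helper and its defaults are hypothetical:

```python
def build_ndmpcopy(src_host, src_path, dst_host, dst_path,
                   level=0, sa=None, da=None):
    """Assemble an ndmpcopy command line following the syntax above.

    Enforces the documented dump levels (0 = full, 1 or 2 =
    incremental). Hypothetical helper for illustration.
    """
    if level not in (0, 1, 2):
        raise ValueError("ndmpcopy supports dump levels 0, 1, and 2 only")
    parts = ["ndmpcopy"]
    if sa:
        parts += ["-sa", sa]   # source user:password
    if da:
        parts += ["-da", da]   # destination user:password
    if level:
        parts += ["-l", str(level)]
    parts.append("%s:%s" % (src_host, src_path))
    parts.append("%s:%s" % (dst_host, dst_path))
    return " ".join(parts)

print(build_ndmpcopy("systemA", "/vol/vol0/home", "systemB", "/vol/vol1"))
# ndmpcopy systemA:/vol/vol0/home systemB:/vol/vol1
```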
The following table lists available options for the ndmpcopy command.
Option                    Description
-sa username:[password]   Source authorization: specifies the user name and password for connecting to the source storage system.
-da username:[password]   Destination authorization: specifies the user name and password for connecting to the destination storage system.
-st {challenge|text}      Sets the source authentication type to be used when connecting to the source storage system.
-dt {challenge|text}      Sets the destination authentication type to be used when connecting to the destination storage system. By default, challenge is the authentication type used. The text authentication type exchanges the user name and password in clear text; the challenge authentication type exchanges them in encrypted form.
-l level                  Sets the dump level used for the transfer. Valid values are 0, 1, and 2, where 0 indicates a full transfer and 1 or 2 indicates an incremental transfer. The default is 0.
-d                        Generates ndmpcopy debug log messages, which appear in the /etc/log directory on the root volume. The debug log file names are of the form ndmpcopy.yyyymmdd.
-f                        Enables forced mode. This mode enables overwriting system files in the /etc directory on the root volume.
-h                        Prints the help message.
NDMP Online Documentation
NetApp strongly recommends that you read the following documents to gain a complete
understanding of NDMP and its integration with partner backup solutions. All of these
references are available on the NOW site.
TECHNICAL REPORTS
TR-3066: Data Protection Strategies for Network Appliance Storage Systems
NDMP SPECIFICATIONS
http://www.ndmp.org
MANUAL
Data Protection Tape Backup and Recovery Guide for Data ONTAP (latest release
recommended)
Exercise
Module 16: NDMP Fundamentals
Estimated Time: 30 minutes
EXERCISE
Active-Active Controller Configuration
Module 17
Data ONTAP® 7.3 Fundamentals
Module Objectives
MODULE OBJECTIVES
Active-Active Controller Configuration
Active-Active is for High Availability
Configuration Characteristics
CONFIGURATION CHARACTERISTICS
Types of Active-Active Configurations
STANDARD
In a standard active-active configuration, Data ONTAP ensures that each node monitors the
functioning of its partner through a heartbeat signal sent between the nodes. The NVRAM
data from one node is mirrored by its partner and each node can take over the partner’s disks
if the partner node fails. Also, the nodes synchronize each other’s time.
METROCLUSTER
MetroCluster provides the same advantages of mirroring as mirrored active-active
configurations, with the additional ability to initiate failover if an entire site becomes lost or
unavailable.
In a mirrored active-active configuration, your data is protected even if there is a failure or
loss of two or more disks in a RAID 4 aggregate, or three or more disks in a RAID-DP aggregate.
The failure of an FC-AL adapter, loop, or ESH2 module does not require a failover.
In addition, a MetroCluster enables you to use a single command to initiate a failover if an entire
site becomes lost or unavailable. If a disaster occurs at one of the node locations and destroys
your data there, your data not only survives on the other node, but can also be served by that
node while you address the issue or rebuild the configuration.
Requirements for Standard Active-Active
Architecture compatibility
Storage capacity
Disk and disk shelf compatibility
Cluster interconnect adapters and cables installed
Nodes attached to the same networks
Same software licensed and enabled
STORAGE CAPACITY
The number of disks in a standard active-active configuration must not exceed the maximum
configuration capacity. In addition, the total amount of storage attached to each node must not
exceed the capacity of a single node.
To determine your maximum configuration capacity, see the System Configuration Guide at
http://now.netapp.com/NOW/knowledge/docs/hardware/hardware_index.shtml.
NOTE: When a failover occurs, the takeover node temporarily serves data from all the
storage in the active-active configuration. When the single-node capacity limit is less than the
total active-active configuration capacity limit, the total disk space in a cluster can be greater
than the single-node capacity limit. It is acceptable for the takeover node to temporarily serve
more than the single-node capacity would normally allow, as long as it does not own more
than the single-node capacity.
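As a toy illustration of this capacity rule (the limit value below is hypothetical, not taken from the System Configuration Guide): each node may own at most the single-node capacity limit, even though the cluster total, which the takeover node temporarily serves, can exceed that limit.

```python
# Hypothetical single-node capacity limit; real limits come from the
# System Configuration Guide for the specific platform.
SINGLE_NODE_LIMIT_TB = 100

def config_is_valid(node_a_tb, node_b_tb):
    """Each node may OWN at most the single-node limit, even though the
    takeover node temporarily SERVES both nodes' storage after a failover."""
    return (node_a_tb <= SINGLE_NODE_LIMIT_TB and
            node_b_tb <= SINGLE_NODE_LIMIT_TB)

assert config_is_valid(80, 90)        # 170 TB cluster total is acceptable...
assert not config_is_valid(120, 40)   # ...but no node may own more than 100 TB
```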
NOTE: If a takeover occurs, the takeover node can only provide the functionality for the licenses
installed on it. If the takeover node does not have a license that was being used by the partner node to
serve data, your active-active configuration loses functionality after a takeover.
Standard Active-Active Configuration
[Figure: A standard active-active configuration. Two controllers are joined by a cluster interconnect and attached to the network; each node's FC-AL A loops connect to its own disk shelves, and its B loops connect to its partner's shelves.]
CONFIGURATION VARIATIONS
The following list describes some configuration variations that are supported for standard
active-active configurations:
• Asymmetrical configurations—One node has more storage than the other. This configuration
is supported as long as neither node exceeds the maximum capacity limit.
• Active/passive configurations—The passive node has only a root volume, while the active
node has all the remaining storage, and services all data requests during normal operation. The
passive node responds to data requests only if it has taken over the active node.
• Shared loops or stacks—If your standard active-active configuration is using software-based
disk ownership, you can share a loop or stack between the two nodes. This is particularly useful
for active/passive configurations.
• Multipath Storage—Provides a redundant connection from each node to each disk. This
configuration can prevent some types of unnecessary failovers.
Enabling the License
1. License
Example:
license add abcedfg
2. Reboot
Example:
reboot
3. Enable the service on one of the two systems
Example:
cf enable
4. Check the status
Example:
cf status
To add the license, enter the following command on both node consoles for each required
license:
license add xxxxxx
where xxxxxx is the license code you received for the feature.
To reboot both nodes, enter the following command:
reboot
To enable controller failover, enter the following command on the local node console:
cf enable
To verify that controller failover is enabled, enter the following command on each node
console:
cf status
Setting Matching Node Options
Because some Data ONTAP options must be the same on both the local and partner nodes,
you should use the options command to check these options on each node, and change them
as necessary.
To set matching node options, complete the following steps:
1. Display and record the option values on the local and partner nodes by entering the following
command on each console:
options
The current option settings for the node are displayed on the console. The output is similar to
the following:
autosupport.doit DONT
autosupport.enable on
2. Verify that the options are set to the same value for both nodes. The comments in the output
are as follows:
– Value might be overwritten in takeover
– Same value required in local+partner
– Same value in local+partner recommended
3. Correct any mismatched options by entering the following command:
options option_name option_value
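The verification step can be sketched as follows; the option names and values are sample data, not a complete options listing.

```python
# Sketch: compare recorded `options` output from the two nodes and report
# the option names that should be corrected with `options <name> <value>`.
def mismatched_options(local, partner):
    """Return option names whose values differ between the two nodes."""
    return sorted(name for name in local
                  if name in partner and local[name] != partner[name])

# Hypothetical recorded values from each node's console:
local   = {"autosupport.enable": "on",  "timed.proto": "ntp"}
partner = {"autosupport.enable": "off", "timed.proto": "ntp"}
print(mismatched_options(local, partner))   # ['autosupport.enable']
```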
Parameters That Must Be the Same
The parameters listed in the figure above must be the same on both nodes so that takeover is
smooth and data is correctly transferred between the nodes.
Three Modes of Operation
How Partners Communicate
To ensure that both nodes in an active-active controller configuration maintain the correct and
current status of the partner node, heartbeat information and node status are stored on each
node in the mailbox disks. The mailbox disks are a redundant set of disks used in
coordinating takeover or giveback operations. If one node stops functioning, the surviving
partner node uses the information on the mailbox disks to perform takeover processing, which
creates a virtual storage system. In the event of an interconnect failure, the mailbox heartbeat
information prevents an unnecessary failover from occurring. Moreover, if cluster
configuration information that is stored on the mailbox disks is out of sync during boot, the
active-active controller nodes automatically resolve the situation. The FAS system failover
process is extremely robust, preventing split-brain issues from occurring.
Active-Active Controllers and NVRAM
NVRAM
Data ONTAP uses the WAFL file system to manage data processing and NVRAM to
guarantee data consistency before committing writes to disks. If the storage controller
experiences a power failure, the most current data is protected by the NVRAM, and file
system integrity is maintained.
In the active-active controller environment, each node reserves half of the total NVRAM size
for the partner node’s data to ensure that exactly the same data exists in NVRAM on both
storage controllers. Therefore, only half of the NVRAM in the active-active controller is
dedicated to the local node. If a failover occurs, the surviving node takes over the failed
node and flushes all WAFL checkpoints stored in NVRAM to disk. The surviving node then
combines the split NVRAM. After the surviving node restores disk control and data
processing to the recovered failed node, all NVRAM data belonging to the partner node is
flushed to disk during the giveback operation.
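The NVRAM split described above can be sketched as follows; the 512-MB size is illustrative, not a specific controller's value.

```python
# Illustrative model of NVRAM use in an active-active pair: during normal
# operation each node dedicates half of its NVRAM to a mirror of its
# partner's data, and recombines the halves after a takeover.
NVRAM_TOTAL_MB = 512   # hypothetical NVRAM size for one controller

def nvram_layout(takeover=False):
    """Return (local_mb, partner_mirror_mb) for one node."""
    if takeover:
        # After takeover, the surviving node flushes the logged checkpoints
        # to disk and combines the split NVRAM for its own use.
        return (NVRAM_TOTAL_MB, 0)
    return (NVRAM_TOTAL_MB // 2, NVRAM_TOTAL_MB // 2)

print(nvram_layout())               # normal operation: half local, half mirror
print(nvram_layout(takeover=True))  # after takeover: all NVRAM recombined
```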
When Does a Takeover Occur?
In an active-active configuration, a takeover occurs when:
– A node undergoes a software or system failure that leads to a panic
– A node undergoes a system failure (for example, a loss of power) and cannot reboot
– There is a mismatch between the disks that one node recognizes and the disks that the
other node recognizes
– One or more network interfaces configured to support failover become unavailable
– A node cannot send heartbeat messages to its partner
– A node is halted with the halt -f command
– A takeover is manually initiated
What Happens When a Takeover Occurs
[Figure: When a takeover occurs, the partner accesses the failed node's disks over the B loops and serves its data to clients.]
When a takeover occurs, the functioning partner node takes over the functions and disk drives
of the failed node by creating an emulated storage system that:
• Assumes the identity of the failed node
• Accesses the failed node’s disks and serves its data to clients
The partner node maintains its own identity and its own primary functions, but also handles
the added functionality of the failed node through the emulated node.
During a Takeover
DURING A TAKEOVER
When a takeover occurs, the surviving partner has two identities: its own and that of its
partner. These identities exist simultaneously on the same storage system. Each identity can
access only the appropriate volumes and networks. You can send commands or log in to
either storage system by using the rsh command, allowing remote scripts that invoke
storage system commands through a remote-shell connection to continue normal operations.
During a Giveback
DURING A GIVEBACK
After a partner node is repaired and operating normally, you can use the cf giveback
command to return operations to the partner.
When the failed node is functioning again, the following events can occur:
• You initiate a cf giveback command that terminates the emulated node on the partner.
• The failed node resumes normal operation, serving its own data.
• The active-active configuration resumes normal operation, with each node ready to take over for
its partner if the partner fails.
Active-Active Commands
cf enable | disable
cf takeover
cf partner
cf giveback
cf status
aggr status -r
halt -f
partner
ACTIVE-ACTIVE COMMANDS
Failover Effects on Client Connections
Client disruption, although minimal, can still occur in the active-active controller
environment during the takeover and giveback processes.
When one node in an active-active configuration encounters an error and stops processing
data, its partner detects the failed (or failing) status and takes over all data processing from
that controller. If the partner is confirmed down, the surviving storage controller initiates
the failover process to assume control of all services from the failed storage controller. For
clients, this period is referred to as takeover time.
After the failed storage controller is repaired, you can return all services to the repaired
storage controller by issuing the cf giveback command on the surviving storage controller
serving all clients. This command triggers the giveback process, and the repaired storage
controller boots when the giveback operation is complete. This process is referred to as
giveback time for clients.
Therefore, the takeover and giveback period for clients equals the sum of the takeover time
plus the giveback time, as represented in the following equations:
• Takeover time = time to detect the controller error (mailbox disks not responding) and
initiate takeover + time required for the takeover to complete (synchronize the WAFL logs)
• Giveback time = time required to release the partner's disks + time to replay the WAFL log
+ time to start all services (NFS, NIS, CIFS, and so on) and process export rules
• Total time = takeover time + giveback time
NOTE: For clients or applications using stateless connection protocols, I/O requests are
suspended during the takeover and giveback periods, but resume when the takeover and
giveback processes are complete. For CIFS, sessions are lost, but the application could—and
generally does—attempt to re-establish the session.
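The equations above can be expressed directly; the timing values here are hypothetical sample numbers, not measured Data ONTAP times.

```python
# Direct translation of the takeover/giveback equations above.
# All times are in seconds and are invented sample values.
def takeover_time(detect, complete):
    # detect controller error + complete takeover (synchronize WAFL logs)
    return detect + complete

def giveback_time(release, replay, start_services):
    # release partner's disks + replay WAFL log + start services/export rules
    return release + replay + start_services

t = takeover_time(detect=15, complete=45)
g = giveback_time(release=10, replay=30, start_services=20)
total = t + g                       # total client-visible disruption
print(t, g, total)                  # 60 60 120
```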
The amount of time required for takeover and giveback is critical. With newer versions of
Data ONTAP, this time has been decreasing. In some instances, if the network is unstable or
the storage controller is configured incorrectly, total time can be very long.
Best Practices for Active-Active
Configurations
Test failover and giveback operations before
placing active-active controllers into production
Monitor:
– Performance of network
– Performance of disks and storage shelves
– CPU utilization of each controller to ensure it does not exceed 50%
Enable AutoSupport
Thoroughly test newly installed active-active controllers before moving them into
production.
General best practices require comprehensive testing of all mission-critical systems before
introducing them into a production environment. Active-active controller testing should
include not only takeover and giveback, or functional testing, but performance evaluation as
well. Extensive testing validates planning.
Monitor network connectivity and stability.
Unstable networks not only affect total takeover and giveback times, they adversely affect all
devices on the network in various ways. NetApp storage controllers are typically connected to
the network to serve data, so if the network is unstable, the first symptom is degradation of
storage-controller performance and availability. Client service requests are retransmitted
many times before reaching the storage controller, appearing to the client as slow responses
from the storage controller. In a worst-case scenario, an unstable network can cause
communication to time out, making the storage controller appear to be unavailable.
During takeover and giveback operations in the active-active controller environment, storage
controllers attempt to connect to numerous types of servers on the network, including
Windows domain controllers, DNS, NIS, LDAP, and application servers. If these systems are
unavailable or the network is unstable, the storage controller continues to retry establishing
communications, which delays takeover or giveback times.
Module Summary
MODULE SUMMARY
Exercise
Module 17: Active-Active Controller
Configuration
Estimated Time: 30 minutes
EXERCISE
Final Words
Module 18
Data ONTAP® 7.3 Fundamentals
FINAL WORDS
Module Objectives
MODULE OBJECTIVES
Unified Storage
[Figure: A NetApp FAS system provides unified storage: SAN (blocks) over FC and iSCSI, and NAS (files) over CIFS and NFS, on the corporate Ethernet LAN.]
UNIFIED STORAGE
Console Access
system> version
NetApp Release 7.3RC1: Wed Mar 5 02:17:31 PST 2008
system> sysconfig -v
NetApp Release 7.3RC1: Wed Mar 5 02:17:31 PST 2008
System ID: 0084166726 (NetApp1)
System Serial Number: 3003908 (NetApp1)
slot 0: System Board 599 MHz (TSANTSA D0)
Model Name: FAS250
Part Number: 110-00016
Revision: D0
Serial Number: 280646
Firmware release: CFE 1.2.0
Processors: 2
Processor revision: B2
Processor type: 1250
Memory Size: 510 MB
NVMEM Size: 64 MB of Main Memory Used
system> license
nfs site ABCDEFG
cifs site BCDEFGH
http site CDEFGHI
cluster not licensed
snapmirror not licensed
snaprestore not licensed
CONSOLE ACCESS
FilerView
FILERVIEW
AutoSupport
[Figure: The storage system sends AutoSupport notifications through an e-mail server.]
AUTOSUPPORT
Role-Based Access Control
Role-Based Access Control (RBAC) is a mechanism for managing a
set of actions (capabilities) that a user or administrator can perform
on a storage system.
A role is created.
Capabilities are granted to the role.
Groups are assigned to one or more roles.
Users are assigned to groups.
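The four-step RBAC chain above can be modeled minimally as follows; the role, group, and capability names are invented for illustration.

```python
# Minimal model of the RBAC chain: capabilities are granted to roles,
# groups are assigned to roles, and users are assigned to groups.
roles  = {"backup_admin": {"login-console", "cli-snapvault"}}  # role -> capabilities
groups = {"backup_ops": ["backup_admin"]}                      # group -> roles
users  = {"alice": "backup_ops"}                               # user -> group

def user_can(user, capability):
    """A user holds a capability if any role of the user's group grants it."""
    group = users.get(user)
    return any(capability in roles[r] for r in groups.get(group, []))

print(user_can("alice", "cli-snapvault"))   # True: granted via backup_admin
print(user_can("alice", "security-all"))    # False: no role grants it
```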
Disks and Data Protection
Volume 1
Aggregates
AGGREGATES
How They Work: Aggregates and FlexVol Volumes
[Figure: RAID groups (RG1, RG2, RG3) are combined into an aggregate, within which FlexVol volumes vol1, vol2, and vol3 are created and populated.]
– Creating a FlexVol volume uses only metadata space; no blocks are preallocated to a specific volume.
– WAFL allocates space from the aggregate as data is written.
How Snapshot Works
[Figure: A Snapshot copy of file X references disk blocks A, B, and C; after block C is modified, the active file system references the new block C' while the Snapshot copy still references C.]
NFS
[Figure: Storage system SS1 exports directories from vol0 (etc, home) and flexvol1 (data_files, eng_files, misc_files) to Client1 over an NFS network connection.]
CIFS
CIFS
SAN Protocols
[Figure: SAN protocols in the WAFL architecture: SCSI commands are encapsulated in FCP over FC network interfaces or in iSCSI over Ethernet interfaces, with both SAN protocols built on block services.]
SAN PROTOCOLS
Data ONTAP Simplified
[Figure: Clients connect over the network to the storage system, where writes are logged in NVRAM, cached in memory, and committed to physical disks.]
FlexShare
FLEXSHARE
Standard Active-Active Configuration
[Figure: A standard active-active configuration. Two controllers are joined by a cluster interconnect and attached to the network; each node's FC-AL A loops connect to its own disk shelves, and its B loops connect to its partner's shelves.]
Additional Data ONTAP Resources
Education
– Data ONTAP CIFS Administration
– Data ONTAP NFS Administration
– Data ONTAP SAN Administration Basics
– Data Protection and Retention
– Fundamentals of Performance Analysis
Web sites
– NOW
– NetApp (www.netapp.com)
Thank You!
Please fill out an evaluation.