IMPLEMENTATION AND
MANAGEMENT
Version 18.2
PARTICIPANT GUIDE
Dell Confidential and Proprietary
Copyright © 2018 Dell Inc. or its subsidiaries. All Rights Reserved. Dell, EMC and other
trademarks are trademarks of Dell Inc. or its subsidiaries. Other trademarks may be
trademarks of their respective owners.
Introduction
Course Objectives
Prerequisite Skills
Course Agenda
Introductions
Introduction
This lesson provides a brief review of these prerequisites along with cross-
references to the prerequisite course to help you obtain this knowledge.
Why NetWorker?
Overview
Dell EMC NetWorker works within the existing framework of hardware, operating
system software, and network communication protocols to provide a
comprehensive and consolidated data protection solution.
Overview
Besides backup and recovery, NetWorker provides a full range of data protection
functions including tracking and reporting, aging, cloning, and staging. The
NetWorker Fundamentals prerequisite eLearning introduces these functions and
looks at how NetWorker supports these functions.
A backup is a copy of production data, which is created and retained for the sole
purpose of recovering deleted or corrupted data.
Tracking is the process of storing information or metadata about backup save sets.
The Management Console server uses this information to generate reports.
Aging determines the length of time that backup data is available for recovery.
NetWorker enables you to specify how long individual copies of data are
maintained.
Cloning is the process of copying a save set from one NetWorker backup volume to
another. The clone can be managed independently with its own retention time.
Staging is the process of moving a save set from one volume to another.
Overview
To implement a backup and recovery strategy, understand the roles and functions
of the various components in a NetWorker datazone. A detailed description of each
component is discussed in the NetWorker Fundamentals eLearning course.
The NetWorker server is a physical or virtual machine that manages the datazone
and facilitates client backups and recoveries. The NetWorker server maintains
tracking and configuration information.
The Management Console Server provides a global view of the NetWorker backup
environment for centralized management of one or more NetWorker datazones.
NetWorker supports many types of devices that can be used to store backup data.
Device types include virtual and physical tape, disk, and cloud storage devices.
Finally, the most fundamental NetWorker component is the NetWorker client. Client
software generates backups, pushes data to a NetWorker storage node or directly
to a backup device, and retrieves data for a recovery. Client software is installed on
all NetWorker hosts.
Overview
To understand the backup process, you must understand the backup terminology
that is associated with the NetWorker product. Listed here are some common
NetWorker terms that were introduced in NetWorker Fundamentals.
An action is a single data protection operation, such as a backup, clone, or snapshot. Within an action, you
specify one or more backup levels and the pool used when the action runs.
As you progress through this course, you cover these terms in more detail and
build upon these definitions.
Introduction
This lesson covers the NetWorker processes associated with NetWorker client,
storage node, server, and NetWorker Management Console. The lesson concludes
with a high-level process and data flow of a typical NetWorker scheduled backup.
Overview
Client
Storage node
Server
The following pages provide summary information about the main NetWorker
daemons. For more detailed information, see the NetWorker Command Reference
Guide or the man pages.
Overview
The NetWorker client process, nsrexecd (network save and recover execution
daemon), runs on NetWorker clients to support remote execution requests from
NetWorker servers. For example, nsrexecd starts a backup command at the
request of the NetWorker server. The nsrexecd process also determines which
RPC ports to use to support and request NetWorker services.
Overview
The NetWorker storage node management daemon, nsrsnmd (network save and
recover storage node management daemon), provides an RPC-based service that
manages all device operations. It also manages the nsrmmd processes on the
storage node on behalf of the nsrd process on the NetWorker server. The
nsrsnmd daemon is responsible for ensuring that the device operations get
performed when nsrd requires. There is one nsrsnmd process running on each
configured storage node.
The NetWorker storage node daemon, nsrmmd (network save and recover media
multiplexing daemon), runs on NetWorker storage nodes to support reading and
writing of data to devices. The nsrmmd daemon writes the backup data to a volume
in the backup device it is controlling. It sends information to the NetWorker server
to track data that is written to the volume, and reads data from the volume during
operations such as recoveries and cloning. One nsrmmd is started for each device
that is configured as a NetWorker resource.
Overview
Nsrd (network save and recover daemon) is the master daemon. nsrd manages
the NetWorker resource database, which contains almost all NetWorker
configuration information. It also starts the nsrmmdbd and nsrindexd processes.
nsrd is started automatically at system startup. Once started, nsrd starts the other
server daemons and the nsrsnmd process on the storage node.
Nsrindexd (network save and recover index daemon) provides the read and write
service for the client file index databases.
Nsrjobd (network save and recover job daemon) is responsible for coordinating all
scheduled backups. It stores information about these operations and provides it to
the NetWorker server and the NMC server for reporting purposes.
Nsrmmgd (network save and recover media management daemon) manages all
library operations. It is started on the NetWorker server by nsrd when the
NetWorker services are started or when the first jukebox resource is configured
and enabled.
Overview
Gstd (general services toolkit daemon) is the master Console process and is
responsible for starting the httpd, postgres, and gstsnmptrapd processes. After
a Console client has established communication with the Console server, all further
communication is performed through gstd.
In a Linux environment, the processes are started automatically during system boot
up. On a Microsoft Windows host, the processes are started through the EMC GST
Service which is configured to start automatically during boot up. Httpd is registered
as the EMC GST Web Service.
Overview
The server nsrd process (1) starts a scheduled backup. nsrd asks nsrjobd to
send a remote execution request to the client nsrexecd process, requesting that it
start the NetWorker save command to perform the backup.
The save command (2) that is started on the client communicates with the server
nsrd process (through nsrjobd) to request backup support.
nsrd requests nsrsnmd (3) for backup support, nsrsnmd matches the backup to
a storage node nsrmmd process based on configuration information and save
request attributes.
Once the volume has been mounted on the backup device (4), nsrd directs the
client to push its data to the storage node.
The client (5) pushes the data to the storage node nsrmmd process and sends
tracking information to its client file index (CFI) through the server nsrindexd
process.
nsrmmd on the storage node (6) writes the data that is sent by the save command
to the volume and sends tracking information to the media database through the
server nsrmmdbd process.
NetWorker Resources
Overview
For example, in the slide above, the client resource for bongo has a save set
attribute configured to back up the /oracle directory. This client is a member of the
Payroll group, and the Payroll group is assigned to the File system backups
workflow which is configured to start at 9:00 P.M.
Most resources are stored on the NetWorker server and managed by the nsrd
daemon. A few resources are managed on the NetWorker client.
Introduction
This lesson covers the directory structure and content of the CFI, media, and jobs
databases.
Overview
The NetWorker server maintains tracking information for save sets in both the client
file indexes (CFIs) and in the media database. Volume information is maintained
only in the media database.
A client file index (CFI) stores information about each file that is backed up by a
NetWorker client. There is one CFI per physical NetWorker client. The stored
information includes file characteristics such as owner, size, permissions,
modification and access times, as well as timestamps. All files in a given save set
have the exact same backup timestamp. This information is used to support
browsable recoveries, which enable you to recover a client to a specific point in
time.
As a save set ages, its CFI records are automatically purged to save space. The
Browse policy attribute in the client resource determines the length of time that the
records are retained. CFIs may require large amounts of space on the NetWorker
server. Each record in a CFI uses approximately 160 bytes. The default path of a
CFI is /nsr/index/hostname_of_client/db6.
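The approximate 160 bytes per record makes it possible to roughly estimate CFI space requirements. A minimal sketch; the per-record size comes from the text above, while the file counts and helper function are illustrative, not exact NetWorker accounting:

```python
# Rough estimate of client file index (CFI) space on the NetWorker server.
# Assumes ~160 bytes per CFI record; actual usage varies.
BYTES_PER_RECORD = 160

def cfi_size_bytes(files_per_backup, backups_retained):
    """Estimate CFI size for one client while its records remain browsable."""
    return files_per_backup * backups_retained * BYTES_PER_RECORD

# Example: 1,000,000 files, 30 daily backups within the browse policy.
size = cfi_size_bytes(1_000_000, 30)
print(f"{size / 1024**3:.1f} GiB")  # → 4.5 GiB
```

Estimates like this help when planning disk space under /nsr/index on the NetWorker server.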
The media database contains information about all NetWorker volumes and the
save sets on those volumes. For each volume, there is a volume record. For each
save set on a volume, there is a save set record. This information is critical for
Overview
A CFI directory contains a header and journal file and a series of directories whose
names are hexadecimal timestamps. Each save set tracked in a CFI has a record
and a key file which are stored in a subdirectory that is determined by the
timestamp of the save set (nsavetime value). The record and key files are named
nsavetime.rec, nsavetime.k0, and nsavetime.k1.
The data in the CFI files is XDR encoded for NetWorker use. Only NetWorker
GUI/CLI interfaces should be used to view and manage the CFI data.
Overview
Each record in a CFI contains the path name of a backed up file or directory, and
the timestamp that is associated with the save set. The timestamp matches the
timestamp of a save set record in the media database, and is used in determining
which save set and volume is needed when recovering the file. File attribute and
backup information are also stored in the CFI.
nsrinfo displays the timestamp in two formats. The nsavetime format is the
number of seconds since January 1, 1970. This is the time format that is used
internally by NetWorker. The save time format is a more human-readable form of
the date and time.
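The relationship between the two formats can be shown with standard epoch arithmetic; the sample nsavetime value below is hypothetical:

```python
from datetime import datetime, timezone

# nsavetime is seconds since January 1, 1970 (the Unix epoch),
# the format NetWorker uses internally.
nsavetime = 1215022390  # hypothetical save time

# The human-readable "save time" form of the same instant:
save_time = datetime.fromtimestamp(nsavetime, tz=timezone.utc)
print(save_time.strftime("%a %b %d %H:%M:%S %Y"))

# The CFI subdirectory holding this save set's .rec/.k0/.k1 files
# is named with the hexadecimal form of nsavetime:
print(f"{nsavetime:x}")  # → 486bc536
```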
Overview
The media database directory structure includes a header file and files to store
client records, save set records and volume records. Each client record, save set
record, and volume record file has supporting index files.
To maintain its integrity, only use NetWorker GUI or CLI interfaces to view and
manage the data that is contained in the media database.
Overview
The media database contains a record for each NetWorker volume and for each
save set written to a volume.
Overview
The jobs database in NetWorker is responsible for managing and monitoring all
jobs within the environment. These jobs include server activities such as cloning,
staging, and recovery operations and client activities, like save or save groups.
When the jobs are started, the jobs database collects all the runtime information
and completion information.
The jobs database uses an embedded SQLite database, a full
database engine that can handle high loads without performance concerns. The
database itself is stored in a single file on the NetWorker server and is managed
through time-based purging. The default expiration period is 72 hours. The
database should not exceed 1 GB in size. The jobs database is re-created empty
during NetWorker server disaster recovery procedures.
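Time-based purging of this kind can be illustrated with a generic SQLite sketch. The schema and job names here are illustrative, not NetWorker's actual jobs database layout; only the 72-hour window comes from the text:

```python
import sqlite3
import time

RETENTION_SECONDS = 72 * 3600  # default jobs database expiration: 72 hours

# In-memory stand-in for the single-file jobs database.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE jobs (id INTEGER PRIMARY KEY, name TEXT, end_time INTEGER)")

now = int(time.time())
db.execute("INSERT INTO jobs (name, end_time) VALUES (?, ?)",
           ("old-backup", now - 4 * 24 * 3600))   # finished 4 days ago
db.execute("INSERT INTO jobs (name, end_time) VALUES (?, ?)",
           ("recent-backup", now - 3600))          # finished an hour ago

# Purge records older than the retention window.
db.execute("DELETE FROM jobs WHERE end_time < ?", (now - RETENTION_SECONDS,))
remaining = [row[0] for row in db.execute("SELECT name FROM jobs")]
print(remaining)  # → ['recent-backup']
```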
Summary
Introduction
Introduction
Overview
The next step is to identify the host roles that are needed in your environment.
This includes NetWorker server, Management Console server, storage nodes, and
any proxy nodes that may be used.
Once these roles are identified, validate sizing for each of these components and
any additional datazone requirements, like the use of multi-tenancy.
Overview
Finally, the Licensing Guide describes the available licensing options, detailing the
differences between served and unserved licenses as well as the supported
licensing configurations.
NetWorker product information and documentation can be found on the Dell EMC
Support website, https://www.dell.com/support.
Overview
One of the first considerations to make is the location of the key NetWorker
services. In particular, look at the NetWorker server, NetWorker Management
Console (NMC), and the Dell EMC Licensing Solution License server. These
components can be co-located on the same host, or distributed. Because NMC can
add a significant load to the backup environment, Dell EMC recommends installing
NMC on a separate computer in larger environments. Decide the location of these
services before sizing the hardware that hosts them.
Also, you should consider the way that backup data is sent to the target devices. If
storage nodes are used, you should determine how many and where they will best
be located. If using client direct, it is important to ensure that backup clients have
direct access to the devices and you have identified all necessary data paths. More
often than not, you have a combination of methods, using client direct for some
clients and storage nodes for others.
Overview
Besides the primary data center, there is usually a disaster recovery site which
hosts a remote NetWorker storage node along with remote storage devices. For
Data Domain, replication is configured to replicate data between local and remote
data centers. Also, a tape library may be configured at the remote site for cloning
data to tape for long-term retention.
Overview
The unique environment and Service Level Agreements (SLA) of the organization
are going to dictate the design of the NetWorker environment. As another example
of what a NetWorker environment might look like, this configuration uses cloud
storage for long-term data retention. In this configuration, data is backed up to one
or more Data Domain systems at the primary site. Then, it is cloned to a
CloudBoost appliance and sent to a cloud storage provider for long-term retention.
This configuration could also include a DR site that uses Data Domain replication,
or clone-controlled replication for transferring data between sites.
Sizing Considerations
Overview
Platform Compatibility
The best platform for your environment is generally the one that you have the most
administrative experience with. It could be Windows or Linux depending on your
environment. Another consideration is the use of a physical NetWorker server or
the NetWorker Virtual Edition (NVE). See the Dell EMC NetWorker Software
Compatibility Guide for supported operating system and platforms.
Network Connectivity
Overview
The NetWorker multi-tenancy facility enables the creation of multiple restricted
datazones. End users can access a single NetWorker server without being able to
view data, backups, recoveries, or modify objects in other datazones. Also, tenant
administrators within a restricted datazone can only see a limited amount of the
information that is managed by the global administrator or other restricted
datazones from the console or CLI.
Overview
NetWorker Licensing
Introduction
This lesson covers the Dell EMC Licensing Solution model and some
considerations when upgrading from previous licensing models.
Overview
NetWorker provides two options to license the product and its features; served and
unserved licenses. When designing a solution, choose the licensing solution to
meet the needs of the protection environment. Conversion between the two is
possible by performing a rehost action at the URL https://licensing.emc.com.
An unserved license does not require a license server and has no firewall port
requirements. The license file is restricted to the NetWorker server, and each
NetWorker server requires its own unique license.
Overview
NetWorker uses the Dell EMC Licensing Solution model which uses the Common
Licensing Platform (CLP) for licensing. The Dell EMC Licensing Solution is based
on capacity and is the only licensing model available for new NetWorker
installations.
With the served solution, one or more license servers must be installed in the
NetWorker environment. The license server is responsible for managing the
NetWorker license and capacity allocation across multiple datazones.
The license server reads a license file that is stored on the server to determine the
type of licenses and the amount of capacity purchased. The capacity is the total
number of terabytes purchased. The amount used by each NetWorker server can be adjusted if
the sum of all the NetWorker servers for a license server does not exceed the
licensed total capacity. Customers are allowed to split the capacity among the
NetWorker servers as they see fit without having to contact EMC Licensing.
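The allocation rule — the sum across NetWorker servers must not exceed the licensed total — can be sketched as a simple check. The server names and capacity figures here are hypothetical:

```python
# Hypothetical capacity allocation (in TB) across NetWorker servers
# that share one CLP license server.
LICENSED_CAPACITY_TB = 100

allocations = {
    "nwserver-a": 40,
    "nwserver-b": 35,
    "nwserver-c": 20,
}

def allocation_valid(allocations, licensed_total):
    """Capacity may be split freely, if its sum stays within the license."""
    return sum(allocations.values()) <= licensed_total

print(allocation_valid(allocations, LICENSED_CAPACITY_TB))  # → True
```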
With the Dell EMC Licensing Solution, license files are node-locked to the CLP
license server. The entitlements are tied to a customer ID and not to a specific
NetWorker server. This makes for more flexibility in license management.
The Dell EMC Licensing Solution supports scaling of the NetWorker environment.
There may be multiple license servers each servicing NetWorker servers. In this
case, the license file for each license server is unique. Each license server is
independent of any other license servers in an environment. For example, in a site
with 18 NetWorker servers, one license server may manage 10 NetWorker servers
and a second license server manages the remaining 8.
Overview
The license file is a plain text file that contains critical information about the location
of the license server. It also contains information about the type of licenses or
entitlements, and the amount of capacity purchased. NetWorker licenses are stored
in one “master” license file which resides on the license server host. The license
server uses its copy to respond to queries from NetWorker servers for a license.
Also, a copy of the license file resides on each NetWorker server and is used by
the CLP API to enable contact with the license server.
A license file contains the following:
- A comment header, which must contain the license type SERVED to work with
the CLP server.
- The hostname and IP address of the CLP license server. The CLP server listens
on the port specified. The license is invalidated if the host ID of the host does not
match the host ID on this line.
- The vendor EMCLM, which is the process managing the licenses. A port value
may also be specified here.
- USE_SERVER, which tells a client (a NetWorker server) to contact the specified
license server for the license. Each NetWorker server must be able to resolve the
hostname.
- An INCREMENT section listing the entitlements that are provided by the license.
Note: License files must not be edited. Editing can affect the digital
signature and break the license.
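Putting those elements together, a served license file follows the general FlexLM layout sketched below. The hostname, host ID, ports, and elided entitlement details are placeholders, not values from a real license:

```
# ... comment header; must contain the license type SERVED ...
SERVER clp-licsrv.example.com 0123456789ab 27000
VENDOR EMCLM PORT=27001
USE_SERVER
INCREMENT ... EMCLM ... (entitlement details supplied by Dell EMC Licensing)
```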
Overview
When the nsrd process is started, the NetWorker server looks for any license
resources in the NetWorker resource (RAP) database. If no license resources are
found, the traditional 120-day evaluation mode (90 days plus a 30-day grace period)
begins. Next, NetWorker contacts the CLP license server and requests one unit of
capacity. If the capacity entitlement is missing, another request is scheduled for an
hour later, and requests repeat until one is fulfilled. When the request is honored, a
RAP license resource is created in the RAP database, licensing the NetWorker
server. If, after 120 days, there is still no license file, the evaluation period ends and
the NetWorker server reverts to restore-only mode.
If a NetWorker server is restarted and the Dell EMC Licensing Solution is in effect,
the RAP license resources are queried and all licenses are checked out again. If
the EMC License server cannot be reached, the existing RAP resources are kept
and periodic attempts to check out licenses are made.
Overview
NetWorker 18.2 allows you to use a served or unserved license. A served license
requires installation and configuration of the Dell EMC License Server, also known
as the CLP License Server. An unserved license does not require you to install the
License Server or open any associated firewall ports to enable communication with
the License Server. Customers on current support can migrate from the product-
based model to the capacity-based model, which gives them access to all
NetWorker products at no extra cost. The customer should work with their Dell EMC
sales account team to start the process. To upgrade from NetWorker 9.x to
NetWorker 18.2 with either a served or unserved license, no action is required.
Though not mandatory, Dell EMC recommends converting to the Dell EMC
Licensing Solution model for the flexibility and ease of use it affords. The evaluation
period provides 90 days, plus a 30-day grace period, to determine whether you
want to continue using a legacy model or move to the Dell EMC Licensing Solution.
If a user of the legacy capacity model wants to migrate to the Dell EMC License
Solution after upgrading to NetWorker 9 and above, unused capacity can be
carried over and applied to the amount of storage purchased for the new model.
Overview
Requests for licenses are made to the CLP license server by the NetWorker
process, nsrlmc. nsrd schedules nsrlmc for several reasons, including updating
information about the license server, obtaining an updated license, or requesting
a capacity license.
The CLP license server keeps count of how many units of capacity are checked out
from a license file. By default, one unit of measure is checked out for each capacity
request that is satisfied. nsrlmc installs the entitlements on the NetWorker server
through an exchange with the license server. The backup administrator does not
manually install entitlements on the NetWorker server.
When a NetWorker server stops, the license server checks the checked out units
back in. The CLP API provides a function for nsrlmc to maintain this heartbeat.
Overview
Install the License server on a supported platform, either Windows or Linux, that is
accessible to the datazones it services. Dell EMC recommends keeping all license
server files and binaries on locally mounted disks to ensure that licenses are
available while the server is running. Copy the license file to the licenses directory
on the license server and to the nsr/lic directory on each NetWorker server that
accesses this license server.
Then, run the LMTools utility in Windows or lmgrd in Linux to configure and start
the license server service.
To validate that the license server service is running on Windows, look for the
service name in Windows Task Manager. The default service name is “Flexlm
Service 1,” although the service name can be defined during initial configuration. In
Linux, check that the lmgrd service is running.
Overview
The properties of the NetWorker server are updated with information from the
locally residing license file and by querying the CLP license server. The CLP
License server and CLP License server port attribute values are obtained from the
license file on the NetWorker server host. Solution ID and CLP SWID are read from
a license that is checked out from the CLP license server.
The CLP refresh field enables the administrator to force NetWorker to requery the
License server and license file. The contents of the license file are displayed in the
CLP license text field.
Introduction
Overview
The NetWorker Windows installation packages for NetWorker server and client
software include the packages that are listed here:
Smaller, faster installers are available for the NetWorker client and NetWorker
extended client. Use these installers when installing only the client software. The
file lgtoclnt.X.x.x.x.exe is the recommended installer for the NetWorker base client,
and the preferred installer when installing NMM and any add-ins that require the
NetWorker client first.
Refer to the NetWorker Installation Guide for installation requirements and detailed
procedures.
Overview
This diagram shows the major software packages that are required for the
NetWorker server, storage node, and client installation types and the order that the
packages are installed. The base client package, lgtoclnt, must be installed first.
The extended client software package, lgtoxtdclnt, and the block-based backup
software, lgtobbb, are also required to be installed on the client.
When installing a NetWorker storage node, install the NetWorker client software
first, including the extended client, and the storage node rpm, lgtonode. The
NetWorker Authentication Service is a separate package, lgtoauthc, that must be
installed before installing the NetWorker server or NMC software.
When installing a NetWorker server, install the NetWorker client and storage node
software first. Then, install the NetWorker server software package, lgtoserv, and
the adapter package, lgtoadpt.
See the NetWorker Installation Guide for installation requirements and detailed
procedures.
Overview
The NetWorker server is supported on Windows x64 and Linux x64 platforms only.
The NetWorker server is not supported on Solaris, AIX, Linux x86, and HP-UX
platforms. However, NetWorker storage nodes and clients are supported on these
platforms. NetWorker does not support Linux IA-64.
Overview
Log into the target computer with administrator privileges. After starting the
installation, accept the license agreement on the Welcome to the Setup Wizard
screen. In the Installation Type and Location window, select the software that you
want to install on the host. Note the default location for the software installation
files.
The next several slides cover information that is supplied during the installation
process.
Installing AuthC (1 of 2)
Overview
During the NetWorker installation, the wizard prompts for information for
configuring the NetWorker Authentication Service. On this screen, enter the
authentication server hostname and port.
Installing AuthC (2 of 2)
Overview
Other configuration options for AuthC include specifying a password for the
keystore file and a password for the authentication service administrator account.
After installation, when you log on as the administrator, use the password that was
specified for the authentication service administrator account.
Overview
During the installation for NMC, you are prompted for the NMC installation and
database folders, the name of the authentication service host, and NMC client
service and web server ports. By default, the user name for the PostgreSQL
database on the NMC server is postgres. This account is used to start the
embedded PostgreSQL database. If this account does not exist at the time of
installation, it is automatically created.
Overview
To launch the NetWorker Management Console, enter the URL in a supported web
browser. The URL is: http://console_server:http_service_port, where
console_server is the hostname of the console server and http_service_port is the
port number for the embedded web server that was specified during the Console
server installation. The default HTTP port is 9000. Alternatively, on Windows, the
NMC can be started from the shortcut on the desktop or from the Windows Start
menu.
The NetWorker Management Console Login screen is displayed to the user. A user
cannot run NMC unless a valid user name and password combination is provided.
For User Name, use administrator and for Password, use the password that was
specified for the NMC authentication during the installation.
Overview
When starting NMC for the first time, you are prompted for NMC configuration
information, including the name of the Authentication server, the name of the
NetWorker server that will back up the NMC database, and a list of NetWorker
servers that this NMC will manage.
Overview
NetWorker uses WiX bootstrapper technology for installation. You can install
NetWorker software using a silent install from the command line. Here are some
examples of installing and uninstalling using the NetWorker-X.x.x.x.exe. The name
of the file may be different depending on the version of NetWorker used.
When installing the NetWorker server, ensure that the NetWorker authentication
service is started before starting the NetWorker server services.
Overview
Install the latest version of the 64-bit Java 8 software on the NetWorker server host
before installing the NetWorker server or NetWorker Authentication Service
software.
After installing the NetWorker server, install the Dell EMC License server to use the
Dell EMC Licensing Solution model.
At the beginning of the NetWorker Windows base client installation, you can
choose to run the System Configuration Checker, which checks for any OS-related
configuration issues. Any reported warnings can be addressed, and the
Configuration Checker can be rerun post-installation to verify that the warnings are
cleared.
Introduction
Overview
On Windows, there are always two httpd processes running when the NMC server
is active. On Linux, there are two or more httpd processes running, where the
parent httpd process runs as root and the child processes run as the user name
specified during the installation.
Overview
System processes are started through run-control scripts that are executed at
system startup. When a NetWorker host is installed, a run-control script named
networker is installed in the appropriate system directory, usually a subdirectory of
/etc.
The networker script can be started manually, using a start argument, to start the
NetWorker daemons. When the stop argument is used, all NetWorker daemons,
and any other running NetWorker processes, are stopped.
NetWorker server daemons can be started manually by starting nsrexecd and then
nsrd. For a NetWorker client or storage node, only nsrexecd should be started.
Overview
On a Windows host, use Programs and Features from the Control Panel to
uninstall the NetWorker and NetWorker Management Console software. Or use the
installation binaries and select uninstall when prompted for the operation you want
to perform.
On a Linux host, use the operating system software removal utility to remove the
software.
In either case, the default behavior during removal is to perform a partial uninstall.
This leaves the NetWorker control data installed. To perform a complete uninstall on a Linux host, the directory containing the NetWorker control data, /nsr, must be manually removed using a utility such as rm. To perform a complete uninstall on a Windows host, manually remove the C:\Program Files\EMC NetWorker folder or whatever folder contains the NetWorker software.
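A hedged Linux example (the package name lgtoclnt is illustrative; actual package names depend on the NetWorker version and installed components):

```shell
# Remove the NetWorker package with the OS software removal utility
rpm -e lgtoclnt

# For a complete uninstall, also remove the control data directory
rm -rf /nsr
```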
Overview
This lab covers installing NetWorker server and NetWorker Management Console
server software on a Windows host in the lab environment. This host is your
NetWorker server during the remainder of the class. You perform the initial
configuration steps for NetWorker Management Console. You install NetWorker
client on the second Windows host and NetWorker storage node on the Linux host.
Finally, you install and configure the License server.
Summary
Introduction
Overview
Pools automatically separate data by data type. NetWorker server uses pools to
direct a save set being backed up or cloned to a set of volumes.
As illustrated in the slide, there are two types of pools, Backup and Backup Clone,
that NetWorker uses to separate one type of data from another. For example, a
save set being backed up can only be written to a volume belonging to a Backup
pool. When a save set is cloned, the new clone copy of the save set can only be
written to a volume in a Backup Clone pool.
Overview
A common use of media pools is to separate data into different pools that are
based on backup level or type. Pools can be used to maximize recovery speed by
consolidating all data for a specific client onto the same volume. Another use is to
target specific data to specific devices. An example of this is to write all data for the
accounting department to a pool for a Data Domain device that only contains data
from this department.
Overview
This table summarizes how NetWorker determines which pool receives the backup data, based on the configuration of action, client, and pool resource attributes. Use the Pool attribute in the action resource to specify the pool for the
particular backup action. However, you can elect to use a pool specified in the
client resource by changing the setting of the Client Override Behavior option in
the backup action.
If the Client Override Behavior option is set to Client Can Not Override, then
NetWorker uses the value for the Pool attribute in the backup action.
If the Client Override Behavior attribute is set to Client Can Override, then
the value for the Pool attribute in the client resource is used. If the Pool value in the
client resource is empty, then the value that is defined in the backup action is used.
This setting is the default for new action specifications.
Overview
NetWorker creates a unique label for each volume by applying the label template
that is associated with a pool. Thus, a volume is associated with a media pool by its
label. Typically, the label name is consistent with the name of the pool. Ideally,
each pool should have its own unique label template. However, more than one pool
can use the same label template. If a volume being labeled resides in an
autochanger, or library, that is configured to match barcode labels, the label
template is ignored. The volume name will be the same as its barcode value.
NetWorker has several pre-created label templates that can be used, or you can
create label templates from the Media window as shown here. The lower left
illustration shows the configured label template that is named Astro. The labels
assigned to volumes start with Astro.001, Astro.002, and so on, up to Astro.999
and are based on the values that are specified in the Fields and Separator
attributes.
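Purely as an illustration of the naming pattern (NetWorker generates these names itself from the Fields and Separator attributes), a short shell loop reproduces the first labels of the Astro sequence:

```shell
# Illustrative only: print the first labels of the Astro sequence
# (Astro.001 through Astro.999), as the label template would
prefix="Astro"
i=1
while [ "$i" -le 3 ]; do
    printf '%s.%03d\n' "$prefix" "$i"
    i=$((i + 1))
done
```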
Overview
The NetWorker pool resource is used to configure a new media pool from the
Media window of NetWorker Administration. A Backup pool named Astro is created
that will use the Astro label template.
There is a shortcut way of creating a template. When creating a pool, if you do not
select a label template, NetWorker displays an error message as shown on the
slide. If you click OK and then OK again on the Create Media Pool window,
NetWorker automatically creates a label template using the pool name as the label
template name.
Overview
Use the Configuration tab of the pool resource to specify these fields:
Overview
With the use of virtual tape libraries, recycling of volumes is critical to reclaim disk
space.
Recycle start defines the time to start the automatic relabel process each day. By
default, the automatic relabel process is not done.
Recycle interval defines the interval between two starts of automatic relabel
processes.
Max volumes to recycle defines the maximum number of recyclable volumes that
can be relabeled during each automatic relabel process.
Labeling Volumes
Overview
A volume must be labeled before NetWorker can write to it. During volume labeling,
the NetWorker software writes a unique label on the volume. Label devices by
right-clicking the device from the Devices window of NetWorker Administration.
The label contains information such as the volume name, the name of the pool to
which the volume was assigned, and the block size to use when writing to the
volume.
During a backup, the NetWorker server matches a save set to the appropriate
nsrmmd based on the pool to which the volume belongs. Three events take place
when a volume is labeled.
The device, AFTD1, is labeled into the Astro pool that uses the volume label,
Astro.001.
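Volumes can also be labeled and mounted from the command line with nsrmm; a sketch using the pool and device names from the example (verify flag spelling with nsrmm usage on your release):

```shell
# Label the volume in device AFTD1 into the Astro pool
nsrmm -l -b Astro -f AFTD1

# Mount the labeled volume so it is ready for writing
nsrmm -m -f AFTD1
```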
Overview
In this lab, you configure a label template resource for a pool and then configure a pool resource. Then, you configure a NetWorker AFTD device and label this device into the new pool.
Summary
Introduction
This module focuses on the various ways of performing backups with NetWorker:
look at the workflows and actions that are used for traditional, scheduled backups
and how to perform manual backups with user interfaces and commands. This
module also covers performing backups with NetWorker Snapshot Management,
how to back up virtual clients and the use of NetWorker modules for application
and database backups.
Introduction
This lesson covers data protection policies and the resources that are used for
running traditional file system backups.
Overview
NetWorker enables you to perform two types of backups: scheduled and manual.
NetWorker provides user interfaces for configuring and running both types of
backups as shown here. Commands are also available for configuring and running
backups from the command line.
Scheduled backups are the preferred option for performing on-going, day-to-day
backups and ad hoc or on-demand backups. By using scheduled backups, you
ensure that data is protected regularly according to specifications that you define in
NetWorker data protection policies. It is recommended to reserve client-initiated
backups for specific use cases only as needed.
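As a sketch of such a client-initiated backup with the save command (the server name nwserver.emc.edu is hypothetical):

```shell
# Manually back up /etc to the NetWorker server, directing the
# data to a specific pool
save -s nwserver.emc.edu -b "AFTD Devices" /etc
```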
Overview
Protecting data throughout its life cycle consists of backing up specific data to
primary backup media, cloning the backup data to secondary backup media, and
managing the data through the length of time it is required to be kept for recovery.
With NetWorker, clients are protected automatically throughout the data protection
life cycle by using policies.
Policies enable you to define the resources and settings to implement your
business policies for the data that you want to protect. Policies enable you to
design a data protection solution at the data level instead of at the host level. With
policies, for example, you have the ability to develop a service-catalog approach to
the backup configuration of a datazone.
A workflow defines the actions or tasks that you want to perform, when to automatically start the workflow, and how often to run it. A policy can contain multiple workflows, and each workflow is independent of any other workflow in the policy.
A protection group defines the data sources to protect by the workflow, such as a
set of client resources or save sets. There is a single group per workflow.
As you can see here, policies allow for the creation of complex workflows by
chaining multiple actions in a workflow. In this way, you can specify what happens
to a group of client resources throughout the data protection life cycle.
Overview
Recovery point objectives drive how often to back up the data and how long
backup data is retained. For example, if data loss of no more than 4 hours is
acceptable, then the backup administrator should ensure that backups start
every 4 hours and most importantly, that the previous backup completes in less
than 4 hours.
Recovery time objectives drive the type of backup storage that is used as well as
the backup level. Availability of time to perform backups determines when and how
backups take place (offline/online), type of media that is used, and other factors like
backup level used.
The reasons for performing backup also influence how long backups are retained
and the type of backup storage used. Backups for operational restore purposes are
kept for shorter periods of time. For example, users may request to restore data of
an application from its onsite, operational backup copy for up to one month.
However, the organization may need to retain the backup that is taken at the
beginning of each month for a longer period because of internal policies or
regulatory requirements. These backups may be retained on different storage
media located offsite. Also, backups that are needed for disaster recovery
purposes will be stored in offsite media. Ensure that retention is sufficient to retain
the backup data through to the next backup.
Overview
NetWorker data protection policies can be grouped into six main categories or
strategies.
Traditional backups include file system and application backups, and NDMP
backups. The major focus of this course is traditional, file system backups,
although we touch upon these other strategy types.
NetWorker and NMC server database backups and maintenance activities include
NetWorker server bootstrap and NMC database backups.
VMware backups include protection of VMware virtual machines using the image-
based NetWorker VMware Protection Solution with vProxy.
Overview
The preconfigured policies are the Bronze, Gold, Platinum, and Silver policies,
highlighted here, along with the groups used in the workflows. NetWorker also
includes a preconfigured policy for backing up the NetWorker and NetWorker
Management Console servers.
The workflows within the preconfigured policies contain backup actions and
specifications that would typically be performed for that particular level of service.
The Bronze policy, with two workflows, one for backup of application data and one
for backup of file system data, is an example of a policy for a nonvirtualized
environment. With workflows containing backup and clone actions, the Gold policy is an example of a policy for a virtualized environment requiring backup redundancy. The Silver policy provides similar protection for nonvirtualized environments. Lastly, the Platinum policy, containing a snapshot backup action followed by a clone action, is an example of a policy for an environment containing Dell EMC storage arrays/appliances requiring redundancy.
You can choose to use these preconfigured resources, modify them for your
specific environment and also create policy resources specific to your data
protection requirements. Let us look at how to create NetWorker data protection
policies.
Overview
Here are the high-level steps that are involved in configuring a new data protection policy. First, create the policy resource.
Next, within the policy, create a workflow for each data type. A workflow and its
associated group and actions determine the protection that is afforded to the data.
Group data that is protected by similar requirements in the same workflow, such as
by datatype (file system/database), retention, backup levels, backup media, backup
frequency, and time of backup. In other words, all data to be backed up by this
workflow will be protected in the same way.
Then, create a protection group for each workflow. Members of a group are client
resources or save sets depending upon the type of group.
The next step is to create one or more actions for each workflow. Actions define the
protection activities for the group.
Lastly, create client resources that specify the data you want to protect and assign
the client resources to the applicable protection group.
The easiest and most common way to create policy, protection group, and client
resources is to use the wizards and windows that are found in the NetWorker
Administration Protection window. Let us explore these resources and the options
they offer.
Policies
Overview
Use policies to organize the data protection resources to support the operations
that you want to perform in your backup environment. You may choose to use the
preconfigured policies or create new policies. For example, you can use the
preconfigured policies to organize backup operations by criticality, Bronze, Gold,
Platinum, and Silver. Another example is to create policies according to the types
of backups performed, such as file system, database, and snapshot. The choice is
up to the backup administrator.
To edit existing policies or create new ones, use the Protection window. Here we
have created a new policy named Filesystem Backups.
Workflows
Overview
From the Protection window, create a workflow within the policy. Specify the
workflow name, the time to start the workflow, and notification settings for the
workflow. Specify the protection group if known. Make sure the Enabled and
AutoStart options are selected to ensure that the workflow runs at the selected
time and intervals. The Interval attribute determines how frequently the workflow
runs, and the default is every 24 hours or once each day. The Restart Window
attribute specifies the length of time that NetWorker can manually or automatically
restart a failed or canceled workflow.
A policy can have one or more workflows; however, each workflow can belong to only one policy.
Protection Groups
Overview
Each group can be assigned to only one workflow. The same client resource can
be added to more than one group.
Overview
The type of group that you create depends on the types of clients and data to be
protected. The actions that are displayed for a group depend on the group type. For
file system or traditional backups, there are two types of groups that can be
defined.
A Basic client group defines a static list of client resources to back up. You add
client resources with like backup requirements to the same protection group.
A Dynamic client group determines the clients to be protected at run time based
on the value of a tag. When the group is created, you specify a tag that is used to
choose the clients. Then, when configuring clients, you assign that tag to all clients
that you want to be members of the group. At run time, NetWorker automatically
generates a list of client resources with a tag that matches the client tag that is
specified for the group. The benefit of this type of group is that an administrator
does not need to remember to add specific clients to a group, and clients are
automatically added to the group based on the tag you assign when creating the
client resource. In the example on the right, we have created a dynamic clients
group with a tag of Backup at 7. At run time, this client resource is automatically
added to the group.
Overview
Actions define the data protection tasks that take place when the workflow is
started. There are four types of supported actions for traditional, file system backup
workflows. These are Backup Traditional, Probe, Check Connectivity and Clone.
A Probe action runs a user-defined script on a client host that passes a return
code. If the return code is 0, the next action, such as a backup, is performed. If the return code is 1, then the next action in the workflow is not performed.
The next several slides in this lesson describe some of the most common options
for each of the action types.
Overview
For a traditional backup action, you specify the level of backup to occur on each
day of the selected period, either Weekly by day or Monthly by day. Supported
backup levels are full, incremental, cumulative incremental, logs only, synthetic full
and skip.
Overview
NetWorker supports full level backups that back up all data in a save set, or one of
several levels that back up only data that has changed since a previous backup.
The levels that are used are similar to the UNIX ufsdump or dump command.
The backup levels that are supported by NetWorker are listed on the slide.
A full backup backs up all files and directories in a save set and is the lowest
backup level, being equivalent to a UNIX level 0 backup. A full backup requires the
most storage space and takes the longest time to perform.
An incremental backup contains all files that have changed since the last backup
of any type while a cumulative incremental backup contains files that have
changed since the last full. Using incremental and cumulative incremental backup
levels generally takes less time than performing full backups and uses less volume
space. However, using these backup levels may slow file recovery if multiple save
sets are required to recover to a particular point in time.
Overview
Only the NetWorker server and storage nodes are involved in synthetic full backup
processing. By lessening the number of traditional full backups, the backup workload of the clients is reduced, as is the network overhead involved in transferring the backup data from the clients to the storage node. Synthetic
backups also reduce recovery time and steps as data can be restored from the
synthetic full backup instead of a traditional full backup and all its dependent
incremental backups.
In the example shown on the slide, the synthetic full backup that is taken on
Wednesday combines the full backup run on Monday with the incremental backups
that are run on Tuesday and Wednesday. The resulting synthetic full backup is
equivalent to a traditional full backup run simultaneously as the Wednesday
incremental backup and reflects the state of the data as of the incremental backup
of Wednesday. The incremental backup run on Thursday includes all changes
since the incremental on Wednesday. The next synthetic full backup (not shown on
the slide) will combine the previous synthetic full backup and subsequent
incremental backups.
Overview
For Backup Options, choose the storage node and media pool with the devices on
which to store the backup data. Set Retention for the amount of time that the
backup data will be retained. After this period expires, the metadata about the save sets is removed from the client file index, and the save sets are marked as recyclable in the media database.
When Client Override Behavior is set to Client Can Override, values for Schedule,
Pool, Storage Nodes, and Retention policy in the client resource are used instead
of the values for comparable attributes in the backup action. The default for this
attribute is to enable the client to override the action.
The DD Retention Lock feature securely locks the data on a Data Domain system. The save sets cannot be deleted, modified, or overwritten during the retention period. The Data Domain target device must also have the DD Retention Lock feature enabled.
Overview
Some commonly used options in the Specify the Advanced Options window include:
Retries: The number of times NetWorker should retry failed probe and backup
actions.
Retry delay: Amount of time in seconds that NetWorker waits before retrying a
failed action.
Probe Action
Overview
A probe action runs a user-defined script on clients that are members of the group that is assigned to the workflow containing the probe action. Based on the result of the probe, the subsequent backup action in the workflow is either run or not run.
For a probe action, you define the days of the week that the action runs. If the Start
backup only after all probes succeed attribute is checked, the following backup
action runs only if all probes in client resources in the assigned group succeed.
Succeed is defined as a return code of 0. If the field is not checked, the backup
action starts if any one of the probes that are associated with a client resource in
the assigned group succeeds.
Probe Resource
Overview
A probe is a user-defined script or program that passes a return code. The name
of the probe script must begin with nsr or save. The probe script must reside in the
directory that contains the NetWorker client binaries on each client referencing the
probe, such as C:\Program Files\EMC NetWorker\nsr\bin for Windows clients and
/usr/sbin on UNIX machines.
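A minimal probe sketch follows. The condition it checks is hypothetical; only the naming rule (nsr or save prefix), the script location, and the 0/1 exit-code contract come from the text:

```shell
#!/bin/sh
# Sketch of a probe script; save it with a name beginning with
# "nsr" or "save" (for example, nsrlogprobe) in the NetWorker
# client bin directory. Exit code 0 tells NetWorker to run the
# following backup action; exit code 1 tells it to skip.

probe_needed() {
    # Hypothetical condition: back up only when a marker file exists
    # (a real probe might look for new database logs instead)
    [ -e "$1" ]
}

if probe_needed "${PROBE_MARKER:-/tmp/needs_backup}"; then
    echo "probe: backup needed"
    rc=0
else
    echo "probe: nothing to back up"
    rc=1
fi
# A real probe would finish with: exit $rc
true
```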
A NetWorker probe resource is created for each probe script. The probe resource
specifies the probe script name and command options, if any. The probe resource
is then associated with one or more client resources. The client resources are
associated with a group, and the group is associated with the workflow containing
the probe action.
Overview
A check connectivity action tests connectivity between the NetWorker server and
clients that are members of the group that is assigned to the workflow. Based on
the result of the test, the subsequent action in the workflow, which can be either a
probe action or a backup action, is either run or not run. For the check connectivity
action, you define the days of the week that the action runs. If the Succeed only
after all clients succeed attribute is checked, the following action runs only if all
clients succeed. If the checkbox is cleared, the following action runs if connectivity
is achieved for one or more clients.
Overview
Create client resources for backup clients to specify the data sets to be backed up,
along with other configuration options. You may decide to have multiple client
resources for a single host machine; for example, you may want to back up
different save sets for the same client host at different times.
NetWorker provides the New Client Wizard to walk users through the steps to
quickly create a client. The New Client Wizard is accessed from the Protection
window by right-clicking Clients.
The wizard asks for the client name and supplies default values for the several
attributes in the client resource. The slide lists the client resource that is created for
a client named winclient.emc.edu.
Note that prior to configuring the client using the New Client Wizard, we first
installed the NetWorker client software on the client host. Alternatively, you can use
the Properties window of the client resource to create and configure a NetWorker
client.
The New Client Wizard presents the most common client resource fields to enable
administrators to quickly configure client resources for most situations. You will find
that the Client Properties window contains many more fields to further customize
backups for individual client resources and save sets. A full set of attributes is
displayed by selecting Diagnostic Mode from the View menu. We discuss several
of these additional fields later in this course.
Overview
Options that are displayed by the wizard for configuring the client depend upon the
application type selected. Here you can see some of the client resource options
that are available through the New Client Wizard for a traditional, file system
backup.
Overview
From the Select File System Objects window, identify the save sets that will be
backed up by this client resource. For a file system backup, NetWorker displays the
client’s file systems enabling you to select the data to be backed up. There is no
limit to the number of save sets you can specify.
The slide shows a specification for backing up two save sets: C:\Documents and
C:\Program Files.
Overview
By default, NetWorker provides a value for the Save set attribute which defines
which files are backed up for this client resource. The default value for the Save
set attribute is All, which causes all local file systems/drives to be backed up. Data
in the All save set by operating system is shown in the table on the slide.
Important: Certain save sets are excluded from the All save set. Also, special keywords can be used with All to define the file systems to include in a client backup. For a list of excluded save sets and keywords, see the “The All save set” topic in the NetWorker Administration Guide. The special save set DISASTER_RECOVERY:\
is used to back up all of the data that is required to perform a
Windows BMR recovery. Recovering Windows hosts is covered in
more detail later in this course.
If Save set is set to anything other than All and you want to back up any of the
Windows SYSTEM save sets, you must explicitly specify them in the save set list.
Overview
Now, let us put all the components of a NetWorker data protection policy together.
In addition to the table view, NetWorker provides a visual representation of each
workflow. Shown here is a view of a basic backup policy that is configured and
displayed from the Protection window.
The Traditional backups workflow that is pictured here is a workflow in the policy
named “Filesystem Backups” for a basic backup. The workflow is configured with
one action named “backup”. When the workflow runs, the workflow backs up the
clients that are assigned to the “Filesystem Backups” group to a device in the
“AFTD Devices” pool.
By using policies and workflows, NetWorker enables you to see at-a-glance how
your data is protected.
Overview
As we have seen, a workflow can have one action or multiple actions. Multiple
actions can be chained together and run sequentially or concurrently. Where there
are multiple actions in a workflow, a subsequent action in the chain operates on the
output that is generated by the action that precedes it in the workflow. The
subsequent action does not start until the previous action finishes.
The table summarizes the valid workflows that can be configured for traditional
backups through to a third action. A workflow can be as simple as one backup
action or it can be more complex with a succession of various actions. There are
some rules, though, for which action types can occur where in the succession. For
example, the only action that can follow a traditional backup is a clone action. The
clone action can occur either concurrently with or after the backup action. A
workflow for a traditional backup can optionally include a probe or check
connectivity action before the backup. A check connectivity action can be followed
by either a backup action or a probe action. When configuring the actions in a
workflow, the wizard enforces these rules by only presenting the valid action types
depending upon the position of the action in the workflow.
In the example that is displayed above, a workflow contains two actions, a backup
action and a clone action. A list of clients to back up is sent to the clone action
depending upon the outcome of the backup action.
Overview
To create a workflow for a traditional backup containing more than one action, start
with the first action for the workflow. Per the chart on the previous slide, that can be
either a probe, check connectivity or a backup traditional action. Then, the next
action that you add to the workflow depends upon what was chosen for the first
action.
Overview
In this lab, you create the resources necessary for a traditional backup workflow.
You create a new client resource and assign the client to a new group, and then
create a new policy with a new workflow and backup traditional action.
Introduction
This lesson covers the data flow of scheduled or server-initiated backups, how to
perform ad-hoc backups of policies and workflows, and how to initiate policy-based
backups from the command line.
Overview
Once a policy and its associated workflows are created, workflows automatically
run according to the time and interval specifications in the workflow. In this
example, workflows in the “Server Protection” and “Filesystem Backups” policies
are enabled for autostart. Each workflow starts according to the schedule defined in
the workflow. The last time a policy, workflow, or action was run is displayed in the
Start Time column of the Policies section of the Monitoring window.
Overview
To start a workflow, right-click the name of the workflow that you want to start and
select Start. You can run a workflow for selected clients in the workflow by
selecting the workflow and then choosing Start Individual Client from the
Monitoring menu.
Overview
Backup administrators can use the nsrworkflow command to run policy workflows
from the command line. Basic command usage is to specify the policy and
workflow names to be run. NetWorker starts the actions within the named workflow
of the specified policy. In this example, we are using nsrworkflow to run the action
in the “Traditional Backups” workflow within the “Filesystem Backups” policy.
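For example (policy and workflow names from the text; verify flag spelling with nsrworkflow usage on your release):

```shell
# Run all actions in the named workflow of the named policy
nsrworkflow -p "Filesystem Backups" -w "Traditional Backups"
```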
Overview
There may be times when an administrator wants to run a workflow with a different
value for an action setting. NetWorker provides the –A command line option to
enable overrides of certain action settings using nsrworkflow from the command
line. When the –A option is used, the command line flags are passed to the
executable implementing the specified action and are in effect for that operation.
The –A option is supported for these action types: backup traditional, backup
snapshot, probe, and clone.
In the example shown here, we are again running the “Traditional backups”
workflow in the “Filesystem Backups” policy. But this time, we are overriding the
level that is specified in the action to run a level full backup.
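A sketch of that override; the string following -A is passed through to the executable implementing the backup action, and -l full sets the level for this run only:

```shell
# Run the workflow, overriding the configured level with a full
nsrworkflow -p "Filesystem Backups" -w "Traditional backups" -A "-l full"
```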
Overview
Policies and workflows can also be started by running the nsrpolicy start command
on the NetWorker server. You specify the policy name and optionally, a workflow
within the policy and the name of one or more clients. Workflows must always start
from the first or head action. Granular start of a single action within a workflow is
not supported.
When using the nsrpolicy start command, it is possible to override the workflow and
run the workflow for one or more clients as long as one or more clients are clients
that are specified in the group that is assigned to the workflow.
In the example shown here, we are starting the workflow, “Workflow with multiple
actions,” in the policy, “File system Backups,” for one of the clients in the workflow.
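A hedged example of such an invocation (the client name is hypothetical; it must be a member of the group assigned to the workflow):

```shell
# Start one workflow of a policy for a single client
nsrpolicy start -p "File system Backups" \
    -w "Workflow with multiple actions" -c client1.emc.edu
```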
Overview
savegrp uses nsrexecd to start saves on NetWorker client hosts. nsrexecd, running
on each client host, only allows remote execution requests from NetWorker hosts
listed in the client’s /nsr/res/servers file. If this file is empty or does not exist, the
client can be backed up by any NetWorker server.
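For example, a /nsr/res/servers file restricting backups to two NetWorker servers simply lists their hostnames, one per line (names hypothetical):

```
nwserver.emc.edu
nwserver2.emc.edu
```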
By default, the Priority attribute of each client resource is set to 500. To guarantee that the backup of one client occurs
before the backup of another, place each client in separate groups and configure
the workflows to start at different times.
Monitoring Backups
Overview
As shown here, from the Monitoring window, open up the tree in the Policies
section to the desired level. For backup actions, you can drill down to the clients
within the backup. The Status column displays the status of running operations or
for the last run time. For example, a green checkmark indicates a successful
completion for the last time the operation ran. A blue icon indicates that an
operation is in progress, and a red icon points to a failed operation. There are other policy status icons that may be displayed; hover the mouse over an icon to display its meaning. Additional monitoring information is available from the Monitoring window:
Policies – Lists all policies, workflows and actions with status, the time the last
backup was run, the duration of the backup, the completion percentage, and the
next time the backup will run. Clicking the Actions tab displays a list of all the
configured actions. Column information indicates the action status and its policy
and workflow.
All Sessions – Displays all sessions currently running on the NetWorker server.
You can select other session tabs to display only certain session types, such as
save sessions, recoveries, and clones. You can cancel a session by highlighting
the session, right-clicking and selecting Stop.
Devices – Contains storage node, volume, pool, and performance information for
configured NetWorker devices. The status icon indicates if the device is currently
active (shown here), disabled or idle.
Log – Contains information about the many actions that are performed by
NetWorker during the running of the policy or workflow.
Alerts – May contain information such as the license status alert shown here. The
priority column indicates the criticality of the alert.
Overview
To find out more about workflow operations, right-click a workflow from the
Monitoring window and choose Show Details. The Workflow Summary window
displays recent instances of running the selected workflow. Select the instance that
you are interested in and details about the actions of that specific workflow run are
displayed in the lower portion of the window. Clicking Show Messages displays
the end of the log file for the selected workflow instance. Options for the Show
Messages window include Get Full Log, Print and Save the messages to a file on
the local host.
Overview
With the status icons and messages that are provided from the Monitoring
window, you can quickly obtain information about failed actions and workflows and
begin troubleshooting the failure.
Policy Notifications
Overview
You can define the notification settings for a policy and its associated workflows
and actions.
Notifications can be sent to a log file or to an email address. You can change the
content of the notification command to send the notification to a different log file or
to a mail recipient.
At the workflow level, you have the choice to use the notification configuration that
was set at the policy level or to send a notification that is defined for the workflow
on completion of all of the actions in the workflow or on failure of any one of the
actions. When a notification is set at the workflow level, it supersedes any
notifications that are configured at the policy level.
For an action, you can choose to use the notification that is configured at the policy
level or you can configure a different command on completion or on failure of the
action. When a notification is set at the action level, the notification is generated in
addition to any notifications generated at the workflow or policy levels.
Finally, a protection period can be specified. This allows the save set to be retained
for a specified time, ranging from minutes to years. By default, this option is
disabled.
In the example shown here, the default notification is left unchanged at the policy
level. However, for the backup action, we chose to use a different notification upon
completion of the action. When the action finishes, the notification message is
written to the file C:\filesystemaction.log.
NetWorker supports several predefined variables for notifications, including
${NSR POLICY}, ${NSR WORKFLOW}, and ${NSR ACTION}. For example, when the
notification mail -s “workflow ${NSR WORKFLOW} completed” recipient@mailserver
is used, the actual name of the workflow is substituted in the subject.
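As a sketch, the notification commands for this example might read as follows (the log file path is from the slide; the mail recipient is a placeholder):

```sh
# Write the notification to a local log file:
nsrlog -f C:\filesystemaction.log

# Or mail a notice; the workflow name is substituted into the subject at run time:
mail -s "workflow ${NSR WORKFLOW} completed" recipient@mailserver
```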
Workflow Considerations
Overview
You can stop workflows that are currently running at both the workflow and the
policy level. If an action fails during the execution of a workflow, the workflow may
be restarted; in that case, each action continues where it left off.
Overview
The checkpoint restart feature is not enabled by default and is configured on a per
client basis. To enable the feature, check Checkpoint enabled from the client
resource General tab. In Diagnostic Mode, Checkpoint granularity is the level at
which the backup can be restarted, either at the directory or file level. When restart
by directory is selected, after each directory is saved, the data is committed to the
index and media database. If restart by file is selected, every file is committed to
the index and media database. This is time consuming and has the potential to
degrade performance during a backup containing many small files. Because of this,
restarting by file is recommended only for save sets with a few, large files.
Overview
The Backup command attribute is used to enter a specific backup command when
using one of NetWorker’s add-on modules, such as NetWorker Module for
Microsoft and NetWorker Module for Databases and Applications, to perform
application-specific backups.
You can create a custom script to perform tasks before, after, or instead of the
save process. These tasks might include moving, deleting, or renaming files,
stopping and starting processes, or generating logging information. When writing a
custom script, you must include the save command if you want a save stream to be
generated. The save command should have an argument of $* to retain all of the
arguments sent by the NetWorker server.
The custom script must have a name that begins with nsr or save (for example,
nsr_my_custom_command or save_my_custom_command). The custom script file
must also reside in the same directory as the NetWorker save command.
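A minimal sketch of such a wrapper script is shown below; the script name follows the required save prefix, and the pre- and post-processing steps are placeholders:

```sh
#!/bin/sh
# save_my_custom_command - custom backup script run in place of save.
# Must reside in the same directory as the NetWorker save command.

# Placeholder pre-processing task (for example, quiescing an application):
echo "pre-processing started" >> /tmp/backup_script.log

# Run the real save, keeping all arguments sent by the NetWorker server
# so that a save stream is generated:
save $*
status=$?

# Placeholder post-processing task:
echo "post-processing finished" >> /tmp/backup_script.log

exit $status
```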
Overview
When a client’s Backup command attribute is blank or contains anything other than
save, the specified command (or save if the attribute is blank) is started once for
each save set. Thus, if a client has three save sets, the backup command is started
three times.
Overview
For a client to run any type of backup, it must first be configured as a client
resource on the NetWorker server. When the client performs a save, it generates a
save stream, sends it to the assigned storage node, and sends tracking information
to the NetWorker server. The storage node also generates tracking information
which it sends to the server.
Overview
The save command can be started directly from the command line on any
NetWorker client.
Overview
Save is the NetWorker backup command-line utility that is used to back up files and
directories. It creates a single save set containing the files and directories that are
specified as arguments. If no files or directories are provided as arguments, the
current directory is backed up.
Unless the -x option is used, save will not cross mount points. For example, save /
in a Linux environment backs up only the root file system. Please see the
NetWorker Command Reference Guide for more options and information about
save.
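A few hedged command-line sketches (nw-server is a placeholder server name):

```sh
# Back up /etc to the server nw-server:
save -s nw-server /etc

# Back up the current directory (no path argument given):
save -s nw-server

# Back up / and cross mount points:
save -x -s nw-server /
```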
Overview
Previewing the backup does not back up any data. Running save with the -n
option performs many of the tasks that take place during a normal backup, such as
contacting the NetWorker server to request permission to back up. However, no
save stream is generated.
Previewing the backup ensures that save is working properly and displays an
estimated size of the save set as well as the number of files to be backed up. A list
of files that would be saved is also displayed.
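A preview run might be sketched as follows (nw-server is a placeholder); the estimated save set size and file list are displayed, but no data is backed up:

```sh
save -n -s nw-server /etc
```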
Overview
NetWorker User is used to perform both saves and recoveries from Windows client
hosts. It can be initiated from Windows Start or by running winworkr.exe on the
command line.
The four buttons in the upper-left corner of the window initiate the following tasks.
Perform a backup – This opens the smaller backup window that is shown in the
slide on the right.
Perform an archive – This requires a special license and is not covered in this
course.
Perform a recovery – This opens the recover window for restoring backed-up files.
Recovery operations are covered later in this course.
Verify files – This allows you to verify whether a recent backup or archive
operation was successful by comparing data on disk to data on a volume. See the
NetWorker Administration Guide for details.
Overview
From the Backup window, files and folders are marked for backup. Folders are
displayed in the left pane. Clicking a folder displays its contents in the right pane.
Items can be marked for backup in either pane.
After marking the files and directories to back up, click Start (green lightning bolt)
to begin the backup. You can monitor the backup in the Backup Status window,
which opens when the backup begins.
Overview
Using Special Handling affects all the files that are backed up during the backup
session. To perform compression, password protection, or encryption only on
selected files in the backup, right-click the item that you want to handle specially
and select the appropriate action from the menu. The Attributes column shows the
special handling that is currently set: a value of P indicates password protection, E
indicates password protection and encryption, and C indicates compression.
Overview
Introduction
This lesson covers several advanced backup options including synthetic full and
block based backups, NetWorker directives, NetWorker Snapshot Management,
and NetWorker backup support for virtual clients, databases, and applications.
Overview
Synthetic full backups are supported only for traditional, file system backups.
Application modules and NDMP backups are not supported.
Using synthetic full backups can reduce the number of full backups that need to
be run but does not eliminate the requirement to run full backups. Run synthetic
full backups as a replacement for full backups, not in addition to them.
Because synthetic full backup operations include only the NetWorker server and
storage node, they have the potential to reduce the impact of backup operations
on the network and client resources. However, it is also important to monitor the
impact of synthetic backup processing on participating storage nodes.
Overview
A full backup or a synthetic full backup, created with NetWorker 8.0 or later,
must exist.
All incremental backups participating in the synthetic full backup are in the
media database.
All save sets participating in the synthetic full must:
Participating storage nodes must have attached devices for read and write.
Synthetic backups can be directed to any device that can be used in a traditional
full backup. However, because synthetic backup processing involves concurrent
recover and save operations, Dell EMC recommends using backup devices that
support concurrent operations, such as advanced file type and Data Domain
devices. This allows NetWorker to automatically manage volume contention. Also,
consider using AFTD or Data Domain devices to store all participating backups on
a single device.
Overview
The tasks that are required for configuring a scheduled synthetic full backup
include:
1. Create a group resource, and assign one or more clients to the group. Do not
mix Windows with UNIX clients.
2. Create a client resource for each backup client that participates in the synthetic
full. Ensure that the save sets meet synthetic full requirements. Make sure that
the Backup renamed directories attribute is enabled on the General tab of the
client resource. This attribute is enabled by default for NetWorker 8.0 and later
clients.
3. Create a client resource for each storage node that will be performing
scheduled synthetic full backups.
4. Create a workflow specifically for scheduled synthetic full backups and assign
the group to the workflow. Set the schedule in the backup action to include
synthetic full backups. Remember to still include regular full backups on the
schedule.
Overview
NetWorker supports block based backups (BBB) for Linux and Microsoft
Windows platforms, but BBBs do not support the WINDOWS ROLES AND
FEATURES save sets.
Using block based backup technology, backups complete in less time than
comparable non-BBB backups. Also, no index is created as part of this workflow.
This makes block based backups of particular benefit for high-density file systems
where, potentially, millions of files would need to be indexed and indexed again
with every backup. The fact that NetWorker does not create an index in this
process is a differentiator in the industry. It saves time and space in the backup
workflow. Even though an index is not created, recovery at the file level is still
supported. This is done by virtually mounting the backup, at which point, files can
be viewed and recovered.
Overview
For Linux platforms, in addition to the NetWorker base client installation package,
you must install the BBB software package named lgtobbb to provide a NetWorker
client with block based backup support for incremental backups and recoveries.
Block based backups require the use of client direct, therefore, only AFTD and
Data Domain device types are supported as backup targets. You can, however,
clone block based full backups to other device types including tape and virtual tape.
To enable the block based backup feature, select the Block based backup
attribute in the client resource. Client direct is enabled by default. Valid save sets
include the All save set and volume/volume mount point levels. Save sets at the
folder or file level are not supported for backup. For Linux, each volume group must
have at least 10% free space for block based backups to succeed. This space is
required for copy on write snapshot processing.
Overview
Supported backup levels for block based backups are full and incremental. Block
based backups can coexist with traditional backups.
When backups are sent to an AFTD, selecting any level apart from full or
incremental results in an incremental backup being performed. The next backup
after 38 incremental backups will automatically be a full backup.
On a Data Domain device, selecting any backup level apart from full results in a
virtual full backup. The backup save sets are displayed as level full. Forever
incremental backups are supported.
A full backup must be created initially. Incremental backups must be created on the
same device as full backups. When using incremental backups, the next backup
after a reboot of a client host will be a level full.
NetWorker Directives
Overview
A directive is a set of statements and arguments that the save command uses
when generating a save stream. Directives enable you to perform optional tasks
such as skipping, compressing, or encrypting files. There are three types of
directives: global directives, NetWorker User local directives, and local directive
files.
If there is a conflict between directives, global directives take precedence over local
directives. On Windows systems, NetWorker User local directives take precedence
over local directive files.
Directive Syntax
Overview
Directory names are specified within double angle brackets, “<< >>”. A directory
specification of “<< / >>” on a Windows host is equivalent to all drives. Quotes
around the directory specification are not required for a UNIX path name.
Indentation is optional.
ASMs on following lines affect how files under the specified directory are saved.
When an ASM has a leading + it is recursively applied to all subdirectories.
A pattern is a file or directory name. It may contain the wildcards *, ?, and [].
Multiple pattern arguments are separated by white space.
In the following example, the skip ASM applies only to files or directories in /etc
whose names end in .log:
<< /etc >>
skip: *.log
Overview
Examples:
Skip the file expenses.xls in the C:\docs directory, and compress all files having a
.mdb extension residing in C:\docs and recursively below it:
<< "C:\docs" >>
skip: expenses.xls
+compressasm: *.mdb
Skip all files with .tmp and .jpg extensions anywhere under /opt/data:
<< /opt/data >>
+skip: *.tmp *.jpg
Overview
Save environment keywords are used to affect how current ASMs, as well as ASMs
further down in the directory structure, are to be applied.
<< / >>
+compressasm: .

<< /export/home >>
forget
ignore

The result is that nothing under /export/home is compressed and all .nsr files under
/export/home are ignored. Thus, even if a user has a directive file
/export/home/xyz/.nsr containing:

+compressasm: .

the directive is not applied. The allow keyword, placed in the .nsr file at the top
level of an ignored directory structure, overrides the ignore and re-enables local
directive files.
Global Directives
Overview
Use directive resources to apply global directives to individual client resources for
server-initiated backups. NetWorker provides various preconfigured global
directives for various operating systems. These resources can be modified, but
they cannot be deleted. You can also create your own directive resources.
You apply a global directive to individual client resources using the Directive
attribute on the client resource.
In this example, we want to skip all files with an extension of tmp for a specific
Windows client resource. When a backup action runs for this client resource, it
skips all tmp files.
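The directive text behind such a resource might read as follows (a sketch; on a Windows host, << / >> applies to all drives):

```
<< / >>
+skip: *.tmp
```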
Overview
You can encrypt backup data on Windows and UNIX hosts using the NetWorker
Advanced Encryption Standard (AES) Application Specific Module (ASM). The AES
ASM provides 256-bit data encryption. NetWorker uses the Datazone passphrase
attribute in the NetWorker server resource (NSR) to generate the datazone
encryption key that is used during backup and recovery operations with encryption.
When enabling backup encryption, specify a value for the Datazone passphrase
attribute. If you do not specify a Datazone passphrase, NetWorker uses a default
passphrase.
You control access to the passphrase through the lockbox resource on the
NetWorker server. NetWorker administrators with sufficient privileges can specify a
list of users that have permissions to store, retrieve, and delete AES passphrases.
Only users who are specified in the lockbox resource can modify the Datazone
passphrase attribute in the NSR resource.
Overview
You enable encryption for save set backups by applying the AES directive to the
client resource. Select Encryption directive for the Directive attribute. When this
client is backed up, the save sets are encrypted.
In this example, when any backup workflow containing this client runs, the save set
is encrypted during the backup operation.
Overview
You can recover AES encrypted data by using the Recovery wizard in NetWorker
Administration, NetWorker User on a Windows host, or the NetWorker recover
command.
During a recovery of encrypted backup data, the passphrase that was used to
encrypt the data must be used to decrypt it for a successful recovery. By default,
NetWorker uses the current value of the Datazone passphrase attribute to recover
the data. If the key generated from this passphrase fails, NetWorker uses the key
that is generated from the default passphrase. If this fails, NetWorker fails the
recovery.
Overview
The purpose of configuring local directive files using NetWorker User is to avoid
having to manually edit a nsr.dir file and worry about using correct syntax. Using
NetWorker User simplifies the creation of the directives.
This type of directive has limitations. It can only configure ASMs that NetWorker
User is familiar with. These include null (similar to skip), compressasm, pw1
(password-protect), and pw2 (encrypt).
To configure the directives, start NetWorker User and select Local Backup
Directives from the Options menu. All files and directories are initially marked.
Unmark files and directories you want skipped during backups, and apply special
handling to those items for which you want special handling. Save the directives by
selecting Save Backup Directives from the File menu. The networkr.cfg file is
created and read by save during subsequent backups. If the file exists, it is updated
each time that you save the directives. networkr.cfg resides at the top level of the
system volume (usually C:\).
More information about directives can be found in the nsr_directive (for server-side)
and the nsr (for client-side) topics in the NetWorker Command Reference Guide or
the UNIX/Linux man pages. Also, please see the Directives topic in the NetWorker
Administration Guide.
Overview
A snapshot is a point-in-time (PiT) copy of data files, volumes, or file systems. NSM
provides snapshot backups on disk that can be tracked and managed from
NetWorker. You can use snapshots for impact-free backups by using a server other
than the production host to perform clones of snapshots to backup media. This
alternate proxy host or mount host takes on the performance burden instead of the
production server.
NSM backups use the same NetWorker features that are offered for conventional
backups, such as monitoring, scheduling, and reporting.
Overview
In the diagram, critical application data is stored on a Dell EMC storage system.
Production data can consist of file systems and databases. At the time of backup,
an array-based point-in-time snapshot is created. NetWorker uses cloning to
rollover or copy the snapshot to backup media, DD Boost, or AFTD devices. There
can be multiple point-in-time snapshots that are taken throughout the day, any one
of which may be cloned to backup media as needed, depending upon the
customer’s protection needs. NSM provides snapshot restore/recovery capabilities
to retrieve data directly from a snapshot (snapshot restore) or from the clone copy.
You can also replace data on a source disk from a snapshot by performing a
rollback restore.
Overview
NetWorker Snapshot Management supports Dell EMC storage array and storage
appliance configurations that are listed here. Consult the latest NetWorker
Snapshot Management guides for the most up-to-date NSM support.
Snapshot Workflows
Overview
Snapshot Only: With a workflow containing only a snapshot backup action, NSM
creates a snapshot on the storage array. The snapshot is retained on the storage
array only. For a ProtectPoint backup, NetWorker creates a snapshot of the
specified files on the application host and retains the snapshot on a Data Domain
device only. NetWorker catalogs the snapshot as a backup in its media database.
The snapshot can be used for a snapshot restore.
Snapshot and Clone: The second workflow depicts a snapshot backup action
followed by a clone action. Here, NSM creates a snapshot and then the save sets
specified in the client resource are copied (cloned) from the snapshot to backup
media. Media can be DD Boost, AFTD, or tape devices. The NetWorker media
database catalogs both the snapshot and the clone. For file system backups, the
content of the snapshot is recorded in the CFI for the clone; for application
backups, the application files being protected are recorded in the CFI for both the
backup and the clone. You can also clone VMAX3 SnapVX and
RecoverPoint/XtremIO snapshots to ProtectPoint devices. A rollover-only workflow
can be achieved by following a snapshot backup action with a clone action that
specifies to delete the source save set after the clone action completes. In this
case, the snapshot is cataloged, cloned to media, and then deleted. Only the
rollover is available for recovery.
Delayed Clone: The third workflow is where the clone action is not directly tied to a
snapshot backup action. In this example, a save set group is used to select the
specific input for the clone. We discuss configuring clone operations in a later
module of this course.
Overview
Many of the options in the Policy Action wizard are similar to those for other types
of backups. Of particular note for snapshot backups are the fields on the Specify
the Snapshot Options screen. Snapshot retention is specified using duration-based
retention with the Retention attribute. After the period specified here, the
save set is removed from the media/CFI databases and the snapshot is deleted.
For Minimum Retention Time, specify the minimum amount of time to retain the
snapshot. When the minimum amount of time expires, a snapshot action in
progress can remove a snapshot from a storage device to ensure that there is
sufficient disk space for the new snapshot.
Overview
With data on supported hardware, NSM provides snapshot backup support for file
system clients, NMDA for Oracle and DB2, and NMSAP with Oracle. NSM is part
of the NetWorker extended client software package. This package must be
installed on the client to use NSM features. Each application host and mount host
must run NetWorker client and extended client software. In Windows, the extended
client is automatically installed when using the NetWorker all-in-one installer for
installing the NetWorker server and storage node. It is not automatically installed
when selecting the client install only from this package, when using the separate
client installer, or when installing on a UNIX platform. In these cases, install the
extended client package after the base client is installed. Note that using NMDA
and NMSAP with NSM requires installing those packages as well.
The client resource is used to specify snapshot backup options such as the storage
array on which to create the snapshot, and the mount host and storage node to be
used for rollovers. When NSM is enabled for the client resource, the wizard
presents storage array and other NSM backup options for configuration.
Overview
The types of snapshot restore that can be performed depend on the storage
location and other factors.
Snapshot restore - You mount and browse the snapshot file system on the
storage node/mount host and select the files, file systems, or volumes to restore.
Restore from clone - You perform a traditional NetWorker restore from backup
storage media.
Rollback restore - You restore the snapshot by using the storage array features.
An application volume is unmounted and its entire contents are replaced by the
entire contents of the selected snapshot.
Overview
NetWorker Module for Databases and Applications (NMDA) provides backup and
recovery for many of the most commonly used third-party applications, including IBM DB2, IBM
Domino/Notes, Oracle, MySQL, Sybase, and Informix.
NetWorker Module for Microsoft Applications (NMM) delivers a unified backup
solution for Microsoft applications. NMM works with Microsoft Volume Shadow
Copy Service (VSS) technology for backups of Microsoft Exchange, SQL,
SharePoint, Hyper-V, and Active Directory. Additionally, this module provides
the capability to leverage Microsoft VDI for SQL Server to provide a second
method for Microsoft SQL backups.
NetWorker Module for SAP provides backup and recovery of SAP applications,
including SAP HANA.
NetWorker Module for MEDITECH is used to protect MEDITECH
implementations.
You can learn more about NetWorker application support from the training course
NetWorker Integration Workshop, which focuses on optimizing NetWorker performance
and integrating NetWorker with Dell EMC backup products, Microsoft applications,
Oracle, SAP, MEDITECH and virtual environments. Operational best practices are
included with a focus on configuring and performing backup and recovery of file
systems, applications and databases.
Overview
The labs cover configuring advanced workflows using a check connectivity action,
dynamic groups, a notification at the action level, and using the skip directive.
Introduction
This lesson covers the NetWorker options for protecting machines in a VMware
environment. This includes an overview of how VMware client backups are
supported as well as the workflow for image backups.
Overview
NetWorker 9.1 and above provides support for two primary types of backup and
recovery solutions for VMware virtual clients.
The first option is guest-based, where a NetWorker client is installed on each virtual
machine host, the same as if it were a physical machine.
The second option is the NetWorker VMware Protection solution (NVP), which
became available with NetWorker 9.1. NVP uses a native data mover proxy
appliance, or vProxy, to back up and restore virtual machines that run in a
VMware virtualized infrastructure. NVP replaces the previous VMware backup
solution, EBR/VBA.
Overview
Deciding which backup method to employ for backing up virtual machines depends
upon many factors. These include ease of use, efficiency, and impact of backup
processing on resources, as well as backup and restore capabilities. This slide
shows some comparisons between the two current solutions.
Overview
NMC server—Provides the ability to start, stop, and monitor data protection policies
and perform recovery operations.
Dell EMC Data Protection Restore client—Provides the ability to perform file-level
restore by using a web interface.
DDR1 and DDR2—Data Domain appliances that receive and clone backup data in
SSDF format.
As shown here, the NetWorker server drives all operations. Data is protected in
one place under the control of the NetWorker server. The natively driven vProxies,
deployed on the vCenter, send backup data to Data Domain storage in native
VMDK format. The VMDK data format is kept as long as the data remains on a
Data Domain device. The backup environment is easily scaled by adding vProxies.
The NetWorker server manages the data protection environment using policies
along with the screens and wizards that are provided by NMC for backup
configuration, recoveries, monitoring and troubleshooting. In addition to using the
NMC Recover wizard, there is also an FLR web user interface that works with the
NetWorker server to provide file level recovery from image backups.
The backup request goes to the vProxy. Unless specified, the vProxy determines
the most efficient method for backup. vProxy will choose either the Hotadd or NBD
transport mode, with Hotadd being the default transport mode. The vProxy acquires
the virtual machine data from the datastore and sends the data to the specified
Data Domain device. All backups are CBT incremental backups based on previous
backups residing in backup or clone volumes on the Data Domain system. Only
changed blocks are passed to storage.
Overview
The protection group type of VMware with a subgroup type of All is used to
configure a protection group for NetWorker VMware Protection with vProxy backup.
The backup action is configured with a backup subtype of VMware (vProxy).
Overview
In addition to recovery from a single virtual machine, recovery from multiple virtual
machines is possible for the Revert and Virtual Machine recovery methods.
Overview
NetWorker VMware Protection supports both image level recoveries, and file level
restores. Recoveries are supported with the NMC Recover wizard. With image
level recovery, you can recover full virtual machines and VMDKs. Recovery is
controlled by nsrproxy_recover, which makes a direct request to a vProxy based on
supplied arguments. nsrproxy_recover can also be run from the command line.
Recovery can be performed from the original backup or a clone copy. If a clone is
not on a Data Domain device, recovery will first recall the data to a Data Domain
device and then perform the recovery. Retention of the recalled data is 1 day.
Recovery can be from an individual virtual machine or multiple virtual machines.
Recovery across vCenters is supported. Supported recovery types or methods
include:
Instant recovery or instant access: This type of recovery creates a new virtual
machine running directly off the backup image without performing any data
movement. The vProxy mounts the backup on a temporary NFS datastore, and the
virtual machine is immediately available. The recovery does not alter the backup
image that is saved in NetWorker. The VM copy that it creates is destroyed when
the session is deleted by the user.
Virtual disk recovery: With this type of recovery, also known as a VMDK recovery,
the user can recover one or more disks to an existing virtual machine.
File level recovery: Recovers individual files and folders back to the same or a
different virtual machine.
Overview
File level restore, or FLR, is provided through the Dell EMC Data Protection
Restore client which is accessed through the web browser and the NMC Recover
wizard. The web interface runs on the NetWorker server host. FLR sessions can be
monitored and controlled from NMC. FLR preserves Windows ACLs.
To perform a file level restore, an FLR guest agent is automatically deployed on the
virtual machine that is the target of the recovery. The agent can remain on the
virtual machine or can be removed after the recovery at the option of the user. File
level recoveries and backups can be performed simultaneously.
The vProxy appliance deploys the Microsoft VM App Agent (MSVMAPPAGENT),
which supports restores of Microsoft SQL databases and SQL instance backups to
running virtual machines. After installation, the MSVMAPPAGENT package appears
in the Windows installer Add-Remove programs list. MSVMAPPAGENT allows for
advanced application data protection of workloads residing on a VMware ESXi
server.
Overview
New installations of NetWorker 18.2 and later only use the NetWorker VMware
Protection solution with the vProxy appliance. Backup and recovery operations with
the VMware Backup appliance (VBA) are not supported, although the vProxy
appliance can be used to perform recoveries from VBA backups within the
NetWorker Management Web user interface. When you upgrade from a NetWorker
9.0.x or earlier release, you must migrate to use only the vProxy appliance, which
requires a workflow migration to convert existing VMware Backup appliance
policies to vProxy appliance policies.
While NetWorker 9.1 supports both VBA and vProxy simultaneously, Dell EMC
recommends migrating existing VBA backups to vProxy. If you already have a VBA
in vCenter, you can install vProxy and run both VBA and vProxy concurrently.
Running both concurrently allows you to migrate to vProxy on your own schedule.
As vProxy does not support recovery from VBA backups, you should continue to
use VBA for recoveries until the VBA backups have expired.
As part of the migration plan, VBA support is limited in NetWorker 9.1. You can run
and edit your existing VBA policies, while all new policies must be created in
vProxy.
Overview
NetWorker 9.1 and above provides a utility to migrate VBA/EBR policies and
groups to vProxy policies. After you deploy the vProxy OVA template and
configure the vProxy device in NetWorker, you can then migrate existing VBA
policies and groups to vProxy using the migration utility provided for this purpose.
The utility checks the compatibility and readiness of the environment and, when
ready, transitions the protection policies and groups to vProxy.
Overview
Summary
Introduction
Devices Overview
Introduction
This lesson covers the various device types that are supported by NetWorker,
configuring a storage node resource, and device management with nsrsnmd and
nsrmmd.
NetWorker Devices
Overview
In NetWorker, devices are classified by device type, how the device is configured
and managed, and by its location relative to the NetWorker server.
Overview
NetWorker supports many types of devices that can be used to store backup data.
These device types include the following:
Tape: Includes tape drives and tape cartridges, which may be physical or virtual.
Examples include 4 mm, 8 mm, DLT8000, LTO Ultrium-5, SAIT-1, TS1140, and so
on.
Cloud: NetWorker supports Data Domain Cloud Tier, CloudBoost cloud backup
storage devices and ECS storage devices.
Data Domain: Refers to a NetWorker Data Domain DD Boost storage device. The
media type is Data Domain.
Note: The libraries and devices available for configuration are listed in
the Devices window of NetWorker Administration. For an up-to-date
list of supported NetWorker devices, see the Dell EMC NetWorker
Hardware Compatibility Guide at www.dell.com/support.
Overview
Devices that are managed by NetWorker are either stand-alone devices or library
devices.
A stand-alone device is any type of device that does not have a robotic arm for
loading volumes. Thus, a volume must be manually loaded into the device (and
mounted) before the device can be used for backup or recovery.
A library (sometimes called an autochanger or a jukebox) is a multiple-volume
device that uses a robotic arm to move media. A library contains one or more
drives. Drives within a library are configured and managed differently than
stand-alone devices.
Overview
The NetWorker server manages the flow of save set data that is sent to a device.
To accomplish this, the server needs to know whether the device is attached to the
NetWorker server or to a remote storage node.
A NetWorker server can manage many storage nodes, but a storage node can be
managed by only one NetWorker server. In other words, a storage node cannot
exist in two data zones simultaneously.
Overview
Storage nodes are the NetWorker components that physically control the backup
devices. A storage node must have the NetWorker client and storage node
software that is installed on the host. Also, a storage node resource is configured
for each storage node host.
To create a storage node resource, right-click Storage Nodes in the left pane of the
Devices window and select New. In the resulting window, specify the host name of
the storage node. Select the type of storage node: SCSI, NDMP, or SILO.
In the status attributes, a Yes for Enabled means that the storage node is available
for use. Specifying No indicates a service or disabled state. New device operations
cannot begin, and existing device operations may be canceled.
The most commonly used storage node attributes are reviewed later in the course,
organized by the type of managed device.
Overview
Recall that processes running on a NetWorker storage node include nsrmmd and
nsrsnmd.
To support reading and writing of data, one or more nsrmmd processes are started
per configured device. Depending upon the configuration, AFTD and DD Boost
devices use multiple concurrent nsrmmd processes per device and multiple
concurrent save sessions per nsrmmd process.
There is one nsrsnmd process running on each storage node with configured and
enabled devices. nsrsnmd manages all device operations that the nsrmmd
processes handle on behalf of the NetWorker server’s nsrd process.
Communication between nsrsnmd and nsrd is event-based, and nsrsnmd is
automatically invoked by nsrd, as required.
To verify that the processes are running on a storage node, use the UNIX/Linux ps
command or, on a Windows host, use Windows Task Manager.
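As a quick sketch on a Linux storage node, the standard ps and grep commands are enough for this check (no NetWorker-specific options are assumed):

```shell
# List the NetWorker device-management daemons running on this storage node.
# Expect one nsrsnmd per storage node with enabled devices, plus one or more
# nsrmmd processes per configured device; no output means they are not running.
ps -ef | grep -E 'nsr(mmd|snmd)' | grep -v grep
```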
Introduction
This lesson covers using NetWorker disk storage devices with an emphasis on
Data Domain and advanced file type devices (AFTDs).
Overview
NetWorker backup to disk devices use disk files that are configured and managed
by NetWorker. Disk devices can reside on a computer’s local disk, or they can be
located on a network-attached disk.
File type device (FTD) - The basic, legacy disk device type.
Advanced file type device (AFTD) - Supports concurrent backup and restore
operations. AFTDs can reside on a local disk on a NetWorker storage node or on
network-attached disk devices that are either NFS or CIFS mounted to a
NetWorker storage node.
Data Domain device - Resides on Data Domain systems with enabled DD Boost.
Backup data is stored in a Data Domain device in deduplicated format.
Overview
A file type device (FTD) uses an existing directory within a file system as its
volume. File devices can be local to Windows/Linux storage nodes or NFS-
mounted to Linux storage nodes. Each save set directed to the device is written to
a separate file within the directory. File type device does not support concurrent
read and write operations.
When creating a NetWorker device resource for a file device, the name of the
device is the full pathname of the directory, for example E:\, D:\Filedev1, or
/filedevice2. It is suggested that you create separate file systems for each file type
device. If multiple file devices share the same file system, they will each contend
for the available disk space. If a file device resides in a file system containing
operating system or user files, there will also be contention for available space. If a
file type device cannot be assigned its own dedicated file system, the device’s
Volume default capacity attribute should be used to limit the amount of space that
can be used by the device. If this attribute is set (it is null by default), the
volume is marked full once the specified amount of data (750 MB, 12 GB, 1 TB, and
so on) has been written to it.
After the device resource is created, a file type device’s volume is labeled and
mounted.
File type devices are legacy devices and their use is limited. Dell EMC
recommends using AFTD or DD Boost devices instead of file type devices.
Overview
Advanced file type devices overcome the main restrictions of traditional file type
devices. Advanced file type devices support multiple backup and read operations
simultaneously. This enables you to recover, clone, or stage data from an AFTD
while backups are in progress. To support this capability, multiple concurrent
nsrmmd processes are used per device and each nsrmmd can support multiple
concurrent save sessions.
When recovering from an AFTD, save sets are recovered concurrently. Multiple
save sets can be simultaneously recovered to multiple clients. AFTD save sets can
be cloned to two different volumes simultaneously. Concurrent recoveries are
limited to file type recoveries and are performed using the recover command.
Many file systems can be dynamically enlarged, enabling the size of an AFTD
volume to be increased without relabeling the volume. Unlike a file type device,
advanced file type devices are supported for both NFS and CIFS.
The Client Direct feature enables NetWorker clients to back up directly to AFTDs
over CIFS or NFS, bypassing the storage node.
Overview
An advanced file type device responds differently than a file type device to a “disk
full” condition. A file type device behaves much like a tape device. When there is no
more room on the volume, NetWorker marks the volume full and continues backing
up the save set to another volume. This volume may be either a disk or tape
volume.
An AFTD volume is never marked as full. A save set being written to an advanced
file type device will never continue (span) onto another volume. Instead, if the file
system containing the volume becomes full, NetWorker suspends all saves being
directed to that device until more space is made available on the volume. A
message is displayed stating that the file system requires more space. The nsrim
process is invoked to reclaim space on the volume. A notification is sent by email to
the NetWorker administrator.
Overview
Each AFTD device is identified with a single NetWorker storage volume. Before
creating an AFTD resource, create one directory for each disk to be used for the
AFTD.
Do not use a temporary directory. It is suggested that you create separate file
systems for each AFTD. If multiple AFTDs share the same file system, they each
contend for the available disk space. If an AFTD resides in a file system containing
operating system or user files, there will also be contention for available space.
For Dynamic nsrmmds, select whether nsrmmd processes on the storage node
devices will be started dynamically. If selected, NetWorker starts one nsrmmd
process per device and adds more only on demand, as needed. When not
selected, NetWorker runs all available nsrmmd processes.
Overview
Each AFTD device is defined by a single path, although the access path may be
specified in different ways for different client hosts.
NetWorker AFTD devices can be created from the Devices window using either the
New Device Wizard or the Properties window.
The attributes from the Properties window are shown here, however, with either
method, similar information is provided:
For Name, enter the name you would like to use for the device. This can be the
path to the device, or it can be a meaningful name of your choosing. If the storage
node is not also the NetWorker server, this AFTD is a remote device. The remote
device name must use this format: rd=storagenodename:devicename.
In the Device access information attribute, enter the complete path to the device
directory. Multiple entries may be made. The first path enables the storage node to
access the device through its defined mount point. You can also provide alternate
paths for Client Direct clients.
Specify adv_file as the Media type for advanced file type devices.
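The same attributes can also be sketched from the command line with the nsradmin utility. The server name (nwserver), storage node name (sn1), and device path below are illustrative, and attribute names may vary slightly by release:

```shell
# Hypothetical example: create a remote AFTD resource on storage node sn1
# from the NetWorker server "nwserver" (all names are illustrative).
nsradmin -s nwserver <<'EOF'
create type: NSR device;
  name: "rd=sn1:/aftd/dev1";
  media type: adv_file;
  device access information: "/aftd/dev1"
EOF
```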
Overview
On the Configuration tab, set the number of concurrent sessions and the number of
nsrmmd processes the device may handle.
Target sessions is the number of sessions that an nsrmmd process handles before
another device on the host takes extra sessions. This setting is used to balance the
sessions among nsrmmd processes. If another device is not available, another
nsrmmd process on the same device takes the additional sessions. Typically, this
field is set to a lower value. The default value for AFTDs is 4.
Max sessions is the maximum number of sessions that the device may handle. If
no additional devices are available on the host, another available storage host is
used, or retries are attempted until sessions are available. The default value is 32
for AFTDs, which typically provides the best performance.
Max nsrmmd count limits the number of nsrmmd processes that can run on this
device. This setting is used to balance the nsrmmd load among devices. The
default value for Max nsrmmd count is 12.
Provide a Remote user name and Password if an NFS or CIFS path is specified
in the Device access information field.
Overview
After the AFTD device resource is created, the device is labeled and mounted
automatically. Alternatively, you can manually label a volume in the device into a
media pool and then mount the volume.
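A hedged command-line sketch of the manual alternative uses the nsrmm utility; the pool name, storage node, and device path are illustrative:

```shell
# Hypothetical CLI sketch: label a volume in the AFTD into the "Backup" pool,
# then mount it (pool and device names are illustrative).
nsrmm -l -b "Backup" -f "rd=sn1:/aftd/dev1"
nsrmm -m -f "rd=sn1:/aftd/dev1"
```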
Overview
You can also use the Device Configuration Wizard to create an AFTD. From the
Devices window, right-click Devices and select New Device Wizard. Select
Advanced File Type Device (AFTD) for device type. Complete the information in
the wizard as required. Verify the device settings and select Finish.
Overview
The data load for simultaneous operations can be balanced across available
devices by using the target and max sessions per device. Also, when there are
multiple AFTD volumes belonging to a pool, NetWorker chooses the AFTD with the
least amount of used space. By using the total used capacity for AFTD volume
selection, the first labeled device is not excessively used. Together these
capabilities provide for effective load balancing across disk volumes.
A single NetWorker volume can be shared among multiple devices, even on
different storage nodes. Each device must have a different name and must specify
a path to the storage location. This allows devices and volumes to be used more
efficiently by enabling different devices to mount and access volumes
simultaneously. A new session can be distributed to any other nsrmmd that sees
the same volume.
Clients with network access to AFTD or DD Boost storage devices can send their
backup data directly to the storage devices, thus bypassing the storage node in the
backup path. The storage node continues to manage the devices for the NetWorker
clients but does not handle the data. Using Client Direct has the potential for
reducing bandwidth usage as the backup data travels directly from the client to the
storage device. Also, any bottlenecks at the storage node are avoided.
Overview
Options include:
max sessions - Distributes save sessions based on the max sessions attribute of
all devices that are configured on the storage node. This is the default option and
is more likely to concentrate the load on fewer storage nodes.
target sessions - Distributes save sessions based on the target sessions attribute
of all devices that are configured on the storage node. Using this option is more
likely to spread the backup load across multiple storage nodes.
Overview
The New Device Wizard is the recommended method to create and modify Data
Domain (DD Boost) devices. With the wizard, you can also create and modify
storage pools for Data Domain devices.
To create a Data Domain device, first launch the New Device Wizard from the
Devices window of NetWorker Administration.
The New Device Wizard walks you through the remaining steps for creating a
Data Domain device.
Overview
Next, select the Data Domain system on which you would like to configure the
device. If you have not already added the Data Domain system in NetWorker, you
can do so here as well. Then, enter the DD Boost username and password. On the
next screen, you are prompted to choose the folder to use for the device.
NetWorker creates the device name and device storage path.
Backup data that is sent to a NetWorker device that is configured with the Data
Domain device type is stored on DD Boost storage devices that are located on
Data Domain systems. By default, the NetWorker device configuration wizard
creates a storage unit (SU) on the specified Data Domain system to handle the DD
Boost devices for a NetWorker datazone. The SU is named with the short
hostname of the NetWorker server. The SUs are the parent folders for the DD
Boost devices, and each DD Boost device is a subfolder within a Data Domain
storage unit.
For the example shown here, the path for the storage unit on the Data Domain
system is /data/col1/nw. The paths for the DD Boost devices are as follows:
/data/col1/nw/ddve_dev_1
/data/col1/nw/ddve_dev2
/data/col1/nw/ddve_dev3
/data/col1/nw/ddve_dev_ccr
/data/col1/nw/ddve_dev_ccr2
/data/col1/nw/ddvedev1
/data/col1/nw/ddve_dev6
If you prefer to use a storage unit that has already been created on the Data
Domain system, you need to use the Secure Multi-Tenancy section of the wizard to
configure the device.
Overview
Once device configuration has been performed, the next step is to configure the
media pool and label and mount the device.
Choose a backup or clone pool type. Then, you can either choose a pool that you
have already created for DD Boost backups or you can create a new pool. A
dedicated pool is required for DD Boost devices. Be sure that you do not mix DD
Boost backups and traditional backups in the same pool.
Once you have selected a pool, you can check Label and Mount device after
creation. In the next window, choose the storage node for the device and the
method of transport, Fibre Channel or IP.
Overview
In SNMP Monitoring Options, type the Data Domain SNMP community string and
specify the events to be monitored.
The last wizard step is to review the configuration settings. The Data Domain
Device Name is the fully qualified hostname of the Data Domain system and the
name of the Data Domain storage folder on the system. Upon successful
configuration, the device is labeled and mounted. In the NetWorker Administration
Devices window, verify that the device is labeled and mounted, ready for use. The
Data Domain system is displayed as a managed application in the NetWorker
Management Console Enterprise window.
Overview
NetWorker supports Virtual Synthetic Full backups with Data Domain. The process
of creating a Virtual Synthetic full is a much more efficient way to create a Synthetic
full backup, integrating the NetWorker Synthetic Full backup feature and the Data
Domain virtual-synthetics feature.
The integration enables the Data Domain system to synthesize a full backup
without moving data across the network. A traditional full backup is recommended
only after every 8 to 10 virtual synthetic fulls have been completed. The use of
Virtual Synthetic Full backups also reduces the number of traditional full backups
from 52 to 6 per year, roughly a 90% reduction. If a Virtual Synthetic Full operation
fails, NetWorker defaults to creating a Synthetic Full.
Overview
Client Direct works with both AFTD and Data Domain devices. This feature is
enabled for a client by default. If a Client Direct backup cannot be performed (for
example, the client lacks a network connection to the storage), a traditional backup
through the storage node is performed. Client Direct clients require a network
connection and remote access to the storage device, such as a CIFS or NFS path
for AFTD devices.
One or more paths to the AFTD device are specified in the device’s Device access
information attribute. If the storage device is directly connected to the storage
node, a different access path is specified for the client than that for the storage
node. A configuration using a CIFS share is shown on the slide.
If the storage device is not directly connected to the storage node, as with NAS, the
device access information is the same for the storage node and clients.
Checkpoint restart supports Client Direct backups only to AFTD devices, not to
DD Boost devices. If a client is enabled for checkpoint restart and a Client Direct
backup is attempted to a DD Boost device, the backup reverts to a traditional
storage node backup. For Client Direct backups to AFTDs using checkpoint
restart, checkpoint restart points are made no less than 15 seconds apart.
Checkpoints are always made after larger files that require more than 15 seconds
to back up.
Introduction
This lesson covers an overview of using cloud storage devices with NetWorker.
Overview
NetWorker supports Data Domain Cloud Tier and CloudBoost backup devices, as
well as Elastic Cloud Storage (ECS).
Overview
Beginning with NetWorker version 9.1, NetWorker supports one of the key features
of Data Domain OS 6.0: Data Domain Cloud Tier. The Data Domain Cloud Tier
feature enables the movement of data from the active tier of a Data Domain system
to low-cost, high-capacity object storage in the public, private, or hybrid cloud for
long-term data retention. Only unique, deduplicated data is sent from the Data
Domain system to the cloud or retrieved from the cloud. This ensures that the data
being sent to the cloud occupies as little space as possible.
NetWorker integration with Data Domain Cloud Tier provides these specific
functions:
Clone from the Data Domain active tier to a Data Domain Cloud Tier device.
Track client data that is stored in the cloud and data that is stored locally.
Recover data from the cloud to a NetWorker client.
Overview
NetWorker does not store data directly to the cloud. With the Data Domain Cloud
Tier, data is moved to the cloud based on Data Domain data movement policies.
First, NetWorker backs up the data to the Data Domain active tier using a
NetWorker Data Domain device (DD Boost) as the target device for the backup.
Next, a NetWorker clone operation identifies the data to be moved to cloud storage
according to an application-based policy defined in a NetWorker DD Cloud Tier
device that is the target of the clone operation.
Then, Data Domain pushes the data to cloud storage according to an age-based
policy controlled by the Data Domain system. Data movement can run
automatically according to a schedule defined in the policy or manually using the
start option of the Data Domain data-movement command. Only unique data is
moved to the cloud.
Overview
Here are several prerequisites for integrating NetWorker with Data Domain Cloud
Tier.
The storage node managing the Data Domain devices must be at NetWorker
version 9.1 or higher and the Data Domain systems must be configured for a cloud
tier. The Cloud Tier option on the Data Domain system must be licensed and
enabled. The device containing the DD Boost backup data and the Cloud Tier
device must reside on the same Data Domain storage unit.
Overview
Use the NetWorker New Device Wizard to create the DD Cloud Tier device. The
wizard prompts for the following information: the name of the Data Domain system
and the Cloud Unit name, DD Boost username and password, the folder to use on
the Data Domain system for the DD Cloud Tier device, a clone media pool, the
storage node to use, and the Data Domain Management credentials. When the DD
Cloud Tier device is labeled, NetWorker (as an application to Data Domain) creates
an application-based Data Domain data movement policy that associates the Data
Domain storage unit with a cloud unit. There is one policy per storage unit.
Overview
Using the NetWorker mminfo command, you can identify the status of the data
movement process. A flag of T denotes that the save set is in transit. This means
that the save set is on the Cloud Tier device but has not yet moved to cloud
storage. Without the T, the data movement is completed.
Similarly, when querying save sets in NMC, a T in the Clone Flags column
denotes that the save set is in transit.
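As an illustrative command-line sketch (the volume name ddct_vol.001 is hypothetical), an mminfo query can report the clone flags directly:

```shell
# Report save set ID, name, save time, and clone flags for a DD Cloud Tier
# volume; a "T" in the clflags column indicates the save set is still in
# transit to cloud storage (volume name is illustrative).
mminfo -a -q "volume=ddct_vol.001" -r "ssid,name,savetime,clflags"
```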
Overview
To recover data from a DD Cloud Tier device, the NetWorker recover operation first
clones the data from the DD Cloud Tier device to a Data Domain device and then
recovers the data from the Data Domain device. NetWorker removes the clone
data from the Data Domain device 7 days later. For the recovery, there must be a
mounted Data Domain device on the same storage unit as the DD Cloud Tier
device.
Overview
The data protection solutions are available with NetWorker 9.0.1 and above and
CloudBoost 2.1 and above, but require CloudBoost 18.1 or later when working with
NetWorker 18.2. A NetWorker with CloudBoost environment can extend onsite data
protection to the cloud through the following methods.
1. Backup to the cloud. NetWorker with CloudBoost allows direct backup of on-
premises clients to a range of private, public, and hybrid clouds. This solution
allows clients to send backups directly to the object store with only the metadata
being stored in the CloudBoost appliance.
2. Backup in public cloud. This solution allows protection of applications that run in
public clouds such as AWS, AWS S3, Azure, and Azure blob storage. Similar to
on-premises backups to the cloud, this solution allows Client Direct backup to
the object store for applications that run in AWS EC2 and Azure compute
instances.
3. Long-term retention or cloning to cloud. This solution allows clone backups from
a backup target to the cloud for long-term retention. The operational copy for
backup and restore operations remains on the Data Domain host or any other
backup target. The copy that is cloned to
the cloud by NetWorker and CloudBoost is used for long-term retention of data.
Overview
When using an external Linux storage node, install the CloudBoost device on that
storage node.
Overview
The information for CloudBoost device and CloudBoost Appliance are displayed in
right pane of the Devices window.
Overview
Before you can create and use a cloud device for backup, the listed firewall port
requirements must be met. If the ports shown in the table are not configured before
you configure the CloudBoost appliance, restart the CloudBoost appliance.
Overview
This lesson provides overview information about using NetWorker for cloud
storage. In addition to the NetWorker Administration Guide, the guides that are
shown here provide more detailed information about using Data Domain Cloud Tier
and CloudBoost with NetWorker. The NetWorker Cloud Enablement eLearning course,
MR-1WN-NWCLD, focuses on the ability to enable NetWorker backups to the
cloud.
Introduction
This lesson covers an overview of using tape libraries with NetWorker including
supported library topologies, multiplexing and OTF, and persistent binding and
naming.
Library Components
Overview
Robotic arm - This is the mechanism that moves tapes. It is commonly an arm with
a gripper.
Slots - This is where volumes are stored when not loaded in a tape drive. Each slot
has a unique element address.
Media - These are the volumes, which are also known as tape cartridges or tapes.
In addition to the above components, many libraries also have the following:
Bar code reader - This is an optical device that reads a barcode affixed to a tape.
Using a barcode reader improves the speed of creating or refreshing the library’s
inventory of tape media.
Import/export port - This is a special port that is used to move tapes into and out
of the library without opening the door. It is also known as the Cartridge Access
Port (CAP).
Door - This enables access to the slots, media, and drives. Many libraries have a
sensor that detects when the door has been opened, which may initiate an
inventory.
Overview
A shared library is cabled in such a manner that two or more storage nodes control
some portion of the library. A shared library is supported in SAN (Storage Area
Network) and non-SAN environments. There are two configurations available for a
shared library:
1. Static drive assignment - All drives are statically bound to a specific storage
node, and multiple storage nodes are each assigned a drive. This configuration
is often used with virtual tape libraries.
2. Dynamic Drive Sharing (DDS) - Supported only in a SAN environment.
Individual drives in the library are controlled by more than one storage node;
however, only one storage node can use a drive at any given time. DDS is used
to share physical tape libraries and drives among storage nodes, but it does not
support sharing libraries across datazones.
Dedicated Library
Overview
As shown in the slide, all drives in a dedicated library are controlled by a single
storage node. Backup data from clients other than soprano must be sent to the
storage node soprano using the TCP/IP network.
Overview
Using Dynamic Drive Sharing (DDS), a tape drive is accessed and used by two or
more storage nodes within a single data zone. However, only one storage node
can control a drive at any given time.
It should also be noted that not all drives in a library must be dynamically shared.
For example, in the environment that is depicted in the slide, it would be possible to
enable alto access to all four tape drives but enable soprano access to only the top
drive. Thus, only the top drive would be dynamically shared.
DDS reduces hardware demands by enabling multiple storage nodes to use the
same drive, but at different times. Once configured, the administration (labeling,
mounting, and so on) of a shared drive is the same as for a nonshared drive.
For more information about NetWorker DDS configurations, see the NetWorker
Administration Guide.
Multiplexing
Overview
Multiplexing enables more than one save stream to write to the same device
simultaneously. This enables the device to write to the volume at the collective data
rate of the save streams, up to the maximum data rate of the device.
The amount of multiplexing enabled (the number of save sets that can back up
simultaneously) is primarily controlled by three NetWorker settings, Target
sessions, Max sessions, and Pool parallelism. These settings are discussed in
detail in a later module.
Overview
Open Tape Format (OTF) is a data format that enables multiplexed, heterogeneous
(UNIX, Windows, NetWare, and so on) data to reside on the same tape. NetWorker
clients send data in save set chunks to a storage node. The storage node arranges
them in media records and media files, which are stored in volumes. The way the
storage node organizes the records and files is platform-independent (Open
Tape Format), enabling any NetWorker storage node to read the data. Because of
Open Tape Format, a NetWorker storage node can be migrated to a host running a
different operating system.
Note: For more information about OTF, see the mm_data topic in the
NetWorker Command Reference Guide.
Overview
1. When a save is initiated, nsrmmd interfaces with the device to write the data to
the volume.
2. The nsrmmd daemon performs the following tasks to support multiplexing of
backup data, using Open Tape Format:
a. Breaks each save set into chunks.
b. Combines chunks from various save sets into records.
c. Sends the records to the device, which writes them to the volume.
d. Periodically, nsrmmd writes end-of-file marks to the volume, creating media
files. These file marks are used for faster positioning during reading of the
volume.
3. As each record is written to the volume, nsrmmd sends tracking information to
the media database on the NetWorker server. This information is inserted into
volume and save set records in the database, and tracks the location of each
media file, media record, and save set chunk.
Note: For more information about Open Tape Format, see the
mm_data topic in the NetWorker Command Reference Guide or the
UNIX/Linux man pages.
Overview
Persistent binding statically maps a target’s WWN address to the desired SCSI
address, ensuring the operating system always sees SAN-presented devices with
the same SCSI target ID across reboots. This feature is enabled by default on
some operating systems, while on others it has to be set manually.
If the SCSI address changes, the library becomes unavailable. In such situations, it
is required to disable the library and change the “control port” address to reflect the
new SCSI address of the library controller.
Persistent naming is used to ensure that the operating system (OS) or device driver
of a server always creates and uses the same symbolic path for a device
(sometimes referred to as device file).
As a best practice, Dell EMC recommends enabling persistent binding and naming
for tape libraries and tape devices. This avoids device reordering on reboots or
plug and play events. If a device reordering occurs, the NetWorker software is not
able to use any affected drives until the configuration is manually corrected.
For details on how to configure persistent naming from the operating system or
device driver, see your operating system and/or device driver documentation.
Introduction
Overview
For NetWorker to use a library, a jukebox resource (NSR jukebox) must be created.
This is done using either NetWorker Administration or the command-line utility,
jbconfig. For a library to be configured using NetWorker Administration, the library
must be able to provide hardware information, such as device serial numbers, to
NetWorker. If this information cannot be automatically provided to NetWorker by
the firmware, jbconfig is used to configure the library.
Overview
The Skip scsi targets field is used to specify SCSI addresses to skip (in
bus.target.lun format) when performing a scan operation. This is useful if the
storage node has tape drives or libraries that you do not want NetWorker to use.
Placing a list of SCSI addresses to be skipped in the storage node resource results
in those addresses being skipped during all scan operations.
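The bus.target.lun format can be checked with a short parser. This is a hypothetical illustration of the address format only; it is not a NetWorker API, and the addresses shown are made up.

```python
def parse_scsi_address(addr):
    """Parse a 'bus.target.lun' SCSI address string into an integer tuple."""
    parts = addr.split(".")
    if len(parts) != 3:
        raise ValueError(f"expected bus.target.lun, got {addr!r}")
    return tuple(int(p) for p in parts)

# Addresses listed in Skip scsi targets are excluded from scan operations.
skip = {parse_scsi_address(a) for a in ["0.2.0", "0.3.0"]}

def should_scan(addr, skip_set):
    """Return True if the address is not in the skip list."""
    return parse_scsi_address(addr) not in skip_set
```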
Overview
The first step in configuring a library is to scan the controlling storage node for
libraries and devices that are not yet known to the NetWorker server, either direct
attached or SAN attached. This is done by right-clicking the storage node in the left
pane of the Devices window and selecting Scan for Devices. A window opens in
which you can specify the storage node to scan. Although the storage node that is
selected in the left-pane is automatically chosen, you can choose to scan any or all
storage nodes for which a storage node resource is configured.
If there are unconfigured tape drives or libraries on one or more storage nodes that
you do not want to be affected by a scan operation, specify each SCSI ID in the
Exclude SCSI Paths field. This field can be used to prevent NetWorker from
configuring a device and from unnecessarily scanning attached SAN disks or non-
tape library/drive SCSI IDs. Any addresses in the Skip scsi targets attribute of the
storage node resource are automatically included in the Exclude SCSI Paths for
the storage node.
Overview
You can monitor the progress of the scan operation by viewing the Log window.
After the scan operation is finished, unconfigured devices are displayed in the left
pane of the Devices window. The icon used to represent an unconfigured drive or
library looks like an orange circle containing a wrench.
Overview
Next, configure the library (jukebox resource) and its devices. Right-click an
unconfigured tape library in the left pane of the Devices window and select
Configure Library. To create jukebox resources for all unconfigured libraries on a
storage node, use the Configure All Libraries selection.
In the resulting Configure Library window, assign the drives in the library to the
storage node that will control the robot. In the slide, only one storage node,
nwlinux, is shown in the window. However, in a SAN environment, more storage
nodes may be able to access the library. If these storage nodes have been
scanned by NetWorker, they are also displayed in the window.
Click Start Configuration to create the jukebox resource and device resources for
the drives within the library.
Important: An unconfigured library is listed in the left pane under each storage
node that has access to it.
Overview
After a jukebox resource has been created, the icon for the tape library in the
Devices window changes to reflect the fact that the library is now configured and
devices have been created for the tape drives. In this example, we show a
configured library with two configured tape drives. The display also shows that
there are 15 slots in the library with 14 unlabeled tapes and one cleaning tape
(CLN015L5).
Overview
With library sharing, two or more storage nodes are each assigned one or more
drives in the library to manage. Only one storage node manages each drive. When
configuring a shared library, NetWorker uses the device serial numbers that are
read during a scan operation to determine which storage nodes can access each
drive in the library.
In the slide, \\.\Tape3 on leg1-win5 and /dev/rmt/2cbn on leg1-sun5 have the same
serial number. NetWorker also recognizes that \\.\Tape2 on leg1-win5 and
/dev/rmt/3cbn on leg1-sun5 have the same serial number and therefore point to the
same physical drive. During library configuration, one drive is assigned to leg1-win5
and the second drive is assigned to leg1-sun5. After the library has been
configured, there are now two device resources that are associated with the tape
library. One of the drives is configured with leg1-sun5 and the other with leg1-win5.
The tape library is controlled by leg1-sun5.
Important: Always configure a library using the storage node that you
want to control the robot.
Overview
Configure persistent naming on the storage node either from the storage node’s
Properties window or when scanning for devices as shown here.
Overview
Clicking a configured library displays information about the library’s devices and
current volume inventory.
To view a jukebox resource, right-click the library and select Properties from the
drop-down menu. The General tab shows basic information about the library.
Overview
From the Devices window, label and inventory operations are performed by right-
clicking the library and choosing the appropriate selection from the menu. From
the menu, you can also perform a hardware reset of the library and move volumes
from the import slots to empty volume slots.
Overview
After configuring a library, a volume must be labeled before the library and its
devices can be used for backups. To label volumes in a library, right-click the
library name in the left pane of the Devices window and select Label.
In Target Media Pool, select the pool to which the volumes belong.
If the volume should not be recycled automatically, select Allow Manual Recycle.
After a volume is labeled, it must be mounted before NetWorker can use it. This is
done automatically within a library. When Auto Media Management is enabled,
NetWorker automatically mounts a volume in a device when needed and labels the
volume if it is unlabeled.
Overview
The Status table in the Devices window shows operations in progress. When there
is an operation that requires user input, such as labeling a tape which already
contains a label or depositing volumes into a library, NetWorker pops up a dialog
box automatically and a User Input icon is displayed in the status table.
If you choose Ignore from the dialog box, the icon remains in the User Input field
as a reminder that input must be provided before the operation continues. To later
supply input, right-click the notice in the status table and then select Manage
Library Operations > Supply Input.
Overview
To see status information for labeled tape volumes, select Tape Volumes in the
left pane of the Media window. Attributes that are displayed for the volumes include
the following.
By double-clicking a volume in the right pane, you can display a list of save sets
that have been written to the selected volume. This is a good way to verify that a
first backup to a tape device is happening as expected.
Overview
Libraries that have serial numbers can be configured using either NetWorker
Administration or the jbconfig command. However, devices that do not provide
serial numbers must be configured using jbconfig. Also, use jbconfig to configure
IBM tape libraries that are controlled by using the IBM tape driver.
Overview
SCSI address - Each tape drive has a unique bus, target, and logical unit
number (LUN). Many people mistakenly believe that the lowest SCSI address is
the first tape drive in the library. This is not always the case.
Library element address - Each slot and tape drive is assigned a unique
element address by the robotic controller. The tape drive with the lowest
element address is the first drive, and the next highest element address is the
second drive, and so on.
Operating system pathname – A tape drive is accessed through its operating
system device pathname.
When using jbconfig to configure a tape library, you are prompted to enter the
operating system pathname of each drive, beginning with the drive having the
lowest element address. Understanding the order of the drives is necessary to
properly configure the library.
When using jbconfig to configure the library shown in the slide, you are prompted
four times for the pathname of a tape drive in the library. What is the correct
sequence of pathnames to enter? Since you are first prompted for the drive having
the lowest element address, the correct sequence is \\.\Tape3, \\.\Tape2, \\.\Tape1,
and \\.\Tape0. This order corresponds with the ordering of the element addresses.
Persistent binding and persistent naming can be used to resolve issues regarding
device ordering.
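The drive-ordering rule can be shown in a few lines. The element addresses below (256 through 259) are hypothetical values chosen so that the result matches the slide's answer; only the sort-by-element-address rule comes from the text.

```python
def drive_order(drives):
    """Return OS pathnames sorted by library element address; jbconfig
    prompts for drive pathnames in this order (lowest address first)."""
    return [path for _, path in sorted(drives)]

# (element address, OS pathname) pairs; element addresses are hypothetical.
drives = [(259, r"\\.\Tape0"), (258, r"\\.\Tape1"),
          (257, r"\\.\Tape2"), (256, r"\\.\Tape3")]
order = drive_order(drives)
```

Note that the pathname order is the reverse of the SCSI/pathname numbering, which is exactly the trap described above.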
Overview
Before running jbconfig, ensure that the operating system can see and use the
library and its devices. The NetWorker inquire command lists all SCSI devices that
are detected by the operating system on the storage node. This command is part of
the storage node software.
The sjisn command is used to display information about a specific library. Not all
libraries support the sjisn command. The syntax of sjisn is: sjisn bus.target.lun
By comparing the output from inquire and sjisn, you can determine the tape drive
ordering and the operating system pathname that is assigned to each drive. In the
slide, the sjisn output shows the serial number of the drive at element address 256
is 10000091. The output of the inquire command shows that the operating system
has assigned the drive with that serial number a device pathname of /dev/nst0.
Since 256 is the lowest numbered element address, when prompted by jbconfig to
provide the path name of the first drive in the library, you should enter /dev/nst0.
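Matching the two outputs amounts to a join on serial number. A sketch, using the element address, serial number, and pathname from the slide (the tuple structure here is an illustration, not the actual output format of either command):

```python
def map_drives(sjisn_drives, inquire_devices):
    """Join sjisn output (element address, serial) with inquire output
    (serial, OS pathname) to get element address -> pathname."""
    by_serial = dict(inquire_devices)
    return {elem: by_serial[serial] for elem, serial in sjisn_drives}

# Element 256 has serial 10000091, which inquire shows as /dev/nst0.
mapping = map_drives([(256, "10000091")], [("10000091", "/dev/nst0")])
```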
Running jbconfig
Overview
Run the jbconfig command to configure the library. The command is executed from
the storage node managing the library control port (robotic arm). If it is a remote
storage node, use the -s option followed by the name of the NetWorker server. If
the -s option is not used and nsrd is not running on the local host, you are
prompted for the name of the NetWorker server on which the jukebox resource will
be configured. Since jbconfig creates a jukebox resource on the NetWorker server,
if it is executed from a storage node, the administrative user running the command
must belong to the NetWorker server’s Administrators user group. After jbconfig
creates the resource, the user can be removed from the user group.
jbconfig prompts vary from library to library, but commonly include: the type of
jukebox, jukebox name, whether NetWorker manages device cleaning and if there
are multiple paths to any of the drives.
After the jukebox resource is created, it is managed using either of the standard
administrative interfaces: NetWorker Administration or nsradmin.
Overview
nsrjb is a NetWorker command line utility that is used to manage NetWorker library
(jukebox) operations. nsrjb can be used to perform tasks such as labeling volumes,
mounting and unmounting volumes, inventory and resetting a library. The slide
shows several examples of using the command.
Note: nsrjb has many more options. See the nsrjb topic in the
NetWorker Command Reference Guide and the UNIX/Linux man
pages for more information.
Summary
Introduction
Introduction
This lesson covers how to view CFI and media database information using various
NetWorker interfaces. It discusses the interfaces for managing the media
database and CFI, save set and volume status and aging, and how NetWorker
selects a volume for writing.
Overview
This slide shows the NetWorker interfaces available for displaying the contents of,
and/or querying, the media database and client file indexes. nsrinfo, nsrls, and
mminfo are executed on the NetWorker server. However, both nsrinfo and mminfo
have a -s nw_server option which enables you to run the command from any
NetWorker host.
Overview
The NetWorker nsrinfo command, when specified with only a client name as an
argument, displays a list of all files being tracked in the CFI of that client.
With extra options, nsrinfo can list all files that are backed up at a specific
time or with a specific pathname.
Overview
The nsrls syntax is: nsrls [clientname], where clientname is the name of a
NetWorker client and, if specified, causes that client's CFI usage to be
summarized. If no arguments are specified, summary information is displayed for
all CFIs.
Output of nsrls includes the total number of records contained in the CFI and
the total amount of disk space used by the CFI. nsrls has a -m option which
displays the number of records in each of the media database files and the
amount of disk space used by each file.
Overview
To view information about each client's CFI or to manually remove CFI entries, click
Client Indexes in the left pane of the NetWorker Administration’s Media window. A
list of all NetWorker clients is displayed along with the overall size of each client’s
CFI and the number of cycles being tracked.
Right-clicking a client pops up a context menu from which you can display more
detailed information about the client’s CFI or perform a consistency check on it.
If you choose Show Save Sets from the context menu, the Index Save Sets
window pops up which displays the names of all the client’s browsable save sets
and the amount of space in the CFI used for file entries from those save sets.
Upon selecting a save set name in the upper pane, information for each individual
save set with that name is displayed in the bottom pane.
A CFI commonly contains several cycles worth of entries for each save set name.
A cycle is defined in NetWorker as a Full backup and all its dependent save sets.
Incremental and cumulative incremental save sets are dependent on the most
recent Full save set for a current recovery of the save set.
To give an example of what a cycle is, if a client has a 28-day retention policy, uses
a schedule of running a full backup on Sunday and incremental backups the rest of
the week, and has a save set list of C:\Windows\Fonts, the client’s CFI contains
four or five cycles of the C:\Windows\Fonts save sets, with each cycle being
composed of a full backup and its six dependent incremental save sets.
To manually remove entries from a CFI prior to the entries being automatically
purged due to normal aging of data, Remove Oldest Cycle removes all entries
belonging to the oldest full save set of the selected save set name and all entries
belonging to its dependent save sets. This is commonly done to quickly reduce the
size of a CFI.
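The cycle grouping can be sketched as follows. This is an illustrative model: backup levels are given as a chronological list, and cumulative incrementals are treated like incrementals.

```python
def cycles(levels):
    """Group a chronological list of backup levels into cycles: each full
    starts a new cycle; non-full save sets depend on the latest full."""
    result = []
    for level in levels:
        if level == "full" or not result:
            result.append([level])
        else:
            result[-1].append(level)
    return result

def remove_oldest_cycle(levels):
    """Drop the oldest full and all of its dependent save sets, as the
    Remove Oldest Cycle operation does."""
    return [s for cyc in cycles(levels)[1:] for s in cyc]

history = ["full", "incr", "incr", "full", "incr"]
```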
Overview
If no arguments are specified, the output includes all browsable save sets created
since midnight of the previous day. By default, the fields that are displayed include
the save set name, client name, timestamp, size, backup level, and the name of the
volume containing the save set. If portions of a save set reside on multiple
volumes, there is a line of output for each volume.
Options and arguments are used to define other queries and reports. If the
volname argument is used, the output is restricted to save sets on that volume.
Several common mminfo usage examples are shown on the slide.
Overview
The query option, -q queryspec, enables you to specify a custom query on fields
(attributes) within the media database. The -r reportspec option enables you to
specify which fields to include in the output of matching records.
Queries may use the operators <, >, and = to compare a field to a value.
Commas are used to separate multiple query terms. If a term begins with the
negation operator !, the comparison matches only if the field does not match
the value.
In the slide, the -q queryspec syntax is used to query the database for save sets
named C:\Documents. -r reportspec is used to display the name of the save set
truncated (or blank-padded) to 10 characters, the save set ID, the volume
containing the save set, and the client name.
Notes: You can query a client's snapshot save sets using the mminfo
command. The -q snap option lists all snapshot save sets for a
particular client. There are many volume and save set attributes that
may be used for querying and reporting. All these options are listed
and described in the mminfo(1m) man page and the NetWorker
Command Reference Guide.
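The way -q terms combine can be modeled with a small matcher. This is a simplified illustration of the operator semantics, not mminfo's implementation; real mminfo also understands ranges, flags, and typed values such as sizes and times.

```python
def matches(record, queryspec):
    """Evaluate a comma-separated queryspec against a record dict
    (a simplified model of mminfo -q semantics)."""
    for term in queryspec.split(","):
        negate = term.startswith("!")
        term = term.lstrip("!")
        for op in ("<", ">", "="):
            if op in term:
                field, value = term.split(op, 1)
                break
        else:
            raise ValueError(f"no operator in {term!r}")
        if op == "=":
            result = str(record[field]) == value
        elif op == ">":
            result = float(record[field]) > float(value)
        else:
            result = float(record[field]) < float(value)
        if result == negate:  # a negated term must NOT match
            return False
    return True

# A made-up save set record, mirroring the slide's query example.
record = {"name": r"C:\Documents", "level": "full", "copies": 2}
```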
Overview
The slide lists common mminfo options for querying the media database and
generating reports.
Note: See the mminfo(1m) man page and the NetWorker Command
Reference Guide for examples and further information.
Overview
NetWorker Administration can be used to display volume and save set information
by using the Disk Volumes or Tape Volumes selection in the Media window.
When a volume option is selected in the left pane, a list of all volumes is displayed.
Right-clicking on a volume pops up a context menu that is used for performing
tasks that are associated with volumes, such as displaying all save sets on a
volume and deleting a volume from the media database.
Double-clicking a volume also displays all save sets on the volume. The
information that is displayed is equivalent to that generated by using mminfo –v
volumename.
Overview
NetWorker Administration also provides the ability to query the media database
and display information concerning save sets matching the query.
To perform a query, click Save Sets in the left pane of the Media window. In the
right pane, specify the save set characteristics of those save sets you want
information about. Change to the Save Set List tab to perform the query and report
matching save sets.
In the Query Save Set tab, you can choose to display only those save sets
matching a specific status and type. The default value is All for both Status and
Type. Copies commonly refers to how many times a save set has been cloned. A
save set that has been cloned once has two copies, the original and one clone.
Also, any save set written to an advanced file type device is seen as having two
copies. The drop-down menu in the Copies field enables you to perform
comparisons using the =, > and < operators.
You can specify the maximum backup level of the save set. Since a full backup is
equivalent to a level 0, selecting Full matches only full level backups. To match
client-initiated save sets, All must be selected.
When selecting a range of values for the Save Time field, a calendar is displayed
from which you select the wanted date. A specific time of day can be specified by
manually editing the From and To fields.
Overview
While the command-line utilities in the slide are usually executed on the NetWorker
server, both nsrmm and mmlocate include a -s nw_server option which enables
you to run the command from any NetWorker host.
Overview
Retention that is specified on the backup action is used to set the aging values for
a client’s save sets. If client overrides are enabled on the action, the Retention
policy field on the client is used, if supplied.
You may also see references to a Browse policy on the client resource or Browse
time when looking at save set metadata. The browse policy was used in previous
versions of NetWorker. Beginning with NetWorker 9, NetWorker uses the
Retention value for both the Browse time and the Retention time.
When a save set is backed up, the value for Retention is added to the current date
to determine the save set’s browse time and retention time. These values are
stored in the save set record as the ssbrowse and ssretent attributes, and are used
to determine when the save set changes from one status to another as it ages.
The browse time specifies the date when the save set’s entries are removed from
the client’s CFI, thereby making the save set no longer browsable. The retention
time specifies the date when the save set expires and is no longer required.
Beginning with NetWorker 9, the browse time and the retention time will be the
same.
Save sets are checked for aging automatically once a day when the Server
backup workflow runs or by manually running nsrim. Dependent save sets may
delay the aging of certain save sets. For example, a level Full save set that has
passed its browse time remains browsable (and therefore tracked in the CFI) until
all incremental save sets that depend on the full save set also pass their browse
times. Thus, the aging of save sets may be delayed by up to one cycle period,
where a cycle is defined as the length of time between full backups.
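The date arithmetic behind ssbrowse and ssretent is simple to model. A sketch, with retention expressed in days for simplicity (NetWorker itself accepts any nsr_getdate format):

```python
from datetime import date, timedelta

def aging_times(save_date, retention_days):
    """Add the retention period to the save time; since NetWorker 9 the
    ssbrowse and ssretent attributes get the same value."""
    expiry = save_date + timedelta(days=retention_days)
    return {"ssbrowse": expiry, "ssretent": expiry}

# A save set backed up on 1 Jan 2018 with a 28-day retention.
t = aging_times(date(2018, 1, 1), 28)
```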
Overview
All save sets are tracked in the media database. Each save set record has a status
field which reflects the save set’s aging status. Primary statuses include browsable,
recoverable, and recyclable. A save set may also be assigned a secondary status
of suspect if a read error occurs during a recovery attempt of the save set contents.
A browsable save set has not passed its browse time and is therefore still tracked
in both the media database and a client file index. Both a browsable recovery and
a save set recovery can be performed on the save set.
A recoverable save set has passed its browse time but has not exceeded its
retention time. Because it has passed its browse time, it is no longer tracked in a
client file index. Only a saveset recovery can be performed without rebuilding the
client file index for that saveset.
A recyclable save set has passed both its browse and retention times. A
recyclable save set is treated exactly like a recoverable save set except it will not
keep the volume that it is on from being automatically recycled (relabeled).
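The primary statuses map directly onto the two timestamps. A simplified classifier, ignoring the suspect flag and the dependent-save-set delays described earlier:

```python
from datetime import date

def save_set_status(today, ssbrowse, ssretent):
    """Classify a save set's primary aging status from its timestamps."""
    if today <= ssbrowse:
        return "browsable"      # still tracked in the CFI
    if today <= ssretent:
        return "recoverable"    # tracked in the media database only
    return "recyclable"         # no longer pins its volume
```

Since NetWorker 9 sets browse and retention times equal, the recoverable state mostly arises when the two values differ, for example after manually changing one of them.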
Overview
NetWorker volumes are also tracked in the media database and have one or more
statuses (modes) assigned to them reflecting their age and other conditions. The
slide lists the major volume modes.
When a volume becomes full, it is assigned a status of full and can no longer be
used for backups. A tape volume becomes full when the physical EOM (end of
media) marker is encountered during a save or when a write error results in the
save being directed to another volume.
When all save sets on a volume become recyclable, the status of the volume itself
changes to recyclable. Recyclable volumes may be automatically recycled
(relabeled) by NetWorker in the event that no appendable volumes are available to
satisfy a backup request.
A volume can be manually assigned a status of read only. This keeps extra data
from being written to the volume. Full and recyclable volumes are automatically
given a secondary status of read only.
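The volume-mode rules above can be condensed into a sketch. This is a simplified model; real volumes carry more modes and conditions than shown here.

```python
def volume_mode(save_set_statuses, is_full=False, manual_read_only=False):
    """Derive a volume's modes: recyclable only when every save set on it
    is recyclable; full and recyclable volumes also become read only."""
    modes = set()
    if is_full:
        modes.add("full")
    if save_set_statuses and all(s == "recyclable" for s in save_set_statuses):
        modes.add("recyclable")
    if manual_read_only or modes:
        modes.add("read only")   # secondary status
    return modes
```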
Overview
nsrim handles aging of save set and volume records within the media database,
and is responsible for enforcing retention times for all clients. nsrim also removes
tracking information from the CFI when a save set passes the retention period. The
nsrim command is invoked automatically once a day when the Server backup
workflow runs. However, you can also run nsrim manually from the command line.
The nsrim syntax is: nsrim [-option arg] [-option].
Note: See the nsrim (1m) man page and the NetWorker Command
Reference Guide for more information.
Overview
You can use nsrmm to change an existing save set’s retention time with the -e
retention_time option. Using this option sets the save set ssretent field in the media
database, which is used by nsrim for aging of the save set. Changing an existing
save set’s retention time is useful for extending or shortening the life cycle of a
specific save set. The nsrmm syntax pertaining to retention time is:
nsrmm -e retention_time -S ssid.
You can specify retention_time in any format that is described in the nsr_getdate(3)
man page. The time can be an absolute time such as MM/DD/YY, or a time relative
to the current date, such as “2 Months” or “4 years”.
Changing the retention time for a save set changes the dates for all instances
of the save set. NetWorker uses the retention time value for both the retention
and browse times. This is shown on the slide.
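The relative forms accepted for retention_time can be modeled with a tiny parser. This is a rough approximation (a month is treated as 30 days); the real nsr_getdate grammar is far more general and calendar-aware.

```python
import re
from datetime import date, timedelta

# Approximate day counts; nsr_getdate itself is calendar-aware.
UNITS = {"day": 1, "week": 7, "month": 30, "year": 365}

def relative_date(base, spec):
    """Parse a relative time like '2 Months' or '4 years' into a date."""
    m = re.fullmatch(r"(\d+)\s+(day|week|month|year)s?", spec.strip(), re.I)
    if not m:
        raise ValueError(f"unsupported spec: {spec!r}")
    return base + timedelta(days=int(m.group(1)) * UNITS[m.group(2).lower()])
```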
Overview
You can manually change the status of volumes and save sets by using nsrmm
with the -o mode option. The nsrmm syntax is: nsrmm -o mode volume,
where mode can be any of the modes that are listed in the slide. The volume
argument is the name of the volume whose record you want to change.
If a write error occurs when writing to a volume, the volume mode is changed to full
to avoid trying to write extra data to a volume which is possibly damaged.
However, if the error was caused by the device, using nsrmm with the notfull
argument can be used to make the volume appendable again.
The -S ssid option is used to change the status of specific save sets. A common
use is to reset the status of a suspect save set after determining that the volume
really is not damaged.
Overview
After a backup starts and the NetWorker server determines which pool the save set
should be written to, it is then necessary to determine which volume within that pool
to use. The volume that is used falls in one of the five categories that are listed
below in order of priority. Each of these categories requires the volume to be
available on an appropriate storage node.
Introduction
This lesson covers managing save set and volume records, performing a CFI
consistency check, and restoring NetWorker control data with scanner.
Overview
nsrmm can be used to remove information from CFIs and the media database.
Combining the -d and -P options enables you to remove CFI entries of individual
save sets or of all save sets on a volume. Removal of CFI records is commonly
referred to as purging.
Using the -d option without -P removes save set and/or volume records from the
media database.
Overview
Using the -d option with a volume name removes the references to the volume.
This example deletes the volume L00014L5.
Overview
You can also manage save set and volume records from the NetWorker
Administration Media window. Choose either Disk Volumes or Tape Volumes in the
left pane to display a list of volumes. Then, right-click a volume to bring up a
context menu. From the context menu, you can perform the same set of media
database management tasks as nsrmm.
Recycle - Allows you to set a volume to manual or automatic recycle. This is the
same as: nsrmm -o { manual | notmanual }.
Delete - Allows you to purge CFI entries of all save sets on the volume. You can
also remove the volume record and all the corresponding save set records. This is
the same as nsrmm -dP vol.
Overview
Volume records in the media database have a location field that you can use to
track the volume’s location. The location can be a string of up to 64 characters.
This field is useful for tracking volumes which have been removed from the jukebox
and for volumes moved offsite.
The location argument specifies what to set the location to or which volumes to
manage based on location. The default (no options/arguments) lists all volumes
and their location values.
Overview
You can also specify the physical location of the volume for reference purposes in
the NetWorker Administration interface. Select the Tape volume from the list of
volumes. Right-click the volume in the right pane and select Set Location. The Set
Location dialog box is displayed. Type the description for the physical location of
the volume and click OK.
In the example on the slide, the selected tape volume is L00004L5 and its
location is set to “Moved to cabinet 3”.
Overview
For save set management, options from the Volume Save Sets window include
Change Expiration and Change Status.
Choices for Change Expiration include to change the retention time of the
selected save set, keep the save set indefinitely or to expire the save set now. You
can also choose to apply the selected option to all clones of the selected save set.
Choices for changing the save set status are to change the status of the selected
save set to Suspect or Normal.
Overview
Use nsrck to check, recover, or remove a client file index. nsrck also cross-checks
the media database with the contents of each CFI. Each time the NetWorker server
starts, it runs nsrck -L 1. The nsrck syntax is: nsrck [-L level] [-options] [clientname].
The slide shows the seven levels of consistency checking that nsrck can perform.
Each level incorporates the actions of the lower levels. Level 7 is different from all
other levels in that it is used only for recovery of a CFI.
Overview
Scanner can perform numerous functions. Before running scanner, you must load a
volume into a NetWorker device. You then provide the pathname of the device as
an argument to scanner, which is executed on the storage node controlling the
device.
With no options, scanner reads the entire volume and displays a list of save sets
found. Information that is displayed includes save set name, SSID, and date and
time of the backup. Any media errors that occur are also reported.
The -m option causes scanner to read the entire volume, creating save set records
in the media database for any save sets not currently tracked. If the media
database does not have a volume record for the volume being scanned, a volume
record is created.
When the -i option is used, scanner populates the media database with volume
and save set information, just like with -m, but also populates the appropriate client
file indexes with file information that is read from each save set on the volume. This
operation can be time consuming if there are many save sets with many files.
When used with the -i option, -S ssid is used to restrict which save set(s) the
operation is performed on. For example, to populate a CFI with the list of files from
save sets 1289372 and 1236738, located on a volume in device \\.\Tape1, the
command would be: scanner -i -S 1289372 -S 1236738 \\.\Tape1
To recover the entire media database or an entire CFI, use the nsrdr command.
This is discussed later in this course in the Recovering NetWorker and NMC
Servers module.
Scanner Examples (1 of 2)
Overview
A recent backup of a save set is not needed because the data was corrupted
before the backup took place. It was written to a file device and needs to be deleted
to free up space. mminfo is used to determine the SSID of the save set.
nsrmm is used to delete the save set record. Unfortunately, the administrator
specifies the wrong SSID. mminfo is executed again just to verify that the save set
is indeed gone. It is now necessary to rebuild the deleted save set record.
Scanner Examples (2 of 2)
Overview
nsrmm, with no arguments, is used to locate the volume containing the save set.
From the output, it is determined that the volume is already loaded in device
AFTD_1. If the volume were in an autochanger, nsrjb would be used instead of
nsrmm.
scanner is used to recreate the media database save set record. The output is
redirected because, when the -m option is used, scanner also generates a
recover stream that is not needed in this situation.
The administrator runs mminfo to see if the save set is once again being tracked
and discovers that although the save set record is back, the save set is not
browsable. The save set needs to be returned to its original status, which was
browsable.
The administrator can run scanner again with the -i option to populate the client file
index.
Summary
Introduction
Introduction
This lesson covers an introduction to the three types of NetWorker recoveries, how
to use the various NetWorker recovery utilities, and volume and storage node
selection for recoveries.
Browsable recoveries
Save Set recoveries
Directed recoveries
NetWorker Recoveries
Overview
A recovery restores data to its original state at a specific point in time. NetWorker is
flexible in how recoveries are performed while maintaining the security necessary
to prevent recovery of data by unauthorized persons. NetWorker supports restoring
one or more individual files, directories, or file systems from NetWorker client
backups. The three types of recoveries discussed in this module are Browsable,
Save Set, and Directed.
Recoveries can be categorized by the method that is used to recover the data. In a
Browsable Recovery, the administrator or user browses and selects the set of
files and directories to be recovered using interfaces that require information from
the client file index.
A Directed Recovery is any recovery in which data that was backed up from one
computer is recovered to another.
Overview
Browsable Recoveries are the most flexible and easy to use method of recovering
data. Consider using a browsable recovery when you want to recover only the files
that you mark for recovery and no other files. Also, when you do not know the exact
name of a file, the file can be located by browsing through the file system. When
recovering an entire directory or file system, a point-in-time recovery is
automatically performed. This restores the directory or file system to the way it
looked as of the most recent backup. Because of the point-in-time feature,
browsable recoveries are useful when the most recent backup is not a full backup
and files have been deleted or renamed since the full backup. The recovery will not
restore a file that has been deleted and will recover a renamed file only with its
current name.
A Save Set Recovery can be performed at any time for any save set. By default,
an entire save set is recovered. However, you can recover individual files and
directories. A save set recovery is commonly done:
1. When the last backup was a full backup and you want to recover the entire save
set.
2. When many files are being recovered from a single save set. If a save set has
millions of files, the process of marking each file for recovery during a
browsable recovery can take a considerable amount of time. A save set
recovery does not require marking each file and thus can lead to faster file
recovery.
Overview
In any recovery, there are three client roles - administering client, source client, and
destination client - that are performed by one or more NetWorker hosts. Following
is a description of the three client roles in a recovery:
Source client: The NetWorker client from which the data being recovered was
originally backed up.
Destination client: The NetWorker client to which the data is being recovered.
Administering client: The NetWorker client (local host) performing the recovery.
The most common recovery is one in which a single NetWorker client performs all
three roles. For example, you might be logged in on hostA (administering client),
recovering data that was previously backed up from hostA (source client) to its
original location on hostA (destination client).
Overview
When a single client performs all three client roles in a recovery, there are no
security issues; data on a client can always be recovered back to that client.
The user on the client must belong to a NetWorker user group that has the
Recover Local Data privilege (members of the NetWorker Administrators and
Users user groups automatically have this privilege). The user must also have
operating system ownership of the files being recovered and write privileges
to the directories where the data is recovered.
Overview
With the Recover wizard, you can schedule the recovery to be performed
automatically later. The Recover wizard enables you to perform most NetWorker
recoveries through NetWorker Administration without having to log in to the client
or any other application. The Recover wizard is the preferred way of performing a
recovery, however, the other utilities are available if needed.
Recoveries also can be performed using the NetWorker User graphical user
interface on the NetWorker client. Select NetWorker User from Recover >
NetWorker User.
Also, recoveries may be performed from the command line by using the recover
command on any NetWorker client. This option is available on all platforms.
Overview
Overview
recover(1m) syntax:
The recover command automatically assumes that the source client is the same
as the administering client. To specify a different source client, use the -c option. If
the administering client is configured as a NetWorker client in multiple data zones,
you can use the -s option to specify the NetWorker server that controls the
recovery.
The pathname argument is either the path to set as the initial working directory for
browsing (interactive mode) or, if the -a option is used (noninteractive mode), the
path(s) to recover. The default initial working directory is the current directory.
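As a sketch of the options described above, with hypothetical host names (nwserver, clientA) standing in for a real server and source client:

```shell
# Interactive: browse clientA's backups via the server nwserver,
# starting in /etc as the initial working directory.
recover -s nwserver -c clientA /etc

# Noninteractive (-a): recover the listed path directly.
recover -s nwserver -c clientA -a /etc/hosts
```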
Overview
Rename the file being recovered: The existing file is untouched and the file
being recovered is recovered to the same folder, but with a different file name.
By default, a tilde (~) is placed in front of the original name, but when prompted,
you can specify any name that you like. If another file named ~filename already
exists, an extra tilde is prepended to the new name; as many tildes are added
as necessary to make the filename unique.
Discard the file being recovered: The existing file is untouched, and the
recovered file is discarded.
Overwrite the existing file: The existing file is deleted and replaced by the
recovered file.
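The tilde-renaming rule above can be sketched as a small shell function. This is illustrative only; the unique_name function and the /tmp/demo paths are hypothetical, not part of NetWorker:

```shell
# Keep prepending '~' until the candidate name is unique in the
# target directory, mirroring the renaming rule described above.
unique_name() {
  # $1 = directory, $2 = original file name
  candidate="~$2"
  while [ -e "$1/$candidate" ]; do
    candidate="~$candidate"
  done
  printf '%s\n' "$candidate"
}

# Demo: both report.txt and ~report.txt already exist, so two
# tildes are needed.
mkdir -p /tmp/demo
touch /tmp/demo/report.txt '/tmp/demo/~report.txt'
unique_name /tmp/demo report.txt   # prints: ~~report.txt
```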
Alternatively, you can choose to relocate the recovered data to a different directory.
The folder that you specify in the Relocate recovered data to field will be created if
it does not exist. Subfolders are created as necessary to retain the folder hierarchy
that existed when the files were backed up. There may be times when you want to
recover a set of files to a location other than the folder from which they were
backed up. Relocating recovered files is useful for comparing an existing set of
files with the same set of files that were previously backed up.
Overview
After making a selection of the data to be recovered, users can view a list of the
volumes that are needed to recover the data marked for recovery. If a volume is
mounted, the device on which it is mounted is also displayed.
Recovery Status
Overview
When using NetWorker User or NetWorker Recover, you can monitor the recovery
in the Status window, which opens as soon as the recovery begins.
Overview
Where there is potentially more than one volume for recovery, the highest priority is
given to the volume containing a complete, non-suspect save set. If volumes still
have equal priority, priority is given to the volume that is mounted. If all the
volumes are mounted, priority is given according to media type, with AFTD having
top priority. Next in priority is location, with highest priority given to volumes in a
library.
Note: Save set status can be changed with options available in the
NetWorker Administration Media window and with the nsrmm
command.
Overview
When a recovery is initiated, the NetWorker server selects the storage node to read
one or more volumes based on the following prioritized criteria:
The Read hostname attribute in the Configuration tab of the jukebox resource
specifies the storage node to use for recoveries and cloning if a client’s preferred
storage nodes are not available. The default value of this attribute is the hostname
of the storage node controlling the first drive in the library.
Overview
On the first screen of the Recover wizard, select Traditional NetWorker Client
Recovery for Recovery Type to perform a file system recovery.
Overview
Select the source host, destination host, and the recovery type. For a directed
recovery, before starting the Recover wizard ensure that the destination host is a
client of the NetWorker server and is running NetWorker 8.1 or later software. For a
directed recovery, the Remote Access attribute of the source client must contain
the host name of the destination client.
Overview
Next, select the files and folders to recover. You can select the items to recover by
file/folder name or by save set. On the next wizard screen, you have the option to
restore to the original path or specify a new destination path. Also, you select how
to handle duplicate file conditions in the recovery.
Overview
The Obtain the Volume Information window enables you to determine how the
recovery wizard selects the volumes that will be used for the recovery. You can
choose to either enable NetWorker to select the volume or to select the volumes to
be used.
After providing a name for the recovery, you can choose to either start the recovery
now or schedule the recovery to start later.
Overview
You can monitor the recovery results in the Recover wizard's Check the Recovery
Results window until the recovery completes. NetWorker also stores the recovery
log file in the …nsr\logs\recover directory.
Introduction
Overview
A browsable recovery can only be performed on a browsable save set. Any user
can perform a browsable recovery. However, only those files for which the user has
read permission can be recovered. During a recovery, the user selects the set of
files and directories to be recovered. When recovering an entire directory or file
system, a point-in-time recovery is automatically performed. This restores the
directory or file system to the way it looked as of the most recent backup.
Overview
If the recover program determines that multiple save sets (a full and its dependent
save sets) are required for the recovery, it uses the CFI to determine if any files
were deleted in the time between the most recent full backup and the most recent
non-full backup. These deleted files are not recovered. The CFI is used to
determine if a file was renamed since the most recent full backup. If it was, the file
is recovered only with its most recent name.
Overview
A file selection recovery method, or browsable recovery, inspects the client file
index that NetWorker creates for the source host to gather information about
backups. When the recovery process reviews entries in the client file index, you
can browse the backup data and select the files and directories to recover.
Overview
It is possible to recover a version of a file other than the most recent version:
Overview
The set of files that are displayed within a recovery utility is determined by the
recovery browse time. By default, the browse time is the current date and time.
Based on the CFI contents from the most recent full backup and subsequent level
and incremental backups, NetWorker can determine what the directory structure on
disk looks like as of the most recent backup. That directory structure is what you
are presented with in the recovery interface. If you mark and recover all files that
are displayed, your computer is restored to how it was at the time of the last
backup.
Overview
You can change the browse time to a date in the past, causing the NetWorker
recovery interface to display (and recover) only files backed up prior to the browse
time. Marking a file for recovery automatically selects the most recent version of the
file backed up prior to the browse time. You might want to change the browse time
if you need to:
Changing the browse time is an option in all NetWorker recovery interfaces. In the
NetWorker Recover wizard, the option is available from the Versions menu; select
Change Browse Time to change the browse time.
Note: If you need to recover files from different points in time, either
use the Versions option for each file or perform multiple recoveries
with different browse times.
Searching a CFI
Overview
The Search feature enables you to locate a file or directory by typing its name. This
feature is useful in situations where:
Search is an option in the Select the Data to Recover window. When specifying
the file or directory to locate, the wildcards ‘*’ (match zero or more occurrences of
any character) and ‘?’ (match any one character) are allowed. The search is not
case-sensitive. The search begins with the selected folder or specified directory
and descends into its subfolders. Files and directories matching the search criteria
are displayed and can be selected for recovery.
Overview
With recover, the default method of recovery is by file selection of the latest version
of a file. The add subcommand is used to add the current version of the file to the
recovery list when using the interactive mode of recover.
In this recover example, the file *.zip is selected for recovery. Then, the versions
subcommand is used to determine which versions of the file have been backed
up. To recover an earlier version of the file, the changetime subcommand is used
to change the browse time to a time after the next-to-most-recent version and
before the most recent version of the file. When the add subcommand is run again,
the next-to-most-recent version is added to the recovery list.
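Such an interactive session might look like the following sketch (the prompt, the relative time, and the file name are illustrative):

```shell
recover> add *.zip
recover> versions *.zip
recover> changetime "2 days ago"
recover> add *.zip
recover> recover
```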
Introduction
This lesson covers save set recoveries including recovering to a specific point in
time and using the features of the NetWorker interfaces to perform save set
recoveries.
Overview
A save set recovery can be performed for any save set. System administrator
privileges are required to perform a save set recovery.
One or more save sets are specified during the recovery. Although the default
behavior is that each save set is entirely recovered, you can specify a set of
individual files or directories to be recovered instead.
Since a save set recovery does not use CFI information, it does not perform a
point-in-time recovery.
Overview
Let us assume that you want to perform a save set recovery of a large directory to
the way it looked after the incremental backup on Day 6. The following steps must
be performed:
If no files were deleted or renamed between Day 1 and Day 6, the file system is
now fully and accurately recovered. However, if deletions occurred, files which did
not exist on Day 6 were recovered in the Day 1 or Day 5 recoveries. Also, if a file
was renamed, it will now exist under both its original and new names. For the
recovered file system to accurately reflect the Day 6 file system, you must
determine which deletions and renames occurred and manually perform them
again.
Overview
The number of full and incremental save sets that are needed for recovery depends
on the schedule (backup levels) used immediately prior to the point in time you
want to recover the data.
Identify the save sets that you need for the save set recovery:
In the example shown on the slide, a recovery is performed after Day 7’s backup.
To perform the recovery, you need the Full save set from Day 1, the cumulative
incremental save set from Day 4, and the incremental save sets from Days 5, 6,
and 7.
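To see which save sets exist and at what backup levels, a media database query along these lines can be used (the client name hostA and save set name /data are hypothetical):

```shell
# List backup time, level, and SSID for the /data save sets of hostA
# so the required full/incremental chain can be identified.
mminfo -av -q 'client=hostA,name=/data' -r 'savetime,level,ssid'
```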
Overview
A save set recovery does not reference the client file index where deleting and
renaming of files are recorded. This leads to the following behavior:
Directories and files that are deleted during the backup cycle are recovered.
Directories and files that are renamed during the backup cycle are recovered
multiple times, once for each name by which they were known.
When you have recovered the last save set required to restore your data to a
specific point in time, you may need to perform extra file handling. This could
include deleting files and directories that were deleted during the backup cycle and
renaming files that were renamed during the backup cycle.
Overview
As with browsable recoveries, you can perform searches and view properties,
versions, and volumes for selected items.
Overview
If you want to recover a subset of a save set, select Advanced Options and
specify the path of the directory or file to be recovered in the Extra recover
options attribute. Multiple items can be specified, separated by a space.
In this example, the C:\Documents save set is selected in the Select the Data to
Recover window. However, we only want to recover the C:\Documents\Morefiles
directory from that save set. When the recovery runs, only the contents of the
specified directory are recovered.
Overview
To perform a save set recovery with the recover command, use the -S option
followed by the SSID of the save set. Multiple -S options can be used in the same
command. A save set recovery using the command line is always noninteractive.
Note: Before performing the recovery, determine the SSID of the save
set to be recovered using NetWorker Administration or the mminfo
command. See the NetWorker Command Reference Guide for more
information including a description of the command options and
subcommands.
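A sketch of such a command, with a hypothetical server name and SSIDs:

```shell
# Recover two complete save sets by SSID; save set recoveries from
# the command line run noninteractively.
recover -s nwserver -S 1289372 -S 1236738

# Recover only a subdirectory of a save set by appending its path.
recover -s nwserver -S 1289372 /data/reports
```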
Introduction
This lesson covers the procedures, interfaces, and requirements for performing
directed recoveries in NetWorker.
Directed Recovery
Overview
A directed recovery is defined as a recovery in which the data that was backed up
from one computer is recovered to another.
Overview
The Remote access attribute in the source client’s client resource must contain the
destination client if the user@destination client does not have the Remote Access
All Clients privilege.
The destination client must enable remote execution requests from the
administering client. Remote execution is performed by nsrexecd. Remote
execution privileges are controlled by the following methods:
1. The /nsr/res/servers file on the destination client lists the hosts authorized to
make remote execution requests.
2. nsrexecd on the destination client can use the -s option to specify a host
authorized to make remote execution requests. If this option is used, the
/nsr/res/servers file is ignored.
3. Optionally, the Disable directed recover attribute can be set to yes in a
NetWorker client's resource database, /nsr/res/nsrladb. This disallows directed
recoveries from any remote host (nsradmin -d /nsr/res/nsrladb).
Overview
The source and destination clients must be of the same platform type. You can
perform directed recoveries between UNIX NetWorker clients and between
Windows NetWorker clients. You cannot recover data that was backed up from
UNIX clients to non-UNIX clients, and vice versa. The administering host may be a
different platform type from the other clients.
Also, you may not be able to recover files between dissimilar file system formats.
For example, you cannot recover data from an NTFS file system on a Windows
client to a FAT file system because of the way file permissions are handled.
However, files from a FAT file system can be recovered to an NTFS file system
because there are no permissions in a FAT file system; NTFS gives recovered files
the permissions of the directory they are recovered to.
Overview
After you select the source and destination clients, the contents of the source
client’s CFI is displayed, enabling you to browse and mark files for recovery in the
exact same manner as in a normal browsable recovery. Upon initiating the actual
recovery, the administering client contacts nsrexecd on the destination client and
requests that it run recover with the list of files provided.
Overview
Only clients for which nw.emc.edu has remote access privileges are displayed in
the client selection windows.
Overview
The -c client option specifies the source client, and the -R client option specifies
the destination client. The required -i [YNR] option specifies what the destination
client should do in response to file naming conflicts:
-iR renames the file when a conflict occurs; .R is appended to each recovered file
name in UNIX/Linux; ~ is placed in front of file name in Windows.
To perform a directed save set recovery using recover, run this command format
from the source client:
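A sketch with hypothetical host names (clientA as the source, clientB as the destination, nwserver as the NetWorker server) and a hypothetical SSID:

```shell
# Directed save set recovery: read clientA's save set 1289372 and
# write the files to clientB, renaming on file conflicts (-iR).
recover -s nwserver -c clientA -R clientB -iR -S 1289372
```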
Introduction
Recovery Types
Overview
You can recover individual files or complete file systems from snapshot save sets.
To restore data from snapshots that are cloned to conventional storage media, use
the Recover wizard or other methods as you would for any conventional NetWorker
backup. There are three recovery types available for a snapshot backup. They are
snapshot, rollover, and rollback recoveries.
Overview
You can use the Recover wizard to restore file system data from a snapshot that is
stored on a supported storage array. From the wizard’s Available Recovery
Types, select Filesystem (Snapshot) or another supported application type that is
installed on the client. The Smart Snap option enables you to specify array LUN
World Wide Names (WWNs).
The wizard detects all available snapshots and save sets, and displays choices and
visibilities that are related to recovering the data. The wizard provides you with the
ability to mount a save set and browse the save set to recover individual items from
the snapshot or recover the full save set.
This is a brief overview of recovering data from snapshots with NetWorker. For
more detailed information, please see the NetWorker Snapshot Management
Integration Guide and the Snapshot Management for NAS Devices Integration
Guide. The NetWorker Command Reference Guide and man pages provide
detailed information about nsrsnapadmin and nsrsnap_recover commands.
Summary
Introduction
Performing Cloning
Introduction
This lesson covers the procedures for performing cloning in the NetWorker
environment including configuring automatic, or scheduled, and manual clone
operations.
Overview
NetWorker provides the ability to further manage and protect save sets and
volumes by using cloning and staging. Cloning copies save sets to another volume
belonging to a clone pool while staging moves save sets to another volume.
Cloning Overview
Overview
Cloning enables you to create identical copies of save sets to be used if there is
damage to the original media or for offsite storage.
Clone operations use the Recover Pipe to Save (RPS) method to clone data. With
this method, the existing NetWorker backup and recover framework is used to
replicate the data from source to destination. Clone performs a save set recover
operation on the source and stores data in a buffer. Then, a save thread consumes
the data and performs a save operation onto the destination. You can clone save
sets either manually or automatically. nsrclone, running on the NetWorker server,
initiates the clone operation and spawns nsrrecopy on the source storage node.
Data movement is performed by the nsrrecopy binary on the source storage node.
There are two threads for nsrrecopy: one for read and one for write. One nsrrecopy
is spawned per volume and multiple volumes of save sets can be cloned in parallel.
Two devices are required for cloning. Save sets are always cloned in their entirety.
Thus, if a save set begins on one volume and continues (spans) onto one or more
additional volumes, each of the source volumes is mounted and read during the
clone operation.
Conversely, if the destination volume becomes full during a clone operation,
another volume from the same pool must be made available for the cloning to
continue. Concurrent clone, backup, and recovery operations can be performed on
the same device simultaneously when using advanced file type or Data Domain
devices.
No volume may contain more than one instance (copy) of a save set. This
eliminates the possibility of losing multiple instances of a save set if a single
volume becomes damaged. Since backup data cannot be mixed with clone data on
a volume, it is required that the destination volume belong to a clone pool.
Clone Workflows
Overview
There are two ways to clone save sets using policies and workflows.
First, you can configure cloning to occur in the same workflow as a backup
action (a backup and clone workflow). In this configuration, you create a workflow
with a backup action and a clone action. The clone action can occur after the
backup action or concurrently with it. There can be a single clone action or
multiple clone actions.
Second, you can configure cloning to occur in a workflow separate from the backup
action (a clone-only workflow). In this configuration, you create a group for save set
selection and specify that group and a clone action in the clone-only workflow.
There can be multiple clone actions in the workflow. This is useful if you want the
clone operations to occur at different times from backup operations.
Overview
Overview
The slide shows the workflow properties for our backup and clone workflow
example. Here you can see that the backup action is followed by a clone action.
Overview
When creating a clone action that is a member of a backup and clone workflow,
you specify the action name and action type of Clone for Action Information. For
Clone Options, specify the destination storage node, the destination pool, which is
a clone-type pool, and retention for the clone save sets. You can choose to delete
the source save sets after the clone operation completes. You can also filter the
input data to the clone by time, save set, clients, and backup level.
Clone-Only Workflows
Overview
In the example shown here, we have created two clone-only workflows in the Clone
Only policy. To configure a clone-only workflow, you first create a save set group
where you specify either the selection criteria or the IDs of the save sets to be
cloned. Then, you associate the group with a workflow that contains a clone action.
Overview
There are two types of protection groups that can be used to clone the save sets in
clone-only workflows. With these groups, you specify the save sets to be cloned.
The type of protection group that you use depends on how you intend to use the
workflow.
Save Set Query group: Use a Save Set Query group in clone-only workflows
where you want to clone save sets on an ongoing basis, based on save set criteria.
Save Set ID List group: Use a Save Set ID List group in clone-only workflows
where you want to clone a specific list of save sets. Specify the save set ID/clone
ID (ssid/cloneid) identifiers.
Overview
The slide shows the workflow properties for the Clone with List of Save Sets
clone-only workflow example. Here you can see that we have associated this
workflow with the Save set group. There is only one clone action in the workflow.
When the workflow runs, the save set specified in the protection group will be
cloned.
Overview
Overview
When the -S option is used, a list of save set IDs must be specified. If the -S
option is not used, arguments following any options must be NetWorker volume
names. nsrclone(1m) syntax: nsrclone [options] -S ssid ... | volume ...
where ssid is a save set to clone, volume is a volume containing save sets to clone.
Note that ssid/cloneid may also be used to specify which save set with multiple
copies to use as a source. Additional information including a full list of the
command options can be found in the NetWorker Command Reference Guide, or
the NetWorker Cloning Integration Guide.
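For example (the pool name and the ssid and cloneid values are hypothetical):

```shell
# Clone two save sets to the clone pool "Offsite Clone".
nsrclone -b 'Offsite Clone' -S 1289372 1236738

# When a save set has multiple copies, disambiguate which copy to
# read by specifying ssid/cloneid.
nsrclone -b 'Offsite Clone' -S 1289372/1536094437
```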
Overview
Once the clone operation is complete, validate that the save sets are cloned. The
save sets now are available on two volumes.
Overview
When cloning a volume, it is not a byte-by-byte copy. Only save sets that begin on
the volume are cloned. If a save set begins on the volume and spans to one or
more extra volumes, each of those volumes will be mounted and read. Thus, to
clone a volume really means to clone, in their entirety, all save sets beginning on
the volume.
Multiple volumes can be specified on the command line. The -f option of the
nsrclone command can be used to specify a file (or standard input) containing a list
of volumes to clone. When using an input file, each volume must be on a line by
itself.
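For example (the volume names and pool are hypothetical):

```shell
# Clone all save sets beginning on the listed volumes; the input
# file contains one volume name per line.
printf '%s\n' vol001 vol002 > /tmp/volumes.txt
nsrclone -b 'Offsite Clone' -f /tmp/volumes.txt
```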
Note: The first flag that is associated with a save set indicates which
part of the save set is stored on a volume. This flag can be displayed
with the mminfo -v command and is also displayed when viewing the
save sets for a volume in the Volume Save Sets window in the
NetWorker Administration Media tab. Values for the first flag are:
c: Save set is contained on this volume.
h: Save set spans volumes and the head is contained on this
volume.
m: Save set spans volumes and a middle section is contained on
this volume.
t: The tail section of a spanning save set is contained on this
volume.
Overview
Examples:
Clone all save sets backed up since 1:00 a.m. this morning:
nsrclone -S -t "01:00"
Clone all save sets backed up in the last 24 hours with backup level
full and group Default (now is a valid nsr_getdate format):
nsrclone -S -e now -l full -g Default
Clone all save sets backed up between 9:00 p.m. yesterday and 8:00
a.m. this morning:
nsrclone -S -t "yesterday 21:00" -e "08:00"
Overview
There may be times when you want to run a clone action at a different time, rather
than directly after a backup action. For example, you want the backup action to run
at 6 P.M. and the following clone action to run during the day at 6 A.M. Prior to the
NetWorker 9.1 release, using the policy framework, the solution was to run two
separate workflows: one workflow containing the backup action to start at 6 P.M.
and a second workflow to run the clone action starting at 6 A.M. This solution had
its drawbacks because it was difficult to match up which save sets were cloned.
There is an advanced option on the action that enables you to specify a start time
for that specific action without using an extra workflow.
Overview
By default, the action start time is not used. You can set the start time of an action
using the Action wizard in NetWorker Administration. The start time can be a
specific, absolute time or a period relative to the start of the workflow; the action
then starts at the specified time, or later if the previous action in the workflow has
not yet completed.
Changing Retention
Overview
Each instance of a save set has its own browse and retention time, which is
tracked in the save set record of the media database. Browse and retention times
for clone data can be extended beyond those of the original save set, enabling
browsing and recovery of clone data after the original save sets have expired.
You can specify a retention policy value for the clone save set that differs from the
value that is defined for the original save set. When the retention policy differs for
the original and clone save set, you can expire the original save set and reclaim the
space on the source AFTD but maintain the data on a clone volume for future
recoveries.
If the clone instance is written to a pool having a retention policy, the retention time
of that save set instance is determined by the pool's retention policy instead of the
client's retention policy. A different clone retention time can also be set using the -y
retent_time option with nsrclone and with the nsrmm -e command. Setting the
clone's retention to a longer period than the client's retention enables the clone to
remain recoverable even after the original backup is no longer retained. Retention
that is specified from the command line overrides the retention policy for the clone
pool. The browse period for a clone can be extended with the -w option of nsrclone
when creating a clone save set. The browse period is left unchanged if the save
set's browse date is later or if the new time has already passed. This option
requires the -y retention option and must not be greater than the retention time.
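For example, assuming a hypothetical clone pool and SSID:

```shell
# Clone save set 1289372 with a 12-month retention (-y) and a
# 3-month browse period (-w); the browse period must not exceed
# the retention time.
nsrclone -b 'Offsite Clone' -y '12 months' -w '3 months' -S 1289372
```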
Cloning to Cloud
Overview
With the NetWorker Cloud Backup Option, copies of backup data can be stored on
internet-based storage as an alternative to sending tapes offsite. This provides a
tape-less offsite storage solution, eliminating the complex requirements of
managing tapes.
Clone-Controlled Replication
Overview
As with other NetWorker devices, Data Domain device types can also be used to
perform clone operations. Single save sets or the entire volume of a Data Domain
device may be a source or target of cloning. You can also clone from a Data
Domain device to tape or to any other device type.
Data that is cloned from one Data Domain device to a target Data Domain device,
typically at a remote location, retains its deduplication format and is known as
clone-controlled replication (CCR) or as an optimized clone.
The clone is created quickly and uses low bandwidth and low storage capacity. A
clone that is created in this format may be used for data recovery or to create
further copies, for example, to traditional disk or tape storage. This method results
in minimal impact on production or primary backup and recovery operations.
Overview
CCR cloning in NetWorker employs logic to group save sets for cloning based on a
threshold value. At a high level, the grouping works as follows: First, an estimate of
the overhead for the save sets is determined. This is the amount of time needed to
process the save sets, including both computational and data transfer overhead.
Then, if the total save set overhead is small (less than max_threads * threshold),
the initial parallelism is increased so the job finishes within a short period.
If the total save set overhead is large (greater than max_threads * threshold), the
default initial parallelism is used.
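As an illustration only (this is not NetWorker's actual code), the grouping decision reduces to comparing the estimated overhead against max_threads * threshold; all values below are hypothetical:

```shell
# Hypothetical values for the CCR grouping decision.
max_threads=4
threshold=60         # assumed per-thread overhead threshold, in seconds
total_overhead=150   # estimated overhead for all save sets, in seconds

# Small total overhead (< max_threads * threshold): raise initial
# parallelism so the clone job finishes quickly; otherwise use default.
if [ "$total_overhead" -lt $((max_threads * threshold)) ]; then
    echo "increase initial parallelism"
else
    echo "use default initial parallelism"
fi
```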
Overview
A target Data Domain device for CCR is labeled into a backup clone pool.
Introduction
This lesson covers the procedures for configuring automatic and manual staging of
data in NetWorker.
Overview
Like cloning, staging requires two devices, one or more source volumes, and one
or more destination volumes.
When a save set is staged, it is cloned, resulting in an extra instance (copy) of the
save set being tracked in the media database save set record. Upon successful
completion of the clone operation, the information pertaining to the original instance
(copy) of the save set is removed from the save set record.
If the save set being staged is on tape, it remains on the tape until the tape is
relabeled. If the save set being staged is on a file or adv_file type device, it is
immediately deleted from the device/volume (directory).
Staging is often used to move save sets from file and adv_file devices to long-term
media such as tape. This enables the most recent backups to be written to and
recovered from disk, and then moved to tape to free space for subsequent
backups. Staging is also used to remove nonrecyclable save sets from an
otherwise recyclable volume.
Overview
nsrstage is the command-line utility that is used to stage save sets. Key nsrstage
options:
-m is a required option to stage (move) save sets, and -S ssid specifies which save
set(s) to stage. The optional /cloneid is for save sets with more than one instance
(copy), to identify the instance of the save set to stage. If an instance is not
specified, all instances except for the staged copy are deleted from the media
database.
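A minimal usage sketch, assuming a hypothetical save set ID and the Default pool (verify the syntax against the nsrstage man page):

```shell
# Stage save set 4221033416 (all instances) to volumes in the
# "Default" pool; -m (move) is required:
nsrstage -m -b Default -S 4221033416

# Stage one specific instance, identified as ssid/cloneid:
nsrstage -m -b Default -S 4221033416/1362517420
```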
Overview
A NetWorker stage resource is used to monitor selected file and adv_file type
devices and to automatically stage save sets from the device’s volume to other
media when the volume becomes too full.
Automatic save set staging is designed to move data from file/adv_file type devices
to tape. Staging enables you to perform backups to disk, potentially maximizing
backup performance, and later move the save sets to tape.
Staging prevents the file/adv_file type device from becoming full by periodically
checking the following:
- How long each save set has been on the file type device: save sets are staged after a specified number of days or hours, regardless of how full the volume (file system) is.
- The percentage fullness of the file system on which the file/adv_file type device directory resides: save sets are staged when the file system reaches a certain percentage of utilization (the high-water mark), regardless of a save set's age. Once staging begins, it continues until the file system utilization has decreased to the specified low-water mark.
Overview
A NetWorker stage resource is used to monitor and manage selected disk type
devices. There is one preconfigured stage resource, default stage, having the
default attribute values shown in the slide.
Overview
The Operations tab of the stage resource enables you to perform manual staging.
After selecting and performing any of the operations, the Start now attribute is
returned to a null value.
Select Check file system to perform an immediate check of the fullness of one or
more file systems and determine whether the high-water mark has been reached,
thereby requiring automatic staging.
After you select Stage all save sets and click OK, all save sets residing on all
devices that are managed by the stage resource are staged.
Summary
Introduction
This lesson introduces the various types of NetWorker security features, including
access control, secure communications, logs and audit features, and data security.
How to use encryption for backup data is examined in more detail here.
Overview
NetWorker provides logs that record the sequence of activities for the NetWorker
server, NetWorker Management Console server, and each NetWorker client.
Resource update logging provides for the tracking of all resource changes made on
a NetWorker server. This information is useful for accountability where there are
multiple NetWorker administrators. It is useful for security in the event of a system
intrusion and for general auditing of modifications. Auditable security events include
authentication attempts, privilege checks, and resource creation and deletion.
Multiple systems can send their audit data to the same audit log server, thus
providing centralized audit capabilities.
These security features are reviewed throughout the lessons in this module.
Access Control
Overview
When users log in to the NetWorker Management Console server, the credentials
of the user are authenticated using the NetWorker Authentication Service.
NetWorker Authentication Service, or AuthC, provides token-based authentication
for NMC and CLI users.
Authenticated users are granted privileges in NMC by using specific NMC roles.
Users with appropriate permissions are granted access to NetWorker
Administration for individual NetWorker servers through NMC.
In the next lessons of this module, NetWorker authentication and authorization are
examined in detail.
Overview
NetWorker hosts and daemons use the nsrauth GSS mechanism to authenticate
components and users, and to verify hosts. The nsrauth authentication
mechanism is enabled by default and is based on the Secure Sockets Layer (SSL)
protocol, which the OpenSSL library provides. Each NetWorker host has an
nsrexecd service, which provides authentication services. Each nsrexecd has its
own private key and self-signed certificate for authentication. nsrexecd generates
the private key when it starts up, or a key can be loaded from a file. The private key
is used to generate the corresponding self-signed certificate. GSS is required for
the following NetWorker functionality: the client configuration wizard, file system
browsing from client configuration, and software distribution.
Introduction
This lesson covers NetWorker authentication using AuthC and NMC user roles and
configuring users and hosts in NMC.
Overview
The AuthC local database is used to store AuthC configuration information and to
verify credentials for local users. A hierarchical database structure is maintained for
users and groups to support multitenant configurations. The default Server
Protection policy backs up the AuthC database.
Overview
The model pictured here shows at a high level what happens when a user logs in to
a NetWorker Management Console server. The NMC server contacts the
NetWorker Authentication Service on the NetWorker server to verify the user
credentials. The NetWorker Authentication Service compares the user credentials
with user information that is stored in the local user database. AuthC can also
contact an external authentication authority to verify the details, if configured to do
so. If the user verification succeeds, the NetWorker Authentication Service
generates a token for the user account and sends the token to the NMC server.
The NMC server login succeeds.
Next, the NMC server looks up the user role membership for the user to determine
the level of authorization that the user has. When the user attempts to connect to a
NetWorker server, NMC checks if the user has the rights to manage the selected
NetWorker server. If it does, the NMC server provides the token information about
the user to the NetWorker server.
The NetWorker server compares the information that is contained in the token with
contents of the External roles attribute in each configured user group. The server
does that to determine the authorization level that the user has on the NetWorker
server. NetWorker then allows or denies the user request.
Overview
Here are the high-level steps for integrating the NetWorker Authentication Service
with NetWorker.
First, during the NetWorker server installation process, AuthC is installed on every
NetWorker server host. AuthC installation is done as part of the NetWorker server
installation process for Windows and is a required package for Linux NetWorker
server installations. When you install a NetWorker Management Console server,
you specify the name of the NetWorker server that authenticates access to the
NMC server. For example, if the NMC is managing more than one NetWorker
server, you can designate one server as the AuthC authentication host for the
NMC.
Next, establish trusts between NetWorker servers if the NMC is managing more
than one datazone.
Then, configure LDAP or AD authentication, if applicable, and any local users for
NMC. Assign roles and privileges to the users in NMC and the NetWorker servers.
The next several slides provide more detail for each step.
Establishing Trust
Overview
The NMC server can only use one NetWorker Authentication Service to provide
authentication services. If the NMC server manages more than one NetWorker
server, trust must be established between each managed NetWorker server and
the AuthC service providing services to NMC. Establishing trust enables users that
are authenticated by the AuthC service on one NetWorker server to access another
NetWorker server.
Trust is established using the nsrauthtrust command. Run the command on the
host where you are adding the trust. The command format is:
nsrauthtrust -H Authentication_service_host -P
Authentication_service_port_number
Overview
You use NetWorker Management Console and command line tools to configure
and manage authentication and authorization.
Use NetWorker Management Console to create and modify user accounts in the
local user database. You can also use NMC to configure the NetWorker
Authentication Service to authenticate users in an AD or LDAP directory.
The CLI tools authc_config and authc_mgmt are used to configure and
manage authentication and the AuthC database. Use authc_mgmt to manage
local database user accounts and groups, local user options, and user and group
queries. Other operations, such as querying the LDAP or AD directory, are also
accomplished with this tool.
Overview
Both of these commands are used in an upcoming lab for this module.
Overview
On NetWorker servers, you can use the NMC Console window to configure the
NetWorker Authentication Service to authenticate users in an AD or LDAP
directory. After creating an AD or LDAP provider, you can also edit the external
authority within the Console.
Overview
When configuring AuthC, you established trust between each remote NetWorker
server that NMC manages and the NetWorker Authentication Service that provides
authentication to the NMC server. On each of those NetWorker servers, also run
nsraddadmin to grant administration rights to the trusted authentication service
administrators:
nsraddadmin -H authentication_service_host -P
authentication_service_port_number
Next, use NetWorker Administration to add the service account for the NMC server
(svc_nmc_nmc_server_name) to the External roles attribute of the Users user
group.
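For example, if NMC authenticates through the AuthC service on a hypothetical server nwserver-a (AuthC default port 9090), the following would be run on each additional managed NetWorker server:

```shell
# Trust the AuthC service that provides authentication to NMC:
nsrauthtrust -H nwserver-a.example.com -P 9090

# Grant the trusted AuthC administrators rights on this server:
nsraddadmin -H nwserver-a.example.com -P 9090
```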
Overview
When NMC is first launched, the default NMC user account, administrator, and the
authentication server service account are assigned to all three Console user roles.
Overview
The Setup window of the Console server is used to configure and manage NMC
users, including creating Console users.
There are two categories of NMC users: Authentication Service Users and
External Repository Users.
External Repository User refers to user accounts that an external authority server
creates and maintains when AuthC is configured to use external authentication. A
user object is automatically created when a user logs in to NMC for the first time
with external authentication. Optionally, you can create the user object in NMC first
as shown here. In this case, AuthC verifies that the user name is a valid name in
the external repository.
Users can manage data in NMC, such as reports and events, for hosts to which the
user is given permission. By default, a user can manage all hosts. Depending upon
the user role that is assigned to the user, user access to specific hosts can be
restricted using the Permissions tab.
Overview
Authorization settings control the rights or permissions that are granted to a user
and enable access to resources that the NetWorker and the NMC server manage.
After creating users in the NetWorker Authentication Service database, you must
configure the NMC server to enable access for both local and external users. That
also applies when configuring the NetWorker Authentication Service to use an
external authority for authentication.
To set the level of access (privileges) that the user has to the NMC server, map
them to NMC roles. Map each user or group that you want to have access to the
NMC to one of the three NMC roles. Map local users to a role using the Local
Users section of the Edit User Role window. Use the External Roles section to
add external users. To add an external user, type the distinguished name of the
user or group.
In the example shown here, a local user, MaryAdmin, and the external user group,
networker_admins, were mapped to the Console Application Administrator role.
After you map the external user group, all members of the group can access the
NMC server. Notice that the authentication server service account for the NMC
server, svc_nmc_nmc_nwwindows, and the user, administrator, are automatically
local users for the user role.
Overview
Log in to the NMC server with a valid user name and password. You can log in to
the NMC server using either a local user account or a user account in a configured,
external authentication authority. Logins for tenant configurations are supported.
Continuing with the examples, after configuring external authentication with the
AD server of emc.edu, log in to the NMC with the login account tparker. This
account is a member of the networker_admins group.
Overview
In this example, the nsrlogin command is run to validate the user tparker and
generate a token for the user.
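The command from the example might look as follows; the server name is hypothetical, nsrlogin prompts for the user's password, and additional options (such as a tenant or domain for external users) may be needed in your environment:

```shell
# Authenticate tparker against the NetWorker server's AuthC service
# and generate a session token for subsequent CLI operations:
nsrlogin -s nwserver.emc.edu -u tparker
```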
Token Expiration
Overview
A token remains valid for a period as defined in the AuthC local database. By
default, the period is 480 minutes or 8 hours. To modify the token expiration
timeout value, select the Configure Authentication Service Token Timeout
option from the Setup menu of the Setup window.
For a CLI-authenticated user, the token expires after any in-progress, user-initiated
operation has completed. The user must then run the nsrlogin command again
to generate a new token.
Overview
For your reference, the table lists all NetWorker logs containing information relating
to the AuthC service. The logs are located in directories on Windows servers below
…\nsr\authc-server and in comparable paths on Linux. These logs are
especially helpful for troubleshooting and verifying operations.
Overview
You can change the configuration of a local user, such as assigned role or
password, from the Setup window by viewing Properties for the selected user. In
the Identity tab, you can change the full name, description, groups, roles, and
password.
For both external and local users, the Login Information tab provides details
about the last user login.
For all users, use the Properties window for each role to change the users that are
members of a selected role.
Overview
When setting up a new installation of NMC, you are prompted to specify the
NetWorker servers that the NMC will manage. You specify that during execution of
the Console Configuration Wizard. After this initial setup, new NetWorker servers
can be added to the Console from the Enterprise window.
To add a NetWorker server to manage, right-click Enterprise in the tree and then
select New > Host. In the Create Host window, specify the name of the NetWorker
server to manage. In the Select Host Type window, select NetWorker to manage a
NetWorker server. Next, in the Manage NetWorker window, choose whether to
gather information from the NetWorker server.
From Enterprise, you can also create folders in the Enterprise tree to organize
multiple hosts into groups.
Overview
The System Options from the Setup menu of the Setup window enable users to
fine-tune the performance of the NMC server. Because changing these options
could potentially degrade performance of the NMC server, exercise careful
consideration and caution. For example, change the debug level for
troubleshooting only and then set it back to 0 when finished.
The User authentication for NetWorker attribute defines how the Console user
accesses a managed NetWorker server. When enabled, which is the default option,
an access request to a NetWorker server is based on the Console user name.
There is a separate network connection from the NMC server to a NetWorker
server for each Console user with an Administration window open to that server. If
disabled, the user id of the gstd process owner determines the Console user
access. There is only one connection from the NMC server to a managed
NetWorker server.
From the Setup menu, you can also perform some of the NMC configuration tasks
that you run the first time that you start a NetWorker Management Console. Those
configuration tasks include running the Console Configuration Wizard and setting
the name of the server that backs up the NMC.
For detailed information about using these options, see the NetWorker
Administration Guide.
Overview
Users and groups are authorized to perform specific tasks on a NetWorker server
based on membership in one or more user groups and the privileges that are
assigned to those groups.
Specific users or groups of users are associated with a user group through the
External roles and Users attributes of the user group resource.
Each NetWorker user group has a specific set of privileges that are associated with
it. The Privileges attribute defines those privileges. Users and groups must be a
member of one or more user groups with privileges that correspond to the tasks
that they perform.
Overview
To add an NMC/AuthC local user to External roles, click the “+” sign and select the
user from the list of local users and groups. To add an external user, type the
distinguished name of the user or group. Specify user names where a user belongs
to many groups.
The Users attribute of a user group defines membership for operating system
users that perform operations outside of NetWorker Administration. These include
CLI commands such as nsradmin, save and recover, and NetWorker modules,
such as NMM and NMDA. To add a user in the Users attribute, use a
Overview
NetWorker provides nine role-based user groups that are preconfigured with
specific privileges. You can assign users to one or more of these groups based on
their administrative role.
The privileges that are associated with each user group can be modified except for
the Application Administrators user group and the Security Administrators user
group. The preconfigured user groups cannot be deleted. The administrator can
create more groups to meet the specific needs of a data protection environment.
For a detailed description of all user privileges that can be assigned to a user
group, see the NetWorker Security Configuration Guide.
Overview
Extra user groups can be created as needed. A new group is convenient when
there are specific users to whom you would like to assign specific NetWorker
duties but who do not fit into the predefined categories.
Overview
For example, to have access to the client database (nsrexec), a user must be a
member of the Administrator list.
NetWorker Logs
Introduction
This lesson covers NetWorker resource update logging, audit logging capabilities,
and NetWorker server and Console server logs.
Overview
NetWorker uses the resource database to store the resources for a NetWorker data
zone. The resource database exists on the NetWorker server. There is one file per
configured resource, and each file is stored in one of 10 subdirectories (00-09)
under /nsr/res/nsrdb.
Important: Resource files are text files, but they must be modified
only through NetWorker administrative interfaces, including
NetWorker Administration and the nsradmin command. Do not edit
the files directly. See the nsradmin topic in the NetWorker Command
Reference Guide for a description of nsradmin options, commands,
and examples.
Overview
Resource update logging enables the administrator to track changes that are made
to configuration resources. The NetWorker server records resource changes in the
rap.log file, which is located in the …/nsr/logs directory.
Resource update logging is enabled using the Monitor RAP attribute in the
NetWorker server resource (NSR). By default, this attribute is enabled but hidden.
To display the Monitor RAP attribute, enable the diagnostic mode from the View
menu. Then, right-click the name of the NetWorker server from any NetWorker
Administration window and select Properties.
Overview
For each event, there are several lines of information that are written to the file.
Information includes a timestamp of when the change was made, the type of action
performed (CHANGED, CREATED, or DELETED), and the affected NSR resource
type. The remaining lines provide the details of the modification. If the type of
action is CHANGED, the old value and the new value are displayed. If the action is
CREATED or DELETED, all the resource attributes and attribute values are
displayed.
Here is an example of the rap.log file entry for a change that is made to a client
resource. The save set for the client was changed from C:\Windows\Fonts to
C:\Program Files\EMC NetWorker\nsr\logs. You can see that the log
mentions both the old and the new value for the save set.
Overview
NetWorker provides the security audit logging feature to record events that are
related to the security and integrity of the data zone.
Any client host in the datazone can be configured to run nsrlogd. By default,
nsrlogd runs on the NetWorker server. The nsrlogd receives audit messages
from the NMC gstd, the nsrexecd on each client including the NMC, and the
daemons running on the NetWorker server. Administrators can view the properties
of the security audit log attribute from the Server window of the NetWorker server.
Members of the Security Administrators user group, and users in the NetWorker
server Administrator attribute, can modify the attributes of the security audit log
resource. Changes made to the resource are automatically copied to each client in
the datazone that supports audit logging.
The security audit log file contains the timestamp, the category, the program name,
and the unrendered message for each security audit message. On the NetWorker
server, the security audit log file is
…nsr\logs\networker_server_sec_audit.raw.
The Security Audit Logging topic in the NetWorker Security Configuration Guide
contains examples of security audit log configurations. It also contains a list of
resources and attributes that the security audit log monitors.
Overview
NetWorker maintains many log files on the NetWorker server and Console server,
besides the previously mentioned rap.log and security audit log files. For
Windows hosts, logs are located on the NetWorker server in the …\nsr\logs
directory. Console server logs are located in …\Management\gst\logs. For
Linux hosts, the paths are /nsr/logs and /opt/lgtonmc/management/logs
respectively.
Listed in the table are some of the most often used logs.
Overview
Several NetWorker log files, which are identified with the .raw extension, are
written in tokenized format. Raw files include daemon.raw (NetWorker server),
gstd.raw (Console server), networkr.raw (NetWorker User program), and
workflow and action logs. The tokens are the same regardless of the locale of the
host. When the nsr_render_log command is used to view these locale-
independent raw logs, the tokens are rendered using the locale of the current host.
Thus, a log file that is viewed on an English system displays English text. If the
same file is viewed, for example, on a host in the Chinese locale, Chinese output is
displayed.
All other log files, and messages that are displayed in the NetWorker Console, use
the locale in which the service that is generating the log messages is running. Use
a text viewer to view the content of these logs.
Using nsr_render_log
Overview
Review the NetWorker Command Reference Guide for command options and more
examples.
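Typical usage, assuming the Linux log path shown earlier, is to render a .raw file into readable text:

```shell
# Render the server's daemon log into the current locale and view it:
nsr_render_log /nsr/logs/daemon.raw | more

# Or save the rendered output to a plain-text file for later review:
nsr_render_log /nsr/logs/daemon.raw > /tmp/daemon.log
```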
Overview
Firewalls monitor all traffic flowing between two or more networks and enable only
authorized traffic, as defined by administrative policies.
Firewall support enables you to back up NetWorker clients that are separated from
the NetWorker server by a packet filtering firewall. It is first necessary to determine
which TCP/IP ports the NetWorker server uses and which ports the NetWorker
client uses. The firewall must then be configured to allow packets to be sent to the
appropriate range of ports on the destination hosts.
If a storage node must communicate through the firewall with either the NetWorker
server or a NetWorker client, calculate the range of ports that the storage node
requires. Then, configure the firewall appropriately to allow communication
between the storage node and the other NetWorker hosts.
Overview
The port numbers that the NetWorker processes or services use, except for
nsrexecd, are assigned from the service port range that is set in the NetWorker
software.
nsrexecd on every type of NetWorker host always tries to listen on ports 7937
and 7938. The ports are used no matter the range value in the NetWorker software,
unless another process is already listening on those ports when NetWorker is
started. NetWorker requires port 7938, used by rpcbind (portmapper), to be
running and available through the firewall; otherwise, NetWorker ceases to function
correctly.
Permitted port ranges are stored in the NSR system port ranges resource in the
resource database, /nsr/res/nsrladb, on each NetWorker host. nsrexecd
uses and manages the resource. Whenever NetWorker daemons/services are
started, nsrexecd is always the first process to start. When NetWorker server
processes are started manually, start nsrexecd first; failure to do so might cause
the ports to be assigned randomly or outside the wanted range. The ports in the
Excluded service ports attribute are reserved for other services and are excluded
from RPC service port assignment.
Overview
Port requirements vary based on the components that you are installing, the
environment you are installing in, and the version of NetWorker you are using.
Consequently, you must understand the processes and the ports that each of the
NetWorker components uses.
The table lists the standard NetWorker services, the ports that are required for
each, and the functions for which each process is used: server, storage node,
client, or the audit log server. Library and device-related processes are discussed
on the next slide. Additional applications and features may use additional ports.
You must identify the features and components that are used in your environment
and determine the port requirements specific to that unique environment.
A standard NetWorker client requires at least four TCP service ports. Snapshot
services require an extra two ports. The NetWorker server requires a minimum of
15 TCP service ports.
For the most detailed information regarding NetWorker services and port
requirements, see the NetWorker Security Configuration Guide.
Overview
The ports that are listed on the slide are for device-related ports that the storage
nodes and NetWorker server use when devices are attached. One port is required
for each jukebox that the storage node manages, and ports for the nsrmmd
processes. The minimum number of service ports that a storage node requires is
five - four for the NetWorker client and one for nsrsnmd.
The type of devices you are using and how you have them configured determine
the number of ports that the nsrmmd processes require.
Unattended firewall ports must be restricted for security reasons in most enterprise
environments. The storage node settings mmds for disabled devices and
Dynamic nsrmmds unselected (static mode) can offer more control. These
settings cause all available nsrmmd firewall ports to be attended by running
nsrmmd services, which is useful where security policy does not allow ports to be
open and unused. When these options are configured correctly, an active process
can be kept running for all devices, even when they are not in use or are disabled.
For more information about both of these settings, see the NetWorker
Administration Guide.
Overview
After calculating the number of service ports that each NetWorker host requires,
determine the service port range that will include the calculated number of ports.
When specifying a range, begin at port 7937. 7937 is always the first port in the
range because nsrexecd is always started on that port. Alternatively, you can
specify one range of 7937-7938 and then one or more extra ranges for the
remainder of the ports.
The firewall administrator does the configuration of the firewall based on the port
information you provide. The number of ports that must be opened in the firewall
depend on those NetWorker hosts that are separated by the firewall. In the
example shown here, the firewall should be configured to allow transmission of
TCP/IP packets destined for the following hosts/ports:
Client A 7937-7940
Client B 7937-7940
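The range arithmetic behind the example can be sketched as follows; the counts are the example's values (19 ports for the server, 4 for a standard client), not fixed requirements of your environment:

```shell
# A NetWorker service port range always begins at 7937, because
# nsrexecd always listens there; compute the end from the port count.
start=7937

range_for() {
    local count=$1
    echo "${start}-$((start + count - 1))"
}

range_for 19   # server range in the example  -> 7937-7955
range_for 4    # standard client minimum      -> 7937-7940
```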
Overview
The slide lists the steps that are required to restrict the NetWorker service port
range. The steps must be performed for each host where you want to change the
service port range.
The following administrative interfaces are available for configuring NetWorker port
ranges:
nsrports
NetWorker Administration
nsradmin
To change the port ranges on a host, the user must have update access to the
NSR system port ranges resource for that host. The NSR system port ranges
resource has its own administrator list on each NetWorker host. That list is in
contrast to NetWorker resources that reside on the NetWorker server and that
users belonging to the Administrator list of the server manage. To give the user
update privileges, add the user to the administrator list for this resource on the
host:
1. Start nsradmin, connecting to the nsrexecd resource database on the host.
2. Use the print subcommand to list the NSR system port ranges resource.
3. Use the update subcommand to modify the administrator attribute.
4. Save the update and quit nsradmin.
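The steps above can be sketched as an nsradmin session; the user and host names are hypothetical, and the connection flag should be verified against the NetWorker Command Reference Guide:

```shell
# Connect to the host's nsrexecd resource database (flag per the
# nsradmin man page; verify for your NetWorker version):
nsradmin -p nsrexec
# Then, at the nsradmin> prompt:
#   print type: NSR system port ranges
#   update administrator: root, "user=jdoe,host=nwclient.example.com"
#   quit
```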
Overview
The nsrports program can be used to view or update the port ranges from the
command line.
nsrports can be run from any host. The -s option is used to specify a remote
host whose service port range you want to modify.
If the -s option is not used, the port range on the local host is modified.
The -S option is used to specify a new service port range for the host.
The -C option is used to specify a new connection port range for the host. By
default, NetWorker defines a range of 0-0 for connection ports.
If neither option is used, the current port ranges are displayed. Noncontiguous
ranges may be specified by including more than one range.
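A few illustrative invocations follow; the remote hostname and the specific ranges are assumptions.

```shell
# Display the current service and connection port ranges on the local host
nsrports
# Set the service port range on the local host
nsrports -S 7937-7940
# Set a noncontiguous service port range on a remote host (hostname assumed)
nsrports -s clientB.example.com -S 7937-7938 7941-7942
# Restrict the connection port range (the default of 0-0 means any port)
nsrports -C 10001-30000
```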
Overview
The slide illustrates the steps that are required to configure a port range using the
NetWorker Administration window.
Overview
The slide illustrates the steps that are required to configure a port range using
nsradmin.
Important: This command is run for each host for which port changes
are to be made.
Overview
Three ports are required for connections between the Console server (gstd) and
Console clients.
One port, default 9000, is used for the web server. The second port, default 9001,
is used for RPC calls from the NMC Java client to the Console server. These ports
are not taken from the range configured using nsrports; instead, they can be
changed during the installation of the NMC server.
The third port is used for database queries and is 5432. This port cannot be
changed.
The firewalls protecting the Console server and the client must be configured to
allow communication over these three ports. The range of ports that NetWorker
uses on the host where the NMC server is installed must not overlap with these
ports.
Besides these ports, two additional ports are required if Data Domain is used
within the environment. SNMP requires ports 161 and 162 for capturing SNMP
traps from the Data Domain device.
Overview
After you have determined the minimum service port ranges for the NetWorker
server and clients, the firewall must be configured to allow transfer of the following
packet types. The port ranges used are from the example that is shown on the
slide.
Packets destined for the IP address of the NetWorker server, if they are
going to a port in the range 7937-7955.
Packets destined for the IP address of the NetWorker client, if they are
going to a port in the range 7937-7940.
Packets destined for the IP address of the NetWorker storage node, if they
are going to a port in the range 7937-7943.
The firewall rules must be configured to accept packets with the SYN bit for ports in
the service ports range.
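On a Linux firewall, rules of this kind might be sketched with iptables as follows. The 192.0.2.x addresses are placeholder assumptions for the server, client, and storage node; adapt the chain and interfaces to your environment.

```shell
# Allow inbound TCP connections (packets with the SYN bit set) to each
# host's service port range. Addresses are placeholder assumptions.
iptables -A FORWARD -p tcp --syn -d 192.0.2.10 --dport 7937:7955 -j ACCEPT  # NetWorker server
iptables -A FORWARD -p tcp --syn -d 192.0.2.20 --dport 7937:7940 -j ACCEPT  # NetWorker client
iptables -A FORWARD -p tcp --syn -d 192.0.2.30 --dport 7937:7943 -j ACCEPT  # storage node
```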
Tools
Overview
The RPC protocol underlies all NetWorker services. RPC is a protocol that
enables a program running on one host to cause code to be run on another host.
netstat is used to display a list of ports that are in use and, if appropriate,
the destination port to which they are connected.
iperf is a network testing tool that can create TCP and UDP data streams and
measure the throughput of the network. iperf enables the user to set various
parameters for testing, optimizing, or tuning a network. iperf works on various
platforms.
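For example, to confirm that nsrexecd is listening and to measure throughput between two hosts (the hostname below is an assumption):

```shell
# List listening sockets and established connections on NetWorker ports
netstat -an | grep 7937
# Measure TCP throughput: run the server side on one host...
iperf -s
# ...and the client side on the other for a 30-second test (hostname assumed)
iperf -c storagenode.example.com -t 30
```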
Summary
Introduction
Introduction
This lesson covers events and reporting in a NetWorker environment. The settings
for gathering information and configuring reports and notifications in NetWorker and
the NetWorker Management Console are discussed.
Overview
To change whether the Console server captures events and gathers reporting data
from a managed NetWorker server, select the NetWorker server in the Console
Enterprise window. Right-click NetWorker (the managed application) in the right
pane, and select Properties from the context menu.
Selecting Capture Events enables events such as license warnings and pending
media requests to be displayed in the Console Events window. Selecting Gather
Reporting Data enables the Console server to accumulate data that is retrieved
from the NetWorker server jobs database and use that information when creating
reports.
Overview
The Events window contains important notices that are generated by the NMC and
managed servers. Types of NetWorker events include failed policy backups,
pending media requests, automatic disabling of devices due to too many
consecutive write errors, and NetWorker licensing notifications.
In order for the NMC to capture events from a specific server, the Capture Events
option must be selected for each server.
Overview
The NMC Reports window contains all reports that can be run within the NMC. The
preconfigured reports are separated into seven different categories, based on
function.
Overview
Two types of reports are provided in the NMC. Basic reports provide data at a
single level; these typically include summary and detailed reports. In
contrast, drill-down reports present a top-level view plus the ability to
drill down to deeper levels, providing greater depth of information within a
single report. The two types of reports are easily identifiable by the icon
used to represent them. Report icons with a black downward-pointing arrow
indicate drill-down reports.
Overview
For each report, there are a number of parameters that can be specified. By
default, all possible values of each parameter are selected. For example, the
Policy Summary report automatically displays information about all NetWorker
policies viewable by the user running the report. All Console database information
matching this query, regardless of the save set timestamp, is included in the report.
To customize the report, deselect one or more values from one or more of the
parameters, or restrict the time period for which the report is generated. The
'<' button deselects an individual value, while '<<' deselects all selected
values. The '>' button selects an unselected value, while '>>' selects all
unselected values. A customized report can be saved for later use.
Overview
After specifying the parameters on which to query, change to the View Report tab
to perform the query and display the results. The parameters used for the query
are displayed in the upper right corner and the actual report is displayed below
them.
Clicking the heading of a field causes the report to be sorted on that field. Clicking
the same heading again reverses the sort.
Customizing Reports
Overview
Right-clicking anywhere in a report displays the context menu shown on the
slide, from which you can choose the report format.
By default, reports are displayed in a tabular format in portrait orientation. You can
use the context menu to change the orientation to landscape.
Report Display
Overview
The default tabular display can be modified by selecting Document from the
context menu, as shown on the slide. Displaying a report in document format is
useful if you want to print the report.
To return to the default tabular view, select Interactive from the context menu.
Overview
There are several types of chart formats including bar chart, pie chart, plot chart,
and stacking bar chart. Each type of chart displays the same information but in a
different format. To display a report in chart format, select Chart from the context
menu. Then, select the type of chart from the choices in the Chart Type drop-down
menu. Select the type(s) of data to display with the Chart Selection field.
Report Options
Overview
In many of the report types, you can select Zoom from the context menu to change
the size of what is displayed. Additionally, you can choose Print from the context
menu to send the report to a printer.
The context menu also has an Export selection which allows you to export the
displayed information to a file in PDF, HTML, or PostScript format. Reports
displayed in a tabular format also allow exporting to be performed in CSV format.
Drill-Down Reports
Overview
Drill-down reports are designated by a small black triangle on the bottom of the
report icon in the Reports window.
In a drill-down report, you can double-click items within the report to view more
detailed information. The types of information displayed when drilling down and the
order in which they appear are listed at the top of the report above the query
parameters in a section called Down Sequence.
Overview
To save the customized query parameters, right-click the report that you
customized in the left pane and select Save As from the context menu.
After you specify a name for the report, the customized report will be filed in the left
pane below the preconfigured report.
By default, a customized report is stored as private for the user who created it and
only appears in that user’s list of reports. The owner, or the NetWorker
administrator, may choose to share the report with others by right-clicking the
report name in the left pane and choosing Share from the context menu. Once
enabled for sharing, the report appears in the list of reports for all users.
Overview
To perform a query and generate a report from the command line, use the
gstclreport command. A large number of options are available to specify
items such as the user to run the query as, the query parameters, and the
format of the report.
Command-line reports can only be printed or exported to a file. They cannot be
saved or shared, and drill-down reports cannot be run from the command line.
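A hedged sketch of a gstclreport invocation follows. The option letters, report name, user, and output path shown here are assumptions; verify them against the command's usage output or the NetWorker documentation for your release.

```shell
# Illustrative only: run a preconfigured report as a given user and
# export it to a PDF file (all values are assumptions).
gstclreport -r "Policy Summary" -u reportuser -x pdf -f /tmp/policy_summary.pdf
```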
Overview
The information contained in the NMC database is used when generating reports.
To manage the size of the database, there are five categories of configurable
parameters that allow you to retain various types of data for differing lengths of
time.
Statistical Data consists of all save set data retrieved from a NetWorker server’s
media database for use in generating backup statistics reports. Once retrieved from
a NetWorker server and stored in the NMC database, the save set data is retained,
by default, for a period of one year.
Audit Data is kept in the NMC database for one year, by default. Audit Data
reports on NetWorker tasks performed by specified users when the NetWorker
User Auditing system option is activated.
Completion Data is kept for one month, by default. Completion data includes
information about all backed up save sets.
Overview
ESRS (EMC Secure Remote Services) can be configured using either the Server
tab in NetWorker Administration or the nsradmin command. ESRS provides an
email report of RAP database information. The following are not included:
Log data
Backup summary information and backup data
Non-NetWorker configuration information
Passwords and other security sensitive information
Any options specified in the Exclude attributes or Exclude resources fields
NetWorker Notifications
Overview
Many NetWorker processes within a datazone notify the NetWorker server when
they finish performing their assigned task or when they are having difficulty
performing a task due to undesirable conditions. Some common conditions might
include the followings.
Overview
A notification’s Event attribute specifies one or more events which trigger the
notification. Each message generated as the result of an event is flagged with a
severity level or priority. A notification’s Priority attribute specifies the severity
level(s) at which the message must be flagged for the notification to be performed.
Lastly, the Action attribute specifies the command that is executed when a
selected event at a specified priority occurs. For a NetWorker server running
Microsoft Windows, NetWorker provides the following commands that are
commonly used in notifications:
A Linux NetWorker server already has the utilities necessary for logging information
(the syslog facility and the logger command), printing (lp or lpr), and sending
email (mail or mailx).
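A notification's Action command receives the event text on standard input, so on a Linux NetWorker server an action might look like either of the following. The syslog tag and mail recipient are assumptions.

```shell
# Route the event text to syslog (the tag "NetWorker" is an assumption)
/usr/bin/logger -t NetWorker
# Or mail the event text to an administrator (the address is an assumption)
/usr/bin/mail -s "NetWorker notification" backupadmin@example.com
```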
This may involve creating a new notification or copying an existing notification and
modifying the action, resulting in multiple actions being performed for the same
event.
Filtering
Overview
From the Administration window, you can use filters to search and view details
about NetWorker server resources, recover configurations, devices, media, and
hosts. Search fields and list boxes display on all NMC windows with filtering
capability.
The search fields and list boxes allow you to filter information that appears on a
page. By typing a value in the search fields or selecting an option from the list box,
the display changes based on the values that you specified in the fields.
For example, in the Protection > Policies window, you can search and view
details for a policy. By typing Bronze in the Search Name field, only the policies
with the name Bronze appear in the list.
In this example, the policy with the name Bronze displays and the Send
Notification attribute is set to On Completion.
Overview
This lab covers NetWorker reporting, including the running of reports and creating
custom reports.
Introduction
NetWorker Parallelism (1 of 2)
Overview
Server parallelism defines the number of simultaneous data streams that the
NetWorker server allows. The default value is 32. Typically, it is recommended that
this value be set as high as possible without overloading the NetWorker server.
Action parallelism defines the maximum number of concurrent activities that can
occur on all clients in a group that is associated with the workflow that contains the
action. For a backup action, the default parallelism value is 100, for clone actions it
is 10, and all other action types have a default value of 0, meaning unrestricted.
NetWorker Parallelism (2 of 2)
Overview
Client parallelism is the maximum number of data streams that a client can use
simultaneously during backup. If multiple (logical) client resources exist for a host
and are backed up at the same time, the maximum number of save sets backed up
simultaneously from the physical host is the sum of the Parallelism value for each
client backing up. By default the Parallelism value is set to 4; however, for the
NetWorker server’s client resource the default value is 12 to accommodate server
CFI backups.
Pool parallelism defines the maximum number of simultaneous save streams for
each device belonging to the pool. The default value is 0, meaning unrestricted.
Overview
In this example, we look at the impact on the NetWorker server when server
parallelism is set to a value of 1.
Overview
In the next example, we consider the impact of increasing the server parallelism
value to 2.
The number of save streams assigned to a device is determined by the value of the
device resource’s Target sessions attribute. When a device is receiving the
number of save streams specified by its Target sessions value, the NetWorker
server attempts to direct additional save sets to other available devices. If there are
no other devices available to receive additional save streams, the NetWorker
server can direct the save streams to the device already receiving its target number
of save streams. Thus, Target sessions is not a hard limit; the NetWorker server
can override the value if necessary.
Each device resource also has an attribute called Max sessions. This attribute is a
hard limit on the number of save streams that may be directed to the device.
Overview
In this final example, we review the impact when server parallelism is set to a value
of 8.
Client oboe backs up its /usr and /mail save sets. The save streams are
directed to the first device because its Target sessions value is set to 2.
Client clarinet’s /mail and /tmp save sets are directed to the second device
because the first device is already receiving the number of save streams
specified by its Target sessions value. At this point, both devices are now
receiving their desired number of save streams.
Since server parallelism is 8, the NetWorker server will start four additional save
sessions. Since a device’s Target sessions is a soft limit, the server overrides
the value and directs the streams to the two devices.
Although the slide depicts the save streams being directed to the devices in a
round-robin fashion, each additional save stream is directed to the least utilized
device as determined by the device resource’s Accesses attribute.
Overview
Parallel save streams (PSS) are used to automatically break up a large save set
into multiple smaller save sets to be backed up at the same time. This results in a
backup that completes faster for file systems on disks that support the increased
read parallelism. Each PSS client resource’s save set entry (mount point, file
system) results in multiple save sets. Each save set has a corresponding media
database record. Synthetic and Virtual Synthetic full backups for UNIX, Linux, and
Windows are supported.
This feature is enabled for scheduled file system backups by checking the Parallel
save streams per save set client resource property.
Overview
Parallel save streams (PSS) are configured at the client level. To use PSS for a
specific client resource, modify the properties of the client and select Parallel save
streams per save set. The maximum number of save streams allowed will be
controlled by the client’s Parallelism value. PSS works best on clients with large
file systems hosted on disks that support high read performance.
Optionally, support is provided to specify the number of streams to use per save
set. This can be done by defining the PSS:streams_per_ss variable with the
Save operations attribute of the client properties Apps & Modules tab.
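For example, the attribute value might take a form like the following; the exact syntax, paths, and stream counts shown here are assumptions to verify against the NetWorker Administration Guide for your release.

```
PSS:streams_per_ss=2,/data1,/data2 6,/data3
```

In this illustrative form, /data1 and /data2 would each be split into two streams and /data3 into six, subject to the client's Parallelism cap.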
Overview
When backups are run using PSS, NetWorker displays the progress of each partial
save set in the NetWorker Administration Monitoring window. As save streams
are freed from backup completion, they will be dynamically reallocated to other
save sets until the max parallelism value is met.
Overview
This example illustrates the benefits of using parallel save streams in terms of
backup completion time. In this example, a client is backing up a save set
consisting of three volumes. Client parallelism is set to 10 and the default
of 4 is used for max streams per save point. The differences between no
parallel stream processing and parallel save streams (PSS) include the number
of streams
started concurrently and what happens when a stream is freed. With PSS, the
backup starts both C:\ and D:\ with four streams and E:\ with two streams, up to the
client parallelism value of 10. After one hour, C:\ and D:\ are finished and the eight
streams used are available to be reallocated. E:\ continues backing up with
four streams, which is the default max streams per save point value. Without
parallel
stream processing, the total backup time is determined by the largest volume and
would take approximately 20 hours. In this example, with PSS, the backup window
is approximately five hours.
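The arithmetic in this example can be checked in plain shell; the 20-hour figure for the largest volume at one stream is taken from the text, and the calculation is a rough check rather than a sizing formula.

```shell
# With one stream, the largest volume (E:\) takes about 20 hours.
LARGEST_HOURS=20
# With PSS, E:\ eventually runs at the default of 4 streams per save point.
STREAMS=4
echo "Approximate PSS backup window: $((LARGEST_HOURS / STREAMS)) hours"
```

This prints a window of 5 hours, consistent with the approximately five hours stated above.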
Overview
If you are backing up virtual clients, you can base the client parallelism setting on
the underlying physical host. In this way, the total number of save streams for all
virtual clients that reside on a physical host are limited to the value specified for the
physical host. To configure this, select Physical client parallelism on the
properties of the virtual client with Diagnostic Mode enabled.
Multi-Tenancy
Introduction
This lesson covers the NetWorker multi-tenancy facility and the use of Restricted
Data Zones.
Overview
Restricted Data Zones (RDZ) allow multiple tenants to share a single NetWorker
environment. This offers customers who need to provide backup services to
various clients the ability to create logical datazones within a backup
environment.
This is particularly useful with service providers managing multiple tenants within a
single infrastructure.
Multiple resources, such as clients, devices, and storage nodes, can be
assigned to a Restricted Data Zone for better utilization. Restricted Data
Zones are a standard feature in NetWorker version 8.0 and higher; therefore,
no additional licenses are required for use.
The Restricted Data Zone feature provides autonomy for tenants in a hosted or
service provider environment, and a simplified experience for NetWorker
administrators.
With NetWorker 9 and higher, you can also associate an RDZ resource with an
individual resource (for example, a client, protection policy, or protection
group) from the resource itself. Non-default resources that were previously
associated with the global zone, and therefore unusable by an RDZ, are now
shared resources that an RDZ can use.
Overview
The Restricted Data Zone is a feature that allows for resources from a single
NetWorker environment to be segmented into individual Restricted Data Zones.
The overall goal of Restricted Data Zones is to isolate and separate users and
resources within a NetWorker environment.
The Global Administrator performs the role of an administrator over the entire
datazone, as well as the setup and configuration of Restricted Data Zones.
The Tenant Administrator can view all resources in a Restricted Data Zone but can
only modify resources designated to them for modification.
The Tenant User is a user that exists only within the RDZ and has no
administrative privileges in the RDZ.
Restricted Data Zones can be complex to set up. When attempting to use the
Restricted Data Zone capabilities in an existing NetWorker environment,
changes must be made so that the environment fits Restricted Data Zones. If an
environment is considering Restricted Data Zones, it is best to adopt them at
the initial NetWorker installation in a new environment rather than to modify
an existing NetWorker environment to use them.
For a complete list of rules and a more detailed discussion of Restricted Data
Zones, please refer to the NetWorker Administration Guide.
Overview
Configuration is performed by adding users and roles along with their associated
privileges to the user configuration. Next, select the resources available within the
NetWorker datazone that you are granting the Restricted Data Zone permission to
use.
For more information about configuring Restricted Data Zones, refer to the
NetWorker Administration Guide.
Overview
Various resources can be assigned to a Restricted Data Zone such as devices and
clients. Similarly, resources such as groups and policies can also be assigned to a
Restricted Data Zone.
Summary
Introduction
Introduction
Overview
Bare Metal Recovery (BMR) is an operation that restores the operating system and
data on a host after a catastrophic failure. NetWorker provides an automated BMR
for Windows that identifies critical volumes and performs recovery for a disabled
computer. NetWorker BMR does not support backup or recovery of user data or
application data unless the data resides on a critical volume. This type of
data, such as Microsoft Word documents or Excel spreadsheets, should be backed
up with regular file system or application backup operations.
You can use NetWorker BMR for recovery of both physical and virtual hosts.
NetWorker Windows BMR supports file system backup and recovery. Additional
backup and recovery software, such as NetWorker Module for Microsoft (NMM),
and procedures are required for backup and restore of application data.
Overview
Files that are associated with application VSS writers are not backed up as
part of the DISASTER_RECOVERY:\ save set. Those files cannot be recovered
unless an application backup program, such as NMM, backs them up. The
DISASTER_RECOVERY:\ save set does not include data for clusters, Active
Directory, DFS-R, and Windows Failover Cluster.
Overview
The source and target hosts must use the same operating system architecture
and processor architecture.
The startup hard disk capacity must be at least as large as that of the hard
disk on the source host.
The number of disks on the target host must be greater than or equal to the
number of disks on the source host. The disk LUN numbering on the target host
must match the disk LUN numbering on the source host.
The RAID configuration on the target computer cannot interfere with the disk
order of the hard disks. The disk or RAID drivers that are used on the source
system must be compatible with the disk or RAID controllers in the target
system. The recovery process restores the backup to the same logical disk
number that the source host used. You cannot restore the operating system to
another hard disk.
Windows BMR supports IDE, SATA, or SCSI hard disks. You can make the backup
on one type of hard disk and recover on another type of hard disk. For example,
SAS to SATA is supported.
NIC drivers that match the NIC in the target host are required. These drivers
are installed after the recovery and reboot are completed.
Overview
A NetWorker BMR for a Windows host is a restore operation that is performed from
the NetWorker Windows BMR boot image. Specific files or save sets cannot be
recovered during a BMR. The target system can access the Windows BMR image
as a bootable CD volume or from a network boot location. Here is a summary of
the disaster recovery tasks for a Windows physical or virtual host using NetWorker.
Driver software if the new host has different hardware than the source host
Network name and IP address of the target host and the NetWorker server and
storage node
The default gateway and name of the DNS server
The NetWorker volumes that contain the backup save sets
You use the Windows BMR image available from http://support.emc.com to create
a bootable CD or deploy this image for a network boot operation. The Windows
BMR image contains the Windows PE operating system, NetWorker binaries, and a
wizard which controls the recovery process. When the Windows host is booted
using the Windows BMR image, the recovery process starts the NetWorker BMR
wizard. The wizard guides the user through the recovery process. The BMR
process restores the operating system that was installed on the source host. If
recovering to a different host with different hardware, after the recovery and reboot
is completed, Windows prompts the user to install the required drivers. As
mentioned previously, data from noncritical volumes including user files and
application database files must be recovered after performing the disaster
recovery.
Introduction
This lesson covers backup and recovery of clusters and the configuration of cluster
clients in a NetWorker environment. Topics include cluster components and
characteristics, the procedure for configuring cluster-aware clients and the
management of path ownership with clusters.
Overview
Clustering is a common practice that can help ensure that data or applications are
continuously available to clients on a network. The basic premise of clustering is
simple: two or more nodes (physical hosts) are connected and available to network
users as a single, highly available system.
When using a clustering application, all nodes in a cluster share one or more disk
resources. In an active/passive cluster, only one of the nodes in the cluster is active
at any given time. The active node is responsible for managing the shared
resources. All other nodes in the cluster are passive nodes. If the active node fails
for any reason, one of the passive nodes takes control of the shared resources.
Clustering can involve more than two nodes and may also involve load balancing.
Clustering can also be configured in active/active arrangements. This arrangement
is used when there are multiple shared resources and each of the nodes is the
active node for one or more resources. This lesson covers a basic cluster
environment of two nodes in an active/passive configuration.
Overview
A shared resource may be either a set of files or an application. A cluster may have
many shared resources. A shared resource within a cluster is referred to by any of
several different names, depending on the clustering software being used. For the
remainder of this lesson, a shared resource is referred to as a virtual service. The
active node always manages a virtual service.
A virtual service is not a physical host, but rather a shared resource that each node
of the cluster can access. Each shared resource may be comprised of multiple
components, such as files, processes, data, and so on, and is assigned its own
hostname and IP address. Hosts outside the cluster see the virtual service as a
normal physical host.
During normal operation, the active node manages all communication between the
virtual services and other hosts on the network. If a planned shutdown or failure of
the active node occurs, control of the virtual services is transferred to the other
node in the cluster. When that happens, the other node changes from the passive
to the active node.
When the failed node is returned to a functional condition, it becomes the passive
node. It is then available for failover in the event of a failure of the current active
node.
Overview
This course provides an overview of the generic steps for configuring NetWorker in
a clustered environment. Procedures for preparing the cluster and for creating
cluster-aware NetWorker clients differ by type of supported cluster environment.
For this information, see the NetWorker Cluster Integration Guide.
Overview
Overview
With most cluster types, you run a cluster configuration script to configure a cluster-
aware client. This slide shows the location of the script by type of cluster
environment. There may be extra steps to create a cluster-aware client depending
upon the cluster type.
For MSFCS clusters, NetWorker supports backup and recovery of file system data
on Windows Server 2012 and Windows Server 2012 R2 file servers that are
configured for Windows Continuous Availability with Cluster Shared Volumes
(CSV).
For detailed configuration steps for cluster-aware clients, see the Configuring the
Cluster chapter in the NetWorker Cluster Integration Guide.
Overview
NetWorker client resources are created for each node in the cluster and for each
virtual service. In a cluster environment with two nodes and one virtual service, you
configure at least three NetWorker client resources.
Each physical node backs up data residing on its own local disks. You create
NetWorker client resources for the physical nodes as you would a non-clustered
backup client.
A virtual client backs up the shared clustered data. If the cluster has multiple virtual
services requiring multiple hostnames and IP addresses, you must create at least
one NetWorker client resource for each virtual service. Specify the root user or
system account for each physical node within the cluster in the Remote Access
field. The Remote Access field enables the active node to perform recoveries of the
virtual client, regardless of which node is active. Specify any environment variables
in the Application Information field. For example, you might optionally specify a
preferred server order list for a CSV backup.
When creating the client resources, ensure that the Save set attribute of the virtual
clients and nodes lists all shared and non-shared data on the systems. Ensure that
the virtual client is backing up all shared data. Also ensure that the NetWorker
client resource of each node includes the local data on that host. Although the All
save set is supported for a virtual client, Dell EMC recommends that you use the
All save set only for the nodes. When All is specified for a node, it does not include
the shared data.
As with any NetWorker client, multiple client resources may be configured for each
node and virtual service. Remember that each virtual client has its own hostname
and IP address and that all hosts must be listed in the appropriate name service
database. Ensure that reverse lookups behave correctly.
Overview
The clustered data is backed up as though it belongs to the virtual client.
When the virtual client backs up, its CFI is updated, regardless of which node
is active.
Recovery of data that is backed up from a private disk on a physical node follows
the same procedures as for a non-clustered host. If a recovery of data from the
shared resource is required, whichever node is active can perform the recovery.
Ensure that the Remote Access attribute of the virtual client resource contains an
entry for each physical cluster node.
In a UNIX cluster, shared data of the virtual client is mounted on the active node.
To recover data belonging to the virtual client, a normal browsable or save set
recovery is performed from the active node. However, the virtual client is selected
as the source client. The data must be relocated to the directory on the active node
where the shared data is mounted.
To recover data to the virtual client in a Windows environment, the active node is the administering client in the recovery. The virtual client is both the source and destination client.
Overview
In a clustered environment, NetWorker must determine which save sets are owned
by the nodes and which save sets are owned by the virtual clients. The criteria
used to determine save set ownership are called path ownership rules. These rules
determine which CFI the save set tracking information is written to. If NetWorker
determines that a save set defined in a client resource is not owned by that client,
NetWorker might not back up the save set. This mechanism prevents a clustered
host from writing to multiple client file indexes, which can cause recovery problems.
To ignore path ownership rules and force a backup of file systems that a client
does not own, you can create an empty pathownerignore file in the directory
containing the NetWorker binaries. This file is created on each node. Its existence
forces NetWorker to back up all specified save sets regardless of ownership
conflicts. Creating the pathownerignore file is not recommended, but may be
necessary if the cluster resources are incorrectly configured. Remember that this file does not change the path ownership rules; it simply causes NetWorker to ignore them. Using
pathownerignore may result in tracking information being sent to an incorrect CFI,
possibly causing problems when performing browsable recoveries.
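Creating the file is a one-line operation; the directory below is an assumption (the real location is the directory containing the NetWorker binaries, which varies by platform):

```shell
# Create the empty pathownerignore file on a node. NSR_BIN defaults to
# a demo directory here; on a real node, point it at the NetWorker
# binary directory (for example /usr/sbin on Linux).
NSR_BIN="${NSR_BIN:-./nsr_bin_demo}"
mkdir -p "$NSR_BIN"
touch "$NSR_BIN/pathownerignore"   # empty file; its presence is all that matters
```

Repeat the same step on each node in the cluster.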
Overview
If you create a pathownerignore file, check whether the save set tracking
information is written to the correct client file index. If it goes to the wrong CFI, you
can force the tracking information to go to a specific client index.
To force save sets to be written to a specific CFI, modify the Backup command
attribute of the client whose data is being sent to the incorrect CFI. The following
command should be placed in this attribute: save -c client_name, where
client_name is the hostname of the client being backed up.
If you are backing up an application server using a NetWorker module, ensure that
you are using the -c client_name or similar arguments that the NetWorker
module requires. Refer to the applicable module documentation for details on
options for the backup command that each NetWorker module uses.
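As an illustration, the attribute value for a hypothetical virtual client named virt1 would look like this (wrapped in a shell function here only so the syntax can be shown; in practice NetWorker itself invokes the command):

```shell
# Hypothetical example: force save set tracking data into the client
# file index of "virt1" (a placeholder hostname). This command line is
# the value placed in the Backup command attribute of the client
# resource whose data was going to the wrong CFI.
forced_index_backup() {
    save -c virt1 "$@"
}
```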
Overview
It is often desirable to back up clustered data to devices that the cluster nodes
manage, thus avoiding TCP/IP traffic. NetWorker supports the environment where
each node in a cluster is configured as a NetWorker storage node. NetWorker
client and storage node software are installed on each node, and each node
controls one or more backup devices. The virtual client is backed up to a device
that the active node manages. All devices within the cluster are created as remote
devices. By default, data from a virtual client is backed up to the first storage node
listed in the Storage Node attribute of the virtual client resource. To back up to the
devices attached to the current physical host, use the keyword curphyhost as the
only value in the Storage Node attribute.
In the configuration shown on the slide, both cluster nodes are functional storage
nodes. The active node (Node A) backs up its local save sets to its own backup
device. Likewise, the passive node (Node B) backs up its local save sets to its own
backup device. The active node (Node A) backs up save sets belonging to the
virtual client to its own device.
Also, clients outside the cluster can be configured to direct their save sets to any
NetWorker storage node residing within the cluster. If either Node A or Node B
fails, the storage nodes list of each physical or virtual client backing up to the failed
node is consulted. Since the storage node is not a shared resource, the storage
node list is used to determine where to redirect the backup.
Although some clustering products can fail over backup devices between nodes, that capability is beyond the scope of this course.
Overview
In this configuration, the clustered NetWorker server owns the shared disks; a volume manager manages the shared disk. The NetWorker server can fail over between Node 1 and Node 2. However, the NetWorker server software runs on only one node at a time.
Overview
Consider the following before you install the NetWorker software on the nodes in a
cluster.
Ensure that the physical and virtual node names are resolvable in Domain
Name System (DNS) or by using a hosts file.
Ensure that the output of the hostname command on each physical node
corresponds to an IP address that can be pinged.
You can publish the virtual hostname in the DNS or Network Information
Services (NIS).
Install the most recent cluster patch for the operating system.
Install the NetWorker software in the same location on a private disk, on each
cluster node.
Ensure that authc is configured on all the nodes of the NetWorker server cluster.
Install NMC on a stand-alone machine by using the virtual hostname of the
clustered NetWorker server.
See the NetWorker Cluster Integration Guide for more details when clustering the
NetWorker server.
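The name-resolution checks above can be sketched as follows; the hostnames are placeholders:

```shell
# Pre-installation sketch: verify that each physical and virtual name
# resolves and that the local hostname maps to a pingable address.
# "node-a", "node-b", and "virt1" are placeholder hostnames.
check_name_resolution() {
    for h in node-a node-b virt1; do
        getent hosts "$h"            # resolves via DNS or the hosts file
    done
    ping -c 1 "$(hostname)"          # local hostname maps to a live IP
}
```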
Summary
Introduction
This module focuses on the recovery of control data residing on the NetWorker
server and the NetWorker Management Console server.
Introduction
This lesson focuses on protecting the NetWorker server and NMC databases: the Server Protection policy, backups of the NetWorker server and NMC databases, and the NetWorker bootstrap save set.
Overview
The NetWorker server and NMC server are protected with the Server Protection
policy. The workflows in the policy are configured to run daily. When you install the
NetWorker server, the installation process creates the default Server Protection
policy for NMC and NetWorker server backup and maintenance activities. The
Server Protection policy includes the Server backup and NMC server backup
default workflows. You can edit and change the default policy and associated
workflows and actions, and also create your own policies and workflows for
NetWorker and NMC server protection.
Once you install the NMC server and connect to the NMC GUI for the first time, the
Console Configuration wizard prompts the administrator to configure the NetWorker
server that will back up the NMC server database.
Overview
The Server backup workflow performs two actions: Expiration and Server
database backup.
The Server db backup action performs a bootstrap backup and a backup of the
client file indexes, by default. The data in the bootstrap backup enables you to
perform a disaster recovery of the NetWorker server. The bootstrap backup
contains the media database, authentication service database, and the resource
files (resource database and the Package Manager database).
The Server Protection group is assigned to the Server backup workflow. This
contains a dynamically generated list of the client resources for the NetWorker
server. By default, the Server backup workflow is configured to back up to the
Default pool. This should be changed in the Server db backup action to a
configured pool in your backup environment. As a best practice, Dell EMC
recommends writing all bootstrap and Client File Index backups to a dedicated
pool.
Overview
The NMC server backup workflow performs a traditional backup of the NMC
database. The workflow is scheduled to start a full backup daily at 2 p.m. The
default NMC server group which contains the NMC server is assigned to the NMC
server backup workflow. By default, this workflow is configured to back up to the
Default pool. This should be changed in the NMC server backup action to a
configured pool in your backup environment.
Note: The NMC server database backup only supports full and skip
backup levels.
Overview
The bootstrap backup is required for recovery of the NetWorker server databases.
If a recovery is required, you need to know its save set ID (SSID) and the name of
the volume on which it is located. There are several ways to obtain information
about bootstrap backups. These methods include notifications, log files, and using
mminfo.
The Server backup Action report, displayed here, is generated when the Server db
backup action runs. The report shows the backup save sets and the Bootstrap
backup report, including the save set id and volumes for recent bootstrap save
sets. This report is included in the notification when the workflows and actions for
the Server Protection policy complete. By default, this notification is appended to
the file, policy_notifications.log in the …\nsr\logs directory, along with notifications
sent to that file by all other running policies.
To isolate the notifications about server protection, you can change the notification
for the Server Protection policy to go to another file or to go to email. You can also
show information about the Server db backup action by configuring a notification at
the action level that will be created when the action completes. This is shown on
the slide.
However you choose to receive the Server backup Action report, it is important to ensure that you regularly receive the bootstrap information and file it in a safe location for later reference in case a recovery is necessary.
Overview
You can also find information about bootstrap save sets in the log messages for
individual operations of the Server db backup action. These logs are available on
the NetWorker server in directories under …\nsr\logs\policy\Server
Protection\Server backup. You can also look at the messages for individual runs of
this action by highlighting the Server backup workflow in the Monitoring window,
selecting Show Details and drilling down to the full log message for the desired
Server db backup action. You can choose to print or save the message.
Another way to locate the bootstrap save set is with the mminfo -B command. This
command displays a list of bootstrap save sets with their save set ID and volume
information. The exact location (file and record number) of the save set on the
volumes is also displayed when tape media is used.
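A minimal sketch, assuming a running NetWorker server:

```shell
# List bootstrap save sets with their save set ID, file/record
# location, and volume name (requires a running NetWorker server,
# so the function is defined here but not invoked).
locate_bootstraps() {
    mminfo -B
}
```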
Overview
If you do not know the volume and save set ID of the most recent bootstrap save
set, here are some additional methods of locating the information.
The daemon.raw file in the NetWorker server log directory may contain an entry
showing which volume a bootstrap save set was written to.
If the previous method does not provide a volume name, another option is to use
the scanner command with the -B option to locate information about bootstrap save
sets. This method requires that you guess which volume contains the most recent
bootstrap save set and manually load it into a drive before running scanner.
scanner -B reads an entire volume and displays information about the most recent
bootstrap save set found. Depending on the size of the volume and the speed of
the device, this process can sometimes be lengthy. If the most recent bootstrap
save set on the volume is not the one you want, load another volume into the drive
and run scanner again.
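A sketch of the scanner invocation; the device path is a placeholder for the drive in which you loaded the candidate volume:

```shell
# Read an entire mounted volume and report the most recent bootstrap
# save set found on it. "/dev/nst0" is a placeholder device path;
# defined but not invoked here, since it needs a loaded volume.
scan_for_bootstrap() {
    scanner -B /dev/nst0
}
```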
Introduction
This lesson covers the procedures for recovering the NetWorker server, including recovering the NetWorker bootstrap data and the client file indexes. Recovering the media database, resource database, and NMC database individually is also discussed.
Overview
The bootstrap save set is used by nsrdr to recover the NetWorker server.
The slide summarizes the steps that are needed to perform a complete recovery of
a NetWorker server. The steps assume that the original server is no longer
available and a new NetWorker server is being configured.
Before installing NetWorker, verify the functionality of the server it is being installed
on.
After starting all the NetWorker daemons/services, the only customization you must perform to the default NetWorker installation is to create a device resource for the device that is used to recover the bootstrap save set.
Use nsrdr to recover the bootstrap save set and optionally recover the client file
indexes.
Overview
Using nsrdr is the only method of recovering the bootstrap save set, and
NetWorker processes must be running prior to running nsrdr. Configure a
NetWorker device resource and insert the volume containing the bootstrap save set
into the device. Make sure that you do not label the volume, as labeling erases all data on it.
nsrdr is interactive, prompting for the SSID of the bootstrap save set being
recovered. It also prompts you to replace the existing resource configuration
database folder, to replace the NetWorker Authentication Server database file, and
to recover the client file indexes.
Overview
There may be situations where the entire NetWorker server does not need to be
recovered. The media database may be damaged, corrupted, or missing important
information, but the resource directory is perfectly fine. Conversely, NetWorker
resources may have been accidentally or maliciously deleted or modified, requiring
that only the resource directories be recovered.
To insert missing volume or save set information into the media database, the
scanner command is used to scan a volume and insert information directly into the
media database (and optionally, client file indexes) while reading the volume.
The conditions that are shown in the slide are discussed on the following pages:
Overview
The slide summarizes the steps that are needed to perform a recovery of the
NetWorker control data with nsrdr. NetWorker must be running to run nsrdr:
Shut down the NetWorker processes, if running, and rename the existing /nsr/mm
and /nsr/res directories. By renaming the directories, you have a copy of the
directories as they were before the recovery is run. This also enables NetWorker to
start even though the media database or resource files may be corrupted or
damaged.
Next, create a device resource for the device that will be used to recover the
bootstrap save set. Do NOT label the volume containing the bootstrap, as labeling erases all the data on the volume. When creating an AFTD or Data Domain device, create
the device resource that has the volume containing the bootstrap save set mounted
in it. Do NOT label the device. Close NetWorker Administration.
Use nsrdr to recover the bootstrap save set and optionally, recover the client file
indexes and NetWorker Authentication Service database. Running nsrdr will
overwrite the /nsr/mm directory. You have the option to keep the /nsr/res folder (not
recover the resource files) or replace the resource files with recovered resource
files. If you choose to replace the resource files, nsrdr saves the existing /nsr/res
folder as res.<timestamp>.
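On a Linux NetWorker server, the steps above might be sketched as follows; the paths and service commands are assumptions to adjust for your platform:

```shell
# Sketch only: prepare for and run a control-data recovery on Linux.
# Defined as a function for illustration; do not run as-is.
recover_control_data() {
    nsr_shutdown                   # stop the NetWorker processes
    mv /nsr/mm  /nsr/mm.orig       # preserve the media database
    mv /nsr/res /nsr/res.orig      # preserve the resource files
    # Restart NetWorker, create a device resource for the device holding
    # the bootstrap volume (do NOT label it), then run the recovery:
    nsrdr                          # prompts for SSID, resource files, CFIs
}
```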
Overview
After a bootstrap recovery, it is possible that some volumes may contain save sets
that are newer than the recovered bootstrap. If any backup or clone processes
wrote data to any of the volumes after the bootstrap save set was created, the
recovered media database will not contain information about the save sets. These
save sets could potentially be overwritten. The volume flag, S, indicates that save
sets on the volume may need to be scanned into the media database. When this
flag is set, the volume is “locked” and a recover space operation will not be
performed for disk volumes.
By default, nsrdr will mark all disk volumes in the database as read-only and scan
needed to indicate that you must scan the save set information back into the media
database before you can use the volume. For tape volumes, if you suspect that
backups or clones were written to those volumes after the latest bootstrap was
created, running the nsrdr command with the -N option will cause the scan needed
flag to be set on all volumes.
To find out if there are any volumes with save sets that need to be scanned, select
Tape Volumes or Disk Volumes from the NetWorker Administration Media
window. You can manually change the mode of a volume to scan needed by right-
clicking the volume in the right pane and selecting Mark Scan Needed > Scan is
needed.
To clear the scan needed volume flag for disk volumes, first run the scanner -i
device command. For tape volumes, when the scan needed mode is set and you
try to mount a tape volume that has save sets newer than what is recorded in the
media database, you receive a message with the last known file and record
number in the media database. If you suspect that there were save sets that were
saved after the last bootstrap backup, use this information with the scanner -f file -r record -i device command to scan the volume from the last known record
numbers. Then, to remove the scan needed flag from the volume, from the
NetWorker Administration Media window, right-click the volume and select Scan is
NOT needed from the Mark Scan Needed window.
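The two scanner invocations can be sketched as follows; the device name and the file/record numbers are placeholders taken from the mount-time message:

```shell
# Sketches of re-populating the media database after a bootstrap
# recovery. Arguments are placeholders; the functions are defined
# here but not invoked.
rescan_disk_volume() {
    scanner -i "$1"                # $1 = device: rebuild media db and CFIs
}
rescan_tape_from_position() {
    # $1 = device, $2 = file number, $3 = record number
    scanner -f "$2" -r "$3" -i "$1"
}
```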
See the NetWorker Command Reference Guide and the NetWorker Administration
Guide for more information.
Overview
When recovering the bootstrap save set with nsrdr, you have the option to recover
CFIs after the recovery operation restarts the NetWorker services. You may choose
to skip this step if the CFIs are not immediately necessary. Then, create an empty
CFI prior to the next backup of a client. You can then run nsrdr later to recover the
CFIs for selected clients.
To recover only specific CFIs, run nsrdr with the -I command line option to specify a list of clients or use the -f option to specify an input file.
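For example (client names and the input file path are placeholders):

```shell
# Recover only selected client file indexes after an earlier
# bootstrap-only recovery. Names and paths are placeholders;
# functions are defined but not invoked.
recover_selected_cfis() {
    nsrdr -I client-a client-b     # recover CFIs for the named clients
}
recover_cfis_from_file() {
    nsrdr -f /tmp/client_list.txt  # file listing one client per line
}
```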
Overview
For Linux hosts, if you did not install NMC server software in the default path
/opt/lgtonmc, add the NMC_install_dir/bin directory to the LD_LIBRARY_PATH
environment variable.
Summary
Summary
Summary