
Backup

In information technology, a backup, or data backup, or the process of backing up, refers to the copying into an archive
file[note 1][1] of computer data that is already in secondary storage—so that it may be used to restore the original after a data loss
event. The verb form is "back up" (a phrasal verb), whereas the noun and adjective form is "backup".[2] (This article assumes at least
a random access index to the secondary storage data to be backed up, and therefore does not discuss the venerable practice of pure
tape-to-tape copying.)

Backups have two distinct purposes. The primary purpose is to recover data after its loss, be it by data deletion or corruption. Data
loss can be a common experience of computer users; a 2008 survey found that 66% of respondents had lost files on their home PC.[3]
The secondary purpose of backups is to recover data from an earlier time, according to a user-defined data retention policy, typically
configured within a backup application for how long copies of data are required.[4] Though backups represent a simple form of
disaster recovery and should be part of any disaster recovery plan, backups by themselves should not be considered a complete
disaster recovery plan. One reason for this is that not all backup systems are able to reconstitute a computer system or other complex
configuration such as a computer cluster, active directory server, or database server by simply restoring data from a backup.[5]

Since a backup system contains at least one copy of all data considered worth saving, the data storage requirements can be
significant. Organizing this storage space and managing the backup process can be a complicated undertaking. A data repository
model may be used to provide structure to the storage. Nowadays, there are many different types of data storage devices that are
useful for making backups. There are also many different ways in which these devices can be arranged to provide geographic
redundancy, data security, and portability.

Before data are sent to their storage locations, they are selected, extracted, and manipulated. Many different techniques have been
developed to optimize the backup procedure. These include optimizations for dealing with open files and live data sources as well as
compression, encryption, and de-duplication, among others. Every backup scheme should include dry runs that validate the reliability
of the data being backed up. It is important to recognize the limitations and human factors involved in any backup scheme.

Contents
Storage, the base of a backup system
Data repository models
Storage media
Managing the data repository
Selection and extraction of data
Files
Filesystems
Live data
Metadata
Manipulation of data and dataset optimization
Managing the backup process
Objectives
Limitations
Implementation
Measuring the process
Enterprise client-server backup
Performance
Source file integrity
User interface
LAN/WAN/Cloud
See also
Notes
References
External links

Storage, the base of a backup system

Data repository models


Any backup strategy starts with a concept of a data repository. The backup data needs to be stored, and probably should be organized
to a degree. The organization could be as simple as a sheet of paper with a list of all backup media (CDs, etc.) and the dates they were
produced. A more sophisticated setup could include a computerized index, catalog, or relational database. Different approaches have
different advantages. Part of the model is the backup rotation scheme.[6]

Unstructured
An unstructured repository may simply be a stack of tapes or CD-Rs or DVD-Rs with minimal
information about what was backed up and when. This is the easiest to implement, but
probably the least likely to achieve a high level of recoverability as it lacks automation.
Full only / System imaging
A repository of this type contains complete system images taken at one or more specific
points in time.[6] This technology is frequently used by computer technicians to record known
good configurations. Imaging[7] is generally more useful for deploying a standard
configuration to many systems rather than as a tool for making ongoing backups of diverse
systems.
Incremental
An incremental style repository aims to make it more feasible to store backups from more
points in time by organizing the data into increments of change between points in time. This
eliminates the need to store duplicate copies of unchanged data: with full backups a lot of
the data will be unchanged from what has been backed up previously.[6] Typically, a full
backup (of all files) is made on one occasion (or at infrequent intervals) and serves as the
reference point for an incremental backup set. After that, a number of incremental backups
are made after successive time periods. Restoring the whole system to the date of the last
incremental backup would require starting from the last full backup taken before the data
loss, and then applying in turn each of the incremental backups since then.[8] Additionally,
some backup systems can reorganize the repository to synthesize full backups from a series
of incrementals.
Differential
Each differential backup saves the data that has changed since the last full backup.[6] It has
the advantage that only a maximum of two data sets are needed to restore the data. One
disadvantage, compared to the incremental backup method, is that as time from the last full
backup (and thus the accumulated changes in data) increases, so does the time to perform
the differential backup. Restoring an entire system would require starting from the most
recent full backup and then applying just the last differential backup since the last full
backup.

By standard definition, a differential backup copies files that have been created or
changed since the last full backup, regardless of whether any other differential backups
have been made since then, whereas an incremental backup copies files that have
been created or changed since the most recent backup of any type (full or
incremental). Other variations of incremental backup include multi-level incrementals
and incremental backups that compare parts of files instead of just the whole file. (A
short sketch contrasting the two restore chains appears at the end of this section.)
Reverse delta
A reverse delta type repository stores a recent "mirror" of the source data and a series of
differences between the mirror in its current state and its previous states. A reverse delta
backup will start with a normal full backup. After the full backup is performed, the system will
periodically synchronize the full backup with the live copy, while storing the data necessary
to reconstruct older versions.[9] This can either be done using hard links, or using binary
diffs. This system works particularly well for large, slowly changing, data sets.
Continuous data protection
Instead of scheduling periodic backups, the system immediately logs every change on the
host system. This is generally done by saving byte or block-level differences rather than file-
level differences.[10] It differs from simple disk mirroring in that it enables a roll-back of the
log and thus restoration of old images of data.
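The contrast between incremental and differential restore chains can be made concrete. The following is a minimal Python sketch, assuming a hypothetical catalog of (timestamp, kind) records; it is illustrative only and does not reflect how any particular product stores its catalog.

from datetime import datetime

def incremental_restore_plan(backups, target):
    """backups: list of (timestamp, kind) tuples, kind in {"full", "incr"}.
    Restore = last full backup at or before target, plus every later incremental."""
    base = max(t for t, k in backups if k == "full" and t <= target)
    return [(base, "full")] + [(t, "incr") for t, k in backups
                               if k == "incr" and base < t <= target]

def differential_restore_plan(backups, target):
    """kind in {"full", "diff"}. Restore = last full backup plus only the latest differential."""
    base = max(t for t, k in backups if k == "full" and t <= target)
    diffs = [t for t, k in backups if k == "diff" and base < t <= target]
    return [(base, "full")] + ([(max(diffs), "diff")] if diffs else [])

# Example: a full backup on the 1st, then daily incrementals.
catalog = [(datetime(2023, 1, 1), "full"), (datetime(2023, 1, 2), "incr"),
           (datetime(2023, 1, 3), "incr")]
print(incremental_restore_plan(catalog, datetime(2023, 1, 3)))  # full + both incrementals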

Storage media
Regardless of the repository model that is used, the data has to be copied onto some
data storage medium.

Magnetic tape
Magnetic tape has long been the most commonly used medium for bulk data storage, backup, archiving, and
interchange. Tape has typically had an order of magnitude better capacity-to-price ratio when compared
to hard disk, but the ratios for tape and hard disk have become closer.[11] Many tape formats have been
proprietary or specific to certain markets like mainframes or a particular brand of personal computer, but
by 2014 LTO was edging out two other remaining viable "super" formats—IBM 3592 (now also referred to as
the TS11xx series) and Oracle StorageTek T10000[12]—and further development of the smaller-capacity DDS
format had been canceled. By 2017 Spectra Logic, which builds tape libraries for both the LTO and TS11xx
formats, was predicting that "Linear Tape Open (LTO) technology has been and will continue to be the
primary tape technology."[13] Tape is a sequential access medium, so even though access times may be
poor, the rate of continuously writing or reading data can actually be very fast.
[Image: from left to right, a DVD disc in a plastic cover, a USB flash drive and an external hard drive.]
Hard disk
The capacity-to-price ratio of hard disks has been improving for many years, making them
more competitive with magnetic tape as a bulk storage medium. The main advantages of
hard disk storage are low access times, availability, capacity and ease of use.[14] External
disks can be connected via local interfaces like SCSI, USB, FireWire, or eSATA, or via
longer distance technologies like Ethernet, iSCSI, or Fibre Channel. Some disk-based
backup systems, via Virtual Tape Libraries or otherwise, support data deduplication, which
can dramatically reduce the amount of disk storage capacity consumed by daily and weekly
backup data.[15][16][17] One disadvantage of hard disk backups vis-a-vis tape is that hard
drives are close-tolerance mechanical devices and may be more easily damaged, especially
while being transported (e.g., for off-site backups).[18] In the mid-2000s, several drive
manufacturers began to produce portable drives employing ramp loading and accelerometer
technology (sometimes termed a "shock sensor"),[19][20] and—by 2010—the industry
average in drop tests for drives with that technology showed drives remaining intact and
working after a 36-inch non-operating drop onto industrial carpeting.[21] The manufacturers
do not, however, guarantee these results and note that a drive may fail to survive even a
shorter drop.[21] Some manufacturers also offer 'ruggedized' portable hard drives, which
include a shock-absorbing case around the hard disk, and claim a range of higher drop
specifications.[21][22][23] Another disadvantage is that over a period of years the stability of
hard disk backups is shorter than that of tape backups.[12][24][18]
Optical storage
Recordable CDs, DVDs, and Blu-ray Discs are commonly used with personal computers and
generally have low media unit costs. However, the capacities and speeds of these and other
optical discs have traditionally been lower than that of hard disks or tapes (though advances
in optical media are slowly shrinking that gap[25][26]). Many optical disk formats are WORM
type, which makes them useful for archival purposes since the data cannot be changed. The
use of an auto-changer or jukebox can make optical discs a feasible option for larger-scale
backup systems. Some optical storage systems allow for cataloged data backups without
human contact with the discs, allowing for longer data integrity. A 2008 French study
indicated the lifespan of typically-sold CD-Rs was 2-10 years,[27] but one manufacturer later
estimated the longevity of its CD-Rs with a gold-sputtered layer to be as high as 100
years.[28]
SSD/Solid-state drive
Also known as flash memory, thumb drives, USB flash drives, CompactFlash, SmartMedia,
Memory Stick, Secure Digital cards, etc., these devices are relatively expensive for their low
capacity in comparison to hard disk drives, but are very convenient for backing up relatively
low data volumes. Unlike its magnetic-drive counterpart, a solid-state drive contains no moving
parts, making it less susceptible to physical damage, and can have huge throughput, on the
order of 500 Mbit/s to 6 Gbit/s. The capacity offered by SSDs continues to
grow and prices are gradually decreasing as they become more common.[29][22] Over a
period of years the stability of flash memory backups is shorter than that of hard disk
backups.[12]
Remote backup service AKA cloud backup
As broadband Internet access becomes more widespread, remote backup services are
gaining in popularity. Backing up via the Internet to a remote location can protect against
events such as fires, floods, or earthquakes which could destroy locally stored backups.[30]
There are, however, a number of drawbacks to remote backup services. First, Internet
connections are usually slower than local data storage devices. Residential broadband is
especially problematic as routine backups must use an upstream link that's usually much
slower than the downstream link used only occasionally to retrieve a file from backup. This
tends to limit the use of such services to relatively small amounts of high value data, even if
a particular service provides initial seed loading. Secondly, users must trust a third party
service provider to maintain the privacy and integrity of their data, although confidentiality
can be assured by encrypting the data before transmission to the backup service with an
encryption key known only to the user. Ultimately the backup service must itself use one of
the above methods so this could be seen as a more complex way of doing traditional
backups.
Floppy disk and its derivatives
During the 1980s and early 1990s, many personal/home computer users associated backing
up mostly with copying to floppy disks. However, the data capacity of floppy disks did not
keep pace with growing demands, rendering them effectively obsolete. Later "superfloppy"
devices and related "non-floppy" devices provide greater storage capacity and remain
supported as backup media by some developers.[15]

Managing the data repository


Regardless of the data repository model, or data storage media used for backups, a balance needs to be struck between accessibility,
security and cost. These media management methods are not mutually exclusive and are frequently combined to meet the user's
needs. Using on-line disks for staging data before it is sent to a near-line tape library is a common example.

Data repository implementations include:[31][32]

On-line
On-line backup storage is typically the most accessible type of data storage, which can
begin a restore in milliseconds. An internal hard disk or a disk array (maybe connected to
SAN) is one example of an on-line backup. This type of storage is convenient and speedy,
but is relatively expensive and is vulnerable to being deleted or overwritten, either by
accident, by malevolent action, or in the wake of a data-deleting virus payload.
Near-line
Near-line storage is typically less accessible and less expensive than on-line storage, but
still useful for backup data storage. A good example would be a tape library with restore
times ranging from seconds to a few minutes. A mechanical device is usually used to move
media units from storage into a drive where the data can be read or written. Generally it has
safety properties similar to on-line storage.
Off-line
Off-line storage requires some direct action to provide access to the storage media: for
example inserting a tape into a tape drive or plugging in a cable. Because the data are not
accessible via any computer except during limited periods in which they are written or read
back, they are largely immune to a whole class of on-line backup failure modes. Access time
will vary depending on whether the media are on-site or off-site.
Off-site data protection
Backup media may be sent to an off-site vault to protect against a disaster or other site-
specific problem. The vault can be as simple as a system administrator's home office or as
sophisticated as a disaster-hardened, temperature-controlled, high-security bunker with
facilities for backup media storage. Importantly a data replica can be off-site but also on-line
(e.g., an off-site RAID mirror). Such a replica has fairly limited value as a backup, and should
not be confused with an off-line backup.
Backup site or disaster recovery center (DR center)
In the event of a disaster, the data on backup media will not be sufficient to recover.
Computer systems onto which the data can be restored and properly configured networks
are necessary too. Some organizations have their own data recovery centers that are
equipped for this scenario. Other organizations contract this out to a third-party recovery
center. Because a DR site is itself a huge investment, backing up is very rarely considered
the preferred method of moving data to a DR site. A more typical way would be remote disk
mirroring, which keeps the DR data as up to date as possible.

Selection and extraction of data


A successful backup job starts with selecting and extracting coherent units of data. Most data on modern computer systems is stored
in discrete units, known as files. These files are organized into filesystems. Files that are actively being updated can be thought of as
"live" and present a challenge to back up. It is also useful to save metadata that describes the computer or the filesystem being backed
up.

Deciding what to back up at any given time involves tradeoffs. By backing up too much redundant data, the data repository will fill
up too quickly.[33] Backing up an insufficient amount of data can eventually lead to the loss of critical information.

Files

Copying files
With the file-level approach, making copies of files is the simplest and most common way to
perform a backup. A means to perform this basic function is included in all backup software
and all operating systems.

Partial file copying


Instead of copying whole files, a backup may include only the blocks or bytes within a file
that have changed in a given period of time. This technique can substantially reduce needed
storage space, but requires a high level of sophistication to reconstruct files in a restore
situation. Some implementations require integration with the source file system.
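As an illustration of the idea, the Python sketch below (with an arbitrary 64 KiB block size) detects changed blocks by comparing per-block hashes against those recorded at the previous backup; real implementations may instead track changes through the source file system, as noted above.

import hashlib

BLOCK_SIZE = 64 * 1024  # arbitrary fixed block size for this sketch

def block_hashes(path):
    """Return a list of SHA-256 digests, one per fixed-size block of the file."""
    digests = []
    with open(path, "rb") as f:
        while True:
            block = f.read(BLOCK_SIZE)
            if not block:
                break
            digests.append(hashlib.sha256(block).hexdigest())
    return digests

def changed_blocks(path, previous_hashes):
    """Yield (index, data) for blocks whose hash differs from the previous backup's record."""
    with open(path, "rb") as f:
        index = 0
        while True:
            block = f.read(BLOCK_SIZE)
            if not block:
                break
            digest = hashlib.sha256(block).hexdigest()
            if index >= len(previous_hashes) or previous_hashes[index] != digest:
                yield index, block
            index += 1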

Deleted files
To prevent the unintentional restoration of files that have been intentionally deleted, a record
of the deletion must be kept.
Filesystems

Filesystem dump
Instead of copying files within a file system, a block-level copy of the whole filesystem itself
can be made. This is also known as a raw partition backup and is related to disk
imaging. The process usually involves unmounting the filesystem and running a program like
dd (Unix).[34] Because the disk is read sequentially and with large buffers, this type of
backup can be much faster than reading every file normally, especially when the filesystem
contains many small files, is highly fragmented, or is nearly full. But because this method
also reads the free disk blocks that contain no useful data, this method can also be slower
than conventional reading, especially when the filesystem is nearly empty. Some filesystems,
such as XFS, provide a "dump" utility that reads the disk sequentially for high performance
while skipping unused sections. The corresponding restore utility can selectively restore
individual files or the entire volume at the operator's choice.[35]
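For illustration, a raw partition copy of the kind performed by dd can be sketched as a sequential, large-buffer read of the (unmounted) block device; the device path, image name, and buffer size below are placeholders.

import shutil

def raw_partition_backup(device="/dev/sdb1", image="partition.img",
                         buffer_size=16 * 1024 * 1024):
    """Sequentially copy an (unmounted) block device into an image file.

    Roughly what `dd if=/dev/sdb1 of=partition.img bs=16M` does. Requires read
    permission on the device; the filesystem should be unmounted or frozen first.
    """
    with open(device, "rb") as src, open(image, "wb") as dst:
        shutil.copyfileobj(src, dst, length=buffer_size)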

Identification of changes
Some filesystems have an archive bit for each file that says it was recently changed. Some
backup software looks at the date of the file and compares it with the last backup to
determine whether the file was changed.
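A minimal sketch of the date-comparison approach, assuming the previous backup's completion time is available as a Unix timestamp:

import os

def changed_since(root, last_backup_time):
    """Walk a directory tree and return paths whose modification time is newer
    than the previous backup's timestamp (a Unix epoch float)."""
    changed = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                if os.path.getmtime(path) > last_backup_time:
                    changed.append(path)
            except OSError:
                pass  # file vanished or is unreadable; a real tool would log this
    return changed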

Versioning file system


A versioning filesystem keeps track of all changes to a file and makes those changes
accessible to the user. Generally this gives access to any previous version, all the way back
to the file's creation time. An example of this is the Wayback versioning filesystem for
Linux.[36]

Live data
A snapshot is an instantaneous function of some filesystems that presents a copy of the filesystem as if it were frozen at a specific
point in time, often by a copy-on-write mechanism. An effective way to back up live data is to temporarily quiesce them (e.g., close
all files), take a snapshot, and then resume live operations. At this point the snapshot can be backed up through normal methods.[37]
Snapshotting a file while it is being changed results in a corrupted file that is unusable, as most large files contain internal references
between their various parts that must remain consistent throughout the file. This is also the case across interrelated files, as may be
found in a conventional database or in applications such as Microsoft Exchange Server. The term fuzzy backup can be used to
describe a backup of live data that looks like it ran correctly, but does not represent the state of the data at a single point in time.[38]
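As a sketch of the quiesce/snapshot/resume pattern described above, the following uses LVM snapshot commands invoked from Python. The volume group, logical volume, mount point, and archive path are placeholders, and a real job would also quiesce the application around the snapshot and handle errors more robustly.

import subprocess

def backup_with_snapshot():
    """Quiesce/snapshot/resume sketch using LVM; names are placeholders."""
    # Take a copy-on-write snapshot of the live volume (the application should be
    # briefly quiesced, e.g. flush and pause writes, around this call).
    subprocess.run(["lvcreate", "--snapshot", "--name", "backup_snap",
                    "--size", "1G", "/dev/vg0/data"], check=True)
    try:
        subprocess.run(["mount", "-o", "ro", "/dev/vg0/backup_snap", "/mnt/snap"],
                       check=True)
        try:
            # Back up the frozen view through normal file-level methods.
            subprocess.run(["tar", "-czf", "/backups/data.tar.gz",
                            "-C", "/mnt/snap", "."], check=True)
        finally:
            subprocess.run(["umount", "/mnt/snap"], check=True)
    finally:
        subprocess.run(["lvremove", "-f", "/dev/vg0/backup_snap"], check=True)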

Backup options for data files that cannot be or are not quiesced include:[39]

Open file backup


Many backup software applications undertake to back up open files in an internally
consistent state.[40] File locking would be useful for regulating access to open files, but this
may be inconvenient for the user. Some applications simply check whether open files are in
use and try again later.[15] Other applications exclude open files that are updated very
frequently.[41]

Interrelated database files backup


Some interrelated database file systems offer a means to generate a "hot backup"[42] of the
database while it is online and usable. This may include a snapshot of the data files plus a
snapshotted log of changes made while the backup is running. Upon a restore, the changes
in the log files are applied to bring the copy of the database up to the point in time at which
the initial backup ended.[43]
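Conceptually, the restore step can be pictured as replaying a captured change log over the copied data files. The sketch below assumes a simplified log of (offset, data) records; real databases use their own log formats (write-ahead logs, redo logs, etc.).

def replay_change_log(database_copy, change_log):
    """Bring a restored copy of a database file up to the moment the hot backup
    finished by re-applying the logged changes. change_log is assumed to be an
    iterable of (offset, data) records captured while the backup ran."""
    with open(database_copy, "r+b") as f:
        for offset, data in change_log:
            f.seek(offset)
            f.write(data)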

Metadata
Not all information stored on the computer is stored in files. Accurately recovering a complete system from scratch requires keeping
track of this non-file data too.[44]

System description
System specifications are needed to procure an exact replacement after a disaster.
Boot sector
The boot sector can sometimes be recreated more easily than saving it. Still, it usually isn't a
normal file and the system won't boot without it.
Partition layout
The layout of the original disk, as well as partition tables and filesystem settings, is needed
to properly recreate the original system.
File metadata
Each file's permissions, owner, group, ACLs, and any other metadata need to be backed up
for a restore to properly recreate the original environment. (A sketch of capturing such
metadata appears at the end of this section.)
System metadata
Different operating systems have different ways of storing configuration information.
Microsoft Windows keeps a registry of system information that is more difficult to restore
than a typical file.
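The sketch below, referenced from the file metadata item above, captures POSIX-style per-file attributes; ACLs and platform-specific attributes would need additional, platform-dependent calls.

import os
import stat

def file_metadata(path):
    """Capture the per-file metadata a restore would need to recreate the
    original environment (POSIX-style attributes only)."""
    st = os.lstat(path)
    return {
        "mode": stat.filemode(st.st_mode),  # e.g. '-rw-r--r--'
        "uid": st.st_uid,
        "gid": st.st_gid,
        "size": st.st_size,
        "mtime": st.st_mtime,
    }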

Manipulation of data and dataset optimization


It is frequently useful or required to manipulate the data being backed up to optimize the backup process. These manipulations can
provide many benefits including improved backup speed, restore speed, data security, media usage and/or reduced bandwidth
requirements.

Compression
Various schemes can be employed to shrink the size of the source data to be stored so that
it uses less storage space. Compression is frequently a built-in feature of tape drive
hardware.[45]
Deduplication
When multiple similar systems are backed up to the same destination storage device, there
exists the potential for much redundancy within the backed up data. For example, if 20
Windows workstations were backed up to the same archive file, they might share a common
set of system files. The archive file only needs to store one copy of those files to be able to
restore any one of those workstations. This technique can be applied at the file level or even
on raw blocks of data, potentially resulting in a massive reduction in required storage
space.[45] Deduplication can occur on a server before any data moves to backup media,
sometimes referred to as source/client-side deduplication. This approach also reduces the
bandwidth required to send backup data to its target media. The process can also occur at
the target storage device, sometimes referred to as inline or back-end deduplication. (A
hash-based deduplication sketch appears at the end of this section.)
Duplication
Sometimes backup jobs are duplicated to a second set of storage media. This can be done
to rearrange the backup images to optimize restore speed or to have a second copy at a
different location or on a different storage medium.
Encryption
High-capacity removable storage media such as backup tapes present a data security risk if
they are lost or stolen.[46] Encrypting the data on these media can mitigate this problem, but
presents new problems. Encryption is a CPU intensive process that can slow down backup
speeds, and the security of the encrypted backups is only as effective as the security of the
key management policy.[45]
Multiplexing
When there are many more computers to be backed up than there are destination storage
devices, the ability to use a single storage device with several simultaneous backups can be
useful.[47]
Refactoring
The process of rearranging the backup sets in a archive file is known as refactoring. For
example, if a backup system uses a single tape each day to store the incremental backups
for all the protected computers, restoring one of the computers could potentially require
many tapes. Refactoring could be used to consolidate all the backups for a single computer
onto a single tape. This is especially useful for backup systems that do incrementals forever
style backups.
Staging
Sometimes backup jobs are copied to a staging disk before being copied to tape.[47] This
process is sometimes referred to as D2D2T, an acronym for Disk to Disk to Tape. This can
be useful if there is a problem matching the speed of the final destination device with the
source device as is frequently faced in network-based backup systems. It can also serve as
a centralized location for applying other data manipulation techniques.
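The deduplication sketch referenced above: a file-level, content-hash store in which each unique file body is written once and a catalog maps source paths to stored chunks. The names and layout are illustrative; production systems usually deduplicate at block level and also handle indexing, hash collisions, and concurrency.

import hashlib
import os
import shutil

def deduplicating_store(sources, store_dir):
    """Store each unique file exactly once, keyed by its content hash, and
    return a catalog mapping original paths to stored chunk names."""
    os.makedirs(store_dir, exist_ok=True)
    catalog = {}
    for path in sources:
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1024 * 1024), b""):
                h.update(chunk)
        digest = h.hexdigest()
        stored = os.path.join(store_dir, digest)
        if not os.path.exists(stored):      # only the first copy is written
            shutil.copyfile(path, stored)
        catalog[path] = digest
    return catalog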

Managing the backup process


As long as new data are being created and changes are being made, backups will need to be performed at frequent intervals.
Individuals and organizations with anything from one computer to thousands of computer systems all require protection of data. The
scales may be very different, but the objectives and limitations are essentially the same. Those who perform backups need to know
how successful the backups are, regardless of scale.

Objectives

Recovery point objective (RPO)


The point in time that the restarted infrastructure will reflect. Essentially, this is the roll-back
that will be experienced as a result of the recovery. The most desirable RPO would be the
point just prior to the data loss event. Making a more recent recovery point achievable
requires increasing the frequency of synchronization between the source data and the
backup repository.[48][49] (A small sketch of an RPO check appears at the end of this section.)
Recovery time objective (RTO)
The amount of time elapsed between disaster and restoration of business functions.[50]
Data security
In addition to preserving access to data for its owners, data must be restricted from
unauthorized access. Backups must be performed in a manner that does not compromise
the original owner's undertaking. This can be achieved with data encryption and proper
media handling policies.[51]
Data retention period
Regulations and policy can lead to situations where backups are expected to be retained for
a particular period, but not any further. Retaining backups after this period can lead to
unwanted liability and sub-optimal use of storage media.[51]
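The RPO check referenced above is simple arithmetic: the newest recovery point must be no older than the objective. A minimal sketch, with a 24-hour RPO as an arbitrary example:

from datetime import datetime, timedelta

def meets_rpo(last_backup_time, rpo=timedelta(hours=24)):
    """True if the newest recovery point is recent enough to satisfy the RPO."""
    return datetime.now() - last_backup_time <= rpo

# With one backup per day, the worst-case roll-back is just under 24 hours.
print(meets_rpo(datetime.now() - timedelta(hours=30)))  # False: RPO violated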

Limitations
An effective backup scheme will take into consideration the following situational limitations:[52]

Backup window
The period of time when backups are permitted to run on a system is called the backup
window. This is typically the time when the system sees the least usage and the backup
process will have the least amount of interference with normal operations. The backup
window is usually planned with users' convenience in mind. If a backup extends past the
defined backup window, a decision is made whether it is more beneficial to abort the backup
or to lengthen the backup window.
Performance impact
All backup schemes have some performance impact on the system being backed up. For
example, for the period of time that a computer system is being backed up, the hard drive is
busy reading files for the purpose of backing up, and its full bandwidth is no longer available
for other tasks. Such impacts should be analyzed.
Costs of hardware, software, labor
All types of storage media have a finite capacity with a real cost. Matching the correct
amount of storage capacity (over time) with the backup needs is an important part of the
design of a backup scheme. Any backup scheme has some labor requirement, but
complicated schemes have considerably higher labor requirements. The cost of commercial
backup software can also be considerable.
Network bandwidth
Distributed backup systems can be affected by limited network bandwidth.

Implementation
Meeting the defined objectives in the face of the above limitations can be a difficult task. The tools and concepts below can make that
task more achievable.

Scheduling
Using a job scheduler can greatly improve the reliability and consistency of backups by
removing part of the human element. Many backup software packages include this
functionality.
Authentication
Over the course of regular operations, the user accounts and/or system agents that perform
the backups need to be authenticated at some level. The power to copy all data off of or onto
a system requires unrestricted access. Using an authentication mechanism is a good way to
prevent the backup scheme from being used for unauthorized activity.
Chain of trust
Removable storage media are physical items and must only be handled by trusted
individuals. Establishing a chain of trusted individuals (and vendors) is critical to defining the
security of the data.

Measuring the process


To ensure that the backup scheme is working as expected, the following best practices should be acted upon:[53][54][55]

Backup validation
(also known as "backup success validation") Provides information about the backup, and
proves compliance to regulatory bodies outside the organization: for example, an insurance
company in the USA might be required under HIPAA to demonstrate that its client data meet
records retention requirements.[56] Disaster, data complexity, data value and increasing
dependence upon ever-growing volumes of data all contribute to the anxiety around and
dependence upon successful backups to ensure business continuity. Thus many
organizations rely on third-party or "independent" solutions to test, validate, and optimize
their backup operations (backup reporting).
Reporting
In larger configurations, reports are useful for monitoring media usage, device status, errors,
vault coordination and other information about the backup process.
Logging
In addition to the history of computer generated reports, activity and change logs are useful
for monitoring backup system events.
Validation
Many backup programs use checksums or hashes to validate that the data was accurately
copied. These offer several advantages. First, they allow data integrity to be verified without
reference to the original file: if the file as copied to the archive file has the same checksum
as the saved value, then it is very probably correct. Second, some backup programs can use
checksums to avoid making redundant copies of files, and thus improve backup speed. This
is particularly useful for the de-duplication process.
Monitored backup
Backup processes can be monitored locally via a software dashboard or by a third party
monitoring center. Both alert users to any errors that occur during automated backups. Some
third-party monitoring services also allow collection of historical metadata, that can be used
for storage resource management purposes like projection of data growth and locating
redundant primary storage capacity and reclaimable backup capacity.
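The checksum-based verification referenced above, as a minimal sketch: digests are computed in a streaming fashion and an archived copy is compared against the value recorded at backup time, so the original file is not needed.

import hashlib

def sha256_of(path, chunk_size=1024 * 1024):
    """Stream a file and return its SHA-256 digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_against_catalog(archived_copy, recorded_digest):
    """Validate an archived copy against the checksum saved at backup time,
    without needing access to the original file."""
    return sha256_of(archived_copy) == recorded_digest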

Enterprise client-server backup


"Enterprise client-server" backup software describes a class of software applications
that back up data from a variety of client computers centrally to one or more server
computers, with the particular needs of enterprises in mind. They may employ a
scripted client–server[57] backup model[58] with a backup server program running
on one computer, and with small-footprint client programs (referred to as "agents" in
some applications) running on the other computer(s) being backed up—or
alternatively as another process on the same computer as the backup server program.
Enterprise-specific requirements[58] include the need to back up large amounts of data on a systematic basis, to adhere to legal
requirements for the maintenance and archiving of files and data, and to satisfy short-recovery-time objectives. To satisfy these
requirements (which World Backup Day (31 March)[59][60][61] highlights), it is typical for an enterprise to appoint a backup
administrator—who is a part of office administration rather than of the IT staff and whose role is "being the keeper of the data".[62]
[Image: a computer sends its data to a backup server during a scheduled backup window.]

In a client-server backup application, the server program initiates the backup activity performed by the client program.[1] This is distinct from a
"personal" backup application such as Apple's Time Machine, in which "Time Machine runs on each Mac, independently of any other
Macs, whether they're backing-up to the same destination or a different one."[63] If the backup server and client programs are
running on separate computers, they are connected in either a single-platform or mixed-platform network. The client-server backup
model originated when magnetic tape was the only financially feasible storage medium for doing backups of multiple computers
onto a single archive file;[note 1][note 2][64] because magnetic tape is a sequential access medium, it was imperative (barring
"multiplexed backup") that the client computers be backed up one at a time—as initiated by the backup server program.

What is described in the preceding paragraph is the "two-tier" configuration (in one application's diagram, the second-tier backup
server program is named "server" preceded by the name of the application, and first-tier "agents" are backing up interactive server
applications). That configuration controls the backup server program via either an integrated GUI or a separate Administration
Console. In some client-server backup applications, a "three-tier" configuration splits off the backup and restore functions of the
server program to run on what are called media servers—computers to which devices containing archive files are attached either
locally or as Network-attached storage (NAS). In those applications the decision on which media server a script is to run on is
controlled using another program called either a master server[65] or an optional central admin server.[66]

Performance
The steady improvement in hard disk drive price per byte has made feasible a disk-to-disk-to-tape strategy, combining the speed of
disk backup and restore with the capacity and low cost of tape for offsite archival and disaster recovery purposes.[67] This, with file
system technology, has led to features suited to optimization, such as:

Improved disk-to-disk-to-tape capabilities


Enable automated transfers to tape for safe offsite storage of disk archive files that were
created for fast onsite restores.[68][69][70]
Create synthetic full backups
For example, onto tapes from existing disk archive files—by copying multiple backups of the
same source(s) from one archive file to another. This is termed a "synthetic full backup"
because, after the transfer, the destination archive file contains the same data it would after
full backups.[68][71][72] One application can exclude[note 3] files and folders from the synthetic
full backup.[15]

Automated data grooming


Frees up space on disk archive files by removing out-of-date backup data—usually based on
an administrator-defined retention period.[61][67][68][73][74][75][note 4] One method of removing
data is to keep the last backup of each day/week/month for the last respective
week/month/specified-number-of-months, permitting compliance with regulatory
requirements (a sketch of this kind of retention rule appears at the end of this section).[76]
One application has a "performance-optimized grooming" mode that only removes outdated
information from an archive file that it can quickly delete.[77] This is the only mode of
grooming allowed for cloud archive files, and is also up to 5 times as fast when used on
locally stored disk archive files. The "storage-optimized grooming" mode reclaims more
space because it rewrites the archive file; in this application it also permits exclusion—
compliant with the GDPR "right of erasure"[78]—via rules[note 3] that can instead be used
for other filtering.[79]
Multithreaded backup server
Capable of simultaneously performing multiple backup, restore, and copy operations in
separate "activity threads" (once needed only by those who could afford multiple tape
drives).[58][80][81] In one application, all the categories of information for a particular "backup
server" are stored by it; when an "Administration Console" process is started, its process
synchronizes information with all running LAN/WAN backup servers.[64]
Block-level incremental backup
The ability to back up only the blocks of a file that have changed, a refinement of incremental
backup that saves space[82][83][84] and may save time.[58][85] Such partial file copying is
especially applicable to a database.
"Instant" scanning of client volumes
Uses the USN Journal on Windows NTFS and FSEvents on macOS to reduce the scanning
component[78] time on both incremental backups, fitting more sources into the scheduled
backup window,[58][86][87] and on restores.[88]
Cramming or evading the scheduled backup window
One application has the "multiplexed backup" capability of cramming the scheduled backup
window by sending data from multiple clients to a single tape drive simultaneously; "this is
useful for low end clients with slow throughput ... [that] cannot send data fast enough to keep
the tape drive busy .... will reduce the performance of restores."[80] Another application
allows an enterprise that has computers transiently connecting to the network over a long
workday to evade the scheduled window by using Proactive scripts.
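The grooming sketch referenced above: a keep-last-of-day/week/month retention rule applied to a list of backup dates. The retention counts are arbitrary examples, not taken from any product.

from datetime import date, timedelta

def grooming_plan(backup_dates, daily_keep=7, weekly_keep=4, monthly_keep=12):
    """Return (keep, remove) for a keep-last-of-day/week/month retention rule.
    backup_dates: iterable of datetime.date objects, one per completed backup."""
    dates = sorted(set(backup_dates))
    keep = set(dates[-daily_keep:])              # most recent daily backups
    last_of_week, last_of_month = {}, {}
    for d in dates:
        iso_year, iso_week, _ = d.isocalendar()
        last_of_week[(iso_year, iso_week)] = d   # last backup of its ISO week
        last_of_month[(d.year, d.month)] = d     # last backup of its month
    keep.update(sorted(last_of_week.values())[-weekly_keep:])
    keep.update(sorted(last_of_month.values())[-monthly_keep:])
    return keep, [d for d in dates if d not in keep]

# Example: three months of daily backups, groomed down to recent and milestone copies.
history = [date(2023, 1, 1) + timedelta(days=i) for i in range(90)]
keep, remove = grooming_plan(history)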

Source file integrity

Backing up interactive applications via pausing


Interactive applications can be protected by having their services paused while their live data
is being backed up, and then unpaused.[89] Alternatively, the backup application can back up
a snapshot initiated at a natural pause.[15][78] Some enterprise backup applications
accomplish pausing and unpausing of services via built-in provisions—for many specific
databases and other interactive applications—that become automatically part of the backup
software's script execution; these provisions may be purchased separately.[90][91][15]
However another application has also added "script hooks" that enable the optional
automatic execution—at specific events during runs of a GUI-coded backup script—of
portions of an external script containing commands pre-written in a standard scripting
language.[78] For some databases—such as MongoDB and MySQL—that can be run on
filesystems that do not support snapshots, the external script can pause writing during
backup.[78] Since the external script is provided by an installation's backup administrator, its
code activated by the "script hooks" may accomplish not only data protection—via
pausing/unpausing interactive services—[78]but also integration with monitoring systems.[92]
Backing up interactive applications via coordinated snapshots
Some interactive applications such as databases must have all portions of their component
files coordinated while their live data is being backed up. One database system—
PostgreSQL—can do this via its own "snapshotting" MVCC running on filesystems that do
not support snapshots, and can therefore be backed up without pausing using an external
script containing commands that use "script hooks".[78] Another equivalent approach is to
use some filesystems' capability of taking a snapshot, and to back up the snapshot without
pausing the application itself. An enterprise backup application using filesystem snapshotting
can be used either to back up all user applications running on a virtual machine[93][94] or to
back up a particular interactive application that directly uses its filesystem's snapshot
capability.[15] Conceptually this approach can still be considered client-server backup; the
snapshotting capability by itself constitutes the client, and the backup server runs as a
separate process that initiates and then reads the snapshot on the
machine that generated it. The software installed on each machine to be backed up is
referred to as an "agent"; if "agents" are being used to back up all user applications running
on a virtual machine, one or more such "agents" are controlled by a console.[95][94]

User interface
To accommodate the requirements of a backup administrator who may not be part of the IT staff with access to the secure server area,
enterprise client-server software may include features such as:

Administration Console
The backup administrator's backup server GUI management and near-term reporting tool.[54]
Its window shows the selected backup server, with a standard toolbar on top. A sidebar on
the left or navigation bar shows the clickable categories of backup server information for it;
each category shows a panel, which may have a specialized toolbar below or in place of the
standard toolbar. The built-in categories include activities—thus providing monitored backup,
past backups of each individual source, scripts/policies/jobs (terminology depending on the
application), sources (directly/indirectly), archive files, and storage devices.[92][96][97]
User-initiated backups and restores
These supplement the administrator-initiated backups and restores which backup
applications have always had, and relieve the administrator of time-consuming tasks.[62] The
user designates the date of the past backup from which files or folders are to be restored—
once IT staff has mounted the proper volume(s) of the relevant archive file on the backup
server.[67][92][98][99]
High-level/medium-term reports supplementing the Administration Console[54]
Within one application's Console panel displayed by clicking the name of the backup server
itself in the sidebar, an activities pane on the top left of the displayed Dashboard has a
moving bar graph for each activity going on for the backup server together with a pause and
stop button for the activity. Three more backup validation panes give the results of activities
in the past week: backups each day, sources backed up, and sources not backed up; as of
2019 the last two panes—together with failed backups—are summarized in an additional
color-coded bullseye pane.[100] Finally a storage reporting pane has a line for each archive
file, showing the last-modified date and depictions of the total bytes used and
available;[82][92] as of 2019 this is supplemented by a pane that gives a linear-regression
prediction for growth of each archive file.[100] For the application's Windows variant, the
Dashboard acts as a display-only substitute for a non-existent Console[15]—but was
upgraded in 2019 into an optional two-way Web-based Management Console.[78] Other
applications have a separate reporting and monitored backup facility that can cover multiple
backup servers.[101][102]
E-mailing of notifications about operations to chosen recipients[54]
Can alert the recipient to, e.g., errors or warnings, including extracts of logging to assist in
pinpointing problems.[15][101][103]
Integration with monitoring systems[54]
Such systems provide longer-term backup validation. One application's administrators can
deploy custom scripts that—invoking webhook code via script hooks—populate such
systems as the freeware Nagios and IFTTT and the freemium Slack with script successes
and failures corresponding to the activities category of the Console, per-source backup
information corresponding to the past backups category of the Console, and media
requests.[92] Another application has integration with two of the developer's monitoring
systems, one that is part of the client-server backup application and one that is more
generalized.[101] Yet another application has integration with a monitoring system that is part
of the client-server backup application,[104] but can also be integrated with Nagios.[105]

LAN/WAN/Cloud

Advanced network client support


All applications include support for multiple network interfaces.[58][106][107] However one
application, unless deduplication is done by a separate sub-application between the client
and the backup server, cannot provide "resilient network connections" for machines on a
WAN.[108] One application can extend support to "remote" clients anywhere on the Internet
for a Proactive script and for user-initiated backups/restores.[78]
Cloud seeding and Large-Scale Recovery
Because of a large amount of data already backed up,[58] an enterprise adopting cloud
backup likely will need to do "seeding". This service uses a synthetic full backup to copy a
large locally-stored archive file onto a large-capacity disk device, which is then physically
shipped to the cloud storage site and uploaded.[109][110] After the large initial upload, the
enterprise's backup software may facilitate reconfiguration for writing to and reading from the
archive file incrementally in its cloud location.[111] The service may need to be employed in
reverse for faster large-scale data recovery times than would be possible via an Internet
connection.[109] Some applications offer seeding and large-scale recovery via third-party
services, which may use a high-speed Internet channel to/from cloud storage rather than a
shippable physical device.[112][113]

See also
About backup

Backup software

List of backup software


Glossary of backup terms
Remote backup service
Virtual backup appliance

Related topics

Data consistency
Data degradation
Data proliferation
Database dump
Digital preservation
Disaster recovery and business continuity auditing
File synchronization
Information repository

Notes
1. In contrast to everyday use of the term "archive", the data stored in an "archive file" is not necessarily old or of
historical interest.
2. Several client-server applications use the term "archiving" to describe a backup operation that deletes data from a
client source once the data's backup is complete. Bokelman, Seth (26 February 2012). "what is archiving in
Netbackup?" (https://vox.veritas.com/t5/NetBackup/what-is-archiving-in-Netbackup/m-p/490153#M112727). VOX.
Veritas Technologies LLC. Retrieved 13 May 2018. "Retrospect ® 14.0 Mac User's Guide" (http://download.retrospect.com/docs/mac/v14/user_guide/Retrospect_Mac_User_Guide-EN.pdf) (PDF). Retrospect. Retrospect Inc. March
2017. pp. 124-126 (Archiving). Retrieved 28 March 2017. "Backup Exec Archiving Option is no longer supported for
Backup Exec 15 Feature Pack 1" (https://www.veritas.com/support/en_US/article.100023956). Veritas Support.
Veritas Technologies LLC. 30 June 2015. Retrieved 13 May 2018.
3. Exclusion and/or inclusion is done with Selectors in the Windows variant; this misleading term has been changed to
Rules in the Macintosh variant.
4. Some backup applications—notably rsync and CrashPlan—term removing backup data "pruning" instead of
"grooming". [1] (https://linux.die.net/man/1/rsync) [2] (https://support.code42.com/Administrator/5/Monitoring_and_managing/Archive_maintenance#Prune)

References
1. Kissell, Joe (2007). Take Control of Mac OS X Backups (http://people.fas.harvard.edu/~techtool/pages/Take_Control_of_Mac_OS_X_Backups_(2.0).pdf) (PDF) (Version 2.0 ed.). Ithaca, NY: TidBITS Electronic Publishing. pp. 18-20
(The Archive), 24 (client-server), 126-141 (old Retrospect terminology and GUI—still used in Windows variant), 165
(client-server), 128 (subvolume—later renamed Favorite Folder in Macintosh variant). ISBN 0-9759503-0-4.
Retrieved 22 September 2017.
2. "back•up" (https://www.ahdictionary.com/word/search.html?q=backup). The American Heritage Dictionary of the
English Language. Houghton Mifflin Harcourt. 2018. Retrieved 9 May 2018.
3. Global Backup Survey (http://www.kabooza.com/globalsurvey.html) Archived (https://web.archive.org/web/20100327
235844/http://www.kabooza.com/globalsurvey.html) 27 March 2010 at theWayback Machine. Retrieved 15 February
2009
4. Nelson, S. (2011). "Chapter 1: Introduction to Backup and Recovery".Pro Data Backup and Recovery(https://books.
google.com/books?id=r4uEEsq3CJYC&printsec=frontcover) . Apress. pp. 1–16. ISBN 978-1-4302-2663-5. Retrieved
8 May 2018.
5. Cougias, D.J.; Heiberger, E.L.; Koop, K. (2003). "Chapter 1: What's a Disaster Without a Recovery?".The Backup
Book: Disaster Recovery from Desktop to Data Center(https://books.google.com/books?id=eLviiT ag5A0C&pg=PA1).
Network Frontiers. pp. 1–14.ISBN 0-9729039-0-9.
6. Dean, T. (2009). "Chapter 14: Ensuring Integrity and Availability". CompTIA Network+ 2009 in Depth(https://books.g
oogle.com/books?id=1QEMAAAAQBAJ&pg=P A602). Cengage Learning. pp. 571–614.ISBN 978-1-59863-878-3.
Retrieved 8 May 2018.
7. "Five key questions to ask about your backup solution"(http://sysgen.ca/five-key-backup-questions/). sysgen.ca.
Archived (https://web.archive.org/web/20160304042343/http://sysgen.ca/five-key-backup-questions/) from the
original on 4 March 2016. Retrieved 23 September 2015.
8. Incremental Backup (http://www.tech-faq.com/incremental-backup.shtml) Archived (https://web.archive.org/web/2016
0621090117/http://www.tech-faq.com/incremental-backup.shtml) 21 June 2016 at the Wayback Machine. Retrieved
10 March 2006
9. Leon, A. (2015). Software Configuration Management Handbook(https://books.google.com/books?id=pYcTBwAAQ
BAJ&pg=PA65). Artech House. p. 65. ISBN 978-1-60807-844-8. Retrieved 8 May 2018.
10. Continuous Protection white paper(http://www.sertdatarecovery.com/business-data-backup-disaster-recovery-planni
ng-resource.html) Archived (https://web.archive.org/web/20160304072358/http://www.sertdatarecovery.com/busines
s-data-backup-disaster-recovery-planning-resource.html)4 March 2016 at the Wayback Machine. (1 October 2005).
Retrieved 10 March 2007
11. Disk to Disk Backup versus Tape – War or Truce? (http://www.storagesearch.com/engenio-art2.html) Archived (http
s://web.archive.org/web/20160712235906/http://www .storagesearch.com/engenio-art2.html)12 July 2016 at the
Wayback Machine (9 December 2004). Retrieved 10 March 2007
12. Coughlin, Tom (29 June 2014). "Keeping Data for a Long Time" (https://www.forbes.com/sites/tomcoughlin/2014/06/2
9/keeping-data-for-a-long-time/). Forbes. Forbes Media LLC. para. Magnetic T apes(popular formats, storage life),
para. Hard Disk Drives(active archive), para. First consider flash memory in archiving(... may not have good media
archive life). Retrieved 19 April 2018.
13. "Digital Data Storage Outlook 2017"(https://spectralogic.com/wp-content/uploads/white-paper-digital-data-storage-o
utlook-2017-v3.pdf) (PDF). Spectra. Spectra Logic. 2017. p. 14(Tape). Retrieved 11 July 2018.
14. "Bye Bye Tape, Hello 5.3TB eSATA" (http://www.tomshardware.com/2007/04/18/bye_bye_tape/). Retrieved 22 April
2007.
15. "Retrospect ® 12 Windows User's Guide"(http://download.retrospect.com/docs/win/v12/user_guide/Retrospect_Win
_User_Guide-EN.pdf) (PDF). Retrospect. Retrospect Inc. 2017. pp. 30-31(deduplication via "Snapshots"—a
Retrospect term which predates and is distinct fromSnapshot_(computer_storage)), 31-32(Dashboard), 41-
43(removable disk drives), 216-218(selector as subset filter for synthetic full backups), 230-233(Scripted
Verification), 280(Multiple Executions), 369(Duplicate Execution Options), 420(Startup Preferences—Launcher for
auto-launch), 426-427(E-mail), 433-434(Open File Backup iTps—VSS snapshot at natural pause), 530-544(SQL
Server Agent—coordinating VSS snapshot), 545-566(Exchange Server Agent—coordinating VSS snapshot) .
Retrieved 2 September 2018.
16. "Symantec Shows Backup Exec a Little Dedupe Love; Lays out Source Side Deduplication Roadmap – DCIG" (htt
p://www.dcig.com/2009/07/symantec-shows-backup-exec-a-l.html). DCIG. Archived (https://web.archive.org/web/201
60304212819/http://www.dcig.com/2009/07/symantec-shows-backup-exec-a-l.html)from the original on 4 March
2016. Retrieved 26 February 2016.
17. "Veritas NetBackup™ Deduplication Guide"(https://www.veritas.com/content/support/en_US/doc/ka6j00000000ADE
AA2). Veritas. Veritas Technologies LLC. 2016. Retrieved 26 July 2018.
18. Jacobi, John L. (29 February 2016)."Hard-core data preservation: The best media and methods for archiving your
data" (https://www.pcworld.com/article/2984597/storage/hard-core-data-preservation-the-best-media-and-methods-f
or-archiving-your-data.html). PC World. sec. External Hard Drives(on the shelf, magnetic properties, mechanical
stresses, vulnerable to shocks). Retrieved 19 April 2018.
19. "Ramp Load/Unload Technology in Hard DiskDrives" (https://www.hgst.com/sites/default/files/resources/LoadUnload
_white_paper_FINAL.pdf)(PDF). HGST. Western Digital. November 2007. p. 3(sec. Enhanced Shock Tolerance).
Retrieved 29 June 2018.
20. "Toshiba Portable Hard Drive (Canvio® 3.0)"(https://www.toshibadata.com.sg/Product-Canvio-Portable-Hard-Drive.a
spx). Toshiba Data Dynamics Singapore. Toshiba Data Dynamics Pte Ltd. 2018. sec.Overview(Internal shock
sensor and ramp loading technology). Retrieved 16 June 2018.
21. "Iomega ® Drop Guard ™ Technology" (https://www.doc-developpement-durable.org/file/Projets-informatiques/Dro
p%20Guard-disque-dur-tres-solide.pdf)(PDF). Hard Drive Storage Solutions. Iomega Corp. 20 September 2010.
pp. 2(What is Drop Shock Technology?, What is Drop Guard Technology? (... features special internal cushioning ....
40% above the industry average)), 3(*NOTE). Retrieved 12 July 2018.
22. Burek, John (15 May 2018)."The Best Rugged Hard Drives and SSDs"(https://www.pcmag.com/roundup/361072/th
e-best-rugged-hard-drives-and-ssds). PC Magazine. Ziff Davis. What Exactly Makes a Drive Rugged?(When a drive
is encased ... you're mostly at the mercy of the drive vendor to tell you the rated maximum drop distance for the
drive). Retrieved 4 August 2018.
23. Krajeski, Justin; Streams, Kimber (20 March 2017). "The Best Portable Hard Drive" (https://web.archive.org/web/201
70331161821/http://thewirecutter.com/reviews/best-portable-hard-drive/#dont-buy-a-rugged-portable-hard-drive).
The New York Times. Archived from the original on 31 March 2017. Retrieved 4 August 2018.
24. "Best Long-Term Data Archive Solutions"(http://www.ironmountain.com/resources/general-articles/b/best-long-term-
data-archive-solutions). Iron Mountain. Iron Mountain Inc. 2018. sec. More Reliable(average mean time between
failure ... rates, best practice for migrating data)
. Retrieved 19 April 2018.
25. Wan, S.; Cao, Q.; Xie, C. (2014). "Optical storage: An emerging option in long-term digital preservation".Frontiers of
Optoelectronics. 7 (4): 486–492. doi:10.1007/s12200-014-0442-2(https://doi.org/10.1007%2Fs12200-014-0442-2) .
26. Zhang, Q.; Xia, Z.; Cheng, Y.-B.; Gu, M. (2018). "High-capacity optical long data memory based on enhanced
Young's modulus in nanoplasmonic hybrid glass composites". Nature Communications. 9: 1183.
doi:10.1038/s41467-018-03589-y(https://doi.org/10.1038%2Fs41467-018-03589-y) .
27. Poirier, Gérard; Berahou, Foued (3 March 2008). "Journal de 20 Heures"(http://www.ina.fr/video/3571726001/20-he
ures-emission-du-3-mars-2008.fr.html). Institut national de l'audiovisuel. approximately minute 30 of the TV news
broadcast. Retrieved 3 March 2008.
28. "Archival Gold CD-R "300 Year Disc" Binder of 10 Discs with Scratch Armor Surface"(https://web.archive.org/web/20
130927170900/http://delkin.com/i-5937134-archival-gold-cd-r-300-year-disc-binder-of-10-discs-with-scratch-armor-s
urface.html). Delkin Devices. Delkin Devices Inc. Archived fromthe original (http://delkin.com/i-5937134-archival-gol
d-cd-r-300-year-disc-binder-of-10-discs-with-scratch-armor-surface.html) on 27 September 2013.
29. Micheloni, R.; Olivo, P. (2017). "Solid-State Drives (SSDs)" (https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=8013049). Proceedings of the IEEE. 105 (9): 1586–88. doi:10.1109/JPROC.2017.2727228 (https://doi.org/10.1109%2FJPROC.2017.2727228). Retrieved 8 May 2018.
30. "Remote Backup" (https://www.emc.com/corporate/glossary/remote-backup.htm). EMC Glossary. Dell, Inc. Retrieved 8 May 2018.
31. Stackpole, B.; Hanrion, P. (2007). Software Deployment, Updating, and Patching (https://books.google.com/books?id=gjAhVzuV7k0C&pg=PA164). CRC Press. pp. 164–165. ISBN 978-1-4200-1329-0. Retrieved 8 May 2018.
32. Gnanasundaram, S.; Shrivastava, A., eds. (2012). Information Storage and Management: Storing, Managing, and Protecting Digital Information in Classic, Virtualized, and Cloud Environments (https://books.google.com/books?id=PU7gkW9ArxIC&pg=PA255). John Wiley and Sons. p. 255. ISBN 978-1-118-23696-3. Retrieved 8 May 2018.
33. Lees, D. (25 January 2017). "What to backup – a critical look at your data" (https://irontree.co.za/what-to-backup-a-critical-look-at-your-data-1935.html). Irontree Blog. Irontree Internet Services CC. Retrieved 8 May 2018.
34. Preston, W.C. (2007). Backup & Recovery: Inexpensive Backup Solutions for Open Systems (https://books.google.com/books?id=6-w4fXbBInoC&pg=PA111). O'Reilly Media, Inc. pp. 111–114. ISBN 978-0-596-55504-7. Retrieved 8 May 2018.
35. Preston, W.C. (1999). Unix Backup & Recovery (https://books.google.com/books?id=_i1sO47qNnMC&pg=PA73). O'Reilly Media, Inc. pp. 73–91. ISBN 978-1-56592-642-4. Retrieved 8 May 2018.
36. Wayback: A User-level Versioning File System for Linux (http://www.aqualab.cs.northwestern.edu/publications/Cornell04VFS.html) Archived (https://web.archive.org/web/20070406204849/http://www.aqualab.cs.northwestern.edu/publications/Cornell04VFS.html) 6 April 2007 at the Wayback Machine (2004). Retrieved 10 March 2007.
37. Staimer, Marc (2011). "Using different types of storage snapshot technologies for data protection" (https://searchdatabackup.techtarget.com/tip/Using-different-types-of-storage-snapshot-technologies-for-data-protection). TechTarget. TechTarget Inc. Retrieved 4 December 2018.
38. Liotine, M. (2003). Mission-critical Network Planning (https://books.google.com/books?id=LecC2BhPPxMC&pg=PA244). Artech House. p. 244. ISBN 978-1-58053-559-5. Retrieved 8 May 2018.
39. de Guise, P. (2008). Enterprise Systems Backup and Recovery: A Corporate Insurance Policy (https://books.google.com/books?id=2OtqvySBTu4C&pg=PA50). CRC Press. pp. 50–54. ISBN 978-1-4200-7640-0.
40. "Open File Backup Software for Windows" (https://www.handybackup.net/open-file-backup.shtml). Handy Backup. Novosoft LLC. 8 November 2018. Retrieved 29 November 2018.
41. Reitshamer, Stefan (5 July 2017). "Troubleshooting backing up open/locked files on Windows" (https://www.arqbackup.com/blog/troubleshooting-backing-up-openlocked-files-on-windows/). Arq Blog. Haystack Software. Stefan Reitshamer is the principal developer of Arq. Retrieved 29 November 2018.
42. Boss, Nina (10 December 1997). "Oracle Tips Session #3: Oracle Backups" (https://web.archive.org/web/20070302110933/http://www.wisc.edu/drmt/oratips/sess003.html#Hotbackup). www.wisc.edu. University of Wisconsin. Retrieved 1 December 2018.
43. "What is ARCHIVE-LOG and NO-ARCHIVE-LOG mode in Oracle and the advantages & disadvantages of these modes?" (https://support.arcserve.com/s/article/202080249?language=en_US). Arcserve Backup. Arcserve. 27 September 2018. Retrieved 29 November 2018.
44. Grešovnik, Igor (April 2016). "Preparation of Bootable Media and Images" (https://web.archive.org/web/20160425113119/http://www2.arnes.si/~ljc3m2/igor/blogs/technical/bootable_media_creation.html). Archived from the original (http://www2.arnes.si/~ljc3m2/igor/blogs/technical/bootable_media_creation.html) on 25 April 2016. Retrieved 21 April 2016.
45. Cherry, D. (2015). Securing SQL Server: Protecting Your Database from Attackers (https://books.google.com/books?id=SD_LAwAAQBAJ&pg=PA306). Syngress. pp. 306–308. ISBN 978-0-12-801375-5. Retrieved 8 May 2018.
46. Backups tapes a backdoor for identity thieves (http://www.securityfocus.com/news/11048) Archived (https://web.archive.org/web/20160405033517/http://www.securityfocus.com/news/11048) 5 April 2016 at the Wayback Machine (28 April 2004). Retrieved 10 March 2007.
47. Preston, W.C. (2007). Backup & Recovery: Inexpensive Backup Solutions for Open Systems (https://books.google.com/books?id=6-w4fXbBInoC&pg=PA219). O'Reilly Media, Inc. pp. 219–220. ISBN 978-0-596-55504-7. Retrieved 8 May 2018.
48. Definition of recovery point objective (http://www.riskythinking.com/glossary/recovery_point_objective.php) Archived (https://web.archive.org/web/20070513180844/http://www.riskythinking.com/glossary/recovery_point_objective.php) 13 May 2007 at the Wayback Machine. Retrieved 10 March 2007.
49. "Top four things to consider in business continuity planning" (http://sysgen.ca/top-four-things-business-continuity-planning/). sysgen.ca. Archived (https://web.archive.org/web/20160304075050/http://sysgen.ca/top-four-things-business-continuity-planning/) from the original on 4 March 2016. Retrieved 23 September 2015.
50. Definition of recovery time objective (http://www.riskythinking.com/glossary/recovery_time_objective.php) Archived (https://web.archive.org/web/20070516081425/http://www.riskythinking.com/glossary/recovery_time_objective.php) 16 May 2007 at the Wayback Machine. Retrieved 7 March 2007.
51. Little, D.B. (2003). "Chapter 2: Business Requirements of Backup Systems". Implementing Backup and Recovery: The Readiness Guide for the Enterprise (https://books.google.com/books?id=_DqO6kizEDUC&pg=PA17). John Wiley and Sons. pp. 17–30. ISBN 978-0-471-48081-5. Retrieved 8 May 2018.
52. Nelson, S. (2011). "Chapter 9: Putting It All Together: Sample Backup Environments". Pro Data Backup and Recovery (https://books.google.com/books?id=r4uEEsq3CJYC&printsec=frontcover). Apress. pp. 203–246. ISBN 978-1-4302-2663-5. Retrieved 8 May 2018.
53. Akhtar, A.N.; Buchholtz, J.; Ryan, M.; Setty, K. (2012). "Database Backup and Recovery Best Practices" (https://www.isaca.org/Journal/archives/2012/Volume-1/Pages/Database-Backup-and-Recovery-Best-Practices.aspx). ISACA Journal. 1: 1–6. Retrieved 8 May 2018.
54. Dorion, Pierre (June 2008). "Why you need a data backup reporting tool" (http://searchdatabackup.techtarget.com/tip/Why-you-need-a-data-backup-reporting-tool). TechTarget. TechTarget Inc. Retrieved 13 November 2017.
55. Pritchard, S. (December 2017). "Cloud-to-cloud backup: What it is and why you need it" (https://www.computerweekly.com/feature/Cloud-to-cloud-backup-What-it-is-and-why-you-need-it). Computer Weekly. TechTarget. Retrieved 8 May 2018.
56. HIPAA Advisory (http://www.hipaadvisory.com/regs/recordretention.htm) Archived (https://web.archive.org/web/20070411135655/http://www.hipaadvisory.com/regs/recordretention.htm) 11 April 2007 at the Wayback Machine. Retrieved 10 March 2007.
57. Gripman, Stuart (27 March 2012). "Retrospect 9.0: powerful backup for professionals, organizations" (https://www.pcworld.com/article/1166050/retrospect_9_0.html). MacWorld. Scheduling scripts (GUI scripting), Restoring (Proactive priorities). Retrieved 3 November 2017.
58. Rassokhin?, Alexander? (2012). "Enterprise Network Backup Challenges" (http://www.backupschedule.net/enterprise-network-backup.html). All About Backup. Novosoft LLC. Retrieved 13 November 2017.
59. Misener, Dan (29 March 2016). "World Backup Day highlights importance of protecting data" (http://www.cbc.ca/news/technology/world-backup-day-1.3510588). CBC News.
60. Schmoll-Trautmann, Anja (31 March 2017). "World Backup Day: deutliche Lücken zwischen Sicherheitsrisiko und Nutzerverhalten" [World Backup Day: clear gaps between security risk and user behavior] (http://www.zdnet.de/88291257/) (in German). ZDNet.
61. Preimesberger, Chris (31 March 2017). "World Backup Day 2017: 'We Don't Know the Day Nor the Hour'" (http://www.eweek.com/storage/world-backup-day-2017-we-don-t-know-the-day-nor-the-hour). eWeek. QuinStreet. Ian Wood of Veritas. Retrieved 11 November 2017.
62. Dorion, Pierre (4 August 2008). "The true role of a backup administrator" (http://searchdatabackup.techtarget.com/news/1322981/The-true-role-of-a-backup-administrator). TechTarget. TechTarget, Inc. Retrieved 13 November 2017. "On the other hand, the role of a backup administrator should be one of administration, not operation ... whose role is 'being the keeper of the data'"
63. Pond, James (26 August 2013). "Time Machine - FAQs 33. Backing-up multiple Macs" (http://baligu.com/pondini/TM/33.html). baligu.com. James Pond (originally). Retrieved 28 October 2018.
64. Engst, Adam (23 March 2009). "EMC Ships Modernized Retrospect 8" (https://tidbits.com/article/10159). TidBITS. TidBITS Publishing Inc. New Backup Capabilities. Retrieved 3 November 2018.
65. "Backing Up Databases with Veritas NetBackup" (https://gpdb.docs.pivotal.io/520/admin_guide/managing/backup-veritas.html). Pivotal Documentation. Pivotal Software, Inc. 2018. About NetBackup Software. Retrieved 18 January 2019.
66. "Symantec Backup Exec: How CASO Works" (http://backup-exec.helpmax.net/en/symantec-backup-exec-central-ad
min-server-option/how-caso-works/). Helpmax.net. HelpMax Software Help & Shop Inc. Retrieved 18 January 2019.
67. Fernando, Sal (30 April 2008)."Combine disk, tape benefits to protect data"(http://www.zdnet.com/article/combine-di
sk-tape-benefits-to-protect-data/). ZDNet. Retrieved 13 November 2017.
68. "New EMC Dantz Retrospect 7 Improves Data Protection for SMBs and the Distributed Enterprise"
(http://www.emc.
com/about/news/press/us/2005/20050131-2906.htm)
. DellEMC [current]. EMC Corp. [orig. publisher]. 31 January
2005. Retrieved 23 November 2016.
69. "About NetBackup Replication Director"(https://www.veritas.com/support/en_US/doc/59229900-126796169-0/v5807
9997-126796169). Veritas Support. Veritas Technologies LLC (US). 13 July 2017. Retrieved 18 November 2017.
70. "Symantec Backup Exec: About duplicating backed up data"(http://backup-exec.helpmax.net/en/backing-up-data/ab
out-duplicating-backed-up-data/). Helpmax.net. HelpMax Software Help & Shop Inc. Retrieved 13 January 2018.
71. "About synthetic backups"(https://www.veritas.com/content/support/en_US/doc/18716246-126559472-0/id-SF78016
3836-126559472). Veritas Support. Veritas Technologies LLC (US). 25 September 2017. Retrieved 18 November
2017.
72. "Symantec Backup Exec: About the synthetic backup feature"(http://backup-exec.helpmax.net/en/symantec-backup-
exec-advanced-disk-based-backup-option/about-the-synthetic-backup-feature/). Helpmax.net. HelpMax Software
Help & Shop Inc. Retrieved 13 January 2018.
73. Kaczorek, Mariusz (15 August 2015)."NetBackup Storage Lifecycle Policy (SLP): Overview"(https://www.settlersom
an.com/netbackup-storage-lifecycle-policy-slp-overview/)
. Settlersoman. Settlersoman. Retrieved 2 February 2018.
74. Jain, Hemant (14 April 2015). "VOX Knowledge Base: Data Protection Knowledge Base: Data Protection" (https://vox.veritas.com/t5/Articles/Automated-Disk-management-and-Data-retention-in-Backup-Exec-DLM/ta-p/809167). VOX. Veritas Technologies LLC. Retrieved 13 January 2018. "Employee [of Veritas]"
75. Dorion, Pierre (January 2007). "IBM Tivoli Storage Manager vs. traditional backup" (https://searchdatabackup.techtarget.com/tip/IBM-Tivoli-Storage-Manager-vs-traditional-backup). TechTarget. TechTarget Inc. Backup versions. Retrieved 30 October 2018.
76. "Retrospect® 12.0 Mac User's Guide" (http://download.retrospect.com/docs/mac/v12/user_guide/Retrospect_Mac_User_Guide-EN.pdf) (PDF). Retrospect. Retrospect Inc. 2015. pp. 8–9 (Improved Grooming). Retrieved 28 December 2017.
77. Schmitz, Agen (5 March 2016). "Retrospect 13" (https://tidbits.com/article/16311). TidBITS. TidBITS Publishing Inc. Retrieved 27 October 2016.
78. "Support: Knowledge Base"(https://www.retrospect.com/en/support/kb/). Retrospect. Retrospect Inc. 5 March 2019.
#Resources (Auto Launching Guide ..., ... dif
ference between "Backup" and "Duplicate", Avid Support ..., Instant
Scan FAQ, Can't use Open File Backup ...), #Email Backup, #Top Articles (BackupBot – Deep Dive into ProactiveAI,
How to Set Up Remote Backup, GDPR – Deep Dive into Data Retention Policies, Deep Dive - Components of a
Retrospect Backup, How to Set Up the Management Console, Management Console - How to Use Shared Scripts,
How to Use Storage Groups, Support End-of-Life Announcement for Mac OS X 10.3, 10.4, and 10.5), #Hooks (Script
Hooks: External Scripting with Event Handlers, Script Hooks: How to Protect MongoDB with Retrospect, Script
Hooks: How to Protect MySQL with Retrospect, Script Hooks: How to Protect PostgreSQL with Retrospect) .
Retrieved 12 March 2019.
79. Schmitz, Agen (28 May 2018). "Retrospect 15.1.1" (https://tidbits.com/watchlist/retrospect-15-1-1/). TidBITS. TidBITS Publishing Inc. Retrieved 20 June 2018.
80. "What is the difference between multiplexingand multistreaming?" (https://www.veritas.com/support/en_US/article.T
ECH10085). Veritas Support. Veritas Technologies LLC (US). 29 January 2015. Retrieved 19 November 2017.
81. McMillen, Robert (21 July 2015)."How to run concurrent jobs in Backup exec 15"(https://www.youtube.com/watch?v
=1-9x9So038g) (Video). Google. Retrieved 14 January 2018 – via YouTube.
82. Schmitz, Agen (6 March 2014). "Retrospect 11" (https://tidbits.com/article/14573). TidBITS. TidBITS Publishing Inc. Retrieved 27 April 2017.
83. "How Veritas NetBackup block-level incremental backup works for Oracle database files"(https://sort.symantec.com/
public/documents/sfha/6.0/aix/productguides/html/sf_adv_ora/ch21s01s01.htm). Symantec. Veritas Technologies
LLC (US). 2013. Retrieved 18 November 2017.
84. Harbaugh, Logan (Fall 2015)."Developing a Real Backup Plan with Symantec's Backup Exec 15"(https://edtechma
gazine.com/higher/article/2015/10/developing-real-backup-plan-symantecs-backup-exec-15)
. EdTech. CDW LLC.
Retrieved 14 January 2018.
85. Whitehouse, Lauren (September 2008)."The pros and cons of file-level vs. block-level data deduplication
technology" (http://searchdatabackup.techtarget.com/tip/The-pros-and-cons-of-file-level-vs-block-level-data-deduplic
ation-technology). TechTarget. Tech Target Inc. Retrieved 13 November 2017.
86. "About the Accelerator feature in NetBackup 7.5"(https://www.veritas.com/support/en_US/article.000086263).
Veritas Support. Veritas Technologies LLC (US). 10 November 2017. Retrieved 18 November 2017.
87. "Veritas Backup Exec Administrator's Guide:How Backup Exec determines if a file has been backed up"(https://ww
w.veritas.com/content/support/en_US/doc/59226269-99535599-0/v63768146-99535599). Veritas Support. Veritas
Technologies LLC. 11 November 2017. Retrieved 7 February 2018.
88. Engst, Adam (6 November 2012)."Retrospect 10 Reduces Backup Time with Instant Scan Technology" (https://tidbit
s.com/article/13379). TidBITS. TidBITS Publishing Inc. Retrieved 25 October 2016.
89. Rassokhin?, Alexander? (2012)."Enterprise Backup Software: Backup Network W orkstations, Email and Databases"
(http://www.backupschedule.net/enterprise-backup.html). All about Backup. Novosoft LLC. Retrieved 24 January
2018.
90. "Veritas NetBackup ™ 8.0 – 8.x.x Database and Application Agent Compatibility List"(https://www.veritas.com/conte
nt/support/en_US/doc/NB_80_DBSCL). Veritas. Veritas Technologies LLC (US). 17 November 2017. Retrieved
19 November 2017.
91. "Backup Exec TM 16 Agents and Options"(https://www.veritas.com/content/dam/Veritas/docs/data-sheets/be16-age
nts-and-options.pdf) (PDF). Veritas. Veritas Technologies LLC. 2016. Retrieved 14 January 2018.
92. "Retrospect ® 14.0 Mac User's Guide"(http://download.retrospect.com/docs/mac/v14/user_guide/Retrospect_Mac_
User_Guide-EN.pdf) (PDF). Retrospect. Retrospect Inc. March 2017. pp. 8-9(Script Hooks—backing up interactive
applications with pausing and integration with monitoring system), 18-26(Overview of the Retrospect Console), 27-
28(High-level Dashboard—high-level/long-term reports), 29(How Retrospect W orks—Smart Incremental), 31-
33(Media Sets), 73(Adding network shares), 74-75(User-initiated backups and restores), 124-126(Archiving), 168-
169(Email Preferences), 217(Retrospect for iOS) . Retrieved 13 April 2019.
93. Seget, Vladan (20 December 2017). "Veeam Backup and Replication 9.5 U3 Released" (https://www.vladan.fr/veeam-backup-and-replication-9-5-u3-released). ESXVirtualization. Retrieved 20 December 2017.
94. "Retrospect: Retrospect Virtual" (https://www.retrospect.com/en/products/virtual?locale=en). Retrospect.com. Retrospect Inc. 2018. Retrieved 28 October 2018.
95. "Backup & Replication Console" (https://helpcenter.veeam.com/backup/vsphere/remote_console.html). Veeam Help Center. Veeam Software. 5 April 2016. Retrieved 28 October 2018.
96. "Symantec NetBackup™ Administrator's Guide, Volume I Windows" (http://www-personal.umich.edu/~danno/symantec/NetBackup_AdminGuideI_WinServer.pdf) (PDF). Symantec. Veritas Technologies LLC (US). 2012. pp. 35–45 (Administration Console), 833–843 (Activity Monitor), 888–894 (Reports utility), 912 (Remote Administration Console), 915–938 (Java Console). Retrieved 18 November 2017.
97. "Symantec Backup Exec: About the Administration Console"(http://backup-exec.helpmax.net/en/introducing-backup-
exec/about-the-administration-console/). Helpmax.net. HelpMax Software Help & Shop Inc. Retrieved 10 December
2017.
98. "OpsCenter Operational Restore"(https://www.veritas.com/support/en_US/article.100038022). Veritas Support.
Veritas Technologies LLC (US). 12 March 2012. Retrieved 18 November 2017.
99. "How Backup Exec Retrieve works"(http://backup-exec.helpmax.net/en/using-backup-exec-retrieve/how-backup-exe
c-retrieve-works/). Helpmax.net. HelpMax Software Help & Shop Inc. Retrieved 14 January 2018.
100. "Data Hooks: Modular Web Plugins for Retrospect Dashboard"
(https://www.retrospect.com/en/products/data_hooks). Retrospect. Retrospect Inc. 2019. screenshots. Retrieved
14 April 2019.
101. Antony, Erica; Tim Burlowski (January 2008)."NetBackup Operations Manager: Monitoring, Alerting and Reporting
for Veritas NetBackup" (https://vox.veritas.com/t5/Articles/NetBackup-Operations-Manager-Monitoring-Alerting-and-
Reporting/ta-p/806080)(PDF attachment). Symantec. Veritas Technologies LLC (US). pp. 4–5(monitoring), 6–
7(alerting), 7(3rdPartyEventMgmt.), 11–18(reporting) . Retrieved 18 November 2017.
102. "Windows® Enterprise Data Protection with Symantec Backup Exec™"(http://www.r2gen.com.br/images/symantec/
pdf/symantec_protegendo_sua_empresa.pdf)(PDF). Symantec. Veritas Technologies LLC. 2007. pp. 5–8 (CASO).
Retrieved 14 January 2018.
103. "How to configure notification recipients in Backup Exec 12.0 and above"
(https://www.veritas.com/support/en_US/ar
ticle.100016176). Veritas Support. Veritas Technologies LLC. 10 November 2017. Retrieved 15 January 2018.
104. "Veritas Backup Exec Administrator's Guide:About the Job Monitor"(https://www.veritas.com/content/support/en_U
S/doc/59226269-99535599-0/v76313540-99535599) . Veritas Support. Veritas Technologies LLC. 11 November
2017. Retrieved 15 January 2018.
105. "Nagios plugins for monitoring BackupExec"(https://exchange.nagios.org/directory/Plugins/Backup-and-Recovery/B
ackupExec). Nagios Exchange. Nagios Enterprises. Retrieved 15 January 2018.
106. "EMC Announces Retrospect 8.0 Backup and Recovery Software For Mac" (http://www.infotomorrowmag.com/abou
t/news/press/2009/20090106-02.htm). DellEMC [current]. EMC Corp. [orig. publisher]. 6 January 2009. Retrieved
10 November 2016.
107. "Veritas Backup Exec Administrator's Guide:Configuring network options for backup jobs"(https://www.veritas.com/c
ontent/support/en_US/doc/59226269-99535599-0/v96257307-99535599) . Veritas Support. Veritas Technologies
LLC. 17 November 2017. Retrieved 15 January 2018.
108. "Veritas NetBackup™ Deduplication Guide"(https://www.veritas.com/content/support/en_US/doc/ka6j00000000ADE
AA2) (PDF). Veritas. Veritas Technologies LLC (US). 2016. p. 171(Resilient network properties)
. Retrieved
18 November 2017.
109. "What Is an AWS Snowball Appliance?"(https://docs.aws.amazon.com/snowball/latest/ug/whatissnowball.html)
.
AWS. Amazon.com. 2018. Retrieved 8 March 2018.
110. Rouse, Margaret (December 2011)."Definition: cloud seeding"(http://searchdatabackup.techtarget.com/definition/cl
oud-seeding). TechTarget. Tech Target Inc. Retrieved 16 November 2017.
111. "Changing paths Cloud Mac"(https://www.youtube.com/watch?v=Ac3BhXO4T1g) (Video). Retrospect Inc. 29
February 2016. Retrieved 7 October 2016 – via YouTube.
112. High, Dave; Mahmud, Fozz (10 March 2016)."NBU and the Amazon Storage Gateway VTL"(https://www.youtube.c
om/watch?v=rU1rFK9o20s)(Video). Veritas. Veritas Technologies LLC. Retrieved 17 January 2018.
113. "Backup Exec 16: Best Practices for Using the V
eritas Backup Exec Cloud Connector"(https://www.veritas.com/cont
ent/support/en_US/doc/72686287-129480082-0/v128967126-129480082) . Veritas Support. Veritas Technologies
LLC. 25 October 2017. Retrieved 15 January 2018.