Copyright © 2017 Dell Inc. or its subsidiaries. All Rights Reserved. Dell, EMC, and other trademarks are trademarks
of Dell Inc. or its subsidiaries. Other trademarks may be the property of their respective owners. Published in the
USA.
THE INFORMATION IN THIS PUBLICATION IS PROVIDED “AS IS.” DELL EMC MAKES NO REPRESENTATIONS OR WARRANTIES OF ANY KIND WITH RESPECT TO
THE INFORMATION IN THIS PUBLICATION, AND SPECIFICALLY DISCLAIMS IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR
PURPOSE.
Use, copying, and distribution of any DELL EMC software described in this publication requires an applicable software license. The trademarks, logos, and service marks
(collectively "Trademarks") appearing in this publication are the property of DELL EMC Corporation and other parties. Nothing contained in this publication should be construed
as granting any license or right to use any Trademark without the prior written permission of the party that owns the Trademark.
The cloud profile is unique to each cloud storage provider. The provider requires specific authentication
details, such as an access URL and a secret access key, which must be gathered directly from the provider.
The CloudBoost appliance can be monitored from the Cloud Portal, where CloudBoost alerts,
configuration settings, software versions, and storage use history can be viewed.
The supported private clouds are EMC Atmos, EMC Elastic Cloud Storage (ECS), and Generic
OpenStack Swift.
The supported public clouds are Amazon Web Services (AWS), AT&T Synaptic Storage, Google Cloud
Storage, Virtustream Storage Cloud, and Microsoft Azure Storage.
A single cloud provider can have multiple CloudBoost appliances accessing it, but a CloudBoost appliance
may have only a single cloud profile configured for one cloud provider.
Site cache should be deployed when the following conditions are met:
• The cloud object store is not LAN accessible.
• The connectivity to the cloud object store has low bandwidth and high latency.
• There are no streaming workloads or continuous backups running.
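Taken together, these conditions form a simple deployment predicate. The sketch below is a hypothetical illustration of that decision logic only; it is not part of any CloudBoost tooling.

```python
def should_deploy_site_cache(lan_accessible: bool,
                             low_bandwidth_high_latency: bool,
                             streaming_or_continuous: bool) -> bool:
    """Site cache is recommended when the object store is not LAN
    accessible, the link has low bandwidth and high latency, and no
    streaming workloads or continuous backups are running."""
    return (not lan_accessible
            and low_bandwidth_high_latency
            and not streaming_or_continuous)

# A remote, slow link carrying only periodic backups qualifies:
print(should_deploy_site_cache(False, True, False))  # True
```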
Client Direct transfers data directly to the cloud object store. The CloudBoost agent library is installed on
the client. This is the optimal data path, but it is limited to x64 Linux clients only.
An external storage node can be used for clients that do not support Client Direct. The CloudBoost agent
library is part of the storage node installation.
A CloudBoost appliance also contains an embedded storage node; however, this is the least preferred
method.
The CloudBoost library converts the data into objects and stores them on the cloud object store configured
as a target. The metadata for these cloud objects is recorded in the metadata database on the CloudBoost
appliance.
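As a conceptual sketch only (not CloudBoost’s actual object format or metadata schema), splitting a stream into content-addressed objects while recording the ordering metadata separately might look like this:

```python
import hashlib

def store_as_objects(data: bytes, object_store: dict, metadata_db: list,
                     chunk_size: int = 8) -> None:
    """Split a byte stream into fixed-size chunks, store each chunk as a
    content-addressed object, and record the ordering metadata separately
    (mirroring how CloudBoost keeps object metadata on the appliance,
    not in the cloud)."""
    for offset in range(0, len(data), chunk_size):
        chunk = data[offset:offset + chunk_size]
        object_id = hashlib.sha256(chunk).hexdigest()
        object_store[object_id] = chunk   # object lands in the "cloud"
        metadata_db.append(object_id)     # appliance-side ordering record

def restore(object_store: dict, metadata_db: list) -> bytes:
    """Reassemble the original stream from the recorded ordering."""
    return b"".join(object_store[oid] for oid in metadata_db)

store, meta = {}, []
store_as_objects(b"backup-data-stream", store, meta)
print(restore(store, meta) == b"backup-data-stream")  # True
```

Keeping the ordering record outside the object store is what makes the appliance-side metadata database essential: the cloud objects alone are not enough to reassemble the backup.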
CloudBoost’s optional site cache mitigates the impact of long-distance connectivity where high
latency, low bandwidth, and network reliability may be issues.
As with the replication solution, CloudBoost’s optional site cache assists sites where WAN latency is
a problem, and it improves recovery time objectives.
Unlike the previous solutions, site cache is not available when the CloudBoost appliance is deployed
within the cloud.
Log in to the NMC GUI as an administrator and, under the Devices window, launch the Device Configuration
Wizard.
Choose the CloudBoost device type and review the preconfiguration checklist.
Choose the CloudBoost storage option: either embedded or external.
Choose to use an existing CloudBoost appliance or create a new one.
Enter the FQDN of the CloudBoost appliance.
Enter the ‘remotebackup’ user and password specified earlier.
Choose a configuration method to select the CloudBoost file system folder to serve as a target device.
Browse and Select is recommended.
Create a new folder under the /mnt/magfs/base directory to serve as the target data device.
During the creation of the CloudBoost device, enter the external storage node in the CloudBoost Storage
configuration option instead of the default embedded storage.
To specify a recovery from CloudBoost, create a new recovery configuration, select the saveset on the
Recover tab, and then select Query to view the instances available from CloudBoost. Configure the desired
recovery file path, and save the recovery configuration for reuse.
Recovery can be verified by viewing the contents of the recovery destination, as well as the recover logs
found on the NetWorker server at \nsr\logs\recover.
When data is moved from the active tier to the cloud tier, it is deduplicated and stored in object storage in the
native Data Domain deduplicated format. This results in a lower total cost of ownership over time for long-term
cloud storage. The cloud tier supports encryption of data at rest by default, as well as the Data Domain
retention lock feature, ensuring the ability to satisfy regulatory and compliance policies.
With DD OS 6.0, supported cloud storage includes Dell EMC Elastic Cloud Storage, Virtustream, Amazon
Web Services, and Microsoft Azure. Additional storage for metadata is required to support the cloud tier.
Metadata is used by deduplication, cleaning, and replication operations.
To support the Cloud Tier, additional metadata storage is required. The amount of required metadata
storage is based on the Data Domain platform.
A Data Domain system can run either the Cloud Tier or Extended Retention, but not both, on the same
system.
The Cloud Tier is also supported in an HA configuration. Both nodes must be running DD OS 6.0 or higher
and they must be HA-enabled.
This example shows a DD9500 system with an active tier of 864 TB and two cloud units. Each cloud unit
has a capacity equal to that of the active tier, for a combined cloud-tier capacity of approximately 1.7 PB
(2 × 864 TB). Data stored on the active tier provides local access to data and can be used for operational
recoveries. The cloud tier provides long-term retention for data stored in the cloud.
In CloudBoost 2.1, the client backup data is pushed directly to cloud object storage. It does not have to be
backed up to block storage first. The backup application, such as NetWorker, uses CloudBoost to enable
the connection between the client and the cloud provider as well as perform deduplication and encryption.
Both the primary copies (backup to the cloud) and secondary copies (replicate to the cloud) of backup data
are pushed off site to a cloud provider directly.
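The deduplication step can be illustrated with a content-hash check: a chunk already present in the store is not written again. This is a simplified sketch, not CloudBoost’s actual deduplication algorithm, and it omits the encryption step entirely.

```python
import hashlib

def dedup_store(chunks, store):
    """A chunk whose content hash is already in the store is not written
    again, so repeated data consumes no additional cloud capacity."""
    written = 0
    for chunk in chunks:
        key = hashlib.sha256(chunk).hexdigest()
        if key not in store:
            store[key] = chunk
            written += 1
    return written

store = {}
written = dedup_store([b"aaaa", b"bbbb", b"aaaa", b"aaaa"], store)
print(written)  # 2 unique chunks actually stored
```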
In DD OS 6.0, client backup data is saved to the Data Domain system’s active tier, and the policy manager
clones the save sets to the Cloud Tier. Once the data has reached a certain age, the Data Domain age-based
policy pushes it to cloud object storage. This method provides block storage for new backup data and
cloud object storage for long-term retention. Because of its age, only infrequently used backup data is stored
with the cloud provider, meaning it is less likely to need recovery yet is still retained to satisfy compliance
requirements.
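The age-based selection amounts to comparing each save set’s creation time against a cutoff. The sketch below is a hypothetical illustration; the 14-day threshold and save-set names are invented examples, not DD OS defaults.

```python
from datetime import datetime, timedelta

def select_for_cloud_tier(save_sets, now, min_age_days=14):
    """Return the save sets old enough to move from the active tier to
    the cloud tier. Threshold and schema are illustrative only."""
    cutoff = now - timedelta(days=min_age_days)
    return [name for name, created in save_sets.items() if created <= cutoff]

now = datetime(2017, 6, 1)
save_sets = {
    "daily-0520": datetime(2017, 5, 20),   # 12 days old: stays on active tier
    "monthly-0401": datetime(2017, 4, 1),  # 61 days old: moves to cloud tier
}
print(select_for_cloud_tier(save_sets, now))  # ['monthly-0401']
```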
Two media pools will be required as well. The Data Domain device pool type must be ‘Backup’ and the
Cloud Tier device pool type must be ‘Clone’.
The NetWorker storage node chosen to manage the devices must be running the same version of
NetWorker server software, version 9.1.
Finally, a Data Domain system configured for Cloud Tier contains a CA certificate used to communicate with
the cloud provider. The Cloud Tier device configuration offers to pull, or import, the CA certificate into
NetWorker. The certificate must be trusted, and the name of the cloud unit created on the Data Domain
system must be specified in NetWorker.
Many types of recoveries are supported. For example, disaster recoveries, block based backups, file level
restores of block based backups, VMware block based backups, and VMware image level recoveries.
However, VMware file level restores from the Cloud Tier are not supported. Instead, clone the data to a
Data Domain device first.
Create a new folder for the Cloud Tier device and select it. Configure a clone media pool for the Cloud Tier
device. The pool must contain only Cloud Tier devices. Create a new pool or select an existing one.
Finally, specify the Data Domain Management Parameters. Enter the Data Domain system host name
and admin credentials. The default communication port is 3009. Pull the CA Certificate from the Data
Domain used to communicate with the cloud provider. Then, enter the name of the Cloud Unit specified on
the Data Domain system. A message regarding the age-based policy appears; this ensures the clone
action can perform the data movement from the Data Domain system’s Active Tier to the Cloud Tier.
Finalize the configuration wizard and ensure a DD Cloud Tier device and a mounted volume are present.
Select the media pool which contains the DD Cloud Tier devices. The retention time determines when to
mark the save sets as recyclable during the expiration server maintenance task.
The Delete source savesets after clone completes option is the equivalent of staging: data is moved to the
destination volume and deleted from the source volume.
The configuration group box specifies the criteria for the staging policy to start. For example, the high
water mark signals when to perform the operation based on the amount of used disk space on the file
system partition on the source device. The low water mark signals when the save sets stop moving from
the source device.
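The water-mark behavior can be sketched as a loop that starts at the high water mark and stops at the low water mark. This is a hypothetical illustration only; the thresholds, eviction order, and data layout are invented examples, not NetWorker’s implementation.

```python
def stage(save_sets, used_pct, high_water=90, low_water=70):
    """save_sets is a list of (age_days, name, size_pct) tuples. When
    used space reaches the high water mark, move save sets (oldest
    first) off the source device until usage falls below the low water
    mark. Thresholds are illustrative, not NetWorker defaults."""
    moved = []
    if used_pct < high_water:
        return moved, used_pct           # staging not triggered
    for age, name, size_pct in sorted(save_sets, reverse=True):
        if used_pct < low_water:
            break                        # low water mark reached: stop
        moved.append(name)
        used_pct -= size_pct
    return moved, used_pct

sets = [(30, "ss-old", 25), (2, "ss-new", 10)]
print(stage(sets, used_pct=92))  # (['ss-old'], 67)
```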
The status may also be checked from NMC. In the Media window, select Save Sets. In the View
menu, select Choose Table Columns and ensure the Clone Flags column is selected. A ‘T’ flag is
displayed, similar to the output of the mminfo command.
Please note: If you are familiar with previous versions of the vRealize Suite, vRealize Automation was
previously known as vCloud Automation Center, or vCAC, and vRealize Orchestrator was previously
known as vCenter Orchestrator, or vCO.
To use DPE version 4.0, the VMware virtual environment must include vRealize Orchestrator
version 7.1 or later and vRealize Automation version 7.1 or later.
After DPE is configured to interoperate with the NetWorker server, all the existing NetWorker tools remain
available for use. NMC, the REST API, and CLI tools are still available to backup administrators on the
NetWorker server. Should the backup administrator use these tools directly, vRA operations will not be
affected and will update based on changes made in NetWorker.
Within the DPE package is the .vmoapp file, a compressed vRealize Orchestrator install file used
by the vRealize deployment mechanism. It contains some additional file types:
• Open source licensing and license agreement files.
• A .dar file, which is the Orchestrator plug-in format containing the Java code and library files.
A free license file must be obtained from Dell EMC and activated through the licensing website.
Click Submit and monitor the progress. When it completes, view the NetWorker data protection endpoint
by selecting the Inventory tab, and clicking EMC Data Protection. If the endpoint does not immediately
display, right-click EMC Data Protection and select Reload.
Optionally, you can diagnose potential configuration issues between DPE, vRealize Automation, vRealize
Orchestrator, vCenter, and NetWorker. Right-click Check EMC data protection configuration in the left
pane of the Workflows tab, and select Start workflow.
The backup administrator will see a new VM client become associated with the protection policy in NMC.
VMware administrators have the ability to follow the progress of the backup workflow in the vRO portal as
well as in VMware vSphere.
Backup administrators can view the status of the backup by using NMC’s log viewer and Monitoring
window.
Similar to the Run data protection action, both VMware administrators and backup administrators can
follow the progress using their respective tools.
A URL is constructed by vRA for the end user to access the web-based application. Once the application is
launched, the end user must log in with their user credentials. The user account must be part of the
NetWorker VMware FLR Users group. After gaining access, the end user follows the FLR workflow to
browse and select files to restore.
This concludes the training. Proceed to the course assessment on the next slide.