
HP 3PAR SA Enablement eLearning 1:
Presenting 3PAR Core Technologies

A technical overview of 3PAR StoreServ Storage, the world's most agile and efficient storage arrays

Q2 FY2015
Sponsored by Intel

Copyright 2014 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.

Architecture overview


System hardware architecture matters
Legacy architectures force tradeoffs

Traditional modular storage:
- Cost-efficient, usually active-passive dual-controller design
- Limited in scalability and resiliency

Traditional monolithic storage:
- Host connectivity, distributed controllers and functions, disk connectivity
- Scalable, resilient, and active-active, but costly
- Might not meet multi-tenant requirements efficiently

3PAR architecture:
- Meshed controllers sharing host ports, data cache, and disk ports for every LUN
- Cost-effective, scalable, resilient, meshed, active-active architecture
- Meets cloud-computing requirements for efficiency, multi-tenancy, and autonomic management

HP 3PAR ASIC
The heart of every 3PAR storage system

- Built-in zero detection: all reads and writes pass through the ASIC
- Fast RAID 10, 50, and 60: integrated XOR engine and rapid RAID rebuild
- CRC32 and XOR used for inline deduplication
- Tightly coupled clustering: high-bandwidth, low-latency interconnect
- Mixed workload and CPU offload: independent metadata and data processing

HP 3PAR virtualization concept (1 of 2)
Example: four-node 7400 with eight drive enclosures

- A particular physical drive is owned by one node (Node 0, 1, 2, or 3)
- Nodes are added in pairs for cache redundancy
- Note: the nodes are installed in the back of the first drive enclosures
- HP 3PAR StoreServ arrays with four or more nodes support Cache Persistence
- This example shows a four-node configuration with eight drive enclosures in total

HP 3PAR virtualization concept (2 of 2)
Example: four-node 7400 with eight drive enclosures

- Disk initialization: physical drives are automatically formatted into 1 GB chunklets
- Chunklets are bound together to form logical disks (LDs) in the format defined in the CPG policies, which set the RAID level, step size, set size, and redundancy, e.g., RAID 5 (3+1)
- Virtual volumes are built striped across all LDs of all nodes from all drives defined in a particular CPG (autonomic wide striping across all logical disks)
- Virtual volumes can then be exported as LUNs to servers, presented and accessed across multiple active-active paths (HBAs, fabrics, nodes)

[Diagram: a server with active-active multipathing accesses an exported LUN; the virtual volume maps to logical disks (LDs) in a CPG, wide-striped across all nodes and drives]
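As a rough illustration of this mapping, the sketch below (plain Python with invented names, not the 3PAR OS implementation) carves drives into 1 GB chunklets, binds chunklets from different drives into RAID sets (logical disks), and round-robins virtual-volume regions across all LDs:

```python
# Illustrative sketch of 3PAR-style virtualization (hypothetical names,
# not the actual 3PAR OS implementation).
CHUNKLET_GB = 1  # physical drives are formatted into 1 GB chunklets

def carve_chunklets(drive):
    """Split one drive into 1 GB chunklets tagged with the owning drive."""
    return [(drive["id"], c) for c in range(drive["size_gb"] // CHUNKLET_GB)]

def build_logical_disks(drives, set_size):
    """Bind chunklets from *different* drives into RAID sets (logical disks)."""
    columns = [carve_chunklets(d) for d in drives]
    lds = []
    for stripe in zip(*columns):          # one chunklet per drive
        for i in range(0, len(stripe) - set_size + 1, set_size):
            lds.append(list(stripe[i:i + set_size]))  # e.g. RAID 5 (3+1)
    return lds

def map_vv_region(lds, region_index):
    """Wide-stripe virtual-volume regions round-robin across all LDs."""
    return lds[region_index % len(lds)]

drives = [{"id": f"pd{i}", "size_gb": 4} for i in range(8)]
lds = build_logical_disks(drives, set_size=4)
print(len(lds), map_vv_region(lds, 0), map_vv_region(lds, 5))
```

Consecutive regions of a virtual volume land on different LDs and therefore on different drives and nodes, which is what spreads every LUN across the whole system.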

Which array is more efficient and easier to use?
[Image: traditional storage array vs. 3PAR array]

HP 3PAR autonomic sets
Simplify provisioning

Traditional storage (individual volumes V1-V10 exported to each host in a cluster of VMware vSphere servers):
- Initial provisioning of the cluster requires 50 provisioning actions (1 per host-volume relationship)
- Adding another host/server requires 10 provisioning actions (1 per volume)
- Adding another volume requires 5 provisioning actions (1 per host)

Autonomic HP 3PAR storage (autonomic host set and autonomic volume set):
- Initial provisioning of the cluster: add hosts to the host set, add volumes to the volume set, export the volume set to the host set
- Adding another host/server: just add the host to the host set
- Adding another volume: just add the volume to the volume set

A worked count for this five-host, ten-volume example follows below.
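The provisioning-action counts from the slide, as plain arithmetic in Python:

```python
# Worked example of the provisioning-action counts on this slide
# (5 vSphere hosts, 10 volumes).
hosts, volumes = 5, 10

# Traditional array: one export per host-volume relationship.
initial_traditional    = hosts * volumes   # 50 actions
add_host_traditional   = volumes           # 10 actions (1 per volume)
add_volume_traditional = hosts             # 5 actions (1 per host)

# Autonomic sets: export the volume set to the host set once.
initial_autonomic    = hosts + volumes + 1 # add members + 1 export
add_host_autonomic   = 1                   # just add host to the host set
add_volume_autonomic = 1                   # just add volume to the volume set

print(initial_traditional, add_host_traditional, add_volume_traditional)
print(initial_autonomic, add_host_autonomic, add_volume_autonomic)
```

The gap widens with scale: the traditional count grows with hosts x volumes, while the autonomic count grows only with hosts + volumes.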

HP 3PAR StoreServ is eliminating boundaries
Polymorphic simplicity: ONE architecture, ONE operating system, ONE interface, ONE feature set

- When value matters: starting at $25 K
- When scale matters: up to 3.2 PB
- When performance matters: up to 900 K IOPS @ 0.7 ms latency

[Models: 7200c and 7200c all-flash starter kit, 7400c, 7440c (new), 7450c all-flash array]

HP 3PAR software titles
Same functions and features for 7000 and 10000

HP 3PAR Operating System SW Suite: Rapid Provisioning, Autonomic Groups, Autonomic Replication Groups, Autonomic Rebalance, LDAP Support, Access Guard, Host Personas, Full Copy, Thin Provisioning, Thin Copy Reclamation, Thin Persistence, Thin Conversion, Thin Deduplication for SSD, Adaptive Flash Cache, Persistent Cache, Persistent Ports, Management Console, Web Services API, SMI-S, Real Time Performance Monitor, 3PAR OS Administration Tools, CLI client, SNMP, Virtual SP (7000 only), SmartStart (7000 only), Online Import license (180 days), System Tuner, Host Explorer, Multipath I/O SW, VSS Provider, Scheduler

Security SW Suite: Virtual Domains, Virtual Lock
Data Encryption
Data Optimization SW Suite v2: Dynamic Optimization, Adaptive Optimization, Peer Motion, Priority Optimization
Replication SW Suite: Virtual Copy (VC), Remote Copy (RC), Peer Persistence
Reporting SW Suite: System Reporter, 3PARInfo

Application SW Suite for VMware vSphere: Recovery Manager for vSphere, VASA, vCenter plug-in
Application SW Suite for Oracle: Recovery Manager for Oracle
Application SW Suite for Microsoft SQL: Recovery Manager for Microsoft SQL
Application SW Suite for MS Exchange: Recovery Manager for MS Exchange
Application SW Suite for MS Hyper-V: Recovery Manager for MS Hyper-V

Policy Manager
StoreFront Mobile Access
Optional Integration Solutions: Management Plug-in for MS SCOM, OpenStack Integration, StoreFront VMware vCOPS Integration, Storage Plug-in for SAP LVM

Thin Provisioning with EMC VNX
From the EMC whitepaper "Virtual Provisioning for the New VNX Series":

It is important to understand your application requirements and select the approach that meets your needs. If conditions change, you can use VNX LUN migration to migrate among thin, thick, and classic LUNs.

Use pool-based thin LUNs for:
- Applications with moderate performance requirements
- Taking advantage of advanced data services such as FAST VP, VNX snapshots, compression, and deduplication
- Ease of setup and management, best storage efficiency, energy and capital savings
- Applications where space consumption is difficult to forecast

Use pool-based thick LUNs for:
- Applications that require good performance
- Taking advantage of advanced data services such as FAST VP and VNX snapshots
- Storage assigned to VNX for File
- Ease of setup and management

Use classic LUNs for:
- Applications that require extreme performance

2014 Gartner Magic Quadrant for general-purpose disk arrays
[Image: Magic Quadrant chart]

Learning check
1. Why are nodes added in pairs to the enclosure?
____________________________________________________________________________
__________________________________________________________________________


Learning check answer


1. Why are nodes added in pairs to the enclosure?
To provide cache redundancy


AFA and flash optimization

Flash does not change storage requirements
Reducing risk with a comprehensive approach to data integrity

- High performance: flash-optimized architecture
- Application integration: VMware, Oracle, SQL integrations
- Scalability: scale-out architecture with multiple active-active nodes
- Reliability: proven architecture with guaranteed high availability
- Disaster recovery: data protection with sync and async replication across multiple sites
- Drive efficiency: extend life and utilization of flash
- Ease of use: self-configuring, optimizing, and tuning
- Data mobility: federate across systems and sites

HP 3PAR flash strategy enables seamless transition
Polymorphic simplicity: ONE architecture, ONE operating system, ONE interface, ONE feature set

[Chart: performance (latency) vs. cost ($/GB)]
- All-flash storage (< 100 us to < 1 ms): consistent low latency, single flash tier; 3PAR StoreServ 7450, 7400, 7200
- Hybrid storage: balance cost and performance; 3PAR StoreServ 7000, 10000 with SSDs + Adaptive Optimization + Flash Cache
- HDD storage (2 - 10 ms): cost-optimized; 3PAR StoreServ 7000, 10000

Making flash mainstream
Saving money and capacity with the most complete set of data compaction technologies available

- 4:1 to 10:1 compaction depending on workload
- Negligible performance impact due to unique hardware acceleration
- 85% lower $/GB in the last 12 months

[Chart: $/GB ladder - industry usable $/GB for eMLC SSD ($13); 3PAR SSD $/GB with block-zero dedupe ($5); with Adaptive Sparing ($4); with thin deduplication and thin clones on cMLC SSD ($2), approaching the industry raw $/GB of a 15 K SAS HDD]

3PAR approach to working with flash
Flash optimized = more than just being fast

- Cache management: adaptive read, adaptive write, autonomic cache offload
- Multi-tenant I/O processing: performance scalability, quality of service
- 3PAR ASIC: express writes, system-wide striping
- Efficiency and wear handling: 3PAR Thin Technologies, zero detect, Adaptive Sparing, step size optimization
- Failure handling: system-wide striping, system-wide sparing

Adaptive read
Read optimization from flash to cache

3PAR architecture adapts its reads from flash media to match host I/O sizes
[Diagram: front-end host read I/Os of 4 KB, 8 KB, and 16 KB are served by back-end flash reads of 4.2 KB, 8.4 KB, and 16.8 KB* respectively]

Benefits:
- Reduced latency by avoiding unnecessary data reads
- Optimized back-end throughput handling

*Extra bytes to account for DIF
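A small sketch of the sizing rule, assuming standard T10 DIF adds 8 protection bytes per 512-byte sector (an assumption; the slide's 4.2/8.4/16.8 KB figures appear to be such DIF-inclusive sizes, rounded):

```python
# Back-end read sizing sketch: read only as much as the host asked for,
# plus per-sector protection bytes, instead of a full cache page.
SECTOR, DIF = 512, 8  # assumed T10 DIF layout: 8 extra bytes per sector

def backend_read_bytes(host_io_bytes):
    sectors = host_io_bytes // SECTOR
    return sectors * (SECTOR + DIF)

for kb in (4, 8, 16):
    print(f"{kb} KB host read -> {backend_read_bytes(kb * 1024)} bytes from flash")
# 4160, 8320, 16640 bytes; the slide rounds these to 4.2/8.4/16.8 KB
```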

Adaptive write
Write optimization to cache

- 3PAR architecture supports a granular cache page size of 16 KB
- However, if a sub-16 KB write I/O occurs, the 3PAR array performs a sub-16 KB write to cache
- The 3PAR array keeps a bitmap for each page and destages only the dirty part of the page
[Diagram: a 4 KB write I/O dirties part of a valid 16 KB cache page; only the dirty data (4 KB) is written to flash]

Benefits:
- Reduces latency and back-end throughput, and also extends flash life by avoiding unnecessary data writes
- For RAID 10 volumes, adapting writes to match I/Os avoids latency penalties associated with read-modify-write sequences
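A minimal sketch of the mechanism, with a hypothetical CachePage class tracking dirtiness at 512-byte granularity (the real array keeps a bitmap per 16 KB page; details here are simplified):

```python
# Sketch of sub-page write tracking for a 16 KB cache page (illustrative).
PAGE, SECTOR = 16 * 1024, 512

class CachePage:
    def __init__(self):
        self.data = bytearray(PAGE)
        self.dirty = [False] * (PAGE // SECTOR)

    def write(self, offset, buf):
        """Host write: copy into the page, mark only the touched sectors dirty."""
        self.data[offset:offset + len(buf)] = buf
        for s in range(offset // SECTOR, (offset + len(buf) - 1) // SECTOR + 1):
            self.dirty[s] = True

    def destage(self):
        """Flush: emit only the dirty sectors for flash, then clear the bitmap."""
        out = [(i * SECTOR, bytes(self.data[i * SECTOR:(i + 1) * SECTOR]))
               for i, d in enumerate(self.dirty) if d]
        self.dirty = [False] * len(self.dirty)
        return out

page = CachePage()
page.write(0, b"x" * 4096)           # a 4 KB host write into a 16 KB page
print(len(page.destage()) * SECTOR)  # 4096 bytes destaged, not 16384
```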

Adaptive I/O processing
Maintaining service levels under mixed workloads

- The cache-to-media write process is multithreaded, allowing each I/O to start its own thread in parallel without waiting for threads to be free
- 3PAR architecture splits large read/write I/Os into 32 KB sub-I/Os before sending them to flash media
- This ensures that smaller read I/Os do not suffer from higher response times
[Diagram: a 128 KB read I/O from Host 1 (DSS) is split into four 32 KB sub-I/Os, interleaved with 4 KB read I/Os from Host 2 (OLTP) on the way to the flash devices]

Benefits:
- Allows 3PAR arrays to serve sequential workloads without paying a latency penalty on OLTP workloads
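The splitting rule itself is simple; a sketch (illustrative only, not the 3PAR scheduler):

```python
# Split a large I/O into 32 KB sub-I/Os so small OLTP reads can be
# interleaved between them on the way to flash.
SUB_IO = 32 * 1024

def split_io(offset, length, chunk=SUB_IO):
    """Yield (offset, length) sub-I/Os of at most `chunk` bytes."""
    while length > 0:
        n = min(chunk, length)
        yield (offset, n)
        offset += n
        length -= n

subs = list(split_io(0, 128 * 1024))  # a 128 KB DSS read
print(len(subs), subs[0], subs[-1])   # 4 sub-I/Os of 32 KB each
```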

HP 3PAR Thin Deduplication
Accelerated by the ASIC and Express Indexing

1. Host writes a page (e.g., 0001101)
2. The ASIC computes a hash of the page
3. Fast metadata lookup with Express Indexing (L1/L2/L3 hash tables mapping hashes to LBAs)
4. On a match, the data is compared against the existing potential deduped page, using the ASIC for a bit-by-bit compare via an inline XOR operation
5. The ASIC XORs the incoming page against the stored page
6. A dedupe match results in the XOR outcome being a page of zeros (0000000), which is detected inline by the ASIC
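A sketch of this verify flow in plain Python (SHA-256 and an in-memory dict stand in for the ASIC hash engine and the Express Indexing tables; in hardware the XOR and zero-detect run in the ASIC):

```python
# Dedupe-verify sketch: hash lookup first, then a bit-for-bit XOR compare.
import hashlib

store = {}  # hash -> stored 16 KB page (hypothetical in-memory stand-in)

def write_page(page: bytes) -> str:
    h = hashlib.sha256(page).hexdigest()   # step 2: compute hash
    if h in store:                         # step 3: metadata lookup hit
        xor = bytes(a ^ b for a, b in zip(store[h], page))  # steps 4-5
        if not any(xor):                   # step 6: all-zero XOR => true match
            return "deduped"
        return "hash collision, stored separately"
    store[h] = page                        # new unique page
    return "stored"

page = bytes(16 * 1024)
print(write_page(page))   # stored
print(write_page(page))   # deduped
```

The XOR compare is what makes a hash hit safe: two different pages that collide on the hash still produce a nonzero XOR and are kept separately.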

SSD layout
Making flash affordable

- Every SSD has internal over-provisioning (OP), used for garbage collection and for minimizing write amplification
- The internal OP reduces the raw capacity available to users (high endurance at high $/GB)
- The 3PAR wide-striped architecture also reserves chunklets in each drive for sparing
- Spare space is necessary to protect against drive failure scenarios
[Diagram: raw SSD capacity split into over-provisioned flash, spares, and user space]

Rethinking over-provisioned capacity
Making flash affordable

- A drive with high internal OP offers high endurance at high $/GB
- Lowering the internal OP yields roughly a 20% gain in user space at lower $/GB, traded against lower endurance; see the worked example below
[Diagram: before - over-provisioned flash + spares + user space; after - smaller OP, larger user space]
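A worked example with illustrative OP and spare fractions (the percentages below are assumptions chosen to reproduce the slide's approximate 20% figure, not published drive specs):

```python
# How shifting internal over-provisioning (OP) into usable space raises
# user capacity; Adaptive Sparing offsets the endurance cost by letting
# the array's spare chunklets double as OP. Numbers are illustrative.
raw_gb = 1000

def user_space(raw, op_fraction, spare_fraction):
    return raw * (1 - op_fraction - spare_fraction)

high_endurance = user_space(raw_gb, op_fraction=0.22, spare_fraction=0.05)
low_op         = user_space(raw_gb, op_fraction=0.07, spare_fraction=0.05)

print(high_endurance, low_op, f"{low_op / high_endurance - 1:.0%} gain")
# 730 vs 880 GB -> roughly the "20% gain in user space" on the slide
```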

What is data compaction?

Data compaction is the reduction of the number of data elements, bandwidth, cost, and time for the generation, transmission, and storage of data without loss of information, by eliminating unnecessary redundancy, removing irrelevancy, or using special coding.

Data compaction on HP 3PAR StoreServ

Compaction: a holistic approach to lowering cost

Minimize:
- Duplicate writes: zero-page inline deduplication; thin deduplication and thin clones
- Reservations/pools: reservation-less thin volumes and snapshots
- Allocation: 16 KB write allocation unit
- Hot spares: system-wide sparing

Maximize:
- Raw capacity: Adaptive Sparing
- Reclamation: with 16 KB granularity
- Wear management: adaptive write; wear gauge for every SSD

HP 3PAR Flash Advisor
Enabling a smooth transition to flash

- Adaptive Flash Cache simulation helps determine the benefit and the amount of flash required in a system for random read acceleration
- Adaptive Optimization I/O density reports: powerful I/O reporting to determine the exact amount of flash needed for hot data
- Dedupe estimation and dynamic optimization: (1) estimate the savings, then (2) use online dynamic optimization to move volumes from thin/HDD to deduped/SSD

Thin deduplication estimation
How to calculate a blended dedupe ratio (1 of 4)

The first ratio to be determined is the thin efficiency, in percent of savings, based on the measured benefit of thin provisioning:
- This can be measured using the host capacity scan in NinjaStars
- Assume that this was completed and 75% of the exported capacity was written, which results in a 25% savings

The second ratio to be determined is the blended dedupe ratio for the applications/data of the customer environment:
- Assume that the environment is 39% database, 7% images, 39% virtual servers, and 15% file server volumes, with expected representative dedupe ratios of 1:1 for database, 1:1 for the images, 4:1 for the virtual servers, and 5:1 for the file server volumes
- Dedupe ratios for each application or data class in the customer environment should be based on a discussion with the customer
- As a result of the discussion in this case, the next slide shows the example calculation to determine the blended dedupe ratio for use in NinjaStars

How to calculate a blended dedupe ratio (2 of 4)

Database:        0.39 x 256 TB x 1/1 = 99.84 TB
Images:          0.07 x 256 TB x 1/1 = 17.92 TB
Virtual servers: 0.39 x 256 TB x 1/4 = 24.96 TB
File server:     0.15 x 256 TB x 1/5 =  7.68 TB
Total:                                 150.40 TB

Blended dedupe ratio = 256 / 150.40 = 1.7

On the following slide is a format that could be used to represent the requirements and summarize the way the total compaction ratio was determined; the same arithmetic is shown as code below.
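```python
# The slide's blended-dedupe arithmetic as code.
total_tb = 256
workloads = {                 # fraction of capacity, dedupe ratio
    "database":        (0.39, 1),
    "images":          (0.07, 1),
    "virtual servers": (0.39, 4),
    "file server":     (0.15, 5),
}

stored_tb = sum(total_tb * frac / ratio for frac, ratio in workloads.values())
print(f"{stored_tb:.2f} TB stored")                    # 150.40 TB
print(f"blended ratio {total_tb / stored_tb:.1f}:1")   # 1.7:1
```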

How to calculate a blended dedupe ratio (3 of 4)

This is a presentation format that could be used to represent the requirements that summarize the way the total compaction ratio was determined

How to calculate a blended dedupe ratio (4 of 4)

This is a NinjaStars sizing that would meet the 256 TB requirement when it is at 85% of capacity with the total compaction ratio factored in

Learning check
1. List at least four benefits of using flash
______________________________________________________________________________________
______________________________________________________________________________________
______________________________________________________________________________________
______________________________________________________________________________________
______________________________________________________________________________________
_____________________________________________________________________________________

Learning check answer


1. List at least four benefits of using flash
High performance
Ease of use
Cost benefits
Scalability
Maximizes capacity
Greater reliability

Learning check
2. What is deduplication, and why is it important in thin provisioning?
______________________________________________________________________________________
______________________________________________________________________________________
______________________________________________________________________________________
______________________________________________________________________________________
______________________________________________________________________________________
_____________________________________________________________________________________

Learning check answer

2. What is deduplication, and why is it important in thin provisioning?
Deduplication is the process of compressing data to optimize space and eliminate copies of data. Estimating deduplication allows you to size thin-provisioned capacity accurately before it is deployed.

7000 hardware overview

HP 3PAR StoreServ 7000

                                 7200c     7400c           7440c         7450c
Controller nodes                 2         2-4             2-4           2-4
Max drives                       240       288-576         480-960       120-240
Cache per node-pair / max        40 GB     48/96 GB        96/192 GB     64/128 GB
Max Adaptive Flash Cache         768 GB    768 GB / 1.5 TB 1.5 / 3 TB    NA
Built-in 8 Gbit/s FC ports       4         4-8             4-8           4-8
Optional ports:
  8 Gbit/s FC                    8         8-16            8-16          8-16
  16 Gbit/s FC                   4         4-8             4-8           4-8
  10 Gbit/s iSCSI                4         4-8             4-8           4-8
  10 Gbit/s FCoE                 4         4-8             4-8           4-8
Built-in IP Remote Copy port     yes       yes             yes           yes
Controller enclosures            1         1-2             1-2           1-2

HP 3PAR StoreServ 7440c hardware details

Item                           HP 3PAR StoreServ 7440c
Number of controller nodes     2 or 4
HP 3PAR Gen4 ASICs             2 or 4
CPU (per controller node)      8-core, 2.3 GHz
Total on-node cache            96 - 192 GB
Total flash cache              1.5 - 3 TB
Total cache                    1.6 - 3.2 TB
Number of disk drives          8 - 960
Number of solid state drives   8 - 240
Raw capacity                   1.2 TB - 2000 TB
Drive enclosures               SFF: 24 slots in 2U; LFF: 24 slots in 4U
Number of drive enclosures     0 - 38
Host adapters                  Four-port 8 Gb/s FC; four-port 16 Gb/s FC; two-port 10 Gb/s iSCSI/FCoE
Maximum host ports             24
8 Gb/s FC host ports           4 - 24
16 Gb/s FC host ports          0 - 8
10 Gb/s iSCSI host ports       0 - 8
Maximum initiators             1024 or 2048

HP 3PAR StoreServ 7000 hardware building blocks

Base storage systems:
- HP 3PAR StoreServ 7200 (2 nodes, 4 FC ports, 24 SFF slots)
- HP 3PAR StoreServ 74x0 (2-node, 4 FC ports, 24 SFF slots)
- HP 3PAR StoreServ 74x0 (4-node, 8 FC ports, 48 SFF slots)

Host adapters:
- 4-port 8 Gb/s FC HBA
- 2-port 16 Gb/s FC HBA
- 2-port 10 Gb/s iSCSI/FCoE CNA

Expansion drive enclosures:
- HP M6710 2.5 in 2U SAS
- HP M6720 3.5 in 4U SAS

Drives:
- SFF SAS HDDs and SSDs
- LFF SAS HDDs and SSDs
- Choice of encrypted and non-encrypted drives

Racks:
- HP G3 rack
- Customer-supplied rack (4-post, square hole, EIA standard, 19 in. rack from HP or other suppliers)

Service processor:
- Virtual (default)
- Physical (optional)

HP 3PAR StoreServ 7000 controller enclosure
Configuration options

1. Built-in 1 GbE Remote Copy port
2. 1 GbE management port
3. Built-in 8 Gb FC ports
4. 4-lane 6 Gbit/s SAS connections for drive chassis
5. 74x0 controller interconnects
6. Optional PCIe card slot:
   6a. 4-port 8 Gb FC adapter
   6b. 2-port 10 Gb CNA (iSCSI/FCoE) or 16 Gb FC adapter

Configurations: 4 x 8 Gb FC base configuration; 12 x 8 Gb FC; or 4 x 8 Gb FC plus 4 x 10 Gb Eth (CNA) or 4 x 16 Gb FC
[Diagram: rear view of the enclosure with Node 0 (bottom) and Node 1 (top)]

HP 3PAR StoreServ 74x0 four-node system
[Diagram: controller interconnect cabling between Node 0 through Node 3]

HP 3PAR StoreServ 7000 controller nodes
Two to four nodes per system, installed in pairs

Per-node configuration:
- One Intel Sandy Bridge processor
- Multifunction controller with Ethernet management port, Ethernet Remote Copy port, serial node console port, and internal SATA boot SSD
- Control cache and data cache
- 3PAR Gen4 ASIC, connected to the ASICs of the other nodes
- SAS IOC and SAS expander to the internal SFF drives and SAS ports
- PCIe switch, internal FC adapter, FC ports, and one optional PCIe slot

                7200c     7400c     7440c     7450c
CPU             6-core    6-core    8-core    8-core
                1.8 GHz   1.8 GHz   2.3 GHz   2.3 GHz
Control cache   4 GB      8 GB      16 GB     16 GB
Data cache      16 GB     16 GB     32 GB     32 GB

- One Thin Built In Gen4 ASIC
- Two built-in 8 Gb/s FC ports
- One optional PCIe adapter: four-port 8 Gb FC, two-port 16 Gb FC, or two-port 10 Gb/s CNA
- Two SAS back-end ports (four-lane 6 Gb SAS)

HP 3PAR StoreServ 7000 disk chassis
Mix and match drives and enclosures as required
- 4U with 24 LFF drive slots
- 2U with 24 SFF drive slots

HP 3PAR StoreServ 7000 drive overview

                              HP 3PAR StoreServ 7200 / 7400     HP 3PAR StoreServ 7450
RAID levels                   RAID 0, 10, 50, 60 (all models)
RAID 5 data to parity ratios  2:1 to 8:1 (all models)
RAID 6 data to parity ratios  4:2; 6:2; 8:2; 10:2; 14:2 (all models)

SFF 2.5 in drives:
  MLC SSD                     480 GB, 920 GB                    480 GB, 920 GB
  cMLC SSD                    480 GB, 1.92 TB                   480 GB, 1.92 TB
  SAS 15 krpm                 300 GB                            NA
  SAS 10 krpm                 450 GB, 600 GB, 900 GB, 1200 GB   NA
  NL SAS 7.2 krpm             1 TB                              NA

SFF 2.5 in encrypted drives*:
  MLC SSD                     920 GB                            920 GB
  SAS 10 krpm                 450 GB, 900 GB                    NA
  NL SAS 7.2 krpm             1 TB                              NA

LFF 3.5 in drives:
  MLC SSD                     480 GB, 920 GB                    480 GB, 920 GB
  SAS 15 krpm                 NA                                NA
  SAS 10 krpm                 NA                                NA
  NL SAS 7.2 krpm             2 TB, 3 TB, 4 TB                  NA

LFF 3.5 in encrypted drives*:
  NL SAS 7.2 krpm             2 TB, 4 TB                        NA

* Array Encryption License required; encrypted drives cannot be mixed with standard drives in the same array

HP 3PAR SSD drive options

                 MLC (multi-level cell)   cMLC (commercial multi-level cell)
Available sizes  480 GB, 920 GB           480 GB, 1.92 TB
Warranty         5 years                  5 years

- Within the warranty period, worn-out drives will be replaced by HP
- Remaining SSD drive life can be checked by the user (see wear gauge chart)
- The 3PAR array alerts the user when the wear-out level (maximum program/erase cycles) reaches 95%
- After the five-year warranty expires, the customer must purchase worn-out drive replacements
- HP 3PAR Adaptive Sparing decreases wear-out and extends drive life dramatically

Recommended hardware configuration rules
Record your requirements

1. Choose the base configuration (defines scalability and cost):
   - 7200 2-node, or
   - 74x0 2-node, or
   - 74x0 4-node
2. Define availability needs (defines required number of enclosures, possible RAID levels, and set sizes):
   - HA drive (magazine), or
   - HA enclosure (cage)
3. Choose drive types and quantity (defines capacity and performance):
   - SSD
   - FC/Fast Class (10 k or 15 k rpm)
   - NL/Near Line (7.2 k rpm)
4. Choose connectivity (defines optional PCIe adapters):
   - Number of host Fibre Channel ports required
   - Number of host iSCSI/FCoE ports required
   - Number of Remote Copy FC ports required

Recommended configuration rules
Two controllers, HA drive

Base enclosure:
- Includes two controllers
- Supports SAS Fast Class or SSD SFF drives
- Install an even number of drives of the same drive class from left to right:
  - Eight SSD*, and/or
  - Eight FC (Fast Class 15 K or 10 K SFF), and/or
  - 12 NL (Near Line drives, RAID 6)

- Upgrades: minimum four drives of the same class
- Adding a new drive class: minimum of 8 drives of the new class (12 for NL)
- An add-on enclosure requires a minimum of four drives installed

* 4 SSD for use as Adaptive Flash Cache only

Learning check
1. The HP 3PAR StoreServ is available with a single controller node
True
False

Learning check answer

1. The HP 3PAR StoreServ is available with a single controller node
False: HP 3PAR StoreServ is available with two or four controller nodes

Learning check
2. What are the two important considerations when choosing an HP
3PAR StoreServ series 7000 base configuration?

Learning check answer


2. What are the two important considerations when choosing an HP
3PAR StoreServ series 7000 base configuration?
Scalability
Cost

10000 hardware overview

HP 3PAR StoreServ 10400 components

Drive chassis (4U) and drive magazines:
- Up to six in the first rack, eight in each expansion rack
- Capacity building block: 2 to 10 drive magazines
- Add non-disruptively
- Industry-leading density

Full-mesh backplane:
- Post-switch architecture
- High performance, tightly coupled
- Completely passive

Controller node chassis (15U) and nodes:
- Performance and connectivity building block
- 8 Gb FC and/or 10 Gb CNA cards
- Add non-disruptively

Service processor (1U):
- Runs an independent operating system instance
- Remote error detection
- Supports diagnostics and maintenance
- Reporting to HP 3PAR Central

[Diagram: first rack with controllers and drives; expansion racks with drives only]

HP 3PAR StoreServ 10800 components

Drive chassis (4U):
- Two in the first rack, up to eight in each expansion rack
- Capacity building block: two to 10 drive magazines
- Add non-disruptively
- Industry-leading density

Full-mesh backplane:
- Post-switch architecture
- High performance, tightly coupled
- Completely passive

Controller node chassis (28U) and nodes:
- Performance and connectivity building block
- 8 Gb FC and/or 10 Gb CNA cards
- Add non-disruptively

Service processor (1U):
- Runs an independent operating system instance
- Remote error detection
- Supports diagnostics and maintenance
- Reporting to HP 3PAR Central

[Diagram: first rack with controllers and drives; expansion racks with drives only]

The 3PAR StoreServ 10000 evolution
Bus to switch to full-mesh progression

10000 full-mesh backplane:
- High performance/low latency: 112 GB/s backplane bandwidth
- Passive circuit board with slots for controller nodes
- Links every controller (full mesh), 2.0 GB/s ASIC to ASIC, single hop

Fully configured 3PAR 10800:
- Eight controller nodes
- 16 Gen4 ASICs (two per node)
- 16 Intel quad-core processors
- 256 GB control cache, 512 GB total data cache
- 136 GB/s peak memory bandwidth
- 450,213 SPC-1 IOPS

[Image: maximum 10800 configuration with eight nodes and 1,920 drives]

HP 3PAR StoreServ 10000 controller nodes
Two to eight nodes per system, installed in pairs

- Intel quad-core processors
- Dedicated control and data cache
- Two Gen4 ASICs per node: data movement, ThP, and XOR RAID processing
- Internal SSD drive for the 3PAR OS and for cache destaging in case of power failure
- Built-in Remote Copy Ethernet port (RCIP, E1), management Ethernet port (E0), and serial ports
- Scalable connectivity per node: three PCIe buses with 9 PCIe slots (0-8)
  - Four-port 8 Gb/s FC adapter
  - Two-port 16 Gb/s FC adapter
  - Two-port 10 Gb/s iSCSI/FCoE CNA

Recommended PCIe card installation order:
- Drive chassis FC connections: slots 6, 3, 0
- Host connections (FC, iSCSI, FCoE): slots 2, 5, 8, 1, 4, 7
- Remote Copy FC connections: slots 1, 4, 2, 3

Flexibility enhancement: host FC and Remote Copy FC ports can be configured on different ports of the same 8 Gb/s FC adapter

HP 3PAR StoreServ 10000 controller nodes
Two to eight nodes per system, installed in pairs

Per-node configuration:
- 2 x Intel XEON 2.83 GHz quad-core processors
- 2 x Thin Built In Gen4 ASICs; 2.0 GB/s dedicated ASIC-to-ASIC bandwidth; 112 GB/s total backplane bandwidth
- Inline fat-to-thin processing in DMA engine 2
- Control cache: 32 GB; data cache: 64 GB (96 GB cache total)
- Multifunction controller with Ethernet management port, Remote Copy Ethernet port, serial node console port, and internal SATA boot SSD
- PCIe switches feeding 9 PCIe slots with warm-plug adapters:
  - 8 Gb/s FC host/drive adapter
  - 10 Gb/s iSCSI/FCoE host adapter

HP 3PAR StoreServ 10000 PCIe card options

PCIe cards                              Four-port 8 Gb FC   Two-port 16 Gb FC   Two-port converged
                                        adapter             adapter             network adapter
Ports per card                          4                   2                   2
Port speeds                             8 Gb/s (2, 4 Gb/s)  16 Gb/s (4, 8 Gb/s) 10 Gb/s
FC host connection (max ports/node)     Y (24)              Y (12)              -
iSCSI host connection (max ports/node)  -                   -                   Y (4)

[Diagram: four-port and two-port card faceplates]

HP 3PAR StoreServ 10000 drive chassis

- 2.5 in SFF and 3.5 in LFF drive magazines
- Holds from 2 to 10 drive magazines
- (1+1) redundant power supplies
- Redundant dual Fibre Channel paths
- Redundant dual Fibre Channel switches
- Each magazine always holds four drives of the same drive type
- Each magazine in a chassis can be a different drive type

HP 3PAR StoreServ 10000 drive overview

                              HP 3PAR StoreServ 10400          HP 3PAR StoreServ 10800
RAID levels                   RAID 0, 10, 50, 60 (both models)
RAID 5 data to parity ratios  2:1 to 8:1 (both models)
RAID 6 data to parity ratios  4:2; 6:2; 8:2; 10:2; 14:2 (both models)

Drives:
  MLC SSD                     480 GB, 920 GB, 1.92 TB          480 GB, 920 GB, 1.92 TB
  15 k rpm FC                 300 GB, 600 GB                   300 GB, 600 GB
  10 k rpm FC                 450 GB, 900 GB, 1200 GB          450 GB, 900 GB, 1200 GB
  7.2 k rpm NL                2 TB, 4 TB                       2 TB, 4 TB

Encrypted drives*:
  MLC SSD                     400 GB, 920 GB                   400 GB, 920 GB
  10 k rpm FC                 450 GB, 900 GB                   450 GB, 900 GB
  7.2 k rpm NL                2 TB, 4 TB                       2 TB, 4 TB

Density                       40 drives per 4U drive chassis   40 drives per 4U drive chassis
Number of chassis             4 to 24                          4 to 48
Number of drives              16 to 960                        16 to 1920
Max SSDs per StoreServ array  256                              512

* Array Encryption license required; encrypted drives cannot be mixed with standard drives in the same array

HP 3PAR SSD drive options

                 MLC (multi-level cell)   cMLC (commercial multi-level cell)
Available sizes  480 GB, 920 GB           480 GB, 1.92 TB
Warranty         5 years                  5 years

- Within the warranty period, worn-out drives will be replaced by HP
- Remaining SSD drive life can be checked by the user (see wear gauge chart)
- The 3PAR array alerts the user when the wear-out level (maximum program/erase cycles) reaches 95%
- After the five-year warranty expires, the customer must purchase worn-out drive replacements
- HP 3PAR Adaptive Sparing decreases wear-out and extends drive life dramatically

HP 3PAR StoreServ 10000 racking options (1 of 2)

Legacy 3PAR racks until February 2013:
- The StoreServ 10400 (former V400) could be ordered in either a 3PAR rack or field-rackable
- The StoreServ 10800 (former V800) could be ordered only in a non-standard 3PAR rack
- The 3PAR racks are available only with 0U 4 x single-phase PDUs

HP 3PAR racking options after February 2013:
- All StoreServ 10000 models can now also be ordered in redesigned HP racks with user-selectable power options:
  - QW978A - HP 3PAR StoreServ 10400 16 GB Control/32 GB Data Cache Rack Config Base
  - QW979A - HP 3PAR StoreServ 10800 32 GB Control/64 GB Data Cache Rack Config Base
  - QW982A - HP 3PAR StoreServ 10000 2-Meter Expansion Rack
- PDUs can be selected as required:
  - 252663-D74 - Single-phase NEMA (24A)
  - 252663-B33 - Single-phase IEC (32A)
  - AF511A - Three-phase NEMA (48A)
  - AF518A - Three-phase IEC (32A)
- The total maximum numbers of drive chassis and drives remain unchanged
- The number of drive chassis in the base rack is reduced by two:
  - 10400 base rack: max 4 drive chassis
  - 10800 base rack: 0 drive chassis
- The new racks can be used to extend legacy configurations; any combination is supported

HP 3PAR StoreServ 10000 racking options (2 of 2)

Before February 2013:
- Legacy V-class/StoreServ 10000 racking with four integrated, vertically mounted 0U 3PAR single-phase PDUs

After February 2013, choose between HP single-phase and three-phase PDUs:
- HP StoreServ 10000 racking with two horizontally mounted HP three-phase IEC or NEMA PDUs
- HP StoreServ 10000 racking with four horizontally mounted HP single-phase IEC or NEMA PDUs

HP 3PAR StoreServ 10000 dispersed rack installation

The disk racks can be up to 100 m apart from the first rack with the controllers
[Diagram: controller rack with disk rack 1, plus disk racks 2 and 3 at a distance]

Learning check
1. How many drives does a drive magazine hold?
______________________________________________________________________________________
______________________________________________________________________________________
__

Learning check answer


1. How many drives does a drive magazine hold?
Each drive magazine always holds four drives of the same
drive type

Capacity efficiency

Copy technologies and thin technologies

What is data compaction?

Data compaction is the reduction of the number of data elements, bandwidth, cost, and time for the generation, transmission, and storage of data without loss of information, by eliminating unnecessary redundancy, removing irrelevancy, or using special coding.

Data compaction on HP 3PAR StoreServ

HP 3PAR Full Copy V1: restorable copy
Part of the base 3PAR OS

- Full physical point-in-time copy
- Provisionable after the copy ends
- Independent of the base volume's RAID and physical layout properties
- Fast resynchronization capability
- Thin Provisioning-aware: full copies can consume the same physical capacity as a thinly provisioned base volume

[Diagram: base volume -> intermediate snapshot -> full copy]

HP 3PAR Full Copy V2: instantly accessible copy
Part of the base 3PAR OS

- Share data quickly and easily
- Full physical point-in-time copy
- Immediately provisionable to hosts
- Independent of the base volume's RAID and physical layout properties
- No resynchronization capability
- Thin Provisioning-aware: full copies can consume the same physical capacity as a thinly provisioned base volume

[Diagram: base volume -> intermediate snapshot -> full copy]

HP 3PAR Virtual Copy: snapshot at its best (1 of 2)

Smart:
- Individually erasable and promotable
- Scheduled creation/deletion
- Consistency groups

Thin:
- No reservation, non-duplicative
- Variable QoS

Number of snapshots:
- Up to 64,000 virtual volumes and snapshots
- Hundreds of snaps per base volume, but only one copy-on-write required

Ready:
- Instantaneously readable and/or writeable
- Snapshots of snapshots
- Virtual Lock for retention of read-only snaps
- Automated erase option

Integrated:
- Microsoft Hyper-V, SQL, Exchange
- vSphere, Oracle
- Backup apps from HP, Symantec, VEEAM, CommVault
- SMI-S

[Table: top arrays worldwide by number of virtual copies as of May 2014, ranging from 32,004 and 31,911 virtual copies on 10800 systems down to 8,144 on a 10400, across 10800, 10400, T400, 7400, and 7200 models]

HP 3PAR Virtual Copy: snapshot at its best (2 of 2)

- Virtual copies can be mapped to CPGs different from their base volumes
- This means that they can have different quality-of-service characteristics; for example, the base volume space can be derived from a RAID 1 CPG on FC disks and the Virtual Copy space from a RAID 5 CPG on NL disks
- The base volume space and the Virtual Copy space can grow independently without impacting each other; each space has its own allocation warning and limit
- Dynamic Optimization can tune the base volume space and the Virtual Copy space independently

HP 3PAR Virtual Copy for backup use
One week based on hourly snaps and an average daily change rate of ~10%

Base volume of 2 TB:
- Monday: 24 copies, ~200 GB
- Tuesday: 48 copies, ~200 GB
- Wednesday: 72 copies, ~200 GB
- Thursday: 96 copies, ~200 GB
- Friday: 120 copies, ~200 GB
- Saturday: 144 copies, ~200 GB
- Sunday: 168 copies, ~200 GB

Results in 168 virtual copies and only ~1.4 TB of snapshot space needed
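The slide's arithmetic as code:

```python
# Snapshot-space estimate for a week of hourly virtual copies.
base_tb = 2.0
daily_change = 0.10     # ~10% of the base volume changes per day
snaps_per_day = 24      # hourly snapshots
days = 7

total_snaps = snaps_per_day * days              # 168 virtual copies
snap_space_tb = base_tb * daily_change * days   # copies share unchanged data
print(total_snaps, f"{snap_space_tb:.1f} TB")   # 168, ~1.4 TB
```

Because copy-on-write snapshots share all unchanged data, the space cost scales with the change rate, not with the number of copies.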

HP 3PAR Thin Technologies leadership overview
TCO and space efficiency without compromise

- Start thin with Thin Provisioning: buy up to 50% less storage capacity 1)
- Get thin with Thin Conversion: reduce tech refresh costs by up to 50% 1)
- Get even thinner with inline data deduplication 2): reduce your storage footprint by 50 to 90%
- Stay thin with Thin Persistence: thin 3PAR volumes stay thin over time

[Diagram: 16 TB presented to Linux hosts as thin LUNs of 1 TB, 8 TB, and 2 TB; only ~3 TB plus a small buffer (24 GB) is actually consumed; zero pages are detected and duplicate pages deduplicated inline]

1) See the HP Get Thin Guarantee at http://www.hp.com/storage/getthin
2) Currently available on SSD only

HP 3PAR Thin Technologies benefits

Built-in:
- Utility Storage supports ThP and Thin Deduplication while eliminating the diminished performance and functional limitations that plague bolt-on thin provisioning and dedupe solutions

In-band:
- The 3PAR ASIC detects sequences of zeroes and repeated data patterns in 16 KB chunks and does not write them to disk
- Third-party ThP and dedupe implementations reclaim space as a post-process, creating space and performance overhead

Reservation-less:
- ThP draws fine-grained increments from a single free-space reservoir without pre-dedication
- Third-party ThP implementations require a separate, pre-dedicated pool for each data service level

Integrated:
- API for direct thin provisioning and thin dedupe integration in Symantec File System, VMware vSphere, Oracle ASM, Windows Server 2012, and others

Guaranteed efficiency:
- Save 50%+ storage capacity using ThP when migrating from legacy storage *
- Save another 50%+ capacity on SSD thanks to thin deduplication

* As compared to a legacy storage array. See the HP Get Thin Guarantee at http://www.hp.com/storage/getthin

HP 3PAR Thin Provisioning: start thin

Traditional array: dedicate on allocation
- Server-presented capacities/LUNs require net array capacities backed by physically installed disks

HP 3PAR array: dedicate on write only
- Physical disks hold only actually written data; the rest remain free chunklets

[Diagram: physically installed disks on both arrays; only the 3PAR array leaves unwritten capacity unallocated]

HP 3PAR Thin Conversion: get thin

Thin online SAN storage by up to 75%. A practical and effective solution to eliminate costs associated with:
- Storage arrays and capacity
- Software licensing and support
- Power, cooling, and floor space

Unique 3PAR ASIC built-in zero detection delivers:
- Elimination of the time and complexity of getting thin
- Open and heterogeneous any-to-3PAR migrations
- Preserved service levels and high performance during migrations

[Diagram: zero blocks detected by the ASIC and dropped during migration - before and after]

HP 3PAR Thin Persistence: stay thin
Keep the array thin over time

- Provides non-disruptive and application-transparent re-thinning of thin provisioned volumes
- Returns space to thin provisioned volumes and to the free pool for reuse
- Delivers simplicity through the unique 3PAR ASIC with built-in zero detection
- No special host software required; leverage standard file system tools/scripts to write zero blocks
- Preserves service levels: zeroes are detected and unmapped at line speed
- Intelligently reclaims 16 KB pages (see the sketch below)
- Integrates automated reclamation with:
  - T10 SCSI UNMAP/TRIM (Windows Server 2012, vSphere [manual], Linux)
  - VAAI (write same zero)
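A sketch of the zero-detect decision at 16 KB granularity (performed inline by the ASIC on the array; plain Python here purely to illustrate the reclaim rule):

```python
# Find all-zero 16 KB pages that can be unmapped and returned to the free pool.
PAGE = 16 * 1024

def reclaimable_pages(data: bytes):
    """Yield offsets of all-zero 16 KB pages."""
    for off in range(0, len(data), PAGE):
        if not any(data[off:off + PAGE]):
            yield off

volume = bytes(PAGE) + b"\x01" * PAGE + bytes(PAGE)  # zero, data, zero
print(list(reclaimable_pages(volume)))               # [0, 32768]
```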

HP 3PAR Thin Technologies positioning
Built-in, not bolted on

- No up-front allocation of storage for thin volumes
- No performance impact when using thin and thin-deduped volumes, unlike competing storage products
- No restrictions on 3PAR thin volume use, unlike many other storage arrays
- Allocation size of 16 KB, much smaller than most competitors' thin implementations
- Thin volumes can be created in less than 30 seconds without any disk layout or configuration planning required
- Thin volumes are autonomically wide-striped over all drives within a certain tier of storage

HP 3PAR Thin Clones
Host-assisted by vSphere and Hyper-V

- VM cloning leverages Virtual Copy together with xCOPY and ODX
- Integration with HP 3PAR inline deduplication
- Works on VMware vSphere and Hyper-V
- Leverages HP 3PAR reservation-less snapshot technology
- Clones are created quickly and easily without pre-allocating any storage
- New data is deduplicated leveraging the inline dedupe solution

Learning check
1. What is the key difference in dedicating provisioning on a
traditional array and provisioning on an HP 3PAR StoreServ array?

Learning check answer


1. What is the key difference in dedicating provisioning on a
traditional array and provisioning on an HP 3PAR StoreServ array?
. On a traditional array you dedicate on allocation
. With HP 3PAR StoreServ array, you dedicate on write only

Learning check
2. List at least three benefits of keeping an array thin over time
________________________________________________________________________________________
________________________________________________________________________________________
________________________________________________________________________________________
___

Learning check answer


2. List at least three benefits of keeping an array thin over time
. Provides non-disruptive and application-transparent rethinning of thin provisioned volumes
. Returns space to thin provisioned volumes and to free pool for
reuse
. Delivers simplicity through unique 3PAR ASIC with built-in zero
detection
. Preserves service-level zeroes detected and unmapped at line
speeds
. Intelligently reclaims 16 KB pages
. Integrates automated reclamation

Performance


HP 3PAR StoreServ Storage
Same operating system, management console, and software features

                                     7200c    7400c     7450c     7440c      10400    10800
Controller nodes                     2        2-4       2-4       2-4        2-4      2-8
Fibre Channel host ports             4-12     4-24      4-24      4-24       0-96     0-192
10 Gb iSCSI/FCoE ports               0-4      0-8       0-8       0-8        0-48     0-96
Built-in IP Remote Copy ports        2        2-4       2-4       2-4        2-4      2-8
GBs cache per node-pair/max          40/40    48/96     96/192    96/192     192/384  192/768
GBs flash cache per node-pair/max    768/768  768/1500  NA        1500/3000  -        -
Drives per StoreServ                 8-240    8-576     8-240     8-960      16-960   16-1920
  SSD                                yes      yes       yes       yes        yes      yes
  15 k SAS                           yes      yes       NA        yes        yes      yes
  10 k SAS                           yes      yes       NA        yes        yes      yes
  7.2 k NL SAS                       yes      yes       NA        yes        yes      yes
Max SSD per StoreServ                120      240       240       240        256      512
Raw capacity (TB)                    500      1600      460       2000       1600     3200
SPC-1 benchmark IOPS                 NA       258,078   Planned   NA         NA       450,213
Max front-end IOPS (read)            300,000  600,000   900,000   900,000    240,000  480,000
Max front-end MB/s (256 k read)      2,700    5,000     4,250     5,000      10,800   14,900

Which array is more efficient and easier to use?
[Image: traditional storage array vs. HP 3PAR array]

Adaptive Flash Cache

Read cache extension using SSD:
- Leverages a portion of SSD capacity as flash cache
- Provides a second-level caching layer between DRAM and HDDs
- Caches the most frequently accessed data
- Redirects host I/O to flash cache to provide low-latency access
- Is included with the base software
- Flash cache page size: 16 KB

Advantages and use cases:
- Lowers latency by ~20% for random read-intensive I/O workloads
- Faster response time for periodic read bursts on cold data on HDDs
- Faster response time for read bursts on cold data on tiered volumes
- No dedicated SSDs required
- Simple system-wide configuration
- Available on all HP 3PAR StoreServ arrays

[Diagram: reads without Adaptive Flash Cache go DRAM cache -> HDD; with Adaptive Flash Cache, reads are served from the SSD flash cache tier between DRAM and HDD]
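A sketch of the two-level read path (a hypothetical simplification; dict lookups stand in for the real cache structures):

```python
# Two-tier read path: DRAM first, then SSD flash cache, then HDD.
def read(block, dram, flash_cache, hdd):
    if block in dram:                 # first-level hit
        return dram[block], "dram"
    if block in flash_cache:          # second-level hit on SSD, low latency
        data = flash_cache[block]
        dram[block] = data            # promote back into DRAM
        return data, "flash cache"
    data = hdd[block]                 # miss: read from spinning disk
    flash_cache[block] = data         # 16 KB page staged into flash cache
    return data, "hdd"

dram, fc, hdd = {}, {}, {7: b"\x00" * 16384}
print(read(7, dram, fc, hdd)[1])      # hdd (cold read, staged to flash cache)
print(read(7, {}, fc, hdd)[1])        # flash cache (DRAM evicted, SSD still hot)
```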

Adaptive Flash Cache specs

                                  HP 3PAR   HP 3PAR   HP 3PAR     HP 3PAR     HP 3PAR
                                  7200      7400      10400 old   10400 new   10800
Minimum drives per node pair      2 x DMAG (8 drives) for all models
Max flash cache per system        768 GB    1.5 TB    3 TB        4 TB        8 TB
Max flash cache per node pair     768 GB    768 GB    1.5 TB      2 TB        2 TB
Total system cache (DRAM + AFC)   792 GB    1,564 GB  3,384 GB    4,384 GB    8,768 GB

Notes:
- The minimum SSD counts apply to Adaptive Flash Cache only; for provisioning and AO the minimum remains 8 per node pair
- All SSDs are supported for AFC; the only exception is the 480 GB cMLC SSD (E7Y55A/E7Y56A), which does not support creation of Adaptive Flash Cache
- Adaptive Flash Cache is not applicable to all-flash arrays; it does not accelerate data that is already stored on SSD

HP 3PAR StoreServ 7440c hardware details

Item                           HP 3PAR StoreServ 7440c
Number of controller nodes     2 or 4
HP 3PAR Gen4 ASICs             2 or 4
CPU (per controller node)      8-core, 2.3 GHz
Total on-node cache            96 - 192 GB
Total flash cache              1.5 - 3 TB
Total cache                    1.6 - 3.2 TB
Number of disk drives          8 - 960
Number of solid state drives   8 - 240
Raw capacity                   1.2 TB - 2000 TB
Drive enclosures               SFF: 24 slots in 2U; LFF: 24 slots in 4U
Number of drive enclosures     0 - 38
Host adapters                  Four-port 8 Gb/s FC; four-port 16 Gb/s FC; two-port 10 Gb/s iSCSI/FCoE
Maximum host ports             24
8 Gb/s FC host ports           4 - 24
16 Gb/s FC host ports          0 - 8
10 Gb/s iSCSI host ports       0 - 8
Maximum initiators             1024 or 2048

HP 3PAR express writes
New functionality in 3PAR OS 3.2.1

- Fibre Channel host write processing has been optimized to deliver significantly lower latencies
- The main improvement (10 - 30%) is seen for small-block random writes at low workload intensity
- Ships as part of the base OS and is enabled by default after an upgrade to 3.2.1; best practice is to leave this enabled
- A new Target Mode Write Optimization column is added to the showport output

[Chart: host response time (ms) vs. host IOPS with express writes disabled and enabled. Measured configuration: 7450 four-node with 48 SSDs, six virtual volumes in RAID 1]

HP 3PAR RAID 6 layout optimization

- RAID 6 handling has been improved to reduce the number of required back-end I/Os for writes
- Applies to RAID 6 set sizes of 6, 10, and 16 (4+2, 8+2, and 14+2) only
- Applies to HDD and SSD
- After an upgrade to 3.2.1, a tuneld or tunevv command can convert an existing layout to the optimal one
- Use the showblock command to see the difference between optimal and non-optimal layouts (example in the backup slides)

[Chart: write IOPS of legacy 3PAR RAID 6 vs. new HP RAID 6; performance improvement of optimized RAID 6 with 100% 16 KB writes]
TPVV grow optimization


Adaptive VV grow makes the grow size of each TPVV related to its
virtual size
Fixed

growing by 256 MB per node is not optimal any longer, considering


larger disks and faster hosts
Small growing increments cause too many growth requests, causing more
system busy events
Adaptive grow increments are between 256 MB and 4 Gb per node

Multinode VV growth has been enhanced


In

certain cases, VVs were not optimally and symmetrically grown across
nodes

AO block-less move: performance benefit

[Charts: IOPS during an AO region switch in 3.1.2 vs. 3.1.3; roughly 70% benefit]

Optimizations in 3.1.3: performance comparison

[Chart: task duration in seconds, 3.1.2 compared to 3.1.3 (reduction of up to ~95%)]

Learning check
1. All the following statements about Adaptive Flash Cache are true except which one?
- Provides performance acceleration for random reads
- Available as an add-on to the base OS suite
- Enabled/disabled on the entire system or on selected vvsets
- Requires no dedicated SSDs

Learning check answer

1. All the following statements about Adaptive Flash Cache are true except which one?
- Available as an add-on to the base OS suite (false: Adaptive Flash Cache is included with the base OS)

Availability


HP 3PAR high availability (1 of 3)
Spare disk drives compared to distributed sparing

Traditional arrays: dedicated spare drives
- Few-to-one rebuild
- Hotspots and long rebuild exposure

3PAR StoreServ: distributed spare chunklets
- Many-to-many rebuild
- Parallel rebuilds in less time

HP 3PAR high availability (2 of 3)
Guaranteed drive enclosure (drive cage) availability if desired

Traditional arrays: enclosure-dependent RAID
- An enclosure (cage) failure might mean no access to data, e.g., when a RAID 5 group is confined to one enclosure

3PAR StoreServ: enclosure-independent RAID
- Raidlet groups for any RAID level are spread across enclosures
- Data access is preserved with HA enclosure (cage) protection
- User selectable per CPG

[Diagram: RAID 5, RAID 50, and RAID 10 raidlet groups distributed across enclosures so that no single enclosure failure breaks redundancy]

HP 3PAR high availability (3 of 3)
Write cache re-mirroring

Traditional mid-range arrays: traditional write cache mirroring
- Losing one controller results in poor performance due to write-through mode, or in a risk of write data loss

3PAR StoreServ: persistent write cache mirroring
- Write cache stays on thanks to redistribution across the remaining nodes
- No write-through mode, so consistent performance
- Works with all 4-, 6-, and 8-node systems

Online firmware update
Fast and reliable

- Non-disruptive to business applications
- One node after the other is updated
- Can be performed under I/O load

Tests performed by ESG on the following environment:
- VMware vSphere 5.1 running on four HP BL460 servers
- 3PAR StoreServ 7450 four-node array
- OLTP workload of 144,000 IOPS generated with IOMETER
- Initially each of the four nodes served 36,000 IOPS

[Chart: performance during the firmware update as the first, second, and third nodes are updated in turn]

HP 3PAR Persistent Ports
Path loss, controller maintenance, or controller loss behavior of 3PAR arrays; no user intervention required

- In Fibre Channel SAN environments, all paths stay online in case of loss of signal on a Fibre Channel path, during node maintenance, and in case of a node failure
- A Fibre Channel path loss is handled by 3PAR Persistent Ports: all server paths stay online
- A controller maintenance action or controller loss is handled by 3PAR Persistent Ports for all protocols: all server paths stay online
- For Fibre Channel, iSCSI, and FCoE deployments, all paths stay online during node maintenance and in case of a node failure
- The server does not see the swap of the 3PAR port ID, so no MPIO path failover is required

Read more in the Persistent Ports whitepaper
[Diagram: MPIO hosts connected through ports 0:0:1, 0:0:2, 1:0:1, and 1:0:2 on controllers 0 and 1; on a path or controller loss, the partner controller assumes the failed ports' identities]

HP 3PAR Get 6-Nines Guarantee
99.9999% data availability, guaranteed*

Industry-first 6-nines guarantee across midrange, enterprise, and all-flash storage

Products covered:
- All four-node 7000 systems
- All 10000 systems with more than four nodes

Program details*:
- 6-Nines Availability Guarantee on covered systems
- Remedy: HP will work with the customer to resolve their issue and fund three additional months on the customer's mission-critical support contract
- Length of guarantee: the first 12 months the 3PAR storage system is deployed

* Complete program terms and conditions on the Get 6-Nines Portal Page

Learning check
1. Compare the key benefits of persistent write cache mirroring over
traditional write cache mirroring

Learning check answer

1. Compare the key benefits of persistent write cache mirroring over traditional write cache mirroring
- In traditional write cache mirroring, losing one controller results in poor performance due to write-through mode, or in a risk of write data loss
- In persistent write cache mirroring, the absence of write-through mode produces consistent performance, and it works with all 4-, 6-, and 8-node systems

File support


File and object offerings for HP 3PAR StoreServ

3PAR StoreServ File Persona: primary storage
- Straightforward user shares and home directories
- AD, local, and LDAP environments
- Unified GUI and CLI management

File Controller: sophisticated file serving for AD-based environments
- Connected StoreEasy remote sites
- Configurable performance and capacity

StoreAll 8200 (integrated via SMI-S Provider): archive storage
- Retention, WORM, integrity validation
- Metadata analytics and search
- Scale-out performance and capacity

HP 3PAR StoreServ primary file storage products

There are two 3PAR StoreServ products that provide primary file storage:

HP 3PAR File Persona Software Suite
- A feature within the 3PAR OS
- Truly converged: primary storage, block and file

HP 3PAR StoreServ File Controller
- Add-on hardware, usable with any 3PAR array
- Gateway with integrated management

These are significantly different architectural approaches

HP 3PAR File Persona Software Suite
What is it?

- A licensed native feature of the HP 3PAR OS; no additional hardware required
- Includes a rich set of file protocols (SMB, NFS, HTTP), an Object Access API (REST), and file data services
- A file storage feature using hardware within the array itself
- Available in the 7200c, 7400c, 7440c, and 7450c

HP 3PAR StoreServ with HP 3PAR File Persona: efficient, effortless, bulletproof

File Persona limits

- File storage capacity: 64 TB per node pair; 128 TB (7440c)
- Users: 1,500 users per node pair; 3,000 users (7440c)
- File systems/file provisioning groups (FPG): 32 TB per FPG; 16 FPGs per node pair; 32 VVs allowed per FPG (min 1 TB, max 16 TB)
- File shares: 4,000 SMB shares per node pair; 1,024 NFS shares per node pair
- Virtual file servers (VFS): 16 VFS per node pair (1 VFS per FPG); 4 VLANs per VFS; 4 IP addresses per VFS
- Files: 2 TB max file size; 128 K files per directory; 100 million files and directories per FPG
- File stores: 256 per node pair
- Snapshots: 262,144 file snapshots per node pair
- Quotas: 20,000 user/group quotas per node pair; 256 capacity quotas per node pair

HP 3PAR StoreServ File Controller
What is HP 3PAR StoreServ File Controller?

Add-on hardware that provides file services:
- Clustered for high availability
- Direct or fabric attached via Fibre Channel; uses the Fibre Channel ports of the array
- Provides its own network interfaces

Integrated management:
- End-to-end file storage and file-share provisioning
- Monitoring dashboard for file services

Significantly scalable:
- Two to eight file controllers per cluster
- Multiple file tenants per cluster
- Multiple clusters per 3PAR array

Windows Storage Server 2012 R2:
- Full Windows environment compatibility
- SMB protocol updated to SMB 3.02

3PAR StoreServ File Controller limits

- Capacity: 352 TB per File Controller cluster (22 drive letters x max volume size)
- File system (volume): 16 TB per volume (3PAR LUN limit); 1 VV per volume; 22 basic volumes per cluster (drive-letter limited)
- Users: 20,000 users per file controller; 40,000 users with a file controller pair + 7440
- File shares: undefined, but thousands per file controller
  - The number of shares on a server affects server boot time; on a server with typical hardware and thousands of shares, boot time can be delayed by minutes (exact delays depend on server hardware)
  - Recommended max values: 5,000 SMB shares and 2,048 NFS shares per file controller
- Cluster: 2 - 8 file controllers per cluster; 150 VLANs per cluster (tested limit), in practice limited by system memory and NIC driver; 32 physical network interfaces per cluster
- Multi-tenancy: up to 24 tenants per cluster
- Snapshots: 64 shared-folder VSS snapshots; hardware snapshots limited by the array
- File: 16 TB max file size; 350,000 files per directory; if directory enumeration performance is important, files should be stored in a file system hierarchy
- Quotas: 20,000 user/group quotas per file controller

3PAR File Persona and 3PAR StoreServ File Controller features

                  3PAR File Persona                         3PAR StoreServ File Controller
Product type      Software feature                          Discrete add-on hardware
Scalability       2 or 4 3PAR converged controllers;        2 to 8 file controllers per cluster;
                  up to 3,000 concurrent users*;            up to 40,000 concurrent users**;
                  128 TB aggregate file capacity*;          352 TB per file controller cluster;
                  32 TB per file system                     16 TB per file system (3PAR LUN limit)
Protocols         SMB 1.0, 2.0, 2.1, 3.0***;                SMB 1.0, 2.0, 2.1, 3.0, 3.02;
                  NFSv3, v4; NDMP                           NFSv2, v3, v4.1
Authentication    Active Directory, OpenLDAP, Local         Active Directory, Local
Management        Truly unified SSMC and 3PAR OS CLI        Semi-integrated
Remote support    Truly unified STaTS                       Discrete Insight Remote Support
Advanced          Object Access API for custom cloud apps   Screening; classification, access
features                                                    policies, rights management

* 74x0c-4N only   ** Per file controller pair   *** Select SMB 3.0 features

Considerations when sizing for file workloads

Clients:
- Type of client; number of concurrent clients; client applications
- Overall performance of the client: CPU, memory/cache, client network interface

Connectivity:
- Protocol: SMB (1.0, 1.1, 2, 2.1, 3), NFS (v3, v4)
- Network infrastructure: LAN/WAN, 1 GbE/10 GbE, connectivity between switches, congestion, network load balancing

File serving node:
- CPU, memory/cache, overall performance of the server
- Server network configuration: 1/10 GbE, bond mode (if any), number of links in the bond

Storage:
- HBA used; media type (HDD, SSD); RAID level

Operations:
- Backup, restore, snapshots, quotas, anti-virus, replication

Logical view of managed objects

File Share ("home")
- Share permissions
- Accessed via SMB, NFS, or the REST API

File Store (sales)
- Holder of policies, some of which can be inherited from the VFS
- Snapshot entity for up to 1,024 snapshots

Virtual File Server (enterprise.hp.com)
- Virtual IP interfaces and authentication service
- User quotas and antivirus configuration

File Provisioning Group (fpg1)
- Replication and disaster recovery entity
- Built from an autonomic group (virtual volume set)

(Diagram: up to 32 VFS/FPG pairs, VFS1/FPG1 through VFS32/FPG32, layered over CPGs and wide-striped logical disks)

Antivirus scanning overview


Policy-based antivirus scanning over SMB (CIFS), NFS, and HTTP (used by
Object Access API) protocols

Exclusion AV policies at the VFS level and override policies at File Store level
Supports multiple virus scan servers (max 50) for redundancy and improved
throughput performance

ICAP 1.0-based Virus Scan Engine (VSE) software supported (single vendor at a time)

Symantec Protection Engine 7.5


McAfee VirusScan Enterprise 8.8 and VirusScan Enterprise for Storage 1.0.2
Supports on-access (real-time) and scheduled (on-demand) scanning

Supports automatic and manual start/stop of AV service on addition/removal


of the VSE to the cluster
AV statistics (files scanned, files infected, files quarantined) at VFS level

Antivirus scanning process

On-access scan:
1. Client requests an open (read) or close (write) of an SMB file, or a read for an NFS/HTTP file
2. Storage system determines whether the file needs to be scanned based on the policies, and notifies the AV scan server
3. VSE server scans the file and reports the scan results back
4. If no virus is found, access to the file is allowed; if a virus is found, Access Denied is returned to the SMB client, Permission Denied to the NFS client, and the transfer is closed on the HTTP client; the file is quarantined and scan messages are logged in /var/log/ade
5. If no VSE server is available and the policy is set to Deny Access, access is denied
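The decision logic of this flow can be sketched in a few lines of Python. This is purely illustrative and not HP code; Policy, MockScanServer, and handle_file_access are invented names standing in for the policy engine, the ICAP-based VSE server, and the File Persona access path:

```python
from dataclasses import dataclass

@dataclass
class ScanResult:
    infected: bool

class MockScanServer:
    """Stand-in for an ICAP-based Virus Scan Engine (VSE) server."""
    def scan(self, path: str) -> ScanResult:
        return ScanResult(infected=path.endswith(".bad"))

@dataclass
class Policy:
    excluded_suffixes: tuple = (".tmp",)   # VFS-level exclusion policy
    deny_on_unavailable: bool = True       # "Deny if no AV servers"

def handle_file_access(path: str, policy: Policy, servers: list) -> bool:
    """Return True if the client may access the file (steps 2-5 above)."""
    if path.endswith(policy.excluded_suffixes):  # step 2: excluded, no scan
        return True
    if not servers:                              # step 5: no VSE reachable
        return not policy.deny_on_unavailable
    if servers[0].scan(path).infected:           # steps 3-4: scan and decide
        return False                             # quarantine/logging happen here
    return True

print(handle_file_access("report.docx", Policy(), [MockScanServer()]))  # True
print(handle_file_access("evil.bad", Policy(), [MockScanServer()]))     # False
print(handle_file_access("anything.doc", Policy(), []))                 # False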

User-driven file recovery


Using File Store snapshots
File Store snapshots are different from block
volume Virtual Copy snapshots
Restoring individual files from File Store
snapshots is more efficient than
administrator-driven recovery

Users can restore their own files

How it works
Windows clients

Snapshots integrate with Previous Versions tab


in Windows Explorer

Linux/UNIX clients

Previous versions of the files appear in


.snapshot directory


Replication and disaster recovery


Replication
Remote Copy is used for files just as it is for block
Both Sync and Async Periodic Remote Copy supported
All VVs* in a file provisioning group must be in a single
Remote Copy group
Both uni-directional and bi-directional Remote Copy
supported to different volumes
1:1, M:1 (many-to-one) and 1:N (one-to-many), M:N
topologies supported for failover only, not for
distribution**

Disaster recovery
Required preconfiguration of the target array for node
networking, DNS configuration, AD config, AV services
Target/backup array must have the same number of
File Persona nodes as source/primary
* Max 32 VVs of minimum 1 TB each in a node for the first release
** M, N is a max of 4
Note: Scheduled tasks must be manually migrated or they will be lost

(Diagram: Remote Copy groups containing VV1 through VV4 on each 3PAR StoreServ, replicated between arrays over Sync/Async RC links)

Backup options

Share-based backup
- Network share-based backup over SMB or NFS
- Recommended mode of backup; use NDMP when needed

NDMP backup over iSCSI
- Supports the software iSCSI initiator for NDMP backup
- NDMP v2, v3, v4 (default is v4)
- Shares the same network ports with file I/O

Backup software: HP Data Protector, CommVault Simpana, Symantec NetBackup, IBM Tivoli Storage Manager

(Diagram: backup software moves data from the 3PAR StoreServ to the backup target)


Learning check
1. What are the two options for backup, and when are they
recommended?
________________________________________________________________________________________
________________________________________________________________________________________
________________________________________________________________________________________
___

Learning check answer


1. What are the two options for backup, and when are they
recommended?
Share-based backup: the recommended mode of backup; use NDMP when needed
NDMP backup over iSCSI

Replication and recovery


HP 3PAR Remote Copy

Protect and share data

1:1 configuration: Sync RC and/or Async Periodic RC, any 3PAR arrays
4:4 configuration: Sync RC and/or Async Periodic RC, any 7000 or 10000 array

Smart
- Initial setup in minutes
- Simple and intuitive commands
- No consulting services

Complete
- Native IP LAN or FC SAN-based
- No extra copies or infrastructure needed
- Thin Provisioning and Thin Conversion aware
- Mirror 1:1 between any 3PAR arrays (F-Class, T-Class, 7000, and 10000)
- For StoreServ 7000 and 10000 at 3.1.3 and later, all configurations from 1:1 to 4:4 are supported, with any combination of Sync and Async Periodic RC
- VMware vSphere Site Recovery Manager and vSphere Metro Storage Cluster certified

Scalable
- One RCIP link per node (up to eight per 3PAR array)
- Up to four RCFC links per node (up to 32 per 3PAR array)
- Up to 6,000 replicated volumes per 3PAR array

See the demo video at: http://h20324.www2.hp.com/SDP/Content/ContentListing.aspx?PortalID=1&booth=66&tag=534&content=3431

HP 3PAR Synchronous Long Distance configuration

A specialized 1:2 disaster recovery solution
- Combines the ability to maintain concurrent metro-distance synchronous remote copies with RTO=0 AND continental-distance asynchronous remote copies for disaster tolerance

Synchronous Long Distance 1:2 configuration
- Primary to Secondary: Sync RC over metropolitan distance
- Primary to Tertiary: Async Periodic RC (active) over continental distance
- Secondary to Tertiary: Async Periodic RC in standby

Find four demo videos at: https://www.youtube.com/playlist?list=PL9UfCHCZQuNDmU8WRXT_RU7yG_sL-eweV

HP 3PAR Remote Copy, Synchronous

Continuous mode operation and synchronization

Real-time mirror
- Highest I/O currency
- Lock-step data consistency

Space efficient
- Thin Provisioning aware

Targeted use
- Campus-wide business continuity

Guaranteed consistency
- Enabled by volume groups

Write path from primary volume to secondary volume:
1. Host server writes I/O to the primary write cache
2. Primary array writes I/O to the secondary write cache
3. Remote array acknowledges the receipt of the I/O
4. Host I/O acknowledged to the host

HP 3PAR Remote Copy, Asynchronous Periodic mode

Initial setup and synchronization

Bandwidth friendly
- Efficient even with high-latency links
- Local write acknowledgement
- Just delta replication

Space efficient
- Thin aware
- Based on snapshots (local snapshots on the primary volume, replicated to the secondary volume)

Guaranteed consistency
- Enabled by volume groups

Initial synchronization:
1. Secondary volume created
2. Local snapshot created
3. Initial synchronization started

HP 3PAR Remote Copy: Assured data integrity

Single volume
- All writes to the secondary volume are completed in the same order as they were written on the primary volume

Autonomic multi-volume group
- Volumes can be grouped together to maintain write ordering across sets of volumes
- Useful for databases or other applications that make dependent writes to more than 1 volume
- Secondary groups and volumes are autonomically created or reconfigured and credentials inherited; new target volumes are created autonomically when a new source volume is added

(Diagram: a replicated provisioning group on the primary 3PAR storage mirrored to a replicated provisioning group on the secondary 3PAR storage)

HP 3PAR Remote Copy: Supported topologies and maximum latencies

Remote Copy type: Max supported latency
- Synchronous RC FC: 2.6 ms RTT*
- Synchronous RCIP: 2.6 ms RTT*
- Asynchronous Periodic RC FC: 2.6 ms RTT*
- Asynchronous Periodic RCIP: 150 ms RTT*
- Asynchronous Periodic RC FCIP: 120 ms RTT*

* RTT = round trip time
Optical fiber networks typically have a delay of ~5 us/km (0.005 ms/km); thus 2.6 ms allows fiber link distances of up to 260 km
(2 x 260 km = 520 km; 520 km x 0.005 ms/km = 2.6 ms)
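The distance arithmetic above generalizes easily; a small sketch of our own (not part of the course material) that converts between an RTT budget and a fiber distance:

```python
# Propagation-delay math from this slide: ~5 us/km, i.e. 0.005 ms/km one way.
FIBER_DELAY_MS_PER_KM = 0.005

def round_trip_ms(distance_km: float) -> float:
    """RTT over a fiber link: out and back, so 2x the one-way delay."""
    return 2 * distance_km * FIBER_DELAY_MS_PER_KM

def max_distance_km(rtt_budget_ms: float) -> float:
    """Largest fiber distance that stays within an RTT budget."""
    return rtt_budget_ms / (2 * FIBER_DELAY_MS_PER_KM)

print(round_trip_ms(260))    # 2.6 -> exactly the sync RC limit
print(max_distance_km(150))  # 15000.0 -> the async periodic RCIP budget
```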

Cluster Extension for Windows

Clustering solution protecting against server and storage failure

What does it provide?
- Manual or automated site failover for server and storage resources
- Transparent Hyper-V live migration between sites

Supported environments
- Windows Server 2003, 2008, 2012
- HP StoreEasy (Windows Storage Server)

Max supported distances
- Remote Copy sync supported up to 2.6 ms RTT (~260 km)
- Up to Microsoft Cluster heartbeat maximum of 20 ms RTT

Configuration
- 1:1 and SLD configurations; sync or async Remote Copy
- HP 3PAR synchronous or asynchronous Remote Copy between the data centers, managed by HP CLX
- File share witness; LAN/WAN connectivity between sites

Requirements
- 3PAR disk arrays
- 3PAR Remote Copy
- Windows cluster
- HP Cluster Extension (CLX)
- Max 20 ms cluster IP network RTT

Licensing options
- Option 1: per cluster node (1 LTU per Windows cluster node; 4 LTUs for the configuration shown)

Also see the HP CLX resources

Cluster Extension for Windows on vSphere

Clustering solution protecting against server and storage failure

What does it provide?
- Manual or automated site failover for Windows VMs and storage resources

Supported environments
- Windows Server 2003, 2008, 2012 VMs on VMware vSphere

Max supported distances
- Up to Remote Copy sync supported max of 2.6 ms RTT (~260 km)
- Up to Microsoft Cluster heartbeat max of 20 ms RTT

Configuration
- 1:1 and SLD configurations; sync or async Remote Copy
- HP 3PAR synchronous or asynchronous Remote Copy between Data Center 1 and Data Center 2, managed by HP CLX
- File share witness on a third site (DC 3); LAN/WAN between sites; clustered VMs on vSphere hosts in both data centers

Requirements
- 3PAR disk arrays
- 3PAR Remote Copy
- Windows cluster
- HP Cluster Extension (CLX) on each VM in the cluster
- Max 20 ms cluster IP network RTT

Licensing options
- Option 1: per Windows VM (1 LTU per Windows VM in the cluster)

Also see the HP CLX resources

HP Serviceguard Metrocluster for HP-UX and Linux

End-to-end clustering solution to protect against server and storage failure

What does it provide?
- Manual or automated site failover for server and storage resources
- Quorum Service on a third site (DC 3); LAN/WAN between data centers

Supported environments
- HP-UX 11i v2 and v3 with Serviceguard
- RHEL 5 and 6 with HP Serviceguard 11.20.10
- SLES 11 with HP Serviceguard 11.20.10

Max supported distances
- Up to Remote Copy sync max 2.6 ms RTT (~260 km)
- Up to Remote Copy async max 150 ms RTT

Configuration
- HP 3PAR synchronous or asynchronous Remote Copy between Data Center 1 and Data Center 2, managed by HP Serviceguard Metrocluster

Requirements
- HP 3PAR disk arrays
- 3PAR Remote Copy
- HP Serviceguard and HP Metrocluster

Licensing for Linux
- 1 LTU SGLX per CPU core and 1 LTU MCLX per CPU core

Licensing options for HP-UX
- Option 1: per CPU socket for SGUX and MCUX
- Option 2: per cluster with up to 16 nodes for SGUX and MCUX

Also see the Metrocluster 3PAR manuals

Peer Persistence overview

Peer Persistence is a high-availability storage configuration between two sites/data centers with the ability to transparently redirect host I/O from the primary to the secondary storage system
- Switchover is a manual process allowing the facilitation of service optimization and storage system maintenance activities within a high-availability data storage solution
- Failover is an automatic process that redirects host I/O from a failed source system to the target storage system; failover uses the HP 3PAR Quorum Witness to monitor for HP 3PAR storage system failure and determine whether a failover of host services is required
- The volumes must be synchronously replicated and must have the same WWNs
- For vSphere, host persona 11 is required; for Windows, host persona 15 is required

Currently supported environments as of September 2014:

VMware vSphere (stand-alone, cluster, and vMSC configurations)
- vSphere 5.0, 5.1: HP 3PAR OS 3.1.2 MU2, FC host connectivity
- vSphere 5.5: HP 3PAR OS 3.1.3 (FC) or 3.2.1 (FC, iSCSI, FCoE)

Windows Server (stand-alone, cluster, and Hyper-V configurations)
- 2008 R2: HP 3PAR OS 3.2.1; FC, iSCSI, FCoE
- 2012 R2: HP 3PAR OS 3.2.1; FC, iSCSI, FCoE

Remote Copy links
- RCFC: HP 3PAR OS 3.1.2 MU2
- RCIP: HP 3PAR OS 3.1.3

Peer Persistence for VMware vSphere

Certified for vSphere Metro Storage Cluster
- Primary RC volume: active path presentation
- Secondary RC volume LUN: passive path presentation

What does it provide?
- High availability across data centers
- Automatic or manual transparent LUN swap
- Transparent VM vMotion between data centers

How does it work?
- Based on 3PAR Remote Copy and vSphere ALUA
- Primary RC volume presented with active paths
- Secondary RC volume presented with passive paths
- Automated LUN swap arbitrated by a Quorum Witness (QW; a Linux ESX VM on a third site)

Supported environments
- ESX vSphere 5.0, 5.1, 5.5 including HA, Failsafe, and uniform vSphere Metro Storage Cluster
- Up to RC sync supported max of 2.6 ms RTT (~260 km)

Requirements
- Two 3PAR disk arrays
- FC, iSCSI, or FCoE cross-site server SAN
- Two RC sync links (RCFC or RCIP*)
- HP 3PAR synchronous Remote Copy + Peer Persistence between Data Center 1 and Data Center 2, with the QW in DC 3

Also see the VMware KB "Implementing vMSC using 3PAR Peer Persistence" and the HP white paper "Implementing vMSC using HP 3PAR Peer Persistence"
* RCFC strongly recommended; VMware vMSC certification is based on RCFC

Peer Persistence for Windows

Available with 3PAR OS 3.2.1

What does it provide?
- High availability across data centers
- Automatic or manual transparent LUN swap
- Transparent live migration between data centers
- Primary RC volume: active path presentation; secondary RC volume LUN: passive path presentation

How does it work?
- Based on 3PAR Remote Copy and MS MPIO
- Primary RC volume presented with active paths
- Secondary RC volume presented with passive paths
- Automated LUN swap arbitrated by a Quorum Witness (QW; a Linux Hyper-V VM on a third site)

Supported environments
- Windows Server 2008 R2 and 2012 R2
- Stand-alone servers and Windows Failover Cluster
- Hyper-V (clustered applications and Hyper-V VMs)
- Up to RC supported max of 2.6 ms RTT (~260 km)

Requirements
- HP 3PAR synchronous Remote Copy + Peer Persistence between Data Center 1 and Data Center 2, with the QW witness in DC 3

3PAR HA/DT options and comparison

Primary use case
- 3PAR Peer Persistence: application-transparent storage failover
- 3PAR CLX / HP Serviceguard Metrocluster: cluster service failover (downtime while services are restarted on the other site)

Integration method
- Peer Persistence: agentless and transparent, based on MPIO (ALUA)
- CLX/Metrocluster: agent in the cluster stack (Windows cluster, HP Serviceguard)

Configurations
- Peer Persistence: uniform access (hosts in both sites need connectivity to the arrays in both sites); 1:1 Remote Copy
- CLX/Metrocluster: non-uniform access (hosts in each site are connected to the local array only); 1:1 Remote Copy, SLD

Supported replication
- Peer Persistence: synchronous only, RCFC or RCIP
- CLX/Metrocluster: synchronous or asynchronous, RCFC or RCIP

Supported configurations
- Peer Persistence: VMware stand-alone or clustered; Windows stand-alone, Failover Cluster, Hyper-V
- CLX/Metrocluster: Windows Failover Cluster (CLX); HP-UX and Linux Serviceguard Metrocluster

Trigger
- Peer Persistence: manual or automated failover
- CLX/Metrocluster: manual or automated failover

Manual granularity
- Both: Remote Copy group

Automated granularity
- Peer Persistence: full array
- CLX/Metrocluster: clustered service / Remote Copy group

vSphere disaster recovery with Site Recovery Manager

Automated ESX disaster recovery

What does it do?
- Integrates VMware vSphere infrastructure with HP 3PAR Remote Copy and Virtual Copy
- Makes disaster recovery protection a property of the VM
- Simplifies disaster recovery and increases reliability
- Allows you to pre-program your disaster response
- Enables non-disruptive disaster recovery testing

(Diagram: a production site and a recovery site, each with vCenter, Site Recovery Manager, virtual machines, VMware infrastructure, and servers; HP 3PAR Remote Copy replicates the production LUNs to Remote Copy DR LUNs, and HP 3PAR Virtual Copy provides Virtual Copy test LUNs)

Requirements
- VMware vSphere
- VMware vCenter
- VMware vCenter Site Recovery Manager
- HP 3PAR Replication Adapter for VMware vCenter Site Recovery Manager
- HP 3PAR Remote Copy Software
- HP 3PAR Virtual Copy Software (for disaster recovery failover testing)

Also see the 3PAR vSphere white paper

HP 3PAR Peer Persistence versus HP 3PAR integrated with VMware SRM

Concept
- Peer Persistence: dual-site active-active data centers
- SRM: dual-site active-standby data centers

Use case
- Peer Persistence: high availability and disaster avoidance
- SRM: disaster recovery

Disaster on primary site
- Peer Persistence: transparent, non-disruptive failover of active 3PAR volumes; if vSphere Metro Storage Cluster (vMSC) is deployed, VMs can fail over automatically
- SRM: manually triggered storage failover and restart of selected VMs in the disaster recovery site

Additional use
- Peer Persistence: allows balancing load over the two data centers; active LUNs can be swapped transparently
- SRM: provides extensive failover test capabilities on the remote site on copies of production data

vMotion/Storage vMotion
- Peer Persistence: yes; one cluster over two data centers
- SRM: no; 1 cluster in each data center

Granularity
- Both: HP 3PAR Remote Copy group

Arbitration
- Peer Persistence: automated by the Quorum Witness on a third site
- SRM: human

Requirements
- Peer Persistence: 3PAR Remote Copy and Peer Persistence licenses; Fibre Channel SAN across both sites; synchronous RCFC or RCIP* and max 2.6 ms RTT
- SRM: 3PAR Remote Copy, Virtual Copy, and VMware SRM licenses; FC or IP replication connectivity between sites; synchronous RCFC or IP and max 2.6 ms RTT

* RCFC strongly recommended; VMware vMSC certification is based on RCFC

3PAR Recovery Manager for VMware vSphere

Array-based snapshots for rapid online recovery

Solution composed of:
- 3PAR Recovery Manager for vSphere
- 3PAR Virtual Copy
- VMware vCenter

Use cases
- Expedite provisioning of new virtual machines from VM copies
- Rapid online recovery of files
- Snapshot copies for testing and development

Benefits
- Hundreds of VM snapshots
- Granular, rapid online recovery
- Reservation-less, non-duplicative, without agents
- vCenter integration for superior ease of use

Find product documentation at http://h18006.www1.hp.com/storage/software/3par/rms-vsphere/index.html

See the demo video at 3PAR Management plug-in and Recovery Manager for VMware

Recovery managers for Microsoft Exchange Server and Microsoft SQL Server

RM MS Exchange Server and RM MS SQL Server
- Automatic discovery of Exchange and SQL Server servers and their associated databases
- VSS integration for application-consistent snapshots
- Support for Exchange Server 2003, 2007, and 2010
- Support for SQL Server 2005, 2008, and 2012
- Support for SQL Server running in a vSphere Windows VM
- Database verification using Microsoft tools

Built on 3PAR Thin Virtual Copy technology
- Fast point-in-time snapshot backups of Exchange and SQL Server databases
- Hundreds of copy-on-write snapshots with just-in-time, granular snapshot space allocation
- Automatic recovery from snapshot
- 3PAR Remote Copy integration
- Exporting of database backups to other hosts

Backup integration
- HP Data Protector
- Symantec NetBackup and Backup Exec
- Microsoft System Center Data Protection Manager

See the demo video at: 3PAR Recovery Manager for SQL

Find product documentation at:
http://h18006.www1.hp.com/storage/software/3par/rms-exchange/index.html
http://h18006.www1.hp.com/storage/software/3par/rms-sql/index.html

Recovery Manager for Microsoft Hyper-V
- Built on 3PAR Thin Virtual Copy technology
- Supports hundreds of snapshots with just-in-time, granular snapshot space allocation
- Creates crash- and application-consistent virtual copies of the Hyper-V environment
- VM restore from snapshot to original location
- Mount/unmount of a virtual copy of any VM
- Time-based VC policy per VM
- Web GUI scheduler to create/analyze VCs
- PowerShell cmdlets (CLI and scripting)

Supported with:
- Windows Server 2008 R2 and 2012
- Stand-alone Hyper-V servers and Hyper-V Failover Cluster (CSV)
- F-Class, StoreServ 7000 and 10000

RME and RMS architecture

- Exchange or SQL Server production server, plus an RM client and backup server
- Off-host backup
- Direct restore from tape
- Direct mount of snapshot
- Restore from snapshot with file copy restore

(Diagram: 3PAR production volumes with snapshots taken at 9:00, 13:00, and 17:00, and an optional tape or D2D library)

RME & RMS & RMH VSS integration

1. Backup server requests the RM agent to create a 3PAR VC
2. RM agent requests MS Volume Shadow Copy Service (VSS) for database metadata details
3. RM agent calls MS VSS to create virtual copies for specific database volumes
4. VSS queries the 3PAR VSS provider whether a 3PAR VC can be created
5. VSS sets the database/VHD to quiesce mode
6. VSS calls the 3PAR VSS provider to create virtual copies of the volumes
7. 3PAR VSS provider sends commands to the 3PAR array to create virtual copies of the volumes
8. 3PAR VSS provider acknowledges to VSS that VC creation is completed
9. VSS sets the database/VHD back to normal operation
10. VSS acknowledges to the RM agent that creation of the virtual copies is completed

(The flow involves the production DB server running the RM agent against the Exchange/SQL DB or VHD, MS VSS, the 3PAR VSS provider, the backup server running Recovery Manager, and the 3PAR array)

RM Exchange and SQL Server in a CLX environment

Extended possibilities
- An RM backup server can be local (Site A) or at the remote secondary site (Site B)
- The RM backup server at the remote secondary site (Site B) can actively manage Virtual Copy
- That means all the operations, including recovery, can be performed at the remote site
- Single copy cluster / SQL extended cluster (using CLX): Exchange/SQL cluster nodes span Site A and Site B, the database is replicated by Remote Copy, and virtual copies exist at both sites, so recovery can be done at Site B using RM

Recovery Manager for Microsoft Exchange: Concurrent database validations

- Validations can take hours to complete for large databases (TB-sized)
- Queuing and sequentially validating many databases can take a long time (hours to days)
- This enhancement ensures that the validations occur in parallel, mitigating the issue

Earlier versions (sequential): DB1 (2 TB) 3 hrs, then DB2 (2 TB) 3 hrs, then DB3 (2 TB) 3 hrs; total: 9 hrs to complete

Since v4.4 (concurrent): DB1, DB2, and DB3 (2 TB each) validated in parallel, 3 hrs each; total: approx 3 hrs to complete
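The gain comes purely from running the validations side by side. A small illustrative sketch; validate_db is an invented stand-in for RM's call into the Microsoft verification tools, with hours scaled down to milliseconds so it runs instantly:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def validate_db(name: str, hours: float = 3.0) -> str:
    time.sleep(hours * 0.001)  # scaled down: 1 ms per simulated hour
    return f"{name} validated"

dbs = ["DB1", "DB2", "DB3"]

start = time.perf_counter()
for db in dbs:                      # earlier versions: one database at a time
    validate_db(db)
sequential = time.perf_counter() - start

start = time.perf_counter()
with ThreadPoolExecutor() as pool:  # since v4.4: validations run in parallel
    list(pool.map(validate_db, dbs))
concurrent = time.perf_counter() - start

# Roughly 3x faster for three equal databases, mirroring 9 hrs vs. 3 hrs
print(f"sequential {sequential:.3f}s vs concurrent {concurrent:.3f}s")
```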

Recovery Manager Diagnostic Tool

- A tool that validates all the RM configuration parameters and generates reports indicating non-compliance
- Runs on the backup server
- Automatically probes all the servers registered in the recovery manager, including the backup server itself
- Checks all parameters required for a successful RM operation, such as: database status, HWP configuration, StoreServ connectivity, VSS
- Generates a report indicating success, warning, and error
- Advises the user of corrective action
- Displays high-level dashboard status
- Currently supported: RM Exchange Server and RM SQL Server; RM Hyper-V pending

Recovery Manager for Oracle


Rapid, off-host backup recovery solution for Oracle databases
Highlights
Back up using HP 3PAR Virtual Copy
Eliminate backup performance impact of production
database by exporting and backing up snapshot
from a backup server
Substantially reduce the time a database is in
backup mode, hence reducing the media recovery
time
Rapid recovery from Virtual Copy itself
Integrated with HP 3PAR Remote Copy to provide
disaster recovery solution
Integrated with popular third-party backup software
Supports single datafile or tablespace restore

HP 3PAR Recovery Manager for Oracle

- Allows point-in-time copies of Oracle databases: non-disruptive, eliminating production downtime; uses 3PAR Virtual Copy technology; uninterrupted user access for production, decision support, and test
- Allows rapid recovery of Oracle databases and increases the efficiency of recoveries
- Allows cloning and exporting of new databases
- Integrated high availability with disaster recovery sites: integrated 3PAR replication / Remote Copy for array-to-array disaster recovery

Supported operating systems
- RHEL 4, 5, and 6
- OEL 5 and 6
- Solaris 10 SPARC
- HP-UX 11.31
- IBM AIX 6.1, 7.1

Supported Oracle versions: Oracle 10g and 11g

Supported backup applications
- HP Data Protector
- Oracle RMAN
- Symantec NetBackup

Benefits
- Fast automated restores
- Up-to-date DSS data
- Test with current data
- DB images presented to the backup server
- Full, non-disruptive Oracle backups

(Diagram: Oracle production volumes with snapshots at 9:00, 13:00, and 17:00, and an optional tape or D2D library)

See also: http://h18006.www1.hp.com/storage/software/3par/rms-oracle/index.html

What is Recovery Manager Central? (1 of 3)

Snapshot-based data protection platform with two elements:
- Recovery Manager Central for VMware: managed via vCenter plug-in; for VM backups only (application-consistent)
- Recovery Manager Central Express Protect: managed via web browser; for all other snap backups (crash-consistent)

Fosters integration of 3PAR and StoreOnce
- Near-instant recovery
- Longer-term data retention
- Catalyst integration as backup target

Flat Backup: data streams from 3PAR to StoreOnce*

* v1.0 data path goes through the RMC VM until 1.1 or 2.0, depending on when RMC is embedded in StoreOnce

What is Recovery Manager Central? (2 of 3)

Components:
- 3PAR StoreServ system (any currently supported model*)
- StoreOnce (software v. 3.12.x to support Backup Protect**)
- StoreOnce Recovery Manager Central 1.0
- VMware 5.1 and 5.5

* 7000 series and 10000 series will have full functionality; F-Class and T-Class will be limited

What is Recovery Manager Central? (3 of 3)

It is not a replacement for an existing backup application
RMC v1.0:

3PAR onlycannot protect other storage platforms


No Oracle, SQL Server, Exchange Server, Hyper-V (unless on a VM)
No Hyper-V or KVM VM
No bare-metal recovery (unless on a VM)
No granular-recovery capability

It is intended to be a complementary piece along with backup app

Faster and cheaper alternative to backup app for non-granular protection

RMC value proposition

Converged availability and backup service for VMware
- Flat backup alternative to traditional backup apps
- Performance of Virtual Copy snaps
- Reliability and retention of StoreOnce
- Speed of backups and restores via SnapDiff

Control of VMware protection passes to VMware admins
- Managed from within vSphere

Extension of primary storage
- Snapshots are key to the entire data protection process

Common integration and API point for backup applications, reporting, and security

Learning check
1. Complete the following table by filling in the maximum round trip
time in milliseconds for each supported topology
Remote Copy type
Synchronous RC FC
Synchronous RCIP
Asynchronous Periodic RC FC
Asynchronous Periodic RCIP
Asynchronous Periodic RC FCIP

Max supported latency

Learning check answer


1. Complete the following table by filling in the maximum round trip
time in milliseconds for each supported topology
Remote Copy type: Max supported latency
- Synchronous RC FC: 2.6 ms RTT
- Synchronous RCIP: 2.6 ms RTT
- Asynchronous Periodic RC FC: 2.6 ms RTT
- Asynchronous Periodic RCIP: 150 ms RTT
- Asynchronous Periodic RC FCIP: 120 ms RTT

Learning check
2. What are the two elements of Recovery Manager Central and how are they
managed?
______________________________________________________________________________________
______________________________________________________________________________________
______________________________________________________________________________________
______________________________________________________________________________________
____

Learning check answer


2. What are the two elements of Recovery Manager Central and how are they
managed?
Recovery Manager Central for VMware
Managed via vCenter plug-in, used for VM backups only and is
application-consistent
Recovery Manager Central Express Protect
Managed via web browser, used for all other snap backups, and
is crash-consistent

Federation and data mobility


Federated storage
What's the benefit?

Storage federation

To federate means to cause to join


into a union or similar association;
thus federated means to be united
under a central government
Dictionary

The transparent, dynamic and nondisruptive distribution of storage


resources across self-governing,
discrete, peer storage systems
Marc Farley, StorageRap, April 2010

Provides peer-to-peer versus hierarchical functionality as with compute


federation
Distributed volume management across self-governing, homogeneous peer systems allows resource management at the data center or metro level rather than at the device-by-device level
Provides secure, non-disruptive data mobility at the array level, not the
host level
Eliminates the risk of over-provisioning a single array

SAN virtualization
There are more complex and less complex solutions

- Traditional SAN virtualization appliances (EMC VPLEX, IBM SVC, FalconStor NSS, DataCore SANsymphony) introduce more layers in the I/O stack, and thus more dependencies and more to manage
- 3PAR Peer Persistence provides transparent storage presentation without the burden of an additional virtualization layer

Traditional SAN virtualization stack (across DC 1 and DC 2): server, FC SAN, SAN virtualization appliance (server SAN), storage virtualization, storage SAN, storage (five layers)

HP 3PAR federation with Peer Persistence (across DC 1 and DC 2): server, SAN, federated 3PAR storage (three layers)

Federated storage vs. SAN virtualization

Flexibility
- SAN virtualization: Yes; supports changing workloads
- Federated storage: Yes; supports changing workloads

Scalability
- SAN virtualization: Some; limited by most designs
- Federated storage: Yes; high levels of scale possible

Efficiency
- SAN virtualization: Some; improves utilization but adds to cost
- Federated storage: Yes; improves utilization with limited incremental cost

Simplicity
- SAN virtualization: No; complex, and storage capacity additions may be disruptive
- Federated storage: Yes; uses capabilities in the underlying storage without complexity

Reliability
- SAN virtualization: Some; failover, but adds network and management failure points
- Federated storage: Yes; failover without additional management or layers

Source: Evaluator Group, "Storage Federation IT Without Limits", Russ Fellows

HP 3PAR features

Online Import, HP 3PAR and VMware VVOLs

HP 3PAR Priority Optimization (3.1.2 MU2)

Max limits: caps on App A, App B, and App C bound their share of array performance, leaving headroom for all other apps

HP 3PAR Priority Optimization (3.1.3)

Adds min goals and high/normal/low priority levels alongside the max limits for App A, App B, App C, and all other apps

HP 3PAR Priority Optimization: Latency goal

- Performance caps are dynamically adjusted based on the System Busy level
- The System Busy level is adjusted based on real-time latency and the latency goal
- IOPS cap = function of the System Busy level; for example, apps with max limits of 10k, 8k, and 6k IOPS and min goals of 5k, 4k, and 3k are throttled from their max limits toward their min goals as the busy level rises from 0% through 10%, 25%, 50%, and 75% to 100% (high, medium, and low priority apps scale accordingly)
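As an illustration of the idea only (the actual 3PAR control loop is internal to the array), a cap that scales linearly between the configured max limit and min goal as the busy level rises could look like this:

```python
def iops_cap(max_limit: float, min_goal: float, busy_pct: float) -> float:
    """Scale the IOPS cap from max_limit (array idle) down to
    min_goal (array 100% busy), clamping the busy level to 0-100%."""
    busy = min(max(busy_pct, 0.0), 100.0) / 100.0
    return max_limit - (max_limit - min_goal) * busy

# An app with a 10k max limit and a 5k min goal, as in the slide's example
for busy in (0, 25, 50, 75, 100):
    print(f"busy {busy:3d}% -> cap {iops_cap(10_000, 5_000, busy):,.0f} IOPS")
```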

HP 3PAR Dynamic and Adaptive Optimization

Manual or automatic tiering
- 3PAR Dynamic Optimization: LUN movement between tiers (CPG 1, CPG 2, CPG 3)
- 3PAR Adaptive Optimization: sub-LUN block (region) movements between tiers, based on policies (Tier 0 CPG A, Tier 1 CPG B, Tier 2 CPG C)

Storage tiers: HP 3PAR Dynamic Optimization

Performance versus cost per usable TB across drive types and RAID levels:
- SSD: RAID 1, RAID 5
- Fast Class: RAID 1, RAID 5, RAID 6
- Near Line: RAID 1, RAID 6

In a single command, non-disruptively optimize and adapt cost, performance, efficiency, and resiliency

HP 3PAR Dynamic Optimization: Use cases

Deliver the required service levels for the lowest possible cost throughout the data lifecycle
- Example: 10 TB net on RAID 10 with 300 GB FC drives, versus 10 TB net on RAID 50 (3+1) with 600 GB FC drives (~50% savings), versus 10 TB net on RAID 50 (7+1) with 2 TB SATA-Class drives (~80% savings)

Accommodate rapid or unexpected application growth on demand by freeing raw capacity
- Example: converting 10 TB net on 20 TB raw RAID 10 into 10 TB net on 20 TB raw RAID 50 frees 7.5 TB of net capacity on demand

Tuning example with Dynamic Optimization

Tune a virtual volume from a four-drive NL RAID 5 CPG to a 16-drive FC RAID 1 CPG

(Screenshots: Iometer before the tune, tune started, tune finished, Iometer after the tune)

Online virtual volume conversion

Part of Dynamic Optimization; non-disruptively migrate VVs:
- From fat to thin provisioned (TPVV) and vice versa
- From fat to thin dedupe (TDVV) and vice versa
- From thin provisioned to thin dedupe and vice versa

Source volume can be:


Discarded
Kept
Kept and renamed

Addressing I/O density with 3PAR architecture

These SSD I/O densities can be achieved with 3PAR arrays, based on practical field information:

% of total SSD net capacity | % of total I/O
2.5 | 33
    | 50
10  | 66
20  | 80
35  | 90
    | 99

HP 3PAR Adaptive Optimization (1 of 4)

Improve storage utilization

Traditional deployment
- Single pool of the same disk drive type, speed, capacity, and RAID level
- Number and type of disks are dictated by the max IOPS + capacity requirements
- Result: a single pool of high-speed media sized for the required IOPS, with wasted space beyond the required capacity

Deployment with HP 3PAR AO
- An AO virtual volume draws space from two or three different tiers
- Each tier can be built on different CPGs, disk types, RAID levels, and numbers of disks
- The I/O distribution is served by high-, medium-, and low-speed media pools matched to the required capacity and IOPS

HP 3PAR Adaptive Optimization (2 of 4)

Efficient to own and manage
- Defined in policies by tiers and schedules
- Optimizes performance and cost by moving regions between tiers
- Up to 128 individual policies per 3PAR array
- Each policy can be scheduled individually
- A policy can run automatically or be manually triggered
- Part of 3PAR OS, with an in-node SQLite database
- No installation required
- Enabled by a license key

Read more in the 3PAR StoreServ Adaptive Optimization white paper

HP 3PAR Adaptive Optimization (3 of 4)

Configuring AO tiers
- An AO mode is cost-based, balanced, or performance-based:
  Cost: more data is kept in lower tiers
  Performance: more data is kept in higher tiers
  Balanced (default): balanced between performance and cost
- Two or three tiers per policy can be defined
- Each tier is defined as a CPG
- A CPG defines drive type, RAID level, redundancy level, and step size

HP 3PAR Adaptive Optimization (4 of 4)

Scheduling AO
- Tier movement is based on analyzing these parameters: average tier service times, average tier access rate densities, and space available in the tiers
- Tier movement can be started either manually or based on a schedule
- The measurement interval can be defined between one hour and seven days
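The underlying idea can be sketched as a greedy placement by access-rate density. This toy example is not the 3PAR algorithm (which also weighs service times and mode bias); all names are invented for illustration:

```python
def place_regions(regions, tiers):
    """regions: [(region_id, iops)]; tiers: [(tier_name, capacity_in_regions)]
    ordered fastest (e.g. SSD) to slowest (e.g. NL).
    Returns {region_id: tier_name}: busiest regions fill the fastest tier
    first, within each tier's available space."""
    placement = {}
    ranked = sorted(regions, key=lambda r: r[1], reverse=True)
    idx = 0
    for name, capacity in tiers:
        for region_id, _ in ranked[idx:idx + capacity]:
            placement[region_id] = name
        idx += capacity
    return placement

regions = [("r1", 900), ("r2", 20), ("r3", 450), ("r4", 5)]
tiers = [("SSD", 1), ("FC", 2), ("NL", 10)]
print(place_regions(regions, tiers))  # r1 -> SSD; r3, r2 -> FC; r4 -> NL
```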

HP 3PAR StoreServ Online Import

Mid-range evolution with the lowest-risk upgrade

HP EVA to HP 3PAR StoreServ (Online Import)
- Trusted: 100,000 arrays installed WW
- Recognized for simplicity
- Leading hardware efficiency

HP 3PAR StoreServ
- A uniquely agile Tier 1 storage platform
- Tier 1 architecture and features
- Clustered, scalable controller architecture
- Industry-leading efficiency technologies

EMC arrays to HP 3PAR (Online Import)
- CX4 and VNX, using the Peer Motion utility

HP 3PAR Online Import Utility for EMC Storage

What's new?
- HP 3PAR Online Import Utility for EMC Storage

How does it work?
- Deployment: the utility can run on a Windows client, on a physical Windows server, or within a VM
- Migrates data from EMC VNX, CX4, or VMAX to HP 3PAR StoreServ

3PAR Online Import for EMC Storage


Supported environments at General Availability

What is required for 3PAR Online Import to work

For data migration from EMC Storage to 3PAR StoreServ

HP 3PAR Online Import for EMC Storage


Limitations

* If PowerPath is installed, the volumes will have to be re-presented to DM after PowerPath has been removed. This requires a short application outage.

3PAR Online Import for EMC Storage


Five simple stages of the migration process

Learning check
1. Fill in the fields in both columns as they apply to SAN virtualization
compared to Federated storage

Learning check answer


1. Fill in the fields in both columns as they apply to SAN virtualization
compared to Federated storage

Learning check
2. List the four key features of HP 3PAR data management
____________________________________________________________________
____________________________________________________________________
____________________________________________________________________
________________________________________________________________

Learning check answer


2. List the four key features of HP 3PAR data management
- Priority Optimization
- Dynamic Optimization
- Adaptive Optimization
- Online Import

Management and support


HP 3PAR StoreServ management vision

Polymorphic simplicity (adj.: existence in several forms, shapes, and sizes)
- Consolidated management tools
- Modern look and feel
- Web-based
- Consistent experience for file and block
- One management platform, from entry level to high end
- Converged storage management, system reporting, and service and support

3PAR StoreServ Management Console

Replacement for the existing management console


Converged management for entire HP 3PAR product line
Intuitive web-based UI with dashboard overviews
Modern and consistent look and feel
Redesigning hundreds of IMC screens
Better usability
Replacement for existing System Reporter
Standards based and integration with HP OneView

Dashboard

Health, performance, and capacity


at a glance

Mega Menu

An express-driven interface to all points


within the SSMC and the objects monitored

From the Mega Menu, a user is linked directly to


any of the listed context areas

Converged File and Block management and


reporting

Search

Global search allows user to find objects in seconds


Search can be global to all systems and objects or confined to a
given object
Intelligent search remembers previous queries

Example: a user has a server attached to the array identified only as "WIN" and can locate it with a search
Example: a user notified that the array is experiencing some delays on a host identified by the name "ATC" can jump straight to that host

MAP views (1 of 2): Physical map views, such as the System Map view

MAP views (2 of 2): Logical map views, such as the VV set Map view

System Reporter in SSMC (1 of 2)
- Modern look and feel
- Focused on ease of use
- Based on SR-on-Node data (no external DB setup)
- Zoom in online from daily data to high resolution

System Reporter in SSMC (2 of 2)
- In-depth capacity information
- Point-and-click "At Time" detailed reports

HP 3PAR Management Console

One console does it all: easy and straightforward
- Main menu bar
- Main tool bar
- Common actions panel
- Management window
- Manager pane
- Alerts, tasks, connections
- Status bar

HP 3PAR: Management options

3PAR Management Client (GUI)
- Fat client GUI for Windows and Red Hat Linux
- Storage management GUI

Command line interface
- 3PAR CLI or SSH
- Very rich, complete command set

SMI-S
- Management from third-party management tools

Web API
- RESTful interface

External Key Manager (ESKM)
- HP Enterprise Secure Key Manager or SafeNet KeySecure

Service Processor (SP)
- Physical or virtual machine (vSphere or Hyper-V VM); physical on the 10000, virtual (optionally physical) on the 7000
- Health checks by collecting configuration and performance data
- Reporting to HP 3PAR Central
- Anomalies reported back to the customer via OSSA

(All access paths: GUI, CLI/SSH, SMI-S, Web API, and ESKM run over the management LAN to the 3PAR node management Ethernet connections; the SP has its own Ethernet connection)

3PAR direct manageability

Simple, comprehensive, consolidated administration
- 3PAR OS CLI: powerful, fine-grained control; scriptable, with highly consistent syntax
- HP 3PAR Management Console
- LDAP support, IPv6
- Fine-grained privilege assignment with multiple assignable roles:

Super: access to all operations
Edit: access to most operations
Basic Edit: create and unmount, cannot delete
Create: create volumes but cannot delete
Adaptive Optimization: AO-only operations (pre 3.1.2)
Recovery Manager: only for RM operations
My Snapshot: create/refresh snaps (for test/dev)
Service: limited operations for servicing
Browse: allows read-only access

HP 3PAR Web Services API

Provides a well-defined API for performing storage management tasks. Functionality was introduced with 3PAR OS 3.1.2 and extended in 3.1.2 MU2, 3.1.3, and 3.2.1 MU1; it includes:

- Query of virtual volumes, virtual copies, CPGs, and VLUNs and their properties, and creation and removal of these objects
- Modification of virtual volumes, virtual copies, and CPG parameters
- Creation, removal, and modification of hosts
- Query of a single item (as opposed to querying an entire collection), of available space, and of general system information
- Support for Remote Copy (new with 3.2.1 MU1)
- New query facilities, such as listing user privileges
- Creation of thin deduplicated volumes (TDVV) and conversion of TPVV to TDVV (new with 3.2.1 MU1)
- New fields in the Spacereporter, Volume, and Capacity objects to show compaction and deduplication capacity efficiency numbers (new with 3.2.1 MU1)
- Functions equivalent to the following CLI commands: createvvset, setvvset, removevvset; createhostset, sethostset, removehostset; createvvcopy for single VV and VV set physical copies; createsv for a VV set; createvlun for VV sets and host sets; spacereporter; setqos (supporting the new 3.1.3 QoS features); showportdev fcswitch; showportdev all; showtask

The developer's guide and sample client can be downloaded from HP Software Depot at: http://software.hp.com
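As an illustration, a client session against the Web Services API can be built with nothing but the Python standard library. The /api/v1/credentials and /api/v1/volumes endpoints and the X-HP3PAR-WSAPI-SessionKey header follow the WSAPI developer's guide; the array address and credentials below are placeholders:

```python
import json
import ssl
import urllib.request

BASE = "https://array.example.com:8080/api/v1"  # placeholder array address

# Arrays commonly present self-signed certificates; for a quick sketch we
# skip verification (do not do this in production).
CTX = ssl.create_default_context()
CTX.check_hostname = False
CTX.verify_mode = ssl.CERT_NONE

def wsapi(path, method="GET", body=None, session_key=None):
    req = urllib.request.Request(BASE + path, method=method)
    req.add_header("Content-Type", "application/json")
    if session_key:
        req.add_header("X-HP3PAR-WSAPI-SessionKey", session_key)
    data = json.dumps(body).encode() if body is not None else None
    with urllib.request.urlopen(req, data=data, context=CTX) as resp:
        return json.load(resp)

# POST /credentials returns a session key; later calls pass it as a header
key = wsapi("/credentials", "POST",
            {"user": "3paradm", "password": "secret"})["key"]
for vv in wsapi("/volumes", session_key=key)["members"]:
    print(vv["name"], vv.get("sizeMiB"))
```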

HP StoreFront Mobile Access for 3PAR StoreServ

Converged storage management

Access: 24x7 access from virtually any location

Remote access to 3PAR StoreServ using Android and now


also iOS-based devices

Insight: Monitor storage system statistics and


properties

Capacity utilization, CPGs, virtual volumes, device types,


and more

Automation: Receive critical alerts in real time to


reduce risk

Instant notification of error conditions or issues that need


immediate action

Security: Encrypted login for secure remote access

Browse-only enforcement access integrated with 3PAR


role-based security

See more: hp.com/go/storefrontmobile

HP 3PAR Remote Support (1 of 2)

Allows remote service connections for assistance and troubleshooting to deliver faster,
more reliable response and quicker resolution time
Transmits only diagnostic data
Uses secure service communication over Secure Sockets Layer (HTTPS) protocol between
HP 3PAR Storage Systems and HP 3PAR Central
The optional Secure Service Policy Manager allows the customer to individually enable or
disable remote support capabilities and logs all remote support activities
If customer security rules do not allow a secure Internet connection, a support file can be
generated on the Service Processor and sent to HP by mail or FTP on a regular basis
For more details read the HP 3PAR Secure Service Architecture white paper in the HP Enterprise Library at:

HP 3PAR Remote Support (2 of 2)

(Diagram: on the customer side, the 3PAR Management Console, 3PAR arrays, DNS, a proxy, the optional Secure Service Policy Manager, and an optional mail server connect over the Internet with HTTPS port 443 enabled; on the HP side, the HP 3PAR Secure Service collector server, HP mail server, HP 3PAR OSSA, and HP Global Services and Support representatives make up HP 3PAR Central)

HP 3PAR Central: What's in it for you?

Proactive, remote error detection
- Proactive system scans and health checks
- Over-Subscribed System Alerts

World-class support
- HP 3PAR Central support hub staffed with experts
- 24x7x365 monitoring
- Automated parts dispatch

Security and control
- Secure, encrypted communication
- Exclusive control of remote access policy configuration
- Viewable audit log
- Simple SW upgrades

Find more details in the 3PAR Central data sheet

HP 3PAR Central: Get connected and back to business

Protect your business with proactive fault detection and error resolution; stay informed and in control
- Faster support and protected storage QoS
- 35%* higher availability with less downtime
- 64%* faster time to resolution when onsite support is required (days down to minutes)
- Complete control of connectivity and software upgrades on your schedule

* Source: HP measurement of the installed base of HP's StoreServ remote support customers as of Q2 2014

HP 3PAR Over-Subscribed System Alert

Part of the HP 3PAR Remote Support tool
OSSA performs periodic proactive utilization checks on key system elements
Customers with active service contracts receive email
messages when systems seem to be oversubscribed in
one of these areas:

Active VLUNs
Balanced drives
CPU utilization
Disk IOPS
Disk port bandwidth per node pair
Initiator distribution per node
Initiators per port
Initiators per system
PCI bus bandwidth
Port bandwidth
Raw capacity

Data is collected periodically from the HP 3PAR arrays
Email-alert example

HP 3PAR Virtual Service Processor

Secure Remote Support
- Virtual Service Processor: a cost-efficient, secure gateway for remote connectivity for the StoreServ 7000 arrays
- Effortless, one-click configuration
Supported on:

VMware vSphere (4.x, 5.x)


Microsoft Hyper-V (Windows Server
2008, R2, or 2012)

Enables:

Remote, online SW upgrade


Proactive fault detection with remote
call home diagnostics
Remote serviceability
Alert notifications

Optional HW Service Processor

HP 3PAR Policy Manager


Provides:
Centralized audit log to facilitate security audits
Centralized policy administration for all HP 3PAR Storage systems
Complete control over policy administration including
- File Upload: file uploads of diagnostic data to HP 3PAR Central are allowed or disallowed
- File Download: file downloads from HP 3PAR Central are allowed or disallowed
- Remote Session: remote sessions for remote serviceability can or cannot be established with HP 3PAR Central
- Always Allow: all remote connection requests are allowed
- Always Deny: all remote connection requests are denied
- Ask: approval is needed via email, within a configured timeout window, from the configured customer administrator

Policy Manager is software installed on a separate, customer-provided Windows server

Learning check
1. Which HP 3PAR applications are found in the Management
Console?
____________________________________________________________________
____________________________________________________________________
_________________________________________________________________

Learning check answer


1. Which HP 3PAR applications are found in the Management
Console?
3PAR Thin Provisioning, Virtual Copy, Dynamic
Optimization, Virtual Domains, and Remote Copy

Virtualization integration
VMware
Hyper-V

3PAR VMware integration


HP 3PAR Utility Storage is the perfect fit for virtualized environments
Efficient integration of HP 3PAR Thin Technologies
Simplified storage administration with:

vCenter Server integration


vCenter Operations Manager integration
VAAI and VASA support
vCenter Site Recovery Manager integration

High availability and disaster tolerance thanks to vSphere Metro Storage


Cluster certification
Allows greater virtual machine density thanks to:

Inherent wide striping


Mixed workload support

Easy recovery and replication using HP 3PAR Recovery Manager Software for
VMware vSphere
See also: http://h20195.www2.hp.com/V2/GetPDF.aspx/4AA3-4023ENW.pdf

HP OneView for VMware vCenter

An integrated application with three modules (formerly HP Insight Control), accessed from the VMware vCenter server via the legacy vSphere client or the vSphere web client:

- Core module: provides the framework required by the Server Module and the Storage Module
- Server module: provides server hardware management capabilities, including comprehensive monitoring, firmware update, ESX/ESXi image deployment, remote control, end-to-end monitoring for Virtual Connect, and power optimization for HP servers in the VMware environment
- StoreFront module (former Storage Module): provides storage provisioning, configuration, and status information for mapping VMs, datastores, and hosts to LUNs on HP storage arrays in the VMware environment

HP OneView StoreFront Module for vCenter

Single pane of glass from VMware vCenter to your HP Converged Infrastructure (servers, management software, network, and storage)

The StoreFront module enables you to:
- Map the VMware virtual environment to HP storage and provide detailed contextual storage information
- Create/expand/delete VMware datastores
- Create virtual machines from a template
- Clone virtual machines from an existing virtual machine
- Delete an unassigned volume
- Work from the vSphere client and the new vSphere Web Client, with which it is integrated
- Visualize complex relationships between VMs and storage
- Easily manage Peer Persistence for HP 3PAR StoreServ

Supports HP MSA, EVA/P6000, StoreVirtual, XP/P9500, 3PAR

HP OneView for VMware vCenter Server 7.4

New functionality added to the StoreFront Module (former Storage Module):
- Full interoperability with vSphere 5.5
- Recovery Manager 2.5 for HP 3PAR StoreServ integration in the plug-in
- HP 3PAR StoreServ VASA integration in the plug-in
- Switch Peer Persistence support in storage provisioning actions for HP 3PAR StoreServ systems
- New storage provisioning wizards in vSphere Web Client 5.5
- New storage provisioning wizards now support Peer Persistence for HP 3PAR StoreServ systems (the auto_failover and path_management parameters are set)
- Peer Persistence configuration diagram
- Graphical view of VMs-to-volumes information

HP OneView for VMware vCenter


Example: Peer Persistence manual transparent storage failover

HP StoreFront Analytics Pack for VMware vCOps

Effortless management thanks to the 3PAR vCenter Operations Manager integration

Provides detailed 3PAR and VM reporting for: I/O ports, physical drives, CPGs, volumes, capacities, performance, response times, I/O sizes, health, and more

Free features
- Health information for the array

Licensed features
- Capacity information
- Performance information (key performance metrics)

See the demo video at: http://www.youtube.com/watch?v=GF3ZQF_k5ME&feature=player_detailpage

HP 3PAR Recovery Manager for VMware vSphere

Array-based snapshots for rapid online recovery

Solution composed of:
- 3PAR Recovery Manager for VMware vSphere
- 3PAR Virtual Copy
- VMware vCenter

Use cases
- Expedite provisioning of new virtual machines from VM copies
- Rapid online recovery of files
- Snapshot copies for testing and development

Benefits
- Hundreds of VM snapshots; granular, rapid online recovery
- Reservation-less, non-duplicative, without agents
- vCenter integration: superior ease of use

Find product documentation at http://h18006.www1.hp.com/storage/software/3par/rms-vsphere/index.html
See the demo video at: 3PAR Management plug-in and Recovery Manager for VMware

HP 3PAR Management Plug-in for VMware vCenter

Peer Persistence status information in the vCenter Client
- The Peer Persistence topology is visible in the vCenter Client via the HP 3PAR Management Plug-in for VMware vCenter
- It is fully supported with Recovery Manager for VMware vSphere

Peer Persistence for VMware vSphere

Certified for vSphere Metro Storage Cluster
- Primary RC volume: active path presentation
- Secondary RC volume LUN: passive path presentation

What does it provide?
- High availability across data centers
- Automatic or manual transparent LUN swap
- Transparent VM vMotion between data centers

How does it work?
- Based on 3PAR Remote Copy and vSphere ALUA
- Primary RC volume presented with active paths
- Secondary RC volume presented with passive paths
- Automated LUN swap arbitrated by a Quorum Witness (QW; a Linux ESX VM on a third site)

Supported environments
- ESX vSphere 5.0, 5.1, 5.5 including HA, Failsafe, and uniform vSphere Metro Storage Cluster
- Up to RC sync supported max of 2.6 ms RTT (~260 km)

Requirements
- Two 3PAR disk arrays
- FC, iSCSI, or FCoE cross-site server SAN
- Two RC sync links (RCFC or RCIP*)
- HP 3PAR synchronous Remote Copy + Peer Persistence between Data Center 1 and Data Center 2, with the QW in DC 3

Also see the VMware KB "Implementing vMSC using 3PAR Peer Persistence" and the HP white paper "Implementing vMSC using HP 3PAR Peer Persistence"
* RCFC strongly recommended; VMware vMSC certification is based on RCFC

Peer Persistence for VMware

VMware vSphere 5.x Metro Storage Cluster (single subnet)

Never lose access to your volumes
- Each host is connected to each array on both sites via redundant fabrics (FC, iSCSI, or FCoE), Fabric A and Fabric B, with up to 2.6 ms RTT latency between sites
- A synchronous copy of each volume is kept on the partner array/site (RCFC or RCIP)
- Each volume is exported in R/W mode with the same WWN from both arrays on both sites
- Volume paths for a given volume are active only on the array where the primary copy of the volume resides
- Other volume paths are marked standby
- Both arrays can host active and passive volumes; for example, Vol A primary on the Site A array with its secondary on Site B, and Vol B primary on Site B with its secondary on Site A
- The Quorum Witness runs on a vSphere host at Site C

Peer Persistence for VMware: ALUA path view

VMware vSphere 5.x Metro Storage Cluster (single subnet)

(Screenshots: the vCenter path management view showing active and standby paths for a volume whose primary copy is on the Site A array and whose secondary copy is on the Site B array, and the corresponding 3PAR Management Console Remote Copy view; the Quorum Witness runs at Site C)

Peer Persistence: 3PAR Storage and vSphere easy setup

Steps to configure the QW VM
- Install the canned HP QW Red Hat VM, thinly provisioned, on a vSphere server located at a (preferably third) site
- Set up the QW network
- Define the QW hostname
- Set the QW password
- From the 3PAR Management Console or the CLI, configure the communication between the 3PAR StoreServs and the QW

Now you can set up Peer Persistence
- Zone the ESX hosts to the 3PAR arrays (SAN)
- Create hosts with persona 11, VVs, and LUNs on the primary 3PAR
- Create datastores in vSphere
- Create hosts with persona 11 and VVs on the secondary 3PAR
- Set up and sync Remote Copy
- Add the Remote Copy groups to the automatic-failover-enabled Remote Copy groups in Peer Persistence

3PAR Recovery Manager for VMware vSphere

How is Peer Persistence (transparent failover) integrated?
1. RMV collects all HP 3PAR storage devices
information from vCenter Server
2. If a storage device contains multiple paths
from a different StoreServ array, it
participates in a Peer Persistence setup
3. The active path determines which
StoreServ is the primary site
4. User can create crash level/application
consistent virtual copy on both local and
remote sites
5. Virtual Copy can be recovered from any
site

VMware vSphere Metro Storage Cluster

VMware VAAI primitives overview
vStorage API for Array Integration

vSphere 4.1 primitives (3PAR support introduced with 3PAR OS 2.3.1 MU2+):
- ATS: Atomic Test and Set; stop locking the entire LUN and lock only blocks
- XCOPY: Also known as Fast or Full Copy; leverages the array's ability to mass copy and move blocks within the array
- WRITE SAME: Eliminates redundant and repetitive write commands
- TP Stun: Reports the array TP state to ESX so a VM can gracefully pause if out of space

vSphere 5.x primitives (3PAR support introduced with 3PAR OS 3.1.1):
- UNMAP*: Used for space reclamation rather than WRITE_SAME; reclaims space after a VMDK is deleted within the VMFS environment using the vmkfstools -y command
- TP LUN Reporting: TP LUN identified via the TP enabled (TPE) bit from the READ CAPACITY (16) response, as described in section 5.16.2 of SBC-3 r27
- Out of Space Condition: Uses CHECK CONDITION status with either NOT READY or DATA PROTECT sense condition
- Quota Exceeded Behavior: Done through THIN PROVISIONING SOFT THRESHOLD REACHED (described in 4.6.3.6 of SBC-3 r22)

* The initial vSphere 5.0 implementation automatically reclaimed space. However, VMware detected a flaw that can cause major performance issues with certain third-party arrays. VMware therefore disabled automatic T10 UNMAP; see the VMware KB article. vSphere 5.5 introduced a new, simpler VAAI UNMAP/Reclaim command: # esxcli storage vmfs unmap
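For reference, the two reclamation mechanisms mentioned above are invoked from the ESXi shell roughly as follows. This is a minimal sketch, assuming a datastore named DS01; the reclaim percentage and block count are chosen only for illustration:

    # vSphere 5.0/5.1: reclaim free space from inside the datastore directory
    cd /vmfs/volumes/DS01
    vmkfstools -y 60        # reclaim up to 60% of the free space

    # vSphere 5.5+: per-datastore UNMAP, issued in 200-block units here
    esxcli storage vmfs unmap -l DS01 -n 200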

vStorage API for Array Integration
Hardware-assisted Full Copy

Optimized data movement within the SAN for:
- Storage vMotion
- Deploying a template
- VM cloning

- Fully leverages 3PAR Thin Technologies
- Significantly lower CPU and network overhead
- Much quicker migration
- No host I/O; the copy is done by the array

HP 3PAR VMware VAAI support example
VMware Storage vMotion with VAAI enabled and disabled

[Charts: 3PAR back-end disk I/O and front-end I/O during a Storage vMotion, with DataMover.HardwareAcceleratedMove=1 (VAAI on; front-end host I/O disappears) versus DataMover.HardwareAcceleratedMove=0 (VAAI off; data flows through the host)]
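If you need to verify or toggle this primitive on an ESXi host, the advanced setting shown in the chart can be read and set from the ESXi shell. A minimal sketch (1 enables the hardware offload, 0 disables it):

    # show the current XCOPY (Full Copy) offload setting
    esxcli system settings advanced list -o /DataMover/HardwareAcceleratedMove

    # enable the offload (the default on ESXi 5.x)
    esxcli system settings advanced set -o /DataMover/HardwareAcceleratedMove -i 1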

vStorage API for Array Integration
Hardware-assisted locking

Increase I/O performance and scalability by offloading the block-locking mechanism, used when:
- Moving a VM with vMotion
- Creating a new VM or deploying a VM from a template
- Powering a VM on or off
- Creating a template
- Creating or deleting a file, including snapshots

[Diagram: without VAAI, a SCSI reservation locks the entire LUN for all ESX hosts; with VAAI, the SCSI reservation locks at the block level only]
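The host-side switch for this primitive is the VMFS3 hardware-accelerated locking advanced option; a minimal verification sketch from the ESXi shell:

    # 1 = ATS (hardware-assisted locking) enabled, 0 = SCSI reservations
    esxcli system settings advanced list -o /VMFS3/HardwareAcceleratedLocking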

vStorage API for Array Integration
Hardware-assisted block zero

- Offloads large, block-level write operations of zeroes to the storage hardware
- Reduces the ESX server workload

[Diagram: without VAAI, the ESX host writes every zero block to the array itself; with VAAI, the host issues a single WRITE SAME command and the array writes the zeroes]
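Block zero is controlled by the DataMover.HardwareAcceleratedInit option, and its effect is easiest to see when creating an eagerly zeroed disk. A minimal sketch from the ESXi shell (the datastore and disk path are illustrative):

    # confirm the WRITE SAME (block zero) offload is enabled
    esxcli system settings advanced list -o /DataMover/HardwareAcceleratedInit

    # create a 100 GB EagerZeroedThick disk; with VAAI and 3PAR zero detect
    # enabled, the zero writes are offloaded and creation takes seconds
    vmkfstools -c 100G -d eagerzeroedthick /vmfs/volumes/DS01/testvm/testvm.vmdk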

VMware Write Same - Block Zero in action
VM creation: 100 GB EagerZeroedThick formatted on a 3PAR RAID 1 ThP volume

VAAI off, 3PAR zero detect off:  back-end and front-end traffic; creation time 5:29 min
VAAI off, 3PAR zero detect on:   front-end traffic only; creation time 4:14 min
VAAI on, 3PAR zero detect on:    minimal front-end traffic only; creation time 14 sec

[Charts: 3PAR disk-port (back-end) and host-port (front-end) throughput for each of the three runs]

VMware space reclamation
With vSphere 5 and 3PAR OS 3.1.x

HP 3PAR with Thin Persistence:
- Transparent: Thin Persistence allows manual reclaiming of VMware space with T10 UNMAP support in vSphere 5.0 and 3PAR OS 3.1.x using the vmkfstools -y command*
- Rapid: inline ASIC Zero Detect
- Granular: reclamation granularity is as low as 16 KB, compared to 768 KB with EMC VMAX or 42 MB with HDS VSP

How the reclaimed space flows back:
- Freed blocks of 16 KB of contiguous space are returned to the source volume
- Freed blocks of 128 MB of contiguous space are returned to the CPG for use by other volumes
- 20 GB VMDKs finally consume only ~20 GB rather than 100 GB

[Diagram: a datastore whose zeroed 25 GB regions shrink after T10 UNMAP via vmkfstools -y (16 KB granularity), and two 100 GB ThP volumes whose allocated space drops from 55 GB to 20 GB over time thanks to 3PAR scalable Thin Provisioning]

* The initial vSphere 5.0 implementation automatically reclaimed space. However, VMware detected a flaw which can cause major performance issues with certain third-party arrays. VMware therefore disabled automatic T10 UNMAP; see the VMware KB article.
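On the array side, the inline zero-detect behavior is a per-volume policy. A minimal, illustrative 3PAR CLI sketch (the volume name is an assumption):

    # enable the ASIC zero-detect policy on a thin volume
    setvv -pol zero_detect vv_esx_ds01

    # confirm the policy in the detailed volume view
    showvv -d vv_esx_ds01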

VMware vStorage VAAI
Are there any caveats to be aware of?

The VMFS data mover does not leverage hardware offloads and instead uses software data movement if:
- The source and destination VMFS volumes have different block sizes
- The source file type is RDM and the destination file type is non-RDM (regular file)
- The source VMDK type is EagerZeroedThick and the destination VMDK type is thin
- The source or destination VMDK is any sort of sparse or hosted format
- The source virtual machine has a snapshot
- The logical address and/or transfer length in the requested operation are not aligned to the minimum alignment required by the storage device (all datastores created with the vSphere client are aligned automatically)
- The VMFS has multiple LUNs/extents and they are all on different arrays

Hardware cloning between arrays (even if within the same VMFS volume) does not work

You can find the vStorage APIs for Array Integration FAQ at:
http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1021976

HP 3PAR StoreServ and VVols
Hypervisor-aware storage on HP 3PAR StoreServ

Distinguish between individual VMs:
- 3PAR StoreServ is the first array to provide space reclamation via the zero-detect engine in the 3PAR ASIC with VVols on a per-VM basis
- 3PAR Priority Optimization provides array-based QoS guarantees at the application and/or VM level for the most efficient and dynamic/real-time control of VM performance
- 3PAR Snapshot enables thousands of recovery points on a per-VM basis

Virtual volumes: granular, policy-based VM storage

What are VMware virtual volumes?

- VVols are part of the VASA 2.0 specification, defined by VMware as a new architecture for VM storage array abstraction
- VASA 2.0 introduces interfaces to query storage abstractions such as storage containers and the capabilities they support
- This information helps VMware Storage Policy-Based Management (SPBM) make decisions about virtual disk placement and compliance
- VASA 2.0 also introduces interfaces to provision and manage the lifecycle of VVols
- Multiple VVols are created for a single virtual machine

Datastore model

This diagram illustrates how a storage array is provisioned for VMs today:
- The storage array provides a single secure datastore for a number of VMs and their associated data sets
- The positive aspect is that a large number of VMs and their data sets can be represented on the fabric with a small number of storage entities, which is good for the scalability of the deployment
- The negative impact of multiplexing a large number of VMs into a monolithic datastore is that the granularity of service-provisioning management of VMs is limited

[Diagram: VM1, VM2, and VM3 managed by vCenter, all stored in a single volume on HP 3PAR StoreServ]

VVol model

The VASA 2.0 specification describes the use of virtual volumes to provide ease of access and ease of manageability for each VM datastore:
- Each VMDK, VMConfig, and SWAP disk is provisioned as a separate VVol within the array
- A single point of access on the fabric is provisioned via the protocol endpoint (PE)
- These PEs are discoverable using regular LUN discovery commands

[Diagram: VM1, VM2, and VM3 managed by vCenter; vCenter communicates out-of-band with the VASA provider, while host I/O reaches the per-VM VVols on HP 3PAR StoreServ through the protocol endpoint]

Goals of the VVol architecture

Support VMware SPBM
- Granularity of service provisioning: previous versions of vSphere stored several VMDKs in a single volume; as a result, the storage services provisioned to all VMs could not be changed easily and were not on a per-VM (or per-application) basis

Scalability of VMs and VMDKs
- Because a number of VMDKs were stored in a single volume, scalability of the vSphere deployment was not a major issue
- When each VMDK is stored in its own volume, the number of LUNs that can be attached to ESXi becomes a problem; this is solved through the use of protocol endpoints to export multiple VVols from a single point on the fabric

Improvement in operational recovery and disaster recovery
- Management information is stored alongside each VVol

VVOL timeline

- 06/30: VMware announced the VVOL beta
- 09/29: VVOL beta functionality to be included with 3PAR OS 3.2.1
- A handful of 3PAR customers will be allowed to enable VVOLs on non-production arrays
- VVOL functionality will be enabled by default in 3.2.1 MU
- All customers can use VVOLs after VMware vSphere 2015 is GA

Key Windows Server 2012 storage features - MSFT solution

MSFT Thin Provisioning
- Detects/identifies thinly provisioned virtual disks
- Notifies the administrator when storage thresholds are met
- UNMAP returns storage when no longer needed
- MSFT requires UNMAP and SBC-3 (T10 std) enablement to pass ThP certification

ODX (Offloaded Data Transfer)
- Enables protocol- and transport-agnostic, storage-subsystem-assisted data transfer
- HPSD targeting lead with enterprise arrays for ODX support

Storage Management via operating system
- Integrated in the operating system; optimal for small to medium businesses
- SNIA standards-based (SMI-S) providers need to support Lifecycle Indications (auto-updates the device info cache in the OS); without Lifecycle Indications, the cache can be stale and would need to be refreshed either manually or via the 24-hour auto cycle

Failover Clustering
- Scale: the operating system supports 64 nodes
- Support level dependent on array market

SMB 3 (Server Message Block)
- Hyper-V support for SMB file storage; transparent failover, bandwidth improvements, support for RDMA NICs

Storage Spaces
- Optional certification for JBODs (SATA, SAS, USB; not supported for Fibre Channel or iSCSI)
- Introduces the Storage Pools concept; supports multi-tenancy, mirroring or parity, clustering, ThP

HP 3PAR Thin Persistence in Microsoft environments
A perfect fit

With Windows Server 2003 and 2008
- Zero unused space in a volume with sdelete or fsutil (a sketch follows)

Introduced with Windows Server 2012
- Active Thin Reclamation with T10 UNMAP
- Detects/identifies thinly provisioned virtual disks
- Notifies the administrator when storage thresholds are met
- UNMAP returns storage when no longer needed
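On the older releases, reclamation is a two-step process: zero the free space in the file system, then let the 3PAR ASIC zero detect return the space to the CPG. A minimal sketch (the drive letter is illustrative; sdelete is the Sysinternals tool):

    # zero all free space on volume E: so the array's inline zero detect
    # can return it to the CPG (run from an elevated prompt)
    sdelete.exe -z E: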

Thin Provisioning support
Just-in-time allocation and the ability to reclaim unused space automatically

Identification
- Provides mechanisms for identifying thinly provisioned LUNs throughout the operating system (VPD 00h, B2h, PT=010b, LBPU=1)
- Ability to query the mapped/unmapped state of LUN extents

Notification
- Exposes events to indicate when LUNs cross threshold boundaries (temporary/permanent resource exhaustion handling)
- Events are consumable by management applications

Optimization
- Provides end-to-end transparency of application and file system allocations, all the way from the app layer (including Hyper-V guests on VHDX) through the storage hardware
- UNMAP (space reclaim) requests are provided both in real time and on a scheduled basis

Compatibility

Identification
- Thinly provisioned volumes are seen in the Optimize Drives and File and Storage Services sub-screens and wizards (volume view and pool view)

Automatically reclaim space with UNMAP
- Scheduled UNMAP runs at times of low server I/O or CPU utilization and at scheduled times (such as Defrag)
- UNMAP also runs at the time of file deletion
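To check that delete notifications (UNMAP) are enabled and to trigger an on-demand reclaim pass, Windows Server 2012 offers the following. A minimal sketch; the drive letter is illustrative:

    # 0 means delete notifications (UNMAP/TRIM) are enabled
    fsutil behavior query DisableDeleteNotify

    # trigger an on-demand reclaim pass on volume E: (PowerShell)
    Optimize-Volume -DriveLetter E -ReTrim -Verbose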

Offloaded Data Transfer - ODX

ODX allows Hyper-V and the operating system to move storage faster and more efficiently
- Enables protocol- and transport-agnostic, cross-machine, storage-subsystem-assisted data transfer
- Practically eliminates load on the server, enables a significant reduction in the load on the transport network, and presents an opportunity for innovation in the storage subsystem
- Used for live storage migration, VHD creation, bulk data movement, and so forth

Array-offloaded copies run at up to 260 MB/s

[Charts: 3PAR host-port throughput during a Hyper-V live storage migration between volumes. Without ODX, the file copy request drives heavy front-end traffic through the host; with ODX, the HP 3PAR array performs the copy and there is virtually no host I/O activity]
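ODX needs no per-copy commands; Windows uses it automatically when the array advertises support. You can verify (or temporarily disable) it through the documented FilterSupportedFeaturesMode registry value; a minimal PowerShell sketch:

    # 0 = ODX enabled (default), 1 = ODX disabled
    Get-ItemProperty HKLM:\SYSTEM\CurrentControlSet\Control\FileSystem |
        Select-Object FilterSupportedFeaturesMode

    # disable ODX for troubleshooting (set back to 0 to re-enable)
    Set-ItemProperty HKLM:\SYSTEM\CurrentControlSet\Control\FileSystem `
        -Name FilterSupportedFeaturesMode -Value 1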

Hyper-V ODX support
Boost your performance

- Secure Offloaded Data Transfer
- Fixed VHD/VHDX creation
- Dynamic VHD/VHDX expansion
- VHD/VHDX merge
- Live storage migration

Just another example: creation of a 10 GB fixed disk

[Chart: time in seconds; ~3 minutes on an average desktop without ODX versus ~10 seconds with ODX]

Capacity efficiency with deduplication

Variable-size chunk-based deduplication
- 32 K - 128 K chunks found using a sliding-window hash (Rabin fingerprint)
- Chunks are compressed when saved

Scope and scale
- Runs per-volume, with multiple volumes processed simultaneously
- CPU and memory throttling can minimize performance impact
- Metadata is kept redundant to protect against data corruption

Capacity savings measured by Microsoft IT
- Home folders: 30% savings
- General file shares: 64% savings
- Software development shares: 67% savings
- Virtual hard disk library: 82% savings

Performance (source: Microsoft)
- No noticeable effect on typical office workloads (home directories)
- 10% reduction in the number of users supported over SMB 3.0 using FSCT
- Optimization/deduplication runs at 20-35 MB/s using a single core
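Enabling this on a volume is a short PowerShell exercise. A minimal sketch, assuming the deduplication role service is available and that volume E: holds file share data:

    # install the deduplication role service
    Install-WindowsFeature FS-Data-Deduplication

    # enable dedup on the volume and run a first optimization pass
    Enable-DedupVolume -Volume E:
    Start-DedupJob -Volume E: -Type Optimization

    # check the savings once the job completes
    Get-DedupStatus -Volume E: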

Windows Server 2012 unified management concepts

File services layer (Windows File Server): SMB and NFS shares, FSRM and DFS, iSCSI, deduplication, file systems, Windows cluster, Windows volumes, Windows disks

Storage layer: virtual disks, storage pools, physical disks
- With Windows Server (Storage Spaces): Space(s) carved from a Pool of disks
- With external storage via SMP/SMI-S: LUN(s) carved from a storage pool on the array

Storage Manager - Storage Pool view

[Screenshot: the Windows Storage Manager storage pool view after registering the SMI-S provider]

Storage Manager - Array view (Storage Pool)

[Screenshot: the Windows Storage Manager array view showing the 3PAR storage pool]

SCOM 2007 and SCOM 2012

HP Storage Management Pack for SCOM
- HP Storage management in SCOM
- Automated installation and discovery
- Supports both SCOM 2007 and SCOM 2012
- Monitoring of HP Storage directly from SCOM
- Events and alerts for HP Storage hardware
- Diagram/topology views
- Not included with HP Insight Control for System Center
- Download for free from:
  http://h18006.www1.hp.com/storage/SCOM_managementpack.html

SCVMM 2012

HP Storage SMI-S integration in SCVMM
- Manage and provision storage directly out of SCVMM 2012
- Import multiple SMI-S-capable storage devices
- Categorize logical units via classifications and pools
- Provision logical units directly out of SCVMM
- Allocate capacity for Hyper-V hosts or clusters

View the list of HP SMI-S-capable storage devices at:
http://h18004.www1.hp.com/storage/smis-matrix.html

HP 3PAR Storage overview and details

Capacity information
- Provisioned, allocated, and available capacity for HP 3PAR

Volume information
- RAID, CPG, WWN
- Health status

Provision infrastructure >15x faster*
Add Hyper-V hosts in five easy steps

With traditional tools (14 steps):
1. Update server FW
2. Configure BIOS
3. Configure iLO
4. Configure Smart Array
5. Select credentials
6. Set discovery scope
7. Select servers
8. Select host groups and profiles
9. Perform deep server discovery
10. Configure server names and network
11. Deploy Hyper-V to bare-metal server
12. Configure SCVMM virtual switches
13. Assign IP addresses
14. Complete host networking and storage configuration

With HP OneView SCVMM integration (five steps):
1. In SCVMM, launch the HP Add Capacity wizard
2. Select SCVMM and HP OneView profiles and servers to deploy
3. Match SCVMM network adapters with HP OneView uplink ports
4. Input computer names and optional network configuration
5. Confirm settings and start provisioning

Automated steps:
1. The HP OneView profile is applied (BIOS, VC, Storage)
2. Hyper-V is deployed
3. Host networking and shared SAN storage are configured

* Based on HP internal testing as of April 2014 comparing HP OneView v1.10 vs. traditional HP and Microsoft management tools, each deploying 16 servers. The test was to configure the networks, enclosure, template, and profiles. HP OneView SCVMM integration takes 10 minutes of an admin's time vs. traditional HP and Microsoft management tools taking 159 minutes of admin time.

Cluster Extension for Windows
Clustering solution protecting against server and storage failure

What does it provide?
- Manual or automated site failover for server and storage resources
- Transparent Hyper-V live migration between sites

Supported environments
- Windows Server 2003, 2008, 2012
- HP StoreEasy (Windows Storage Server)

Max supported distances
- Remote Copy sync supported up to 2.6 ms RTT (~260 km)
- Up to the Microsoft Cluster heartbeat maximum of 20 ms RTT

Requirements
- 3PAR disk arrays with 3PAR Remote Copy (sync or async, 1:1 and SLD configurations), managed by HP CLX
- Windows cluster with HP Cluster Extension (CLX)
- File share witness reachable over LAN/WAN
- Max 20 ms cluster IP network RTT

Licensing options
- Option 1: per cluster node - 1 LTU per Windows cluster node (4 LTUs for the configuration shown across Data Center 1 and Data Center 2)

Also see the HP CLX resources

Peer Persistence for Windows
Available with 3PAR OS 3.2.1

What does it provide?
- High availability across data centers
- Automatic or manual transparent LUN swap
- Transparent live migration between data centers

How does it work?
- Based on 3PAR Remote Copy and Microsoft MPIO
- The primary RC volume is presented with active paths
- The secondary RC volume is presented with passive paths
- Automated LUN swap arbitrated by a Quorum Witness (QW Linux Hyper-V VM on a third site)

Supported environments
- Windows Server 2008 R2 and 2012 R2
- Stand-alone servers and Windows Failover Cluster
- Hyper-V
- Up to the RC-supported max of 2.6 ms RTT (~260 km)

[Diagram: a Windows Failover Cluster with clustered applications and Hyper-V hosts spanning Data Center 1 and Data Center 2; HP 3PAR synchronous Remote Copy plus Peer Persistence keeps primary (P) and secondary (S) volumes on the two arrays, with the Quorum Witness (QW) at DC 3 reachable over LAN/WAN]

Peer Persistence for Windows
Never lose access to your volumes

- Each host in the Windows Server Failover Cluster is connected to each array on both sites via redundant fabrics (FC or iSCSI or FCoE)
- A synchronous copy of the volume is kept on the partner array/site (RCFC or RCIP)
- Each volume is exported in R/W mode with the same WWN from both arrays on both sites
- Volume paths for a given volume are Active only on the array where the primary copy of the volume resides; other volume paths are marked Standby
- Both arrays can host active and passive volumes
- The Quorum Witness on the third site acts as arbitrator in case of failures

[Diagram: Hyper-V cluster nodes connected through Fabric A and Fabric B to the 3PAR arrays at Site A (Vol A prim, Vol B sec) and Site B (Vol B prim, Vol A sec), with up to 2.6 ms RTT latency between sites and the Quorum Witness (QW) at Site C; active paths lead to the primary copy of each volume, passive (standby) paths to the secondary]

Peer Persistence for Windows - MPIO path view
Windows Server Failover Cluster

[Screenshots: the Windows MPIO view showing two active and two standby paths per volume, and the 3PAR Management Console Remote Copy view showing Vol A prim on the Site A array and Vol A sec on the Site B array, with the Quorum Witness (QW) at Site C]

Recovery Managers for Microsoft Exchange Server and Microsoft SQL Server
RM MS Exchange Server and RM MS SQL Server

- Automatic discovery of Exchange and SQL Server servers and their associated databases
- VSS integration for application-consistent snapshots
- Support for Exchange Server 2003, 2007, and 2010
- Support for SQL Server 2005, 2008, and 2012
- Support for SQL Server running in a vSphere Windows VM
- Database verification using Microsoft tools

See the demo video: 3PAR Recovery Manager for SQL

Built on 3PAR Thin Virtual Copy technology

- Fast point-in-time snapshot backups of Exchange and SQL Server databases
- Hundreds of copy-on-write snapshots with just-in-time, granular snapshot space allocation
- Automatic recovery from snapshot
- 3PAR Remote Copy integration
- Exporting of database backups to other hosts

Backup integration
- HP Data Protector
- Symantec NetBackup and Backup Exec
- Microsoft System Center Data Protection Manager

Find product documentation at:
http://h18006.www1.hp.com/storage/software/3par/rms-exchange/index.html
http://h18006.www1.hp.com/storage/software/3par/rms-sql/index.html

Recovery Manager for Microsoft Hyper-V

- Built on 3PAR Thin Virtual Copy technology
- Supports hundreds of snapshots with just-in-time, granular snapshot space allocation
- Creates crash- and application-consistent virtual copies of the Hyper-V environment
- VM restore from snapshot to the original location
- Mount/unmount of a virtual copy of any VM
- Time-based VC policy per VM
- Web GUI scheduler to create/analyze VCs
- PowerShell cmdlets (CLI and scripting)

Supported with:
- Windows Server 2008 R2 and 2012
- Stand-alone Hyper-V servers and Hyper-V Failover Cluster (CSV)
- F-Class, StoreServ 7000 and 10000

RMS and RME architecture

1. Off-host backup
2. Direct restore from tape
3. Direct mount of snapshot
4. Restore from snapshot with file copy restore

[Diagram: an Exchange or SQL Server production server and an RM client/backup server sharing access to 3PAR production volumes; snapshots taken at 9:00, 13:00, and 17:00 can be mounted on the backup server and streamed to an optional tape library or D2D target]

RME, RMS, and RMH VSS integration

1. The backup server requests the RM agent to create a 3PAR VC
2. The RM agent requests DB metadata details from the Microsoft Volume Shadow Copy Service (VSS)
3. The RM agent calls MS VSS to create virtual copies (VC) for specific DB volumes
4. MS VSS queries the 3PAR VSS provider whether a 3PAR VC can be created
5. MS VSS sets the DB/VHD to quiesce mode
6. MS VSS calls the 3PAR VSS provider to create a VC of the volumes
7. The 3PAR VSS provider sends commands to the 3PAR array to create the VC of the volumes
8. The 3PAR VSS provider acknowledges that VSS VC creation is complete
9. MS VSS sets the DB/VHD back to normal operation
10. MS VSS acknowledges to the RM agent that the VC was created

[Diagram: the production DB server hosting the RM agent, MS VSS, the Exchange/SQL Server DB or VHD, and the 3PAR VSS provider; Recovery Manager on the backup server drives the numbered flow against the 3PAR array]

RM Exchange and SQL Server in a CLX environment
Extended possibilities

- The RM backup server at the remote secondary site (Site B) can actively manage Virtual Copy
- That means all operations, including recovery, can be performed at the remote site

[Diagram: a single copy cluster / SQL extended cluster (using CLX) with Exchange/SQL nodes 1-4 across Site A and Site B; the database is replicated by Remote Copy, Virtual Copies exist at both sites, and local and remote RM backup servers mean recovery can run at Site B using RM]

Recovery Manager for Microsoft Exchange
Concurrent database validations

- Validations can take hours to complete for large databases (TB-sized)
- Queuing and sequentially validating many databases can take a long time (hours to days)
- This enhancement ensures that the validations occur in parallel, mitigating the issue

Earlier versions (sequential): DB1 (2 TB) 3 hrs, then DB2 (2 TB) 3 hrs, then DB3 (2 TB) 3 hrs - total: 9 hrs to complete

Since v4.4 (concurrent): DB1, DB2, and DB3 (2 TB each) validate in parallel, 3 hrs each - total: approx. 3 hrs to complete

Recovery Manager Diagnostic Tool

- A tool that validates all the RM configuration parameters and generates reports indicating non-compliance
- Runs on the backup server
- Automatically probes all the servers registered in the Recovery Manager, including the backup server itself
- Checks all parameters required for a successful RM operation, such as:
  - Database status
  - HWP configuration
  - StoreServ connectivity
  - VSS
- Generates a report indicating success, warning, and error
- Advises the user of corrective action
- Displays a high-level dashboard status

Currently supported:
- RM Exchange Server
- RM SQL Server
- RM Hyper-V pending

Learning check
1. On which two technologies is Peer Persistence for Windows based?
(Select two)
a. 3PAR Remote Copy
b. Microsoft MPIO
c. Microsoft Active Directory
d. VMware Storage Policy-Based Management (SPBM)

Learning check answer

1. On which two technologies is Peer Persistence for Windows based?
(Select two)
a. 3PAR Remote Copy (correct)
b. Microsoft MPIO (correct)
c. Microsoft Active Directory
d. VMware Storage Policy-Based Management (SPBM)

Learning check

2. Peer Persistence for VMware requires four 3PAR disk arrays
- True
- False

Learning check answer

2. Peer Persistence for VMware requires four 3PAR disk arrays
- True
- False (correct)
Peer Persistence for VMware requires two 3PAR disk arrays

Learning check

3. What are VVols?
__________________________________________________________________
__________________________________________________________________
__________________________________________________________________

Learning check answer

3. What are VVols?
VVols are part of the VASA 2.0 specification, defined by VMware as a new architecture for VM storage array abstraction

Learning check

4. List three benefits of HP 3PAR RM that apply to any server environment
___________________________________________________________________
___________________________________________________________________
___________________________________________________________________

Learning check answer

4. List three benefits of HP 3PAR RM that apply to any server environment
- Off-host backup
- Direct mount of snapshot
- Restore from snapshot with file copy restore
- Hundreds of copy-on-write snapshots with just-in-time, granular snapshot space allocation

Security and
multi-tenancy


3PAR DAR encryption - What's new

HP 3PAR StoreServ is now a complete FIPS 140-2 compliant encryption solution
A 3PAR array uses these FIPS 140-2 validated encryption modules:
- FIPS 140-2 Level 2 Self-Encrypting Drives (SEDs)
  - New 920 GB FIPS Encrypted MLC SSD
- FIPS 140-2 Level 2/3 Enterprise Key Managers (EKM)
- FIPS 140-2 Level 1 software

A 3PAR array now supports these optional EKMs:
- HP Enterprise Secure Key Manager v4.0
- SafeNet KeySecure k450 and k150
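Data-at-rest encryption is enabled once per array from the 3PAR CLI. A minimal, illustrative sketch; the exact option set for attaching an external key manager varies by 3PAR OS release, so treat these commands as an outline:

    # check whether encryption is licensed and active
    controlencryption status

    # enable data-at-rest encryption on an array populated with FIPS SEDs
    controlencryption enable

    # periodically re-key the drives per security policy
    controlencryption rekey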

Why is key management important?
Encryption is simple - key management is not

- Data is not protected just by encrypting it
- Keys are the little secrets that protect the big secrets

Event or threat -> Risk and impact
- Exposing keys, unauthorized access -> exposure of protected data, noncompliance
- Loss of authorized access to keys -> loss of data access, business interruption
- Loss or accidental destruction of keys -> loss of keys, data loss, business failure
- Failure to control/monitor/log access -> audit failures, increased liability

Keys must be securely preserved, protected, and accessible for the life of the data

3PAR now supports the following enterprise key managers:
- HP Enterprise Secure Key Manager v4.0
- SafeNet KeySecure k450 and k150

Other recent 3PAR security enhancements

Common Criteria Certification commences on 7000 and 10000 with 3PAR OS 3.2.1
- Takes two to three months to complete

Maximum password length increases from 8 characters to 32
- Password hash length was 31, now is 107
- Hash: a one-way cryptographic function that allows you to store a password without knowing its contents

CA-signed certificates
- Allow an admin to import a certificate signed by an external authority
- This enables TPDTCL to prove the authenticity of the StoreServ array to the remote CLI

New Audit User Account
- A new class of user to enable Retina Network Security Scanner and Nessus Vulnerability Scanner to perform a credentialed (local) scan of the 3PAR OS Linux file system

What are 3PAR virtual domains?

Multi-tenancy with traditional storage
- Separate, physically secured storage: Admin A / App A / Dept A / Customer A, Admin B / App B / Dept B / Customer B, and Admin C / App C / Dept C / Customer C each need their own array

Multi-tenancy with 3PAR domains
- Shared, logically secured 3PAR storage: the same tenants are isolated in Domain A, Domain B, and Domain C on a single array

See also http://h18006.www1.hp.com/storage/software/3par/vds/index.html

3PAR virtual domains for multi-tenancy and security

- Provide fine-grained access control for users and hosts to achieve greater storage service levels
- Securely separate data and eliminate unauthorized or accidental access
- Up to 1024 domains with individual settings per 3PAR array
- Optionally, Priority Optimization allows assigning QoS to virtual domains

[Diagram: each virtual domain (A through n) bundles its own hosts, access parameters, CPGs, volumes, and QoS settings]
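Creating a domain and scoping a CPG and a tenant administrator to it takes a few 3PAR CLI commands. A minimal, illustrative sketch; the domain, CPG, and user names are assumptions:

    # create a domain for tenant A
    createdomain domain_A

    # create a RAID 5 CPG that belongs to the domain
    createcpg -domain domain_A -t r5 cpg_A_r5

    # give a tenant administrator edit rights inside the domain only
    createuser -c <password> tenantA_admin domain_A edit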

What are the benefits of virtual domains?

Centralized storage administration with traditional storage:
- Users are consumers only
- The centralized admin performs full setup, provisioning, and monitoring of the provisioned storage
- Consolidated storage on the physical array

Self-service storage administration with 3PAR virtual domains:
- Users self-provision within their secure virtual array (virtual domain)
- The centralized admin performs virtual domain setup and monitoring only
- Consolidated storage on HP 3PAR

Authentication and authorization
LDAP login

[Diagram: a management workstation logs in to the 3PAR array (steps 1 and 6), which consults the LDAP server (steps 2 through 5)]

Step 1: The user initiates login to 3PAR via the 3PAR CLI/GUI or SSH
Step 2: 3PAR OS searches local user entries first; upon a mismatch, the configured LDAP server is checked
Step 3: The LDAP server authenticates the user
Step 4: The LDAP server provides LDAP group information for the user
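LDAP lookup is configured on the array with setauthparam. A minimal, illustrative sketch for a simple-binding directory (the server address is an assumption), followed by a test login:

    # point the array at the LDAP server and choose simple binding
    setauthparam -f ldap-server 192.0.2.10
    setauthparam -f binding simple

    # verify that a directory user can authenticate and map to a group
    checkpassword jsmith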

HP 3PAR Virtual Lock

- HP 3PAR Virtual Lock Software prevents deletion of selected virtual volumes for a specified period of time
- Locked virtual volumes cannot be deleted, even by a 3PAR Storage System administrator with the highest level of privileges
- Note: Mounted servers can still read, write, and delete files and folders
- Locked RO virtual copies cannot be deleted or overwritten (for compliance reasons)
- Because it is tamper-proof, it is also a way to avoid administrative mistakes
- Supported with:
  - Fat and thin virtual volumes
  - Full Copy, Virtual Copy, and Remote Copy

Also see the Virtual Lock overview
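Virtual Lock is applied by setting retention and expiration times on a volume or virtual copy. A minimal, illustrative 3PAR CLI sketch; the copy name and durations are assumptions:

    # retain a read-only virtual copy for 30 days; it cannot be deleted
    # (even by a super user) until the retention time elapses
    setvv -retain 30d vc_finance_db.ro

    # optionally expire (auto-remove) the copy after 60 days
    setvv -exp 60d vc_finance_db.ro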

Learning check
1. List at least five recent HP 3PAR StoreServ security enhancements
____________________________________________________________________
____________________________________________________________________
____________________________________________________________________
____________________________________________________________________
_______________________________________________________________

Learning check answer

1. List at least five recent HP 3PAR StoreServ security enhancements
- Now a complete FIPS 140-2 compliant encryption solution
- Common Criteria Certification commences on 7000 and 10000 with 3PAR OS 3.2.1
- Maximum password length increases from 8 characters to 32
- Password hash length was 31, now is 107
- Admins can import a certificate signed by an external authority
- New Audit User Account
- Virtual domains provide secure virtual arrays
- HP 3PAR Virtual Lock Software prevents deletion of selected virtual volumes for a specified period of time

Competitive


Why HP 3PAR versus EMC, NetApp, HDS, and IBM

- Only 3PAR has a single architecture that spans mid-range, high-end, and all-flash
- Only 3PAR has ASIC-enabled thin technology providing no performance drop-off for zero-reclaim and built-in thin provisioning
- Only 3PAR allows different RAID levels on the same disks, thus eliminating wasted space (Note: NetApp offers only a single RAID level)
- Only 3PAR has true ASIC-enabled, line-speed scale-out in the mid-range (Note: Both NetApp and IBM mid-range scale-out is based on slower Ethernet, with all inter-node I/O traffic driven by the same Intel processors that drive front- and back-end IOPS)
- Only 3PAR has true federation built into the storage controllers
- Only 3PAR offers a full-featured, dedupe-enabled, all-flash native block array (Note: NetApp just introduced a FAS AFA with performance numbers for files; with NetApp, block is an emulation, not native)

Reasons to buy 3PAR over VMAX3

3PAR: Single HW and SW architecture, mid-to-high-end and all-flash
VMAX: Requires VNX, VMAX, and XtremIO

3PAR: Better HA - the 3PAR node-pair architecture is more resilient
VMAX: The VMAX engine architecture has two clear SPOFs

3PAR: Industry's best Tier-1 ease of management - autonomic
VMAX: According to customers, managing VMAX is like taking a beating

3PAR: Industry's most advanced thin technology
VMAX: Weak, bolted on, and slow - see the Edison Group report

3PAR: Only charges SW licenses for up to 1/3 the capacity of the array
VMAX: License fees are typically based strictly on each TB in the array

Top reasons to buy 3PAR over VNX-2

3PAR: Single HW and SW architecture, mid-to-high-end and all-flash
VNX-2: Requires VNX, VMAX, and XtremIO

3PAR: Solid, proven operating system
VNX-2: Claims millions of lines of new VNX-2 code

3PAR: Industry's best ease of management - autonomic
VNX-2: Unisphere is easy, but VNX requires a lot of pre-planning and decisions

3PAR: Industry's most advanced thin technology
VNX-2: Thin technology is weak, bolted on, and slow - see the Edison Group report

3PAR: Industry's best real-world, day-to-day performance
VNX-2: Might be fast, but requires constant retuning to stay fast

3PAR: Four mesh-active controllers in the mid-range
VNX-2: Two controllers max

Top reasons to buy 3PAR 7450 over XtremIO

3PAR: Single HW and SW architecture, mid-to-high-end and all-flash
XtremIO: Requires VNX, VMAX, and XtremIO to match 3PAR's range

3PAR: Full sync and async replication
XtremIO: No remote replication

3PAR: Four-controller scale-out with flexible configurations and scale-up
XtremIO: Fixed configurations - no scale-up after purchase

3PAR: Scales to 240 x 1.9 TB SSDs
XtremIO: Limited to 150 SSDs in the max-brick/12-controller configuration

3PAR: < $2 per usable GB
XtremIO: ~$5 per usable GB

3PAR: ASIC-enabled deduplication
XtremIO: Dedupe driven by the same processors as system IOPS; performance numbers given with dedupe turned off

Top reasons to buy 3PAR over NetApp

3PAR: Single HW and SW architecture, mid-to-high-end and all-flash
NetApp: A mid-range architecture; FAS all-flash is a stop-gap with highly suspect performance numbers

3PAR: Industry's best ease of management - autonomic
NetApp: Cumbersome and complex management, particularly in block environments and anti-virus

3PAR: Industry's most advanced thin technology
NetApp: Thin provisioning is risky; zero reclaim is untestable - see the Edison Group report

3PAR: Industry's best real-world, day-to-day performance
NetApp: NetApp's own SPC results show that it requires 50% more controllers than 3PAR to reach similar performance

3PAR: Four mesh-active controllers in the mid-range
NetApp: SPECsfs benchmark results show that there is a huge efficiency drop when ...
Top reasons to buy 3PAR over HDS


3PAR

HDS

Single HW and SW
architecture mid-tohigh-end and all-flash

VSP is high-end. VM is similar and mid-range


but only two controllers. AFA VM has limited
scalability and relatively weak performance.
HUS is a different platform.

Industrys most
advanced thin
technology

Zero reclaim has big performance and latency


hitsee Edison Group report

Industrys best easeofmanagementautono


mic

In both the high-end and mid-range HDS is


more difficult to manage than 3PAR

Easy NAS with good


value

HUS and HUS VM have expensive and


cumbersome NAS based on BlueArc

Top reasons to buy 3PAR over IBM

3PAR: Single HW and SW architecture, mid-to-high-end and all-flash
IBM: High-end (DS8800), mid-range (Storwize), and all-flash (FlashSystem) are totally different platforms

3PAR: Industry's best real-world, day-to-day performance
IBM: DS8800 matches the 3PAR SPC result; however, 3PAR mid-range handily beats Storwize, XIV, and FlashSystem

3PAR: Industry's best ease of management - autonomic
IBM: Traditional LUN management

3PAR: Industry's most advanced thin technology
IBM: Traditional thin provisioning; zero reclaim runs on system processors

Learning check

1. Name three unique features of HP 3PAR that are missing in competitors' products
____________________________________________________________________
____________________________________________________________________
____________________________________________________________________
________________________________________________________________

Learning check answer

1. Name three unique features of HP 3PAR that are missing in competitors' products
- Single architecture that spans mid-range, high-end, and all-flash
- ASIC-enabled thin technology providing no performance drop-off for zero-reclaim and built-in thin provisioning
- Support for different RAID levels on the same disks, thus eliminating wasted space
- True ASIC-enabled, line-speed scale-out in the mid-range
- True federation built into the storage controllers
- A full-featured, dedupe-enabled, all-flash native block array

Converged Storage


HP Converged Storage promise: Simplify

Architectural attributes: autonomic, efficient, multi-tenant, federated, polymorphic

- Converged management orchestration: choreograph across servers, networks, and storage
- Scale-out and federated software: non-disruptive data growth and mobility
- Standard x86-based platforms: increase storage performance and density

Modern storage architectures designed for the cloud, optimized for big data, and built on converged infrastructure

[Diagram: the portfolio spans Entry to High End; Block | Object | File; primary storage (HDD <-> Flash); and information protection, retention, and analytics]

Converged management - HP OneView
Innovations for converged management productivity; the successor to HP SIM

Simple: consumer-inspired user experience
- Everyday tasks in seconds
- Architected for team productivity

Fast: software-defined process templates
- Push-button precision, consistency, reliability
- Automate storage provisioning using server profiles

Extensible: enterprise software integration
- Infinite possibilities to automate and customize
- VMware vCenter, Microsoft System Center, Red Hat RHEV, HP CloudSystem, HP Orchestration, user customizations

What's new in storage management? 3PAR provisioning
HP OneView 1.1 - automation for 3PAR StoreServ Storage, traditional FC fabrics, and Flat SAN

Automated storage provisioning
- Import 3PAR storage systems and storage pools
- Carve 3PAR volumes on the fly
- Attach/export 3PAR volumes to Server Profiles

Automated SAN zoning
- Import Brocade fabrics for automated zoning
- Zoning is fully automated via Server Profile volume attachments

Integration of storage with Server Profiles
- Attach private/shared stand-alone volumes to server profiles
- Create ephemeral volumes in the Server Profile - like adding a vdisk to a VM, but with real hardware
- Automated boot target configuration using port ...

A Storage license is included as part of the HP OneView 1.1 release

Introducing: HP ConvergedSystem for Virtualization
The FASTEST way to virtualize, with breakthrough TCO on a cloud-compatible platform

- Simple to buy: 50 - 1000+ VMs, starting at $2,250/month
- Simple to deploy: order to operations in as few as 20 days
- Simple to manage: managed as ONE
- Simple to support: one company: HP

HP ConvergedSystem 700
HP advantages versus VCE Vblock 300*

- Performance: up to 35% more throughput (HP BL460c Gen8 NetPerf vs. Cisco UCS B200 M3 @ MTU 1500/default IRQ)
- Storage: up to 55% lower SAN network latency (HP CS700 Flat SAN 2.05 µs vs. Vblock 300 4.5 µs via FEX > FI > MDS)
- VMs: 3x hypervisor choice (HP CS700x offers VMware, Hyper-V, and Red Hat KVM vs. Vblock with VMware only)
- Deployment: up to 33% faster deployment (HP CS700 30 days to deploy vs. 45 days for Vblock)
- Cost: ~15% lower price (based on estimated list prices)

Integrated design, managed as one, supported by one point of contact

* Comparison between HP ConvergedSystem 700 and the Vblock 300; may vary by deployment. VCE data as of 11/1/2013 per http://www.vce.com/asset/documents/vblock-320-gen3-1-architecture-overview.pdf. Hypervisor choice based on CS700x.

HP Helion portfolio overview
Best-in-class products, solutions, and services for hybrid IT

Integrated cloud solutions
- HP CloudSystem Enterprise
- HP CloudSystem Foundation

OpenStack software
- HP Helion OpenStack Community

Cloud software and infrastructure
- HP Automation & Orchestration
- HP Hybrid Cloud Management
- HP Converged Infrastructure

Managed services
- HP Helion Managed Virtual Private Cloud & Managed Private Cloud
- HP Helion Managed & Workplace Applications

Public cloud and SaaS
- HP Helion Public Cloud
- HP SaaS applications

PartnerOne for Cloud
- Cloud builders, cloud resellers, cloud service providers

OpenStack professional services
- Advisory services: strategy, apps transformation, operations transformation
- Education, design, implementation

What is Recovery Manager Central? (1 of 3)

Snapshot-based data protection platform with two elements:
- Recovery Manager Central for VMware: managed via a vCenter plug-in; for VM backups only (application-consistent)
- Recovery Manager Central Express Protect: managed via a web browser; for all other snap backups (crash-consistent)

Fosters integration of 3PAR and StoreOnce
- Near-instant recovery
- Longer-term data retention
- Catalyst integration as backup target

Flat Backup - data streams from 3PAR to StoreOnce*

* v1.0: the data path goes through the RMC VM until 1.1 or 2.0, depending on when RMC is embedded in StoreOnce

What is Recovery Manager Central? (2 of 3)

Components:
- 3PAR StoreServ system (any currently supported model*)
- StoreOnce (software 3.12.x to support Backup Protect**)
- StoreOnce Recovery Manager Central 1.0
- VMware 5.1 and 5.5

* 7000 series and 10000 series will have full functionality; F-Class and T-Class will be limited

What is Recovery Manager Central? (3 of 3)

It is not a replacement for an existing backup application. RMC 1.0:
- 3PAR only - cannot protect other storage platforms
- No Oracle, SQL Server, Exchange Server, or Hyper-V support (unless on a VM)
- No Hyper-V or KVM VMs
- No bare-metal recovery (unless on a VM)
- No granular-recovery capability

It is intended to be a complementary piece alongside a backup app
- A faster and cheaper alternative to a backup app for non-granular protection

RMC value proposition

Converged availability and backup service for VMware
- Flat backup alternative to traditional backup apps
- Performance of Virtual Copy snaps
- Reliability and retention of StoreOnce
- Speed of backups and restores via SnapDiff

Control of VMware protection passes to VMware admins
- Managed from within vSphere

Extension of primary storage
- Snapshots are key to the entire data protection process

Common integration and API point for backup applications, reporting, and security

Learning check

1. How does HP OneView innovate storage management?
____________________________________________________________________
__________________________________________________________________

Learning check answer

1. How does HP OneView innovate storage management?
- By automating 3PAR StoreServ Storage, traditional FC fabrics, and Flat SAN
- By automating SAN zoning
- By integrating storage with server profiles

Customer resources


HP product information

The HP Marketing Document Library allows you to:
- Access and search the QuickSpecs online
- Download the offline QuickSpecs application
- Create a quick quote for your desired product
- Look up individual product list prices

The QuickSpecs provide technical info for:
- HP products
- HP services
- HP solutions

Go to http://www.hp.com/go/qs

HP Storage Information Library
Find just what you are looking for

Find up-to-date information including:
- Installation guides
- Configuration guides
- User guides
- References such as release notes and planning manuals
- Service and maintenance guides

Available for 3PAR and some other storage systems

Visit www.hp.com/go/docs and click the Storage Information tab

HP SAN certification and support
HP SAN Design Reference Guide

Main goals and contents:
- Architectural guidance
- HP support matrices
- Implementation best practices
- Incorporation of new technologies
- HP Storage implementations such as iSCSI, NAS/SAN Fusion, FC-IP, FCoE, DCB

Provides the benefit of HP engineering when building a scalable, highly available enterprise storage network
Documents HP Services SAN integration, planning, and support services

Visit www.hp.com/go/sandesign

HP Storage interoperability
Single Point of Connectivity Knowledge (SPOCK) for HP Storage products

SPOCK provides the information to determine interoperability for:
- Integration of new products and features
- Maintaining active installations

SPOCK can be accessed by:
- HP internal users
- HP customers
- HP partners

HP internal access:
http://spock.corp.hp.com/default.aspx
External access (requires an HP Passport):
http://www.hp.com/storage/spock

HP 3PAR assessment tools
Available for HP and HP partners

NinjaSTARS
- Allows capturing customer-installed storage base capacities, configuring HP StoreServ 7000, and projecting thin savings

NinjaThin
- Allows capturing customer-installed storage base capacities and projecting thin savings with HP StoreServ 10000

NinjaVirtual
- Allows capturing the customer-installed vSphere configuration and projecting the VM density increase using HP StoreServ

Learning check
1. Which 3PAR assessment tools are available for HP partners?
a. NinjaSTARS
b. NinjaThin
c. NinjaOptimize
d. NinjaVirtual
e. NinjaSystems

Learning check answer

1. Which 3PAR assessment tools are available for HP partners?
a. NinjaSTARS (correct)
b. NinjaThin (correct)
c. NinjaOptimize
d. NinjaVirtual (correct)
e. NinjaSystems

Thank you

www.hp.com/go/3PARstoreserv