
Modernize your
Datacenter with Flexible
and Scalable Storage
Networks

PURPOSE OF STORAGE?

One Main Job:
"Give me back the correct bit I asked you to hold on to for me."

Everything we do in storage (including storage networking) is based around completing that job safely, securely, reliably, and without error.

IT ENVIRONMENTS EXPERIENCING
ACCELERATING TRANSITIONS

Infrastructure Agility Needed to Enable Greater Speed of Business


INDUSTRY DYNAMICS
Internet of Things
Cloud
Analytics
Storage Growth: 10x growth in information created by 2020

DATA CENTER TRENDS
Server Virtualization
Integrated Infrastructure
Software-Defined Infrastructure
Hyperconverged
Increased Flash Usage

[Chart figures from the slide: 6% and 14% CAGR (2015-18); 40%, 65%, and 69% growth by 2017; 69% CAGR (2014-19); 85% by 2018; 58%; 7x SSD growth by 2018]

EFFECT ON STORAGE INFRASTRUCTURE
Higher demand on multiprotocol storage

Sources: Cisco Visual Networking Index (VNI), Cisco Cloud Index, IDC, Gartner, IDC WW Integrated Systems Forecast 2014-2018 (Nov 14), IDC WW Hyperconverged Systems 2015-2019 Forecast (April 15)

EMC AND CISCO BUILDING BETTER SOLUTIONS

FILE, OBJECT, iSCSI, HYPERCONVERGED, FIBRE CHANNEL

CISCO'S FAMILY OF STORAGE NETWORKING SOLUTIONS

LAN/SAN: Nexus
SAN: Connectrix MDS
COMPUTE: UCS

CISCO MULTI-PROTOCOL PORTFOLIO

SAN, LAN & COMPUTE

LAN/SAN: Cisco Nexus 9000, Nexus 7000, Nexus 5600 (including the Nexus 5672UP-16G), Nexus 5500, Nexus 3000, Nexus 2000 (including the Nexus 2348UPQ)

SAN: EMC Connectrix MDS 9718, MDS 9706/9710, MDS 9396S, MDS 9148S, MDS 9250i

COMPUTE: Cisco UCS 6300 Series FI, UCS 6200 Series FI, UCS B-Series Blade Servers, UCS C-Series Rack Servers

[Slide port callouts: 48 x 16G line-rate FC; 24 x 40G line-rate FCoE; 48 x 10G line-rate FCoE]

Common OS, Common Management

EMC CONNECTRIX MDS 9718 DIRECTOR

BUILD HIGH-PERFORMANCE, HIGH-DENSITY NETWORKS
1.536 Tbps per-slot switching performance
Up to 768 line-rate ports per chassis

REDUCE COMPLEXITY, ACCELERATE MANAGEMENT
Programmability and automation with the RESTful NX-API
Power On Auto Provisioning automates configuration

SAVE DOLLARS, INVEST IN THE FUTURE
Same line cards, NX-OS, and power supplies across all MDS 9700 Directors
32G-ready for 768 line-rate 32G FC ports

Higher Speed, Higher Density, More Flexible SAN Switching

EMC CONNECTRIX MDS 9718 USE CASE: SWITCH CONSOLIDATION

CORE-EDGE DESIGN (64 target ports and 704 server ports per fabric)
6 x 384-port directors
256 ISL ports
1792 ports deployed

COLLAPSED CORE DESIGN (64 target ports and 704 server ports per fabric)
2 x 768-port directors
0 ISL ports
1536 ports deployed

BENEFITS:
REDUCED MANAGEMENT: fewer switches to manage; no ISLs to manage
REDUCED POWER: fewer switches; fewer ports deployed (no ports used by ISLs)
REDUCED CABLING: elimination of ISL cables
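The port totals above can be reproduced with simple arithmetic (a sketch of my own, assuming two redundant fabrics and the slide's per-fabric counts, so 256 ISL ports total means 128 per fabric):

```python
def ports_deployed(server, target, isl_per_fabric, fabrics=2):
    # Switch ports consumed across all redundant fabrics.
    return fabrics * (server + target + isl_per_fabric)

# Core-edge design: 256 ISL ports total across two fabrics.
core_edge = ports_deployed(server=704, target=64, isl_per_fabric=128)
# Collapsed core on 768-port directors: no ISLs needed.
collapsed = ports_deployed(server=704, target=64, isl_per_fabric=0)
print(core_edge, collapsed)  # 1792 1536, matching the slide
```

The 256-port difference is exactly the ISL ports the collapsed core eliminates.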

EMC CONNECTRIX MDS 9718 USE CASE: SCALED GROWTH

EDGE-CORE-EDGE DESIGN (384 target ports and 2688 host ports per fabric)
26 x 384-port directors
3072 ISL ports
9216 ports deployed
Core at maximum capacity

CORE-EDGE DESIGN (384 target ports and 2688 host ports per fabric)
4 x 768-port directors, 16 x 384-port directors
1536 ISL ports
7680 ports deployed
Core can continue to grow storage and edges

BENEFITS:
REDUCED OPEX: fewer switches to manage; fewer ISLs to manage
REDUCED CAPEX: fewer switches deployed; fewer ports deployed (fewer ISLs)
REDUCED CABLING: elimination of ISL ports/cables
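The same per-fabric accounting (again a sketch of my own, under the same two-fabric assumption) reproduces the scaled-growth totals:

```python
def ports_deployed(host, target, isl_per_fabric, fabrics=2):
    # Switch ports consumed across both redundant fabrics.
    return fabrics * (host + target + isl_per_fabric)

# Edge-core-edge: 3072 ISL ports total -> 1536 per fabric.
edge_core_edge = ports_deployed(host=2688, target=384, isl_per_fabric=1536)
# Core-edge with 768-port cores: 1536 ISL ports total -> 768 per fabric.
core_edge = ports_deployed(host=2688, target=384, isl_per_fabric=768)
print(edge_core_edge, core_edge)  # 9216 7680, matching the slide
```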

EMC CONNECTRIX MDS 9700 MODULE

40GE ISLs for higher performance
194% higher ISL bandwidth compared to 16G FC
47% higher ISL bandwidth compared to 32G FC
Reduced cost: BiDi optics use existing OM3 or OM4 cabling
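The percentages follow from effective data rates rather than nominal speeds. A quick check, assuming the commonly cited usable rates (16G FC carries roughly 13.6 Gbps and 32G FC roughly 27.2 Gbps of payload after line coding, while 40GE carries a full 40 Gbps):

```python
rate_16gfc = 13.6   # assumed usable Gbps for 16G FC
rate_32gfc = 27.2   # assumed usable Gbps for 32G FC
rate_40ge = 40.0    # usable Gbps for 40G Ethernet

gain_vs_16g = (rate_40ge / rate_16gfc - 1) * 100
gain_vs_32g = (rate_40ge / rate_32gfc - 1) * 100
print(f"{gain_vs_16g:.0f}% higher ISL bandwidth than 16G FC")  # 194%
print(f"{gain_vs_32g:.0f}% higher ISL bandwidth than 32G FC")  # 47%
```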

CISCO 40G QSFP BIDI TRANSCEIVERS

UTILIZING EXISTING DUPLEX FIBER

LC cable with SFP+ (duplex LC): each fiber strand carries 10/16 Gbps
OM3 MMF: 300m (10G), 100m (16G); OM4 MMF: 400m (10G), 125m (16G)

MPO-12 cable with 40G QSFP-SR4 (12-fiber MPO): 8 fiber strands carry 10 Gbps each, 4 strands unused
OM3 MMF: 100m; OM4 MMF: 150m

LC cable with 40G QSFP-BiDi (duplex LC): duplex 20 Gbps per strand (receive and transmit on two different wavelengths)
OM3 MMF: 100m; OM4 MMF: 150m

No Need to Upgrade the Fiber Plant

40GE ISL USE CASE: ISL CONSOLIDATION

16G FC ISLs (64 target ports and 704 server ports per fabric)
6 x 384-port directors
256 ISL ports
1792 ports deployed

40GE FCoE ISLs (64 target ports and 704 server ports per fabric)
6 x 384-port directors
96 ISL ports
1632 ports deployed

BENEFITS:
REDUCED MANAGEMENT: fewer switches to manage
REUSE CABLING: BiDi optics allow use of existing LC cabling
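Under the same assumed usable rate for 16G FC (roughly 13.6 Gbps per port), the consolidated 40GE ISLs deliver more aggregate bandwidth with far fewer ports (my own arithmetic, not from the slide):

```python
fc16_ports, fc16_rate = 256, 13.6   # 16G FC ISL ports, assumed usable Gbps each
ge40_ports, ge40_rate = 96, 40.0    # 40GE FCoE ISL ports, usable Gbps each

fc16_bw = fc16_ports * fc16_rate    # aggregate 16G FC ISL bandwidth
ge40_bw = ge40_ports * ge40_rate    # aggregate 40GE ISL bandwidth
ports_saved = fc16_ports - ge40_ports
print(f"{ports_saved} fewer ISL ports")  # 160 fewer ISL ports
print(f"{ge40_bw - fc16_bw:.1f} Gbps more aggregate ISL bandwidth")
```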

40GE ISL USE CASES: CONVERGED NETWORKS TO FIBRE CHANNEL SAN

Nexus Fixed/Modular: 10GE FCoE server links, 40GE FCoE uplinks
Nexus with FEX: 10GE FCoE server links, 40GE FCoE uplinks
UCS with 6332 Fabric Interconnect: 40GE FCoE uplinks

CISCO NEXUS 5672UP-16G

ALL FEATURES OF NEXUS 5600 AND MORE

Flexible: traditional Ethernet plus storage (File, iSCSI, FCoE and FC)
24 fixed 1/10G SFP+ ports
24 Unified Ports provide 2/4/8/16G FC or 10G Ethernet/FCoE
6 x 40G QSFP+ ports, with flexibility to use 4x10G or 40G
Reduced cost: deploy once, implement solutions as needed

Enhanced 5672UP for SAN

CISCO NEXUS 5672UP-16 USE CASES

Ethernet and FC Connectivity: 40GE uplinks to the Ethernet LAN, 8/16G FC uplinks to the FC SAN, 10GE server access

Converged Access to FC and IP Storage: 40GE uplinks to a Nexus spine, with 16G FC and 10GE FCoE connectivity to FC and IP storage

CISCO NEXUS 2348UPQ FABRIC EXTENDER

Flexible: deploy ports as required, LAN or SAN
Unified Ports provide 2/4/8/16G FC or 10G Ethernet/FCoE
Up to 24 x 16G FC ports
Up to 48 x 2/4/8G FC ports
6 x 40G QSFP+ ports, with flexibility to use 4 x 10G or 40G
Reduced cost: lower cost than an Ethernet switch
Reduced management: configuration and OS are handled on the parent switch

First FEX Solution Designed for All Storage Connectivity

CISCO NEXUS 2348UPQ USE CASE: LAN/SAN ACCESS CONVERGENCE

TRADITIONAL RACK ARCHITECTURE
2 x TOR LAN switches and 2 x TOR SAN switches (separate links to LAN and to SAN) serving rack mount servers

UP FEX RACK ARCHITECTURE
2 x LAN/SAN 2348UPQ FEX (converged links to LAN/SAN) serving rack mount servers

NEW APPLICATION DESIGNS ARE EMERGING

IP STORAGE TRENDS TRACKING THESE CHANGES

DESIGN CONSIDERATIONS FOR IP SANS

BANDWIDTH
Resource sharing among multiple application workloads

LATENCY
Low-latency, high-throughput networks for sensitive workloads

SCALABILITY
Meet the increasing demands of network and data without compromising fan-out and performance

HIGH AVAILABILITY
Hardware and software redundancy needed at the server, fabric, and storage levels

DATA CENTER BANDWIDTH REQUIREMENTS

Workloads are increasing demand on networks:
Increased virtualization
Hyper-convergence

Future technologies will drive requirements further:
NVMe over Fabrics
RDMA (RoCE, iWARP)

Image courtesy of the Ethernet Alliance: http://www.ethernetalliance.org/wp-content/uploads/2015/03/Front-of-Map-04-28-15.jpg

BANDWIDTH = BETTER EFFICIENCY

HIGHER SPEED LINKS IMPROVE MULTI-PATH EFFICIENCY

20 x 10Gbps uplinks, 11 x 10Gbps flows (55% load): probability of 100% throughput = 3.27%
2 x 100Gbps uplinks, 11 x 10Gbps flows (55% load): probability of 100% throughput = 99.95%

On the Data Path Performance of Leaf-Spine Datacenter Fabrics, M. Alizadeh, T. Edsall: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6627738

Higher speed links improve multi-path efficiency
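The 3.27% figure can be reproduced with a birthday-problem calculation: with ECMP hashing, full throughput over 20 x 10G uplinks requires all eleven 10G flows to land on distinct links. A minimal sketch of my own (not the paper's code); for the 2 x 100G case, where a link only saturates if more than ten flows hash onto it, this simple model gives roughly 99.9%, in line with the slide's 99.95%:

```python
# 20 x 10G uplinks: a 10G link can carry only one 10G flow at full rate,
# so 100% throughput means no two flows hash to the same link.
links, flows = 20, 11
p_no_collision = 1.0
for k in range(flows):
    p_no_collision *= (links - k) / links
print(f"20x10G: P(100% throughput) = {p_no_collision:.2%}")  # 3.27%

# 2 x 100G uplinks: each link absorbs up to ten 10G flows, so
# throughput only drops if all 11 flows hash to the same link.
p_overload = 2 * (0.5 ** flows)
print(f"2x100G: P(100% throughput) = {1 - p_overload:.2%}")
```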

LATENCY

NETWORK DESIGN CONSIDERATIONS FOR LATENCY

3-Tier Network (L3 core, L2/L3 aggregation, L2 access)
Spine-Leaf Network

FLEXIBLE SCALABILITY

Future data centers need to be flexible:
Grow vertically (racks)
Grow horizontally (rows)
Scale performance

Solution architectures and infrastructure need to be designed to meet these needs from the start.

TYPICAL HIGH AVAILABILITY

Typical hierarchical design: dual networks for redundancy
If a Network A path breaks, all traffic fails over to Network B
If a core switch goes down, the entire Network A goes down
The impact of a failure is high, so there is a greater requirement for equipment HA

RETHINKING HIGH AVAILABILITY

A spine/leaf topology reduces the impact of failure:
Loss of a path reduces edge switch bandwidth only fractionally
Loss of a spine does not take down the entire fabric; only fractional capacity is lost

Reduces the impact of failure
Reduces dependency on hardware redundancy

INDUSTRY LEADING STORAGE NETWORKING

Connectrix MDS
Nexus

Cisco's industry-leading networking solutions for all storage networking needs

MODERNIZE WITH EMC AND CISCO


INTEGRATED INDUSTRY LEADING SOLUTIONS IN EVERY CATEGORY


Continue the Journey with Cisco at EMC World
Visit Cisco Booth 412
Meet with Cisco experts
See world-class solutions
Hear more Cisco and partner presentations

Contact Us
Mark Allen, mallen@cisco.com
