
HP BladeSystem c-Class 10GbE Network Adapter NDA

April 2008

© 2007 Hewlett-Packard Development Company, L.P.
The information contained herein is subject to change without notice.
Why BladeSystem c-Class Networking
Multiple enclosure options
Highly flexible c7000 enclosure: eight interconnect modules, up to four redundant I/O fabrics
Lower cost c3000 enclosure: supports the same interconnects and mezzanine cards as the c7000
Wide selection of Ethernet switch and pass-thru interconnects
Innovative Virtual Connect technology, creating an agile, flexible, change-ready infrastructure
Greatest network port density available in the industry: up to 1.3 Tb/s full duplex NIC bandwidth per c7000 enclosure
Most robust blade network adapter portfolio

HP Confidential, NDA Required


HP NC512m 10GbE Overview
Dual-port KX-4 10GbE for ProLiant c-Class
One adapter per server
Up to 32 10GbE ports per c7000, 16 per c3000
Mezzanine slot 2 or 3 (full-height server) and slot 2 (half-height server)
x8 PCI Express
Linux TCP/IP offload engine (TOE)
NetXen NX2031 controller
Compatible with the new 10Gb Ethernet Switch
Double-wide switch with 10GbE downlinks and uplinks
Complete solution for 10GbE connectivity to the server
NC512m Linux TCP/IP Offload Engine
Based on NetXen's Linux Socket Acceleration (LSA)
Firmware-based, installed like a loadable driver
No kernel recompile or kernel patches required
Supported on both Red Hat and SUSE
Consistent implementation across ProLiant blade, rack, and tower 10GbE NIC offerings
Includes NetXen's Selective Acceleration
Command line utility to control which applications are accelerated (i.e., which applications use LSA)
HP NC522m 10GbE Overview
Dual-port KR 10GbE for ProLiant c-Class
10GbE KR technology with next-generation NetXen NX3031 controller
Supports 10,000/1,000 Mbps speeds1
Supports single-wide interconnects
Supported on all ProLiant c-Class servers with Type II mezzanine slots
Up to two adapters per full-height server, one per half-height server
TCP/IP offload engine (TOE) and accelerated iSCSI capable
Interconnect compatibility:
A new 10GbE Virtual Connect module2 (target 4Q08 availability)
Currently shipping Virtual Connect, switch, and pass-thru modules with 1Gb downlinks3

1. Both ports across the server backplane must be at the same speed.
2. Available speeds across the enclosure backplane to the interconnect are 20Gb/s full duplex or 2Gb/s full duplex.
3. Available speed across the enclosure backplane to the interconnect is 2Gb/s full duplex.
Why 10Gb Ethernet
The use of faster multi-core processors is increasing bandwidth requirements per server
Customers in a broad range of advanced computing environments are working with ever more complex data sets requiring 10Gb bandwidth:
High performance computing (HPC)
Database clusters, iSCSI, and storage backups
Grid systems and virtualization
Server, I/O, and fabric consolidation
VoIP and VOD
As the cost delta between 1Gb and 10Gb narrows, it becomes cost effective to use 10GbE switch uplinks to reduce cabling
HP BladeSystem Solution Builder Program
The HP BladeSystem Solution Builder Program is designed to enable vendors to develop and build add-on components compatible with BladeSystem
HP is currently working closely with several vendors to provide c-Class compatible 10GbE mezzanine adapters
These will be sold and supported by the vendors
Available Ethernet adapters:
Chelsio S320EM-BS c-Class 10GbE-KX4 mezzanine card (new)
ServerEngines BEM3H10 c-Class 10GbE-KX4 mezzanine card (new)
Sold and supported through Chelsio and ServerEngines
See your Chelsio and ServerEngines representative for details
ServerEngines BEM3H10 Dual-Port 10GbE Mezzanine Card for HP BladeSystem c-Class (new)
Dual-port 10GbE KX-4 x8 PCIe Type I card
Full 10GbE Ethernet, TOE, and iSCSI offload
iSCSI initiator throughput: 2000 MB/s, 130K IOPS
Supported on c-Class BL460c, BL480c, BL685c
Win2K3, RHEL 4 & 5, SLES 9 & 10
VMware ESX 3.5 NIC/iSCSI (targeted Apr08)
OS-agnostic Int13 boot, configuration, discovery, reporting, and management
Bare-metal provisioning
32-port I/O virtualization
Bandwidth allocation and QoS for virtual machines
Power (max/typical): 9W/7W
Sold and supported through ServerEngines
10GbE c-Class Mezzanine Hardware Feature Summary (key differences in bold)

Model | HP NC522m1 | HP NC512m
Availability | 4Q08 target | Shipping
Part number | 467801-B21 | 440910-B21
10GbE protocol | IEEE 10GBASE-KR | IEEE 10GBASE-KX4
Network controller | NetXen NX3031 | NetXen NX2031
Bus | x8 PCI Express v2.0 (Gen2) | x8 PCI Express 1.1
Ports (Mbps) | 2 @ 10,000 or 1,000 speed negotiation, full duplex across enclosure backplane2 | 2 @ 10,000, full duplex across enclosure backplane
Server support | All ProLiant c-Class with Type II slots | All ProLiant c-Class with Type II slots
Form factor | Type II card | Type II card
Supported interconnects | New 10GbE KR Virtual Connect; existing interconnects with 1Gb downlinks and 1GbE pass-thru | 10Gb KX4 Ethernet Switch
Maximum cards per server | 2 per FH server, 1 per HH server | 1 per FH server, 1 per HH server
Supported enclosures | c3000 and c7000 | c3000 and c7000
Server memory requirements | 2GB minimum, server's capacity maximum | 4GB minimum, 32GB max (Windows), 64GB max (Linux)

1. Planned feature set at release, subject to change.
2. Both ports across the server backplane must be at the same speed.
10GbE c-Class Mezzanine Software Feature Summary (key differences in bold)

Model | HP NC522m1 | HP NC512m
Availability | 4Q08 target | Shipping
Teaming/Bonding | Linux | Linux
PXE
802.1Q VLANs
802.1p QoS
TOE Capable | Linux
Accelerated iSCSI Capable
RDMA
iSCSI Boot Capable
RSS
VMware ESX Certification | TBD | 3.5, 3i
Jumbo Frames | 9K industry standard | 8K
802.3ad Link aggregation
802.3x Flow control
TCP checksum offloads
Interrupt coalescence
OS Support | RH 4 & 5, SUSE 9 & 10, Win 2003 | RH 4 & 5, SUSE 9 & 10, Win 2003

1. Planned feature set at release, subject to change.
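The TCP checksum offload listed above moves the Internet checksum computation from the host CPU onto the NIC. As an illustration of what gets offloaded, here is a minimal sketch of the 16-bit one's-complement checksum (RFC 1071 arithmetic) in Python; this is illustrative, not HP or NetXen code:

```python
def internet_checksum(data: bytes) -> int:
    """16-bit one's-complement checksum used by TCP/IP (RFC 1071).
    This is the arithmetic a checksum-offload NIC performs in hardware."""
    if len(data) % 2:                 # pad odd-length data with a zero byte
        data += b"\x00"
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]      # 16-bit big-endian words
        total = (total & 0xFFFF) + (total >> 16)   # fold carry back in
    return ~total & 0xFFFF
```

Appending the checksum to the data and recomputing yields zero, which is how a receiving NIC validates a segment without host CPU involvement.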
For more HP information in your region

Sales issues: product availability, special pricing, qualifying customers, order entry, configuration, or status issues
Presales support issues: technology or supported configuration questions, planning and deployment scenarios, infrastructure requirements

Region | Contact | Email | Location
Americas | Harry Levine | harry.levine@hp.com | Austin, USA
EMEA | Barbara Hallmans | barbara.hallmans@hp.com | Dornach-Aschheim, DE
AP | Arun Natarajan | arun.natarajan@hp.com | Singapore, SG
Japan | Shingo Yamanaka | shingo.yamanaka@hp.com | Tokyo, JP
Back-up
c7000 Enclosure I/O Bays
8 interconnect bays
Multiple redundant I/O fabrics: Ethernet, Fibre Channel, IB
16:1 cable reduction per Ethernet switch
Greatest LAN density available in a blade system
Up to 128 1GbE adapter ports
Up to 32 10GbE adapter ports
c3000 Enclosure I/O Bays
Upcoming new enclosure
4 interconnect bays
Multiple redundant I/O fabrics: Ethernet, Fibre Channel, IB
Accepts same interconnects as c7000 enclosure
16:1 cable reduction per Ethernet switch
Up to 64 1GbE adapter ports
Up to 16 10GbE adapter ports
10GbE Network Adapter Short Haul Standard Comparison

Standard | Media | Distance | Availability | Benefits | Drawbacks

Incumbents:
10GBASE-LX4 | Fiber (MMF or SMF) | 10km (SMF), 300m (MMF) | Now | Distance; can be used for more than short haul | Highest cost; not compatible w/ 1Gb; limited NIC offering
10GBASE-SR | Fiber (MMF) | 82m (MMF) | Now | Lower cost fiber; distance versus copper; 1GbE MMF usage | Much more expensive than copper; not compatible w/ 1Gb
10GBASE-CX4 | Twin-axial copper | 15m | Now | Lowest cost of current standards; only currently available copper standard | Not compatible w/ 1Gb; IB-like cables; distance, cost, bending; more expensive than Base-T long term

Emerging:
10GBASE-LRM | Fiber | 300m (MMF) | TBD | Lower cost than SR, but distance of LX4 | 10GBASE-SR installed base
10GBASE-T | Twisted-pair copper (RJ-45) | 100m (CAT6A) | TBD | 1GbE compatible; Base-T installed base; lowest cost long term | 10GBASE-CX4 installed base
ProLiant Networking Alternatives
Customers require various interconnects for their applications and environments:
Ethernet is the best standards-based choice for cost-driven, pervasive networking requirements
InfiniBand is the best standards-based choice for highest performance networking requirements
Proprietary interconnects (Myrinet, QsNet) remain viable alternatives for specific solutions
10Gb Ethernet or InfiniBand?

Ethernet
Strengths: 10Gb Ethernet is a pervasive, mature foundation; low TCO; fiber media; extends beyond the datacenter; manageable cables
Weaknesses: multifunction not prime time; performance (CPU and memory utilization, latency); cost (CX4 is currently the only shipping copper standard; Base-T not yet available)
Outlook: Ethernet is the default choice for most applications; overcoming weaknesses (IP offload, RDMA, iSCSI); Base-T solutions expected 2009; cost beginning to drop significantly (30%+/yr), but not expected to match InfiniBand price/performance

InfiniBand
Strengths: proven unified fabric (SAN, LAN, and cluster integration); MS and Oracle support; price/performance; lowest latency; most affordable 10Gb; 4x DDR now available
Weaknesses: added fabric; copper media only; limited distance; bulky, expensive cables
Outlook: ideal for applications where TCO advantages outweigh the introduction of a new fabric; continued bandwidth improvements; will remain price/performance leader over Ethernet for the foreseeable future
Receive Side Scaling (RSS)
Spreads incoming connections across the CPUs within a server
Overcomes the single-CPU bottleneck
Works well in applications with lots of short-lived connections, where TOE doesn't work well
[Diagram: NIC w/ RSS distributing TCP/IP and NDIS processing across CPU 0 through CPU 3]
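The dispatch step RSS performs can be sketched as follows: a hash of each connection's 4-tuple indexes an indirection table that selects the CPU whose receive queue gets the packet. Real NICs use a keyed Toeplitz hash; the `crc32` here is a stand-in for illustration only:

```python
import zlib

NUM_CPUS = 4
# Indirection table maps hash buckets to CPUs. Hardware keeps a similar
# table; the OS can rewrite it to rebalance load without breaking flows.
INDIRECTION = [i % NUM_CPUS for i in range(128)]

def rss_cpu(src_ip: str, src_port: int, dst_ip: str, dst_port: int) -> int:
    """Pick the CPU that will process this flow. The same 4-tuple always
    hashes to the same CPU, so one connection's packets stay in order
    on one core while different connections spread across all cores."""
    flow = f"{src_ip}:{src_port}->{dst_ip}:{dst_port}".encode()
    bucket = zlib.crc32(flow) % len(INDIRECTION)
    return INDIRECTION[bucket]
```

This is why RSS shines with many short-lived connections: each new flow lands on some core, and the aggregate load spreads, without any per-connection state on the NIC beyond the hash.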
TCP/IP Offload Engine (TOE)
TCP/IP processing moved from the host CPU to the TOE NIC (TNIC)
Improves performance:
Reduces CPU utilization for segmentation and reassembly
Reduces interrupts and context switches
Allows for zero-copy receives to kernel memory buffers
[Diagram: application traffic taking an offload path past the kernel TCP/IP stack, through the multifunction NIC, out to the switch and network]
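Segmentation, one of the tasks the slide says TOE removes from the host CPU, is the splitting of an application send buffer into MSS-sized TCP segments. A minimal sketch of that per-send work (illustrative only, not HP or NetXen firmware):

```python
def segment(payload: bytes, mss: int = 1460) -> list:
    """Split an application send buffer into MSS-sized chunks.
    Without TOE the host CPU does this (plus checksums) on every send;
    a TOE NIC accepts the whole buffer and performs the split in hardware."""
    return [payload[i:i + mss] for i in range(0, len(payload), mss)]

segments = segment(b"x" * 4000)   # a 4000-byte send buffer
# splits into 1460 + 1460 + 1080 byte segments
```

Handing the NIC one 4000-byte buffer instead of three separate segments is also where the interrupt and context-switch savings on the slide come from.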
Accelerated iSCSI: Traditional SCSI over Ethernet Networks (TCP/IP)
Direct-attached block storage using SCSI/SAS versus remote block storage using iSCSI
[Diagram: on the initiator, File System → Disk Driver → SCSI → iSCSI (MP) → TCP/IP → Ethernet through a multifunction NIC, to the matching iSCSI/SCSI stack on the iSCSI target]
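The layering in the diagram, a SCSI command wrapped for transport over TCP/IP, can be illustrated with a deliberately simplified encapsulation. This is not the real iSCSI PDU wire format (real iSCSI uses a 48-byte basic header segment); the toy header fields below are invented for illustration:

```python
import struct

OPCODE_SCSI_CMD = 0x01

def wrap_scsi_command(cdb: bytes, lun: int) -> bytes:
    """Encapsulate a SCSI CDB for transport over TCP -- the job the iSCSI
    layer performs. Toy header: opcode, LUN, and CDB length only."""
    header = struct.pack("!BBH", OPCODE_SCSI_CMD, lun, len(cdb))
    return header + cdb            # this byte string becomes TCP payload

def unwrap(pdu: bytes):
    """Target side: parse the header and recover the original CDB."""
    opcode, lun, length = struct.unpack("!BBH", pdu[:4])
    return opcode, lun, pdu[4:4 + length]

# A 10-byte READ(10) CDB (opcode 0x28) carried to the target over TCP:
read10 = bytes([0x28, 0, 0, 0, 0, 8, 0, 0, 1, 0])
pdu = wrap_scsi_command(read10, lun=0)
```

Accelerated iSCSI moves this wrapping, plus the TCP/IP processing beneath it, onto the multifunction NIC, so the host sees the remote disk through the same SCSI layer it uses for direct-attached storage.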
Closing Slide
