April 2008
NC512m Linux TCP/IP Offload Engine
Based on NetXen's Linux Socket Acceleration (LSA)
Firmware-based; installed like a loadable driver
No kernel recompile or kernel patches required
Supported on both Red Hat and SUSE
Consistent implementation across ProLiant blade, rack, and tower 10GbE NIC offerings
Includes NetXen's Selective Acceleration
Command-line utility to control which applications are accelerated (i.e., which applications use LSA)
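Purely as a hypothetical illustration of what a per-application acceleration policy looks like, the sketch below loads an allowlist and decides whether a given process should use the offloaded socket path. The file path, file format, and function names are invented for this sketch; the actual NetXen utility's name and syntax are not reproduced here.

```python
import os

# Hypothetical illustration of "selective acceleration": a command-line
# utility maintains a list of applications that should use the offloaded
# (LSA) socket path. The path and matching policy below are invented for
# this sketch and do not reflect the real tool's syntax.

ACCEL_LIST = "/etc/lsa_accelerated_apps.conf"   # hypothetical path

def load_accelerated_apps(path: str = ACCEL_LIST) -> set[str]:
    """Read one application name per line, ignoring blanks and comments."""
    try:
        with open(path) as f:
            lines = (ln.strip() for ln in f)
            return {ln for ln in lines if ln and not ln.startswith("#")}
    except FileNotFoundError:
        return set()

def should_accelerate(app_name: str) -> bool:
    """Decide whether this process's sockets take the accelerated path."""
    return app_name in load_accelerated_apps()

if __name__ == "__main__":
    # Unlisted applications simply fall back to the normal kernel stack.
    print(should_accelerate(os.path.basename("/usr/sbin/httpd")))
```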
HP NC522m 10GbE Overview
Dual-port KR 10GbE for ProLiant c-Class
10GbE KR technology with the next-generation NetXen NX3031 controller
Supports 10,000/1,000 Mbps speeds1
Supports single-wide interconnects
Supported on all ProLiant c-Class servers with Type II mezzanine slots
Up to two adapters per full-height server, one per half-height server
TCP/IP offload engine (TOE) and accelerated iSCSI capable
Interconnect compatibility:
A new 10GbE Virtual Connect module2 (target 4Q08 availability)
Currently shipping Virtual Connect, switch, and pass-thru modules with 1Gb downlinks3
HP BladeSystem Solution Builder Program
The HP BladeSystem Solution Builder Program is designed to enable vendors to develop and build add-on components compatible with BladeSystem
HP is currently working closely with several vendors to provide c-Class compatible 10GbE mezzanine adapters
These will be sold and supported by the vendors
Available Ethernet adapters:
Chelsio S320EM-BS c-Class 10GbE-KX4 mezzanine card (new)
ServerEngines BEM3H10 c-Class 10GbE-KX4 mezzanine card (new)
Sold and supported through Chelsio and ServerEngines
See your Chelsio or ServerEngines representative for details
ServerEngines BEM3H10 Dual-Port 10GbE Mezzanine Card for HP BladeSystem c-Class
Dual-port 10GbE KX-4 x8 PCIe Type I card (new)
Full 10GbE Ethernet, TOE, and iSCSI offload
iSCSI initiator throughput: 2,000 MB/s, 130K IOPS
1. Planned feature set at release subject to change.
For more HP information in your region
Sales issues: product availability, special pricing, qualifying customers, and order entry, configuration, or status issues
Presales support issues: technology or supported configuration questions, planning and deployment scenarios, and infrastructure requirements
Back-up
c7000 Enclosure I/O Bays
8 interconnect bays
Multiple redundant I/O fabrics: Ethernet, Fibre Channel, IB
16:1 cable reduction per Ethernet switch
Greatest LAN density available in a blade system
Up to 128 1GbE adapter ports
Up to 32 10GbE adapter ports
c3000 Enclosure I/O Bays
Upcoming new enclosure
4 interconnect bays
Multiple redundant I/O fabrics: Ethernet, Fibre Channel, IB
Accepts the same interconnects as the c7000 enclosure
16:1 cable reduction per Ethernet switch
Up to 64 1GbE adapter ports
Up to 16 10GbE adapter ports
10GbE Network Adapter Short Haul Standard Comparison

Standard    | Media              | Distance               | Product Availability | Benefits                                       | Drawbacks
10GBASE-LX4 | Fiber (MMF or SMF) | 10km (SMF), 300m (MMF) | Now                  | Distance; can be used for more than short haul | Highest cost; not compatible w/ 1Gb; limited NIC offering
10GBASE-LRM | Fiber              | 300m (MMF)             | TBD                  | Lower cost fiber, but distance of LX4          | Incumbent 10GBASE-SR installed base
ProLiant Networking Alternatives
Customers require various interconnects for their applications and environments:
Ethernet is the best standards-based choice for cost-driven, pervasive networking requirements
InfiniBand is the best standards-based choice for highest-performance networking requirements
Proprietary interconnects (Myrinet, QsNet) remain viable alternatives for specific solutions
(Figure: interconnect fabrics grouped as incumbents vs. emerging)
10Gb Ethernet or InfiniBand?

Fabric: InfiniBand
Strengths: Proven unified fabric (SAN, LAN, and cluster integration); price/performance; lowest latency; most affordable 10Gb; 4x DDR now available
Weaknesses: Added fabric; MS and Oracle support lagging; copper media only; limited distance; bulky, expensive cables
Outlook: Ideal for applications where TCO advantages outweigh the introduction of a new fabric; continued bandwidth improvements; will remain the price/performance leader over Ethernet for the foreseeable future
Receive Side Scaling (RSS)
Spreads incoming connections across the CPUs within a server
(Diagram: incoming connections distributed across CPU 0 through CPU 3)
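As a minimal sketch of the mechanism, the code below hashes each connection's 4-tuple through an indirection table to pick a servicing CPU, which is how RSS keeps one flow on one CPU while spreading different flows across all of them. Real adapters compute a Toeplitz hash in hardware with a per-adapter key; the generic hash, table size, and CPU count here are illustrative assumptions.

```python
import hashlib
import socket
import struct

NUM_CPUS = 4
# Indirection table: hash bucket -> CPU. Drivers typically spread buckets
# round-robin across the CPUs that service receive queues.
INDIRECTION_TABLE = [i % NUM_CPUS for i in range(128)]

def rss_cpu(src_ip: str, dst_ip: str, src_port: int, dst_port: int) -> int:
    """Pick the CPU that will service this connection's receive traffic."""
    # Stand-in hash: actual RSS hardware uses the Toeplitz hash, but any
    # uniform hash over the 4-tuple demonstrates the idea.
    key = socket.inet_aton(src_ip) + socket.inet_aton(dst_ip) + \
          struct.pack("!HH", src_port, dst_port)
    h = int.from_bytes(hashlib.sha256(key).digest()[:4], "big")
    return INDIRECTION_TABLE[h % len(INDIRECTION_TABLE)]

# All packets of one connection hash identically, so a flow stays on one
# CPU while distinct flows land on different CPUs.
print(rss_cpu("10.0.0.1", "10.0.0.2", 49152, 80))
print(rss_cpu("10.0.0.3", "10.0.0.2", 49153, 80))
```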
TCP/IP Offload Engine (TOE)
TCP/IP processing is moved from the host CPU to the TOE NIC (TNIC)
Improves performance:
Reduces CPU utilization for segmentation and reassembly
Reduces interrupts and context switches
Allows for zero-copy receives to kernel memory buffers
(Diagram: networking application and kernel on the host, with a TCP/IP offload path from the device driver to a multifunction NIC, alongside the conventional device driver and NIC path to the switch and network)
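One practical consequence of offload is that it sits below the sockets API, so an application such as the minimal echo server sketched here runs unchanged whether TCP is processed by the host kernel or by a TNIC. The port number is arbitrary; the comments mark the work that a TOE moves into the adapter.

```python
import socket

# A plain, unmodified sockets application: the same code benefits from TOE
# transparently. With offload, the TNIC performs segmentation, reassembly,
# ACK generation, and checksumming, so the host takes fewer interrupts and
# context switches per connection.

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
srv.bind(("0.0.0.0", 5001))   # arbitrary illustrative port
srv.listen(16)

while True:
    conn, peer = srv.accept()            # connection handling: offloadable
    with conn:
        while data := conn.recv(65536):  # reassembled stream arrives in bulk
            conn.sendall(data)           # segmentation happens below the API
```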
Accelerated iSCSI over Ethernet Networks (TCP/IP)
(Diagram: traditional SCSI, i.e. direct-attached block storage using SCSI/SAS, compared with remote block storage using iSCSI, where a multifunction NIC carries SCSI over TCP/IP over Ethernet)
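To make the layering concrete, the sketch below wraps a SCSI READ(10) command descriptor block in a simplified, toy PDU header of the kind an iSCSI initiator sends over TCP. The header layout is invented for illustration and is not the RFC 3720 wire format; only the CDB encoding follows the SCSI command layout.

```python
import struct

# Conceptual sketch only -- NOT the real iSCSI wire format. It shows the
# layering the diagram describes: a SCSI command (CDB) is wrapped in an
# iSCSI-style PDU and carried over TCP/IP, so block storage can ride an
# ordinary Ethernet network instead of a local SCSI/SAS bus.

READ_10 = 0x28  # SCSI READ(10) opcode

def scsi_read10_cdb(lba: int, blocks: int) -> bytes:
    """Build a 10-byte SCSI READ(10) CDB: opcode, flags, LBA, group,
    transfer length, control."""
    return struct.pack(">BBIBHB", READ_10, 0, lba, 0, blocks, 0)

def toy_iscsi_pdu(cdb: bytes, task_tag: int) -> bytes:
    """Wrap a CDB in a simplified, illustrative header (not RFC 3720)."""
    header = struct.pack(">BxHI", 0x01, len(cdb), task_tag)  # op, len, tag
    return header + cdb

pdu = toy_iscsi_pdu(scsi_read10_cdb(lba=2048, blocks=8), task_tag=1)
# An initiator would write `pdu` to a TCP socket connected to the target's
# iSCSI port (3260 by convention); an accelerated NIC performs this
# encapsulation and the TCP processing in hardware.
print(pdu.hex())
```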