
InfiniBand

Bart Taylor

What it is
InfiniBand Architecture defines a new interconnect technology for servers that changes the way data centers will be built, deployed and managed. By creating a centralized I/O fabric, InfiniBand Architecture enables greater server performance and design density while creating data center solutions that offer greater reliability and performance scalability. InfiniBand technology is based upon a channel-based switched fabric point-to-point architecture.
--www.infinibandta.org

History
InfiniBand is the result of a merger of two competing designs for an inexpensive high-speed network: Future I/O and Next Generation I/O were combined to form what we know as InfiniBand.
Future I/O was being developed by Compaq, IBM, and HP
Next Generation I/O was being developed by Intel, Microsoft, and Sun Microsystems
The InfiniBand Trade Association maintains the specification

The Basic Idea


High speed, low latency data transport
Bidirectional serial bus
Switched fabric topology
  Several devices communicate at once
Data transferred in packets that together form messages
  Messages are direct memory access (RDMA), channel send/receive, or multicast
Host Channel Adapters (HCAs) are deployed on PCI cards (see the verbs sketch below)
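The HCA and message notes above are easiest to see in code. Below is a minimal sketch, not taken from the presentation, of opening an HCA and registering a buffer with the OpenIB verbs library (libibverbs); the buffer size and the choice of the first device are arbitrary assumptions for illustration.

/* Minimal sketch (assumes libibverbs and at least one HCA are installed):
 * open the first InfiniBand device and register a memory region.
 * Build with: gcc hca_open.c -libverbs */
#include <stdio.h>
#include <stdlib.h>
#include <infiniband/verbs.h>

int main(void)
{
    int num_devices = 0;
    struct ibv_device **devs = ibv_get_device_list(&num_devices);
    if (!devs || num_devices == 0) {
        fprintf(stderr, "no InfiniBand devices found\n");
        return 1;
    }

    /* Open the first HCA and allocate a protection domain. */
    struct ibv_context *ctx = ibv_open_device(devs[0]);
    if (!ctx) return 1;
    struct ibv_pd *pd = ibv_alloc_pd(ctx);

    /* Register a buffer so the HCA can DMA into and out of it directly;
     * this registration is what lets transfers bypass the CPU and kernel. */
    size_t len = 4096;
    void *buf = malloc(len);
    struct ibv_mr *mr = ibv_reg_mr(pd, buf, len,
                                   IBV_ACCESS_LOCAL_WRITE |
                                   IBV_ACCESS_REMOTE_READ |
                                   IBV_ACCESS_REMOTE_WRITE);
    if (!mr) return 1;

    printf("registered %zu bytes, lkey=0x%x rkey=0x%x\n",
           len, (unsigned) mr->lkey, (unsigned) mr->rkey);

    ibv_dereg_mr(mr);
    free(buf);
    ibv_dealloc_pd(pd);
    ibv_close_device(ctx);
    ibv_free_device_list(devs);
    return 0;
}

Registering memory pins it and returns the keys (lkey/rkey) that later send/receive and RDMA work requests refer to.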

Main Features
Low Latency Messaging: < 6 microseconds
Highly Scalable: tens of thousands of nodes
Bandwidth: 3 levels of link performance (worked through in the sketch below)
  2.5 Gbps
  10 Gbps
  30 Gbps
Allows multiple fabrics on a single cable
  Up to 8 virtual lanes per link
  No interdependency between different traffic flows
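As a quick arithmetic sketch, and an assumption rather than slide content: the three rates above correspond to 1X, 4X, and 12X link widths at 2.5 Gbps per lane, and SDR links use 8b/10b encoding, so usable data bandwidth is 80% of the signaling rate.

/* Sketch: derive signaling and data rates for 1X, 4X and 12X links.
 * The 2.5 Gbps lane rate comes from the slide; the 8b/10b factor is a
 * standard property of SDR InfiniBand links, not stated on the slide. */
#include <stdio.h>

int main(void)
{
    const double lane_gbps = 2.5;           /* signaling rate per lane */
    const double encoding_efficiency = 0.8; /* 8b/10b: 8 data bits per 10 line bits */
    const int widths[] = { 1, 4, 12 };      /* 1X, 4X, 12X link widths */

    for (int i = 0; i < 3; i++) {
        double raw  = widths[i] * lane_gbps;
        double data = raw * encoding_efficiency;
        printf("%2dX link: %5.1f Gbps signaling, %5.1f Gbps data\n",
               widths[i], raw, data);
    }
    return 0;
}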

Physical Devices
Standard copper cabling
  Max distance of 17 meters
Fiber-optic cabling
  Max distance of 10 kilometers
Host Channel Adapters on PCI cards
  PCI, PCI-X, PCI-Express
InfiniBand Switches
  10 Gbps non-blocking per port
  Easily cascadable

Host Channel Adapters


Standard PCI
  133 MBps; PCI 2.2 - 533 MBps
PCI-X
  1066 MBps; PCI-X 2.0 - 2133 MBps
PCI-Express
  x1 - 5 Gbps
  x4 - 20 Gbps
  x8 - 40 Gbps
  x16 - 80 Gbps
(bus rates are compared against a 4X link in the sketch below)
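A small comparison sketch, using only the bus figures from this slide plus the 10 Gbps (about 1250 MBps) rate of a 4X link, shows which host buses can actually keep an HCA busy; the numbers are illustrative, not measurements.

/* Sketch: compare the slide's host-bus throughput figures against the
 * ~1250 MBps needed to sustain a 10 Gbps 4X InfiniBand link. */
#include <stdio.h>

struct bus { const char *name; double mbps; };

int main(void)
{
    const double ib_4x_mbps = 10e9 / 8 / 1e6;    /* 10 Gbps -> 1250 MBps */
    const struct bus buses[] = {
        { "PCI",        133.0 },
        { "PCI 2.2",    533.0 },
        { "PCI-X",     1066.0 },
        { "PCI-X 2.0", 2133.0 },
        { "PCIe x4",   20e9 / 8 / 1e6 },         /* 20 Gbps per the slide */
    };

    for (size_t i = 0; i < sizeof buses / sizeof buses[0]; i++)
        printf("%-10s %7.0f MBps: %s a 4X (%.0f MBps) link\n",
               buses[i].name, buses[i].mbps,
               buses[i].mbps >= ib_4x_mbps ? "can sustain" : "cannot sustain",
               ib_4x_mbps);
    return 0;
}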

DAFS
Direct Access File System
Protocol for file storage and access
Data transferred as logical files, not physical storage blocks
Transferred directly from storage to client, bypassing the CPU and kernel
Provides RDMA functionality
Uses the Virtual Interface (VI) architecture
  Developed by Microsoft, Intel, and Compaq in 1996

RDMA

[Figure: TCP/IP packet overhead]
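To make the contrast with the TCP/IP path concrete, here is a hedged libibverbs sketch, not from the presentation, of posting an RDMA WRITE work request. The function name and parameters are illustrative, and queue-pair setup plus the out-of-band exchange of the remote address and rkey are omitted.

/* Sketch of the RDMA data path: post an RDMA WRITE work request.
 * Assumes the queue pair is already connected and the remote side has
 * shared its buffer address and rkey (for example over a TCP socket). */
#include <stdint.h>
#include <stddef.h>
#include <infiniband/verbs.h>

/* Write 'len' bytes from a locally registered buffer straight into remote
 * memory; the remote CPU and kernel are not involved in the transfer. */
int post_rdma_write(struct ibv_qp *qp, struct ibv_mr *local_mr,
                    void *local_buf, size_t len,
                    uint64_t remote_addr, uint32_t remote_rkey)
{
    struct ibv_sge sge = {
        .addr   = (uintptr_t) local_buf,
        .length = (uint32_t) len,
        .lkey   = local_mr->lkey,
    };

    struct ibv_send_wr wr = {
        .wr_id      = 1,
        .sg_list    = &sge,
        .num_sge    = 1,
        .opcode     = IBV_WR_RDMA_WRITE,
        .send_flags = IBV_SEND_SIGNALED,   /* ask for a completion entry */
    };
    wr.wr.rdma.remote_addr = remote_addr;  /* learned out of band */
    wr.wr.rdma.rkey        = remote_rkey;

    struct ibv_send_wr *bad_wr = NULL;
    return ibv_post_send(qp, &wr, &bad_wr);
}

Because the HCA moves the data itself, the remote host's CPU and kernel never touch the transfer, which is where the latency gap on the next slides comes from.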

Latency Comparison
Standard Ethernet, TCP/IP driver
  80 to 100 microseconds
Standard Ethernet, Dell NIC with MPICH over TCP/IP
  65 microseconds
InfiniBand 4X with MPI driver
  6 microseconds
Myrinet
  6 microseconds
Quadrics
  3 microseconds

Latency Comparison
[Chart: MPI latency (microseconds) vs. message size, 64 bytes to 64 KB, for InfiniBand, Myrinet, and Quadrics]

References
InfiniBand Trade Association - www.infinibandta.org
OpenIB Alliance - www.openib.org
TopSpin - www.topspin.com
Wikipedia - www.wikipedia.org
O'Reilly - www.oreillynet.com
SourceForge - infiniband.sourceforge.net
"Performance Comparison of MPI Implementations over InfiniBand, Myrinet and Quadrics." Computer and Information Science, Ohio State University. nowlab.cis.ohio-state.edu/projects/mpi-iba/publication/sc03.pdf
