1,2 Intel Corporation, 3600 Juliette Lane, Santa Clara, CA 95054, USA
1 beomtaek.lee@intel.com, 2 mohiuddin.mazumder@intel.com
* Intel Corporation, 100 Center Point Circle, Columbia, SC 29210, USA
* richard.mellitz@intel.com
I. INTRODUCTION
The I/O (input/output) buses on Intel server platforms can be classified as chip-to-chip, chip-to-peripheral, and chip-to-network. On server platforms, ever-increasing throughput requirements have forced the conversion of the major I/O buses from multi-load, parallel, single-ended buses to point-to-point serial differential buses. The change was made first on networking buses, where Ethernet (GbE), InfiniBand (IB), and Fibre Channel (FC) links connect server platforms to network and storage apparatus; higher transfer rate and higher I/O density are the primary reasons for the conversion. Chip-to-peripheral interconnections such as the universal serial bus (USB), PCI Express (PCIe), the enterprise south-bridge interface (ESI), serial advanced technology attachment (SATA), and serial attached SCSI (SAS) have been converted accordingly. The major chip-to-chip interconnections are CPU to CPU, CPU to input/output hub (IOH) / platform controller hub (PCH), and CPU / memory controller hub (MCH) to memory. QuickPath Interconnect (QPI) was introduced as the I/O bus for CPU to CPU and CPU to IOH / node controller (NC), replacing the front side bus (FSB). The external memory interface (XMI), fully-buffered DIMM interface (FBD), and system memory interface (SMI) were introduced as serial differential memory interfaces, though these buses did not replace double data rate synchronous dynamic random access memory (DDR SDRAM), which remains the major memory interface on server platforms.
Although high speed differential (HSD) point-to-point buses were introduced to overcome the limits of single-ended multi-drop buses, the ever-increasing transfer rate and the boundary conditions of server platforms pose significant challenges in achieving higher link speeds. These boundary conditions include high volume manufacturing (HVM), serviceability, high I/O density, low power, low cost, and
Fig. 1. High speed differential I/O bus standards on Intel server platforms (transfer rate [GT/s] vs. year, 1996–2012; buses shown include 1000BASE-T, 10GBASE-T, USB2, USB3, SATA1–SATA3, SAS1, SAS2, 2GFC, 4GFC, 8GFC, IB-5G, IB-10G, PCIe1–PCIe3, XMI, FBD, SMI, and QPI).
[Figure: PCB routing length [inch] and transfer rate [GT/s] per bus (USB2, SATA1, PCIe1, SATA2, SAS1, XMI, FBD, PCIe2, SATA3, SAS2, QPI, SMI, PCIe3), with channel loss breakdowns by element: (a) PCB_Tline 11.0, pkg_Tline 5.7, Cpad 3.5, Rterm 1.1, pkg_via_skt 1.1, Connector 0.3, PCB_con_via 0.2, PCB_skt_via 0.1; (b) Cpad 5.1, pkg_Tline 3.8, PCB_Tline 1.7, Rterm 1.3, PCB_skt_via 0.4, Connector 0.2, PCB_con_via 0.1.]
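As a rough illustration of how such a budget is read, the sketch below (not from the paper; the element values are taken from panel (a) above, units as in the figure, and all names are illustrative) sums the per-element contributions and reports the share taken by the PCB trace, which dominates the end-to-end channel loss.

# Minimal sketch: summing a per-element channel loss budget like the
# breakdown in panel (a) above. Values read off the figure; illustrative only.
budget_a = {
    "PCB_Tline": 11.0,
    "pkg_Tline": 5.7,
    "Cpad": 3.5,
    "Rterm": 1.1,
    "pkg_via_skt": 1.1,
    "Connector": 0.3,
    "PCB_con_via": 0.2,
    "PCB_skt_via": 0.1,
}

total = sum(budget_a.values())              # total end-to-end contribution
pcb_share = budget_a["PCB_Tline"] / total   # fraction taken by the PCB trace
print(f"total channel loss contribution: {total:.1f}")
print(f"PCB trace share of the budget: {pcb_share:.0%}")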
[Figure: channel insertion loss [dB] vs. frequency [×10^9 Hz, 0–10 GHz] under HVM variation and under the worst-case (WC) HVM condition; stack-up labels: signal, Vss or Vcc.]
B. Environmental Variables
Package and PCB IL vary over humidity and temperature, which are specified in the ASHRAE document [3]. Figure 5 shows IL and crosstalk variation over humidity and temperature for an example QPI bus at 6.4 GT/s with a 17-inch channel and two high performance connectors. SNR can be grossly estimated by the dB difference between IL and crosstalk.
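As a back-of-the-envelope illustration of that estimate, the sketch below (assumed example numbers, not measured data from the paper) takes the dB separation between the through-channel insertion loss and the crosstalk at the Nyquist frequency as a gross SNR.

# Minimal sketch: gross SNR as the dB separation between insertion loss
# and crosstalk at Nyquist. The two input values are assumed for illustration.
il_db = -24.0    # through-channel insertion loss at Nyquist [dB] (assumed)
xtk_db = -42.0   # power-sum crosstalk at Nyquist [dB] (assumed)

snr_estimate_db = il_db - xtk_db   # signal sits ~18 dB above the crosstalk
print(f"gross SNR estimate: {snr_estimate_db:.1f} dB")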
Speed [GT/s]   Tx Tj [UI]   Rx Tj [UI]   Channel [UI]
0.48           0.15         0.4          0.45
5              0.375        0.45         0.27
1.5            0.37         0.6          0.135
3              0.37         0.6          0.135
6              0.34         0.6          0.165
3              0.25         0.55         0.3
6              0.25         0.36         0.48
2.5            0.25         0.4          0.34
5              0.25         0.4          0.34
8              0.25         0.35         0.49
6.4            0.18         0.22         0.45
Ref clk [UI]: 0.22, 0.05, 0.4
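For intuition about what these UI allocations mean in absolute time, the sketch below (an illustration, not part of the paper; the helper ui_to_ps and the choice of the 8 GT/s row as laid out above are the author's assumptions) converts a jitter budget row to picoseconds, using 1 UI = 1 / (transfer rate).

# Minimal sketch: convert jitter allocations from UI to picoseconds.
def ui_to_ps(ui: float, rate_gt_s: float) -> float:
    """Convert an allocation in UI to ps; 1/(GT/s) is ns, so scale by 1000."""
    return ui * 1000.0 / rate_gt_s

rate = 8.0       # GT/s, the 8 GT/s row of the budget above
tx_tj = 0.25     # Tx total jitter allocation [UI]
rx_tj = 0.35     # Rx total jitter allocation [UI]
channel = 0.49   # channel allocation [UI]

for name, ui in [("Tx Tj", tx_tj), ("Rx Tj", rx_tj), ("Channel", channel)]:
    print(f"{name}: {ui:.2f} UI = {ui_to_ps(ui, rate):.1f} ps")
print(f"unit interval at {rate} GT/s: {ui_to_ps(1.0, rate):.1f} ps")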
B. Challenges on Channel
The most straightforward way to improve channel IL is to use low loss PCB material and/or a fine surface roughness treatment, but these increase platform cost and/or reduce reliability. However, the reference server platform design is done entirely with FR4 or equivalent PCB material and process, with a specified PCB loss.
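For scale, trace loss grows roughly linearly with routed length, so the material choice sets the dB-per-inch slope. The sketch below uses assumed, ballpark per-inch loss figures (not values from the paper) to contrast a standard FR4 channel with lower-loss laminates over a long server routing length.

# Minimal sketch: how PCB material choice scales trace insertion loss.
# The per-inch loss figures are assumed, ballpark values for illustration.
LOSS_DB_PER_INCH = {
    "standard FR4": 0.9,        # assumed dB/inch near Nyquist
    "mid-loss laminate": 0.6,
    "low-loss laminate": 0.4,
}

trace_length_in = 11.0          # long server channel routing, inches
for material, per_inch in LOSS_DB_PER_INCH.items():
    loss = per_inch * trace_length_in
    print(f"{material:>18}: {loss:4.1f} dB over {trace_length_in} inches of trace")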
High I/O density and serviceability require the use of sockets and connectors. Pin count limitations, together with crosstalk from the socket and connectors, become a major bottleneck in achieving higher transfer rates. High performance sockets and connectors are also cost adders.
High I/O density impacts not only the socket but also the package. The package size grows with higher pin count, and its transmission line length increases accordingly. This adds to the channel IL and reduces SNR, while the denser escape routing from the C4 bumps increases crosstalk.
Reduced SNR also comes from increased insertion loss deviation (ILD) caused by impedance discontinuities and reflections from via stubs. The via stub effect can be mitigated by constraining signal routing layers, by back-drilling, or by using micro-via and blind-via technology. Back-drilling is challenging on high density packages and on shadow vias, and it adds cost. Constraining signal routing layers can increase the PCB stack-up and cost. Micro-vias and blind-vias increase PCB cost significantly.
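To give a feel for why via stubs matter, a stub behaves approximately as a quarter-wave resonator that notches the channel near f = c / (4 · L · sqrt(er_eff)). The sketch below (assumed stub lengths and dielectric constant, chosen for illustration and not taken from the paper) estimates that resonance and shows why back-drilling, which shortens the remnant stub, pushes the notch far above the signal band.

# Minimal sketch: quarter-wave resonance of a via stub.
import math

C_MM_PER_S = 2.998e11   # speed of light in mm/s
ER_EFF = 3.8            # assumed effective dielectric constant (FR4-like)

def stub_resonance_ghz(stub_len_mm: float, er_eff: float = ER_EFF) -> float:
    """Quarter-wave resonant frequency of a via stub: f = c / (4 * L * sqrt(er))."""
    return C_MM_PER_S / (4.0 * stub_len_mm * math.sqrt(er_eff)) / 1e9

for length_mm in (2.5, 1.0, 0.3):   # full-thickness stub, partial, back-drilled remnant
    print(f"stub {length_mm:4.1f} mm -> notch near {stub_resonance_ghz(length_mm):5.1f} GHz")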
[Figure: statistical signaling analysis flow — the channel, equalization, modulation, Tx uncorrelated jitter, Rx sampling jitter, high-frequency uncorrelated supply noise, correlated supply noise, jitter impulse response, latency, and duty cycle error feed the analysis, which produces pre-aperture and post-aperture BER eyes.]
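To make the flow above concrete, the sketch below is a toy statistical calculation (a simplified illustration with assumed cursor amplitudes and noise, not the paper's methodology or data): it builds the probability density of the sampled voltage by convolving the two-point distributions of a few ISI cursors with Gaussian noise, then reads off a BER at the nominal threshold. Production tools additionally fold in jitter, supply noise, equalization, and duty cycle error as shown in the flow.

# Minimal sketch of a statistical BER-eye calculation for NRZ signaling.
# Cursor amplitudes and noise sigma are assumed example numbers.
import numpy as np

main = 0.40                                   # main cursor amplitude [V]
isi = np.array([0.08, -0.05, 0.03, -0.02])    # pre/post ISI cursors [V] (assumed)
sigma = 0.05                                  # Gaussian noise sigma [V] (assumed)

v = np.linspace(-1.0, 1.0, 4001)              # voltage grid
dv = v[1] - v[0]

# ISI PDF: convolve the two-point PMF of each cursor (+/-1 data, p = 1/2 each)
pdf = np.zeros_like(v)
pdf[np.argmin(np.abs(v))] = 1.0 / dv          # start from a delta at 0 V
for h in isi:
    two_point = np.zeros_like(v)
    two_point[np.argmin(np.abs(v - h))] += 0.5 / dv
    two_point[np.argmin(np.abs(v + h))] += 0.5 / dv
    pdf = np.convolve(pdf, two_point, mode="same") * dv

# Fold in Gaussian noise with one more convolution
noise = np.exp(-0.5 * (v / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))
pdf = np.convolve(pdf, noise, mode="same") * dv

# BER for a transmitted '1': probability that main + ISI + noise falls below 0 V
cdf = np.cumsum(pdf) * dv
ber = np.interp(-main, v, cdf)
print(f"estimated BER for a transmitted '1' at a 0 V threshold: {ber:.2e}")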
VI. CONCLUSION
The high speed differential buses on Intel enterprise server platforms were explored in this paper. Transfer rates keep increasing to meet bandwidth requirements.
Higher transfer rates increase the channel IL and reduce SNR. High I/O density and low cost PCB design pose design challenges. Package and PCB IL vary over humidity, temperature, manufacturing process, and material. Fiber-weave-induced link performance degradation must be mitigated to achieve the link performance target.
I/O circuit design must accommodate significant effects such as pad capacitance, jitter, and HVM variation in HSD buses. The clocking architecture, non-ideal equalization schemes, and low power design requirements are additional burdens to be accounted for.
With these design challenges in HSD buses, design methods are also evolving. Statistical BER simulation tools