
High Speed Differential I/O Overview and Design Challenges on Intel Enterprise Server Platforms

Beom-Taek Lee, Mohiuddin Mazumder, Richard Mellitz

Intel Corporation
3600 Juliette Lane, Santa Clara, CA 95054, USA
beomtaek.lee@intel.com, mohiuddin.mazumder@intel.com

Intel Corporation
100 Center Point Circle, Columbia, SC 29210, USA
richard.mellitz@intel.com

Abstract—In this paper, the high speed differential I/O buses used on Intel server platforms are explored. The characteristics of channel components are examined along with channel and I/O circuit design challenges. Statistical time domain and frequency domain methods are briefly discussed as state-of-the-art simulation tools.

I. INTRODUCTION
The I/O (input/output) buses on Intel server platforms can be classified as chip-to-chip, chip-to-peripheral, and chip-to-network. On server platforms, ever-increasing throughput requirements have forced the conversion of major I/O buses from multi-load parallel single-ended buses to point-to-point serial differential buses. The change was first made on networking buses, with Ethernet (GbE), InfiniBand (IB), and Fibre Channel (FC) links connecting server platforms to network and storage apparatus. Higher transfer rate and I/O density are the primary reasons for the conversion. Chip-to-peripheral interconnections such as universal serial bus (USB), PCI Express (PCIe), enterprise south-bridge interface (ESI), serial advanced technology attachment (SATA), and serial attached SCSI (SAS) have been converted accordingly. The major chip-to-chip interconnections are for CPU to CPU, CPU to Input Output Hub (IOH)/platform controller hub (PCH), and CPU/memory controller hub (MCH) to memory. QuickPath Interconnect (QPI) was introduced as the I/O bus for CPU to CPU and CPU to IOH/Node Controller (NC), replacing the front side bus (FSB). External memory interface (XMI), fully-buffered DIMM interface (FBD), and system memory interface (SMI) were introduced as serial differential memory interfaces, though these I/O buses didn't replace Double Data Rate Synchronous Dynamic Random Access Memory (DDR SDRAM), which remains the major memory interface on server platforms.
Although the high speed differential (HSD) point-to-point buses were introduced to overcome the limits of single-ended multi-drop buses, the ever-increasing transfer rate and the boundary conditions of server platforms pose significant challenges in achieving higher speed links. These boundary conditions include high volume manufacturing (HVM), serviceability, high I/O density, low power, low cost, and inflexible form factors. While industry researchers demonstrated high speed links with test vehicles, as shown in [1], the aforementioned challenges meant that such high speed links took time to realize in products.
In this paper, the HSD buses on Intel platforms are explored. The bus speed and channel length requirements are compared among various I/O standards. Channel insertion loss (IL) at the Nyquist frequency and signal-to-noise ratio (SNR) are investigated with an example. Characteristics of channel components, including printed circuit board (PCB), package, socket, connector, and cable, are also examined. Insertion loss variation of PCB and package over humidity, temperature, and manufacturer is investigated. Mitigation of the fiber-weave effect is discussed briefly in this paper. Jitter, clocking architecture, equalization scheme, power, pad capacitance, and HVM variation are significant parameters in I/O circuit design. I/O density, IL, SNR, alien crosstalk, cost, and HVM variation are significant parameters in channel design. Details of these design challenges are also discussed in the paper. State-of-the-art design methodologies and tools have evolved toward better design predictability based on statistics and bit error rate (BER), as HSD buses no longer show design margin under worst-case conditions. The paper explains statistical BER simulation methods based on analytical convolution and superposition in the time domain. In addition, a frequency domain analysis method based on IL and SNR design metrics is elucidated.

Fig. 1. High speed differential I/O bus standards on Intel server platforms (transfer rate in GT/s vs. introduction year, 1996-2012)

II. HSD BUSES ON INTEL SERVER PLATFORMS


Figure 1 illustrates the introduction time of the HSD I/O bus standards, along with the bus transfer rates. GbE, FC, and IB were introduced on Intel server platforms as HSD buses to network and storage apparatus in 1995, 1997, and 1999, respectively. SATA and SAS were introduced as interconnects to local storage peripherals in 2002 and 2005, respectively. USB was introduced as an interconnect to I/O peripherals in 1996. PCIe and ESI were introduced for chip-to-chip and chip-to-peripheral interconnects in 2003. Later on, XMI, FBD, and SMI were introduced as memory interfaces for 4+ socket server platforms in 2005, 2007, and 2009, respectively. QPI was introduced as the CPU-to-CPU and CPU-to-IOH/NC interface in 2009 for 2S and 4S+ server platforms.
Server form factors have not changed to a great extent over time, thus the I/O bus channel length requirements are mostly unchanged. Within a given server form factor, I/O bus channel lengths are determined by component placement, interconnect types, and mechanical constraints. Figure 2 shows the data rates and channel length requirements of HSD I/O buses on PCB. The lower speed HSD buses were able to meet the channel length requirements, as the performance limitations were mainly due to the I/O circuit. However, ever-increasing bus speeds have made it difficult to meet the channel length requirements due to channel loss, crosstalk, and reflections.
Fig. 2. HSD channel length requirements (maximum PCB trace length in inches per bus; cable lengths are not included for the USB, SATA, and SAS entries)

It shows that the maximum PCB length is around 20" for PCIe and QPI. Since the channel length requirements do not scale with bus speed, the primary challenges in achieving higher transfer rates are the increased channel IL and reduced SNR.

III. HSD BUS CHANNEL CHARACTERISTICS

This section discusses the characteristics of channel components including package, socket, PCB, connector, and cable. Channel characteristics such as IL and SNR from channel components are described in the following sub-section using a QPI link as an example. Package and PCB IL vary over humidity, temperature, manufacturing process, and material. Humidity and temperature effects on IL are discussed in a subsequent sub-section, and HVM and process/material induced variations are also discussed. High speed buses experience propagation delay variation due to the non-homogeneous nature of PCB dielectric material; essentially, fiber-weave can cause an increase in loss and mode conversion. The impact of fiber-weave was published previously [8], and it is addressed briefly for HSD buses in this paper along with mitigation schemes.

A. Channel Component Characteristics


In this paper, IL is measured at the Nyquist frequency from the fitted insertion loss, and SNR is a composite of insertion loss deviation (ILD) and crosstalk; both are briefly described later in the paper and detailed in [2]. Figure 3 exemplifies the IL and SNR decomposition for a QPI 6.4 GT/s channel with Xeon 7500 processors. The channel consists of pad capacitance, package, a 17" PCB trace, and two high performance connectors. As data rates increase and platform designs vary, the significant parameters in the IL and SNR decomposition will change.





Fig. 3. Channel component decomposition: (a) insertion loss contributions at 3.2 GHz [dB]: PCB_Tline 11.0, pkg_Tline 5.7, Cpad 3.5, Rterm 1.1, pkg_via_skt 1.1, Connector 0.3, PCB_con_via 0.2, PCB_skt_via 0.1; (b) SNR contributions [dB]: Cpad 5.1, pkg_via_skt 4.2, pkg_Tline 3.8, PCB_Tline 1.7, Rterm 1.3, PCB_skt_via 0.4, Connector 0.2, PCB_con_via 0.1
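The decomposition in Fig. 3(a) is additive in dB, so a rough channel budget can be tallied directly from the component contributions. The following minimal Python sketch (not from the paper; the dictionary keys simply mirror the figure labels) sums the Fig. 3(a) values and computes the NRZ Nyquist frequency for the 6.4 GT/s example:

il_contrib_db = {                      # insertion loss contributions at 3.2 GHz, Fig. 3(a)
    "PCB_Tline": 11.0, "pkg_Tline": 5.7, "Cpad": 3.5, "Rterm": 1.1,
    "pkg_via_skt": 1.1, "Connector": 0.3, "PCB_con_via": 0.2, "PCB_skt_via": 0.1,
}
data_rate_gtps = 6.4
f_nyquist_ghz = data_rate_gtps / 2             # NRZ Nyquist = half the transfer rate
total_il_db = sum(il_contrib_db.values())      # sum of the Fig. 3(a) contributions
print(f"Nyquist: {f_nyquist_ghz} GHz, total IL: {total_il_db:.1f} dB")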

Many HSD buses on Intel server platforms use 85 Ω differential impedance on PCB and package, since 85 Ω instead of 100 Ω provides improved return loss by reducing impedance discontinuities at PCB and package vias. It also slightly reduces the impact of manufacturing variation. Therefore, the QPI connector specification was developed with 85 Ω impedance.
As shown in Figure 3 (a), the PCB transmission line, package transmission line, and pad capacitance are the dominant components of IL. To make the insertion loss compliant with the bus requirements, PCB loss is specified as 0.8 dB/in at 4 GHz.


The PCB stack-up varies over platform requirements; most platforms use 6, 8, or 14 layers, and some designs use as many as 24 layers. To control impedance discontinuities at PCB vias, back-drilling is used when long via stubs are found at connectors and layer transitions. The package transmission lines use finer pitch, thus their IL is a few times larger than that of the PCB, although the material properties are similar. Therefore, minimizing package loss is also an important design factor, especially for high I/O density and large packages. Pad capacitance consists of the electrostatic discharge (ESD) protection structure, routing parasitics, and circuit intrinsic capacitance. Pad capacitance contributes more insertion loss at higher speeds, and this contribution does not scale linearly with data rate.
As shown in Figure 3 (b), the dominant SNR contributors are pad capacitance, package vias, sockets, package transmission line, and PCB transmission line. The socket has relatively long electrical paths, next to those of the package and PCB vias, and its signals are not well shielded; thus, socket crosstalk is one of the major SNR sources. Figure 4 shows a PCB via pattern with a 4:1 signal-to-ground/power ratio in the pin map. Each differential pair has five adjacent aggressors in the figure, which can aggravate the link performance with high crosstalk. An improved S:G ratio needs to be used at higher data rates to reduce the number of aggressors, since it is significantly challenging to scale socket technology with data rate. Impedance discontinuities at package and PCB transmission lines are another major contributor to SNR; the ILD from these discontinuities is a significant noise source. The 85 Ω transmission line is chosen to reduce impedance discontinuities, as mentioned above.
Connector and cable usage varies over buses. Buses with industry-wide specifications, such as PCIe, SAS, SATA, USB, GbE, IB, and FC, enable specific connector and cable solutions. QPI, an Intel proprietary bus, defines a connector specification but does not detail the form factor. SMI reuses the PCI Express connector but is not limited to it.


Fig. 4. PCB via pattern example (signal and Vss/Vcc pins, S:G = 4:1)

B. Environmental Variables
Package and PCB IL vary over humidity and temperature, the ranges of which are specified in the ASHRAE guidelines [3]. Figure 5 shows IL and crosstalk variation over humidity and temperature for an example QPI bus at 6.4 GT/s with a 17" channel and two high performance connectors. SNR can be grossly estimated from the dB difference between IL and crosstalk at the Nyquist frequency. With HVM variation, IL varies between -16.5 and -20.9 dB at 3.2 GHz, and SNR varies between 16.6 and 23.8 dB. IL and SNR can be calculated more accurately with the frequency domain method, which is addressed later in the paper. At high humidity and temperature, IL gets larger and SNR becomes worse, as also discussed in [4]. This example shows ~2 dB IL increase and ~1.7 dB SNR decrease at high humidity and temperature. This impacts not only the link performance, but also the optimal equalization levels in the transmitter and receiver, as stated in [5]. If the link equalization scheme is non-adaptive, it should be programmed to be optimal at the worst humidity and temperature condition.

Fig. 5. IL and crosstalk variation over humidity and temperature (0-10 GHz; HVM variation and worst humidity & temperature)
C. High Volume Manufacturing and Process/Material Variation
HVM variations, such as copper surface treatment and dielectric material variation, affect package and PCB IL and SNR. For example, process variation in HVM impacts transmission line geometries and dielectric thickness, thereby causing variation in transmission line impedance and loss. Depending on the copper surface treatment, the surface roughness varies widely, and so does the loss, as shown in [6]. Although a dielectric material may meet the FR-4 specification, its dielectric loss can have large variation, so the loss varies. Figure 6 shows an example of channel loss variation with HVM variation. With a high dielectric loss material, the example shows ~2 dB IL increase and ~1.5 dB SNR decrease. PCB loss needs to be specified, as mentioned above, to keep the insertion loss compliant with the bus requirement. A method of validating PCB loss with time-domain reflectometry (TDR) and hand-held probes was proposed in [7].

Fig. 6. IL and crosstalk variation over dielectric loss (0-10 GHz; worst-case HVM condition vs. high dielectric loss material)
As shown in Figure 7, the dielectric layer of a PCB is made of woven fiber-glass fabric, strengthened and bound together with epoxy resin. Because the relative permittivity is ~6 for the glass and ~3.5 for the epoxy, signal propagation on transmission lines can experience non-homogeneous effects. The figure shows an example where one side (D+) of a differential pair sits on top of fiber-glass, while the other side (D-) sits on top of a mixture of fiber-glass and epoxy. As D+ and D- have different propagation delays, the fiber-weave effect causes increased AC common mode (ACCM) noise and a reduced differential eye opening, degrading link performance. The fiber-weave impact gets worse at higher transfer rates and should be mitigated to meet the link performance target; a worked skew estimate is sketched after the next paragraph.
There are many ways of mitigating the fiber-weave effect. Homogeneous dielectric material or random weave material can be used, but these are expensive solutions. Multi-ply dielectric layers with different weave pitches can be used, but this adds manufacturing steps and cost. As an alternative, the PCB vendor or designer can rotate the layout image for manufacturing, which adds cost but is an easy solution. The designer can also use angled routing against the fiber-weave or zig-zag routing in the layout, which may not add PCB cost but increases layout complexity. More details of the fiber-weave effect and mitigation options can be found in [8].
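As a first-order worked example of the skew mechanism, the sketch below converts a D+/D- difference in effective dielectric constant into delay skew. The effective Dk values are illustrative assumptions, since the actual values depend on the stack-up and weave style:

import math

C_IN_PER_NS = 11.8                          # speed of light, inches per nanosecond

def delay_ps_per_inch(dk_eff: float) -> float:
    # Propagation delay of a trace for a given effective dielectric constant
    return 1000.0 * math.sqrt(dk_eff) / C_IN_PER_NS

dk_over_glass = 4.2                         # assumed effective Dk over a glass bundle
dk_over_resin = 3.6                         # assumed effective Dk over a resin-rich area
skew_ps_per_in = delay_ps_per_inch(dk_over_glass) - delay_ps_per_inch(dk_over_resin)

length_in = 10.0                            # routed length with worst-case alignment
ui_ps = 1e3 / 6.4                           # unit interval at 6.4 GT/s = 156.25 ps
total_skew_ps = skew_ps_per_in * length_in
print(f"{skew_ps_per_in:.1f} ps/in -> {total_skew_ps:.0f} ps "
      f"({total_skew_ps / ui_ps:.2f} UI at 6.4 GT/s)")

Even a modest Dk imbalance accumulates to a large fraction of a UI over a long trace, which is why the mitigation options above matter at higher transfer rates.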

Fig. 7. Top view of a differential pair routing over glass fiber-weave

IV. HSD BUS DESIGN CHALLENGES


As stated above, scaling HSD channels to achieve higher transfer rates is difficult due to increased IL, reduced SNR, and HVM/environmental induced variation. In addition, the I/O circuit design faces difficulties in scaling pad capacitance and jitter, and in reducing HVM variation. Clocking architecture, non-ideal equalization schemes, and low power design requirements add further challenges.
Part of the challenge is due to the inflexible floor plans, high I/O pin count, and low cost design requirements of server platform form factors. The following sub-sections discuss the details of the I/O circuit and channel challenges.

A. Challenges on I/O Circuit


The I/O circuit jitter consists of deterministic jitter (Dj) and random jitter (Rj), and is determined by power supply noise, thermal noise, crosstalk, and clocking architecture. Scaling I/O circuit jitter for higher transfer rates is a significant challenge given restricted die area and power budgets. Table I compares jitter budgets from several buses. As the jitter specification method and location vary over buses and revisions, it is difficult to compare the jitter budget among different buses; the budgets in Table I may appear different from the specification documents, as they are adjusted for comparison purposes.
TABLE I. JITTER BUDGET COMPARISON

Bus       Speed [GT/s]  Tx Tj [UI]  Rx Tj [UI]  Ref clk [UI]  Channel [UI]
USB2      0.48          0.15        0.4         -             0.45
USB3      5             0.375       0.45        -             0.27
SATA1     1.5           0.37        0.6         -             0.135
SATA2     3             0.37        0.6         -             0.135
SATA3     6             0.34        0.6         -             0.165
SAS1      3             0.25        0.55        -             0.3
SAS2      6             0.25        0.36        -             0.48
PCIe1     2.5           0.25        -           0.22          0.34
PCIe2     5             0.25        0.4         0.22          0.34
PCIe3     8             0.25        0.4         0.05          0.49
QPI/SMI   6.4           0.18        0.35        0.4           0.45

QPI and SMI use a forwarded clocking architecture with a delay-locked loop (DLL) at the receiver as the clock recovery circuit. Most other buses use a derived clocking architecture with a phase-locked loop (PLL) at the receiver, driven from a common reference clock or separate reference clocks. The forwarded clocking architecture can provide a better I/O jitter budget if the clock channel is properly designed, at the expense of additional I/O pin count. The latter provides a flexible platform architecture, but the I/O jitter from the clock recovery circuit can be a substantial portion of the jitter budget. Legacy I/O bus standards and backward compatibility also pose challenges in achieving higher speed links.
Equalization schemes are common in HSD buses with higher transfer rates and worse signal integrity impairment from the channel. For low transfer rates and/or short channels, 2-4 tap feed-forward equalization (FFE) is used at the transmitter. For higher transfer rates and/or long channels, equalization is added at the receiver, such as continuous time linear equalization (CTLE) and decision feedback equalization (DFE). Equalization helps compensate for the channel jitter caused by inter-symbol interference (ISI), and adaptive equalization helps overcome channel variation over design, HVM, and environmental variables. The equalization scheme and adaptive equalization features are implemented at the expense of die area and power.
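To make the roles of the equalizers concrete, here is a small self-contained sketch of a 3-tap Tx FFE followed by a 2-tap Rx DFE operating on a baud-sampled pulse response. The pulse response and tap values are illustrative, not taken from any bus specification:

import numpy as np

pulse = np.array([0.05, 0.55, 0.25, 0.12, 0.06])    # channel pulse response, cursor at 1
ffe_taps = np.array([-0.1, 0.75, -0.15])            # 3-tap Tx FFE: pre, main, post

bits = np.random.randint(0, 2, 256)
symbols = 2.0 * bits - 1.0                          # NRZ {0,1} -> {-1,+1}
tx = np.convolve(symbols, ffe_taps)                 # pre-distorted transmit stream
rx = np.convolve(tx, pulse)                         # stream seen by the receiver

combined = np.convolve(ffe_taps, pulse)             # FFE + channel combined response
cursor = int(np.argmax(combined))                   # main-cursor position
fb_taps = combined[cursor + 1 : cursor + 3]         # DFE cancels first 2 post-cursors

samples = rx[cursor : cursor + len(bits)]           # one sample per UI at the cursor
decisions = np.zeros(len(bits))
for i in range(len(bits)):                          # decision feedback loop
    isi = sum(fb_taps[k] * decisions[i - 1 - k]
              for k in range(len(fb_taps)) if i - 1 - k >= 0)
    decisions[i] = 1.0 if samples[i] - isi > 0 else -1.0

print("bit errors:", int(np.sum(decisions != symbols)))

With this noiseless channel the residual pre- and post-cursor ISI after FFE and DFE is well below the main cursor, so the loop decides every bit correctly; real links add noise and jitter on top of this.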
Although the pad capacitance comes from the I/O circuit design, it is an important passive channel component, as shown in Figure 3. Pad capacitance does not scale linearly with semiconductor process shrinks, and it impairs channel IL and SNR more at higher speeds. Equalization can alleviate the impact of pad capacitance degradation but cannot completely compensate for the IL and SNR loss. Inductive structures can be added to regain channel bandwidth, as used in some high performance, low density networking I/O buses. However, they are difficult to implement in most high density I/O buses for chip-to-chip and chip-to-peripheral connections.
In addition, HVM variation of complex equalization schemes adds to the challenge of guaranteeing the required target bus speed at high yield.
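As a back-of-the-envelope illustration of why pad capacitance hurts more at higher speeds: the shunt Cpad against the line impedance forms a low-pass pole that stays fixed while the Nyquist frequency climbs with data rate. The capacitance values below are assumed for illustration only:

import math

z_diff = 85.0                         # differential impedance [ohm]
r_eff = (z_diff / 2.0) / 2.0          # odd-mode impedance, source and load in parallel
for cpad_pf in (0.8, 1.2, 1.6):       # assumed pad capacitances [pF]
    f3db_ghz = 1.0 / (2.0 * math.pi * r_eff * cpad_pf * 1e-12) / 1e9
    print(f"Cpad = {cpad_pf} pF -> pole near {f3db_ghz:.1f} GHz")

With the pole landing only a little above the 3.2 GHz Nyquist frequency of a 6.4 GT/s link, even fractions of a picofarad matter.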


B. Challenges on Channel
The most straightforward ways to improve the channel IL are using low loss PCB material and/or fine surface roughness treatment, but these increase platform cost and/or reduce reliability. Reference server platform designs are therefore done with FR-4 or equivalent PCB material and process, with specified PCB loss.
High I/O density and serviceability require the use of sockets and connectors. Pin count limitations, together with crosstalk from the socket and connector, become a major bottleneck in achieving higher transfer rates. High performance sockets and connectors also add cost.
High I/O density impacts not only the socket but also the package. The package size increases with higher pin count, and its transmission line lengths increase accordingly. This adds to channel IL and reduces SNR, while increasing crosstalk from the denser escape routing from C4 bumps.
Reduced SNR also comes from increased ILD due to impedance discontinuities and reflections from via stubs. The via stub effect can be mitigated by constraining signal routing layers, using back-drilling, or using micro-via and blind-via technology. Back-drilling is challenging on high density packages and on shadowed vias, and adds cost. Constraining signal routing layers can increase PCB stack-up and cost. Micro-vias and blind-vias increase PCB cost significantly.
HSD buses may need to be routed under congested components such as connector via fields, socket via fields, or power supply via arrays, as shown in Figure 8. There, they can pick up noise from other high speed buses or power supply switching events, with detrimental impact on the link performance. Special attention is needed in this situation to avoid design failures.

Fig. 8. HSD bus routing under connector via field

V. HSD BUS DESIGN METHODOLOGY AND TOOL

Design methods evolve as HSD bus transfer rates increase. Instead of designing systems for worst-case conditions that have very low probability, statistical design methods are adopted. The probability of each design variable, such as package and PCB impedance, is considered in predicting the link performance statistically. In addition, the bus bit-error rate (BER) is used as a decision metric in HSD bus design. The following sub-sections briefly explain the BER simulation method and the frequency domain method.

A. BER Simulation Method
Comprehensive link analysis, which comprehends link bit-error ratio (BER), transmitter jitter, receiver jitter/noise, and channel effects including ISI and crosstalk, becomes crucial in designing HSD links, as stated in [9]. An analytical convolution based method has been developed as a full link analysis tool, as described in [10]. This method comprehends the impact of high frequency transmitter jitter in calculating the ISI probability density function (PDF), which is later convolved with receiver jitter and noise. Figure 9 shows at a high level how the statistical link analysis works: Tx analytic jitter is added at the transmitter, and Rx analytic jitter and noise are added at the receiver when the analytic convolution is applied.

Fig. 9. Statistical link analysis flow: Tx uncorrelated jitter and equalization, channel & co-channel responses, modulation, Rx sampling jitter and input referred noise, feeding a statistical signaling analysis that produces pre-aperture and post-aperture BER eyes
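A minimal sketch of the convolution idea follows: an ISI amplitude PDF is built from residual cursors assuming equiprobable bits, convolved with a Gaussian receiver-noise PDF, and its tail integrated to estimate BER. The cursor amplitudes and noise sigma are illustrative; production tools such as [10] also fold in Tx/Rx jitter and crosstalk:

import numpy as np

cursor = 0.25                                     # main cursor amplitude [V], assumed
residual_isi = [0.04, -0.03, 0.02, -0.015, 0.01]  # leftover ISI cursors [V], assumed
sigma_noise = 0.02                                # Rx input-referred noise sigma [V]

v = np.linspace(-0.5, 0.5, 8001)                  # amplitude grid [V]
dv = v[1] - v[0]
pdf = np.zeros_like(v)
pdf[len(v) // 2] = 1.0                            # start from a delta at 0 V

for h in residual_isi:                            # each cursor adds +/-h with prob. 1/2
    shift = int(round(h / dv))
    pdf = 0.5 * np.roll(pdf, shift) + 0.5 * np.roll(pdf, -shift)

gauss = np.exp(-0.5 * (v / sigma_noise) ** 2)     # receiver noise PDF
gauss /= gauss.sum()
pdf = np.convolve(pdf, gauss, mode="same")        # total interference + noise PDF

ber = pdf[v < -cursor].sum()                      # P(ISI + noise < -cursor): a '1' read as '0'
print(f"Estimated BER ~ {ber:.1e}")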




Since this convolution based tool cannot easily simulate link training, user specified bit sequences, or receiver equalization algorithms, a superposition based simulation tool was also developed, as described in [11]. However, this approach cannot easily calculate BERs lower than 10^-8, so it needs to be combined with a proper jitter extrapolation capability in order to project performance at lower BER. The superposition based simulation tool can be combined with behavioural circuit models in order to evaluate the I/O circuit jitter budget. As shown in Figure 10, six input characteristics are defined and applied to each behavioural circuit block, and the analytic jitter and noise are computed. In this case, the full path circuit blocks and the channel are combined in the simulation tool.

Fig. 10. Input characteristics to behavioural circuit block models: thermal noise, high-frequency uncorrelated supply noise, correlated supply noise, jitter impulse response, latency, and duty cycle error
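The core trick of the superposition approach can be sketched briefly: for a linear channel, the received waveform for any bit sequence is the sum of time-shifted, polarity-weighted copies of a single pulse response, so arbitrary patterns (e.g., training sequences) can be synthesized cheaply. The pulse shape below is illustrative:

import numpy as np

OSR = 32                                        # samples per unit interval
t = np.arange(8 * OSR)
pulse = np.exp(-0.5 * ((t - 2.5 * OSR) / (0.45 * OSR)) ** 2)  # assumed pulse response

def synthesize(bits: np.ndarray) -> np.ndarray:
    # Superpose one shifted pulse response per NRZ symbol (+/-1)
    wave = np.zeros(len(bits) * OSR + len(pulse))
    for i, b in enumerate(bits):
        wave[i * OSR : i * OSR + len(pulse)] += (2.0 * b - 1.0) * pulse
    return wave

wave = synthesize(np.array([0, 1, 1, 0, 1, 0, 0, 1]))
print(f"peak-to-peak amplitude: {wave.max() - wave.min():.2f}")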


B. Frequency Domain Method


Channel performance can be measured in the frequency domain with IL and SNR [2]. The insertion loss fitted attenuation is measured at the Nyquist frequency with a channel attenuation fitting function. The insertion loss noise is defined as the deviation of the insertion loss from this fitted insertion loss; it is an estimate of the ISI that cannot be reasonably equalized. Crosstalk noise is calculated by integrating the power sum of all crosstalk agents and then converting to a standard deviation of an additive white Gaussian noise (AWGN) source. In a similar manner, the insertion loss noise is integrated and converted to an AWGN standard deviation. In both cases, a power spectral weighting function is applied to evaluate insertion loss and crosstalk noise at the target bus transfer rate. The power spectral weighting function represents the data spectrum and I/O circuit transfer functions [2]. The two AWGN standard deviations are combined with a root-sum-square method, producing the channel noise, Nc. The signal available to the receiver, Sc, is related to a combination of IL and the pulse height. The SNRc for the channel is defined as:

SNRc = Sc / Nc

This SNRc is not the complete story for SNR. The bit-to-bit SNR limit for a minimum bit error ratio, BER, is given by:

SNRc > √2 · erfc⁻¹(2 · BER)

where erfc is the complementary error function. A BER requirement of less than 10^-12 requires an SNR exceeding 17 dB. However, SNRc does not include silicon jitter, noise, and voltage swing effects. The implication is that the channel SNRc is a good indicator but needs to be put into a system performance context to determine IL and SNRc criteria. Once the criteria have been established with statistical BER simulations, multiple iterations of the frequency domain method can be used to pre-screen channel designs, as it can be calculated much faster than time domain simulation methods.
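The two relations above can be checked numerically; a small sketch using SciPy's complementary error function confirms the 17 dB figure for a 10^-12 BER target:

from math import log10, sqrt
from scipy.special import erfc, erfcinv

ber_target = 1e-12
snr_lin = sqrt(2.0) * erfcinv(2.0 * ber_target)   # SNRc > sqrt(2) * erfc^-1(2*BER)
print(f"required SNR: {snr_lin:.2f} linear = {20.0 * log10(snr_lin):.1f} dB")

ber_back = 0.5 * erfc(snr_lin / sqrt(2.0))        # inverse check: BER at that SNR
print(f"BER at that SNR: {ber_back:.1e}")         # ~1e-12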

VI. CONCLUSION
The high speed differential buses on Intel enterprise server platforms were explored in this paper. Transfer rates are ever-increasing to meet bandwidth requirements.
Higher transfer rates increase the channel IL and reduce SNR. High I/O density and low cost PCB design pose design challenges. Package and PCB IL vary over humidity, temperature, manufacturing process, and material. Fiber-weave induced link performance degradation should be mitigated to achieve the link performance target.
I/O circuit design must accommodate significant effects such as pad capacitance, jitter, and HVM variation in HSD buses. Clocking architecture, non-ideal equalization schemes, and low power design requirements are additional burdens to be accounted for.
With these design challenges in HSD buses, design methods are also evolving. Statistical BER simulation tools based on convolution and superposition become necessary. Frequency domain methods are adopted as an auxiliary alternative to time-domain simulation methods.
ACKNOWLEDGMENT
The authors wish to acknowledge Jeff Loyer, Richard Kunze, Chunfei Ye, and Dennis Miller for providing content for this paper.
REFERENCES
[1] B. Casper, J. Jaussi, F. O'Mahony, M. Mansuri, K. Canagasaby, J. Kennedy, E. Yeung, and R. Mooney, "A 20Gb/s forwarded clock transceiver in 90nm CMOS," IEEE International Solid-State Circuits Conference, vol. XLIX, pp. 90-91, February 2006.
[2] R. Mellitz, V. Ragavassamy, and M. Brownell, "Fast, Accurate, Simulation-Lite Approach for Multi-GHz Board Design," DesignCon Proceedings, 2011.
[3] ASHRAE Publication, "Thermal Guidelines for Data Centers and Other Data Processing Environments," Atlanta, 2004.
[4] G. Sheets and J. D'Ambrosia, "The Impact of Environmental Conditions on Channel Performance," DesignCon Proceedings, 2004.
[5] J. Zerbe, Q. Lin, V. Stojanovic, A. Ho, R. Kollipara, F. Lambrecht, and C. Werner, "Comparison of adaptive and non-adaptive equalization methods in high-performance backplanes," DesignCon Proceedings, 2005.
[6] G. Brist, S. Hall, S. Clouser, and T. Liang, "Non-classical Conductor Losses due to Copper Foil Roughness and Treatment," ECWC 10 Conference at IPC Printed Circuits Expo, SMEMA Council APEX and Designers Summit, 2005.
[7] J. Loyer and R. Kunze, "SET2DIL: Method to Derive Differential Insertion Loss from Single-Ended TDR/TDT Measurements," DesignCon Proceedings, 2010.
[8] J. Loyer, R. Kunze, and X. Ye, "Fiber Weave Effect: Practical Impact Analysis and Mitigation Strategies," DesignCon Proceedings, 2007.
[9] K. Oh, F. Lambrecht, S. Chang, Q. Lin, J. Ren, C. Yuan, J. Zerbe, and V. Stojanovic, "Accurate System Voltage and Timing Margin Simulation in High-Speed I/O System Designs," IEEE Transactions on Advanced Packaging, vol. 31, no. 4, pp. 722-730, November 2008.
[10] B. Casper, G. Balamurugan, J. Jaussi, J. Kennedy, M. Mansuri, F. O'Mahony, and R. Mooney, "Future Microprocessor Interfaces: Analysis, Design and Optimization," IEEE Custom Integrated Circuits Conference, pp. 479-486, Sept. 2007.
[11] K. Xiao, B. Lee, and X. Ye, "A Flexible and Efficient Bit Error Rate Simulation Method for High-Speed Differential Link Analysis Using Time-Domain Interpolation and Superposition," IEEE International Symposium on Electromagnetic Compatibility, Aug. 2008.
