Sun Blade 6000 and 6048 Modular Systems
Open Modular Architecture with a Choice of
Sun SPARC, Intel Xeon, and AMD Opteron Platforms
White Paper
June 2008
Table of Contents
Executive Summary
An Open Systems Approach to Modular Architecture
    The Promise of Blade Architecture
    The Sun Blade 6000 and 6048 Modular Systems
    Open and Modular System Architecture
Sun Blade 6000 and 6048 Modular Systems Overview
    Chassis Front Perspective
    Chassis Rear Perspective
    Passive Midplane
    Server Modules Based on Sun SPARC, Intel Xeon, and AMD Opteron Processors
    A Choice of Operating Systems
Server Module Architecture
    Sun Blade T6320 Server Module
    Sun Blade T6300 Server Module
    Sun Blade X6220 Server Module
    Sun Blade X6250 Server Module
    Sun Blade X6450 Server Module
I/O Expansion, Networking, and Management
    Server Module Hard Drives
    PCI Express ExpressModules (EMs)
    PCI Express Network Express Modules (NEMs)
    Transparent and Open Chassis and System Management
    Sun xVM Ops Center
Conclusion
Executive Summary
The Participation Age is driving new demands that are focused squarely on the capabilities of the datacenter. Web services and rapidly escalating Internet use are driving competitive organizations to lead with innovative new services and scalable, dynamic infrastructure. High performance computing (HPC) is constantly finding new applications in both science and industry, fostering new demands for performance and density. Agility is paramount, and organizations must be able to respond quickly to unpredictable needs for capacity, adding compute power or growing services on demand. At the same time, most datacenters are rapidly running out of space, power, and cooling, even as energy costs continue to rise. Rapid growth must be met with consolidated infrastructure, controlled and predictable costs, and efficient management practices. Simply adding more low-density, power-consumptive servers is clearly not the answer.
Blade server architecture offers considerable promise toward addressing these issues through increased compute density, improved serviceability, and lower levels of exposed complexity. Unfortunately, most legacy blade platforms don't provide the flexibility needed by many of today's Web services and HPC applications. Complicating matters, many legacy blade server platforms lock customers into a proprietary and vendor-specific infrastructure that often requires redesign of existing network, management, and storage environments. These legacy chassis designs also often artificially constrain expansion capabilities. As a result, traditional blade architectures have been largely restricted to low-end Web and IT services.
Responding to these challenges, the Sun Blade 6000 and 6048 modular systems provide an open modular architecture that delivers the benefits of blade architecture without common drawbacks. Optimized for performance, efficiency, and density, these platforms take an open systems approach, employing the latest processors, operating systems, industry-standard I/O modules, and transparent networking and management. With a choice of server modules based on Sun SPARC, Intel Xeon, and AMD Opteron processors, organizations can select the platforms that best match their applications or existing infrastructure, without worrying about vendor lock-in.
Together with the successful Sun Blade 8000 and 8000 P modular systems, the Sun
Blade 6000 and 6048 modular systems present a comprehensive multitier blade
portfolio that lets organizations deploy the broadest range of applications on the most
ideal platforms. The result is modular architecture that serves the needs of the
datacenter and the goals of the business while protecting existing investments into the
future. This document describes the Sun Blade 6000 and 6048 modular systems along
with their key applications, architecture, and components.
Chapter 1
An Open Systems Approach to Modular Architecture
While some organizations adopted first-generation blade technology for Web servers or simple IT infrastructure, many legacy blade platforms have not been able to deliver on this promise for a broader set of applications. Part of the problem is that most legacy blade systems are based on proprietary architectures that lock adopters into an extensive infrastructure that constrains deployment. In addition, though vendors typically try to price server modules economically, they often charge a premium for the required proprietary I/O and switching infrastructure. Availability of suitable computational platforms has also been problematic.
Together, these constraints caused trade-offs in both features and performance that had to be weighed when considering blade technology for individual applications:
• Power and cooling limitations often meant that processors were limited to less powerful mobile versions.
• Limited processing power, memory capacity, and I/O bandwidth severely constrained the applications that could be deployed on blade server platforms.
• Proprietary tie-ins and other constraints in chassis design dictated networking topology and limited I/O expansion possibilities to a small number of proprietary modules.
These compromises in chassis design were largely the result of a primary focus on density, with smaller chassis requiring small-format server modules. Ultimately these designs limited the broad application of blade technology.
Highly-Efficient Cooling
Traditional blade platforms have a reputation for being hot and unreliable, a reputation caused by systems with insufficient cooling and chassis airflow. Not only do higher temperatures negatively impact electronic reliability, but hot and inefficient systems require more datacenter cooling infrastructure, with its associated footprint and power draw. In response, the Sun Blade 6000 modular system provides optimized cooling and airflow that can lead to reliable system operation and efficient datacenter cooling.
In fact, Sun Blade modular systems deliver the same cooling and airflow capacity as Sun's rackmount systems for both SPARC and x64 server modules, resulting in reliable system operation and less required cooling infrastructure. Better airflow can translate directly into better reliability, reduced downtime, and improved serviceability. These systems also help organizations meet growing demand while preserving existing datacenters.
Through this flexible approach, server modules can be configured with different I/O options depending on the applications they host. I/O modules are hot-plug, and customers can choose from Sun-branded or third-party adapters for networking, storage, clustering, and other I/O functions.
Compared to a rack of individual blade chassis, the Sun Blade 6048 modular system provides 20 percent more usable space in the same physical footprint. Up to 48 Sun Blade 6000 server modules can be deployed in a single Sun Blade 6048 modular system.
The Sun Blade 6000 and 6048 modular system chassis are shown in Figure 1. The Sun
Blade 6000 modular system is provided in a 10 rack unit (10U) chassis with up to four
chassis supported in a single 42U rack or three chassis supported in a 38U rack. The Sun
Blade 6048 modular system chassis takes the form of a standard rack and features four independent shelves.
Figure 1. Sun Blade 6000 and 6048 modular systems (left and right, respectively)
Both the Sun Blade 6000 and 6048 modular systems support flexible configuration, and are built from a range of standard hot-plug, hot-swap modules, including:
• Sun Blade T6320, T6300, X6220, X6250, or X6450 server modules, in any combination
• Blade-dedicated PCI Express ExpressModules (EMs), supporting industry-standard PCI Express interfaces
• PCI Express Network Express Modules (NEMs), providing access and an aggregated interface to all of the server modules in the Sun Blade 6000 chassis or Sun Blade 6048 shelf
• An integral Chassis Monitoring Module (CMM) for transparent management access to individual server modules
• Hot-swap (N+N) power supply modules
• Redundant (N+1) cooling fans
With common system components and a choice of chassis, organizations can scale capacity with either fine or coarse granularity, as their needs dictate. Table 1 lists the capacities of the Sun Blade 6000 and 6048 modular systems along with single-shelf capacity in the Sun Blade 6048 modular system. Maximum numbers of sockets, cores, and threads are listed for AMD Opteron, Intel Xeon, and UltraSPARC T1 and T2 processors.
Table 1. Sun Blade 6000 and 6048 modular system capacities

Category                              Sun Blade 6000   Sun Blade 6048 (Shelf)   Sun Blade 6048 (Chassis)
Server modules                        10               12                       48
PCI Express ExpressModules (EMs)      20               24                       96
Network Express Modules (NEMs)        Up to 2          Up to 2                  Up to 8
Power supply modules                  2 (6,000 W)      2 (8,400 W)              8 (8,400 W)
Fan modules                           …                …                        32
AMD Opteron sockets/cores/threads     20/40/40         24/48/48                 96/192/192
Intel Xeon sockets/cores/threads      40/160/160       48/192/192               192/768/768
UltraSPARC T1 sockets/cores/threads   10/80/320        12/96/384                48/384/1536
UltraSPARC T2 sockets/cores/threads   10/80/640        12/96/768                48/384/3072
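The socket, core, and thread maximums in Table 1 follow directly from the per-module processor configurations described in Chapter 3. The sketch below reproduces that arithmetic; it is illustrative only, and the per-socket figures are taken from the server module descriptions later in this paper:

```python
# Derive Table 1's socket/core/thread maximums from per-module configurations.
# Per-socket figures follow the server module descriptions in Chapter 3.
MODULES = {
    # module: (sockets, cores per socket, threads per core)
    "AMD Opteron (X6220)":   (2, 2, 1),
    "Intel Xeon (X6450)":    (4, 4, 1),
    "UltraSPARC T1 (T6300)": (1, 8, 4),
    "UltraSPARC T2 (T6320)": (1, 8, 8),
}
SLOTS = {"Sun Blade 6000": 10, "Sun Blade 6048 shelf": 12, "Sun Blade 6048 chassis": 48}

for module, (sockets, cores, threads) in MODULES.items():
    for enclosure, n in SLOTS.items():
        s = n * sockets
        print(f"{module:22s} {enclosure:22s} "
              f"{s}/{s * cores}/{s * cores * threads} sockets/cores/threads")
```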
Chapter 2
Sun Blade 6000 and 6048 Modular Systems Overview
Web Services
For Web services applications sized to take advantage of two-socket x64 server economy, the Sun Blade 6000 modular system delivers one of the industry's most compelling solutions. The system offers maximum performance, enterprise reliability, and easy scalability at a fraction of the price of competing products. The stateless approach of modular systems makes it easier to build large Web server farms with maximum manageability and deployment flexibility. Organizations can add new capacity quickly or redeploy hardware resources as required.
Virtualization and Consolidation
Virtualization and consolidation have never been more important as organizations seek to get more from their deployed infrastructure. Modular systems based on Sun's UltraSPARC T1 and T2 processors with CoolThreads technology can offer consolidation solutions with Sun Logical Domains and Solaris Containers that cut power and cooling costs. Modular systems based on Sun's x64 based server modules offer up to twice the memory and I/O of competing x64 blades or rackmount servers. These systems offer enterprise-class reliability, availability, and serviceability features, providing the needed headroom for consolidation with VMware, Xen, or Microsoft Virtual Server.
High Performance Computing (HPC)
Commercial and scientific computational applications such as electronic design automation (EDA) and mechanical computer aided engineering (MCAE) place significant demands on system architecture. These applications require a combination of computational performance and system capacity, with exacting needs for integer and floating-point performance, large memory configurations, and flexible I/O. Sun Blade 6000 and 6048 modular systems based on Sun's x64 based server modules, combined with the Sun Refresh Service, allow organizations to purchase the highest-performing and most cost-effective platforms now, while maintaining that technological edge for years to come.
Figure 2. The Sun Constellation System can be used to build the largest terascale and petascale
supercomputing clusters and grids
All major components can be accessed from either the front or the rear of the chassis for easy serviceability. Server modules, I/O modules, power supplies, and fans can all be added and removed while the chassis and other elements in the enclosure are powered on. This capability yields great expansion opportunity and provides considerable flexibility. The front perspectives of the Sun Blade 6000 chassis and a single Sun Blade 6048 shelf are shown in Figure 3, with components described in the sections that follow.
Figure 3. Front view of the Sun Blade 6000 chassis (left) and a single Sun Blade 6048 shelf (right), with hot-swappable N+N power supply modules (with integral fans) at the top
Operator Panel
An operator panel is located at the top of the chassis, providing status on the overall condition of the system. Indicators show whether the chassis is in standby or operational mode, and whether an over-temperature condition is occurring. A push-button indicator acts as a locator button for the chassis in case there is a need to remotely identify a chassis within a rack or in a crowded datacenter. If any of the components in the chassis should present a problem or a failure, the operator panel reflects that issue as well.
To provide N+N redundancy, all power cords must be energized. If both power supply
modules are energized, all of the systems in the chassis are protected from power
supply failure. A power supply module can fail or be disconnected without affecting the
server modules and components running inside the chassis. To further enhance this
protection, power grid redundancy for all of the systems and components in the chassis
can be easily achieved by connecting each of the two power supply modules to different
power grids within the datacenter.
Sun Blade 6000 power supply modules have a high 90-percent efficiency rating and an output voltage of 12 V DC. The high efficiency rating indicates that there are fewer power losses within the power supply itself, therefore wasting less power in the energy conversion stage from alternating current (AC) to direct current (DC). Also, by feeding 12 V DC directly to the midplane, fewer conversion stages are required in the individual server modules. This strategy yields less power conversion energy waste, and generates less waste heat within the server module, making the overall system more efficient.
Provisioned power for rack-mounted configurations depends on the number of chassis deployed per rack. A 42U rack with four installed Sun Blade 6000 chassis requires 24 kilowatts, while a 38U rack with three chassis requires 18 kilowatts. Depending on the ongoing load of the systems, actual power consumption will vary. For a more in-depth analysis of day-to-day power consumption of the system, please visit the power calculator located on the Sun Web site at http://www.sun.com/blades.
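The provisioned power figures above amount to a budget of six kilowatts per Sun Blade 6000 chassis. A minimal sketch of the arithmetic, for planning intuition only; actual draw depends on load, so real sizing should use Sun's power calculator:

```python
# Provisioned rack power = number of chassis x 6 kW per Sun Blade 6000 chassis.
KW_PER_CHASSIS = 6  # implied by 24 kW for four chassis

for rack, chassis in (("42U rack", 4), ("38U rack", 3)):
    print(f"{rack}: {chassis} chassis x {KW_PER_CHASSIS} kW = {chassis * KW_PER_CHASSIS} kW")
# 42U rack: 4 chassis x 6 kW = 24 kW
# 38U rack: 3 chassis x 6 kW = 18 kW
```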
Sun Blade 6048 power supply modules include three power supply cores, facilitating adjustable power utilization depending on the power consumption profiles of the installed server modules and other components. Two or three cores can be energized in each power supply module to make the system perform at optimal efficiency. An on-line power calculator (www.sun.com/servers/blades/6048chassis/calc) can help identify the power envelope of each shelf, and can help determine how many power supply cores to energize. Energizing two cores will support 5,600 watts, and energizing three cores will support 8,400 watts per shelf.
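Choosing how many cores to energize therefore reduces to comparing the shelf's expected power envelope against the 5,600 W and 8,400 W thresholds. A hedged sketch of that decision; the example envelopes are invented, and the online power calculator remains the authoritative source:

```python
# Decide how many power supply cores to energize for a Sun Blade 6048 shelf.
TWO_CORES_W = 5600    # two energized cores support 5,600 W per shelf
THREE_CORES_W = 8400  # three energized cores support 8,400 W per shelf

def cores_to_energize(envelope_w: float) -> int:
    """Return 2 or 3 cores for a given shelf power envelope, in watts."""
    if envelope_w <= TWO_CORES_W:
        return 2
    if envelope_w <= THREE_CORES_W:
        return 3
    raise ValueError("envelope exceeds 8,400 W; reduce the shelf configuration")

print(cores_to_energize(4800))  # 2 (hypothetical lightly loaded shelf)
print(cores_to_energize(7000))  # 3 (hypothetical densely configured shelf)
```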
Server Modules
Up to 10 Sun Blade 6000 server modules can be inserted vertically beneath the power
supply modules on the Sun Blade 6000 chassis. The Sun Blade 6048 chassis supports up
to 12 Sun Blade 6000 server modules per shelf, or 48 server modules per chassis. The
four hard disk drives on each server module are available for easy hot-swap from the
front of the chassis. Indicator LEDs and I/O ports are also provided on the front of the
server modules for easy access. A number of connectors are provided on the front panel
of each server module, available through a server module adaptor (octopus cable).
Depending on the server module, available ports include a VGA HD-15 monitor port, two
USB 2.0 ports, and a DB-9 or RJ-45 serial port that connects to the server module and
integral service processors.
Passive Midplane
In essence, the passive midplanes in the Sun Blade 6000 and 6048 modular systems are a collection of wires and connectors between different modules in the chassis. Since there are no active components, the reliability of these printed circuit boards is extremely high: in the millions of hours, or hundreds of years. The passive midplane provides electrical connectivity between the server modules and the I/O modules.
All modules, front and rear, with the exception of the power supplies and the fan modules, connect directly to the passive midplane. The power supplies connect to the midplane through a bus bar and to the AC inputs via a cable harness. The redundant
fan modules plug individually into a set of three fan boards, where fan speed control
and other chassis-level functions are implemented. The front fan modules that cool the
PCI Express ExpressModules each connect to the chassis via blind-mate connections.
The main functions of the midplane include:
• Providing a mechanical connection point for all of the server modules
• Providing 12 VDC from the power supplies to each customer-replaceable module
• Providing 3.3 VDC power used to power the System Management Bus devices on each module, and to power the CMM
• Providing a PCI Express interconnect between the PCI Express root complexes on each server module and the EMs and NEMs installed in the chassis
• Connecting the server modules, CMMs, and NEMs to the chassis management network
Figure 5. Distribution of communications links from each Sun Blade 6000 server module: two x8 PCI Express links to the dedicated EMs; x4/x8 PCI Express or XAUI, gigabit Ethernet, and SAS links to the two NEMs; and service processor Ethernet to the CMM
Each server module is energized through the midplane from the redundant chassis power grid. The midplane also provides connectivity to the I2C network in the chassis, letting each server module directly monitor the chassis environment, including fan and power supply status as well as various temperature sensors. A number of I/O links are also routed through the midplane for each server module (Figure 5), with their aggregate bandwidth tallied in the sketch after this list:
• Two x8 PCI Express links connect from each server module to each of the dedicated EMs
• Two x4 or x8 PCI Express links connect from each server module, one to each of the NEMs
• Two gigabit Ethernet links are provided, each connecting to one of the NEMs
• Four x1 Serial Attached SCSI (SAS) links are also provided, with two connecting to each NEM (for future use)
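These per-link rates add up to the aggregate I/O figures given for each server module in Table 2. A minimal tally, using two representative modules; the link counts and rates are those listed in the table, so this is bookkeeping rather than measurement:

```python
# Tally aggregate midplane I/O bandwidth per server module.
def aggregate_gbps(links):
    """links: iterable of (link_count, gbps_per_link) pairs."""
    return sum(count * gbps for count, gbps in links)

# Sun Blade T6320: 2 x8 EM links, 2 x4 NEM links, 2 GbE links, 4 SAS links
t6320 = [(2, 32), (2, 16), (2, 1), (4, 3)]
# Sun Blade X6220: 2 x8 EM links, 2 x8 NEM links, 2 GbE links, 4 SAS links
x6220 = [(2, 32), (2, 32), (2, 1), (4, 3)]

print(aggregate_gbps(t6320))  # 110 (Gbps)
print(aggregate_gbps(x6220))  # 142 (Gbps)
```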
Table 2. Sun Blade server module processor, memory, and I/O characteristics

Server Module     Processor(s)                               Cores/Threads                                        Memory Capacity
Sun Blade T6320   1 UltraSPARC T2 processor                  4, 6, or 8 cores; up to 64 threads                   Up to 64 GB, 16 FB-DIMM slots
Sun Blade T6300   1 UltraSPARC T1 processor                  6 or 8 cores; up to 32 threads                       Up to 32 GB, 8 DIMM slots
Sun Blade X6220   2 Next Generation AMD Opteron processors   4 cores, 4 threads                                   Up to 64 GB, 16 DIMM slots
Sun Blade X6250   2 Intel Xeon Processor 5100 or 5300        5100 series: 4 cores, 4 threads;                     Up to 64 GB, 16 FB-DIMM slots
                  series CPUs                                5300 series: 8 cores, 8 threads
Sun Blade X6450   4 Intel Xeon Processor 7200 or 7300        7200 series: 8 cores, 8 threads;                     Up to 96 GB, 24 FB-DIMM slots
                  series CPUs                                7300 series: 16 cores, 16 threads

Server Module     EM Links                  NEM Links                 Gigabit Ethernet   SAS Links (a)    Aggregate I/O
Sun Blade T6320   2 x8 links, 32 Gbps each  2 x4 links, 16 Gbps each  2, 1 Gbps each     4, 3 Gbps each   110 Gbps
Sun Blade T6300   2 x8 links, 32 Gbps each  2 x8 links, 32 Gbps each  2, 1 Gbps each     4, 3 Gbps each   142 Gbps
Sun Blade X6220   2 x8 links, 32 Gbps each  2 x8 links, 32 Gbps each  2, 1 Gbps each     4, 3 Gbps each   142 Gbps
Sun Blade X6250   2 x8 links, 32 Gbps each  2 x4 links, 16 Gbps each  2, 1 Gbps each     4, 3 Gbps each   110 Gbps
Sun Blade X6450   2 x8 links, 32 Gbps each  2 x4 links, 16 Gbps each  2, 1 Gbps each     4, 3 Gbps each   110 Gbps

a. Server modules with RAID Expansion Module (REM) and Fabric Expansion Module (FEM)
Enterprise-Class Features
Unlike most traditional blade servers, Sun Blade 6000 server modules provide a host of enterprise features that help ensure greater reliability and availability:
• Each server module supports hot-plug capabilities
• Each server module supports four hot-plug disks, and built-in support for RAID 0 or 1 (diskless operation is also supported)1
• Redundant hot-swap chassis-located fans mean greater reliability through decreased part count and no fans located on the server modules
• Redundant hot-swap chassis-located power supply modules mean that no power supplies are located on individual server modules

1. RAID 0, 1, 5, and RAID 0+1 are supported by the Sun Blade X6250 and X6450 server modules with the Sun StorageTek RAID expansion module (REM)
Sun Logical Domains Support in Sun Blade T6320 and T6300 Server Modules
Supported in all Sun servers that utilize Sun processors with chip multithreading (CMT) technology, Sun Logical Domains provide a full virtual machine that runs an independent operating system instance and contains virtualized CPU, memory, storage, console, and cryptographic devices. Within the Sun Logical Domains architecture, a small firmware layer known as the Hypervisor provides a stable, virtualized machine architecture to which an operating system can be written. As such, each logical domain is completely isolated, and the maximum number of virtual machines created on a single platform relies upon the capabilities of the Hypervisor as opposed to the number of physical hardware devices installed in the system. For example, the Sun Blade T6320 server with a single Sun UltraSPARC T2 processor supports up to 64 logical domains, and each individual logical domain can run a unique instance of the operating system1.
By taking advantage of Sun Logical Domains, organizations gain the flexibility to deploy multiple operating systems simultaneously on a single server module. In addition, administrators can leverage virtual device capabilities to transport an entire software stack hosted on a logical domain from one physical machine to another. Logical domains can also host Solaris Containers to capture the isolation, flexibility, and manageability features of both technologies. By deeply integrating logical domains with both the industry-leading CMT capabilities of the UltraSPARC T1 and T2 processors and the Solaris 10 OS, Sun Logical Domains technology increases flexibility, isolates workload processing, and improves the potential for maximum server utilization.
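Because the domain count is bounded by the hypervisor rather than by physical devices, capacity planning for logical domains is essentially an exercise in partitioning the processor's hardware threads. The toy sketch below illustrates the idea; the domain names and sizes are invented, and real provisioning is performed with the Logical Domains Manager rather than a script like this:

```python
# Partition the 64 hardware threads of one UltraSPARC T2 processor
# (8 cores x 8 threads) among hypothetical logical domains.
HW_THREADS = 64

domains = {"primary": 8, "web01": 16, "web02": 16, "db01": 24}  # invented sizes

assigned = sum(domains.values())
assert assigned <= HW_THREADS, "virtual CPUs over-committed"
print(f"{assigned} of {HW_THREADS} threads assigned; {HW_THREADS - assigned} spare")
```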
Binding Interfaces
The Solaris OS allows considerable flexibility in that processes and individual threads can be bound to either a processor or a processor set, if required or desired.
Support for Virtualized Networking and I/O, and Accelerated Cryptography
The Solaris OS contains technology to support and virtualize components and subsystems on the UltraSPARC T2 processor, including support for the dual on-chip 10 Gb Ethernet ports and PCI Express interface. As a part of a high-performance network architecture, CMT-aware device drivers are provided so that applications running within virtualization frameworks can effectively share I/O and network devices. Accelerated cryptography is supported through the Solaris Cryptographic Framework.
Solaris Dynamic Tracing (DTrace) to Instrument and Tune Live Software Environments
When production systems exhibit nonfatal errors or sub-par performance, the sheer complexity of modern distributed software environments can make accurate root-cause diagnosis extremely difficult. Unfortunately, most traditional approaches to solving this problem have proved time-consuming and inadequate, leaving many applications languishing far from their potential performance levels.
The Solaris DTrace facility on both SPARC and x64 platforms provides dynamic instrumentation and tracing for both application and kernel activities, even allowing tracing of application components running in a Java Virtual Machine (JVM)1. DTrace lets developers and administrators explore the entire system to understand how it works, track down performance problems across many layers of software, or locate the cause of aberrant behavior. Tracing is accomplished by dynamically modifying the operating system kernel to record additional data at locations of interest. Best of all, although DTrace is always available and ready to use, it has no impact on system performance when not in use, making it particularly effective for monitoring and analyzing production systems.
1. The terms "Java Virtual Machine" and "JVM" mean a Virtual Machine for the Java platform.
Chapter 3
Server Module Architecture
Sun Blade T6320 Server Module
Figure 6. The Sun Blade T6320 server module with key features called out: 16 FB-DIMM sockets, fabric expansion module (FEM), UltraSPARC T2 processor, ILOM 2.0 service processor card, and RAID expansion module (REM)
With support for up to 64 threads and considerable network and I/O capacity, the Sun Blade T6320 server module virtually doubles the throughput of earlier Sun Blade T6300 server modules. In addition to its processing and memory density, each server module hosts additional modules including an ILOM 2.0 service processor, fabric expansion module (FEM), and RAID expansion module (REM), all while retaining its compact form factor. With the Sun Blade T6320 server module, a single Sun Blade 6000 chassis can support up to 640 threads in just 10 rack units, and up to 3,072 threads can be supported in a single Sun Blade 6048 modular system chassis.
In spite of its innovative new technology, the UltraSPARC T2 processor is fully SPARC v7,
v8, and v9 compatible and binary compatible with earlier SPARC processors. A high-level
block diagram of the UltraSPARC T2 processor is shown in Figure 7.
Figure 7. UltraSPARC T2 processor block-level diagram: eight cores (C0 through C7), each with its own FPU and SPU, connect through a crossbar to eight banks of L2 cache and four memory controller units (MCUs) driving FB-DIMM channels; an on-chip network interface unit provides two 10 Gigabit Ethernet ports, and the system interface provides a x8 PCI Express port
The UltraSPARC T2 processor design recognizes that memory latency is truly the bottleneck to improving performance. By increasing the number of threads supported by each core, and by further increasing network bandwidth, the UltraSPARC T2 processor is able to provide approximately twice the throughput of the UltraSPARC T1 processor. Each UltraSPARC T2 processor provides up to eight cores, with each core able to switch between up to eight threads (64 threads per processor). In addition, each core provides two integer execution units, so that a single UltraSPARC core is capable of executing two threads at a time.
The eight cores on the UltraSPARC T2 processor are interconnected with a full on-chip non-blocking 8 x 9 crossbar switch. The crossbar connects each core to the eight banks of L2 cache, and to the system interface unit for I/O. The crossbar provides approximately 300 GB/second of bandwidth and supports 8-byte writes from a core to a bank and 16-byte reads from a bank to a core. The system interface unit connects networking and I/O directly to memory through the individual cache banks. The use of FB-DIMM memory provides dedicated northbound and southbound lanes to and from the caches, accelerating performance and reducing latency. This approach provides higher bandwidth than with DDR2 memory, with up to 42.4 GB/second of read bandwidth and 21 GB/second of write bandwidth.
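Those read and write figures are consistent with four dual-channel FB-DIMM memory controller units. A rough reconstruction, assuming roughly 5.3 GB/second of northbound (read) bandwidth per DDR2-667 channel and about half that southbound for writes:

```python
# Approximate UltraSPARC T2 memory bandwidth from its FB-DIMM topology.
MCUS = 4                     # memory controller units on the processor
CHANNELS_PER_MCU = 2         # dual-channel FB-DIMM per MCU (assumption)
READ_GBS_PER_CHANNEL = 5.3   # ~DDR2-667: 667 MT/s x 8 bytes
WRITE_GBS_PER_CHANNEL = READ_GBS_PER_CHANNEL / 2  # narrower southbound lanes

read = MCUS * CHANNELS_PER_MCU * READ_GBS_PER_CHANNEL
write = MCUS * CHANNELS_PER_MCU * WRITE_GBS_PER_CHANNEL
print(f"read ~{read:.1f} GB/s, write ~{write:.1f} GB/s")
# read ~42.4 GB/s, write ~21.2 GB/s (the paper cites 42.4 and 21)
```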
Each core provides its own fully pipelined Floating Point and Graphics Unit (FGU), as well as a Stream Processing Unit (SPU). The FGUs greatly enhance floating-point performance over that of the UltraSPARC T1 processor, while the SPUs provide wire-speed cryptographic acceleration, with over 10 popular ciphers supported, including DES, 3DES, AES, RC4, SHA-1, SHA-256, MD5, RSA with up to 2048-bit keys, ECC, and CRC32. Embedding hardware cryptographic acceleration for these ciphers allows end-to-end encryption with no penalty in either performance or cost.
Figure 8. Sun Blade T6320 server module block-level diagram: the UltraSPARC T2 processor drives FB-DIMM memory at 667 MHz, two 10 Gb Ethernet ports, and a x8 PCI Express link to a PEX8548 PCI Express switch, which provides x8 links to the EMs, x4 links to the two NEMs, a link to a dual gigabit Ethernet controller, and a link to the RAID expansion module, whose SAS links cross the passive midplane; the service processor provides VGA, serial, and 10/100 Mbps management Ethernet connectivity to the CMM
For I/O, the UltraSPARC T2 processor incorporates an eight-lane (x8) PCI Express port capable of operating at 4 GB/second bidirectionally. In the Sun Blade T6320 server module, this port interfaces with a PCI Express switch chip that delivers various PCI Express links to other parts of the server module, and to the passive midplane. Two of the PCI Express interfaces provided by the PCI Express switch are made available through PCI Express ExpressModules.
The PCI Express switch also provides PCI links to other internal components, including
sockets for fabric expansion modules (FEMs) and RAID expansion modules (REMs). The
FEM socket allows for future expansion capabilities. The gigabit Ethernet interfaces are
provided by an Intel chip connected to a x4 PCI Express interface on the PCI Express
switch chip. Two gigabit Ethernet links are then routed through the midplane to the
NEMs. The server module provides the logic for the gigabit Ethernet connection, while
the NEM provides the physical interface.
Sun Blade RAID 0/1 Expansion Module
All standard Sun Blade T6320 server module configurations ship with the Sun Blade RAID 0/1 Expansion Module (REM). Based on the LSI SAS1068E storage controller, the Sun Blade RAID 0/1 REM provides a total of eight hard drive interfaces, or links. Four interfaces are used for the on-board hard drives, which may be Serial Attached SCSI (SAS) or Serial ATA (SATA). The other four links are routed to the midplane, where they interface with the NEM for future use. The REM also provides RAID 0, 1, and 0+1.
Sun Blade T6300 Server Module
Figure 9. The Sun Blade T6300 server module with key components called out: UltraSPARC T1 processor, DIMM sockets, service processor, and two hot-plug SAS or SATA 2.5-inch drives
Figure 10. UltraSPARC T1 processor block-level diagram: processor cores and L2 cache banks connect through an on-chip crossbar, with four DDR2 SDRAM channels, a single shared FPU, and a JBus system interface
As shown in Figure 10, the individual processor cores are connected by a high-speed, low-latency crossbar interconnect implemented on the silicon itself. The UltraSPARC T1 processor includes very fast interconnects between the processor, cores, memory, and system resources, including:
• A 134 GB/second crossbar switch that connects all cores
• A JBus interface with a 3.1 GB/second peak effective bandwidth
• Four DDR2 channels (25.6 GB/second total) for faster access to memory
The memory subsystem of the UltraSPARC T1 processor is implemented as follows:
• Each core has an instruction cache, a data cache, an instruction TLB, and a data TLB, shared by the four thread contexts. Each UltraSPARC T1 processor has a twelve-way associative unified Level 2 (L2) on-chip cache, and each hardware thread context shares the entire L2 cache.
• This design results in uniform memory latency from all cores (Uniform Memory Access, UMA, not Non-Uniform Memory Access, NUMA).
• Memory is located close to processor resources, and four memory controllers provide very high bandwidth to memory, with a theoretical maximum of 25.6 GB per second.
• Extensive built-in RAS features include ECC protection of register files, Extended-ECC (similar to IBM's Chipkill feature), memory sparing, soft-error detection, and extensive parity/retry protection of caches.
Each core has a Modular Arithmetic Unit (MAU) that supports modular multiplication and exponentiation to help accelerate Secure Sockets Layer (SSL) processing. There is a single Floating Point Unit (FPU) shared by all cores; thus, the UltraSPARC T1 processor is generally not an optimal choice for applications with floating-point intensive requirements.
Figure 11. Sun Blade T6300 server module block-level diagram: the UltraSPARC T1 processor's JBus (3.2 GB/sec) connects to Fire ASICs acting as two PCI Express bridges, which provide four x8 links (32 Gbps each): two to the EMs and two to the NEMs; an Intel gigabit Ethernet controller on a x4 link and an LSI SAS1068e storage controller provide network and SAS connectivity across the passive midplane; a Motorola MPC885-based ALOM service processor provides serial, USB 2.0, and 10/100 Mbps management Ethernet connectivity to the CMM
For I/O, the UltraSPARC T1 processor's JBus interface connects to Fire ASICs that act as a pair of PCI Express bridges, providing four x8 PCI Express interfaces. Two of these interfaces are made available through PCI Express ExpressModules, and the other two are connected to PCI Express Network Express Modules.
For storage, an LSI SAS1068e controller is included on the server module, providing eight hard drive interfaces, or links. Four interfaces are used for the on-board hard drives, which may be Serial Attached SCSI (SAS) or Serial ATA (SATA). The other four links are routed to the midplane, where they interface with the NEM slots for future use. The storage controller is capable of RAID 0 or 1, and up to two volumes are supported in RAID configurations.
The gigabit Ethernet interfaces are provided by an Intel chip connected to a x4 PCI
Express interface on one of the bridges. Two gigabit Ethernet links are then routed
through the midplane to the NEMs. The server module provides the logic for the gigabit
Ethernet connection, while the NEM provides the physical interface.
ALOM allows the administrator to monitor and control a server, either over a network
or by using a dedicated serial port for connection to a terminal or terminal server.
ALOM provides a command-line interface that can be used to remotely administer
geographically-distributed or physically-inaccessible machines. In addition, ALOM
allows administrators to run diagnostics remotely (such as power-on self-test) that
would otherwise require physical proximity to the server serial port. ALOM can also be
congured to send email alerts of hardware failures, hardware warnings, and other
events related to the server or to ALOM.
The ALOM circuitry runs independently of the server, using the server's standby power. As a result, ALOM firmware and software continue to function when the server operating system goes offline or when the server is powered off. ALOM monitors disk drives, fans, CPUs, power supplies, system enclosure temperature, voltages, and the server front panel, so that the administrator does not have to.
ALOM specifically monitors the following Sun Blade T6300 server module components:
• CPU temperature conditions
• Enclosure thermal conditions
• Fan speed and status
• Power supply status
• Voltage thresholds
Sun Blade X6220 Server Module
Figure 12. The Sun Blade X6220 server module with key components called out: midplane connector, 16 DDR2 667 DIMM sockets, AMD Opteron processors, and service processor
Figure 13. AMD Opteron processor block-level diagram: two cores, each with 128 KB of L1 cache and a dedicated 1 MB L2 cache, share an integrated memory controller and three HyperTransport links
Enhancements of the AMD Opteron processor over the legacy x86 architecture include:
• 16 64-bit general-purpose integer registers that quadruple the general-purpose register space available to applications and device drivers as compared to x86 systems
• 16 128-bit XMM registers that double the register space of any current SSE/SSE2 implementation, providing enhanced multimedia performance
• A full 64-bit virtual address space, with 40 bits of physical memory addressing and 48 bits of virtual addressing that can support up to 256 terabytes of virtual address space
• Support for 64-bit operating systems, providing full, transparent, and simultaneous 32-bit and 64-bit platform application multitasking
• A 128-bit wide, on-chip DDR memory controller that supports ECC and Enhanced ECC and provides low-latency memory bandwidth that scales as processors are added
Each processor core has a dedicated 1MB Level-2 cache, and both cores use the System
Request Interface and Crossbar Switch to share the Memory Controller and access the
three HyperTransport links. This sharing represents an effective approach since
performance characterizations of single-core based systems have revealed that the
memory and HyperTransport bandwidths are typically under-utilized, even while
running high-end server workloads.
The AMD Opteron processor with integrated HyperTransport technology links provides a scalable bandwidth interconnect among processors, I/O subsystems, and other chipsets. HyperTransport technology interconnects help increase overall system performance by removing I/O bottlenecks, efficiently integrating with legacy buses, increasing bandwidth and speed, and reducing processor latency. At 16 x 16 bits and 1 GHz operation, HyperTransport technology provides support for up to 8 GB/s of bandwidth per link.
As shown in Figure 14, the AMD Opteron processor uses DDR2 memory, running at a faster memory bus clock rate of 667 MHz. Up to 10.7 GB per second of memory bandwidth is provided for each memory controller, for a total aggregate memory bandwidth of 21.4 GB per second. These higher clock rates can be sustained even when the CPUs are configured with up to four DDR2 DIMMs. When all eight DIMMs are populated, the clock speed drops to 533 MHz. The total memory capacity available is 64 GB per server module.
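Both the HyperTransport and memory figures in the preceding paragraphs fall out of simple width-times-rate arithmetic; a short sketch (DDR signaling counts both clock edges):

```python
# HyperTransport: 16 bits per direction at 1 GHz, double data rate.
ht_one_way = (16 / 8) * 1.0 * 2        # bytes x GHz x DDR = 4.0 GB/s
ht_link = ht_one_way * 2               # both directions = 8.0 GB/s per link

# DDR2-667 memory: a 128-bit (16-byte) controller at ~667 MT/s.
mem_per_controller = 16 * 0.667        # ~10.7 GB/s per controller
mem_total = 2 * 10.7                   # two processors = 21.4 GB/s aggregate

print(f"HyperTransport: {ht_link:.1f} GB/s per link")
print(f"Memory: {mem_per_controller:.1f} GB/s per controller, {mem_total:.1f} GB/s total")
```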
Figure 14. Sun Blade X6220 server module block-level diagram: two Next Generation AMD Opteron 2000 Series processors, each with 10.7 GB/sec of DDR2 667 memory bandwidth, connect over 8 GB/s HyperTransport links to nVidia CK8-04 and IO-04 PCI Express bridges, which provide x8 links (32 Gbps) to the EMs and NEMs plus gigabit Ethernet interfaces; an LSI SAS1068e controller on a x4 link provides SAS links to the midplane, a Compact Flash slot attaches via IDE, and a Motorola MPC8275-based service processor provides VGA and serial ports, video-over-LAN redirection, and 10/100 Mbps management Ethernet connectivity to the CMM
The nVidia PCI Express bridges are connected to the AMD Opteron processors over 8 GB
per second HyperTransport links to provide maximum throughput capacity to the PCI
Express lanes that are directed through the passive midplane. Two HyperTransport links
connect the two CPUs, with one used for cache coherency and the other for I/O
communication between the processors and the second PCI Express bridge. These links
also run at 8 GB per second. Two x8 PCI Express interfaces are pulled from each of the
PCI Express bridges, with each link providing a 32 Gb per second interface through the
midplane. Each PCI Express bridge also provides a gigabit Ethernet interface that is
routed through the passive midplane to the PCI Express Network Express Modules.
Sun Blade X6220 server modules also provide a Compact Flash slot, connected to the system through an IDE connection to the nVidia chipset. By inserting a standard compact flash device, administrators can store valuable data or even install a bootable operating environment. The compact flash device is internal to the server module, and it cannot be removed unless the server module is removed from the chassis.
As in the Sun Blade T6300 server module, an LSI SAS1068e controller is located on the
Sun Blade X6220 server module, providing eight hard drive interfaces. Four interfaces
are used for the on-board hard drives (either SAS or SATA). The other four links are
routed to the midplane for future use. The storage controller is capable of RAID 0 or 1
and up to two volumes are supported in RAID congurations.
The ILOM service processor also allows the administrator to remotely manage the
server, independently of the operating system running on the platform and without
interfering with any system activity. To facilitate full-featured remote management, the
ILOM service processor provides remote keyboard, video, mouse, and storage (RKVMS)
support that is tightly integrated with the Sun Blade server modules. Together these
capabilities allow the server module to be administered remotely, while accessing
keyboard, mouse, video and storage devices local to the administrator (Figure 15). ILOM
Remote Console support is provided on the ILOM service processor and can be
downloaded and executed on the management console. Input/output of virtual devices
is handled between ILOM on the Sun Blade server module and ILOM Remote Console
on the Web-based client management console.
Figure 15. Remote keyboard, video, mouse, and storage (RKVMS) support in the ILOM service processor allows full-featured remote management for Sun Blade server modules: the ILOM Remote Console on the management console receives redirected graphics over Ethernet (up to 1024x768 at 60 Hz) and can present a local floppy disk or floppy image and a CD-ROM, DVD-ROM, or .iso image to the server module as virtual devices
Sun Blade X6250 Server Module
Figure 16. The Sun Blade X6250 server module with key components called out: midplane connector, Intel Xeon processors, 16 FB-DIMM 667 sockets, and RAID expansion module
Figure 17. Intel Xeon Processor 5100 and 5300 series block-level diagrams: the dual-core 5100 series provides two cores, each with 64 KB of L1 cache, sharing a 4 MB L2 cache; the quad-core 5300 series provides four such cores as two pairs, each pair sharing a 4 MB L2 cache
Figure 18. Sun Blade X6250 server module block-level diagram: two Intel Xeon processors connect over 5.3 GB/sec front side buses to the Intel 5000 MCH, which drives two 10.5 GB/s FB-DIMM 667 memory branches, a x8 PCI Express link (32 Gbps) to one EM, and a x8 link to the fabric expansion module (FEM) socket (x4 PCI Express or XAUI to the NEMs); the ESB2 I/O bridge provides the x8 link to the second EM, dual gigabit Ethernet interfaces to the NEMs, the Compact Flash IDE connection, and a x4 link to the RAID expansion module (REM), whose SAS hardware RAID controller serves the SAS/SATA disks and routes SAS links to the midplane; an AST 2000 service processor provides VGA, serial, and 10/100 Mbps management Ethernet connectivity to the CMM
The MCH also provides the system with high-speed memory controllers and PCI Express bridges, as well as a high-speed link to a second I/O bridge (the ESB2 I/O control hub). The total memory bandwidth provides read speeds of up to 21.3 GB per second and write speeds of up to 17 GB per second. One of the PCI Express x8 lane interfaces from the MCH is directly routed to a PCI Express ExpressModule via the passive midplane. The other interface is routed to the Fabric Expansion Module (FEM) socket, available for future expansion capabilities.
The Intel ESB2 PCI Express bridge provides connectivity to the other PCI Express ExpressModule and access to the dual gigabit Ethernet interfaces that are routed through the passive midplane to the NEMs. This bridge also provides the IDE connection to the compact flash device, used for boot and storage capabilities.
Sun Blade X6250 RAID Expansion Module (REM)
All standard Sun Blade X6250 server module configurations ship with the Sun Blade X6250 RAID Expansion Module (REM). The REM provides a total of eight SAS ports, battery-backed cache, and RAID 0, 1, 5, and 1+0 capabilities. Using the REM, the server module provides SAS connectivity on the internal drive slots. Four 1x SAS links are also routed to the NEMs for future storage expansion. Build-to-order Sun Blade X6250 server modules can be ordered without the REM. While these server modules will not provide SAS support, SATA connectivity to the internal hard disk drives can be provided by the Intel ESB2 PCI Express bridge.
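For the four-drive server modules, the RAID level chosen on the REM trades usable capacity for redundancy in the usual way. A generic sketch of the standard RAID capacity arithmetic (the 146 GB drive size is a hypothetical example, not a configuration requirement):

```python
# Usable capacity of n equal drives under the RAID levels the REM supports.
def usable_gb(level: str, n: int, drive_gb: float) -> float:
    if level == "0":
        return n * drive_gb           # striping, no redundancy
    if level == "1":
        return drive_gb               # mirrored pair (n == 2)
    if level == "5":
        return (n - 1) * drive_gb     # one drive's worth of parity
    if level == "1+0":
        return (n // 2) * drive_gb    # striped mirrors
    raise ValueError(level)

for level in ("0", "5", "1+0"):
    print(f"RAID {level}: {usable_gb(level, 4, 146):.0f} GB usable from 4 x 146 GB")
# RAID 0: 584 GB, RAID 5: 438 GB, RAID 1+0: 292 GB
```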
Sun Blade X6450 Server Module
Figure 19. The Sun Blade X6450 server module supports up to four Intel Xeon processors; key components called out include the midplane connector, Intel Xeon processors, 24 FB-DIMM 667 sockets, and Compact Flash storage
Figure 20. Intel Xeon Processor 7200 and 7300 series block-level diagrams
In the dual-core Intel Xeon Processor 7200 series, each die includes one processor core, while in the quad-core Intel Xeon Processor 7300 series, each die contains two cores. In a Sun Blade X6450 server module with four processors, this dense configuration provides up to 16 execution cores in a compact blade form factor. The 7000 sequence processor families share these additional features:
• An on-die Level 1 (L1) instruction and data cache (64 KB per die)
• An on-die Level 2 (L2) cache (4 MB per die, for a total of 8 MB in packages with two die)
• Multiple, independent front side buses (FSBs) that act as high-bandwidth system interconnects
Figure 21. Sun Blade X6450 server module block-level diagram: four Intel Xeon processors connect over 5.3 GB/sec front side buses to the Intel 7000 MCH, which drives 8.5 GB/s FB-DIMM 667 memory branches, a x8 PCI Express link (32 Gbps) to one EM, a link to the fabric expansion module socket (x4 PCI Express or XAUI to the NEMs), and a x4 link to an optional SAS hardware RAID controller with SAS links to the midplane; the ESB2 I/O bridge provides the x8 link to the second EM, dual gigabit Ethernet interfaces to the NEMs, the Compact Flash IDE connection, and VGA and serial ports; an AST 2000 service processor provides 10/100 Mbps management Ethernet connectivity to the CMM
The MCH also provides the system with high-speed memory controllers and PCI Express bridges, as well as a high-speed link to a second I/O bridge (the ESB2 I/O control hub). The total memory bandwidth provides read speeds of up to 21.3 GB per second and write speeds of up to 17 GB per second. One of the PCI Express x8 lane interfaces from the MCH is directly routed to a PCI Express ExpressModule via the passive midplane. The other interface is routed to the Fabric Expansion Module (FEM) socket, available for future expansion capabilities. An x4 PCI Express connection powers an optional RAID Expansion Module (REM) that can be configured to access Serial Attached SCSI (SAS) storage devices over the passive midplane.
The Intel ESB2 I/O PCI Express bridge provides connectivity to the other PCI Express ExpressModule and access to the dual gigabit Ethernet interfaces that are routed through the passive midplane to the NEMs. This bridge also provides the IDE connection to the compact flash device. The Sun Blade X6450 server module is diskless and contains no traditional hard drives. The integrated CompactFlash device provides a means for internal storage that can be used as a boot device or as a generic storage medium.
Chapter 4
I/O Expansion, Networking, and Management
PCI Express ExpressModules (EMs)
Figure 22. A pair of 8-lane (x8) PCI Express slots allows up to two PCI Express ExpressModules per server module in the Sun Blade 6000 (shown) and 6048 chassis
With the industry-standard PCI Express ExpressModule form factor, EMs are available for multiple types of connectivity, including:
• 4 Gb Fibre Channel, dual port (QLogic, SG-XPCIE2FC-QB4-Z)
• 4 Gb Fibre Channel, dual port (Emulex, SG-XPCIE2FC-EB4-Z)
Figure 23. Several PCI Express ExpressModules available for the Sun Blade 6000 modular server.
PCI Express Network Express Modules (NEMs)
Figure 24. The Gigabit Ethernet Pass-Through NEM provides a 10/100/1000BASE-T port for each installed Sun Blade server module (Sun Blade 6000 Pass-Through NEM shown)
Figure 25. The Sun Blade 6048 InfiniBand Switched NEM provides eight switched 12x InfiniBand connections to the two on-board 24-port switches, and twelve pass-through gigabit Ethernet ports, one to each Sun Blade 6000 server module in the Sun Blade 6048 shelf
Each Sun Blade 6048 InfiniBand Switched NEM employs two of the same Mellanox InfiniScale III 24-port switch chips used in the Sun DS 3456 and 3x24 InfiniBand switches, providing 12 internal and 12 external connections. Redundant internal connections are provided from Mellanox ConnectX HCA chips to each of the switch chips, allowing the system to route around failed links. Additionally, 12 pass-through gigabit Ethernet connections are provided to access the gigabit Ethernet interfaces on individual Sun Blade 6000 server modules mounted in the Sun Blade 6048 modular system. The same standard Compact Small Form-factor Pluggable (CSFP) connectors are used on the back panel for direct connection to the Sun DS 3456 or 3x24 switch, with each 12x connection providing three 4x InfiniBand connections.
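A quick check of the port accounting, assuming the standard decomposition of one 12x connector into three 4x links:

```python
# Port accounting for the Sun Blade 6048 InfiniBand Switched NEM.
SWITCH_CHIPS = 2
PORTS_PER_CHIP = 24                  # Mellanox InfiniScale III
INTERNAL_PER_CHIP = 12               # one link per server module in the shelf
EXTERNAL_PER_CHIP = PORTS_PER_CHIP - INTERNAL_PER_CHIP   # 12

CONNECTORS_12X = 8
LINKS_4X_PER_CONNECTOR = 3           # a 12x connector carries three 4x links

external_4x = CONNECTORS_12X * LINKS_4X_PER_CONNECTOR    # 24
assert external_4x == SWITCH_CHIPS * EXTERNAL_PER_CHIP   # 24 == 24
print(f"{external_4x} external 4x links on {CONNECTORS_12X} 12x connectors")
```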
CMM Architecture
A portion of the CMM functions as an unmanaged switch dedicated exclusively to remote management network traffic, letting administrators access the remote management functions of the server modules. The switch in the CMM provides a single network interface to each of the server modules and to each of the NEMs, as well as to the service processor located on the CMM itself. Figure 26 provides an illustration and a block-level diagram of the Sun Blade 6000 CMM. The Sun Blade 6048 CMM has a different form factor but provides the same functionality.
Figure 26. The CMM provides a management network that connects to each server module, the two NEMs, and the CMM itself (Sun Blade 6000 CMM shown)
Chapter 5
Conclusion
Sun's innovative technology and open-systems approach make modular systems attractive across a broad set of applications and activities, from deploying dynamic Web services infrastructure to building datacenters that run demanding HPC codes. The Sun Blade 6000 modular system provides the promised advantages of modular architecture while retaining essential flexibility for how technology is deployed and managed. The Sun Blade 6048 modular system extends and amplifies these strengths, allowing organizations to build ultra-dense infrastructure that can scale to provide the world's largest terascale and petascale supercomputing clusters and grids.
Sun's standard and open-systems based approach yields choice and avoids compromise, providing a platform that benefits from widespread industry innovation. With chassis designed for investment protection into the future, organizations can literally cable once and change their deployment options as required, mixing and matching server modules as desired. A choice of Sun SPARC, Intel Xeon, or AMD Opteron based server modules and a choice of operating systems make it easy to choose the right platform for essential applications. Industry-standard I/O provides leading flexibility and leading throughput for individual servers. Transparent networking and management mean that the Sun Blade 6000 and 6048 modular systems fit easily into existing network and management infrastructure.
The Sun Blade 6000 and 6048 modular systems get blade architecture right. Together with the Sun Blade 8000 and 8000 P modular systems, Sun now has one of the most comprehensive modular system families in the industry. This breadth of coverage translates directly to savings in terms of administration and management. For example, unified support for the Solaris OS across all server modules means that the same features and functionality are available on all processor platforms. This approach saves time in both training and administration, even as the system delivers agile infrastructure for the organization's most critical applications.
Sun Microsystems, Inc. 4150 Network Circle, Santa Clara, CA 95054 USA Phone 1-650-960-1300 or 1-800-555-9SUN (9786) Web sun.com
© 2007-2008 Sun Microsystems, Inc. All rights reserved. Sun, Sun Microsystems, the Sun logo, CoolThreads, Java, JVM, Solaris, Sun Blade, Sun Fire, N1 and ZFS are trademarks or registered trademarks of Sun
Microsystems, Inc. and its subsidiaries in the United States and other countries. All SPARC trademarks are used under license and are trademarks or registered trademarks of SPARC International, Inc. in the US and
other countries. Products bearing SPARC trademarks are based upon an architecture developed by Sun Microsystems, Inc. Intel Xeon is a trademark or registered trademark of Intel Corporation or its subsidiaries in
the United States and other countries. AMD Opteron is a trademark or registered trademarks of Advanced Micro Devices. Information subject to change without notice. Printed in USA
SunWIN #:494863 06/08