
SUN BLADE 6000 AND 6048 MODULAR SYSTEMS
Open Modular Architecture with a Choice of Sun SPARC, Intel Xeon, and AMD Opteron Platforms
White Paper
June 2008

Sun Microsystems, Inc.

Table of Contents
Executive Summary
An Open Systems Approach to Modular Architecture
  The Promise of Blade Architecture
  The Sun Blade 6000 and 6048 Modular Systems
  Open and Modular System Architecture
Sun Blade 6000 and 6048 Modular Systems Overview
  Chassis Front Perspective
  Chassis Rear Perspective
  Passive Midplane
  Server Modules Based on Sun SPARC, Intel Xeon, and AMD Opteron Processors
  A Choice of Operating Systems
Server Module Architecture
  Sun Blade T6320 Server Module
  Sun Blade T6300 Server Module
  Sun Blade X6220 Server Module
  Sun Blade X6250 Server Module
  Sun Blade X6450 Server Module
I/O Expansion, Networking, and Management
  Server Module Hard Drives
  PCI Express ExpressModules (EMs)
  PCI Express Network Express Modules (NEMs)
  Transparent and Open Chassis and System Management
  Sun xVM Ops Center
Conclusion


Executive Summary
The Participation Age is driving new demands that are focused squarely on the
capabilities of the datacenter. Web services and rapidly escalating Internet use are
driving competitive organizations to lead with innovative new services and scalable,
dynamic infrastructure. High performance computing (HPC) is constantly finding new
applications in both science and industry, fostering new demands for performance and
density. Agility is paramount, and organizations must be able to respond quickly to
unpredictable needs for capacity, adding compute power or growing services on
demand. At the same time, most datacenters are rapidly running out of space, power,
and cooling, even as energy costs continue to rise. Rapid growth must be met with
consolidated infrastructure, controlled and predictable costs, and efficient
management practices. Simply adding more low-density, power-consumptive servers is
clearly not the answer.
Blade server architecture offers considerable promise toward addressing these issues
through increased compute density, improved serviceability, and lower levels of
exposed complexity. Unfortunately, most legacy blade platforms don't provide the
flexibility needed by many of today's Web services and HPC applications.
Complicating matters, many legacy blade server platforms lock customers into a
proprietary and vendor-specific infrastructure that often requires redesign of existing
network, management, and storage environments. These legacy chassis designs also
often artificially constrain expansion capabilities. As a result, traditional blade
architectures have been largely restricted to low-end Web and IT services.
Responding to these challenges, the Sun Blade 6000 and 6048 modular systems
provide an open modular architecture that delivers the benefits of blade architecture
without the common drawbacks. Optimized for performance, efficiency, and density,
these platforms take an open systems approach, employing the latest processors,
operating systems, industry-standard I/O modules, and transparent networking and
management. With a choice of server modules based on Sun SPARC, Intel Xeon,
and AMD Opteron processors, organizations can select the platforms that best match
their applications or existing infrastructure, without worrying about vendor lock-in.
Together with the successful Sun Blade 8000 and 8000 P modular systems, the Sun
Blade 6000 and 6048 modular systems present a comprehensive multitier blade
portfolio that lets organizations deploy the broadest range of applications on the most
ideal platforms. The result is modular architecture that serves the needs of the
datacenter and the goals of the business while protecting existing investments into the
future. This document describes the Sun Blade 6000 and 6048 modular systems along
with their key applications, architecture, and components.


Chapter 1

An Open Systems Approach to Modular Architecture


Organizations operating traditional IT infrastructure, business processing, and
back-office applications are always looking for ways to cut costs and safely consolidate
infrastructure. For many, large numbers of older and less efficient systems constrain the
ability to grow and adapt, both physically and computationally. Emerging segments
such as Web services, along with a renewed focus on high performance computing
(HPC), are demanding computational performance, density, and dramatic scalability.
With most datacenters constrained by space, heat, or power, these issues are very real.
Successful solutions must be efficient, cost effective, and reliable, with investment
protection factored into fundamental design considerations.
Fortunately, new technology is yielding opportunities for increased efficiency and
flexibility in the datacenter. Dual- and multicore processor technologies are doubling
compute density every other year. Virtualization technologies and more powerful
servers are making it possible to consolidate widely distributed datacenters using
smaller numbers of more powerful servers. Standard high-bandwidth networking and
interconnect technologies are becoming more affordable. Modern provisioning
technology makes it possible to dynamically readjust workloads on the fly.
Regrettably, most current server form factors have failed to take full advantage of these
trends. For instance, most traditional rackmount servers require a box swap in order to
allow an organization to deploy new CPU and I/O technology. Modular architecture
offers the opportunity to rapidly harvest the returns of new technology advances, while
serving the constantly changing needs of the enterprise.

The Promise of Blade Architecture


At its best, modular or blade server architecture blends the enterprise availability and
management features of vertically scalable platforms with the scalability and economic
advantages of horizontally scalable systems. In general, modular architectures offer
considerable promise, and can contribute to:
• Higher compute density, providing more processing power per rack unit (RU) than
rackmount systems
• Increased serviceability and availability, featuring shared common system
components such as power, cooling, and I/O interconnects
• Reduced complexity, through fewer required components, cable and component
aggregation, and consolidated management
• Faster service expansion and bulk deployment, letting organizations expand or
scale existing services and flexibly pre-provision chassis and I/O components
• Lowered costs, since modular servers can be less expensive to acquire, easier to
service, and easier to manage


While some organizations adopted first-generation blade technology for Web servers or
simple IT infrastructure, many legacy blade platforms have not been able to deliver on
this promise for a broader set of applications. Part of the problem is that most legacy
blade systems are based on proprietary architectures that lock adopters into an
extensive infrastructure that constrains deployment. In addition, though vendors
typically try to price server modules economically, they often charge a premium for the
required proprietary I/O and switching infrastructure. Availability of suitable
computational platforms has also been problematic.
Together, these constraints caused trade-offs in both features and performance that
had to be weighed when considering blade technology for individual applications:
• Power and cooling limitations often meant that processors were limited to less
powerful mobile versions.
• Limited processing power, memory capacity, and I/O bandwidth severely constrained
the applications that could be deployed on blade server platforms.
• Proprietary tie-ins and other constraints in chassis design dictated networking
topology, and limited I/O expansion possibilities to a small number of proprietary
modules.
These compromises in chassis design were largely the result of a primary focus on
density, with smaller chassis requiring small-format server modules. Ultimately these
designs limited the broad application of blade technology.

Sun Blade 6000 and 6048 Modular Systems


To address the shortcomings of earlier blade platforms, Sun started with a design point
focused on the needs of the datacenter, rather than with preconceptions of chassis
design. With this innovative and truly modular approach and a no-compromise feature
set, the newly expanded Sun Blade family of modular systems offers considerable
advantages for a wide range of applications. Organizations gain the promised benefits
of blades, and can save more by deploying a broader range of their applications on
modular system platforms.
Scalable, Expandable, and Serviceable Multitier Architecture
Sun Blade 6000 and 6048 modular systems let organizations deploy multitier
applications on a single unified modular architecture. These systems support all
major volume CPU architectures, including UltraSPARC T1 and T2 processors with
CoolThreads technology, Intel Xeon processors, and Next Generation AMD
Opteron processors. The Solaris Operating System (Solaris OS) is supported
uniformly on all platforms, and support is also provided for Linux and Windows
operating systems as appropriate.
By offering the fastest AMD, Intel, and UltraSPARC T1 and T2 processors available,
large memory, and high I/O capacity, these systems support a very broad range of
applications. In addition, the Sun Blade 6000 and 6048 modular systems achieve
better power efficiency by consolidating power and cooling infrastructure for
multiple systems into the modular system chassis. The result is high-performance
infrastructure that packs more performance and functionality into a smaller space,
both in terms of real estate and power envelope.
With innovative chassis design, Sun Blade modular systems allow organizations to
take full advantage of future technology without forklift upgrades.
Organizations can independently service, upgrade, and expand compute, I/O,
power, cooling, and management modules. All major components are hot
pluggable and hot swappable, including I/O modules.

Sun Blade Transparent Management


Many blade vendors provide management solutions that lock organizations into
proprietary management tools. With the Sun Blade 6000 and 6048 modular
systems, customers have the choice of using their existing management tools or
Sun Blade Transparent Management. Sun Blade Transparent Management is a
standards-based, cross-platform tool that provides direct management of
individual server modules and of chassis-level modules using Sun Integrated
Lights Out Management (ILOM). With direct management access to server
modules, existing or favorite management tools from Sun or third parties can be
used. With this approach, administrative staff productivity can be retained, with
no additional training or changes in management practices.
Open and Independent Industry-Standard I/O
The Sun Blade 6000 and 6048 modular systems provide a cable-once architecture
with complete hardware isolation of compute and I/O modules. Sun supports true
industry-standard I/O on its modular system platforms with a design that
completely separates CPU and I/O modules. Sun Blade modular systems utilize
standard PCI Express I/O architecture and adapters, the same technology that
dominates the rackmount server industry. I/O adapters from multiple vendors are
available to work with Sun Blade modular systems.
A truly modular design based on industry-standard hot-pluggable I/O means that
systems are easier to install and service, providing simpler administration,
higher reliability, and better compatibility with existing network and storage
environments. For instance, replacing an I/O module in a Sun Blade modular
system requires less than a minute.

Highly-Efficient Cooling
Traditional blade platforms have a reputation for being hot and unreliable, a
reputation caused by systems with insufficient cooling and chassis airflow. Not
only do higher temperatures negatively impact electronic reliability, but hot and
inefficient systems require more datacenter cooling infrastructure, with its
associated footprint and power draw. In response, the Sun Blade 6000 modular
system provides optimized cooling and airflow that can lead to reliable system
operation and efficient datacenter cooling.
In fact, Sun Blade modular systems deliver the same cooling and airflow capacity
as Sun's rackmount systems, for both SPARC and x64 server modules,
resulting in reliable system operation and less required cooling infrastructure.
Better airflow can translate directly into better reliability, reduced downtime, and
improved serviceability. These systems also help organizations meet growing
demand while preserving existing datacenters.

Virtually Unmatched Investment Protection with the Sun℠ Refresh Service
Computing technology is constantly evolving, delivering improved performance
and new energy efficiencies over time. Unfortunately, this progress, combined with
traditional purchasing models, often results in server sprawl as businesses add new
servers year over year to meet growing needs for computational infrastructure.
This consumptive model causes real issues, driving datacenter buildout and power
and cooling costs that are often well in excess of hardware acquisition costs.
The Sun℠ Refresh Service for Sun Blade modular systems lets organizations break
away from the traditional acquire-and-depreciate life cycle, replenishing
datacenters with fresh technology and providing virtually unmatched investment
protection. With this service, IT managers can adapt to ongoing changes in
technology and business needs at lower costs, refreshing the datacenter
frequently in order to reap the benefits offered by the latest advancements in
technology. Increasing the productivity of datacenter infrastructure with the Sun
Refresh Service also minimizes the need to add more datacenter space.
Sun Blade modular systems in particular complement this approach, since
compute elements can be easily upgraded with minimal disruption to the rest of
the infrastructure. Careful planning has gone into the Sun Blade 6000 and 6048
modular systems to help ensure that they provide the power, cooling, and I/O
headroom to operate future server modules. The Sun Refresh Service is being
expanded in phases to different geographies around the world. Please check
http://www.sun.com/blades for service availability in desired locations.

Open and Modular System Architecture


Along with the Sun Blade 8000 and 8000 P modular systems, the Sun Blade 6000 and
6048 modular systems provide a new approach to modular system architecture. This
approach combines careful long-term chassis design with an open and standard
systems architecture.


Innovative Industry-Standard Design


Providing choice in modular system platforms is essential, both to help enable the
broadest set of applications, and to provide the best investment protection for a range
of different organizations and their requirements. Sun Blade 6000 and 6048 modular
systems offer choice and key innovations for modular computing.
A Choice of Processor Architectures and Operating Systems
Sun Blade 6000 and 6048 modular systems support a range of full-performance,
full-featured Sun Blade 6000 server modules.
• The Sun Blade T6320 server module offers support for the massively threaded
UltraSPARC T2 processor with four, six, or eight cores, up to 64 threads,
and support for up to 64 GB of memory.
• The Sun Blade T6300 server module provides a single socket for an
UltraSPARC T1 processor, featuring six or eight cores, up to 32 threads,
and support for up to 32 GB of memory.
• The Sun Blade X6220 server module provides support for two Next Generation
AMD Opteron 2000 Series processors and support for up to 64 GB of memory.
• The Sun Blade X6250 server module provides two sockets for Dual-Core Intel
Xeon Processor 5100 series or Quad-Core Intel Xeon Processor 5300 series
CPUs, with up to 64 GB of memory per server module.
• The Sun Blade X6450 server module provides four sockets for Dual-Core Intel
Xeon Processor 7200 series or Quad-Core Intel Xeon Processor 7300 series CPUs,
with up to 96 GB of memory per server module.
Each server module provides significant I/O capacity as well, with up to 32 lanes of
PCI Express bandwidth delivered from each server module to the multiple
available I/O expansion modules (a total of up to 142 Gbps per server
module). To enhance availability, server modules have no power supplies or
fans, and feature four hot-swap disks with built-in hardware RAID. Organizations
can deploy server modules based on the processors and operating system that
best serve their applications or environment. Different server modules can be
mixed and matched in a single chassis, and deployed and redeployed as needs
dictate.

Complete Separation Between CPU and I/O Modules


Sun Blade 6000 and 6048 modular system design avoids compromises because it
provides a complete separation between CPU and I/O modules. Two types of I/O
modules are supported:
• Up to two industry-standard PCI Express ExpressModules (EMs) can be dedicated
to each server module.
• Up to two PCI Express Network Express Modules (NEMs) provide bulk I/O for all of
the server modules installed in the system.


Through this flexible approach, server modules can be configured with different
I/O options depending on the applications they host. I/O modules are
hot-pluggable, and customers can choose from Sun-branded or third-party adapters
for networking, storage, clustering, and other I/O functions.

Transparent Chassis Management Infrastructure


Within the Sun Blade 6000 and 6048 modular systems, a Chassis Monitoring
Module (CMM) works in conjunction with the service processor on each server
module to form a complete and transparent management solution. Each Sun
Blade 6000 server module contains its own directly addressable management
service processor that is accessible through the CMM. Though similar in function,
these service processors vary with the individual server modules. Generally, these
service processors support Lights Out Management (LOM), and provide support for
IPMI, SNMP, CLI (through serial console or SSH), and HTTP(S) management
methods. In addition, Sun xVM Ops Center (formerly Sun Connection and Sun N1
System Manager software) provides discovery, aggregated management, and
bulk deployment for multiple systems.
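
As a concrete illustration of this direct, standards-based access, the hedged sketch below polls a server module's service processor over IPMI using the standard ipmitool utility, driven from Python. The hostname and credentials are placeholders; actual addresses depend on how the management LAN is configured.

```python
# Hedged sketch: query a blade's ILOM service processor over IPMI.
import subprocess

def sp_chassis_status(host: str, user: str, password: str) -> str:
    """Return the raw 'chassis status' output from a service processor."""
    result = subprocess.run(
        ["ipmitool", "-I", "lanplus", "-H", host, "-U", user,
         "-P", password, "chassis", "status"],
        capture_output=True, text=True, check=True,
    )
    return result.stdout

# Hypothetical management address for the blade in slot 0.
print(sp_chassis_status("blade0-sp.example.com", "root", "changeme"))
```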
Innovative and Highly-Reliable Chassis Design for Different Needs
Sun Blade 6000 and 6048 modular systems are intended for a long life, with a
design that assumes ongoing improvements in technology. The chassis integrates
AC power supplies and cooling fans for all of the server and I/O modules. This
approach keeps these components off of the server modules, making them
efficient and more reliable. Power supplies and fans in the chassis are designed for
ease of service, hot-swappability, and redundancy. The chassis provides power and
cooling infrastructure to support current and future CPU and memory
configurations, helping to ensure that the chassis life cycle will span multiple
generations of processor upgrades. All modular components, such as the CMM,
server modules, EMs, and NEMs, are hot-plug capable. In addition, I/O paths can
be configured in a redundant fashion.
One Architecture with a Choice of Chassis
Organizations need modular chassis that allow them to deploy exactly the amount
of processing and I/O that they require, while scaling effectively to meet their
needs. With a single unified architecture, Sun Blade 6000 and 6048 modular
systems provide different levels of capacity. For smaller incremental growth, the
Sun Blade 6000 modular system is provided in a compact rackmount chassis that
occupies 10 rack units (10 RU). Each Sun Blade 6000 chassis can house up to 10
server modules, providing support for up to 40 server modules per rack. Designed
for maximum density and scalability, the Sun Blade 6048 modular system features
a standard rack-size chassis that facilitates the deployment of high-density
infrastructure. By eliminating all of the hardware typically used to rack-mount
individual blade chassis, the Sun Blade 6048 modular system provides 20 percent
more usable space in the same physical footprint. Up to 48 Sun Blade 6000
server modules can be deployed in a single Sun Blade 6048 modular system.

A Choice of Sun SPARC, Intel Xeon, and AMD Opteron Processors


Legacy blade platforms were often restrictive in the processor architectures they
supported, limiting innovation for modular systems and forcing difficult architectural
choices for adopters. In contrast, Sun Blade 6000 and 6048 modular systems offer a
choice of server modules based on UltraSPARC T2 or T1 processors, Intel Xeon
processors, or Next Generation AMD Opteron 2000 Series processors. In addition, Sun
Blade 6000 server modules provide large memory capacities, while the individual
chassis provide significant power and cooling capacity. The available Sun Blade 6000
server modules are described below.

Sun Blade T6320 Server Module


Based on the industry's first massively threaded system on a chip (SoC), the
UltraSPARC T2 processor-based Sun Blade T6320 server module brings
next-generation chip multithreading (CMT) to a modular system platform. Building on
the strengths of its predecessor, the UltraSPARC T2 processor offers support for
eight threads per core, and integrates memory control, caches, networking, I/O,
and cryptography on the processor die. Four-, six-, and eight-core UltraSPARC T2
processors are supported, yielding up to 64 threads. Like Sun's rackmount Sun
SPARC Enterprise T5120 and T5220 servers, the Sun Blade T6320 server module
provides significant memory bandwidth with support for 667 MHz Fully Buffered
DIMMs (FB-DIMMs). Up to 16 FB-DIMMs can be installed to support up to 64 GB of
memory. Individual Sun Blade T6320 server modules can provide industry-leading
performance as measured by the Space, Watts, and Performance (SWaP) metric.¹

1. For more information on the SWaP metric, along with the latest benchmark results, please see www.sun.com/swap.
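
For readers who want to apply the metric themselves, the sketch below computes SWaP as Sun defines it, performance divided by the product of space and power consumption. The input figures are placeholders, not published benchmark results.

```python
# Hedged sketch of the SWaP metric: performance / (space in RU x watts).
def swap_metric(performance: float, space_ru: float, watts: float) -> float:
    return performance / (space_ru * watts)

# Comparing two hypothetical configurations with the same benchmark score:
print(swap_metric(performance=1000.0, space_ru=10.0, watts=6000.0))
print(swap_metric(performance=1000.0, space_ru=16.0, watts=9000.0))
```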
Sun Blade T6300 Server Module
The Sun Blade T6300 server module utilizes the successful UltraSPARC T1
processor. With a single socket for a six- or eight-core UltraSPARC T1 processor, up
to 32 threads can be supported for applications that require substantial amounts
of throughput. Similar to the Sun Fire/SPARC Enterprise T2000 server, the server
module uses all four of the processor's memory controllers, providing large
memory bandwidth. Up to eight DDR2 533 DIMMs at 400 MHz can be installed for
a maximum of 32 GB of RAM per server module.
Sun Blade X6220 Server Module
Ideal for consolidation in x64 environments, the Sun Blade X6220 server module
provides support for two Next Generation AMD Opteron 2000 Series processors,
with dual cores per processor. Sixteen memory slots are provided for a total of up
to 64 GB of RAM with 667 MHz DDR2 DIMMs. Organizations can consolidate IT and
Web services infrastructure at a fraction of the cost of competing x64 servers or
blades. The Sun Blade X6220 server module also delivers industry-leading
floating-point performance, helping to empower HPC applications that require both
computational density and performance.

Sun Blade X6250 Server Module


The Sun Blade X6250 server module is ideal for x64 applications, such as those at
the Web and application tiers, and is also appropriate for HPC applications. Two
sockets are provided for Dual-Core Intel Xeon Processor 5100 series or Quad-Core
Intel Xeon Processor 5300 series CPUs. A high memory density of up to 64 GB gives
the Sun Blade X6250 server module considerable capacity. This server module also
provides industry-leading integer performance and unconstrained I/O capacity as
compared to other Intel Xeon Processor-based blade servers.
Sun Blade X6450 Server Module
The Sun Blade X6450 server module is ideal for x64 applications and scalable
workloads such as databases and HPC applications. Four sockets are provided for
Dual-Core Intel Xeon Processor 7200 series or Quad-Core Intel Xeon Processor 7300
series CPUs, offering strong integer performance characteristics. Up to 24
FB-DIMMs are supported, yielding a large memory capacity of up to 96 GB using 4 GB
FB-DIMMs. Industry-leading I/O capacity is provided as compared to other Intel
Xeon Processor-based blade servers.

Modular and Future-Proof Chassis Design


Sun Blade 6000 and 6048 modular systems provide significant improvements over
legacy server module platforms. Sun's focus on the needs of the datacenter has
resulted in chassis designs that don't force compromises in the performance and
capabilities delivered by the server modules. For example, in addition to offering a
choice of server modules that support the latest volume processors, these systems
deliver 100 percent of system I/O to the I/O modules through a passive midplane.


The Sun Blade 6000 and 6048 modular system chassis are shown in Figure 1. The Sun
Blade 6000 modular system is provided in a 10 rack unit (10 RU) chassis, with up to four
chassis supported in a single 42U rack or three chassis supported in a 38U rack. The Sun
Blade 6048 modular system chassis takes the form of a standard rack and features four
independent shelves.

Figure 1. Sun Blade 6000 and 6048 modular systems (left and right respectively)

Both the Sun Blade 6000 and 6048 modular systems support flexible configuration, and
are built from a range of standard hot-plug, hot-swap modules, including:
• Sun Blade T6320, T6300, X6220, X6250, or X6450 server modules, in any combination
• Blade-dedicated PCI Express ExpressModules (EMs), supporting industry-standard PCI
Express interfaces
• PCI Express Network Express Modules (NEMs), providing access and an aggregated
interface to all of the server modules in the Sun Blade 6000 chassis or Sun Blade 6048
shelf
• An integral Chassis Monitoring Module (CMM) for transparent management access to
individual server modules
• Hot-swap (N+N) power supply modules
• Redundant (N+1) cooling fans
With common system components and a choice of chassis, organizations can scale
capacity with either fine or coarse granularity, as their needs dictate. Table 1 lists the
capacities of the Sun Blade 6000 and 6048 modular systems along with single-shelf
capacity in the Sun Blade 6048 modular system. Maximum numbers of sockets, cores,
and threads are listed for AMD Opteron, Intel Xeon, and UltraSPARC T1 and T2
processors.
Table 1. Sun Blade 6000 and 6048 modular system capacities

Category | Sun Blade 6000 modular system | Sun Blade 6048 modular shelf | Sun Blade 6048 modular system
--- | --- | --- | ---
Sun Blade 6000 server modules | 10 | 12 | 48
PCI Express ExpressModules | 20 | 24 | 96
PCI Express Network Express Modules | Up to 2 | Up to 2 | Up to 8
Chassis monitoring modules (CMM) | 1 | 1 | 4
Hot-swap power supplies (N+N) | 2, 6000 Watt | 2, 8400 Watt | 8, 8400 Watt
Redundant cooling fans (N+1) | 6 | 8 | 32
Maximum AMD Opteron sockets/cores/threads | 20/40/40 | 24/48/48 | 96/192/192
Maximum Intel Xeon sockets/cores/threads | 40/160/160 | 48/192/192 | 192/768/768
Maximum UltraSPARC T1 sockets/cores/threads | 10/80/320 | 12/96/384 | 48/384/1536
Maximum UltraSPARC T2 sockets/cores/threads | 10/80/640 | 12/96/768 | 48/384/3072
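
The socket, core, and thread maximums in Table 1 follow directly from the per-module figures given elsewhere in this paper; the short sketch below reproduces the arithmetic for a fully populated Sun Blade 6000 chassis.

```python
# Worked check of Table 1: chassis thread capacity is just
# modules x sockets/module x cores/socket x threads/core.
def chassis_threads(modules: int, sockets: int, cores: int,
                    threads_per_core: int) -> int:
    return modules * sockets * cores * threads_per_core

MODULES = 10  # Sun Blade 6000 chassis; 12 for a 6048 shelf, 48 for a full 6048

print(chassis_threads(MODULES, 1, 8, 8))  # UltraSPARC T2 (T6320): 640 threads
print(chassis_threads(MODULES, 1, 8, 4))  # UltraSPARC T1 (T6300): 320 threads
print(chassis_threads(MODULES, 2, 2, 1))  # AMD Opteron (X6220): 40 threads
print(chassis_threads(MODULES, 4, 4, 1))  # Quad-core Xeon (X6450): 160 threads
```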


Chapter 2

Sun Blade 6000 and 6048 Modular Systems Overview

Together with the Sun Blade 8000 and 8000 P modular systems, Sun Blade 6000 and
6048 modular systems bring significant advancements to deploying modular systems
across the organization. The Sun Blade 6000 modular system is ideal for delivering
maximum entry-level price/performance with superior features as compared to
traditional rackmount servers. With its standard rack-sized chassis and high density, the
Sun Blade 6048 modular system helps enable the streamlined deployment of dense and
highly scalable datacenters. Supporting a choice of x64 or SPARC platforms, Sun Blade
6000 and 6048 modular systems are ideal for a variety of applications and markets.

Web Services
For Web services applications sized to take advantage of two-socket x64 server
economy, the Sun Blade 6000 modular system delivers one of the industry's most
compelling solutions. The system offers maximum performance, enterprise
reliability, and easy scalability at a fraction of the price of competing products.
The stateless approach of modular systems makes it easier to build large Web
server farms with maximum manageability and deployment flexibility.
Organizations can add new capacity quickly or redeploy hardware resources as
required.
Virtualization and Consolidation
Virtualization and consolidation have never been more important as organizations
seek to get more from their deployed infrastructure. Modular systems based on
Sun's UltraSPARC T1 and T2 processors with CoolThreads technology can offer
consolidation solutions with Sun Logical Domains and Solaris Containers that cut
power and cooling costs. Modular systems based on Sun's x64-based server
modules offer up to twice the memory and I/O of competing x64 blades or
rackmount servers. These systems offer enterprise-class reliability, availability, and
serviceability features, providing the needed headroom for consolidation with
VMware, Xen, or Microsoft Virtual Server.
High Performance Computing (HPC)
Commercial and scientific computational applications such as electronic design
automation (EDA) and mechanical computer-aided engineering (MCAE) place
significant demands on system architecture. These applications require a
combination of computational performance and system capacity, with exacting
needs for integer and floating-point performance, large memory configurations, and
flexible I/O. Sun Blade 6000 and 6048 modular systems based on Sun's x64-based
server modules, combined with the Sun Refresh Service, allow organizations to
purchase the highest-performing and most cost-effective platforms now, while
maintaining that technological edge for years to come.


Terascale and Petascale Supercomputing Clusters and Grids
The largest supercomputing clusters in the world are needed to push back the
fundamental limits of understanding in key scientific and engineering endeavors.
The Sun Constellation System serves these institutions as the world's first open
petascale computing environment, combining ultra-dense high-performance
computing, networking, storage, and software into an integrated system. The Sun
Constellation System delivers massive scalability, from teraflops to petaflops,
while offering dramatically reduced complexity and breakthrough economics.
Components of the Sun Constellation System include:
• The Sun Datacenter Switch 3456, the world's largest InfiniBand core switch, with
capacity for 3,456 server nodes (and up to 13,824 server nodes with multiple
core switches)
• The Sun Blade 6048 modular system, for high-density compute nodes with an
integral InfiniBand switched NEM
• Sun Fire X4500 server clusters and the Sun StorageTek 5800 system, providing
massively scalable and cost-effective storage solutions
• A comprehensive HPC software stack to manage and augment the world's largest
supercomputing clusters and grids
Sun Constellation System components are shown in Figure 2.

Figure 2. The Sun Constellation System can be used to build the largest terascale and petascale
supercomputing clusters and grids

Chassis Front Perspective


Sun Blade 6000 and 6048 chassis house the server modules and I/O modules,
connecting the two through the passive midplane. Redundant and hot-swappable
power supplies and fans are also hosted in the chassis. All slots are accessible from
either the front or the rear of the chassis for easy serviceability. Server modules, I/O
modules, power supplies, and fans can all be added and removed while the chassis and
other elements in the enclosure are powered on. This capability yields great expansion
opportunity and provides considerable flexibility. The front perspectives of the Sun
Blade 6000 chassis and a single Sun Blade 6048 shelf are shown in Figure 3, with
components described in the sections that follow.

Figure 3. Front view of the Sun Blade 6000 chassis (left) and a single Sun Blade 6048 shelf (right)

Operator Panel
An operator panel is located at the top of the chassis, providing status on the overall
condition of the system. Indicators show whether the chassis is in standby or operational
mode, and whether an over-temperature condition is occurring. A push-button indicator
acts as a locator button for the chassis in case there is a need to remotely identify a
chassis within a rack, or in a crowded datacenter. If any of the components in the chassis
should present a problem or a failure, the operator panel reflects that issue as well.

Power Supply Modules and Front Fan Modules


Two power supply modules load from the front of the chassis or shelf. Each module
contains multiple power supply cores enclosed within a single unit (two for the Sun
Blade 6000, and three for the Sun Blade 6048 power supply modules), and each module
requires a corresponding number of power inlets. Power supply modules are hot-swap
capable and contain a replaceable fan module that helps cool both the power supplies
and the PCI Express modules in the rear of the enclosure. In case of a power
supply failure, the integral fan modules continue to function because they are
energized directly from the chassis power grid, independently of the power
supply modules that contain them.
The power supply modules provide the total power required by the chassis (or shelf).
The power supply modules can be configured redundantly in an N+N configuration,
with a single power supply module able to power the entire chassis at full load. In order
to provide N+N redundancy, all power cords must be energized. If both power supply
modules are energized, all of the systems in the chassis are protected from power
supply failure. A power supply module can fail or be disconnected without affecting the
server modules and components running inside the chassis. To further enhance this
protection, power-grid redundancy for all of the systems and components in the chassis
can be easily achieved by connecting each of the two power supply modules to different
power grids within the datacenter.
Sun Blade 6000 power supply modules have a high 90-percent efficiency rating and an
output voltage of 12 V DC. The high efficiency rating means that fewer power losses
occur within the power supply itself, wasting less power in the energy
conversion stage from alternating current (AC) to direct current (DC). Also, by feeding
12 V DC directly to the midplane, fewer conversion stages are required in the individual
server modules. This strategy yields less power-conversion energy waste, and generates
less waste heat within the server module, making the overall system more efficient.
Provisioned power for rack-mounted configurations depends on the number of chassis
deployed per rack. A 42U rack with four installed Sun Blade 6000 chassis requires 24
kilowatts, while a 38U rack with three chassis requires 18 kilowatts. Depending on the
ongoing load of the systems, actual power consumption will vary. For a more in-depth
analysis of day-to-day power consumption of the system, please visit the power
calculator located on the Sun Website at http://www.sun.com/blades.
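
A minimal sketch of the rack-planning arithmetic above, assuming the 6 kW per-chassis provisioning figure quoted in the text:

```python
# Provisioned power per rack: each Sun Blade 6000 chassis is provisioned
# at 6 kW (two 6,000 W supply modules in an N+N configuration).
CHASSIS_PROVISIONED_KW = 6

def rack_provisioned_kw(chassis_count: int) -> int:
    return chassis_count * CHASSIS_PROVISIONED_KW

print(rack_provisioned_kw(4))  # 42U rack, four chassis -> 24 kW
print(rack_provisioned_kw(3))  # 38U rack, three chassis -> 18 kW
```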
Sun Blade 6048 power supply modules include three power supply cores, facilitating
adjustable power utilization depending on the power consumption profiles of the
installed server modules and other components. Two or three cores can be energized in
each power supply module to make the system perform at optimal efficiency. An on-line
power calculator (www.sun.com/servers/blades/6048chassis/calc) can help identify
the power envelope of each shelf, and can help determine how many power supply
cores to energize. Energizing two cores will support 5,600 Watts, and energizing three
cores will support 8,400 Watts per shelf.
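
The choice of how many cores to energize reduces to a threshold check against the shelf's estimated load, using the 5,600 W and 8,400 W figures above. This sketch is a planning aid only; the on-line power calculator remains the authoritative tool.

```python
# Choose how many power supply cores to energize per Sun Blade 6048 shelf.
def cores_to_energize(estimated_shelf_watts: float) -> int:
    if estimated_shelf_watts <= 5600:
        return 2  # two cores support up to 5,600 W
    if estimated_shelf_watts <= 8400:
        return 3  # three cores support up to 8,400 W
    raise ValueError("load exceeds the 8,400 W shelf power envelope")

print(cores_to_energize(5200))  # -> 2
print(cores_to_energize(7000))  # -> 3
```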

Server Modules
Up to 10 Sun Blade 6000 server modules can be inserted vertically beneath the power
supply modules on the Sun Blade 6000 chassis. The Sun Blade 6048 chassis supports up
to 12 Sun Blade 6000 server modules per shelf, or 48 server modules per chassis. The
four hard disk drives on each server module are available for easy hot-swap from the
front of the chassis. Indicator LEDs and I/O ports are also provided on the front of the
server modules for easy access. A number of connectors are provided on the front panel
of each server module, available through a server module adaptor (octopus cable).
Depending on the server module, available ports include a VGA HD-15 monitor port, two
USB 2.0 ports, and a DB-9 or RJ-45 serial port that connects to the server module and
integral service processors.


Chassis Rear Perspective


The rear of the Sun Blade 6000 chassis and a single Sun Blade 6048 shelf provide access
to the back side of the passive midplane for I/O modules (Figure 4). Slots for PCI Express
ExpressModules (EMs) and PCI Express Network Express Modules (NEMs) are provided.
I/O modules are all hot-swap capable and provide I/O capabilities to server modules.
Figure 4. Rear view of the Sun Blade 6000 chassis

PCI Express ExpressModules (EMs)


Twenty hot-plug capable PCI Express ExpressModule slots are accessible at the top of
the Sun Blade 6000 chassis, with 24 EMs supported by each Sun Blade 6048 shelf. EMs
offer a variety of choices for communications, including gigabit Ethernet, Fibre Channel,
and InfiniBand interconnects. Different EMs can be chosen for every server module in
order to provide each with the right type of fabric connectivity with a high degree of
granularity. Two PCI Express ExpressModule slots are dedicated and directly connected
to each server module through the passive midplane. Slots 0 and 1, from right to left, are
connected to server module 0, slots 2 and 3 are connected to server module 1,
continuing across the back of the chassis.
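
Because the EM-to-blade mapping is fixed, it can be expressed as simple arithmetic; the sketch below prints the two dedicated EM slots for each server module slot in a fully populated Sun Blade 6000 chassis.

```python
# Fixed mapping described above: EM slots 0 and 1 serve server module 0,
# slots 2 and 3 serve server module 1, and so on across the chassis.
def em_slots_for_module(module_index: int) -> tuple[int, int]:
    return (2 * module_index, 2 * module_index + 1)

for module in range(10):  # ten server modules per Sun Blade 6000 chassis
    print(f"server module {module}: EM slots {em_slots_for_module(module)}")
```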

PCI Express Network Express Modules


Space is provided for up to two PCI Express Network Express Modules (NEMs) in the Sun
Blade 6000 chassis, and in each Sun Blade 6048 shelf. NEMs provide the same I/O
capabilities across all of the server modules installed in the chassis, simplifying
connectivity and usually offering a low-cost I/O solution, since they provide I/O to
all of the server modules. All the server modules are directly connected to each of the
configured NEMs through PCI Express connections. Due to the different chassis widths,
specific NEMs are provided to fit the Sun Blade 6000 and 6048 modular systems. More
details on available NEMs for both systems are provided in Chapter 3.


Chassis Monitoring Module


A Chassis Monitoring Module (CMM) is located adjacent to the NEM slots on the
left-hand side of the Sun Blade 6000 chassis, and to the left of the NEM slots on the Sun
Blade 6048 chassis, providing remote monitoring and a central access point to the
chassis. The CMM includes an integrated switch that gives LAN access to the CMM's
Ethernet ports and to the individual server module management ports. Individual
server module management is completely transparent and independent from the CMM.
The CMM on the Sun Blade 6048 modular system is combined with the power input
module.

Power Supply Inlets


Four power supply inlets (plugs) are available from the rear of the Sun Blade 6000
chassis, with six provided for each Sun Blade 6048 shelf. The number of inlets
corresponds to the number of power supply cores in the two front-loaded power supply
modules. Integral cable holders prevent accidental loss of power from inadvertent
cable removal. Each of the cables requires a 220 V, 20 A circuit, and a minimum of two
circuits is required to power each chassis. For full N+N redundancy, four circuits are
required by the Sun Blade 6000 modular system, and six circuits are required by each
Sun Blade 6048 modular system shelf.

Fans and Airflow
Chassis airflow is entirely front to back in both chassis, and is powered by rear fan
modules and by the front fan modules mounted in the power supply modules. All rear
fan modules are hot-swap and N+1, with six fan modules provided for each Sun Blade
6000 chassis, and eight fan modules provided for each Sun Blade 6048 shelf. Each rear
fan module is comprised of two redundant in-line fans. The front fan modules pull air in
from the front of the chassis, blow it across the power supplies, and exhaust it through
the EM and NEM spaces. The rear fan modules pull air from the front of the chassis and
exhaust it through the rear. When all of the fans in the chassis are running at full
speed, the chassis can provide up to 1,000 cubic feet per minute (CFM) of airflow.

Passive Midplane
In essence, the passive midplanes in the Sun Blade 6000 and 6048 modular systems are
a collection of wires and connectors between different modules in the chassis. Since
there are no active components, the reliability of these printed circuit boards is
extremely high, in the millions of hours, or hundreds of years. The passive midplane
provides electrical connectivity between the server modules and the I/O modules.
All modules, front and rear, with the exception of the power supplies and the fan
modules, connect directly to the passive midplane. The power supplies connect to the
midplane through a bus bar, and to the AC inputs via a cable harness. The redundant
fan modules plug individually into a set of three fan boards, where fan speed control
and other chassis-level functions are implemented. The front fan modules that cool the
PCI Express ExpressModules each connect to the chassis via blind-mate connections.
The main functions of the midplane include:
• Providing a mechanical connection point for all of the server modules
• Providing 12 VDC from the power supplies to each customer-replaceable module
• Providing 3.3 VDC power used to power the System Management Bus devices on each
module, and to power the CMM
• Providing a PCI Express interconnect between the PCI Express root complexes on each
server module and the EMs and NEMs installed in the chassis
• Connecting the server modules, CMMs, and NEMs to the chassis management
network

Figure 5. Distribution of communications links from each Sun Blade 6000 server module

Each server module is energized through the midplane from the redundant chassis
power grid. The midplane also provides connectivity to the I2C network in the chassis,
letting each server module directly monitor the chassis environment, including fan and
power supply status as well as various temperature sensors. A number of I/O links are
also routed through the midplane for each server module (Figure 5), including:
• Two x8 PCI Express links, one from the server module to each of its two dedicated
EMs
• Two x4 or x8 PCI Express links from the server module, one to each of the
NEMs
• Two gigabit Ethernet links, each connecting to one of the NEMs
• Four x1 Serial Attached SCSI (SAS) links, with two connecting to
each NEM (for future use)
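
Summing these links at the aggregate rates the paper uses reproduces the per-module totals reported later in Table 3; a quick check:

```python
# Per-blade midplane bandwidth: two x8 links to the EMs, two links to the
# NEMs (x4 or x8 depending on the server module), two gigabit Ethernet
# links, and four 3 Gbps SAS links.
def module_bandwidth_gbps(nem_link_gbps: int) -> int:
    return 2 * 32 + 2 * nem_link_gbps + 2 * 1 + 4 * 3

print(module_bandwidth_gbps(32))  # x8 NEM links -> 142 Gbps
print(module_bandwidth_gbps(16))  # x4 NEM links -> 110 Gbps
```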


Server Modules Based on Sun SPARC, Intel Xeon, and AMD Opteron Processors
The ability to host demanding compute, memory, and I/O-intensive applications is
ultimately dependent on the characteristics of the actual server modules. The
innovative Sun Blade 6000 and 6048 chassis allow designers considerable flexibility in
terms of delivering powerful server modules for a broad range of applications.
Except for labeling, all Sun Blade 6000 server modules feature a physically identical
front panel design. This design is intentional, since any server module can be used in
any slot of the chassis, no matter what the internal architecture of the server module.
As mentioned, all server modules use the same midplane connectors and have
equivalent I/O characteristics.

A Choice of Processors, a Choice of Operating Systems


By providing a choice of Sun SPARC, Intel Xeon, and AMD Opteron processors, the Sun
Blade 6000 and 6048 modular systems can serve a wide range of applications and
demands. Organizations are free to choose the platform that best suits their needs or
fits in with their existing environments. Server modules of different architectures can
also be mixed and matched in a single Sun Blade 6000 chassis, or within a single Sun
Blade 6048 modular system shelf.
To help assure the best application performance, Sun Blade 6000 server modules
provide substantial computational and memory capacity to support demanding
applications. Table 2 lists the capabilities of the Sun Blade 6000 server modules,
including processors, cores, threads, and memory capacity.
Table 2. Processor support and memory capacities for Sun Blade 6000 server modules

Server Module | Processor(s) | Cores/Threads | Memory Capacity
--- | --- | --- | ---
Sun Blade T6320 server module | 1 UltraSPARC T2 processor | 4, 6, or 8 cores; up to 64 threads | Up to 64 GB, 16 FB-DIMM slots
Sun Blade T6300 server module | 1 UltraSPARC T1 processor | 6 or 8 cores; up to 32 threads | Up to 32 GB, 8 DIMM slots
Sun Blade X6220 server module | 2 Next Generation AMD Opteron processors | 4 cores, 4 threads | Up to 64 GB, 16 DIMM slots
Sun Blade X6250 server module | 2 Intel Xeon Processor 5100 series or 5300 series CPUs | 5100 series: 4 cores, 4 threads; 5300 series: 8 cores, 8 threads | Up to 64 GB, 16 FB-DIMM slots
Sun Blade X6450 server module | 4 Intel Xeon Processor 7200 series or 7300 series CPUs | 7200 series: 8 cores, 8 threads; 7300 series: 16 cores, 16 threads | Up to 96 GB, 24 FB-DIMM slots


Leading I/O Throughput


Sun Blade 6000 server modules provide extensive I/O capabilities and a wealth of I/O
options, allowing modular servers to be used for applications that require significant
I/O throughput:
• Up to 142 Gbps of I/O throughput is provided on each Sun Blade 6000 server module,
delivered through 32 lanes of PCI Express I/O as well as multiple gigabit Ethernet
and SAS links. Each server module delivers its I/O to the passive midplane and the I/O
devices connected to it in the Sun Blade 6000 chassis or Sun Blade 6048 shelf.
• Four 2.5-inch SAS or SATA disk drives are supported in each server module.
• Two hot-plug PCI Express ExpressModule (EM) slots are dedicated to each server
module (20 per chassis) for granular blade I/O configuration.
• Network Express Modules (NEMs) provide bulk I/O across multiple server modules
and aggregate I/O functions. Sun Blade 6000 and 6048 modular systems supply up to
two NEMs, each with a PCI Express x8 or XAUI connection, a gigabit Ethernet
connection, and two SAS link connections to each server module.
Table 3 lists the throughput provided through the passive midplane for each of the
five server modules.

Table 3. Midplane throughput for Sun Blade 6000 server modules

Links | Sun Blade T6320 server module (a) | Sun Blade T6300 server module | Sun Blade X6220 server module | Sun Blade X6250 server module (a) | Sun Blade X6450 server module (a)
--- | --- | --- | --- | --- | ---
PCI Express links to EMs | 2 x8 links, 32 Gbps each | 2 x8 links, 32 Gbps each | 2 x8 links, 32 Gbps each | 2 x8 links, 32 Gbps each | 2 x8 links, 32 Gbps each
PCI Express links to NEMs | 2 x4 links, 16 Gbps each | 2 x8 links, 16 Gbps each | 2 x8 links, 32 Gbps each | 2 x4 links, 16 Gbps each | 2 x4 links, 16 Gbps each
Gigabit Ethernet links | 2, 1 Gbps each | 2, 1 Gbps each | 2, 1 Gbps each | 2, 1 Gbps each | 2, 1 Gbps each
SAS links | 4, 3 Gbps each | 4, 3 Gbps each | 4, 3 Gbps each | 4, 3 Gbps each | 4, 3 Gbps each
Total server module bandwidth | 142 Gbps | 142 Gbps | 142 Gbps | 110 Gbps | 110 Gbps

a. Server modules with a RAID Expansion Module (REM) and Fabric Expansion Module (FEM)

Enterprise-Class Features
Unlike most traditional blade servers, Sun Blade 6000 server modules provide a host of
enterprise features that help ensure greater reliability and availability:
• Each server module supports hot-plug capabilities
• Each server module supports four hot-plug disks, with built-in support for RAID 0 or 1
(diskless operation is also supported)¹
• Redundant hot-swap chassis-located fans mean greater reliability through decreased
part count and no fans located on the server modules
• Redundant hot-swap chassis-located power supply modules mean that no power
supplies are located on individual server modules

1. RAID 0, 1, 5, and 0+1 are supported by the Sun Blade X6250 and X6450 server modules with the Sun StorageTek RAID Expansion Module (REM)


Open Transparent Management


Together, Sun Blade 6000 server modules and Sun Blade 6000 and 6048 modular
systems provide a robust and comprehensive list of management features, including:
• A dedicated service processor on each server module for blade-level management
granularity
• A Chassis Monitoring Module (CMM) for direct access to server module management
features
• Sun xVM Ops Center for server module discovery and OS provisioning, as well as bulk
application-level provisioning

A Choice of Operating Systems


In order to provide maximum flexibility and investment protection, the Sun Blade 6000
server modules support a choice of operating systems, including:
• Solaris 10 OS
• The Linux operating system (64-bit Red Hat or SuSE Linux)
• Microsoft Windows
• VMware ESX Server
Table 4 lists the specific operating system versions supported by the Sun Blade 6000
server modules as of this writing. Please see www.sun.com/servers/blades/6000 for the
latest supported operating systems and environments.
Table 4. Operating systems supported by Sun Blade 6000 server modules

Server Module | Supported Operating Systems
--- | ---
Sun Blade T6320 server module | Solaris 10 OS Update 4 with patches, or later
Sun Blade T6300 server module | Solaris 10 OS Update 3 with patches, or later
Sun Blade X6220, X6250, and X6450 server modules | Solaris 10 11/06 OS on x64, HW2 64-bit; Red Hat Enterprise Linux Advanced Server 4, U4 and U5, 32-bit; SuSE Linux Enterprise Server 10, 32-bit; VMware ESX 3.0.2 and 3.5; Microsoft Windows Server 2003 R2 Standard Edition (32- and 64-bit) and Enterprise Edition (32- and 64-bit); Microsoft Windows Server 2008

Solaris OS Support on all Server Modules
Among the available operating systems, the Solaris OS is ideal for large-scale enterprise
deployments. Supported on all Sun Blade 6000 server modules, the Solaris OS has
specific features that can enhance flexibility and performance, with different features
affecting different processors as noted.


Sun Logical Domains Support in Sun Blade T6320 and T6300 Server Modules
Supported on all Sun servers that utilize Sun processors with chip multithreading
(CMT) technology, Sun Logical Domains provide a full virtual machine that runs an
independent operating system instance and contains virtualized CPU, memory,
storage, console, and cryptographic devices. Within the Sun Logical Domains
architecture, a small firmware layer known as the Hypervisor provides a stable,
virtualized machine architecture to which an operating system can be written. As
such, each logical domain is completely isolated, and the maximum number of
virtual machines created on a single platform relies upon the capabilities of the
Hypervisor rather than the number of physical hardware devices installed in the
system. For example, the Sun Blade T6320 server module with a single UltraSPARC T2
processor supports up to 64 logical domains, and each individual logical domain
can run a unique instance of the operating system.¹
By taking advantage of Sun Logical Domains, organizations gain the flexibility to
deploy multiple operating systems simultaneously on a single server module. In
addition, administrators can leverage virtual device capabilities to transport an
entire software stack hosted on a logical domain from one physical machine to
another. Logical domains can also host Solaris Containers to capture the isolation,
flexibility, and manageability features of both technologies. By deeply integrating
logical domains with both the industry-leading CMT capabilities of the UltraSPARC
T1 and T2 processors and the Solaris 10 OS, Sun Logical Domains technology
increases flexibility, isolates workload processing, and improves the potential for
maximum server utilization.

1. Though technically possible, this practice is not generally recommended.
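
To make the workflow concrete, the hedged sketch below drives the Logical Domains Manager CLI (ldm) from Python to create, bind, and start a domain. The domain name and resource sizes are illustrative, and exact subcommand behavior depends on the installed LDoms software version.

```python
# Hedged sketch: provision a logical domain via the ldm CLI.
import subprocess

def run(cmd: list[str]) -> None:
    subprocess.run(cmd, check=True)

domain = "ldom1"  # hypothetical domain name
run(["ldm", "add-domain", domain])
run(["ldm", "add-vcpu", "8", domain])    # eight hardware thread contexts
run(["ldm", "add-memory", "4G", domain])
run(["ldm", "bind-domain", domain])
run(["ldm", "start-domain", domain])
```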

Scalability and Support for CoolThreads Technology
The Solaris 10 OS is specifically designed to deliver the considerable resources of
UltraSPARC T1 and T2 processor-based systems such as the Sun Blade T6320 and
T6300 server modules. In fact, the Solaris 10 OS provides new functionality for
optimal utilization, availability, security, and performance of these systems:
• CMT awareness: The Solaris 10 OS is aware of the UltraSPARC T1 and T2
processor hierarchies, so that the scheduler can effectively balance the load across
all the available pipelines. For instance, even though it exposes the UltraSPARC
T2 processor as 64 logical processors, the Solaris OS understands the correlation
between cores and the threads they support.
• Fine-granularity manageability: The Solaris 10 OS has the ability to enable or
disable individual processors and threads. In the case of the UltraSPARC T1 and
T2 processors, this ability extends to enabling or disabling individual cores and
logical processors (hardware thread contexts). In addition, standard Solaris OS
features such as processor sets provide the ability to define a group of logical
processors and schedule processes or threads on them.
1. Though technically possible, this practice is not generally recommended.


• Binding interfaces: The Solaris OS allows considerable flexibility, in that processes and individual threads can be bound to either a processor or a processor set, if required or desired.
• Support for virtualized networking and I/O, and accelerated cryptography: The Solaris OS contains technology to support and virtualize components and subsystems on the UltraSPARC T2 processor, including support for the dual on-chip 10 Gb Ethernet ports and the PCI Express interface. As part of a high-performance network architecture, CMT-aware device drivers are provided so that applications running within virtualization frameworks can effectively share I/O and network devices. Accelerated cryptography is supported through the Solaris Cryptographic Framework.
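A minimal sketch of this fine-grained control follows; the processor and process IDs are illustrative only:

    psrinfo                 # list logical processors and their state
    psrset -c 8 9 10 11     # create a processor set; its ID is printed
    psrset -b 1 1234        # bind process ID 1234 to processor set 1
    psradm -f 63            # take logical processor 63 offline entirely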

Solaris Containers for Consolidation, Secure Partitioning, and Virtualization


Solaris Containers comprise a group of technologies that work together to efficiently manage system resources, virtualize the system, and provide a complete, isolated, and secure runtime environment for applications. Solaris Containers can be used to partition and allocate the considerable computational resources of the Sun Blade server modules. Solaris Zones and Solaris Resource Management work together with the Solaris fair-share scheduler on both SPARC- and x64-based server modules.
• Solaris Zones: Solaris Zones can be used to create an isolated and secure environment for running applications. A zone is a virtualized operating system environment created within a single instance of the Solaris OS. Zones can be used to isolate applications and processes from the rest of the system. This isolation helps enhance security and reliability, since processes in one zone are prevented from interfering with processes running in another zone.
• Resource management: Resource management tools provided with the Solaris OS let administrators dedicate resources, such as CPU cycles, to specific applications. CPUs in a multicore, multiprocessor system such as those provided by Sun Blade server modules can be logically partitioned into processor sets, bound to a resource pool, and ultimately assigned to a Solaris Zone. Resource pools provide the capability to separate workloads so that consumption of CPU resources does not overlap, and they also provide a persistent configuration mechanism for processor sets and scheduling class assignment. In addition, the dynamic features of resource pools let administrators adjust system resources in response to changing workload demands (a short example follows this list).
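The following sequence is a minimal sketch of these facilities working together: a two-CPU resource pool is created, and a zone is defined, bound to the pool, installed, and booted. All names and sizes are illustrative placeholders:

    pooladm -e     # enable the resource pools facility
    pooladm -s     # save the current dynamic configuration to /etc/pooladm.conf
    poolcfg -c 'create pset web-pset (uint pset.min = 2; uint pset.max = 2)'
    poolcfg -c 'create pool web-pool (string pool.scheduler = "FSS")'
    poolcfg -c 'associate pool web-pool (pset web-pset)'
    pooladm -c     # instantiate the edited configuration
    # Define a zone bound to the new pool, then install and boot it
    zonecfg -z webzone "create; set zonepath=/zones/webzone; set pool=web-pool; commit"
    zoneadm -z webzone install
    zoneadm -z webzone boot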

Solaris Dynamic Tracing (DTrace) to Instrument and Tune Live Software Environments
When production systems exhibit nonfatal errors or sub-par performance, the sheer complexity of modern distributed software environments can make accurate root-cause diagnosis extremely difficult. Unfortunately, most traditional approaches to solving this problem have proved time-consuming and inadequate, leaving many applications languishing far from their potential performance levels.

24

Sun Blade 6000 and 6048 Modular Systems Overview

Sun Microsystems, Inc.

The Solaris DTrace facility on both SPARC and x64 platforms provides dynamic instrumentation and tracing for both application and kernel activities, even allowing tracing of application components running in a Java Virtual Machine (JVM)¹. DTrace lets developers and administrators explore the entire system to understand how it works, track down performance problems across many layers of software, or locate the cause of aberrant behavior. Tracing is accomplished by dynamically modifying the operating system kernel to record additional data at locations of interest. Best of all, although DTrace is always available and ready to use, it has no impact on system performance when not in use, making it particularly effective for monitoring and analyzing production systems.
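For example, a single one-liner such as the following aggregates system calls by process name across the whole system for ten seconds, with no code changes or restarts required:

    # Count system calls by process name, then exit after 10 seconds
    dtrace -n 'syscall:::entry { @calls[execname] = count(); } tick-10s { exit(0); }'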

NUMA Optimization in the Solaris OS


With memory managed by each processor on Sun Blade X6220 server modules, the implementation represents a non-uniform memory access (NUMA) architecture. Namely, the time needed for a processor to access its own memory is slightly different from that required to access memory managed by another processor. The Solaris OS provides technology that can specifically help applications improve performance on NUMA architectures.
• Memory Placement Optimization (MPO): The Solaris 10 OS uses MPO to improve the placement of memory across the physical memory of a server, resulting in increased performance. Through MPO, the Solaris 10 OS works to help ensure that memory is as close as possible to the processors that access it, while still maintaining enough balance within the system. As a result, many database and HPC applications are able to run considerably faster with MPO.
• Hierarchical lgroup support (HLS): HLS improves the MPO feature in the Solaris OS, helping it optimize performance for systems with more complex memory latency hierarchies. HLS lets the Solaris OS distinguish between degrees of memory remoteness, allocating resources with the lowest possible latency for applications. If local resources are not available by default for a given application, HLS helps the Solaris OS allocate the nearest remote resources.
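Recent Solaris releases also expose this locality information for observation. Assuming the lgrpinfo and plgrp utilities are present (they may not ship in older Solaris 10 updates), the locality group hierarchy that MPO uses can be examined and influenced as follows:

    lgrpinfo         # show locality groups, their CPUs, and memory
    plgrp $$         # show the home lgroup of the current shell
    plgrp -H 1 $$    # re-home the current shell to lgroup 1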

Solaris ZFS File System


The Solaris ZFS file system offers a dramatic advance in data management, automating and consolidating complicated storage administration concepts and providing virtually unlimited scalability with the world's first 128-bit file system. ZFS is based on a transactional object model that removes most of the traditional constraints on I/O issue order, resulting in dramatic performance gains. ZFS also provides data integrity, protecting all data with 256-bit checksums that detect, and in redundant configurations correct, silent data corruption.
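Administration is correspondingly simple. As an illustrative sketch (the pool name and disk identifiers below are placeholders), two internal drives can be mirrored and a compressed file system created with a handful of commands:

    zpool create tank mirror c1t0d0 c1t1d0   # create a mirrored storage pool
    zfs create tank/data                     # create a file system in the pool
    zfs set compression=on tank/data         # enable transparent compression
    zpool status tank                        # verify pool health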

1. The terms "Java Virtual Machine" and "JVM" mean a Virtual Machine for the Java platform.


A Secure and Robust Enterprise-Class Environment


Best of all, the Solaris OS doesn't require arbitrary sacrifices. The Solaris Binary Compatibility Guarantee helps ensure that existing SPARC applications continue to run unchanged on UltraSPARC T1 and T2 platforms, protecting investments. Certified multi-level security protects Solaris environments from intrusion, and Sun's comprehensive Fault Management Architecture means that elements such as Solaris Predictive Self Healing can communicate directly with the hardware to help reduce both planned and unplanned downtime.


Chapter 3

Server Module Architecture


The Sun Blade 6000 and 6048 modular systems provide high performance, capacity, and massive levels of I/O through full-featured interfaces that use the latest technology and make the most of the innovative chassis design. Sun Blade T6320, T6300, X6220, X6250, and X6450 server modules are described in this chapter, while PCI Express ExpressModules (EMs), PCI Express Network Express Modules (NEMs), and the Chassis Monitoring Module (CMM) are described in Chapter 4.

Sun Blade T6320 Server Module


The successful Sun Fire / Sun SPARC Enterprise T1000 and T2000 servers and the Sun Blade T6300 server module, powered by the breakthrough innovation of the UltraSPARC T1 processor, completely changed the equation on space, power, and cooling in the datacenter. With the advent of the UltraSPARC T2 processor, the Sun Blade T6320 server module takes chip multithreading performance, density, and energy efficiency to the next level. Similar in capabilities to the Sun SPARC Enterprise T5120 and T5220 servers, the physical layout of the Sun Blade T6320 server module is shown in Figure 6.
Figure 6. The Sun Blade T6320 server module with key features called out (midplane connector, four hot-plug SAS or SATA 2.5-inch drives, 16 FBDIMM sockets, fabric expansion module, UltraSPARC T2 processor, ILOM 2.0 service processor card, and RAID expansion module)

With support for up to 64 threads and considerable network and I/O capacity, the Sun Blade T6320 server module virtually doubles the throughput of the earlier Sun Blade T6300 server module. In addition to its processing and memory density, each server module hosts additional modules, including an ILOM 2.0 service processor, a fabric expansion module (FEM), and a RAID expansion module (REM), all while retaining a compact form factor. With the Sun Blade T6320 server module, a single Sun Blade 6000 chassis can support up to 640 threads in just 10 rack units, and up to 3,072 threads can be supported in a single Sun Blade 6048 modular system chassis.


The UltraSPARC T2 Processor with CoolThreads Technology


The UltraSPARC T2 processor extends Sun's Throughput Computing initiative with an elegant and robust architecture that delivers real performance to applications. Implemented as a massively threaded system on a chip (SoC), each UltraSPARC T2 processor supports:
• Up to eight cores running at 1.2 to 1.4 GHz
• Eight threads per core, for a maximum of 64 threads per processor
• 4 MB of L2 cache in eight banks (16-way set associative)
• Four on-chip memory controllers supporting up to 16 FBDIMMs
• Up to 64 GB of memory (with 4 GB FBDIMMs) and 60 GB/s of memory bandwidth
• Eight fully pipelined floating-point units (one per core)
• Dual on-chip 10 Gb Ethernet interfaces
• An integral PCI Express interface

In spite of its innovative new technology, the UltraSPARC T2 processor is fully SPARC V7, V8, and V9 compatible, and binary compatible with earlier SPARC processors. A high-level block diagram of the UltraSPARC T2 processor is shown in Figure 7.

Figure 7. Block-level diagram of an eight-core UltraSPARC T2 processor (cores C0 through C7, each with its own FPU and SPU, connect through a crossbar to eight L2 cache banks; four memory control units drive FBDIMM memory, while a network interface unit with two 10 Gigabit Ethernet ports and a x8 PCI Express system interface handle I/O)

The UltraSPARC T2 processor design recognizes that memory latency is truly the bottleneck to improving performance. By increasing the number of threads supported by each core, and by further increasing network bandwidth, the UltraSPARC T2 processor is able to provide approximately twice the throughput of the UltraSPARC T1 processor. Each UltraSPARC T2 processor provides up to eight cores, with each core able to switch between up to eight threads (64 threads per processor). In addition, each core provides two integer execution units, so a single UltraSPARC core is capable of executing two threads at a time.
The eight cores on the UltraSPARC T2 processor are interconnected with a full, on-chip, non-blocking 8 x 9 crossbar switch. The crossbar connects each core to the eight banks of L2 cache, and to the system interface unit for I/O. The crossbar provides approximately 300 GB/second of bandwidth and supports 8-byte writes from a core to a bank and 16-byte reads from a bank to a core. The system interface unit connects networking and I/O directly to memory through the individual cache banks. FBDIMM memory provides dedicated northbound and southbound lanes to and from the caches, accelerating performance and reducing latency. This approach provides higher bandwidth than DDR2 memory, with up to 42.4 GB/second of read bandwidth and 21 GB/second of write bandwidth.
Each core provides its own fully pipelined floating point and graphics unit (FGU), as well as a stream processing unit (SPU). The FGUs greatly enhance floating-point performance over that of the UltraSPARC T1 processor, while the SPUs provide wire-speed cryptographic acceleration for more than 10 popular ciphers and algorithms, including DES, 3DES, AES, RC4, SHA-1, SHA-256, MD5, RSA to 2048-bit keys, ECC, and CRC32. Embedding hardware cryptographic acceleration for these ciphers allows end-to-end encryption with no penalty in either performance or cost.

Server Module Architecture


Figure 8 provides a logical block-level diagram of the Sun Blade T6320 server module. Similar to the Sun SPARC Enterprise T5120 and T5220 rackmount servers, the Sun Blade T6320 server module contains an UltraSPARC T2 processor, FB-DIMM sockets for main memory, an Integrated Lights Out Manager (ILOM) service processor, and I/O subsystems. The memory configuration uses all eight of the UltraSPARC T2 processor's memory channels to provide better memory bandwidth. The on-chip memory controllers communicate directly with FB-DIMM memory through high-speed serial links, and up to 16 667 MHz FB-DIMMs may be configured in the server module.

Figure 8. Sun Blade T6320 server module block-level diagram (the UltraSPARC T2 processor connects to FB-DIMM memory at 667 MHz, exposes dual 10 Gb Ethernet interfaces, and drives a PCI Express switch that fans out x8 and x4 links to the ExpressModules, NEMs, fabric expansion module, and RAID expansion module with its SAS links; the service processor and front-panel connections complete the design, all joined through the passive midplane)

For I/O, the UltraSPARC T2 processor incorporates an eight-lane (x8) PCI Express port capable of operating at 4 GB/second bidirectionally. In the Sun Blade T6320 server module, this port interfaces with a PCI Express switch chip that delivers various PCI Express links to other parts of the server module and to the passive midplane. Two of the PCI Express interfaces provided by the PCI Express switch are made available through PCI Express ExpressModules.
The PCI Express switch also provides PCI Express links to other internal components, including sockets for the fabric expansion module (FEM) and RAID expansion module (REM). The FEM socket allows for future expansion capabilities. The gigabit Ethernet interfaces are provided by an Intel chip connected to a x4 PCI Express interface on the PCI Express switch chip. Two gigabit Ethernet links are then routed through the midplane to the NEMs. The server module provides the logic for the gigabit Ethernet connection, while the NEM provides the physical interface.
Sun Blade RAID 0/1 Expansion Module
All standard Sun Blade T6320 server module configurations ship with the Sun Blade RAID 0/1 Expansion Module (REM). Based on the LSI SAS1068E storage controller, the REM provides a total of eight hard drive interfaces, or links. Four interfaces are used for the on-board hard drives, which may be Serial Attached SCSI (SAS) or Serial ATA (SATA). The other four links are routed to the midplane, where they interface with the NEMs for future use. The REM also provides RAID 0, 1, and 0+1 support.
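Under the Solaris OS, hardware RAID volumes on this class of LSI controller are typically managed with the raidctl utility. The following is a brief sketch; the disk identifiers are placeholders for the server module's internal drives:

    raidctl -c c1t0d0 c1t1d0   # mirror two internal drives as a RAID 1 volume
    raidctl -l                 # list RAID volumes and their status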


Integrated Lights-Out Management (ILOM) System Controller


Provided across many of Sun's servers, the Integrated Lights Out Management (ILOM) service processor acts as a system controller, facilitating remote management and administration. The service processor is fully featured and is similar in implementation to that used in other Sun modular and rackmount servers. As a result, Sun Blade T6320 server modules integrate easily with existing management infrastructure.
Critical to effective system management, the ILOM service processor:
• Implements an IPMI 2.0 compliant service processor, providing IPMI management functions to the server's firmware, OS, and applications, and to IPMI-based management tools accessing the service processor via the ILOM Ethernet management interface, giving visibility to the environmental sensors (both on the server module and elsewhere in the chassis)
• Manages inventory and environmental controls for the server, including CPUs, DIMMs, and power supplies, and provides HTTPS/CLI/SNMP access to this data
• Supplies remote textual console interfaces
• Provides a means to download upgrades to all system firmware
The ILOM service processor also allows the administrator to remotely manage the server, independently of the operating system running on the platform and without interfering with any system activity. ILOM can also send e-mail alerts of hardware failures and warnings, as well as other events related to each server. The ILOM circuitry runs independently from the server, using the server's standby power. As a result, ILOM firmware and software continue to function when the server operating system goes offline or when the server is powered off. ILOM monitors the following Sun Blade T6320 server module conditions:
• CPU temperature conditions
• Hard drive presence
• Enclosure thermal conditions
• Fan speed and status
• Power supply status
• Voltage conditions
• Solaris watchdog, boot time-outs, and automatic server restart events
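A short, representative ILOM CLI session is sketched below; exact target names can vary between ILOM firmware releases:

    -> show /SYS            (browse the hardware inventory and sensor tree)
    -> show /SP/users       (list configured service processor accounts)
    -> start /SP/console    (redirect the host serial console)
    -> stop /SYS            (power the server module off)
    -> start /SYS           (power the server module back on)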


Sun Blade T6300 Server Module


The highly successful Sun Fire / Sun SPARC Enterprise T1000 and T2000 servers, powered by the breakthrough innovation of the UltraSPARC T1 processor, helped drive the fastest product ramp in Sun's history. The Sun Blade T6300 server module combines these advantages with the density, availability, and serviceability advantages of Sun's modular systems. The Sun Blade T6300 server module is shown in Figure 9.
Figure 9. The Sun Blade T6300 server module with key components called out (midplane connector, four hot-plug SAS or SATA 2.5-inch drives, UltraSPARC T1 processor, eight DDR2 400 DIMM sockets, and service processor)

The UltraSPARC T1 Processor with CoolThreads Technology


The UltraSPARC T1 multicore, multithreaded processor was the first chip to fully implement Sun's Throughput Computing initiative. Each UltraSPARC T1 processor used in Sun Blade T6300 server modules has either six or eight cores (individual execution pipelines) on the same chip. Each core, in turn, supports up to four hardware thread contexts, each a set of registers that represents a thread's state. The processor is able to switch threads on every clock cycle in round-robin order, skipping threads that are stalled waiting for a memory access.

Figure 10. Block-level diagram of an eight-core UltraSPARC T1 processor (eight cores sharing a single FPU connect through an on-chip crossbar to four L2 cache banks, four DDR2 SDRAM channels, and the JBus system interface)


As shown in Figure 10, the individual processor cores are connected by a high-speed, low-latency crossbar interconnect implemented on the silicon itself. The UltraSPARC T1 processor includes very fast interconnects between the processor cores, memory, and system resources, including:
• A 134 GB/second crossbar switch that connects all cores
• A JBus interface with 3.1 GB/second peak effective bandwidth
• Four DDR2 channels (25.6 GB/second total) for faster access to memory
The memory subsystem of the UltraSPARC T1 processor is implemented as follows:
• Each core has an instruction cache, a data cache, an instruction TLB, and a data TLB, shared by the four thread contexts. Each UltraSPARC T1 processor has a twelve-way associative, unified Level 2 (L2) on-chip cache, and each hardware thread context shares the entire L2 cache.
• This design results in uniform memory latency from all cores (Uniform Memory Access, or UMA, rather than Non-Uniform Memory Access, or NUMA).
• Memory is located close to processor resources, and four memory controllers provide very high bandwidth to memory, with a theoretical maximum of 25.6 GB per second.
• Extensive built-in RAS features include ECC protection of register files, Extended-ECC (similar to IBM's Chipkill feature), memory sparing, soft-error rate detection, and extensive parity/retry protection of caches.
Each core has a Modular Arithmetic Unit (MAU) that supports modular multiplication and exponentiation to help accelerate Secure Sockets Layer (SSL) processing. A single Floating Point Unit (FPU) is shared by all cores, so the UltraSPARC T1 processor is generally not an optimal choice for applications with floating-point intensive requirements.

Server Module Architecture


Figure 11 provides a logical block-level diagram of the Sun Blade T6300 server module. Similar in design to the Sun SPARC Enterprise T2000 server, the memory configuration uses all four of the processor's memory controllers to provide better memory bandwidth, and up to eight DDR2 533 DIMMs may be configured in the server module. As in other UltraSPARC T1-based systems, the actual memory speed is 400 MHz.

Figure 11. Sun Blade T6300 server module block-level diagram (the UltraSPARC T1 processor connects to DDR2 533 memory clocked at 400 MHz and, through its JBus interface, to a Fire chip and a pair of PCI Express bridges that provide x8 links to the ExpressModules and NEMs; an Intel gigabit Ethernet controller, an LSI SAS1068e storage controller with SAS links, the ALOM service processor, and front-panel serial and USB connections complete the design, all joined through the passive midplane)

For I/O, the processor's JBus interface connects to a Fire chip that directs traffic through a pair of PCI Express bridges, yielding four x8 PCI Express interfaces. Two of these interfaces are made available through PCI Express ExpressModules, and the other two are connected to the PCI Express Network Express Modules.
For storage, an LSI SAS1068e controller is included on the server module, providing eight hard drive interfaces, or links. Four interfaces are used for the on-board hard drives, which may be Serial Attached SCSI (SAS) or Serial ATA (SATA). The other four links are routed to the midplane, where they interface with the NEM slots for future use. The storage controller is capable of RAID 0 or 1, and up to two volumes are supported in RAID configurations.
The gigabit Ethernet interfaces are provided by an Intel chip connected to a x4 PCI Express interface on one of the bridges. Two gigabit Ethernet links are then routed through the midplane to the NEMs. The server module provides the logic for the gigabit Ethernet connection, while the NEM provides the physical interface.

The ALOM Service Processor


The remote management capabilities of the Sun Blade T6300 server module are a complete implementation of the Advanced Lights Out Manager (ALOM). The ALOM service processor allows the Sun Blade T6300 server module to be remotely managed and administered identically to the Sun Fire / SPARC Enterprise T1000 and T2000 servers.


ALOM allows the administrator to monitor and control a server, either over a network or by using a dedicated serial port for connection to a terminal or terminal server. ALOM provides a command-line interface that can be used to remotely administer geographically distributed or physically inaccessible machines. In addition, ALOM allows administrators to run diagnostics remotely (such as power-on self-test) that would otherwise require physical proximity to the server's serial port. ALOM can also be configured to send e-mail alerts of hardware failures, hardware warnings, and other events related to the server or to ALOM.
The ALOM circuitry runs independently of the server, using the server's standby power. As a result, ALOM firmware and software continue to function when the server operating system goes offline or when the server is powered off. ALOM monitors disk drives, fans, CPUs, power supplies, system enclosure temperature, voltages, and the server front panel, so that the administrator does not have to.
ALOM specifically monitors the following Sun Blade T6300 server module components:
• CPU temperature conditions
• Enclosure thermal conditions
• Fan speed and status
• Power supply status
• Voltage thresholds
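A representative ALOM session is sketched below; these standard ALOM shell commands are shown purely for illustration:

    sc> showenvironment    (display temperatures, fan speeds, and voltages)
    sc> showlogs           (review the ALOM event log)
    sc> console -f         (force attachment to the host system console)
    sc> poweron            (power the server module on)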

Sun Blade X6220 Server Module


The Sun Blade X6220 server module provides a two-socket, x64-based platform with significant computational, memory, and I/O density. The result is a compact, efficient, and flexible package with leading floating-point performance for demanding applications such as HPC. The physical layout of the Sun Blade X6220 server module is illustrated in Figure 12.

Figure 12. The Sun Blade X6220 server module with key components called out (midplane connector, four hot-plug SAS or SATA 2.5-inch drives, two AMD Opteron processors, 16 DDR2 667 DIMM sockets, and service processor)


Second Generation AMD Opteron Series 2000 Processors


The Sun Blade X6220 server module is based on the Second-Generation AMD Opteron 2000 Series processor, leveraging AMD's Direct Connect Architecture and the nVidia nForce Professional 2200 chipset for scalability and fast I/O throughput. The Sun Blade X6220 server module initially supports dual-core AMD Opteron processors, and will support AMD's future processors as they become available. The Sun Blade 6000 chassis provides sufficient airflow for the server modules to be configured with any type of AMD Opteron processor, including the Special Edition (SE) versions that consume more power but provide greater clock speed.
The AMD Opteron processor extends the ubiquitous x86 architecture to accommodate 64-bit processing. Formerly known as x86-64, AMD's enhancements to the x86 architecture allow seamless migration to the superior performance of x64 64-bit technology. The AMD Opteron processor (Figure 13) was designed from the start for dual-core functionality, with a crossbar switch and system request interface. This approach defines a new class of computing by combining full x86 compatibility, a high-performance 64-bit architecture, and the economics of an industry-standard processor.

Figure 13. High-level architectural perspective of a dual-core AMD Opteron processor (two cores, each with 128 KB of L1 cache and 1 MB of L2 cache, share a system request interface and crossbar switch connecting the DDR2 memory controller and three HyperTransport links)

Enhancements of the AMD Opteron processor over the legacy x86 architecture include:
• Sixteen 64-bit general-purpose integer registers that quadruple the general-purpose register space available to applications and device drivers as compared to x86 systems
• Sixteen 128-bit XMM registers that double the register space of any current SSE/SSE2 implementation for enhanced multimedia performance
• A full 64-bit virtual address space, offering 40 bits of physical memory addressing and 48 bits (256 terabytes) of virtual addressing
• Support for 64-bit operating systems, providing full, transparent, and simultaneous 32-bit and 64-bit platform application multitasking
• A 128-bit wide, on-chip DDR memory controller that supports ECC and Enhanced ECC and provides low-latency memory bandwidth that scales as processors are added


Each processor core has a dedicated 1 MB Level 2 cache, and both cores use the system request interface and crossbar switch to share the memory controller and access the three HyperTransport links. This sharing represents an effective approach, since performance characterizations of single-core systems have revealed that the memory and HyperTransport bandwidths are typically under-utilized, even while running high-end server workloads.
The AMD Opteron processor's integrated HyperTransport technology links provide a scalable-bandwidth interconnect among processors, I/O subsystems, and other chipsets. HyperTransport technology interconnects help increase overall system performance by removing I/O bottlenecks, efficiently integrating with legacy buses, increasing bandwidth and speed, and reducing processor latency. At 16 x 16 bits and 1 GHz operation, HyperTransport technology provides up to 8 GB/s of bandwidth per link.


As shown in Figure 14, the AMD Opteron processor uses DDR2 memory running at a memory bus clock rate of 667 MHz. Up to 10.7 GB per second of memory bandwidth is provided by each memory controller, for a total aggregate memory bandwidth of 21.4 GB per second. These higher clock rates can be sustained even when the CPUs are configured with up to four DDR2 DIMMs each; when all eight DIMMs are populated, the clock speed drops to 533 MHz. The total memory capacity is 64 GB per server module.

Figure 14. Sun Blade X6220 server module block-level diagram (two AMD Opteron 2000 Series processors, each with DDR2 667 memory, connect over 8 GB/s HyperTransport links to two nVidia PCI Express bridges that provide x8 links to the ExpressModules and NEMs plus gigabit Ethernet; an LSI SAS1068e storage controller, Compact Flash, a service processor with 10/100 management Ethernet to the CMM, and front-panel VGA, serial, and USB connections complete the design, all joined through the passive midplane)

The nVidia PCI Express bridges are connected to the AMD Opteron processors over 8 GB per second HyperTransport links to provide maximum throughput to the PCI Express lanes that are directed through the passive midplane. Two HyperTransport links connect the two CPUs, with one used for cache coherency and the other for I/O communication between the processors and the second PCI Express bridge; these links also run at 8 GB per second. Two x8 PCI Express interfaces are pulled from each of the PCI Express bridges, with each link providing a 32 Gb per second interface through the midplane. Each PCI Express bridge also provides a gigabit Ethernet interface that is routed through the passive midplane to the PCI Express Network Express Modules.
Sun Blade X6220 server modules also provide a Compact Flash slot, connected to the system through an IDE connection to the nVidia chipset. By inserting a standard Compact Flash device, administrators can store valuable data or even install a bootable operating environment. The Compact Flash device is internal to the server module and cannot be removed unless the server module is removed from the chassis.
As in the Sun Blade T6300 server module, an LSI SAS1068e controller is located on the Sun Blade X6220 server module, providing eight hard drive interfaces. Four interfaces are used for the on-board hard drives (either SAS or SATA). The other four links are routed to the midplane for future use. The storage controller is capable of RAID 0 or 1, and up to two volumes are supported in RAID configurations.

The Integrated Lights Out Management (ILOM) Service Processor


The Integrated Lights Out Management (ILOM) service processor is fully featured and is identical in implementation to that used in other Sun modular and rackmount x64 servers. As a result, the Sun Blade X6220 server module integrates easily with existing management infrastructure.
Critical to effective system management, the ILOM service processor:
• Implements an IPMI 2.0 compliant BMC, providing IPMI management functions to the server module's BIOS, OS, and applications, and to IPMI-based management tools accessing the BMC either through the OS interfaces or via the ILOM Ethernet management interface, providing visibility to the environmental sensors (both on the server module and elsewhere in the chassis)
• Manages inventory and environmental controls for the server module, including CPUs, DIMMs, and EMs, and provides HTTPS/CLI/SNMP access to this data
• Supplies remote textual and graphical console interfaces, as well as a remote storage (USB) interface (collectively, these functions are referred to as Remote Keyboard, Video, Mouse, and Storage, or RKVMS)
• Provides a means to download BIOS images and firmware
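Because the BMC is IPMI 2.0 compliant, standard tools work unmodified. As a brief sketch, the following ipmitool commands query a service processor over the management network; the address and credentials are placeholders:

    ipmitool -I lanplus -H 10.1.1.20 -U root chassis status   # power state
    ipmitool -I lanplus -H 10.1.1.20 -U root sdr list         # sensor readings
    ipmitool -I lanplus -H 10.1.1.20 -U root sel list         # event log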


The ILOM service processor also allows the administrator to remotely manage the server, independently of the operating system running on the platform and without interfering with any system activity. To facilitate full-featured remote management, the ILOM service processor provides remote keyboard, video, mouse, and storage (RKVMS) support that is tightly integrated with the Sun Blade server modules. Together, these capabilities allow the server module to be administered remotely while accessing keyboard, mouse, video, and storage devices local to the administrator (Figure 15). ILOM Remote Console support is provided on the ILOM service processor, and the application can be downloaded and executed on the management console. Input and output of virtual devices is handled between ILOM on the Sun Blade server module and the ILOM Remote Console on the Web-based client management console.
Figure 15. Remote keyboard, video, mouse, and storage (RKVMS) support in the ILOM service processor allows full-featured remote management of Sun Blade server modules (the ILOM Remote Console connects to ILOM over the management Ethernet, displays redirected video at up to 1024x768 at 60 Hz in an application window, and presents the administrator's local keyboard, mouse, floppy disk or image, and CD/DVD-ROM or .iso image to the server module's BIOS and OS as emulated USB devices)

Sun Blade X6250 Server Module


Broadening Sun's x64-based modular offerings, the Sun Blade X6250 server module provides support for dual-core and quad-core Intel Xeon processors. Intel Xeon Processor 5100 series CPUs provide the highest clock speeds in the industry in a dual-core package, while Intel Xeon Processor 5300 series CPUs provide quad-core processing power. Figure 16 shows a physical view of the Sun Blade X6250 server module with key components identified.

Figure 16. The Sun Blade X6250 server module with key components called out (midplane connector, four hot-plug SAS or SATA 2.5-inch drives, two Intel Xeon processors, 16 FB-DIMM 667 sockets, and RAID expansion module)

Intel Xeon Processor 5100 and 5300 Series


Utilizing the Intel Core microarchitecture, the Intel Xeon Processor 5100 and 5300 series provide performance for multiple application types and user environments in a substantially reduced power envelope. The dual-core 5100 series processor provides significant performance headroom for multithreaded applications and helps boost system utilization through virtualization and application responsiveness. The quad-core 5300 series processor maximizes performance and performance per watt, providing increased density for datacenter deployments.
Logical block-level diagrams for both the 5100 and 5300 series processors are provided in Figure 17. The 5100 series processor includes two processor cores, each provided with a 64K Level 1 cache (32K instruction/32K data). Both cores share a 4 MB Level 2 cache to increase cache-to-processor data transfers, maximize main memory to processor bandwidth, and reduce latency. The 5300 series processor provides four processor cores, with each pair of cores sharing a 4 MB Level 2 cache for a total of 8 MB. The processors share a high-speed front side bus (FSB).

Figure 17. Intel Xeon Processor 5100 and 5300 series block-level diagrams (the dual-core 5100 series pairs two cores, each with 64K of L1 cache, over a shared 4 MB L2 cache; the quad-core 5300 series packages four cores with two shared 4 MB L2 caches; in both, the cores communicate over a front side bus)


Server Module Architecture


The Sun Blade X6250 server module (Figure 18) uses the Intel 5000P Memory Controller Hub (MCH), which provides communication to the processors over two high-speed front side buses (FSBs). The FSBs run at 1,333 MHz for the higher clock speed processors and at 1,066 MHz for the slower speed bins. The maximum bandwidth through each FSB is 10.5 GB per second, for an aggregate processor bandwidth of 21 GB per second.

Figure 18. Sun Blade X6250 server module block-level diagram (two Intel Xeon processors on dual front side buses connect to the Intel 5000P MCH with FB-DIMM 667 memory; the MCH provides x8 PCI Express links to an ExpressModule and to the fabric expansion module socket, plus an ESI link to the ESB2 I/O bridge, which serves the second ExpressModule, dual gigabit Ethernet to the NEMs, Compact Flash, and the SAS RAID expansion module; an AST2000 service processor connects over 10/100 management Ethernet to the CMM)

The MCH also provides the system with high-speed memory controllers and PCI Express bridges, as well as a high-speed link to a second I/O bridge (the ESB2 I/O control hub). The total memory bandwidth provides read speeds of up to 21.3 GB per second and write speeds of up to 17 GB per second. One of the x8 PCI Express interfaces from the MCH is routed directly to a PCI Express ExpressModule via the passive midplane. The other interface is routed to the Fabric Expansion Module (FEM) socket, available for future expansion capabilities.
The Intel ESB2 PCI Express bridge provides connectivity to the other PCI Express ExpressModule and access to the dual gigabit Ethernet interfaces that are routed through the passive midplane to the NEMs. This bridge also provides the IDE connection to the Compact Flash device, used for boot and storage capabilities.
Sun Blade X6250 RAID Expansion Module (REM)
All standard Sun Blade X6250 server module configurations ship with the Sun Blade X6250 RAID Expansion Module (REM). The REM provides a total of eight SAS ports, battery-backed cache, and RAID 0, 1, 5, and 1+0 capabilities. Using the REM, the server module provides SAS connectivity on the internal drive slots. Four 1x SAS links are also


routed to the NEMs for future storage expansion. Build-to-order Sun Blade X6250 server modules can be ordered without the REM. While these server modules do not provide SAS support, SATA connectivity to the internal hard disk drives can be provided by the Intel ESB2 I/O bridge.

The Embedded LOM Service Processor


Similar to the other Sun Blade 6000 server modules, the Sun Blade X6250 server module includes an embedded lights out manager (embedded LOM). This built-in, hardware-based service processor enables organizations to consolidate system management functions with remote power control and monitoring capabilities. The service processor is IPMI 2.0 compliant and enables specific capabilities, including system configuration information retrieval, key hardware component monitoring, remote power control, full local and remote keyboard, video, mouse (KVM) access, remote media attachment, SNMP v1, v2c, and v3 support, and event notification and logging.
Administrators simply and securely access the service processor on the Sun Blade X6250 server module using a secure shell command line, redirected console, or SSL-based Web browser interface from a remote workstation. The Desktop Management Task Force's (DMTF) Systems Management Architecture for Server Hardware (SMASH) command-line protocol is supported over both the serial interface and the secure shell network interface. A Web server and a Java Web Start remote console application are embedded in the service processor, minimizing the need for special-purpose software installation on the administrative workstation. For enhanced security, the service processor includes multilevel, role-based access to features, and it flexibly supports native and Active Directory Service lookup of authentication data. All functions can be provided out-of-band through a designated serial or network interface, eliminating any performance impact on workload processing.

Sun Blade X6450 Server Module


Adding to the capabilities of the Sun Blade X6250 server module, the Sun Blade X6450 server module provides increased scalability with dual-core and quad-core Intel Xeon processors. The dual-core Intel Xeon Processor 7200 series and quad-core Intel Xeon Processor 7300 series support quad-socket configurations such as those offered by the Sun Blade X6450 server module. Offering both quad-core and quad-socket support in a blade package provides significant computational density along with the flexible advantages of a modular platform. Figure 19 illustrates a physical view of the Sun Blade X6450 server module with key components identified.

Figure 19. The Sun Blade X6450 server module supports up to four Intel Xeon processors (midplane connector, four Intel Xeon processors, 24 FB-DIMM 667 sockets, Compact Flash storage, and the Intel 7000 MCH, or Clarksboro, Northbridge)

Intel Xeon Processor 7200 and 7300 Series


The Intel Xeon Processor 7200 and 7300 series use a multi-chip package (MCP) to deliver quad-core configurations. This packaging approach increases die yields and lowers manufacturing costs, which helps Intel and Sun deliver higher performance at lower price points. The dual-core Intel Xeon Processor 7200 series and quad-core Intel Xeon Processor 7300 series both incorporate two die per processor package, with each die capable of containing two processor cores (Figure 20).

Figure 20. Intel Xeon Processor 7200 and 7300 series block-level diagrams

In the dual-core Intel Xeon 7200 series, each die includes one processor core, while in the quad-core Intel Xeon Processor 7300 series, each die contains two cores. In a Sun Blade X6450 server module with four processors, this dense configuration provides up to 16 execution cores in a compact blade form factor. The 7000 sequence processor families share these additional features:
• On-die Level 1 (L1) instruction and data caches (64KB per die)
• An on-die Level 2 (L2) cache (4MB per die, for a total of 8MB in packages with two die)
• Multiple, independent front side buses (FSBs) that act as high-bandwidth system interconnects


Server Module Architecture


The Sun Blade X6450 server module (Figure 21) uses the Intel 7000 Memory Controller Hub (MCH), also known as the Clarksboro Northbridge, which provides communication to the processors over four high-speed front side buses (FSBs). The FSBs run at 266 MHz, or 1,066 MT/s. The maximum bandwidth through each FSB is 8.5 GB per second, for an aggregate processor bandwidth of 34 GB per second.

Figure 21. Sun Blade X6450 server module block-level diagram (four Intel Xeon processors on four front side buses connect to the Intel 7000 MCH with FB-DIMM 667 memory; the MCH provides x8 PCI Express links to the ExpressModules and fabric expansion module socket, an x4 link for the optional SAS RAID expansion module, and an ESI link to the ESB2 I/O bridge, which serves dual gigabit Ethernet to the NEMs and the Compact Flash device; an AST2000 service processor connects over 10/100 management Ethernet to the CMM)

The MCH also provides the system with high-speed memory controllers and PCI Express bridges, as well as a high-speed link to a second I/O bridge (the ESB2 I/O control hub). The total memory bandwidth provides read speeds of up to 21.3 GB per second and write speeds of up to 17 GB per second. One of the x8 PCI Express interfaces from the MCH is routed directly to a PCI Express ExpressModule via the passive midplane. The other interface is routed to the Fabric Expansion Module (FEM) socket, available for future expansion capabilities. An x4 PCI Express connection powers an optional RAID Expansion Module (REM) that can be configured to access Serial Attached SCSI (SAS) storage devices over the passive midplane.
The Intel ESB2 I/O PCI Express bridge provides connectivity to the other PCI Express ExpressModule and access to the dual gigabit Ethernet interfaces that are routed through the passive midplane to the NEMs. This bridge also provides the IDE connection to the Compact Flash device. The Sun Blade X6450 server module is diskless and contains no traditional hard drives; the integrated Compact Flash device provides internal storage that can be used as a boot device or as a generic storage medium.


The Embedded LOM Service Processor


Like the Sun Blade X6250 server module, the Sun Blade X6450 server module includes an embedded lights out manager (embedded LOM). This built-in, hardware-based service processor enables organizations to consolidate system management functions with remote power control and monitoring capabilities. The service processor is IPMI 2.0 compliant and enables specific capabilities, including system configuration information retrieval, key hardware component monitoring, remote power control, full local and remote keyboard, video, mouse (KVM) access, remote media attachment, SNMP v1, v2c, and v3 support, and event notification and logging.
Administrators simply and securely access the service processor on the Sun Blade X6450 server module using a secure shell command line, redirected console, or SSL-based Web browser interface from a remote workstation. The Desktop Management Task Force's (DMTF) Systems Management Architecture for Server Hardware (SMASH) command-line protocol is supported over both the serial interface and the secure shell network interface. A Web server and a Java Web Start remote console application are embedded in the service processor, minimizing the need for special-purpose software installation on the administrative workstation. For enhanced security, the service processor includes multilevel, role-based access to features, and it flexibly supports native and Active Directory Service lookup of authentication data. All functions can be provided out-of-band through a designated serial or network interface, eliminating any performance impact on workload processing.


Chapter 4

I/O Expansion, Networking, and Management


Today's datacenter investments need to be protected, especially as systems are repurposed, expanded, and altered to meet dynamic demands. Modular systems can play a key role, allowing organizations to derive maximum benefit from their infrastructure even as their needs change. More importantly, modular systems must avoid arbitrary limitations that restrict choice in I/O, networking, or management. The Sun Blade 6000 and 6048 modular systems in particular are designed to work with open and multivendor industry standards without dictating components, topologies, or management scenarios.

Server Module Hard Drives


A choice of hot-swappable 2.5-inch SAS or SATA hard disk drives is provided with all Sun Blade 6000 server modules except the diskless Sun Blade X6450 server module.
• Serial Attached SCSI (SAS) drives provide high performance and high density. These 10,000 rpm drives are available in capacities of 73 GB or 146 GB and provide enterprise-class reliability, with 1.6 million hours mean time between failures (MTBF).
• Serial ATA (SATA) drives are 5,400 rpm and available in 80 GB capacities.
Please check Sun's Website (www.sun.com/servers/blades/6000) for the latest available disk drive offerings.

PCI Express ExpressModules (EMs)


Industry-standard I/O, long a staple of rackmount and vertically scalable servers, has been elusive in legacy blade platforms. The lack of industry-standard I/O has meant that customers often paid more for fewer options and were ultimately limited by a single vendor's innovation. Unlike legacy blade platforms, Sun Blade 6000 and 6048 modular systems accommodate PCI Express ExpressModules (EMs) compliant with the PCI SIG form factor. This approach allows for a wealth of expansion module options from multiple vendors, and avoids a single-vendor lock on innovation. The same EMs can be used on Sun Blade 6000 and 6048 modular systems as well as Sun Blade 8000 modular systems.
The passive midplane implements connectivity between the EMs and the server modules, and physically assigns pairs of EMs to individual server modules. As shown in Figure 22, EMs 0 and 1 (from right to left) are connected to server module 0, EMs 2 and 3 are connected to server module 1, EMs 4 and 5 are connected to server module 2, and so on. Each EM is supplied with an x8 PCI Express link back to its associated server module, providing up to 32 Gb/s of I/O throughput. EMs are hot-plug capable according to the standard defined by the PCI SIG, and are fully customer replaceable without opening the chassis or removing the server module.

Figure 22. A pair of 8-lane (x8) PCI Express slots allows up to two PCI Express ExpressModules per server module in the Sun Blade 6000 (shown) and 6048 chassis

With the industry-standard PCI Express ExpressModule form factor, EMs are available for multiple types of connectivity, including:
• 4 Gb Fibre Channel, dual-port (QLogic, SG-XPCIE2FC-QB4-Z)*
• 4 Gb Fibre Channel, dual-port (Emulex, SG-XPCIE2FC-EB4-Z)
• Gigabit Ethernet, dual-port (copper, X7282A-Z)*
• Gigabit Ethernet, dual-port (fiber, X7283A-Z)
• 4X InfiniBand, dual-port (Mellanox, X1288A-Z)*
• 12 Gb SAS, dual-port (LSI Logic, SG-XPCIE8SAS-EB-Z)
• 12 Gb SAS RAID, single-port (Intel SRL, SGXPCIESAS-R-BLD-Z)
• Gigabit Ethernet, quad-port (copper, X7284A-Z)
• Gigabit Ethernet, quad-port (copper, X7287A-Z)
• 10 Gb Ethernet, dual-port (fiber, X1028A-Z)
• 4X InfiniBand, no-mem, single-port (Mellanox, X1290A)
EMs marked with an asterisk are shown in Figure 23. For the latest available EMs, please refer to www.sun.com/servers/blades/6000.

Figure 23. Several PCI Express ExpressModules available for the Sun Blade 6000 modular server.


PCI Express Network Express Modules (NEMs)


Many legacy blade platforms include integrated network switching as a way to gain aggregate network access to the individual server modules. Unfortunately, these switches are often restrictive in their options and may dictate topology and management choices. As a result, datacenters often find legacy blade server platforms difficult to integrate into their existing networks, or are resistant to admitting new switch hardware into their chosen network fabrics.
Sun Blade 6000 and 6048 modular systems address this problem through a specific PCI Express Network Express Module (NEM) form factor that provides configurable network I/O for all of the server modules in the system. Connecting to all of the installed server modules through the passive midplane, NEMs represent a space-efficient mechanism for deploying high-density, configurable I/O, and provide bulk I/O options for the entire chassis.

Gigabit Ethernet Pass-Through NEMs


Gigabit Ethernet Pass-Through NEMs are available for configuration with both the Sun Blade 6000 and 6048 modular systems, providing pass-through access to the gigabit Ethernet interfaces located on the server modules. Separate NEMs are provided to support the different numbers of server modules in the two chassis. The gigabit Ethernet interface logic resides on the server module, while the passive midplane simply provides access and connectivity. With the Gigabit Ethernet Pass-Through NEMs, individual server modules can be connected to external switches just as easily as rackmount servers, with no arbitrary topological constraints.
The Gigabit Ethernet Pass-Through NEMs provide an RJ-45 connector for each of the server modules supported in the respective chassis: 10 for the Sun Blade 6000 modular system, and 12 for the Sun Blade 6048 modular system shelf. Adding a second pass-through NEM provides access to the second gigabit Ethernet connection on each server module. Figure 24 illustrates the Gigabit Ethernet Pass-Through NEM.

Figure 24. The Gigabit Ethernet Pass-Through NEM provides a 10/100/1000BASE-T port for each installed Sun Blade server module (Sun Blade 6000 Pass-Through NEM shown)


Sun Blade 6048 InniBand Switched NEM


Providing dense connectivity to servers while minimizing cables is one of the issues facing large HPC cluster deployments. The Sun Blade 6048 InfiniBand Switched NEM solves this challenge by integrating an InfiniBand leaf switch into a Network Express Module for the Sun Blade 6048 chassis. The NEM shares components, cables, and connectors with the Sun Datacenter Switch (DS) 3456 and 3x24, facilitating the build-out of very large InfiniBand clusters (up to 288 nodes per Sun DS 3x24, and up to 3,456 nodes per Sun DS 3456). Up to four Sun DS 3456 core switches can be employed to construct truly massive clusters with up to 13,824 Sun Blade 6000 server modules. A block-level diagram of the dual-height NEM is provided in Figure 25, aligned with an image of the back panel.
Figure 25. The Sun Blade 6048 InfiniBand Switched NEM provides eight switched 12x InfiniBand connections to two on-board 24-port, 384 Gbps switches, with twelve HCA-attached PCI Express x8 connections from the server modules and twelve pass-through gigabit Ethernet ports, one to each Sun Blade 6000 server module in the Sun Blade 6048 shelf

Each Sun Blade 6048 InfiniBand Switched NEM employs two of the same Mellanox InfiniScale III 24-port switch chips used in the Sun DS 3456 and 3x24 InfiniBand switches, each providing 12 internal and 12 external connections. Redundant internal connections are provided from Mellanox ConnectX HCA chips to each of the switch chips, allowing the system to route around failed links. Additionally, 12 pass-through gigabit Ethernet connections are provided to access the gigabit Ethernet interfaces on the individual Sun Blade 6000 server modules mounted in the Sun Blade 6048 modular system. The same standard Compact Small Form-factor Pluggable (CSFP) connectors used on the Sun DS 3456 and 3x24 switches appear on the back panel for direct connection, with each 12x connector carrying three 4x InfiniBand connections.


Transparent and Open Chassis and System Management


Management in legacy blade platforms has typically either been lacking, or administrators have been forced to adopt unique, platform-specific management infrastructure. To address this issue, the Sun Blade 6000 and 6048 modular systems provide a wide range of flexible management options.

Chassis Monitoring Module (CMM)


The Chassis Monitoring Module (CMM) is the primary point of management for all
shared chassis components and functions, providing a set of management interfaces.
Each server module contains its own service processor, giving it remote management
capabilities similar to those of other Sun servers. Through their respective Lights Out
Management (LOM) service processors, individual server modules provide IPMI, HTTPS, CLI
(SSH), SNMP, and file transfer interfaces that are directly accessible from the Ethernet
management port on the Chassis Monitoring Module (CMM). Each server module is
assigned an IP address (either manually, or via DHCP) that is used for the management
network.
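Because each service processor exposes standard IPMI over the management network,
routine queries can be scripted with stock tools. The following is a minimal sketch
using Python's subprocess module to drive the widely available ipmitool utility; the
hostname and credentials are placeholders, not values defined by the product.

# Query a server module's service processor over the CMM management network.
import subprocess

BLADE_SP = "blade0-sp.mgmt.example.com"   # placeholder management address
USER, PASSWORD = "root", "changeme"       # placeholder credentials

def ipmi(*args: str) -> str:
    """Run one ipmitool command over the IPMI 2.0 LAN (lanplus) interface."""
    cmd = ["ipmitool", "-I", "lanplus", "-H", BLADE_SP,
           "-U", USER, "-P", PASSWORD, *args]
    return subprocess.run(cmd, capture_output=True,
                          text=True, check=True).stdout

# Read the blade's sensor data repository and chassis power state.
print(ipmi("sdr", "list"))
print(ipmi("chassis", "status"))

The same pattern extends to any other ipmitool subcommand, such as reading the
system event log with "sel list".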

CMM Network Functionality


A single CMM is built into each Sun Blade 6000 modular system and Sun Blade 6048
shelf, and is configured with an individual IP address assigned either statically or
dynamically via DHCP. The CMM provides complete monitoring and management
functionality for the chassis (or shelf) while providing access to server module
management functions. In addition, the CMM supports HTTP and CLI pass-through
interfaces that provide transparent access to each server module. The CMM also
provides access to each server module via a single serial port through which any of the
various LOM interfaces can be configured. The CMM's management functions include:
• Implementation of an IPMI satellite controller, making the chassis environmental
sensors visible to the server modules' BMC functions
• Direct environmental and inventory management via CLI and IPMI interfaces
• CMM, ILOM, and NEM firmware management
• Pass-through management of blades using IPMI, SNMP, and HTTP links along with
command-line interface (CLI) SSH contexts
The management network internal to the CMM joins the local management processor
on each server module to the external management network through the passive
midplane.

CMM Architecture
A portion of the CMM functions as an unmanaged switch dedicated exclusively to
remote management network traffic, letting administrators access the remote
management functions of the server modules. The switch in the CMM provides a single
network interface to each of the server modules and to each of the NEMs, as well as to
the service processor located on the CMM itself. Figure 26 provides an illustration and a
block-level diagram of the Sun Blade 6000 CMM. The Sun Blade 6048 CMM has a
different form factor but provides the same functionality.

Figure 26. The CMM provides a management network that connects to each server module, the two
NEMs, and the CMM itself (Sun Blade 6000 CMM shown)

The CMM provides various management functions, including power control of the
chassis as well as hot-plug operations for infrastructure components such
as power supply modules, fan modules, server modules, and NEMs. The CMM acts as a
conduit to server module LOM configuration, allowing settings such as network
addresses and administrative users to be configured or viewed.
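Since the CMM's CLI is reachable over SSH, these monitoring and configuration tasks
can also be automated. The following is a minimal sketch, assuming a reachable CMM
address and ILOM-style hierarchical target paths; the exact namespace (for example,
whether a blade slot appears as /CH/BL2) is an assumption here and should be
confirmed against the ILOM documentation for a given firmware revision.

# Script the CMM's SSH command-line interface with paramiko.
import paramiko

CMM_HOST = "cmm.mgmt.example.com"   # placeholder CMM management address

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect(CMM_HOST, username="root", password="changeme")

for command in (
    "show /CH/PS0",   # assumed path: status of a chassis power supply
    "show /CH/BL2",   # assumed path: status of the blade in slot 2
):
    _, stdout, _ = client.exec_command(command)
    print(stdout.read().decode())

client.close()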

Sun xVM Ops Center


Beyond local and remote management capabilities, datacenter infrastructure needs to
be agile and flexible, allowing not only fast deployment but streamlined redeployment
of resources as required. Sun xVM Ops Center technology (formerly Sun N1 System
Manager and Sun Connection) provides an IT infrastructure management platform for
integrating and automating management of thousands of heterogeneous systems. To
improve life-cycle and change management, Sun xVM Ops Center supports the
management of applications and the servers on which they run, including the Sun
Blade 6000 and 6048 modular systems.
Sun xVM Ops Center simplifies infrastructure life-cycle management by letting
administrators perform standardized actions across logical groups of systems. Sun xVM
Ops Center can automatically discover and group bare-metal systems, performing
actions on the entire group as easily as operating on a single system. Sun xVM Ops
Center remotely installs and updates firmware and operating systems, including
support for:
• Solaris 8, 9, and 10 on SPARC systems
• Solaris 10 on x86/x64 platforms
• Red Hat and SuSE Linux distributions


In addition, the software provides considerable lights-out monitoring of both hardware
and software, including fans, temperature, disk and voltage levels, as well as swap
space, CPU utilization, memory capacity, and file systems. Role-based access control
lets IT staff grant specific management permissions to specific users. A convenient
hybrid user interface integrates both a command-line interface (CLI) and an easy-to-use
graphical user interface (GUI), providing remote access to manage systems from
virtually anywhere.
Sun xVM Ops Center provides advanced management and monitoring features to the
Sun Blade 6000 and 6048 modular systems. The remote management interface
discovers and presents the Sun Blade server modules in the chassis as if they were
individual servers. In this fashion, the server modules appear in exactly the same way
as individual rackmount servers, making the same operations, detailed inventory, and
status pages available to administrators. The server modules are discovered and
organized into logical groups for easy identification of individual modules and of the
system chassis and racks that contain them. Organizing servers into groups also enables
features such as OS deployment across multiple server modules. At the same time,
individual server modules can also be managed independently from the rest of the
chassis. This flexibility allows for management of server modules that may have
different requirements than the other modules deployed in the same chassis.
Some of the functions available through Sun xVM Ops Center software include
operating system provisioning, firmware updates (for both the BIOS and ILOM service
processor firmware), and health monitoring. In addition, Sun xVM Ops Center includes
a framework that allows administrators to easily access inventory information and
simplifies the task of running jobs on multiple servers through server grouping
functionality.
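To make the grouping idea concrete, the toy model below illustrates why acting on a
logical group is no harder than acting on one server. It is purely illustrative and is
not the Sun xVM Ops Center API; all class and method names are invented for this
sketch.

# Toy model: one call on a logical group fans out to every member blade.
from dataclasses import dataclass, field
from typing import List

@dataclass
class ServerModule:
    name: str

    def run(self, job: str) -> None:
        # In the real product this would dispatch work to the blade's
        # service processor; here we just record the action.
        print(f"{self.name}: {job}")

@dataclass
class LogicalGroup:
    name: str
    members: List[ServerModule] = field(default_factory=list)

    def run(self, job: str) -> None:
        # A group-wide job is a single operation for the administrator.
        for module in self.members:
            module.run(job)

chassis = LogicalGroup("rack1-chassis0",
                       [ServerModule(f"blade{i}") for i in range(10)])
chassis.run("provision OS image")   # issued once, applied to all ten blades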


Chapter 5

Conclusion
Sun's innovative technology and open-systems approach make modular systems
attractive across a broad set of applications and activities, from deploying dynamic
Web services infrastructure to building datacenters that run demanding HPC codes. The Sun
Blade 6000 modular system provides the promised advantages of modular architecture
while retaining essential flexibility in how technology is deployed and managed. The
Sun Blade 6048 modular system extends and amplifies these strengths, allowing
organizations to build ultra-dense infrastructure that can scale to provide the world's
largest terascale and petascale supercomputing clusters and grids.
Sun's standards-based, open-systems approach yields choice and avoids compromise,
providing a platform that benefits from widespread industry innovation. With
chassis designed for investment protection into the future, organizations can literally
cable once and change their deployment options as required, mixing and matching
server modules as desired. A choice of Sun SPARC, Intel Xeon, or AMD Opteron based
server modules and a choice of operating systems make it easy to choose the right
platform for essential applications. Industry-standard I/O provides leading flexibility
and leading throughput for individual servers. Transparent networking and
management mean that the Sun Blade 6000 and 6048 modular systems fit easily into
existing network and management infrastructure.
The Sun Blade 6000 and 6048 modular systems get blade architecture right. Together
with the Sun Blade 8000 and 8000 P modular systems, Sun now has one of the most
comprehensive modular system families in the industry. This breadth of coverage
translates directly to savings in administration and management. For example,
unified support for the Solaris OS across all server modules means that the same
features and functionality are available on all processor platforms. This approach saves
time in both training and administration, even as the system delivers agile
infrastructure for the organization's most critical applications.


Sun Blade 6000 and 6048 Modular Systems

On the Web sun.com/servers/blades/6000

Sun Microsystems, Inc. 4150 Network Circle, Santa Clara, CA 95054 USA Phone 1-650-960-1300 or 1-800-555-9SUN (9786) Web sun.com
© 2007-2008 Sun Microsystems, Inc. All rights reserved. Sun, Sun Microsystems, the Sun logo, CoolThreads, Java, JVM, Solaris, Sun Blade, Sun Fire, N1 and ZFS are trademarks or registered trademarks of Sun
Microsystems, Inc. and its subsidiaries in the United States and other countries. All SPARC trademarks are used under license and are trademarks or registered trademarks of SPARC International, Inc. in the US and
other countries. Products bearing SPARC trademarks are based upon an architecture developed by Sun Microsystems, Inc. Intel Xeon is a trademark or registered trademark of Intel Corporation or its subsidiaries in
the United States and other countries. AMD Opteron is a trademark or registered trademark of Advanced Micro Devices. Information subject to change without notice. Printed in USA
SunWIN #:494863 06/08
