
AIX VUG

October 2014 Power Hardware Announcement

Joe Armstrong
jdarmstr@us.ibm.com

Credit for charts to Mark Olson and Jeff Stuecheli


Agenda

• POWER8 Scale-out Announcements

• Power E870 & E880

• Power I/O news & announcements


Power Processor Technology Roadmap

POWER5/5+ (130/90 nm, 2004): dual core, enhanced scaling, SMT,
distributed switch+, core parallelism+, FP performance+, memory
bandwidth+, virtualization

POWER6/6+ (65 nm, 2007): dual core, high frequencies, virtualization+,
memory subsystem+, Altivec, instruction retry, dynamic energy
management, SMT+, protection keys

POWER7/7+ (45/32 nm, 2010): eight cores, on-chip eDRAM,
power-optimized cores, memory subsystem++, SMT++, reliability+,
VSM & VSX, protection keys+

POWER8 (2014): more cores, SMT+++, reliability++, FPGA support,
transactional memory, PCIe acceleration, on-chip accelerators,
extreme big data optimization; 200+ systems in test

POWER9: extreme analytics optimization
POWER8 Processor

Technology
• 22 nm SOI, eDRAM, 15 metal layers, 650 mm2

Cores
• 12 cores (SMT8)
• 8 dispatch, 10 issue, 16 execution pipes
• 2x internal data flows/queues
• Enhanced prefetching
• 64 KB data cache, 32 KB instruction cache

Caches
• 512 KB SRAM L2 per core
• 96 MB eDRAM shared L3
• Up to 128 MB eDRAM L4 (off-chip)

Memory
• Up to 230 GB/s sustained bandwidth

Bus Interfaces
• Durable open memory attach interface
• Integrated PCIe Gen3
• SMP interconnect
• CAPI (Coherent Accelerator Processor Interface)

Accelerators
• Crypto & memory expansion
• Transactional memory
• VMM assist
• Data move / VM mobility

Energy Management
• On-chip power management micro-controller
• Integrated per-core VRM
• Critical path monitors
POWER8 Core

DF Larger Caching Structures vs.


Execution Improvement U POWER7
vs. POWER7 •2x L1 data cache (64 KB)
•SMT4  SMT8 •2x outstanding data cache misses
•8 dispatch ISU FX VSU •4x translation Cache
•10 issue U
•16 execution pipes: Wider Load/Store
• 2 FXU, 2 LSU, 2 LU, 4 FPU, •32B  64B L2 to L1 data bus
2 VMX, 1 Crypto, 1 DFU, 1 CR, •2x data cache to execution dataflow
1 BR
Enhanced Prefetch
•Larger Issue queues (4 x 16-entry)
•Instruction speculation awareness
•Larger global completion, Load/Store LSU •Data prefetch depth awareness
reorder IFU
•Improved branch prediction •Adaptive bandwidth awareness
•Improved unaligned storage access •Topology awareness

Core Performance vs . POWER7


~1.6x Thread
~2x Max SMT
5
1Q 2014 Portfolio: POWER7/POWER7+

Power 795, Power 780+, Power 770+, Power 750+/760+, Power 720+/740+,
Power 710+/730+, PowerLinux 7R4+, PowerLinux 7R1+/7R2+, IBM PureFlex
System (p460+, p260+, p24L)


2Q 2014 Portfolio: POWER8/POWER7/POWER7+

Power 795, Power 780+, Power 770+, Power 750+/760+, Power 720+/740+,
Power 710+/730+, PowerLinux 7R4+, PowerLinux 7R1+/7R2+, IBM PureFlex
System (p460+, p260+, p24L), plus the new POWER8 scale-out servers
(next chart)


Power Scale-out Servers

• Power Systems S812L: 1-socket, 2U, POWER8, Linux only, CAPI support (2), PowerVM & PowerKVM
• Power Systems S822L: 2-socket, 2U, up to 24 cores, 1 TB memory, 9 PCIe Gen3 slots, Linux only, CAPI support (4), PowerVM & PowerKVM
• Power Systems S822: 2-socket, 2U, up to 20 cores, 1 TB memory, 9 PCIe Gen3 slots, AIX & Linux, CAPI support (4), PowerVM
• Power Systems S814: 1-socket, 4U, up to 8 cores, 512 GB memory, 7 PCIe Gen3 slots, AIX, IBM i, Linux, CAPI support (2), PowerVM
• Power Systems S824: 2-socket, 4U, up to 24 cores, 1 TB memory, 11 PCIe Gen3 slots, AIX, IBM i, Linux, CAPI support (4), PowerVM
Scale-out October Enhancements


POWER8 Scale-Out Offerings: News/Updates

Expanding the POWER8 Linux scale-out portfolio

• Power S824L: Incorporating the innovation of the OpenPOWER community.
  Partnership with NVIDIA to tackle high performance analytic workloads.
• Power S812L: Announced in April; has now shipped across geos between
  7/20 and 9/10.

Delivering smaller core offerings (especially interesting to IBM i clients)

• Power S814: Announced/GA'd in April/June a 4-core S814 for AIX/IBM i/Linux,
  in addition to the 6- and 8-core. The P05 tier is attractive to entry IBM i
  clients.
• Added 100-120V support for the rack-mounted S814 (4- or 6-core only).
  Planned availability September 23rd through RPQ 8A2217.

Delivering on the promise of optimization for big data

• Doubling the memory capacity to 2 TB in the S824 (through RPQ 8A2232),
  GA in December. 128 GB DIMMs give either 1 TB (qty 8) or 2 TB (qty 16)
  configurations; no mixing of DIMM sizes, and no MES upgrade of installed
  systems to add DIMMs later (see the rules chart later in this deck).


Power S824L (8247-42L)
Open technology platform for high performance analytics, big data,
and Java application workloads

Planned availability date: October 31st

Incorporating the innovation of the OpenPOWER community
• Partnership with NVIDIA: acceleration by GPU
• Exploits the uncompromising performance of proven POWER8 and NVIDIA GPUs

High performance analytics, big data, and Java application workhorse
• Aims to deliver a new class of technology that maximizes performance
  and efficiency for all types of technical computing and high
  performance analytics workloads, as well as Java and big data
  applications
Power System S824L

Processor
• 2x 10-core 3.42 GHz or 2x 12-core 3.02 GHz

Memory
• 16 DDR3 CDIMM slots total
• 16, 32, or 64 GB CDIMMs @ 1600 Mbps
• 1 TB capacity, 384 GB/s bandwidth max

Storage
• JBOD, RAID 0, 10, 5, 6
• 12 SFF disk drives, 1 DVD

LAN adapters
• 2x 10GBASE-T adapter, or
• 2x 10Gb SFP+ Fiber SR plus 2x 1GbE adapter

GPU adapter (1 min, 2 max)
• NVIDIA K40 GPU adapter ("El Capitan")

Power supply
• 2+2 1400W power supplies

OS capable
• Linux Ubuntu (14.10)

Hypervisor capable
• OPAL, no virtualization

PCIe Gen3 slots
• 4 PCIe x16 G3 FHFL slots
• 6 PCIe x8 G3 FHHL slots
• CAPI capable on the PCIe x16 slots

Native I/O
• USB 3.0 (2 front, 2 rear)
• System management 1GbE (2 rear)
• System port (rear), USB 2.0 (2 rear)
NVIDIA K40 GPU

Systems
• Up to 2 K40 GPUs in the S824L

GPU specs
• Kepler-2 architecture GPU
• ASIC: GK110B

PCIe interface
• PCIe Gen3 x16
• Full length / double wide PCIe form factor
• Plugs in using the existing double wide cassette

Power
• 235W max power draw: 75W via the PCIe slot plus 160W via an 8-pin aux cable

OS support
• Ubuntu 14.10 or later

http://www.nvidia.com/object/tesla-servers.html
POWER8 S824 More Memory

• RPQ announce planned mid-October; GA in December
• 2X max memory: up to 2 TB (RPQ number 8A2232)
• No eConfig support; manual configuration
• Larger memory DIMM (128 GB)

Rules:
a) No mixing of DIMM sizes (i.e., when ordering 128 GB DIMMs, no other
   size (64 GB, 32 GB, 16 GB) can be mixed in)
b) Only quantity 8 or quantity 16 DIMMs (no other quantity accepted;
   with 128 GB DIMMs, quantity 8 = 1 TB and quantity 16 = 2 TB)
c) No MES upgrade provided for installed systems
d) Only supported on the S824; no other scale-out servers
NEBS: Differentiated Value for Telecommunications Clients

Designed for clients that require hardened infrastructure because of the
industries they serve or the data center environments where the equipment
is located. Carrier-grade platforms for NGN infrastructure and application
deployment, designed for extreme shock, vibration, and thermal conditions
which exceed normal data center design standards.

• Announcement date: October 6
• General availability date: October 31st
• Eligible models: Power S822L and S822
• Certifications: NEBS Level-3 and ETSI

Optional DC power supply and flexible thermal settings
• Power supply, 2x 750 Watt, -48V DC, hot-swap, base and redundant (#EB3H);
  RPQ 8A2227 approval of configuration validation required
• Flexible thermal settings for NEBS applications (#0709)
• For additional guidance, please consult your IBM sales specialist,
  visit ibm.com/power/solutions/industry/telco, or email powertel@us.ibm.com
Power Scale-out Servers

• Power Systems S812L: 1-socket, 2U, POWER8, Linux only, CAPI support (2), PowerVM & PowerKVM
• Power Systems S822L: 2-socket, 2U, up to 24 cores, 1 TB memory, 9 PCIe Gen3 slots, Linux only, CAPI support (4), PowerVM & PowerKVM
• Power Systems S822: 2-socket, 2U, up to 20 cores, 1 TB memory, 9 PCIe Gen3 slots, AIX & Linux, CAPI support (4), PowerVM
• Power Systems S814: 1-socket, 4U, up to 8 cores, 512 GB memory, 7 PCIe Gen3 slots, AIX, IBM i, Linux, CAPI support (2), PowerVM
• Power Systems S824: 2-socket, 4U, up to 24 cores, 1 TB memory, 11 PCIe Gen3 slots, AIX, IBM i, Linux, CAPI support (4), PowerVM
• Power Systems S824L (new): 2-socket, 4U, up to 24 cores, 1 TB memory, Linux, NVIDIA GPU, CAPI support (2)
POWER8 CAPI (Coherent Accelerator Processor Interface)

• Was just a statement of direction in April 2014
• The 10 June announcement letter shared the billing structure: feature
  #EC2A, an enablement code to use with future, separately acquired hardware
• Hardware using CAPI introduced October 2014

How it attaches
• POWER8 coherence bus
• PCIe Gen3 as the transport for encapsulated messages
• FPGA or ASIC: customizable hardware / application accelerator
  - Specific system SW, middleware, or user application
  - Written to a durable interface


Possible Example: CAPI Attached Flash Optimization

Attach flash memory to POWER8 via CAPI coherent attach.

Conventional I/O path (~20K instructions per I/O): application
read/write -> syscall -> file system (strategy()/iodone()) -> LVM ->
disk & adapter device driver (pin buffers, map DMA, start I/O,
interrupt, translate, unmap, unpin, iodone) -> I/O scheduling.

CAPI path (< 500 instructions per I/O): the application uses a Posix
async I/O style API (aio_read()/aio_write()) against a user library
with a shared-memory work queue; the CAPI flash controller operates
in user space.

• Issuing read/write commands directly from applications eliminates
  97% of the instruction path length
• Saves 10 cores per 1M IOPS
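Those two claims can be sanity-checked with simple arithmetic. A minimal
sketch using the slide's path lengths; the per-core instruction
throughput is an assumed round number, not an IBM figure:

# Sanity-check the CAPI flash claims with the slide's numbers.
CONVENTIONAL_PATH = 20_000    # instructions per I/O via the kernel stack
CAPI_PATH = 500               # instructions per I/O via the user-space library
IOPS = 1_000_000              # target I/O rate
INSTR_PER_SEC_PER_CORE = 2e9  # assumed sustained instruction rate per core

saved = 1 - CAPI_PATH / CONVENTIONAL_PATH
cores_saved = (CONVENTIONAL_PATH - CAPI_PATH) * IOPS / INSTR_PER_SEC_PER_CORE
print(f"Path length reduction: {saved:.1%}")         # 97.5%
print(f"Cores saved at 1M IOPS: {cores_saved:.1f}")  # ~9.8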



IBM Flash Optimized NoSQL


New: IBM Solution for Flash Optimized NoSQL - Power Systems Edition
Significant cost savings for in-memory NoSQL data stores

The market: Explosive growth of new mobile and social apps requiring
lightning-fast response at high volume
• Enabled by in-memory NoSQL key-value stores like Redis
• Ordered (key, value) pairs provide a type of in-memory,
  lightning-fast distributed hash table
• Plays an important role in many large websites: GitHub, Amazon,
  Facebook, Twitter & more

The issue: x86 memory limited by max RAM
• Scale-out x86 servers have limited memory size
• Results in costly, complex infrastructure: a load balancer plus
  (24) 1U x86 servers, each with 512 GB memory / 500 GB cache, in 24U

The solution: POWER8 + CAPI flash as RAM - up to 40 TB in 4U
• Power S822L/S812L, Ubuntu 14.10, FlashSystem 840 with 2 TB to 40 TB flash

The POWER8 + CAPI flash-as-RAM advantage:
• New flash-as-RAM tier for Redis in-memory apps
• Provides the means for large flash exploitation
• Lower cost memory, greater workload density
• Dramatically reduced cost to deliver services
• Can be offered as a cloud-based service or as an on-premise solution
  for enterprises
• 24:1 server consolidation, up to 3x lower TCA
What it Means to the Delivery of NoSQLs

Today's NoSQL in memory (x86): large distributed scale-out
infrastructure - load balancer, 10Gb uplinks, many 512 GB nodes with
500 GB cache each, backup nodes, and the networking bandwidth and
load-balancing overhead that come with them.

Differentiated NoSQL (POWER8 + CAPI flash): a 4U POWER8 server plus a
flash array with up to 40 TB - a new memory tier for NoSQL-based
applications, a cluster solution in a box:
• 192 threads in a 4U server drawer
• 40 TB of memory-based flash per 4U drawer
• Shared memory & cache for dynamic tuning
• Elimination of I/O and network overhead

Claimed benefits: 24:1 reduction in infrastructure, 2.4x price
reduction, 12x less energy, 12x less rack space, 40 TB of extended
memory.

The Power CAPI-attached flash model for NoSQL regains infrastructure
control and reins in the cost to deliver services.
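As a rough sizing cross-check, here are the slide's own numbers side by
side; the only computed figure below is the aggregate x86 cache:

# Compare the two NoSQL footprints described above.
x86_nodes, x86_cache_gb, x86_ru = 24, 500, 24   # 1U nodes, cache each, rack units
p8_flash_tb, p8_ru = 40, 4                      # CAPI flash capacity, rack units

print(f"x86 tier: {x86_nodes * x86_cache_gb / 1000:.0f} TB cache in {x86_ru}U")
print(f"POWER8 tier: up to {p8_flash_tb} TB flash-as-RAM in {p8_ru}U")
print(f"Server consolidation claimed: {x86_nodes}:1")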
S822 / S824 Easy Tier SSD Enhancement

4U Storage Backplane Options

Must select one (4U server):

                    Base             Split            Expanded Function *
SAS bays            12 SFF           6+6 SFF          18 SFF
SAS controllers     1                2                Dual
Write cache         None             None             7.2 GB
RAID levels         0,1,5,6,10       0,1,5,6,10       0,1,5,6,10
DVD bay             Yes              Yes              Yes
Other               -                -                External SAS ports,
                                                      8-bay SSD cage,
                                                      Easy Tier function

* Sweet SSD enhancement for:
• AIX / Linux / VIOS
• S824 & S822
• Using the Easy Tier function


New Low-price 1.8-Inch SSD for System Unit Cage

• For 2-socket servers (S822 & S824) using the high-function backplane
  with the Easy Tier function; the drive goes in the 1.8-inch SSD cage
• Read-intensive SSD: great performance, great entry price
• The intelligent SAS controller in the backplane places data with high
  read activity and low write activity on the drive, allowing many years
  of service without replacement (replacement covered by IBM service
  agreement)

List price, qty 3 x 177 GB SSD for RAID-5T2: $3,930 (63% lower)
List price, qty 3 x 387 GB SSD for RAID-5T2: $10,764

Maintenance prices after warranty are also about 60% less.

Prices are USA list prices and subject to change. Reseller prices can vary.
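The "63% lower" figure follows directly from the two list prices:

# Entry-price comparison for the read-intensive 1.8-inch SSD.
price_177gb = 3_930    # qty 3 x 177 GB, RAID-5T2, USA list
price_387gb = 10_764   # qty 3 x 387 GB, RAID-5T2, USA list

print(f"Entry price reduction: {1 - price_177gb / price_387gb:.0%}")   # 63%
print(f"$/GB: {price_177gb / (3 * 177):.2f} vs {price_387gb / (3 * 387):.2f}")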



Additional Scale-out News

No Changing Rack to Tower or Vice Versa - 1S 4U S814 (8286-41A)

Choose rack / tower at order time:            Tower     Rack
Bezel for 12 SAS bays                         n/a       #EJT8
Bezel for 18 SAS bays                         n/a       #EJT9
Rail feature code                             n/a       #EJTN
Front door & covers for 12 SAS bays           #EJTG     n/a
Front door & covers for 18 SAS bays           #EJTH     n/a

An S814 can NOT be changed from rack-mount to tower or vice versa.
• Originally communicated as a supported change, but further analysis
  found it not supportable.
• The restriction is due to a combination of factors: labels with power
  ratings and certifications are impacted, since tower and rack use
  different power supplies.


FYI - Disk Shipping a Little Differently

• Historically IBM shipped most drives for AIX/Linux/VIOS with 512-byte
  sectors (JBOD)
• Now on POWER8 servers most disks ship with 528-byte sectors
  - Provides an additional level of protection to the client
  - Newer-generation SAS adapters/controllers get essentially the same
    performance
  - Saves clients the time of reformatting drives to 528-byte sectors


SR-IOV and POWER8 SOD

Oct 2014 SOD:

IBM plans to add NIC SR-IOV capability to POWER8 servers using selected
SR-IOV capable PCIe adapters. The adapters planned to be added are
4-port Ethernet adapters with copper twinax ports (#EN0K, #EN0L,
#EL3C), SR optical ports (#EN0H, #EN0J, #EL38) and LR optical ports
(#EN0N, #EN0M).
IBM’s statements regarding its plans, directions, and intent are subject to change or withdrawal without notice at IBM’s
sole discretion. Information regarding potential future products is intended to outline our general product direction and it
should not be relied on in making a purchasing decision. The information mentioned regarding potential future products
is not a commitment, promise, or legal obligation to deliver any material, code or functionality. Information about
potential future products may not be incorporated into any contract. The development, release, and timing of any future
features or functionality described for our products remains at our sole discretion.

Note: NIC SR-IOV is already available on the POWER7+ 770/780 (with the
proper software/firmware levels) with the copper twinax and SR optical
adapters.



PCIe2 4-Port (10Gb+1GbE) SR+RJ45 Adapter
#EN0S (FH) / #EN0T (LP), CCIN 2CC3
Supported on POWER8 systems

Operating system support:
• AIX 6.1 & AIX 7.1 or later (reminder: NIM supported)
• IBM i via VIOS
• Linux: RHEL 6 or later, SLES 11 or later
• PowerKVM host
• VIOS

Quad ports: two 10GbE SR optical plus two 1GbE RJ45
• Ethernet NIC traffic
• 10Gb SR ports include an optical transceiver for up to 100m cable
  distance (SR transceivers not shown in the picture, but included in
  the shipment)
• RJ45 ports are 1Gb or 100Mb and use CAT-5 or CAT-6A UTP cabling
• Each port's configuration is independent of the others, but all four
  ports are owned by one partition or one VIOS
• NIM/Linux install support announced July 2014 (GA 29 Aug)

NIM support was also added to:
• PCIe2 4-Port (10Gb+1GbE) Copper SFP+RJ45 Adapter
• PCIe2 2-port 10/1GbE BaseT RJ45 Adapter
Power E870 / Power E880


4Q 2014 Portfolio: POWER8/POWER7/POWER7+

Power 795, Power 780+, Power 770+, Power 750+/760+, Power 720+/740+,
Power 710+/730+, PowerLinux 7R4+, PowerLinux 7R1+/7R2+, IBM PureFlex
System (p460+, p260+, p24L), plus the POWER8 scale-out servers and the
new Power E870/E880


POWER8 Scale-up Systems (E870 / E880)

• No primary node: the midplane holds the service processors, clocks,
  and oscillators
• Large memory
• 19" rack
• Modular design: up to 4 CEC drawers/nodes
• PCIe slots in the nodes


POWER8 Scale-up System

 Blend of Power 795 & Power 780/770


 Architecturally: Similar to Power 795
 Packaging: Similar to Power 780/770

 One to four nodes (E880 - 9119-MHE)


 One to two nodes (E870) - 9119-MME)

 19” Rack

 Great memory
 Up to 16TB (E880) or 4TB (E870)
 Faster memory - 1600 MHz
 Up to 85.3 GB / Core

 PCIe Gen3 Slots

© 2014 International Business Machines Corporation 33


POWER8 System Node (CEC Drawer)

• 8 PCIe Gen3 x16 slots (for LP PCIe adapters or optical interface to
  the I/O drawer)
• 32 CDIMM slots, up to 4 TB
• 4 POWER8 SCMs
• Fans and power supplies
• 5U enclosure
• No integrated SAS bays or SAS controller in the node
• No integrated DVD bay or DVD controller in the node
• No integrated Ethernet port in the node
• No tape bay in the node


Power E870/E880 SOD

Power E870: IBM plans to enhance the Power E870 with greater I/O and
memory capacity and flexibility with:
• Support for concurrent maintenance on the Power E870 I/O Expansion
  Drawer by enabling hot add and repair capabilities
• The ability to scale up to 4 TB of memory per Power E870 processor node

Power E880: IBM plans to enhance the Power Systems enterprise system
portfolio with greater scalability, availability and flexibility. IBM
intends to deliver the following offerings:
• A more scalable Power E880 enterprise-class system with up to 192
  POWER8 processor cores and up to 16 TB of total memory
• Support for concurrent maintenance on the Power E880 I/O Expansion
  Drawer by enabling hot add and repair capabilities
IBM’s statements regarding its plans, directions, and intent are subject to change or withdrawal without notice at IBM’s sole discretion. Information
regarding potential future products is intended to outline our general product direction and it should not be relied on in making a purchasing decision.
The information mentioned regarding potential future products is not a commitment, promise, or legal obligation to deliver any material, code or
functionality. Information about potential future products may not be incorporated into any contract. The development, release, and timing of any future
features or functionality described for our products remains at our sole discretion.



POWER8 System Node (CEC Drawer) Scaling

                              1-node    2-node    3-node    4-node
                              system    system    system    system
E880
  32-core node, 4.35 GHz      32-core   64-core   96-core   128-core
                                                  (2015)    (2015)
  48-core node (SOD)          2015      2015      2015      2015
E870
  40-core node, 4.19 GHz      40-core   80-core   n/a       n/a
  32-core node, 4.02 GHz      32-core   64-core   n/a       n/a


System Control Unit (Midplane)

• One system control unit per system
• 2U drawer
• Must be immediately physically adjacent to the system nodes

The system control unit holds:
• Service processors (FSPs)
• HMC ports
• Master system clocks
• Operator panel
• VPD (vital product data)
• Optional DVD


System Control Unit (Midplane)

Front view: optional DVD, four fans, ops panel.

Rear view: redundant power, redundant hot-plug clock battery, redundant
service processors and clocks, with 2 HMC ports per service processor.

Also includes the VPD card.


DVD Bay in System Control Unit

AIX or Linux optionally uses a DVD. IBM i requires a DVD on the server
somewhere - in the system control unit or in an external I/O enclosure
like the 7226-1U3.

If using the optional system control unit DVD (#EU13), a PCIe USB
adapter is needed, located either in the CEC or in the PCIe3 I/O drawer
(#EC45 or #EC46 PCIe2 4-port USB adapter). A 1.6 meter USB cable
(#EBK4) connects the rear USB port to the adapter.

Note: do not use a SAS adapter to control #EU13. Use a SAS adapter like
#EJ0J/EJ0M to control a SATA DVD located in an external enclosure such
as a 7226-1U3.

• Front of system control unit: hot-pluggable, optional DVD
• The DVD is SATA, but a USB-to-SATA converter is included
• Rear of system control unit has a USB connector for cabling ease


Power E870/E880 with 1 Node

Node

Sys Cntrl Unit

7U in 19” rack



Power E870/E880 with 2 Nodes

Node

Sys Cntrl Unit

Node

12U in 19” rack



Power E880 with 3 Nodes (2015)

Node

Sys Cntrl Unit

Node

Node
17U in 19” rack



Power E880 with 4 Nodes (2015)

Node

Node

Sys Cntrl Unit

Node

Node
22U in 19” rack



Cabling Configuration

Service processor (FSP) cables and clock cables run between the system
control unit (FSP / Clock / Clock / FSP) and the system nodes.

Cables are ordered using features #ECCA, #ECCB, #ECCC, #ECCD.


POWER8 Enterprise System Node Cabling

Flex cables between the system control unit and the system nodes carry:
• SMP interconnect
• Redundant power
• Redundant global service processor
• Redundant clock

The cable colors shown are for viewing in this diagram; actual colors
are mostly all black.
Sample Rack Configurations

9119-MME (E870) in a 42U rack: 2U reserved at the bottom for cable
handling, 2U for PDUs above that, then the system node drawer(s) (5U
each) with the 2U system control unit (FSP / clock) between them, and
roughly 24U left available for I/O drawers / HMC. PDUs for the I/O
drawers mount at the top of the rack.

9119-MHE (E880) in a 42U rack: 2U reserved at the bottom for cable
handling, 2U for PDUs, two 5U system node drawers around the 2U system
control unit, two further 5U spaces reserved for the third and fourth
system nodes, and roughly 12U left available for I/O drawers. PDUs for
the I/O drawers mount at the top of the rack.


E870/880 Racking - "Only Enterprise 42U Rack"

• Only the IBM Enterprise rack (7014-T42 or #0553) has been
  tested/certified by IBM Development/Test with the E870/E880 as of
  October 2014. Therefore the 7014-T42 or #0553 is the only rack IBM
  Manufacturing uses with E870/E880 system nodes.

• If a different rack (IBM or non-IBM) is desired, work with the IBM
  Service organization to confirm the rack has the needed strength,
  rigidity, hole spacing, clearances, etc. IBM Service does not require
  certification by IBM Development/Test to be able to provide
  service/warranty in other racks.


E870/E880 Racking - "De-racking"

• IBM Manufacturing will always build/test the E870/E880 in a 42U rack:
  - A new serial number uses the 7014-T42
  - An MES same-serial-number upgrade uses #0553
  This provides a better experience for the client.

• Use the "de-racking" feature #ER21 (priced at zero) if the client
  doesn't want the 7014-T42 or #0553 the system was built/tested in.


E870/E880 Racking and Cables

For best possible service access:

• Do not use the PDU side pockets; mount PDUs horizontally.
  - Leaves more room for cable routing
  - Takes 1U of space
  - A left/right power plugging philosophy can still be used, but with
    less straightforward visibility

• Add the 8-inch rack extender (#ERG0) to the rear of the 42U rack;
  it provides more room for cables.
  - Generally recommended, but not required
  - STRONGLY urged if there are a lot of I/O cables

• Strongly urge leaving 2U of space open at the top and/or bottom for
  cable egress.


E870/E880 Lift Tool

• The E870 and E880 nodes are heavy (167 lbs)
• An appropriately rated lift tool is required - FC #EB2Z


Power E870 & E880 Performance


rPerf - Multiple SMT Levels

                              SMT1      SMT2      SMT4      SMT8
Power E870
  32-core 4.02 GHz            334.4     484.9     630.4     674.5
  64-core 4.02 GHz            668.8     969.8     1260.8    1349.0
  40-core 4.19 GHz            424.4     615.4     800.0     856.0
  80-core 4.19 GHz            848.8     1230.7    1599.9    1711.9
Power E880
  32-core 4.35 GHz            355.1     514.9     669.4     716.3
  64-core 4.35 GHz            710.2     1029.8    1338.8    1432.5
  96-core 4.35 GHz            (not yet published)
  128-core 4.35 GHz           (not yet published)
POWER7/7+
  P7+ 780 128-core 3.7 GHz                                  1380.2
  P7 795 128-core 4.25 GHz                                  1852.6
  P7 795 256-core 4.0 GHz                                   2978.2
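The published values let you derive the SMT scaling the earlier core
chart claims (~2x from single thread to max SMT). A quick check on the
32-core E870 row:

# SMT scaling derived from the published 32-core 4.02 GHz E870 rPerf row.
rperf = {"SMT1": 334.4, "SMT2": 484.9, "SMT4": 630.4, "SMT8": 674.5}

base = rperf["SMT1"]
for mode, value in rperf.items():
    print(f"{mode}: {value:6.1f} rPerf = {value / base:.2f}x SMT1, "
          f"{value / 32:5.1f} per core")
# SMT8 comes out at ~2.0x SMT1, matching the "~2x max SMT" core claim.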



CPW

E870
• 32-core 4.02 GHz: 359,000
• 64-core 4.02 GHz: 711,000
• 40-core 4.19 GHz: 460,000
• 80-core 4.19 GHz: 911,000

E880
• 32-core 4.35 GHz: 381,000
• 64-core 4.35 GHz: 755,000
• 96-core 4.35 GHz: *
• 128-core 4.35 GHz: *

Measured using SMT8; SMT4 would be somewhat lower.

* Planned to be published closer to their GA date. Early estimates can
be provided under nondisclosure agreements if needed.
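Per-core CPW falls straight out of the published totals, which is a
useful sanity check when comparing configurations:

# Per-core CPW from the published totals above (all measured at SMT8).
cpw = [
    ("E870 32-core 4.02 GHz", 32, 359_000),
    ("E870 64-core 4.02 GHz", 64, 711_000),
    ("E870 40-core 4.19 GHz", 40, 460_000),
    ("E870 80-core 4.19 GHz", 80, 911_000),
    ("E880 32-core 4.35 GHz", 32, 381_000),
    ("E880 64-core 4.35 GHz", 64, 755_000),
]
for name, cores, total in cpw:
    print(f"{name}: {total / cores:8,.0f} CPW per core")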



Power E870 & E880 Memory


Enterprise CDIMMs

• Custom DIMMs (CDIMMs)
• 1600 MHz DDR3 (POWER7/7+ was 1066 MHz DDR3)
• Up to 4 TB per CEC drawer / node
• Incorporates L4 cache
• Enterprise memory RAS


POWER8 Memory Buffer Chip

The buffer chip sits between the DRAM memory chips (DDR interfaces) and
the POWER8 processor (high-speed memory cache link):
• 16 MB "L4 cache"
• Scheduler & management: the memory intelligence, previously on the
  POWER7+ processor chip, moved onto the buffer
• High-speed processor interface
• Performance value


Memory Features and Pricing

Feature   GB per feature     List price,      List price, 100%   $/GB with
code                         no activations   activated w/       100%
                                              #EMA5/6            activation
#EM8J     64 (4 x 16 GB)     $3,212           $10,700            $168
#EM8K     128 (4 x 32 GB)    $6,424           $21,400            $168
#EM8L     256 (4 x 64 GB)    $12,848          $42,800            $168
#EM8M     512 (4 x 128 GB)   $32,120          $92,024            $180
(Very similar pricing across feature sizes.)

Feature   Activation                          List price
#EMA5     1 GB activation                     $117
#EMA6     100 GB activation                   $11,700
#EMA7     100 GB mobile activation            $12,600  (only 8% more for mobile)
#EMA9     100 GB mobile-enabled activation    $12,600

• The majority of the memory price is in the activation, not the hardware
• This more easily enables additional memory to be present and ready to
  activate
• Note: memory can not be hot plugged

Prices are USA list prices and subject to change. Reseller prices may vary.
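The $/GB column is simply the hardware feature price plus full static
activation at $117/GB (#EMA5). A quick reconstruction (the slide rounds
$167.19 up to $168):

# Rebuild the "$/GB with 100% activation" column from its parts.
ACTIVATION_PER_GB = 117            # #EMA5 static activation, USA list

features = {                       # feature code: (GB, hardware list price)
    "#EM8J": (64, 3_212),
    "#EM8K": (128, 6_424),
    "#EM8L": (256, 12_848),
    "#EM8M": (512, 32_120),
}
for code, (gb, hw_price) in features.items():
    total = hw_price + gb * ACTIVATION_PER_GB
    print(f"{code}: ${total:,} fully activated = ${total / gb:.2f}/GB")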



Slide added after 18 Sept

POWER8 Memory Compared to POWER7+

• 50% faster (MHz)
• ~20-33% lower $/GB
• Up to 2X larger DIMMs

POWER7+ (DDR3 1066 MHz)              POWER8 (DDR3 1600 MHz)
Feature  GB per   $/GB, 100%         Feature  GB per   $/GB, 100%
code     feature  activated          code     feature  activated
#EM40    32 GB    $203               --       --       --          (no compare)
#EM41    64 GB    $203               #EM8J    64 GB    $168        (~20% lower price)
#EM42    128 GB   $203               #EM8K    128 GB   $168
#EM44    256 GB   $252               #EM8L    256 GB   $168        (~33% lower price)
--       --       --                 #EM8M    512 GB   $180        (no compare)

Prices are USA list prices and subject to change. Reseller prices may vary.
Prices calculated using static activations, not mobile activations.
POWER8 Memory Connect to SCM

Eight CDIMM cards attach to each POWER8 SCM.
• Each CDIMM adds memory bandwidth
• Each CDIMM adds L4 cache


Memory Bandwidth per E870/E880 Socket (SCM)

[Chart: memory bandwidth in GB/sec (0-250 scale) per socket for POWER5,
POWER6, POWER7, and POWER8; POWER8 is 3x more than POWER7.]


Active Memory Expansion

[Diagram: each partition's true memory is augmented with compressed
"expand memory", yielding effectively more memory.]

Like POWER7+, provides a POWER8 advantage:
• Expand memory beyond physical limits
• More effective server consolidation
  - Run more application workload / users per partition
  - Run more partitions and more workload per server
• 60-day trial, like the Power 7xx
• AIX only
• E870/E880 feature = #EM82
  - Same price as 770/780 feature #4791, but leveraging more memory
  - Significantly lower price than 795 feature #4790
• Note: the expansion percentage is very workload dependent (see the
  sketch below)
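A minimal sizing sketch of the idea; the expansion factors below are
illustrative only, since the achievable factor depends entirely on how
compressible the workload's data is:

# Active Memory Expansion: effective memory = true memory x expansion
# factor. The factors below are illustrative, not measured values.
def effective_memory_gb(true_gb: float, expansion_factor: float) -> float:
    return true_gb * expansion_factor

for factor in (1.0, 1.25, 1.6, 2.0):
    print(f"{factor:.2f}x: 512 GB true -> "
          f"{effective_memory_gb(512, factor):,.0f} GB effective")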



System Node PCIe


E870/E880 Node I/O Bandwidth
(System node, or processor enclosure, or CEC drawer)

[Chart: I/O bandwidth in GB/sec (0-300 scale) per node for the POWER6
570, POWER7 770, POWER7+ 770, and POWER8 E870; the POWER8 E870 node
leads by a wide margin.]


Slide added after 18 Sept

POWER8 DCM vs SCM Socket I/O Bandwidth

• Scale-out (1 or 2 sockets): DCM sockets bring out more PCIe lanes
  (a mix of x16 and x8 buses per socket)
• Scale-up (4 to 16 sockets): SCM sockets bring out two x16 buses each

"Per socket" maximum bandwidth is higher on scale-out, but "per system"
maximum is MUCH higher on scale-up:
• S812L/S814 max bandwidth: 96 GB/s
• S822/S824 max bandwidth: 192 GB/s
• E870/E880 max bandwidth, one node: 256 GB/s
• E870/E880 max bandwidth, two nodes: 512 GB/s
• E880 max bandwidth, four nodes: 1024 GB/s


System Node PCIe Slots

• 8 PCIe Gen3 x16 low profile slots per node
• Slots use a new low profile blind swap cassette (BSC). The server
  comes fully populated with BSCs; no special feature code is
  associated with the BSC.
• The eight low profile (LP) adapter slots are used for PCIe adapters
  (Gen1, Gen2 or Gen3 LP adapters), or to connect to the PCIe Gen3 I/O
  Expansion Drawer


System Node PCIe Adapters

Slots use a new low profile blind swap cassette (BSC); the server comes
fully populated with BSCs.

Description                              Other info      LP feat #  Equiv FH #
Ethernet 2-port 10Gb NIC & RoCE                          EC29       EC30
Ethernet 2-port 40Gb NIC & RoCE                          EC3A       EC3B
Ethernet 4-port 1Gb RJ45 NIC                             5260       5899
Ethernet 2-port 10/1Gb RJ45 NIC          10GBase-T       EN0X       EN0W
Ethernet 4-port 10+1Gb SR+RJ45 NIC       FCoE, SR-IOV    EN0J       EN0H
Ethernet 4-port 10+1Gb SFP+Cu+RJ45 NIC   FCoE, SR-IOV    EN0L       EN0K
Fibre Channel HBA 2-port 16Gb                            EN0B       EN0A
Fibre Channel HBA 4-port 8Gb                             EN0Y*      5729*
Fibre Channel HBA 2-port 8Gb                             5273       5735
SAS 4-port 6Gb                                           EJ0M       EJ0J
SAS 4-port 6Gb (tape/DVD)                                EJ11       EJ10
USB-3 4-port                                             EC45       EC46
Card to connect to PCIe I/O drawer                       EJ07       n/a

See also the larger list of PCIe adapters supported in the I/O drawer.


SOD: More PCIe Adapters for System Node

Not planned for 2014.

IBM plans to support additional PCIe adapters on the Power E870/E880
to provide additional I/O configuration flexibility to clients. The
following low profile adapters are planned to be placed in the
E870/E880 system node drawers:
• PCIe LP POWER GXT145 Graphics Accelerator (#5269)
• PCIe LP 10GbE SR 1-port Adapter (#5275)
• PCIe LP 4Gb 2-Port Fibre Channel Adapter (#5276)
• PCIe2 LP 4-Port 10GbE&1GbE SR&RJ45 Adapter (#5280)
• PCIe2 LP 2-Port 4X IB QDR Adapter 40Gb (#5283)
• PCIe2 LP 2-port 10GbE SR Adapter (#5284)
• PCIe2 LP 3D Graphics Adapter x1 (#EC41)
• PCIe2 LP 4-Port (10Gb+1GbE) SR+RJ45 Adapter (#EN0T)
• PCIe2 LP 4-port (10Gb+1GbE) Copper SFP+RJ45 Adapter (#EN0V)
• PCIe LP 2-Port Async EIA-232 Adapter (#EN28)
IBM’s statements regarding its plans, directions, and intent are subject to change or withdrawal without notice at IBM’s sole discretion. Information
regarding potential future products is intended to outline our general product direction and it should not be relied on in making a purchasing decision.
The information mentioned regarding potential future products is not a commitment, promise, or legal obligation to deliver any material, code or
functionality. Information about potential future products may not be incorporated into any contract. The development, release, and timing of any
future features or functionality described for our products remains at our sole discretion.



PCIe Gen3 I/O Expansion Drawer
PCIe Gen3 I/O Expansion Drawer (#EMX0)

• 12 PCIe Gen3 slots
• 4U drawer
• Full high PCIe slots
• Hot plug PCIe slots
• Modules are not hot plug

Rear view: two fan-out modules (#EMXF), each with 6 PCIe Gen3 slots
(4 x8 and 2 x16).

Uses the same blind swap cassette (BSC) as the #5802/5877/5803/5873
I/O drawers.


PCIe Gen3 I/O Drawer Connections

• Each fan-out module has two CXP ports for the PCIe optical interface
  to the system node
• One active optical cable (AOC) connects to each port
• Order one pair of cables (one feature code) per fan-out module:
  - #ECC6: 2 meter optical cable pair
  - #ECC8: 10 meter optical cable pair


System Node to PCIe Gen3 I/O Drawer

PCIe3 optical cable adapter (#EJ07):
• One #EJ07 per fan-out module
• Can be in any of the node's x16 PCIe slots

AOC cable pairs:
• #ECC6: 2 meter length
• #ECC8: 10 meter length
• One feature code ships two identical cables
• Connect the top CXP port of the #EJ07 to the top CXP port of the
  fan-out module; likewise connect bottom port to bottom port.
  Do NOT reverse!
• Do NOT mix lengths of AOC cables for the same fan-out module
• Do NOT cross the cables connecting one fan-out module to two
  different #EJ07 adapters

(Second I/O drawer not shown, for visual simplicity.)


Slide added after 18 Sept

Min/Max PCIe I/O Drawer per System

• One system node: 0 or 2 PCIe Gen3 I/O drawers in 2014
• Two system nodes: 0, 2 or 4 PCIe Gen3 I/O drawers in 2014
  (max 2 per node)


Power 870/880 PCIe Slot Math

  -2    Uses 2 PCIe slots in the system node for attachment
 +12    12 PCIe slots in the PCIe Gen3 I/O drawer
 ----
 +10    Net additional slots to the system with each PCIe3 I/O drawer

                                    Total PCIe slots    Total PCIe slots
                                    per 1-node server   per 2-node server
With zero PCIe drawers on server           8                   16
With two PCIe drawers on server           28                   36
With four PCIe drawers on server          n/a                  56

Note: 1 or 3 drawers per server not supported in 2014.
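The same math as a one-liner, reproducing the table above:

# Net PCIe slots: 8 internal per node, plus 12 per drawer minus the
# 2 node slots each drawer's pair of #EJ07 adapters consumes.
def total_pcie_slots(nodes: int, drawers: int) -> int:
    return nodes * 8 + drawers * (12 - 2)

for nodes, drawers in [(1, 0), (1, 2), (2, 0), (2, 2), (2, 4)]:
    print(f"{nodes}-node, {drawers} drawers: "
          f"{total_pcie_slots(nodes, drawers)} slots")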
PCIe Gen3 I/O Drawer E870/E880 SOD

IBM plans to support up to four PCIe Gen3 I/O drawers per Power
E870/E880 system node drawer, providing additional configuration
flexibility and system growth.

IBM's statements regarding its plans, directions, and intent are subject to change or withdrawal without notice at IBM's
sole discretion. Information regarding potential future products is intended to outline our general product direction and it
should not be relied on in making a purchasing decision. The information mentioned regarding potential future products is
not a commitment, promise, or legal obligation to deliver any material, code or functionality. Information about potential
future products may not be incorporated into any contract. The development, release, and timing of any future features or
functionality described for our products remains at our sole discretion.

Sample quantities of     PCIe slots per       PCIe slots per       PCIe slots per
PCIe drawers on server   1-node server        2-node server        4-node server (2015)
0                        8                    16                   32
2                        28                   36                   52
                         (24 drwr + 4 node)   (24 drwr + 12 node)  (24 drwr + 28 node)
4                        48 (SOD)             56                   72
                         (48 drwr + 0 node)   (48 drwr + 8 node)   (48 drwr + 24 node)
8                        n/a                  96 (SOD)             112
                                              (96 drwr + 0 node)   (96 drwr + 16 node)
12                       n/a                  n/a                  152 (SOD)
                                                                   (144 drwr + 8 node)
16                       n/a                  n/a                  192 (SOD)
                                                                   (192 drwr + 0 node)

Not all possible quantities of drawers are shown.


Slide added after 18 Sept

I/O Bandwidth (Comparing I/O Drawers)

POWER7 12X-attached PCIe I/O drawer (#5877 or #5802, 10 or 20 slots):
• One or two #5877/#5802 drawers share a single GX++ slot's 20 GB/s
  bandwidth
• A per-slot average of 2 GB/s (1 drawer) or 1 GB/s (2 drawers)

POWER8 PCIe-attached Gen3 I/O drawer (#EMX0, 12 PCIe slots):
• Two fan-out modules, each with 32 GB/s
• A per-slot average of 5+ GB/s
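The per-slot averages quoted above come straight from dividing the
shared bandwidth by the slot count:

# Average bandwidth per slot for each drawer attachment scheme.
GX_BUS_GBS = 20     # GX++ bus peak, shared by 1 or 2 12X drawers
FANOUT_GBS = 32     # one PCIe Gen3 fan-out module peak

print(f"One #5877 on a GX++ bus:     {GX_BUS_GBS / 10:.1f} GB/s per slot (10 slots)")
print(f"Two #5877 on a GX++ bus:     {GX_BUS_GBS / 20:.1f} GB/s per slot (20 slots)")
print(f"#EMX0 (2 modules, 12 slots): {2 * FANOUT_GBS / 12:.1f} GB/s per slot")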



Slide added after 18 Sept

Supported PCIe I/O Drawer Cabling Examples

[Diagrams of one or two system nodes cabled to one, two, or four PCIe
I/O drawers; each single line in the diagrams depicts two physical AOC
cables.]

Notes:
• With two system nodes it is a good practice (but not required) to
  attach the two fan-out modules in one I/O drawer to different system
  nodes. Combined with placing redundant PCIe adapters in different
  fan-out modules, system availability is enhanced.
• A PCIe I/O drawer can be in the same or a different rack than the
  system nodes. If large numbers of I/O cables are attached to the PCIe
  adapters, it's nice to have the I/O drawer in a different rack for
  cable management ease.
• System control unit not shown, for visual simplicity.


Slide added after 18 Sept

Unsupported PCIe I/O Drawer Cabling Examples

[Diagrams; each single line depicts two physical AOC cables.]

Invalid in 2014:
• Just one drawer on a system: not supported
• Cabling half a drawer, or only one fan-out module per drawer: not
  supported
• Three drawers per node: not supported (see SOD)
• Four drawers per node: max is two, not four, drawers per node
  (see SOD)
Power E870/E880 Full High PCIe Adapters Supported as of October 2014 (page 1)

Ethernet NIC               4-port 1GbE RJ45                        #5899
Ethernet NIC               2-port 10GbE 10GBase-T RJ45             #EN0W
Ethernet NIC & FCoE        4-port 10GbE+1GbE SR+RJ45               #EN0H
Ethernet NIC               4-port 10GbE+1GbE SR optical            #EN0S
Ethernet NIC               4-port 10GbE+1GbE copper twinax         #EN0U
Ethernet NIC & RoCE        2-port 10GbE SR optical                 #EC30
Ethernet NIC               2-port 10GbE SR optical                 #5287
Ethernet NIC               1-port 10GbE LR optical (IBM i native)  #5772
Ethernet NIC & OpenOnload  2-port 10GbE copper twinax              #EC2J
Ethernet NIC & RoCE        2-port 40GbE QSFP+                      #EC3B
Ethernet NIC               2-port 1GbE RJ45                        #5767
Ethernet NIC               2-port 1GbE SX optical                  #5768
Ethernet NIC               4-port 1GbE RJ45                        #5717
Ethernet NIC               1-port 10GbE SR optical                 #5769
Ethernet NIC & FCoE        4-port 10GbE+1GbE copper twinax+RJ45    #EN0K
Ethernet NIC & RoCE        2-port 10GbE copper twinax              #EC28

* Initially announced without NIM; NIM added late August.
Power E870/E880 Full High PCIe Adapters Supported as of October 2014 (page 2)

Fibre Channel       2-port 8Gb                                 #5735
Fibre Channel       4-port 8Gb                                 #5729
Fibre Channel       2-port 16Gb                                #EN0A
Fibre Channel       2-port 4Gb                                 #5774

Communications      1-port Bisync (IBM i)                      #EN13, #EN14
Communications      2-port Async/Bisync (IBM i)                #2893, #2894
Communications      4-port Async (AIX / Linux)                 #5785
Communications      2-port Async RS232 (replaces #5289/#5290)  #EN27

SAS RAID            4-port no-cache PCIe3 for SSD/HDD          #EJ0J
SAS Tape/DVD        4-port tape/DVD PCIe3                      #EJ10
SAS RAID            4-port huge-cache PCIe3 for SSD/HDD        #EJ0L
SAS RAID/Tape/DVD   2-port no-cache PCIe1 for HDD              #5901
SAS RAID            3-port large-cache PCIe2 HDD/SSD           #5913
SAS RAID            3-port large-cache PCIe2 HDD/SSD           #ESA3


Power E870/E880 Full High PCIe Adapters Supported as of October 2014 (page 3)

Infiniband (IB)     2-port QDR IB SR optical                   #5285
Graphics            2D graphics for general use                #5748
Graphics            3D graphics for RHEL7                      #EC42
Encryption          Crypto coprocessor 4765-001                #EJ28
USB                 4-port USB-3                               #EC46
SOD: More PCIe Adapters for PCIe Gen3 Drawer

Not planned for 2014.

IBM plans to support additional PCIe adapters on the Power E870/E880
to provide additional I/O configuration flexibility to clients. The
following full high adapters are planned to be placed in the PCIe Gen3
I/O drawer:
• PCIe2 4-Port (10GbE & 1GbE) SR&RJ45 Adapter (#5744)
• 10Gb FCoE PCIe Dual Port Adapter (#5708)
• PCIe 380MB Cache Dual Port 3Gb SAS RAID Adapter (#5805)
• PCIe2 4-port (10Gb FCoE & 1GbE) LR&RJ45 Adapter (#EN0M)
IBM’s statements regarding its plans, directions, and intent are subject to change or withdrawal without notice at IBM’s sole discretion. Information
regarding potential future products is intended to outline our general product direction and it should not be relied on in making a purchasing decision.
The information mentioned regarding potential future products is not a commitment, promise, or legal obligation to deliver any material, code or
functionality. Information about potential future products may not be incorporated into any contract. The development, release, and timing of any
future features or functionality described for our products remains at our sole discretion.



Full High PCIe Adapters NOT Supported: October

• #2054 or #2055 PCIe RAID & SSD SAS Adapter
• #2728 4-port USB PCIe Adapter (use #EC46 instead)
• #4808 PCIe Crypto Coprocessor Gen3 BSC 4765-001
  - Use #EJ28 instead (also check RPQ 8A228 for exceptions)
• #4809 PCIe Crypto Coprocessor Gen4 BSC 4765-001
  - Wrong BSC, won't fit in the PCIe Gen3 I/O drawer; see #4808
• #5288 PCIe2 2-Port 10GbE SFP+ Copper Adapter
• #5289 2-Port Async EIA-232 PCIe Adapter
• #5732 10 Gigabit Ethernet-CX4 PCI Express Adapter
• #5745 PCIe2 4-Port 10GbE&1GbE SFP+Copper&RJ45 Adapter
• #5773 4 Gigabit PCI Express Single Port Fibre Channel Adapter
• #5903 PCIe 380MB Cache Dual - x4 3Gb SAS RAID Adapter
  (use #5805 instead, once #5805 is supported)
• #EC2K PCIe2 2-port 10GbE SFN5162F Adapter (use #EC2J instead)
• #EJ0X PCIe3 SAS Tape Adapter Quad-port 6Gb (use #EJ10 instead)
• #ESA1 PCIe2 RAID SAS Adapter Dual-port 6Gb (use #EJ0J instead)
• #ES09 IBM Flash Adapter 90

The following are currently unsupported, but see the SOD:
• #5744 PCIe2 4-Port (10GbE & 1GbE) SR&RJ45 Adapter
• #5708 10Gb FCoE PCIe Dual Port Adapter
• #5805 PCIe 380MB Cache Dual Port 3Gb SAS RAID Adapter
• #EN0M PCIe2 4-port (10Gb FCoE & 1GbE) LR&RJ45 Adapter
Slide added after 18 Sept

PCIe Performance Plugging Considerations

In an E870/E880 system node, no planning is required: the full x16
bandwidth is available to every slot.

However, in an I/O drawer fan-out module, the bandwidth of one system
node PCIe slot is shared by six PCIe slots. It is possible to put
multiple high-bandwidth adapters in a fan-out module which together
require more bandwidth than is available; for example, a 2-port 40Gb
Ethernet adapter uses a lot of the available bandwidth, assuming it is
really busy. But clients often don't run their PCIe adapters that
heavily from a bandwidth perspective, and then it is not a concern.

#EMX0 PCIe Gen3 drawer bandwidth is far, far better than in the POWER7+
#5802/5877 12X-attached I/O drawer. One or two of those PCIe Gen1
drawers attach to a GX++ bus and share 10 or 20 PCIe Gen1 x8 slots
across that one GX++ bus. A GX++ bus has a theoretical max of 20 GB/s
(duplex burst). A fan-out module has a theoretical max of 32 GB/s
(duplex burst) to share across just 6 PCIe Gen3 x8 or x16 slots. One
PCIe Gen3 I/O drawer with two fan-out modules has a max of 64 GB/s
(220% larger than a #5802/5877 with a dedicated GX++ slot).

The PCIe spreadsheet has sizing factors relating individual adapters to
bandwidth. Use the same per-adapter sizing factors as for
POWER7/POWER7+/POWER8 servers, but instead of the max GX++ subtotal
value of "60 Gb/s" used for POWER7/POWER7+, use "96 Gb/s" for one #EMXF
fan-out module.
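A sketch of that spreadsheet-style budget check. The per-adapter sizing
factors below are illustrative placeholders, not the spreadsheet's real
values:

# Spreadsheet-style check: sum the adapter sizing factors planned for
# one fan-out module against its 96 Gb/s subtotal. Factors are made-up
# examples, not the PCIe spreadsheet's actual numbers.
MODULE_BUDGET_GBPS = 96

module_plan = [                       # (adapter, assumed sizing factor, Gb/s)
    ("2-port 40GbE RoCE #EC3B", 80),
    ("2-port 16Gb FC #EN0A", 32),
    ("4-port 1GbE #5899", 4),
]
load = sum(gbps for _, gbps in module_plan)
verdict = "OK" if load <= MODULE_BUDGET_GBPS else "over budget - spread adapters out"
print(f"Module load: {load} of {MODULE_BUDGET_GBPS} Gb/s -> {verdict}")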



Slide added after 18 Sept

PCIe Plugging Limitations

System node: no special rules for supported low profile adapters.

PCIe Gen3 I/O Expansion Drawer:
• 4-port SAS adapters need space between the cards to allow cabling to
  be inserted/removed. These adapters are NOT supported in slots C2 or C5:
  - PCIe3 RAID SAS Adapter Quad-port 6Gb x8 (#EJ0J)
  - PCIe3 SAS Tape/DVD Adapter Quad-port 6Gb x8 (#EJ10)
  - PCIe3 12GB Cache RAID SAS Adapter Quad-port 6Gb x8 (#EJ0L)
• The following adapters are only supported in slot C6 of either
  fan-out module (max 2 adapters per drawer):
  - 4-Port Async EIA-232 PCIe Adapter (#5785)
  - PCIe 2-Line WAN w/Modem (#2893/2894)
  - PCIe 1-port Bisync Adapter (#EN13/EN14)
  - POWER GXT145 PCI Express Graphics Accelerator (#5748)
• The following adapter is supported only in slots C2 or C5 (max 4
  adapters per drawer): PCIe2 3D Graphics Adapter x1 (#EC42)
• The following adapters are not supported in slots C3 or C6 (max 8
  adapters per drawer):
  - 40Gb PCIe Gen3 RoCE/NIC (#EC3B)
  - PCIe2 2-Port 10GbE RoCE SFP+ Adapter (#EC28)
  - PCIe2 2-Port 10GbE RoCE SR Adapter (#EC30)
  - PCIe2 2-Port 4X IB QDR Adapter 40Gb (#5285)


Scale-out Servers & PCIe Gen3 I/O Drawer SOD

Oct 2014 SOD:

IBM plans to extend support of the PCIe Gen3 I/O drawer to the POWER8
scale-out servers which have 6 or more cores. The drawer is planned to
attach via two x16 PCIe slots and will require an update to the
currently available firmware.

IBM’s statements regarding its plans, directions, and intent are subject to change or withdrawal
without notice at IBM’s sole discretion. Information regarding potential future products is intended to
outline our general product direction and it should not be relied on in making a purchasing decision.
The information mentioned regarding potential future products is not a commitment, promise, or legal
obligation to deliver any material, code or functionality. Information about potential future products
may not be incorporated into any contract. The development, release, and timing of any future
features or functionality described for our products remains at our sole discretion.



OS Support


Power E870/E880 Operating System Support

AIX with any valid I/O configuration:
• AIX 7.1 TL3 SP4 and APAR IV63332, or later
• AIX 7.1 TL2 SP6 or later (planned availability January 2015)
• AIX 6.1 TL9 SP4 and APAR IV63331, or later
• AIX 6.1 TL8 SP6 or later (planned availability January 2015)

AIX with virtual I/O only:
• AIX 7.1 TL2 SP1 or later
• AIX 7.1 TL3 SP1 or later
• AIX 6.1 TL8 SP1 or later
• AIX 6.1 TL9 SP1 or later

IBM i:
• IBM i 7.2 TR1 or later
• IBM i 7.1 TR9 or later

Linux:
• Red Hat Enterprise Linux 6.5 or later
• SUSE Linux Enterprise Server 11 SP3 and later SPs

VIOS:
• VIOS 2.2.3.4 with ifix IV63331, or later

Firmware 8.2.0


Migrating / Upgrading to E870 / E880


Model Upgrades (Same Serial Number)

[Chart of POWER6 -> POWER7 -> POWER8 same-serial-number upgrade paths:
595 -> 795; 570 -> 770/780 -> E870/E880; 560 -> 750 4-socket;
550 -> 740; 520 4-core -> 720 6/8-core; 520 2-core -> 720 4-core;
520 1-core -> 710/730; the scale-out 1S/2S servers are shown without
upgrade paths.]


770/780 Model Upgrades (Same Serial Number)

• 570 (POWER6) -> 770 "B" model (POWER7) -> 770 "C" model (POWER7) ->
  770+ "D" model (POWER7+) -> E870
• 570 (POWER6) -> 780 "B" model (POWER7) -> 780 "C" model (POWER7) ->
  780+ "D" model (POWER7+) -> E880


POWER6 570 to POWER8 E8x0 Upgrades

Path: 570 -> (step 1) 780+ -> (step 2) E880, or 570 -> 770+ -> E870.

IMPORTANT -- no "one-step" upgrades where two model upgrades are
implemented over one long weekend. You must do two upgrades, ideally
separated by several months: POWER6 -> POWER7+, and then
POWER7+ -> POWER8. This is a legal / business practices requirement; it
must be treated as two steps from an accounting/financial perspective.
• Note: the IBM ordering/manufacturing systems key off the
  POWER6 -> POWER7+ installation before allowing the
  POWER7+ -> POWER8 upgrade to be ordered.

Words from IBM Business Practices: "The client must buy the POWER7+
upgrade for purpose of using the POWER7+ 770/780, not solely as a step
to the E870/E880. And, they must get business value out of the POWER7+
770+/780 for at least 90 days. Finally, the upgrade to the POWER8
server needs to be sold separately / independently of the upgrade to
POWER7+ 770/780."

Additional notes:
• Assuming historical practices are maintained, upgrades from a POWER6
  570 to a POWER7+ 770/780 will be withdrawn when IBM withdraws sales
  of new serial-number 770/780s.
• Conversions from a POWER6 9406-MMA to a POWER6 9117-MMA are NOT
  considered a step requiring a significant time pause under the above
  "no one-step" rule. However, manufacturing systems won't accept a
  9117-MMA -> 770/780 order until the 9406 -> 9117 MES shows as
  installed.


POWER7 770/780 to POWER8 E8x0 Upgrades

Path: 780 -> (step 1) 780+ -> (step 2) E880, or 770 -> 770+ -> E870.

IMPORTANT -- no "one-step" upgrades where two model upgrades are
implemented over one long weekend. You must do two upgrades, ideally
separated by several months: POWER7 -> POWER7+, and then
POWER7+ -> POWER8. This is a legal / business practices requirement; it
must be treated as two steps from an accounting/financial perspective.
• Note: the IBM ordering/manufacturing systems key off the
  POWER7 -> POWER7+ installation before allowing the
  POWER7+ -> POWER8 upgrade to be ordered.

Words from IBM Business Practices: "The client must buy the POWER7+
upgrade for purpose of using the POWER7+ 770/780, not solely as a step
to the E870/E880. And, they must get business value out of the POWER7+
770+/780 for at least 90 days. Finally, the upgrade to the POWER8
server needs to be sold separately / independently of the upgrade to
POWER7+ 770/780."


MME / MHE Existing/Older I/O Drawer Considerations

• 12X-attached PCIe I/O drawers NOT supported
  - No GX++ attach on POWER8: move the drawer contents, but not the drawer
  - 19-inch #5802/5877; 24-inch #5803/5873
• 12X-attached PCI-X I/O drawers NOT supported
  - 19-inch #5796; 24-inch #5797/5798
• #5887 EXP24S SAS I/O drawer supported (2.5-inch drives)
• Older storage drawers not supported
  - #5886 EXP12S SAS I/O drawer NOT supported (3.5-inch drives)
  - #5786 EXP24 SCSI I/O drawer NOT supported (3.5-inch drives)


POWER6 / POWER7 / POWER8 Partition Mobility

OS support for Live Partition Mobility across generations:
• POWER6/6+: AIX 7.1, AIX 6.1, AIX 5.3, Linux
• POWER7: AIX 7.1, AIX 6.1, AIX 5.3, IBM i 7.1, IBM i 7.2, Linux
• POWER8: AIX 7.1, AIX 6.1, IBM i 7.1 *, IBM i 7.2 *, Linux

• Leverage the POWER6 / POWER7 compatibility modes
• LPARs migrate between POWER6 / POWER7 / POWER8 servers
• POWER8-mode partitions can not move to POWER6 or POWER7 systems

* Shipped for POWER8 September 2014
Power Enterprise Pools


"Come on in, the water is great!"

Power Enterprise Pools

• Flexibility & ease of operations & price performance
• Enhanced availability and cloud characteristics
• For the POWER7+ 770, POWER7+ 780, Power 795, Power E870, and Power E880


Power Enterprise Pools

Power Enterprise Pools enable you to move processor and memory
activations within a defined pool of systems, at your convenience.

• New mobile activations for both processor and memory
• Mobile activations can be used for systems within the same pool
  - One pool type for Power E880 & POWER7+ 780 & Power 795 systems
  - One pool type for Power E870 & POWER7+ 770 systems
• Activations can be moved at any time by the user without contacting IBM
• Movement of activations is instant, dynamic and non-disruptive


Power Enterprise Pool Example - Monday 8:00 am (example starting point)

              Sys A          Sys B         Sys C         Sys D          Pool totals
              64-core E880   96-core 795   96-core 780   128-core 795
              4.35 GHz       3.7 GHz       3.7 GHz       4.0 GHz
Static        10             30            16            40             96 static
Mobile        40             40            20            60             160 mobile
"Dark"        14             26            60            28             128 "dark"


Power Enterprise Pool Example - Monday 8:01 am

Going back to the initial starting point and moving the activations
differently (e.g., move 15 activations from Sys A to Sys B):

              Sys A          Sys B         Sys C         Sys D          Pool totals
              64-core E880   96-core 795   96-core 780   128-core 795
              4.35 GHz       3.7 GHz       3.7 GHz       4.0 GHz
Static        10             30            16            40             96 static
Mobile        0              55            45            60             160 mobile
"Dark"        54             11            35            28             128 "dark"
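The bookkeeping is simple to model: mobile activations move between
servers, while static assignments and the pool's mobile total are
invariant. A toy sketch reproducing the 8:01 am state from the 8:00 am
starting point:

# Toy Power Enterprise Pool: mobile activations move; statics and the
# pool's mobile total never change.
pool = {                     # server: [static, mobile] core activations
    "Sys A": [10, 40],
    "Sys B": [30, 40],
    "Sys C": [16, 20],
    "Sys D": [40, 60],
}

def move(src: str, dst: str, n: int) -> None:
    assert pool[src][1] >= n, "not enough mobile activations on source"
    pool[src][1] -= n
    pool[dst][1] += n

move("Sys A", "Sys B", 15)   # the "move 15 activations" step
move("Sys A", "Sys C", 25)

print(pool)                                                   # matches 8:01 am
print("pool mobile total:", sum(m for _, m in pool.values())) # still 160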



Power Enterprise Pool and Elastic CoD

PLUS you can also use Elastic (On/Off) CoD to "light up" dark cores
temporarily to cover peaks:

              Sys A          Sys B         Sys C         Sys D          Pool totals
              64-core E880   96-core 795   96-core 780   128-core 795
              4.35 GHz       3.7 GHz       3.7 GHz       4.0 GHz
Static        10             30            16            40             96 static
Mobile        40             32            35            53             160 mobile
"Dark"        14             34            45            35             128 "dark"
                                                                        (not so dark)


Power Enterprise Pool - Memory & Software

• Mobile memory activations work the same way as mobile processor core
  activations: each server has some GB of static, some GB of mobile,
  and some GB of "dark" memory
• PLUS many Power Systems software entitlements are also "mobile"


Power Enterprise Pool - HMC

The pool's master HMC holds the key. It knows:
• which servers are in the pool
• how many mobile activations are in the pool and where they are assigned
• each server's static resources

Requirements and limits:
• The HMC needs V7.8 or later and at least 2 GB of memory
• Older HMCs not supported: 7042-CR4, 7310-CR3/CR4/C05/C06,
  7042-C06/C07, 7315-CR3
• Max 1000 partitions managed
• One HMC can manage more than one Power Enterprise Pool
• If the pool's master HMC is down:
  - Running servers are not impacted unless a resource needs to be moved
  - The master HMC is needed to assign mobile resources when a server
    is powering on or restarting
• A second, redundant HMC for the pool is optional


"Mobile-Enabled" Activations (static  mobile)
for the Power E870/E880, POWER7+ 770/780, and Power 795

 New static processor core activations that can reduce total cost
 Higher initial cost, but they convert to mobile activations at no charge
  when you are ready
 For processor activations only. Mobile-enabled memory is not offered
  (it would bring no cost savings to the client)

Original static activation + MES conversion  Mobile activation
$$$$                       + $$$$            = $$$$$$$$

Mobile-enabled activation  + MES conversion  Mobile activation
$$$$$$                     + 0               = $$$$$$
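The saving is easiest to see with numbers. A quick illustration in Python
(the prices are hypothetical; only the two-path structure comes from this
chart):

    # Hypothetical list prices, for illustration only -- not IBM pricing
    STATIC = 4000          # ordinary static activation
    CONVERT = 4000         # MES conversion of a static activation to mobile
    MOBILE_ENABLED = 6000  # mobile-enabled activation; later conversion is free

    path_original = STATIC + CONVERT     # 8000: buy static, pay again to convert
    path_enabled = MOBILE_ENABLED + 0    # 6000: pay more up front, convert free
    assert path_enabled < path_original  # cheaper overall if you know you will
                                         # eventually want mobility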

Mobile-enabled activation    For Power system
#EPMC                        Power 770   9117-MMD  4.22 GHz  (#EPM0)
#EPMD                        Power 770   9117-MMD  3.8 GHz   (#EPM1)
#EPHL                        Power 780   9179-MHD  4.42 GHz  (#EPH0)
#EPHM                        Power 780   9179-MHD  3.72 GHz  (#EPH2)
#4715                        Power 795   9119-FHB  4.0 GHz   (#4700)
#4725                        Power 795   9119-FHB  3.7 GHz   (#4702)
#EPBN                        Power E870  9119-MME  4.02 GHz  (#EPBA)
#EPBQ                        Power E870  9119-MME  4.19 GHz  (#EPBC)
#EPBP                        Power E880  9119-MHE  4.35 GHz  (#EPBB)

Mobile activation            For Power system
#EP22                        Power 770   9117-MMD  any GHz
#EP23                        Power 780   9179-MHD  any GHz
(feature code not shown)     Power 795   9119-FHB  any GHz
#EP3S                        Power E870  any GHz
#EP2T                        Power E880  any GHz

© 2014 International Business Machines Corporation 103


E870 / E880
Other Info

© 2014 International Business Machines Corporation 104


Announce / Availability Plans

                                            Annc     eConfig   GA
Power E870                                  6 Oct    7 Oct     18 Nov
Power E880 (1-2 node)                       6 Oct    7 Oct     18 Nov
Power E880 (3rd/4th node)                   6 Oct    March     Jun 2015
Same serial number upgrades to E870/E880    6 Oct    7 Oct     12 Dec
Power S824L                                 6 Oct    7 Oct
NEBS capability for S822/S822L              6 Oct              31 Oct
CAPI card for S812L/S822L                   6 Oct
RPQ for 110V S814 in rack                   now      n/a       Oct
RPQ for bigger S824 memory                  6 Oct    n/a       6 Dec

SPT availability for the E870/E880 is planned for very late October or
early November.

© 2014 International Business Machines Corporation 105


Other Useful Data

PVU (E870 and E880): 120
Software tier: AIX = Medium; IBM i = P30
IBM i QPRCFEAT value: same as the processor SCM feature code
Warranty: 1 year, same as the Power 795
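As a worked illustration of what the PVU rating implies for software
licensing (the core count below is an example, not from this deck; consult
IBM licensing terms for the authoritative rules):

    PVU_PER_CORE = 120          # E870/E880 rating from this chart

    def pvus_required(activated_cores):
        # PVU-licensed software is entitled per activated core x PVU rating
        return activated_cores * PVU_PER_CORE

    print(pvus_required(40))    # e.g. 40 activated cores -> 4800 PVUs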

© 2014 International Business Machines Corporation 106


Physical Planning Basics

System node
 Width: 445 mm (17.5 in.)
 Depth: 902 mm (35.5 in.)
 Height: 219 mm (8.6 in.), 5 EIA units
 Weight: 75.7 kg (167 lb)

System Control Unit
 Width: 434 mm (17.1 in.)
 Depth: 813 mm (32.0 in.)
 Height: 86 mm (3.4 in.), 2 EIA units
 Weight: 23.6 kg (52 lb)

PCIe Gen3 I/O Expansion Drawer
 Width: 482 mm (19 in.)
 Depth: 802 mm (31.6 in.)
 Height: 173 mm (6.8 in.), 4 EIA units
 Weight: 54.4 kg (120 lb)
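Putting those numbers together, a quick rack-planning sketch (the
configuration chosen is only an example):

    # Rack-space and weight arithmetic from the dimensions above
    NODE = (5, 75.7)    # (EIA units, kg) per system node
    SCU = (2, 23.6)     # system control unit
    IODRW = (4, 54.4)   # PCIe Gen3 I/O expansion drawer

    def footprint(nodes, io_drawers):
        parts = [NODE] * nodes + [SCU] + [IODRW] * io_drawers
        eia = sum(u for u, _ in parts)
        kg = sum(w for _, w in parts)
        return eia, kg

    # e.g. a 2-node system with two I/O drawers:
    eia, kg = footprint(2, 2)
    print(eia, "EIA units,", round(kg, 1), "kg")  # 20 EIA units, 283.8 kg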

© 2014 International Business Machines Corporation 107


Physical Planning – Acoustic

See the IBM Site and Hardware Planning document at
http://www.ibm.com/support/knowledgecenter/POWER8/p8hdx/POWER8welcome.htm

These servers pack a lot of compute capability into a very small footprint.
You can definitely hear the fans, especially when they speed up to handle
additional load.

The client may want to consider acoustic doors on the rack.

© 2014 International Business Machines Corporation 108


Power E870 vs Power 770 (2014)

                            9117-MMD                 9119-MME
                            Power 770                Power E870
CPU sockets per node        4                        4
Max processor nodes         4                        2
Max number of sockets       16                       8
Max cores                   64                       80
Max frequency               3.8 GHz                  4.19 GHz
Max memory                  1 TB per node            2 TB per node
Memory per core             31.3 GB                  50 GB
Memory bandwidth (peak)     272 GB/s per node        922 GB/s per node
I/O bandwidth (peak)        80 GB/s per node (GX)    256 GB/s per node (PCIe Gen3)
Max PCIe I/O drawers        16 (4 per node)          4 (2 per node)
Max PCIe I/O slots          160 in I/O drawers,      48 in I/O drawers,
                            24 internal              8 internal
Best rPerf                  729.3 (64-core)          1711.9 (80-core)
                                                     1349 (64-core)
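A quick per-core comparison of those rPerf figures (simple arithmetic on the
numbers above):

    # rPerf per core, from the table above
    ratings = {
        "Power 770, 64-core": (729.3, 64),
        "Power E870, 64-core": (1349.0, 64),
        "Power E870, 80-core": (1711.9, 80),
    }
    for name, (rperf, cores) in ratings.items():
        print(f"{name}: {rperf / cores:.1f} rPerf per core")
    # Power 770, 64-core:  ~11.4 per core
    # Power E870, 64-core: ~21.1 per core (roughly 1.85x the 770 per core)
    # Power E870, 80-core: ~21.4 per core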

© 2014 International Business Machines Corporation 109


E870/E880
POWER8
Scaling

© 2014 International Business Machines Corporation 110


POWER8 Max Enterprise Interconnect

[Diagram: 48-way drawers interconnected into a 192-way SMP system; SMP link
bandwidths of 76.8 GB/s and 25.6 GB/s]
© 2014 International Business Machines Corporation 111
POWER 795/780+ 3-Hop 128-way Topology

[Diagram: 32-way drawers interconnected into a 128-way SMP system]

© 2014 International Business Machines Corporation 112


POWER 795/780+ 3-Hop 128-way Topology

© 2014 International Business Machines Corporation 113


POWER8 Enterprise 2-hop 192-way Topology

© 2014 International Business Machines Corporation 114


POWER8 Enterprise 2-hop Multi-path 192-way Topology

© 2014 International Business Machines Corporation 115


Thanks!

© 2014 International Business Machines Corporation 116


Power E870 vs Power 770 (2015)

                            9117-MMD                 9119-MME
                            Power 770                Power E870
CPU sockets per node        4                        4
Max processor nodes         4                        2
Max number of sockets       16                       8
Max cores                   64                       80
Max frequency               3.8 GHz                  4.19 GHz
Max memory                  1 TB per node            4 TB per node (SOD)
Memory per core             31.3 GB                  100 GB
Memory bandwidth (peak)     272 GB/s per node        922 GB/s per node
I/O bandwidth (peak)        80 GB/s per node (GX)    256 GB/s per node (PCIe Gen3)
Max PCIe I/O drawers        16 (4 per node)          8 (4 per node)
Max PCIe I/O slots          160 in I/O drawers,      96 in I/O drawers,
                            24 internal              0 internal

© 2014 International Business Machines Corporation 117


Power E880 vs Power 780 (2014)

                                9117-MMD             9179-MHD             9119-MHE
                                Power 770            Power 780            Power E880
CPU sockets per node/drawer     4                    4                    4
Max processor nodes/drawers     4                    4                    2
Max number of sockets           16                   16                   8
Max cores                       64                   128                  64
Max frequency                   3.8 GHz              3.7 GHz              4.35 GHz
Max memory                      1 TB per node        1 TB per node        4 TB per node
Memory per core                 64 GB                32 GB                128 GB
Memory bandwidth (peak)         272 GB/s per node    272 GB/s per node    922 GB/s per node
I/O bandwidth (peak)            80 GB/s per node     80 GB/s per node     256 GB/s per node
                                (GX)                 (GX)                 (PCIe Gen3)
Max PCIe I/O drawers            16 (4 per node)      16 (4 per node)      4 (2 per node)
Max PCIe I/O slots              160 in I/O drawers,  160 in I/O drawers,  48 in I/O drawers,
                                24 internal          24 internal          8 internal

© 2014 International Business Machines Corporation 118


Power E880 vs Power 780 (2015)

                                9117-MMD             9179-MHD             9119-MHE
                                Power 770            Power 780            Power E880
CPU sockets per node/drawer     4                    4                    4
Max processor nodes/drawers     4                    4                    4
Max number of sockets           16                   16                   16
Max cores                       64                   128                  128
Max frequency                   3.8 GHz              3.7 GHz              4.35 GHz
Max memory                      1 TB per node        1 TB per node        4 TB per node
Memory per core                 64 GB                32 GB                128 GB
Memory bandwidth (peak)         272 GB/s per node    272 GB/s per node    922 GB/s per node
I/O bandwidth (peak)            80 GB/s per node     80 GB/s per node     256 GB/s per node
                                (GX)                 (GX)                 (PCIe Gen3)
Max PCIe I/O drawers            16 (4 per node)      16 (4 per node)      16 (4 per node, SOD)
Max PCIe I/O slots              160 in I/O drawers,  160 in I/O drawers,  192 in I/O drawers (SOD),
                                24 internal          24 internal          0 internal

© 2014 International Business Machines Corporation 119


Power E880 vs Power 780 (2015 SOD)

                                9117-MMD             9179-MHD             9119-MHE
                                Power 770            Power 780            Power E880
CPU sockets per node/drawer     4                    4                    4
Max processor nodes/drawers     4                    4                    4
Max number of sockets           16                   16                   16
Max cores                       64                   128                  192 (SOD)
Max frequency                   3.8 GHz              3.7 GHz              TBD, ~4 GHz
Max memory                      1 TB per node        1 TB per node        4 TB per node
Memory per core                 64 GB                32 GB                85 GB
Memory bandwidth (peak)         272 GB/s per node    272 GB/s per node    922 GB/s per node
I/O bandwidth (peak)            80 GB/s per node     80 GB/s per node     256 GB/s per node
                                (GX)                 (GX)                 (PCIe Gen3)
Max PCIe I/O drawers            16 (4 per node)      16 (4 per node)      16 (4 per node, SOD)
Max PCIe I/O slots              160 in I/O drawers,  160 in I/O drawers,  192 in I/O drawers (SOD),
                                24 internal          24 internal          0 internal

IBM Confidential
© 2014 International Business Machines Corporation 120
Power E880 vs Power 795 (2014)

                                    9119-FHB                 9119-MHE
                                    Power 795                Power E880
CPU sockets per node                4                        4
Max processor nodes                 8                        2 (4 in 2015)
Max cores                           256                      64 (128, or 192 SOD, in 2015)
Max frequency                       4.0 GHz                  4.35 GHz
Inter-CEC-drawer SMP bus (A bus)    336 GB/s per node        307 GB/s per node
Intra-CEC-drawer bus (X bus)        576 GB/s per node        922 GB/s per node
Max memory                          2 TB per node,           4 TB per node,
                                    16 TB max system         8 TB max system in 2014
                                                             (16 TB max in 2015)
Memory per core                     64 GB                    125 GB
Memory bandwidth (peak)             546 GB/s per node        922 GB/s per node
Memory bandwidth per core (peak)    17 GB/s                  19 GB/s
I/O bandwidth (peak)                80 GB/s per node (GX)    256 GB/s per node (PCIe Gen3)
Max PCIe I/O drawers                32 (4 per node)          4 (2 per node)
                                                             (16, 4 per node, in 2015)
Max PCIe I/O slots                  640                      48 in 2014
                                                             (192 SOD in 2015)
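The per-core bandwidth figures are consistent with peak per-node bandwidth
divided by a fully populated node (4 sockets x 8 cores on the 795; 4 sockets
x 12 cores on the E880 with the 192-core SOD chips). That reading is an
inference from the arithmetic, not stated on the chart:

    # Sanity check of the memory-bandwidth-per-core figures
    for name, per_node_gbs, cores_per_node in [
        ("Power 795", 546, 32),
        ("Power E880", 922, 48),
    ]:
        print(f"{name}: {per_node_gbs / cores_per_node:.1f} GB/s per core")
    # Power 795:  17.1 GB/s per core (the 17 in the table)
    # Power E880: 19.2 GB/s per core (the 19 in the table)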
© 2014 International Business Machines Corporation 121
Power E880 vs Power 795 (2015)

                                    9119-FHB                 9119-MHE
                                    Power 795                Power E880 (4-node SOD)
CPU sockets per node                4                        4
Max processor nodes                 8                        4
Max cores                           256                      192 (SOD)
Max frequency                       4.0 GHz                  TBD, ~4 GHz est.
Inter-CEC-drawer SMP bus (A bus)    336 GB/s per node        307 GB/s per node
Intra-CEC-drawer bus (X bus)        576 GB/s per node        922 GB/s per node
Max memory                          2 TB per node,           4 TB per node,
                                    16 TB max system         16 TB max system
Memory per core                     64 GB                    85 GB
Memory bandwidth (peak)             546 GB/s per node        922 GB/s per node
Memory bandwidth per core (peak)    17 GB/s                  19 GB/s
I/O bandwidth (peak)                80 GB/s per node (GX)    256 GB/s per node (PCIe Gen3)
Max PCIe I/O drawers                32 (4 per node)          16 (4 per node)
Max PCIe I/O slots                  640                      192 (SOD)

© 2014 International Business Machines Corporation 122


Special notices
This document was developed for IBM offerings in the United States as of the date of publication. IBM may not make these offerings available in
other countries, and the information is subject to change without notice. Consult your local IBM business contact for information on the IBM
offerings available in your area.
Information in this document concerning non-IBM products was obtained from the suppliers of these products or other public sources.
Questions on the capabilities of non-IBM products should be addressed to the suppliers of those products.
IBM may have patents or pending patent applications covering subject matter in this document. The furnishing of this document does not give
you any license to these patents. Send license inquiries, in writing, to IBM Director of Licensing, IBM Corporation, New Castle Drive, Armonk,
NY 10504-1785 USA.
All statements regarding IBM future direction and intent are subject to change or withdrawal without notice, and represent goals and objectives
only.
The information contained in this document has not been submitted to any formal IBM test and is provided "AS IS" with no warranties or
guarantees either expressed or implied.
All examples cited or described in this document are presented as illustrations of the manner in which some IBM products can be used and the
results that may be achieved. Actual environmental costs and performance characteristics will vary depending on individual client configurations
and conditions.
IBM Global Financing offerings are provided through IBM Credit Corporation in the United States and other IBM subsidiaries and divisions
worldwide to qualified commercial and government clients. Rates are based on a client's credit rating, financing terms, offering type, equipment
type and options, and may vary by country. Other restrictions may apply. Rates and offerings are subject to change, extension or withdrawal
without notice.
IBM is not responsible for printing errors in this document that result in pricing or information inaccuracies.
All prices shown are IBM's United States suggested list prices and are subject to change without notice; reseller prices may vary.
IBM hardware products are manufactured from new parts, or new and serviceable used parts. Regardless, our warranty terms apply.
Any performance data contained in this document was determined in a controlled environment. Actual results may vary significantly and are
dependent on many factors including system hardware configuration and software design and configuration. Some measurements quoted in
this document may have been made on development-level systems. There is no guarantee these measurements will be the same on generally-
available systems. Some measurements quoted in this document may have been estimated through extrapolation. Users of this document
should verify the applicable data for their specific environment.

© 2014 International Business Machines Corporation 123


Special notices (cont.)
IBM, the IBM logo, ibm.com AIX, AIX (logo), AIX 6 (logo), AS/400, BladeCenter, Blue Gene, ClusterProven, DB2, ESCON, i5/OS, i5/OS (logo), IBM Business Partner
(logo), IntelliStation, LoadLeveler, Lotus, Lotus Notes, Notes, Operating System/400, OS/400, PartnerLink, PartnerWorld, PowerPC, pSeries, Rational, RISC
System/6000, RS/6000, THINK, Tivoli, Tivoli (logo), Tivoli Management Environment, WebSphere, xSeries, z/OS, zSeries, AIX 5L, Chiphopper, Chipkill, Cloudscape,
DB2 Universal Database, DS4000, DS6000, DS8000, EnergyScale, Enterprise Workload Manager, General Purpose File System, GPFS, HACMP, HACMP/6000,
HASM, IBM Systems Director Active Energy Manager, iSeries, Micro-Partitioning, POWER, PowerExecutive, PowerVM, PowerVM (logo), PowerHA, Power Architecture,
Power Everywhere, Power Family, POWER Hypervisor, Power Systems, Power Systems (logo), Power Systems Software, Power Systems Software (logo), POWER2,
POWER3, POWER4, POWER4+, POWER5, POWER5+, POWER6, POWER6+, System i, System p, System p5, System Storage, System z, Tivoli Enterprise, TME 10,
Workload Partitions Manager and X-Architecture are trademarks or registered trademarks of International Business Machines Corporation in the United States, other
countries, or both. If these and other IBM trademarked terms are marked on their first occurrence in this information with a trademark symbol (® or ™), these symbols
indicate U.S. registered or common law trademarks owned by IBM at the time this information was published. Such trademarks may also be registered or common law
trademarks in other countries. A current list of IBM trademarks is available on the Web at "Copyright and trademark information" at www.ibm.com/legal/copytrade.shtml

The Power Architecture and Power.org wordmarks and the Power and Power.org logos and related marks are trademarks and service marks licensed by Power.org.
UNIX is a registered trademark of The Open Group in the United States, other countries or both.
Linux is a registered trademark of Linus Torvalds in the United States, other countries or both.
Microsoft, Windows and the Windows logo are registered trademarks of Microsoft Corporation in the United States, other countries or both.
Intel, Itanium, Pentium are registered trademarks and Xeon is a trademark of Intel Corporation or its subsidiaries in the United States, other countries or both.
AMD Opteron is a trademark of Advanced Micro Devices, Inc.
Java and all Java-based trademarks and logos are trademarks of Sun Microsystems, Inc. in the United States, other countries or both.
TPC-C and TPC-H are trademarks of the Transaction Performance Processing Council (TPPC).
SPECint, SPECfp, SPECjbb, SPECweb, SPECjAppServer, SPEC OMP, SPECviewperf, SPECapc, SPEChpc, SPECjvm, SPECmail, SPECimap and SPECsfs are
trademarks of the Standard Performance Evaluation Corp (SPEC).
NetBench is a registered trademark of Ziff Davis Media in the United States, other countries or both.
AltiVec is a trademark of Freescale Semiconductor, Inc.
Cell Broadband Engine is a trademark of Sony Computer Entertainment Inc.
InfiniBand, InfiniBand Trade Association and the InfiniBand design marks are trademarks and/or service marks of the InfiniBand Trade Association.
Other company, product and service names may be trademarks or service marks of others.

© 2014 International Business Machines Corporation 124


Notes on benchmarks and values
The IBM benchmarks results shown herein were derived using particular, well configured, development-level and generally-available computer systems. Buyers should
consult other sources of information to evaluate the performance of systems they are considering buying and should consider conducting application oriented testing. For
additional information about the benchmarks, values and systems tested, contact your local IBM office or IBM authorized reseller or access the Web site of the benchmark
consortium or benchmark vendor.

IBM benchmark results can be found in the IBM Power Systems Performance Report at http://www.ibm.com/systems/p/hardware/system_perf.html.

All performance measurements were made with AIX or AIX 5L operating systems unless otherwise indicated to have used Linux. For new and upgraded systems, AIX
Version 4.3, AIX 5L or AIX 6 were used. All other systems used previous versions of AIX. The SPEC CPU2006, SPEC2000, LINPACK, and Technical Computing
benchmarks were compiled using IBM's high performance C, C++, and FORTRAN compilers for AIX 5L and Linux. For new and upgraded systems, the latest versions of
these compilers were used: XL C Enterprise Edition V7.0 for AIX, XL C/C++ Enterprise Edition V7.0 for AIX, XL FORTRAN Enterprise Edition V9.1 for AIX, XL C/C++
Advanced Edition V7.0 for Linux, and XL FORTRAN Advanced Edition V9.1 for Linux. The SPEC CPU95 (retired in 2000) tests used preprocessors, KAP 3.2 for
FORTRAN and KAP/C 1.4.2 from Kuck & Associates and VAST-2 v4.01X8 from Pacific-Sierra Research. The preprocessors were purchased separately from these
vendors. Other software packages like IBM ESSL for AIX, MASS for AIX and Kazushige Goto’s BLAS Library for Linux were also used in some benchmarks.

For a definition/explanation of each benchmark and the full list of detailed results, visit the Web site of the benchmark consortium or benchmark vendor.

TPC http://www.tpc.org
SPEC http://www.spec.org
LINPACK http://www.netlib.org/benchmark/performance.pdf
Pro/E http://www.proe.com
GPC http://www.spec.org/gpc
NotesBench http://www.notesbench.org
VolanoMark http://www.volano.com
STREAM http://www.cs.virginia.edu/stream/
SAP http://www.sap.com/benchmark/
Oracle Applications http://www.oracle.com/apps_benchmark/
PeopleSoft - To get information on PeopleSoft benchmarks, contact PeopleSoft directly
Siebel http://www.siebel.com/crm/performance_benchmark/index.shtm
Baan http://www.ssaglobal.com
Microsoft Exchange http://www.microsoft.com/exchange/evaluation/performance/default.asp
Veritest http://www.veritest.com/clients/reports
Fluent http://www.fluent.com/software/fluent/index.htm
TOP500 Supercomputers http://www.top500.org/
Ideas International http://www.ideasinternational.com/benchmark/bench.html
Storage Performance Council http://www.storageperformance.org/results
Revised January 15, 2008

© 2014 International Business Machines Corporation 125


Notes on HPC benchmarks and values
The IBM benchmarks results shown herein were derived using particular, well configured, development-level and generally-available computer systems. Buyers should
consult other sources of information to evaluate the performance of systems they are considering buying and should consider conducting application oriented testing. For
additional information about the benchmarks, values and systems tested, contact your local IBM office or IBM authorized reseller or access the Web site of the benchmark
consortium or benchmark vendor.

IBM benchmark results can be found in the IBM Power Systems Performance Report at http://www.ibm.com/systems/p/hardware/system_perf.html.

All performance measurements were made with AIX or AIX 5L operating systems unless otherwise indicated to have used Linux. For new and upgraded systems, AIX
Version 4.3 or AIX 5L were used. All other systems used previous versions of AIX. The SPEC CPU2000, LINPACK, and Technical Computing benchmarks were compiled
using IBM's high performance C, C++, and FORTRAN compilers for AIX 5L and Linux. For new and upgraded systems, the latest versions of these compilers were used:
XL C Enterprise Edition V7.0 for AIX, XL C/C++ Enterprise Edition V7.0 for AIX, XL FORTRAN Enterprise Edition V9.1 for AIX, XL C/C++ Advanced Edition V7.0 for Linux,
and XL FORTRAN Advanced Edition V9.1 for Linux. The SPEC CPU95 (retired in 2000) tests used preprocessors, KAP 3.2 for FORTRAN and KAP/C 1.4.2 from Kuck &
Associates and VAST-2 v4.01X8 from Pacific-Sierra Research. The preprocessors were purchased separately from these vendors. Other software packages like IBM
ESSL for AIX, MASS for AIX and Kazushige Goto’s BLAS Library for Linux were also used in some benchmarks.

For a definition/explanation of each benchmark and the full list of detailed results, visit the Web site of the benchmark consortium or benchmark vendor.
SPEC http://www.spec.org
LINPACK http://www.netlib.org/benchmark/performance.pdf
Pro/E http://www.proe.com
GPC http://www.spec.org/gpc
STREAM http://www.cs.virginia.edu/stream/
Veritest http://www.veritest.com/clients/reports
Fluent http://www.fluent.com/software/fluent/index.htm
TOP500 Supercomputers http://www.top500.org/
AMBER http://amber.scripps.edu/
FLUENT http://www.fluent.com/software/fluent/fl5bench/index.htm
GAMESS http://www.msg.chem.iastate.edu/gamess
GAUSSIAN http://www.gaussian.com
ABAQUS http://www.abaqus.com/support/sup_tech_notes64.html
select Abaqus v6.4 Performance Data
ANSYS http://www.ansys.com/services/hardware_support/index.htm
select “Hardware Support Database”, then benchmarks.
ECLIPSE http://www.sis.slb.com/content/software/simulation/index.asp?seg=geoquest&
MM5 http://www.mmm.ucar.edu/mm5/
MSC.NASTRAN http://www.mscsoftware.com/support/prod%5Fsupport/nastran/performance/v04_sngl.cfm
STAR-CD www.cd-adapco.com/products/STAR-CD/performance/320/index/html
NAMD http://www.ks.uiuc.edu/Research/namd
HMMER http://hmmer.janelia.org/ Revised January 15, 2008
http://powerdev.osuosl.org/project/hmmerAltivecGen2mod

© 2014 International Business Machines Corporation 126


Notes on performance estimates
rPerf for AIX

rPerf (Relative Performance) is an estimate of commercial processing performance relative to other IBM UNIX systems.
It is derived from an IBM analytical model which uses characteristics from IBM internal workloads, TPC and SPEC
benchmarks. The rPerf model is not intended to represent any specific public benchmark results and should not be
reasonably used in that way. The model simulates some of the system operations such as CPU, cache and memory.
However, the model does not simulate disk or network I/O operations.

rPerf estimates are calculated based on systems with the latest levels of AIX and other pertinent software at the time of
system announcement. Actual performance will vary based on application and configuration specifics. The IBM
eServer pSeries 640 is the baseline reference system and has a value of 1.0. Although rPerf may be used to
approximate relative IBM UNIX commercial processing performance, actual system performance may vary and is
dependent upon many factors including system hardware configuration and software design and configuration.
Variations in incremental system performance may be observed in commercial workloads due to changes in the
underlying system architecture.

All performance estimates are provided "AS IS" and no warranties or guarantees are expressed or implied by IBM.
Buyers should consult other sources of information, including system benchmarks, and application sizing guides to
evaluate the performance of a system they are considering buying. For additional information about rPerf, contact
your local IBM office or IBM authorized reseller.

========================================================================

CPW for IBM i

Commercial Processing Workload (CPW) is a relative measure of performance of processors running the IBM i
operating system. Performance in customer environments may vary. The value is based on maximum
configurations. More performance information is available in the Performance Capabilities Reference at:
www.ibm.com/systems/i/solutions/perfmgmt/resource.html

Revised April 2, 2007

© 2014 International Business Machines Corporation 127
