HP Cluster Platform

Server and Workstation Overview

HP Part Number: A-CPSOV-1H


Published: March 2009
© Copyright 2009 Hewlett-Packard Development Company, L.P.

The information contained herein is subject to change without notice. The only warranties for HP products and services are set forth in the express
warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. HP
shall not be liable for technical or editorial errors or omissions contained herein.

Intel®, Intel® Xeon™, and Itanium® are trademarks or registered trademarks of Intel Corporation or its subsidiaries in the U.S. and other
countries.

AMD® and AMD Opteron™ are trademarks or registered trademarks of Advanced Micro Devices, Inc.

Microsoft, Windows Vista, and Windows XP are registered trademarks of Microsoft Corporation.

Red Hat is a trademark of Red Hat, Inc.

Printed in the United States.


Table of Contents
About This Manual..........................................................................................................14
Audience...............................................................................................................................................14
Organization.........................................................................................................................................15
HP Cluster Platform Documentation...................................................................................................15
Bracket Installation Guides...................................................................................................................16
Additional Documentation...................................................................................................................16
HP Encourages Your Comments..........................................................................................................22
Important Safety Information...............................................................................................................23
1 Itanium Processor Servers............................................................................................25
1.1 HP Integrity rx1620.........................................................................................................................25
1.1.1 Network Port Assignments.....................................................................................................27
1.1.2 Supported Memory Configurations........................................................................................27
1.1.3 Supported Storage Configurations.........................................................................................28
1.1.4 Cable Management..................................................................................................................28
1.1.5 Installing or Removing a PCI Card.........................................................................................28
1.2 HP Integrity rx2600.........................................................................................................................30
1.2.1 Network Port Assignments.....................................................................................................32
1.2.2 Supported Memory Configurations........................................................................................33
1.2.3 Supported Storage Configurations.........................................................................................34
1.2.4 Cable Management..................................................................................................................34
1.2.5 Installing or Removing a PCI Card.........................................................................................35
1.3 HP Integrity rx2620.........................................................................................................................36
1.3.1 PCI Slot Assignments..............................................................................................................38
1.4 HP Integrity rx2660.........................................................................................................................39
1.4.1 PCI Slot Assignments..............................................................................................................42
1.4.2 Removing the I/O Backplane Assembly..................................................................................42
1.4.2.1 Integrity rx2660 PCI-X and PCI-X/PCI-E I/O Backplane Assembly Options.................43
1.4.3 Installing PCI Cards in the Integrity rx2660............................................................................45
1.5 HP Integrity rx3600.........................................................................................................................45
1.5.1 Supported Memory Configurations........................................................................................46
1.5.2 Upgrading the HP Integrity rx3600.........................................................................................47
1.5.3 PCI-X Slot Assignment and Supported Options.....................................................................47
1.5.4 Installing or Removing a PCI Card.........................................................................................48
1.5.5 Cable Management..................................................................................................................50
1.6 HP Integrity rx4640.........................................................................................................................50
1.6.1 Supported Memory Configurations........................................................................................51
1.6.2 Upgrading the HP Integrity rx4640.........................................................................................51
1.6.3 PCI-X Slot Assignment and Supported Options.....................................................................52
1.6.4 Installing or Removing a PCI Card.........................................................................................52
1.6.5 Cable Management..................................................................................................................54
2 Xeon Processor Servers................................................................................................55
2.1 HP ProLiant DL140 G2....................................................................................................................55
2.1.1 HP ProLiant DL140 G2 PCI Slot Assignments........................................................................57
2.1.2 HP ProLiant DL140 G2 Memory Configurations....................................................................57
2.1.3 Installing a PCI Card in the HP ProLiant DL140 G2...............................................................57
2.2 HP ProLiant DL140 G3 Used in HP Cluster Platform and Scalable Visualization Array..............59
2.2.1 HP ProLiant DL140 G3 PCI Slot Assignments........................................................................61
2.2.2 HP ProLiant DL140 G3 Memory Configurations....................................................................61
2.2.3 Installing a PCI Card in the DL140 G3....................................................................................62
2.3 HP ProLiant DL160 G5 and G5p.....................................................................................................62

Table of Contents 3
2.3.1 PCI Slot Assignments..............................................................................................................63
2.4 HP ProLiant DL360 G3, G4, and G4p..............................................................................................63
2.4.1 PCI Slot Assignments..............................................................................................................68
2.4.2 Embedded Technologies.........................................................................................................68
2.4.3 High-Availability Features......................................................................................................69
2.4.4 Removing a Server from the Rack...........................................................................................70
2.4.4.1 Accessing Internal Components.....................................................................................71
2.4.5 Replacing a PCI Card..............................................................................................................72
2.5 HP ProLiant DL360 G5....................................................................................................................73
2.5.1 HP ProLiant DL360 G5 Front Panel........................................................................................74
2.5.2 HP ProLiant DL360 G5 Rear Panel..........................................................................................75
2.5.3 HP ProLiant DL360 G5 Front Panel LEDs..............................................................................76
2.5.4 HP ProLiant DL360 G5 Rear Panel LEDs and Buttons...........................................................77
2.5.5 System Insight Display............................................................................................................78
2.5.5.1 HP Systems Insight Display LEDs and Internal Health LED Combinations.................79
2.5.6 PCI Slot Assignments..............................................................................................................80
2.5.7 HP ProLiant DL360 G5 Embedded Technologies and Fault Tolerance..................................81
2.5.7.1 Removing a ProLiant DL360 G5 Server from the Rack and Accessing Internal
Components...............................................................................................................................81
2.5.8 Replacing a PCI Card..............................................................................................................81
2.6 HP ProLiant DL380 G3 and G4.......................................................................................................83
2.6.1 PCI Slot Assignments..............................................................................................................85
2.6.2 Removing the Server from the Rack.......................................................................................86
2.6.2.1 Accessing Internal Components.....................................................................................86
2.6.3 Replacing a PCI Card..............................................................................................................86
2.7 HP ProLiant DL380 G5....................................................................................................................89
2.7.1 HP ProLiant DL380 G5 Front Panel........................................................................................90
2.7.2 HP ProLiant DL380 G5 Rear Panel..........................................................................................90
2.7.3 ProLiant DL380 G5 Front LEDs..............................................................................................91
2.7.4 ProLiant DL380 G5 Rear LEDs................................................................................................92
2.7.5 Systems Insight Display LEDs................................................................................................93
2.7.6 PCI Slot Assignments..............................................................................................................94
2.7.7 New DL380 PCI Slot Assignments (as of March 17, 2008)......................................................95
2.7.8 Rack Mounting Holes for ProLiant DL380 G5 Cable Management Brackets.........................95
2.7.9 Removing the Server from the Rack.......................................................................................96
2.7.9.1 Accessing Internal Components.....................................................................................97
2.7.10 Replacing a PCI Card............................................................................................................97
3 Opteron Processor Servers..........................................................................................99
3.1 HP ProLiant DL145.........................................................................................................................99
3.1.1 HP ProLiant DL145 G2 PCI Slot Assignments......................................................................102
3.1.2 Removing an HP ProLiant DL145 from a Rack.....................................................................102
3.1.3 Replacing a PCI card in the HP ProLiant DL145 G1.............................................................105
3.1.4 Installing or Replacing a PCI Card in the HP ProLiant DL145 G2.......................................107
3.2 HP ProLiant DL145 G3..................................................................................................................110
3.2.1 Removing an HP ProLiant DL145 G3 from a Rack...............................................................113
3.2.2 Removing the DL145 G3 Access Panel..................................................................................114
3.2.3 DL145 G3 System Board Expansion Slots.............................................................................115
3.2.4 DL145 G3 PCI Slot Assignments...........................................................................................116
3.2.5 ProLiant DL145 G3 Riser Board Assemblies.........................................................................117
3.2.5.1 Installing or Replacing the ProLiant DL145 G3 Riser Board Assemblies.....................117
3.2.5.2 Removing or Installing a Riser Board...........................................................................118
3.2.6 Replacing a PCI Expansion Card in the ProLiant DL145 G3................................................120
3.3 HP ProLiant DL165 G5..................................................................................................................122
3.3.1 PCI Slot Assignments............................................................................................................123
3.4 HP ProLiant DL385 G1..................................................................................................................124
3.4.1 PCI Slot Assignments............................................................................................................125
3.4.2 Removing an HP ProLiant DL385 G1 from a Rack...............................................................126
3.4.3 Replacing an HP ProLiant DL385 PCI Card..........................................................................127
3.5 HP ProLiant DL385 G2..................................................................................................................129
3.5.1 ProLiant DL385 G2 Front and Rear Views............................................................................130
3.5.2 PCI Slot Assignments............................................................................................................131
3.5.3 New DL385 G2 PCI Slot Assignments (as of March 17, 2008)..............................................132
3.5.4 Removing the ProLiant DL385 G2 from a Rack....................................................................132
3.5.5 Replacing a ProLiant DL385 G2 PCI Card............................................................................134
3.5.6 ProLiant DL385 G2 Systems Insight Display........................................................................136
3.6 HP ProLiant DL385 G5 and G5p...................................................................................................136
3.6.1 PCI Slot Assignments............................................................................................................136
3.7 HP ProLiant DL585........................................................................................................................136
3.7.1 HP ProLiant DL585 Memory Configuration.........................................................................139
3.7.2 Removing an HP ProLiant DL585 from the Rack.................................................................139
3.7.3 Replacing a PCI Card............................................................................................................142
3.8 HP ProLiant DL585 G2..................................................................................................................144
3.8.1 Removing an HP ProLiant DL585 G2 from the Rack............................................................147
3.8.2 Replacing a PCI Card............................................................................................................149
3.9 HP ProLiant DL585 G5..................................................................................................................152
3.9.1 PCI Slot Assignments............................................................................................................152
4 Server Blades.............................................................................................................153
4.1 HP BladeSystem p-Class Overview...............................................................................................153
4.2 HP ProLiant BL35p Server Blade Overview..................................................................................153
4.2.1 Supported Memory Configurations......................................................................................156
4.2.2 Supported Storage.................................................................................................................157
4.2.3 Removing the HP ProLiant BL35p from the Sleeve..............................................................157
4.3 HP ProLiant BL45p Server Blade Overview..................................................................................158
4.3.1 Supported Memory...............................................................................................................162
4.3.2 Supported Storage.................................................................................................................163
4.3.3 Removing the HP ProLiant BL45p from the Rack Enclosure...............................................163
4.4 HP BladeSystem c-Class Enclosure Overview..............................................................................164
4.4.1 HP BladeSystem c-Class Enclosure Features........................................................................164
4.4.2 HP BladeSystem c-Class Enclosure Device Bay Numbering...............................................167
4.4.3 Interconnect Module Bay Numbering...................................................................................169
4.4.4 HP BladeSystem c-7000 Interconnect Module Bay to Server Blade Type Port Mapping......171
4.5 Server Blade Type vs Gigabit Ethernet Blade Switch with Bandwidth Ratios.............................172
4.6 HP 4x DDR InfiniBand Switch Module for c-Class BladeSystem.................................................172
4.6.1 HP 4x DDR InfiniBand Switch Module Removal and Installation Procedure.....................172
4.7 HP ProLiant BL2x220c G5 Server Blade........................................................................................174
4.8 HP ProLiant BL260c G5 Server Blade............................................................................................175
4.9 HP ProLiant BL460c and BL460c G5 Server Blade Overview.......................................................176
4.9.1 HP ProLiant BL460c Front View...........................................................................................177
4.9.2 HP ProLiant BL460c Front Panel LEDs.................................................................................177
4.9.3 HP ProLiant BL460c Internal View.......................................................................................178
4.9.4 HP ProLiant BL460c System Board.......................................................................................178
4.9.5 Memory Options...................................................................................................................179
4.9.6 Mezzanine HCA Card...........................................................................................................180
4.9.7 Supported Storage.................................................................................................................180
4.9.8 Removing the HP ProLiant BL460c from the c-Class Enclosure..........................................181
4.10 HP ProLiant BL480c Server Blade Overview..............................................................................183
4.10.1 HP ProLiant BL480c Front View..........................................................................................183
4.10.2 HP ProLiant BL480c Front Panel LEDs...............................................................................184
4.10.3 HP ProLiant BL480c Internal View......................................................................................185
4.10.4 HP ProLiant BL480c System Board.....................................................................................186
4.10.5 Memory Options..................................................................................................................187
4.10.6 Mezzanine HCA Card.........................................................................................................188
4.10.7 Supported Storage...............................................................................................................188
4.10.8 Removing the HP ProLiant BL480c from the c-Class Enclosure.........................................189
4.11 HP ProLiant BL465c Server Blade Overview..............................................................................190
4.11.1 HP ProLiant BL465c Front View..........................................................................................191
4.11.2 HP ProLiant BL465c Front Panel LEDs...............................................................................192
4.11.3 HP ProLiant BL465c Internal View......................................................................................193
4.11.4 HP ProLiant BL465c System Board.....................................................................................194
4.11.5 Memory Options..................................................................................................................195
4.11.6 Mezzanine HCA Card.........................................................................................................195
4.11.7 Supported Storage...............................................................................................................195
4.11.8 Removing the HP ProLiant BL465c from the c-Class Enclosure........................................196
4.12 HP ProLiant BL465c G5 Server Blade..........................................................................................196
4.13 HP ProLiant BL685c Server Blade Overview..............................................................................196
4.13.1 HP ProLiant BL685c Front View..........................................................................................197
4.13.2 HP ProLiant BL685c Front Panel LEDs...............................................................................198
4.13.3 HP ProLiant BL685c Internal View......................................................................................199
4.13.4 HP ProLiant BL685c System Board.....................................................................................199
4.13.5 Memory Options..................................................................................................................201
4.13.6 Mezzanine HCA Card.........................................................................................................201
4.13.7 Supported Storage...............................................................................................................201
4.13.8 Removing the HP ProLiant BL685c from the c-Class Enclosure.........................................202
4.14 HP ProLiant BL685c G5 Server Blade Overview.........................................................................202
4.15 HP Integrity BL860c Server Blade Overview...............................................................................202
4.15.1 HP Integrity BL860c Front View..........................................................................................203
4.15.2 HP Integrity BL860c LEDs...................................................................................................204
4.15.3 HP Integrity BL860c Internal View......................................................................................205
4.15.4 Configuring or Replacing Memory.....................................................................................206
4.15.5 Mezzanine HCA Card.........................................................................................................207
4.15.6 Supported Storage...............................................................................................................207
4.15.7 Removing the HP Integrity BL860c from the c-Class Enclosure.........................................207
5 Workstations..............................................................................................................208
5.1 HP xw8200 Workstation Overview...............................................................................................208
5.1.1 PCI Slot Assignments............................................................................................................210
5.1.2 Replacing or Installing a PCI Card........................................................................................211
5.1.3 Removing a Workstation from the Rack...............................................................................213
5.2 HP xw8400 Workstation Overview...............................................................................................214
5.2.1 PCI Slot Assignments............................................................................................................217
5.2.2 xw8400 PCI Slot Rules...........................................................................................................218
5.2.3 NVIDIA Quadro FX 4500 Graphics Card..............................................................................218
5.2.4 NVIDIA Quadro G-Sync Option Card..................................................................................219
5.2.5 NVIDIA Quadro FX 5500 Graphics Card..............................................................................220
5.2.6 Replacing or Installing a PCI Card........................................................................................221
5.2.7 Removing a Workstation from the Rack...............................................................................223
5.3 HP xw9300 Workstation Overview...............................................................................................223
5.3.1 PCI Slot Identification............................................................................................................225
5.3.2 Slot Assignment Rules...........................................................................................................226
5.3.3 Typical PCI Slot Configuration..............................................................................................227
5.3.4 NVIDIA PCI Graphics Cards................................................................................................228
5.3.4.1 NVIDIA FX3450 Graphics Card....................................................................................228
5.3.4.2 NVIDIA FX4500 Graphics Card....................................................................................229
5.3.5 Connecting PCI Graphics Cards to Displays........................................................................229
5.3.6 The NVIDIA FX G-Sync PCI Graphics Card (Optional).......................................................229
5.3.7 System Interconnect Cards....................................................................................................230
5.3.8 Memory Configurations........................................................................................................230
5.3.9 PCI Card Installation and Removal Instructions..................................................................231
5.3.9.1 PCI Card Support..........................................................................................................231
5.3.9.2 Removing and Installing PCI Express Cards................................................................232
5.3.9.3 Removing and Installing PCI or PCI-X Cards...............................................................233
5.3.10 Removing a Workstation from the Rack..............................................................................234
5.4 HP xw9400 Workstation Overview...............................................................................................234
5.4.1 PCI Slot Identification............................................................................................................237
5.4.2 Slot Assignment Rules...........................................................................................................237
5.4.3 xw9400 Graphics Options......................................................................................................238
5.4.3.1 NVIDIA Quadro FX 3500..............................................................................................238
5.4.3.2 NVIDIA Quadro FX 4500..............................................................................................239
5.4.3.3 NVIDIA Quadro G-Sync...............................................................................................240
5.4.4 Memory Configurations........................................................................................................241
5.4.5 PCI Card Installation and Removal Instructions..................................................................241
5.4.6 Removing a Workstation from the Rack...............................................................................241
A USB Drive Key Support on ProLiant G4 Models....................................................242
Index...............................................................................................................................243
List of Figures
1-1 HP Integrity rx1620 Front Panel....................................................................................................25
1-2 HP Integrity rx1620 Rear Panel.....................................................................................................26
1-3 Releasing the PCI I/O Riser...........................................................................................................29
1-4 Removing the PCI I/O Riser Assembly.........................................................................................29
1-5 Removing the PCI Slot Cover........................................................................................................30
1-6 Sliding the Card into the PCI Riser Connector.............................................................................30
1-7 HP Integrity rx2600 Front Panel....................................................................................................31
1-8 HP Integrity rx2600 Rear View......................................................................................................31
1-9 HP Integrity rx2600 Network Ports...............................................................................................32
1-10 Removing the Card Cage..............................................................................................................35
1-11 Opening the Card Cage.................................................................................................................35
1-12 Removing the Blank......................................................................................................................36
1-13 Inserting the Card..........................................................................................................................36
1-14 HP Integrity rx2620 Front Panel....................................................................................................37
1-15 HP Integrity rx2620 Rear Panel.....................................................................................................38
1-16 HP Integrity rx2660 Front Panel....................................................................................................40
1-17 HP Integrity rx2660 Rear Panel.....................................................................................................41
1-18 Removing the I/O Backplane Assembly........................................................................................43
1-19 Integrity rx2660 PCI-X I/O Backplane Assembly..........................................................................44
1-20 Integrity rx2660 Mixed PCI-X/PCI-E I/O Backplane Assembly...................................44
1-21 HP Integrity rx3600 Front View....................................................................................................45
1-22 HP Integrity rx3600 Rear View......................................................................................................46
1-23 Removing the Screws that Secure the Server in the Rack.............................................................48
1-24 Removing the Server's Top Panel..................................................................................................49
1-25 Installing the PCI Card..................................................................................................................49
1-26 HP Integrity rx4640 Front View....................................................................................................50
1-27 HP Integrity rx4640 Rear View......................................................................................................50
1-28 Removing the Screws that Secure the Server in the Rack.............................................................53
1-29 Removing the Server's Top Panel..................................................................................................53
1-30 Installing the PCI Card..................................................................................................................54
2-1 HP ProLiant DL140 G2 Front Panel..............................................................................................56
2-2 HP ProLiant DL140 G2 Rear Panel................................................................................................56
2-3 Removing the HP ProLiant DL140 G2 PCI Card Cage.................................................................58
2-4 HP ProLiant DL140 G2 PCI Card Cage.........................................................................................58
2-5 Installing an Interconnect Adapter in the HP ProLiant DL140 G2 PCI Card Cage......................59
2-6 HP ProLiant DL140 G3 Front Panel..............................................................................................60
2-7 HP ProLiant DL160 G5 and G5p Front View................................................................................62
2-8 HP ProLiant DL160 G5 and G5p Rear View.................................................................................63
2-9 HP ProLiant DL360 G3 Front Panel..............................................................................................65
2-10 HP ProLiant DL360 G3 Rear Panel with Single Power Supply.....................................................65
2-11 ProLiant DL360 G4 Front Panel.....................................................................................................66
2-12 HP ProLiant DL360 G4 Rear Panel................................................................................................66
2-13 ProLiant DL360 G4p Front Panel..................................................................................................67
2-14 HP ProLiant DL360 G4p Rear Panel.............................................................................................68
2-15 Front Unit Identification LEDs......................................................................................................70
2-16 Rear Unit Identification LEDs.......................................................................................................70
2-17 Sliding the Server from the Rack...................................................................................................71
2-18 Removing the Access Panel...........................................................................................................71
2-19 Inserting the PCI Riser Card..........................................................................................................72
2-20 HP ProLiant DL360 G5 Front Panel..............................................................................................74
2-21 HP ProLiant DL360 G5 Rear Panel................................................................................................75
2-22 ProLiant DL360 G5 Front Panel LEDs...........................................................................................76
2-23 ProLiant DL360 G5 Rear Panel LEDs............................................................................................77
2-24 System Insight Display (Actual)....................................................................................................78
2-25 System Insight Display Map.........................................................................................................79
2-26 PCI Riser Board Assembly............................................................................................................82
2-27 Inserting a New PCI Adapter Into the PCI Riser Board...............................................................82
2-28 HP ProLiant DL380 G3 front panel...............................................................................................84
2-29 HP ProLiant DL380 G4 Front Panel..............................................................................................84
2-30 HP ProLiant DL380 G3 and G4 Rear Panel...................................................................................85
2-31 ProLiant DL380 PCI Riser Cage Door...........................................................................................87
2-32 ProLiant DL380 PCI Riser Cage Door Latch.................................................................................87
2-33 Removing the HP ProLiant DL380 PCI Riser Cage......................................................................88
2-34 Unlocking the HP ProLiant DL380 PCI Retaining Clip................................................................88
2-35 Removing the HP ProLiant DL380 Expansion Board...................................................................89
2-36 HP ProLiant DL380 G5 Front Panel..............................................................................................90
2-37 HP ProLiant DL380 G5 Rear Panel................................................................................................91
2-38 HP ProLiant DL380 G5 Front LEDs..............................................................................................92
2-39 HP ProLiant DL380 G5 Rear LEDs................................................................................................93
2-40 HP ProLiant DL380 G5 Systems Insight Display..........................................................................94
2-41 Rack Mounting Holes for ProLiant DL380 G5 Cable Management Brackets...............................96
2-42 Removing the ProLiant DL380 G5 PCI Riser Cage.......................................................................98
2-43 Removing the ProLiant DL380 G5 PCI Riser Cage.......................................................................98
3-1 HP ProLiant DL145 G1 Front Panel.............................................................................................100
3-2 HP ProLiant DL145 G1 Rear Panel..............................................................................................100
3-3 HP ProLiant DL145 G2 Front Panel.............................................................................................101
3-4 HP ProLiant DL145 G2 Rear Panel..............................................................................................102
3-5 HP ProLiant DL145 G1 Front Thumbscrews...............................................................................103
3-6 Sliding the HP ProLiant DL145 G1 from the Rack......................................................................104
3-7 Removing the HP ProLiant DL145 G1 Access Panel...................................................................104
3-8 Removing the HP ProLiant DL145 G2 Access Panel...................................................................105
3-9 HP ProLiant DL145 G1 Riser Cage..............................................................................................106
3-10 Removing an HP ProLiant DL145 G1 PCI Card from the Riser Cage........................................106
3-11 HP ProLiant DL145 G2 PCI Expansion Slots..............................................................................107
3-12 Removing the HP ProLiant DL145 G2 PCI Card Cage...............................................................108
3-13 Removing the Cover of the Low-Profile Expansion Slot.............................................................108
3-14 Removing the Cover of the Full-Length Expansion Slot.............................................................109
3-15 Installing a Full-Length PCI Card in the HP ProLiant DL145 G2...............................................109
3-16 HP ProLiant DL145 G3 Front Panel.............................................................................................111
3-17 HP ProLiant DL145 G3 Rear Panel..............................................................................................112
3-18 HP ProLiant DL145 G3 Front Thumbscrews...............................................................................113
3-19 Sliding the HP ProLiant DL145 G3 from the Rack......................................................................114
3-20 Removing the HP ProLiant DL145 G3 Access Panel...................................................................115
3-21 DL145 G3 System Board Expansion Slots...................................................................................116
3-22 Removing the HP ProLiant DL145 G3 Full-Sized PCI Riser Board Assembly...........................118
3-23 Removing the HP ProLiant DL145 G3 Low-Profile PCI Riser Assembly...................................118
3-24 Remove the Full-Sized Riser Board.............................................................................................120
3-25 Remove the Low-Profile Riser Board..........................................................................120
3-26 Removing a Full-Sized PCI Card in the HP ProLiant DL145 G3................................................121
3-27 Removing a Low-Profile PCI Card in the HP ProLiant DL145 G3.............................................122
3-28 HP ProLiant DL165 G5 Front View.............................................................................................122
3-29 HP ProLiant DL165 G5 Rear View..............................................................................................123
3-30 HP ProLiant DL385 G1 Front Panel.............................................................................................125
3-31 HP ProLiant DL385 G1 Rear Panel..............................................................................................125
3-32 Extending the HP ProLiant DL385 from the Rack......................................................................126
3-33 HP ProLiant DL385 PCI Riser Cage Door...................................................................................127
3-34 HP ProLiant DL385 PCI Riser Cage Door Latch.........................................................................127

3-35 Removing the HP ProLiant DL385 PCI Riser Cage.....................................................................128
3-36 Unlocking the HP ProLiant DL385 PCI Retaining Clip..............................................................128
3-37 Removing the HP ProLiant DL385 Expansion Board.................................................................129
3-38 HP ProLiant DL385 G2 Front Panel.............................................................................................130
3-39 HP ProLiant DL385 G2 Rear Panel..............................................................................................131
3-40 Removing the ProLiant DL385 G2 from a Rack..........................................................................133
3-41 Removing the ProLiant DL385 G2 PCI Riser Cage.....................................................................134
3-42 Removing a PCI Slot Cover in the PCI Riser Cage.....................................................135
3-43 Removing a DL385 G2 PCI Card.................................................................................................135
3-44 HP ProLiant DL585 Front Panel..................................................................................................138
3-45 HP ProLiant DL585 Rear Panel...................................................................................................138
3-46 Loosening Thumbscrews.............................................................................................................140
3-47 Sliding the HP ProLiant DL585 from the Rack...........................................................................140
3-48 Unlocking the HP ProLiant DL585 Access Panel Latch..............................................................141
3-49 Sliding an HP ProLiant DL585 into a Rack.................................................................................141
3-50 HP ProLiant DL585 PCI-X Expansion Slots and Buses...............................................................142
3-51 Removing a PCI Card from an HP ProLiant DL585 Server........................................................143
3-52 Installing a PCI Card in an HP ProLiant DL585 Server..............................................................143
3-53 HP ProLiant DL585 G2 Front Panel.............................................................................................145
3-54 HP ProLiant DL585 G2 Rear Panel..............................................................................................146
3-55 Remove Server from Rack...........................................................................................................147
3-56 ProLiant DL585 G2 from the Rack...............................................................................................148
3-57 Unlock the ProLiant DL585 G2 Access Panel Latch....................................................................149
3-58 HP ProLiant DL585 G2 PCI Slots................................................................................................150
3-59 Removing a PCI Card from an HP ProLiant DL585 G2 Server...................................................151
4-1 HP ProLiant BL35p Front Panel..................................................................................................154
4-2 HP ProLiant BL35p Internal Components..................................................................................155
4-3 Removing the HP ProLiant BL35p from the Enclosure Sleeve...................................................158
4-4 ProLiant BL45p Front Panel........................................................................................................159
4-5 HP ProLiant BL45p Rear Panel Components..............................................................................160
4-6 HP ProLiant BL45p Primary System Board................................................................................160
4-7 HP ProLiant BL45p Secondary System Board.............................................................................162
4-8 Removing the HP ProLiant BL45p from the Rack Enclosure......................................................164
4-9 HP BladeSystem c–Class Enclosure Front View.........................................................................166
4-10 HP BladeSystem c–Class Enclosure Rear View...........................................................................167
4-11 HP BladeSystem c–Class Enclosure Device Bay Numbering (Full-Height Device Bays)...........168
4-12 HP c–Class BladeSystem Enclosure Device Bay Numbering (Half-Height Device Bays)..........168
4-13 HP c–Class BladeSystem Enclosure Example.............................................................................169
4-14 HP c–Class BladeSystem Module Bay Numbering.....................................................................170
4-15 HP c–Class BladeSystem Module Bay Numbering Descriptions................................................170
4-16 Prepare the Bay to Install the 4x DDR IB Switch Module...........................................................173
4-17 Install the 4x DDR IB Switch Module..........................................................................................174
4-18 HP ProLiant BL2x220c G5 Front View........................................................................................174
4-19 HP ProLiant BL260c Front View.................................................................................................175
4-20 HP ProLiant BL460c Front View.................................................................................................177
4-21 HP ProLiant BL460c Front Panel LEDs.......................................................................................177
4-22 HP ProLiant BL460c Internal View..............................................................................................178
4-23 HP ProLiant BL460c System Board Components........................................................................179
4-24 Removing a ProLiant BL460c from the c–Class Enclosure..........................................................182
4-25 HP ProLiant BL480c Front View.................................................................................................184
4-26 HP ProLiant BL480c Front Panel LEDs.......................................................................................184
4-27 HP ProLiant BL480c Internal View..............................................................................................186
4-28 HP ProLiant BL480c System Board Components........................................................................187
4-29 Removing a ProLiant BL480c from the c–Class Enclosure..........................................................189
4-30 HP ProLiant BL465c Front View.................................................................................................192

4-31 HP ProLiant BL465c Front Panel LEDs.......................................................................................192
4-32 HP ProLiant BL465c Internal View..............................................................................................193
4-33 HP ProLiant BL465c System Board Components........................................................................194
4-34 HP ProLiant BL465c G5 Front View............................................................................................196
4-35 HP ProLiant BL685c Front View.................................................................................................197
4-36 HP ProLiant BL685c Front Panel LEDs.......................................................................................198
4-37 HP ProLiant BL685c Internal View..............................................................................................199
4-38 HP ProLiant BL685c System Board Components........................................................................200
4-39 HP ProLiant BL685c G5 Front View............................................................................................202
4-40 HP ProLiant BL860c Front View.................................................................................................204
4-41 HP Integrity BL860c LEDs...........................................................................................................205
4-42 HP Integrity BL860c Internal View..............................................................................................206
5-1 HP xw8200 Workstation Front Panel...........................................................................................209
5-2 HP xw8200 Workstation Rear Panel............................................................................................210
5-3 PCI Retainer.................................................................................................................................211
5-4 PCI Levers....................................................................................................................................212
5-5 PCI Express Levers......................................................................................................................212
5-6 Installing a PCI Card in the HP xw8200 Workstation.................................................................213
5-7 Installing a PCI Express Card in the HP xw8200 Workstation...................................................213
5-8 HP xw8400 Workstation Front Panel...........................................................................................215
5-9 HP xw8400 Workstation Rear Panel............................................................................................216
5-10 xw8400 PCI Slots..........................................................................................................................217
5-11 NVIDIA Quadro FX 4500 Graphics Card....................................................................................219
5-12 NVIDIA Quadro G-Sync Card....................................................................................................220
5-13 NVIDIA Quadro FX 5500 Graphics Card....................................................................................220
5-14 PCI Retainer.................................................................................................................................221
5-15 PCI Retention Clamp...................................................................................................................222
5-16 Installing a PCI or PCI Express Card in the HP xw8400 Workstation........................................222
5-17 HP xw9300 Workstation Rear Panel............................................................................................225
5-18 HP xw9300 Slot Numbering........................................................................................................226
5-19 Typical xw9300 Slot Configuration.............................................................................................228
5-20 Removing PCI Card Holders.......................................................................................................231
5-21 Installing PCI Card Holders........................................................................................................232
5-22 Removing PCI Express Cards......................................................................................................232
5-23 Installing PCI Express Cards.......................................................................................................233
5-24 Removing PCI or PCI-X Cards...................................................................................233
5-25 Installing PCI or PCI-X Cards......................................................................................................234
5-26 HP xw9400 Workstation Rear View............................................................................................236
5-27 HP xw9400 Slot Numbering........................................................................................................237
5-28 NVIDIA Quadro FX 3500 Graphics Card....................................................................................239
5-29 NVIDIA Quadro FX 4500 Graphics Card....................................................................................240
5-30 NVIDIA Quadro G-Sync Card....................................................................................................241

List of Tables
1 HP Cluster Platform Supported Servers.......................................................................................14
1-1 HP Integrity rx1620 Front Panel....................................................................................................25
1-2 HP Integrity rx1620 Rear Panel Features......................................................................................26
1-3 HP Integrity rx2600 Ports Used in Clusters..................................................................................32
1-4 Memory Slot Loading Order.........................................................................................................33
1-5 HP Integrity rx2620 Features.........................................................................................................36
1-6 Quick Reference for HP Integrity rx2620 Connections.................................................................38
1-7 HP Integrity rx2620 PCI Slot Assignments...................................................................................39
1-8 HP Integrity rx2660 Features.........................................................................................................39
1-9 HP Integrity rx2660 PCI Express Slot Assignments......................................................................42
1-10 HP Integrity rx2660 PCI Mixed Slot Assignments........................................................................42
1-11 HP Integrity rx3600 Ports Used in Clusters..................................................................................46
1-12 HP Integrity rx3600 PCI Slot Assignments.....................................................................47
1-13 HP Integrity rx4640 Ports Used in Clusters..................................................................................51
1-14 HP Integrity rx4640 PCI Slot Assignments.....................................................................52
2-1 HP ProLiant DL140 G2 Features...................................................................................................55
2-2 HP ProLiant DL140 G2 Front Panel Features................................................................................56
2-3 HP ProLiant DL140 G2 Rear Panel Features.................................................................................56
2-4 HP ProLiant DL140 G2 PCI Slot Assignments..............................................................................57
2-5 HP ProLiant DL140 G2 Memory Module Sequence.....................................................................57
2-6 HP ProLiant DL140 G2 PCI Slots..................................................................................................57
2-7 HP ProLiant DL140 G3 Features...................................................................................................59
2-8 HP ProLiant DL140 G3 Front Panel Features................................................................................60
2-9 HP ProLiant DL140 G3 PCI Express Slot Assignments................................................................61
2-10 HP ProLiant DL140 G3 PCI-X Slot Assignments..........................................................................61
2-11 HP ProLiant DL140 G3 Graphics Card PCI Express Slot Assignments (SVA).............................61
2-12 HP ProLiant DL140 G3 Memory Module Sequence.....................................................................61
2-13 ProLiant DL360 G3, G4 and G4p Model Comparison..................................................................64
2-14 HP ProLiant DL360 G4 and DL360 G4p PCI Slot Assignments...................................................68
2-15 HP ProLiant DL360 G5 Features...................................................................................................73
2-16 ProLiant DL360 G5 Front Panel LEDs...........................................................................................76
2-17 ProLiant DL360 G5 Rear Panel LEDs and Buttons........................................................................77
2-18 System Insight Display LEDs........................................................................................................79
2-19 System Insight Display LED and Internal Health LED Combinations.........................................80
2-20 HP ProLiant DL360 G5 PCI Slot Assignments..............................................................................80
2-21 ProLiant DL360 G3 and G4 Model Comparison............................................................83
2-22 HP ProLiant DL380 PCI Slot Assignments...................................................................................85
2-23 HP ProLiant DL380 G5 Features...................................................................................................89
2-24 HP ProLiant DL380 G5 Front Panel LEDs.....................................................................................92
2-25 HP ProLiant DL380 G5 Rear LEDs................................................................................................93
2-26 Systems Insight Display LEDs Status............................................................................................94
2-27 HP ProLiant DL380 G5 PCI Express Slot Assignments................................................................95
2-28 HP ProLiant DL380 G5 Mixed PCI Express/PCI-X Slot Assignments..........................................95
2-29 HP ProLiant DL380 G5 PCI Express Slot Assignments (as of March 17, 2008)............................95
3-1 ProLiant DL145 G1 and G2 Comparison......................................................................................99
3-2 HP ProLiant DL145 G2 PCI Slot Assignments............................................................................102
3-3 HP ProLiant DL145 G2 PCI Slots................................................................................................107
3-4 HP ProLiant DL145 G3 Specifications.........................................................................................110
3-5 DL145 G3 System Board Expansion Slot Descriptions................................................................116
3-6 HP ProLiant DL145 G3 PCI Slot Assignments............................................................................117
3-7 HP ProLiant DL385 G1 Rear Panel Ports.....................................................................................125
3-8 HP ProLiant DL385 PCI Slot Assignments..................................................................................126
12 List of Tables
3-9 HP ProLiant DL385 G2 Features.................................................................................................129
3-10 HP ProLiant DL385 G2 PCI Express Slot Assignments..............................................................132
3-11 HP ProLiant DL385 G2 PCI Express/PCI-X Slot Assignments....................................................132
3-12 HP ProLiant DL385 G2 PCI-Express Slot Assignments (as of March 17, 2008)........................132
3-13 HP ProLiant DL385 G5 PCI Express Slot Assignments..............................................................136
3-14 Slot Assignments for the HP ProLiant DL585.............................................................................137
4-1 ProLiant BL45p Characteristics...................................................................................................158
4-2 HP BladeSystem c–Class Enclosure Features..............................................................................165
4-3 HP BladeSystem c-7000 Interconnect Module Bay to Server Blade Type Port Mapping............171
4-4 HP Cluster Platform Server Blade Configurations to InfiniBand Interconnect Module Types (Bandwidth Ratios)......................................................................................173
4-5 HP ProLiant BL460c Features......................................................................................................176
4-6 HP ProLiant BL480c Features.......................................................................................183
4-7 HP ProLiant BL480c Front Panel LEDs.......................................................................................184
4-8 HP ProLiant BL465c Server Blade Features.................................................................................190
4-9 HP ProLiant BL685c Server Blade Features..................................................................197
4-10 HP ProLiant BL860c Server Blade Features..................................................................203
5-1 HP Workstation xw8200 features................................................................................................208
5-2 HP xw8200 Workstation PCI Slots...............................................................................................211
5-3 HP Workstation xw8400 Features...............................................................................................214
5-4 HP xw8400 Workstation PCI Slots...............................................................................................217
5-5 xw8400 PCI Slot Rules.................................................................................................................218
5-6 HP Workstation xw9300 Specifications.......................................................................................223
5-7 HP xw9300 Workstation PCI Slots...............................................................................................226
5-8 Narrow (1-Slot) Graphics Cards..................................................................................................227
5-9 Wide (2-Slot) Graphics Cards......................................................................................................227
5-10 Supported Interconnect Cards.....................................................................................................230
5-11 Supported Memory Configurations............................................................................................230
5-12 HP Workstation xw9400 Features...............................................................................................235
5-13 HP xw9400 Workstation PCI Slots...............................................................................................237
5-14 Graphics Cards............................................................................................................................237

About This Manual
This manual presents an overview of the servers and workstations used in HP Cluster Platform
solutions. This manual references the original component documentation for detailed information,
except where information or procedures for the cluster differ from the standalone components.
In these cases, this manual supersedes information supplied in existing component documentation.
HP Cluster Platform products support the servers listed in Table 1.
Table 1 HP Cluster Platform Supported Servers
Processor           Server

Intel® Itanium®     • HP Integrity rx1620
                    • HP Integrity rx2600
                    • HP Integrity rx2620
                    • HP Integrity rx2660
                    • HP Integrity rx3600
                    • HP Integrity rx4640
                    • HP Server Blade BL860c

Intel® Xeon™        • HP ProLiant DL140 G1, G2 and G3
                    • HP ProLiant DL160 G5 and G5p
                    • HP ProLiant DL360 G1 through G4, G4p and G5
                    • HP ProLiant DL380 G3, G4 and G5
                    • HP ProLiant BL2x220c G5 Server Blade
                    • HP ProLiant BL260c Server Blade
                    • HP ProLiant BL460c Server Blade
                    • HP ProLiant BL460c G5 Server Blade
                    • HP ProLiant BL480c Server Blade
                    • HP ProLiant BL680c G5 Server Blade
                    • HP Workstations xw8200 and xw8400

AMD Opteron™        • HP ProLiant DL145 G1, G2 and G3
                    • HP ProLiant DL165 G5
                    • HP ProLiant DL385
                    • HP ProLiant DL385 G5 and G5p
                    • HP ProLiant DL585 G1 and G2
                    • HP ProLiant DL585 G5
                    • HP ProLiant BL35p
                    • HP ProLiant BL45p
                    • HP ProLiant BL465c Server Blade
                    • HP ProLiant BL465c G5 Server Blade
                    • HP ProLiant BL685c Server Blade
                    • HP ProLiant BL685c G5 Server Blade
                    • HP Workstation xw9300
                    • HP Workstation xw9400

This manual does not describe the procedures and tools that are required to install and configure
the system hardware or software. It does, however, contain references for cluster components
other than the servers, each of which has its own documentation.

Audience
This manual is intended for experienced hardware system administrators of large-scale computer
systems, and for HP Global Service representatives. This guide references skilled tasks and
describes important safety considerations; it is not intended as a training aid for untrained
personnel.

The information in this manual assumes that the reader:
• Is familiar with HP rack-mounted servers and associated rack hardware.
• Is familiar with basic networking concepts, network switch technology, and network cables.
• Is familiar with the theory and implementation of the high-speed system interconnect
technology used to create clusters.
• Has read the HP Cluster Platform Overview and is familiar with HP Cluster Platform
architecture and concepts.

Organization
This manual is organized as follows:

Chapter      Description

Chapter 1    Describes the Itanium processor servers used in HP Cluster Platform solutions.

Chapter 2    Describes the Xeon processor servers used in HP Cluster Platform solutions.

Chapter 3    Describes the Opteron processor servers used in HP Cluster Platform solutions.

Chapter 4    Describes the HP ProLiant server blades supported in HP Cluster Platform solutions.

Chapter 5    Describes the workstations used in HP Cluster Platform solutions.

HP Cluster Platform Documentation


HP Cluster Platform documentation is available from the following HP website:
http://docs.hp.com/en/highperfcomp.html
Printed and bound books are not available.
The typical HP Cluster Platform documentation set contains both cross-platform and
platform-specific documents, as well as supplementary documents available at the time of release.

Cross-platform documents
The following items are cross-platform documents:
• Cluster Platform Customer Letter
• Cluster Platform Overview
• Cluster Platform Site Preparation Guide
• Cluster Platform Core Components
• Cluster Platform Server and Workstation Overview

Platform-specific documents
The following items are platform-specific documents:
• Platform road map
• One or more system interconnect guides
• A set of cabling tables
• One or more bracket installation guides

Bracket Installation Guides
Some HP Cluster Platform solutions use custom brackets for servers, switches, and cable
management. The following documents can be used, depending on your configuration:

Title Associated Component

XC3000 Cluster Quad Bracket Kit Installation Guide 1U servers

HP XC Interconnect xx3020 Cabinet Kit Installation Guide Myricom interconnects

HP ProCurve Rail Kit Installation Guide 1U Procurve switch

HP XC6000 Cable Management Basket Installation Guide Quadrics interconnects

Integrity rx2600 Cable Management Tray Installation Guide Integrity rx2600


QsNet II Management Kit Installation Guide Quadrics interconnects

Myrinet Interconnect Rack Kit Installation Guide Myricom interconnects

HP Cluster Platform Generic Cable Management Bracket Installation Guide ProLiant DL360, DL145 G1, DL380, DL585

HP Integrity rx4640 Cable Bracket Installation Guide Integrity rx4640

ProLiant DL380 G4 Cable Management Bracket Installation Guide ProLiant DL380 G4, and G5

Tab Mount Cable Management Bracket Installation Guide ProLiant DL145

c–Class Blade Cable Management Bracket Installation Guide c-Class Server Blade Enclosure

DL16x Cable Management Bracket Installation Guide ProLiant DL160 G5 and DL165 G5

Additional Documentation
For more information about HP ProLiant servers used in HP Cluster Platform configurations,
see the following table. Also refer to the documentation that shipped with your cluster.

Server Model and Document                   Web Location

HP ProLiant DL140 G2 and G3

HP ProLiant DL140 G3 Server Maintenance and Service Guide
    http://h20000.www2.hp.com/bc/docs/support/SupportManual/c00795598/c00795598.pdf

HP ProLiant DL140 G2 Server Maintenance and Service Guide
    http://h20000.www2.hp.com/bc/docs/support/SupportManual/c00368751/c00368751.pdf

HP ProLiant 100 Series Servers User Guide
    http://h20000.www2.hp.com/bc/docs/support/SupportManual/c00368941/c00368941.pdf

HP ProLiant DL145 G1

HP ProLiant DL145 Server User Guide
    http://h20000.www2.hp.com/bc/docs/support/SupportManual/c00368941/c00368941.pdf

HP ProLiant DL145 G2

HP ProLiant DL145 Generation 2 Server Maintenance and Service Guide
    http://h20000.www2.hp.com/bc/docs/support/SupportManual/c00368711/c00368711.pdf

HP ProLiant DL145 Generation 2 Server Installation Sheet
    http://h20000.www2.hp.com/bc/docs/support/SupportManual/c00368726/c00368726.pdf

HP ProLiant DL145 G3

HP ProLiant DL145 Generation 3 Server Maintenance and Service Guide
    http://h20000.www2.hp.com/bc/docs/support/SupportManual/c00856243/c00856243.pdf

HP ProLiant DL160 G5

ProLiant DL160 Generation 5 Server Maintenance and Service Guide
    http://bizsupport.austin.hp.com/bc/docs/support/SupportManual/c01325420/c01325420.pdf

ProLiant DL160 Generation 5 Server Installation Sheet
    http://bizsupport.austin.hp.com/bc/docs/support/SupportManual/c01403321/c01403321.pdf

HP ProLiant DL160 G5p

ProLiant DL160 Generation 5p Server Maintenance and Service Guide
    http://bizsupport1.austin.hp.com/bc/docs/support/SupportManual/c01555951/c01555951.pdf

ProLiant DL160 Generation 5p Server Installation Sheet
    http://bizsupport1.austin.hp.com/bc/docs/support/SupportManual/c01555955/c01555955.pdf

HP ProLiant DL165 G5

ProLiant DL165 Generation 5 Server Maintenance and Service Guide
    http://bizsupport.austin.hp.com/bc/docs/support/SupportManual/c01384378/c01384378.pdf

ProLiant DL165 Generation 5 Server Installation Sheet
    http://bizsupport.austin.hp.com/bc/docs/support/SupportManual/c01384382/c01384382.pdf

HP ProLiant DL360 G3

ProLiant DL360 Generation 3 Server Setup and Installation Guide
    http://h18004.www1.hp.com/products/servers/platforms/retired.html

ProLiant DL360 Generation 3 Server Maintenance and Service Guide
    http://h18004.www1.hp.com/products/servers/platforms/retired.html

HP ProLiant DL360 G4

ProLiant DL360 Generation 4 Server Maintenance and Service Guide
    http://h18004.www1.hp.com/products/servers/platforms/retired.html

ProLiant DL360 Generation 4 Server Reference and Troubleshooting Guide
    http://h18004.www1.hp.com/products/servers/platforms/retired.html

ProLiant DL360 Generation 4 SCSI Cabling Matrix
    http://h18004.www1.hp.com/products/servers/platforms/retired.html

HP ProLiant DL360 G4p

ProLiant DL360 Generation 4p Server Maintenance and Service Guide
    http://h20000.www2.hp.com/bc/docs/support/SupportManual/c00293423/c00293423.pdf

HP ProLiant DL360 Generation 4p User Guide
    http://h20000.www2.hp.com/bc/docs/support/SupportManual/c00292390/c00292390.pdf

HP ProLiant DL360 G4 and G4p Server High-Density Deployment Solution White Paper
    http://h20000.www2.hp.com/bc/docs/support/SupportManual/c00365804/c00365804.pdf

HP ProLiant DL360 G5

ProLiant DL360 Generation 5 Server Maintenance and Service Guide
    http://h20000.www2.hp.com/bc/docs/support/SupportManual/c00710376/c00710376.pdf

HP ProLiant DL380 G3

ProLiant DL380 Generation 3 Server User Guide
    http://h18004.www1.hp.com/products/servers/platforms/retired.html

ProLiant DL380 Generation 3 Server Maintenance and Service Guide
    http://h18004.www1.hp.com/products/servers/platforms/retired.html

HP ProLiant DL380 G4

ProLiant DL380 Generation 4 Server Maintenance and Service Guide
    http://h20000.www2.hp.com/bc/docs/support/SupportManual/c00687567/c00687567.pdf

ProLiant DL380 Generation 4 Server Reference and Troubleshooting Guide
    http://h20000.www2.hp.com/bc/docs/support/SupportManual/c00300504/c00300504.pdf

ProLiant DL380 Generation 4 SCSI Cabling Matrix
    http://h20000.www2.hp.com/bc/docs/support/SupportManual/c00218257/c00218257.pdf

HP ProLiant DL380 G5

ProLiant DL380 Generation 5 Server Maintenance and Service Guide
    http://h20000.www2.hp.com/bc/docs/support/SupportManual/c00710359/c00710359.pdf

HP ProLiant DL380 Generation 5 Server User Guide
    http://h20000.www2.hp.com/bc/docs/support/SupportManual/c00710263/c00710263.pdf

HP ProLiant DL385 G1

ProLiant DL385 Server User Guide
    http://h10010.www1.hp.com/wwpc/us/en/ss/WF04a/15351-241434-241475-241475-f79.html

ProLiant DL385 Server Maintenance and Service Guide
    http://h10010.www1.hp.com/wwpc/us/en/ss/WF04a/15351-241434-241475-241475-f79.html

HP ProLiant DL385 G2

ProLiant DL385 G2 Server User Guide
    http://h20000.www2.hp.com/bc/docs/support/SupportManual/c00778878/c00778878.pdf

ProLiant DL385 G2 Server Maintenance and Service Guide
    http://h20000.www2.hp.com/bc/docs/support/SupportManual/c00778876/c00778876.pdf

HP ProLiant DL385 G5

ProLiant DL385 G5 Server User Guide
    http://bizsupport.austin.hp.com/bc/docs/support/SupportManual/c01306551/c01306551.pdf

ProLiant DL385 G5 Server Maintenance and Service Guide
    http://bizsupport.austin.hp.com/bc/docs/support/SupportManual/c01302275/c01302275.pdf

HP ProLiant DL385 G5p

ProLiant DL385 G5p Server User Guide
    http://bizsupport2.austin.hp.com/bc/docs/support/SupportManual/c01609990/c01609990.pdf

ProLiant DL385 G5p Server Maintenance and Service Guide
    http://bizsupport2.austin.hp.com/bc/docs/support/SupportManual/c01609666/c01609666.pdf

HP ProLiant DL585 G1

HP ProLiant DL585 Server User Guide
    http://h20000.www2.hp.com/bc/docs/support/SupportManual/c00083255/c00083255.pdf

HP ProLiant DL585 Server Maintenance and Service Guide
    http://h20000.www2.hp.com/bc/docs/support/SupportManual/c00210164/c00210164.pdf

HP ProLiant DL585 G2

HP ProLiant DL585 G2 Server User Guide
    http://h20000.www2.hp.com/bc/docs/support/SupportManual/c00778940/c00778940.pdf

HP ProLiant DL585 G2 Server Maintenance and Service Guide
    http://h20000.www2.hp.com/bc/docs/support/SupportManual/c00778937/c00778937.pdf

HP ProLiant DL585 G5

HP ProLiant DL585 G5 Server User Guide
    http://bizsupport.austin.hp.com/bc/docs/support/SupportManual/c01384234/c01384234.pdf

HP ProLiant DL585 G5 Server Maintenance and Service Guide
    http://h20000.www2.hp.com/bc/docs/support/SupportManual/c01384234/c01384234.pdf

HP p-Class BladeSystems

HP ProLiant BL35p Server Blade Maintenance and Service Guide
    http://h20000.www2.hp.com/bc/docs/support/SupportManual/c00293017/c00293017.pdf

HP ProLiant BL45p Server Blade Maintenance and Service Guide
    http://h20000.www2.hp.com/bc/docs/support/SupportManual/c00366047/c00366047.pdf

HP BladeSystem p-Class Enclosure

HP BladeSystem p-Class Maintenance and Service Guides
    http://h20000.www2.hp.com/bc/docs/support/SupportManual/c00065042/c00065042.pdf

HP BladeSystem c-Class Xeon Server Blades

HP ProLiant BL2x220c Generation 5 Server Blade User Guide
    http://bizsupport.austin.hp.com/bc/docs/support/SupportManual/c01462829/c01462829.pdf

HP ProLiant BL2x220c Generation 5 Server Blade Maintenance and Service Guide
    http://bizsupport.austin.hp.com/bc/docs/support/SupportManual/c01462866/c01462866.pdf

HP ProLiant BL260c Generation 5 Server Blade User Guide
    http://h20000.www2.hp.com/bc/docs/support/SupportManual/c01416681/c01416681.pdf

HP ProLiant BL260c Generation 5 Server Blade Maintenance and Service Guide
    http://bizsupport.austin.hp.com/bc/docs/support/SupportManual/c01416733/c01416733.pdf

HP ProLiant BL460c Server Blade User Guide (includes the HP ProLiant BL460c G5)
    http://h20000.www2.hp.com/bc/docs/support/SupportManual/c00700767/c00700767.pdf

HP ProLiant BL460c Server Blade Maintenance and Service Guide (includes the HP ProLiant BL460c G5)
    http://h20000.www2.hp.com/bc/docs/support/SupportManual/c00718709/c00718709.pdf

HP ProLiant BL480c Server Blade User Guide
    http://h20000.www2.hp.com/bc/docs/support/SupportManual/c00699573/c00699573.pdf

HP ProLiant BL480c Server Blade Maintenance and Service Guide
    http://h20000.www2.hp.com/bc/docs/support/SupportManual/c00718745/c00718745.pdf

HP c-Class BladeSystems Opteron Server Blades

HP ProLiant BL465c Server Blade User Guide
    http://h20000.www2.hp.com/bc/docs/support/SupportManual/c00778728/c00778728.pdf

HP ProLiant BL465c Server Blade Maintenance and Service Guide
    http://h20000.www2.hp.com/bc/docs/support/SupportManual/c00778741/c00778741.pdf

HP ProLiant BL465c G5 Server Blade User Guide
    http://h20000.www2.hp.com/bc/docs/support/SupportManual/c01184477/c01184477.pdf

HP ProLiant BL465c G5 Server Blade Maintenance and Service Guide
    http://bizsupport.austin.hp.com/bc/docs/support/SupportManual/c00778741/c00778741.pdf

HP ProLiant BL685c Server Blade User Guide
    http://h20000.www2.hp.com/bc/docs/support/SupportManual/c00805059/c00805059.pdf

HP ProLiant BL685c Server Blade Maintenance and Service Guide
    http://h20000.www2.hp.com/bc/docs/support/SupportManual/c00805082/c00805082.pdf

HP ProLiant BL685c G5 Server Blade User Guide
    http://bizsupport.austin.hp.com/bc/docs/support/SupportManual/c00805059/c00805059.pdf

HP ProLiant BL685c G5 Server Blade Maintenance and Service Guide
    http://bizsupport.austin.hp.com/bc/docs/support/SupportManual/c00805082/c00805082.pdf

HP c-Class BladeSystems Itanium Server Blades

HP Integrity BL860c Server Blade QuickSpecs
    http://h18004.www1.hp.com/products/quickspecs/12671_na/12671_na.pdf

HP BladeSystem c-Class Enclosure

HP BladeSystem c-Class Maintenance and Service Guides
    http://h20000.www2.hp.com/bc/docs/support/SupportManual/c00714237/c00714237.pdf

HP BladeSystem c7000 Enclosure Setup and Installation Guide
    http://h20000.www2.hp.com/bc/docs/support/SupportManual/c00698286/c00698286.pdf

HP BladeSystem c-Class Site Planning Guide
    http://h20000.www2.hp.com/bc/docs/support/SupportManual/c01038153/c01038153.pdf?jumpid=reg_R1002_USEN

HP BladeSystem c-Class Firmware & Upgrades
    http://h18004.www1.hp.com/products/blades/components/c-class-compmatrix.html

Additional ProLiant Documents

The Intel® processor roadmap for industry-standard servers
    http://bizsupport.austin.hp.com/bc/docs/support/SupportManual/c00164255/c00164255.pdf

Technologies for HP ProLiant 100-series servers technology brief
    http://bizsupport.austin.hp.com/bc/docs/support/SupportManual/c01362433/c01362433.pdf

Optimizing facility operation in high density data center environments
    http://bizsupport.austin.hp.com/bc/docs/support/SupportManual/c00064724/c00064724.pdf

Critical factors in intra-rack power distribution planning for high-density systems
    http://bizsupport.austin.hp.com/bc/docs/support/SupportManual/c01034757/c01034757.pdf

Data center cooling strategies
    http://bizsupport.austin.hp.com/bc/docs/support/SupportManual/c01153741/c01153741.pdf

Disk drive technology overview
    http://bizsupport.austin.hp.com/bc/docs/support/SupportManual/c01071496/c01071496.pdf

Fully-Buffered DIMM technology in HP ProLiant servers
    http://bizsupport.austin.hp.com/bc/docs/support/SupportManual/c00913926/c00913926.pdf

HP ProLiant Servers Troubleshooting Guide
    http://h20000.www2.hp.com/bc/docs/support/SupportManual/c00300504/c00300504.pdf

HP BIOS Serial Console User Guide
    http://h20000.www2.hp.com/bc/docs/support/SupportManual/c00440332/c00440332.pdf

HP Integrated Lights-Out User Guide for HP Integrated Lights-Out firmware 1.91
    http://h20000.www2.hp.com/bc/docs/support/SupportManual/c00209014/c00209014.pdf

HP Integrated Lights-Out 2 User Guide for Firmware 1.35
    http://h20000.www2.hp.com/bc/docs/support/SupportManual/c00553302/c00553302.pdf

Planning and configuration recommendations for Integrated Lights-Out processors
    http://h20000.www2.hp.com/bc/docs/support/SupportManual/c00257375/c00257375.pdf

HP Integrated Lights-Out Management Processor Scripting and Command Line Resource Guide for HP Integrated Lights-Out versions 1.82 and 1.91 and HP Integrated Lights-Out 2 versions 1.1x, 1.2x, and 1.30
    http://h20000.www2.hp.com/bc/docs/support/SupportManual/c00294268/c00294268.pdf

HP Integrated Lights-Out Addendum
    http://h20000.www2.hp.com/bc/docs/support/SupportManual/c01380081/c01380081.pdf

Remote Insight Lights-Out Edition II User Guide
    http://h20000.www2.hp.com/bc/docs/support/SupportManual/c00211914/c00211914.pdf

Integrated Lights-Out Security Technology Brief, Fifth Edition
    http://h20000.www2.hp.com/bc/docs/support/SupportManual/c00212796/c00212796.pdf

ROM-Based Setup Utility User Guide
    http://h20000.www2.hp.com/bc/docs/support/SupportManual/c00191707/c00191707.pdf

HP BladeSystem Onboard Administrator User Guide
    http://h20000.www2.hp.com/bc/docs/support/SupportManual/c00705292/c00705292.pdf

For more information about HP Integrity servers used in HP Cluster Platform configurations,
see the following table. Also refer to the documentation that shipped with your cluster.

Server Model and Document                   Web Location

HP Integrity rx1620

Overview of the HP Integrity rx1620-2, rx2620-2, and rx4640-8 Servers
    http://h71028.www7.hp.com/ERC/downloads/5982-9835EN.pdf

HP Integrity rx2600

HP Integrity Server rx2600 and HP Workstation zx6000 - Getting Started Guide (English A9664-90030)
    http://docs.hp.com/en/A9664-90020/A9664-90020.pdf

HP Integrity rx2600 Server and HP Workstation zx6000 - Operations and Maintenance Guide, 2nd Edition (5969-3163)
    http://docs.hp.com/en/5969-3163/5969-3163.pdf

HP Integrity rx2600 and HP Integrity rx5670 Management Processor Card Firmware Upgrade Product Update
    http://docs.hp.com/en/rx2600rx56xx_update/rx2600rx56xx_update.pdf

HP Integrity rx2620

HP Integrity rx2620 Installation Guide
    http://docs.hp.com/en/AB331-90005-en/AB331-90005-en.pdf

HP Integrity rx2620 Maintenance Guide
    http://docs.hp.com/en/AB331-90007/AB331-90007.pdf

HP Integrity rx2620 Operations Guide
    http://docs.hp.com/en/AB331-90008/AB331-90008.pdf

HP Integrity rx2620 Single-Core to Dual-Core Processor Upgrade
    http://docs.fc.hp.com/en/AD117-9009A/AD117-9009A.pdf

HP Integrity rx2660

HP Integrity rx2660 Installation Guide
    http://docs.fc.hp.com/en/AB419-9000B/AB419-9000B.pdf

HP Integrity rx2660 Site Prep Guide
    http://docs.fc.hp.com/en/AB419-9004B/AB419-9004B.pdf

HP Integrity rx2660 User Service Guide
    http://docs.fc.hp.com/en/AB419-9002B/AB419-9002B.pdf

HP Integrity Integrated Lights-Out 2 Management Processor (iLO 2 MP) Operations Guide for the HP Integrity BL860c, rx2660, rx3600, and rx6600
    http://docs.fc.hp.com/en/AD217-9001A/AD217-9001A.pdf

HP Integrity rx3600

HP Integrity rx3600 overview
    http://h20341.www2.hp.com/integrity/cache/387513-0-0-225-121.html

HP Integrity rx4640

HP Integrity rx4640 Installation Guide
    http://docs.hp.com/en/A6961-96008_en/A6961-96008_en.pdf

HP Integrity rx4640 Maintenance Guide
    http://docs.hp.com/en/rx4640_maint/rx4640_maint.pdf

HP Integrity rx4640 Operations Guide
    http://docs.hp.com/en/rx4640_ops/rx4640_ops.pdf

HP Integrity rx4640 Server Upgrade Guide, Second Edition
    http://docs.fc.hp.com/en/A6961-96018/A6961-96018.pdf

For more information about HP Workstations used in HP Cluster Platform configurations, see
the following table. Also refer to the documentation that shipped with your cluster.

Workstation Model and Document              Web Location

All models

Getting Started Guide HP Workstations
    http://h200002.www2.hp.com/bc/docs/support/SupportManual/c00206613/c00206613.pdf

HP Workstations User Manual for Linux: A collection of installation, configuration and setup papers
    http://h20000.www2.hp.com/bc/docs/support/SupportManual/c00063015/c00063015.pdf

Front Card Guide and Fan Kit Installation HP Workstation xw Series
    http://h200002.www2.hp.com/bc/docs/support/SupportManual/c00211010/c00211010.pdf

Fixed Rack Kit Installation for HP Workstations
    http://h200002.www2.hp.com/bc/docs/support/SupportManual/c00211837/c00211837.pdf

HP xw8200

HP Workstation xw8200 Service and Technical Reference Guide
    http://h200002.www2.hp.com/bc/docs/support/SupportManual/c00213033/c00213033.pdf

HP xw8400

HP Workstation xw8400 Service and Technical Reference Guide
    http://h20000.www2.hp.com/bc/docs/support/SupportManual/c00680846/c00680846.pdf

HP xw9300

HP Workstation xw9300 Service and Technical Reference Guide
    http://h20000.www2.hp.com/bc/docs/support/SupportManual/c00293075/c00293075.pdf

HP xw9400

HP Workstation xw9400 Service and Technical Reference Guide
    http://h20000.www2.hp.com/bc/docs/support/SupportManual/c00774787/c00774787.pdf

For more information about the HP XC Software that was designed to run on HP Cluster Platform
solutions, go to the following HP website:
http://www.hp.com/techservers/clusters/xc_clusters.html
If you cannot locate a specific HP Cluster Platform or HP XC Software document, try using the
search feature available from the following websites:
http://docs.hp.com/
http://www.hp.com/
If you still cannot locate a specific document, contact your HP representative for assistance.

HP Encourages Your Comments


HP encourages your comments concerning this document. We are committed to providing
documentation that meets your needs. Send any errors found, suggestions for improvement, or
compliments to docsfeedback@hp.com.
Include the document title, manufacturing part number, and any comment, error found, or
suggestion for improvement you have concerning this document.

22
Important Safety Information
This manual provides only an overview of the procedures for removing servers from a cluster
rack and for installing PCI cards. Before performing such procedures, read the safety information
provided in the following documents:
• HP Cluster Platform Site Preparation Guide – The servers are installed in HP 10000-series racks.
Read and follow the rack safety information before starting any server maintenance operation
such as removing a server from a rack or opening a server chassis. Pay particular attention
to the rack power distribution information.
• Server Documentation – Your HP Cluster Platform ships with a full documentation set for
each component, including all server models used in the cluster. Read and follow the server
safety information before starting any server maintenance operation, such as removing a
server from a rack or opening a server chassis. Pay particular attention to warning labels
and icons.
• Cable Management Documentation – Servers used as nodes in HP Cluster Platform do not
always use the rack-mount kits and cable management systems that are standard for the
server. HP Cluster Platform solutions use special rack-mount and cable management systems
that are designed to manage the many heavy cables used by some system interconnect
models. Refer to the appropriate cable management guide for each server model.
Also consider the following general warnings before working on an HP Cluster Platform
component:

WARNING: The front panel Power On/Off switch does not shut off all system power
completely. Portions of the power supply and some internal circuitry remain active until
AC power is removed.

WARNING: To reduce the risk of personal injury or damage to the equipment, be sure that
only one component is extended at a time. A rack may become unstable if more than one
component is extended at a time for any reason.

WARNING: To reduce the risk of personal injury from hot surfaces, allow the internal
system components to cool before touching them.

WARNING: To reduce the risk of personal injury or damage to the equipment, place the
server on a sturdy table or workbench whenever it is removed from the rack for device
accessibility.

CAUTION: Before removing the server top cover, be sure that the server is powered down
and that the power cord is disconnected from the server or the electrical outlet.


CAUTION: To avoid the risk of damage to the system or expansion boards, remove all power
cords before installing or removing expansion boards. When the Power On/Off switch is in
the Off position, auxiliary power is still connected to the PCI expansion slot and can damage
the card.

CAUTION: Electrostatic discharge can damage electronic components. Be sure you are
properly grounded before beginning any installation procedures.

1 Itanium Processor Servers
This chapter describes the Itanium processor servers in the HP Integrity series that are supported
in HP Cluster Platform solutions. It presents the following information:
• An overview of the HP Integrity rx1620 (Section 1.1)
• An overview of the HP Integrity rx2600 (Section 1.2)
• An overview of the HP Integrity rx2620 (Section 1.3)
• An overview of the HP Integrity rx2660 (Section 1.4)
• An overview of the HP Integrity rx3600 (Section 1.5)
• An overview of the HP Integrity rx4640 (Section 1.6)

1.1 HP Integrity rx1620


The HP Integrity rx1620 is a 1U, dual-processor Itanium 2-based server. It can be used as an application
node, control node, or utility node. It accommodates up to 8 DIMMs and internal peripherals,
including disks and DVD-ROM. It can have up to two low-voltage differential (LVD), 3.5-inch
form factor hard disk drives installed. Optionally, either a DVD or CD-RW/DVD drive can be
added.
Figure 1-1 shows the front panel of the HP Integrity rx1620, and Table 1-1 describes its features.

Figure 1-1 HP Integrity rx1620 Front Panel



The following list describes the callouts shown in Figure 1-1.


1. DVD drive
2. LVD HDD 2
3. Locator button and LED
4. Diagnostic LEDs
5. LVD HDD 1
6. LAN LED
7. System LED
8. Power On/Off button
9. Power On/Off LED
Table 1-1 HP Integrity rx1620 Front Panel
Name Function

DVD Drive Optional removable media drive.

Locator Button and LED The locator button and LED are used to help locate this server within a rack of
servers. When the button is engaged, the blue LED illuminates and an additional
blue LED on the rear panel of the server illuminates. This function may be
remotely activated.



Table 1-1 HP Integrity rx1620 Front Panel (continued)
Name Function

Diagnostic LEDs The four diagnostic LEDs operate in conjunction with the system LED to provide
diagnostic information about the system.

LAN LED The LAN LED provides status information about the LAN interface. When the
LAN LED is flashing, there is activity on the LAN.

System LED The System LED provides system status information. When operation is normal,
the LED is green. When there is a system warning, the LED flashes yellow. When
there is a system fault, the LED flashes red.

Power On/Off LED The green power on/off LED is illuminated when the power is on.

LVD HDD 1 and LVD HDD 2 Low-voltage differential (LVD), 3.5-inch form factor hard disk drives.

Power On/Off button The power on/off switch for the server.

Figure 1-2 shows the rear panel of the HP Integrity rx1620. It includes communication ports, I/O
ports, AC power connector, and the locator LED/button. Table 1-2 describes the rear panel
features.

Figure 1-2 HP Integrity rx1620 Rear Panel



The following list describes the callouts shown in Figure 1-2:


1. LVD/SE SCSI
2. Optional management board connectors — 2a) 10/100 LAN C, 2b) Video, 2c) Serial
3. 10/100/1000 LAN A 1Gb
4. 10/100/1000 LAN B 1Gb
5. MP reset
6. Locator button and LED
7. Serial port
8. USB port
9. PCI slot 1
10. PCI slot 2
Table 1-2 HP Integrity rx1620 Rear Panel Features
Connector/Switch Function

AC Power Primary power connection for the server.

LVD/SE SCSI 68-pin, LVD, single-ended U320 SCSI. This connector provides external SCSI connection
on SCSI Channel B.

10/100/1000 LAN A 10/100/1000 base-T ethernet LAN A connector. Wake-on-LAN and Alert-on-LAN
capabilities.



Table 1-2 HP Integrity rx1620 Rear Panel Features (continued)
Connector/Switch Function

10/100/1000 LAN B 10/100/1000 base-T ethernet LAN B connector. Wake-on-LAN and Alert-on-LAN
capabilities.

Serial 9-pin male serial connector. This is the console connector if the optional management
processor card is not installed.

USB Two universal serial bus (USB 2.0) connectors.

Locator Button and LED The locator button and LED are used to help locate a server within a rack of servers.
When the button is engaged, the blue LED illuminates and an additional blue LED
on the front panel of the server illuminates. This function can be remotely activated.

10/100 LAN C (optional) 10 MB/100 MB LAN C connector for the optional management processor card.

Video (optional) 15-pin female video connector for the optional management processor card.

Serial (optional) 25-pin female serial data bus connector for the optional management processor card.

1.1.1 Network Port Assignments


The HP Integrity rx1620 has three network ports:
• LAN A, a 10/100/1000 base-T Ethernet LAN A Gb port, optionally connected to the local
WAN, enabling remote access.
• LAN B, a 10/100/1000 base-T Ethernet LAN B Gb port, connected to the administrative
network.
• LAN C, a 10/100 Ethernet port for the Management Processor (MP) card. This port is
connected to the cluster's console network switch.
Refer to the cluster's cabling tables for more information.
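For quick reference, the port-to-network mapping above can be sketched as a small lookup table. This is a hypothetical snippet for site inventory scripts, not an HP utility; the port names and network roles are taken from the text above.

```python
# rx1620 network ports and the cluster networks they connect to, as
# described above. Purely illustrative; not part of any HP tool.
RX1620_PORTS = {
    "LAN A": ("10/100/1000", "site WAN (optional, for remote access)"),
    "LAN B": ("10/100/1000", "administrative network"),
    "LAN C": ("10/100", "console network switch, via the MP card"),
}

def network_for(port):
    """Return the cluster network a given rx1620 LAN port connects to."""
    speed, network = RX1620_PORTS[port]
    return network

print(network_for("LAN B"))  # administrative network
```

A lookup like this makes it easy to cross-check the cluster's cabling tables against the per-server port roles.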

1.1.2 Supported Memory Configurations


The HP Cluster Platform does not enforce any memory configuration rules on the HP Integrity
rx1620 nodes other than those associated with the server. For the HP Integrity rx1620 these rules
are:
1. The system has eight memory slots for installing DDR SDRAM memory modules.
2. The system supports a maximum of 16 GB of memory and a minimum of 512 MB.
3. Memory modules can be 256 MB, 512 MB, 1 GB, or 2 GB, and they must be arranged as
ordered pairs of equal size. For example, if you place a 1 GB memory module in slot 0A,
then you must insert a 1 GB memory module in slot 0B.
4. Memory in the rx1620 must be loaded in quads. This means that you must load two memory
cards per cell. For example, using the loading order in rule 3, place DIMMs in slots 0A and
0B of memory cell 0 and in slots 1A and 1B of memory cell 1.
5. DDR SDRAM must be loaded as matched pairs. To determine if the DIMMs are matched
pairs, look at the HP part number on the DIMMs. If the HP part numbers match, then the
DIMMs can be loaded together as pairs.
6. You can mix module sizes in the system, provided that the DIMMs within each pair share
the same HP part number. For example, you can load a pair of 256 MB DIMMs in slots 0A
and 0B, a pair of 1 GB DIMMs in slots 1A and 1B, and pairs of 512 MB DIMMs in slots 2A,
2B, 3A, and 3B.
7. You must install the first DDR SDRAM matched pair in memory cell 0 and in the slots labeled
0A and 0B. Load the second matched pair in memory cell 1 and in the slots labeled 1A and
1B. Continue loading successive matched pairs.
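The pairing rules above lend themselves to a simple validation routine. The following is a minimal sketch of those rules as a hypothetical helper (the part numbers are invented examples; this is not HP tooling):

```python
# Sketch of the rx1620 DIMM rules: sizes come from the supported set,
# each pair must be matched by size and HP part number, and the total
# must fall between the 512 MB minimum and the 16 GB maximum.
SUPPORTED_MB = {256, 512, 1024, 2048}

def valid_memory(pairs):
    """pairs: list of ((size_mb, hp_part), (size_mb, hp_part)) tuples,
    one tuple per slot pair (0A/0B, 1A/1B, ...)."""
    total = 0
    for (size_a, part_a), (size_b, part_b) in pairs:
        if size_a not in SUPPORTED_MB or size_b not in SUPPORTED_MB:
            return False  # unsupported module size
        if (size_a, part_a) != (size_b, part_b):
            return False  # pair is not matched
        total += size_a + size_b
    return 512 <= total <= 16 * 1024

# Mixed sizes are allowed as long as each individual pair is matched:
valid_memory([((1024, "partX"), (1024, "partX")),
              ((256, "partY"), (256, "partY"))])   # True
```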
The HP Integrity rx1620 supports the chip spare feature, enabling the server's error handling to
bypass an entire DDR SDRAM chip on a DIMM if a multibit error is detected. To use the chip
spare feature, you must install only DIMMs built with x4 SDRAM. Load the memory in quads,



that is, two DIMMs per memory cell, loaded in the same location in each memory cell and using
the following upgrade rules:
• Begin with four identical DIMMs, loading them into the slots labeled Load Order 1st and
2nd.
• Add four identical DIMMs (identical to each other, but potentially different from the original
quad), loading them into the slots labeled Load Order 3rd and 4th.
• Finally, add four identical DIMMs (identical to each other, but potentially differing from
the existing two quads), loading them into the slots labeled Load Order 5th and 6th.

1.1.3 Supported Storage Configurations


The HP Integrity rx1620 server supports up to two hot-pluggable low-voltage differential (LVD)
3.5-inch SCSI drives. A node's requirement for local storage varies depending upon the role of
the node. The following drives are supported:
• 36 GB
• 73 GB
• 146 GB

Important:
The power subsystem for racks containing nodes must be designed for nodes that have disk
configurations that draw maximum current.

The physical procedures used to insert and remove a disk drive are discussed in the document
that comes with the drive. However, the operating system must be prepared for insertion or
removal of a disk, or unexpected and harmful effects may occur.

Note:
There is a significant difference between the terms hot-pluggable and hot-swappable. The
hot-swappable process enables you to replace a component in a high-availability system while
it is running without using operating system commands. The hot-pluggable process enables you
to replace a component in a high-availability system while it is running, but a manual software
procedure is necessary to complete the task. The disk drives in the HP Integrity rx1620 are
hot-pluggable, not hot-swappable.

In addition to the two LAN ports and the MP port, the HP Integrity rx1620 also has two PCI-X
133 slots. Each slot is on a separate bus, but only one bus has sufficient throughput in its connection
to memory to support the full PCI-X 133 bandwidth (approximately 1 GB per second). The other
slot provides approximately 533 MB per second.

1.1.4 Cable Management


Each HP Integrity rx1620 that has a connection to the system interconnect requires one cable
bracket installed on the rear rack column. This component is documented in the HP Cluster
Platform Tab Mount Cable Management Installation Guide.

1.1.5 Installing or Removing a PCI Card


The HP Integrity rx1620 server can contain up to two PCI cards. PCI cards are located on the I/O
riser assembly. The PCI slots are numbered 1 and 2. Slot 1 (top) is a single, full-size PCI slot that
runs at 133 MHz. Slot 2 (bottom) is a single, half-size PCI slot that runs at 133 MHz. Inserting a
PCI card into a slot that is not configured to accept it can cause operation failure or can cause
the PCI card to operate at less than optimum speed.
When installing or removing a PCI card, heed the warnings and cautions listed in “Important
Safety Information” (page 23).
An overview of the PCI card installation procedure follows:



1. Remove the server cover.
2. Release the PCI I/O riser by turning the jackscrew, as shown in Figure 1-3. This action frees
the PCI I/O riser from the system board.

Figure 1-3 Releasing the PCI I/O Riser

3. Remove the PCI I/O riser from the chassis (Figure 1-4).

Figure 1-4 Removing the PCI I/O Riser Assembly


4. Remove the PCI slot cover, as shown in Figure 1-5.



Figure 1-5 Removing the PCI Slot Cover


5. Grasp the edges of the PCI card being installed and gently press the connector into the PCI
I/O riser connector, as shown in Figure 1-6.

Figure 1-6 Sliding the Card into the PCI Riser Connector


6. Insert the card mounting screw and secure it with a T-15 driver.
7. Replace the PCI I/O riser assembly by positioning the connector over the mating connector
on the system board and then turning the jackscrew to complete the connector mating.
8. Connect any cables that are required by the PCI card.
9. Replace the server cover.
More detailed information about this procedure is provided in the HP Integrity rx1620 Installation
Guide.

1.2 HP Integrity rx2600


The dual-processor 2U HP Integrity rx2600 can be used as an application node, control node, or
utility node. HP Integrity rx2600 nodes have 1.3 GHz or 1.5 GHz dual Intel Itanium 2 processors
and 6 MB of on-chip L3 cache. All nodes are fitted with a management processor (the outlined
area in Figure 1-8) that provides access to system console, reset, power management, firmware
upgrade, and system status via 10/100 LAN connections. The Management Processor (MP) is
powered from the node's standby power source and is independent of the power on/off state of
the node.
Figure 1-7 shows the front of the HP Integrity rx2600.



Figure 1-7 HP Integrity rx2600 Front Panel


The following list describes the callouts shown in Figure 1-7:

Item Description

1 SCSI drives

2 Locator LED

3 Diagnostic LEDs

4 Power switch

5 CD-ROM drive

Table 1-3 describes the ports on the rear of the HP Integrity rx2600 that are used for connections
to other cluster components. These ports are identified by the callouts in Figure 1-8. Refer to the
cabling tables for explanations of the connection origins and destinations.

Figure 1-8 HP Integrity rx2600 Rear View


The following list corresponds to the callouts shown in Figure 1-8.


1. Power (PWR2)
2. LAN 10/100
3. 10/100/1000 LAN
4. MP VGA, serial, LAN, reset
5. 10/100 LAN
6. USB ports (mouse and keyboard ports labelled)
7. PCI Slot 0



Table 1-3 HP Integrity rx2600 Ports Used in Clusters
Callout Port Label Node Role Cluster Cabling Name and Description

1 PWR 1 All Power input 1 is used for the single power connection.

2 MP 10/100 Control and utility MP (Console, CES) – The management processor Ethernet LAN
connection. This 10/100 Base-T port is connected to the root console
switch (CES1) in the Utility Building Block (UBB).

3 LAN Gb Control and utility NIC2 (Administrative, AES) – The Gigabit Ethernet port. This
10/100/1000 Base-T Ethernet port is connected to the root
administrative switch (AES1) in the UBB.

4 VGA Control Video port. This port is optionally connected to the cluster's KVM
console.

5 LAN 10/100 All NIC1 – The LAN Ethernet port. This 10/100 Base-T Ethernet port
is optionally connected to the site WAN, enabling remote
connections to the cluster's control node.

6 USB Control Upper of the two stacked USB ports. Use this port only for the
optional connection to the KVM mouse and keyboard. Do not use
the ports labelled keyboard and mouse.

7 PCI Slot 0 Optional for the control node The PCI-X 133 slot in which the interconnect card is installed.
Slot 0 is at the top and slot 3 is at the bottom.

The node role column in Table 1-3 defines whether the connection is used in an application node,
a control node, or a utility node.
Refer to the server documentation for definitions of the remaining ports, but be aware that you
should never make additional connections to a server that is configured for a specific role in the
cluster.

1.2.1 Network Port Assignments


The HP Integrity rx2600 has three embedded network ports:
• Management processor (MP) 10/100 Ethernet port, connected to the cluster's console network
switch.
• NIC1 10/100 Ethernet port, optionally connected to the local WAN, enabling remote access.
• NIC2 Gigabit Ethernet port, connected to the administrative network.

Figure 1-9 HP Integrity rx2600 Network Ports


The following list describes the callouts in Figure 1-9.



1. LAN 10/100 (management port)
2. Management card
3. VGA
4. GSP reset (soft)
5. GSP reset (hard)
6. LAN 10/100 (NIC 1)
7. LAN Gb (NIC 2)
The ports, their labels, and the corresponding HP Cluster Platform cabling identifiers are shown
in Figure 1-9.
A single embedded gigabit Ethernet port is available on the HP Integrity rx2600 (NIC2). This
Gigabit Ethernet port is dedicated to the cluster's administrative network. This connection differs
from the HP Integrity rx4640 where NIC1 is the connection to the cluster's administrative network.
Refer to the cluster's cabling tables for more information.

1.2.2 Supported Memory Configurations


HP Cluster Platform does not enforce any memory configuration rules on the HP Integrity rx2600
nodes, other than those associated with the server used. For the HP Integrity rx2600 these rules
are:
• The system has 12 memory slots for installing DDR SDRAM memory modules.
• The system supports a maximum of 12 GB of memory and a minimum of 512 MB.
• Memory modules can either be 256 MB, 512 MB, or 1 GB, and they must be arranged as
ordered pairs of equal size. For example, if you place a 1 GB memory module in slot 0A,
you must insert a 1 GB memory module in slot 0B.
• Memory in the HP Integrity rx2600 must be loaded in quads. This means that you must load
two memory cards per cell. For example, using the loading order provided above, place two
DIMMs in slots 0A and 0B of memory cell 0, and two DIMMs in slots 1A and 1B in memory
cell 1.
• DDR SDRAM must be loaded as matched pairs. To determine if the DIMMs are matched
pairs, look at the HP part number on the DIMMs. If the HP part numbers match, then the
DIMMs can be loaded together as a pair.
• You can mix module sizes in the system, provided that DIMMs in each pair share the same
HP part number. For example, it is acceptable to load a pair of 256 MB DIMMs in slots 0A
and 0B, a pair of 1 GB DIMMs in slots 1A and 1B, then load 512 MB DIMMs in slots 2A, 2B,
3A, and 3B.
• You must install the first DDR SDRAM matched pair in memory cell 0 and in the slots labeled
DIMM 0A and DIMM 0B. Load the second matched pair in memory cell 1 and in the slots
labeled DIMM 1A and DIMM 1B. Continue loading successive matched pairs using the
sequence described in Table 1-4.
Table 1-4 Memory Slot Loading Order
Load Order Memory Cell 0 Load Order Memory Cell 1

A Slots

1st DIMM 0A 2nd DIMM 1A

3rd DIMM 2A 4th DIMM 3A

5th DIMM 4A 6th DIMM 5A

B Slots

1st DIMM 0B 2nd DIMM 1B

3rd DIMM 2B 4th DIMM 3B

5th DIMM 4B 6th DIMM 5B

The HP Integrity rx2600 supports the chip spare feature, enabling the server's error handling to
bypass an entire DDR SDRAM chip on a DIMM if a multi-bit error is detected. To use the chip
spare feature, you must install only DIMMs built with x4 DDR SDRAM. Load the memory in quads,
that is, two DIMMs per memory cell, loaded in the same location in each memory cell and
according to the sequence described in Table 1-4. Use the following upgrade rules:
• Begin with four identical DIMMs. Load them into the slots labeled Load Order 1st and 2nd.
• Add four identical DIMMs (identical to each other, but potentially different from the original
quad). Load them in the slots labeled Load Order 3rd and 4th.
• Finally, add four identical DIMMs (identical to each other, but potentially differing from
the existing two quads). Load them in the slots labeled Load Order 5th and 6th.
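The chip-spare upgrade sequence above follows directly from Table 1-4. As an illustration only (not HP software), a helper that returns the slot labels for each successive quad could look like this:

```python
# Quad load groups derived from Table 1-4: the nth quad fills the A and B
# slots sharing the nth pair of load-order positions across memory cells
# 0 and 1. Slot labels are copied from the table; the code is a sketch.
QUAD_SLOTS = (
    ("DIMM 0A", "DIMM 0B", "DIMM 1A", "DIMM 1B"),  # load order 1st/2nd
    ("DIMM 2A", "DIMM 2B", "DIMM 3A", "DIMM 3B"),  # load order 3rd/4th
    ("DIMM 4A", "DIMM 4B", "DIMM 5A", "DIMM 5B"),  # load order 5th/6th
)

def slots_for_quad(n):
    """Slot labels for the nth installed quad (n = 1, 2, or 3)."""
    return QUAD_SLOTS[n - 1]
```

Each returned group holds four identical DIMMs, matching the upgrade rules above.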

1.2.3 Supported Storage Configurations


The HP Integrity rx2600 server supports up to three hot-pluggable LVD 3.5-inch SCSI drives.
Local storage for a node varies depending on the role of the node. The following drives are
supported:
• 36 GB 10K RPM
• 36 GB 15K RPM
• 73 GB 10K RPM

Important:
The power subsystem for racks containing nodes must be designed for nodes that have disk
configurations that draw maximum current.

The physical procedures used to insert and remove a disk drive are discussed in the document
that comes with the drive. However, the operating system must be prepared for insertion or
removal of a disk, or unexpected and harmful effects may occur.

Note:
There is a significant difference between the terms hot-pluggable and hot-swappable. The
hot-swappable process enables you to replace a component in a high-availability system while
it is running without using operating system commands. The hot-pluggable process enables you
to replace a component in a high-availability system while it is running, but a manual software
procedure is necessary to complete the task. The disk drives in the HP Integrity rx2600 are
hot-pluggable, not hot-swappable.

In addition to the Ethernet connection from the management processor, the HP Integrity rx2600
server provides a 10/100 Base-T port and a 10/100/1000 Base-T port. It also has four PCI-X 133
slots. Each slot is on a separate bus, but only one bus has sufficient throughput in its connection
to memory to support the full PCI-X 133 bandwidth (approximately 1 GB per second). The other
three slots provide approximately 533 MB per second.

1.2.4 Cable Management


Each HP Integrity rx2600 with a connection to the system interconnect requires one cable
management tray installed on the end of its rail kit. This component is described in HP XC6000
Cable Management Tray Installation Guide.



1.2.5 Installing or Removing a PCI Card
The HP Integrity rx2600 has four 64-bit, 133 MHz PCI-X card slots in a removable PCI/AGP cage.
To remove and replace a PCI card, remove the server’s cover and take out the PCI card cage.
Removal instructions are provided on the card cage label. Detailed information is provided in
HP Integrity rx2600 Server and HP Workstation zx6000 - Operations and Maintenance Guide.
When performing these tasks, heed the warnings and cautions listed in “Important Safety
Information” (page 23).
An overview of the PCI installation procedure follows:
1. Remove the PCI card cage from the server, as shown in Figure 1-10. Identify slot 1, which
is the topmost slot when the card cage is installed in the server.

Figure 1-10 Removing the Card Cage


2. Remove the AGP shipment retainer and slide off the lid of the card cage as shown in
Figure 1-11.

Figure 1-11 Opening the Card Cage


3. When installing a card, first remove the bulkhead screw and blanking plate, as shown in
Figure 1-12, and then install the card into PCI slot 1 only.



Figure 1-12 Removing the Blank

Retain the screw to secure the card and save the blank plate for later use if the card is
removed.
4. Insert (or remove) the card and secure it with a bulkhead screw as shown in Figure 1-13.

Figure 1-13 Inserting the Card


5. If you remove a card, ensure that you replace the blank plate to maintain the correct airflow
for optimum cooling.

1.3 HP Integrity rx2620


The HP Integrity rx2620 is an updated version of the rx2600 described in Section 1.2. It has the
same physical format, but provides additional features and specifications described in Table 1-5.
Table 1-5 HP Integrity rx2620 Features
Component Specification

Processor board Up to 2 processors

Chipset: HP zx1

System bus bandwidth: 6.4 GB/s

Processors supported Type: Intel Itanium 2 processor

Speeds: 1.6 GHz and 1.3 GHz

Level 1 cache: 32 KB

Level 2 cache: 256 KB

Level 3 cache: 6 MB or 3 MB at 1.6 GHz, 3 MB at 1.3 GHz

Main memory Bus bandwidth: 8.5 GB/s

RAM type: PC2100 ECC registered DDR266A SDRAM

Capacity: 24 GB maximum

Memory slots: 12 DIMM slots



Table 1-5 HP Integrity rx2620 Features (continued)
Component Specification

Internal storage devices Internal hard disk drive bays: 3

Disk drive sizes: 36 GB, 73 GB, and 146 GB drives available

Disk drive interface: Ultra320 SCSI

Removable media: 1 open bay for DVD-ROM or DVD+RW

Maximum internal storage 438 GB (3 x 146 GB)

Expansion slots 4 full-length 64-bit/133MHz PCI-X

Core I/O and management processor interconnect Two 10/100/1000 Base-T Ethernet ports, one 10/100 Base-T
management LAN port, dual-channel Ultra320 SCSI, four USB 2.0
ports, two RS-232 serial ports for general use, and three RS-232
serial ports

Figure 1-14 and Figure 1-15 show the front and rear panel of the HP Integrity rx2620, including
labels for the ports. The port assignments in HP Cluster Platform are the same as those of the
HP Integrity rx2600, described in Table 1-3.

Figure 1-14 HP Integrity rx2620 Front Panel



The following table describes the callouts in Figure 1-14.

Item Description

1 Control panel

2 DVD drive

3 LVD HDD 3

4 LVD HDD 2

5 LVD HDD 1

6 System product label (pull-out)



Figure 1-15 HP Integrity rx2620 Rear Panel

The following table describes the callouts in Figure 1-15.

Item Description

1 AC power receptacles (AC2 on the left and AC1 on the right)

2 LVD/SE SCSI

3 10/100 management LAN

4 LAN Gb A (10/100/1000 LAN)

5 VGA port

6 LAN Gb B (10/100/1000 LAN)

7 Locator button/LED

8 ToC button

9 Console/Remote/UPS

10 USB ports

11 Console/Serial Port A

12 Serial port B

13 PCI slots (1 through 4, beginning at the top)

Table 1-6 provides a quick reference to these connections.


Table 1-6 Quick Reference for HP Integrity rx2620 Connections
Label Description Connection

LAN 10/100 Management LAN Console ProCurve Switch, CES1.

LAN Gb A Gigabit Ethernet LAN Administrative ProCurve Switch, AES1.

VGA Video Out Optionally connect to KVM, if this is a control node.

USB Upper port of the two stacked Optionally connect to KVM USB mouse and keyboard, if this is
USB ports a control node. (Do not use the USB ports that are labeled with
a mouse and keyboard icon.)

PCI slot 1 Top of the four full-length Install an HBA or HCA adapter for the appropriate interconnect.
64-bit/133MHz PCI-X slots

1.3.1 PCI Slot Assignments


The HP Integrity rx2620 has four PCI slots. Table 1-7 summarizes the slot assignments and the
PCI cards.



Table 1-7 HP Integrity rx2620 PCI Slot Assignments
Slot Assignment

1 PCI interconnect (optional)

2 10/100/1000 BaseT NIC (optional)

3 Fibre Channel HBA (optional)

4 Fibre Channel HBA (optional) or 10/100/1000 BaseT NIC (optional) or empty
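For scripted inventory checks, the assignments in Table 1-7 could be captured in a simple lookup. The slot numbers and card names come from the table; the dictionary and function are purely illustrative, not an HP interface:

```python
# rx2620 PCI slot assignments from Table 1-7 (illustrative sketch only).
RX2620_PCI_SLOTS = {
    1: "PCI interconnect (optional)",
    2: "10/100/1000 BaseT NIC (optional)",
    3: "Fibre Channel HBA (optional)",
    4: "Fibre Channel HBA, 10/100/1000 BaseT NIC, or empty",
}

def assignment(slot):
    """Return the planned card for an rx2620 PCI slot (1 through 4)."""
    return RX2620_PCI_SLOTS[slot]
```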

To install an interconnect host bus adapter (HBA) or host channel adapter (HCA) card in the HP
Integrity rx2620, use the procedure described in Section 1.2.5. See the following documents for
more information:
• HP Integrity rx2620 Installation Guide:
http://docs.hp.com/en/AB331-90002_en/AB331-90002_en.pdf
• HP Integrity rx2620 Maintenance Guide:
http://docs.hp.com/en/rx2620_maint/rx2620_maint.pdf
To upgrade the processor(s) in the HP Integrity rx2620, see the HP Integrity rx2620 Single-Core
to Dual-Core Processor Upgrade Guide:
http://docs.fc.hp.com/en/AD117-9009A/AD117-9009A.pdf

1.4 HP Integrity rx2660


The HP Integrity rx2660 is a 2U rack-mount system. The server’s internal peripherals include
serial-attached SCSI (SAS) disks and a DVD or DVD+RW drive. Its high availability features
include N+1 hot-swappable fans, 1+1 hot-swappable power supplies, and SAS disks. It contains
up to two single- or dual-core Itanium processors and up to 32 GB of memory. Table 1-8 describes
the components and specifications of the HP Integrity rx2660.
Table 1-8 HP Integrity rx2660 Features
Component Specification

Processors Up to 2 Itanium processors:

1.6 GHz / 6 MB cache single-core

1.4 GHz / 12 MB cache dual-core

1.6 GHz / 18 MB cache dual-core

Memory Eight DIMM slots located on the system board:

Supported DDR2 DIMM sizes: 512 MB, 1 GB, 2 GB, and 4 GB

Minimum memory (2 x 512 MB DIMMs): 1 GB

Maximum memory (8 x 4 GB DIMMs): 32 GB

Disk Drives One to eight hot-pluggable SAS hard drives:

36 GB SAS hard drive

72 GB SAS hard drive

PCI slots Three public PCI-X/e slots:

One slot @ 133 MHz, two slots @ 266 MHz for PCI-X systems, OR

One slot @ 133 MHz, two slots @ PCI-e x8 for PCI-e systems, OR

Three slots @ PCI-e x8 for PCI-e systems (future release)

SAS core I/O Eight-port SAS core I/O card



Table 1-8 HP Integrity rx2660 Features (continued)
Component Specification

There is also a pair of internal slots dedicated to optional RAID 5/PCI-e
for the SAS drives.

LAN system I/O Two GigE LAN ports

Management core I/O One serial port, and one 10 Base-T/100 Base-T LAN port

Note: There is an additional serial port, 3 USB ports, (1 front, 2 rear), and
2 VGA ports (1 front, 1 rear) on the server

Optical device One DVD or DVD+RW

Power supply One power supply (900 watts at 120 VAC; 1000 watts at 240 VAC); 1+1
redundancy with a second power supply

Figure 1-16 shows the front panel of the HP Integrity rx2660.

Figure 1-16 HP Integrity rx2660 Front Panel


The following table describes the callouts shown in Figure 1-16.

Item Description

1 System Insight Display

2 Small form factor (SFF) 2.5 inch serial-attached SCSI (SAS) disk drives

3 Power button

4 External health LED

5 Internal health LED

6 System health LED

7 Init (Transfer of Control) button

8 Unit ID (UID) button

9 USB port

10 VGA port

11 DVD or DVD-RW drive

Figure 1-17 shows the rear panel of the HP Integrity rx2660.



Figure 1-17 HP Integrity rx2660 Rear Panel


The following table describes the callouts shown in Figure 1-17.

Item Description Comments

1 PCI-X/PCI-E slot 1 PCI Express Interconnect

2 PCI-X/PCI-E slot 2

3 PCI-X/PCI-E slot 3

4 System LAN port 1 Optional outside world

5 Auxiliary serial port

6 VGA port

7 Power supply 2

8 Console serial port

9 Power supply LED

10 Power supply 1

11 iLO MP reset button

12 UID Locator button/LED

13 Standby power

14 iLO MP heartbeat

15 BMC heartbeat

16 iLO MP self-test

17 LAN link status LED

18 LAN link speed LED

19 iLO MP LAN port Connect to Console Network Switch (CES1)

20 USB ports

21 LAN link status LED (LAN port 2)



Item Description Comments

22 System LAN port 2 Connect to Administrative Network Switch (AES1)

23 LAN link speed LED (LAN port 2)

1.4.1 PCI Slot Assignments


The HP Integrity rx2660 has three public PCI slots. Table 1-9 and Table 1-10 summarize the slot
assignments and the PCI cards.
Table 1-9 HP Integrity rx2660 PCI Express Slot Assignments
Slot PCI-e Assignment: PCI Configuration Comment
Bus

1 A PCI Express x4

2 B PCI Express x4

3 C PCI Express x4

4 D PCI Express x8

5 E PCI Express Interconnect x8

Table 1-10 HP Integrity rx2660 PCI Mixed Slot Assignments


Slot PCI-Mixed Assignment: PCI Configuration Comment
Bus

1 A PCI Slot x4

2 B PCI Slot x4

3 C PCI Express Interconnect x8

4 D PCI Slot 64–bit 133 MHz

5 E PCI-X Interconnect 64–bit 133 MHz
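The two backplane options in Tables 1-9 and 1-10 can likewise be expressed as lookups for configuration scripts. The slot and configuration labels are copied from the tables; the structure itself is only an illustration, not an HP data format:

```python
# rx2660 slot configurations from Tables 1-9 and 1-10 (illustrative).
PCIE_BACKPLANE = {
    1: "PCI Express x4",
    2: "PCI Express x4",
    3: "PCI Express x4",
    4: "PCI Express x8",
    5: "PCI Express Interconnect x8",
}
MIXED_BACKPLANE = {
    1: "PCI x4",
    2: "PCI x4",
    3: "PCI Express Interconnect x8",
    4: "PCI 64-bit 133 MHz",
    5: "PCI-X Interconnect 64-bit 133 MHz",
}
```

Comparing the two dictionaries makes the difference between the backplane options easy to see: the interconnect slot moves depending on which riser board is installed.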

1.4.2 Removing the I/O Backplane Assembly


The I/O backplane assembly consists of the I/O backplane board and a sheet metal enclosure.
The I/O backplane board contains three full-length public I/O slots. To remove a PCI card from
the HP Integrity rx2660, follow these steps:
1. Attach a grounding strap to your wrist or ankle and a metal part of the chassis.

Caution:
Observe all ESD safety precautions before attempting this procedure. Failure to follow ESD
safety precautions may result in damage to the server.

2. Power down the server.



3. Disconnect the AC power cord, first from the AC outlet and then from the server.

Warning!
Ensure that the system is powered off and all power sources have been disconnected from
the server prior to performing this procedure.
Voltages are present at various locations within the server whenever an AC power source
is connected. This voltage is present even when the main power switch is in the off position.
Failure to observe this warning may result in personal injury or damage to equipment.

4. Remove the server from the rack.


5. Remove the access panel from the server.
6. Disconnect all internal and external cables attached to the I/O cards in the I/O backplane
assembly.

Caution:
Record the slot location of all PCI cards as they are removed. Depending on the operating
system, replacing the PCI cards in a different location may require system reconfiguration
and may cause boot failure.

7. Loosen the two captive screws as shown by callout 1 in Figure 1-18. Check the removal
instructions on the backplane assembly (see callout 3 in Figure 1-18).

Figure 1-18 Removing the I/O Backplane Assembly

a. Press the blue button to release the black knob.


b. Turn the black knob counterclockwise until the captive screw is free from the system
board.
8. Using the sheet metal tab (see callout 2 in Figure 1-18) lift the assembly straight up and out
of the server.

1.4.2.1 Integrity rx2660 PCI–X and PCI-X/PCI-E I/O Backplane Assembly Options
Figure 1-19 shows an Integrity rx2660 PCI-X I/O backplane assembly and Figure 1-20 shows a
mixed PCI-X/PCI-E I/O backplane assembly.



Figure 1-19 Integrity rx2660 PCI-X I/O Backplane Assembly


The following list describes the callouts shown in Figure 1-19:


1. Gate latches
2. PCI-X backplane assembly
3. Guide tabs
4. Slotted T15 screws
5. PCI slot covers
6. PCI-X slot 3
7. PCI-X slot 2
8. PCI-X slot 1
9. PCI-X riser board

Figure 1-20 Integrity rx2660 Mixed PCI-X/PCI–E I/O Backplane Assembly


The following list describes the callouts shown in Figure 1-20:


1. PCI-E slot 1
2. PCI-E slot 2



3. PCI-X slot 3
4. Mixed PCI-X/PCI-E riser board

1.4.3 Installing PCI Cards in the Integrity rx2660


Ensure that you install the proper drivers for the PCI-X/PCI-E card before installing the card.
To install a PCI-X/PCI-E card, follow these steps:
1. Remove the I/O backplane assembly as described in Section 1.4.2.
2. Remove the defective PCI card or install a new PCI-X or PCI-E card.
3. If not already removed, remove the appropriate slot cover (see callout 5 in Figure 1-19) for
the PCI slot to be used.
4. Insert the replacement card into the PCI slot:
a. Insert the tab at the base of the card bulkhead into the slot in the server.
b. Align the card connectors with the slots on the I/O backplane board.
c. Apply firm, even pressure to both sides of the card until it fully seats into the slot.

Caution:
Ensure that you fully seat the card into the slot or the card may fail after power is applied
to the slot.

5. Close the gate latch to secure the end of the card.


6. Replace the slotted T15 screw that attaches the card bulkhead to the server; use a T15 driver
to turn the screw clockwise until it tightens to the server.
7. Install the I/O backplane assembly into the server.
8. Connect all internal and external cables to the PCI cards in the I/O backplane assembly.
9. Replace the top cover.
10. Slide the server completely back into the rack.
11. Reconnect the power cables and power on the server.

1.5 HP Integrity rx3600


The HP Integrity rx3600 is a 4U, four-socket rack-mount server based on the Itanium 2 processor
family (IPF) architecture that typically functions as a control node, utility node, or I/O node.
The server accommodates up to 32 DIMMs and internal peripherals, including disks and a DVD-ROM
drive. Its high-availability features include hot-swappable fans and 200-240 VAC power supplies,
hot-pluggable disk drives, and hot-pluggable PCI-X cards.
Figure 1-21 shows the front view of the HP Integrity rx3600.

Figure 1-21 HP Integrity rx3600 Front View

All nodes are fitted with an MP that provides access to the system console, reset, power
management, firmware upgrades, and system status via a 10/100 LAN connection. The MP is powered
from the node's standby power source and is independent of the power on/off state of the node.
Figure 1-22 shows the rear ports of the HP Integrity rx3600.



Figure 1-22 HP Integrity rx3600 Rear View

Table 1-11 describes the ports on the rear of the HP Integrity rx3600 that are used for connections
to other cluster components. These ports are identified by the callouts in Figure 1-22. Refer to
the cabling tables for explanations of the connection origins and destinations.
Table 1-11 HP Integrity rx3600 Ports Used in Clusters
Callout Port Label Cluster Cabling Name and Description

1 PWR 0 and PWR 1 When the server is used as a control node, both power supplies are used to
provide redundancy. Utility nodes use only PWR 0.

2 Screen Icon VGA Video Port. When used as a control node, this port is optionally connected
to the cluster's KVM console.

3 USB ports Use only the upper of the two stacked USB ports. When this server is used as a
control node, use USB 1 port for the optional connection to the KVM mouse and
keyboard. Do not use the ports labeled keyboard and mouse.

4 MP LAN MP (Console, CES). The MP Ethernet LAN connection. This 10/100 Base-T port is
connected to the root console switch (CES1) in the UBB.

5 LAN Gb NIC1 (Administrative, AES) – The gigabit Ethernet port in PCI slot 2. This
10/100/1000 Base-T Ethernet port is connected to the root administrative switch
(AES1) in the UBB.

6 N/A PCI (slot 8).

The spare Ethernet port in PCI slot 2 is designated NIC 2 and might also be used for an optional
connection to the site WAN, depending on the server's role.
Refer to the server documentation for definitions of the remaining ports, but be aware that you
should never make additional connections to a server that is configured for a specific role in the
cluster.

1.5.1 Supported Memory Configurations


The HP Cluster Platform does not enforce any memory configuration rules on the control node
and utility nodes other than those associated with the server used. You can use DIMMs of 256 MB,
512 MB, 1 GB, and 2 GB, provided that you follow the upgrade rules for mixed DIMMs. For the
HP Integrity rx3600, these rules are:
• The HP Integrity rx3600 has a 16-DIMM memory extender board that is minimally
configured with 1 GB of memory: four 256 MB DIMMs loaded in quad 0 (slots 0A, 0B, 0C,
and 0D).
• An optional 32-DIMM memory extender board is available to replace the 16-DIMM memory
extender board. Configure the extender board with a minimum of 1 GB in quad 0.
• You can install additional DIMMs in both the 16-DIMM and 32-DIMM boards. When adding
DIMMs, use a minimum of four DIMMs of identical size, and install them in the next available quad.
• You can use DIMMs of varying sizes across the entire extender board, but all four DIMMs
in each quad must be of an identical size.
• Remove slot fillers only to add DIMMs. Do not leave any slot empty without a filler, because
this can affect system cooling.
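The quad-loading rules above can be summarized in a short validation sketch. This is an illustrative Python example, not an HP tool; the slot grouping and the checks simply restate the rules in this section.

```python
# Hedged sketch of the rx3600/rx4640 DIMM quad rules described above.
# Each quad is a list of DIMM sizes in MB; quads are given in load order.
VALID_SIZES_MB = {256, 512, 1024, 2048}

def check_quads(quads):
    """Return a list of rule violations (empty list means the layout is valid)."""
    errors = []
    seen_empty = False
    for i, quad in enumerate(quads):
        if not quad:                      # unpopulated quad
            seen_empty = True
            continue
        if seen_empty:
            errors.append(f"quad {i} loaded after an empty quad")
        if len(quad) != 4:
            errors.append(f"quad {i} has {len(quad)} DIMMs; a quad takes exactly 4")
        if len(set(quad)) != 1:
            errors.append(f"quad {i} mixes DIMM sizes {sorted(set(quad))}")
        if any(s not in VALID_SIZES_MB for s in quad):
            errors.append(f"quad {i} uses an unsupported DIMM size")
    if quads and sum(quads[0]) < 1024:
        errors.append("quad 0 must hold at least 1 GB")
    return errors

# Minimal valid configuration: four 256 MB DIMMs in quad 0.
print(check_quads([[256, 256, 256, 256], [], [], []]))   # []
```

Mixing sizes within a quad, or skipping a quad, produces a non-empty error list.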

1.5.2 Upgrading the HP Integrity rx3600


To upgrade the processor(s) in the HP Integrity rx3600, see the HP Integrity rx3600 Server Upgrade
Guide, Second Edition
http://docs.fc.hp.com/en/A6961-96018/A6961-96018.pdf.

1.5.3 PCI-X Slot Assignment and Supported Options


PCI-X slots are numbered from 1 through 8, with the interconnect card in slot 8 on the far right
of the enclosure, as shown in Figure 1-22. The configuration requirements for slots 1 through 8
are as follows:
• PCI slots 1 and 2 are dedicated for use by the core I/O cards—SCSI HBA card in slot 1 and
dual-port gigabit Ethernet LAN card in slot 2. Slots 1 and 2 are not hot-pluggable capable.
Do not install additional PCI-X expansion cards in slots 1 or 2.
• Slots 3 and 4 are the first pair of shared slots, and slots 5 and 6 are the second pair.
The maximum capability of each shared slot is PCI-X 66 MHz, so a PCI-X 133 MHz card installed
in a shared slot runs at no more than 66 MHz. If cards of a different mode (PCI instead of
PCI-X) or a slower speed (33 MHz) are installed, the slot automatically downgrades to match.
• Slots 7 and 8 are single slots. The maximum capability of each slot is PCI-X 133 MHz, and
only these two slots allow 133 MHz PCI-X cards to run at full speed; they are not limited by
bus-mode or frequency-related incompatibilities. Slot 8 is always used for the
interconnect PCI card.
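The shared-slot behavior described above amounts to a simple minimum rule. The following Python sketch is illustrative only; the per-slot caps restate the bullets, and nothing here is HP firmware logic.

```python
# Illustrative slot-speed caps for the rx3600/rx4640 PCI-X backplane:
# core I/O and shared slots (1-6) cap at 66 MHz, single slots 7-8 at 133 MHz.
SLOT_CAP_MHZ = {1: 66, 2: 66, 3: 66, 4: 66, 5: 66, 6: 66, 7: 133, 8: 133}

def effective_speed(slot, card_mhz):
    """Speed (MHz) a card actually runs at: never faster than the slot cap,
    and never faster than the card itself (slower cards downgrade the slot)."""
    return min(SLOT_CAP_MHZ[slot], card_mhz)

print(effective_speed(4, 133))   # 66: a 133 MHz card in a shared slot is capped
print(effective_speed(8, 133))   # 133: slot 8 runs the interconnect at full speed
print(effective_speed(7, 33))    # 33: a slower card downgrades the slot
```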
Table 1-12 summarizes the slot assignments and the required and optional PCI cards.
Table 1-12 HP Integrity rx3600 PCI slot assignments
Slot  Bus  Assignment                                   Maximum Speed

1     1    Core I/O: SCSI HBA                           66 MHz PCI-X
2     1    Core I/O: dual-port Gigabit Ethernet NIC     66 MHz PCI-X
3     2    (open)                                       66 MHz PCI-X
4     2    First 2 Gb/s Fibre Channel HBA (optional)    66 MHz PCI-X
5     3    (open)                                       66 MHz PCI-X
6     3    Second 2 Gb/s Fibre Channel HBA (optional)   66 MHz PCI-X
7     4    (open)                                       133 MHz PCI-X
8     5    Interconnect PCI adapter                     133 MHz PCI-X



The HP Cluster Platform hardware does not impose any restrictions on supported options.
However, the system software can restrict supported options to guarantee performance, or to
reduce qualification test matrices.

1.5.4 Installing or Removing a PCI Card


The HP Integrity rx3600 provides eight PCI slots across five PCI-X buses and supports
hot-pluggable PCI-X devices. However, you should not install the interconnect PCI adapter while
the server is running. Also, take care to use only slot 8 on bus 5, which is the designated slot for
HP Cluster Platform installations.
Detailed information on installing PCI cards is provided in the HP Integrity rx3600 Maintenance
Guide
http://docs.hp.com/en/rx3600_maint/rx3600_maint.pdf.
When installing or removing PCI cards, heed the warnings and cautions listed in the “Important
Safety Information” (page 23).
An overview of the PCI installation procedure follows:
1. Extend the server from the rack by removing the 25 mm Torx screws that retain it in the rack
and flipping out the server's two pull handles. Use these handles to pull the server from the rack
until it locks in place and the cover is accessible, as shown in Figure 1-23.

Figure 1-23 Removing the Screws that Secure the Server in the Rack

2. Remove the server’s top rear cover to access the PCI slots by unscrewing the thumbscrews
at the rear of the server, as shown in Figure 1-24.



Figure 1-24 Removing the Server's Top Panel

3. If you are inserting a card, remove the bulkhead screw that attaches the blank plate to the
server chassis. Retain both the screw and the plate.
4. Insert the PCI adapter into slot 8, which is closest to the side of the server’s case and furthest
from the power inlet. Ensure that the card is level, as shown in Figure 1-25.

Figure 1-25 Installing the PCI Card

Secure the card using the bulkhead screw.


5. If you removed a card without replacing it, ensure that you install the blank plate to
maintain the correct airflow for optimum cooling.



1.5.5 Cable Management
Each HP Integrity rx3600 that has a connection to the system interconnect requires one cable
bracket installed on the rear rack column. This component is documented in the HP Cluster
Platform Integrity rx3600 Cable Bracket Installation Guide.

1.6 HP Integrity rx4640


The HP Integrity rx4640 is a 4U, four-socket rack-mount server based on the Itanium 2 processor
family architecture that typically functions as a control node, utility node, or I/O node.
The server accommodates up to 32 DIMMs and internal peripherals, including disks and a DVD-ROM
drive. Its high-availability features include hot-swappable fans and 200-240 VAC power supplies,
hot-pluggable disk drives, and hot-pluggable PCI-X cards.

Figure 1-26 HP Integrity rx4640 Front View

All nodes are fitted with an MP that provides access to the system console, reset, power
management, firmware upgrades, and system status via a 10/100 LAN connection. The MP is powered
from the node's standby power source and is independent of the power on/off state of the node.
Figure 1-27 shows the rear ports of the HP Integrity rx4640.

Figure 1-27 HP Integrity rx4640 Rear View



Table 1-13 describes the ports on the rear of the HP Integrity rx4640 that are used for connections
to other cluster components. These ports are identified by the callouts in Figure 1-27. Refer to
the cabling tables for explanations of the connection origins and destinations.



Table 1-13 HP Integrity rx4640 Ports Used in Clusters
Callout Port Label Cluster Cabling Name and Description

1 PWR 0 and PWR 1 When the server is used as a control node, both power supplies are used to
provide redundancy. Utility nodes use only PWR 0.

2 Screen Icon VGA Video Port. When used as a control node, this port is optionally connected
to the cluster's KVM console.

3 USB ports When this server is used as a control node, use only the top USB port for the
optional connection to the KVM mouse and keyboard. Do not use the ports labeled
keyboard and mouse.

4 MP LAN MP (Console, CES). The MP Ethernet LAN connection. This 10/100 Base-T port is
connected to the root console switch (CES1) in the UBB.

5 LAN Gb NIC1 (Administrative, AES) – The Gigabit Ethernet Port in PCI slot 2. This
10/100/1000 Base-T Ethernet port is connected to the root administrative switch
(AES1) in the UBB.

6 N/A PCI (slot 8).

The spare Ethernet port in PCI slot 2 is designated NIC 2 and might also be used for an optional
connection to the site WAN, depending on the server's role.
Refer to the server documentation for definitions of the remaining ports, but be aware that you
should never make additional connections to a server that is configured for a specific role in the
cluster.

1.6.1 Supported Memory Configurations


The HP Cluster Platform does not enforce any memory configuration rules on the control node
and utility nodes other than those associated with the server used. You can use DIMMs of 256
MB, 512 MB, 1 GB, and 2 GB, provided that you follow the upgrade rules for mixed DIMMs. For the
HP Integrity rx4640, these rules are:
• The HP Integrity rx4640 has a 16-DIMM memory extender board that is minimally configured
with 1 GB of memory, four 256 MB DIMMs loaded in quad 0 (slots 0A, 0B, 0C, and 0D).
• An optional 32-DIMM memory extender board is available to replace the 16-DIMM memory
extender board. Configure the extender board with a minimum of 1 GB in quad 0.
• You can install additional DIMMs in both the 16-DIMM and 32-DIMM boards. When adding
DIMMs, use a minimum of four DIMMs of identical size, and install them in the next available quad.
• You can use DIMMs of varying size across the entire extender board, but all four DIMMs
in each quad must be of an identical size.
• Remove slot fillers only to add DIMMs. Do not leave any empty slots without a filler because
this can affect system cooling.

1.6.2 Upgrading the HP Integrity rx4640


To upgrade the processor(s) in the HP Integrity rx4640, see the HP Integrity rx4640 Server Upgrade
Guide, Second Edition
http://docs.fc.hp.com/en/A6961-96018/A6961-96018.pdf.



1.6.3 PCI-X Slot Assignment and Supported Options
PCI-X slots are numbered from 1 through 8, with the interconnect card in slot 8 on the far right
of the enclosure, as shown in Figure 1-27. The configuration requirements for slots 1 through 8
are as follows:
• PCI slots 1 and 2 are dedicated for use by the core I/O cards—SCSI HBA card in slot 1 and
dual-port gigabit Ethernet LAN card in slot 2. Slots 1 and 2 are not hot-pluggable capable.
Do not install additional PCI-X expansion cards in slots 1 or 2.
• Slots 3 and 4 are the first pair of shared slots, and slots 5 and 6 are the second pair.
The maximum capability of each shared slot is PCI-X 66 MHz, so a PCI-X 133 MHz card installed
in a shared slot runs at no more than 66 MHz. If cards of a different mode (PCI instead of
PCI-X) or a slower speed (33 MHz) are installed, the slot automatically downgrades to match.
• Slots 7 and 8 are single slots. The maximum capability of each slot is PCI-X 133 MHz, and
only these two slots allow 133 MHz PCI-X cards to run at full speed; they are not limited by
bus-mode or frequency-related incompatibilities. Slot 8 is always used for the
interconnect PCI card.
Table 1-14 summarizes the slot assignments and the required and optional PCI cards.
Table 1-14 HP Integrity rx4640 PCI slot assignments
Slot  Bus  Assignment                                   Maximum Speed

1     1    Core I/O: SCSI HBA                           66 MHz PCI-X
2     1    Core I/O: dual-port Gigabit Ethernet NIC     66 MHz PCI-X
3     2    (open)                                       66 MHz PCI-X
4     2    First 2 Gb/s Fibre Channel HBA (optional)    66 MHz PCI-X
5     3    (open)                                       66 MHz PCI-X
6     3    Second 2 Gb/s Fibre Channel HBA (optional)   66 MHz PCI-X
7     4    (open)                                       133 MHz PCI-X
8     5    Interconnect PCI adapter                     133 MHz PCI-X

The HP Cluster Platform hardware does not impose any restrictions on supported options.
However, the system software can restrict supported options to guarantee performance, or to
reduce qualification test matrices.

1.6.4 Installing or Removing a PCI Card


The HP Integrity rx4640 provides eight PCI slots across five PCI-X buses and supports
hot-pluggable PCI-X devices. However, you should not install the interconnect PCI adapter while
the server is running. Also, take care to use only slot 8 on bus 5, which is the designated slot for
HP Cluster Platform installations.
Detailed information on installing PCI cards is provided in the HP Integrity rx4640 Maintenance
Guide http://docs.hp.com/en/rx4640_maint/rx4640_maint.pdf.
When installing or removing PCI cards, heed the warnings and cautions listed in the “Important
Safety Information” (page 23).
An overview of the PCI installation procedure follows:
1. Extend the server from the rack by removing the two 25 mm Torx screws that retain it in the
rack and flipping out the server's two pull handles. Use these handles to pull the server from
the rack until it locks in place and the cover is accessible (see callout 1 in Figure 1-28).



Figure 1-28 Removing the Screws that Secure the Server in the Rack

2. Remove the server’s top rear cover to access the PCI slots by unscrewing the thumbscrews
at the rear of the server (see callout 1 in Figure 1-29).

Figure 1-29 Removing the Server's Top Panel

3. If you are inserting a card, remove the bulkhead screw that attaches the blank plate to the
server chassis. Retain both the screw and the plate.
4. Insert the PCI adapter into slot 8, which is closest to the side of the server’s case and furthest
from the power inlet. Ensure that the card is level, as shown in Figure 1-30.



Figure 1-30 Installing the PCI Card

Secure the card using the bulkhead screw.


5. If you removed a card without replacing it, ensure that you install the blank plate to
maintain the correct airflow for optimum cooling.

1.6.5 Cable Management


Each HP Integrity rx4640 that has a connection to the system interconnect requires one cable
bracket installed on the rear rack column. This component is documented in the HP Cluster
Platform Integrity rx4640 Cable Bracket Installation Guide.



2 Xeon Processor Servers
Several servers based on the Xeon processor are supported in HP Cluster Platform solutions.
This chapter presents overviews of the following servers:
• HP ProLiant DL140 G2 (Section 2.1)
• HP ProLiant DL140 G3 (Section 2.2)
• HP ProLiant DL160 G5 and G5p (Section 2.3)
• HP ProLiant DL360 G3, G4, and G4p (Section 2.4)
• HP ProLiant DL360 G5 (Section 2.5)
• HP ProLiant DL380 G3 and G4 (Section 2.6)
• HP ProLiant DL380 G5 (Section 2.7)

2.1 HP ProLiant DL140 G2


The HP ProLiant DL140 G2 server is a 1U, dual-processor-capable server supporting Intel Xeon
processors up to 3.6 GHz/800 MHz with 2 MB L2 cache. It provides 8 DIMM slots, enabling a
maximum of 16 GB of PC2-3200 DDR2 memory. A standard-height, full-length PCI-X slot and an
additional low-profile, half-length PCI-X slot are provided for adapters; the full-length slot
can be converted to PCI Express with an optional riser kit. The DL140 G2
supports non-hot-pluggable serial ATA (SATA) and SCSI hard disk drives.
Table 2-1 lists the features of the HP ProLiant DL140 G2.
Table 2-1 HP ProLiant DL140 G2 Features
Feature                              Specification

Available processors and cache       Intel Xeon 3.6 GHz/800 MHz, 2 MB cache
(maximum 2 processors per chassis)   Intel Xeon 3.4 GHz/800 MHz, 1 MB cache
                                     Intel Xeon 2.8 GHz/800 MHz, 1 MB cache

Memory type                          PC2-3200 DDR2

Maximum memory                       16 GB

Storage (maximum 2 hard drives       Non-hot-pluggable Serial ATA (integrated controller)
of one type)                         Non-hot-pluggable SCSI
                                     1 removable media bay

Storage controller                   Standard integrated dual-channel SATA controller (SATA models)
                                     Optional single-channel wide Ultra3 SCSI adapter in a PCI slot (SCSI models)

Network (2 ports)                    Dual integrated 10/100/1000 Broadcom 5721

Remote management                    Lights-Out 100i

Figure 2-1 and Figure 2-2 identify the front and rear panel features of the HP ProLiant DL140
G2, which has the same physical features as the AMD Opteron-based ProLiant DL145 G2.



Figure 2-1 HP ProLiant DL140 G2 Front Panel

The following table describes the callouts in Figure 2-1.


Table 2-2 HP ProLiant DL140 G2 Front Panel Features
Item Description

1 Hard disk drive (HDD) bays

2 Optical media device bay

3 Unit identification (UID) button with LED indicator (blue)

4 System health LED indicator (amber)

5 Activity/link status LED indicators for NIC 1 and NIC 2 (green)

6 HDD activity LED indicator (green)

7 USB 2.0 ports

8 Power button with LED indicator (bicolor: green and amber)

9 Thumbscrews for the front bezel

Figure 2-2 HP ProLiant DL140 G2 Rear Panel



The following table describes the callouts in Figure 2-2.


Table 2-3 HP ProLiant DL140 G2 Rear Panel Features
Item Description

1 Ventilation holes

2 Thumbscrew for the top cover

3 Thumbscrews for the PCI riser board assembly

4 Gigabit Ethernet LAN ports for NIC 1 (RJ-45)

5 Gigabit Ethernet LAN ports for NIC 2 (RJ-45)

6 Low-profile 64-bit 133 MHz PCI-X riser board slot cover

7 USB 2.0 ports (black)

8 Video port (blue)



Table 2-3 HP ProLiant DL140 G2 Rear Panel Features (continued)
Item Description

9 Serial port (teal)

10 Standard height, full-length 64-bit/133 MHz PCI-X riser board slot cover. You can convert the PCI-X
functionality of this slot to PCI Express using the PCI Express riser board option kit.

11 PS/2 keyboard port (purple)

12 PS/2 mouse port (green)

13 10/100 MB/s LAN port for IPMI management (RJ-45)

14 IEC power inlet

2.1.1 HP ProLiant DL140 G2 PCI Slot Assignments


The ProLiant DL140 G2 has two PCI-X slots on the rear of the chassis. Table 2-4 summarizes
the slot assignments.
Table 2-4 HP ProLiant DL140 G2 PCI Slot Assignments
Slot Assignment

1 PCI-X interconnect

2 64-bit 133 MHz PCI-X

2.1.2 HP ProLiant DL140 G2 Memory Configurations


The HP ProLiant DL140 G2 has eight DIMM slots that support up to 16 GB maximum system
memory (2 GB in each of the eight DIMM slots). Observe the following rules when installing
memory modules:
• Use only HP supported PC2-3200 (400 MHz) registered ECC DIMMs in 512 MB, 1 GB, or 2
GB capacities.
• Install memory modules in pairs of the same size.
• Install memory modules in progressively larger capacity, following the slot sequence listed
in Table 2-5.
Table 2-5 HP ProLiant DL140 G2 Memory Module Sequence
Slot Capacity

DIMMA1 and DIMMB1 Smallest capacity modules, such as 512 MB

DIMMA2 and DIMMB2 Next largest capacity modules

DIMMA3 and DIMMB3 Next largest capacity modules

DIMMA4 and DIMMB4 Largest capacity modules, such as 2 GB
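The pairing and ordering rules in Table 2-5 can be sketched as a small planner. This is a hypothetical illustration; the slot names follow Table 2-5, and the function is not an HP configuration utility.

```python
# Hedged sketch of the DL140 G2/G3 memory rules: DIMMs go in matched pairs,
# smallest pair in DIMMA1/DIMMB1 through largest pair in DIMMA4/DIMMB4.
from collections import Counter

def plan_slots(dimm_sizes_mb):
    """Map a set of DIMMs (sizes in MB) onto slot pairs in the required order."""
    counts = Counter(dimm_sizes_mb)
    if any(n % 2 for n in counts.values()):
        raise ValueError("DIMMs must be installed in pairs of the same size")
    # One entry per pair, sorted so smaller pairs land in lower-numbered slots.
    pairs = sorted(size for size, n in counts.items() for _ in range(n // 2))
    if len(pairs) > 4:
        raise ValueError("only four DIMM pairs fit (8 slots)")
    return {f"DIMMA{i + 1}/DIMMB{i + 1}": size for i, size in enumerate(pairs)}

# Two 512 MB and two 2 GB DIMMs: smaller pair first, larger pair next.
print(plan_slots([512, 512, 2048, 2048]))
# {'DIMMA1/DIMMB1': 512, 'DIMMA2/DIMMB2': 2048}
```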

2.1.3 Installing a PCI Card in the HP ProLiant DL140 G2


There are two PCI expansion slots on the system board, as shown by callouts 6 and 10 in Figure 2-2.
Table 2-6 HP ProLiant DL140 G2 PCI Slots
Slot Capabilities

64-bit 133 MHz PCI-X slot (left) Supports a low profile 64-bit, 133 MHz PCI-X riser board

64-bit 133 MHz PCI-X slot (right) Supports a standard height, full-length 64-bit 133 MHz PCI-X riser board

To install a standard height interconnect host bus adapter (HBA) or host channel adapter (HCA)
in PCI slot 1 (right side), follow these steps:
1. Remove the cover as follows:
a. Loosen the captive thumbscrew on the rear panel. This screw is identified by item 2 in
Table 2-3.
b. Slide the cover approximately 1.25 cm (0.5 in) toward the rear of the unit, then lift the
cover to detach it from the chassis.
c. Place the top cover in a safe place for reinstallation later.
2. Loosen the two captive thumbscrews that secure the PCI card cage to the chassis, as shown
in Figure 2-3.

Figure 2-3 Removing the HP ProLiant DL140 G2 PCI Card Cage

3. Lift and remove the PCI card cage from the chassis, as shown in Figure 2-4.

Figure 2-4 HP ProLiant DL140 G2 PCI Card Cage

4. Identify the wider standard-height, full-length 64-bit/133 MHz PCI-X slot that is compatible
with the interconnect adapter (see Figure 2-2).
5. Slide the interconnect adapter board into the PCI slot, as shown in Figure 2-5. Press the
board firmly to seat it properly on the connector.



Figure 2-5 Installing an Interconnect Adapter in the HP ProLiant DL140 G2 PCI Card Cage

6. Reinstall the PCI riser board assembly as follows:


a. Align the assembly with the system board expansion slots, then press it down to ensure
full connection to the system board.
b. Tighten the two captive thumbscrews to secure the assembly to the chassis.
7. Replace the server's top cover by reversing the procedure described in Step 1.
For more information, see the following documents:
• HP ProLiant DL140 Generation 2 Server Installation Sheet
• HP ProLiant DL140 Generation 2 Server Maintenance and Service Guide

2.2 HP ProLiant DL140 G3 Used in HP Cluster Platform and Scalable Visualization Array

The HP ProLiant DL140 G3 server is a 1U, dual-processor-capable server supporting Intel Xeon
dual-core processors up to 3.0 GHz/1333 MHz with 4 MB (1 x 4 MB) Level 2 cache (5100 series) or
4 MB (2 x 2 MB) Level 2 cache (5000 series). It provides 8 DIMM slots, enabling a maximum of
16 GB of PC2-5300 DDR2-667 memory. A standard full-height, full-length PCI Express slot and an
additional low-profile, half-length PCI slot are provided for adapters.
The DL140 G3 supports non-hot-pluggable serial ATA (SATA) and SCSI 3.5-inch hard disk drives,
as well as up to two hot-pluggable Serial ATA (SATA) or Serial Attached SCSI (SAS) 3.5-inch
hard drives. It is used as a compute node in HP Cluster Platform configurations, and as a
compute node or a render node in HP Cluster Platform Scalable Visualization Array (SVA)
configurations.
Table 2-7 lists the features of the HP ProLiant DL140 G3.
Table 2-7 HP ProLiant DL140 G3 Features
Feature                              Specification

Processors (up to two dual-core      Dual-Core Intel Xeon 5160, 3.00 GHz, 1333 MHz front side bus (FSB)
processors per chassis)              Dual-Core Intel Xeon 5150, 2.66 GHz, 1333 MHz FSB
                                     Dual-Core Intel Xeon 5140, 2.33 GHz, 1333 MHz FSB
                                     Dual-Core Intel Xeon 5130, 2.0 GHz, 1333 MHz FSB
                                     Dual-Core Intel Xeon 5110, 1.60 GHz, 1066 MHz FSB
                                     Dual-Core Intel Xeon 5080, 3.73 GHz, 1066 MHz FSB
                                     Dual-Core Intel Xeon 5060, 3.20 GHz, 1066 MHz FSB
                                     Dual-Core Intel Xeon 5050, 3.00 GHz, 667 MHz FSB

Table 2-7 HP ProLiant DL140 G3 Features (continued)
Feature                              Specification

Cache                                5100 series: 4 MB (1 x 4 MB L2 cache)
                                     5000 series: 4 MB (2 x 2 MB L2 cache)

Memory type                          PC2-5300 Fully Buffered DIMMs (DDR2-667) with Advanced ECC

Maximum memory                       16 GB

Storage (maximum two 3.5-inch        Non-hot-pluggable Serial ATA (SATA)
hard drives of one type)             Non-hot-pluggable SCSI
                                     Hot-pluggable Serial ATA (SATA)
                                     Serial Attached SCSI (SAS)
                                     1 removable media bay

Storage controller                   Non-hot-pluggable SATA models: HP embedded SATA RAID controller
                                     Hot-pluggable SATA/SAS models: HP 8-internal-port SAS HBA with RAID 0, 1, 10

Network (2 ports)                    Dual integrated 10/100/1000 Broadcom 5721 (Wake on LAN and PXE capable)

Remote management                    HP ProLiant Lights-Out 100i

Figure 2-6 identifies the front panel features of the HP ProLiant DL140 G3 with hot-pluggable
drives. The rear panel features of the DL140 G3 are the same as the DL140 G2 as shown in
Figure 2-2.

Figure 2-6 HP ProLiant DL140 G3 Front Panel


The following table describes the callouts in Figure 2-6.


Table 2-8 HP ProLiant DL140 G3 Front Panel Features
Item Description

1 Hard disk drive (HDD) bays with hot-pluggable drives shown (otherwise, the front panel is the
same as the DL140 G2)

2 Optical media device bay

3 Unit identification (UID) button with LED indicator (blue)

4 System health LED indicator (amber)

5 Activity/link status LED indicators for NIC 1 and NIC 2 (green)

6 HDD activity LED indicator (green)

7 USB 2.0 ports



Table 2-8 HP ProLiant DL140 G3 Front Panel Features (continued)
Item Description

8 Power button with LED indicator (bicolor: green and amber)

9 Thumbscrews for the front bezel

2.2.1 HP ProLiant DL140 G3 PCI Slot Assignments


The ProLiant DL140 G3 has two PCI Express slots (optional PCI-X) on the rear of the chassis. All
slots can accept universal keyed PCI cards. Table 2-9, Table 2-10, and Table 2-11 summarize the
PCI slot assignments.
Table 2-9 summarizes the PCI Express slot assignments for the DL140 G3.
Table 2-9 HP ProLiant DL140 G3 PCI Express Slot Assignments
Slot Assignment Comment

1 x16 PCI Express (full-length, full-height) Interconnect

2 x PCI Express (half-length, low-profile)

Table 2-10 summarizes optional PCI-X slot assignments for the DL140 G3.
Table 2-10 HP ProLiant DL140 G3 PCI-X Slot Assignments
Slot  Assignment                                                            Comment

1     Full-length, full-height PCI-X 100 (133 MHz capable if only one      Interconnect
      PCI-X card is installed)

2     Half-length, low-profile PCI-X 100 (133 MHz capable if only one
      PCI-X card is installed)

Table 2-11 summarizes the PCI Express graphics card slot assignments used in HP Cluster
Platform SVA configurations.
Table 2-11 HP ProLiant DL140 G3 Graphics Card PCI Express Slot Assignments (SVA)
Slot Assignment Comment

1 x16 PCI Express (full-length, full-height) Dual-port graphics adapter

2 PCI Express

2.2.2 HP ProLiant DL140 G3 Memory Configurations


The HP ProLiant DL140 G3 has eight DIMM slots that support up to 16 GB maximum system
memory (2 GB in each of the eight DIMM slots). Observe the following rules when installing
memory modules:
• Use only HP supported PC2-5300 fully buffered DIMMs (DDR2–667) with advanced ECC
capability in 512 MB, 1 GB, or 2 GB capacities.
• Install memory modules in pairs of the same size.
• Install memory modules in progressively larger capacity, following the slot sequence listed
in Table 2-12.
Table 2-12 HP ProLiant DL140 G3 Memory Module Sequence
Slot Capacity

DIMMA1 and DIMMB1 Smallest capacity modules, such as 512 MB

DIMMA2 and DIMMB2 Next largest capacity modules

Table 2-12 HP ProLiant DL140 G3 Memory Module Sequence (continued)
Slot Capacity

DIMMA3 and DIMMB3 Next largest capacity modules

DIMMA4 and DIMMB4 Largest capacity modules, such as 2 GB

2.2.3 Installing a PCI Card in the DL140 G3


Installation of PCI cards in the DL140 G3 PCI riser board is the same as for the DL140 G2,
described previously in Section 2.1.3. The HP ProLiant DL140 Generation 3 server supports up
to two optional PCI-X riser boards. For more information on the HP ProLiant DL140 G3, see the
ProLiant 100 Series Servers User Guide.

2.3 HP ProLiant DL160 G5 and G5p


The HP ProLiant DL160 G5 uses 5400-series Intel Xeon processors and chipsets, with four disk
drive bays in a 1U form factor. The DL160 G5 has two x16 PCI Express 2.0 slots and four
3.5-inch drive bays providing up to 3 terabytes (TB) of storage capacity, and it has a
1600 MHz front-side bus. The DL160 G5 and G5p can be used as a control node, a utility node,
or a compute node in HP Cluster Platform configurations.
For the features and specifications of the HP ProLiant DL160 G5, go to:
http://h18004.www1.hp.com/products/quickspecs/12902_na/12902_na.html
For the features and specifications of the HP ProLiant DL160 G5p, go to:
http://h18004.www1.hp.com/products/quickspecs/13138_na/13138_na.html
Figure 2-7 shows the front view of the ProLiant DL160 G5 and G5p.

Figure 2-7 HP ProLiant DL160 G5 and G5p Front View



The following list describes the callouts shown in Figure 2-7:


1. Thumbscrews for rack mounting (quantity 2)
2. Optical disk drive bay
3. Serial number pull tab
4. Two front USB 2.0 ports
5. Unit identification (UID) LED button
6. System health LED
7. NIC1 LED
8. NIC2 LED
9. Power button with LED indicator (bicolor: green and amber)
10. Hard disk drive (HDD) LED
11. HDD bays 1, 2, 3, and 4



Figure 2-8 shows the rear view of the ProLiant DL160 G5 and G5p.

Figure 2-8 HP ProLiant DL160 G5 and G5p Rear View



The following list describes the callouts shown in Figure 2-8:


1. Power supply cable socket
2. PS/2 mouse port (green)
3. GbE LAN port for NIC2
4. Captive thumbscrew for top cover
5. Serial port (teal)
6. Low profile/Half length expansion slot
7. Full-height/Full-length expansion slot
8. T10/T15 wrench
9. Thumbscrew for PCI cage
10. UID LED button
11. VGA port
12. HP LO100i management LAN port
13. Two rear USB 2.0 ports
14. GbE LAN port for NIC1/Management
15. PS/2 keyboard port (purple)

2.3.1 PCI Slot Assignments


The following table describes the PCI slot assignments when the DL160 G5 is used in HP Cluster
Platform solutions:

PCI Slot Assignment

1 PCI interconnect / PCI-E x16 Gen2

2 PCI-E x16 Gen2

For additional information, such as the system board layout and installing PCI cards, see the HP
ProLiant DL160 Generation 5 Server Maintenance and Service Guide:
http://bizsupport.austin.hp.com/bc/docs/support/SupportManual/c01325420/c01325420.pdf
For additional information, such as the system board layout and installing PCI cards, see the HP
ProLiant DL160 Generation 5p Server Maintenance and Service Guide:
http://bizsupport2.austin.hp.com/bc/docs/support/SupportManual/c01555951/c01555951.pdf

2.4 HP ProLiant DL360 G3, G4, and G4p


By combining concentrated 1U compute power, integrated Lights-Out management, and essential
fault tolerance, the HP ProLiant DL360 is optimized for space-constrained data center
installations. Table 2-13 compares the features of the HP ProLiant DL360 G3 with those of the
DL360 G4 and G4p.



Table 2-13 ProLiant DL360 G3, G4, and G4p Model Comparison
Feature            HP ProLiant DL360 G3                       HP ProLiant DL360 G4 and G4p

Processor          2.4+ GHz Xeon, 533 MHz front side          Intel Xeon 3.4 and 3.6 GHz, 800 MHz
                   bus (2P)                                   front side bus, EM64T (2P)

Processor cache    512 KB L2; 1 MB or 2 MB L3                 1 MB L2 (2 MB L2 advanced transfer
                                                              cache in the G4p)

FSB                533 MHz                                    800 MHz

Drive controller   Embedded Smart Array 5i+ with 64 MB        Embedded Smart Array 6i, optional
                   memory, optional BBWC                      128 MB BBWC (SCSI models)

NIC                2 embedded NC7781 Gigabit NICs             Dual-port NC7782 Gigabit NIC

Memory             PC2100 266 MHz DDR, 2:1 interleaved        PC2700 333 MHz DDR, configurable for
                                                              2:1 or 4:1 interleaving, with online
                                                              spare memory and advanced ECC
                                                              (400 MHz DDR2 in the G4p)

Drive bays         Two 1.0-inch bays (U320 drives only)       Two 1.0-inch bays (U320 SCSI disk
                                                              drive models; SATA disk drive models)

Management         Embedded iLO                               Embedded iLO

I/O slots          1 or 2 PCI-X slots (1 with redundant       1 full-length and 1 half-length
                   power installed, 2 without)                PCI-X slot; optional PCI Express slots

Maximum memory     8 GB ECC SDRAM                             8 GB ECC SDRAM (12 GB in the G4p)

Power supply       Optional HP redundant                      Optional HP redundant

Fan                Non-redundant                              Redundant (standard)

Chassis            1U; fixed rails standard, optional         1U; sliding rails standard, optional
                   sliding rails                              cable management arm

Power              325 W                                      460 W

There are three embedded network ports on an HP ProLiant DL360. The iLO port connects to
the console network switch and is used for server management. The NIC 1 port is used for the
administrative network, and the NIC 2 port is used for the system interconnect. See Figure 2-10
and Figure 2-12 to locate the ports.
Fan redundancy is standard in the HP ProLiant DL360 G4 model and optional in the G3 model.
It allows continued operation until maintenance can be scheduled to replace a failed fan
assembly. Similarly, optional hot-pluggable power supply redundancy means no downtime to
repair a failure, because each power supply has its own power cord. Hard disk drives for the HP
ProLiant DL360 are optional. If used, each disk drive must be the same size and speed.
Figure 2-9 and Figure 2-10 describe the front and rear panels of the ProLiant DL360 G3.



Figure 2-9 HP ProLiant DL360 G3 Front Panel

The following table describes the callouts in Figure 2-9.

Item Description

1 Floppy drive

2 SCSI drive bay 1

3 CD-ROM drive

4 SCSI drive bay 2

5 Fan module

6 Power switch

7 Signal LEDs

Figure 2-10 HP ProLiant DL360 G3 Rear Panel with Single Power Supply

The following table describes the callouts in Figure 2-10.

Item Description

1 iLO

2 NIC2

3 NIC1

4 Serial connector

5 Video connector

6 Mouse connector (green)

7 PCI Slot 1


8 Keyboard connector (purple)

9 USB ports

10 UID

11 PCI Slot 2

12 Power supply

Figure 2-11 and Figure 2-12 show the front and rear panels of the ProLiant DL360 G4 servers.
The DL360 G4 front panel is the same as the DL360 G3 with the exception that the G4 has a USB
port below the power switch.

Figure 2-11 ProLiant DL360 G4 Front Panel



The following table describes the callouts in Figure 2-11:

Item Description

1 Floppy drive

2 SCSI drive bay 1

3 CD-ROM drive

4 SCSI drive bay 2

5 Fan module

6 Power switch

7 Signal LEDs

8 USB port

Figure 2-12 HP ProLiant DL360 G4 Rear Panel





The following table describes the callouts in Figure 2-12.

Item Description

1 PCI-X expansion slot 1 (64-bit/133-MHz 3.3V)

2 Serial connector (teal)

3 Video connector (blue)

4 Keyboard connector (purple)

5 Mouse connector (green)

6 iLO connector

7 10/100/1000 NIC 1

8 10/100/1000 NIC 2

9 PCI-X expansion slot 2 (64-bit/100 MHz 3.3V)

10 USB connector

11 Power supply bay 2 (optional)

12 Power supply bay 1

The ProLiant DL360 G4p is a variant of the ProLiant DL360 G4. It supports four Serial Attached
SCSI (SAS) or Serial ATA disk drives, a high-performance RAID controller, and an external SAS
connector for MSA50 storage connectivity. It also offers redundant fans, optional redundant
power supplies, online spare memory, and optional transportable battery-backed write cache.
Figure 2-13 and Figure 2-14 show the front and rear panels, respectively, of the ProLiant DL360
G4p.

Figure 2-13 ProLiant DL360 G4p Front Panel


The following table describes the callouts in Figure 2-13:

Item Description

1 Floppy drive

2 Hard drive bay 1

3 CD-ROM drive

4 Hard drive bay 0

5 Front USB port



Figure 2-14 HP ProLiant DL360 G4p Rear Panel

The following table describes the callouts in Figure 2-14:

Item Description

1 PCI-X expansion slot 1, 64-bit 133-MHz 3.3V (optional PCI Express slot 1, x8)

2 Serial connector (teal)

3 Video connector (blue)

4 Keyboard connector (purple)

5 Mouse connector (green)

6 iLO connector

7 PCI-X expansion slot 2, 64-bit 133 MHz 3.3V (optional PCI Express slot 2, x8)

8 10/100/1000 NIC 1

9 10/100/1000 NIC 2

10 USB connector

11 Power supply bay 2

12 Power supply bay 1

2.4.1 PCI Slot Assignments


The HP ProLiant DL360 G4 and the HP ProLiant DL360 G4p have two PCI slots on the rear of
the chassis. Table 2-14 summarizes the slot assignments when the server is used as a control
node or utility node.
Table 2-14 HP ProLiant DL360 G4 and DL360 G4p PCI Slot Assignments
Slot Assignment

1 PCI-X interconnect

2 64-bit 133 MHz PCI-X

2.4.2 Embedded Technologies


Embedded technologies and full-length slots offer configuration flexibility in an ultradense form
factor. The embedded technologies of an HP ProLiant DL360 include Integrated Lights-Out (iLO)
remote management, dual 10/100/1000 Gigabit Ethernet network interface cards (NICs), and the
Smart Array 5i+ controller.

Integrated Lights Out


The iLO device is the console power manager for this cluster. It enables you to power off and
power on groups of nodes directly from the console. The iLO hardware is self-contained; it is
powered as long as the machine is powered. The power manager also provides access to a node's
console. By default, an iLO uses the Dynamic Host Configuration Protocol (DHCP) to obtain its
IP address. The system software uses this feature when configuring the cluster.
Standard features of iLO include remote power on/off, a text interface for remote viewing and
management of the server's boot sequence, server logs, alert forwarding, diagnostics, group
administration, and security features, including 128-bit Secure Socket Layer (SSL) encryption.
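Because each iLO answers on the management LAN, site tooling can drive these power controls with a standard IPMI client. The following Python sketch only builds the command lines for a group of nodes; the node names, the `-ilo` hostname suffix, the default user name, and the choice of ipmitool as the client are illustrative assumptions, not part of the HP cluster software.

```python
# Sketch: build the ipmitool invocations a cluster console might issue to
# power-control a group of nodes through their iLO management ports.
# Hostnames, the "-ilo" suffix, and the user name are hypothetical.

def ilo_power_commands(nodes, action="on", user="Administrator"):
    """Return one ipmitool command line per node iLO address."""
    if action not in ("on", "off", "status", "cycle"):
        raise ValueError("unsupported power action: " + action)
    return [
        f"ipmitool -I lanplus -H {node}-ilo -U {user} chassis power {action}"
        for node in nodes
    ]

# Power on an example group of compute nodes.
for cmd in ilo_power_commands(["n1", "n2", "n3"], action="on"):
    print(cmd)
```

In practice the commands would be executed over the console network after the iLO addresses have been assigned by DHCP, and the operator would be prompted for the iLO password.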

Dual Embedded NICs


The ProLiant DL360 G3 and DL380 G3 include a dual-port embedded NC7781 PCI-X Gigabit
Ethernet NIC. The NC7781 is an auto-negotiating 10/100/1000 Mb/s network interface controller,
selecting either standard Ethernet (10 Mb/s), Fast Ethernet (100 Mb/s), or Gigabit Ethernet (1000
Mb/s). NIC1 is the bottom NIC in the DL360 G3.
The DL360 G4 and DL380 G4 include an embedded NC7782 dual-port PCI-X Gigabit NIC. The
NC7782 is an auto-negotiating 10/100/1000 Mb/s network interface controller, selecting either
standard Ethernet (10 Mb/s), Fast Ethernet (100 Mb/s), or Gigabit Ethernet (1000 Mb/s). NIC1 is
the left-hand NIC (the middle RJ-45 connector, between the iLO connector and NIC 2) in a DL360
G4. NIC1 is the right-hand NIC in a DL380 G4.

2.4.3 High-Availability Features


The HP ProLiant DL360 servers also offer the following high-availability features:
• Advanced ECC memory
• Redundant ROM
• Automatic Server Recovery-2 (ASR-2)
• Online spare memory

Advanced ECC memory


Advanced ECC memory detects and corrects single-bit memory errors. It can also correct 4-bit
memory errors that occur within a single DRAM chip on a DIMM. Advanced ECC memory is
standard on the HP ProLiant DL360 server. It provides more robust error detection and correction
capabilities than ECC memory.
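The principle behind single-bit correction can be illustrated with a Hamming(7,4) code, the textbook ancestor of the wider SECDED codes used on ECC DIMMs. The following Python sketch is didactic only; it does not reproduce HP's Advanced ECC scheme, which additionally corrects multi-bit errors confined to a single DRAM chip.

```python
# Didactic sketch of single-bit error correction with a Hamming(7,4) code.
# Real ECC DIMMs use wider codewords, but the principle is the same:
# parity bits locate a single corrupted bit, which is then flipped back.

def hamming74_encode(d):
    """Encode 4 data bits as a 7-bit codeword (positions 1..7)."""
    p1 = d[0] ^ d[1] ^ d[3]   # covers positions 1, 3, 5, 7
    p2 = d[0] ^ d[2] ^ d[3]   # covers positions 2, 3, 6, 7
    p3 = d[1] ^ d[2] ^ d[3]   # covers positions 4, 5, 6, 7
    return [p1, p2, d[0], p3, d[1], d[2], d[3]]

def hamming74_correct(c):
    """Correct up to one flipped bit and return the 4 data bits."""
    c = list(c)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    pos = s1 + 2 * s2 + 4 * s3   # syndrome = 1-based error position, 0 = clean
    if pos:
        c[pos - 1] ^= 1
    return [c[2], c[4], c[5], c[6]]

data = [1, 0, 1, 1]
word = hamming74_encode(data)
word[4] ^= 1                      # simulate a single-bit memory error
assert hamming74_correct(word) == data
```

Any one of the seven stored bits can be flipped and still be recovered, which is exactly the guarantee standard ECC makes per memory word.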

Redundant ROM
Redundant ROM reduces the risk associated with system upgrades: the existing ROM image is
saved as a backup during a BIOS upgrade, so the server can recover if the procedure fails.

Automatic Server Recovery


Automatic Server Recovery-2 (ASR-2) increases server availability by automatically restarting
the server after a system hang or shutdown, without operator intervention.

Online spare memory


Online spare memory is a high level of memory protection that complements Advanced ECC
support. With online spare memory enabled, the system still takes advantage of Advanced ECC
but one bank of memory is designated as a spare bank.
In this mode, the designated bank is not counted in the total available system memory. If the
correctable error threshold is exceeded by a DIMM in a particular bank of memory, that bank is
taken offline and the spare bank is activated instead. Once the original bank is deactivated, the
system stops using the memory that exhibited the failure. After switching to the spare bank of
memory, the system continues to monitor correctable errors against the threshold and logs any failures.
With online spare memory, degraded memory is automatically disengaged and a fresh set of
memory is used in its place. This brings the reliability of the system to the pre-failure level without
any service interruption and without compromising system availability.
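The failover policy described above can be sketched as a small state machine. In the following Python sketch, the bank names and the error threshold are illustrative assumptions; the real threshold and bank mapping live in the memory controller firmware.

```python
# Sketch of the online-spare failover policy: correctable errors are
# counted per bank, and once a bank exceeds the threshold it is taken
# offline and the reserved spare bank is activated in its place.
# Bank names and threshold are illustrative, not HP's actual values.

class OnlineSpareMemory:
    def __init__(self, banks, spare_bank, threshold=10):
        self.threshold = threshold
        self.errors = {b: 0 for b in banks}       # correctable-error counts
        self.spare_bank = spare_bank              # excluded from total memory
        self.active = [b for b in banks if b != spare_bank]
        self.failed_over = False

    def correctable_error(self, bank):
        """Record a corrected single-bit error; fail over if threshold exceeded."""
        self.errors[bank] += 1
        if (bank in self.active and not self.failed_over
                and self.errors[bank] > self.threshold):
            # Take the degraded bank offline and activate the spare.
            self.active.remove(bank)
            self.active.append(self.spare_bank)
            self.failed_over = True
        # Errors continue to be monitored and logged after the failover.

mem = OnlineSpareMemory(["bank0", "bank1", "spare"], "spare", threshold=3)
for _ in range(4):                # exceed the threshold on bank1
    mem.correctable_error("bank1")
assert mem.failed_over and mem.active == ["bank0", "spare"]
```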



2.4.4 Removing a Server from the Rack
To access internal components in the HP ProLiant DL360 G3 and G4 servers, you must first
power down the server and then remove it from the rack. All of the servers in the cluster are
secured to the rack on a sliding rail.
The front panel power button on the HP ProLiant DL360 G3 and G4, as shown in Figure 2-15,
toggles power between on and standby. If you press the Power On/Standby switch (callout 2)
to power down the server, the Power On/Off LED changes from green to amber, indicating
standby mode. In standby mode, the server removes power from most electronics and drives;
portions of the power supply and some internal circuitry remain active. If you press the Unit
Identification switch on the front panel (callout 1), an LED illuminates blue on the server front and rear panels.

Figure 2-15 Front Unit Identification LEDs



The rear Unit Identification LED, as shown in Figure 2-16, identifies the server being serviced.

Figure 2-16 Rear Unit Identification LEDs

To completely remove all power, disconnect the power cord first from the AC outlet and then
from the server.
When performing this task, heed the warnings and cautions listed in “Important Safety
Information” (page 23).
To remove a server from the rack without shutting down the entire cluster, as shown in
Figure 2-17, follow these steps:
1. Power down the server.
2. Disconnect all remaining cables on the server rear panel, including cables extending from
external connectors on expansion boards. Make note of which Ethernet and interconnect
cables are connected to which ports.
3. Loosen the thumbscrews securing the server to the rack (callout 1).
4. Slide the server out of the rack until the rail locks engage.
5. Press and hold the rail locks (callout 2), then extend the server until it clears the rack.
6. Remove the server from the rack and position it securely on a workbench or other solid
surface for stability and safety.



Figure 2-17 Sliding the Server from the Rack


2.4.4.1 Accessing Internal Components


To access internal components in the ProLiant DL360 servers, remove the access panel.
When performing this task, heed the warnings and cautions listed in “Important Safety
Information” (page 23).
To remove the access panel, as shown in Figure 2-18, take the following steps:
1. Remove the server from the rack.
2. Lift up on the hood latch (callout 1). The access panel slides toward the back of the chassis.
3. Pull up to remove the access panel (callout 2).

Figure 2-18 Removing the Access Panel


To replace the access panel, reverse steps 1 through 3.



2.4.5 Replacing a PCI Card
When replacing a PCI card, you need a grounding strap. The adapter card is sensitive to
electrostatic discharge. Take care to avoid mishandling, which could damage the card. Before
beginning installation, and without removing the adapter card from its antistatic bag, inspect
the product for any signs of obvious damage, such as chipped or loose components.
To replace a PCI card, follow these steps:
1. Attach the grounding strap to your wrist or ankle and to a metal part of the chassis.
2. Power off the server.
3. Remove the server from the rack.
4. Remove the cover from the server and locate the PCI slots.
5. Disconnect any cables connected to any existing expansion boards.
6. Loosen the PCI riser board assembly thumbscrews.
7. Lift the front of the assembly slightly and unseat the riser boards from the PCI riser board
connectors.

Note:
Be sure that all DIMM slot latches are closed to provide adequate clearance before removing
the PCI riser board assembly with a half-length expansion board.

8. Remove the riser card from the PCI connector side of the card.
9. Remove the new PCI adapter from its antistatic plastic bag. Handle the adapter only by the
edges. Do not touch the card-edge connectors.
10. Record the adapter serial number located on the card for future reference.
11. Insert the new PCI card into the riser card, as shown in Figure 2-19.

Figure 2-19 Inserting the PCI Riser Card


Important:
A 64-bit riser card must be used in a 64-bit PCI slot; likewise, a 32-bit riser card must be used
in a 32-bit PCI slot. Otherwise, the PCI interface might not be correctly detected and serious
performance irregularities might result.

12. Insert the riser card into a PCI slot. Handle the interface card gently, preferably by the front
panel or card edges. The front panel of the interface card is the metal plate or molding that
contains two LEDs and the port connector. Be sure the adapter is securely seated.
13. Secure the card in place.
14. Tighten the screws beneath the ports.
15. Replace the exterior cover of the server, and attach the cable between the card and the switch.
16. Power on the server and check that the card is detected.
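The check in step 16 can be scripted by searching the operating system's PCI device listing for the new adapter. In the following Python sketch, the captured `lspci` text and the "InfiniBand" match string are illustrative placeholders; substitute the vendor or device name that your adapter actually reports.

```python
# Sketch for step 16: confirm the replacement adapter appears in the PCI
# device listing. The sample output and the "InfiniBand" keyword are
# hypothetical -- use the name your card reports.

def card_detected(lspci_output, keyword):
    """Return the lines of `lspci` output that mention the adapter, if any."""
    return [line for line in lspci_output.splitlines()
            if keyword.lower() in line.lower()]

# Example against captured output (on the server itself you would run
# `lspci` via subprocess and pass its stdout here):
sample = """\
02:00.0 Ethernet controller: Broadcom NC7781 Gigabit Ethernet
05:00.0 InfiniBand: Mellanox Technologies MT25204 [InfiniHost III Lx]"""
matches = card_detected(sample, "InfiniBand")
assert len(matches) == 1
```

An empty result after power-on suggests the card is not seated correctly and the riser assembly should be reinspected.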



2.5 HP ProLiant DL360 G5
The HP ProLiant DL360 G5 supports up to two dual-core Intel Xeon 5100/5000 series processors
and offers up to six hot-pluggable SATA/SAS drive bays (depending on the model). There are
three model classifications of the HP ProLiant DL360 G5:
Performance The HP ProLiant DL360 G5 performance models feature two dual-core Intel
Xeon 5160/5150 processors and up to six active SATA/SAS drive bays.
Base Four base models feature one dual-core Intel Xeon 5160/5150/5140 processor
and offer four active SATA/SAS drive bays, with an option for six when using
an optional storage cable.
Entry Three entry models feature one dual-core Intel Xeon 5130/5120/5110
processor, with four active SATA/SAS drive bays standard.
Features of the HP ProLiant DL360 G5 are described in Table 2-15.
Table 2-15 HP ProLiant DL360 G5 Features
Feature Model/Type Description

Processor (one of the following, depending on model)
NOTE: Intel 5100/5000 series processors are 64-bit, dual-core, and support Hyper-Threading and Intel VT technology.

Performance and Base models: Dual-Core Intel Xeon 5160 Processor, 3.00 GHz, 1333 MHz Front Side Bus (FSB); Dual-Core Intel Xeon 5150 Processor, 2.66 GHz, 1333 MHz FSB; Dual-Core Intel Xeon 5140 Processor, 2.33 GHz, 1333 MHz FSB

Entry models: Dual-Core Intel Xeon 5130 Processor, 2.00 GHz, 1333 MHz FSB; Dual-Core Intel Xeon 5120 Processor, 1.86 GHz, 1066 MHz FSB; Dual-Core Intel Xeon 5110 Processor, 1.60 GHz, 1066 MHz FSB

Optional: Dual-Core Intel Xeon 5080 Processor, 3.73 GHz, 1066 MHz FSB

Base models: Dual-Core Intel Xeon 5060 Processor, 3.2 GHz, 1066 MHz FSB

Optional: Dual-Core Intel Xeon 5050 Processor, 3.0 GHz, 667 MHz FSB

Processor cache (one of the following, depending on model) 4 MB (2 x 2 MB) Level 2 cache (5000 series) or 4 MB (1 x 4 MB) Level 2 cache (5100 series)

Memory Type: PC2-5300 Fully Buffered DIMMs (DDR2-667); Standard (Entry or Base models): 1 GB (2 x 512 MB); Standard (Performance models): 2 GB (2 x 1 GB); Maximum: 32 GB (8 x 4 GB)

Storage controller Performance models: Smart Array P400i Controller with 256 MB battery-backed write cache (RAID 0/1/5/6); Base models: Smart Array P400i Controller with 256 MB cache (RAID 0/1/5); Entry models: Smart Array E200i Controller (64 MB cache; 128 MB BBWC optional)

Internal drive support Up to six small form factor hot-plug drive bays; up to 437 GB SAS with optional hard drives

Optical drives (Slimline drive bay) Optional CD-ROM, DVD-ROM, DVD/CD-RW Combo (standard in Performance models), DVD-RW, or Slimline floppy drive

Hard drives Performance and Base models: up to six Small Form Factor bays; Entry models: four Small Form Factor bays

Network controller Embedded Dual NC373i Multifunction Gigabit Network Adapters with TCP/IP Offload Engine, including support for Accelerated iSCSI through an optional ProLiant Essentials Licensing Kit

Expansion slots Two PCI Express expansion slots: one full-length, full-height slot; one low-profile slot

Front or rear accessible video port

USB 2.0 ports Four total: one front, one internal, and two rear accessible ports

Redundancy Multiple layers of fault tolerance through critical component redundancy (power supply and fan redundancy), mirrored memory, embedded RAID capability, and full-featured remote Lights-Out management

Management Integrated Lights-Out 2, featuring Integrated Remote Console with KVM over IP performance over shared network access; HP Power Meter and Power Regulator for ProLiant, delivering integrated power monitoring and server-level, policy-based power management with industry-leading energy efficiency and savings on system power and cooling costs; convenient slide-out Systems Insight Display for quick and easy front-view server diagnostics; ProLiant Essentials Foundation Pack standard, including HP Systems Insight Manager, SmartStart, SmartStart Scripting Toolkit, Subscriber's Choice, and ROM-Based Setup Utility (RBSU); Automatic Server Recovery (ASR); and status LEDs, including system health and UID

Power supply 700 W, optional (1 + 1) redundant power supply

Fan Nine fans ship standard; N+1 fan redundancy standard

2.5.1 HP ProLiant DL360 G5 Front Panel


The front of the HP ProLiant DL360 G5 is shown in Figure 2-20.

Figure 2-20 HP ProLiant DL360 G5 Front Panel





The following table describes the callouts in Figure 2-20:

Item Description

1 Hard drive bay 5 (an optional controller is required when the server is configured with six
hard drives)

2 Hard drive bay 6 (an optional controller is required when the server is configured with six
hard drives)

3 Multi-bay drive bay

4 USB port

5 Slide-out System Insight Display

6 Video connector

7 Hard drive bay 4

8 Hard drive bay 3

9 Hard drive bay 2

10 Hard drive bay 1

11 Slide-out asset tag

2.5.2 HP ProLiant DL360 G5 Rear Panel


Figure 2-21 shows the rear panel of the HP ProLiant DL360 G5.

Figure 2-21 HP ProLiant DL360 G5 Rear Panel



The following table describes the callouts in Figure 2-21.

Item Description

1 Low-profile PCI Express slot

2 Full-size PCI Express slot

3 Redundant power supply

4 Redundant power supply

5 iLO2 10/100 NIC

6 USB 2.0 port

7 USB 2.0 port

8 Video port

9 Serial port

10 PS/2 mouse connector

11 PS/2 keyboard connector


12 Multifunction Gigabit Ethernet NIC 1

13 Multifunction Gigabit Ethernet NIC 2

2.5.3 HP ProLiant DL360 G5 Front Panel LEDs


The HP ProLiant DL360 G5 front panel LEDs are shown in Figure 2-22.

Figure 2-22 ProLiant DL360 G5 Front Panel LEDs



The following table describes the callouts in Figure 2-22.


Table 2-16 ProLiant DL360 G5 Front Panel LEDs
Item Description Status

1 Power On/Standby button and system power LED Green = System is on.

Amber = System is shut down, but power is still applied.

Off = Power cord is not attached, a power supply failure has occurred, no power supplies are installed, facility power is not available, or the power button cable is disconnected.

2 Unit Identification (UID) button Blue = Identification is activated.

Flashing blue = System is being remotely managed.

Off = Identification is deactivated.

3 Internal health LED Green = System health is normal.

Amber = System health is degraded. To identify the component in a degraded state, see Section 2.5.5.

Red = System health is critical. To identify the component in a critical state, see Section 2.5.5.

Off = System health is normal (when in standby mode).

4 External health LED (power supply) Green = Power supply health is normal.

Amber = Power redundancy failure occurred.

Off = Power supply health is normal (when in standby mode).

5 NIC 1 link/activity LED Green = Network link exists.

Flashing green = Network link and activity exist.

Off = No link to network exists. If power is off, the front panel LED is not active; view the LEDs on the RJ-45 connector on the rear panel for status.

6 NIC 2 link/activity LED Green = Network link exists.

Flashing green = Network link and activity exist.

Off = No link to network exists. If power is off, the front panel LED is not active; view the LEDs on the RJ-45 connector on the rear panel for status.

2.5.4 HP ProLiant DL360 G5 Rear Panel LEDs and Buttons


The HP ProLiant DL360 G5 rear panel LEDs are shown in Figure 2-23.

Figure 2-23 ProLiant DL360 G5 Rear Panel LEDs


The following table describes the callouts in Figure 2-23.


Table 2-17 ProLiant DL360 G5 Rear Panel LEDs and Buttons
Item Description Status

1 iLO 2 NIC activity LED Green = Activity exists.

Flashing green = Activity exists.

Off = No activity exists.

2 iLO 2 NIC link LED Green = Link exists.

Off = No link exists.

3 10/100/1000 NIC 1 activity LED Green = Activity exists.

Flashing green = Activity exists.

Off = No activity exists.

4 10/100/1000 NIC 1 link LED Green = Link exists.

Off = No link exists.

5 10/100/1000 NIC 2 activity LED Green = Activity exists.

Flashing green = Activity exists.

Off = No activity exists.

6 10/100/1000 NIC 2 link LED Green = Link exists.

Off = No link exists.

7 UID button/LED Blue = Identification is activated.

Flashing blue = System is being managed remotely.

Off = Identification is deactivated.

8 Power supply 2 LED Green = Normal.

Off = System is off or power supply has failed.

9 Power supply 1 LED Green = Normal.

Off = System is off or power supply has failed.

2.5.5 System Insight Display


The HP ProLiant DL360 G5 has a slide-out System Insight Display, shown in Figure 2-24, which
enables diagnosis with the access panel installed.

Figure 2-24 System Insight Display (Actual)



Note:
The System Insight Display LEDs represent the board layout (see Figure 2-25).

Figure 2-25 System Insight Display Map

Table 2-18 describes the status of the System Insight Display LEDs.
Table 2-18 System Insight Display LEDs
LED Description Status

Online Spare Memory Green = Protection enabled

Flashing amber = Memory configuration error

Amber = Memory failure occurred

Off = No protection

Mirrored Memory Green = Protection enabled

Flashing amber = Memory configuration error

Amber = Memory failure occurred

Off = No protection

All Other LEDs Amber = Failure

Off = Normal

For additional information detailing the causes for the activation of these LEDs,
see Section 2.5.5.1.

2.5.5.1 HP Systems Insight Display LEDs and Internal Health LED Combinations
When the internal health LED on the front panel illuminates either amber or red, the server is
experiencing a health event. Combinations of illuminated system LEDs and the internal health
LED indicate system status.
The front panel health LEDs indicate only the current hardware status. In some situations, HP
Systems Insight Manager (SIM) may report server status differently than the health LEDs because
the software tracks more system attributes. Table 2-19 lists the System Insight Display and
internal health LED combinations.



Table 2-19 System Insight Display LED and Internal Health LED Combinations
HP Systems Insight Display LED and Color Internal Health LED Color Status

Processor failure, socket X (amber) Red One or more of the following conditions may exist:

– Processor in socket X has failed.

– Processor X is required, but not yet installed in the socket.

– Processor X is unsupported.

Processor failure, socket X (amber) Amber Processor in socket X is in a prefailure condition.

Processor failure, both sockets (amber) Red Processor types are mismatched.

PPM failure (amber) Red Integrated PPM has failed.

FBDIMM failure, slot X (amber) Red One or more of the following conditions may exist:

– FBDIMM in slot X has failed.

– FBDIMM in slot X is an unsupported type, and no valid memory exists in another bank.

FBDIMM failure, slot X (amber) Amber One or more of the following conditions may exist:

– FBDIMM in slot X has reached the single-bit correctable error threshold.

– FBDIMM in slot X is in a prefailure condition.

– FBDIMM in slot X is an unsupported type, but valid memory exists in another bank.

FBDIMM failure, all slots (amber) Red No valid or usable memory is installed in the system.

Over temperature (amber) Amber The health driver has detected a cautionary temperature level.

Over temperature (amber) Red The server has detected a critical temperature level.

Riser interlock (amber) Red The PCI riser board assembly is not seated properly.

Online spare memory (amber) Amber Bank X failed over to the online spare memory bank.

Fan module (amber) Amber A redundant fan has failed.

Fan module (amber) Red The minimum fan requirements are not being met in one or more of
the fan modules. One or more fans have failed or are missing.

2.5.6 PCI Slot Assignments


The HP ProLiant DL360 G5 has two PCI slots on the rear of the chassis. Table 2-20 summarizes
the slot assignments when the server is used as a control node, utility node, or compute
(application) node.
Table 2-20 HP ProLiant DL360 G5 PCI Slot Assignments
Slot Assignment (PCI Express) Assignment (PCI-X)

1 (Low-profile) x8 PCI Express x8 PCI Express (PCI Interconnect)

2 (Full-profile) x8 PCI Express (PCI Interconnect) 64-bit/133 MHz PCI-X (optional)



2.5.7 HP ProLiant DL360 G5 Embedded Technologies and Fault Tolerance
The HP ProLiant DL360 G5 is similar to prior generations in that it also has embedded
technologies, including:
• iLO 2, featuring Integrated Remote Console with KVM over IP performance over shared
network access, which is used to connect to the console network.
• Dual-embedded NICs
The HP ProLiant DL360 G5 also includes high-availability features and is optimized for fault
tolerance with:
• Fan redundancy
• Hot-pluggable power supply with optional redundancy
• Smart Array RAID controller with transportable battery-backed write cache and RAID 6
options
• Advanced ECC memory
• Online spare memory

2.5.7.1 Removing a ProLiant DL360 G5 Server from the Rack and Accessing Internal Components
Removal of a ProLiant DL360 G5 from the rack and access to internal components is the same
as in prior generations of ProLiant DL360 servers. Follow the procedures described in Section 2.4.4
(page 70) and Section 2.4.4.1 (page 71). If the DL360 G5 is installed with an optional cable
management arm, redundant power supplies, and hot-pluggable devices, the system might be
serviceable without bringing the server down. See the HP ProLiant DL360 G5 Server
Maintenance and Service Guide for more information on servicing the system's hot-pluggable
devices, such as a hot-pluggable disk or a power supply.

2.5.8 Replacing a PCI Card


When replacing a PCI card, you need a grounding strap. The adapter card is sensitive to
electrostatic discharge. Take care to avoid mishandling, which could damage the card. Before
beginning installation, and without removing the adapter card from its antistatic bag, inspect
the product for any signs of obvious damage, such as chipped or loose components.
To replace a PCI card, follow these steps:
1. Attach the grounding strap to your wrist or ankle and to a metal part of the chassis.
2. Power off the server.
3. Remove the server from the rack.
4. Remove the cover from the server and locate the PCI slots.
5. Disconnect any cables connected to any existing expansion boards.
6. Loosen the four PCI riser board assembly thumbscrews (see callout 1 in Figure 2-26).



Figure 2-26 PCI Riser Board Assembly

7. Lift the front of the assembly slightly and unseat the riser boards from the PCI riser board
connectors (see callout 2 in Figure 2-26).

Note:
Be sure that all of the DIMM slot latches are closed to provide adequate clearance before
removing the PCI riser board assembly with a half-length expansion board.

8. Remove the riser card from the PCI connector side of the card.
9. Remove the new PCI adapter from its antistatic plastic bag. Handle the adapter only by the
edges. Do not touch the card-edge connectors.
10. Record the adapter serial number located on the card for future reference.
11. Insert the new PCI adapter into the riser board, as shown in Figure 2-27.

Figure 2-27 Inserting a New PCI Adapter Into the PCI Riser Board



Important:
A 64-bit riser card must be used in a 64-bit PCI slot; likewise, a 32-bit riser card must be used
in a 32-bit PCI slot. Otherwise, the PCI interface might not be correctly detected and serious
performance irregularities might result.

12. Insert the riser card into a PCI slot (callout 1 in Figure 2-27). Handle the interface card gently,
preferably by the front panel or card edges. The front panel of the interface card is the metal
plate or molding that contains two LEDs and the port connector. Be sure the adapter is
securely seated.
13. Secure the card in place.
14. Tighten the screws beneath the ports.
15. Replace the exterior cover of the server, and attach the cable between the card and the switch.
16. Power on the server and check that the card is detected.

2.6 HP ProLiant DL380 G3 and G4


Table 2-21 compares the features of the HP ProLiant DL380 G3 and the HP ProLiant DL380 G4.
Table 2-21 ProLiant DL380 G3 and G4 Model Comparison
Feature HP ProLiant DL380 G3 HP ProLiant DL380 G4

Processor Two Intel Xeon 3.20 GHz/533 MHz FSB processors with 1 MB L3 cache Two Intel Xeon 3.20 GHz/800 MHz FSB processors with 1 MB L3 cache

Drive controller Integrated Smart Array 5i+ with 64 MB memory, optional BBWC Integrated Smart Array 6i Controller

NIC Embedded HP NC7781 dual-port PCI-X 10/100/1000 Gigabit NICs Embedded HP NC7782 dual-port PCI-X 10/100/1000 Gigabit NICs

Drive bays Up to six 1-inch hot-pluggable U320 hard drives Up to six 1-inch hot-pluggable U320 hard drives

Management Embedded iLO Embedded iLO

I/O slots Three available PCI-X slots: two hot-pluggable 64-bit, 100 MHz and one 64-bit, 133 MHz Two 64-bit, 100 MHz and one 64-bit, 133 MHz 3.3V PCI-X slots (hot-pluggable is optional)

Video Integrated ATI RAGE XL video controller with 8 MB SDRAM video memory Integrated ATI RAGE XL video controller with 8 MB SDRAM video memory

USB Two USB connections Three USB connections (one on the front and two on the back)

Fan Hot-pluggable fans with optional redundancy Hot-pluggable fans with optional redundancy

Chassis 2U form factor (8.90 cm/3.5 in) 2U form factor (8.90 cm/3.5 in)

Power 400 W with optional redundancy 575 W power supply with optional hot-pluggable redundant AC power supply
Figure 2-28 shows the front panel of the HP ProLiant DL380 G3.

2.6 HP ProLiant DL380 G3 and G4 83


Figure 2-28 HP ProLiant DL380 G3 front panel

The following table describes the callouts in Figure 2-28.

Item Description

1 Tape drive bay or hard drive and tape drive blank

2 Diskette drive

3 Hard drive bays

4 CD-ROM drive

Figure 2-29 shows the front panel of the HP ProLiant DL380 G4.

Figure 2-29 HP ProLiant DL380 G4 Front Panel



The following table describes the callouts in Figure 2-29.

Item Description

1 USB port

2 Bay for tape drive

3 Diskette drive

4 Six hard drive bays

5 CD-ROM drive

The rear panel of the HP ProLiant DL380 G3 and G4 is identical, as shown in Figure 2-30.

84 Xeon Processor Servers


Figure 2-30 HP ProLiant DL380 G3 and G4 Rear Panel

The following table describes the callouts in Figure 2-30.

Item Description Connector Color

1 Hot-pluggable PCI-X expansion slot 3 (bus 6), 64-bit/100 MHz, 3.3V  N/A

2 Hot-pluggable PCI-X expansion slot 2 (bus 6), 64-bit/100 MHz, 3.3V  N/A

3 VHDCI SCSI connector (port 1)  N/A

4 Non-hot-pluggable PCI-X expansion slot 1 (bus 3), 64-bit/133 MHz, 3.3V  N/A

5 Serial connector  Teal

6 Video connector  Blue

7 iLO connector  N/A

8 USB connectors  Black

9 Mouse connector  Green

10 NIC 2 connector  N/A

11 NIC 1 connector  N/A

12 Keyboard connector  Purple

13 Power cord connector  N/A

The HP ProLiant DL380 servers share the same embedded technologies as the HP ProLiant DL360
servers, as described in Section 2.4.2 (page 68).

2.6.1 PCI Slot Assignments


The ProLiant DL380 has three PCI slots on the rear of the chassis. Table 2-22 summarizes the slot
assignments.
Table 2-22 HP ProLiant DL380 PCI Slot Assignments

Slot 1, PCI-X bus A: PCI-X interconnect. 64-bit, 133 MHz PCI-X.

Slot 2, PCI-X bus B: optional 2 Gb/s Fibre Channel HBA. 64-bit, 133 MHz PCI-X.

Slot 2, PCI Express bus A: PCI Express interconnect. x4.

Slot 3, PCI-X bus B: PCI-X slot. 64-bit, 133 MHz PCI-X.

Slot 3, PCI Express bus B: PCI Express. x4.
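These bus assignments can be cross-checked on a running Linux node: `dmidecode -t slot` (run as root) reports each physical slot's designation and bus address. The sketch below parses a canned sample of that output so it runs anywhere; the designations and bus addresses are illustrative.

```shell
# Pair each "Designation:" line with the "Bus Address:" line that follows it.
# Sample text mimics `dmidecode -t slot` output; values are examples only.
sample='Designation: PCI-X Slot 1
Bus Address: 0000:03:01.0
Designation: PCI-X Slot 2
Bus Address: 0000:06:01.0'
result=$(echo "$sample" | paste -d' ' - - \
  | sed 's/Designation: //; s/Bus Address: /-> /')
echo "$result"
```

Matching the reported bus addresses against `lspci` output tells you which physical slot each installed card occupies.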

2.6.2 Removing the Server from the Rack


To access internal components in the HP ProLiant DL380 G3 or G4 server, you must first power
down the server and then remove it from the rack. All of the servers in the cluster are secured
to the rack on a sliding rail.
The front panel power button on the HP ProLiant DL380 G3 and G4 toggles power between On
and Standby. If you press the Power On/Standby switch to power down the server, the Power
On/Off LED changes from green to amber, indicating standby mode. In Standby, the server
removes power from most electronics and drives; portions of the power supply and some internal
circuitry remain active. If you press the Unit Identification switch on the front panel, an LED
illuminates blue on the server front and rear panels.
The rear Unit Identification LED identifies the server being serviced.
To completely remove all power, disconnect the power cord first from the AC outlet and then
from the server.
When performing these tasks, heed the warnings and cautions listed in “Important Safety
Information” (page 23).
To remove a server from the rack, follow these steps:
1. Power down the server following the steps previously outlined.
2. Disconnect all remaining cables on the server rear panel, including cables extending from
external connectors on expansion boards. Make note of which Ethernet and interconnect
cables are connected to which ports.
3. Loosen the thumbscrews securing the server to the rack.
4. Slide the server out of the rack until the rail locks engage.
5. Press and hold the rail locks, then extend the server until it clears the rack.
6. Remove the server from the rack and position it securely on a workbench or other solid
surface for stability and safety.

2.6.2.1 Accessing Internal Components


To access internal components in the HP ProLiant DL380 servers, remove the access panel.
When performing this task, heed the warnings and cautions listed in “Important Safety
Information” (page 23).
To remove the access panel, as shown in Figure 2-18, follow these steps:
1. Remove the server from the rack, as described in Section 2.6.2.
2. Lift up on the hood latch. The access panel slides toward the back of the chassis.
3. Pull up to remove the access panel.
To replace the access panel, reverse steps 1 through 3.

2.6.3 Replacing a PCI Card


When replacing a PCI card, you need a grounding strap. The adapter card is sensitive to
electrostatic discharge. Take care to avoid mishandling, which could damage the card. Before
beginning installation, and without removing the adapter card from its antistatic bag, inspect
the product for any signs of obvious damage, such as chipped or loose components.
To replace an HP ProLiant DL380 PCI card, follow these steps:



1. Attach the grounding strap to your wrist or ankle and a metal part of the chassis.
2. Power off the server.
3. Remove the server from the rack.
4. Remove the cover from the server, and locate the PCI riser cage.
5. Disconnect any cables connected to any existing expansion boards.
6. Open the PCI riser cage door, as shown in Figure 2-31.

Figure 2-31 ProLiant DL380 PCI Riser Cage Door


7. Remove the PCI riser cage door latch, as shown in Figure 2-32.

Figure 2-32 ProLiant DL380 PCI Riser Cage Door Latch

8. Remove the PCI riser cage.


• Lift the PCI riser cage thumbscrews (see callout 1 in Figure 2-33) and turn them
counterclockwise (see callout 2 in Figure 2-33).
• Lift the cage upward (see callout 3 in Figure 2-33).



Figure 2-33 Removing the HP ProLiant DL380 PCI Riser Cage


9. Unlock the PCI retaining clip, as shown in Figure 2-34.

Figure 2-34 Unlocking the HP ProLiant DL380 PCI Retaining Clip


10. Remove the expansion board, as shown by callouts 1 and 2 in Figure 2-35.



Figure 2-35 Removing the HP ProLiant DL380 Expansion Board

Caution:
To prevent improper cooling and thermal damage, do not operate the server unless all PCI
slots have either an expansion slot cover or an expansion board installed.

11. To reinsert the PCI card cage, reverse the removal procedure.

2.7 HP ProLiant DL380 G5


The HP ProLiant DL380 G5 is a 2U rack-mount server with up to two Intel Xeon 5000 series
processors with Hyper-Threading and Intel VT technology. The HP ProLiant DL380 G5 has
improved management features, including iLO 2, the Systems Insight Display, Smart Power and
thermal management, optional dual power supplies, and eight fans standard. Table 2-23 lists
the features of the HP ProLiant DL380 G5.
Table 2-23 HP ProLiant DL380 G5 Features

Processors: Dual-core Intel Xeon 5000 and 5100 series processors with 4-MB Level 2 cache

Storage controller: Smart Array P400 Controller with 256-MB cache (RAID 0, 1, 5); optional battery (adds write cache and RAID 6); optional 512-MB write cache (adds write cache and RAID 6). Smart Array E200 Controller with 64-MB cache (RAID 0, 1); optional 128-MB write cache (adds RAID 5 capability)

Cache memory (depending on model): 4 MB (2 x 2 MB) Level 2 cache (5000 series); 4 MB (1 x 4 MB) Level 2 cache (5100 series)

NIC: Embedded dual NC373i Multifunction Gigabit Network Adapters with TCP/IP Offload Engine, including support for Accelerated iSCSI through an optional ProLiant Essentials Licensing Kit

Max drive bays: Eight SFF (Small Form Factor) hot-pluggable drive bays to support SAS (Serial Attached SCSI) and SATA (Serial ATA) drives

Remote management: Integrated Lights-Out 2 (iLO 2)

Expansion slots: Four available PCI Express slots; optional mixed PCI-X/PCI Express configurations available

USB ports: Five total USB 2.0 ports: two in front, two in back, one internal

Redundancy: 12 fully redundant hot-plug fans; hot-pluggable power supply with optional redundancy (included in Performance models)

Chassis: 2U

Power: 800 watt, CE Mark compliant (optional hot-pluggable AC redundant power supply)

2.7.1 HP ProLiant DL380 G5 Front Panel


Figure 2-36 shows the front panel of the HP ProLiant DL380 G5.

Figure 2-36 HP ProLiant DL380 G5 Front Panel



The following table describes the callouts in Figure 2-36:

Item Description

1 Media drive bay (IDE/diskette multi-bay)

2 Video connector

3 USB connectors (2)

4 Systems Insight Display

5 Hard drive bays – 1 through 8

6 Quick release levers (2)

2.7.2 HP ProLiant DL380 G5 Rear Panel


Figure 2-37 shows the rear panel of the HP ProLiant DL380 G5.



Figure 2-37 HP ProLiant DL380 G5 Rear Panel

The following table describes the callouts in Figure 2-37:

Item Description

1 T-10/T-15 Torx screwdriver

2 Expansion slot 3 (PCI Interconnect)

3 Expansion slot 4

4 Expansion slot 5

5 Expansion slot 2

6 External option blank

7 Expansion slot 1

8 NIC 2 connector

9 NIC 1 connector

10 Power supply bay 2

11 Power cord connector (2) – one for each power supply

12 Power supply bay 1

13 iLO 2 connector

14 Video connector

15 USB connectors (2)

16 Serial connector

17 Mouse connector

18 Keyboard connector

2.7.3 ProLiant DL380 G5 Front LEDs


Figure 2-38 shows the ProLiant DL380 G5 front panel LEDs.



Figure 2-38 HP ProLiant DL380 G5 Front LEDs


Table 2-24 describes the callouts in Figure 2-38.


Table 2-24 HP ProLiant DL380 G5 Front Panel LEDs

Item Description Status

1 UID LED button  Blue = Activated. Flashing = System being remotely managed. Off = Deactivated.

2 Internal health LED  Green = Normal. Amber = System degraded; to identify the component in a degraded state, see Section 2.7.5. Red = System critical; to identify the component in a critical state, see Section 2.7.5.

3 External health LED (power supply)  Green = Normal. Amber = Power redundancy failure; to identify the component in a degraded state, see Section 2.7.5. Red = Critical power supply failure; to identify the component in a critical state, see Section 2.7.5.

4 NIC 1 link/activity LED  Green = Network link. Flashing = Network link and activity. Off = No link to network; if power is off, view the rear panel RJ-45 LEDs for status.

5 NIC 2 link/activity LED  Green = Network link. Flashing = Network link and activity. Off = No link to network; if power is off, view the rear panel RJ-45 LEDs for status.

6 Power On/Standby button/system power LED  Green = System on. Amber = System shut down, but power still applied. Off = Power cord not attached or power supply failure.

2.7.4 ProLiant DL380 G5 Rear LEDs


The ProLiant DL380 G5 rear panel LEDs are shown in Figure 2-39.



Figure 2-39 HP ProLiant DL380 G5 Rear LEDs

The following table describes the callouts in Figure 2-39.


Table 2-25 HP ProLiant DL380 G5 Rear LEDs
Item Description Status

1 Power supply LEDs Green = Normal

Off = System is off or power supply has failed

2 NIC activity LED Green = Network activity

Flashing = Network activity

Off = No network activity

3 NIC link LED Green = Network link

Off = No network link

4 iLO 2 activity LED Green = Network activity

Flashing = Network activity

Off = No network activity

5 iLO 2 link LED Green = Network link

Off = No network link

6 UID LED button Blue = Activated

Flashing = System being remotely managed

Off = Deactivated

2.7.5 Systems Insight Display LEDs


The Systems Insight Display is on the front panel of the ProLiant DL380 G5. Figure 2-40 shows
the ProLiant DL380 G5 Systems Insight Display LEDs and Table 2-26 lists the description and
status of the Systems Insight Display LEDs.



Figure 2-40 HP ProLiant DL380 G5 Systems Insight Display
(The display's LEDs are labeled ONLINE SPARE, MIRROR, POWER SUPPLY (one per supply), PCI
RISER CAGE, DIMMS, PPM, PROC, INTERLOCK, FANS, and OVER TEMP, arranged to match the
system board layout.)

Table 2-26 Systems Insight Display LEDs Status


LED Description Status

Online spare Off = No protection

Green = Protection enabled

Amber = Memory failure occurred

Flashing amber = Memory configuration error

Mirror Off = No protection

Green = Protection enabled

Amber = Memory failure occurred

Flashing amber = Memory configuration error

All other LEDs Off = Normal

Amber = Failure

Note:
The HP Systems Insight Display LEDs represent the system board layout. For more information
on HP Systems Insight Display and Internal Health LED combinations, see the ProLiant DL380
Generation 5 Server Maintenance and Service Guide.
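When the Insight Display shows an amber LED, the management processor's event log usually names the failing component. Assuming IPMI-over-LAN is enabled on iLO 2, a command such as `ipmitool -I lanplus -H <ilo2-addr> -U <user> sel list` retrieves the log; the sketch below filters a canned excerpt (the entries, host, and user are illustrative).

```shell
# Canned SEL excerpt standing in for `ipmitool ... sel list` output.
sel='1 | 03/17/2008 | 10:02:11 | Memory | Correctable ECC | Asserted
2 | 03/17/2008 | 10:02:12 | Fan #3 | Lower Critical going low | Asserted'
fan_events=$(echo "$sel" | grep -i 'fan')
echo "$fan_events"
```

Filtering the log by subsystem (Memory, Fan, Power Supply) narrows an amber LED down to a specific field-replaceable unit.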

2.7.6 PCI Slot Assignments


The HP ProLiant DL380 G5 has four available PCI Express expansion slots standard and an
optional mixed PCI-X/PCI Express configuration. The following tables describe the slot assignments:
• Table 2-27 summarizes the default PCI Express slot assignments.
• Table 2-28 summarizes the mixed PCI Express/PCI-X slot assignments.



Table 2-27 HP ProLiant DL380 G5 PCI Express Slot Assignments

Slot 1, default PCI Express bus A: PCI Express (used with SAS controller). x4.

Slot 2, default PCI Express bus B: PCI Express. x4.

Slot 3, default PCI Express bus C: PCI Express. x4.

Slot 4, default PCI Express bus D: PCI Express. x8.

Slot 5, default PCI Express bus E: PCI Express (InfiniBand interconnect). x8.

Table 2-28 HP ProLiant DL380 G5 Mixed PCI Express/PCI-X Slot Assignments

Slot 1, bus A: PCI Express (used with SAS controller). x4.

Slot 2, bus B: PCI Express slot. x4.

Slot 3, bus C: PCI Express (interconnect). x8.

Slot 4, bus D: PCI-X. 64-bit/133 MHz.

Slot 5, bus E: PCI-X (interconnect). 64-bit/133 MHz.

2.7.7 New DL380 PCI Slot Assignments (as of March 17, 2008)
The following table describes the new PCI slot assignments for the DL380 G5 as of March 17, 2008.
There are no performance issues with the previous slot assignments, and it is not necessary to move
the HCAs in previously sold configurations. Use Table 2-28 when a PCI-X HCA is used, if
necessary.
For the appropriate cable management bracket mounting holes for the new PCI slot assignments,
see Section 2.7.8.
Table 2-29 HP ProLiant DL380 G5 PCI Express Slot Assignments (as of March 17, 2008)
Slot Assignment Comment

1 PCI Express x4

2 PCI Express x4

3 PCI Express x4

4 PCI Express (Interconnect) x8

5 PCI Express x8
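With both x4 and x8 slots present, it is worth confirming that an x8 HCA actually trained at x8. On Linux, `lspci -vv -s <bus:dev.fn>` reports the negotiated width on its LnkSta line; the check below runs against a canned value (the speed and width shown are illustrative).

```shell
# Canned LnkSta line; on the server, extract the real one with:
#   lspci -vv -s <bus:dev.fn> | grep LnkSta
lnksta='LnkSta: Speed 2.5GT/s, Width x8'
case "$lnksta" in
  *'Width x8'*) width_msg="link trained at x8" ;;
  *)            width_msg="unexpected width, check slot choice: $lnksta" ;;
esac
echo "$width_msg"
```

A card reporting a narrower width than expected usually means it is seated in one of the x4 slots.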

2.7.8 Rack Mounting Holes for ProLiant DL380 G5 Cable Management Brackets
Each HP ProLiant DL380 G5 server requires 2U of rack space referred to as “upper U” and “lower
U”. Each U has three holes: top, middle, and bottom. Different cable management brackets and
the rack-mounting holes used to mount them are necessary for the ProLiant DL380 G5 depending
on the PCI slot and the type of HCA used in HP Cluster Platform configurations. Figure 2-41
shows the location of the holes used to mount the cable management brackets when either PCI
slot 3 or PCI slot 5 is used in the ProLiant DL380 G5.
When a PCI-X HCA is used in slot 5 of the ProLiant DL380 G5, mount the appropriate cable
management bracket in the middle hole (callout 3 in Figure 2-41) of the upper U location (callout
1 in Figure 2-41).



When a PCI Express HCA is used in slot 3 of the ProLiant DL380 G5, mount the appropriate
cable management bracket in the top hole (callout 5 in Figure 2-41) of the lower U location (callout
2 in Figure 2-41).
When a PCI Express HCA is used in slot 4 of the ProLiant DL380 G5, mount the appropriate
cable management bracket in the bottom hole (callout 4 in Figure 2-41) of the upper U location
(callout 1 in Figure 2-41).

Figure 2-41 Rack Mounting Holes for ProLiant DL380 G5 Cable Management Brackets


The following list describes the callouts in Figure 2-41:


1. Upper U location (top, middle, and bottom)
2. Lower U location (top, middle, and bottom)
3. Middle hole of upper U location
4. Bottom hole of upper U location
5. Top hole of lower U location
6. Two U locations — 2U is required to rack mount the HP ProLiant DL380 G5

2.7.9 Removing the Server from the Rack


To access internal components in the ProLiant DL380 G5, you must first power down the server
and then remove it from the rack. All of the servers in the cluster are secured to the rack on a
sliding rail.
The front panel power button on the ProLiant DL380 G5 toggles power between On and Standby.
If you press the Power On/Standby switch to power down the server, the Power On/Off LED
changes from green to amber, indicating standby mode. In Standby mode, the server removes
power from most electronics and drives; portions of the power supply and some internal circuitry
remain active. If you press the Unit Identification switch on the front panel, an LED illuminates
blue on the server front and rear panels.
The rear Unit Identification LED identifies the server being serviced.
To completely remove all power, disconnect the power cord first from the AC outlet and then
from the server.
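Since iLO 2 supports IPMI 2.0, the node's power state can be confirmed from the management network before you go to the rack, for example with `ipmitool -I lanplus -H <ilo2-addr> -U <user> chassis power status` (the address and user are placeholders). The sketch below acts on a canned response:

```shell
# Canned response standing in for the ipmitool query above.
status='Chassis Power is on'
if [ "$status" = 'Chassis Power is on' ]; then
  msg="node still on: press Power On/Standby (or issue 'chassis power soft') first"
else
  msg="node is in standby or off; safe to disconnect AC"
fi
echo "$msg"
```

Remember that even in standby some circuitry stays live, so AC must still be disconnected before servicing.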
When performing these tasks, heed the warnings and cautions listed in “Important Safety
Information” (page 23).
To remove a server from the rack, follow these steps:



1. Power down the server following the steps previously outlined.
2. Disconnect all remaining cables on the server rear panel, including cables extending from
external connectors on expansion boards. Make note of which Ethernet and interconnect
cables are connected to which ports.
3. Loosen the thumbscrews securing the server to the rack.
4. Slide the server out of the rack until the rail locks engage.
5. Press and hold the rail lock and extend the server until it clears the rack.
6. Remove the server from the rack and position it securely on a workbench or other solid
surface for stability and safety.

2.7.9.1 Accessing Internal Components


To access internal components in the HP ProLiant DL380 G5 servers, remove the access panel.
When performing this task, heed the warnings and cautions listed in “Important Safety
Information” (page 23).
To remove the access panel, as shown in Figure 2-18 (page 71), follow these steps:
1. Remove the server from the rack, as described in Section 2.7.9 (page 96).
2. Lift up on the hood latch. The access panel slides toward the back of the chassis.
3. Pull up to remove the access panel.
To replace the access panel, reverse steps 1 through 3.

2.7.10 Replacing a PCI Card


When replacing a PCI card, you need a grounding strap. The adapter card is sensitive to
electrostatic discharge. Take care to avoid mishandling, which could damage the card. Before
beginning installation, and without removing the adapter card from its antistatic bag, inspect
the product for any signs of obvious damage, such as chipped or loose components.

Caution:
To prevent damage to the server or expansion boards, power down the server and remove all
AC power cords before removing or installing the PCI riser cage.

To replace an HP ProLiant DL380 G5 PCI card in slots 3 through 5, follow these steps:
1. Attach the grounding strap to your wrist or ankle and a metal part of the chassis.
2. Power off the server.
3. Remove the server from the rack.
4. Remove the cover from the server, and locate the PCI riser cage.
5. Disconnect any cables connected to any existing expansion boards.
6. Press the blue button to release the black knobs (see callout 1 in Figure 2-42).
7. Turn the black knobs counterclockwise (see callout 2 in Figure 2-42).
8. Lift the PCI riser cage upward and remove it (see callout 3 in Figure 2-42).



Figure 2-42 Removing the ProLiant DL380 G5 PCI Riser Cage


9. Remove the expansion board, as shown in Figure 2-43.

Figure 2-43 Removing the ProLiant DL380 G5 Expansion Board

Caution:
To prevent improper cooling and thermal damage, do not operate the server unless all PCI
slots have either an expansion slot cover or an expansion board installed.

10. To reinsert the PCI riser card cage, reverse the removal procedure.



3 Opteron Processor Servers
Several servers based on the AMD Opteron processor are supported in HP Cluster Platform
solutions. This chapter presents the following information:
• HP ProLiant DL145 G1 and G2 (Section 3.1)
• HP ProLiant DL145 G3 (Section 3.2)
• HP ProLiant DL165 G5 (Section 3.3)
• HP ProLiant DL385 (Section 3.4)
• HP ProLiant DL385 G2 (Section 3.5)
• HP ProLiant DL385 G5 and G5p (Section 3.6)
• HP ProLiant DL585 (Section 3.7)
• HP ProLiant DL585 G2 (Section 3.8 )
• HP ProLiant DL585 G5 (Section 3.9)
Chapter 4 describes the HP ProLiant server blades, also based on the Opteron processor, that
are supported in HP Cluster Platform solutions.

3.1 HP ProLiant DL145


The HP ProLiant DL145 G1 and G2 are supported servers in HP Cluster Platform solutions. The
1U HP ProLiant DL145 server supports up to two AMD Opteron processors and can be used as
an application node, utility node, or control node.
All HP ProLiant DL145 servers in an HP Cluster Platform solution are configured with two
processors. Memory is two-way interleaved and must be added in pairs. Ideally, memory is
configured identically for each CPU board, up to 16 GB (8 x 2 GB). In most cases, the PCI-X slot
is used for the interconnect host adapter. For configurations in which the HP ProLiant DL145 is
the control node and is not connected to the system interconnect, an alternative I/O option can
be installed.
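The paired-DIMM rule can be sanity-checked from the operating system: `dmidecode -t memory` (run as root) lists every memory socket and its size, so an odd count of populated sockets flags a mispopulated bank. The sketch below parses a canned excerpt (the sizes are illustrative).

```shell
# Canned excerpt standing in for `dmidecode -t memory` output.
dimms='Size: 1024 MB
Size: 1024 MB
Size: No Module Installed
Size: No Module Installed'
populated=$(echo "$dimms" | grep -c '^Size: [0-9]')
if [ $((populated % 2)) -eq 0 ]; then
  echo "DIMMs installed in pairs ($populated populated)"
else
  echo "odd DIMM count ($populated): check bank population"
fi
```

An even count does not by itself prove the pairs are matched per CPU board, but an odd count always indicates a configuration error.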
The HP ProLiant DL145 also has an integrated ProLiant 100-series management processor (MP)
that operates independently from the operating system and is powered by auxiliary power. It
provides system administrators with access to the server at any time, even prior to an operating
system being installed on the server. This MP provides a text remote console and a command-line
interface (CLI). The ProLiant 100-series MP is Intelligent Platform Management Interface (IPMI)
v1.5 compliant. In addition to IPMI, the HP ProLiant DL145 G2 has ProLiant Lights Out 100i
Remote Management.
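Because the 100-series MP is IPMI v1.5 compliant, the plain `lan` interface (rather than `lanplus`) is the one to use with ipmitool. The wrapper below is a hypothetical helper: it only prints the command it would run, so it is safe to try without hardware, and the `mp-` host-naming scheme and `admin` user are assumptions, not part of the product.

```shell
# Hypothetical helper: builds an ipmitool invocation for a node's MP.
# It echoes the command instead of executing it; drop the echo to run it.
mp_cmd() {
  node="$1"; shift
  echo ipmitool -I lan -H "mp-${node}" -U admin "$@"
}
cmd=$(mp_cmd n001 chassis status)
echo "$cmd"
```

The same wrapper works for power control subcommands such as `chassis power status` once the echo is removed.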
There are three embedded network ports on an HP ProLiant DL145. The iLO port connects to
the console network switch and is used for server management. The NIC 1 port is used for the
administrative network, and the NIC 2 port is used for the Gigabit Ethernet system interconnect.
See Figure 3-2 and Figure 3-4 to locate the ports.
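With three rear ports serving three different networks, it is easy to swap NIC 1 and NIC 2 when cabling. On a Linux node, `ethtool -p <iface> 10` blinks a port's LED for 10 seconds so you can find it on the rear panel; the interface names below are examples. The loop prints the commands rather than running them:

```shell
# Print the LED-blink command for each interface; remove the echo wrapper
# to actually blink the ports. Interface names are examples.
cmds=$(for nic in eth0 eth1; do
  echo "ethtool -p $nic 10"
done)
echo "$cmds"
```

Blinking each port in turn confirms which OS interface maps to the administrative network and which to the Gigabit Ethernet interconnect.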
Table 3-1 describes the characteristics of the HP ProLiant DL145 G1 and the HP ProLiant DL145
G2 servers.
Table 3-1 ProLiant DL145 G1 and G2 Comparison

Processor: G1: AMD Opteron up to 2.4 GHz, 1-MB L2 cache, 800 MHz HyperTransport. G2: AMD Opteron up to 2.6 GHz with future dual-core support, 1-MB L2 cache, 1 GHz HyperTransport.

Chipset: G1: AMD 8111 and AMD 8131. G2: AMD 8132 for PCI-X and NVIDIA CK8-04 for PCI Express.

RAM std/max: G1 and G2: 1 GB-2 GB standard / 16 GB maximum.

Memory technology: G1: PC2700 ECC DDR SDRAM @ 333 MHz. G2: PC3200 ECC DDR1 SDRAM @ 400 MHz.

Drive controller: G1 and G2: integrated dual-channel ATA.

RAID controller: G1 and G2: optional PCI card.

NIC: G1: integrated dual Broadcom 5704 10/100/1000. G2: integrated dual Broadcom 5721 10/100/1000.

Hard drive bays: G1: two non-hot-pluggable ATA or SCSI. G2: two non-hot-pluggable SATA or SCSI.

Slots: G1: one 133 MHz PCI-X. G2: two 133 MHz PCI-X (one full-length and one low-profile); optionally, one PCI Express x16 in place of the full-length PCI-X slot.

Management: G1: IPMI 2.0. G2: HP ProLiant Lights Out 100i Remote Management and IPMI 2.0.

Video: G1: integrated ATI Rage XL @ 8 MB. G2: integrated NVIDIA N11 @ 16 MB.

Removable media: G1: two USB ports (one front and one rear). G2: four USB ports (two front and two rear).

Figure 3-1 shows the front panel of the HP ProLiant DL145 G1, and Figure 3-2 shows its rear
panel.

Figure 3-1 HP ProLiant DL145 G1 Front Panel



The following table describes the callouts in Figure 3-1.

Item Description

1 LEDs

2 Power button

3 USB port

4 Hard drive bay 1

5 Hard drive bay 2

6 Media bay

Figure 3-2 HP ProLiant DL145 G1 Rear Panel





The following table describes the callouts in Figure 3-2.

Item Description

1 Mouse

2 Keyboard

3 Video

4 USB

5 Dedicated Management NIC

6 NIC 1

7 NIC 2

8 COM1/management processor

Figure 3-3 shows the front panel of the HP ProLiant DL145 G2, and Figure 3-4 shows the rear
panel of the HP ProLiant DL145 G2. Among other features, the G2 model has a new bezel design
with unit ID (UID) light diagnostics for easy system identification in large rack-mount
environments.

Figure 3-3 HP ProLiant DL145 G2 Front Panel



The following table describes the callouts in Figure 3-3.

Item Description

1 Hard disk drive bays

2 Optical media device bay

3 Unit Identification button with LED indicator (blue)

4 System health LED indicator (amber)

5 Activity/link status LED indicators for NIC 1 and NIC 2 (green)

6 Hard disk drive activity LED indicator (green)

7 USB 2.0 ports

8 Power button with LED indicator (green and amber)

9 Thumbscrew for the front bezel



Figure 3-4 HP ProLiant DL145 G2 Rear Panel

The following table describes the callouts in Figure 3-4.

Item Description

1 Ventilation holes

2 Thumbscrew for the access panel

3 Thumbscrews for the PCI riser board assembly

4 GbE LAN ports for NIC1 (RJ-45)

5 GbE LAN ports for NIC2 (RJ-45)

6 Low profile 64-bit/133 MHz PCI-X riser board slot cover

7 USB 2.0 ports (black)

8 Video port (blue)

9 Serial port (teal)

10 Standard height/full-length 64-bit/133 MHz PCI-X riser board slot cover. You can convert
the PCI-X functionality of this slot to PCI Express using the PCI Express riser board option
kit.

11 PS/2 Keyboard port (purple)

12 PS/2 mouse port (green)

13 10/100 Mbps LAN port for IPMI management (RJ-45)

14 Power supply cable socket

The front and rear panels of the HP ProLiant DL145 G2 are identical to the front and rear panels
of the HP ProLiant DL140 G2. However, the DL145 G2 has Opteron processors, and the DL140
G2 has Xeon processors.

3.1.1 HP ProLiant DL145 G2 PCI Slot Assignments


The ProLiant DL145 G2 has two PCI slots on the rear of the chassis. Table 3-2 summarizes the
slot assignments.
Table 3-2 HP ProLiant DL145 G2 PCI Slot Assignments
Slot Assignment

1 PCI-X interconnect

2 64-bit 133 MHz PCI-X

3.1.2 Removing an HP ProLiant DL145 from a Rack


To access internal components in any HP ProLiant DL145 model, you must first shut down power
to the server and remove it from the rack. All of the servers in the cluster are secured to the rack



on a sliding rail. This section describes how you shut down power, remove a server from the
rack, and access internal components.
When performing these tasks, heed the warnings and cautions listed in “Important Safety
Information” (page 23).
The front panel power button on the HP ProLiant DL145 toggles between On and Off. If you
press the Power button on an HP ProLiant DL145 to power down the server, the LED turns off.
To completely remove all power from the server, disconnect the power cord first from the AC
outlet and then from the server.
To remove an HP ProLiant DL145 from the rack, follow these steps:
1. Power down the server.
2. Disconnect all remaining cables on the server rear panel, including cables extending from
external connectors on expansion boards. Make note of which Ethernet and interconnect
cables are connected to which ports.
3. Loosen the front thumbscrews securing the server to the rack (see callout 1 in Figure 3-5).

Figure 3-5 HP ProLiant DL145 G1 Front Thumbscrews

4. Extend the server on the rack rails (see callout 2 in Figure 3-5) until the server rail locks.
5. Press and hold the rail locks (see callout 1 in Figure 3-6), and extend the server until it clears
the rack (see callout 2 in Figure 3-6).



Figure 3-6 Sliding the HP ProLiant DL145 G1 from the Rack

6. Remove the server from the rack and position it securely on a workbench or other solid
surface for stability and safety.
To remove the access panel on the HP ProLiant DL145, as shown in Figure 3-7, follow these steps:
1. Remove the server from the rack, as described in Section 3.1.2.
2. To detach the access panel from the chassis, follow these steps:
• For the DL145 G1, loosen the two captive thumbscrews at the back of the server, as
shown in Figure 3-7.

Figure 3-7 Removing the HP ProLiant DL145 G1 Access Panel


Item Description

1 Access panel screws

2 Access panel latch

• For the DL145 G2, loosen the one captive thumbscrew at the back of the server, as shown
in Figure 3-8.



Figure 3-8 Removing the HP ProLiant DL145 G2 Access Panel

Item Description

1 Access panel screw

2 Access panel latch

3. Slide the cover approximately 1.25 cm (0.5 in) toward the rear of the unit.
4. Pull up the latch to remove the access panel from the chassis.
To replace the access panel and return the server to the rack, reverse the previous steps.

3.1.3 Replacing a PCI card in the HP ProLiant DL145 G1


The HP ProLiant DL145 G1 has one PCI-X expansion slot. To replace the PCI expansion card,
follow these steps:
1. Attach a grounding strap to your wrist or ankle and a metal part of the chassis.
2. Press the Power button to power down the server. When the server powers down, the system
power LED turns off.
3. Disconnect the AC power cord, first from the AC outlet and then from the server.

Note:
The front panel Power button does not completely shut off system power. Portions of the
power supply and some internal circuitry remain active until AC power is removed.

4. Remove the server from the rack, as described in Section 3.1.2.


5. Remove the access panel from the server, as shown in Figure 3-8.
6. Disconnect the cable attached to the PCI card.
7. Pull the PCI riser cage upward, as shown in Figure 3-9.



Figure 3-9 HP ProLiant DL145 G1 Riser Cage


8. Remove the screw located on the left side of the riser cage before removing the PCI card
from the slot, as shown in Figure 3-10.

Figure 3-10 Removing an HP ProLiant DL145 G1 PCI Card from the Riser Cage

9. Remove the new PCI card from its antistatic plastic bag. Handle the adapter gently, preferably
by the front panel or card edges. Do not touch the connectors. The front panel of the adapter
is the metal plate that contains the port connector and LEDs.
10. Record the adapter card serial number located on the card for future reference.
11. Insert the new PCI card into the riser cage, applying even pressure to seat the board securely.
Be sure the adapter is securely seated.
12. Replace the screw that secures the card in the riser cage.
13. Return the riser cage to its slot.
14. Reinstall the access panel, reversing the steps in Section 3.1.2.
15. Replace the server in the rack, reversing the steps in Section 3.1.2.
16. Attach the cable between the card and the switch, and reconnect the AC power cord.
17. Press the Power button on the server. Check that the card is detected. Refer to your software
documentation for further installation instructions.



3.1.4 Installing or Replacing a PCI Card in the HP ProLiant DL145 G2
The HP ProLiant DL145 G2 has three PCI expansion slots on the system board. The system
supports up to two expansion boards at a time. Figure 3-11 shows the PCI slots on the HP ProLiant
DL145 G2 system board.

Figure 3-11 HP ProLiant DL145 G2 PCI Expansion Slots

The following table describes the callouts in Figure 3-11.


Table 3-3 HP ProLiant DL145 G2 PCI Slots
Item Slot Capabilities

1 64-bit, 133 MHz PCI-X slot Supports a low profile 64-bit, 133 MHz PCI-X riser board

2 64-bit, 133 MHz PCI-X slot Supports a standard height, full-length 64-bit, 133 MHz PCI-X
riser board

3 PCI Express x16 slot Supports a full-length PCI Express x16 riser board

When replacing a PCI card in an HP ProLiant DL145 G2, use only HP-supported expansion
boards that meet the following specifications:
• PCI or PCI-X compliant
— Connector: 32 or 64 bits wide, 3.3 V
— Speed
◦ PCI board speed: 66 MHz
◦ PCI-X board speed: 100 or 133 MHz
— Form factor: low profile or standard height, full-length boards
• PCI Express x16 compliant (available only when the optional PCI Express riser board is
installed)
To replace a PCI expansion card, follow these steps:
1. Attach a grounding strap to your wrist or ankle and a metal part of the chassis.
2. Press the Power button to power down the server. When the server powers down, the system
power LED turns off.
3. Disconnect the AC power cord, first from the AC outlet and then from the server.

Note:
The front panel Power button does not completely shut off system power. Portions of the
power supply and some internal circuitry remain active until AC power is removed.

4. Remove the server from the rack, as described in Section 3.1.2.


5. Remove the access panel from the server, as shown in Figure 3-8.
6. Disconnect the cable attached to the PCI card.
7. Lift and remove the PCI riser board assembly from the chassis as follows:



a. Loosen the two captive thumbscrews that secure the assembly to the chassis, as shown
by callout 1 in Figure 3-12.

Figure 3-12 Removing the HP ProLiant DL145 G2 PCI Card Cage

b. Lift the assembly away from the chassis, as shown by callout 2 in Figure 3-12.
c. Identify the slot that is compatible with the expansion board you intend to install.
d. If you are adding an expansion board, pull out the slot cover from the selected slot.
Store it for reassembly later. Figure 3-13 shows how to remove the cover from a
low-profile expansion slot, and Figure 3-14 shows how to remove the cover from a
standard height, full-length expansion slot.

Important:
Do not discard slot covers. If the expansion board is removed in the future, the slot
cover must be reinstalled to maintain proper cooling.

Figure 3-13 Removing the Cover of the Low-Profile Expansion Slot



Figure 3-14 Removing the Cover of the Full-Length Expansion Slot


8. Remove the PCI expansion board from its protective packaging, handling it by the edges.
Some expansion boards can be installed in only one slot; other boards can be configured
to fit in either slot by replacing the default bracket (attached to the board) with a
different-sized bracket. The replacement bracket, along with instructions for attaching
it to the board, is included in the option kit.
9. Verify that the board’s default bracket is compatible with the configuration of the selected
slot. If it is not compatible, replace the bracket with one that is compatible.
10. Pull the PCI riser cage upward, as shown in Figure 3-9.
11. Remove the screw located on the left side of the riser cage before removing the PCI card
from the slot, as shown in Figure 3-10.
12. Remove the new PCI card from its antistatic plastic bag. Handle the card gently, preferably
by the front panel or card edges. Do not touch the connectors. The front panel of the card is
the metal plate that contains the port connector and LEDs.
13. Record the card serial number located on the card for future reference.
14. Insert the new PCI card into the riser cage, applying even pressure to seat the board securely.
Be sure the adapter is fully seated. Figure 3-15 shows the installation of a full-length 64-bit,
133 MHz PCI-X card in an HP ProLiant DL145 G2.

Figure 3-15 Installing a Full-Length PCI Card in the HP ProLiant DL145 G2


15. Replace the screw that secures the card in the riser cage.
16. Reinstall the PCI riser cage assembly as follows:
a. Align the assembly with the system board expansion slots, then press it down to ensure
full connection to the system board.
b. Tighten the two captive thumbscrews to secure the assembly to the chassis.



17. Reinstall the access panel, reversing the steps in Section 3.1.2.
18. Replace the server in the rack, reversing the steps in Section 3.1.2.
19. Attach the cable between the card and the switch, and reconnect the AC power cord.
20. Press the Power button on the server. Check that the card is detected. Refer to your software
documentation for further installation instructions.
For more information, refer to the following documents:
• PCI Express Riser Board Installation Instructions for HP ProLiant DL100 Series Generation 2
Servers
• HP ProLiant 100 Series Servers User Guide
• HP ProLiant DL145 Generation 2 Server Maintenance and Service Guide
• HP ProLiant DL145 Generation 2 Server Installation Sheet

3.2 HP ProLiant DL145 G3


The HP ProLiant DL145 G3 is supported in HP Cluster Platform solutions. The HP ProLiant
DL145 G3 server supports up to two AMD Opteron processors and can be used as an application
node, utility node, or control node. Table 3-4 lists the features of the HP ProLiant DL145 G3.
Table 3-4 HP ProLiant DL145 G3 Specifications

Processors
– Supports up to two AMD Opteron 2000 series processors
– Supports dual-core 2.6 GHz, 1 MB cache processors
– Supports DirectConnect Architecture

Memory
– 8 DIMM slots supporting up to 16 GB of PC2-5300 DDR2-667 MHz memory with Advanced ECC

Internal Drive Support
– Supports up to two: hot-plug 3.5" Serial Attached SCSI (SAS)/Serial ATA (SATA) hard drives, or non-hot-plug 3.5" SATA hard drives
– Internal storage capacity of up to 1.5 TB (2 x 750 GB hot-plug 3.5" SATA hard drives) or 600 GB (2 x 300 GB hot-plug 3.5" SAS hard drives)

Network Controller
– NC326i PCIe Dual Port Gigabit Server Adapter

Storage Controllers
– HP Embedded SATA Controller on non-hot-plug SATA models
– HP Smart Array E200 Controller with 64 MB cache (optional); upgradeable to 128 MB write-back cache
– HP 8 Internal Port SAS HBA with RAID 0,1 on hot-plug SATA/SAS models

Expansion Slots
– Standard PCI Express assembly: Slot 1: full-length/full-height PCI Express x16; Slot 2: half-length/low-profile PCI Express x4 (x8 connector)
– Optional PCI-X assembly: Slot 2: half-length/low-profile PCI-X (64-bit, 133 MHz capable)
– Optional HTX assembly: Slot 1: full-length/full-height HTX

Management
– IPMI 2.0 support
– HP ProLiant Lights-Out 100i Remote Management support
– ROM Setup Utility

USB Ports
– Four USB ports (two front, two rear)

Optical Drive
– Support for one (optional): CD-ROM, DVD-ROM, DVD RW, or DVD/CD RW

Power Supply
– 650 W power supply (non-hot-plug, auto-switching)

Figure 3-16 shows the front panel of the HP ProLiant DL145 G3.

Figure 3-16 HP ProLiant DL145 G3 Front Panel


The following table describes the callouts in Figure 3-16.

Item Description

1 Serial number pull tab

2 Ventilation holes

3 Optical drive bay

4 Unit identification (UID) button with LED indicator (blue)

5 System health LED indicator (amber)

6 Activity LED indicators for NIC 1 and NIC 2 (green)

7 Hard disk drive (HDD) activity LED indicator (green)

8 Power button with LED indicator (bicolor: green and amber)

9 USB 2.0 ports

10 HDD bay 2 — 3.5" Hot-plug SAS/SATA or 3.5" Non-hot plug SATA drive

11 HDD bay 1 — 3.5" Hot-plug SAS/SATA or 3.5" Non-hot plug SATA drive

12 Thumbscrews for the front bezel (2 places)

Figure 3-17 shows the rear panel of the HP ProLiant DL145 G3.



Figure 3-17 HP ProLiant DL145 G3 Rear Panel


The following list corresponds to the callouts shown in Figure 3-17.

Item Description

1 Ventilation holes

2 Thumbscrew for the low-profile riser board assembly (two places)

3 Low-profile riser board assembly slot cover

4 Link status LED indicator for the 10/100 Mbps LAN port

Solid green — A valid network link exists

Off — No network link detected

5 Activity status LED indicator for the 10/100 Mbps LAN port

Flashing amber — Network data activity was detected within the preceding one second

Off — No network data activity was detected within the preceding one second

6 10/100 Mbps LAN port for IPMI management (RJ-45)

7 Captive screw for the top cover

8 GbE LAN port for NIC 1 (RJ-45)

9 GbE LAN port for NIC 2 (RJ-45)

10 Full-sized riser board assembly slot cover

11 Power supply cord

12 Ventilation holes

13 GbE LAN port for NIC 2 (RJ-45) activity LED indicator

Flashing amber — Network data activity was detected within the preceding one second
(same for NIC 1)

Off — No network data activity was detected within the preceding one second (same for
NIC 1)

14 GbE LAN port for NIC 2 (RJ-45) link status LED indicator

Solid green — A valid network link exists (same for NIC 1)

Off — No network link detected (same for NIC 1)

15 USB 2.0 ports (black)

16 PS/2 keyboard port (purple)

17 PS/2 mouse port (green)

18 Serial port

19 Video port (blue)

20 Non-Maskable Interrupt (NMI) button (recessed)

21 UID button and separate LED indicator (blue)

3.2.1 Removing an HP ProLiant DL145 G3 from a Rack


To access internal components in any HP ProLiant DL145 G3, you must first shut down power
to the server and remove it from the rack. All of the servers in the cluster are secured to the rack
on a sliding rail. This section describes how you shut down power, remove a server from the
rack, and access internal components.
When performing these tasks, heed the warnings and cautions listed in “Important Safety
Information” (page 23).
The front panel power button on the HP ProLiant DL145 G3 toggles between On and Off. If you
press the Power button on an HP ProLiant DL145 G3 to power down the server, the LED turns
off. To completely remove all power from the server, disconnect the power cord from the AC
outlet.
To remove an HP ProLiant DL145 G3 from the rack, follow these steps:
1. Power down the server.
2. Disconnect all remaining cables on the server rear panel, including cables extending from
external connectors on expansion boards. Make note of which Ethernet and interconnect
cables are connected to which ports.
3. Loosen the front thumbscrews securing the server to the rack, as shown by callout 1 in
Figure 3-18.

Figure 3-18 HP ProLiant DL145 G3 Front Thumbscrews

4. Extend the server on the rack rails until the server rail locks, as shown by callout 2 in
Figure 3-18.



5. Press and hold the rail locks (see callout 1 in Figure 3-19), and extend the server until it clears
the rack (see callout 2 in Figure 3-19).

Figure 3-19 Sliding the HP ProLiant DL145 G3 from the Rack

6. Remove the server from the rack and position it securely on a workbench or other solid
surface for stability and safety.
To return the server to the rack, reverse the previous steps.
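Step 2 of the removal procedure asks you to note which Ethernet and interconnect cables are connected to which ports. One lightweight way to do that is to write a small cable map before unplugging anything. The file, device names, and switch labels below are illustrative examples, not values from this document.

```shell
# Record the cable-to-port mapping before disconnecting the rear panel.
# All names here are placeholders; substitute your node's real wiring.
map=$(mktemp)
cat > "$map" <<'EOF'
NIC1 -> administrative switch, port 12
NIC2 -> interconnect switch, port 7
IPMI -> console switch, port 12
EOF
grep -c ' -> ' "$map"   # prints 3: the number of recorded connections
```

When the server goes back into the rack, reconnect each cable according to the map so the cluster's network wiring is unchanged.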

3.2.2 Removing the DL145 G3 Access Panel


To remove the access panel on the HP ProLiant DL145 G3, as shown in Figure 3-20, follow these
steps:



1. Remove the server from the rack, as described in Section 3.2.1.
2. Loosen the captive screw on the back of the ProLiant DL145 G3 server, as shown by
callout 1 in Figure 3-20. To loosen the screw, HP recommends using the L-shaped wrench
that ships with the server.

Figure 3-20 Removing the HP ProLiant DL145 G3 Access Panel

Item Description

1 Access panel screws

2 Access panel latch

3. Slide the cover approximately 1.25 cm (0.5 in) toward the rear of the unit, as shown by callout
2 in Figure 3-20.
4. Use the two circular grips on the top cover to help you slide the cover more easily.

3.2.3 DL145 G3 System Board Expansion Slots


There are four expansion slots on the system board that support four different PCI riser boards.
Figure 3-21 shows the system board expansion slots and Table 3-5 summarizes the types of slots
available on the system board.



Figure 3-21 DL145 G3 System Board Expansion Slots


The following table describes the callouts shown in Figure 3-21.


Table 3-5 DL145 G3 System Board Expansion Slot Descriptions
Item Component Description

1 HTX slot Supports a full-sized 1 GHz, 16x16 HTX expansion board installed on an
HTX riser board

2 PCI Express x16 slot Supports a full-sized PCI Express x16 expansion board installed on a PCI
Express x16 riser board

3 PCI-X slot Supports a low-profile 64-bit, 133 MHz PCI-X expansion board installed
on a PCI-X riser board

4 PCI Express x4 slot Supports a low-profile PCI Express x4 expansion board installed on a PCI
Express x4 riser board

3.2.4 DL145 G3 PCI Slot Assignments


Table 3-6 summarizes the PCI slot assignments in the ProLiant DL145 G3.



Table 3-6 HP ProLiant DL145 G3 PCI Slot Assignments

PCI Slot | PCI Express Assignment | PCI Mixed Assignment

1 (Full-Sized) | x16 PCI Express Interconnect | x16 PCI Express

2 (Low-Profile) | x4 PCI Express | PCI-X Interconnect
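When you need to confirm which physical slot an installed card occupies without opening the chassis, slot records of the kind reported by `dmidecode -t slot` on Linux can help. The records below are fabricated sample text for illustration, not output captured from a DL145 G3.

```shell
# Parse sample slot records (dmidecode-style) to find the occupied slot.
# On a real system, run `dmidecode -t slot` as root; this sample is made up.
records='Designation: PCIe x16 Slot 1
Current Usage: In Use
Designation: PCIe x4 Slot 2
Current Usage: Available'
printf '%s\n' "$records" | grep -B1 'In Use' | grep 'Designation'
```

The pipeline keeps only the line naming the in-use slot, which you can cross-check against the slot assignments in Table 3-6.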

3.2.5 ProLiant DL145 G3 Riser Board Assemblies


The DL145 G3 server supports up to two expansion boards installed on riser boards. With the
appropriate riser boards, the two riser board assemblies that come with the server convert the
expansion slots on the system board to slots that are positioned at a 90° angle from the system
board. You can then install expansion boards in a position parallel to the system board. The
system comes with one full-sized assembly and one low-profile assembly. The full-sized assembly
supports either an HTX riser board or a PCI Express x16 riser board. The low-profile assembly
supports either a PCI Express x4 riser board or a PCI-X riser board.

Note:
You cannot install the PCI Express x4 and PCI-X riser boards at the same time. You also cannot
install the HTX and PCI Express x16 riser boards at the same time.

Expansion Board Installation Guidelines


Use only HP-supported expansion boards that meet the following specifications:
• HTX: Full-sized, 1 GHz, 16x16
• PCI Express x4: Low-profile
• PCI Express x16: Full-sized
• PCI-X: Low-profile, 64-bit, 3.3 V, 133 MHz

3.2.5.1 Installing or Replacing the ProLiant DL145 G3 Riser Board Assemblies


To install or replace the riser board assemblies, follow these steps:
1. Attach a grounding strap to your wrist or ankle and a metal part of the chassis.
2. Press the Power button to power down the server. When the server powers down, the system
power LED turns off.
3. Disconnect the AC power cord from the AC outlet.
4. Remove the server from the rack, as described in Section 3.2.1.
5. Remove the access panel from the server, as shown in Figure 3-20.
6. Disconnect the cable attached to the PCI card, if necessary.



7. Lift and remove the appropriate PCI riser board assembly from the chassis as follows:

Figure 3-22 Removing the HP ProLiant DL145 G3 Full-Sized PCI Riser Board Assembly

Figure 3-23 Removing the HP ProLiant DL145 G3 Low-Profile PCI Riser Assembly


a. Loosen the two captive thumbscrews that secure the assembly to the chassis, as shown
by callout 1 in Figure 3-22 and Figure 3-23.
b. Lift the assembly away from the chassis as shown by callout 2 in Figure 3-22 and
Figure 3-23.
c. If you are adding an expansion board, pull out the slot cover from the selected slot.
Store it for reassembly later.

Important:
Do not discard slot covers. If the expansion board is removed in the future, the slot
cover must be reinstalled to maintain proper cooling.



3.2.5.2 Removing or Installing a Riser Board
To remove or install a riser board in either the full-sized or low-profile riser board assemblies,
follow these steps:
1. Perform the procedure described in Section 3.2.5 to remove the appropriate riser board
assembly.
2. If an expansion board is installed in the assembly, perform the procedure described in
Section 3.2.6 to remove the expansion board.



3. Remove the installed riser board from the riser board assembly as shown in Figure 3-24 and
Figure 3-25.

Figure 3-24 Removing the Full-Sized Riser Board

Figure 3-25 Removing the Low-Profile Riser Board

Note:
Keep the two screws you remove in this step for installing the new riser board later.

a. Remove the two screws securing the riser board to the assembly.
b. Remove the riser board from the assembly.
To install a new riser board, reverse the removal procedure.

3.2.6 Replacing a PCI Expansion Card in the ProLiant DL145 G3


To replace a PCI expansion card, follow these steps:



1. Attach a grounding strap to your wrist or ankle and a metal part of the chassis.
2. Press the Power button to power down the server. When the server powers down, the system
power LED turns off.
3. Disconnect the AC power cord from the AC outlet.

Note:
The front panel Power button does not completely shut off system power. Portions of the
power supply and some internal circuitry remain active until AC power is removed.

4. Remove the server from the rack, as described in Section 3.2.1.


5. Remove the access panel from the server, as shown in Figure 3-20.
6. Disconnect the cable attached to the PCI card.
7. Lift and remove the appropriate PCI riser board assembly from the chassis as described in
Section 3.2.5.1.
8. Remove the PCI expansion board from its protective packaging, handling it by the edges.
9. Verify that the board’s default bracket is compatible with the configuration of the selected
slot. If it is not compatible, replace the bracket with one that is compatible.
10. Remove the new PCI card from its antistatic plastic bag. Handle the card gently, preferably
by the front panel or card edges. Do not touch the connectors. The front panel of the card is
the metal plate that contains the port connector and LEDs.
11. Record the card serial number located on the card for future reference.
12. Remove the PCI expansion card from the appropriate riser board assembly as shown in
Figure 3-26 and Figure 3-27.

Figure 3-26 Removing a Full-Sized PCI Card in the HP ProLiant DL145 G3



Figure 3-27 Removing a Low-Profile PCI Card in the HP ProLiant DL145 G3

To install a PCI expansion board, reverse the removal procedure. Insert the new PCI card into
the riser cage, applying even pressure to seat the board securely. Be sure the adapter is fully
seated.
For more information, refer to the following documents:
• HP ProLiant 100 Series Servers User Guide
• HP ProLiant DL145 Generation 3 Server Maintenance and Service Guide
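After reversing the removal procedure and reattaching the Ethernet and interconnect cables, it is worth confirming that the links come back up. On Linux the kernel exposes per-interface carrier state under `/sys/class/net/<iface>/carrier`; the sketch below simulates that file with a temporary file so it can run anywhere, and the interface name is a placeholder.

```shell
# Carrier check sketch: on a real node, read /sys/class/net/eth0/carrier
# (1 = link up, 0 = link down). A temp file stands in for sysfs here.
carrier=$(mktemp)            # stand-in for /sys/class/net/eth0/carrier
echo 1 > "$carrier"
if [ "$(cat "$carrier")" = "1" ]; then
  echo "link up"
else
  echo "link down - reseat the cable"
fi
```

If an interface reports no carrier after power-up, reseat that cable at both the server and the switch before investigating further.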

3.3 HP ProLiant DL165 G5


The HP ProLiant DL165 G5 supports up to two Quad-Core AMD Opteron 2300 series processors
and up to four SATA hot-plug or non-hot-plug midline and entry hard drives. The ProLiant
DL165 G5 can be used as a control node, utility node, or compute node in HP Cluster Platform
configurations.
For the features and specifications of the HP ProLiant DL165 G5, go to:
http://h18004.www1.hp.com/products/quickspecs/13015_na/13015_na.html
Figure 3-28 shows the front view of the HP ProLiant DL165 G5.

Figure 3-28 HP ProLiant DL165 G5 Front View



The following list describes the callouts shown in Figure 3-28:


1. Thumbscrews for rack mounting
2. Optical disk drive bay
3. Serial number pull tab



4. Two front USB 2.0 ports
5. Unit identification (UID) LED button
6. System health LED
7. NIC1 LED
8. NIC2 LED
9. Power button with LED indicator (bicolor: green and amber)
10. Hard disk drive (HDD) LED
11. HDD bays 1, 2, 3, and 4
Figure 3-29 shows the rear view of the HP ProLiant DL165 G5.

Figure 3-29 HP ProLiant DL165 G5 Rear View



The following list describes the callouts shown in Figure 3-29:


1. Power supply
2. PS/2 mouse port (green)
3. GbE LAN port for NIC 2
4. Captive thumbscrew for top cover
5. Serial port (teal)
6. Low profile/Half length expansion slot
7. Full Height/Full Length expansion slot
8. T10/T15 Wrench
9. Thumbscrew for PCI cage
10. UID LED button
11. VGA port
12. HP LO100i Management LAN Port
13. Two rear USB 2.0 ports
14. GbE LAN port for NIC 1
15. PS/2 keyboard port (purple)

3.3.1 PCI Slot Assignments


The following table describes the PCI slot assignments when the DL165 G5 is used in HP Cluster
Platform solutions.

PCI Slot Assignment

1 PCI Interconnect/PCI-E x16

2 PCI-E x4

For additional information, such as the system board layout and installing PCI cards, see the HP
ProLiant DL165 Generation 5 Server Maintenance and Service Guide:
http://bizsupport.austin.hp.com/bc/docs/support/SupportManual/c01384378/c01384378.pdf



3.4 HP ProLiant DL385 G1
The dual-processor, 2U HP ProLiant DL385 G1 server can be used as a control node, directing
the management and administrative functions, or as a utility node for any specific task, including
running applications. It supports up to two 2.6 GHz AMD Opteron processors with 1 MB L2
cache and 1 GHz HyperTransport, up to 16 GB of PC3200 DDR SDRAM memory running at
400 MHz, and three full-length PCI-X slots (two at 100 MHz and one at 133 MHz). Each processor
memory controller is directly connected to two dedicated memory banks. A minimum of 1 GB
of memory is required for the first bank of each processor. Both processors must be populated
with the same DIMM types and sizes.
The server also has an integrated HP ProLiant Lights Out port to be connected to the console
network, six hot-pluggable hard disk drive bays, and dual embedded Gigabit Ethernet NICs,
one of which is connected to the administrative network.
The HP ProLiant DL385 G1 has the following features:

Feature Specification

Available processors AMD Opteron™ 252 - 2.6 GHz, 1 MB L2

AMD Opteron™ 250 - 2.4 GHz, 1 MB L2

Processor capacity 2

Memory type PC3200 DDR

Maximum memory 16 GB of 2-way interleaved 400 MHz DDR SDRAM

PCI-X expansion slots 2 non hot-pluggable 64-bit, 100 MHz

1 non hot-pluggable 64-bit, 133 MHz

Advanced memory protection Advanced ECC

NIC NC7782 dual port Gigabit Ethernet NIC

Storage type SCSI hot plug

Maximum drive bays 6

Removable media bays 2

Storage controller Integrated Smart Array 6i

Networking Dual HP NC7781 PCI-X Gigabit server adapters (embedded)

Remote management Integrated Lights Out standard (advanced features available via optional
Integrated Lights Out Advanced Pack); ROM-based setup utility and redundant
ROMs

Redundant power supply Optional hot plug

Redundant fans Optional hot plug

Figure 3-30 shows the front panel (six drive bays) of the ProLiant DL385 G1, and Figure 3-31
shows its rear panel.



Figure 3-30 HP ProLiant DL385 G1 Front Panel


Figure 3-31 HP ProLiant DL385 G1 Rear Panel



The following ports are available on the rear panel of the HP ProLiant DL385 G1:
Table 3-7 HP ProLiant DL385 G1 Rear Panel Ports
Item Label or Icon Description

1 1 100 MHz PCI–X Slot 1 @ 100 MHz

2 2 100 MHz PCI–X Slot 2 @ 100 MHz

3 3 133 MHz PCI–X Slot 3 @ 133 MHz

4 Screen icon VGA out

5 101010 DB9 Serial (COM) port and management processor port

6 Stacked USB ports

7 iLO Integrated Lights Out, dedicated management NIC

8 Network icon Gigabit Ethernet NIC

9 Network icon Gigabit Ethernet NIC

10 Mouse icon PS/2 mouse port

11 Keyboard icon PS/2 keyboard port

12 IEC power inlet

3.4.1 PCI Slot Assignments


The ProLiant DL385 has three PCI slots on the rear of the chassis. Table 3-8 summarizes the slot
assignments.



Table 3-8 HP ProLiant DL385 PCI Slot Assignments
Slot Bus Assignment Comment

1 A PCI-X interconnect 64-bit 100 MHz PCI-X (1 Gb/s)

2 A Optional 2Gb/sec Fibre Channel HBA 64-bit 100 MHz PCI-X (1 Gb/s)

3 B PCI interconnect 64-bit 133 MHz PCI-X (1 Gb/s)

3.4.2 Removing an HP ProLiant DL385 G1 from a Rack


To access internal components in the HP ProLiant DL385 G1 server, you must first power down
the server and then remove it from the rack. All of the servers in the cluster are secured to the
rack on a sliding rail.
To remove the HP ProLiant DL385 G1 from the rack, follow these steps:
1. Power down the server.
a. Press the UID LED button on the front panel. Blue LEDs illuminate on the front and
rear panels of the server.
b. Press the Power On/Standby button to place the server in Standby mode. When the
server activates Standby power mode, the system power LED changes to amber.
c. Locate the server by identifying the illuminated rear UID LED button, and disconnect
the power cord, first from the AC outlet, then from the server.

Note:
The front panel power button on the HP ProLiant DL385 G1 toggles between On and
Off. To completely remove all power from the server, you must disconnect the power
cord first from the AC outlet and then from the server.

d. Disconnect all remaining cables on the server rear panel, including cables extending
from external connectors on expansion boards. Make note of which Ethernet and
interconnect cables are connected to which ports.
2. Pull down the quick release levers on each side of the server to release the server from the
rack, as shown in Figure 3-32.

Figure 3-32 Extending the HP ProLiant DL385 from the Rack

3. Extend the server on the rack rails until the rail locks.
4. Press and hold the rail locks, and extend the server until it clears the rack.



5. Remove the server from the rack and position it securely on a workbench or other solid
surface for stability and safety.
6. To access internal components in the HP ProLiant DL385 G1, lift up on the hood latch handle
and remove the access panel.

3.4.3 Replacing an HP ProLiant DL385 PCI Card


When replacing a PCI adapter card, you need a grounding strap. The adapter card is sensitive
to electrostatic discharge. Take care to avoid mishandling, which could damage the card. Before
beginning installation, and without removing the adapter card from its antistatic bag, inspect
the card for any signs of obvious damage, such as chipped or loose components.
To replace an HP ProLiant DL385 G1 PCI card, follow these steps:
1. Attach the grounding strap to your wrist or ankle and to a metal part of the chassis.
2. Power off the server.
3. Remove the server from the rack.
4. Remove the access panel from the server, and locate the PCI riser cage.
5. Disconnect any cables connected to any existing expansion boards.
6. Open the PCI riser cage door, as shown in Figure 3-33.

Figure 3-33 HP ProLiant DL385 PCI Riser Cage Door


7. Remove the PCI riser cage door latch, as shown in Figure 3-34.

Figure 3-34 HP ProLiant DL385 PCI Riser Cage Door Latch

8. Remove the PCI riser cage.


• Lift the PCI riser cage thumbscrews and turn them counterclockwise.
• Lift the cage upward, as shown in Figure 3-35.



Figure 3-35 Removing the HP ProLiant DL385 PCI Riser Cage


9. Unlock the PCI retaining clip, as shown in Figure 3-36.

Figure 3-36 Unlocking the HP ProLiant DL385 PCI Retaining Clip


10. Remove the expansion board, as shown in Figure 3-37.



Figure 3-37 Removing the HP ProLiant DL385 Expansion Board

Caution:
To prevent improper cooling and thermal damage, do not operate the server unless all PCI
slots have either an expansion slot cover or an expansion board installed.

11. To replace the component, reverse the removal procedure.

3.5 HP ProLiant DL385 G2


The dual processor, 2U HP ProLiant DL385 G2 is used as a control node and utility node in HP
Cluster Platform. The features of the ProLiant DL385 G2 are described in Table 3-9.
Table 3-9 HP ProLiant DL385 G2 Features

Processor(s)
– Performance: Two Dual-Core AMD Opteron 2218 processors (2.6 GHz, 95 W)
– Base (one of the following, depending on model): One Dual-Core AMD Opteron 2218 processor (2.6 GHz, 95 W); one Dual-Core AMD Opteron 2216 HE processor (2.4 GHz, 68 W); or one Dual-Core AMD Opteron 2214 HE processor (2.2 GHz, 68 W)
– Entry: One Dual-Core AMD Opteron 2210 HE processor (1.8 GHz, 68 W)

Cache Memory
– Performance: 2 MB (1 x 2 MB) Level 2 cache (2000 Series)
– Base: 2 MB (2 x 1 MB) Level 2 cache (2000 Series)
– Entry: 2 MB (2 x 1 MB) Level 2 cache (2000 Series)

Memory
– Performance: 4 GB (4 x 1 GB) of 2:1 interleaved PC2-5300 DIMMs (DDR2-667) with Advanced ECC capabilities
– Base: 2 GB (2 x 1 GB) PC2-5300 DIMMs (DDR2-667) with Advanced ECC capabilities
– Entry: 1 GB (2 x 512 MB) PC2-5300 DIMMs (DDR2-667) with Advanced ECC capabilities

Network Controller
– Embedded NC371i MFN Gigabit Server Adapters

Storage Controller
– Performance: HP Smart Array P400/512 MB BBWC Controller (RAID 0/1/5/6)
– Base: HP Smart Array P400/256 MB Controller (RAID 0/1/5)
– Entry: HP Smart Array E200/64 MB Controller (RAID 0/1)

Hard Drive
– None ship standard

Internal Storage
– 1.168 TB maximum: 8 x 146 GB SAS drives (with optional hard drives)

Optical Drive
– Performance: IDE DVD-ROM/CD-RW combo
– Base: Optional DVD, DVD/CD-RW combo, DVD+R/RW, or floppy drive
– Entry: Optional DVD, DVD/CD-RW combo, DVD+R/RW, or floppy drive

Form Factor
– Rack (2U), 3.5-inch

Availability
– Performance: Hot-plug fully redundant fans and redundant power supplies
– Base: Hot-plug fully redundant fans standard
– Entry: Hot-plug fully redundant fans standard

For more information on ProLiant DL385 G2 supported storage, memory, and other options, see
the HP ProLiant DL385 Generation 2 (G2) QuickSpecs.

3.5.1 ProLiant DL385 G2 Front and Rear Views


Figure 3-38 shows the front panel of the ProLiant DL385 G2.

Figure 3-38 HP ProLiant DL385 G2 Front Panel


The following list corresponds to the callouts shown in Figure 3-38:


1. Quick release levers
2. Media drive bay
3. Video connector
4. USB connectors (2)
5. Systems Insight Display
6. Hard drive bays
7. Power On/Standby button/system power LED
8. NIC 2 link/activity LED



9. NIC 1 link/activity LED
10. External health LED (power supply)
11. Internal health LED
12. UID LED button
Figure 3-39 shows the rear panel of the ProLiant DL385 G2.

Figure 3-39 HP ProLiant DL385 G2 Rear Panel


The following list corresponds to the callouts shown in Figure 3-39:


1. T-10/T-15 Torx screwdriver
2. Expansion slot 5
3. Expansion slot 4
4. Expansion slot 3
5. Expansion slot 2
6. External option blank
7. NIC 2 connector
8. NIC 1 connector (connected to administrative network switch, AES1)
9. Expansion slot 1
10. Keyboard connector
11. Mouse connector
12. Serial connector
13. Power supply bay 2
14. USB connectors (2)
15. Video connector
16. Power supply LED
17. iLO 2 connector (connected to console network switch, CES1)
18. Power cord connector
19. UID LED button
20. Power supply bay 1 (populated)
21. NIC/iLO 2 activity LED
22. NIC/iLO 2 link LED

3.5.2 PCI Slot Assignments


The ProLiant DL385 G2 has five PCI slots on the rear of the chassis. Table 3-10 summarizes the
PCI Express slot assignments.



Table 3-10 HP ProLiant DL385 G2 PCI Express Slot Assignments
Slot Assignment Comment

1 PCI Express x8

2 PCI Express x8

3 PCI Express x4

4 PCI Express x8

5 PCI Express Interconnect x8

Table 3-11 summarizes a PCI Express/PCI-X mixed configuration.


Table 3-11 HP ProLiant DL385 G2 PCI Express/PCI-X Slot Assignments
Slot Assignment Comment

1 PCI Express x8

2 PCI Express x8

3 PCI Express Interconnect x8

4 PCI-X 64-bit, 133 MHz

5 PCI-X Interconnect 64-bit, 133 MHz

3.5.3 New DL385 G2 PCI Slot Assignments (as of March 17, 2008)
The following table describes the new PCI slot assignments for the DL385 G2 as of March 17, 2008.
There are no performance issues with the previous slot assignments, so it is not necessary to move
the HCAs in previously sold configurations. When a PCI-X HCA is used, use Table 3-11.
Table 3-12 HP ProLiant DL385 G2 PCI Express Slot Assignments (as of March 17, 2008)

Slot  Assignment      Comment
1     PCI Express x8
2     PCI Express x8
3     PCI Express x4
4     PCI Express x8  Interconnect
5     PCI Express x8
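As a quick reference, the slot rules in the tables above can be collected into a small lookup. This is an illustrative sketch only (the helper function is ours, not an HP tool); it assumes the pre-March 2008 layout places a PCI Express HCA in slot 5, the current layout places it in slot 4, and a PCI-X HCA always goes in slot 5 of the mixed configuration.

```python
# Illustrative lookup of the DL385 G2 interconnect HCA slot.
# Assumptions (see lead-in): pre-2008 PCIe layout -> slot 5,
# current PCIe layout -> slot 4, PCI-X mixed layout -> slot 5.
def interconnect_slot(hca_type, pre_2008_layout=False):
    """Return the PCI slot number for the interconnect HCA.

    hca_type: "pci-express" or "pci-x"
    pre_2008_layout: True for configurations shipped before March 17, 2008.
    """
    if hca_type == "pci-x":
        return 5  # mixed PCI Express/PCI-X layout (Table 3-11)
    if hca_type == "pci-express":
        # Table 3-10 (old layout) vs. Table 3-12 (new layout)
        return 5 if pre_2008_layout else 4
    raise ValueError(f"unknown HCA type: {hca_type}")
```

A configuration script could use this to verify that an HCA reported by the node is in the expected slot before cabling.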

3.5.4 Removing the ProLiant DL385 G2 from a Rack


To access internal components in the ProLiant DL385 G2, you must first power down the server
and then remove it from the rack. All of the servers in the HP Cluster Platform configuration are
secured to the rack on a sliding rail.
To remove the HP ProLiant DL385 G2 from a rack, follow these steps:

Warning!
To reduce the risk of personal injury, electric shock, or damage to the equipment, remove the
power cord to remove power from the server. The front panel Power On/Standby button does
not completely shut off system power. Portions of the power supply and some internal circuitry
remain active until AC power is removed.

1. Back up the server data.


2. Shut down the operating system as directed by your operating system documentation.



Note:
If the operating system automatically places the server in Standby mode, omit the next step.

3. Press the Power On/Standby button to place the server in Standby mode. When the server
activates Standby power mode, the system power LED changes to amber.

Important:
Pressing the UID button illuminates the blue UID LEDs on the front and rear panels. In a
rack environment, this feature facilitates locating a server when moving between the front
and rear of the rack.

4. Disconnect the power cords. The system is now without power.


5. Disconnect all remaining cables on the server rear panel, including cables extending from
external connectors on expansion boards. Make note of which Ethernet and interconnect
cables are connected to which ports.
6. Pull down the quick-release levers on each side of the server as shown in Figure 3-40.

Figure 3-40 Removing the ProLiant DL385 G2 from a Rack

7. Extend the server until the server rail-release latches engage.

Warning!
To reduce the risk of personal injury or equipment damage, be sure that the rack is adequately
stabilized before extending a component from the rack.

8. Press and hold the rail locks, and extend the server until it clears the rack.
9. Remove the server from the rack and position it securely on a workbench or other solid
surface for stability and safety.
10. To access internal components, lift up on the hood latch handle and remove the access panel.



11. To put the server back into the rack, press the server rail-release latches and slide the server
fully into the rack.

Warning!
To reduce the risk of personal injury, be careful when pressing the server rail-release latches
and sliding the server into the rack. The sliding rails could pinch your fingers.

3.5.5 Replacing a ProLiant DL385 G2 PCI Card


When replacing a PCI adapter card, you need a grounding strap. The adapter card is sensitive
to electrostatic discharge. Take care to avoid mishandling, which could damage the card. Before
beginning installation, and without removing the adapter card from its antistatic bag, inspect
the card for any signs of obvious damage, such as chipped or loose components.
To replace a PCI card in the ProLiant DL385 G2 riser cage, follow these steps:
1. Attach the grounding strap to your wrist or ankle and to a metal part of the chassis.
2. Power off the server.
3. Remove the server from the rack.
4. Remove the access panel from the server, and locate the PCI riser cage (slots 3–5).
5. Disconnect any cables connected to any existing expansion boards.
6. Remove the PCI riser cage, as shown in Figure 3-41.

Figure 3-41 Removing the ProLiant DL385 G2 PCI Riser Cage


7. It might be necessary to remove a slot cover, or move a slot cover to a different slot. Remove
a slot cover in the PCI riser cage, as shown in Figure 3-42.



Figure 3-42 Removing a PCI Slot Cover in the PCI Riser Cage

Caution:
To prevent improper cooling and thermal damage, do not operate the server unless all of
the PCI slots have either an expansion slot cover or an expansion board installed.

8. Remove the expansion board, as shown in Figure 3-43.

Figure 3-43 Removing a DL385 G2 PCI Card

9. To replace the component, reverse the removal procedure.



Note:
See the HP ProLiant DL385 Generation 2 Server User Guide and the HP ProLiant DL385 Generation
2 Server Maintenance and Service Guide for information on the ProLiant DL385 G2 PCI slots 1 and
2.

3.5.6 ProLiant DL385 G2 Systems Insight Display


The HP Systems Insight Display LEDs represent the system board layout. See the HP ProLiant
DL385 Generation 2 Server User Guide and the HP ProLiant DL385 Generation 2 Server Maintenance
and Service Guide for information on the ProLiant DL385 G2 Systems Insight Display.

3.6 HP ProLiant DL385 G5 and G5p


The HP ProLiant DL385 G5 and G5p are used as control nodes and utility nodes in HP Cluster Platform.
For the features and specifications of the HP ProLiant DL385 G5, see the QuickSpecs at:
http://h18004.www1.hp.com/products/quickspecs/13012_na/13012_na.html
For the features and specifications of the HP ProLiant DL385 G5p, see the QuickSpecs at:
http://h18004.www1.hp.com/products/quickspecs/13161_na/13161_na.html
The front and rear views of the HP ProLiant DL385 G5 and G5p are the same as the ProLiant
DL385 G2. For the front view, see Figure 3-38 (page 130); for the rear view, see Figure 3-39
(page 131).

3.6.1 PCI Slot Assignments


HP ProLiant DL385 G5 has five PCI slots on the rear of the chassis. Table 3-13 summarizes the
PCI Express slot assignments.
Table 3-13 HP ProLiant DL385 G5 PCI Express Slot Assignments

Slot  Assignment      Comment
1     PCI Express x8
2     PCI Express x8
3     PCI Express x4
4     PCI Express x8  Interconnect
5     PCI Express x8

For additional information for the ProLiant DL385 G5, such as the system board layout and
installing PCI cards, see the HP ProLiant DL385 Generation 5 Server Maintenance and Service Guide:
http://bizsupport.austin.hp.com/bc/docs/support/SupportManual/c01302275/c01302275.pdf
For additional information for the ProLiant DL385 G5p, see the HP ProLiant DL385 Generation
5p Server Maintenance and Service Guide:
http://bizsupport2.austin.hp.com/bc/docs/support/SupportManual/c01609666/c01609666.pdf

3.7 HP ProLiant DL585


The quad-processor, 4U HP ProLiant DL585 typically functions as a control node, utility node,
or I/O node. When using the HP ProLiant DL585 as the control node, a redundant power supply
is mandatory. Otherwise, a redundant power supply is optional.
The HP ProLiant DL585 server ships standard with a diskette drive and an IDE CD drive in the
universal media bays. You can replace the CD or diskette drive with a DVD drive, another diskette
drive, or another CD drive. The server supports up to four hot-pluggable Ultra3 or Ultra320
drives. For the drives to operate at Ultra320 speeds, however, an optional PCI-X-based Ultra320



SCSI controller must be installed. The server ships in duplex configuration, but the SCSI backplane
can be configured for either simplex or duplex mode.

Note:
Simplex configuration means all four drives in the server are connected to one SCSI channel.
Duplex configuration means the drives are split between two SCSI channels, with two drives on each channel.

The HP ProLiant DL585 server has the following features:


• 4U system chassis
• Up to four AMD Opteron processors with HyperTransport technology
• 1 MB L2 processor cache
• Base memory of 1 GB or 2 GB, depending on the model
• 2-way interleaved PC2100 DDR SDRAM at 266 MHz
• Redundant ROM
• Advanced error-checking and correcting (ECC) memory
• Four 1-inch Ultra320 hot-pluggable hard disk drive bays
• Embedded Smart Array 5i Plus controller (Ultra3-based technology) with transportable
battery-backed write cache enabler
• Hot-pluggable floppy drive and CD or DVD drive
• Six 100 MHz, 64-bit PCI-X slots and two 133 MHz, 64-bit PCI-X slots
• An embedded dual-channel Gigabit Ethernet NIC with PXE support and Wake on LAN
(WOL)
• Redundant hot-pluggable power supplies with optional power supply installed
• Redundant hot-pluggable fans with N+1 redundancy
• Integrated Lights Out (iLO) technology
• QuickFind diagnostic display for troubleshooting at the server level
• ROM-based setup utility
The HP ProLiant DL585 has eight slots, two of which are 133 MHz PCI-X; the remainder are 100
MHz PCI-X. The two 133 MHz slots are on separate PCI-X buses. The remaining slots are located
as pairs on three buses.
Table 3-14 shows the required HP Cluster Platform Gigabit Ethernet slot assignments for an HP
ProLiant DL585 when it is used as a control node or compute node. Available slots can be used
for other supported I/O options, if required.
Table 3-14 Slot Assignments for the HP ProLiant DL585

Slot  Bus  Assignment                                  Description
1     5    1000Base-TX Gigabit Ethernet adapter card   133 MHz PCI-X
2     4                                                133 MHz PCI-X
3     3    1 Gb/s Ethernet NIC                         100 MHz PCI-X
4     3                                                100 MHz PCI-X
5     2    First 2 Gb/s Fibre Channel HBA (optional)   100 MHz PCI-X
6     2                                                100 MHz PCI-X
7     1    Second 2 Gb/s Fibre Channel HBA (optional)  100 MHz PCI-X
8     1                                                100 MHz PCI-X

There are three embedded network ports on an HP ProLiant DL585. The iLO port connects to
the console network switch and is used for server management. NIC 1 port is used for the



administrative network, and the NIC 2 port is used for the Gigabit Ethernet system interconnect.
See Figure 3-45 to locate the ports.
Figure 3-44 displays the front panel of the HP ProLiant DL585, and Figure 3-45 displays its rear
panel.

Figure 3-44 HP ProLiant DL585 Front Panel



The following table describes the callouts in Figure 3-44:

Item  Description                              SCSI ID (simplex)  SCSI ID (duplex)

1     Hot-pluggable power supply 2 (optional)
2     Eject button for universal media bay 1
3     Universal media bay 1 (diskette drive)
4     Hot-pluggable power supply 1 (primary)
5     Universal media bay 2 (CD drive)
6     Eject button for universal media bay 2
7     SCSI hot-pluggable hard drive bay 0      0                  0
8     SCSI hot-pluggable hard drive bay 1      1                  1
9     SCSI hot-pluggable hard drive bay 2      2                  0
10    SCSI hot-pluggable hard drive bay 3      3                  1
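The simplex/duplex ID pattern in the table above can be summarized in a short sketch. The helper is illustrative (our own function, not an HP utility), assuming the standard four-bay layout where duplex mode pairs bays 0-1 and 2-3 on two channels.

```python
# Illustrative sketch: SCSI IDs presented by the four hot-pluggable
# drive bays, matching the front-panel SCSI ID table.
def scsi_id(bay, mode="duplex"):
    """Return the SCSI ID for drive bay 0-3.

    In simplex mode all four bays share one channel (IDs 0-3); in
    duplex mode bays 0-1 and bays 2-3 each sit on their own channel,
    so IDs repeat as 0-1 on each channel.
    """
    if bay not in range(4):
        raise ValueError("the DL585 has drive bays 0-3")
    if mode == "simplex":
        return bay
    if mode == "duplex":
        return bay % 2
    raise ValueError(f"unknown mode: {mode}")
```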

Figure 3-45 HP ProLiant DL585 Rear Panel




The following table describes the callouts in Figure 3-45:

Item  Description

1     Integrated Lights-Out (iLO) manager connector
2     USB connector 1
3     USB connector 2
4     Rear Unit Identification (UID) button and LED
5     Keyboard connector
6     Mouse connector
7     Video connector
8     Serial connector
9     NIC 2 connector
10    NIC 1 connector
11    AC inlet 1 (primary)
12    AC inlet 2 (optional)

Integrated Lights Out


Integrated Lights Out (iLO) technology provides server health and remote server manageability.
The iLO subsystem includes an intelligent microprocessor, secure memory, and a dedicated
network interface. This design makes iLO independent of the host server and its operating system.
The iLO subsystem provides remote access to any authorized network client, sends alerts, and
provides other server management functions. It enables you to perform the following tasks:
• Remotely power up, power down, or reboot the host server
• Send alerts from iLO regardless of the state of the host server
• Access advanced troubleshooting features through the iLO interface
• Diagnose iLO using Insight Manager 7 through a Web browser and SNMP alerting
For more information about iLO features, see the Integrated Lights-Out User Guide on the HP
Cluster Platform CD or on the HP website:
http://www.hp.com/servers/lights-out
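iLO can also be scripted through RIBCL, its XML scripting interface. The sketch below only builds a power-control payload; the user name, password, and power state shown are placeholders, and the HTTPS POST that would deliver the payload to the iLO is deliberately omitted, so treat this as a hedged illustration rather than a complete client.

```python
# Illustrative sketch: build an iLO RIBCL power-control request.
# Credentials are placeholders; sending the XML to the iLO over
# HTTPS is omitted. Values containing XML special characters
# would need escaping before use.
def ribcl_set_power(user, password, power_on):
    """Return a RIBCL payload that sets host power on or off."""
    state = "Yes" if power_on else "No"
    return (
        '<RIBCL VERSION="2.0">'
        f'<LOGIN USER_LOGIN="{user}" PASSWORD="{password}">'
        '<SERVER_INFO MODE="write">'
        f'<SET_HOST_POWER HOST_POWER="{state}"/>'
        '</SERVER_INFO></LOGIN></RIBCL>'
    )
```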

3.7.1 HP ProLiant DL585 Memory Configuration


The HP Cluster Platform 4000 enforces the following memory configuration rules for the HP
ProLiant DL585:
• DIMMs on a processor memory board must be installed in pairs and in bank order.
• All DIMMs on a processor memory board must be of the same size.
• A minimum of two DIMMs must be installed in the processor memory board in slot 2.
For optimal performance, HP recommends that at least four DIMMs (two banks) be installed in
every processor memory board.
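These rules can be expressed as a simple checker. This is a hedged sketch (our own helper, not HP software) that examines the DIMM sizes installed on one processor memory board; it does not model bank ordering or the slot-2 board requirement, which depend on physical slot positions.

```python
# Illustrative checker for one processor memory board's DIMM
# population, following the rules listed above.
def check_memory_board(dimm_sizes_gb):
    """Return a list of problems for a board's installed DIMM sizes."""
    problems = []
    if len(dimm_sizes_gb) < 2:
        problems.append("at least two DIMMs are required")
    if len(dimm_sizes_gb) % 2:
        problems.append("DIMMs must be installed in pairs")
    if len(set(dimm_sizes_gb)) > 1:
        problems.append("all DIMMs on a board must be the same size")
    if not problems and len(dimm_sizes_gb) < 4:
        # Recommendation only, not a hard rule
        problems.append("note: HP recommends four DIMMs (two banks) for performance")
    return problems
```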

3.7.2 Removing an HP ProLiant DL585 from the Rack


To access internal components in the HP ProLiant DL585 server, you must first shut down power
to the server and remove it from the rack. All of the servers in the cluster are secured to the rack
on a sliding rail. This section describes how you shut down power, remove the server from the
rack, and access internal components.
When performing these tasks, heed the warnings and cautions listed in “Important Safety
Information” (page 23).



The front panel power button on the HP ProLiant DL585, as shown in Figure 3-44, toggles between
On and Standby. If you press the Power button on an HP ProLiant DL585 to power down the
server, the LED changes from green to amber, indicating Standby mode. In Standby mode, the
server removes power from most electronics and drives. Portions of the power supply and some
internal circuitry remain active. To completely remove all power from the server, disconnect
the power cord first from the AC outlet and then from the server.
To remove the ProLiant DL585 from the rack, follow these steps:
1. Power down the server.
2. Disconnect all of the remaining cables on the server rear panel, including cables extending
from external connectors on expansion boards. Make note of which Ethernet and interconnect
cables are connected to which ports.
3. Loosen the thumbscrews securing the server to the rack, as shown in Figure 3-46.

Figure 3-46 Loosening Thumbscrews


4. Slide the server out of the rack until the rail locks engage, as shown in Figure 3-47.

Figure 3-47 Sliding the HP ProLiant DL585 from the Rack


5. Press and hold the rail locks and extend the server until it clears the rack.
6. Remove the server from the rack and position it securely on a workbench or other solid
surface for stability and safety.
To remove the access panel on the HP ProLiant DL585, as shown in Figure 3-48, follow these
steps:



1. Remove the server from the rack, as described in Section 3.7.2.
2. Locate and remove the Torx T-15 tool that is stored on the back of the server chassis, between
the fans and the PCI slots.
3. Unlock the access panel latch using the Torx T-15 tool, as shown in Figure 3-48, that you
removed from the back of the server.

Figure 3-48 Unlocking the HP ProLiant DL585 Access Panel Latch


Item  Description

1     Access panel Torx T-15 tool
2     Access panel latch
3     Access panel

4. Lift up on the latch and remove the access panel.


To replace the access panel, place the panel on top of the server with the latch open. Allow the
panel to extend past the rear of the server approximately 1.25 cm (0.5 in). Push down on the
latch. The access panel slides to a closed position.
To return an HP ProLiant DL585 server to a rack, press the rail-release levers at the front of
both server rails and slide the server into the rack, as shown in Figure 3-49. Secure the server by
tightening the thumbscrews.

Figure 3-49 Sliding an HP ProLiant DL585 into a Rack




3.7.3 Replacing a PCI Card
The HP ProLiant DL585 supports the installation of both PCI (33 MHz and 66 MHz) and PCI-X
(66 MHz, 100 MHz, and 133 MHz) expansion boards. All PCI-X slots are 64-bit, 3.3-V keyed.
Figure 3-50 shows the HP ProLiant DL585 PCI-X expansion slots and buses. The system BIOS
detects the PCI-X devices in the slots in the following order: 8-7-6-5-4-3-2-1.

Figure 3-50 HP ProLiant DL585 PCI-X Expansion Slots and Buses


The following table describes the callouts in Figure 3-50.

Item  Slot    Bus         Description

1     Slot 1  Fifth bus   133 MHz
2     Slot 2  Fourth bus  133 MHz
3     Slot 3  Third bus   100 MHz
4     Slot 4  Third bus   100 MHz
5     Slot 5  Second bus  100 MHz
6     Slot 6  Second bus  100 MHz
7     Slot 7  First bus   100 MHz
8     Slot 8  First bus   100 MHz
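The slot-to-bus map and the BIOS detection order noted above (slots scanned 8 down to 1) can also be expressed as data. This is an illustrative sketch, not HP tooling; the table contents are assumptions drawn directly from the figure description.

```python
# Illustrative data version of the DL585 slot/bus table above:
# slot number -> (bus, clock speed in MHz).
SLOT_BUS_MHZ = {
    1: ("fifth", 133), 2: ("fourth", 133),
    3: ("third", 100), 4: ("third", 100),
    5: ("second", 100), 6: ("second", 100),
    7: ("first", 100), 8: ("first", 100),
}

# The system BIOS detects PCI-X devices from slot 8 down to slot 1.
scan_order = sorted(SLOT_BUS_MHZ, reverse=True)
```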

To replace a PCI expansion card, follow these steps:


1. Attach a grounding strap to your wrist or ankle and a metal part of the chassis.
2. Press the Power button to power down the server.

Note:
When the server powers down, the Power button LED changes from green to amber,
indicating Standby mode. In Standby mode, the server removes power from most electronics
and drives, but portions of the power supply and some internal circuitry remain active until
AC power is removed.

3. Disconnect the AC power cord, first from the AC outlet and then from the server.
4. Remove the server from the rack, as shown in Section 3.7.2.
5. Remove the access panel from the server, as shown in Figure 3-48, and locate the PCI slot.



6. Disconnect any cables connected to the expansion board.
7. Using the callouts shown in Figure 3-51 as a guide, press the PCI-X retaining clip toward
the front of the server to lock it in the open position (callout 1).

Figure 3-51 Removing a PCI Card from an HP ProLiant DL585 Server


8. Press down on the expansion slot latch to release it (callout 2).


9. Open the latch (callout 3).
10. Remove the board from the slot (callout 4).
11. Remove the new PCI card from its antistatic plastic bag. Handle the adapter gently, preferably
by the front panel or card edges. Do not touch the connectors. The front panel of the adapter
is the metal plate that contains the port connector and LEDs.
12. Record the adapter card serial number located on the card for future reference.
13. Insert the new PCI card in the slot, as shown by callout 1 in Figure 3-52, applying even
pressure to seat the board securely. Be sure the adapter is fully seated.

Figure 3-52 Installing a PCI Card in an HP ProLiant DL585 Server

14. Close the expansion slot latch (callout 2).


15. Close the PCI-X retaining clip.
16. Reinstall the access panel, reversing the steps in Section 3.7.2.
17. Replace the server in the rack, reversing the steps in Section 3.7.2.



18. Attach any cables between the card and the switch, and reconnect the AC power cord.
19. Press the Power button on the server. Check that the card is detected. Refer to your software
documentation for further installation instructions.

3.8 HP ProLiant DL585 G2


The HP ProLiant DL585 G2 typically functions as a control node, utility node, or I/O node. When
using the ProLiant DL585 G2 as the control node, a redundant power supply is mandatory.
Otherwise, a redundant power supply is optional. The 4U ProLiant DL585 G2 server can be
configured with up to four AMD Rev F Opteron Dual-Core processors.
The HP ProLiant DL585 G2 server has the following features:

Feature                    Specification

Processors                 Dual-Core AMD Opteron 8200 Series processors
Memory                     Up to 128 GB, supported by 32 slots of PC2-5300 Registered DIMMs
                           at 667 MHz
Cache Memory               1 MB integrated Level 2 cache per core
Chipset                    NVIDIA nForce Professional 2200 and 2050 chipsets, and the AMD-8132
                           chipset
Memory Protection          Advanced ECC
Memory Type                PC2-5300 Registered DIMMs at 667 MHz
Standard Memory            One of the following, depending on model:
                           2 GB (4 x 512 MB), 4 GB (4 x 1 GB), or 8 GB (8 x 1 GB)
Maximum Memory             128 GB (16 x 8 GB)
Network Controller         Dual embedded NC371i Multifunction Gigabit Network Adapters with
                           TCP/IP Offload Engine, including support for Accelerated iSCSI and
                           RDMA through optional ProLiant Essentials Licensing Kits
Manageability              Integrated Lights-Out 2 (iLO 2) Standard Management (ASIC on the
                           system board)
Storage Controllers        One of the following, depending on model:
                           HP Smart Array P400/256 MB Controller in a PCI Express slot
                           (standard on 2.0 GHz and 2.2 GHz models), or
                           HP Smart Array P400/512 MB BBWC Controller in a PCI Express slot
                           (standard on 2.4 GHz, 2.6 GHz, and 2.8 GHz models)
Storage Data Protection    The Battery-Backed Write Cache (BBWC) Enabler for the Smart Array
                           P400 Controller protects up to 512 MB of write cache memory from
                           hard boot, power, controller, or system board failures.
                           • Standard on all HP ProLiant DL585 G2 models
                           • Battery charge/life: up to 72 hours/3 years
                           • Transportable data protection
                           • Increases overall controller performance
                           Note: You can safely transport write cache data to another HP
                           ProLiant DL585 G2 in the data center by removing the BBWC Enabler
                           and the Smart Array P400 Controller memory module simultaneously
                           (they are connected by a short cable).
Diskette Drive             1.44 MB slimline diskette drive; ejectable for security and
                           serviceability
Optical Drive              Slimline DVD/CD-RW drive, standard on all models; ejectable for
                           security and serviceability
Hard Drives                None ship standard
Hard Drive Backplane       Internal SAS backplane supports up to eight SFF hard disk drives
Maximum Internal Storage   SAS: 1.168 TB (8 x 146 GB SFF SAS drives; internal hot plug)
                           SATA: 480 GB (8 x 60 GB SFF SATA drives; internal hot plug)
Graphics                   Integrated ATI RN50 with 32 MB DDR memory

Figure 3-53 shows the ProLiant DL585 G2 front panel features.

Figure 3-53 HP ProLiant DL585 G2 Front Panel


The following list corresponds to the callouts shown in Figure 3-53.


1. Hard drive bay 1
2. Hard drive bay 2
3. Hard drive bay 3
4. Hard drive bay 4
5. Hard drive bay 5
6. Hard drive bay 6
7. Hard drive bay 7



8. Hard drive bay 8
9. Video connector
10. USB connectors (two)
11. Media drive blank or optional media drive
12. DVD drive
13. UID switch and LED
14. Internal system health LED
15. External system health LED
16. NIC 1 link/activity LED
17. NIC 2 link/activity LED
18. Power on/Standby button and LED
19. Processor memory module
Figure 3-54 shows the rear panel of the HP ProLiant DL585 G2 server.

Figure 3-54 HP ProLiant DL585 G2 Rear Panel


The following list corresponds to the callouts shown in Figure 3-54.


1. Redundant hot-plug power supply (optional)
2. PCI Express and PCI-X non-hot-plug expansion slots
3. Hot-plug power supply (primary)
4. Power supply Fail LED 1 (amber)
5. Power supply Power LED 2 (green)
6. T-15 Torx screwdriver
7. NIC Activity LED
8. NIC Link LED
9. NIC connector 1
10. NIC connector 2
11. iLO 2 connector
12. Serial connector
13. USB connectors (two)
14. Keyboard connector
15. Mouse connector



16. Video connector
17. Rear UID button and LED

3.8.1 Removing an HP ProLiant DL585 G2 from the Rack


To access internal components in the HP ProLiant DL585 G2 server, you must first shut down
power to the server and remove it from the rack. All of the servers in the cluster are secured to
the rack on a sliding rail. This section describes how you shut down power, remove the server
from the rack, and access internal components.
When performing these tasks, heed the warnings and cautions listed in “Important Safety
Information” (page 23).
The front panel power button on the HP ProLiant DL585 G2, as shown in Figure 3-53, toggles
between On and Standby. If you press the Power button on an HP ProLiant DL585 G2 to power
down the server, the LED changes from green to amber, indicating Standby mode. In Standby
mode, the server removes power from most electronics and drives. Portions of the power supply
and some internal circuitry remain active. To completely remove all power from the server,
disconnect the power cord first from the AC outlet and then from the server.
To remove the ProLiant DL585 G2 from the rack, follow these steps:
1. Power down the server.
2. Disconnect all remaining cables on the server rear panel, including cables extending from
external connectors on expansion boards. Make note of which Ethernet and interconnect
cables are connected to which ports.
3. Pull down the quick-release levers on each side of the server to release the server from the
rack, as shown by callout 1 in Figure 3-55, and callout 3 in Figure 3-56.

Figure 3-55 Removing the Server from the Rack


4. Slide the server out of the rack until the rail locks engage, as shown in Figure 3-56.



Figure 3-56 Sliding the ProLiant DL585 G2 from the Rack

5. Press and hold the rail locks (see callout 1 in Figure 3-56) and extend the server until it clears
the rack.
6. Remove the server from the rack and position it securely on a workbench or other solid
surface for stability and safety.
To remove the access panel on the HP ProLiant DL585 G2, as shown in Figure 3-57, follow these
steps:
1. Remove the server from the rack, as described in Section 3.8.1.
2. Locate and remove the Torx T-15 tool that is stored on the back of the server chassis to unlock
the access panel latch.



Figure 3-57 Unlocking the ProLiant DL585 G2 Access Panel Latch


3. Lift up on the latch and remove the access panel.


To replace the access panel, place the panel on top of the server with the latch open. Allow the
panel to extend past the rear of the server approximately 1.25 cm (0.5 in). Push down on the
latch. The access panel slides to a closed position.
To return an HP ProLiant DL585 G2 server to a rack, press the rail-release levers at the front
of both server rails and slide the server into the rack, as shown in Figure 3-56. Secure the server
by snapping in the release latches.

3.8.2 Replacing a PCI Card


Figure 3-58 shows the HP ProLiant DL585 G2 PCI expansion slots.



Figure 3-58 HP ProLiant DL585 G2 PCI Slots


The following table describes the callouts in Figure 3-58.

Item  Slot    Bus  Description                           Comment

1     Slot 1  66   PCI-X, 64-bit/100 MHz (half-length)   PCI-X Interconnect
2     Slot 2  66   PCI-X, 64-bit/100 MHz (full-length)
3     Slot 3  73   PCI Express x4 (full-length)          1 Gb/s Ethernet NIC
4     Slot 4  76   PCI Express x4 (full-length)
5     Slot 5  70   PCI Express x8 (full-length)          First 2 Gb/s Fibre Channel HBA (optional)
6     Slot 6  79   PCI Express x4 (full-length)
7     Slot 7  2    PCI Express x8 (full-length)          Second 2 Gb/s Fibre Channel HBA (optional)
8     Slot 8  5    PCI Express x8 (full-length)
9     Slot 9  8    PCI Express x4 (half-length)

To replace a PCI expansion card, follow these steps:


1. Attach a grounding strap to your wrist or ankle and a metal part of the chassis.
2. Press the Power button to power down the server.



Note:
When the server powers down, the Power button LED changes from green to amber,
indicating Standby mode. In Standby mode, the server removes power from most electronics
and drives, but portions of the power supply and some internal circuitry remain active until
AC power is removed.

3. Disconnect the AC power cord, first from the AC outlet and then from the server.
4. Remove the server from the rack, as shown in Section 3.8.1.
5. Remove the access panel from the server, as shown in Figure 3-57, and locate the PCI slot.
6. Disconnect any cables connected to the expansion board.
7. Using the callouts in Figure 3-59 as a guide, press the PCI card retaining clip toward the
front of the server to lock it in the open position (see callout 1).

Figure 3-59 Removing a PCI Card from an HP ProLiant DL585 G2 Server


8. Press down on the expansion slot latch to release it (callout 2).


9. Unlock the retaining clip (callout 3), if necessary, such as for full-length PCI expansion cards.
10. Remove the board from the slot (callout 4).
11. Remove the new PCI card from its antistatic plastic bag. Handle the adapter gently, preferably
by the front panel or card edges. Do not touch the connectors. The front panel of the adapter
is the metal plate that contains the port connector and LEDs.
12. Record the adapter card serial number located on the card for future reference.
13. Insert the new PCI card in the slot, applying even pressure to seat the board securely. Be
sure the adapter is fully seated.
14. Close the expansion slot latch (callout 2).
15. Close the PCI retaining clip (callout 3), if necessary.
16. Reinstall the access panel, reversing the steps in Section 3.8.1.
17. Replace the server in the rack, reversing the steps in Section 3.8.1.



18. Attach any cables between the card and the switch, and reconnect the AC power cord.
19. Press the Power button on the server. Check that the card is detected. Refer to your software
documentation for further installation instructions.

3.9 HP ProLiant DL585 G5


The HP ProLiant DL585 G5 server typically functions as a control node, utility node, or compute
(application) node in HP Cluster Platform. When using the ProLiant DL585 G5 as the control
node, a redundant power supply is mandatory. Otherwise, a redundant power supply is optional.
The 4U ProLiant DL585 G5 server can be configured with up to four AMD Opteron quad-core
processors and up to 128 GB of memory.
For the features and specifications of the ProLiant DL585 G5, go to the QuickSpecs on the HP
website:
http://h18004.www1.hp.com/products/quickspecs/13016_na/13016_na.html
The front and rear panel features of the ProLiant DL585 G5 are the same as the ProLiant DL585
G2. See Figure 3-53 (page 145) and Figure 3-54 (page 146).

3.9.1 PCI Slot Assignments


The following table provides the PCI slot assignments for ProLiant DL585 G5 when the server
is used in HP Cluster Platform solutions.

Slot  Assignment                                 Comment

1     PCI-X Interconnect                         PCI-X, 64-bit/100 MHz
2                                                PCI-X, 64-bit/100 MHz
3     1 Gb/s Ethernet NIC                        PCI Express x4
4                                                PCI Express x4
5     First 2 Gb/s Fibre Channel HBA (optional)  PCI Express x8
6                                                PCI Express x4
7     PCI Express Interconnect                   PCI Express x8
8                                                PCI Express x8
9                                                PCI Express x4

For additional information, such as the system board layout and installing PCI cards, see the HP
ProLiant DL585 Generation 5 Server Maintenance and Service Guide:
http://bizsupport.austin.hp.com/bc/docs/support/SupportManual/c01384250/c01384250.pdf



4 Server Blades
An HP Cluster Platform solution can be configured with p-Class or c-Class enclosures. This
chapter describes the BladeSystem enclosures and ProLiant BL servers and devices used in HP
Cluster Platform solutions. All server blades feature high-performance memory, iLO/iLO 2
advanced functionality, multiple general purpose Gigabit Ethernet network controllers, and
optional FC SAN connectivity. This chapter presents overviews of the following:
• HP BladeSystem p-Class (Section 4.1)
• HP ProLiant BL35p (Section 4.2)
• HP ProLiant BL45p (Section 4.3)
• HP BladeSystem c-Class enclosure (Section 4.4)
• HP ProLiant BL2x220c (Section 4.7)
• HP ProLiant BL260c (Section 4.8)
• HP ProLiant BL460c (Section 4.9)
• HP ProLiant BL480c (Section 4.10)
• HP ProLiant BL465c (Section 4.11)
• HP ProLiant BL465c G5 (Section 4.12)
• HP ProLiant BL685c (Section 4.13)
• HP ProLiant BL685c G5 (Section 4.14)
• HP Integrity BL860c (Section 4.15)

4.1 HP BladeSystem p-Class Overview


The HP BladeSystem p-Class solution consists of HP ProLiant BL server blades, network
interconnect components, a rack-centralized power subsystem, and management tools that enable
adaptive computing and rapid deployment. Server blades feature high-performance DDR memory,
BIOS-enhanced RAID, iLO advanced functionality, two general-purpose Gigabit Ethernet network
controllers, and optional FC SAN connectivity.
HP BladeSystem p-Class 6U enclosures hold server blades and network interconnect options.
The server blades slide into the blade enclosure backplanes for power and network connections.
Each server blade enclosure has eight server blade bays in the center of the enclosure and two
interconnect bays at each end. The signals are routed from the server blades, across the server
blade enclosure backplane, to the interconnect blades.
Combinations of different series server blades are supported in the same server blade enclosure.
Each enclosure supports a pair of switch or patch panel interconnects for network cable
management. The upgraded enclosure also provides a single Ethernet port for connecting to the
iLO interface of every installed server blade. Some models of server blades, including the BL35p,
require the use of the enclosure with enhanced backplane components.
HP BladeSystem p-Class interconnects pass the network adapter (NIC) signals from the server
blades to external networks. The p-Class solution power subsystem is a 3U power enclosure that
holds up to six hot-pluggable power supplies. Additional power supplies and power enclosures
can be added to a system for redundancy. Power is carried from the power supplies in the power
enclosures to the server blade enclosures through bus bars (for powering multiple enclosures)
or a power bus box (for powering a single enclosure).

4.2 HP ProLiant BL35p Server Blade Overview


The HP ProLiant BL35p features up to two AMD Opteron 200 series low-power processors per
server blade. It has a modular, space-saving design that consumes less power and enables dense
rack architectures.



The HP ProLiant BL35p server blade has the following features:

Feature Specification

Available processors AMD Opteron™ Model 250 - 2.4 GHz, 1 MB L2 (68W)

Processor capacity 2

Memory type PC3200 DDR 400 MHz (4 slots) 2:1 interleave

Maximum memory 8 GB

NIC Two 10/100/1000 NICs on mezzanine card + 1 dedicated iLO port

Slots None

Storage type ATA non-hot plug

Maximum hard drive bays 2 — Optional small form factor ATA 60 GB hard disk drives

Connects to fibre channel storage Yes

Storage controller Integrated with chipset

Chassis 16 server blades per 6U enclosure

Networking 2 - 10/100/1000T NICs + 1 Dedicated iLO Port

Remote management iLO Advanced

Power Rack-centralized redundant power subsystem, with hot-pluggable power supplies

Figure 4-1 shows the front panel of the HP ProLiant BL35p and Figure 4-2 shows its internal
components.

Figure 4-1 HP ProLiant BL35p Front Panel




The following table describes the callouts in Figure 4-1.

Item Description

1 UID LED

2 Internal system health LED

3 NIC 1 LED (actual NIC numbering depends on several factors, including the operating
system installed on the server blade)

4 NIC 2 LED (actual NIC numbering depends on several factors, including the operating
system installed on the server blade)

5 Hard drive activity LED

6 Power On/Standby button

7 Local I/O port

Figure 4-2 HP ProLiant BL35p Internal Components




The following table describes the callouts in Figure 4-2.

Item Description

1 Hard drive cage

2 Fan assembly

3 Fan assembly connectors (2)

4 System maintenance switch (SW1)

5 Hard drive cable connector

6 Processor socket 2

7 DIMM bank B

8 Processor socket 1 (populated)

9 DIMM bank A (populated)

10 Adapter card connectors (2)

11 Battery

The HP ProLiant BL35p server blades require the support of an HP BladeSystem p-Class sleeve
in a server blade enclosure with enhanced backplane components (enhanced server blade
enclosure). Each sleeve holds two HP BL35p server blades and installs in a single bay in the
enclosure. Up to eight sleeves can be installed in the enclosure and each sleeve holds two blades
for a total of 16 HP ProLiant BL35p blades per 6U enclosure. The enhanced server blade enclosure
also provides a single rear iLO connector for single-cable remote management of all installed
HP ProLiant BL35p server blades. For more information about the enhanced server blade
enclosure, see the HP ProLiant BL p-Class Server Blade Enclosure Upgrade Installation Guide or the
HP ProLiant BL p-Class Server Blade Enclosure Installation Guide.
Each HP ProLiant BL35p server blade includes three network adapters: two Broadcom 5703
Gigabit Ethernet Embedded 10/100/1000T WOL (Wake On LAN) enabled with Preboot eXecution
Environment (PXE) plus one additional 10/100T Ethernet adapter dedicated to iLO management.

4.2.1 Supported Memory Configurations


Each processor in the HP ProLiant BL35p has a bank consisting of two DIMM slots. The server
blade supports up to 8 GB of memory. The following guidelines apply to the HP BL35p DIMMs:
• All DIMMs must be PC3200 DDR 400-MHz SDRAM DIMMs.
• Both DIMM slots in a bank must be populated.
• Both DIMMs in a bank must be identical.
• DIMM bank A must always be populated.
• DIMM bank B is only active when processor socket 2 is populated.
• For best performance, each processor should have a populated memory bank.

Caution:
Use only HP DIMMs. DIMMs from other sources may adversely affect data integrity.

PC3200 DIMMs can be either single- or dual-rank. A dual-rank DIMM behaves like two separate
DIMMs on a single module. Dual-rank DIMMs exist primarily to provide the largest-capacity
DIMM that the current DIMM technology allows: if the technology of the day allows 2 GB
single-rank DIMMs, a dual-rank DIMM built with the same technology would be 4 GB.
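The population guidelines above lend themselves to a simple configuration check. The following Python sketch is illustrative only (the function name and argument layout are assumptions, not part of any HP tool):

```python
def validate_bl35p_memory(bank_a, bank_b, cpu2_installed):
    """Check a BL35p DIMM layout against the population guidelines.

    bank_a / bank_b are lists of DIMM sizes in GB, one entry per
    populated slot (each bank has two slots). Returns a list of
    rule violations; an empty list means the layout is valid.
    """
    errors = []
    # DIMM bank A must always be populated, with both slots filled.
    if len(bank_a) != 2:
        errors.append("bank A must have both slots populated")
    # Both DIMMs in a bank must be identical.
    if len(set(bank_a)) > 1:
        errors.append("bank A DIMMs must be identical")
    if bank_b:
        if len(bank_b) != 2 or len(set(bank_b)) > 1:
            errors.append("bank B must hold two identical DIMMs")
        # Bank B is only active when processor socket 2 is populated.
        if not cpu2_installed:
            errors.append("bank B is inactive without a second processor")
    # The server blade supports up to 8 GB total.
    if sum(bank_a) + sum(bank_b) > 8:
        errors.append("total memory exceeds the 8 GB maximum")
    return errors

# A valid two-processor layout: four 2 GB DIMMs.
print(validate_bl35p_memory([2, 2], [2, 2], cpu2_installed=True))  # []
```

The check mirrors the bulleted rules one for one, so each violation message maps back to a guideline in the list above.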



4.2.2 Supported Storage
The HP ProLiant BL35p supports up to two 60 GB parallel ATA hard disk drives. The lower
drive bay of the drive cage assembly is designated as the primary hard drive. The procedure for
inserting and removing a disk drive is described in the document that ships with the drive and
in the HP ProLiant BL35p Server Blade User Guide.
In addition to the hard drives, the HP ProLiant BL35p has SAN storage capability. It can be
configured for SAN connectivity when used with the following components:
• Fibre Channel adapter
• SAN-compatible interconnect
• SFP transceivers (included with the Dual Port FC Adapter)
• Optical FC cables
• Supported SAN and associated software
For more detailed SAN configuration information for the server blade, see the following
documents:
• The model-specific QuickSpecs document located on the HP ProLiant p-Class server blade
products web page:
http://www.hp.com/products/servers/proliant-bl/p-class/info
• The HP StorageWorks SAN documentation:
http://h18006.www1.hp.com/storage/index.html
Search for the SAN product required, and navigate to technical documentation.
• The HP BladeSystem p-Class storage web page:
http://www.hp.com/go/bladesystem/storage

4.2.3 Removing the HP ProLiant BL35p from the Sleeve


To remove an HP ProLiant BL35p from the HP BladeSystem p-Class sleeve in the blade enclosure,
follow these steps:
• Back up all server blade data.
• Power down the server blade, using one of the following methods:
— Press the Power On/Standby button on the server blade front panel. Be sure that the
server blade is in Standby mode by observing that the power LED is amber. This process
may take 30 seconds, during which time some internal circuitry remains active.
— Use the virtual power button feature in the iLO Remote Console to power down the
server blade from a remote location. After initiating a manual or virtual power down
command, be sure that the server blade goes into Standby mode by observing that the
power LED is amber.

Important:
When the server blade is in Standby mode, auxiliary power is still being provided. To
remove all power from the server blade, remove the server blade from the server blade
enclosure. Removing the sleeve from the server blade enclosure is not necessary.

Important:
Remote power procedures require the most recent firmware for the power enclosure
and server blade enclosure management modules.

• Remove the server blade from the sleeve, as shown in Figure 4-3.



Figure 4-3 Removing the HP ProLiant BL35p from the Enclosure Sleeve

To install and power up a server blade, reverse the removal procedure. Server blades are set to
power up automatically upon insertion. If you changed the setting, use the power button or iLO
Virtual Power Button feature to power up the server blade.
For more information about iLO, see the HP Integrated Lights-Out User Guide.

4.3 HP ProLiant BL45p Server Blade Overview


The ProLiant BL45p four-processor server blade features AMD Opteron 800 series processors
with dual-core technology, increased density, SAN storage capability, 32 GB memory capacity
and four gigabit Ethernet NICs standard. It supports maximum performance DDR memory,
integrated SmartArray RAID controller, universal hot-pluggable SCSI hard drives, iLO advanced
functionality, four general-purpose gigabit Ethernet network controllers, and optional FC SAN
connectivity. The HP ProLiant BL45p also shares the same infrastructure components as all other
p-Class server blades.
Each server blade supports up to four 2.2 GHz AMD Opteron Dual-Core processors with 1 GHz
HyperTransport and 1 MB L2 cache. Two universal hot-pluggable SCSI hard drives provide up
to 600 GB capacity, plus an embedded Smart Array 6i Plus controller with Ultra3 performance
and optional battery-backed cache option. Sixteen DIMM slots have a maximum capacity of 32
GB of 400-MHz, ECC PC3200 DDR. Server blades feature 2 x 1 interleaved memory for added
performance. DIMMs must be added in pairs.
The characteristics of the HP ProLiant BL45p server blade are described in Table 4-1.
Table 4-1 ProLiant BL45p Characteristics
Feature Specification

Available processors AMD Opteron™ Model 800 - 2.2 GHz, 1 MB L2 cache (dual-core) or 2.6 GHz, 1
MB L2 cache (single-core)

Processor capacity 4

Memory type PC3200 DDR

Maximum memory 32 GB

Storage type SCSI hot plug



Table 4-1 ProLiant BL45p Characteristics (continued)
Feature Specification

Maximum hard drive bays 2 – 3.5" universal SCSI hot-pluggable hard disk drive bays

Connects to Fibre Channel storage Yes

Storage controller Smart Array 6i Plus

Chassis 4 server blades per 6U enclosure

Networking 4 - 10/100/1000T NICs + 1 Dedicated iLO Port

Remote management iLO Advanced

Power Rack-centralized redundant power subsystem, with hot-pluggable power supplies

Figure 4-4 displays the front panel of the HP ProLiant BL45p.

Figure 4-4 ProLiant BL45p Front Panel


The following table describes the callouts in Figure 4-4.

Item Description

1 Hot-pluggable SCSI hard disk drive bay 1

2 I/O port

3 Power On/Standby button

4 Hot-pluggable SCSI hard disk drive bay 2

The I/O port is used with the local I/O cable to perform some server blade configuration and
diagnostic procedures.
Figure 4-5 displays the rear panel components of the HP ProLiant BL45p.



Figure 4-5 HP ProLiant BL45p Rear Panel Components

Item Description

1 Power connectors

2 Signal connector

The HP ProLiant BL45p has two system boards. The primary system board relates to the first
and second processor, and the second system board relates to the third and fourth processors.
Figure 4-6 displays the primary system board, and Figure 4-7 displays the secondary system
board.

Figure 4-6 HP ProLiant BL45p Primary System Board




The following table describes the callouts in Figure 4-6.

Item Description

1 Fibre Channel adapter (optional)

2 Power converter modules

3 Smart Array 6i controller

4 Smart Array 6i battery-backed write cache enabler (optional)

5 Processor 1 memory bank 2

6 Processor 1 memory bank 1 (shown populated)

7 DIMMs 1-4

8 Processor socket 1 (shown populated)

9 SCSI backplane board connector 1

10 Power button/LED board connector

11 System maintenance switch (SW1)

12 DC filter module

13 Standard NIC mezzanine card

14 System battery

15 Processor 2 memory bank 2

16 Processor 2 memory bank 1 (shown populated)

17 DIMMs 5-8

18 Processor socket 2 (shown populated)

19 SCSI backplane board connector 2

20 Fan connectors



Figure 4-7 HP ProLiant BL45p Secondary System Board


The following table describes the callouts in Figure 4-7.

Item Description

1 DC filter module

2 Processor 3 memory bank 2

3 Processor 3 memory bank 1 (shown populated)

4 DIMMs 9-12

5 Processor socket 3 (shown populated)

6 Fan connectors

7 Power converter modules

8 Processor 4 memory bank 2

9 Processor 4 memory bank 1 (shown populated)

10 DIMMs 13-16

11 Processor socket 4 (shown populated)

4.3.1 Supported Memory


The HP ProLiant BL45p server blade ships with two DIMMs installed in memory bank 1 for each
installed processor. Each processor has two banks, and each bank consists of two DIMM slots.



DIMM banks are active only when the corresponding processor socket is populated. The following
guidelines apply to the BL45p DIMMs:
• All DIMMs must be PC3200 DDR 400-MHz SDRAM DIMMs.
• Both DIMM slots in a bank must be populated.
• Both DIMMs in a bank must be identical.
• Processor 1 memory bank 1 must always be populated.
• If mixing dual- and single-rank DIMMs, the dual-rank DIMMs must be installed in memory
bank 1.
• For optimal performance in most applications, populate memory bank 1 for every populated
processor socket.
• For DIMM slots 1 and 2, remove the air baffle, if necessary.

Caution:
Use only HP DIMMs. DIMMs from other sources may adversely affect data integrity.

PC3200 DIMMs can be either single- or dual-rank. A dual-rank DIMM behaves like two separate
DIMMs on a single module. Dual-rank DIMMs exist primarily to provide the largest-capacity
DIMM that the current DIMM technology allows: if the technology of the day allows 2 GB
single-rank DIMMs, a dual-rank DIMM built with the same technology would be 4 GB.
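The rank-mixing rule above (dual-rank DIMMs ahead of single-rank) reduces to a one-line comparison per processor. A minimal Python sketch, with illustrative names of my own choosing:

```python
def bl45p_bank_order_ok(bank1_ranks, bank2_ranks):
    """Check the dual-/single-rank placement rule for one processor.

    bank1_ranks / bank2_ranks give the rank count (1 or 2) of the
    DIMM pair installed in each memory bank, or None if the bank
    is empty. If ranks are mixed, the dual-rank pair must sit in
    memory bank 1.
    """
    if bank2_ranks is None:
        return True          # only one bank populated: nothing to mix
    if bank1_ranks is None:
        return False         # bank 1 must be populated before bank 2
    # A dual-rank pair (2) must not sit behind a single-rank pair (1).
    return bank1_ranks >= bank2_ranks

print(bl45p_bank_order_ok(2, 1))   # True: dual-rank in bank 1 is allowed
print(bl45p_bank_order_ok(1, 2))   # False: dual-rank must move to bank 1
```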

4.3.2 Supported Storage


The HP ProLiant BL45p supports up to two optional Wide Ultra3 SCSI drives for a maximum
of 600 GB of internal storage.
The physical aspect of inserting and removing a disk drive is discussed in the document that
comes with the drive and in the HP ProLiant BL45p Server Blade User Guide.
The HP ProLiant BL45p server blade also delivers optional Fibre Channel support for SAN
implementations and clustering capabilities. Fibre Channel capability is achieved using a Dual
Port Fibre Channel Mezzanine Card (2 GB) specifically designed for the HP ProLiant BL45p.
For more detailed SAN configuration information for the server blade, see the following
documents:
• The model-specific QuickSpecs document located on the HP ProLiant p-Class server blade
products Web page: http://www.hp.com/products/servers/proliant-bl/p-class/info
• The HP StorageWorks SAN documentation:
http://h18006.www1.hp.com/storage/index.html
Search for the SAN product required, and navigate to technical documentation.
• The HP BladeSystem p-Class storage Web page:
http://www.hp.com/go/bladesystem/storage

4.3.3 Removing the HP ProLiant BL45p from the Rack Enclosure


To remove an HP ProLiant BL45p server from the blade enclosure, follow these steps:
• Back up all server blade data.
• Power down the server blade, using one of the following methods:
— Press the Power On/Standby button on the server blade front panel. Be sure that the
server blade is in Standby mode by observing that the power LED is amber. This process
may take 30 seconds, during which time some internal circuitry remains active.
— Use the virtual power button feature in the iLO Remote Console to power down the
server blade from a remote location. After initiating a manual or virtual power down



command, be sure that the server blade goes into Standby mode by observing that the
power LED is amber.

Important:
When the server blade is in Standby mode, auxiliary power is still being provided. To
remove all power from the server blade, remove the server blade from the server blade
enclosure. Removing the sleeve from the server blade enclosure is not necessary.

Important:
Remote power procedures require the most recent firmware for the power enclosure
and server blade enclosure management modules.

• Remove the server blade from the blade enclosure, as shown in Figure 4-8.

Figure 4-8 Removing the HP ProLiant BL45p from the Rack Enclosure

To install and power up a server blade, reverse the removal procedure. Server blades are set to
power up automatically upon insertion. If you changed the setting, use the power button or iLO
Virtual Power Button feature to power up the server blade.
For more information about iLO, see the HP Integrated Lights-Out User Guide.

4.4 HP BladeSystem c-Class Enclosure Overview


The HP BladeSystem c-Class Enclosure provides all the power, cooling, and I/O infrastructure
needed to support today’s modular server, interconnect, and storage components as well as to
meet these needs throughout the next several years. The c-Class enclosure (see Section 4.4.1) is
10U high and holds up to 16 server and/or storage blades plus optional redundant network and
storage interconnect modules. It includes a shared, multiterabit high-speed midplane for wire-once
connectivity of server blades to network and shared storage. Power is delivered through a pooled
power backplane that ensures the full capacity of the power supplies is available to all server
blades.

4.4.1 HP BladeSystem c–Class Enclosure Features


The HP BladeSystem c–Class enclosure consolidates the essential elements of a datacenter –
power, cooling, management, connectivity, redundancy, security – into a modular, self-tuning
unit with built-in intelligence. The 10U rack-mount c-Class enclosure supports up to 16 multicore
dual-processor server blades, and is offered in single or three-phase power options for maximum
power configuration flexibility. The c–Class enclosure offers up to four redundant I/O fabrics
and a five Terabit backplane to support current and future I/O connections. The c–Class enclosure



includes Onboard Administrator enclosure management, iLO 2 server management, and HP
Insight Control so you can manage servers and enclosures locally and remotely with complete
control regardless of the state of the server operating system.
The c–Class enclosure can be populated with the following components:
• Up to eight full-height server blades or up to 16 half-height server and/or storage blades per
enclosure
• Up to eight interconnect modules supporting a variety of network interconnect fabrics such
as Ethernet, Fibre Channel (FC), InfiniBand (IB), Internet Small Computer System Interface
(iSCSI), or Serial-attached SCSI (SAS) simultaneously within the enclosure
• Active Cool fan kits for a maximum of 10 fans
• Up to six power supplies
• Redundant Onboard Administrator management modules (optional active-standby design)
All devices are customer replaceable and hot-pluggable.
Table 4-2 describes the features of the HP Bladesystem c-Class enclosure.
Figure 4-9 and Figure 4-10 show the front and rear views of an HP BladeSystem c–Class enclosure.
Table 4-2 HP BladeSystem c–Class Enclosure Features
Item Description

Capacity

Device bays Up to 16 half-height server blades

Up to eight full-height server blades

Mixed configurations supported

Interconnect bays Eight, in any I/O fabric

Power supply Up to 6 x 2250W

Fans 4 or 6 standard, up to 10 total

Onboard Administrator 2

Power

3-phase North America & Japan 2 x NEMA L15-30p


model

3-phase International model 2 x IEC 309 5-Pin, 9h, Red, 16A

Single-phase model 6 x IEC-320 C20

Interconnects

Ethernet HP 1Gb Ethernet Pass-Thru Module

Cisco Catalyst Blade Switch 3020

GbE2c Ethernet Blade Switch

Fibre Channel HP 16 port 4Gb FC Pass-Thru Module

Brocade 4Gb SAN Switch

InfiniBand HP 4x DDR IB Switch Module

Dimensions

Height 10U

Width 17.5 in (445 mm); fits a 19-inch rack

Depth 32 in (813 mm)



Note:
This guide only addresses the HP BladeSystem c7000 enclosure. For information about the HP
BladeSystem c3000 enclosure, see either of the following documents depending on how the HP
BladeSystem c3000 enclosure is provided:
• If the c3000 enclosure is rack-mounted, see the Workgroup System and Cluster Platform Express
Overview and Hardware Installation Guide.
• If the c3000 enclosure is the pedestal version, see the HP Cluster Platform Workgroup System
Tower Hardware Installation Guide.

Figure 4-9 HP BladeSystem c–Class Enclosure Front View


The following list corresponds to the callouts shown in Figure 4-9:


1. Device bay 1 (up to 16 half-height server blades)
2. Device bay 16
3. Onboard Administrator Insight Display
4. Power supply bays (up to six power supplies)
5. c-Class (c7000) BladeSystem enclosure



Figure 4-10 HP BladeSystem c–Class Enclosure Rear View


The following list corresponds to the callouts in Figure 4-10:


1. Active cool fan bays (up to 10 fans)
2. Interconnect bays
3. Power connections – single phase shown (three phase power is hard-wired to chassis)
4. Onboard Administrator
5. Redundant Onboard Administrator

Note:
See the HP BladeSystem c7000 Enclosure Setup and Installation Guide for more information on LEDs
and buttons for the Onboard Administrator.

4.4.2 HP BladeSystem c–Class Enclosure Device Bay Numbering


The BladeSystem c–Class enclosure can be loaded with eight full-height or 16 half-height device
bays, as shown in Figure 4-11 and Figure 4-12.
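Assuming the numbering shown in Figure 4-11 and Figure 4-12, a full-height blade in bay n spans the upper half-height bay n and the lower half-height bay n + 8. A small illustrative sketch of that mapping (the helper name is mine, not an HP utility):

```python
def half_height_bays(full_height_bay):
    """Half-height device bays consumed by one full-height blade.

    Assumes the c7000 numbering of Figures 4-11 and 4-12:
    full-height bay n occupies half-height bays n and n + 8.
    """
    if not 1 <= full_height_bay <= 8:
        raise ValueError("full-height bays are numbered 1-8")
    return (full_height_bay, full_height_bay + 8)

print(half_height_bays(3))  # (3, 11)
```

This pairing explains why mixed configurations work: any top/bottom bay pair not taken by a full-height blade remains available for two half-height blades.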



Figure 4-11 HP BladeSystem c–Class Enclosure Device Bay Numbering (Full-Height Device Bays)


Figure 4-12 HP c–Class BladeSystem Enclosure Device Bay Numbering (Half-Height Device Bays)


Figure 4-13 shows a sample configuration with half-height and full-height server blades.



Figure 4-13 HP c–Class BladeSystem Enclosure Example


The following list corresponds to the callouts in Figure 4-13.


1. Two full-height ProLiant BL480c server blades
2. Six half-height ProLiant BL460c server blades
3. Six blanks (must be installed if not using server bays for proper airflow and cooling)
4. Front rail mounting screw (four)
5. Hot-swap power supply bays (occupied)
6. Onboard Administrator Insight Display
7. ProLiant BL480c server blade hard drive
8. Systems manager display connector

4.4.3 Interconnect Module Bay Numbering


You must install interconnect modules in the appropriate bay(s) to support network connections
for specific signals. Figure 4-14 shows the module bay numbers in the c-Class enclosure and
Figure 4-15 provides descriptions for each bay.



Figure 4-14 HP c–Class BladeSystem Module Bay Numbering

Figure 4-15 HP c–Class BladeSystem Module Bay Numbering Descriptions



Note:
For information on the location of LEDs and ports on individual interconnect modules, see the
documentation that ships with the interconnect module.

4.4.4 HP BladeSystem c-7000 Interconnect Module Bay to Server Blade Type Port Mapping
The following table maps the server blade's embedded NICs and mezzanine port slot assignments
for various server blade types (single density full-height, single density half-height, and double
density half-height server blades) to the HP BladeSystem c-7000 enclosure interconnect module
bays.
Table 4-3 HP BladeSystem c-7000 Interconnect Module Bay to Server Blade Type Port Mapping

• Bay 1: single-density half-height, NIC 1 (embedded); double-density half-height Server A, NIC 1 (embedded); double-density half-height Server B, N/A; single-density full-height, NIC 1 and NIC 3 (embedded)

• Bay 2: single-density half-height, NIC 2 (embedded); Server A, N/A; Server B, NIC 1 (embedded); full-height, NIC 2 and NIC 4 (embedded)

• Bays 3 and 4: single-density half-height, mezzanine slot 1, port 1; Server A, NIC 2 (embedded, bay 3); Server B, NIC 2 (embedded, bay 4); full-height, mezzanine slot 1, port 1

• Bays 5 and 6: single-density half-height, mezzanine slot 2, port 1; Server A, mezzanine port 1; Server B, N/A; full-height, mezzanine slot 2, port 1 (bay 5) and mezzanine slot 3, port 2 (bay 6)

• Bays 7 and 8: single-density half-height, mezzanine slot 2, port 2; Server A, N/A; Server B, mezzanine port 1; full-height, mezzanine slot 2, port 2 (bay 7) and mezzanine slot 3, port 1 (bay 8)

For more information on mapping to interconnect ports, see the HP BladeSystem c7000 Enclosure
Setup and Installation Guide.
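As a rough illustration, the single-density half-height column of Table 4-3 can be captured in a small lookup table. The dictionary and helper below are assumptions for illustration, not an HP-supplied data structure or tool:

```python
# Interconnect bay -> blade port for a single-density half-height
# server blade, transcribed from Table 4-3.
HALF_HEIGHT_MAP = {
    1: "NIC 1 (embedded)",
    2: "NIC 2 (embedded)",
    3: "mezzanine slot 1, port 1",
    4: "mezzanine slot 1, port 1",
    5: "mezzanine slot 2, port 1",
    6: "mezzanine slot 2, port 1",
    7: "mezzanine slot 2, port 2",
    8: "mezzanine slot 2, port 2",
}

def bays_for_mezzanine(slot, port):
    """Return the interconnect bays wired to a given mezzanine port."""
    want = "mezzanine slot %d, port %d" % (slot, port)
    return [bay for bay, src in HALF_HEIGHT_MAP.items() if src == want]

# Port 1 of mezzanine slot 2 reaches the double-wide pair, bays 5 and 6.
print(bays_for_mezzanine(2, 1))  # [5, 6]
```

A lookup like this makes it easy to verify, before cabling, that an interconnect module is installed in a bay that a given mezzanine card can actually reach.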



4.5 Server Blade Type vs Gigabit Ethernet Blade Switch with Bandwidth
Ratios
The following table provides the server blade type to gigabit Ethernet blade switch bandwidth
(BW) ratios. For more information on the gigabit Ethernet switch modules listed in the table
below, see the HP Cluster Platform Gigabit Ethernet Hardware Guide.

The following bandwidth (BW) ratios apply per HP BladeSystem c-7000 enclosure:

• Single-density full-height server blades: one 16-port pass-through module (406740-B21), 1:1 BW; one 8-port Cisco switch module (410916-B21), 1:1 BW; one 4-port HP GbE2c switch module (410917-B21), 2:1 BW. Two-module configurations are N/A.

• Single-density half-height server blades: one 16-port pass-through module, 1:1 BW; one 8-port Cisco switch module, 2:1 BW; one 4-port HP GbE2c switch module, 4:1 BW. Two-module configurations are N/A.

• Double-density half-height server blades: two 16-port pass-through modules, 1:1 BW; two 8-port Cisco switch modules, 2:1 BW; two 4-port HP GbE2c switch modules, 4:1 BW. Single-module configurations are N/A.

Note:
The part numbers listed in the table above are subject to change without notice.
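The ratios above follow from dividing the number of server links into the switch fabric by the number of external switch ports that carry that traffic out of the enclosure. A hedged sketch of the arithmetic (the function name is illustrative, and this simple model ignores pass-through modules paired with fewer servers than ports):

```python
def oversubscription(servers, modules, ext_ports_per_module):
    """Server-to-external-port bandwidth ratio, e.g. 2.0 means 2:1.

    Assumes each server contributes one gigabit link into the switch
    fabric and the modules' external ports are the shared uplink
    capacity out of the enclosure.
    """
    return servers / (modules * ext_ports_per_module)

# 16 half-height blades behind one 8-port Cisco switch: 2:1
print(oversubscription(16, 1, 8))   # 2.0
# 32 double-density servers behind two 4-port GbE2c switches: 4:1
print(oversubscription(32, 2, 4))   # 4.0
```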

4.6 HP 4x DDR InfiniBand Switch Module for c-Class BladeSystem


The HP 4x DDR InfiniBand switch module is inserted in a double-wide module bay in the back
of the HP c-Class BladeSystem enclosure. The 4x DDR InFiniBand switch module is based on
the Mellanox 24-port InfiniScale III 4x DDR InfiniBand switch chip. The 4x DDR InfiniBand
switch module provides 24 InfiniBand 4x DDR ports with 20 Gb/s port-to-port connectivity. The
ports are arranged as 16 downlinks to connect up to 16 server blades in the enclosure, and eight
uplinks to connect to external InfiniBand switches to build an InfiniBand fabric. The 4x DDR
InfiniBand switch module is designed to fit into the double-wide switch bays in the c-Class
enclosure. Section 4.6.1 describes the installation procedures for the 4x DDR InfiniBand switch
module.

4.6.1 HP 4x DDR InfiniBand Switch Module Removal and Installation Procedure


Depending on the mezzanine connectors used for the 4x DDR InfiniBand Mezzanine HCA, the
4x DDR IB switch module has to be inserted into switch bays 3 and 4, 5 and 6, or 7 and 8. The
default 4x DDR InfiniBand double-wide switch module (24 ports, with 16 DDR links down and
8 DDR links up) occupies I/O bay row 3 (bays 5 and 6) of the c-Class (c7000) enclosure for both
half-height (HH) and full-height (FH) server blades. A full-height server blade can support an
additional InfiniBand mezzanine option in mezzanine slot 3, along with a second double-wide
4x DDR InfiniBand switch module installed in I/O bay row-4 which corresponds to bays 7 and
8 in the rear of the c-Class enclosure. In this configuration, a full-height server blade can be
viewed as having a 1:1 I/O ratio.
For the bandwidth ratios for different server blade mezzanine HCA to interconnect module bays
in HP Cluster Platform BladeSystem configurations, see Table 4-4.
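The slot-to-bay pairing described above can be sketched as a tiny lookup. This is an illustrative helper only; the slot 1 pairing is inferred from the general c-Class mapping, so treat it as an assumption:

```python
def ib_switch_bays(mezz_slot):
    """Double-wide interconnect bay pair serving a mezzanine slot.

    Follows the pairing described above: mezzanine slot 2 maps to
    bays 5/6 (the default 4x DDR IB switch position) and slot 3 to
    bays 7/8; slot 1 -> bays 3/4 is assumed from the general
    c-Class port mapping.
    """
    pairs = {1: (3, 4), 2: (5, 6), 3: (7, 8)}
    return pairs[mezz_slot]

# The default IB mezzanine position uses the row-3 double-wide bays.
print(ib_switch_bays(2))  # (5, 6)
```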



Table 4-4 HP Cluster Platform Server Blade Configurations to InfiniBand Interconnect Module Types (Bandwidth Ratios)

• Single-density full-height server blades: one 24-port InfiniBand module per HP BladeSystem c-7000 enclosure (for example, 410398-B21), 1:1 bandwidth; one 32-port InfiniBand module (part number TBD), 1:1 bandwidth. Two-module configurations are N/A.

• Single-density half-height server blades: one 24-port InfiniBand module, 2:1 bandwidth; two 24-port InfiniBand modules, 1:1 bandwidth; one 32-port InfiniBand module, 2:1 bandwidth; two 32-port InfiniBand modules, N/A.

• Double-density half-height server blades: two 24-port InfiniBand modules, 2:1 bandwidth; two 32-port InfiniBand modules, 1:1 bandwidth. Single-module configurations are N/A.

Note:
The part numbers listed in the table above are subject to change without notice.

Note:
In order to add another 4x DDR InfiniBand switch module to a c-Class enclosure, it may be
necessary to remove the existing c-Class cable management bracket. If the c-Class blade cable
management bracket needs to be removed, reverse the installation procedure described in the
HP Cluster Platform c-Class Blade Cable Management Bracket Installation Guide. In most instances,
it is not necessary to remove the cables from the bracket; however, it might be necessary to
remove the cables from the interconnect module.

To install the HP 4x DDR InfiniBand Switch Module, follow these steps:


1. Prepare the bay by removing any devices, blanks and the divider, as shown by callouts 1
and 2 in Figure 4-16.

Figure 4-16 Prepare the Bay to Install the 4x DDR IB Switch Module



2. Install the 4x DDR IB switch module into the appropriate double-wide bay and close the
release lever as shown by callouts 1 and 2 in Figure 4-17.

Figure 4-17 Install the 4x DDR IB Switch Module

4.7 HP ProLiant BL2x220c G5 Server Blade


The HP ProLiant BL2x220c G5 server blade can be used as a control node, a utility node, and a
compute node in HP Cluster Platform configurations. The HP BladeSystem c-7000 enclosure can
accommodate up to 16 HP ProLiant BL2x220c G5 half-height double-density server blades.
For the features and specifications of the HP ProLiant BL2x220c G5, see the QuickSpecs on the
HP website:
http://h18004.www1.hp.com/products/quickspecs/13049_na/13049_na.pdf
Figure 4-18 shows the front view of a ProLiant BL2x220c G5 server blade.

Figure 4-18 HP ProLiant BL2x220c G5 Front View


The following list describes the callouts shown in Figure 4-18:



1. Server B Power On/Standby button and power LED
2. Server B UID LED
3. Server B health LED
4. Server B NIC link and activity LED
5. Server B serial label pull tab
6. Server B HP c-Class Blade SUV cable connector
7. Server blade handle
8. Server A Power On/Standby button and power LED
9. Server A UID LED
10. Server A health LED
11. Server A NIC link and activity LED
12. Server A serial label pull tab
13. Server A HP c-Class Blade SUV cable connector
For an internal view of the HP ProLiant BL2x220c G5 and additional information, such as the
system board layout and installing mezzanine HCAs, see the HP ProLiant BL2x220c Generation
5 Server Maintenance and Service Guide:
http://bizsupport.austin.hp.com/bc/docs/support/SupportManual/c01462866/c01462866.pdf

4.8 HP ProLiant BL260c G5 Server Blade


The HP ProLiant BL260c G5 server blade can be used as a control node, a utility node, and a
compute node in HP Cluster Platform configurations.
For the features and specifications of the HP ProLiant BL260c G5, see the QuickSpecs on the HP
website:
http://h18004.www1.hp.com/products/quickspecs/13026_na/13026_na.pdf
Figure 4-19 shows the front view of a ProLiant BL260c G5 server blade.

Figure 4-19 HP ProLiant BL260c Front View


The following list describes the callouts shown in Figure 4-19:


1. Serial label pull tab
2. Local I/O connector
3. UID LED/button
4. Health LED
5. NIC 1 LED
6. NIC 2 LED
7. Hard drive activity LED
8. Release button



9. Power On/Standby button and system power LED
10. Server blade handle
For an internal view of the HP ProLiant BL260c G5 and additional information, such as the system
board layout and installing mezzanine HCA cards, see the HP ProLiant BL260c Generation 5 Server
Maintenance and Service Guide:
http://bizsupport.austin.hp.com/bc/docs/support/SupportManual/c01416733/c01416733.pdf

4.9 HP ProLiant BL460c and BL460c G5 Server Blade Overview


The HP BladeSystem c–Class enclosure supports up to 16 HP ProLiant BL460c server blades.
The HP ProLiant BL460c provides dual-processor, dual-core Intel Xeon processors, DDR2 fully
buffered DIMMs, serial attached SCSI (SAS) or SATA hard drives, support for multifunction NICs, and
multiple I/O cards. The HP ProLiant BL460c includes high-availability features such as
hot-pluggable hard drives, mirrored-memory, online spare memory, memory interleaving,
embedded RAID capability, and enhanced remote Lights-Out management. Table 4-5 lists the
features of the HP ProLiant BL460c server blade.
Table 4-5 HP ProLiant BL460c Features
Item Description

Processor Up to two Intel Xeon 5000 or 5100 series dual-core processors

Supports up to 3.0 GHz with a 1333 MHz or 1066 MHz FSB; 4 MB Level 2 cache memory

Intel 5000P chipset supporting up to a 1333 MHz Front Side Bus

Memory Up to 32 GB of memory, supported by eight slots of PC2-5300 fully buffered DIMMs at 667 MHz

Supports memory interleaving (2 x 1), memory mirroring, and online spare capacity

Storage Controller Integrated Smart Array E200i RAID controller with 64 MB cache; optional battery-backed write cache (BBWC) upgrade to 128 MB cache. Supports RAID 0/1.

Internal Drive Support Up to two small form factor (SFF) SAS or SATA hot-plug hard disk drives

Network Controller Two embedded single port NC373i multifunction gigabit network adapters

One additional 10/100 NIC dedicated to iLO 2 management

Mezzanine Support Two additional I/O expansion slots via mezzanine card

Supports up to two mezzanine cards:


• Dual-port Fibre Channel mezzanine (4-Gb) options for SAN connectivity (choice
of Emulex or QLogic).
• Ethernet NIC mezzanine options for additional network ports
— HP NC373m PCI Express Dual Port Multifunction Gigabit Server Adapter
— HP NC326m PCI Express Dual Port 1Gb Server Adapter for c-Class
BladeSystem
• InfiniBand and 10GbE

Internal USB Support One internal USB 2.0 connector for security key devices and USB drive keys

Form Factor HP ProLiant BL460c server blade plugs vertically into the BladeSystem c-Class
enclosure

Management Integrated Lights-Out 2 (iLO 2) Standard Blade Edition (includes virtual KVM
and graphical remote console)

Operating Systems Supports Windows, Linux, and NetWare operating systems

Enclosures HP ProLiant BL460c server blade plugs vertically into the BladeSystem c-Class
enclosure



For the features and specifications of the HP ProLiant BL460c G5, go to:
http://h18004.www1.hp.com/products/quickspecs/12796_div/12796_div.pdf

4.9.1 HP ProLiant BL460c Front View


Figure 4-20 shows the front view of a ProLiant BL460c server.

Figure 4-20 HP ProLiant BL460c Front View



The following list describes the callouts shown in Figure 4-20:


1. Hard drive bay 1
2. Power On/Standby button
3. Local I/O connector
4. Hard drive bay 2
5. Server blade handle
6. Release button
7. Serial label pull tab

4.9.2 HP ProLiant BL460c Front Panel LEDs


Figure 4-21 shows the front panel LEDs of the ProLiant BL460c.

Figure 4-21 HP ProLiant BL460c Front Panel LEDs


The following table describes the callouts in Figure 4-21:

Item Description Status

1 UID LED Blue = Identified

Blue flashing = Active remote management


Off = No active remote management

2 Health LED Green = Normal

Flashing = Booting

Amber = Degraded condition

Red = Critical condition

3 NIC 1 LED1 Green = Network linked

Green flashing = Network activity

Off = No link or activity

4 NIC 2 LED1 Green = Network linked

Green flashing = Network activity

Off = No link or activity

5 System power LED Green = On

Amber = Standby (auxiliary power available)

Off = Off
1 Actual NIC numbers depend on several factors, including the operating system installed on the server blade.

4.9.3 HP ProLiant BL460c Internal View


Figure 4-22 shows the internals of the HP ProLiant BL460c.

Figure 4-22 HP ProLiant BL460c Internal View


The following list corresponds to the callouts in Figure 4-22:


1. Two hot-plug SAS/SATA drive bays
2. Embedded Smart Array Controller integrated on drive backplane
3. Two mezzanine slots: one x4, one x8
4. Eight fully buffered DIMM slots (DDR2, 667 MHz)

4.9.4 HP ProLiant BL460c System Board


Figure 4-23 shows the system board components of an HP ProLiant BL460c.



Figure 4-23 HP ProLiant BL460c System Board Components


The following list describes the callouts in Figure 4-23:


1. System board thumbscrew
2. Processor socket 2
3. Processor socket 1 (populated)
4. Hard drive backplane connector
5. FBDIMMs (8)
6. Embedded NICs (2)
7. Mezzanine connector 1 [Type I mezzanine only (shown)]
8. Battery
9. Mezzanine connector 2 [Type I (shown) or Type II mezzanine]
10. System maintenance switch (SW2)
11. System board thumbscrew
12. Enclosure connector

4.9.5 Memory Options


The HP ProLiant BL460c server contains eight FBDIMM slots. You can expand server memory
by installing supported DDR-2 FBDIMMs.

Caution:
Use only HP FBDIMMs. FBDIMMs from other sources may adversely affect data integrity.

The HP ProLiant BL460c supports the following Advanced Memory Protection (AMP) options
to optimize server availability:
• Advanced ECC supporting up to 16 GB of active memory using 2-GB FBDIMMs.
• Online Spare Memory providing additional protection against degrading FBDIMMs,
supporting up to 12 GB of active memory and 4 GB of online spare memory utilizing 2-GB
FBDIMMs.
• Mirrored memory providing protection against failed FBDIMMs supporting up to 8 GB of
active memory and 8 GB of mirrored memory utilizing 2-GB FBDIMMs.
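The capacities quoted for each AMP mode follow directly from how the eight slots are divided between active and reserved memory. The following back-of-the-envelope sketch (an illustration only, not an HP configuration tool) reproduces those figures for 2-GB FBDIMMs:

```python
# Sketch: active vs. reserved memory for each AMP mode on a BL460c,
# assuming all eight slots hold 2-GB FBDIMMs (not an HP tool).
SLOTS = 8
DIMM_GB = 2

def amp_capacity(mode: str, slots: int = SLOTS, dimm_gb: int = DIMM_GB):
    """Return (active_gb, reserved_gb) for an AMP mode."""
    total = slots * dimm_gb
    if mode == "advanced_ecc":
        return total, 0                  # all installed memory is active
    if mode == "online_spare":
        return total - 2 * dimm_gb, 2 * dimm_gb   # one DIMM pair held as spare
    if mode == "mirrored":
        return total // 2, total // 2    # half the memory mirrors the other half
    raise ValueError(mode)

for mode in ("advanced_ecc", "online_spare", "mirrored"):
    print(mode, amp_capacity(mode))
# advanced_ecc (16, 0), online_spare (12, 4), mirrored (8, 8)
```

The printed values match the 16 GB, 12 GB + 4 GB, and 8 GB + 8 GB figures given above.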



Maximum memory capacities for all AMP modes will increase with the availability of 4-GB
FBDIMMs, including a maximum of 32 GB in Advanced ECC mode. For the latest memory
configuration information, see the QuickSpecs at:
http://h18004.www1.hp.com/products/servers/proliant-bl/c-class/460c/specifications.html.
The Advanced Memory Protection option is configured in RBSU. By default, the server is set to
Advanced ECC mode. For more information, see the HP ROM-Based Setup Utility section of the HP
ProLiant BL460c Server Blade User Guide. If the configured AMP mode is not supported by the
installed FBDIMM configuration, the system boots in Advanced ECC mode. The following
configuration requirements apply to all AMP modes:
• FBDIMMs must be ECC registered DDR-2 SDRAM FBDIMMs.
• FBDIMMs must be installed in pairs.
• FBDIMM pairs in a memory bank must have identical HP part numbers.
• FBDIMMs must be populated as specified for each AMP memory mode.
The memory subsystem for this server blade is divided into two branches. Each memory branch
is essentially a separate memory controller. The FBDIMMs map to the two branches as indicated
in the following table:

Branch 0 Branch 1

FBDIMM 1A FBDIMM 5B

FBDIMM 3A FBDIMM 7B

FBDIMM 2C FBDIMM 6D

FBDIMM 4C FBDIMM 8D

This multibranch architecture provides enhanced performance in Advanced ECC mode. The
concept of multiple branches is important for the operation of online spare mode and mirrored
memory mode.
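The slot-to-branch mapping in the table above can be captured in a small lookup. The sketch below is illustrative only; the pairing of same-row slots for mirrored mode is an assumption based on the table's row layout, not an HP specification:

```python
# FBDIMM slot -> memory branch, per the mapping table above.
BRANCH = {
    "1A": 0, "3A": 0, "2C": 0, "4C": 0,
    "5B": 1, "7B": 1, "6D": 1, "8D": 1,
}

def mirror_partner(slot: str) -> str:
    """Return the slot on the other branch in the same table row.
    Assumes same-row slots pair up in mirrored mode (an illustrative
    assumption based on the table layout, not an HP specification)."""
    rows = [("1A", "5B"), ("3A", "7B"), ("2C", "6D"), ("4C", "8D")]
    for a, b in rows:
        if slot == a:
            return b
        if slot == b:
            return a
    raise KeyError(slot)

print(BRANCH["3A"], mirror_partner("3A"))  # -> 0 7B
```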
If the server blade contains more than 4 GB of memory, consult the operating system
documentation about accessing the full amount of installed memory.
For memory options information see the HP ProLiant BL460c Server Blade User Guide.

4.9.6 Mezzanine HCA Card


In HP Cluster Platform, the HP ProLiant BL460c server blades are preconfigured with an HP 4x
DDR IB Mezzanine HCA for the HP c-Class BladeSystem, which works with the HP 4x DDR
InfiniBand Switch Module installed in the rear of the c-Class enclosure. For more information
on the mezzanine HCA card, see the HP ProLiant BL460c Server Blade User Guide. For more
information on the 4x DDR InfiniBand Switch Module, see the HP Cluster Platform InfiniBand
Interconnect Installation and User's Guide.

4.9.7 Supported Storage


The HP ProLiant BL460c supports up to two optional hot-plug serial attached SCSI (SAS) drives
for a maximum of 144 GB (2 x 72 GB SAS) internal storage or up to two optional hot-plug
serial ATA (SATA) drives for a maximum of 120 GB (2 x 60 GB SATA) internal storage.
The physical aspect of inserting and removing a disk drive is discussed in the document that
comes with the drive and in the HP ProLiant BL460c Server Blade User Guide.
Two optional Fibre Channel HBAs are supported by the HP ProLiant BL460c. Both mezzanine
circuit boards connect directly to the server blade system board. These Fibre Channel HBAs are
available via option kits and must be ordered separately.



For more detailed SAN configuration information for the server blade, see the following
documents:
• The model-specific QuickSpecs document located on the HP ProLiant c-Class server blade
products web page:
http://h18000.www1.hp.com/products/quickspecs/Division/12534.html
• The HP StorageWorks SAN documentation:
http://h18006.www1.hp.com/storage/index.html
Search for the SAN product required, and navigate to technical documentation.
• The HP BladeSystem c-Class storage web page:
http://www.hp.com/go/bladesystem/storage

4.9.8 Removing the HP ProLiant BL460c from the c–Class Enclosure


To remove the HP ProLiant BL460c server blade from the c–Class enclosure, follow these steps:
1. Identify the proper server blade and back up the data.
2. Depending on the Onboard Administrator configuration, use one of the following methods
to power down the server blade:
• Use a virtual power button selection through iLO 2. This method initiates a controlled
remote shutdown of applications and the operating system before the server blade
enters Standby mode.
• Press and release the Power On/Standby button. This method initiates a controlled
shutdown of applications and the operating system before the server blade enters
Standby mode.
• Press and hold the Power On/Standby button for more than 4 seconds to force the server
blade to shut down. This method forces the server blade to enter standby mode without
properly exiting applications and the operating system. It provides an emergency
shutdown method in the event of a hung application.

Important:
When the server blade is in Standby mode, auxiliary power is still being provided. To
remove all power from the server blade, remove the server blade from the enclosure.
After initiating a virtual power down command, be sure that the server blade goes into
Standby mode by observing that the system power LED is amber.

3. Remove the server blade by pressing the release button (see callout 1 in Figure 4-24), pulling
down the release lever (see callout 2 in Figure 4-24), and sliding the server blade out from
the enclosure (see callout 3 in Figure 4-24).



Figure 4-24 Removing a ProLiant BL460c from the c–Class Enclosure

4. Place the server blade on a flat, level work surface.

Warning!
To reduce the risk of personal injury from hot surfaces, allow the drives and the internal system
components to cool before touching them.

Caution:
To prevent damage to electrical components, properly ground the server blade before beginning
any installation procedure. Improper grounding can cause electrostatic discharge.

To install and power up a server blade, reverse the removal procedure. The Onboard
Administrator initiates an automatic power-up sequence when the server blade is installed. If
the default setting is changed, use one of the following methods to power up the server blade:
• Use a virtual power button selection through iLO 2.
• Press and release the Power On/Standby button.
When the server blade goes from the standby mode to the full power mode, the system power
LED changes from amber to green.
For more information about the Onboard Administrator, see the HP BladeSystem c7000 Enclosure
Setup and Installation Guide at:
http://h71028.www7.hp.com/enterprise/cache/316682-0-0-0-121.html.
The iLO 2 subsystem is a standard component of selected ProLiant servers that provides server
health and remote server manageability. The iLO 2 subsystem includes an intelligent
microprocessor, secure memory, and a dedicated network interface. This design makes iLO 2
independent of the host server and its operating system. The iLO 2 subsystem provides remote
access to any authorized network client, sends alerts, and provides other server management
functions. Using iLO 2, you can:
• Remotely power up, power down, or reboot the host server.
• Send alerts from iLO 2 regardless of the state of the host server.
• Access advanced troubleshooting features through the iLO 2 interface.
• Diagnose iLO 2 using HP SIM through a web browser and SNMP alerting.
For more information about iLO 2, see the iLO 2 documentation on the HP web page:
http://h18004.www1.hp.com/products/servers/management/iloadv2/index.html.
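As an illustration of scripting these iLO 2 functions, the sketch below builds a RIBCL request of the kind iLO's scripting interface accepts over HTTPS. The user name and password are placeholders, and actually sending the request is left out because it requires a live iLO:

```python
# Sketch: build a RIBCL XML request to set host power through the
# iLO 2 scripting interface (in practice, POSTed to https://<ilo>/ribcl).
# The credentials used below are placeholders, not real values.

RIBCL_TEMPLATE = """<RIBCL VERSION="2.0">
  <LOGIN USER_LOGIN="{user}" PASSWORD="{password}">
    <SERVER_INFO MODE="write">
      <SET_HOST_POWER HOST_POWER="{state}"/>
    </SERVER_INFO>
  </LOGIN>
</RIBCL>"""

def build_power_request(user: str, password: str, power_on: bool) -> str:
    """Return the RIBCL payload that powers the host server on or off."""
    state = "Yes" if power_on else "No"
    return RIBCL_TEMPLATE.format(user=user, password=password, state=state)

payload = build_power_request("admin", "example-password", power_on=False)
print(payload)
```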



4.10 HP ProLiant BL480c Server Blade Overview
The HP BladeSystem c–Class enclosure supports up to eight HP ProLiant BL480c server blades.
The HP ProLiant BL480c provides dual-processor, dual-core Intel Xeon processors, DDR2 fully
buffered DIMMs, serial attached SCSI (SAS) or SATA hard drives, support for multifunction NICs, and
multiple I/O cards. The HP BL480c includes high-availability features such as hot-pluggable
hard drives, mirrored-memory, online spare memory, memory interleaving, embedded RAID
capability, and enhanced remote Lights-Out management. Table 4-6 lists the features of the HP
ProLiant BL480c server blade.
Table 4-6 HP ProLiant BL480c Features
Item Description

Processor Up to two dual-core Intel Xeon 5000 or 5100 Sequence processors

Intel 5000P chipset supporting up to a 1333 MHz front-side bus (FSB)

Memory Up to 48 GB of memory, supported by 12 PC2-5300 fully buffered DIMMs at 667 MHz

Supports memory interleaving (2 x 1), memory mirroring, and online spare capacity

Storage Controller Integrated Smart Array P400i RAID controller with 256 MB cache (with optional
battery-backed write cache) supports RAID 0/1/5

Internal Drive Support Up to four small form factor (SFF) SAS or SATA hot-plug hard drives

Network Controller Four embedded NIC ports, plus one additional management NIC port:

– One embedded NC326i dual-port Gigabit server adapter

– Two embedded NC373i multifunction Gigabit server adapters

– One additional 10/100 NIC dedicated to iLO 2 management

Mezzanine Support Three additional I/O expansion slots via mezzanine card

Supports up to three mezzanine cards:


• Dual-port Fibre Channel mezzanine (4 Gb) options for SAN connectivity
(choice of Emulex or QLogic).
• Ethernet NIC mezzanine options for additional network ports
— HP NC326m PCI Express dual-port 1 Gb server adapter for c-Class
BladeSystem
— HP NC373m PCI Express dual-port multifunction Gigabit server adapter
• InfiniBand (IB) and 10 GbE

Internal USB Support One internal USB 2.0 connector for security key devices and USB drive keys

Management Integrated Lights-Out 2 (iLO 2) Standard Blade Edition (includes virtual KVM
and graphical remote console)

Operating Systems Supports Windows, Linux, and NetWare operating systems

Enclosures HP ProLiant BL480c server blade plugs vertically into the BladeSystem c-Class
enclosure

4.10.1 HP ProLiant BL480c Front View


Figure 4-25 shows the front view of a ProLiant BL480c server.



Figure 4-25 HP ProLiant BL480c Front View


The following list describes the callouts in Figure 4-25:


1. Hard drive bay 1
2. Hard drive bay 2
3. Hard drive bay 3
4. Hard drive bay 4
5. Server blade handle
6. Server blade handle release button
7. Serial label pull tab
8. Local I/O cable connector (the local I/O cable connector is used with the local I/O cable to
perform some server blade configuration and diagnostic procedures)
9. Power On/Standby button

4.10.2 HP ProLiant BL480c Front Panel LEDs


Figure 4-26 shows the front panel LEDs of the HP ProLiant BL480c.

Figure 4-26 HP ProLiant BL480c Front Panel LEDs


Table 4-7 HP ProLiant BL480c Front Panel LEDs


Item Description Status

1 UID LED Blue = Identified

Blue flashing = Active remote management

Off = No active remote management

2 Health LED Green = Normal

Flashing = Booting


Amber = Degraded condition

Red = Critical condition

3 NIC 1 LED1 Green = Network linked

Green flashing = Network activity

Off = No link or activity

4 NIC 2 LED1 Green = Network linked

Green flashing = Network activity

Off = No link or activity

5 NIC 3 LED1 Green = Network linked

Green flashing = Network activity

Off = No link or activity


6 NIC 4 LED1 Green = Network linked

Green flashing = Network activity

Off = No link or activity

7 System power LED Green = On

Amber = Standby (auxiliary power available)

Off = Off
1 Actual NIC numbers depend on several factors, including the operating system installed on the server blade.

4.10.3 HP ProLiant BL480c Internal View


Figure 4-27 shows the internals of the HP ProLiant BL480c.



Figure 4-27 HP ProLiant BL480c Internal View

The following list describes the callouts in Figure 4-27:


1. Four hot-pluggable SAS/SATA drive bays
2. Embedded smart array controller integrated on drive backplane
3. Three mezzanine slots: one x4, two x8
4. Twelve fully buffered DIMM slots (DDR2, 667 MHz)

4.10.4 HP ProLiant BL480c System Board


Figure 4-28 shows the system board components.



Figure 4-28 HP ProLiant BL480c System Board Components


The following list describes the callouts in Figure 4-28:


1. Processor socket 1 (populated)
2. System board thumbscrew
3. Processor socket 2
4. Bezel LED connector
5. Hard drive backplane connector
6. System battery
7. System maintenance switch (SW3)
8. Embedded NICs
9. Mezzanine connector 2 (Type I or Type II mezzanine)
10. Enclosure connector 1
11. Mezzanine connector 1 (Type I mezzanine only)
12. System board thumbscrew
13. Mezzanine connector 3 (Type I or Type II mezzanine)
14. Enclosure connector 2
15. System board thumbscrew
16. FBDIMM slots (1-12)

4.10.5 Memory Options


The HP ProLiant BL480c server blade contains 12 memory expansion slots. You can expand
server memory by installing supported DDR-2 FBDIMMs.



Caution:
Use only HP FBDIMMs. FBDIMMs from other sources may adversely affect data integrity.

The ProLiant BL480c supports the following Advanced Memory Protection (AMP) options to
optimize server availability.
• Advanced ECC supporting up to 48 GB of active memory using 4 GB FBDIMMs.
• Online Spare Memory supporting up to 40 GB of active memory and 8 GB of online spare
memory using 4 GB FBDIMMs. Online Spare Memory provides additional protection against
degrading memory.
• Mirrored memory supporting up to 24 GB of active memory and 24 GB of mirrored memory
using 4 GB FBDIMMs. Mirrored memory provides protection against failed memory.
For the latest memory configuration information, see the QuickSpecs on the HP web page:
http://h18004.www1.hp.com/products/servers/proliant-bl/c-class/480c/index.html.
For memory options information, see the HP ProLiant BL480c Server Blade User Guide.
For more information on HP ROM-Based Setup Utility (RBSU), see the HP ROM-Based Setup
Utility User Guide on the HP web page:
http://www.hp.com/servers/smartstart.

4.10.6 Mezzanine HCA Card


In HP Cluster Platform, the HP ProLiant BL480c server blades are preconfigured with an HP
4x DDR InfiniBand mezzanine HCA for the HP c-Class BladeSystem, which works with the HP 4x
DDR InfiniBand Switch Module installed in the rear of the c–Class enclosure. For more information
on the mezzanine HCA card, see the HP ProLiant BL480c Server Blade User Guide. For more
information on the HP 4x DDR InfiniBand switch module, see the HP Cluster Platform InfiniBand
Interconnect Installation and User's Guide.

4.10.7 Supported Storage


The HP ProLiant BL480c supports up to four optional hot-pluggable serial attached SCSI (SAS)
drives for a maximum of 288 GB (4 x 72 GB SAS) internal storage or up to four optional
hot-pluggable serial ATA (SATA) drives for a maximum of 240 GB (4 x 60 GB SATA) internal
storage.
The physical aspect of inserting and removing a disk drive is discussed in the document that
ships with the drive and in the HP ProLiant BL480c Server Blade User Guide.
Two optional Fibre Channel HBAs are supported by the HP ProLiant BL480c. Both mezzanine
circuit boards connect directly to the server blade system board. These Fibre Channel HBAs are
available via option kits and must be ordered separately.
For more detailed SAN configuration information for the server blade, see the following
documents:
• The model-specific QuickSpecs document located on the HP ProLiant c-Class server blade
products web page:
http://h18000.www1.hp.com/products/quickspecs/Division/12534.html
• The HP StorageWorks SAN documentation:
http://h18006.www1.hp.com/storage/index.html
Search for the SAN product required, and navigate to technical documentation.
• The HP BladeSystem c-Class storage web page:
http://www.hp.com/go/bladesystem/storage



4.10.8 Removing the HP ProLiant BL480c from the c-Class Enclosure
To remove the HP ProLiant BL480c server blade from the c–Class enclosure, follow these steps:
1. Identify the proper server blade and back up the data.
2. Depending on the Onboard Administrator configuration, use one of the following methods
to power down the server blade:
• Use a virtual power button selection through iLO 2. This method initiates a controlled
remote shutdown of applications and the operating system before the server blade
enters standby mode.
• Press and release the Power On/Standby button. This method initiates a controlled
shutdown of applications and the operating system before the server blade enters
standby mode.
• Press and hold the Power On/Standby button for more than 4 seconds to force the server
blade to shut down. This method forces the server blade to enter standby mode without
properly exiting applications and the operating system. It provides an emergency
shutdown method in the event of a hung application.

Important:
When the server blade is in standby mode, auxiliary power is still being provided. To
remove all power from the server blade, remove the server blade from the enclosure.
After initiating a virtual power down command, be sure that the server blade goes into
standby mode by observing that the system power LED is amber.

3. Remove the server blade, as shown in Figure 4-29.

Figure 4-29 Removing a ProLiant BL480c from the c–Class Enclosure



4. Place the server blade on a flat, level work surface.

Warning!
To reduce the risk of personal injury from hot surfaces, allow the drives and the internal system
components to cool before touching them.

Caution:
To prevent damage to electrical components, properly ground the server blade before beginning
any installation procedure. Improper grounding can cause electrostatic discharge.

To install and power up a server blade, reverse the removal procedure. The Onboard
Administrator initiates an automatic power-up sequence when the server blade is installed. If
the default setting is changed, use one of the following methods to power up the server blade:
• Use a virtual power button selection through iLO 2.
• Press and release the Power On/Standby button.
When the server blade goes from the standby mode to the full power mode, the system power
LED changes from amber to green.
For more information about the Onboard Administrator, see the HP BladeSystem c7000 Enclosure
Setup and Installation Guide on the HP web page:
http://h71028.www7.hp.com/enterprise/cache/316682-0-0-0-121.html.
The iLO 2 subsystem is a standard component of selected ProLiant servers that provides server
health and remote server manageability. The iLO 2 subsystem includes an intelligent
microprocessor, secure memory, and a dedicated network interface. This design makes iLO 2
independent of the host server and its operating system. The iLO 2 subsystem provides remote
access to any authorized network client, sends alerts, and provides other server management
functions. Using iLO 2, you can:
• Remotely power up, power down, or reboot the host server.
• Send alerts from iLO 2, regardless of the state of the host server.
• Access advanced troubleshooting features through the iLO 2 interface.
• Diagnose iLO 2 using HP SIM through a web browser and SNMP alerting.
For more information about iLO 2, see the iLO 2 documentation on the HP web page:
http://h18004.www1.hp.com/products/servers/management/iloadv2/index.html

4.11 HP ProLiant BL465c Server Blade Overview


The HP BladeSystem c–Class enclosure supports up to 16 HP ProLiant BL465c server blades.
The HP ProLiant BL465c provides AMD Opteron 2000 series 64-bit, dual-core processors, and
supports AMD virtualization. Memory options include PC2-5300 registered DIMMs at 667 MHz.
The HP ProLiant BL465c also accommodates serial attached SCSI (SAS) or SATA hard drives, and
supports multifunction NICs and multiple I/O cards. The HP ProLiant BL465c includes high-availability
features such as hot-pluggable hard drives, mirrored-memory, online spare memory, memory
interleaving, embedded RAID capability, and enhanced remote Lights-Out management. Table 4-8
lists the features of the HP ProLiant BL465c server.
Table 4-8 HP ProLiant BL465c Server Blade Features
Item Description

Processor Up to two AMD Opteron 2000 series processors

Supports up to 2.8 GHz with 1 GHz HyperTransport; 1 MB Level 2 cache memory per core

ServerWorks HT-1000 and HT-2100 chipsets



Table 4-8 HP ProLiant BL465c Server Blade Features (continued)
Item Description

Memory Up to 32 GB of memory, supported by eight slots of PC2-5300 registered DIMMs at 667 MHz

Supports memory interleaving (2:1)

Storage Controller Integrated Smart Array E200i RAID controller with 64 MB cache; optional battery-backed write cache (BBWC) upgrade to 128 MB cache. Supports RAID 0/1.

Internal Drive Support Up to two small form factor (SFF) SAS or SATA hot-pluggable hard disk drives

Network Controller Two embedded NC370i multifunction gigabit network adapters

One additional 10/100 NIC dedicated to iLO 2 management

Mezzanine Support Two additional I/O expansion slots via mezzanine card.

Supports up to two mezzanine cards:


• Dual-port Fibre Channel mezzanine (4-Gb) options for SAN connectivity (choice
of Emulex or QLogic).
• Ethernet NIC mezzanine options for additional network ports
— HP NC325m PCI Express quad-port gigabit server adapter for c-Class
BladeSystem
— HP NC326m PCI Express dual-port 1 Gb server adapter for c-Class
BladeSystem
— HP NC373m PCI Express dual-port multifunction gigabit server adapter for
c-Class BladeSystem
— 4X DDR InfiniBand (IB) mezzanine (20 Gb/s) options for low latency server
interconnectivity (based on Mellanox technology)
• 10GbE planned for future support

Internal USB Support One internal USB 2.0 connector for security key devices and USB drive keys

Form Factor HP ProLiant BL465c server blade plugs vertically into the BladeSystem c-Class
enclosure

Management Integrated Lights-Out 2 (iLO 2) standard blade edition (includes virtual KVM and
graphical remote console)

Operating Systems Supports Windows and Linux

Enclosures HP ProLiant BL465c server blade plugs vertically into the BladeSystem c-Class
enclosure

4.11.1 HP ProLiant BL465c Front View


Figure 4-30 shows the front view of a ProLiant BL465c server.



Figure 4-30 HP ProLiant BL465c Front View


The following list describes the callouts in Figure 4-30:


1. Hard drive bay 1
2. Power On/Standby button
3. Local I/O connector
4. Hard drive bay 2
5. Serial label pull tab
6. Release button
7. Server blade handle

4.11.2 HP ProLiant BL465c Front Panel LEDs


Figure 4-31 shows the front panel LEDs of the ProLiant BL465c.

Figure 4-31 HP ProLiant BL465c Front Panel LEDs




The following table describes the callouts in Figure 4-31:

Item Description Status

1 UID LED Blue = Identified

Blue flashing = Active remote management

Off = No active remote management

2 Health LED Green = Normal

Flashing = Booting

Amber = Degraded condition

Red = Critical condition

3 NIC 1 LED1 Green = Network linked

Green flashing = Network activity

Off = No link or activity

4 NIC 2 LED1 Green = Network linked

Green flashing = Network activity

Off = No link or activity

5 System power LED Green = On

Amber = Standby (auxiliary power available)

Off = Off
1 Actual NIC numbers depend on several factors, including the operating system installed on the server blade.

4.11.3 HP ProLiant BL465c Internal View


Figure 4-32 shows the internals of the HP ProLiant BL465c.

Figure 4-32 HP ProLiant BL465c Internal View


The following list describes the callouts in Figure 4-32:



1. Two hot-pluggable SAS/SATA drive bays
2. Embedded smart array controller integrated on drive backplane
3. Two mezzanine slots: one x4, one x8
4. Eight fully buffered DIMM slots (DDR2, 667 MHz)

4.11.4 HP ProLiant BL465c System Board


Figure 4-33 shows the system board components of an HP ProLiant BL465c.

Figure 4-33 HP ProLiant BL465c System Board Components


The following list corresponds to the callouts in Figure 4-33:


1. Bezel LED connector
2. Internal USB connector (under hard drive cage)
3. Processor socket 2
4. DIMM slots (processor 1 memory banks A and B)
5. Processor socket 1 (populated)
6. Mezzanine connector 2 (Type I or Type II mezzanine)
7. System maintenance switch (SW1)
8. Enclosure connector
9. Battery
10. System board thumbscrew
11. Mezzanine connector 1 (Type I mezzanine only)
12. Embedded NICs (two)
13. DIMM slots (Processor 2 memory banks C and D)
14. HP Smart Array E200i cache module (under hard drive cage)
15. System board thumbscrew



4.11.5 Memory Options
You can expand server memory by installing PC2-5300 registered DDR2 SDRAM DIMMs. The
server supports up to 32 GB of memory using eight 4-GB DIMMs (four DIMMs per processor).
Observe the following guidelines when installing additional memory:
• Install only ECC PC2-5300 registered DDR2 SDRAM DIMMs that meet the following
specifications:
— Supply voltage: 1.8 V
— Bus width: 72 bits
• Observe the following special conditions when installing memory with a second processor:
— Processor 2 can be installed without memory.
— Any memory installed into banks for processor 2 can be used only if processor 2 is
installed.
• DIMMs must always be installed in pairs.
• HP recommends installing DIMMs with the greatest capacity in the banks farthest from
each populated processor first.
• DIMMs installed in the same memory bank must have the same part number.
• DIMMs installed in different banks can be of different sizes.
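A small validator makes the pairing rules above concrete. The sketch below checks one bank at a time; the slot names and part numbers are invented for illustration:

```python
# Sketch: check a proposed BL465c DIMM population against the
# guidelines above. Slot labels and part numbers are made up.

def check_bank(bank: dict) -> list:
    """Return rule violations for one memory bank.
    bank maps slot name -> HP part number, or None if the slot is empty."""
    problems = []
    installed = [p for p in bank.values() if p is not None]
    if len(installed) % 2:                  # DIMMs must be installed in pairs
        problems.append("bank not populated as a pair")
    if len(set(installed)) > 1:             # same part number within a bank
        problems.append("mixed part numbers within a bank")
    return problems

ok_bank = {"slot1": "PN-4GB-X", "slot2": "PN-4GB-X"}
bad_bank = {"slot1": "PN-4GB-X", "slot2": "PN-2GB-Y"}
print(check_bank(ok_bank))    # -> []
print(check_bank(bad_bank))   # -> ['mixed part numbers within a bank']
```

Banks that differ from one another in size are fine, which is why the check is per bank rather than across the whole blade.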
For the latest memory configuration information, see the QuickSpecs on the HP web page:
http://h18004.www1.hp.com/products/servers/proliant-bl/c-class/465c/index.html.
For memory options information, see the HP ProLiant BL465c Server Blade User Guide.

4.11.6 Mezzanine HCA Card


In HP Cluster Platform systems with InfiniBand, HP ProLiant BL465c server blades are
preconfigured with an InfiniBand mezzanine HCA for the HP c-Class BladeSystem that maps
to the HP 4x DDR InfiniBand Switch Module installed in the rear of the c-Class enclosure.

4.11.7 Supported Storage


The HP ProLiant BL465c supports up to two optional hot-pluggable Serial Attached SCSI (SAS)
drives for a maximum of 144 GB (2 x 72 GB SAS) of internal storage, or up to two optional
hot-pluggable Serial ATA (SATA) drives for a maximum of 120 GB (2 x 60 GB SATA) of internal
storage.
The physical aspects of inserting and removing a disk drive are discussed in the document that
comes with the drive and in the HP ProLiant BL465c Server Blade User Guide.
Two optional Fibre Channel HBAs are supported by the HP ProLiant BL465c. Both mezzanine
circuit boards connect directly to the server blade system board. These Fibre Channel HBAs are
available via option kits and must be ordered separately.
For more detailed SAN configuration information for the server blade, see the following
documents:
• The model-specific QuickSpecs document located on the HP ProLiant c-Class server blade
products web page:
http://h18000.www1.hp.com/products/quickspecs/Division/12534.html
• The HP StorageWorks SAN documentation:
http://h18006.www1.hp.com/storage/index.html
Search for the SAN product required, and navigate to technical documentation.
• The HP BladeSystem c-Class storage web page:
http://www.hp.com/go/bladesystem/storage



4.11.8 Removing the HP ProLiant BL465c from the c-Class Enclosure
The procedure to remove an HP ProLiant BL465c server blade from the c-Class enclosure is the
same as described previously in Section 4.9.8.

4.12 HP ProLiant BL465c G5 Server Blade


The HP ProLiant BL465c G5 server blade can be used as a control node, a utility node, or a
compute node in HP Cluster Platform configurations.
For the features and specifications of the HP ProLiant BL465c G5, see the QuickSpecs on the HP
website:
http://h18004.www1.hp.com/products/quickspecs/13026_na/13026_na.pdf
Figure 4-34 shows the front view of a ProLiant BL465c G5 server blade.

Figure 4-34 HP ProLiant BL465c G5 Front View



The following list describes the callouts shown in Figure 4-34:


1. Serial label pull tab
2. Local I/O connector
3. UID LED/button
4. Health LED
5. NIC 1 LED
6. NIC 2 LED
7. Hard drive activity LED
8. Release button
9. Power On/Standby button and system power LED
10. Server blade handle
11. SUV connector (The SUV connector and the HP c-Class Blade SUV cable are for some server
blade configuration and diagnostic procedures.)
For an internal view of the HP ProLiant BL465c G5 and additional information, such as the system
board layout and how to install mezzanine HCA cards, see the HP ProLiant BL465c Generation 5
Server Maintenance and Service Guide:
http://bizsupport.austin.hp.com/bc/docs/support/SupportManual/c00778741/c00778741.pdf

4.13 HP ProLiant BL685c Server Blade Overview


The HP BladeSystem c-Class enclosure supports up to eight full-height HP ProLiant BL685c
server blades. The HP ProLiant BL685c supports up to four AMD Opteron™ 8000 Series
processors, up to 64 GB of memory in 16 DDR2-667 DIMM slots, and SAS or SATA hard drives.
The BL685c includes high-availability features such as hot-pluggable hard drives and remote
management. Table 4-9 lists the features of the HP ProLiant BL685c server blade.



Table 4-9 HP ProLiant BL685c Server Blade Features
Item Description

Processor Up to four AMD Opteron 8000 Series processors

Memory Up to 64 GB of memory, supporting 16 DDR2-667 DIMMs

Storage Controller Embedded Smart Array E200i controller integrated on system board

Standard RAID 0/1 controller with optional BBWC

Internal Drive Support Supports up to two small form factor (SFF) SAS or SATA hot-pluggable hard
drives

Network Controller Four integrated NICs consisting of:

– Embedded NC326i dual-port gigabit server adapter

– Two embedded NC373i multifunction gigabit server adapters

– Plus one additional 10/100 NIC dedicated to iLO 2 management

Mezzanine Support Three additional I/O expansion slots via mezzanine card.

Supports up to three mezzanine cards:


• Dual-port Fibre Channel mezzanine (4-Gb) options for SAN connectivity (choice
of Emulex or QLogic)
• Ethernet NIC mezzanine options for additional network ports:
— HP NC325m PCI Express quad-port 1 Gb server adapter for c-Class
BladeSystem
— HP NC326m PCI Express dual-port 1 Gb server adapter for c-Class
BladeSystem
— HP NC373m PCI Express dual-port multifunction gigabit server adapter for
c-Class BladeSystem
• 4X DDR InfiniBand (IB) mezzanine (20 Gb/s) options for low-latency server
interconnectivity (based on Mellanox technology)
• 10GbE planned for future support

Internal USB Support One internal USB 2.0 connector for security key devices and USB drive keys

Management Integrated Lights-Out 2 (iLO 2) Standard Blade Edition

4.13.1 HP ProLiant BL685c Front View


Figure 4-35 shows the front view of a ProLiant BL685c server.

Figure 4-35 HP ProLiant BL685c Front View


The following list describes the callouts in Figure 4-35:



1. Hard drive bay 2
2. Server blade handle
3. Server blade handle release button
4. Serial label pull tab
5. Hard drive bay 1
6. Local I/O cable connector (the local I/O cable connector is used with the local I/O cable to
perform some server blade configuration and diagnostic procedures)
7. Power On/Standby button

4.13.2 HP ProLiant BL685c Front Panel LEDs


Figure 4-36 shows the front panel LEDs of the HP ProLiant BL685c.

Figure 4-36 HP ProLiant BL685c Front Panel LEDs


The following table describes the callouts in Figure 4-36:

Item Description Status

1 System power LED Green = On

Amber = Standby (auxiliary power available)

Off = Off

2 NIC 4 LED1 Green = Network linked

Green flashing = Network activity

Off = No link or activity

3 NIC 3 LED1 Green = Network linked

Green flashing = Network activity

Off = No link or activity

4 NIC 2 LED1 Green = Network linked

Green flashing = Network activity

Off = No link or activity

5 NIC 1 LED1 Green = Network linked

Green flashing = Network activity

Off = No link or activity


6 Health LED Green = Normal

Flashing = Booting

Amber = Degraded condition

Red = Critical condition

7 UID LED Blue = Identified

Blue flashing = Active remote management

Off = No active remote management


1 Actual NIC numbers depend on several factors, including the operating system installed on the server blade.

4.13.3 HP ProLiant BL685c Internal View


Figure 4-37 shows the internals of the HP ProLiant BL685c.

Figure 4-37 HP ProLiant BL685c Internal View


The following list describes the callouts in Figure 4-37:


1. Two SAS/SATA drive bays
2. Embedded smart array controller integrated on drive backplane
3. Up to 16 DIMM slots available
4. Three mezzanine slots (all x8)

4.13.4 HP ProLiant BL685c System Board


Figure 4-38 shows the system board components.



Figure 4-38 HP ProLiant BL685c System Board Components


The following list describes the callouts in Figure 4-38:


1. Bezel LED connector
2. Processor socket 4
3. DIMM slots (Processor 4 memory banks G and H)
4. Processor socket 2 (populated)
5. DIMM slots (Processor 2 memory banks C and D)
6. Internal USB connector
7. Embedded dual-port NIC
8. Mezzanine connector 2 (Type I or Type II mezzanine)
9. Mezzanine connector 1 (Type I mezzanine only)
10. Enclosure connector 1
11. System board thumbscrew
12. Mezzanine connector 3 (Type I or Type II mezzanine)
13. Enclosure connector 2
14. System board thumbscrew
15. Embedded NIC
16. Embedded NIC
17. Smart Array E200i cache module (under mezzanine card 3)
18. SAS cable
19. Processor socket 1 (populated)
20. DIMM slots (Processor 1 memory banks A and B)
21. System battery
22. DIMM slots (Processor 3 memory banks E and F)
23. Processor socket 3



24. System board thumbscrew
25. System maintenance switch (SW2)
26. Hard drive backplane connector

4.13.5 Memory Options


The HP ProLiant BL685c server blade contains 16 memory expansion slots. Observe the following
guidelines when installing additional memory:
• Install only ECC PC2-5300 registered DDR2 SDRAM DIMMs that meet the following
specifications:
— Supply voltage: 1.8 V
— Bus width: 72 bits
• Install DIMMs in pairs (banks) beginning with banks farthest from each populated processor.
• Install DIMMs with the greatest capacity in the banks farthest from the processor.
• Install identical DIMMs with the same part number in a bank.
• DIMMs must be installed for processor 1.
• DIMMs installed in different banks can be of different sizes.
• For best performance, populate one bank of memory for each installed processor before
populating more than one bank for a specific processor.
• DIMMs installed in banks for processors 3 and 4 can be used only if processors 3 and 4 are
installed.
• Processors 3 and 4 can be installed without memory.
For the latest memory configuration information and Advanced Memory Protection (AMP)
options, see the HP ProLiant BL685c Server Blade User Guide.
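The population-order guidance above (one bank per installed processor before a second bank for any processor, farthest banks first) can be sketched as a small generator. The bank-to-processor mapping follows the system board description; the farthest-first ordering within each processor is an assumption for illustration, not taken from the service documentation:

```python
# Sketch of the recommended BL685c bank-population order described above:
# fill one bank per installed processor (round-robin) before adding a
# second bank to any processor.

from itertools import zip_longest

# Banks per processor (from the system board layout), ordered with the
# assumed farthest-from-processor bank first.
BANKS = {1: ["B", "A"], 2: ["D", "C"], 3: ["F", "E"], 4: ["H", "G"]}

def population_order(installed_processors):
    """Yield memory banks in the recommended fill order."""
    per_cpu = [BANKS[p] for p in sorted(installed_processors)]
    for round_of_banks in zip_longest(*per_cpu):
        for bank in round_of_banks:
            if bank is not None:
                yield bank

print(list(population_order({1, 2})))  # ['B', 'D', 'A', 'C']
```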

4.13.6 Mezzanine HCA Card


In HP Cluster Platform, the HP ProLiant BL685c server blades are preconfigured with an HP 4x
DDR InfiniBand mezzanine HCA for HP c-Class BladeSystems, which works with the HP 4x
DDR InfiniBand Switch Module installed in the rear of the c-Class enclosure. For more information
on the mezzanine HCA card, see the HP ProLiant BL685c Server Blade User Guide. For more
information on the HP 4x DDR InfiniBand Switch Module, see the HP Cluster Platform InfiniBand
Interconnect Installation and User's Guide.

4.13.7 Supported Storage


The HP ProLiant BL685c supports up to two optional hot-pluggable Serial Attached SCSI (SAS)
drives for a maximum of 144 GB (2 x 72 GB SAS) of internal storage, or up to two optional
hot-pluggable Serial ATA (SATA) drives for a maximum of 120 GB (2 x 60 GB SATA) of internal
storage.
The physical aspect of inserting and removing a disk drive is discussed in the document that
ships with the drive and in the HP ProLiant BL685c Server Blade User Guide.
Two optional Fibre Channel HBAs are supported by the HP ProLiant BL685c. Both mezzanine
circuit boards connect directly to the server blade system board. These Fibre Channel HBAs are
available via option kits and must be ordered separately.
For more detailed SAN configuration information for the server blade, see the following
documents:
• The model-specific QuickSpecs document located on the HP ProLiant c-Class server blade
products web page:
http://h18004.www1.hp.com/products/servers/proliant-bl/c-class/685c/index.html
• The HP StorageWorks SAN documentation:
http://h18006.www1.hp.com/storage/index.html



Search for the SAN product required, and navigate to technical documentation.
• The HP BladeSystem c-Class storage web page:
http://www.hp.com/go/bladesystem/storage

4.13.8 Removing the HP ProLiant BL685c from the c-Class Enclosure


The procedure to remove the HP ProLiant BL685c server blade from the c-Class enclosure is the
same as described previously in Section 4.10.8.

4.14 HP ProLiant BL685c G5 Server Blade Overview


The HP BladeSystem c–Class enclosure supports up to eight full-height HP ProLiant BL685c G5
server blades.
For the features and specifications of the HP ProLiant BL685c G5, see the QuickSpecs on the HP
website:
http://h18004.www1.hp.com/products/quickspecs/13022_na/13022_na.pdf
Figure 4-39 shows the front view of a ProLiant BL685c G5 server blade.

Figure 4-39 HP ProLiant BL685c G5 Front View



The following list describes the callouts shown in Figure 4-39:


1. UID LED/serial label pull tab
2. Health LED
3. NIC 1 LED
4. NIC 2 LED
5. NIC 3 LED
6. NIC 4 LED
7. Power On/Standby button and system power LED
8. Local I/O connector
9. Hard drive bay 2
10. Hard drive bay 1
11. Release button
12. Server blade handle
For an internal view of the HP ProLiant BL685c G5 and additional information, such as the system
board layout and how to install mezzanine HCA cards, see the HP ProLiant BL685c Generation 5
Server Maintenance and Service Guide:
http://bizsupport.austin.hp.com/bc/docs/support/SupportManual/c00805082/c00805082.pdf

4.15 HP Integrity BL860c Server Blade Overview


The HP Integrity BL860c server blade is a two-socket, full-height server blade featuring three
models of the Intel Itanium 2 9000 Series processor, supported by up to 48 GB of memory (12
DIMM slots). The BL860c features four Gigabit Ethernet ports standard, support for three standard
c-Class I/O mezzanine cards, and up to two internal SFF SAS hot-pluggable disk drives. Table 4-10
lists the features of the HP Integrity BL860c server blade.
Table 4-10 HP Integrity BL860c Server Blade Features
Item Description

Processor Up to two Intel Itanium 2 9000 Series processors:


• Intel Itanium 2 Processor (9040) 1.6 GHz/18 MB L3 cache, dual-core
• Intel Itanium 2 Processor (9015) 1.4 GHz/12 MB L3 cache, dual-core
• Intel Itanium 2 Processor (9010) 1.6 GHz/6 MB L3 cache, single-core

Memory Up to 48 GB of memory, supporting twelve DDR2-533 ECC DIMMs

Storage Controller Embedded smart array E200i controller integrated on system board

Standard RAID 1 controller

Internal Drive Support Supports up to two small form factor (SFF) SAS hot-pluggable hard drives

Network Controller Four integrated NICs consisting of:

– Four embedded standard Gigabit NICs

– Plus one additional 10/100 NIC dedicated to Integrity iLO 2 Management

Mezzanine Support Three additional I/O expansion slots via mezzanine card.

Supports up to three mezzanine cards:


• Dual-port Fibre Channel mezzanine (4-Gb) options for SAN connectivity
(QLogic).
• 4X DDR InfiniBand (IB) mezzanine (20 Gb/s) options for low latency server
interconnectivity

Internal USB Support No internal USB port on the BL860c

Management Integrity Integrated Lights Out 2 (iLO 2) management processor

HP Integrity Integrated Lights-Out 2 (iLO 2) with Advanced Pack (shipped standard)

Ignite-UX for HP-UX

HP System Insight Manager

HP System Management Homepage

4.15.1 HP Integrity BL860c Front View


Figure 4-40 shows the front view of an HP Integrity BL860c server blade.



Figure 4-40 HP Integrity BL860c Front View

The following list describes the callouts in Figure 4-40:


1. Hard drive bays
2. Status indicator
3. Power button
4. Server blade handle
5. Local I/O cable connector (the local I/O cable connector is used with the local I/O cable to
perform some server blade configuration and diagnostic procedures)

4.15.2 HP Integrity BL860c LEDs


Figure 4-41 shows the HP Integrity BL860c LEDs.



Figure 4-41 HP Integrity BL860c LEDs

The following list describes the callouts in Figure 4-41:


1. Unit identification (UID) LED
2. System health LED
3. Internal health LED
4. NIC 1 LED
5. NIC 2 LED
6. NIC 3 LED
7. NIC 4 LED

4.15.3 HP Integrity BL860c Internal View


Figure 4-42 shows the internals of the HP Integrity BL860c.



Figure 4-42 HP Integrity BL860c Internal View

The following list describes the callouts in Figure 4-42:


1. SAS backplane
2. Memory DIMMs
3. Mezzanine card 1
4. Mezzanine card 2
5. Mezzanine card 3
6. Processors
7. System board
8. Trusted platform module
9. Front panel
10. SAS disk drives

4.15.4 Configuring or Replacing Memory


When installing additional memory in the HP Integrity BL860c server blade, observe the following
guidelines:
1. All DIMMs must be PC2-4200 DIMMs (DDR2 533 MHz).
2. Both DIMM slots in a memory bank must be populated.
3. Banks must be populated in sequential numeric order, starting with bank 0.
4. Both DIMMs in a memory bank must be identical. Install identical DIMMs in pairs, starting
with DIMM socket A, bank 0.



5. Double Chip Spare requires that DIMMs be loaded in identical quads.
6. DIMMs must be installed in decreasing capacity, with the largest DIMMs installed in the
lowest-numbered DIMM slots.
For the latest memory configuration information or for replacing memory information, see the
HP Integrity BL860c Server Blade QuickSpecs at:
http://h18004.www1.hp.com/products/servers/integrity-bl/c-class/860c/index.html.
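The loading rules above can be expressed as a short configuration check. This is a hypothetical sketch, not an HP tool; each populated bank is encoded as a (capacity, part number) pair, which also captures the identical-pair rule:

```python
# Hypothetical check of the BL860c loading rules above: banks filled in
# sequential order from bank 0, identical DIMMs within a bank, and
# capacities non-increasing as the bank number grows.

def check_banks(banks):
    """banks is a list indexed by bank number; each entry is None (empty)
    or a (capacity_gb, part_number) pair describing the two identical
    DIMMs in that bank."""
    populated = [b for b in banks if b is not None]
    # Rule: no gaps -- bank N may be used only if banks 0..N-1 are used.
    if None in banks[:len(populated)]:
        return "banks must be populated in sequential order from bank 0"
    # Rule: largest DIMMs in the lowest-numbered banks.
    capacities = [capacity for capacity, _ in populated]
    if capacities != sorted(capacities, reverse=True):
        return "DIMM capacity must not increase with bank number"
    return "ok"

print(check_banks([(4, "PN-A"), (2, "PN-B"), None]))  # ok
print(check_banks([(2, "PN-B"), (4, "PN-A"), None]))  # capacity-order violation
```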

4.15.5 Mezzanine HCA Card


In HP Cluster Platform, the HP Integrity BL860c server blades are pre-configured with the
appropriate mezzanine card for the configuration type selected. For more information on the
mezzanine HCA card, see the HP Integrity BL860c Server Blade QuickSpecs at:
http://h18004.www1.hp.com/products/servers/integrity-bl/c-class/860c/index.html.

4.15.6 Supported Storage


The HP Integrity BL860c supports up to two optional hot-pluggable Serial Attached SCSI (SAS)
drives for a maximum of 292 GB (2 x 146 GB SAS) of internal storage.
The physical aspect of inserting and removing a disk drive is discussed in the document that
comes with the drive.
For more detailed SAN configuration information for the server blade, see the following
documents:
• The model-specific QuickSpecs document located on the HP c-Class server blade products
web page: http://h18004.www1.hp.com/products/blades/components/
c-class-bladeservers.html
• The HP StorageWorks SAN documentation:
http://h18006.www1.hp.com/storage/index.html
Search for the SAN product required, and navigate to technical documentation.
• The HP BladeSystem c-Class storage web page:
http://h18004.www1.hp.com/products/blades/components/c-class-storage.html

4.15.7 Removing the HP Integrity BL860c from the c-Class Enclosure


The procedure to remove the HP Integrity BL860c server blade from the c–Class enclosure is the
same as described previously in Section 4.10.8.



5 Workstations
Some HP Cluster Platform solutions support HP workstations. This chapter presents an overview
of the following HP workstations:
• HP xw8200 Workstation (see Section 5.1)
• HP xw8400 Workstation (see Section 5.2)
• HP xw9300 Workstation (see Section 5.3)
• HP xw9400 Workstation (see Section 5.4)

5.1 HP xw8200 Workstation Overview


The HP xw8200 is an Intel-based dual-processor workstation with Hyper-Threading technology.
It is equipped with the Intel E7525 high-end performance chipset for Intel Xeon processors.
Hyper-Threading technology, developed by Intel, enables a single processor to execute multiple
threads of instructions simultaneously, so the processor uses its execution resources more
efficiently, delivering performance increases and improving user productivity.
Table 5-1 describes the standard features of the HP xw8200 Workstation.
Table 5-1 HP Workstation xw8200 features
Feature Specification

Processor One or two Intel Xeon processors at 2.8, 3.2, 3.4, or 3.6 GHz with 1 MB L2 cache
and Hyper-Threading

Front side bus 800 MHz

Chipset Intel E7525

Memory Up to 16 GB of registered ECC DDR2-400 SDRAM (8 DIMMs in 4 pairs, dual-channel
architecture)

Expansion bays Three external 5.25-inch bays (one used for optional floppy), five internal 3.5-inch
bays

Drive controllers Integrated dual-channel SATA/150 controller with RAID (0 or 1) capability,
integrated dual-channel Ultra320 SCSI controller with RAID (0 or 1) capability,
optional four-channel SATA controller with RAID (0 or 1) capability, optional
single-channel Ultra320 SCSI controller with RAID (0, 1, 01, 5) capability

Removable media 48X CD-ROM, 48X CD-RW, 16X DVD-ROM, 48X CD-RW/DVD combo, 8X
DVD+RW

Expansion slots Seven slots: one PCI Express (x16) graphics slot, one PCI Express (x4) slot,
three PCI-X slots, and two PCI slots

Graphics PCI Express (x16) graphics: NVIDIA Quadro NVS 280 (PCI/PCI-E), NVIDIA Quadro
FX 540, ATI FireGL V3100, NVIDIA Quadro FX 1400, ATI FireGL V5100, NVIDIA
Quadro FX 3400

High Performance Tuning HP Performance Tuning Framework guides system setup, allowing a custom
Framework configuration that best matches the workstation to user requirements. This
custom feature ensures availability of the graphics drivers and removes some
memory restraints. For specific application support and download instructions,
go to the following web site: http://www.hp.com/go/framework

I/O ports and connectors Front: headphone, microphone, two USB 2.0, one IEEE 1394. Rear: six USB,
one standard serial port, one parallel port, PS/2 keyboard and mouse, one RJ-45,
one audio in, one audio out, one mic in, one IEEE 1394

Communications Integrated Intel Pro MT 10/100/1000 LAN, optional Intel Pro MT PCI NIC, optional
Intel Pro XT PCI-X NIC, optional Broadcom Gigabit PC NIC


Power supply 600 W

Input devices USB or PS/2 keyboard; choice of 2-button scroll mouse (optical or mechanical);
3-button mouse (optical or mechanical)

Figure 5-1 displays the front panel of the HP xw8200 Workstation, and Figure 5-2 displays its
rear panel.

Figure 5-1 HP xw8200 Workstation Front Panel


The following table describes the callouts shown in Figure 5-1:

Item Description

1 Optical drive

2 Optical drive activity lights

3 5.25-inch drive bays

4 Optical drive eject button

5 Power on light

6 Power button

7 Hard drive activity light

8 USB ports (two)

9 Headphone connector

10 Microphone connector

11 IEEE-1394 connector



Figure 5-2 HP xw8200 Workstation Rear Panel

The following table describes the callouts shown in Figure 5-2:

Item Description

1 Power cord connector

2 Keyboard connector

3 Serial connector (teal)

4 USB ports (six)

5 IEEE 1394 connector

6 Microphone connector (pink)

7 Audio line out connector (lime)

8 Universal chassis clamp openings

9 Access panel key

10 Padlock loop

11 Cable lock slot

12 Mouse connector (green)

13 Parallel connector (burgundy)

14 RJ-45 network connector

15 Audio line-in connector (light blue)

16 Graphics adapter (blue)

5.1.1 PCI Slot Assignments


The HP xw8200 workstation has seven PCI expansion slots. Table 5-2 summarizes the slot
assignments, the appropriate PCI cards, and the maximum slot power.

Table 5-2 HP xw8200 Workstation PCI Slots
Slot Assignment Maximum Slot Power Comment

1 PCI 10W

2 PCI Express X16 150W Graphics adapter

3 PCI 25W

4 PCI Express X4 25W PCI Express interconnect

5 PCI-X 133 25W PCI interconnect

6 PCI-X 100 25W

7 PCI-X 100 25W External Gigabit Ethernet

5.1.2 Replacing or Installing a PCI Card


Use the following procedures to install and replace PCI cards.

Replacing a PCI Card


To replace a PCI or PCI Express card in an HP xw8200 workstation, follow these steps:
1. Disconnect the power cord from the AC outlet and then from the workstation.
2. Disconnect all peripheral device cables from the workstation.
3. Remove the access panel by pulling up on the handle and lifting it off the chassis.
4. Lay the workstation on its side with the system board facing up.
5. Remove the PCI retainer, as shown in Figure 5-3.

Figure 5-3 PCI Retainer


6. Remove the PCI card support, if necessary.


7. Lift the PCI levers by first pressing down, then out. If you are removing a PCI Express card,
remove the power supply cable, if required, and move the “hockey stick” lever to release
the card. Figure 5-4 shows the procedure for a PCI card, and Figure 5-5 shows the procedure
for a PCI Express card.



Figure 5-4 PCI Levers


Figure 5-5 PCI Express Levers


8. Lift the PCI card out of the chassis and store it in an antistatic bag.

Installing a New PCI Card


To install a new PCI or PCI Express card in an HP xw8200 workstation, follow these steps:
1. Disconnect the power cord from the AC outlet, then from the workstation.
2. Disconnect all peripheral device cables from the workstation.
3. Remove the access panel as follows:
a. If necessary, unlock the access panel.
b. Pull up on the handle and lift off the cover.
4. Lay the workstation on its side with the system board facing up.
5. Remove the PCI retainer, as shown in Figure 5-3.
6. Lift the PCI levers by first pressing down on them and then out, as shown by callout 1 in
Figure 5-6.

Figure 5-6 Installing a PCI Card in the HP xw8200 Workstation

7. Remove the PCI slot cover, as shown by callout 2 in Figure 5-6.


8. Lower the PCI or PCI Express card into the chassis. Verify that the keyed components
of the card align with the socket, as shown by callout 3 in Figure 5-6.
9. Close the PCI levers, as shown by callout 4 in Figure 5-6. If the PCI levers do not close,
be sure all cards are properly seated and then try again.
10. If you are installing a PCI Express card, see Figure 5-7.

Figure 5-7 Installing a PCI Express Card in the HP xw8200 Workstation



Plug in the power supply cable, if required.

5.1.3 Removing a Workstation from the Rack


To access internal components in the HP xw8200 workstation, you must first shut down the
workstation and remove it from the rack. All of the workstations in the cluster are secured to
the rack on sliding rails.
To remove the HP xw8200 workstation from the rack, follow these steps:
1. Power down the workstation.
2. Disconnect all remaining cables on the workstation rear panel, including cables extending
from external connectors on expansion boards. Make note of which Ethernet and interconnect
cables are connected to which ports.



3. Remove the two screws on each mounting flange.
4. Slide the workstation out of the rack until the rail locks engage.
5. Press and hold the rail locks, and extend the workstation until it clears the rack.
6. Remove the workstation from the rack and position it securely on a workbench or other
solid surface for stability and safety.

5.2 HP xw8400 Workstation Overview


The HP xw8400 is an Intel-based dual-processor, dual-core, high-end graphics workstation. It
is equipped with the high-end performance Intel 5000X chipset (supporting up to 64 GB of
memory) for Intel Xeon processors.
HP Performance Tuning Framework (PTF) comes pre-installed to guide workstation setup and
custom configuration to help increase performance of selected applications and overall
productivity.
Table 5-3 describes the standard features of the HP xw8400 workstation.
Table 5-3 HP Workstation xw8400 Features
Feature Specification

Processor One or two Dual-Core Intel Xeon 5000 Sequence processors: 3.00 GHz/667 MHz
FSB, 3.20 GHz/1066 MHz FSB, or 3.73 GHz/1066 MHz FSB, with EM64T (2 MB L2
cache per processor core)

One or two Dual-Core Intel Xeon 5100 Sequence processors: 1.60 GHz/1066 MHz
FSB, 1.86 GHz/1066 MHz FSB, 2.00 GHz/1333 MHz FSB, 2.33 GHz/1333 MHz FSB,
2.66 GHz/1333 MHz FSB, or 3.00 GHz/1333 MHz FSB, with EM64T (4 MB shared
L2 cache per processor)

Chipset Intel 5000X

Memory Up to 8 GB of ECC registered four-channel DDR2-667 fully buffered DIMMs
via 1 GB DIMMs in 8 DIMM slots

Up to 16 GB of memory via 2 GB DIMMs;

Up to 32 GB of memory via 4 GB DIMMs

The HP xw8400 is enabled to achieve the maximum memory supported by the
chipset of 64 GB

Drive bays Three external 5.25-inch bays

Five internal 3.5-inch bays

Drive controllers Integrated six-channel SATA 3 Gb/s controller with RAID 0, 1, 10, 5 capability

Integrated four-channel SAS controller with RAID 0, 1 capability

Hard drives Up to five SATA drives, 2.5 TB max.; 80 GB (7200 RPM) SATA 3 Gb/s, or 160,
250, 500 GB (7200 RPM) SATA 3 Gb/s NCQ, or 80, 160 GB (10K RPM) SATA 1.5
Gb/s NCQ

Up to five serial attached SCSI (SAS) drives, 730 GB max.; 146 GB (10K RPM) or
73, 146 GB (15K RPM)
NOTE: Populating all five drives with SAS will require an additional (optional)
SAS controller

Optical drives 48x CD-ROM, 16X DVD-ROM, 48X CD-RW/DVD combo, 16X DVD+/-RW DL
with LightScribe Direct Disc Labeling (Microsoft Windows XP Professional only,
requires LightScribe media for labeling)

Expansion slots Seven slots: one PCIe x16 graphics slot, one PCIe x16 mechanically (x4 electrically),
one PCIe x8 mechanically (x4 electrically), three PCI-X slots (one 133 MHz, two
100 MHz slots) and one legacy PCI slot

Graphics Professional 2D: NVIDIA Quadro NVS 285 (128 MB, dual display)


Entry 3D: NVIDIA Quadro FX 560 (128 MB), ATI FireGL V3300 (128 MB)

Midrange 3D: NVIDIA Quadro FX 1500 (256 MB), ATI FireGL V7200 (256 MB)

High-end 3D: NVIDIA Quadro FX 3500 (256 MB), NVIDIA Quadro FX 4500 (512
MB) with opt. Quadro G-Sync card

Audio Integrated High Definition audio with Jack Retasking technology, optional Sound
Blaster X-Fi XtremeMusic (PCI)

Network Integrated Broadcom 5752 NetXtreme Gigabit PCIe LAN on motherboard, optional
Broadcom 5751 Gigabit PCIe NIC

Ports Front: 2 USB 2.0, one audio line out, one microphone line in, 1 IEEE 1394

Rear: five USB 2.0, one standard serial port, one parallel port, PS/2 keyboard and
mouse, one RJ-45, audio line in, audio line out, one IEEE 1394

Internal: one USB 2.0

Input devices USB or PS/2 keyboard; choice of 2-button scroll mouse (USB optical or PS/2
mechanical); 3-button mouse (USB optical); USB SpaceMouse; USB SpaceBall,
USB SpacePilot

Power 800 W

Figure 5-8 displays the front panel of the HP xw8400 Workstation, and Figure 5-9 displays its
rear panel. The xw8400 front and rear panels are similar to the xw8200. Enhancements for the
HP xw8400 Workstation include a Built-In Self Test LED and an optional MiniSAS 4–port
connector on the rear panel.

Figure 5-8 HP xw8400 Workstation Front Panel




The following table describes the callouts shown in Figure 5-8:

Item Description

1 Optical drive

2 Optical drive activity lights

3 5.25-inch drive bays

4 Optical drive eject button

5 Power on light

6 Power button

7 Hard drive activity light

8 USB ports (two)

9 Headphone connector

10 Microphone connector

11 IEEE-1394 connector

Figure 5-9 HP xw8400 Workstation Rear Panel



The following table describes the callouts shown in Figure 5-9:

Item Description

1 Power cord connector

2 Keyboard connector

3 Serial connector (teal)

4 USB ports (six)

5 IEEE 1394 connector


6 Microphone connector (pink)

7 Audio line out connector (lime)

8 Universal chassis clamp openings

9 Access panel key

10 Padlock loop

11 Cable lock slot

12 Mouse connector (green)

13 Parallel connector (burgundy)

14 RJ-45 network connector

15 Audio line-in connector (light blue)

16 Graphics adapter (blue)

17 MiniSAS 4-port connector (optional) – not shown; the item is in the lower right corner

18 Built-In Self Test (BIST) LED – not shown; the item is in the upper left corner

5.2.1 PCI Slot Assignments


The HP xw8400 Workstation has seven PCI expansion slots. Figure 5-10 shows the xw8400 slots
and Table 5-4 summarizes the slot assignments, the appropriate PCI cards, and the maximum
slot power.

Figure 5-10 xw8400 PCI Slots


Table 5-4 HP xw8400 Workstation PCI Slots


Slot Assignment Maximum Slot Power Comment

1 PCI (32–bit, 33 MHz) 10W G-Sync

2 PCI Express x16 150W PCI Express graphics adapter

3 PCI Express x8 mechanical (x4 electrical) 25W N/A


4 PCI Express x16 mechanical (x4 electrical) 25W SI – gigabit Ethernet or InfiniBand

5 PCI-X 133 25W SI – Myrinet or Quadrics

6 PCI-X 100 25W Available

7 PCI-X 100 25W NIC

5.2.2 xw8400 PCI Slot Rules


The slot rules for the xw8400 are shown in Table 5-5.
Table 5-5 xw8400 PCI Slot Rules
Slot Number | Slot Type | 1-2 CPU, 1-slot GPU | 1-2 CPU, 2-slot GPU | 1-2 CPU, Two 1-slot GPU | 1-2 CPU, Two 2-slot GPU
1 | PCI | G-Sync | G-Sync | G-Sync | G-Sync
2 | PCI Express x16 | Graphics | Graphics | Graphics 1 | 2-slot Graphics 1
3 | PCI Express x8 (x4) | available | NA (1) | SI-E (2) | NA (1)
4 | PCI Express x16 (x4) | SI-E (2) | SI-E (2) | Graphics 2 | 2-slot Graphics 2
5 | PCI-X 133 | SI-X (3)(4) | SI-X (3)(4) | SI-X (3)(4) | NA (1)
6 | PCI-X 100 | available | available | available | SI-X (3)(4)
7 | PCI-X 100 | NIC (4) | NIC (4) | NIC (4) | NIC (4)

(1) NA = not available; slot is blocked by the GPU
(2) IB or GigE PCI-E adapters
(3) Myrinet or Quadrics PCI-X adapters
(4) ISS card installed by Integration Center

Note:
The 2-slot GPU is an NVIDIA FX4500 or FX5500.
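
The slot rules in Table 5-5 amount to a lookup table. As a minimal sketch (this is illustrative code, not HP tooling; the table name, function name, and configuration keys are invented for the example), a configuration script could encode them as:

```python
# Table 5-5 encoded as a dictionary: slot number -> (slot type, assignment
# per GPU configuration). Cell values are taken directly from the table;
# "NA" means the slot is blocked by a 2-slot GPU.
XW8400_SLOT_RULES = {
    1: ("PCI",                  {"1-slot": "G-Sync", "2-slot": "G-Sync",
                                 "two 1-slot": "G-Sync", "two 2-slot": "G-Sync"}),
    2: ("PCI Express x16",      {"1-slot": "Graphics", "2-slot": "Graphics",
                                 "two 1-slot": "Graphics 1", "two 2-slot": "2-slot Graphics 1"}),
    3: ("PCI Express x8 (x4)",  {"1-slot": "available", "2-slot": "NA",
                                 "two 1-slot": "SI-E", "two 2-slot": "NA"}),
    4: ("PCI Express x16 (x4)", {"1-slot": "SI-E", "2-slot": "SI-E",
                                 "two 1-slot": "Graphics 2", "two 2-slot": "2-slot Graphics 2"}),
    5: ("PCI-X 133",            {"1-slot": "SI-X", "2-slot": "SI-X",
                                 "two 1-slot": "SI-X", "two 2-slot": "NA"}),
    6: ("PCI-X 100",            {"1-slot": "available", "2-slot": "available",
                                 "two 1-slot": "available", "two 2-slot": "SI-X"}),
    7: ("PCI-X 100",            {"1-slot": "NIC", "2-slot": "NIC",
                                 "two 1-slot": "NIC", "two 2-slot": "NIC"}),
}

def slot_assignment(slot, gpu_config):
    """Return the Table 5-5 assignment for a slot under a GPU configuration."""
    slot_type, by_config = XW8400_SLOT_RULES[slot]
    return by_config[gpu_config]
```

For example, `slot_assignment(3, "2-slot")` returns "NA", matching the table cell where the x8 slot is blocked by a 2-slot GPU.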

5.2.3 NVIDIA Quadro FX 4500 Graphics Card


The NVIDIA Quadro FX 4500 graphics card includes the following features:
• G70GL graphics processor
• 512 MB GDDR3 graphics memory
• Dual dual-link DVI-I
• SLI capable
• New dual slot thermal solution
• High Precision Dynamic Range Technology (HPDR)
• Full 128-bit Precision Graphics Pipeline, 12-bit sub-pixel precision
• Rotated Grid FSAA
• Infinite length vertex programs and dynamic flow control
• Fully Programmable Video GPU
Figure 5-11 shows an NVIDIA Quadro FX 4500 graphics card.

Figure 5-11 NVIDIA Quadro FX 4500 Graphics Card

5.2.4 NVIDIA Quadro G-Sync Option Card


The NVIDIA Quadro G-Sync Option Card enables the synchronization of multiple
displays/sources to drive HP workstation-based solutions such as:
• Cluster visualization
• Power walls
• Caves/Immersive environments
• On-Air Broadcast Graphics and Post Production Station
The NVIDIA Quadro G-Sync option card is supported as an option to the NVIDIA Quadro FX
4500. Features include:
• Enables full Genlock/Framelock functionality through GUI or API
• ATX form factor
• x1 PCI-E interface for stability only — can be plugged into any PCI-E or PCI slot
• Draws power from FX 4500 board
• Ribbon cable is 6 inches in length
• Windows and Linux drivers
• One G-Sync card can support 2 FX 4500 cards in the HP xw9400
Figure 5-12 shows an NVIDIA Quadro G-Sync card.


Figure 5-12 NVIDIA Quadro G-Sync Card

5.2.5 NVIDIA Quadro FX 5500 Graphics Card


The NVIDIA Quadro FX 5500 graphics card includes the following features:
• Quadro FX 5500 graphics processor
• 1 GB GDDR2 SDRAM
• Dual dual-link DVI-I
• SLI capable
• New dual slot thermal solution
• High Precision Dynamic Range (HPDR) technology
• Full 128-bit Precision Graphics Pipeline, 12-bit sub-pixel Precision
• Rotated Grid FSAA
• Infinite length vertex programs and dynamic flow control
• Fully Programmable Video GPU
Figure 5-13 shows an NVIDIA Quadro FX 5500 graphics card.

Figure 5-13 NVIDIA Quadro FX 5500 Graphics Card

5.2.6 Replacing or Installing a PCI Card
Use the following procedures to install and replace PCI cards.

Replacing a PCI Card


To remove a PCI or PCI Express card from an HP xw8400 workstation, follow these steps:
1. Disconnect the power cord from the AC outlet and then from the workstation.
2. Disconnect all peripheral device cables from the workstation.
3. Remove the access panel by pulling up on the handle and lifting it off the chassis.
4. Lay the workstation on its side with the system board facing up.
5. Remove the PCI retainer, if necessary.

Note:
For added protection, some cards have PCI retainers installed to prevent movement during
shipping.

For short or tall cards, lift the PCI retainer arm (callout 1) with one hand, press in on the
sides (callout 2) of the retainer, and rotate it (callout 3) out of the chassis as shown in
Figure 5-14.

Figure 5-14 PCI Retainer


6. Open the PCI retention clamp by pressing down on the two green clips (callout 1 in
Figure 5-15) at the ends of the clamp and rotating the clamp toward the back of the system.


Figure 5-15 PCI Retention Clamp


7. Lift the PCI card out of the chassis (callout 2 in Figure 5-15). If you are removing a PCI
Express high-end graphics card, remove the auxiliary power supply cable (not illustrated)
if required, and move the lever to release the card and lift it out of the chassis (callout 3 in
Figure 5-15). Store the card in an anti-static bag.

Installing a New PCI Card


To install a new PCI or PCI Express card in an HP xw8400 workstation, follow these steps:
1. Disconnect the power cord from the AC outlet, then from the workstation.
2. Disconnect all peripheral device cables from the workstation.
3. Remove the access panel as follows:
a. If necessary, unlock the access panel.
b. Pull up on the handle and lift off the cover.
4. Lay the workstation on its side with the system board facing up.
5. If necessary, remove the PCI retainer, as shown in Figure 5-14.
6. Open the PCI retention clamp by pressing down on the clips and then rotating the clamp
out, as shown by callout 1 in Figure 5-16.

Figure 5-16 Installing a PCI or PCI Express Card in the HP xw8400 Workstation

7. Remove the PCI slot cover (callout 2 in Figure 5-16).
8. Lower the PCI or PCI Express card into the chassis (callout 3 in Figure 5-16). Verify that the
keyed components of the card align with the socket.
9. If installing a card with an auxiliary power connector, plug in the power supply cable or
adapter cable supplied with the card (callout 4 in Figure 5-16). Such cards include, but are
not limited to, PCI Express graphics cards drawing more than 75 W and 1394a I/O cards.
10. Close the PCI retention clamps. If the PCI retention clamps do not close, be sure all cards
are properly seated and then try again.

5.2.7 Removing a Workstation from the Rack


To access internal components in the HP xw8400 workstation, you must first shut down power
to the workstation and remove it from the rack. All of the workstations in the cluster are secured
to the rack on a sliding rail.
To remove the HP xw8400 workstation from the rack, follow these steps:
1. Power down the workstation.
2. Disconnect all remaining cables on the workstation rear panel, including cables extending
from external connectors on expansion boards. Make note of which Ethernet and interconnect
cables are connected to which ports.
3. Remove the two screws on each mounting flange.
4. Slide the workstation out of the rack until the rail locks engage.
5. Press and hold the rail locks, then extend the workstation until it clears the rack.
6. Remove the workstation from the rack and position it securely on a workbench or other solid
surface for stability and safety.

5.3 HP xw9300 Workstation Overview


The HP xw9300 Workstation is a 64-bit personal workstation designed for visualization and
compute-intensive environments. It supports dual PCI Express x16 graphics and dual single-
or dual-core AMD Opteron processors.
The AMD Direct Connect Architecture connects the HP xw9300's memory and I/O directly to
the CPU, optimizing performance. This helps balance throughput and enables expandable I/O.
The CPUs are interconnected, allowing more linear symmetric multiprocessing (SMP) scaling.
The HP xw9300 has two x16 PCI Express ports supporting dual high-end graphics and SLI
enablement. By enabling up to four full-performance 3D displays, the HP xw9300 provides
cost-effective, scalable visualization capability for demanding high-performance graphics solutions
such as parallel rendering or compositing. In addition, the HP xw9300 with PCI Express can
accommodate future upgrades to the latest graphics and I/O components. Table 5-6 describes
the standard features of the HP xw9300 workstation.
Table 5-6 HP Workstation xw9300 Specifications

Feature Specification

Processor Single or dual AMD Opteron 200 series processors 246 (2.0 GHz), 248 (2.2 GHz),
250 (2.4 GHz), 252 (2.6 GHz), 254 (2.8 GHz); dual-core processors 270 (2.0 GHz),
275 (2.2 GHz), 280 (2.4 GHz) with AMD64 Technology and AMD HyperTransport

System bus 1 GHz AMD HyperTransport

Chipset NVIDIA nForce Professional with AMD-8131 HyperTransport PCI-X tunnel

Memory 16 GB maximum; 8 registered DIMMs; DDR1-400 ECC (512 MB, 1 GB, 2 GB); up
to 12.8 GB/sec throughput

Expansion bays Three external 5.25 inch bays, 5 internal 3.5 inch bays

Drive controllers Integrated SATA 3 Gb/s controller (4 channels) with RAID 0, 1, 0+1 capability;
Integrated dual channel Ultra320 SCSI controller with opt. external connector;
Opt. Ultra320 SCSI controller – basic; Opt. Ultra320 SCSI controller – advanced
with RAID 0, 1, 10, 5, 50, JBOD capability; opt. 4 channel 3 Gb/s SATA RAID
controller

Removable media 48X CD-ROM, 48X CD-RW, 16X DVD-ROM, 48X CD-RW/DVD combo, 16X
DVD+/- RW DL LightScribe Disc Labeling (Windows 2K and XP only, requires
LightScribe media for labeling)

Expansion slots Six slots: 2 PCI Express (PCIe) x16 graphics and I/O; 3 full-height PCI-X slots (one
133 MHz, two 100 MHz slots); 1 full-length PCI slot

Graphics Professional 2D: NVIDIA Quadro NVS 280 (PCIe) or Quadro NVS 285 with
NVIDIA TurboCache Technology (PCIe); Entry 3D: NVIDIA Quadro FX 540;
Mid-range 3D: NVIDIA Quadro FX 1400, NVIDIA SLI Technology capable;
High-end 3D: NVIDIA Quadro FX 3450, Quadro FX 4500 with opt. Quadro G-Sync
card, NVIDIA SLI Technology capable

HP Performance Tuning Framework HP Performance Tuning Framework guides system setup, allowing a custom
configuration that best matches the workstation to user requirements. This
customization ensures availability of the graphics drivers and removes some
memory restraints. For specific application support and download instructions,
go to the following Web site: http://www.hp.com/go/framework

I/O ports and connectors Front: 2 USB 2.0, Headphone, Microphone, IEEE 1394; Back: 4 USB 2.0, 1 standard
serial port, IEEE 1394, PS/2 keyboard and mouse, 1 RJ-45 to integrated Gigabit
LAN, Audio In, Audio Out, Mic In

Communications Integrated NVIDIA Gigabit LAN-on-motherboard; optional Broadcom 5751
Gigabit (PCIe)

Power supply 700 W

Input devices USB or PS/2 keyboard; choice of two-button scroll mouse (optical or mechanical);
three-button mouse (optical or mechanical); USB SpaceBall 5000, USB SpacePilot

The front panel of the xw9300 is identical to the front panel of the xw8200, as shown in Figure 5-1.
Figure 5-17 shows the rear panel of the xw9300.

Figure 5-17 HP xw9300 Workstation Rear Panel


The following table describes the callouts in Figure 5-17:

Item Description

1 Power cord connector

2 Power supply built-in self test (BIST) LED

3 Serial connector (teal)

4 PS/2 keyboard connector (purple)

5 USB (x4)

6 IEEE 1394 connector

7 Microphone connector (pink)

8 Universal chassis clamp opening

9 Access panel keys

10 Padlock loop

11 Cable lock slot

12 PS/2 mouse connector (green)

13 RJ-45 network connector

14 Audio line-in connector (light blue)

15 Audio line-out connector (lime)

16 Graphics adapter

5.3.1 PCI Slot Identification


The HP xw9300 workstation has six PCI expansion slots. Figure 5-18 shows a rear view of the
xw9300 workstation, and Table 5-7 identifies the slots.


Note:
Although the xw9300 has six PCI slots on the motherboard, there are seven apertures in the
chassis. Ensure that you remove the correct blank when installing a card. See Figure 5-18.

Figure 5-18 HP xw9300 Slot Numbering


Table 5-7 HP xw9300 Workstation PCI Slots


Slot Assignment

0 Blank

1 PCI Express x16 graphics

2 PCI

3 PCI Express x16 graphics

4 PCI-X 100

5 PCI-X 100

6 PCI-X 133

5.3.2 Slot Assignment Rules


The slot assignment rules for the xw9300 are complicated by the availability of dual-slot graphics
cards, and by the optional G-sync card used in some configurations. The rules are as follows:
• Slot 1: This slot always contains a PCI Express x16 graphics card (either a single-width or
dual-width card).
• Slot 3: If used in the configuration, a PCI Express interconnect card (HCA or HBA) is always
installed in slot 3, a PCI-E x16 slot.
Otherwise, if there is a second graphics card, it must be installed in slot 3 (a PCI Express
x16 slot), and you cannot use a PCI-E interconnect in this slot.
• Slot 4: The G-sync card is installed in slot 4 as a first choice; slot 6 is the second choice.
• Slot 5: The network interface card (NIC) is installed in slot 5. If two graphics cards are
installed and an interconnect card is in slot 6, then the G-sync card must be installed in slot
5 (the only open slot) and there is no slot available for a NIC.
• Slot 6: If used in the configuration, a PCI-X interconnect card (HCA or HBA) is always
installed in slot 6, the PCI-X 133 slot. If unused by an HCA or HBA, the G-sync card is
installed in slot 6 as the second choice.

Table 5-8 and Table 5-9 list the slot assignments for narrow (1–slot), or wide (2–slot) graphics
cards.
Table 5-8 Narrow (1-Slot) Graphics Cards

Slot    Single CPU     2 CPU, 1 GFX   2 CPU, 2 GFX

1       GFX            GFX            GFX

3       -              -              GFX

4       G-sync         G-sync         G-sync

5       NIC            NIC            NIC

6       Interconnect   Interconnect   Interconnect

Table 5-9 Wide (2-Slot) Graphics Cards

Slot    Single CPU     2 CPU, 1 GFX   2 CPU, 2 GFX

1       GFX            GFX            GFX

2       GFX            GFX            GFX

3       -              -              GFX

4       G-sync         G-sync         GFX

5       NIC            NIC            NIC, or G-sync

6       Interconnect   Interconnect   Interconnect
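
The G-sync placement rule above (slot 4 first, slot 6 second, slot 5 only when nothing else remains) can be sketched as a small helper. This is an illustrative sketch only, not HP software; the function name and arguments are invented for the example:

```python
def gsync_slot(two_wide_gpus: bool, interconnect_in_slot6: bool) -> int:
    """Illustrative sketch of the xw9300 G-sync placement rule.

    Slot 4 is the first choice. With two wide (2-slot) graphics cards,
    slot 4 is consumed by the second GPU, so the G-sync card falls back
    to slot 6, or to slot 5 (displacing the NIC) when a PCI-X
    interconnect already occupies slot 6.
    """
    if not two_wide_gpus:
        return 4
    if interconnect_in_slot6:
        return 5  # the only open slot; no slot remains for a NIC
    return 6
```

For instance, `gsync_slot(True, True)` returns 5, matching the "NIC, or G-sync" cell in Table 5-9.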

5.3.3 Typical PCI Slot Configuration


Figure 5-19 shows a typical PCI slot configuration for the xw9300.


Figure 5-19 Typical xw9300 Slot Configuration

Figure 5-19 shows the slot usage for a typical xw9300 configuration:
• Slot 0 is a blank chassis aperture; there is no PCI slot on the motherboard.
• Slot 1 contains a narrow graphics card.
• Slot 2 is empty.
• Slot 3 contains a narrow graphics card.
• Slot 4 is empty, and available for an optional G-sync card.
• Slot 5 contains a NIC card.
• Slot 6 contains a high-speed interconnect card.
The actual configuration of a workstation depends on the customer’s original specification and
the workstation role. For example, it might be a display node rather than a render node.

5.3.4 NVIDIA PCI Graphics Cards


NVIDIA PCI Express x16 graphics cards are used in xw9300 workstations. The Quadro FX3450
is a high-end graphics card, and the Quadro FX4500 is an ultra high-end graphics card. The
following sections describe these graphics cards.

5.3.4.1 NVIDIA FX3450 Graphics Card


The FX3450 graphics card has the following features:
• PCI Express x16, single slot
• Two DVI-I outputs, each can provide:
— digital output for flat-panel
— analog output via a dongle for CRT

• 3-pin Mini DIN stereo output (used for shutter glasses)
• Connects to system power supply

5.3.4.2 NVIDIA FX4500 Graphics Card


The FX4500 graphics card has the following features:
• PCI-E x16, dual slot
• Two DVI-I outputs, each can provide:
— digital output for flat-panel
— analog output via a dongle for CRT
• Three-pin mini DIN stereo output (used for shutter glasses)
• Connects to system power supply

5.3.5 Connecting PCI Graphics Cards to Displays


• Connection to display devices is done at the customer site
• Digital and analog displays are supported by the graphics card
• Display nodes connect to display devices via
— DVI-I output on graphics card
— Second (lower) DVI-I output connects to KVM using analog adapter
• Render nodes are not connected to display devices
• Display devices may be:
— HP monitors (digital or analog) or projectors
— Third party devices such as projection systems from:
◦ Fakespace
◦ Barco
◦ Others
• Connection from the graphics card to the display device may be via:
— Direct cable; acceptable cable length depends on display resolution (typically a couple
of meters)
— A third-party product that extends the reach of the video cable
— Indirectly through a third-party video switch

5.3.6 The NVIDIA FX G-Sync PCI Graphics Card (Optional)


You might choose the optional NVIDIA FX G-sync PCI graphics card. The NVIDIA FX G-sync
PCI graphics card:
• Synchronizes video refresh and frames for multi-tile displays and stereo.
• Connects FX 4500 cards across nodes externally.
• Synchronizes the refresh of multiple cards in multiple workstations.
• Required for active stereo.
• Interconnected using CAT-5 cables.
The G-Sync card uses the PCI slot only for physical support. An internal cable attaches the G-sync
card to one or both of the graphics cards. Connections to the G-Sync card are not defined in the
cluster cabling tables. If you are using the Quadro FX G-Sync board in conjunction with a graphics
card, the following additional setup steps are required.


Important:
These steps must be performed when the system is off.

1. On the Quadro FX G-Sync board, locate the fourteen-pin connector labeled "primary".
Connect the ribbon cable to this connector.
2. Install the Quadro FX G-Sync board in any available slot. Note that the slot itself is only
used for support. The slot must be close enough to the graphics card that the ribbon cable
can reach.
3. Connect the other end of the ribbon cable to the fourteen-pin connector on the graphics card.
4. Only the end-user customer makes external connections to this card. The G-Sync card
provides two modular connectors similar to Ethernet RJ-45 jacks. NVIDIA G-sync cards are
connected by using CAT-5 cables.

Warning!
The voltage and signal on the frame lock ports are different from Ethernet signals. Do not connect
a frame lock port to an Ethernet card or network hub. Doing so can damage the hardware.

5.3.7 System Interconnect Cards


Table 5-10 shows the supported interconnect cards depending on the system configuration. Some
configurations depend on whether there are one or two graphics processor units (GPUs).
Table 5-10 Supported Interconnect Cards

Interconnect        PCI-X                    PCI Express

Gigabit Ethernet    290563-B21 (if 2 GPU)    PP618AV (if 1 GPU)

Myrinet XP Rev D    257894-006

Myrinet XP Rev E    360040-B21

InfiniBand          380299-B21 (if 2 GPU)    380298-B21 (if 1 GPU)

Quadrics            AC071A
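
As an illustration of how Table 5-10's GPU-dependent choices might be encoded in a site configuration script, the sketch below uses the part numbers from the table. The helper names are invented, and the assumption that the Quadrics part is the PCI-X column is the author's reading of the table, not an HP statement:

```python
# Part numbers from Table 5-10, keyed by (interconnect, bus).
# For Gigabit Ethernet and InfiniBand, the PCI-X part applies with
# 2 GPUs and the PCI Express part with 1 GPU.
INTERCONNECT_PARTS = {
    ("Gigabit Ethernet", "PCI-X"): "290563-B21",        # if 2 GPU
    ("Gigabit Ethernet", "PCI Express"): "PP618AV",     # if 1 GPU
    ("Myrinet XP Rev D", "PCI-X"): "257894-006",
    ("Myrinet XP Rev E", "PCI-X"): "360040-B21",
    ("InfiniBand", "PCI-X"): "380299-B21",              # if 2 GPU
    ("InfiniBand", "PCI Express"): "380298-B21",        # if 1 GPU
    ("Quadrics", "PCI-X"): "AC071A",  # assumed PCI-X; column not explicit
}

def interconnect_part(name, gpus):
    """Pick the PCI-X part with 2 GPUs, else the PCI Express part.

    Myrinet and Quadrics cards appear only in the PCI-X column, so the
    lookup falls back to PCI-X when no PCI Express part is listed.
    """
    bus = "PCI-X" if gpus == 2 else "PCI Express"
    part = INTERCONNECT_PARTS.get((name, bus))
    return part or INTERCONNECT_PARTS.get((name, "PCI-X"))
```

For example, `interconnect_part("InfiniBand", 2)` returns "380299-B21", the PCI-X HCA used when both x16 slots carry GPUs.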

5.3.8 Memory Configurations


Table 5-11 shows the supported memory configurations.
Table 5-11 Supported Memory Configurations

Memory Size 1 CPU 2 CPU

1 GB (4x512) DDR-400 PP639AV

2 GB (4x512) DDR-400 PR788AV PP641AV

4 GB (4x1GB) DDR-400 PR789AV PP642AV

4 GB (8x512) DDR-400 PP643AV

6 GB (4x1G+4x512) DDR-400 PP644AV

8 GB (8x1GB) DDR-400 PP645AV

8 GB (4x2GB) DDR-400 EM499AV PU987AV

16 GB (4x4GB) DDR-333 EK738AV

32 GB (8x4GB) DDR-333 EK737AV

5.3.9 PCI Card Installation and Removal Instructions


This section describes PCI card installation and removal procedures for the xw9300 workstation.

Note:
The HP xw9300 Workstation contains two PCI Express x16 slots. PCI Express slot 3 is only
active in the dual-processor configuration. This slot cannot be used in a single-processor
configuration.

5.3.9.1 PCI Card Support


For added protection, some cards have PCI holders installed to stabilize the card. To remove the
PCI card holder, follow these steps:
1. Disconnect power from the system and remove the access panel.
2. Referring to the callouts in Figure 5-20, for short or tall PCI cards, lift up on the holder arm
(callout 1) with one hand and press in on the sides (callout 2) of the holder and rotate it out
(callout 3) of the chassis.

Figure 5-20 Removing PCI Card Holders



To install the PCI card support holder, follow these steps:


1. Disconnect power from the system and remove the access panel.
2. Referring to the callouts in Figure 5-21, for short or tall PCI cards, attach the lips of the
support arm (callout 1) under the slots on the rear of the chassis, then rotate the card support
down until the black part of the arm (callout 2) supports the card.


Figure 5-21 Installing PCI Card Holders

5.3.9.2 Removing and Installing PCI Express Cards

To remove a PCI Express Card, refer to the callouts in Figure 5-22, and follow these steps:
1. Disconnect power from the system and remove the access panel. Remove the PCI card
support, if installed.
2. Lift the PCI levers (callout 1) by first pressing down and then up.
3. Remove the power supply cable (callout 2), if installed, and press in on the “hockey stick”
lever (callout 3) while lifting the card (callout 4) out of the chassis. Store the card in an
antistatic bag.
4. Install a PCI slot cover and close the PCI levers. If the PCI levers do not close, be sure all of
the cards are properly seated and then try again.

Figure 5-22 Removing PCI Express Cards



To install a PCI Express card, refer to the callouts in Figure 5-23, and follow these steps:
1. Disconnect power from the system, remove the access panel and remove the PCI card
support, if installed.
2. Lift the PCI levers (callout 1) by first pressing down and then up.
3. Remove the PCI slot cover (callout 2).
4. Lower the PCI Express (callout 3) card into the chassis. Verify that the keyed components
of the card align with the socket.
5. If required, plug in the power supply cable (callout 4).
6. Close the PCI levers (callout 5). If the PCI levers do not close, be sure all of the cards are
properly seated.

Figure 5-23 Installing PCI Express Cards



5.3.9.3 Removing and Installing PCI or PCI-X Cards

Note:
The following illustration shows a PCI card being removed from a PCI slot. A PCI-X card is
removed from a PCI-X slot.

To remove a PCI or PCI-X card, refer to the callouts in Figure 5-24 and follow these steps:
1. Disconnect power from the system, remove the access panel and remove the PCI card
support, if installed.
2. Lift the PCI levers (callout 1) by first pressing down and then up.
3. Lift the PCI card (callout 2) out of the chassis. Store the card in an antistatic bag.
4. Install a PCI slot cover and close the PCI levers. If the PCI levers do not close, be sure all
cards are properly seated and then try again.

Figure 5-24 Removing PCI, or PCI-X Cards




Note:
Figure 5-25 shows a PCI card being installed in a PCI slot. A PCI-X card must be installed in a
PCI-X slot.

To install a PCI or PCI-X card, refer to the callouts in Figure 5-25 and follow these steps:
1. Disconnect power from the system, remove the access panel, and remove the PCI card
support.
2. Lift the PCI levers (callout 1) by first pressing down and then up.
3. Remove the PCI slot cover (callout 2).
4. Lower the PCI (callout 3) card into the chassis. Verify that the keyed components of the card
align with the socket.
5. Close the PCI levers (callout 4). If the PCI levers do not close, be sure all of the cards are
properly seated and then try again.

Figure 5-25 Installing PCI or PCI-X Cards



5.3.10 Removing a Workstation from the Rack


The procedure for removing the xw9300 from the rack is the same as for the xw8200. See Section 5.1.3.

5.4 HP xw9400 Workstation Overview


The HP xw9400 Workstation mechanical design is based on the xw8400 and xw9300 designs. It
incorporates improvements made for the xw8400 while keeping the features from the xw9300
specific to the AMD architecture. Nearly all of the customer-accessible components can be
removed and replaced without tools. Airflow, especially through the disk drive and PCI
areas, has been improved, and the design incorporates a number of additional enhancements.
The xw9400 supports dual AMD Opteron microprocessors, each with four memory slots
available. The AMD Opteron architecture, in conjunction with the NVIDIA nForce Professional
3600 and NVIDIA nForce Professional 3650, enables the xw9400 to drive dual high-performance
graphics slots (entry level to high-end 3D) at full PCIe x16 bandwidth. Additionally, the use
of SLI (Scalable Link Interface) between two graphics boards enhances graphics performance.
Table 5-12 describes the standard features of the HP xw9400 workstation.

Table 5-12 HP Workstation xw9400 Features

Feature Specification

Operating systems Genuine Windows XP 32-bit Edition SP2 (WHQL certified); Genuine Windows XP
Professional x64 Edition (WHQL certified); Microsoft Vista capable; Red Hat Enterprise
Linux 4 (Optional 32-bit or 64-bit: Red Hat Enterprise Linux WS 3 or WS 4 and HP
Linux Installer Kit)

Processor Single or Dual Core AMD Opteron 2000 series processors 2210 (1.80 GHz), 2212 (2.00
GHz), 2214 (2.20 GHz), 2216 (2.40 GHz), 2218 (2.60 GHz), 2220E (2.8 GHz) with AMD64
Technology, 1 MB of L2 cache per core and 1 GHz AMD HyperTransport™ technology

Chipset NVIDIA nForce Professional 3600 and NVIDIA nForce Pro 2050

Memory Up to 64 GB of ECC registered DDR2 667 MHz SDRAM in 8 DIMM slots (a max. of
16 GB with one processor). The HP xw9400 is expected to support 64 GB of memory
with 8 GB DIMMs.

Drive controllers Integrated SATA 3 Gb/s controller (6 channels) with RAID 0, 1, 5 and 10 capability;
Integrated SAS controller (8 channels) with opt. external connector and RAID 0, 1,
and 10

Hard drive(s) Up to five SATA drives supported natively (3.75 TB max.); 80, 160 GB (10K rpm) SATA
1.5 Gb/s or 80 GB (7200 rpm) SATA 3 Gb/s, 160, 250, 500, 750 GB SATA 3 Gb/s NCQ;
or up to five serial attached SCSI (SAS) drives supported natively (1.5 TB max.); 146
GB (10K rpm) or 146, 300 GB (15K rpm) SAS drives

Optical drives CD-ROM, DVD-ROM, CD-RW/DVD combo, DVD+/- RW Dual-Layer with LightScribe
Direct Disc Labeling (Microsoft Windows XP only, requires LightScribe media for
labeling)

Drive bays Three external 5.25 inch bays (opt. StorCase enclosure enables 3.5 inch SATA drive
to be added to 5.25 inch bay), five internal 3.5 inch bays

Slots Seven slots: 2 PCI Express (PCIe) x16 graphics, 2 PCIe x16 (x8 electrical) I/O; 2
full-height PCI-X 100 MHz slots; 1 full-length PCI

Graphics Professional 2D: NVIDIA Quadro NVS 285 (128 MB, up to 2 cards supported)

Entry 3D: NVIDIA Quadro FX 560 (128 MB)

Mid-range 3D: NVIDIA Quadro FX 1500 (256 MB, up to 2 cards supported)

High-end 3D: NVIDIA Quadro FX 3500 (256 MB), NVIDIA Quadro FX 4500 (512 MB
with optional Quadro G-Sync card), NVIDIA Quadro FX 5500 (512 MB, up to 2 cards
supported)

Audio Integrated High Definition audio with jack retasking capability, opt. PCI Sound Blaster
X-Fi XtremeMusic

Network Dual NVIDIA Gigabit LAN-On-Motherboard, opt. Broadcom 5751 NetXtreme Gigabit
PCIe NIC, opt. Intel Pro/1000 GT Gigabit PCIe NIC

Ports Front: two USB 2.0, Headphone, Microphone In, Audio Out, optional IEEE 1394

Back: six USB 2.0, one standard serial port, IEEE 1394, PS/2 keyboard and mouse, two
RJ-45 to integrated Gigabit LAN, Audio In, Audio Out, Mic In

Input devices USB or PS/2 keyboard; choice of 2-button scroll mouse (optical or mechanical); USB
3-button mouse (optical); USB SpaceBall, USB SpacePilot

Dimensions (H × W × D) 17.9 inch (45.5 cm) x 8.3 inch (21.0 cm) x 20.7 inch (52.5 cm)

Power supply 800 W

The front panel of the xw9400 is identical to the front panel of the xw8200, as shown in Figure 5-1.
Figure 5-26 shows the rear view of the xw9400.


Figure 5-26 HP xw9400 Workstation Rear View


The following table describes the callouts in Figure 5-26:

Item Description

1 Power cord connector

2 Power supply built-in self test (BIST) LED

3 Serial connector (teal)

4 SPDIF OUT (single RCA jack to support SPDIF digital audio output via coax
cable)

5 Keyboard connector

6 USB 2.0 ports

7 Microphone connector (pink)

8 Audio line-out connector

9 MiniSAS 4–port connector (optional)

10 Graphics adapter

11 Audio line-in connector

12 RJ-45 network connectors

13 IEEE-1394a connector

14 Mouse connector

15 Cable lock slot

16 Padlock loop

17 Universal chassis clamp opening

18 Access panel key

236 Workstations
5.4.1 PCI Slot Identification
The HP xw9400 workstation has seven PCI expansion slots. Figure 5-27 shows the PCI slots
available in the xw9400 workstation, and Table 5-13 identifies the slots.

Figure 5-27 HP xw9400 Slot Numbering

Table 5-13 HP xw9400 Workstation PCI Slots

Slot Assignment

1 PCI Express x16 (x8) [1]

2 PCI Express x16 graphics

3 PCI 32-bit, 33 MHz

4 PCI Express x16 (x8)

5 PCI Express x16 graphics

6 PCI-X 100

7 PCI-X 100/133

[1] Slot 1 is for short cards only

5.4.2 Slot Assignment Rules


Table 5-14 lists the slot assignments for graphics cards.
Table 5-14 Graphics Cards

Slot    Slot Type         1 CPU        2 CPU, 1 Graphics Card                2 CPU, 2 Graphics Cards

1       PCI Express, x16  Graphics 1   Graphics 1                            Graphics 1

2       PCI Express, x16  N/A          N/A                                   N/A

3       PCI 32/33         N/A          SI – Gigabit Ethernet or InfiniBand   Graphics 2

4       PCI Express, x16  G-Sync       G-Sync                                N/A

5       PCI Express, x16  NIC          NIC                                   G-Sync or NIC

6       PCI-X 100         SI – All     SI – Myrinet or Quadrics              SI – All

7       PCI-X 100/133

5.4.3 xw9400 Graphics Options


The following graphics options are used in xw9400 workstations:
• NVIDIA Quadro FX 3500 (see Figure 5-28)
• NVIDIA Quadro FX 4500 (see Figure 5-29)
• NVIDIA Quadro G-Sync (see Figure 5-30)

5.4.3.1 NVIDIA Quadro FX 3500


The NVIDIA Quadro FX 3500 includes the following features:
• G71GL-U graphics processor
• 256 MB GDDR3 graphics memory
• Dual dual-link DVI-I
• Stereo output
• SLI Capable
• High Precision Dynamic Range Technology (HPDR)
• Full 128-bit Precision Graphics Pipeline, 12-bit sub-pixel Precision
• Rotated Grid FSAA
• Infinite length vertex programs and dynamic flow control
• Fully Programmable Video GPU
Figure 5-28 shows an NVIDIA Quadro FX 3500 graphics card.

Figure 5-28 NVIDIA Quadro FX 3500 Graphics Card

5.4.3.2 NVIDIA Quadro FX 4500


The NVIDIA Quadro FX 4500 includes the following features:
• G70GL graphics processor
• 512 MB GDDR3 graphics memory
• Dual dual-link DVI-I
• SLI Capable
• New dual slot thermal solution
• High Precision Dynamic Range Technology (HPDR)
• Full 128-bit Precision Graphics Pipeline, 12-bit sub-pixel Precision
• Rotated Grid FSAA
• Infinite length vertex programs and dynamic flow control
• Fully Programmable Video GPU
Figure 5-29 shows an NVIDIA Quadro FX 4500 graphics card.


Figure 5-29 NVIDIA Quadro FX 4500 Graphics Card

5.4.3.3 NVIDIA Quadro G-Sync


The NVIDIA Quadro G-Sync enables the synchronization of multiple displays/sources to drive
HP workstation-based solutions such as:
• Cluster Visualization
• Power Walls
• Caves / Immersive Environments
• On Air Broadcast Graphics and Post Production Station
The NVIDIA Quadro G-Sync Option Card is supported as an option to the NVIDIA Quadro FX
4500. Features include:
• Enables full Genlock/Framelock functionality through GUI or API
• ATX form factor
• x1 PCI-E interface for stability only — can be plugged into any PCI-E or PCI slot
• Draws power from FX 4500 board
• Ribbon cable is 6 inches in length
• Windows and Linux drivers
• One G-Sync card can support 2 FX 4500 cards in the HP xw9400
Figure 5-30 shows an NVIDIA Quadro G-Sync card.

Figure 5-30 NVIDIA Quadro G-Sync Card

5.4.4 Memory Configurations


Use only PC2-5300 ECC DIMMs. Match DIMM pairs by size and type. Refer to the HP xw9400
Workstation Service and Technical Reference Guide for more information on upgrading memory in
the HP xw9400 workstation.
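
The matched-pair requirement can be expressed as a trivial sanity check. This is a hypothetical helper sketched for illustration, not HP tooling; the function name and DIMM descriptions are invented:

```python
def dimm_pairs_matched(dimms):
    """Check that DIMMs are populated in matched pairs.

    dimms: DIMM descriptions in slot order, e.g. "1GB PC2-5300 ECC".
    The xw9400 loads DIMMs in pairs matched by size and type, so slots
    (0,1), (2,3), ... must hold identical parts.
    """
    if len(dimms) % 2:
        return False  # an unpaired DIMM violates the pairing rule
    return all(a == b for a, b in zip(dimms[0::2], dimms[1::2]))
```

For example, a population of two 1 GB DIMMs followed by two 2 GB DIMMs passes the check, while a 1 GB DIMM paired with a 2 GB DIMM does not.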

5.4.5 PCI Card Installation and Removal Instructions


PCI and PCI Express card installation, and removal procedures for the xw9400 workstation are
the same as described previously in the xw9300 overview (see Section 5.3.9 (page 231)).

5.4.6 Removing a Workstation from the Rack


The procedure for removing the xw9400 from the rack is the same as for the xw8200. See Section 5.1.3.


A USB Drive Key Support on ProLiant G4 Models
All fourth generation HP ProLiant servers include support for a USB drive key. This device can
be formatted as a floppy drive or as a hard disk drive and made bootable using the HP Drive
Key Boot Utility software. When formatted as a floppy, the drive key looks like a bootable 1.44
MB USB floppy. When formatted as a hard drive, the drive key is bootable with the full device
capacity and can be used to flash system ROMs or boot existing floppy images.
The process of upgrading system and option ROMs is referred to as flashing the ROM or flashing
the firmware. A ROM flash uses software to replace the current system or option ROM on a
target server with a new ROM image.
HP Drive Key Boot Utility software can be downloaded from the following Web site:
http://h18000.www1.hp.com/support/files/serveroptions/us/download/21621.html
Use the following procedure to make your drive key bootable and capable of flashing firmware:
1. Install the HP Drive Key Boot Utility on a Windows system. After installation, the utility
   places a shortcut in the HP System Tools folder of the Programs Start menu.
2. Insert your HP USB drive key into an available USB port.
3. Select the HP Drive Key Boot Utility shortcut in the HP System Tools folder.
4. Complete each step presented by the application.
   If the drop-down menu is empty when you are asked to select the drive letter, the drive key
   might be improperly connected, or it might be seen as a fixed disk. To determine the drive
   type of the USB device, right-click the USB drive key in My Computer and choose Properties.
   The drive type is displayed in the Properties window. If the drive is labeled fixed or local,
   perform the following steps to assign a drive letter to the USB drive key before running
   the HP Drive Key Boot Utility:
   a. Insert the drive key.
   b. Log in as administrator.
   c. Select Start > Programs > Administrative Tools > Computer Management.
   d. From Computer Management (local), select Storage > Disk Management (local).
   e. Select Change/Add Drive Letter for the disk to map the drive key.
   f. Choose a drive letter.
The drive key is now bootable and capable of flashing firmware. You can drop firmware files
onto the drive from either a Windows or Linux system. Firmware files should be put in the
components directory on the USB drive. The firmware images are part of the Offline ROM Flash
for SmartStart Maintenance: ROM Update Utility package. Refer to the following ROM update
Web page:
http://h18023.www1.hp.com/support/files/server/us/smartstartGP.html
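The staging step above — placing firmware images in the components directory of the drive key — can be sketched as a small script. This is an illustrative sketch only: the mount point, component file name, and helper function are assumptions, not part of any HP utility.

```python
# Sketch: stage downloaded firmware component files into the "components"
# directory of a formatted drive key, as described above. The mount point
# and file names below are hypothetical examples.
import shutil
import tempfile
from pathlib import Path

def stage_firmware(files, key_root):
    """Copy firmware images into <key_root>/components, creating it if needed."""
    components = Path(key_root) / "components"
    components.mkdir(parents=True, exist_ok=True)
    for f in files:
        shutil.copy2(f, components / Path(f).name)
    return sorted(p.name for p in components.iterdir())

# Demonstration against a temporary directory standing in for the drive key:
with tempfile.TemporaryDirectory() as key:
    rom = Path(key) / "CP009999.scexe"     # hypothetical component file name
    rom.write_bytes(b"firmware image")
    print(stage_firmware([rom], key))      # ['CP009999.scexe']
```

The same copy could of course be done with a file manager or `cp` from either a Windows or Linux system, as the text notes.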



Index

A
access panel
    ProLiant DL145, 104
    ProLiant DL145 G3, 114
    ProLiant DL585, 140
    ProLiant DL585 G2, 148
advanced ECC memory, 69
application node
    characteristics, 99
    characteristics of, 30, 45, 50
application nodes
    removing from a rack, 102, 113, 139, 147
Automatic Server Recovery-2, 69

B
BL260c G5
    server blade, 175
BL2x220c G5
    server blade, 174
BL465c G5
    server blade, 196
BL860c
    internal view, 205
    LEDs, 204
blade servers, 153
    enclosure, 156
    sleeve, 156
BladeSystem
    c7000, 164

C
c-7000
    port map, 171
c7000
    BladeSystem, 164
cable management
    Integrity rx1620, 28
    Integrity rx3600, 50
    Integrity rx4640, 54
card cage, 108
characteristics
    xw9300, 223
    xw9400, 234
cluster
    documentation, 15
configuration
    duplex, 136
    simplex, 136
control node, 136, 144, 152
c–Class
    enclosure, 164
c–Class enclosure
    x4 DDR IB switch module, 172

D
device bays
    full-height, 167
    half-height, 167
DL360 G5 LEDs
    system insight display, 78
documentation
    cluster, 15
    reporting errors, 22
Drive Key Boot Utility software, 242

E
enclosure
    c–Class, 164
    p–Class, 153
    server blade, 156
enclosures
    c7000, 164
expansion board, 88, 98, 128
expansion slot covers, 108
expansion slots, 142

F
fan redundancy, 64
feedback e-mail address, 22
Fiber Channel, 157, 163, 180, 188, 195, 201
firmware, 242
full-length expansion slot, 108

G
graphics card
    NVIDIA FX 3500, 238
    NVIDIA FX 4500, 239
    NVIDIA FX 5500, 220
graphics cards
    G-Sync, 229
    NVIDIA, 228
    NVIDIA FX 4500, 218
    NVIDIA FX3450, 228, 229

H
HBA
    installing, 48, 52
hood latch, 127
HP
    documentation contact, 22
    feedback, 22
    URL, 22
HP BladeSystem, 153
    c–Class, 164
    p-Class sleeve, 156
HPC URL, 15
Hyper-Threading technology, 208, 214

I
iLO, 68, 124, 139, 153, 156, 164
installing
    HBA, 48, 52
integrated lights-out, 68
    (see also iLO)
Integrity 2620, 14
Integrity rx1620
    cable management, 28
Integrity rx2600, 14
Integrity rx3600, 45
    cable management, 50
Integrity rx4640, 14, 50
    cable management, 54
Interconnect Cards
    xw9300, 230
IP address, 68
Itanium processors, 25

L
legal notices and trademarks, 2
low-profile expansion slot, 108

M
Management Processor, 30, 45, 50
management processor, 99
memory configurations
    xw9300, 230
    xw9400, 241
MP, 30, 45, 50

N
network adapters, 156
NIC
    embedded dual-port, 69
    NC7781, 69
    NIC1, 69
    NIC2, 69
node, 124
    application, 99
    control, 136, 144, 152
NVIDIA
    FX G-Sync, 229
    FX3450, 228, 229
    PCI graphics cards, 228
NVIDIA FX 3500, 238
NVIDIA FX 4500, 218, 239
NVIDIA FX 5500, 220
NVIDIA Quadro G-Sync, 219, 240

O
online spare memory, 69
Opteron processors, 99

P
PCI Express
    lever, 222
    levers, 212
PCI Graphics Cards
    displays, connecting to, 229
PCI retainer, 212, 222
PCI slot assignments
    ProLiant DL160 G5, 63
    ProLiant DL165 G5, 123
PCI slots
    ProLiant DL360 G5, 80
    xw9300, 225
    xw9400, 237
power buttons, 70
    ProLiant BL35p, 157
    ProLiant DL145, 103
    ProLiant DL145 G3, 113
    ProLiant DL380, 86
    ProLiant DL380 G5, 96
    ProLiant DL385 G1, 126
    ProLiant DL585, 140
    ProLiant DL585 G2, 147
ProLiant BL260c, 14
ProLiant BL2x220c, 14
ProLiant BL35p, 14, 153
    characteristics, 153
    memory, 156
    removing from a sleeve, 157
    supported storage, 157
ProLiant BL45p, 14, 153
    characteristics, 158
    memory, 162
    removing from an enclosure, 163
    supported storage, 163
    system boards, 160
ProLiant BL460c, 14, 176
    supported storage, 180
ProLiant BL465c, 190
    supported storage, 195
ProLiant BL480c, 14, 183
    supported storage, 188
ProLiant BL485c, 14
ProLiant BL485c G5, 14
ProLiant BL680c G5, 14
ProLiant BL685c, 14, 196
    supported storage, 201
ProLiant BL685c G5, 14, 202
ProLiant BL860c, 14, 202
    supported storage, 207
ProLiant DL 385 G1
    power buttons, 126
ProLiant DL140 G1, 14
ProLiant DL140 G2, 14, 55
    features, 55
    front panel, 55
    front panel features, 55
    installing a PCI card, 57
    memory configurations, 57
    memory module sequence, 57
    rear panel, 56
ProLiant DL140 G3, 14, 59
    features, 59
    front panel, 60
    front panel features, 60
    installing PCI card, 62
    memory configurations, 61
    memory module sequence, 61
    PCI slot assignments, 61
ProLiant DL145
    accessing internal components, 104
    characteristics, 99
    memory, 99
    power buttons, 103
    replacing a PCI card, 105
    shutting down, 103
ProLiant DL145 G1, 14
ProLiant DL145 G2, 14
    replacing a PCI card, 107
ProLiant DL145 G3, 14
    accessing internal components, 114
    power buttons, 113
    shutting down, 113
ProLiant DL160 G5, 14
ProLiant DL165 G5, 14
ProLiant DL360
    accessing internal components, 71
    characteristics of, 63
    embedded technologies, 68
    high-availability features, 69
    inserting a riser card, 72
    removing from a rack, 70
    replacing a PCI card, 72
ProLiant DL360 G1, 14
ProLiant DL360 G2, 14
ProLiant DL360 G3, 14
ProLiant DL360 G4, 14
    front panel, 66
    rear panel, 67
ProLiant DL360 G4p, 14
ProLiant DL360 G5, 14, 73
    features, 73
    front panel LEDs, 76
    inserting a riser card, 82
    PCI slots, 80
    rear panel LEDs, 77
    removing from a rack, 81
    replacing a PCI card, 81
ProLiant DL380, 14
    accessing internal components, 86, 97
    characteristics, 83
    removing from a rack, 86
    replacing a PCI card, 86
ProLiant DL380 G5
    cable management brackets, 95
    characteristics, 89
    front panel LEDs, 91
    new PCI slot assignments, 95
    PCI slot assignments, 94
    rear panel LEDs, 92
    removing from a rack, 96
    replacing a PCI card, 97
    server, 89
    systems insight display LEDs, 93
ProLiant DL385, 14
ProLiant DL385 G1
    characteristics, 124
    removing from a rack, 126
    replacing a PCI card, 127
    shutting down, 126
ProLiant DL385 G2
    characteristics, 129
    new PCI slot assignments, 132
    replacing a PCI card, 134
ProLiant DL385 G5, 136
ProLiant DL585, 14
    accessing internal components, 140
    characteristics, 136
    memory configuration, 139
    power buttons, 140
    removing from a rack, 139
    replacing a PCI card, 142
    shutting down, 140
    slot assignments, 137
ProLiant DL585 G2
    accessing internal components, 148
    characteristics, 144
    power buttons, 147
    removing from a rack, 147
    shutting down, 147
ProLiant DL585 G5, 14
    characteristics, 152

Q
Quadrics HBA
    installing, 48, 52
quick release levers, 126

R
reader skills, 14
redundant power supply, 136, 144, 152
redundant ROM, 69
replacing a PCI card
    ProLiant DL145, 105
    ProLiant DL145 G2, 107
    ProLiant DL360, 72
    ProLiant DL360 G5, 81
    ProLiant DL380, 86
    ProLiant DL380 G5, 97
    ProLiant DL385 G1, 127
    ProLiant DL385 G2, 134
    ProLiant DL585, 142
    xw8200 workstation, 211
    xw8400 workstation, 221
    xw9300 workstation, 231
    xw9400 workstation, 241
reporting documentation errors, 22
retaining clip, 88, 128
riser cage, 105, 109, 127
    door latch, 127
riser cage door, 87
riser card
    ProLiant DL360, 72
    ProLiant DL360 G5, 82
ROM flash, 242
rx2600
    Installing PCI card, 35
rx3600, 45
    Installing PCI card, 48
    upgrade, 47
rx4640, 50
    Installing PCI card, 52
    upgrade, 51

S
SAN, 157, 163, 180, 188, 195, 201
SCSI controller, 136
SCSI drive
    supported, 28, 34
server, 30, 45, 50
    blades, 153
    Integrity rx1620, 25
    Integrity rx2600, 14, 30
    Integrity rx2620, 14, 36
    Integrity rx2660, 14, 39
    Integrity rx3600, 14, 45
    Integrity rx4640, 14, 50
    ProLiant BL260c G5, 14
    ProLiant BL2x220c G5, 14
    ProLiant BL35p, 14, 153
    ProLiant BL45p, 14, 158
    ProLiant BL460c, 14, 176
    ProLiant BL465c, 14, 190
    ProLiant BL480c, 14, 183
    ProLiant BL680c G5, 14
    ProLiant BL685c, 14, 196
    ProLiant BL685c G5, 14, 202
    ProLiant BL860c, 202
    ProLiant DL140, 14
    ProLiant DL140 G2, 14, 55
    ProLiant DL140 G3, 14, 59
    ProLiant DL145, 99
    ProLiant DL145 G1, 14
    ProLiant DL145 G2, 14
    ProLiant DL145 G3, 14, 110
    ProLiant DL160 G5, 14, 62
    ProLiant DL165 G5, 14, 122
    ProLiant DL360 G1, 14
    ProLiant DL360 G2, 14
    ProLiant DL360 G3, 14, 63
    ProLiant DL360 G4, 14, 63
    ProLiant DL360 G4p, 14
    ProLiant DL360 G5, 14, 73
    ProLiant DL380 G3, 14, 83
    ProLiant DL380 G4, 14, 83
    ProLiant DL380 G5, 14, 89
    ProLiant DL385, 14
    ProLiant DL385 G1, 124
    ProLiant DL385 G2, 129
    ProLiant DL385 G5, 14, 136
    ProLiant DL585, 136
    ProLiant DL585 G1, 14
    ProLiant DL585 G2, 14, 144
    ProLiant DL585 G5, 14, 152
server blade
    BL260c G5, 175
    BL2x220c G5, 174
    BL465c G5, 196
    ProLiant BL460c, 176
    ProLiant BL465c, 190
    ProLiant BL480c, 183
    ProLiant BL685c, 196
    ProLiant BL685c G5, 202
    ProLiant BL860c, 202
service node
    characteristics of, 30, 45, 50
SIM, 79
    (see also system insight manager)
slot assignment rules
    xw9300, 226
    xw9400, 237
slot assignments
    graphics cards, 237
    narrow graphics cards, 227
    wide graphics cards, 227
slot configuration
    typical, xw9300, 227
SVA
    Proliant DL140 G3, 61
system insight display
    and internal health LED combinations, 79
    DL360 G5 LEDs, 78
system insight manager, 79
    (see also SIM)

T
techservers URL, 15

U
upgrade
    rx3600, 47
    rx4640, 51
USB drive key, 242

W
workstation
    xw8200, 14
    xw8400, 14
    xw9300, 14
workstations, 208

X
x4 DDR IB switch module
    c–Class enclosure, 172
    installation, 172
Xeon processors, 55
xw8200, 14
    characteristics, 208
    removing from a rack, 213
    replacing a PCI card, 211
xw8400, 14
    characteristics, 214
    PCI slot rules, 218
    removing from a rack, 223
    replacing a PCI card, 221
xw9300, 14
    characteristics, 223
    memory configurations, 230
    PCI slot numbering, 226
    PCI slots, 225
    rear panel, 224
    removing from a rack, 234
    replacing a PCI card, 231
    slot assignment rules, 226
    specifications, 223
    typical slot configuration, 227
xw9400
    characteristics, 234
    memory configurations, 241
    PCI slot numbering, 237
    PCI slots, 237
    rear view, 235
    removing from a rack, 241
    replacing a PCI card, 241
    slot assignment rules, 237
    specifications, 234
*A-CPSOV-1H*

Printed in the US
