
Inspur group

Inspur Server Products


Introduction

2019/8/29
2

Contents

Server Product List

Naming Specifications for Servers

Product Description

Typical Issue
Server Product List
3

1U Dual-Processor: NF5170M4, NF5180M4, NF5180M5

2U Dual-Processor: NF5280M4, SA5212M4, NF5280M5/SA5212H5, SA5212M5

4U Quad-Processor: NF8480M5

Storage Server: SA5224M4/SA5224M4A, SA5224L4, NF5466M5

High Density Server: I24/NS5162M5

GPU Server: NF5288M5, NF5588M4
4
M5 Product Family Overview
5

Contents

Server Product List

Naming Specifications for Servers

Product Description

Typical Issue
6
Naming Specifications for Servers
Example: NF 5 2 8 0 M4
Fields: Series | CPU | Chassis height | Class | Supplement | Generation

Series: NP (tower), NF/SA (rack), NX (blade), SN (rack node)
CPU: 8 (E7), 5 (E5), 3 (E3/Atom), 2 (Xeon-D)
Chassis height: 0 (UP tower), 1 (1U), 2 (2U), 4 (4U), 5 (5U) ...
Class (high to low): 8, 6, 4, (7), (2) ...
Supplement: 8 (GPU), 6 (Storage), 0 (Standard)
Generation: M1, M2, M3, M4, M5

The first number represents the grade of the server, from high to low.
The second number represents the height of the server.
The third number represents the level of the server within the same category of rack products, and intuitively reflects the external form and processing performance of the server.
The last number represents the server type. A decoding sketch follows below.
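The convention above can be applied mechanically. Below is a minimal, unofficial Python sketch that splits a model name into the fields described in the table; the field dictionaries are transcribed from this slide only and may not cover every product.

```python
# Hypothetical helper: decode an Inspur server model name per the naming table above.
SERIES = {"NP": "tower", "NF": "rack", "SA": "rack", "NX": "blade", "SN": "rack node"}
CPU_GRADE = {"8": "E7", "5": "E5", "3": "E3/Atom", "2": "Xeon-D"}
SUPPLEMENT = {"8": "GPU", "6": "storage", "0": "standard"}

def decode_model(name: str) -> dict:
    """Split a model such as 'NF5280M4' into its naming fields."""
    prefix = name[:2]          # series, e.g. NF/SA/NP/NX/SN
    digits = name[2:-2]        # e.g. '5280'
    generation = name[-2:]     # e.g. 'M4'
    return {
        "series": SERIES.get(prefix, prefix),
        "cpu_grade": CPU_GRADE.get(digits[0], digits[0]),
        "chassis_height": f"{digits[1]}U" if digits[1] != "0" else "UP tower",
        "class": digits[2],    # relative positioning within the category
        "type": SUPPLEMENT.get(digits[3], digits[3]),
        "generation": generation,
    }

if __name__ == "__main__":
    print(decode_model("NF5280M4"))
    # {'series': 'rack', 'cpu_grade': 'E5', 'chassis_height': '2U',
    #  'class': '8', 'type': 'standard', 'generation': 'M4'}
```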
7

Contents

Server Product List

Naming Specifications for Servers

Product Description

Typical Issue
8 NF5180M5/SA5112M5

Server Introduction
Product Type:
Purley platform 1U dual-socket high-end server, mainly for the China, United States, Japan, South Korea, Western Europe, and other markets. NF is the name used for non-Internet customers and SA for Internet customers; the two are the same machine.

Two drive configurations:
3.5" x4 bays + 2.5" x2 bays;
2.5" x10 bays
9 NF5180M5/SA5112M5

Key points:

• Ultra-high performance: supports up to 205W CPUs (Skylake) and an all-NVMe configuration, providing the highest computing power in a limited space.

• Extreme expandability: supports 3 PCIe expansion slots in 1U height, enabling multi-I/O application acceleration.
10 NF5180M5/SA5112M5
Specifications

Focus: Purley platform, 1U dual-processor high-end server product. Main markets: China, US, Japan, Korea, Western Europe, etc.

CPU: 1. Two Intel Xeon Scalable Processors; 2. Up to 205W TDP; 3. 2 UPI links at 9.6 or 10.4 GT/s

MEM: 1. 24 x DDR4 DIMMs; 2. Supports RDIMM, LRDIMM, NVDIMM; 3. Supports 2400, 2666 MT/s

PCH: LBG-4 or LBG-2, depending on cost/needs

NIC: 1. Built-in CONN C to support the Inspur PHY card, with 1G/10G NIC support; 2. Built-in CONN A/B to support Inspur/3rd-party standard OCP 25G NICs; 3. Built-in PCIe slot to support Inspur/3rd-party standard PCIe 100G NICs

Storage: 1. Up to 3.5" x4 + 2.5" x2 (front) + 2.5" x2 (rear); 2. Up to 2.5" x10 + 2.5" x2

PCIe: Up to three PCIe expansion slots, including one FHHL and two HHHL cards
11 NF5180M5/SA5112M5
Front panel introduction - 3.5" x4

Front panel components:
1-2: 2.5" SSD drives 0-1 (note: may be NVMe drives)
3: VGA port
4: Front control panel (see front control panel LEDs and buttons below)
5: Mounting ears (one on each side)
6-9: 3.5" drives 0-3 (note: may be NVMe drives)

Front control panel LEDs and buttons:
1: USB 2.0 + LCD
2: USB 3.0
3: Power switch
4: UID LED/button
5: Network status LED
6: Memory error LED
7: Power fail LED
8: Overheat LED
9: Fan fail LED
10: System error LED
12 NF5180M5/SA5112M5

Front panel introduction - 2.5" x10

Front panel components:
1-10: 2.5" drives 0-9 (note: may be NVMe drives)
11: Front control panel (see front control panel LEDs and buttons below)
12: Mounting ears (one on each side)

Front control panel LEDs and buttons:
1: DVI port (can be converted to 2 USB 2.0 ports + 1 VGA port)
2: Power switch
3: UID LED/button
4: System error LED
5: Memory error LED
6: Fan fail LED
7: Network status LED
8: Overheat LED
9: Power fail LED


13 NF5180M5/SA5112M5
Rear panel introduction - hard disk configuration

# Name
1 Rear 2.5” drives ( Supports standard PCIE card if the optional drive cage is not configured)
2 PCIE card ( Optional)
3 PSU 0
4 PSU 1
5 OCP card 1( optional )
6 UID LED/button
7 BMC reset button
8 VGA
9 USB3.0
10 IPMI
11 OCP card 0(optional )
14 NF5180M5/SA5112M5
Rear panel introduction - PCIE configuration

Numbering: Module name
1: PCIe 3.0 x16, full height half length
2: PCIe 3.0 x8, half height half length
3: PCIe 3.0 x16, half height half length
4: Sys Power0
5: Sys Power1
6: OCP 1 (optional)
7: UID light and button
8: BMC reset button
9: VGA
10: USB 3.0
11: BMC management port
12: OCP/PHY0 (optional)
15 NF5180M5/SA5112M5
Internal view
7 tubeaxial fan modules: N+1 redundancy, up to 2K rpm
Redundant 1+1 PSU: up to 1300W per unit
Skylake platform: 2x new-generation Intel® Xeon® Scalable processors per node (TDP up to 205W)
DDR4-2666: 24 x 2666 MT/s DDR4 RDIMM or LRDIMM, up to 1.5TB maximum
2x OCP/PHY network cards: industry-standard specifications
SAS/SATA/NVMe: full NVMe storage, or SAS/SATA with NVMe cache
M.2 SSD: independent OS storage, optional 1+1 configuration
Up to 3 x PCIe 3.0: support for NIC, HBA, HCA, PCIe SSD, GPU, etc.
16 NF5180M5/SA5112M5
Motherboard Topology
17 NF5180M5/SA5112M5
Motherboard topology
1: NVME5_CPU1 port
2: NVME4_CPU1 port
3: NVME2_CPU1 port
4: NVME3_CPU1 port
5: OCPA_CPU1 slot
6: UID LED/button
7: BMC reset button
8: VGA
9: BMC_TF card slot
10: Rear USB3.0 (2x)
11: IPMI
12: PCIE1_CPU0/1 slots
13: OCPC card slot
14: SATA4-7 port
15: SATA0-3 port
16: sSATA2-5 port
17: OS TF card slot
18: NVME1_CPU0 port
19: OCPA_CPU0 slot
21: PCIE0_CPU0 slot
22: PSU 0
23: PSU 1
24: Backplane power 2
25: COM0
26: BP_I2C0
27: BP_I2C1
28: IPMB
29: Front panel control
30: Chassis intrusion
31: M.2 power connector 2
32: Backplane power 0
33: Backplane power 1
34: System FAN (7x)
35: CPU0
36: DIMM slots (CPU0)
37: CPU1
38: DIMM slots (CPU1)
39: Front VGA
40: Front USB+LCD
41: Internal USB
42: TPM
43: sSATA M.2_0
44: sSATA M.2_1
45: OCPB_CPU1
18 NF5180M5/SA5112M5

CPU and memory population

Only the SAME MODEL of memory can be used in a system. The installation order is as follows:
A. White slots take priority. Populate CPU0 slots and CPU1 slots symmetrically.
B. With one CPU, install memory following the silkscreen order: CPU0_C0D0, CPU0_C1D0, CPU0_C2D0, CPU0_C3D0, CPU0_C4D0, CPU0_C5D0, CPU0_C0D1, ...
C. With two CPUs:
Install memory on CPU0 following the silkscreen order: CPU0_C0D0, CPU0_C1D0, CPU0_C2D0, CPU0_C3D0, CPU0_C4D0, CPU0_C5D0, CPU0_C0D1, ...
Install memory on CPU1 following the silkscreen order: CPU1_C0D0, CPU1_C1D0, CPU1_C2D0, CPU1_C3D0, CPU1_C4D0, CPU1_C5D0, CPU1_C0D1, ...
A population-order sketch follows below.
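As referenced above, here is a minimal Python sketch of the population order, assuming the CPU{n}_C{channel}D{slot} silkscreen pattern with 6 channels and 2 slots per channel. Interleaving CPU0/CPU1 per channel is one reading of the "symmetric" rule; this is an illustration, not an official Inspur tool.

```python
# Hypothetical helper: generate the DIMM population order described above.
def dimm_population_order(cpu_count: int, dimm_count: int) -> list:
    """White (D0) slots first, channels C0..C5 in silkscreen order,
    CPU0 and CPU1 populated symmetrically."""
    order = []
    for slot in range(2):                 # D0 (white) slots first, then D1
        for channel in range(6):          # C0..C5 in silkscreen order
            for cpu in range(cpu_count):  # keep CPU0/CPU1 balanced
                order.append(f"CPU{cpu}_C{channel}D{slot}")
    return order[:dimm_count]

print(dimm_population_order(cpu_count=2, dimm_count=4))
# ['CPU0_C0D0', 'CPU1_C0D0', 'CPU0_C1D0', 'CPU1_C1D0']
```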
19 NF5180M5/SA5112M5

Hard disk backplane

3.5x4 Type 2.5x10 Type


20 SA5212M4
Server introduction

Product Type:
Inspur self-developed 2U dual-socket high-end server based on the Intel Grantley-EP platform, mainly for Internet customers.

Supports 12 front 3.5"/2.5" SAS/SATA/SSD/NVMe hard drives.

Major customers:
Alibaba, Baidu, Qihoo, Tencent, etc.
21 SA5212M4
System characteristics
Model: SA5212M4
Motherboard: 主板_INSPUR_SHUYU_WBG_6W_1G*4+10G*2+3008; 主板_INSPUR_SHUYU_WBG_4W_I350AM4+3008; 主板_INSPUR_SHUYU_WBG_4W_I350AM2+82599ES
Chipset: PCH C610 (Wellsburg)
Onboard SAS: LSI 3008 (IT/IR/IMR modes), 12Gb interface
CPU: Intel Xeon E5-26xx v3/v4 series (supports up to two 145W processors)
RAM: DDR4 ECC RDIMM/LRDIMM, 16 slots; with two CPUs, ECC REG memory supports up to 1024GB (64GB per DIMM)
Power: single/dual 550W/800W (or higher) output; 1+1 redundancy; 2 power modules; supports PMBus power supplies, enabling the Node Manager 3.0 function
Hard disk: 3.5" SAS/SATA x12 + 2.5" SAS/SATA x4 (rear)
Network card: 1x Intel I350 providing two or four 1000M adaptive RJ45 ports; 1x Intel 82599 providing one or two 10 Gigabit SFP+ ports
Graphics card: ASPEED AST2400 integrated on chip, maximum resolution 1280x1024
PCI riser card motherboard: 1 PCI Express 3.0 x24 slot onboard (for the PCIe riser, no add-in card); 3 vertical PCIe slots
1 CPU: supports 1 PCIe x8+x1 slot (supports the management-function network card); supports 1 PCIe x8 (in an x16 slot)
2 CPUs: supports up to one PCIe x8+x1 slot (supports the management-function network card); supports vertical insertion of 1 PCIe x8 (in an x16 slot); supports up to 1 PCIe x16 (in an x16 slot); supports a full-height, half-length card by installing a riser card adapter that provides one PCIe x8 (in an x8 slot) and one PCIe x16 (in an x16 slot)
22 SA5212M4
Front panel

Number Module name


1 Front VGA interface
2 Front USB 3.0 interface
3 Server and cabinet retaining tabs
4 Front hard disks
5 Server switch button
6 ID lights and buttons
7 Reset button
8 LCD module
23 SA5212M4

Number Module name


1 Server switch button

2 ID light and button

3 Reset button

4 LCD module

5 Network Status Indicator

6 Memory fault indicator

7 Power failure indicator

8 Overheat indicator

9 Fan fault indicator

10 System Fault indicator


24 SA5212M4
Rear panel
25 SA5212M4

Same as the SA5112M4.


26 SA5212M4

CPU and memory population

1. Supports a single CPU and a single DIMM for minimal testing; minimal configuration: CPU0 + CHA_0.

2. When the server is configured with a single CPU, install memory in the order: CHA_0, CHB_0, CHC_0, CHD_0, CHA_1, CHB_1, CHC_1, CHD_1, ...

3. When the server is configured with two CPUs, install memory in the order: CHA_0, CHE_0, CHB_0, CHF_0, CHC_0, CHG_0, CHD_0, CHH_0, CHA_1, CHE_1, CHB_1, CHF_1, CHC_1, CHG_1, CHD_1, CHH_1, ...
27 NF5280M4

The Inspur NF5280M4 is an application-optimized 2U dual-socket flagship rack server, built on the high quality of the previous generation and refined further. It maintains Inspur's consistently high quality and reliable performance, and offers the highest performance, ideal scalability, and good management features among comparable products.
28 NF5280M4
NF5280M4 technical characteristics
CPU: Supports two Intel® Xeon® E5-2600 v3/v4 processors
Cache: 10-55MB
QPI bus speed: 6.4-9.6 GT/s
RAM: 24 memory slots supporting up to DDR4-2400, with a maximum expansion of 1536GB (when using 64GB DIMMs); supports advanced ECC, memory mirroring, memory hot-spare, and other advanced features
Hard disk controller: optional 8-channel SAS 6Gb and 12Gb disk controllers
RAID: optional SAS disk controller, or a high-performance SAS RAID controller with cache and an extended power-protection module
Storage:
Front: up to 25 front 2.5-inch hot-swap SATA/SAS hard disks or solid state disks, in 2.5" x8, x16, x24, or x25 configurations; or up to 12 front 3.5-inch hot-swap SATA/SAS hard disks or solid state disks, in 3.5" x8 or x12 configurations
Rear: up to four 3.5-inch or four 2.5-inch hot-swap SATA/SAS hard disks or solid state disks
I/O expansion: up to 8 PCI-E 3.0 slots, of which up to four are full-length full-height
29 NF5280M4
Overall view
Can support a front cover
Hard disk information label
Upper cover security lock
Front hot-swap hard drives
Asset tag
Backlit indicator
Tool-free shelf lock


30 NF5280M4

Front panel:
2.5*24 HDD

3.5*8 HDD

3.5*12 HDD

2.5*25 HDD

31 NF5280M4
Name: Function and description
System fault indicator: off when normal; solid red on a failure (CPU Thermal Trip, Scatter Error/Error 2, QPI or PCIe error); blinking red on a QPI or PCIe warning
Memory fault indicator: off when normal; solid red on an uncorrectable ECC error; blinking red when correctable ECC errors occur a certain number of times
Fan fault indicator: off when normal; solid red when a fan is absent or its speed cannot be read; blinking red when the fan speed reading is abnormal
Power failure indicator: off when normal; solid red when a power module is inserted but has no output; solid red when the power module's power cord is not plugged in; solid red when the power status word reports a failure (input/output current, voltage, or power anomalies); blinking red when the power status word reports a failure such as abnormal power supply temperature, fan, or CML status
Overheat indicator: off when normal; solid red on CPU Hot detect; solid red on MEM Hot
Network status indicator: blinking green when a network connection exists; off when there is no network connection
32 NF5280M4
Rear panel:
33 NF5280M4

Chassis cover

Press the semicircular snap buckle on the side of the chassis with your thumb, lift firmly, and remove the chassis top cover.
34 NF5280M4

Mainboard installation

1. Remove the black cable-routing duct on the inner side wall of the chassis (remove the plastic part only).
2. Lay the chassis flat on a table, align the stud holes on the motherboard with the studs in the chassis, lift the motherboard by its self-locking studs, and push it horizontally until the motherboard is fully seated on the chassis studs.
35 NF5280M4

Power installation
1) Installation sequence of the power supplies: PSU1, then PSU0.
2) With a single power supply, install the power module in the PSU0 position and fit a blank power bezel in the PSU1 position.
36 NF5280M4
Hard disk backplane installation
For the 8-drive configuration, install one backplane in the "Backplane 1" position; for the 16-drive configuration, install two backplanes in the "Backplane 1" and "Backplane 2" positions; for the 24-drive configuration, install all three backplanes in order, as shown at the bottom left.
Open the retaining buckle at the upper middle of the backplane, hook the backplane (2.5" drives) onto the tabs marked in the red box, and press down firmly until the backplane is fixed in place. Finally, close the retaining column so the backplane is securely fixed to the chassis.

Backplane 3 Backplane 2 Backplane 1


37 NF5280M4

Hard disk backplane installation

In the 3.5" x8 configuration, the backplanes are, from bottom to top, Backplane 1 and Backplane 2 (each backplane connects 4 hard drives). Slide the backplane into its slot, press the black snap shown in the figure, and lock it onto the fixing column.

Backplane 1

Backplane 2
38 NF5280M4

Hard disk backplane installation

The drive bay order corresponds to the drive numbering on the hard disk backplane, as shown below.

If two rear hard disk backplanes are configured, set the jumpers on the two backplanes as follows.
39 NF5280M4
Hard disk backplane installation
3.5" x12 configuration: the backplanes are, from top to bottom, Backplane 1, Backplane 2, and Backplane 3 (note that this differs from the 3.5" x8 configuration). Slide the backplane into its slot, press the black snap, and lock it onto the fixing column.

2.5" x2 rear mezzanine installation


40 NF5280M4
Motherboard topology

CLEAR CMOS jumper (J46):

Pin 1-2 shorted: normal state
Pin 2-3 shorted: clear CMOS
41 NF5280M4

Internal view

PCI-E module
Rear HDD
L-tool
BBU location
Fans
Air hood module
Label pool
Front HDD
42 NF5280M4

Internal view Network card


24 DDR4 DIMMs

PSU module
2*E5-2600V3/V4
43 SA5212H5

SA5212H5 is a customized server which is only supplied to Alibaba.


44 SA5212H5

Left ear; Cover; HDD bracket; Right ear; HDD BP screw; Fan bracket screw


45 SA5212H5
Front panel view

Power button, UID button & status lamp
USB 2.0 x2

HDD1 HDD4 HDD7 HDD10
HDD2 HDD5 HDD8 HDD11
HDD3 HDD6 HDD9 HDD12

Installation screws (one on each side)
46 SA5212H5
Rear panel view

Number Module
1 Rear 2.5 inches HD
2 PSU1
3 PSU2
4 ID button
5 USB port
6 Network sub-card port
7 VGA port
8 Indicator light
9 IPMI management port
10 SFP+1
11 SFP+2
12 State light
13 Power button
47 SA5212H5
Rear panel view

HDD1 PCIEX8 HHHL PCIEX8/16 HHHL PCIEX16 HHHL

HDD2 PCIEX8 HHHL PCIEX8 FHFL PCIEX8 FHFL

PSU1 PSU2

USB*2 25G *2 VGA IPMI 10G *2


Power Button
48 SA5212H5
Motherboard topology
Module: Name
1: PWR BTN
2: CPU0_1C-1D_MEZZ_SLOT
3: CPU0_2C-2D_X8_SLOT
4: CPU0
5: SPECIAL_USB_CONN
6: INTERNAL_USB
7: sSATA0
8: sSATA1
9: SSI FP
10: RAM slots (CPU0)
11: M.2 power
12: Fan interface (8x)
13: RAM slots (CPU1)
14: HDD BP Power1
15: HDD BP Power2
16: IPMB
17: NODE PWR CONN
18: NODE_SIG_CONN
19: GPU_PWR
20: HDD BP PWR3
21: PSU1 CONN
22: I2C CONN
23: PSU2 CONN
24: CPU1
25: CPU1_3A-3D_X16_SLOT
26: CPU1_1A-1D_X16_SLOT
27: UID BTN
28: Rear USB3.0
29: CPU0_2A-2B_MEZZ_SLOT
30: VGA
31: CPU0_3A-3D_X16_SLOT
32: IPMI
33: SFP+1
34: SFP+2
35: SYS HLTH LED
49 SA5212H5
Internal view
Fan: 8056 x4 / 8038 x4
M.2 x2
HDD BP
FHFL PCIe x2 (two risers)
OCP HBA x1
CPU x2
LP PCIe card x2
2.5" U.2 x2
Power module x2
Front panel: 12x 3.5" HDD or 24x 2.5" NVMe/SSD
Motherboard
50 SA5212H5

System interconnection

Components: HDD x12, CPU x2, M.2 x2, DIMM x24, OCP1, OCP2, Riser1, Riser2, Riser3, Fan x4, power module x2, PCIe, HDD backplane, U.2 x2 backplane, motherboard.
Red line: power supply; green line: signal.
HDD BP options:
3.5" x12 SATA HDD: from the PCH
M.2: onboard SATA connector
2.5" x24 SATA BP
2.5" x18 NVMe BP
51 SA5212H5

Memory installation principles:

Only the same type of memory can be used in one machine. The specific memory installation combinations follow these principles:
1. White slots take priority; install CPU1 memory symmetrically with CPU0.
2. With a single CPU, install memory in the order: CPU0_C0D0, CPU0_C1D0, CPU0_C2D0, CPU0_C3D0, CPU0_C4D0, CPU0_C5D0; CPU0_C0D1, CPU0_C1D1, ...
3. With two CPUs, install CPU0 memory in the order: CPU0_C0D0, CPU0_C1D0, CPU0_C2D0, ...; install CPU1 memory symmetrically with the CPU0 memory: CPU1_C0D0, CPU1_C1D0, CPU1_C2D0, ...
52 SA5212H5

Memory ECC error location method


53 SA5212H5

CPLD refresh

1. Choose different version to download and refresh by the SA5212H5’s MB CPID


version

2. Download the CPLD firmware from the Inspur Overseas knowledge base:
http://218.57.146.166:8443/display/H/SA5212H5-Ali

CPLD refresh SOP


54 NF5280M5

The Inspur NF5280M5 maintains the high quality and reliability Inspur servers are known for. Designed around performance, storage, and expandability, it achieves breakthroughs in computing power, expansion, flexibility, and intelligent management. It is especially suitable for telecom, finance, Internet, and enterprise customers with stringent reliability requirements.

2.5 inches HD*24 configuration 3.5 inches HD*12 configuration


55 NF5280M5
Technical characteristics
Item Features
Focus: Purley platform 2U dual-processor flagship product, satisfying the needs of all kinds of businesses in the general market. General market model: NF5280M5; Internet customer model: SA5212M5
CPU: Supports 2 Intel Scalable Processors (Skylake) up to 205W, as well as low-voltage models. Voltage regulators are integrated into the motherboard.
MEM: Supports up to 24 RDIMM, LRDIMM, or NVDIMM DDR4 DIMMs; supports memory mirroring and memory sparing; supports 12x Apache Pass
PCH: LBG-4 (YZMB-00882-101) supports the 4-port 10G PHY card; LBG-2 (YZMB-00882-102) does not support the 4-port 10G PHY card
SATA/SAS: Supports 3 OCuLink connectors from the PCH SATA/sSATA ports. Note that OCuLink (Optical Copper Link) and SlimLine (SFF-8654) are two different connector protocols.
NIC: Integrated BMC management chip with KVM function as standard, providing one dedicated 1Gb RJ45 management port supporting auto-negotiated 10/100/1000M speeds. Supports expansion NIC cards with the NCSI function (sharelink feature).
Storage: 1. Supports up to 3.5" x12 (front) + 3.5" x4 (rear) + 2.5" x4 (rear); 2. Supports up to 2.5" x24 (front) + 3.5" x4 (rear) + 2.5" x4 (rear)
PCIe: Supports two x24 PCIe slots + one x24 PCIe slot (16 lanes) in the rear. 1. Built-in CONN C to support the Inspur PHY card, with 1G/10G NIC support; 2. Built-in CONN A/B to support Inspur/3rd-party standard OCP 25G NICs (note the differences between the OCP A and OCP C connectors). Supports up to 8 PCIe expansion slots; supports up to 8 GPUs (HHHL) or 4 GPUs (HL)
56 NF5280M5
Technical characteristics
Item Features
I/O:
2 x USB 3.0 connectors (2.0 compatible)
2 x USB 3.0 headers for front I/O (one is for the LCD and supports USB 2.0 only)
1 x TPM 2.0 (LPC interface)
1 x SD/TF from BMC SDIO, internal
1 x SD/TF from PCH, internal
1 x M.2 riser
1 x OCuLink from the sSATA2-5 I/O port
2 x OCuLink from the SATA0-7 I/O ports
4 x OCuLink for PCIe
1 x front VGA connector
1 x LED+BTN front panel connector
1 x NCSI connector
1 x OCP A connector
1 x OCP C connector
3 x PCIe x24 slots (one with x16 lanes)
Rear of the motherboard:
2 x USB 3.0 (2.0 compatible)
1 x management network interface
1 x VGA
1 x BMC reset
1 x UID
1 x COM
1 x port-80 diagnostic LED
57 NF5280M5

Front panel introduction

Number: Module name
1: Server switch button
2: ID light and button
3: Reset button
4: LCD module
5: Network status indicator
6: Memory fault indicator
7: Power failure indicator
8: Overheat indicator
9: Fan fault indicator
10: System fault indicator
58 NF5280M5
Rear panel introduction
59 NF5280M5
Internal view
4 tubeaxial fan modules: N+1 redundancy
M.2 SSD: independent OS storage, optional 1+1 configuration
Redundant 1+1 Platinum PSU: up to 1600W per unit
DDR4-2666: 24 x 2666 MT/s DDR4 RDIMM or LRDIMM, up to 1.5TB maximum
Skylake platform: 2x new-generation Intel® Xeon® Scalable processors per node (TDP up to 205W)
OCP/PHY network card: industry-standard specifications
SAS/SATA/NVMe: SAS/SATA disks with up to 8 NVMe
Up to 8 x PCIe 3.0: support for NIC, HBA, HCA, PCIe SSD, GPU, etc.
60 NF5280M5

PCIe: two x24 slots, each supporting 3x x8 or x8 + x16
Fan: 4x 8056
Front HDD: 2.5" x24 or 3.5" x12
OCP x1
M.2: 2x M.2 SSD
PSU x2
PCIe: one x24 slot supporting 1x x16
61 NF5280M5
Supports 1x x16 slot + 1x x8 slot, or 3x x8 slots, via PCIe riser (two such risers)
2x USB 3.0
TPM
MGMT port
3x OCuLink ports for SATA, supporting 12 SATA/SSD
VGA port
PCH
2x USB 3.0 ports
COM port
CPU0
CPU1
BMC reset button
UID button
Conn C for OCP card
Conn A for PHY card
4x fan connectors
Supports 2x PCIe/SATA M.2 via M.2 riser
Supports 1x x16 slot or 2x x8 slots via PCIe riser
4x x4 OCuLink ports for U.2
DIMM x6 (four groups)


62 NF5280M5
Motherboard Topology
63 NF5280M5

#: Name
1: DIMM slots (CPU0)
2: CPU0
3: DIMM slots (CPU1)
4: MB handle (2x)
5: CPU1
6: System FAN (4x)
7: Front backplane power (2x)
8: Front panel control
9: GPU power (2x)
10: Rear backplane power (2x)
11: NVME_CONN (4x)
12: PSU 1
13: PSU 0
14: PCIE2_CPU1 slot
15: M.2_CONN
16: OCP_A CONN
17: I2C CONN (7x)
18: NCSI CONN
19: SSATA CONN
20: OCP_C CONN
21: SYS_TF_slot
22: UID|RST button
23: BMC_RST button
24: PCIE1_CPU0/1 slots
25: BMC_TF_slot
26: SATA CONN (2x)
27: Serial port
28: USB 3.0 port
29: VGA
30: MLAN port
31: PCIE_CPU0 slot
32: CLR_CMOS
33: Front USB port
34: Front VGA
35: USB 3.0 x2
36: TPM CONN
64 NF5280M5

Memory installation principles:

Only the same type of memory can be used in one machine. The specific memory installation combinations follow these principles:
1. White slots take priority; install CPU1 memory symmetrically with CPU0.
2. With a single CPU, install memory in the order: CPU0_C0D0, CPU0_C1D0, CPU0_C2D0, CPU0_C3D0, CPU0_C4D0, CPU0_C5D0; CPU0_C0D1, CPU0_C1D1, ...
3. With two CPUs, install CPU0 memory in the order: CPU0_C0D0, CPU0_C1D0, CPU0_C2D0, ...; install CPU1 memory symmetrically with the CPU0 memory: CPU1_C0D0, CPU1_C1D0, CPU1_C2D0, ...
65 NF8480M5

NF8480M5
High-scalability computing platform for key application workloads

Product positioning
Applied to enterprise-level key business applications, with strong computing capability, expansion capability, and excellent RAS features

Target customers
Large enterprises and senior government customers in finance, insurance, securities, communications, and energy

Context of use
In-memory databases, ERP, CRM, business intelligence analysis systems, large-scale virtualization application consolidation
66 NF8480M5
 Product specifications
Component: Description
Specification: 4U rack
CPU: Supports 2/4 Intel® Xeon® Platinum and Gold Scalable processors (full range); up to 28 cores (at 2.5GHz); highest frequency 3.6GHz (4 cores); two or three UPI interconnect links at up to 10.4 GT/s per link; maximum thermal design power 205W
Chipset: Intel C624 & C627
RAM: Supports up to 48 DIMMs; each processor supports 6 memory channels, and each channel supports up to 2 memory slots; maximum memory speed up to 2666 MT/s; supports RDIMM, LRDIMM, and NVDIMM memory; memory protection supports ECC, memory mirroring, and memory level protection.
Maximum RDIMM: four processors support up to 48 x 64GB, up to 3TB capacity
LRDIMM: four processors support up to 48 x 128GB, up to 6TB capacity
NVDIMM: supports up to 1 x 16GB, up to 16GB capacity
Storage: 3.5" x24 SATA/SAS/SSD with hot-swap support, or 3.5" x12 SATA/SAS/SSD + 12 x NVMe with hot-swap support
M.2: Supports up to two PCIe x2 M.2 or two SATA M.2
67 NF8480M5
 Product specifications
Component: Description
Storage controller: RAID card controllers: SAS 3108, 3008IMR, 9361, PM8060, 9460; SAS card controller: 3008IT; provides RAID 0/1/5/6/10/50/60; NVMe requires a RAID key (Intel VROC technology) to support RAID 0/1/5
Network interface: 1 OCP/PHY module providing 1Gb/s, 10Gb/s, or 25Gb/s; supports standard 1Gb/10Gb/25Gb/40Gb/100Gb NICs
I/O expansion slots: Supports up to 16 standard PCIe slots and 1 OCP/PHY card slot;
Onboard: 2 PCIe 3.0 x8 full-height full-length
RAID card module expands 2 PCIe 3.0 interfaces: 2 PCIe 3.0 x8 half-height half-length
Riser module 1 expands 4 PCIe 3.0 interfaces: 4 PCIe 3.0 x8 half-height half-length, PCIE1 position only
Riser module 2 expands 4 PCIe 3.0 interfaces: 4 PCIe 3.0 x8 full-height full-length, pluggable in the PCIE4 or PCIE5 position
Riser module 3 expands 2 PCIe 3.0 interfaces: 2 PCIe 3.0 x16 full-height full-length, pluggable in the PCIE4 or PCIE5 position
68 NF8480M5

 Product - Chassis Size

Overall system size (4U):
Height: 175.5mm
Width: 448.0mm (T-type chassis, mounting width 435.0mm)
Depth: 812mm (mounting ear to chassis rear 775mm)

Notes:
1. The chassis uses 6.8mm thick slide rails with a maximum pull-out stroke of 850mm.
2. Mounting ear to rear I/O interface: 800mm; mounting ear to the rearmost point of the rear window: 812mm.
69 NF8480M5
 Product - Front View

1: Left mounting ear: 1x VGA + 1x Sys UART + 2x USB 3.0
2: 24-bay 3.5-inch tool-free HDD trays
3: HDD cage, reusing existing mold products
4: Right mounting ear: 1x PWR BTN + 1x UID + 1x Sys RST + 1x SYS Healthy LED
5: LCD diagnostic screen

The right-side mounting ear is multiplexed from the eight-socket model; the UI acrylic panel retains half of the partition holes.
70 NF8480M5
 Product - Rear View

Vertical PCIe card configuration: this configuration requirement has been cancelled, but the rear window definitions 1-7 are unchanged.

1: 1x BMC RST + 1x UID + 1x PWR BTN + 2x hot-plug buttons
2: OCP card, supporting 4x SFP+ / 2x SFP+ / 4x RJ45 / 2x RJ45
3: USB 3.0 x4
4: VGA
5: Serial port
6: BMC diagnostic serial port
7: RJ45 management port
8: PCIe slots 1-8, supporting FHHL cards: PCIE2: from CPU3; PCIE3-HP: from CPU1 x8; PCIE4-HP: from CPU1 x8 + CPU2 x4; PCIE5: from CPU2; PCIE6: from CPU4 (x16 slots as marked in the figure)
71 NF8480M5
 Product - Rear View

Full-height full-length PCIe x16 (2) or full-height PCIe x8 (4)
Full-height PCIe x8 (4)
Half-height PCIe x8 (4)
Full-height PCIe x8 (2)
PSU (left to right, 0 to 3)
72 NF8480M5

 Product - Top View

Fan power board
8038 fan wall
RAID/Expander card
Full-height PCIe riser: PCIe x8 (2)
Full-height PCIe riser: PCIe x8 (4)
Full-height PCIe riser: PCIe x16 (2), supports GPU modules
Half-height PCIe riser: PCIe x8 (4)
Middle bracket
73 NF8480M5
 Product - Side View

Fan module uses a vertical double-rotor design (preliminary evaluation shares existing mold products)
8038 fan wall
Fan plate
74 NF8480M5
 Product - Exploded View

Cover
Middle bracket
Rear window
Power supply
Fan module
Computing board
3.5-inch hard disk module
Power board
Hard disk backplane


75 NF8480M5
 Computing board

Slimline connector (for extended NVMe or rear riser)
Slimline x4 x2 (converts to eight SATA disks, for Baidu)
Left ear connector
Motherboard PCB thickness: 2.6mm
Signal connector
Baidu signal connector
RAID key
TF card x2
Busbar
USB 3.0 x2
OCP daughter card: supports 2/4x SFP+ or 2/4x RJ45
Handle / motherboard handle
Right mounting ear connector
Board dimensions: 421.6mm x 520mm
Slimline connector (for extended NVMe or rear riser)
76 NF8480M5
 Computing board

M.2 x2
RJ45 management network port
BMC diagnostic serial port
System serial port
VGA
USB 3.0 x4
Hot-plug BTN x2
Thermal sensor (for Baidu)
PWR BTN
UID
BMC RST
Standby power connector
Board dimensions: 421.6mm x 520mm


77 NF8480M5
 System topology
78 NF8480M5
 Memory installation principles

Only the same type of memory can be used in the same machine. The corresponding slots for each installation combination are shown in the figure; the specific principles are as follows:
A. White slots take priority; CPU0/CPU1/CPU2/CPU3 memory must be installed symmetrically.
B. With two CPUs, populate the CPU0 memory following the silkscreen order: CPU0_C0D0, CPU0_C1D0, CPU0_C2D0, ...; install the CPU1 memory symmetrically with the CPU0 memory: CPU1_C0D0, CPU1_C1D0, CPU1_C2D0, ...
C. With four CPUs, the installation is similar to the two-CPU case: the memory of all four CPUs is installed symmetrically.
79 NF5466M5
 NF5466M5 is a new-generation 4U dual-socket storage rack server based on the new generation of Intel Scalable processors.
 Excellent computing, storage, and expansion capabilities
 Supports the new generation of Intel Scalable processors with TDP up to 165W; up to 24 DIMMs supporting RDIMM, LRDIMM, and NVDIMM memory types; supports 12x Apache Pass, significantly improving application performance (reuses the NF5280M5 motherboard).
 Supports up to 40 3.5" hard disks in 4U space, or 36 3.5" hard disks plus 4 U.2 SSDs, providing ultra-high storage performance.
 Optimization for different applications
 Multiple storage, I/O, network, and GPU modules can be combined for different application scenarios, and users can flexibly choose configurations according to business requirements.
 Provides abundant I/O, up to 10 PCI-E 3.0 slots; supports 2 FHFL GPU cards with a GPU cage.

Similar products: NF5460M4, SA5224M4A
Differences: the motherboards of these products are different; the other parts are basically the same.
80 NF5466M5

Mass storage
Supports up to 40 3.5-inch hard drives, or 36 3.5-inch drives plus 4 rear 2.5-inch drives.

Energy efficient
Supports per-disk power-on and power-saving to ensure high efficiency and energy saving. The cooling strategy is optimized to match different configurations.

Strong performance
Supports 2 Scalable processors with up to 28 high-frequency cores. Supports 6 single-width GPUs for high-performance scenarios such as smart video storage.

I/O expansion
Storage-optimized server for warm storage. Supports up to 8 standard PCIe 3.0 slots. Supports free switching between the OCP network card and the OCP PHY card.
81 NF5466M5
Product Features
Category: Parameter
Name: 4U rack server
CPU: Supports 1 to 2 Intel® Xeon® 3100, 4100, 5100, 6100, 8100 series Scalable processors; up to 28 cores (at 2.5GHz); highest frequency 3.6GHz (4 cores); two UPI interconnect links at up to 10.4 GT/s per link; maximum L3 cache 1.375MB per core; maximum thermal design power 165W
Chipset: Intel C622
RAM: Supports up to 24 DIMMs. Each processor supports 6 memory channels, and each channel supports up to 2 memory slots. Maximum memory speed up to 2666 MT/s. Supports RDIMM, LRDIMM, and NVDIMM memory. Memory protection supports ECC, memory mirroring, and memory level protection.
Maximum memory capacity: RDIMM: two processors support up to 24 x 64GB modules, 1.5TB capacity; LRDIMM: two processors support up to 24 x 128GB, up to 3TB; NVDIMM: two processors support up to 12 x 16GB, up to 192GB
82 NF5466M5
Product Features
Category: Parameter
Storage:
Front panel: 24 x 3.5" SATA/SAS/SSD with hot-swap support
Rear panel: 16 x 3.5" SATA/SAS/SSD with hot-swap support, or 12 x 3.5" SATA/SAS/SSD + 4 x 2.5" SATA/SAS/SSD/NVMe
Built-in: 4 x 3.5" SATA/SAS with hot-swap support (expected to be implemented in April 2019)
M.2 & SD: Supports up to two PCIe M.2 or two SATA M.2; supports up to two TF cards
Storage controller: Motherboard-integrated SATA controller; PCIe add-in card
Network interface: Supports an OCP network card or PHY card; PCIe add-in card
I/O expansion slots: Supports up to 8 standard PCIe slots, 1 OCP NIC (connector A) slot, and 1 PHY card (connector C) slot
Riser module 1 expands 3 PCIe 3.0 x8 interfaces
Riser module 2 expands 3 PCIe 3.0 x8 interfaces
Riser module 3 expands 2 PCIe 3.0 x8 interfaces
PCIe extensions use a modular tool-free disassembly design while retaining screw retention.
83 NF5466M5
Front panel

Modules:
1: Button and indicator board
2: Front HDD
3: Front VGA
4: Front USB 3.0
5: Front USB 2.0 + LCD

Buttons and indicators:
1: Power switch button
2: UID|RST button
3: System fault indicator
4: RAM fault indicator
5: Fan fault indicator
6: Power fault indicator
7: System overheat indicator
8: Network status indicator
84 NF5466M5
Rear panel

No Module name No Module name


1 Rear 3.5” * 4 HDD 9 Serial port
2 Rear 2.5” * 4 HDD 10 BMC_RST button
3 Rear 12” * 4 HDD 11 UID|RST button
4 PCIE slot(0-2) 12 OCP card
5 PCIE slot(3-5) 13 Extractor
6 BMC 14 PSU0
7 Rear VGA 15 PSU1
8 USB 3.0*2
85 NF5466M5
Rear panel

Configuration 3: 36 disks + 8 PCIe slots
1: PCIe slot (1-3)
2: PCIe slot (4-6)
3: PCIe slot (7-8)

Configuration 4: 36 disks + 6 half-width GPUs + 2 PCIe slots
1: Half-width GPU slot (1-3)
2: Half-width GPU slot (4-6)
3: PCIe slot (7-8)
86 NF5466M5

Front hard disk location


HDD location

Rear hard disk location


87 NF5466M5
Motherboard parts
1: DIMM slots (CPU0)
2: CPU0
3: DIMM slots (CPU1)
4: MB handle (2x)
5: CPU1
6: System FAN (4x)
7: Front backplane power (2x)
8: Front panel control
9: GPU power (2x)
10: Rear backplane power (2x)
11: NVME_CONN (4x)
12: PSU 1
13: PSU 0
14: PCIE2_CPU1 slot
15: M.2_CONN
16: OCP_A CONN
17: I2C CONN (7x)
18: NCSI CONN
19: SSATA CONN
20: OCP_C CONN
21: SYS_TF_slot
22: UID|RST button
23: BMC_RST button
24: PCIE1_CPU0/1 slots
25: BMC_TF_slot
26: SATA CONN (2x)
27: Serial port
28: USB 3.0 port
29: VGA
30: MLAN port
31: PCIE_CPU0 slot
32: CLR_CMOS
33: Front USB port
34: Front VGA
35: USB 3.0 x2
36: TPM CONN
88 NF5466M5
Memory

Memory installation principles:

Only the same type of memory can be used in one machine. The specific memory installation combinations follow these principles:
1. White slots take priority; install CPU1 memory symmetrically with CPU0.
2. With a single CPU, install memory in the order: CPU0_C0D0, CPU0_C1D0, CPU0_C2D0, CPU0_C3D0, CPU0_C4D0, CPU0_C5D0; CPU0_C0D1, CPU0_C1D1, ...
3. With two CPUs, install CPU0 memory in the order: CPU0_C0D0, CPU0_C1D0, CPU0_C2D0, ...; install CPU1 memory symmetrically with the CPU0 memory: CPU1_C0D0, CPU1_C1D0, CPU1_C2D0, ...
89 NF5466M5
Backplane
The NF5466M5 can support 36 or more HDDs, so different backplanes can be installed in the server: a 3.5" x12 BP, a 2.5" x2 BP, and a 3.5" x2 BP.
The NF5466M5 can also support an expander card.

3.5" x12 BP (the server supports up to 3 of these; their product numbers are different), with OCuLink ports OCU2, OCU1, OCU0.

Expander card
3.5" x2 BP
2.5" x2 BP
90 NF5466M5

Data link (see figure):
BP1 carries HDD 0-11, BP2 carries HDD 12-23, BP3 carries HDD 24-35, and BP0 carries rear HDD 0-3.
The backplane 0-3 and 4-7 ports connect to SAS/RAID card 1 and SAS/RAID card 2 (each PCIe x8 from CPU1); network cards attach via PCIe x8 to CPU0.
91 High density server -i24/NS5162M5

Server introduction
Product Type: The i24 is a 4-node high-density server based on Intel's new-generation Purley platform, designed as a high-end dual-socket rack-mount high-density system. It mainly targets domestic Internet, communications, and overseas customers. i24 is the name of the whole machine, and NS5162M5 is the name of a node.
92 High density server -i24/NS5162M5

Product Pictures

3.5 inches HD*12 configuration-i24 2.5 inches HD*24 configuration-i24

Picture of the node-NS5162M5


93 High density server -i24/NS5162M5
Overall Introduction

NS5162M5 node:
Computing: dual Intel Skylake CPUs
Memory: 16-slot DDR4
I/O: 1x OCP/PHY + 2x PCI-E x16
Internal storage: 2x M.2 SSD

i24 chassis:
Overall management: CMC + BMC
NVMe configuration: high-performance cache
94 High density server -i24/NS5162M5
Features and Specifications
Processor type: Supports Haswell-EP and Broadwell-EP processors; TDP up to 135W; QPI up to 9.6 GT/s
Processor interface: 2 Socket-R3 (LGA2011-3) slots
Chipset type: PCH C610 (Wellsburg)
Memory type: DDR4 ECC RDIMM/LRDIMM
Memory slot qty.: 16 per node, 64 in total
Total memory capacity: each node supports up to 512GB RDIMM/LRDIMM (32GB per RDIMM/LRDIMM), 2048GB for the whole machine
I/O interface (each node): USB: 2 rear USB 3.0 interfaces; display: 1 rear VGA interface; serial: none
Front panel LEDs: power button and LED, network LED, fault LED, UID LED
HDD backplane: SAS 3.0 backplane supporting hot-plug SAS/SATA/SSD, 2.5" x24 HDDs
Management chip (each node): integrated management chip with 1 independent network interface dedicated to IPMI remote management
NIC (each node): 2x RJ45 + management network port, or 2x SFP+ + management network port
PCI expansion slots: 1 PCI-E 3.0 x16 low-profile riser slot; 1 PCI-E 3.0 x8 high-density slot
HDD: type SAS/SATA/SSD; supports up to 24 front 2.5" HDDs (6 HDDs per node)
Power supply: two power supplies of 2000W and above output; 1+1 redundancy; 2 power modules
Power input: please refer to the nameplate label on the host
95 High density server -i24/NS5162M5
Product Picture of the whole server (i24)

Front View

Rear View

Top View
96 High density server -i24/NS5162M5

Product Picture of one node (NS5162M5)

Top View of node

Left View of node

Rear View of node


97 High density server -i24/NS5162M5
Front panel-3.5 inches HD

Buttons and indicators:
1: Power button: green when on, orange when off; press for 4s to force shutdown
2: UID|RST button: turns the UID on/off (blue light); press for 6s to force a restart
3: System fault indicator: off = system normal; solid red = error occurred; blinking red = alarm
4: CMC Mgmt port: Mini-USB to RJ45 interface, CMC debug connector

Modules:
1: HDDs of node A: from top to bottom, slots 0, 1, 2
2: HDDs of node B: from top to bottom, slots 0, 1, 2
3: HDDs of node C: from top to bottom, slots 0, 1, 2
4: HDDs of node D: from top to bottom, slots 0, 1, 2
5: Front control board of node C: controls server power on/off and system fault indication
6: Front control board of node D: controls server power on/off and system fault indication
7: Front control board of node B: controls server power on/off and system fault indication
8: Front control board of node A: controls server power on/off and system fault indication
9: CMC Mgmt port: CMC debug connector
98 High density server -i24/NS5162M5

Front panel-2.5 inches HD

No. Module Name Function

1 The HD of node A From left to right, slot 0,1,2,3,4,5


2 The HD of node B From left to right, slot 0,1,2,3,4,5
3 The HD of node C From left to right, slot 0,1,2,3,4,5
4 The HD of node D From left to right, slot 0,1,2,3,4,5
5 The front control board of node C Control the server to power on/off and system fault indicator
6 The front control board of node D Control the server to power on/off and system fault indicator
7 The front control board of node B Control the server to power on/off and system fault indicator
8 The front control board of node A Control the server to power on/off and system fault indicator
9 CMC Mgmt port CMC debug connector
99 High density server -i24/NS5162M5
Rear panel view

Modules and functions:
1: UID: UID light and reset button; press for 6s to force a restart
2: OCP or PHY card: two choices for customers, OCP card or PHY card
3: BMC Reset: BMC reset button
4: IPMI management port: MGMT port of the node
5: SUV port: high-density interface carrying USB 2.0 x2, VGA x1, and integrated serial port x2 (for BMC & system)
6: PCIE GEN3 x16: PCIe 3.0 x16 devices supported
7: PCIE GEN3 x16: PCIe 3.0 x16 devices supported

Node and PSU positions:
1: Node C
2: PSU0
3: Node D
4: PSU1
5: Node A
6: Node B
100

Functional Module - MB Topology
Key points:

1. Each CPU supports 8 DIMMs across 6 channels.

2. The MB provides OCP A / OCP B / OCP C interfaces, supporting OCP standard cards and PHY cards.

3. The M.2 interface on the MB supports PCIe or SATA signals, so an M.2 SSD or a SATA/SAS drive can be installed.

4. There are 2 OCuLink interfaces on the MB which carry SATA x8 signals.
101 High density server -i24/NS5162M5

Functional Module - MB Layout

1: RAM slots (CPU1)
2: CPU1
3: TPM interface
4: M.2 riser interface
5: RAM slots (CPU0)
6: PCIE0_CPU0 interface
7: OCPB_CPU0 interface
8: UID RST button
9: BMC_RELOAD
10: OCPC interface
11: CLR_CMOS
12: OCPA_CPU0 interface
13: BMC RST button
14: SUV interface
15: MLAN interface
16: BMC_TF_SLOT
17: RTC battery
18: PCIE1_CPU0 interface
19: SATA4-7 interface
20: SATA0-3 interface
21: SYS_TF_SLOT
22: CPU0
23: RAM slot (CPU0)
24: EDGE_PWR interface
25: EDGE_PCIE interface
26: RAM slot (CPU1)
102 High density server -i24/NS5162M5

Functional Module-System scheme


CMC

Attention: side board 1 and side


board 2 are two independent side
panels, supporting different hard
disk configurations. When used, a
motherboard can only be matched
with one side panel.

Red line: power supply


Green line: signal
103 High density server -i24/NS5162M5
Function Module-Storage Module
Attentions:
1. The server with the 3.5" HDD backplane cannot support NVMe disks; only SATA/SAS/SSD are supported.
2. There are 2 kinds of backplane for the 2.5" x24 model: one supports 16 NVMe drives and the other supports 24 NVMe drives. On the backplane that supports 16 NVMe drives, the first 4 slots from the left of each node support NVMe, so the order for installing disks should be NVMe -> SSD -> SAS -> SATA.

1) 2.5" x24 front view, from left to right: HDD A0-A5 match node A, HDD B0-B5 match node B, HDD C0-C5 match node C, HDD D0-D5 match node D.

2) 3.5" x12 front view, from top to bottom: HDD A0-A3 match node A, HDD B0-B3 match node B, HDD C0-C3 match node C, HDD D0-D3 match node D.
104 High density server -i24/NS5162M5

Function Module - Storage Module

3 kinds of HDD backplane: for 3.5" x12 disks; for 2.5" x16 NVMe disks; for 2.5" x24 NVMe disks.
The interfaces of the 3 kinds of HDD backplane are all the same.

Callouts: Node C power supply & data interface; Node D power supply & data interface.
105 High density server -i24/NS5162M5

Function Module - Storage Module

Node side board main functions:
1. Data signal: a SAS interface on the side plane connects to the OCuLink interface or SAS/RAID card on the MB, and connects to the HDD backplane via the high-density interface.
2. Power supply signal: supplies power to the nodes; the high-density interface connects to the HDD backplane to power it.

Callouts: PCIe; nodes' power supply & data interface; interface connected with the MB of the nodes; a SAS cable is necessary when the configuration contains a SAS card.
106 High density server -i24/NS5162M5

Function Module - Power Supply Module

Power board: the interfaces are shown below; the other side connects to the HDD backplane (take P14 for reference).

PSU: the order is shown below. 2 PSUs with 2000W output each, 1+1 redundancy.
PSU0
PSU1
107 High density server -i24/NS5162M5

Function Module - Cooling Module

4 fans in total provide centralized cooling for the 4 nodes.


108 High density server -i24/NS5162M5
Function Module - Management Module
Two-level management:
BMC management:
BMC: each node has a BMC chip compliant with the standard IPMI 2.0.
● Remote control: control servers remotely through KVM (Keyboard, Video and Mouse), SOL (Serial Over LAN), and virtual media. Note: SOL must be used through IPMITool.
● Alarm management: reports real-time alarm information so it can be handled accordingly.
● Status monitoring: monitors the real-time status of every unit.
● Server information management: servers' firmware versions, models, and asset information.
● Cooling control: dynamically adjusts fan speed based on ambient temperature and workload.
● IPMITool management: send commands via IPMITool (a usage sketch follows below). Download link: http://ipmitool.sourceforge.net/manpage.html
● Web management: friendly visual management interface; just click to set configurations or query tasks.
● Centralized account management: stores accounts in an Active Directory server, directs the authentication process to that server, and allows domain accounts to log in to the management system.
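As referenced in the IPMITool bullet above, the sketch below wraps a few standard ipmitool commands (chassis status, sensor list, event log) over the lanplus interface. The BMC address and credentials are placeholders, and the helper is a generic illustration of out-of-band access, not an Inspur-specific utility.

```python
# Minimal sketch: query a node's BMC out of band using the ipmitool CLI.
import subprocess

BMC = {"host": "192.0.2.10", "user": "admin", "password": "admin"}  # placeholders

def ipmi(*args: str) -> str:
    """Run one ipmitool command over the lanplus (RMCP+) interface."""
    cmd = ["ipmitool", "-I", "lanplus",
           "-H", BMC["host"], "-U", BMC["user"], "-P", BMC["password"], *args]
    return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout

if __name__ == "__main__":
    print(ipmi("chassis", "status"))   # power state, last power event
    print(ipmi("sdr", "elist"))        # sensor readings: fans, temperatures, PSUs
    print(ipmi("sel", "elist"))        # system event log, e.g. memory ECC entries
    print(ipmi("lan", "print", "1"))   # BMC network settings on LAN channel 1
    # Serial-over-LAN console (interactive): ipmi("sol", "activate")
```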
109 High density server -i24/NS5162M5

Function Module - Management Module

Two-level management:
CMC management: every chassis has a CMC management module that controls and manages the status of all nodes, power modules, fans, and other modules.
Note: the CMC IP address can be configured with the DHCP tool. The CMC IP address can be viewed from the BIOS of the NS5162M5, but it cannot be changed from the BIOS of the NS5162M5.
110 NF5288M5

NF5288M5
An "AI supercomputer" for smart computing and HPC applications
The only server in the world that interconnects 8 GPUs with 300GB/s high-speed NVIDIA® NVLink™ in 2U of space
The world's highest-density, highest-performance AI server
111 NF5288M5

HPC: heterogeneous computing and HPC cluster applications: linear algebra, Matlab acceleration, spectral analysis, genetic research, geographic information systems, meteorological prediction, etc.
AI: deep learning training, deep learning inference, training clusters.
Video acceleration processing: real-time video transcoding, professional audio processing, video compression applications.
112 NF5288M5
NF5288M5
Chassis specification: 2U rack
CPU: 1/2 Intel® Xeon® Scalable Processors
CPU TDP: supports up to 165W processors
Chipset: Intel® C620 series chipset (Lewisburg-4)
RAM: 16 memory slots, supports DDR4 2133/2400/2666 MHz RAM
GPU board:
NVLink GPU board: supports 8 SXM2-interface GPUs with NVLink high-speed interconnect; rear 4 half-height half-length PCIe x16 slots
PCIe GPU board: 8 full-height full-length double-width PCIe 3.0 x16 slots; supports GPGPU/Xeon Phi, etc.
PCIe slots: internal 1 PCIe 3.0 x8 interface for the RAID mezzanine card; front 2 half-height half-length PCIe 3.0 x16 slots; rear 4 half-height half-length PCIe 3.0 x16 slots (when configuring the NVLink GPU board)
Local storage: front 8 hot-swappable 2.5-inch SAS/SATA/NVMe SSD drives; built-in 2 M.2 SSDs
GPU support: supports SXM2- and PCIe-interface GPUs, up to 8 300W GPUs
System fans: redundant hot-swap system fans
Power supply: 1+1 redundant 3000W 80 PLUS Titanium power supplies
(Figures: NF5288M5 with 8x NVLink™ GPUs; NF5288M5 with 8x PCIe GPUs)
113 NF5288M5
Internal view
8 full-height full-length dual-width PCIe-interface GPUs
4 half-height half-length PCIe x16 slots
Liquid cooling interface (reserved)
8 NVIDIA® TESLA® V100/P100
5 sets of redundant system fans
2 Intel® Xeon® Scalable Processors
16 DDR4 2666MHz memory modules
Front: 2 half-height half-length PCIe x16 slots
1+1 redundant 3000W 80 PLUS Titanium power supplies
114 NF5288M5

2 half-height half-length PCIe x16 slots (from bottom to top, slot0-slot1)
1+1 redundant 3000W 80 PLUS Titanium power supplies (from left to right, PSU0-PSU1)
System switch & light
VGA
2x USB
System reset button
ID button & ID light & BMC reset button (long-press 7 seconds to reset the BMC)
8 2.5-inch hard drives supporting U.2 NVMe (hard disk IDs from left to right, 0-7)
115 NF5288M5

SAS/SATA hard disk sequence diagram

NVME hard disk sequence diagram


116 NF5288M5

4 half-height half-length PCIe x16 slots (from left to right, slot0-slot3)
Reserved GPU liquid cooling interface
C20 power cord socket (connects to PSU1, uses a C19-plug power cord)
C20 power cord socket (connects to PSU0, uses a C19-plug power cord)
Serial port
VGA
2x USB 3.0
Onboard 4-port 10G optical Ethernet (from left to right, eth0-eth3)
BMC management port
ID button & light
117 NF5288M5
Motherboard topology

Power management signal line interface
Memory slots (corresponding to CPU1)
Power port +12V
Hard disk backplane power supply and SAS/SATA signal interface
PCIe 3.0 x16 slot
Power port (GND)
M.2 riser card interface (supports 2 M.2 SSDs on the riser card)
CPU1
System switch & light
System reset button
2x USB
ID button & ID light & BMC reset button (long-press 7 seconds to reset the BMC)
VGA
8x SlimLine x8 (PCIe x8 signal): the upper 4 interfaces come from CPU0, the lower 4 from CPU1
RAID mezzanine
TPM
PCH
CPU0
Signal line interface (network, USB, and VGA signals are connected to the rear IO board)
Power port (GND)
Memory slots (corresponding to CPU0)
Power port +12V
Power management signal line interface
System fan power supply interface
Signal line interface (BMC, UID, and serial signals connected to the rear IO board)
System fan management signal line interface


118 NF5288M5

NVLink GPU topology: NVIDIA® NVLink™ 50GB/s; fixed PCIe Gen3
PCIe GPU topology: flexible PCIe Gen3


119 NF5288M5

Use the nvidia-smi command to view the 8 GPUs (GPU0-GPU7) on machines with either GPU configuration; a command sketch follows below. The correspondence between logical GPU numbers and physical positions is shown in the diagrams for the SXM2 GPU configuration (chassis top view with the chassis front at the top, plus an SXM2 GPU board section view) and the PCIe GPU configuration (chassis top view).
Note: the section view is taken standing at the air outlet of the chassis (behind the chassis).
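As mentioned above, the sketch below simply shells out to nvidia-smi: "-L" enumerates the eight GPUs, and "topo -m" prints the interconnect matrix, whose NV# entries indicate NVLink links and PIX/PHB entries indicate PCIe-only paths. This is a generic helper for checking which GPU board is installed, not part of the server firmware.

```python
# Minimal sketch: list GPUs and print the GPU interconnect topology with nvidia-smi.
import subprocess

def nvidia_smi(*args: str) -> str:
    return subprocess.run(["nvidia-smi", *args],
                          capture_output=True, text=True, check=True).stdout

if __name__ == "__main__":
    print(nvidia_smi("-L"))            # enumerates GPU 0..7 with model names and UUIDs
    print(nvidia_smi("topo", "-m"))    # link matrix: NV1/NV2 = NVLink, PIX/PHB = PCIe
```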
120 NF5288M5

Front IO board, riser board, rear IO board, GPU board, fan module, motherboard


122

DIMM slot silkscreen layout:
CPU1: C2D0, C1D0, C0D0, C0D1, C3D1, C3D0, C4D0, C5D0
CPU0: C2D0, C1D0, C0D0, C0D1, C3D1, C3D0, C4D0, C5D0
NF5288M5
123

Contents

Server Product List

Naming Specifications for Servers

Product Description

Typical Issue
124
Typical Issue

Log in to the knowledge base to view the typical issues for all products.
The website:
http://218.57.146.166:8443/display/H/SV+Troubleshooting+Document+an
d+Technical+Instruction
The username: overseas The password: overseas

2017-inspur 3108 4G cache RAID card operation occurs ECC error handling
strategy
2017-Ali Baoxue SA5212H5 memory ECC error location method
2017-After deleted the udev rule file 70-persistent-net.rules was not created
on system boot
2018-Hard Disk replacement solution of Alibaba customized server
SA5224M4A (SOP)
125

Thank You!

2019/8/29
