IBM PureData System for Analytics
(formerly known as IBM Netezza)
- Ravi
www.etraining.guru
info@etraining.guru
C1000 models are similar to N1001 systems, but with more storage per rack
Scales to more than 10 petabytes of user data capacity
Each Netezza C1000 rack has one S-Blade chassis that contains 4 S-blades. Each S-blade
has 8 CPU/FPGA processors, the same as the N1001
1 Rack = 4 Storage Arrays (or Storage Groups)
1 Storage Array = 1 disk raid controller + 2 disk enclosures
1 disk raid controller = 12 disks; 1 disk enclosure = 12 disks
So, 1 Storage array/group = 12 + (2*12) = 36 disks
So, 1 Rack = 4 Storage arrays/groups = 4 * 36 = 144 disks (at 1 TB per disk, 144 TB raw)
Note: 2 spare disks in each storage array
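The disk arithmetic above can be double-checked with a short script (the constants are taken directly from the notes):

```python
# Disk counts for one Netezza C1000 rack, per the notes above.
DISKS_PER_RAID_CONTROLLER = 12
DISKS_PER_ENCLOSURE = 12
ENCLOSURES_PER_ARRAY = 2
ARRAYS_PER_RACK = 4
SPARES_PER_ARRAY = 2

# 1 storage array = 1 RAID controller + 2 enclosures, 12 disks each
disks_per_array = DISKS_PER_RAID_CONTROLLER + ENCLOSURES_PER_ARRAY * DISKS_PER_ENCLOSURE
disks_per_rack = ARRAYS_PER_RACK * disks_per_array
spares_per_rack = ARRAYS_PER_RACK * SPARES_PER_ARRAY

print(disks_per_array)   # 36
print(disks_per_rack)    # 144
print(spares_per_rack)   # 8
```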
C1000-4:
1 Rack,
1 S-Blade Chassis 8 CPU, 8 FPGA
4 Storage groups 144 TB (8 Spares)
C1000-8:
2 Racks
2 S-Blade Chassis 16 CPU, 16 FPGA
8 Storage groups 288 TB (16 Spares)
3x faster analytics performance & 50% more usable capacity per rack
128 GB/Sec scan rate
The 1:1:1 ratio between disks, FPGA engines, and CPU cores doesn't apply
1 Rack = 7 S-blades + 288 disks
1 S-blade = 16 CPU cores + 16 FPGA engines
288 disks = 240 active disks + 34 spare + 14 used for swap/log space
Note: Each disk in Striper is 600 GB (user space: 200 GB, mirror: 200 GB, temp: 200 GB)
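The Striper disk accounting above is internally consistent, which a few assertions confirm:

```python
# Disk accounting for one N2001 ("Striper") rack, per the notes above.
TOTAL_DISKS = 288
ACTIVE = 240
SPARE = 34
SWAP_LOG = 14
assert ACTIVE + SPARE + SWAP_LOG == TOTAL_DISKS  # 240 + 34 + 14 = 288

# Each 600 GB disk is split into three 200 GB slices.
DISK_GB = 600
USER_GB = MIRROR_GB = TEMP_GB = 200
assert USER_GB + MIRROR_GB + TEMP_GB == DISK_GB  # 200 * 3 = 600
```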
When the system starts up, 32 or 40 dataslices are assigned to each S-blade.
At no point during operation can one S-blade access the data on a dataslice that has been assigned to another S-blade.
There is no attachment of CPUs to disks. The only attachment is that a dataslice is assigned to one S-blade when the
system starts. CPU and FPGA resources are assigned as they become available.
If an S-blade fails, the dataslices that were assigned to the failed S-blade are rebalanced and assigned to the
remaining operational S-blades. This is exactly how the system worked in later releases of NPS on the
TwinFin architecture as well.
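A toy sketch of the rebalancing idea follows. This is not NPS code; the round-robin reassignment policy and the `rebalance` helper are assumptions for illustration only (the actual rebalancing logic is internal to the product):

```python
# Toy illustration of dataslice rebalancing after an S-blade failure.
# Round-robin reassignment is an assumed policy for illustration.

def rebalance(assignment, failed_blade):
    """assignment: dict mapping S-blade id -> list of dataslice ids."""
    orphaned = assignment.pop(failed_blade, [])
    survivors = sorted(assignment)
    for i, ds in enumerate(orphaned):
        # Spread the orphaned dataslices evenly across surviving S-blades.
        assignment[survivors[i % len(survivors)]].append(ds)
    return assignment

# 7 S-blades, 34 dataslices each (illustrative numbers).
blades = {b: list(range(b * 34, (b + 1) * 34)) for b in range(7)}
rebalance(blades, failed_blade=3)
print(3 in blades)                              # False
print(sum(len(v) for v in blades.values()))     # 238 (no dataslice lost)
```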
When a query starts, NPS starts 240 processes, one for each dataslice.
The processes start reading data off disk; as each 128 KB page of data comes off disk, that page is assigned to
the first available FPGA on that S-blade.
The FPGA decompresses and filters the data and passes the result back to the process.
The Linux CPU scheduler assigns the process to one of the CPUs, which processes the remaining data that came out
of the FPGA.
Once complete, the next 128 KB page is read off disk, and that continues until all of the data has been processed for the
table being scanned.
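The scan flow above can be sketched as a minimal simulation. Everything here is illustrative: `fpga_filter` and `cpu_process` are stand-ins for the FPGA decompress/filter stage and the CPU-side processing, and thread workers stand in for per-dataslice processes:

```python
# Minimal sketch of the per-dataslice scan pipeline described above.
from concurrent.futures import ThreadPoolExecutor

PAGE_SIZE = 128 * 1024  # 128 KB pages come off disk

def fpga_filter(page):
    # Stand-in for FPGA decompression + row restriction/projection.
    return page[: len(page) // 2]

def cpu_process(filtered):
    # Stand-in for the remaining CPU-side processing of the query.
    return len(filtered)

def scan_dataslice(pages):
    total = 0
    for page in pages:                  # read one 128 KB page at a time
        filtered = fpga_filter(page)    # first available FPGA engine
        total += cpu_process(filtered)  # Linux scheduler picks a CPU
    return total

# 240 dataslices, each with a few pages of dummy data.
dataslices = [[b"x" * PAGE_SIZE] * 3 for _ in range(240)]
with ThreadPoolExecutor(max_workers=16) as pool:
    results = list(pool.map(scan_dataslice, dataslices))
print(len(results))  # 240, one result per dataslice
```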
Option 1: nz_get_model
[/export/home/nz]$ nz_get_model
IBM PureData System for Analytics N2001-010
Option 2: select * from _t_environ where name like 'NPS%'
/export/home/nz-> nzsql -c "select * from _t_environ where name like 'NPS%'"
     NAME     |    VAL
--------------+------------
 NPS_PLATFORM | xs
 NPS_MODEL    | P1000X_A_E
 NPS_FAMILY   | Pseries
Note: Pseries means TwinFin; Qseries means Striper