
SYNOPSIS ON MASSIVELY PARALLEL PROCESSING (MPP)

SUBMITTED TO: MR. HARSHPREET SINGH    SUBMITTED BY: SARBJEET KAUR    ROLL NO.: B23

Massively parallel processing


A massively parallel processor (MPP) is a single computer with many networked processors. MPPs share many characteristics with clusters, but MPPs have specialized interconnect networks, whereas clusters use commodity hardware for networking. MPPs also tend to be larger than clusters, typically having far more than 100 processors. In an MPP, each CPU contains its own memory and its own copy of the operating system and application, and each subsystem communicates with the others via a high-speed interconnect.

HISTORY

The two most prominent eras of computing are the sequential era and the parallel era. In recent decades, parallel processors have become significant competitors to vector machines in the quest for high-performance computing. Each computing era starts with the development of hardware architectures, followed by system software. High-performance computing requires massively parallel processing with thousands of CPUs; high-end supercomputers are massively parallel processors interconnected with one another. Processing multiple tasks on multiple processors is called parallel processing: a program is divided into multiple subtasks using a divide-and-conquer approach. One of the most significant differences between symmetric multiprocessing (SMP) and massively parallel processing is that in an MPP each of the many CPUs has its own memory, which helps prevent the slowdown a user may experience with SMP when all of the CPUs attempt to access the same memory simultaneously.
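To make the divide-and-conquer idea above more concrete, here is a minimal Python sketch (the use of Python's standard multiprocessing module, the partial_sum function, and the choice of summing a list of numbers are illustrative assumptions, not part of this synopsis). Each worker process keeps its own private memory, loosely analogous to an MPP node, and only the partial results travel back to the parent process to be combined:

# Minimal sketch: divide-and-conquer summation across worker processes.
# Each worker has its own private memory (loosely like an MPP node) and
# returns only its partial result, which the parent process combines.
from multiprocessing import Pool

def partial_sum(chunk):
    # Worker task: sum one sub-range of the data in its own address space.
    return sum(chunk)

if __name__ == "__main__":
    data = list(range(1_000_000))
    n_workers = 4
    chunk_size = (len(data) + n_workers - 1) // n_workers            # divide step
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]

    with Pool(processes=n_workers) as pool:
        partials = pool.map(partial_sum, chunks)                     # conquer step, in parallel

    print(sum(partials))                                             # combine step: 499999500000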

INTRODUCTION

"Massively parallel" refers to the hardware of a given parallel system: one with many processors. The meaning of "many" keeps increasing, but currently the largest parallel computers comprise processors numbering in the hundreds of thousands. Parallel computing is a form of computation in which many calculations are carried out simultaneously, operating on the principle that large problems can often be divided into smaller ones, which are then solved concurrently ("in parallel"). There are several forms of parallel computing: 1) bit-level parallelism, 2) instruction-level parallelism, 3) data parallelism, and 4) task parallelism. Parallel computing has become the dominant paradigm in computer architecture, mainly in the form of multicore processors. Parallel computers can be roughly classified according to the level at which the hardware supports parallelism: multi-core and multi-processor computers have multiple processing elements within a single machine, while clusters, MPPs, and grids use multiple computers to work on the same task. Specialized parallel computer architectures are sometimes used alongside traditional processors to accelerate specific tasks.

Parallel computer programs are more difficult to write than sequential ones, because concurrency introduces several new classes of potential software bugs, of which race conditions are the most common. Communication and synchronization between the different subtasks are typically among the greatest obstacles to good parallel program performance.
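Because race conditions are singled out above as the most common class of concurrency bug, a small hedged Python sketch may help (the shared counter, the two-thread setup, and the iteration count are assumptions chosen purely for illustration): two threads update a shared counter, unsynchronized updates can be lost, and guarding the update with a lock restores the expected result.

# Illustrative sketch of a race condition between two threads, and its fix.
import threading

counter = 0                      # shared state, visible to both threads
lock = threading.Lock()

def unsafe_increment(n):
    global counter
    for _ in range(n):
        counter += 1             # read-modify-write is not atomic, so it can race

def safe_increment(n):
    global counter
    for _ in range(n):
        with lock:               # synchronization: only one thread updates at a time
            counter += 1

def run(worker, n=100_000):
    global counter
    counter = 0
    threads = [threading.Thread(target=worker, args=(n,)) for _ in range(2)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return counter

print(run(unsafe_increment))     # may be less than 200000 (lost updates)
print(run(safe_increment))       # always 200000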

ADVANTAGES: Why use parallel computing? Main reasons:

1) Save time and money: adding more components to work on a task in parallel shortens its completion time, and parallel computers can be built from cheap, commodity components.
2) Solve larger problems: many problems are so large and complex that it is impractical or impossible to solve them on a single computer, especially given limited computer memory.
3) Provide concurrency: a single compute resource can only do one thing at a time, whereas multiple computing resources can do many things simultaneously. For example, the Access Grid provides a global collaboration network where people from around the world can meet and conduct work "virtually".

General characteristics: shared-memory parallel computers vary widely, but they generally have in common the ability for all processors to access all memory as a global address space. Multiple processors can operate independently but share the same memory resources, and a change made to a memory location by one processor is visible to all other processors.
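As a small sketch of this shared-memory model (the array size, the four worker threads, and the fill_slice function are assumptions made for illustration), several threads each fill their own slice of a single shared array; because every thread sees the same address space, the parent thread can read all of the results in place, without any data being copied or sent between workers. This is the opposite of the earlier message-passing sketch, where each process kept its data private:

# Sketch of shared-memory parallelism: workers write into one shared array.
import threading

data = [0] * 8                   # one array in a global address space

def fill_slice(start, stop):
    # Each worker updates its own slice; the change is visible to every thread.
    for i in range(start, stop):
        data[i] = i * i

threads = []
step = len(data) // 4
for w in range(4):
    t = threading.Thread(target=fill_slice, args=(w * step, (w + 1) * step))
    threads.append(t)
    t.start()
for t in threads:
    t.join()

print(data)                      # [0, 1, 4, 9, 16, 25, 36, 49], written by four threads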

Hardware architecture for parallel processing: the core elements of parallel processing are the CPUs. Based on the number of instruction streams and data streams that can be processed simultaneously, computer systems are classified into four categories:

1) Single instruction, single data (SISD):

In computing, SISD (single instruction, single data) refers to a computer architecture in which a single processor, a uniprocessor, executes a single instruction stream to operate on data stored in a single memory.
2) Single instruction, multiple data (SIMD):

Single instruction, multiple data (SIMD) is a class of parallel computers: it describes computers with multiple processing elements that perform the same operation on multiple pieces of data simultaneously.

3) Multiple instruction, single data (MISD):

In computing, MISD (multiple instruction, single data) is a type of parallel computing architecture where many functional units perform different operations on the same data. Pipeline architectures belong to this type, though a purist might say that the data is different after processing by each stage in the pipeline.
4) Multiple instruction, multiple data (MIMD):

In computing, MIMD (multiple instruction, multiple data) is a technique employed to achieve parallelism. Machines using MIMD have a number of processors that function asynchronously and independently; at any time, different processors may be executing different instructions on different pieces of data. MIMD architectures are used in application areas such as computer-aided design and manufacturing, simulation, modeling, and communication switches.
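To make the SIMD and MIMD categories above more concrete, here is a short, hedged Python sketch (it assumes the NumPy library is available; the arrays and the two worker functions are illustrative choices, not part of this synopsis). The NumPy expression applies the same operation to many data elements at once, which is SIMD-style data parallelism, while the thread pool runs different functions on different data at the same time, which is closer to the MIMD style:

# SIMD-style: the same operation is applied element-wise to many data items at once.
import numpy as np
from concurrent.futures import ThreadPoolExecutor

a = np.arange(8)
b = np.ones(8)
print(a + b)                     # one "add" applied to every element pair

# MIMD-style: independent workers run different instructions on different data.
def total(numbers):
    return sum(numbers)

def longest(words):
    return max(words, key=len)

with ThreadPoolExecutor() as pool:
    f1 = pool.submit(total, [1, 2, 3, 4])
    f2 = pool.submit(longest, ["mpp", "cluster", "grid"])
    print(f1.result(), f2.result())   # 10 cluster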
