SUBMITTED TO: MR. HARSHPREET SINGH
SUBMITTED BY: SARBJEET KAUR
ROLL NO.: B23
One of the most significant differences between Symmetric Multi-Processing (SMP) and Massively Parallel Processing (MPP) is that with MPP, each of the many CPUs has its own memory, which helps prevent the hold-ups a user may experience with SMP when all of the CPUs attempt to access memory simultaneously.
INTRODUCTION

Massively parallel refers to the hardware that comprises a given parallel system: one having many processors. The meaning of "many" keeps increasing; currently, the largest parallel computers contain processors numbering in the hundreds of thousands. Parallel computing is a form of computation in which many calculations are carried out simultaneously, operating on the principle that large problems can often be divided into smaller ones, which are then solved concurrently ("in parallel"). There are several different forms of parallel computing:
1) Bit-level parallelism
2) Instruction-level parallelism
3) Data parallelism
4) Task parallelism
Parallel computing has become the dominant paradigm in computer architecture, mainly in the form of multi-core processors. Parallel computers can be roughly classified according to the level at which the hardware supports parallelism: multi-core and multi-processor computers have multiple processing elements within a single machine, while clusters, MPPs, and grids use multiple computers to work on the same task. Specialized parallel computer architectures are sometimes used alongside traditional processors to accelerate specific tasks. Parallel computer programs are more difficult to write than sequential ones, because concurrency introduces several new classes of potential software bugs, of which race conditions are the most common. Communication and synchronization between the different subtasks are typically some of the greatest obstacles to getting good parallel program performance.
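To make the race-condition problem concrete, here is a minimal C sketch using POSIX threads (the thread count, iteration count, and counter variable are arbitrary choices for this illustration, not part of the original text). Several threads increment one shared counter; without the mutex, concurrent read-modify-write operations can lose updates, while the mutex serializes access and removes the race.

    #include <pthread.h>
    #include <stdio.h>

    #define THREADS    4
    #define INCREMENTS 100000

    long counter = 0;   /* shared data: the source of the race */
    pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    void *worker(void *arg)
    {
        (void)arg;
        for (int i = 0; i < INCREMENTS; i++) {
            pthread_mutex_lock(&lock);    /* without this lock, counter++ races */
            counter++;                    /* read-modify-write on shared memory */
            pthread_mutex_unlock(&lock);
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t tid[THREADS];
        for (int i = 0; i < THREADS; i++)
            pthread_create(&tid[i], NULL, worker, NULL);
        for (int i = 0; i < THREADS; i++)
            pthread_join(tid[i], NULL);
        /* expected THREADS * INCREMENTS; remove the mutex and the total is often lower */
        printf("counter = %ld\n", counter);
        return 0;
    }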
ADVANTAGES

Why use parallel computing? The main reasons are:
1) Save time and money: Time is saved by adding more components that work on the task together, and parallel computers can be built from cheap, commodity components.
2) Solve larger problems: Many problems are so large and complex that it is impractical or impossible to solve them on a single computer, especially given limited computer memory.
3) Provide concurrency: A single compute resource can only do one thing at a time, whereas multiple computing resources can do many things simultaneously. For example, the Access Grid provides a global collaboration network where people from around the world can meet and conduct work "virtually".

GENERAL CHARACTERISTICS

Shared-memory parallel computers vary widely, but they generally have in common the ability for all processors to access all memory as a global address space. Multiple processors can operate independently while sharing the same memory resources, and changes in a memory location effected by one processor are visible to all other processors.
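The following short C sketch illustrates this shared global address space using OpenMP (OpenMP is chosen here only as a convenient illustration; the array size is an arbitrary assumption). Every thread writes directly into the same array, and the results are visible everywhere afterwards with no explicit message passing; compile with a flag such as gcc -fopenmp.

    #include <omp.h>
    #include <stdio.h>

    #define N 8

    int main(void)
    {
        int data[N] = {0};    /* one array in a global address space */

        /* Each thread updates its own elements, but all threads see the same memory. */
        #pragma omp parallel for
        for (int i = 0; i < N; i++)
            data[i] = omp_get_thread_num();

        /* Changes effected by any thread are visible here. */
        for (int i = 0; i < N; i++)
            printf("data[%d] written by thread %d\n", i, data[i]);
        return 0;
    }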
HARDWARE ARCHITECTURE FOR PARALLEL PROCESSING

The core elements of parallel processing are CPUs. Based on the number of instruction streams and data streams that can be processed simultaneously, computer systems are classified into four categories:

1) Single instruction, single data (SISD):
In computing, SISD (single instruction, single data) is a term referring to a computer architecture in which a single processor, a uniprocessor, executes a single instruction stream to operate on data stored in a single memory.
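For contrast with the parallel categories that follow, here is a trivial SISD-style sketch in C (the array contents are an arbitrary assumption): one instruction stream processes one data stream, one element at a time.

    #include <stdio.h>

    int main(void)
    {
        int data[4] = {1, 2, 3, 4};
        int sum = 0;

        /* A single instruction stream works through a single data stream sequentially. */
        for (int i = 0; i < 4; i++)
            sum += data[i];

        printf("sum = %d\n", sum);
        return 0;
    }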
2) Single instruction, multiple data (SIMD):
Single instruction, multiple data (SIMD) is a class of parallel computers. It describes computers with multiple processing elements that perform the same operation on multiple data points simultaneously.
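As a concrete SIMD sketch (assuming an x86 machine with SSE support; the sample values are arbitrary), the single _mm_add_ps instruction below adds four pairs of floats at once:

    #include <stdio.h>
    #include <xmmintrin.h>   /* SSE intrinsics */

    int main(void)
    {
        float a[4] = {1.0f, 2.0f, 3.0f, 4.0f};
        float b[4] = {10.0f, 20.0f, 30.0f, 40.0f};
        float r[4];

        __m128 va = _mm_loadu_ps(a);      /* load four floats into one register */
        __m128 vb = _mm_loadu_ps(b);
        __m128 vr = _mm_add_ps(va, vb);   /* one instruction adds all four lanes */
        _mm_storeu_ps(r, vr);

        for (int i = 0; i < 4; i++)
            printf("r[%d] = %.1f\n", i, r[i]);
        return 0;
    }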
3) Multiple instruction, single data (MISD):
In computing, MISD (multiple instruction, single data) is a type of parallel computing architecture where many functional units perform different operations on the same data. Pipeline architectures belong to this type, though a purist might say that the data is different after processing by each stage in the pipeline.
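The following C sketch is only a software analogy of the MISD pipeline idea (the stage functions are invented for this illustration): several different operations, standing in for different functional units, are applied stage by stage to the same data item.

    #include <stdio.h>

    /* Hypothetical pipeline stages: different "functional units" for the same datum. */
    static int stage_scale(int x)  { return x * 2; }
    static int stage_offset(int x) { return x + 3; }
    static int stage_clip(int x)   { return x > 100 ? 100 : x; }

    int main(void)
    {
        int data = 42;   /* the single data stream */

        /* Each stage performs a different operation; as noted above, a purist
           would say the data is no longer "the same" after each stage. */
        data = stage_scale(data);
        data = stage_offset(data);
        data = stage_clip(data);

        printf("pipeline result = %d\n", data);
        return 0;
    }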
4) Multiple instruction, multiple data (MIMD):
In computing, MIMD (multiple instruction, multiple data) is a technique employed to achieve parallelism. Machines using MIMD have a number of processors that function asynchronously and independently; at any time, different processors may be executing different instructions on different pieces of data. MIMD architectures are used in a number of application areas such as computer-aided design and manufacturing, simulation, modeling, and communication switches.
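As a minimal MIMD-style sketch using POSIX threads (the two task functions and their inputs are arbitrary assumptions made for this illustration), two threads execute different instruction streams on different pieces of data at the same time:

    #include <pthread.h>
    #include <stdio.h>

    /* Two different instruction streams... */
    void *sum_task(void *arg)
    {
        int *nums = arg;
        printf("sum task: %d\n", nums[0] + nums[1]);
        return NULL;
    }

    void *product_task(void *arg)
    {
        int *nums = arg;
        printf("product task: %d\n", nums[0] * nums[1]);
        return NULL;
    }

    int main(void)
    {
        int a[2] = {3, 4};   /* ...operating on different data */
        int b[2] = {5, 6};
        pthread_t t1, t2;

        /* Each thread runs a different function on its own data, asynchronously. */
        pthread_create(&t1, NULL, sum_task, a);
        pthread_create(&t2, NULL, product_task, b);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        return 0;
    }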