
Principles of Parallel and Distributed Computing

By: Mayur N. Chotaliya

Parallel Computing
What is parallel computing? It is computing with homogeneous components of similar configuration that share a common memory. A program is broken into units that execute simultaneously, with the same instruction set applied to all of the data.

Why Parallel Processing?

Computation requirements are ever increasing -- visualization, distributed databases, simulations, scientific prediction (earthquakes), etc.

Sequential architectures are reaching physical limitations (speed of light, thermodynamics).

[Figure: Computational Power Improvement -- C.P.I. plotted against number of processors, comparing multiprocessor and uniprocessor systems.]

Why Parallel Processing?

The technology of parallel processing is mature and can be exploited commercially; there is significant R & D work on the development of tools and environments.

Significant development in networking technology is paving the way for heterogeneous computing.

Why Parallel Processing?

Hardware improvements like pipelining, superscalar execution, etc., are non-scalable and require sophisticated compiler technology. Vector processing works well for certain kinds of problems.

Processing Elements

Simple classification by Flynn (by number of instruction and data streams):
SISD - conventional
SIMD - data parallel, vector computing
MISD - arrays
MIMD - very general, multiple approaches

Current focus is on the MIMD model, using general-purpose processors (no shared memory).

Approaches to parallel programming

Data parallelism: divide and conquer splits the data into multiple sets; each set is processed on a different PE (processing element).
Process parallelism: a given operation has multiple (distinct) activities that are processed on multiple processors.
Farmer-and-worker model: master and slave, where one processor is the master and the rest are slaves.
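The data-parallel approach above can be sketched in a few lines of Python. This is a minimal illustration, not a prescribed implementation: the worker function `square` and the pool size are arbitrary choices made for the example.

```python
# Data parallelism sketch: the SAME operation (square) is applied to
# disjoint pieces of the data by a pool of worker processes (the PEs).
from multiprocessing import Pool

def square(x):
    return x * x

if __name__ == "__main__":
    data = list(range(10))
    with Pool(4) as pool:              # 4 processing elements
        results = pool.map(square, data)   # data split across workers
    print(results)  # [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
```

Process parallelism, by contrast, would give each worker a *different* function rather than the same one over different data.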

Laws of caution.....
The speed of a computer is proportional to the square of its cost:

speed = k * cost^2, i.e. cost is proportional to sqrt(speed)
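A quick numeric check of the law as stated on the slide. The constant `k` is arbitrary and chosen only for illustration; the point is the ratio, which is independent of `k`.

```python
import math

# Law of caution: speed = k * cost**2, so cost = sqrt(speed / k).
# k is an arbitrary illustrative constant.
def cost_for_speed(speed, k=1.0):
    return math.sqrt(speed / k)

# Doubling the speed requires only sqrt(2) ~ 1.41x the cost.
ratio = cost_for_speed(2) / cost_for_speed(1)
print(round(ratio, 3))  # 1.414
```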

Distributed Computing

Definition: A distributed system consists of multiple autonomous computers that communicate through a computer network.
Distributed computing utilizes a network of many computers, each accomplishing a portion of an overall task, to achieve a computational result much more quickly than with a single computer.
Distributed computing is any computing that involves multiple computers, remote from each other, that each have a role in a computation problem or information processing.

A distributed system is a collection of independent computers that appears to its users as a single coherent system.
[Figure: agents cooperating over the Internet -- each agent handles distribution and cooperation; subscriptions and job requests flow to a resource-management layer serving a large-scale application.]

Components of Distributed systems

Applications (SaaS): social networks, scientific computing
Middleware (PaaS): frameworks for cloud applications
OS and Hardware (IaaS): virtual hardware, images and storage

Architectural styles
1) Software architectural styles: logical organization of software.
Basic concepts to understand
Components and connectors:
-> A component represents a unit of software
-> A connector is a communication mechanism

Software Architectural styles

i) Data-centered:
-> Repository: the most relevant style; it has two main components: a central data structure and a collection of independent components.
-> Blackboard: knowledge sources update the knowledge base on the blackboard; the blackboard represents the data structure, and control is a collection of triggers and procedures that govern the interaction with the blackboard.
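The blackboard interaction above can be sketched as a tiny Python class. All names here (`Blackboard`, `post`, `register`) are invented for the illustration; the essential idea is that knowledge sources fire automatically when their trigger condition over the shared data becomes true.

```python
# Blackboard sketch: a shared data structure plus a control component
# (triggers) that decides which knowledge source runs after each update.
class Blackboard:
    def __init__(self):
        self.data = {}
        self.sources = []              # (trigger, action) pairs

    def register(self, trigger, action):
        self.sources.append((trigger, action))

    def post(self, key, value):
        self.data[key] = value
        # Control: run every knowledge source whose trigger now fires.
        for trigger, action in self.sources:
            if trigger(self.data):
                action(self.data)

bb = Blackboard()
# A knowledge source that derives 'sum' once both inputs are present.
bb.register(lambda d: "a" in d and "b" in d and "sum" not in d,
            lambda d: d.update(sum=d["a"] + d["b"]))
bb.post("a", 2)
bb.post("b", 3)
print(bb.data["sum"])  # 5
```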

Cont.
ii) Data-flow architectures: here the availability of data controls the computation.
-> Batch sequential: programs are chained so that each provides input for the next; output is generated after the last program completes.
-> Pipe-and-filter style: each component is a filter and the connections between filters are data streams (pipes); no filter knows what the others do.

Cont.
iii) Virtual machine architectures:
-> Rule-based style: characterized by an abstract execution environment acting as an inference engine; programs are expressed as a set of rules, which can identify abnormal behaviour when a rule is breached.
-> Interpreter style: an interpretation engine executes the core activities; internal memory contains the pseudo-code being interpreted, a representation of the current state of the engine, and a representation of the current state of the running program.

Cont.
iv) Call-and-return architectures:
-> Top-down style: divide and conquer; a single program is divided into sub-parts.
-> Object-oriented style: classes define the types of objects and the components the data represents; data and operations are coupled.
-> Layered style: layers of abstraction arranged as a stack; each layer interacts only with its adjacent layers, with protocols for the connections.

Cont.
v) Independent components: they have their own life cycles and can interact with other platforms as well.
-> Communicating processes: inter-process communication; each process provides services to other processes and can also use their services.
-> Event systems: each component registers a handler, which provides a callback when the event is activated; components are loosely coupled, which makes event systems useful for building open systems and easy to integrate.

2) System architectural styles: physical organization of distributed software systems.
-> Three components: presentation, application logic and data storage.
Client-server: thin client or thick client; 2-tier, 3-tier and N-tier architectures.
Peer-to-peer: a peer acts as both a client and a server.
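A minimal 2-tier client-server exchange can be sketched with Python's standard `socket` module. This is a bare illustration: the echo protocol and the loopback address are arbitrary choices, and a real server would loop over many connections.

```python
# 2-tier sketch: the server holds the application logic and data tier;
# the client is a thin presentation tier that sends a request and
# displays the reply.
import socket
import threading

def serve_once(sock):
    conn, _ = sock.accept()
    with conn:
        request = conn.recv(1024).decode()
        conn.sendall(f"echo:{request}".encode())   # application logic

sock = socket.socket()
sock.bind(("127.0.0.1", 0))        # port 0: the OS picks a free port
sock.listen(1)
port = sock.getsockname()[1]
threading.Thread(target=serve_once, args=(sock,), daemon=True).start()

client = socket.create_connection(("127.0.0.1", port))
client.sendall(b"hello")
reply = client.recv(1024).decode()
client.close()
print(reply)  # echo:hello
```

In a 3-tier or N-tier variant, the application logic would itself be a client of a further data-storage tier.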

Thank you
