
Message Passing Interface (MPI)

Arash Bakhtiari

2013-01-13 Sun

Distributed Memory

Processors have their own local memory

Figure: Distributed Memory [1]

Advantages and Disadvantages

Advantages:
- Memory is scalable with the number of processors
- Each processor can rapidly access its own memory without interference

Disadvantages:
- The programmer is responsible:
  - to provide data residing on another processor
  - to explicitly define how and when data is communicated
  - to synchronize between tasks

What is MPI?

- MPI is a specification for the developers and users of message passing libraries
- Provides a standard for writing message passing programs
- Specifications are available for C/C++ and Fortran

Advantages of MPI

- Standardization: MPI is the only message passing library which can be considered a standard
- Portability: no need to modify your source code when you port your application to a different platform
- Functionality: many routines available to use
- Availability: a variety of implementations are available

MPI Program Structure

Figure: MPI Program Structure [1]

Core Routines
- MPI_Init: Initializes the MPI execution environment

  int MPI_Init(int *argc, char ***argv)

- MPI_Finalize: Terminates the MPI execution environment

  int MPI_Finalize(void)

- MPI_Comm_size: Returns the total number of MPI processes in the specified communicator

  int MPI_Comm_size(MPI_Comm comm, int *size)

- MPI_Comm_rank: Returns the rank of the calling MPI process within the specified communicator

  int MPI_Comm_rank(MPI_Comm comm, int *rank)

DEMO

#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[]) {
    int my_rank;
    int size;

    MPI_Init(&argc, &argv);                    /* START MPI */
    MPI_Comm_rank(MPI_COMM_WORLD, &my_rank);   /* rank of the calling process */
    MPI_Comm_size(MPI_COMM_WORLD, &size);      /* total number of processes */

    printf("Hello world! I'm rank %d.\n", my_rank);

    MPI_Finalize();                            /* EXIT MPI */
    return 0;
}
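
With a typical MPI implementation such as Open MPI or MPICH (an assumption; the slides do not name one), this demo is usually compiled with the mpicc wrapper and launched with mpirun, e.g. mpirun -np 4 ./hello (the executable name is illustrative), which starts four copies of the program, each printing its own rank.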

Communication Routines

Point-to-Point Communication:
- Involves message passing between two, and only two, different MPI tasks
- One task performs a send operation and the other task performs a matching receive operation

Collective Communication:
- Collective communication must involve all processes in the scope of a communicator

Communication Routines: Point-to-Point

- MPI_Send:

  int MPI_Send(void *buf, int count, MPI_Datatype datatype,
               int dest, int tag, MPI_Comm comm)

- MPI_Recv:

  int MPI_Recv(void *buf, int count, MPI_Datatype datatype,
               int source, int tag, MPI_Comm comm, MPI_Status *status)

Communication Routines: Collective

- MPI_Bcast: Broadcasts (sends) a message from the process with rank root to all other processes in the group

  int MPI_Bcast(void *buffer, int count, MPI_Datatype datatype,
                int root, MPI_Comm comm)

- MPI_Barrier: Creates a barrier synchronization in a group

  int MPI_Barrier(MPI_Comm comm)
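
A minimal sketch of how these two collective routines might be used together (the payload, the broadcast value 42, and the choice of rank 0 as root are illustrative and not taken from the slides):

#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int rank;
    int value = -1;                 /* illustrative payload */

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0)
        value = 42;                 /* only the root sets the value */

    /* every process in MPI_COMM_WORLD receives the root's value */
    MPI_Bcast(&value, 1, MPI_INT, 0, MPI_COMM_WORLD);
    printf("rank %d sees value %d\n", rank, value);

    /* block until all processes reach this point */
    MPI_Barrier(MPI_COMM_WORLD);

    MPI_Finalize();
    return 0;
}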

DEMO

#include "mpi.h"
#include <stdio.h>

int main(int argc, char *argv[])
{
    int rank, size, i;
    int buffer[10];
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0)
    {
        for (i = 0; i < 10; i++)
            buffer[i] = i;
        /* rank 0 sends 10 ints with tag 123 to rank 1 */
        MPI_Send(buffer, 10, MPI_INT,
                 1, 123, MPI_COMM_WORLD);
    }

DEMO (cont.)

    if (rank == 1)
    {
        for (i = 0; i < 10; i++)
            buffer[i] = -1;         /* pre-fill so received data is visible */
        /* rank 1 receives 10 ints with tag 123 from rank 0 */
        MPI_Recv(buffer, 10, MPI_INT,
                 0, 123, MPI_COMM_WORLD,
                 &status);
        for (i = 0; i < 10; i++)
        {
            printf("buffer[%d] = %d\n", i, buffer[i]);
        }
        fflush(stdout);
    }

    MPI_Finalize();
    return 0;
}
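
When launched with at least two processes (required for the send/receive pair to match), rank 1 prints buffer[0] = 0 through buffer[9] = 9, i.e. the values filled in and sent by rank 0; any additional ranks simply do nothing.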

References

[1] Blaise Barney, Lawrence Livermore National Laboratory, https://computing.llnl.gov/tutorials/mpi/
[2] DeinoMPI, http://mpi.deino.net/
