MPI
1. Message Passing
2. SPMD/MPMD
3. MPI, classes of functions
4. Supplement
Message Passing
A parallel programming model for distributed-memory systems
(NUMA, MPP)
1. Read array a[] from input
2. Get my rank
3. If rank==0 then is=1, ie=2
4. Process from a(is) to a(ie)
5. Gather the results to process 0
6. If rank==0 then write array a() to the output file
(SPMD: every process runs this same program; the rank determines which slice is..ie of the array each process works on.)
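The index computation in step 3 can be sketched in plain C, with no MPI required. `block_range` is a hypothetical helper name; with n = 2·nprocs it reproduces the ranges on the slide (rank 0 gets is=1, ie=2):

```c
/* Hypothetical helper: compute the index range [is, ie] (1-based,
 * inclusive) owned by process `rank` when `n` elements are split as
 * evenly as possible across `nprocs` processes. The first n % nprocs
 * ranks absorb the remainder, one extra element each. */
static void block_range(int n, int nprocs, int rank, int *is, int *ie) {
    int base = n / nprocs;   /* minimum chunk size          */
    int rem  = n % nprocs;   /* leftover elements           */
    int lo;                  /* 0-based start of this chunk */
    if (rank < rem)
        lo = rank * (base + 1);
    else
        lo = rem * (base + 1) + (rank - rem) * base;
    *is = lo + 1;                            /* 1-based start */
    *ie = lo + base + (rank < rem ? 1 : 0);  /* 1-based end   */
}
```

Because the remainder is spread over the lowest ranks, no two ranges differ in size by more than one element.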
MPI. Classes of functions
• Environment management functions
• Collective operations
• Process groups / Communicators
#include "mpi.h"
#include <stdio.h>
#define NPROCS 8

int main(int argc, char *argv[]) {
    int rank, new_rank, sendbuf, recvbuf, numtasks;
    int ranks1[4] = {0, 1, 2, 3}, ranks2[4] = {4, 5, 6, 7};
    MPI_Group orig_group, new_group;
    MPI_Comm new_comm;
Blocking operations
Note: all collective operations are blocking (nonblocking collectives were added only later, in MPI-3).
Collective operations
MPI_Barrier
C:       MPI_Barrier(comm)
Fortran: MPI_BARRIER(comm, ierr)
Data transfer
Collective operations
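The data-transfer collectives can be understood from their buffer semantics alone. The sketch below is a serial simulation (not MPI itself) of what MPI_Scatter and MPI_Gather do with the buffers, assuming 4 processes and 2 elements per process:

```c
#define P 4      /* simulated number of processes   */
#define CHUNK 2  /* elements each process receives  */

/* Serial simulation of MPI_Scatter semantics: the root's send buffer
 * is cut into P contiguous chunks; chunk i lands in process i's
 * receive buffer. */
static void scatter_sim(const int *sendbuf, int recvbuf[P][CHUNK]) {
    for (int p = 0; p < P; p++)
        for (int j = 0; j < CHUNK; j++)
            recvbuf[p][j] = sendbuf[p * CHUNK + j];
}

/* Serial simulation of MPI_Gather: the inverse movement, the per-process
 * buffers are concatenated in rank order back at the root. */
static void gather_sim(int recvbuf[P][CHUNK], int *gathered) {
    for (int p = 0; p < P; p++)
        for (int j = 0; j < CHUNK; j++)
            gathered[p * CHUNK + j] = recvbuf[p][j];
}
```

Gather after scatter is the identity on the root's buffer, which is exactly the pattern of the SPMD example: scatter the array, process each slice, gather the results to process 0.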
Process groups / Communicators
• Groups and communicators are dynamic: they can be created and destroyed during execution.
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &numtasks);
    sendbuf = rank;

    /* Extract the original group handle */
    MPI_Comm_group(MPI_COMM_WORLD, &orig_group);

    /* Divide tasks into two distinct groups based upon rank */
    if (rank < NPROCS/2) {
        MPI_Group_incl(orig_group, NPROCS/2, ranks1, &new_group);
    } else {
        MPI_Group_incl(orig_group, NPROCS/2, ranks2, &new_group);
    }

    /* Create a new communicator and then perform collective communications */
    MPI_Comm_create(MPI_COMM_WORLD, new_group, &new_comm);
    MPI_Allreduce(&sendbuf, &recvbuf, 1, MPI_INT, MPI_SUM, new_comm);

    MPI_Group_rank(new_group, &new_rank);
    printf("rank= %d newrank= %d recvbuf= %d\n", rank, new_rank, recvbuf);

    MPI_Finalize();
    return 0;
}
Process groups / Communicators
Intra- and inter-group communication
[Figure: two communicators, Comm_A and Comm_B, connected by inter-group communication]
Virtual topologies
– Communication overhead
– Synchronization points
– Load imbalance between tasks executed on distinct processing elements
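For a 2-D Cartesian virtual topology (as created by MPI_Cart_create), the rank-to-coordinate mapping that MPI_Cart_coords and MPI_Cart_rank compute is, for the non-periodic row-major case, the simple arithmetic below (a sketch of the mapping, not the MPI implementation):

```c
/* Row-major mapping between a linear rank and 2-D grid coordinates,
 * mirroring what MPI_Cart_coords / MPI_Cart_rank compute for a
 * non-periodic 2-D Cartesian topology with `cols` columns. */
static void cart_coords(int rank, int cols, int *row, int *col) {
    *row = rank / cols;
    *col = rank % cols;
}

static int cart_rank(int row, int col, int cols) {
    return row * cols + col;
}
```

Neighbor ranks for shifts along a dimension (as MPI_Cart_shift reports) follow directly: the left/right neighbors of (row, col) are at col-1 and col+1 in the same row, when those columns exist.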
ref: [2]&[3]
Parallel Mandelbrot
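The per-pixel work in a parallel Mandelbrot is the escape-time iteration below; each MPI rank would apply it to its own block of rows and the results would be gathered at rank 0, just like the SPMD array example (a sketch; the function name is illustrative):

```c
/* Escape-time iteration for one point c = cx + i*cy of z -> z*z + c:
 * returns the number of iterations before |z| exceeds 2, or max_iter
 * if it never escapes (the point is taken to be inside the set).
 * This per-pixel kernel is what each rank runs over its assigned rows. */
static int mandel_iters(double cx, double cy, int max_iter) {
    double zx = 0.0, zy = 0.0;
    for (int it = 0; it < max_iter; it++) {
        double zx2 = zx * zx, zy2 = zy * zy;
        if (zx2 + zy2 > 4.0)
            return it;           /* escaped after `it` iterations */
        zy = 2.0 * zx * zy + cy; /* imaginary part of z*z + c */
        zx = zx2 - zy2 + cx;     /* real part of z*z + c      */
    }
    return max_iter;
}
```

Because escape times vary wildly across the image, a static row split leaves some ranks idle; this is exactly the load-imbalance problem listed under virtual topologies, and a master-worker distribution of rows is a common remedy.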
Questions? Comments?