Wednesday, August 1, 2012

MPI program to send two messages from multiple processors to one processor


MPI is a language-independent communication protocol used in parallel programming. It is a message passing library specification in which data is sent directly between processes as messages; MPI itself is not an implementation. MPI is designed for high performance on both massively parallel machines and workstation clusters, and it provides a powerful, efficient, and portable way to express parallel programs. Message passing in MPI can consist of point-to-point operations or collective (global) operations. MPI was explicitly designed to enable libraries. The MPI specification defines bindings for C, C++, and Fortran.
The main objective of MPI programming is to enable computers to collaborate on a single task by communicating with each other; the communication is managed across the cluster. MPI provides three main groups of functionality: process management, point-to-point communication, and collective calls (e.g. broadcast operations).
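As a minimal sketch of the process-management calls mentioned above (this is a generic MPI skeleton, separate from the send/receive program later in this post; it requires an installed MPI implementation to compile and run):

#include "mpi.h"
#include <stdio.h>

int main(int argc, char *argv[]) {
    int size, rank;
    MPI_Init(&argc, &argv);                 /* start the MPI environment */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* process management: how many processes exist */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* process management: which process am I */
    printf("Process %d of %d\n", rank, size);
    MPI_Finalize();                         /* shut the MPI environment down */
    return 0;
}

Every MPI program follows this same frame: MPI_Init before any other MPI call, and MPI_Finalize after the last one.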

Blocking send and receive:-

In an MPI program, communication takes place within a communicator, which defines a group of processes sharing a common communication context. All processes in a group communicate within that context. A data transmission involves two operations, MPI_SEND and MPI_RECV, and both sides use buffers during the transfer. If a buffer is overwritten before the transfer completes, data will be lost. The sender therefore has to keep the data in its buffer (blocking send) until it is sure the receiver has received all of it, and the receiver likewise waits (blocking receive) until the data has actually arrived. This mechanism reduces data loss.

Code:-

#include "mpi.h"#include <stdio.h>#include "stdlib.h"

int main(int argc,char *argv[]) {    int tasks, rank, dest, source, error, source_id;    char incomemsg[20]="Acknowledgment",   outgoingmsg[10]="Generated";    
    MPI_Status Status;   
    char errorString[BUFSIZ];     
    int errorsize;
    error = MPI_Init(&argc,&argv); 
//initializing MPI environment      
    MPI_Comm_size(MPI_COMM_WORLD, &tasks);  
//get the number of processes    
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);  
//get the  ID's of processes     
if (error != MPI_SUCCESS) {
    printf("MPI initialization failed!\n");
    exit(1);
  }

if (tasks < 2) {
    printf("You need to have at least 2 processors to run this program!\n");
    MPI_Finalize();   // Quit if there is only one processor     exit(0);
  }
if (rank != 0) {
  dest = 0;  /*process 0 is the receiver. other processes are sending messages to process 0 */
        error = MPI_Send(&outgoingmsg, 1, MPI_CHAR, dest, 1, MPI_COMM_WORLD);
//receives acknowledgment from process 0
        error = MPI_Recv(&incomemsg, 1, MPI_CHAR, dest, 1, MPI_COMM_WORLD, &Status);
        printf("Process %d received %s message from Process %d \n", rank,&incomemsg, Status.MPI_SOURCE, Status.MPI_TAG);
       //error handling if (error != MPI_SUCCESS) {
MPI_Error_string(error, errorString, &errorsize);
fprintf(stderr, "%d: %s\n", rank, errorString);
MPI_Abort(MPI_COMM_WORLD, error);
}
}
         else {
for(int j=1; j<tasks ; j++){
  //process 0 receives multiple messages from other processes source =j;
  error = MPI_Recv(&outgoingmsg, 1, MPI_CHAR, source, 1, MPI_COMM_WORLD, &Status);
printf("Process %d received %s message from Process %d \n", rank,&outgoingmsg, source, Status.MPI_TAG);
source_id = Status.MPI_SOURCE;
//process 0 sends acknowledgement
error = MPI_Send(&incomemsg, 1, MPI_CHAR, source_id, 1, MPI_COMM_WORLD);

if (error != MPI_SUCCESS) {
MPI_Error_string(error, errorString, &errorsize); (stderr, "%d: %s\n", rank, errorString); MPI_Abort(MPI_COMM_WORLD, error); }            } }     
MPI_Finalize();  //finalizing MPI environment}
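Assuming the source is saved as send_recv.c (the file name is just an example) and an MPI implementation such as Open MPI or MPICH is installed, the program can be compiled and launched with the standard wrapper tools:

# mpicc is the MPI compiler wrapper; -np sets the number of processes
mpicc send_recv.c -o send_recv
mpirun -np 4 ./send_recv

With 4 processes, process 0 prints one "Generated" line per worker and each worker prints an "Acknowledgment" line; the interleaving of the output lines is nondeterministic, since the processes run concurrently.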