
Open MPI and Intel MPI datatype


Dear all,

 

I have noticed a small difference between Open MPI and Intel MPI.

For example, in MPI_ALLREDUCE Intel MPI does not allow the same variable to be used as both the send and the receive buffer.
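For what it's worth, this is the kind of in-place form I mean (only a sketch; buf and n are placeholder names for my array and its length):

  ! Sketch only: buf is a double precision array of length n, reduced in place.
  ! MPI_IN_PLACE avoids passing the same variable as both send and receive
  ! buffer, which Intel MPI rejects at runtime.
  CALL  MPI_ALLREDUCE(MPI_IN_PLACE, buf, n, MPI_DOUBLE_PRECISION, MPI_SUM, MPI_COMM_WORLD, MPIdata%iErr)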

 

I wrote my code with Open MPI, and I am now running it on an Intel MPI cluster.

Now I have the following error:

 

Fatal error in MPI_Isend: Invalid communicator, error stack:
MPI_Isend(158): MPI_Isend(buf=0x1dd27b0, count=1, INVALID DATATYPE, dest=0, tag=0, comm=0x0, request=0x7fff9d7dd9f0) failed


This is how I create my type:

 

  ! Column type: one block of Ncoeff_MLS double precision values
  CALL  MPI_TYPE_VECTOR(1, Ncoeff_MLS, Ncoeff_MLS, MPI_DOUBLE_PRECISION, coltype, MPIdata%iErr)
  CALL  MPI_TYPE_COMMIT(coltype, MPIdata%iErr)
  !
  ! Nested type MPI_WENO_TYPE: one block of nVar coltype elements
  CALL  MPI_TYPE_VECTOR(1, nVar, nVar, coltype, MPI_WENO_TYPE, MPIdata%iErr)
  CALL  MPI_TYPE_COMMIT(MPI_WENO_TYPE, MPIdata%iErr)
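
Since the count argument is 1 in both calls, I think the same nested type could also be built with MPI_TYPE_CONTIGUOUS (only a sketch, same variable names as above):

  ! Sketch only: with count = 1 the stride of MPI_TYPE_VECTOR is never used,
  ! so a contiguous type of the same length should be equivalent.
  CALL  MPI_TYPE_CONTIGUOUS(Ncoeff_MLS, MPI_DOUBLE_PRECISION, coltype, MPIdata%iErr)
  CALL  MPI_TYPE_COMMIT(coltype, MPIdata%iErr)
  !
  CALL  MPI_TYPE_CONTIGUOUS(nVar, coltype, MPI_WENO_TYPE, MPIdata%iErr)
  CALL  MPI_TYPE_COMMIT(MPI_WENO_TYPE, MPIdata%iErr)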


Do you believe the problem is here?

Is this also how Intel MPI creates a datatype?

 

What do you think?

