
Maximum MPI Buffer Dimension


Hi,

is there a maximum MPI buffer size? I have a buffer size problem in my MPI code when trying to MPI_Pack large arrays. The offending instruction is the first pack call:

CALL MPI_PACK( VAR(GIB,LFMG)%R,LVB,MPI_DOUBLE_PRECISION,BUF,LBUFB,ISZ,MPI_COMM_WORLD,IE )

where the double precision array R has LVB = 6331625 elements, BUF has a dimension of 354571000, and LBUFB = 354571000*8 = 2836568000 (since I also have to send six other arrays with the same dimension as R).

The error output is the following:

Fatal error in PMPI_Pack: Invalid count, error stack:
PMPI_Pack(272): MPI_Pack(inbuf=0x2b4384000010, incount=6331625, MPI_DOUBLE_PRECISION, outbuf=0x2b51e593d010, outcount=-1458399296, position=0x7fffe24fbaa8, MPI_COMM_WORLD) failed
PMPI_Pack(190): Negative count, value is -1458399296
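
For reference, here is a minimal standalone sketch of the size arithmetic (hypothetical program and variable names, not taken from my actual code): the requested size of 2836568000 does not fit in a default 4-byte INTEGER, and after 32-bit wraparound it matches the negative count reported above.

PROGRAM pack_size_check
   USE, INTRINSIC :: ISO_FORTRAN_ENV, ONLY: INT32, INT64
   IMPLICIT NONE
   INTEGER(INT64) :: LBUFB
   ! Size I pass as the pack buffer length
   LBUFB = 354571000_INT64 * 8_INT64
   PRINT *, 'Requested pack buffer size       :', LBUFB             ! 2836568000
   PRINT *, 'Largest default (4-byte) INTEGER :', HUGE(1_INT32)     ! 2147483647
   PRINT *, 'Same value after 32-bit wraparound:', LBUFB - 2_INT64**32   ! -1458399296
END PROGRAM pack_size_check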

It is Fortran 2008 code using Intel MPI on a cluster with InfiniBand interconnect between nodes. Here are the versions:

ifort (IFORT) 15.0.0 20140723

Intel(R) MPI Library for Linux* OS, Version 5.0 Update 1 Build 20140709

So, how can I solve the problem? Is there an environment variable to set in order to increase the buffer size limit? I could break up the MPI_PACK into multiple MPI_SEND calls (it is a routine executed once at startup, so performance is not an issue; a rough sketch of what I mean is below), but before doing this, I would like to be sure about the cause of the problem.
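
This is roughly the fallback I have in mind, as a self-contained sketch (hypothetical program name, array layout, and rank assignment; in the real code the arrays live inside VAR(GIB,LFMG)):

PROGRAM split_send_sketch
   USE MPI
   IMPLICIT NONE
   INTEGER, PARAMETER :: LVB = 6331625, NARR = 7   ! R plus the six other arrays
   DOUBLE PRECISION, ALLOCATABLE :: A(:,:)
   INTEGER :: IE, RANK, K, STAT(MPI_STATUS_SIZE)

   CALL MPI_INIT(IE)
   CALL MPI_COMM_RANK(MPI_COMM_WORLD, RANK, IE)
   ALLOCATE(A(LVB, NARR))
   IF (RANK == 0) THEN
      A = 1.0D0
      ! One MPI_SEND per array: each count (6331625 doubles) stays well
      ! below the limit of a default 4-byte INTEGER.
      DO K = 1, NARR
         CALL MPI_SEND(A(:, K), LVB, MPI_DOUBLE_PRECISION, 1, K, &
                       MPI_COMM_WORLD, IE)
      END DO
   ELSE IF (RANK == 1) THEN
      DO K = 1, NARR
         CALL MPI_RECV(A(:, K), LVB, MPI_DOUBLE_PRECISION, 0, K, &
                       MPI_COMM_WORLD, STAT, IE)
      END DO
   END IF
   CALL MPI_FINALIZE(IE)
END PROGRAM split_send_sketch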

Thank you in advance.

