Hello,
I am a bit confused about the run-time behaviour of one of our codes, compiled with Intel Parallel Studio XE 2017.2.050 and run with the corresponding Intel MPI. Depending on the choice of communication layer and eager-threshold settings, the hybrid-parallel code either runs through or crashes, apparently with corrupted data (wrong numbers show up in its data arrays, which then lead to program errors).
What does work:
- I_MPI_FABRICS shm:tcp
- I_MPI_FABRICS shm:ofa with I_MPI_EAGER_THRESHOLD=2097152 (2 MB)
What does not work:
- I_MPI_FABRICS shm:ofa with I_MPI_EAGER_THRESHOLD unset or set too low (the default should be 256 kB)
The problem definitely seems to be in the inter-node communication: setting only I_MPI_INTRANODE_EAGER_THRESHOLD does not help, while switching from InfiniBand (ofa) to Ethernet (tcp) does. For what it is worth, we have not seen the same issue on other clusters (admittedly with different compiler/MPI suite combinations).

Do you have some general information on how such behaviour can be explained, without going into the details of this specific software? Is it possible that data gets overwritten because of too-small message buffers or wrong memory addressing, or something along those lines? Would you recommend starting the debugging on the code side or on the InfiniBand configuration side, and do you have suggestions as to the possible cause?
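To make the question about overwritten data a bit more concrete, below is a minimal sketch of the kind of send-buffer reuse I have in mind. This is hypothetical C/MPI code, not taken from our application, and the 512 kB message size is chosen purely because it lies between the default eager threshold and the 2 MB value that works for us. As far as I understand, the eager protocol copies or sends the data right away and so can hide such an error, while the rendezvous protocol used above the threshold transfers the buffer later and would expose it:

#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical message size: 65536 doubles = 512 kB, i.e. above the
 * default eager threshold (256 kB) but below 2 MB. */
#define N 65536

int main(int argc, char **argv)
{
    int rank;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    double *buf = malloc(N * sizeof(double));
    MPI_Request req;

    if (rank == 0) {
        for (int i = 0; i < N; i++) buf[i] = 1.0;
        MPI_Isend(buf, N, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD, &req);

        /* ERROR: the send buffer is overwritten before MPI_Wait.
         * With eager delivery the data has usually already left the
         * buffer, so this appears to work; with rendezvous the data
         * is pulled later and the receiver may see -1.0 instead. */
        for (int i = 0; i < N; i++) buf[i] = -1.0;

        MPI_Wait(&req, MPI_STATUS_IGNORE);
    } else if (rank == 1) {
        MPI_Recv(buf, N, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        printf("rank 1 received buf[0] = %.1f (expected 1.0)\n", buf[0]);
    }

    free(buf);
    MPI_Finalize();
    return 0;
}

If a pattern like this were hiding somewhere in the code, then raising I_MPI_EAGER_THRESHOLD to 2 MB would only mask the error rather than fix it, if I understand correctly.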
As you might guess by now, I am not a software developer, especially when it comes to parallel applications, so I would greatly appreciate any help and hints.

Thanks and best regards,
Alexander