
iMPI 5.2.1 Omni-Path tuning


For older Intel MPI versions, we were told to set this:

export I_MPI_HYDRA_PMI_CONNECT=alltoall

Is this still needed with Intel MPI 5.2.1? And is 5.2.1 the latest version?
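For reference, this is roughly how we load and check the version on a compute node; the install path is a placeholder for wherever Intel MPI lives on your system:

# Source the Intel MPI environment (install path is site-specific)
source <impi_install_dir>/intel64/bin/mpivars.sh

# Print the Intel MPI Library version string
mpirun -V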

We also set I_MPI_FABRICS="shm:tmi", or run mpirun with -PSM2. Is this the correct setting for the Omni-Path fabric?
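For context, this is roughly how we launch the runs today; the provider variable, rank count, hostfile, and application name below are just illustrative:

# Shared memory within a node, TMI between nodes (PSM2 for Omni-Path)
export I_MPI_FABRICS=shm:tmi
export I_MPI_TMI_PROVIDER=psm2    # we set this explicitly; not sure it is required

# Or the command-line shortcut instead of the env vars
mpirun -PSM2 -n 64 -hostfile hosts.txt ./our_app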

Are there any other environment variables or options we should explore to get optimal performance from Intel MPI on Omni-Path (Broadwell hosts, RHEL)? We're willing to experiment a bit for the best performance.
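To be concrete about what we mean by experimenting, we were planning a simple sweep over fabric settings using IMB-MPI1 (the benchmark that ships with Intel MPI); the fabric values, rank count, and hostfile here are just the candidates and placeholders we had in mind:

# Compare candidate fabric settings with the Intel MPI Benchmarks
for fabric in shm:tmi shm:ofi ; do
    echo "=== I_MPI_FABRICS=$fabric ==="
    I_MPI_FABRICS=$fabric mpirun -n 32 -hostfile hosts.txt IMB-MPI1 PingPong Allreduce
done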

Thanks!

Ron

