For older Intel MPI versions, we were told to set this:

    export I_MPI_HYDRA_PMI_CONNECT=alltoall

Is this still needed with Intel MPI 5.2.1? And is 5.2.1 the latest version?
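For context, here's roughly how it sits in our job scripts today (the rank count and application name are just placeholders):

    # set before launching; picked up by the Hydra process manager
    export I_MPI_HYDRA_PMI_CONNECT=alltoall
    mpirun -n 64 ./our_app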
We also set I_MPI_FABRICS="shm:tmi", or run mpirun with -PSM2. Is this the correct setting for the Omni-Path fabric? The two forms we've been using are sketched below.
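To be concrete (I_MPI_TMI_PROVIDER=psm2 is our guess at pinning the TMI provider, and the rank count/app name are placeholders; please correct me if either form is wrong):

    # form 1: shared memory intra-node, TMI inter-node, PSM2 as the TMI provider
    export I_MPI_FABRICS=shm:tmi
    export I_MPI_TMI_PROVIDER=psm2
    mpirun -n 64 ./our_app

    # form 2: let mpirun select the PSM2 path directly
    mpirun -PSM2 -n 64 ./our_app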
Do you have any other environment variables or options we should explore to get optimal performance from Intel MPI on Omni-Path (Broadwell hosts, RHEL)? We're willing to experiment a bit for the best performance.
Thanks!
Ron