Hi,
The Intel MPI Library 2019 uses the PSM2 interface for Omni-Path fabrics and the PSM (1.x) interface for QDR InfiniBand fabrics. For very small message sizes, is there a switch-over point in the MPI implementation for the Omni-Path network but not for the InfiniBand interconnect? Further, what are the default eager limits for intra-node and inter-node communication in Intel MPI 2019, and which control variables tune these values? Thanks
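For context, a sketch of how these limits are typically adjusted via environment variables. The names below (I_MPI_EAGER_THRESHOLD, I_MPI_INTRANODE_EAGER_THRESHOLD) come from earlier Intel MPI releases and the values are placeholders; whether the 2019 release still honors them is exactly the question being asked, so verify against the 2019 documentation before relying on them:

```shell
# Assumed variable names from pre-2019 Intel MPI docs; values are illustrative.
# Inter-node eager/rendezvous switch-over point, in bytes:
export I_MPI_EAGER_THRESHOLD=262144

# Intra-node (shared-memory) eager limit, in bytes:
export I_MPI_INTRANODE_EAGER_THRESHOLD=65536

# Running with elevated debug output shows which settings actually took effect:
# mpirun -genv I_MPI_DEBUG=5 -n 2 ./a.out
```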
Default switch-over limits for Intel MPI Library 2019