Hi, we are currently standing up a new cluster with Mellanox ConnectX-5 adapters. With Open MPI, MVAPICH2, and Intel MPI 2018 we can run MPI jobs on all 960 cores in the cluster; with Intel MPI 2019, however, we can't get beyond ~300 MPI ranks. If we do, every rank reports the following error:
Abort(273768207) on node 650 (rank 650 in comm 0): Fatal error in PMPI_Comm_split: Other MPI error, error stack:
PMPI_Comm_split(507)...................: MPI_Comm_split(MPI_COMM_WORLD, color=0, key=650, new_comm=0x7911e8) failed
PMPI_Comm_split(489)...................:
MPIR_Comm_split_impl(167)..............:
MPIR_Allgather_intra_auto(145).........: Failure during collective
MPIR_Allgather_intra_auto(141).........:
MPIR_Allgather_intra_brucks(115).......:
MPIC_Sendrecv(344).....................:
MPID_Isend(662)........................:
MPID_isend_unsafe(282).................:
MPIDI_OFI_send_lightweight_request(106):
(unknown)(): Other MPI error
----------------------------------------------------------------------------------------------------------
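For reference, this does not need anything exotic to trigger; a minimal sketch along the lines below (my own test code, not our production application) exercises the same MPI_Comm_split call shown in the error stack (color=0, key=rank), so I would expect it to hit the same allgather path that fails past ~300 ranks:

/* Minimal reproducer sketch: every rank calls MPI_Comm_split on
 * MPI_COMM_WORLD with color=0 and key=rank, matching the failing call
 * reported in the error stack above. */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Comm newcomm;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* color=0, key=rank: same arguments as in the error stack */
    MPI_Comm_split(MPI_COMM_WORLD, 0, rank, &newcomm);

    if (rank == 0)
        printf("MPI_Comm_split completed across %d ranks\n", size);

    MPI_Comm_free(&newcomm);
    MPI_Finalize();
    return 0;
}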
This is with the default FI_PROVIDER, ofi_rxm. If we switch to the "verbs" provider, we can run on all 960 cores, but our tests show an order-of-magnitude increase in latency and much longer run times.
We have also tried installing our own libfabric (built from the git repo; verbose debugging confirms that this libfabric is the one being used), and the behavior does not change.
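In case it is useful, a generic fi_getinfo sketch like the one below (just an illustration, not part of our original testing) can list which providers a given libfabric build exposes, which is another way to confirm the custom install is the one being picked up:

/* Generic libfabric sketch: list every provider/fabric pair the library
 * in use exposes. Build roughly as: gcc list_providers.c -lfabric
 * (include/lib paths depend on where the custom libfabric is installed). */
#include <stdio.h>
#include <rdma/fabric.h>
#include <rdma/fi_errno.h>

int main(void)
{
    struct fi_info *info = NULL, *cur;
    int ret;

    /* NULL hints: ask for every provider/fabric combination available */
    ret = fi_getinfo(FI_VERSION(1, 8), NULL, NULL, 0, NULL, &info);
    if (ret) {
        fprintf(stderr, "fi_getinfo failed: %s\n", fi_strerror(-ret));
        return 1;
    }

    for (cur = info; cur; cur = cur->next)
        printf("provider: %-12s fabric: %s\n",
               cur->fabric_attr->prov_name, cur->fabric_attr->name);

    fi_freeinfo(info);
    return 0;
}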
Is there anything I can change to allow running on all 960 cores with the default ofi_rxm provider? Or is there a way to improve performance with the verbs provider?
For completeness:
OFED: MLNX_OFED_LINUX-4.6-1.0.1.1-rhel7.6-x86_64
CentOS 7.6.1810 (kernel 3.10.0-957.21.3.el7.x86_64)
Intel Parallel Studio version 19.0.4.243
InfiniBand controller: Mellanox Technologies MT27800 Family [ConnectX-5]
Thanks!
Eric