UCX_TLS settings for Intel MPI 2019.6.166

With Intel MPI 2019.6.166 (IPSXE 2020.0.166, Mellanox HDR, MLNX_OFED_LINUX-4.7-3.2.9.0) I am getting 2.5x slower performance compared to another cluster with Intel MPI 2019.1 (Mellanox EDR, MLNX_OFED_LINUX-5.0-2.1.8.0).
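
For reference, this is roughly how I have been checking which provider and transports actually get picked at startup (a sketch only; I am assuming I_MPI_DEBUG and the standard libfabric/UCX logging variables apply to this build, and ./my_app is just a placeholder for the real benchmark):

$ export I_MPI_DEBUG=5          # prints the selected libfabric provider at startup
$ export FI_LOG_LEVEL=info      # libfabric-level detail
$ export UCX_LOG_LEVEL=info     # UCX-level detail: which transports/devices are opened
$ mpiexec -n 2 ./my_app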

I suspect that Intel MPI 2019.6.166 is not picking the right IB transport. Which of the values below should be set in the UCX_TLS environment variable for mpiexec?

$ ucx_info -d | grep Transport
#   Transport: posix
#   Transport: sysv
#   Transport: self
#   Transport: tcp
#   Transport: tcp
#   Transport: rc
#   Transport: rc_mlx5
#   Transport: dc_mlx5
#   Transport: ud
#   Transport: ud_mlx5
#   Transport: cm
#   Transport: cma
#   Transport: knem
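
In case it helps to show what I mean, this is the kind of invocation I had in mind (a sketch only; the transport names are taken from the ucx_info output above, which combination is actually correct is exactly my question, and I am assuming the UCX-backed path in Intel MPI passes UCX_* variables straight through to UCX):

$ export FI_PROVIDER=mlx        # assumption: force the UCX-backed libfabric provider
$ mpiexec -genv UCX_TLS rc_mlx5,ud_mlx5,dc_mlx5,sysv,posix,self -n 64 ./my_app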

Also, Open MPI has the environment variable UCX_NET_DEVICES=mlx5_0:1 to select which IB interface to use. Please let me know the equivalent variable for Intel MPI 2020.
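
This is what I do today on the Open MPI side, and what I would naively try with Intel MPI (assuming the UCX variable is simply passed through to UCX, which is an assumption on my part; ./my_app is again a placeholder):

$ mpirun -x UCX_NET_DEVICES=mlx5_0:1 -np 64 ./my_app      # Open MPI: -x exports the variable to all ranks
$ mpiexec -genv UCX_NET_DEVICES mlx5_0:1 -n 64 ./my_app   # Intel MPI: would this have the same effect?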

# ibstat
CA 'mlx5_0'
        CA type: MT4123

