Performance difference running IMB-MPI1 with mpirun -PSM2 and -OFI


Hi, I found that there is a significant performance difference between running

I_MPI_DEBUG=1 mpirun -PSM2 -host node1 -n 1 ./IMB-MPI1 Sendrecv : -host node2 -n 1 ./IMB-MPI1
......
       #bytes #repetitions  t_min[usec]  t_max[usec]  t_avg[usec]   Mbytes/sec
......
      4194304           10       364.40       364.52       364.46     23012.87

and

I_MPI_DEBUG=1 mpirun -OFI -host node1 -n 1 ./IMB-MPI1 Sendrecv : -host node2 -n 1 ./IMB-MPI1
......
       #bytes #repetitions  t_min[usec]  t_max[usec]  t_avg[usec]   Mbytes/sec
......
      4194304           10       487.40       487.80       487.60     17196.66
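
For the 4 MB message size that is roughly 25% lower throughput (17196.66 / 23012.87 ≈ 0.75) and about 34% higher average time (487.60 / 364.46 ≈ 1.34) with -OFI compared to -PSM2.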

The output of the latter seems to indicate that it uses the psm2 backend too.

[0] MPID_nem_ofi_init(): used OFI provider: psm2
[0] MPID_nem_ofi_init(): max_buffered_send 64
[0] MPID_nem_ofi_init(): max_msg_size 64
[0] MPID_nem_ofi_init(): rcd switchover 32768
[0] MPID_nem_ofi_init(): cq entries count 8
[0] MPID_nem_ofi_init(): MPID_REQUEST_PREALLOC 128
#------------------------------------------------------------
#    Intel (R) MPI Benchmarks 2018 Update 1, MPI-1 part    
#------------------------------------------------------------
# Date                  : Tue Sep 10 17:36:59 2019
# Machine               : x86_64
# System                : Linux
# Release               : 4.4.175-89-default
# Version               : #1 SMP Thu Feb 21 16:05:09 UTC 2019 (585633c)
# MPI Version           : 3.1
# MPI Thread Environment: 
.......

It gets even weirder when I run with I_MPI_FABRICS set: I only get an error.

I_MPI_DEBUG=1 I_MPI_FABRICS=shm,psm2 mpirun -host node81 -n 1 ./IMB-MPI1 Sendrecv : -host node82 -n 1 ./IMB-MPI1
[1] MPI startup: syntax error in intranode path of I_MPI_FABRICS = shm,psm2 and fallback is disabled, allowed value(s) shm,ofi,tmi,dapl,ofa,tcp
[0] MPI startup: syntax error in intranode path of I_MPI_FABRICS = shm,psm2 and fallback is disabled, allowed value(s) shm,ofi,tmi,dapl,ofa,tcp

Is this performance difference expected? If so, can I make mpirun default to -PSM2 through an environment variable or configuration file (other than aliasing mpirun to "mpirun -PSM2", of course)?
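
For example, would something like the following be the supported way to make that the default? I am assuming here that -PSM2 corresponds to the tmi fabric with the psm2 provider and that I_MPI_FABRICS takes a colon between the intranode and internode parts, but I have not confirmed either assumption:

# Assumption (unverified): -PSM2 maps to the tmi fabric with the psm2 provider
export I_MPI_FABRICS=shm:tmi
export I_MPI_TMI_PROVIDER=psm2
I_MPI_DEBUG=1 mpirun -host node1 -n 1 ./IMB-MPI1 Sendrecv : -host node2 -n 1 ./IMB-MPI1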

