Intel MPI benchmark fails when # bytes > 128: IMB-EXT

Hi Guys,

I just installed Linux and Intel MPI on two machines:

(1) A fairly old (~8 years) Supermicro server with 24 cores (4 × Intel Xeon X7542) and 32 GB of memory. OS: CentOS 7.5

(2) A new HP ProLiant DL380 server with 32 cores (2 × Intel Xeon Gold 6130) and 64 GB of memory. OS: OpenSUSE Leap 15

After installing the OS and Intel MPI, I compiled the Intel MPI Benchmarks (IMB) and ran IMB-EXT:

$ mpirun -np 4 ./IMB-EXT
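
For completeness, the build itself was nothing special; it went roughly along these lines (exact paths and makefile variables depend on the Intel MPI and IMB versions, so treat this as approximate):

$ source /opt/intel/impi/<version>/intel64/bin/mpivars.sh  # put mpicc/mpirun on PATH
$ cd mpi-benchmarks
$ make CC=mpicc IMB-EXT IMB-RMA   # IMB defaults to Intel's mpiicc; I built with GCC via mpicc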

It is quite surprising to me that I hit the same error when running IMB-EXT and IMB-RMA: on both machines the output stops at the 128-byte row and the run fails, even though the two systems differ in OS and everything else (even the GCC version used to compile the benchmarks differs: GCC 6.5.0 on CentOS and GCC 7.3.1 on OpenSUSE).

On the CentOS machine, I get:

#---------------------------------------------------
# Benchmarking Unidir_Put
# #processes = 2
# ( 2 additional processes waiting in MPI_Barrier)
#---------------------------------------------------
#
#    MODE: AGGREGATE
#
       #bytes #repetitions      t[usec]   Mbytes/sec
            0         1000         0.05         0.00
            4         1000        30.56         0.13
            8         1000        31.53         0.25
           16         1000        30.99         0.52
           32         1000        30.93         1.03
           64         1000        30.30         2.11
          128         1000        30.31         4.22

and on the OpenSUSE machine, I get:

#---------------------------------------------------
# Benchmarking Unidir_Put
# #processes = 2
# ( 2 additional processes waiting in MPI_Barrier)
#---------------------------------------------------
#
#    MODE: AGGREGATE
#
       #bytes #repetitions      t[usec]   Mbytes/sec
            0         1000         0.04         0.00
            4         1000        14.40         0.28
            8         1000        14.04         0.57
           16         1000        14.10         1.13
           32         1000        13.96         2.29
           64         1000        13.98         4.58
          128         1000        14.08         9.09

When I don't use mpirun (i.e., only a single process runs IMB-EXT), the benchmark runs through, but Unidir_Put needs >= 2 processes, so that doesn't help much. I also find that the benchmarks based on MPI_Put and MPI_Get are far slower than I would expect from experience. Using MVAPICH on the OpenSUSE machine did not help either. The failing case should be easy to reproduce with just the one benchmark, as sketched below.
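
If I remember the IMB command-line options correctly, the benchmark name can be passed as a positional argument and -msglog bounds the message sizes, so the failing case should be reproducible with just:

$ mpirun -np 2 ./IMB-EXT Unidir_Put               # run only Unidir_Put with 2 ranks, no idle ranks
$ mpirun -np 2 ./IMB-EXT -msglog 7:8 Unidir_Put   # restrict the message sizes to 128 and 256 bytes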

Is there any chance that I am missing something in installing MPI, or in configuring the OS on these machines?
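
If more information would help with the diagnosis, I can rerun with Intel MPI's debug output enabled; as far as I understand the documentation, I_MPI_DEBUG raises the startup verbosity (it prints the selected fabric, among other things) and I_MPI_FABRICS forces a particular fabric, e.g. shared memory on a single node:

$ I_MPI_DEBUG=5 mpirun -np 4 ./IMB-EXT       # verbose startup info, including the chosen fabric
$ I_MPI_FABRICS=shm mpirun -np 4 ./IMB-EXT   # force the shared-memory fabric (single node)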

Thanks a lot in advance,

Jae

