Hi all,
I installed two Intel Omni-Path Fabric cards in two Xeon servers,
following the instructions on this web page: https://software.intel.com/en-us/articles/using-intel-omni-path-architec...
The performance test in that article shows the network reaching the expected 100 Gb/s rate (4194304 10 360.39 360.39 360.39 23276.25).
On the network I deployed, I achieved only half of that performance (4194304 10 661.40 661.40 661.40 12683.17).
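To make the comparison concrete: IMB Sendrecv appears to report aggregate two-way throughput with MB = 10^6 bytes (2 x 4194304 B / 661.40 us ~= 12683 MB/s, which matches the last row of my log below), so roughly:

  reference result: 23276.25 MB/s x 8 ~= 186 Gb/s total, about 93 Gb/s per direction
  my result:        12683.17 MB/s x 8 ~= 101 Gb/s total, about 51 Gb/s per direction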
Is there some additional configuration needed to achieve the full 100 Gb/s with Omni-Path?
Here is the complete output of the benchmark run:
[silvio@phi03 ~]$ mpirun -PSM2 -host 10.0.0.3 -n 1 /opt/intel/impi/2018.1.163/bin64/IMB-MPI1 Sendrecv : -host 10.0.0.1 -n 1 /opt/intel/impi/2018.1.163/bin64/IMB-MPI1 Sendrecv
#------------------------------------------------------------
# Intel (R) MPI Benchmarks 2018 Update 1, MPI-1 part
#------------------------------------------------------------
# Date : Fri Feb 2 11:14:01 2018
# Machine : x86_64
# System : Linux
# Release : 3.10.0-693.17.1.el7.x86_64
# Version : #1 SMP Thu Jan 25 20:13:58 UTC 2018
# MPI Version : 3.1
# MPI Thread Environment:
# Calling sequence was:
# /opt/intel/impi/2018.1.163/bin64/IMB-MPI1 Sendrecv
# Minimum message length in bytes: 0
# Maximum message length in bytes: 4194304
#
# MPI_Datatype : MPI_BYTE
# MPI_Datatype for reductions : MPI_FLOAT
# MPI_Op : MPI_SUM
#
#
# List of Benchmarks to run:
# Sendrecv
#-----------------------------------------------------------------------------
# Benchmarking Sendrecv
# #processes = 2
#-----------------------------------------------------------------------------
       #bytes #repetitions  t_min[usec]  t_max[usec]  t_avg[usec]   Mbytes/sec
            0         1000         1.92         1.92         1.92         0.00
            1         1000         1.85         1.85         1.85         1.08
            2         1000         1.84         1.84         1.84         2.17
            4         1000         1.84         1.84         1.84         4.35
            8         1000         1.76         1.76         1.76         9.10
           16         1000         2.07         2.07         2.07        15.44
           32         1000         2.06         2.07         2.07        30.98
           64         1000         2.02         2.02         2.02        63.46
          128         1000         2.08         2.08         2.08       123.26
          256         1000         2.11         2.11         2.11       242.41
          512         1000         2.25         2.25         2.25       454.30
         1024         1000         3.56         3.56         3.56       575.46
         2048         1000         4.19         4.19         4.19       976.91
         4096         1000         5.16         5.16         5.16      1586.69
         8192         1000         7.15         7.15         7.15      2290.80
        16384         1000        14.32        14.32        14.32      2288.44
        32768         1000        20.77        20.77        20.77      3154.69
        65536          640        26.08        26.09        26.09      5024.04
       131072          320        34.77        34.77        34.77      7538.32
       262144          160        53.03        53.03        53.03      9886.58
       524288           80        93.55        93.55        93.55     11208.78
      1048576           40       172.25       172.28       172.26     12173.26
      2097152           20       355.15       355.21       355.18     11808.02
      4194304           10       661.40       661.40       661.40     12683.17
# All processes entering MPI_Finalize
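If it helps, these are the checks I can run on my side; this is only a rough sketch (opainfo comes from the Intel OPA basic tools if installed, and the lspci match string and the I_MPI_* variables are my assumptions for this Intel MPI 2018 / hfi1 setup, so the exact names and fields may differ):

# Confirm the HFI link is active and negotiated at full width/speed:
opainfo

# Confirm the card sits in a PCIe Gen3 x16 slot (look at LnkSta; may need root):
lspci -vv | grep -A 60 -i "omni-path" | grep -i lnksta

# Force the PSM2 path explicitly and let Intel MPI report which fabric it picked:
export I_MPI_FABRICS=shm:tmi
export I_MPI_TMI_PROVIDER=psm2
export I_MPI_DEBUG=5
mpirun -host 10.0.0.3 -n 1 /opt/intel/impi/2018.1.163/bin64/IMB-MPI1 Sendrecv : \
       -host 10.0.0.1 -n 1 /opt/intel/impi/2018.1.163/bin64/IMB-MPI1 Sendrecv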
Thanks in advance!
Silvio