tmi

Hello everyone,

I am trying to use the TMI fabric with the Intel MPI Library, but when I run my application, which uses dynamic process management via MPI_Comm_spawn, it fails to start. If I run without any I_MPI_FABRICS argument, it works fine. Could someone please suggest what I might be doing wrong? The lines marked with "-->" are debug statements printed by my program.
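
For reference, here is a minimal sketch of the parent side. It is not my real program, just the shape of it: the child binary name "./child", spawning a single rank, and the info/root arguments are all simplifications.

//***************************

/* parent.c -- simplified sketch of the spawning side */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int provided;
    int errcodes[1];
    MPI_Comm intercomm;

    MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);
    printf("-->provided: %s\n",
           provided == MPI_THREAD_MULTIPLE ? "MPI_THREAD_MULTIPLE"
                                           : "a lower thread level");
    printf("-->Initializing MPI environment...\n");
    /* ... application-specific setup ... */
    printf("-->Finished initializing MPI environment.\n");

    printf("-->Spawning child binary ...\n");
    /* This is the call that fails under I_MPI_FABRICS=shm:tmi: the
     * spawned process cannot connect back on the published port. */
    MPI_Comm_spawn("./child", MPI_ARGV_NULL, 1, MPI_INFO_NULL,
                   0, MPI_COMM_SELF, &intercomm, errcodes);

    printf("-->Finished.\n");
    MPI_Finalize();
    printf("-->fine\n");
    return 0;
}

//***************************

This is the failing run with shm:tmi: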

//***************************

/opt/intel/impi/4.1.1.036/intel64/bin/mpirun -n 1 -perhost 1 -f ./mpd.hosts -env I_MPI_DEBUG 2 -env I_MPI_FABRICS shm:tmi ./parent

[0] MPI startup(): shm and tmi data transfer modes
-->provided: MPI_THREAD_MULTIPLE
-->Initializing MPI environment...
-->Finished initializing MPI environment.
-->Spawning child binary ...

Fatal error in PMPI_Init_thread: Invalid port, error stack:
MPIR_Init_thread(658)............................:
MPID_Init(320)...................................: spawned process group was unable to connect back to the parent on port <tag#0$epaddr_size#16$epaddr#0C00000000000000030A0C0000000000$>
MPID_Comm_connect(206)...........................:
MPIDI_Comm_connect(579)..........................: Named port tag#0$epaddr_size#16$epaddr#0C00000000000000030A0C0000000000$ does not exist
MPIDI_Comm_connect(380)..........................:
MPIDI_Create_inter_root_communicator_connect(134):
MPIDI_CH3_Connect_to_root(309)...................:
MPID_nem_tcp_connect_to_root(1082)...............:
MPID_nem_tcp_get_addr_port_from_bc(1236).........: Missing port or invalid host/port description in business card

//***************************
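
For completeness, the child side is a plain MPI program; a minimal sketch (again simplified from the real code):

//***************************

/* child.c -- simplified sketch of the spawned side */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int provided;
    MPI_Comm parent;

    /* The "Invalid port" error above is raised inside this call, while
     * the spawned process tries to connect back to the parent. */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);

    MPI_Comm_get_parent(&parent);
    if (parent != MPI_COMM_NULL)
        printf("child: connected to parent\n");

    MPI_Finalize();
    return 0;
}

//***************************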

$ cat mpd.hosts

10.20.xx.xx
10.20.xx.xx

//***************************

// without any I_MPI_FABRICS argument

/opt/intel/impi/4.1.1.036/intel64/bin/mpirun -n 1 -perhost 1 -f ./mpd.hosts -env I_MPI_DEBUG 2 ./parent

[0] MPI startup(): shm data transfer mode

-->provided: MPI_THREAD_MULTIPLE
-->Initializing MPI environment...
-->Finished initializing MPI environment.
-->Spawning child binary ...
[0] MPI startup(): cannot open dynamic library libdat2.so.2
[0] MPI startup(): cannot open dynamic library libdat2.so
[0] MPI startup(): cannot open dynamic library libdat.so.1
[0] MPI startup(): cannot open dynamic library libdat.so
[0] MPI startup(): shm and tcp data transfer modes
[0] MPI startup(): reinitialization: shm and tcp data transfer modes

libibverbs: Warning: no userspace device-specific driver found for /sys/class/infiniband_verbs/uverbs1
librdmacm: Fatal: unable to open RDMA device
(repeated 10 times)

-->Finished.
-->fine

//***************************

