mpirun seems to set GOMP_CPU_AFFINITY

It appears that Intel MPI is setting GOMP_CPU_AFFINITY. Why, and how do I prevent this?

When I print my env I get:

bash-4.2$ env | grep OMP
OMP_PROC_BIND=true
OMP_PLACES=threads
OMP_NUM_THREADS=2

But when I run env under mpirun, I see that GOMP_CPU_AFFINITY has been set for me. WHY?

bash-4.2$
bash-4.2$ mpirun -n 1 env | grep OMP
OMP_PROC_BIND=true
OMP_NUM_THREADS=2
OMP_PLACES=threads
GOMP_CPU_AFFINITY=0,1

The reason this is a problem is that I'm using the OMP environment variables to control thread affinity. For reference, here are my I_MPI settings:

bash-4.2$ env | grep I_MPI
I_MPI_PIN_DOMAIN=2:compact
I_MPI_FABRICS=shm:tmi
I_MPI_RESPECT_PROCESS_PLACEMENT=0
I_MPI_CC=icc
I_MPI_DEBUG=4
I_MPI_PIN_ORDER=bunch
I_MPI_PIN_RESPECT_CPUSET=off
I_MPI_ROOT=/opt/intel-mpi/2017
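
(Side note, in case it helps narrow this down: since OMP_DISPLAY_ENV is standard OpenMP 4.0 and -genv is Intel MPI's way of pushing a variable to the ranks, something like the line below should make the OpenMP runtime print the bind/places settings it actually ends up with at startup:)

mpirun -genv OMP_DISPLAY_ENV verbose -n 1 ./hello_mpi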

Why this is a problem: I get a bunch of bizarre warnings that OMP_PROC_BIND and OMP_PLACES are being ignored because GOMP_CPU_AFFINITY has been defined, plus an "invalid OS proc ID" warning for one of the procs listed in GOMP_CPU_AFFINITY, like this:

bash-4.2$ mpirun -n 1 ./hello_mpi
OMP: Warning #181: OMP_PROC_BIND: ignored because GOMP_CPU_AFFINITY has been defined
OMP: Warning #181: OMP_PLACES: ignored because GOMP_CPU_AFFINITY has been defined
OMP: Warning #123: Ignoring invalid OS proc ID 1.

 hello from master thread
[0] MPI startup(): Multi-threaded optimized library
[0] MPI startup(): shm and tmi data transfer modes
[0] MPI startup(): Rank    Pid      Node name           Pin cpu
[0] MPI startup(): 0       81689    kit002.localdomain  {0,36}
hello_parallel.f: Number of tasks=  1 My rank=  0 My name=kit002.localdomain
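
The obvious stopgap would be to strip the variable in a per-rank wrapper before the binary starts, something like the sketch below (the wrapper name is just illustrative), so that OMP_PROC_BIND/OMP_PLACES should take effect again. But that feels like papering over whatever mpirun is doing, and I'd rather stop it from exporting GOMP_CPU_AFFINITY in the first place.

#!/bin/bash
# no_gomp.sh -- illustrative wrapper: drop the injected GOMP_CPU_AFFINITY,
# then exec the real program so the OMP_* variables are honored again.
unset GOMP_CPU_AFFINITY
exec "$@"

which would be launched as:  mpirun -n 1 ./no_gomp.sh ./hello_mpi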

I have a hybrid MPI/OpenMP code compiled with Intel 2017 and run with Intel MPI 2017 on a Linux cluster under SLURM. The code has a simple OMP master region that prints "hello from master thread"; after that it prints the number of tasks, this process's rank, and the host name of the node. Simple stuff:

program hello_parallel

  ! Include the MPI library definitions:
  include 'mpif.h'

  integer numtasks, rank, ierr, rc, len, i
  character*(MPI_MAX_PROCESSOR_NAME) name

  !$omp master
   print*, "hello from master thread"
  !$omp end master

  ! Initialize the MPI library:
  call MPI_INIT(ierr)
  if (ierr .ne. MPI_SUCCESS) then
     print *,'Error starting MPI program. Terminating.'
     call MPI_ABORT(MPI_COMM_WORLD, rc, ierr)
  end if

  ! Get the number of processors this job is using:
  call MPI_COMM_SIZE(MPI_COMM_WORLD, numtasks, ierr)

  ! Get the rank of the processor this thread is running on.  (Each
  ! processor has a unique rank.)
  call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)

  ! Get the name of this processor (usually the hostname)
  call MPI_GET_PROCESSOR_NAME(name, len, ierr)
  if (ierr .ne. MPI_SUCCESS) then
     print *,'Error getting processor name. Terminating.'
     call MPI_ABORT(MPI_COMM_WORLD, rc, ierr)
  end if

  print "('hello_parallel.f: Number of tasks=',I3,' My rank=',I3,' My name=',A,'')",&
       numtasks, rank, trim(name)

  ! Tell the MPI library to release all resources it is using:
  call MPI_FINALIZE(ierr)

end program hello_parallel

Compiled simply:   mpiifort -g -qopenmp -o hello_mpi hello_mpi.f90
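
For completeness, this is the kind of SLURM batch script it runs inside (illustrative only; the real script has more in it, but the OMP variables are set exactly as in the env listing above):

#!/bin/bash
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=2
# Illustrative job script -- details differ from the real one.
# OpenMP settings as shown in the env output earlier:
export OMP_NUM_THREADS=2
export OMP_PROC_BIND=true
export OMP_PLACES=threads
mpirun -n 1 ./hello_mpi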

 

