Hello,
I'm running into a fatal error when trying to run the simple MPI "Hello world" test with mpirun -np 2 and above. It works fine with a single process. See the output below. Do you have an idea what the problem is?
lion@sol48: ~/FHI-aims/09_06_20/testcases/H2O-relaxation $ mpiifort -v
mpiifort for the Intel(R) MPI Library 2019 for Linux*
Copyright 2003-2018, Intel Corporation.
ifort version 19.0.0.117

lion@sol48: ~/FHI-aims/09_06_20/testcases/H2O-relaxation $ cat test.f90
!
! Copyright 2003-2018 Intel Corporation.
!
! This software and the related documents are Intel copyrighted materials, and
! your use of them is governed by the express license under which they were
! provided to you (License). Unless the License provides otherwise, you may
! not use, modify, copy, publish, distribute, disclose or transmit this
! software or the related documents without Intel's prior written permission.
!
! This software and the related documents are provided as is, with no express
! or implied warranties, other than those that are expressly stated in the
! License.
!
program main
    use mpi
    implicit none
    integer i, size, rank, namelen, ierr
    character (len=MPI_MAX_PROCESSOR_NAME) :: name
    integer stat(MPI_STATUS_SIZE)

    call MPI_INIT (ierr)
    call MPI_COMM_SIZE (MPI_COMM_WORLD, size, ierr)
    call MPI_COMM_RANK (MPI_COMM_WORLD, rank, ierr)
    call MPI_GET_PROCESSOR_NAME (name, namelen, ierr)

    if (rank.eq.0) then
        print *, 'Hello world: rank ', rank, ' of ', size, ' running on ', name
        do i = 1, size - 1
            call MPI_RECV (rank, 1, MPI_INTEGER, i, 1, MPI_COMM_WORLD, stat, ierr)
            call MPI_RECV (size, 1, MPI_INTEGER, i, 1, MPI_COMM_WORLD, stat, ierr)
            call MPI_RECV (namelen, 1, MPI_INTEGER, i, 1, MPI_COMM_WORLD, stat, ierr)
            name = ''
            call MPI_RECV (name, namelen, MPI_CHARACTER, i, 1, MPI_COMM_WORLD, stat, ierr)
            print *, 'Hello world: rank ', rank, ' of ', size, ' running on ', name
        enddo
    else
        call MPI_SEND (rank, 1, MPI_INTEGER, 0, 1, MPI_COMM_WORLD, ierr)
        call MPI_SEND (size, 1, MPI_INTEGER, 0, 1, MPI_COMM_WORLD, ierr)
        call MPI_SEND (namelen, 1, MPI_INTEGER, 0, 1, MPI_COMM_WORLD, ierr)
        call MPI_SEND (name, namelen, MPI_CHARACTER, 0, 1, MPI_COMM_WORLD, ierr)
    endif

    call MPI_FINALIZE (ierr)
end

lion@sol48: ~/FHI-aims/09_06_20/testcases/H2O-relaxation $ mpiifort test.f90

lion@sol48: ~/FHI-aims/09_06_20/testcases/H2O-relaxation $ mpirun -np 1 ./a.out
 Hello world: rank 0 of 1 running on sol48

lion@sol48: ~/FHI-aims/09_06_20/testcases/H2O-relaxation $ mpirun -np 2 ./a.out
Abort(1093903) on node 0 (rank 0 in comm 0): Fatal error in PMPI_Init: Other MPI error, error stack:
MPIR_Init_thread(607)..........:
MPID_Init(731).................:
MPIR_NODEMAP_build_nodemap(710): PMI_KVS_Get returned 4
In: PMI_Abort(1093903, Fatal error in PMPI_Init: Other MPI error, error stack:
MPIR_Init_thread(607)..........:
MPID_Init(731).................:
MPIR_NODEMAP_build_nodemap(710): PMI_KVS_Get returned 4)
Abort(1093903) on node 1 (rank 1 in comm 0): Fatal error in PMPI_Init: Other MPI error, error stack:
MPIR_Init_thread(607)..........:
MPID_Init(731).................:
MPIR_NODEMAP_build_nodemap(710): PMI_KVS_Get returned 4
In: PMI_Abort(1093903, Fatal error in PMPI_Init: Other MPI error, error stack:
MPIR_Init_thread(607)..........:
MPID_Init(731).................:
MPIR_NODEMAP_build_nodemap(710): PMI_KVS_Get returned 4)