Channel: Clusters and HPC Technology

Code hangs when variables value increases


Hi,

I have the latest Intel MPI 5.1, together with the Intel Fortran compiler, on my own Ubuntu Linux machine. I tried to run my code, but it hangs. It worked fine on different clusters before.

I realised the problem lies with MPI_BCAST. I wrote a very simple test program:

program mpi_bcast_test

    implicit none

    include 'mpif.h'

    integer :: no_vertices, no_surfaces, size, myid, ierr, status
    integer, allocatable :: tmp_mpi_data1(:)

    call MPI_INIT(ierr)
    call MPI_COMM_SIZE(MPI_COMM_WORLD, size, ierr)
    call MPI_COMM_RANK(MPI_COMM_WORLD, myid, ierr)

    ! Root rank sets the mesh sizes; all other ranks receive them below.
    if (myid == 0) then
        no_vertices = 1554
        no_surfaces = 3104
    end if

    call MPI_BCAST(no_surfaces, 1, MPI_INTEGER, 0, MPI_COMM_WORLD, ierr)
    call MPI_BCAST(no_vertices, 1, MPI_INTEGER, 0, MPI_COMM_WORLD, ierr)

    ! Every rank allocates the same-sized buffer before the collective call.
    allocate (tmp_mpi_data1(3*no_surfaces + 11*no_vertices + 1), STAT=status)

    tmp_mpi_data1 = 0
    if (myid == 0) tmp_mpi_data1 = 100

    ! This is the broadcast that hangs.
    call MPI_BCAST(tmp_mpi_data1, 3*no_surfaces + 11*no_vertices + 1, &
                   MPI_INTEGER, 0, MPI_COMM_WORLD, ierr)

    print *, "myid, tmp_mpi_data1(2):", myid, tmp_mpi_data1(2)

    call MPI_FINALIZE(ierr)

end program mpi_bcast_test

If I run it as it is, it hangs at:

call MPI_BCAST(tmp_mpi_data1,3*no_surfaces+11*no_vertices+1,MPI_INTEGER,0,MPI_COMM_WORLD,ierr)

But if I change the values of no_vertices and no_surfaces to small values, like 1 or 2, it works without a problem.
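For scale, the broadcast with the values above is actually quite small, so it shouldn't be anywhere near any message-size limit (a quick check, assuming the default 4-byte MPI_INTEGER):

```python
no_vertices = 1554
no_surfaces = 3104

# Same element count as the Fortran call: 3*no_surfaces + 11*no_vertices + 1
count = 3 * no_surfaces + 11 * no_vertices + 1
print(count)        # 26407 MPI_INTEGER elements
print(count * 4)    # 105628 bytes, i.e. roughly 103 KB
```

So the hang appears somewhere between a handful of integers and ~100 KB.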

I wonder why. Is this a bug in Intel MPI 5.1, or a problem on my end?

Thanks
