Performance loss migrating from Xeon X5550 to Xeon E5-2650 v2
Hi, I'm migrating in-house Fortran software from a cluster with Intel Xeon X5550 processors to a cluster with Intel Xeon E5-2650 v2 processors, and I am experiencing a loss of performance. I...
View Article
mpirun and LSF
Hi, I'm trying to use the -ppn (or -perhost) option with Hydra and LSF 9.1, but it doesn't work (nodes have 16 cores): $ bsub -q q_32p_1h -I -n 32 mpirun -perhost 8 -np 16 ./a.out Job <750954> is...
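For reference, a minimal sketch of one workaround that is often suggested for this situation. Under a batch scheduler, Intel MPI's Hydra may take process placement from the scheduler's allocation and ignore -perhost; the I_MPI_JOB_RESPECT_PROCESS_PLACEMENT variable (where supported by the Intel MPI version in use) tells it to honor the mpirun options instead. The queue name and job line below simply mirror the post; treat the variable as something to verify against your Intel MPI documentation.

```
# Sketch (unverified for this exact setup): ask Hydra to respect
# -perhost instead of the scheduler-provided placement.
export I_MPI_JOB_RESPECT_PROCESS_PLACEMENT=0
bsub -q q_32p_1h -I -n 32 mpirun -perhost 8 -np 16 ./a.out
```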
View Article
MPI within parallel_studio_xe_2016_update2 not working under certain conditions
Hi, we just migrated from XE 2013 to XE 2016 Update 2. We use the compiler suite and the MPI library to build the MPI environment for ab initio software such as PWscf (Quantum ESPRESSO) and OpenMX. Before...
View Article
MPI process hangs on 'MPI_Finalize'
Hi, when I run my MPI application over 40 machines, one MPI process does not finish and hangs on 'MPI_Finalize' (the MPI processes on the other machines show zero CPU usage). Below, I...
View Article
MPI generates numerous SCIF/scif_connect failure warnings
I am running a heterogeneous job on the host and a Xeon Phi coprocessor. If I run the MPI job on just the host or just the card, everything is smooth. When I split the job between the host and the Xeon...
View Article
mmap() + MPI one-sided communication fails when DAPL UD enabled
Hi! I used a trick to read a page located on a remote machine's disk (mmap()ing the whole file on each machine and creating MPI one-sided communication windows on it). It works fine...
View Article
error while loading shared libraries: libiomp5.so: cannot open shared object...
I wrote an MPI program to run on Intel Xeon Phi in native mode. I am using the Stampede supercomputer. I have defined OMP pragmas in the code. I compiled using the following command: $ mpiicc program.c...
View Article
mvapich2-2.2b and intel-15.0.4
Hi all, apologies if this is not the correct forum. I am trying to compile mvapich2-2.2b with the Intel 2015 compiler (15.0.4) and I am getting the following error during make: make[2]: Entering...
View Article
Allocate the memory of an entire node on a single MPI process
Dear all, I want to benchmark an implementation on cluster architectures. The number of processes need not be high, but I need as much memory as possible for each MPI process. For example, I have...
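A common pattern for this kind of setup is to place exactly one rank per node, so each rank can allocate (nearly) the whole node's memory. A minimal sketch with Intel MPI's Hydra launcher; the hostnames, node count, and binary name below are placeholders, not taken from the post:

```
# Hypothetical sketch: one MPI rank per node across 4 nodes,
# so each rank has the node's full memory to itself.
cat > hosts.txt <<EOF
node01
node02
node03
node04
EOF
mpirun -machinefile hosts.txt -ppn 1 -np 4 ./benchmark
```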
View Article
libgcc independent binary
Hi there, I want to generate binaries that are independent of libgcc. This can be done with the following compile options: ifort -O0 -fp-model source -ip -inline-factor=100 -unroll-aggressive x.f90...
View Article
MPI_Finalize() won't finalize if stdout and stderr are redirected via freopen
Hi, I have a problem with Intel MPI (5.1 Update 3) and redirection of stdout and stderr. When launched with multiple processes, if both stdout and stderr are redirected to (two different) files, then...
View Article
MPI_Comm_rank, MPI_THREAD_MULTIPLE, and performance
Hi everyone, we found the following behavior in Intel MPI (5.0.3) with both the Intel compilers and gcc: in a hybrid OpenMP-MPI environment, the performance of MPI_Comm_rank drops if MPI is initialized...
View Article
Installing an older version of Intel Parallel Studio XE
Hi, I wish to install Intel Parallel Studio 2015 using a 2016 named-user license on our Linux cluster. Do I use the host ID of the head node so that the compiler is available on all nodes? Also, I'm not a root...
View Article
nested MPI application
Hi there, I have an MPI application, say p1.exe, compiled with the command mpiifort p1.f90 /Od /Qopenmp /link. The program receives a parameter from the command line at runtime, p1.exe SRN:n. When n=1, the MPI is...
View Article
DAPL startup: RLIMIT_MEMLOCK too small Error
I am trying to execute my code on 25 processors using Intel MPI (parallel_studio_xe_2016_update2 Cluster Edition). Even after setting the maximum locked memory to unlimited, I am...
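For anyone hitting the same warning: DAPL needs a large locked-memory limit on every compute node, not just the node where the job is launched. A sketch for checking and persistently raising the limit, assuming a standard Linux setup with pam_limits:

```shell
# Show the current max locked memory for this shell; DAPL wants this
# to be large or "unlimited".
ulimit -l

# To make a higher limit persistent, add these lines to
# /etc/security/limits.conf on every compute node (requires root),
# then log in again:
#   * soft memlock unlimited
#   * hard memlock unlimited
```

Note that a limit raised interactively with `ulimit -l` applies only to the current shell and its children; jobs started by a scheduler daemon inherit the daemon's limits, which is why the limits.conf change (or an equivalent setting in the daemon's init script) has to be made on each node.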
View Article
PS cluster V16.0 update 2 Windows install issue
Installing the Windows version of Parallel Studio XE 2016 Update 2 Cluster Edition over/after Update 1 (without uninstalling Update 1) bombs out while trying to write to (insufficient...
View Article
mpi_reduce issue with mpi_f08
I am running the following test code on Intel MPI 5.1.2, compiling with ifort 16.0.1:
program testreduce
  use mpi_f08
  implicit none
  integer :: ierr, id, nnodes
  real(kind(1.d0)), allocatable :: a(:), b(:)...
View Article
2 nodes on Windows 7 x64 assistance
I installed Parallel Studio XE 2016 Update 2 Cluster Edition on Windows 7 Pro x64 (on two systems) and am attempting to run the MPI test program. Each system can run the test program on itself, but...
View Article
Intel® Parallel Studio XE 2017 Beta is now open – register and provide feedback!
Intel® Parallel Studio XE 2017 Beta brings together exciting new technologies along with improvements to Intel's existing software development tools. Registration gets you early access to the Intel®...
View Article
A Bus error
Hi all, I'm running a pretty heavy MPI application (the WRF model) on Linux and get a bus error (please see the output below for the type of error). Any idea how to isolate the cause (specific...
View Article