Channel: Clusters and HPC Technology
Browsing all 927 articles
Browse latest View live

PS XE 2019 MPI not reading from stdin on Windows

Hi, This is a problem associated with the latest release of PS 2019 Cluster Edition. When running an MPI program from mpiexec, process 0 is unable to read stdin, at least on a Windows box (have not installed...
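By default the Hydra launcher forwards stdin only to rank 0, which is the path that appears broken here. A minimal sketch of the affected pattern and a common workaround; `myprog.exe` and `input.txt` are hypothetical placeholders, and the file-argument flag is whatever the program itself accepts, not an mpiexec option:

```shell
# Affected pattern: rank 0 reads the redirected stdin forwarded by mpiexec.
mpiexec -n 4 myprog.exe < input.txt

# Workaround sketch: sidestep stdin forwarding by having the program open
# a named input file itself (flag is application-specific, shown for shape).
mpiexec -n 4 myprog.exe input.txt
```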

View Article


Intel MPI Library Runtime Environment for Windows

Dear Intel Team, If one intends to create software (the "Software") on a Windows platform that utilizes the free version of the Intel MPI Library (the "MPILIB"), does a user of that Software have to...

View Article


How to map consecutive ranks to same node

Hi, Intel Parallel Studio Cluster Edition, 2017 Update 5, on CentOS 7.3. I am trying to run a hybrid parallel NWChem job with 2 ranks per 24-core node, 12 threads per rank. The underlying ARMCI library...
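With the Hydra launcher, `-ppn` fills each node with consecutive ranks before moving to the next, which gives the rank-to-node mapping asked about here. A sketch of the stated layout (2 ranks per node, 12 threads per rank; the 8-rank total and the NWChem invocation are illustrative assumptions):

```shell
# Place ranks 0-1 on node 1, ranks 2-3 on node 2, etc.
export OMP_NUM_THREADS=12
# Size each rank's pinning domain to OMP_NUM_THREADS so threads don't overlap.
export I_MPI_PIN_DOMAIN=omp
mpirun -n 8 -ppn 2 ./nwchem input.nw
```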

View Article

Extraordinarily Slow First AllToAllV Performance with Intel MPI Compared to MPT

Dear Intel MPI Gurus, We've been trying to track down why a code runs quite well with HPE MPT on our Haswell-based SGI/HPE InfiniBand cluster, but when we use Intel MPI, it's just way too...

View Article

OpenMP slower than no OpenMP

Here is a Friday post with a sufficient lack of information that it will probably be impossible to answer. I have some older Fortran code whose performance I'm trying to improve. VTune shows 75% of...

View Article


Bug: I_MPI_VERSION vanished in 2019 release and different than...

Hi, I just noticed that the mpi.h file included with the 2019 release is missing any #define that helps detect the MPI flavor in use, like the former I_MPI_VERSION #define. The wrong thing is that, when calling...

View Article

What is the difference between “-genvall” and “-envall”?

In the man page of mpirun, it says: -genvall: Use this option to enable propagation of all environment variables to all MPI processes. -envall: Use this option to propagate...
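The distinction is scope: the `g`-prefixed options are global across every argument set of an MPMD command line, while the unprefixed forms apply only to the section they appear in. A sketch, assuming two hypothetical executables `a.out` and `b.out`:

```shell
export FOO=bar

# -genvall is global: FOO (and the rest of the environment) reaches
# both argument sets, a.out and b.out.
mpirun -genvall -n 4 ./a.out : -n 4 ./b.out

# -envall is local: the environment is propagated only to the section
# where the option appears, here a.out.
mpirun -n 4 -envall ./a.out : -n 4 ./b.out
```

For a single-executable launch the two behave identically, which is why the difference only shows up in `:`‑separated MPMD runs.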

View Article


Troubles with Intel MPI library

Hello, I'm a student and have just started to learn HPC using the Intel MPI library. I created two virtual machines which use CentOS. First of all I ran this command: "mpirun -n 2 -ppn 1 ip1, ip2 hostname" and it...
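The command as quoted is missing the host-list option, so Hydra parses `ip1,` and `ip2` as the program to run. With the Hydra launcher the hosts go after `-hosts`, comma-separated with no space (`ip1`/`ip2` are the poster's placeholders):

```shell
# One rank on each of two hosts, each running `hostname`.
mpirun -n 2 -ppn 1 -hosts ip1,ip2 hostname
```

A host file (`-f hostfile`) is the usual alternative once the machine list grows.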

View Article


Bugged MPICH 3.3b2 used in Parallel Studio 2019 initial release

Hi, I just realized that the Parallel Studio 2019 initial release is using MPICH 3.3b2, which is a buggy release as reported here: https://lists.mpich.org/pipermail/discuss/2018-April/005447.html I confirm...

View Article


Running coupled executables with different thread counts using LSF

Under LSF, how can I run multiple executables with different thread counts and still use the nodes efficiently? Currently I have to do #BSUB -R [ptile=7] #BSUB -R affinity[core(4)] mpirun -n 8 -env...
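The usual approach is an MPMD command line where each `:`‑separated section carries its own rank count and a per-section `-env OMP_NUM_THREADS`. A sketch under assumed numbers (the executable names, rank counts, and thread counts are illustrative, not the poster's actual job):

```shell
#BSUB -R "span[ptile=7]"
#BSUB -R "affinity[core(4)]"
# 8 ranks x 4 threads for the first code, 4 ranks x 7 threads for the second;
# -env is local to its section, so each executable sees its own thread count.
mpirun -n 8 -env OMP_NUM_THREADS 4 ./coupler : -n 4 -env OMP_NUM_THREADS 7 ./solver
```

The open question in the thread is getting LSF's resource strings to describe both shapes at once, since `ptile`/`affinity` apply job-wide.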

View Article

ifort not reporting on outer loops re parallelisation capabilities &...

Hi all, just to say I have an enquiry on the Compiler forum https://software.intel.com/en-us/forums/intel-fortran-compiler-for-linux... regarding why the Fortran compiler appears not to consider outer DO loops...

View Article

MPI: I_MPI_NUMVERSION set to 0

Why is I_MPI_NUMVERSION set to 0 in mpi.h? The comments indicate it should be set to a non-zero value corresponding to the numerically expanded version string I_MPI_VERSION. I have checked Intel...

View Article

Assertion Failure, Intel MPI (Linux), 2019

Intel MPI 2019 on Linux was installed and tested with several MPI programs (gcc, g++, gfortran from GCC 8.2), with no issues, using the following environment setup: export I_MPI_DEBUG=5 export...
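The excerpt's environment setup is truncated; a typical debugging environment for the OFI-based 2019 library looks like the following. The values beyond `I_MPI_DEBUG=5` are illustrative assumptions, not the poster's actual settings:

```shell
# Verbose startup output: pinning map, chosen provider, library version.
export I_MPI_DEBUG=5
# Optional: libfabric-level logging to see what the OFI provider is doing.
export FI_LOG_LEVEL=debug
mpirun -n 4 ./a.out
```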

View Article


IntelMPI DAPL Question

Dear MPI team, I started receiving these messages from a node after I restarted a slowly moving MPI job. I can tell these originate from Intel MPI. Do you have any suggestions as to what may be...

View Article

Debugging the 'Too many communicators' error

I have a large code that fails with the error: Fatal error in PMPI_Comm_split: Other MPI error, error stack: PMPI_Comm_split(532)................: MPI_Comm_split(comm=0xc4027cf0, color=0, key=0,...

View Article


Intel MPI benchmark fails when # bytes > 128: IMB-EXT

Hi guys, I just installed Linux and Intel MPI on two machines: (1) a quite old (~8 years old) SuperMicro server, which has 24 cores (Intel Xeon X7542 x 4) and 32 GB memory, OS: CentOS 7.5; (2) a new HP ProLiant...

View Article


MPI without mpirun; point-to-point using multiple endpoints

Hello, I want to create a cluster dynamically, with say 5 nodes. I want to have members join with connect and accept (something like -...

View Article


New MPI error with Intel 2019.1, unable to run MPI hello world

After upgrading to Update 1 of Intel 2019 we are not able to run even an MPI hello world example. This is new behavior; e.g. a Spack-installed GCC 8.2.0 and Open MPI have no trouble on this system....

View Article


Severe Memory Leak with 2019 impi

Both 2019 Intel MPI releases have a severe memory leak which goes away when I regress to the 2015 version (i.e. source /opt/intel/comp2015/impi/5.0.2.044/intel64/bin/mpivars.sh). I am attaching two Valgrind...

View Article


The parameter localroot is not recognized at start of run

Hi, I use MPI to parallelize parts of my QuickWin project, run with Fortran 2019 Cluster Edition. Earlier I was helped by Intel to manage my QuickWin graphics output by using the parameter localroot....

View Article

