Cache management instructions for Intel Xeon Processors
Hi all, I looked around but couldn't find any cache management instructions for Xeon processors (I am working on a Xeon E7-8860 v4). I found that we can use _mm_clevict on the MIC architecture. Is there a...
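
For reference, _mm_clevict maps to MIC-only eviction instructions and is not available on Xeon E7 parts; the generally available alternative is the architectural cache-line flush intrinsic. A minimal sketch, assuming the goal is simply to push a buffer out of the cache hierarchy (the helper name is illustrative):

    #include <emmintrin.h>   /* _mm_clflush, _mm_mfence */
    #include <stddef.h>

    /* Flush every cache line covered by buf (illustrative helper). */
    static void flush_buffer(const void *buf, size_t len)
    {
        const char *p = (const char *)buf;
        for (size_t off = 0; off < len; off += 64)   /* 64-byte cache lines */
            _mm_clflush(p + off);
        _mm_mfence();                                /* order the flushes */
    }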

INTERNAL ERROR with SLURM and PMI2
I was pleasantly surprised to read that PMI2 & SLURM are supported by Intel MPI in the 2017 release. I tested it, but it fails immediately on my setup. I'm using Intel Parallel Studio 2017 Update 4...

failed to generate trace file with mpirun
Hi, my application is a Python program and MPI is used through mpi4py (built with Intel MPI), and it needs to be killed during the run (it takes a long time, and we only want to profile a small part). I use...

performance of Iallreduce on xeon phi
Hi, we are trying to use the non-blocking API (Iallreduce) in a computation-intensive program. We tried it on two Xeon Phi nodes and found that the two nodes are not balanced according to the Intel Trace Analyzer tool, which said that...
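
For context, the usual point of MPI_Iallreduce is to overlap the reduction with local computation before waiting on the result. A minimal sketch of that pattern, with the loop standing in for the real computation (all names here are illustrative):

    #include <mpi.h>

    /* Start the reduction, keep computing, then wait (overlap pattern). */
    double overlap_allreduce(double local, const double *work, int n, MPI_Comm comm)
    {
        double global = 0.0;
        MPI_Request req;

        MPI_Iallreduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, comm, &req);

        /* Independent computation that can proceed while the reduction runs. */
        double acc = 0.0;
        for (int i = 0; i < n; i++)
            acc += work[i];

        MPI_Wait(&req, MPI_STATUS_IGNORE);
        return global + acc;
    }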

Parallel jobs running on same processors.
Hello, I just got a KNL system which has 68 cores with 4 threads each, so basically it should be able to run 272 jobs. I submitted my first job using mpiexec and used 64 of them. Then I submitted another one with...

openmp application performance dropped with I_MPI_ASYNC_PROGRESS=enable
Hi, I tried MPI/OpenMP process pinning. It seems that when I use the non-blocking API (Iallreduce) and specify I_MPI_ASYNC_PROGRESS as in the following command, i.e. when I set I_MPI_ASYNC_PROGRESS=enable, then...

install Intel Studio Cluster Edition after installing Composer Edition
Hi all, I am a student and have been using the Intel Parallel Studio XE Composer Edition for the past year. Recently I realized that Intel® Trace Analyzer and Collector is also available to students with the Cluster Edition. I...

Separate processes on separate cores
I'm using MPI to run processes that are nearly independent. They only talk at the very end, for an MPI_GATHER operation. My machine has a 4-core, 8-thread CPU. I run it with: mpirun -n 101 ./a.out. When I...
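
For reference, the structure described here (fully independent work per rank followed by a single collection step) typically looks like the sketch below; the per-rank computation is a placeholder:

    #include <mpi.h>
    #include <stdlib.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        /* Each rank works entirely on its own until the end. */
        double result = (double)rank * rank;   /* placeholder for the real work */

        /* The only communication: collect one value per rank on rank 0. */
        double *all = (rank == 0) ? malloc(size * sizeof(double)) : NULL;
        MPI_Gather(&result, 1, MPI_DOUBLE, all, 1, MPI_DOUBLE, 0, MPI_COMM_WORLD);

        free(all);
        MPI_Finalize();
        return 0;
    }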

Issue with MPI_Iallreduce and MPI_IN_PLACE
Hi, I'm having some issues using MPI_Iallreduce with MPI_IN_PLACE from Fortran (I haven't tested with C at this point), and I'm unclear whether I'm doing something wrong w.r.t. the standard. I've created...
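
For comparison, the in-place form defined by the standard passes MPI_IN_PLACE as the send buffer on every rank, so the receive buffer acts as both input and output. The poster's code is Fortran; this C sketch only illustrates the intended semantics:

    #include <mpi.h>

    /* In-place non-blocking allreduce: vals is both input and output. */
    void inplace_iallreduce(double *vals, int count, MPI_Comm comm)
    {
        MPI_Request req;
        MPI_Iallreduce(MPI_IN_PLACE, vals, count, MPI_DOUBLE, MPI_SUM, comm, &req);
        /* ... unrelated work could overlap here ... */
        MPI_Wait(&req, MPI_STATUS_IGNORE);
    }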

Performance degradation with larger messages on KNL (>128M)
Hi, when I ran the IMB benchmark it always showed an obvious performance drop when the buffer size was >128MB with OFI. Is this reasonable, or is there some configuration for it? Thanks. mpirun -genv...

Fine-grain time synchronization among HPC nodes
Hi all, I need to profile an HPC application on multiple nodes with very low overhead. In the application code, I need to monitor MPI synchronization points (barrier, alltoall, etc.). I'm using...
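
One common building block for aligning per-node timestamps is estimating each rank's clock offset against a reference rank with a short ping-pong exchange, keeping the lowest-latency sample. A rough sketch, assuming MPI_Wtime as the clock and a symmetric network delay (function name and round count are illustrative):

    #include <mpi.h>

    /* Estimate this rank's clock offset relative to rank 0 (illustrative).
     * offset ~= t_remote - (t0 + t1)/2 for the fastest round trip observed. */
    double estimate_offset(MPI_Comm comm)
    {
        int rank, size;
        const int rounds = 100;
        MPI_Comm_rank(comm, &rank);
        MPI_Comm_size(comm, &size);

        if (rank == 0) {                 /* reference rank answers the ping-pongs */
            for (int r = 1; r < size; r++)
                for (int i = 0; i < rounds; i++) {
                    double ping, now;
                    MPI_Recv(&ping, 1, MPI_DOUBLE, r, 0, comm, MPI_STATUS_IGNORE);
                    now = MPI_Wtime();
                    MPI_Send(&now, 1, MPI_DOUBLE, r, 0, comm);
                }
            return 0.0;
        }

        double best = 1e30, offset = 0.0;
        for (int i = 0; i < rounds; i++) {
            double t0 = MPI_Wtime(), remote;
            MPI_Send(&t0, 1, MPI_DOUBLE, 0, 0, comm);
            MPI_Recv(&remote, 1, MPI_DOUBLE, 0, 0, comm, MPI_STATUS_IGNORE);
            double t1 = MPI_Wtime();
            if (t1 - t0 < best) {
                best = t1 - t0;
                offset = remote - 0.5 * (t0 + t1);
            }
        }
        return offset;
    }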

Intel MPI installation problem
Hi, I am trying to install Intel MPI on Windows Server 2012 R2 SERVERSTANDARDCORE, but the installation fails with error 1603 together with installation error 0x80040154: wixCreateInternetShortcuts:...

No mpiicc or mpiifort with composer_xe/2016.0.109 ?
I started a new job, and our company has composer_xe/2016.0.109. When I load the module I do not get any mpiicc or mpiifort compilers. Does one need the Cluster Edition for those?

Measuring data movement from DRAM to KNL memory
Dear all, I am implementing and testing the LOBPCG algorithm on a KNL machine for some big sparse matrices. For the performance report, I need to measure how much data is transferred from DRAM to KNL memory...

BCAST error for message size greater than 2 GB
Hello, I'm using Intel Fortran 16.0.1 and Intel MPI 5.1.3, and I'm getting an error with bcast as follows: Fatal error in PMPI_Bcast: Other MPI error, error stack: PMPI_Bcast(2231)........:...
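
A message this large runs into the int count argument of MPI_Bcast (at most 2^31 - 1 elements per call); a common workaround is to split the broadcast into chunks. The poster's code is Fortran, but the chunking idea is the same as in this C sketch (chunk size and helper name are illustrative):

    #include <mpi.h>
    #include <stddef.h>

    /* Broadcast a byte buffer of arbitrary size by splitting it into
     * chunks that fit into MPI's int count argument. */
    int bcast_large(void *buf, size_t nbytes, int root, MPI_Comm comm)
    {
        const size_t chunk = (size_t)1 << 30;   /* 1 GiB per call, well under INT_MAX */
        char *p = (char *)buf;

        for (size_t off = 0; off < nbytes; off += chunk) {
            size_t len = nbytes - off;
            if (len > chunk)
                len = chunk;
            int err = MPI_Bcast(p + off, (int)len, MPI_BYTE, root, comm);
            if (err != MPI_SUCCESS)
                return err;
        }
        return MPI_SUCCESS;
    }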

MPI_Alltoall error when running more than 2 cores per node
We have 6 Intel(R) Xeon(R) CPU D-1557 @ 1.50GHz nodes, each containing 12 cores. hpcc version 1.5.0 has been compiled with Intel's MPI and MKL. We are able to run hpcc successfully when configuring...

cluster error: /mpi/intel64/bin/pmi_proxy: No such file or directory found
Hi, I've installed Intel Parallel Studio Cluster Edition in a single-node installation configuration on the master node of a cluster of 8 nodes with 8 processors each. I've performed the prerequisite steps...

Scalapack raises an error under certain circumstances
Dear all, I am using Intel MPI + ifort + MKL to compile Quantum ESPRESSO 6.1. Everything works fine except when invoking ScaLAPACK routines. Calls to PDPOTRF may exit with a non-zero error code under...

Regarding cluster_sparse_solver
I am Mehdi, and this is my first time using this forum. I need to use cluster_sparse_solver in my Fortran finite element program. Because the number of degrees of freedom of my system is very high (10^6), the...
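
For orientation, MKL's cluster_sparse_solver follows the PARDISO-style phase sequence (analysis, factorization, solve, release), with the MPI communicator passed as a Fortran handle. The poster's program is Fortran; the C sketch below only illustrates that call sequence on a toy SPD system with default iparm settings, and all values are illustrative:

    #include <mpi.h>
    #include <stdio.h>
    #include <string.h>
    #include "mkl_cluster_sparse_solver.h"

    /* Toy sketch: solve a 4x4 SPD diagonal system with cluster_sparse_solver.
     * Matrix and RHS live on rank 0 (the default distributed-input setting). */
    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        void *pt[64];                      /* internal solver memory, must start zeroed */
        MKL_INT iparm[64];
        memset(pt, 0, sizeof(pt));
        memset(iparm, 0, sizeof(iparm));   /* iparm[0]=0: use default settings */

        MKL_INT maxfct = 1, mnum = 1, mtype = 2;   /* 2 = real symmetric positive definite */
        MKL_INT n = 4, nrhs = 1, msglvl = 0, error = 0;
        MKL_INT phase, perm[4] = {0};

        /* Upper-triangular CSR of diag(2,2,2,2), 1-based indices (MKL default). */
        MKL_INT ia[5] = {1, 2, 3, 4, 5};
        MKL_INT ja[4] = {1, 2, 3, 4};
        double  a[4]  = {2.0, 2.0, 2.0, 2.0};
        double  b[4]  = {2.0, 2.0, 2.0, 2.0};
        double  x[4]  = {0.0};

        int comm = MPI_Comm_c2f(MPI_COMM_WORLD);   /* solver takes a Fortran handle */

        phase = 13;   /* analysis + factorization + solve */
        cluster_sparse_solver(pt, &maxfct, &mnum, &mtype, &phase, &n, a, ia, ja,
                              perm, &nrhs, iparm, &msglvl, b, x, &comm, &error);

        phase = -1;   /* release internal memory */
        cluster_sparse_solver(pt, &maxfct, &mnum, &mtype, &phase, &n, a, ia, ja,
                              perm, &nrhs, iparm, &msglvl, b, x, &comm, &error);

        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        if (rank == 0)
            printf("x[0] = %f (expect 1.0), error = %lld\n", x[0], (long long)error);

        MPI_Finalize();
        return 0;
    }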

Notification of a failed (dead) node using PSM2
Hello everyone, I am writing because I am currently implementing a failure recovery system for a cluster with Intel Omni-Path that will be dedicated to handling computations in a physical experiment...