Hybrid MPI/Multithread runs with High memory requirements

Parallelization with OpenMP and MPI, scalability, reproducibility, errors, problems, suggestions
Diego
Posts: 73
Joined: Wed Jun 01, 2011 8:49 pm

Re: Hybrid MPI/Multithread runs with High memory requirements

Post by Diego » Tue Jan 16, 2018 11:08 am

Jaakko,
Unfortunately I cannot log on to the nodes through ssh and run top or something similar.
Nevertheless, I think the problem is related to a non-thread-safe OMP compilation, since when I avoid OpenMP threading (same case, run on different nodes using MPI with mpirun, but without -omp) I get no errors at all.
I am trying different compilers and MPI libraries, without success so far.
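Even without node access, the thread-support level that the MPI library itself reports can be checked from the login node. A minimal sketch, assuming an Open MPI stack (its `ompi_info` tool; Intel MPI and MVAPICH2 report this differently):

```shell
# Query the thread-support level an Open MPI build reports (a sketch;
# ompi_info is Open MPI specific and may be absent on other MPI stacks).
check_thread_support () {
    if command -v ompi_info >/dev/null 2>&1; then
        ompi_info | grep -i "thread support"
    else
        echo "ompi_info not found"
    fi
}

check_thread_support
```

If the build reports no thread support, hybrid `-omp` runs are likely to misbehave regardless of the input.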
Diego

Diego
Posts: 73
Joined: Wed Jun 01, 2011 8:49 pm

Re: Hybrid MPI/Multithread runs with High memory requirements

Post by Diego » Wed Jan 24, 2018 1:16 pm

Jaakko,
After several tests I figured out that the problem is related to the combination of MPI library version and compiler. For some reason, some of the available versions lead to memory errors (which are quite random but related to the burnup routines).
Basically (for the record), I have no problems when compiling with:
- intel icc (17.0.5) + Open MPI v 1.8 / 2.0 / 2.1
- gnu 7.2.0 + Open MPI v 1.8 / 2.1
Nevertheless, I get (diverse) memory problems (sometimes calloc failures, sometimes just errors from mpiexec.hydra or mpirun) for the following compilation schemes:
- intel icc (17.0) + Open MPI v 1.10 --> Memory allocation failed (calloc, 655360, 8, 4612.92)
- intel icc (17.0) + IntelMPI v 2018 , v 2017 , v5.0 --> BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES - EXIT CODE: 9 (this is a kill signal by one process)
- gnu 7.2.0 + Open MPI v 1.10 / v 2.0 --> Memory allocation failed (calloc, 655360, 8, 4612.92) / Memory allocation failed (calloc, 190045, 8, 4612.92)

So the combination is quite strange (especially the Open MPI v 2.0 case).
Maybe it is quite input-dependent (I have only encountered these issues in burnup problems), but in any case the alternatives above seem to work without problems.
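Since the failures track the compiler + MPI pairing, it can help to log the exact toolchain with every run. A minimal sketch (the wrapper names `mpicc`/`mpirun`/`icc` are the usual ones, not anything Serpent-specific; adjust to your cluster's modules):

```shell
# Print the first "--version" line of each toolchain component, or note
# its absence, so failing compiler + MPI combinations can be traced later.
log_version () {
    if command -v "$1" >/dev/null 2>&1; then
        "$1" --version 2>&1 | head -n 1
    else
        echo "$1: not found"
    fi
}

log_version mpicc
log_version mpirun
log_version icc
```

Appending this output to the run log makes it easy to match a crash back to the module combination that produced it.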
Thanks,
Diego

s.pfeiffer
Posts: 4
Joined: Fri Mar 23, 2018 6:14 pm

Re: Hybrid MPI/Multithread runs with High memory requirements

Post by s.pfeiffer » Thu Sep 20, 2018 4:15 pm

Hi Diego,

It's great that you've found a solution to this problem. We have been having the same issue with burnup cases.
We have been trying to compile with Intel Composer XE and MVAPICH2 on our Linux cluster, but we are still seeing the same problem.
(1) Could you please share the run command you use to launch the case with Serpent?
(2) Are you using a job scheduler like SLURM to run your cases?

Thanks in advance!

s.pfeiffer
Posts: 4
Joined: Fri Mar 23, 2018 6:14 pm

Re: Hybrid MPI/Multithread runs with High memory requirements

Post by s.pfeiffer » Fri Sep 28, 2018 7:01 pm

We have finally succeeded in running Serpent 2.1.29 (hybrid MPI/OMP) on our high-performance cluster!
We compiled Serpent with Intel Parallel Studio XE 2018 + the Intel MPI library. Intel claims Intel MPI is thread safe: https://software.intel.com/sites/defaul ... -linux.pdf

Diego
Posts: 73
Joined: Wed Jun 01, 2011 8:49 pm

Re: Hybrid MPI/Multithread runs with High memory requirements

Post by Diego » Tue Jan 15, 2019 3:19 pm

Thanks for the data!

BTW, I was just running with a simple msub script (example for the Open MPI 2.1 case):

Code:

#MSUB -l nodes=50:ppn=20

EXE=/pathtosss/sss
INP=/pathtoinp/inp
# load module in cluster
module load mpi/openmpi/2.1

# Calculate the number of OpenMP threads per MPI task (one task per node):
export OMP_NUM_THREADS=$((${MOAB_PROCCOUNT}/${MOAB_NODECOUNT}))

## Setting up MPIRUN options:
MPIRUN_OPTIONS="--bind-to core --map-by node:PE=${OMP_NUM_THREADS} -report-bindings -output-filename stdout.dat -tag-output"       

## Wrap the executable with the input file:
EXECUTABLE="${EXE} ${INP} -omp ${OMP_NUM_THREADS}"

## Execute program                                       
startexe="mpirun -n ${MOAB_NODECOUNT} ${MPIRUN_OPTIONS} ${EXECUTABLE}" 
echo $startexe                                                         
exec $startexe               
The only detail is that I include some mpirun options just to improve the traceback of errors. Everything else is as usual.
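For anyone on SLURM instead of MOAB/msub, a hypothetical counterpart of the script above (a sketch only: the paths, counts, and module name mirror the msub example and are assumptions to adapt, not a tested recipe):

```shell
#!/bin/bash
#SBATCH --nodes=50            # same layout as the msub example
#SBATCH --ntasks-per-node=1   # one MPI task per node
#SBATCH --cpus-per-task=20    # all cores of a node go to OpenMP threads

# Hypothetical paths, as in the msub script above
EXE=/pathtosss/sss
INP=/pathtoinp/inp

module load mpi/openmpi/2.1

# SLURM exports the cpus-per-task value directly
export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}

# srun inherits the task layout from the #SBATCH directives above
srun ${EXE} ${INP} -omp ${OMP_NUM_THREADS}
```

With `--ntasks-per-node=1` there is no need to divide a process count by a node count: `SLURM_CPUS_PER_TASK` already holds the per-task thread budget.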
Diego

Jaakko Leppänen
Site Admin
Posts: 2388
Joined: Thu Mar 18, 2010 10:43 pm
Location: Espoo, Finland

Re: Hybrid MPI/Multithread runs with High memory requirements

Post by Jaakko Leppänen » Wed Jan 16, 2019 2:17 pm

Just a side note... The description of parallel calculation on the Serpent Wiki is not very good at the moment:

http://serpent.vtt.fi/mediawiki/index.p ... ng_Serpent

so if you come up with some good instructions or best practices, feel free to contribute.
- Jaakko
