Shared memory with MPI

New ideas for code development
Andrei Fokau
Posts: 77
Joined: Thu Mar 25, 2010 12:25 am
Location: KTH, Stockholm, Sweden

Shared memory with MPI

Post by Andrei Fokau » Sat Sep 25, 2010 11:19 am

Serpent requires a quite significant amount of RAM per processor, especially for burnup calculations. If the user wants to run Serpent in MPI mode, the required amount of memory is multiplied by the number of parallel processes, limiting performance or precision. On the other hand, the vast majority of the data stored in RAM is identical for each process, so it would be beneficial to move the repeated part into shared memory. Assuming that calculations are run on several nodes, the user would then be able to reduce the memory demand on each node. Such shared memory can be implemented using the Global Arrays library. There is also the possibility of mapping memory to a volume mounted on each node, which would allow memory to be shared between nodes as well, decreasing the demand even further. However, the applicability of this last approach would depend strongly on the data access time.

I propose to start moving in this direction by splitting the DATA array into shared and process-specific parts, so we can test shared memory possibilities.
KTH Reactor Physics (Stockholm, Sweden) neutron.kth.se

Jaakko Leppänen
Site Admin
Posts: 2377
Joined: Thu Mar 18, 2010 10:43 pm
Location: Espoo, Finland

Re: Shared memory with MPI

Post by Jaakko Leppänen » Sat Sep 25, 2010 1:17 pm

Andrei,

I am currently in the process of revising the fundamental structure of the program code in order to implement the use of shared memory in parallel calculation. This is a major project, and so far I have been looking into using OpenMP for the purpose, mainly because I thought MPI didn't have this capability at all. Thank you for the link, I will certainly look into this possibility.
- Jaakko
