Posted: Mon Jan 17, 2011 10:46 am
I would like to suggest looking into the possibilities of GPU/CUDA computing. There are many references reporting very successful acceleration of Monte Carlo simulation in general, with speedup ratios of about 100 and more.
I am not a C/C++ expert, but for the last few months I have been using the GPU toolbox Jacket for MATLAB (http://www.accelereyes.com/) and the results are really very impressive. Right now I am running some initial tests with the PGI Fortran compiler with its CUDA accelerator support (http://www.pgroup.com/resources/accel.htm) on a deterministic 3-D multi-group finite-difference neutronics solver, and the first results are very promising too. The typical speedup ratio is about 35-40. Of course, data transfer between the CPU, the GPU and RAM is still a serious problem, but for some specific tasks this massively parallel approach is very suitable and inexpensive.
Posted: Tue Jan 18, 2011 2:32 pm
I think this is a very interesting topic that should certainly be considered. There are two things that concern me:
1) I believe the use of GPUs requires some hardware-dependent coding?
2) The geometry routine is not the most CPU time-intensive part of the simulation when dealing with irradiated fuels.
So in my opinion, and I'm not an expert in this, GPU computing should probably not be considered as a general solution for the main (delta-tracking based) tracking routine, but rather as a special feature for some particular applications. One specific case that comes to mind is combined neutron-gamma transport simulation, in which the gamma part could be handled by GPUs. I don't have that much experience in gamma calculations, but I believe they require much less data handling than neutron transport, so the use of a faster tracking routine would also result in a more significant reduction in the overall calculation time.
Or maybe the whole tracking routine should be re-invented and tailored for GPU computing? Delta-tracking is probably not the way to go, since it requires more memory access than conventional ray tracing.
Posted: Tue Jan 18, 2011 3:15 pm
Ad 1) The forthcoming release of the PGI compiler will not require hardware-dependent coding at the source level. For more info see: http://www.pgroup.com/about/news.htm#42
In general you are right: the current state of GPU computing is more suitable for photon transport than for neutron transport. On the other hand, the situation is changing rapidly. And if PGI delivers the new compiler, there will be completely new possibilities for developing massively parallel applications independently of the hardware platform (x86, CUDA, or GPGPU in general).
Posted: Mon Apr 09, 2012 1:34 am
Another possible application for GPGPU computing is in depletion calculations, since GPUs inherently handle large matrix problems quite efficiently:
D. Prayudhatama, A. Waris, N. Kurniasih and R. Kurniadi, "GPU Based General-Purpose Parallel Computing to Solve Nuclear Reactor In-Core Fuel Management Design and Operation Problem," The 2nd International Conference on Advances in Nuclear Science and Engineering (ICANSE 2009), AIP Conference Proceedings, Vol. 1244, pp. 121-126 (2010).
Posted: Tue Apr 10, 2012 5:13 pm
Solving the matrix equations is not really the problem. The CRAM solution used by Serpent takes up only a few percent of the overall CPU time required for the burnup routines, and compared to the overall calculation time (with transport included) the solution time is completely negligible. Most of the CPU time spent in the burnup routines goes to calculating the transmutation cross sections and forming the burnup matrix.