Serpent vs. MCNP5 Parallel Calculations

Share your results and discuss how Serpent compares to other neutron transport codes or experimental data
bherman
Posts: 30
Joined: Wed May 19, 2010 7:27 pm
Location: Massachusetts Institute of Technology - Cambridge, MA

Serpent vs. MCNP5 Parallel Calculations

Post by bherman » Wed Aug 25, 2010 1:27 am

Dear all,

I would like to share some of my recent code comparisons for burnup between Serpent, MCNP5 coupled with BGCORE, and MCNP5 coupled with MCODE. I am running a three-dimensional hexagonal RBWR assembly; the figure below shows k-eff as a function of burnup.
[Attachment: Burnup.PNG (k-eff as a function of burnup)]
As you can see, you cannot run the exact same input file on 1 processor and on N processors (when N is large) and expect to get the same results. This is due to the way Serpent is structured. In MCNP5 you may do this, and you will get exactly the same mean and standard deviation. In Serpent you will never get exact reproducibility, because the random number sequence is not conserved between a single-processor calculation and a multiprocessor calculation. However, we should still get an answer that agrees within statistics. As the attached plot shows, there is a significant deviation when running the same input file (1 proc at 2000 hist/cycle vs. 33 procs at 2000 hist/cycle).
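To make "agrees within statistics" concrete, here is a minimal check: express the difference between two runs in units of their combined standard deviation. The keff and sigma values below are illustrative, not read off the plot; persistent differences above roughly 3 combined sigma point to a real bias rather than statistical noise.

#include <math.h>
#include <stdio.h>

int main(void) {
    /* illustrative numbers, not taken from the plot */
    double k1 = 1.00000, s1 = 0.00050;  /* e.g. the 1-proc run  */
    double k2 = 1.00210, s2 = 0.00050;  /* e.g. the 33-proc run */
    double z  = fabs(k1 - k2) / sqrt(s1 * s1 + s2 * s2);
    printf("difference = %.1f combined sigma\n", z);  /* ~3.0 here: suspicious */
    return 0;
}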

I have attached some simple flow charts illustrating the differences between Serpent and MCNP5 parallel calculations (from what I understand; please correct me if I am wrong):
[Attachment: SERPENT_parallel_diag.png (Serpent parallel calculation flow chart)]
[Attachment: MCNP_parallel_diag.png (MCNP5 parallel calculation flow chart)]
In MCNP5, each slave communicates its keff and source sites back to the master after every cycle. The master combines the keffs, orders the source sites, and randomly samples from them for the next cycle. The master then sends the same keff to each slave and divides up the source sites as needed. With this method the random number sequence can be preserved, so the same input file gives the same results on N processors.
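From my understanding, the per-cycle synchronization looks roughly like the sketch below. This is not MCNP5 source code: run_cycle() is a placeholder for transporting one slave's share of histories, and the gather/resample/scatter of the source sites is only indicated in a comment.

#include <mpi.h>
#include <stdio.h>

/* Placeholder: transport nhist histories, return the cycle keff estimate. */
static double run_cycle(int nhist) { (void)nhist; return 1.0; }

int main(int argc, char **argv) {
    int rank, nprocs, ncycles = 500, nhist = 2000;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    for (int c = 0; c < ncycles; c++) {
        /* each slave transports its share of this cycle's histories */
        double keff_local = run_cycle(nhist / nprocs);

        /* master combines the slave keffs into a single cycle estimate... */
        double keff_sum = 0.0;
        MPI_Reduce(&keff_local, &keff_sum, 1, MPI_DOUBLE, MPI_SUM, 0,
                   MPI_COMM_WORLD);
        double keff = keff_sum / nprocs;

        /* ...and broadcasts the same value back, so every slave
           renormalizes the next cycle identically. The fission source
           sites would be gathered, resampled by the master, and
           scattered back out here in the same way (omitted). */
        MPI_Bcast(&keff, 1, MPI_DOUBLE, 0, MPI_COMM_WORLD);
    }

    MPI_Finalize();
    return 0;
}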

With Serpent, the master divides the source histories evenly between the slaves at the beginning of the calculation, and each slave runs an independent calculation. In this case the random number sequence cannot be conserved, since the keffs and source sites are never combined and depend on the number of processors. So currently there is a dependence on the number of processors you choose to run. This, however, is not the reason for the strong deviation in the burnup plot above. When I ran the same input file on 33 processors with 2000 histories/cycle, I was effectively running 33 independent calculations with ~60 histories/cycle each. Therefore I was not converging the source distribution on each slave, and keff picks up an inherent bias due to the renormalization procedure after each cycle. I then tested this by having Serpent run a parallel calculation with 66,000 histories/cycle on 33 processors, so that I was now running 33 independent calculations with 2000 histories/cycle each. As the plot shows, those results are much more in line with MCNP5. Obviously, the 66,000 case also reduced the standard deviation, because the 33 independent runs are combined statistically at the end, but the run time was equivalent to the single-processor case.
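For reference, here is a minimal sketch of one standard way such independent runs can be pooled at the end (inverse-variance weighting; I am not claiming this is exactly Serpent's procedure). For 33 equal runs the pooled standard deviation is the single-run sigma divided by sqrt(33), consistent with the reduction I saw in the 66,000 case.

#include <math.h>
#include <stdio.h>

/* Pool n independent (keff, sigma) estimates by inverse-variance
   weighting; for n equal runs this gives sigma/sqrt(n). */
static void pool(const double *k, const double *sig, int n,
                 double *k_out, double *sig_out) {
    double wsum = 0.0, ksum = 0.0;
    for (int i = 0; i < n; i++) {
        double w = 1.0 / (sig[i] * sig[i]);
        wsum += w;
        ksum += w * k[i];
    }
    *k_out   = ksum / wsum;
    *sig_out = sqrt(1.0 / wsum);
}

int main(void) {
    /* 33 independent runs of 2000 hist/cycle; values are illustrative */
    double k[33], sig[33], kp, sp;
    for (int i = 0; i < 33; i++) { k[i] = 1.0; sig[i] = 0.0005; }
    pool(k, sig, 33, &kp, &sp);
    printf("pooled keff = %.5f +/- %.5f\n", kp, sp);  /* sigma/sqrt(33) */
    return 0;
}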

Therefore, a suggestion for a future Serpent task would be to implement a parallel structure similar to MCNP5's, together with a dedicated random number generator (instead of the built-in C RNG) whose sequence can be preserved across N processors, so that the same input file gives the same results. This is extremely important for reproducibility: increasing the number of processors should only speed up the calculation, not change the answer.
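To illustrate the idea, here is a minimal sketch of history-indexed random number streams with O(log n) LCG skip-ahead, so that history number h starts from the same seed no matter which processor it is assigned to. The LCG constants and per-history stride below are placeholders, not Serpent's or MCNP's actual parameters.

#include <stdint.h>
#include <stdio.h>

/* Illustrative LCG parameters, modulus 2^63 (placeholders). */
#define LCG_G    2806196910506780709ULL   /* multiplier */
#define LCG_C    1ULL                     /* increment  */
#define LCG_MASK 0x7FFFFFFFFFFFFFFFULL    /* mod 2^63   */
#define STRIDE   152917ULL                /* numbers reserved per history */

/* Jump the LCG forward n steps in O(log n) by repeated squaring
   of the affine map x -> g*x + c (mod 2^63). */
static uint64_t lcg_skip(uint64_t seed, uint64_t n) {
    uint64_t h = LCG_G, f = LCG_C;  /* map for the current power of two */
    uint64_t G = 1, C = 0;          /* accumulated n-step map */
    while (n > 0) {
        if (n & 1) {
            G = (G * h) & LCG_MASK;
            C = (C * h + f) & LCG_MASK;
        }
        f = (f * (h + 1)) & LCG_MASK;
        h = (h * h) & LCG_MASK;
        n >>= 1;
    }
    return (G * seed + C) & LCG_MASK;
}

/* Starting seed of history hist: identical regardless of which
   MPI task the history lands on. */
static uint64_t history_seed(uint64_t s0, uint64_t hist) {
    return lcg_skip(s0, hist * STRIDE);
}

int main(void) {
    uint64_t s0 = 1ULL;
    /* histories 0, 1000, 65999 get fixed, processor-independent streams */
    printf("%llu %llu %llu\n",
           (unsigned long long)history_seed(s0, 0),
           (unsigned long long)history_seed(s0, 1000),
           (unsigned long long)history_seed(s0, 65999));
    return 0;
}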

-Bryan

Jaakko Leppänen
Site Admin
Posts: 2441
Joined: Thu Mar 18, 2010 10:43 pm
Location: Espoo, Finland

Re: Serpent vs. MCNP5 Parallel Calculations

Post by Jaakko Leppänen » Wed Aug 25, 2010 2:12 am

Thank you for clearing this up!

I will look into the problem and make some changes to the MPI implementation. Update 1.1.13 is practically complete, so the changes will be included in the next update after that. Meanwhile, when running a parallel calculation, make sure that you are running a sufficient number of source neutrons (at least a few thousand), and remember that the population size entered on the pop card is divided by the number of MPI tasks.
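For example, in Bryan's case above: a pop card entry of 66,000 histories/cycle with 33 MPI tasks gives each task 66,000 / 33 = 2,000 histories/cycle, which is the configuration that brought the results back in line with MCNP5.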

Also note that this post is related to an earlier topic: Bug reports / Burnup Calculation Issues. The bug related to xscalc=2 with unresolved resonance probability table sampling and parallel calculation is fixed in update 1.1.13.
- Jaakko
