Matlab / Octave script for comparing Serpent runs

Share your results and discuss how Serpent compares to other neutron transport codes or experimental data
Jaakko Leppänen
Site Admin


Post by Jaakko Leppänen » Mon Mar 29, 2010 5:47 pm

On the link page of the Serpent website there is a Matlab / Octave script called "sss_stat_test.m" that is convenient for making quick comparisons between two or more Serpent runs. The script works by comparing the parameters in the standard "_res.m" output files and printing a message whenever a difference appears statistically significant.

To give a more or less arbitrary example, I made two calculations using the example BWR assembly case, also found on the website. The calculations were run using JEFF-3.1 based cross section libraries with 574K (lwj3.11t) and 624K (lwj3.13t) thermal scattering data for light water. The input files are called "bwr1" and "bwr2", respectively, so the comparison reflects differences originating from a 50-degree variation in moderator temperature. I ran both cases with 10 million neutron histories to get better statistics. The comparison using Octave proceeds as follows.

1) Read the data:

octave:1> bwr1_res
octave:2> bwr2_res

2) Set the first case as the reference:

octave:3> ref=1

3) Run the script:

octave:4> sss_stat_test

The script picks the larger statistical error of the two cases under comparison and prints the difference if it exceeds the two-sigma confidence interval. In this case the difference in k-eff (IMP_KEFF), for example, is about 230 pcm, which exceeds the statistical criterion by a factor of 5.5. The discrepancies are even larger for homogenized cross sections: the total scattering cross section (SCATTXS) in the thermal energy region differs by 1.5%, some 56 times more than could be attributed to statistical noise.
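The two-sigma criterion described above can be sketched in a few lines of Octave. This is only an illustration of the logic, not the internals of sss_stat_test.m; the variable names and the numerical values (chosen to roughly mimic the k-eff case) are hypothetical:

```matlab
% Illustrative sketch of a two-sigma significance test between two runs.
% val = estimated mean, err = relative statistical error (as in _res.m output).
val1 = 1.00000; err1 = 0.00020;   % run 1 (reference), hypothetical values
val2 = 1.00230; err2 = 0.00021;   % run 2, hypothetical values

% Pick the larger of the two absolute uncertainties
sig = max(abs(val1)*err1, abs(val2)*err2);

d = val2 - val1;
if abs(d) > 2*sig
  printf("Difference %.0f pcm exceeds the 2-sigma limit by a factor of %.1f\n", ...
         1E5*d/val1, abs(d)/(2*sig));
end
```

With these values the difference (230 pcm) is flagged because it clearly exceeds twice the larger uncertainty.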

The script leaves a lot for the user to figure out, but it is convenient for spotting significant differences in a large set of output variables. The comparison is not limited to two output files either: several files can be read and compared simultaneously against one set of results selected as the reference case.
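A multi-file comparison follows the same pattern as the two-file example above. As a hypothetical session with a third run "bwr3" (not part of the example case), comparing runs 1 and 3 against run 2 as the reference would look like:

```matlab
octave:1> bwr1_res
octave:2> bwr2_res
octave:3> bwr3_res
octave:4> ref = 2
octave:5> sss_stat_test
```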
- Jaakko
