Development of Serpent 2

General information etc.
Jaakko Leppänen
Site Admin
Location: Espoo, Finland

Development of Serpent 2

Post by Jaakko Leppänen » Fri Jan 14, 2011 5:51 pm

I have been working on the next version of Serpent for several months now, and I think it's time to open the topic for discussion. The source code is essentially being re-written from scratch, mainly because there are some fundamental flaws in the structure of the present version that make it very difficult to develop new features while keeping everything together. Serpent 1 has about six years of history now, during which new capabilities have been constantly added on top of existing calculation routines, without any grand vision of how things should look as a whole. The code has a lot of undocumented and obsolete features that only make things complicated without adding any value, and significant effort has gone into approaches that don't really work in practice.

Starting everything from zero allows me to take into account all the lessons and best practices learned from the previous work. One major thing is the wasteful use of computer memory. The use of the unionized energy grid and pre-calculated material-wise cross sections allows the simulation to run very fast, but the excessive memory demand limits the number of materials in burnup calculation. At first this wasn't a big issue, since the code was mainly intended for 2D lattice calculations. Now it seems, however, that the trend is towards larger systems with thousands of depletion zones. Another major limitation is that Serpent cannot use shared memory in parallel calculation mode (threading), which seriously limits the applicability in clusters and multi-processor workstations. There are also some important capabilities still missing (gamma transport, etc.), and the implementation of the calculation routines becomes much easier starting from scratch.

The work is estimated to take several years, but I should have a working beta version with limited capability out for distribution by the end of this year. Here is a brief description of where the project is at the moment (January 14, 2011):
  • Cross section data, decay constants and fission yields are read from ACE and ENDF format files into an internal data structure. Decay and transmutation chains are generated automatically from the initial composition.
  • Geometry is read and processed. All major geometry features in Serpent 1 are included (universes, lattices, explicit stochastic geometry, etc.), and cell definitions now also include the union operator. The plotter routine and Monte Carlo volume calculation routines are completed and tested.
  • The energy grid is unionized, and microscopic, macroscopic and majorant cross sections can be interpolated for an arbitrary energy. Four levels of optimization control the trade-off between speed and memory usage. The "tricks" used in Serpent 1 are available but optional, which essentially removes the memory limitations (at the cost of CPU time) and should eventually increase the maximum number of burnable materials to more than 100,000.
  • Geometry plotter, MC volume calculation routine and a tester routine for cross section data are implemented with threading using OpenMP. There are several issues related to parallel calculation with shared memory, which I will address soon in another topic.
The code is still missing all interaction laws, the tracking routine, tallies, etc., so I cannot run any real simulations yet, but the tester routines work as expected, which suggests that the work is on the right track. Parallel calculation using the combination of MPI and OpenMP, and the capability to run simulations in massively parallelized computing environments, will have a major role in the project. Because of this supercomputing aspect, I have nicknamed the new code "Super-Serpent" (with or without the dash, I haven't decided yet), which will be used as a working title until the final version is released.
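To illustrate the idea behind the unionized energy grid (the function and variable names here are only illustrative, not actual Serpent routines): every nuclide's energy grid is merged into one common grid, so a single binary search locates the energy interval for all nuclides and materials at once, and cross sections reconstructed on that grid can be interpolated directly:

```python
import bisect

def unionize(grids):
    """Merge per-nuclide energy grids into one sorted union grid,
    so a single index lookup serves every nuclide and material."""
    return sorted(set(e for grid in grids for e in grid))

def interpolate_xs(union_grid, xs_on_grid, E):
    """Lin-lin interpolation of a cross section that has been
    reconstructed on the union grid (one value per grid point)."""
    i = bisect.bisect_right(union_grid, E) - 1
    i = max(0, min(i, len(union_grid) - 2))
    E0, E1 = union_grid[i], union_grid[i + 1]
    x0, x1 = xs_on_grid[i], xs_on_grid[i + 1]
    return x0 + (x1 - x0) * (E - E0) / (E1 - E0)
```

The speed/memory trade-off mentioned above comes from the reconstruction step: storing a pre-calculated array per nuclide or material on the full union grid makes lookups fast but memory-hungry, while keeping the original grids and interpolating on the fly saves memory at the cost of extra searches.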

I hope this topic will inspire a lot of discussion and ideas - I know I will have a lot of questions, especially regarding supercomputing and parallelization, which is still a relatively new area for me. I will still continue working on Serpent 1 as well, correcting bugs and implementing new features, but all major development goals will be deferred to the new version from now on.
- Jaakko

Re: Development of Serpent 2

Post by Jaakko Leppänen » Tue May 03, 2011 10:46 pm

There has been some major progress in the development of Serpent 2 ("Super-Serpent") during the past months. The developer team includes myself, Maria Pusa and Tuomas Viitanen from VTT. The work is funded from the KÄÄRME project under the Finnish Research Programme on Nuclear Power Plant Safety (SAFIR-2014), with a total budget of 16 person-months for this year. We also have significant collaboration with a few universities and other research institutes.

The code now has a working tracking routine, and burnup capability is under development. Here is a brief summary of what has been done so far.

Tracking and geometry routine

The geometry routine has been completed. Tracking is based on the same combination of delta-tracking and surface tracking as in Serpent 1, with very similar efficiency. External and criticality source simulation modes have been implemented.

What is new compared to Serpent 1:

- The union operator for combining surfaces
- More options for nest-type universes
- Monte Carlo based volume calculation routine can be run prior to the transport cycle to get volumes for complicated geometry regions within a single calculation
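For readers unfamiliar with the method, delta-tracking (Woodcock tracking) samples path lengths from a majorant cross section that bounds the total cross section everywhere, then accepts each tentative collision point with probability sigma_total/sigma_majorant; rejected ("virtual") collisions let the particle stream across material boundaries without computing surface distances. A minimal one-dimensional sketch of the idea, not Serpent's implementation:

```python
import math
import random

def delta_track(sigma_total, sigma_majorant, x0=0.0, rng=random.random):
    """Sample the next real-collision site along a 1D flight path with
    delta-tracking: draw path lengths from the majorant cross section,
    then accept the collision with probability sigma_total(x)/sigma_majorant.
    Rejected samples are virtual collisions and the flight continues."""
    x = x0
    while True:
        x += -math.log(rng()) / sigma_majorant  # exponential path length
        if rng() * sigma_majorant < sigma_total(x):
            return x  # real collision; otherwise stream past the point
```

The attraction is that no surface-distance calculations are needed inside the geometry, which is also why it pays off to fall back on conventional surface tracking in regions where the majorant is a poor bound and most samples would be rejected.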

Physics and interaction data

Cross section data is read and processed similarly to Serpent 1. The code generates the unionized energy grid and optionally reconstructs the cross sections to speed up the calculation. Unresolved resonance probability table sampling has been implemented but not thoroughly tested. The same applies to DBRC.

What is new compared to Serpent 1:

- Four levels of optimization that affect memory usage and performance. The highest level of optimization corresponds to the methods used in Serpent 1, and lower levels reduce memory demand per nuclide and material (I'll post some examples later)
- Materials can be defined by mixing two or more existing materials or other mixtures
- Implicit capture and other non-analog techniques
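As a side note on what implicit capture means in practice (a generic survival-biasing sketch, not Serpent's code): instead of terminating the history on a capture reaction, the particle weight is multiplied by the non-capture probability at every collision, and Russian roulette keeps low-weight histories from running forever:

```python
import random

def implicit_capture(weight, sigma_capture, sigma_total,
                     roulette_cutoff=0.1, survival_weight=1.0,
                     rng=random.random):
    """Survival biasing: reduce the weight by the capture probability
    instead of killing the history, then play Russian roulette on
    low-weight particles so that histories still terminate.  Returns
    the new weight; 0.0 means the history is over."""
    weight *= 1.0 - sigma_capture / sigma_total
    if weight < roulette_cutoff:
        if rng() * survival_weight < weight:
            weight = survival_weight  # survives, weight restored
        else:
            weight = 0.0              # killed by roulette
    return weight
```

The roulette step is unbiased because the expected post-roulette weight, (weight / survival_weight) * survival_weight, equals the weight going in.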

Group constant generation, detectors, etc.

The code calculates k-eff and some homogenized multi-group constants and has some detector capability similar to Serpent 1. The development of the methodology is still under way.

Burnup calculation

Decay and transmutation paths are generated recursively, starting from the initial composition. Cross section, decay, fission yield and isomeric branching data are read and processed. Transmutation cross sections are calculated during the transport cycle, using methods similar to Serpent 1 (direct tally or spectrum collapse, with special treatment in the unresolved resonance region).

The implementation of the CRAM solver is under way, after which we will start looking into advanced time integration methods.
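For context, the depletion problem that the CRAM solver addresses is the system of Bateman equations for the nuclide concentration vector over a time step with a constant burnup matrix A, which has a matrix exponential solution:

```latex
\frac{d\mathbf{n}}{dt} = \mathbf{A}\,\mathbf{n}(t)
\qquad\Longrightarrow\qquad
\mathbf{n}(t) = e^{\mathbf{A}t}\,\mathbf{n}(0)
```

CRAM approximates the exponential with a rational function of order k on the negative real axis, where the eigenvalues of the burnup matrix lie,

```latex
e^{z} \;\approx\; \alpha_0 + 2\,\operatorname{Re}\sum_{\ell=1}^{k/2}
\frac{\alpha_\ell}{z - \theta_\ell},
```

so that applying it to At reduces each depletion step to k/2 sparse complex linear solves of the form (At - \theta_\ell I)\mathbf{x} = \mathbf{n}(0).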

What is (or will be) new compared to Serpent 1:

- Secondary products in (n,alpha), (n,t), etc. reactions are included in depletion
- Optimization modes to dramatically reduce memory consumption in burnup mode
- Energy-dependent data for isomeric branching
- Advanced time integration methods to replace simple predictor-corrector calculation
- Critical spectrum calculation extended to depletion

Gamma transport

Gamma transport capability is entirely new and not included in Serpent 1. The data is read from ACE format cross section libraries. The code handles four reaction modes: coherent and incoherent scattering, the photoelectric effect and pair production. The physics treatment is still relatively simple, and it doesn't account for secondary fluorescence or bremsstrahlung photons (a TTB treatment will be implemented later). Gamma spectra are read from ENDF format decay files, and the data will be an option for source definitions. Gamma transport will be coupled to neutron transport.
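At a collision site the reaction mode is typically chosen in proportion to the partial cross sections at the photon energy; a generic discrete-sampling sketch (the `xs` lookup interface here is made up for illustration, not Serpent's data structure):

```python
import random

def sample_photon_reaction(E, xs, rng=random.random):
    """Pick one of the four photon reaction modes in proportion to its
    cross section at energy E.  `xs` maps mode name to a function of E
    returning a macroscopic cross section (illustrative interface)."""
    modes = ["coherent", "incoherent", "photoelectric", "pair_production"]
    partial = [xs[m](E) for m in modes]
    total = sum(partial)
    xi = rng() * total
    cumulative = 0.0
    for mode, sigma in zip(modes, partial):
        cumulative += sigma
        if xi < cumulative:
            return mode
    return modes[-1]  # guard against floating-point round-off
```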

Parallelization

Parallelization is currently implemented using OpenMP with shared memory access. Parallelization with MPI will be implemented later. We have found that scalability is strongly hardware-dependent.

What is new compared to Serpent 1:

- Shared memory parallelization removes the excessive memory consumption issues in Serpent 1
- A new random number generator with the capability to reproduce single-CPU results in parallel mode
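One standard way to make parallel results reproducible (a sketch of the general idea, not Serpent's actual generator) is to derive an independent random stream deterministically from the particle history index, so the numbers a history sees do not depend on which thread simulates it or in what order:

```python
import random

def history_rng(base_seed, history_index):
    """Return a random stream that depends only on (base_seed,
    history_index), never on thread assignment or execution order."""
    return random.Random((base_seed << 32) ^ history_index)

# Simulating the histories in any order yields identical per-history
# random numbers, so serial and parallel runs can agree exactly:
forward  = [history_rng(42, h).random() for h in range(4)]
backward = [history_rng(42, h).random() for h in reversed(range(4))]
```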
- Jaakko

Re: Development of Serpent 2

Post by Jaakko Leppänen » Sat Jul 16, 2011 1:22 pm

The burnup routines are now working similarly to Serpent 1, with the addition of some advanced options for the predictor-corrector calculation. The results look good after running a few tests, but there is still work to do on parallelization, the unresolved resonance probability table treatment and the new memory options.
- Jaakko
