Configuration options

About MPI parallelism (corrected for depletion)

For OpenMC there are two layers of MPI, but the rules are a bit different when you use the Python depletion module.

1. How depletion actually uses MPI

When you use openmc.deplete:

  • Python imports the OpenMC shared library (libopenmc.so) directly via openmc.lib.

  • The depletion integrator (e.g. Integrator, CECMIntegrator, etc.) calls the transport solver inside that shared library, not via openmc.run() and not via subprocesses.

  • If you want parallel depletion, you run your script like:

    mpiexec -n 4 python depletion_mpi.py
    
  • Inside the script, mpi4py is used to:

    • Get MPI.COMM_WORLD or a sub-communicator
    • Distribute depletion work across ranks
    • Coordinate calls into the OpenMC C++ side

Because the OpenMC library itself also uses MPI for transport in this mode, two things must match:

  1. OpenMC must be compiled with MPI

    • CMake:

      cmake .. -DOPENMC_USE_MPI=on ...
      
    • openmc --version should show:

      MPI enabled:  yes
      
  2. mpi4py must be built against the same MPI implementation that was used to build OpenMC. When installing the Python API from the OpenMC source directory, include the depletion-mpi extra:

    python -m pip install .[depletion-mpi,test,docs,vtk]
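
A quick sanity check that both layers agree is to ask each side which MPI it was built against. The exact commands below are typical but may need adjusting for your environment; the point is that all three should name the same MPI implementation and version.

```shell
# All of these should report the same MPI implementation (e.g. Open MPI 4.x)
openmc --version                                                      # look for "MPI enabled:  yes"
python -c "from mpi4py import MPI; print(MPI.Get_library_version())"  # MPI that mpi4py links
mpiexec --version                                                     # launcher should match too
```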
    

You do not call openmc.run() yourself in depletion; the integrator does that via the in-memory C++ API.
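
Putting the steps above together, a minimal depletion driver might look like the sketch below. This is illustrative, not a verified script: the chain file name, power level, and timesteps are placeholders, and it assumes the model XML files already exist in the working directory.

```python
# depletion_mpi.py -- run with: mpiexec -n 4 python depletion_mpi.py
# Sketch only: chain file, power, and timesteps are placeholder values.
import openmc
import openmc.deplete

# Load an existing model (geometry.xml, materials.xml, settings.xml assumed present)
model = openmc.Model.from_xml()

# The operator drives transport through the in-memory library (openmc.lib),
# not through openmc.run() or a subprocess
op = openmc.deplete.CoupledOperator(model, chain_file="chain.xml")

# One 30-day step at a placeholder power of 1 MW (watts)
integrator = openmc.deplete.PredictorIntegrator(
    op, timesteps=[30.0], timestep_units="d", power=1.0e6
)
integrator.integrate()
```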

2. Outer vs inner MPI

Conceptually you can still think of:

  • Outer MPI = mpi4py distributing work between Python ranks
  • Inner MPI = OpenMC C++ solver running on multiple ranks for one calculation

For OpenMC's depletion module, the integrator assumes there is a valid MPI world communicator shared between mpi4py and the OpenMC C++ library. That only works cleanly if OpenMC was compiled with MPI and linked against the same MPI libraries as mpi4py.
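
To make the outer layer concrete, here is a hypothetical, MPI-free sketch of the kind of work splitting the outer layer performs: each rank gets a slice of the depletable materials, while the inner transport solve runs across all ranks together. The helper name and round-robin policy are illustrative, not OpenMC's actual scheme.

```python
# Hypothetical illustration of outer-layer work distribution (not OpenMC's code).
def split_among_ranks(items, n_ranks):
    """Round-robin assignment of work items to MPI ranks."""
    buckets = [[] for _ in range(n_ranks)]
    for i, item in enumerate(items):
        buckets[i % n_ranks].append(item)
    return buckets

# With mpi4py, each rank would use comm.Get_rank()/comm.Get_size() to pick
# its own bucket; here we just print all of them:
materials = ["fuel_1", "fuel_2", "fuel_3", "fuel_4", "fuel_5"]
for rank, work in enumerate(split_among_ranks(materials, 2)):
    print(f"rank {rank}: {work}")
# → rank 0: ['fuel_1', 'fuel_3', 'fuel_5']
# → rank 1: ['fuel_2', 'fuel_4']
```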