Configuration options
About MPI parallelism (corrected for depletion)
For OpenMC there are two layers of MPI, but the rules are a bit different when you use the Python depletion module.
1. How depletion actually uses MPI
When you use openmc.deplete:
- Python imports the OpenMC shared library (`libopenmc.so`) directly via `openmc.lib`.
- The depletion integrator (e.g. `Integrator`, `CECMIntegrator`, etc.) calls the transport solver inside that shared library, not via `openmc.run()` and not via subprocesses.
- If you want parallel depletion, you run your script like (a runnable sketch follows this list):
  `mpiexec -n 4 python depletion_mpi.py`
- Inside the script, `mpi4py` is used to:
  - Get `MPI.COMM_WORLD` or a sub-communicator
  - Distribute depletion work across ranks
  - Coordinate calls into the OpenMC C++ side
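A minimal sketch of such a script, assuming the model XML files already exist in the working directory; the chain file path, power level, and time steps are placeholder values for illustration:

```python
# depletion_mpi.py -- minimal depletion sketch; run with:
#   mpiexec -n 4 python depletion_mpi.py
import openmc
import openmc.deplete

# Load a model from geometry.xml/materials.xml/settings.xml in the
# current directory (the model itself is assumed to exist already).
model = openmc.Model.from_xml()

# CoupledOperator wraps the in-memory transport solver; the chain file
# path here is a placeholder for your own depletion chain.
op = openmc.deplete.CoupledOperator(model, chain_file="chain_casl_pwr.xml")

power = 174.0e3          # W -- placeholder power level
time_steps = [30.0] * 3  # three 30-day steps -- placeholder schedule

# The integrator calls the C++ transport solver directly via openmc.lib;
# no call to openmc.run() appears anywhere in the script.
integrator = openmc.deplete.CECMIntegrator(
    op, time_steps, power, timestep_units='d'
)
integrator.integrate()
```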
Because the OpenMC library itself also uses MPI for transport in this mode, two things must match:
- OpenMC must be compiled with MPI:
  - CMake: `cmake .. -DOPENMC_USE_MPI=on ...`
  - `openmc --version` should show: `MPI enabled: yes`
- `mpi4py` must be built against the same MPI implementation that was used to build OpenMC. When installing the Python API from the OpenMC source directory, include the `depletion-mpi` extra (a quick consistency check is sketched after this list):
  `python -m pip install .[depletion-mpi,test,docs,vtk]`
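One way to smoke-test this consistency is a tiny script run under `mpiexec` in a directory containing a valid model; `openmc.lib.init()` accepts an mpi4py communicator through its `intracomm` argument. A sketch, assuming such a model directory exists:

```python
# mpi_check.py -- smoke test; run with: mpiexec -n 2 python mpi_check.py
# Requires a directory containing a valid OpenMC model
# (geometry.xml, materials.xml, settings.xml).
from mpi4py import MPI
import openmc.lib

comm = MPI.COMM_WORLD

# Hand the mpi4py communicator to the OpenMC C++ library. This only
# works if both were built against the same MPI implementation;
# a mismatch typically crashes or hangs right here.
openmc.lib.init(intracomm=comm)
print(f"rank {comm.rank} of {comm.size}: OpenMC library initialized")
openmc.lib.finalize()
```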
You do not call `openmc.run()` yourself in depletion; the integrator does that via the in-memory C++ API.
2. Outer vs inner MPI
Conceptually you can still think of:
- Outer MPI = `mpi4py` distributing work between Python ranks (sketched below)
- Inner MPI = the OpenMC C++ solver running on multiple ranks for one calculation
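A sketch of the outer layer on its own, using plain mpi4py with no OpenMC involved; the case names are hypothetical:

```python
# outer_mpi.py -- outer-layer sketch: mpi4py spreading independent
# cases across Python ranks. Run with: mpiexec -n 2 python outer_mpi.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
cases = ["pin_a", "pin_b", "pin_c", "pin_d"]  # hypothetical case names

# Round-robin assignment: rank r takes cases r, r+size, r+2*size, ...
for case in cases[comm.rank::comm.size]:
    print(f"rank {comm.rank} of {comm.size} handling {case}")
```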
For OpenMC’s depletion module, the integrator assumes there is a valid MPI world communicator shared between mpi4py and the OpenMC C++ library. That only works cleanly if OpenMC was compiled with MPI and linked against the same MPI libraries as mpi4py.