About MPI parallelism
When talking about MPI with OpenMC + Python, it helps to separate two different layers:
1. MPI at the Python level (mpi4py)
This is the outer parallelism:
- You run your script with `mpiexec -n 4 python depletion_mpi.py`.
- `mpi4py` handles communication between ranks (e.g. splitting burnup steps, distributing materials, etc.).
- Each rank calls `openmc.deplete` or `openmc.run()` from Python.
- For this you only need `python -m pip install mpi4py` in the same virtual environment where `openmc` is installed.
This works even if OpenMC itself was built without MPI — each rank just runs its own independent OpenMC process.
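As a concrete illustration, here is a minimal sketch of that outer pattern. The `case_<rank>/` directory layout is an assumption for this example (the script name `depletion_mpi.py` comes from above); `openmc.run(cwd=...)` is part of the Python API:

```python
# depletion_mpi.py -- minimal "outer MPI" sketch: one mpi4py rank per
# independent, serial OpenMC run. Launch with:
#   mpiexec -n 4 python depletion_mpi.py
from mpi4py import MPI

import openmc

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

# Each rank runs its own serial OpenMC case in its own directory
# (assumed to contain geometry.xml, materials.xml, settings.xml).
openmc.run(cwd=f"case_{rank}")

# Synchronize before any rank-0 post-processing.
comm.Barrier()
if rank == 0:
    print(f"All {comm.Get_size()} cases finished.")
```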
2. MPI inside OpenMC (transport parallelism)
This is the inner parallelism, in the C++ code:
- Enabled at build time with `cmake .. -DOPENMC_USE_MPI=on ...`.
- Each OpenMC run can then use multiple MPI ranks for particle transport.
- Typically you'd start OpenMC with `mpiexec` (directly or via the Python API / executor).
This is useful if you want a single calculation to run faster using multiple ranks.
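If you drive OpenMC from Python, this inner parallelism can be requested through the `mpi_args` argument of `openmc.run()`. A short sketch, assuming an MPI-enabled build and model XML files already in the working directory (the rank count is arbitrary):

```python
# Inner MPI only: a single transport calculation spread over 4 ranks.
import openmc

# mpi_args is prepended to the openmc executable invocation.
openmc.run(mpi_args=["mpiexec", "-n", "4"])
```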
Do you need OpenMC compiled with MPI for depletion-MPI?
- For a typical depletion-MPI pattern using only `mpi4py` (one Python rank ↔ one OpenMC process, no inner MPI): ❌ you do not strictly need OpenMC compiled with MPI. `mpiexec -n N python depletion_mpi.py` with a serial OpenMC build is perfectly fine.
- If you want both:
  - MPI between depletion tasks (`mpi4py`), and
  - MPI inside each OpenMC transport solve,

  then ✅ you do need `OPENMC_USE_MPI=on` in your OpenMC build and a careful layout of ranks/cores (a sketch of this combined pattern follows below).
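Here is a hedged sketch of that combined pattern. The `build_model()` helper and the chain-file path are hypothetical placeholders; `openmc.deplete.CoupledOperator` and `PredictorIntegrator` are the actual depletion API, but check the signatures against your installed version:

```python
# deplete_with_mpi.py -- sketch of outer + inner MPI combined.
# Launch with: mpiexec -n 8 python deplete_with_mpi.py
# With an MPI-enabled build, openmc.deplete can spread burnable
# materials across ranks while transport also runs MPI-parallel.
import openmc.deplete

from my_model import build_model  # hypothetical helper returning an openmc.model.Model

model = build_model()
op = openmc.deplete.CoupledOperator(model, chain_file="chain.xml")  # path assumed

integrator = openmc.deplete.PredictorIntegrator(
    op,
    timesteps=[30.0, 30.0, 30.0],  # three 30-day steps
    power=1.0e6,                   # total power in watts
    timestep_units="d",
)
integrator.integrate()
```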
What this means for you
You already have an OpenMC build with:
`MPI enabled: yes`
So you’re fully covered:
- To use depletion-MPI, just install `mpi4py` in your venv: `python -m pip install mpi4py`.
- Run your script with `mpiexec -n 4 python your_depletion_script.py` (a quick smoke test follows below).
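Before wiring this into a full depletion run, a quick smoke test confirms the outer layer works at all (the file name is arbitrary):

```python
# mpi_check.py -- run as: mpiexec -n 4 python mpi_check.py
# Each rank should print exactly one line.
from mpi4py import MPI

comm = MPI.COMM_WORLD
print(f"rank {comm.Get_rank()} of {comm.Get_size()}")
```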
You can choose to:
- Use only outer MPI (`mpi4py`, each rank runs serial OpenMC), or
- Combine outer MPI + inner MPI if you really need both levels of parallelism.