updated config notes on MPI depletion

Stafie Alex PSI
2025-12-11 14:24:16 +01:00
parent ec1b5c0bf8
commit 7687175d4e
+31 -56
# Configuration options
## About MPI parallelism (corrected for depletion)
For OpenMC there are **two layers of MPI**, but the rules are a bit different when you use the **Python depletion module**.
### 1. How depletion actually uses MPI
When you use `openmc.deplete`:
* Python imports the **OpenMC shared library** (`libopenmc.so`) directly via `openmc.lib`.
* The depletion **integrator** (e.g. `Integrator`, `CECMIntegrator`, etc.) calls the transport solver **inside that shared library**, not via `openmc.run()` and not via subprocesses.
* If you want **parallel depletion**, you run your script like this (a minimal end-to-end sketch follows this list):
```bash
mpiexec -n 4 python depletion_mpi.py
```
* Inside the script, `mpi4py` is used to:
  * Get `MPI.COMM_WORLD` or a sub-communicator
  * Distribute depletion work across ranks
  * Coordinate calls into the OpenMC C++ side
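To make the flow concrete, here is a minimal sketch of such a script. It is an illustration under assumptions, not a prescribed workflow: it assumes the geometry/materials/settings XML files already exist in the working directory, the chain file name, time steps, and power level are placeholders, and `CoupledOperator` is one possible operator choice (the notes above only mention the integrator classes).
```python
# depletion_mpi.py - minimal sketch (assumptions: model XML files already
# exist in the working directory; "chain.xml" is a placeholder path).
# Launch with:  mpiexec -n 4 python depletion_mpi.py
from mpi4py import MPI            # every rank executes this same script
import openmc
import openmc.deplete

# Load the model on each rank; transport is driven in memory through
# openmc.lib (libopenmc.so), never through openmc.run().
model = openmc.Model.from_xml()

op = openmc.deplete.CoupledOperator(model, chain_file="chain.xml")

timesteps = [30.0, 30.0, 30.0]    # placeholder step lengths (days)
power = 174.0                     # placeholder total power (W)

integrator = openmc.deplete.CECMIntegrator(op, timesteps, power,
                                            timestep_units="d")
integrator.integrate()            # transport + depletion across all MPI ranks

if MPI.COMM_WORLD.rank == 0:
    print("done; results written to depletion_results.h5")
```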
Because the OpenMC library itself also uses MPI for transport in this mode, two things must match:
1. OpenMC must be compiled with MPI
* CMake:
```bash
cmake .. -DOPENMC_USE_MPI=on ...
```
* `openmc --version` should show:
```text
MPI enabled: yes
```
2. `mpi4py` must be built against the same MPI implementation that was used to build OpenMC. When installing the Python API from the OpenMC source directory, include the `depletion-mpi` extra (a quick consistency check is sketched after the install command):
```bash
python -m pip install .[depletion-mpi,test,docs,vtk]
```
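One way to sanity-check the second requirement (a rough sketch, not an official OpenMC tool) is to ask `mpi4py` which MPI library it was compiled against and compare that with the MPI implementation used for the OpenMC build (e.g. what `mpiexec --version` reports on the same machine):
```python
# check_mpi.py - print the MPI implementation behind mpi4py so it can be
# compared with the one used to build libopenmc.
from mpi4py import MPI

print(MPI.Get_library_version())    # e.g. "Open MPI v4.1.x, ..."
print("mpi4py sees", MPI.COMM_WORLD.size, "rank(s)")
```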
You do **not** call `openmc.run()` yourself in depletion; the integrator does that via the in-memory C++ API.
### 2. Outer vs inner MPI
Conceptually you can still think of:
* **Outer MPI** = `mpi4py` distributing work between Python ranks
* **Inner MPI** = OpenMC C++ solver running on multiple ranks for one calculation
For OpenMC's depletion module, the integrator assumes there is a valid MPI world communicator shared between `mpi4py` and the OpenMC C++ library. That only works cleanly if OpenMC was compiled with MPI and linked to the same MPI libraries as `mpi4py`.
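As a purely conceptual illustration of the two layers (this is not what `openmc.deplete` does internally, and `n_cases` is a made-up parameter): the outer layer is `mpi4py` partitioning `MPI.COMM_WORLD` between tasks, while the inner layer would be the transport solver running on the ranks handed to each task.
```python
# outer_vs_inner.py - illustration only: split the world communicator into
# groups, as an "outer" layer might; the "inner" layer would be OpenMC's C++
# transport solver running on the ranks inside each group.
from mpi4py import MPI

world = MPI.COMM_WORLD
n_cases = 2                                   # hypothetical number of groups
color = world.rank % n_cases                  # assign each rank to a group
case_comm = world.Split(color=color, key=world.rank)

print(f"world rank {world.rank}/{world.size} -> "
      f"group {color}, local rank {case_comm.rank}/{case_comm.size}")
```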