# Configuration options

## About MPI parallelism
When talking about **MPI** with OpenMC + Python, it helps to separate two different layers:

### 1. MPI at the Python level (`mpi4py`)

This is the **outer** parallelism:
* You run your script with:

  ```bash
  mpiexec -n 4 python depletion_mpi.py
  ```
* `mpi4py` handles communication between ranks (e.g. splitting burnup steps, distributing materials, etc.).
* Each rank calls `openmc.deplete` or `openmc.run()` from Python.
* For this you only need:
  ```bash
  python -m pip install mpi4py
  ```

  in the same virtual environment where `openmc` is installed.
This works **even if OpenMC itself was built *without* MPI**: each rank just runs its own independent OpenMC process. A minimal sketch of this pattern follows.
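To make the outer layer concrete, here is a minimal sketch of rank-based work splitting with `mpi4py`. The case names and per-case directories are hypothetical; each rank simply launches its own serial OpenMC run:

```python
from mpi4py import MPI

import openmc

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

# Hypothetical independent cases; rank r takes every size-th entry starting at r.
cases = [f"case_{i}" for i in range(8)]

for case in cases[rank::size]:
    # Each rank runs its own serial OpenMC process in the case's directory,
    # which is assumed to already contain the XML input files.
    openmc.run(cwd=case)
```

Launched with `mpiexec -n 4 python depletion_mpi.py`, the four ranks work through the eight cases independently, with no MPI inside OpenMC itself.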
### 2. MPI inside OpenMC (transport parallelism)

This is the **inner** parallelism, in the C++ code:

* Enabled at build time with:

  ```bash
  cmake .. -DOPENMC_USE_MPI=on ...
  ```
* Then each OpenMC run can use multiple MPI ranks for particle transport.
* Typically you’d start OpenMC with `mpiexec` (directly or via the Python API / executor).
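For reference, a fuller build sequence might look like the following sketch; the out-of-source layout, install prefix, and job count are assumptions, not project requirements:

```bash
# Illustrative CMake build with OpenMC's internal MPI support enabled.
mkdir build && cd build
cmake .. -DOPENMC_USE_MPI=on -DCMAKE_INSTALL_PREFIX="$HOME/.local"
make -j8
make install
```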
This is useful if you want a **single calculation** to run faster using multiple ranks.
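From Python, one way to start such a run is the `mpi_args` parameter of `openmc.run()`; the rank count here is purely illustrative:

```python
import openmc

# Run a single transport calculation across 4 MPI ranks.
# This requires an OpenMC build configured with -DOPENMC_USE_MPI=on.
openmc.run(mpi_args=["mpiexec", "-n", "4"])
```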
### Do you need OpenMC compiled with MPI for depletion-MPI?
* For a typical **depletion-MPI pattern using only `mpi4py`** (one Python rank ↔ one OpenMC process, no inner MPI):

  * ❌ You do **not strictly need** OpenMC compiled with MPI.
  * `mpiexec -n N python depletion_mpi.py` with a **serial** OpenMC build is perfectly fine.
* If you want **both**:

  * MPI between depletion tasks (`mpi4py`), **and**
  * MPI *inside* each OpenMC transport solve,

  then ✅ you **do** need `OPENMC_USE_MPI=on` in your OpenMC build and a careful layout of ranks/cores (see the sketch after this list).
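One common layout, sketched here under the assumption of a 32-core node (the rank and thread counts are illustrative, not recommendations), is to run the depletion script under `mpiexec` against an MPI-enabled build, so both layers share the same set of ranks, and to fill the remaining cores with OpenMP threads:

```bash
# Illustrative layout: 4 MPI ranks shared by mpi4py and OpenMC's transport,
# each rank using 8 OpenMP threads (4 x 8 = 32 cores).
export OMP_NUM_THREADS=8
mpiexec -n 4 python depletion_mpi.py
```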
### What this means for you
You already have an OpenMC build with:

* `MPI enabled: yes`

So you’re fully covered:
* To use depletion-MPI, just install `mpi4py` in your venv:

  ```bash
  python -m pip install mpi4py
  ```
* Run your script with:

  ```bash
  mpiexec -n 4 python your_depletion_script.py
  ```
You can choose to:

* Use **only outer MPI** (`mpi4py`, each rank runs serial OpenMC), or
* Combine outer MPI + inner MPI if you really need both levels of parallelism.
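Either way, the depletion script itself looks the same. As a minimal sketch of `your_depletion_script.py`: the input files, chain file, timesteps, and power level below are placeholders to replace with your own:

```python
import openmc
import openmc.deplete

# Placeholder inputs: read an existing model from its XML files.
model = openmc.Model.from_xml()

# Hypothetical depletion chain file; point this at your own chain.
op = openmc.deplete.CoupledOperator(model, chain_file="chain.xml")

# Illustrative schedule: four 30-day steps at a placeholder total power (W).
integrator = openmc.deplete.PredictorIntegrator(
    op, timesteps=[30.0] * 4, power=1e4, timestep_units="d"
)
integrator.integrate()
```

Run under `mpiexec -n 4 python your_depletion_script.py`, this is the pattern the sections above describe: with `mpi4py` installed, `openmc.deplete` can split the depletion work between ranks.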