From f2fa0a7ef6ca726f707adcb2f535fe75938bcde2 Mon Sep 17 00:00:00 2001
From: Stafie Alex PSI
Date: Thu, 11 Dec 2025 13:28:47 +0100
Subject: [PATCH] updated documentation

---
 .gitignore               |  0
 LICENSE                  |  0
 README.md                |  0
 README_LIB.md            |  0
 README_config.md         | 79 ++++++++++++++++++++++++++++++++++++++++
 tests/xml_test/README.md |  0
 6 files changed, 79 insertions(+)
 mode change 100644 => 100755 .gitignore
 mode change 100644 => 100755 LICENSE
 mode change 100644 => 100755 README.md
 mode change 100644 => 100755 README_LIB.md
 mode change 100644 => 100755 README_config.md
 mode change 100644 => 100755 tests/xml_test/README.md

diff --git a/.gitignore b/.gitignore
old mode 100644
new mode 100755
diff --git a/LICENSE b/LICENSE
old mode 100644
new mode 100755
diff --git a/README.md b/README.md
old mode 100644
new mode 100755
diff --git a/README_LIB.md b/README_LIB.md
old mode 100644
new mode 100755
diff --git a/README_config.md b/README_config.md
old mode 100644
new mode 100755
index e69de29..0152a32
--- a/README_config.md
+++ b/README_config.md
@@ -0,0 +1,79 @@
+# Configuration options
+
+## About MPI parallelism
+
+When talking about **MPI** with OpenMC + Python, it helps to separate two different layers:
+
+### 1. MPI at the Python level (`mpi4py`)
+
+This is the **outer** parallelism:
+
+* You run your script with:
+
+  ```bash
+  mpiexec -n 4 python depletion_mpi.py
+  ```
+* `mpi4py` handles communication between ranks (e.g. splitting burnup steps, distributing materials, etc.).
+* Each rank calls `openmc.deplete` or `openmc.run()` from Python.
+* For this you only need:
+
+  ```bash
+  python -m pip install mpi4py
+  ```
+
+  in the same virtual environment where `openmc` is installed.
+
+This works **even if OpenMC itself was built *without* MPI**: each rank just runs its own independent OpenMC process.
+
+### 2. MPI inside OpenMC (transport parallelism)
+
+This is the **inner** parallelism, in the C++ code:
+
+* Enabled at build time with:
+
+  ```bash
+  cmake .. -DOPENMC_USE_MPI=on ...
+  ```
+* Then each OpenMC run can use multiple MPI ranks for particle transport.
+* Typically you'd start OpenMC with `mpiexec` (directly or via the Python API / executor).
+
+This is useful if you want a **single calculation** to run faster using multiple ranks.
+
+### Do you need OpenMC compiled with MPI for depletion-MPI?
+
+* For a typical **depletion-MPI pattern using only `mpi4py`** (one Python rank ↔ one OpenMC process, no inner MPI):
+
+  * ❌ You do **not strictly need** OpenMC compiled with MPI.
+  * `mpiexec -n N python depletion_mpi.py` with a **serial** OpenMC build is perfectly fine.
+
+* If you want **both**:
+
+  * MPI between depletion tasks (`mpi4py`), **and**
+  * MPI *inside* each OpenMC transport solve,
+
+  then ✅ you **do** need `OPENMC_USE_MPI=on` in your OpenMC build and a careful layout of ranks/cores.
+
+### What this means for you
+
+You already have an OpenMC build with:
+
+* `MPI enabled: yes`
+
+So you're fully covered:
+
+* To use depletion-MPI, just install `mpi4py` in your venv:
+
+  ```bash
+  python -m pip install mpi4py
+  ```
+* Run your script with:
+
+  ```bash
+  mpiexec -n 4 python your_depletion_script.py
+  ```
+
+You can choose to:
+
+* Use **only outer MPI** (`mpi4py`, each rank runs serial OpenMC), or
+* Combine outer MPI + inner MPI if you really need both levels of parallelism.
+
diff --git a/tests/xml_test/README.md b/tests/xml_test/README.md
old mode 100644
new mode 100755
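
The "outer MPI" pattern described in the new `README_config.md` can be sketched in a few lines of Python. This is an illustrative example, not part of the patch: the `split_across_ranks` helper and the case file names are made up for the sketch, while the `mpi4py` calls (`MPI.COMM_WORLD`, `Get_rank`, `Get_size`) follow its standard API. Each rank takes a disjoint slice of independent cases and would run its own serial OpenMC process on them, so no inner (transport-level) MPI is required.

```python
# Sketch of the outer-MPI pattern: each rank handles its own share of
# independent cases with a serial OpenMC build. The helper name and the
# case list are hypothetical, chosen only for illustration.

def split_across_ranks(items, rank, size):
    """Return the slice of `items` that rank `rank` of `size` ranks should run."""
    return [item for i, item in enumerate(items) if i % size == rank]

if __name__ == "__main__":
    try:
        from mpi4py import MPI  # present when launched via mpiexec
        comm = MPI.COMM_WORLD
        rank, size = comm.Get_rank(), comm.Get_size()
    except ImportError:
        rank, size = 0, 1  # serial fallback so the sketch runs anywhere

    cases = ["fuel_a.xml", "fuel_b.xml", "fuel_c.xml", "fuel_d.xml"]
    for case in split_across_ranks(cases, rank, size):
        # Here each rank would call openmc.run() / openmc.deplete on its case.
        print(f"rank {rank}/{size} handling {case}")
```

Launched as `mpiexec -n 4 python depletion_mpi.py`, each of the four ranks prints (and would process) exactly one case; run without `mpiexec`, the serial fallback processes all of them.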