Mirror of https://github.com/slsdetectorgroup/aare.git, commit a6afa45b3b
## Unified Minuit2 fitting framework with FitModel API

### Models (`Models.hpp`)

Consolidate all model structs (Gaussian, RisingScurve, FallingScurve) into a single header. Each model provides: `eval`, `eval_and_grad`, `is_valid`, `estimate_par`, `compute_steps`, and `param_info` metadata. No Minuit2 dependency.

### Chi2 functors (`Chi2.hpp`)

Generic `Chi2Model1DGrad` (analytic gradient) templated on the model struct. Replaces the separate Chi2Gaussian, Chi2GaussianGradient, Chi2Scurves, and Chi2ScurvesGradient headers.

### FitModel (`FitModel.hpp`)

Configuration object wrapping `MnUserParameters`, strategy, tolerance, and user-override tracking. User constraints (fixed parameters, start values, limits) always take precedence over automatic data-driven estimates.

### Fit functions (`Fit.hpp`)

- `fit_pixel<Model, FCN>(model, x, y, y_err)` -> single-pixel, self-contained
- `fit_pixel<Model, FCN>(model, upar_local, x, y, y_err)` -> pre-cloned upar for hot loops
- `fit_3d<Model, FCN>(model, x, y, y_err, ..., n_threads)` -> row-parallel over pixel grid

### Python bindings

- `Pol1`, `Pol2`, `Gaussian`, `RisingScurve`, `FallingScurve` model classes with `FixParameter`, `SetParLimits`, `SetParameter`, and properties for `max_calls`, `tolerance`, `compute_errors`
- Single `fit(model, x, y, y_err, n_threads)` dispatch replacing the old `fit_gaus_minuit`, `fit_gaus_minuit_grad`, `fit_scurve_minuit_grad`, etc.

### Benchmarks

- Updated `fit_benchmark.cpp` (Google Benchmark) to use the new FitModel API
- Jupyter notebooks for 1D and 3D S-curve fitting (lmfit vs Minuit2 analytic)
- ~1.8x speedup over lmfit, near-linear thread scaling up to physical core count

---------

Co-authored-by: Erik Fröjdh <erik.frojdh@psi.ch>
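The `Chi2Model1DGrad` idea described above (the model supplies a value and an analytic gradient; the functor combines them into a chi-square and its gradient) can be sketched in plain NumPy. Names and structure here are illustrative only, not the library's C++ API:

```python
import numpy as np

# Illustrative sketch (not aare's API): a model provides eval() and
# eval_and_grad(); a generic chi2 functor turns them into an objective
# value and its analytic gradient, as Chi2Model1DGrad does for Minuit2.

def gaussian_eval(x, p):
    # p = [amplitude, mean, sigma]
    return p[0] * np.exp(-0.5 * ((x - p[1]) / p[2]) ** 2)

def gaussian_eval_and_grad(x, p):
    a, mu, s = p
    z = (x - mu) / s
    f = a * np.exp(-0.5 * z ** 2)
    # Analytic partials df/d[a, mu, sigma], one row per parameter.
    grad = np.stack([f / a, f * z / s, f * z ** 2 / s])
    return f, grad

def chi2_and_grad(p, x, y, y_err):
    f, g = gaussian_eval_and_grad(x, p)
    r = (f - y) / y_err
    chi2 = np.sum(r ** 2)
    # d(chi2)/dp_k = 2 * sum_i r_i * (df_i/dp_k) / y_err_i
    grad = 2.0 * np.sum((r / y_err) * g, axis=1)
    return chi2, grad

# Sanity check: analytic gradient vs central finite differences.
x = np.linspace(-5, 5, 101)
y = gaussian_eval(x, np.array([2.0, 0.5, 1.2]))
y_err = np.full_like(x, 0.1)
p = np.array([1.8, 0.3, 1.0])
c, g = chi2_and_grad(p, x, y, y_err)
eps = 1e-6
for k in range(3):
    dp = np.zeros(3)
    dp[k] = eps
    num = (chi2_and_grad(p + dp, x, y, y_err)[0]
           - chi2_and_grad(p - dp, x, y, y_err)[0]) / (2 * eps)
    assert abs(num - g[k]) < 1e-4 * max(1.0, abs(g[k]))
```

The payoff of the analytic gradient is that the minimizer avoids extra function evaluations for numerical differentiation, which is where the speedup over gradient-free fitting comes from.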
71 KiB
In [1]:
```python
import numpy as np
import matplotlib.pyplot as plt
import sys

sys.path.insert(0, '/home/ferjao_k/sw/aare/build')
from aare import Pol1, Pol2
```
## Ground truth
In [2]:
```python
x = np.linspace(0, 10, 50)

# Pol1: y = 3.0 + 1.5*x
true_pol1 = [3.0, 1.5]
y_pol1 = true_pol1[0] + true_pol1[1] * x
y_pol1_noisy = y_pol1 + np.random.default_rng(42).normal(0, 0.5, x.size)

# Pol2: y = 2.0 - 0.5*x + 0.3*x^2
true_pol2 = [2.0, -0.5, 0.3]
y_pol2 = true_pol2[0] + true_pol2[1] * x + true_pol2[2] * x**2
y_pol2_noisy = y_pol2 + np.random.default_rng(7).normal(0, 1.0, x.size)
```
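As an independent cross-check on this generated data (not part of the original notebook), ordinary least squares via `np.polyfit` should recover parameters close to the ground truth, since the Minuit2 fit with unit errors minimizes the same objective:

```python
import numpy as np

# Regenerate the Pol1 data exactly as above.
x = np.linspace(0, 10, 50)
true_pol1 = [3.0, 1.5]
y_pol1_noisy = (true_pol1[0] + true_pol1[1] * x
                + np.random.default_rng(42).normal(0, 0.5, x.size))

# polyfit returns coefficients highest degree first: [slope, intercept].
p1_hat, p0_hat = np.polyfit(x, y_pol1_noisy, 1)
print(f"p0={p0_hat:.4f} p1={p1_hat:.4f}")
```

With 50 points and noise of std 0.5, the intercept and slope estimates should land within a few hundredths to a tenth of the true values (3.0 and 1.5).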
## Fit with error estimation
In [11]:
```python
m1 = Pol1()
m2 = Pol2()

res1 = m1.fit(x, y_pol1_noisy)
res2 = m2.fit(x, y_pol2_noisy)

p1 = res1['par']
print("=== Pol1 ===")
print(f" True:   p0={true_pol1[0]:.4f} p1={true_pol1[1]:.4f}")
print(f" Fitted: p0={p1[0]:.4f} p1={p1[1]:.4f}")
print(f" Chi2:   {res1['chi2']}")
print()

p2 = res2['par']
print("=== Pol2 ===")
print(f" True:   p0={true_pol2[0]:.4f} p1={true_pol2[1]:.4f} p2={true_pol2[2]:.4f}")
print(f" Fitted: p0={p2[0]:.4f} p1={p2[1]:.4f} p2={p2[2]:.4f}")
print(f" Chi2:   {res2['chi2']}")
```
```
=== Pol1 ===
 True:   p0=3.0000 p1=1.5000
 Fitted: p0=2.9325 p1=1.5226
 Chi2:   [7.00870822]

=== Pol2 ===
 True:   p0=2.0000 p1=-0.5000 p2=0.3000
 Fitted: p0=2.1391 p1=-0.8442 p2=0.3383
 Chi2:   [34.11224393]
```
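A quick sanity check on the reported chi-square values (an addition, not in the original notebook): assuming the fit falls back to unit errors when no `y_err` is supplied, the reduced chi-square should be of the same order as the squared noise level used to generate the data (0.5² for Pol1, 1.0² for Pol2):

```python
# Reduced chi2 from the values printed above.
n = 50
red1 = 7.00870822 / (n - 2)   # Pol1: 2 fitted parameters
red2 = 34.11224393 / (n - 3)  # Pol2: 3 fitted parameters
print(red1, red2)
```

Both come out below the nominal noise variances (≈0.146 vs 0.25, ≈0.726 vs 1.0), which is plausible scatter for a single 50-point realization.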
## Plot
In [13]:
```python
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(12, 5))

# Pol1
ax1.plot(x, y_pol1_noisy, 'o', ms=4, alpha=0.6, label='data')
ax1.plot(x, y_pol1, 'k--', label='truth')
ax1.plot(x, m1(x, res1['par']), 'r-', lw=2, label='fit')
ax1.set_title('Pol1')
ax1.legend()
ax1.grid(alpha=0.3)

# Pol2
ax2.plot(x, y_pol2_noisy, 'o', ms=4, alpha=0.6, label='data')
ax2.plot(x, y_pol2, 'k--', label='truth')
ax2.plot(x, m2(x, res2['par']), 'r-', lw=2, label='fit')
ax2.set_title('Pol2')
ax2.legend()
ax2.grid(alpha=0.3)

fig.tight_layout()
plt.show()
```