71 Commits

Author SHA1 Message Date
10e4e10431 function signature for push back 2025-04-07 15:33:37 +02:00
017960d963 added push_back property 2025-04-07 13:41:14 +02:00
a12e43b176 underlying container of ClusterVector is now a std::vector 2025-04-07 12:27:44 +02:00
9de84a7f87 added some python tests 2025-04-04 17:19:15 +02:00
885309d97c fix build 2025-04-03 17:14:28 +02:00
e24ed68416 fixed include 2025-04-03 16:50:02 +02:00
248d25486f refactored python files 2025-04-03 16:38:12 +02:00
a24bbd9cf9 started to do python refactoring 2025-04-03 11:56:25 +02:00
d7ef9bb1d8 missed some refactoring of datatypes 2025-04-03 11:36:15 +02:00
de9fc16e89 generalize is_selected 2025-04-03 09:28:54 +02:00
85a6b5b95e suppress compiler warnings 2025-04-03 09:28:02 +02:00
50eeba4005 restructured GainMap to have own class and generalized 2025-04-02 17:58:26 +02:00
98d2d6098e refactored other cpp files 2025-04-02 16:00:46 +02:00
61af1105a1 templated eta and updated test 2025-04-02 14:42:38 +02:00
240960d3e7 generalized FindCluster to read in general cluster sizes - assuming that finding cluster center is equal for all clusters 2025-04-02 12:05:16 +02:00
04728929cb implemented sum_2x2() for general clusters, only one calculate_eta2 function for all clusters 2025-04-01 18:29:08 +02:00
3083d51699 merge conflict 2025-04-01 17:50:11 +02:00
4240942cec solved merge conflict 2025-04-01 17:48:48 +02:00
745d09fbe9 changed push_back to take Cluster as input argument 2025-04-01 15:30:10 +02:00
a42c0d645b added roi, noise and gain (#143)
- Moved definitions of Cluster_2x2 and Cluster_3x3 to their own file
- Added optional members for ROI, noise_map and gain_map in ClusterFile

**API:**

After creating the ClusterFile, the user can set one or all of: roi, noise_map, gain_map.

```python
f = ClusterFile(fname)
f.set_roi(roi) #aare.ROI
f.set_noise_map(noise_map) #numpy array
f.set_gain_map(gain_map) #numpy array
```

**When reading clusters they are evaluated in this order** (see the sketch below):

1. If an ROI is set, check that the cluster is within the ROI.
2. If a noise_map is set, check that the cluster meets at least one of the
   following conditions:
    - Center pixel above noise
    - Highest 2x2 sum above 2x noise
    - 3x3 sum above 3x noise
3. If a gain_map is set, apply the gain map before returning the clusters
   (it is not used for the noise cut).
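
For illustration, a full read with all three cuts enabled could look like the sketch below. The file name, detector shape, ROI construction and the `read_clusters` call are assumptions based on the API described above, not verbatim from this PR.

```python
import numpy as np
import aare

f = aare.ClusterFile("clusters.clust")                   # placeholder file name

roi = aare.ROI()                                         # aare.ROI; exact construction/fields assumed
f.set_roi(roi)
f.set_noise_map(np.zeros((512, 1024), dtype=np.int32))   # placeholder noise map
f.set_gain_map(np.ones((512, 1024)))                     # placeholder gain map

# Clusters outside the ROI or failing the noise cut are skipped;
# the gain map is applied to the clusters that are returned.
clusters = f.read_clusters(1000)
```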

**Open questions:**
1. Check for out of bounds access in noise and gain map?

closes #139 
closes #135 
closes #90
2025-04-01 14:31:25 +02:00
508adf5016 refactoring of remaining files 2025-04-01 10:01:23 +02:00
e038bd1646 refactored and put calculate_eta function in separate file 2025-03-31 17:35:39 +02:00
7e5f91c6ec added benchmark to time generalized calculate_eta - twice as long, so will keep the specific versions for 2x2 and 3x3 clusters 2025-03-31 17:04:57 +02:00
ed9ef7c600 removed analyze_cluster function as not used anymore 2025-03-31 12:26:29 +02:00
57bb6c71ae ClusterSize should be larger than 1 2025-03-28 14:49:55 +01:00
f8f98b6ec3 Generalized calculate_eta2 function to work with general cluster types 2025-03-28 14:29:20 +01:00
0876b6891a cpp Cluster and ClusterVector and ClusterFile are templated now, they support generic cluster types 2025-03-25 21:42:50 +01:00
6ad76f63c1 Fixed reading clusters with ROI (#142)
Fixed incorrect reading of clusters with ROI


closes #141
2025-03-24 14:28:10 +01:00
6e7e81b36b complete mess but need to install RedHat 9 2025-03-21 16:32:54 +01:00
b529b6d33b Merge branch 'main' into developer
2025-03-19 19:29:15 +01:00
602b04e49f bumped version number
2025-03-18 17:47:05 +01:00
11cd2ec654 Interpolate (#137)
- added eta based interpolation
2025-03-18 17:45:38 +01:00
e59a361b51 removed workspace
2025-03-17 15:23:55 +01:00
1ad362ccfc added action for gitea (#136)
2025-03-17 15:21:59 +01:00
332bdeb02b modified algo 2025-03-14 11:07:09 +01:00
3a987319d4 WIP 2025-03-05 21:51:23 +01:00
5614cb4673 WIP 2025-03-05 17:40:08 +01:00
8ae6bb76f8 removed warnings added clang-tidy 2025-02-21 11:18:39 +01:00
1d2c38c1d4 Enable VarClusterFinder (#134)
Co-authored-by: xiangyu.xie <xiangyu.xie@psi.ch>
2025-02-19 16:11:24 +01:00
b7a47576a1 Multi threaded fitting and returning chi2 (#132)
Co-authored-by: Patrick <patrick.sieberer@psi.ch>
Co-authored-by: JulianHeymes <julian.heymes@psi.ch>
Co-authored-by: Dhanya Thattil <dhanya.thattil@psi.ch>
2025-02-19 07:19:59 +01:00
fc1c9f35d6 Merge branch 'main' into developer 2025-02-18 21:52:20 +01:00
5d2f25a6e9 bumped version number 2025-02-18 21:44:03 +01:00
6a83988485 Added chi2 to fit results (#131)
- fit_gaus and fit_pol1 now return a dict
- calculate chi2 after fit
- cleaned up code
2025-02-18 21:13:27 +01:00
8abfc68138 fixed linking to lmfit (#130)
using "$<BUILD_INTERFACE:lmfit>" to exclude the target lmfit from being
included in the installed aare target
2025-02-18 15:54:52 +01:00
8ff6f9f506 fixed linking to lmfit 2025-02-18 15:49:46 +01:00
dcb9a98faa bumped version 2025-02-12 16:49:30 +01:00
7309cff47c Added fitting with lmfit (#128)
- added stand alone fitting using:
https://jugit.fz-juelich.de/mlz/lmfit.git
- fit_gaus, fit_pol1 with and without errors
- multi threaded fitting

---------

Co-authored-by: JulianHeymes <julian.heymes@psi.ch>
2025-02-12 16:35:48 +01:00
c0c5e07ad8 added decoding of adc_sar_04 (#127) 2025-02-12 16:17:32 +01:00
2faa317bdf removed debug line 2025-02-12 10:59:18 +01:00
f7031d7f87 Update CMakeLists.txt
Removed flto=auto which caused issues with gcc 8.5
2025-02-12 10:52:55 +01:00
d86cb533c8 Fix minor warnings (#126)
- Unused variables
- signed vs. unsigned
- added -flto=auto
2025-02-11 11:48:01 +01:00
4c750cc3be Fixing ROI read of RawFile (#125)
- Bugfixes
- New abstraction for detector geometry
- Tests for updating geo with ROI
2025-02-11 11:08:22 +01:00
e96fe31f11 removed main and token 2025-02-05 15:55:55 +01:00
cd5a738696 disable upload on dev 2025-02-05 15:44:45 +01:00
1ba43b69d3 fix 2025-02-05 15:16:16 +01:00
fff536782b disable auto upload 2025-02-05 15:13:53 +01:00
5a3ca2ae2d Decoding for ADC SAR 05 64->16bit (#124)
Co-authored-by: Patrick <patrick.sieberer@psi.ch>
2025-02-05 14:40:26 +01:00
078e5d81ec docs 2025-01-15 16:40:34 +01:00
6cde968c60 summing 2x2 2025-01-15 16:12:06 +01:00
f6d736facd docs for ClusterFile 2025-01-15 09:15:41 +01:00
e1cc774d6c Multi threaded cluster finder (#117) 2025-01-14 21:36:25 +01:00
d0f435a7ab bounds checking on subfiles 2025-01-10 19:02:50 +01:00
7ce02006f2 clear pedestal 2025-01-10 17:26:23 +01:00
7550a2cb97 fixing read bug 2025-01-10 15:33:56 +01:00
caf7b4ecdb added docs for ClusterFinderMT 2025-01-10 10:22:04 +01:00
72d10b7735 Multi threaded cluster finder. (#115)
Added a prototype for the multi threaded cluster finder including python
bindings
2025-01-09 16:55:35 +01:00
cc95561eda MultiThreaded Cluster finder 2025-01-09 16:53:22 +01:00
dc9e10016d WIP 2025-01-08 16:45:24 +01:00
21ce7a3efa bumped version 2025-01-07 16:33:16 +01:00
acdce8454b moved pd to double 2025-01-07 15:01:43 +01:00
d07da42745 bitdepths 2025-01-07 12:27:01 +01:00
54 changed files with 2803 additions and 1235 deletions

.clang-tidy (new file)

@ -0,0 +1,42 @@
---
Checks: '*,
-altera-*,
-android-cloexec-fopen,
-cppcoreguidelines-pro-bounds-array-to-pointer-decay,
-cppcoreguidelines-pro-bounds-pointer-arithmetic,
-fuchsia*,
-readability-else-after-return,
-readability-avoid-const-params-in-decls,
-readability-identifier-length,
-cppcoreguidelines-pro-bounds-constant-array-index,
-cppcoreguidelines-pro-type-reinterpret-cast,
-llvm-header-guard,
-modernize-use-nodiscard,
-misc-non-private-member-variables-in-classes,
-readability-static-accessed-through-instance,
-readability-braces-around-statements,
-readability-isolate-declaration,
-readability-implicit-bool-conversion,
-readability-identifier-length,
-readability-identifier-naming,
-hicpp-signed-bitwise,
-hicpp-no-array-decay,
-hicpp-braces-around-statements,
-google-runtime-references,
-google-readability-todo,
-google-readability-braces-around-statements,
-modernize-use-trailing-return-type,
-llvmlibc-*'
HeaderFilterRegex: \.hpp
FormatStyle: none
CheckOptions:
- { key: readability-identifier-naming.NamespaceCase, value: lower_case }
# - { key: readability-identifier-naming.FunctionCase, value: lower_case }
- { key: readability-identifier-naming.ClassCase, value: CamelCase }
# - { key: readability-identifier-naming.MethodCase, value: CamelCase }
# - { key: readability-identifier-naming.StructCase, value: CamelCase }
# - { key: readability-identifier-naming.VariableCase, value: lower_case }
- { key: readability-identifier-naming.GlobalConstantCase, value: UPPER_CASE }
...


@ -0,0 +1,58 @@
name: Build the package using cmake then documentation
on:
  workflow_dispatch:
  push:

permissions:
  contents: read
  pages: write
  id-token: write

jobs:
  build:
    strategy:
      fail-fast: false
      matrix:
        platform: [ubuntu-latest, ] # macos-12, windows-2019]
        python-version: ["3.12",]

    runs-on: ${{ matrix.platform }}

    # The setup-miniconda action needs this to activate miniconda
    defaults:
      run:
        shell: "bash -l {0}"

    steps:
      - uses: actions/checkout@v4

      - name: Setup dev env
        run: |
          sudo apt-get update
          sudo apt-get -y install cmake gcc g++

      - name: Get conda
        uses: conda-incubator/setup-miniconda@v3.0.4
        with:
          python-version: ${{ matrix.python-version }}
          channels: conda-forge

      - name: Prepare
        run: conda install doxygen sphinx=7.1.2 breathe pybind11 sphinx_rtd_theme furo nlohmann_json zeromq fmt numpy

      - name: Build library
        run: |
          mkdir build
          cd build
          cmake .. -DAARE_SYSTEM_LIBRARIES=ON -DAARE_DOCS=ON
          make -j 2
          make docs


@@ -31,7 +31,7 @@ set(CMAKE_MODULE_PATH "${CMAKE_CURRENT_SOURCE_DIR}/cmake" ${CMAKE_MODULE_PATH})
 
 # General options
-option(AARE_PYTHON_BINDINGS "Build python bindings" ON)
+option(AARE_PYTHON_BINDINGS "Build python bindings" OFF)
 option(AARE_TESTS "Build tests" OFF)
 option(AARE_BENCHMARKS "Build benchmarks" OFF)
 option(AARE_EXAMPLES "Build examples" OFF)

@@ -60,6 +60,8 @@ if(AARE_SYSTEM_LIBRARIES)
     set(AARE_FETCH_CATCH OFF CACHE BOOL "Disabled FetchContent for catch2" FORCE)
     set(AARE_FETCH_JSON OFF CACHE BOOL "Disabled FetchContent for nlohmann::json" FORCE)
     set(AARE_FETCH_ZMQ OFF CACHE BOOL "Disabled FetchContent for libzmq" FORCE)
+    # Still fetch lmfit when setting AARE_SYSTEM_LIBRARIES since this is not available
+    # on conda-forge
 endif()
 
 if(AARE_VERBOSE)

@@ -78,15 +80,30 @@ endif()
 set(CMAKE_EXPORT_COMPILE_COMMANDS ON)
 
 if(AARE_FETCH_LMFIT)
-    set(lmfit_patch git apply ${CMAKE_CURRENT_SOURCE_DIR}/patches/lmfit.patch)
-    FetchContent_Declare(
-        lmfit
-        GIT_REPOSITORY https://jugit.fz-juelich.de/mlz/lmfit.git
-        GIT_TAG main
-        PATCH_COMMAND ${lmfit_patch}
-        UPDATE_DISCONNECTED 1
-        EXCLUDE_FROM_ALL
-    )
+    #TODO! Should we fetch lmfit from the web or inlcude a tar.gz in the repo?
+    set(LMFIT_PATCH_COMMAND git apply ${CMAKE_CURRENT_SOURCE_DIR}/patches/lmfit.patch)
+
+    # For cmake < 3.28 we can't supply EXCLUDE_FROM_ALL to FetchContent_Declare
+    # so we need this workaround
+    if (${CMAKE_VERSION} VERSION_LESS "3.28")
+        FetchContent_Declare(
+            lmfit
+            GIT_REPOSITORY https://jugit.fz-juelich.de/mlz/lmfit.git
+            GIT_TAG main
+            PATCH_COMMAND ${LMFIT_PATCH_COMMAND}
+            UPDATE_DISCONNECTED 1
+        )
+    else()
+        FetchContent_Declare(
+            lmfit
+            GIT_REPOSITORY https://jugit.fz-juelich.de/mlz/lmfit.git
+            GIT_TAG main
+            PATCH_COMMAND ${LMFIT_PATCH_COMMAND}
+            UPDATE_DISCONNECTED 1
+            EXCLUDE_FROM_ALL 1
+        )
+    endif()
 
     #Disable what we don't need from lmfit
     set(BUILD_TESTING OFF CACHE BOOL "")
     set(LMFIT_CPPTEST OFF CACHE BOOL "")

@@ -94,12 +111,16 @@ if(AARE_FETCH_LMFIT)
     set(LMFIT_CPPTEST OFF CACHE BOOL "")
     set(BUILD_SHARED_LIBS OFF CACHE BOOL "")
 
-    FetchContent_MakeAvailable(lmfit)
+    if (${CMAKE_VERSION} VERSION_LESS "3.28")
+        if(NOT lmfit_POPULATED)
+            FetchContent_Populate(lmfit)
+            add_subdirectory(${lmfit_SOURCE_DIR} ${lmfit_BINARY_DIR} EXCLUDE_FROM_ALL)
+        endif()
+    else()
+        FetchContent_MakeAvailable(lmfit)
+    endif()
 
     set_property(TARGET lmfit PROPERTY POSITION_INDEPENDENT_CODE ON)
-    target_include_directories (lmfit PUBLIC "${libzmq_SOURCE_DIR}/lib")
-    message(STATUS "lmfit include dir: ${lmfit_SOURCE_DIR}/lib")
 else()
     find_package(lmfit REQUIRED)
 endif()

@@ -111,10 +132,13 @@ if(AARE_FETCH_ZMQ)
     if (${CMAKE_VERSION} VERSION_GREATER_EQUAL "3.30")
         cmake_policy(SET CMP0169 OLD)
     endif()
+    set(ZMQ_PATCH_COMMAND git apply ${CMAKE_CURRENT_SOURCE_DIR}/patches/libzmq_cmake_version.patch)
     FetchContent_Declare(
         libzmq
         GIT_REPOSITORY https://github.com/zeromq/libzmq.git
         GIT_TAG v4.3.4
+        PATCH_COMMAND ${ZMQ_PATCH_COMMAND}
+        UPDATE_DISCONNECTED 1
     )
     # Disable unwanted options from libzmq
     set(BUILD_TESTS OFF CACHE BOOL "Switch off libzmq test build")

@@ -307,6 +331,8 @@ endif()
 
 set(PUBLICHEADERS
     include/aare/ArrayExpr.hpp
+    include/aare/CalculateEta.hpp
+    include/aare/Cluster.hpp
     include/aare/ClusterFinder.hpp
     include/aare/ClusterFile.hpp
     include/aare/CtbRawFile.hpp

@@ -318,6 +344,7 @@ set(PUBLICHEADERS
     include/aare/Fit.hpp
     include/aare/FileInterface.hpp
     include/aare/Frame.hpp
+    include/aare/GainMap.hpp
     include/aare/geo_helpers.hpp
     include/aare/NDArray.hpp
     include/aare/NDView.hpp

@@ -330,13 +357,11 @@ set(PUBLICHEADERS
     include/aare/RawSubFile.hpp
     include/aare/VarClusterFinder.hpp
     include/aare/utils/task.hpp
 )
 
 set(SourceFiles
     ${CMAKE_CURRENT_SOURCE_DIR}/src/CtbRawFile.cpp
-    ${CMAKE_CURRENT_SOURCE_DIR}/src/ClusterFile.cpp
     ${CMAKE_CURRENT_SOURCE_DIR}/src/defs.cpp
     ${CMAKE_CURRENT_SOURCE_DIR}/src/Dtype.cpp
     ${CMAKE_CURRENT_SOURCE_DIR}/src/decode.cpp

@@ -346,6 +371,7 @@ set(SourceFiles
     ${CMAKE_CURRENT_SOURCE_DIR}/src/geo_helpers.cpp
     ${CMAKE_CURRENT_SOURCE_DIR}/src/NumpyFile.cpp
     ${CMAKE_CURRENT_SOURCE_DIR}/src/NumpyHelpers.cpp
+    ${CMAKE_CURRENT_SOURCE_DIR}/src/Interpolator.cpp
     ${CMAKE_CURRENT_SOURCE_DIR}/src/PixelMap.cpp
     ${CMAKE_CURRENT_SOURCE_DIR}/src/RawFile.cpp
     ${CMAKE_CURRENT_SOURCE_DIR}/src/RawSubFile.cpp

@@ -360,8 +386,6 @@ target_include_directories(aare_core PUBLIC
     "$<INSTALL_INTERFACE:${CMAKE_INSTALL_INCLUDEDIR}>"
 )
 
 target_link_libraries(
     aare_core
     PUBLIC

@@ -370,7 +394,8 @@ target_link_libraries(
     ${STD_FS_LIB} # from helpers.cmake
     PRIVATE
     aare_compiler_flags
-    lmfit
+    $<BUILD_INTERFACE:lmfit>
 )
 
 set_target_properties(aare_core PROPERTIES

@@ -384,6 +409,7 @@ endif()
 
 if(AARE_TESTS)
     set(TestSources
+        ${CMAKE_CURRENT_SOURCE_DIR}/src/algorithm.test.cpp
         ${CMAKE_CURRENT_SOURCE_DIR}/src/defs.test.cpp
         ${CMAKE_CURRENT_SOURCE_DIR}/src/Dtype.test.cpp
         ${CMAKE_CURRENT_SOURCE_DIR}/src/Frame.test.cpp

@@ -393,6 +419,9 @@ if(AARE_TESTS)
         ${CMAKE_CURRENT_SOURCE_DIR}/src/NDView.test.cpp
         ${CMAKE_CURRENT_SOURCE_DIR}/src/ClusterFinder.test.cpp
         ${CMAKE_CURRENT_SOURCE_DIR}/src/ClusterVector.test.cpp
+        ${CMAKE_CURRENT_SOURCE_DIR}/src/Cluster.test.cpp
+        ${CMAKE_CURRENT_SOURCE_DIR}/src/CalculateEta.test.cpp
+        ${CMAKE_CURRENT_SOURCE_DIR}/src/ClusterFile.test.cpp
         ${CMAKE_CURRENT_SOURCE_DIR}/src/Pedestal.test.cpp
         ${CMAKE_CURRENT_SOURCE_DIR}/src/NumpyFile.test.cpp
         ${CMAKE_CURRENT_SOURCE_DIR}/src/NumpyHelpers.test.cpp


@@ -1,11 +1,27 @@
-find_package(benchmark REQUIRED)
-add_executable(ndarray_benchmark ndarray_benchmark.cpp)
-target_link_libraries(ndarray_benchmark benchmark::benchmark aare_core aare_compiler_flags)
-# target_link_libraries(tests PRIVATE aare_core aare_compiler_flags)
-set_target_properties(ndarray_benchmark PROPERTIES
-    RUNTIME_OUTPUT_DIRECTORY ${CMAKE_BINARY_DIR}
-    # OUTPUT_NAME run_tests
+include(FetchContent)
+
+FetchContent_Declare(
+    benchmark
+    GIT_REPOSITORY https://github.com/google/benchmark.git
+    GIT_TAG v1.8.3 # Change to the latest version if needed
+)
+
+# Ensure Google Benchmark is built correctly
+set(BENCHMARK_ENABLE_TESTING OFF CACHE BOOL "" FORCE)
+
+FetchContent_MakeAvailable(benchmark)
+
+add_executable(benchmarks)
+target_sources(benchmarks PRIVATE ndarray_benchmark.cpp calculateeta_benchmark.cpp)
+
+# Link Google Benchmark and other necessary libraries
+target_link_libraries(benchmarks PRIVATE benchmark::benchmark aare_core aare_compiler_flags)
+
+# Set output properties
+set_target_properties(benchmarks PROPERTIES
+    RUNTIME_OUTPUT_DIRECTORY ${CMAKE_BINARY_DIR}
+    OUTPUT_NAME run_benchmarks
 )


@ -0,0 +1,70 @@
#include "aare/CalculateEta.hpp"
#include "aare/ClusterFile.hpp"
#include <benchmark/benchmark.h>
using namespace aare;
class ClusterFixture : public benchmark::Fixture {
public:
Cluster<int, 2, 2> cluster_2x2{};
Cluster<int, 3, 3> cluster_3x3{};
private:
using benchmark::Fixture::SetUp;
void SetUp([[maybe_unused]] const benchmark::State &state) override {
int temp_data[4] = {1, 2, 3, 1};
std::copy(std::begin(temp_data), std::end(temp_data),
std::begin(cluster_2x2.data));
cluster_2x2.x = 0;
cluster_2x2.y = 0;
int temp_data2[9] = {1, 2, 3, 1, 3, 4, 5, 1, 20};
std::copy(std::begin(temp_data2), std::end(temp_data2),
std::begin(cluster_3x3.data));
cluster_3x3.x = 0;
cluster_3x3.y = 0;
}
// void TearDown(::benchmark::State& state) {
// }
};
BENCHMARK_F(ClusterFixture, Calculate2x2Eta)(benchmark::State &st) {
for (auto _ : st) {
// This code gets timed
Eta2 eta = calculate_eta2(cluster_2x2);
benchmark::DoNotOptimize(eta);
}
}
// almost takes double the time
BENCHMARK_F(ClusterFixture,
CalculateGeneralEtaFor2x2Cluster)(benchmark::State &st) {
for (auto _ : st) {
// This code gets timed
Eta2 eta = calculate_eta2<int, 2, 2>(cluster_2x2);
benchmark::DoNotOptimize(eta);
}
}
BENCHMARK_F(ClusterFixture, Calculate3x3Eta)(benchmark::State &st) {
for (auto _ : st) {
// This code gets timed
Eta2 eta = calculate_eta2(cluster_3x3);
benchmark::DoNotOptimize(eta);
}
}
// almost takes double the time
BENCHMARK_F(ClusterFixture,
CalculateGeneralEtaFor3x3Cluster)(benchmark::State &st) {
for (auto _ : st) {
// This code gets timed
Eta2 eta = calculate_eta2<int, 3, 3>(cluster_3x3);
benchmark::DoNotOptimize(eta);
}
}
// BENCHMARK_MAIN();


@@ -1,6 +1,8 @@
 package:
   name: aare
-  version: 2025.2.12 #TODO! how to not duplicate this?
+  version: 2025.4.1 #TODO! how to not duplicate this?
 
 source:


@ -0,0 +1,122 @@
#pragma once
#include "aare/Cluster.hpp"
#include "aare/ClusterVector.hpp"
#include "aare/NDArray.hpp"
namespace aare {
typedef enum {
cBottomLeft = 0,
cBottomRight = 1,
cTopLeft = 2,
cTopRight = 3
} corner;
typedef enum {
pBottomLeft = 0,
pBottom = 1,
pBottomRight = 2,
pLeft = 3,
pCenter = 4,
pRight = 5,
pTopLeft = 6,
pTop = 7,
pTopRight = 8
} pixel;
template <typename T> struct Eta2 {
double x;
double y;
int c;
T sum;
};
/**
* @brief Calculate the eta2 values for all clusters in a Clsutervector
*/
template <typename ClusterType,
typename = std::enable_if_t<is_cluster_v<ClusterType>>>
NDArray<double, 2> calculate_eta2(const ClusterVector<ClusterType> &clusters) {
NDArray<double, 2> eta2({static_cast<int64_t>(clusters.size()), 2});
for (size_t i = 0; i < clusters.size(); i++) {
auto e = calculate_eta2(clusters.at(i));
eta2(i, 0) = e.x;
eta2(i, 1) = e.y;
}
return eta2;
}
/**
* @brief Calculate the eta2 values for a generic sized cluster and return them
* in a Eta2 struct containing etay, etax and the index of the respective 2x2
* subcluster.
*/
template <typename T, uint8_t ClusterSizeX, uint8_t ClusterSizeY,
typename CoordType>
Eta2<T>
calculate_eta2(const Cluster<T, ClusterSizeX, ClusterSizeY, CoordType> &cl) {
Eta2<T> eta{};
auto max_sum = cl.max_sum_2x2();
eta.sum = max_sum.first;
auto c = max_sum.second;
size_t index_bottom_left_max_2x2_subcluster =
(int(c / (ClusterSizeX - 1))) * ClusterSizeX + c % (ClusterSizeX - 1);
if ((cl.data[index_bottom_left_max_2x2_subcluster] +
cl.data[index_bottom_left_max_2x2_subcluster + 1]) != 0)
eta.x = static_cast<double>(
cl.data[index_bottom_left_max_2x2_subcluster + 1]) /
static_cast<double>(
(cl.data[index_bottom_left_max_2x2_subcluster] +
cl.data[index_bottom_left_max_2x2_subcluster + 1]));
if ((cl.data[index_bottom_left_max_2x2_subcluster] +
cl.data[index_bottom_left_max_2x2_subcluster + ClusterSizeX]) != 0)
eta.y =
static_cast<double>(
cl.data[index_bottom_left_max_2x2_subcluster + ClusterSizeX]) /
static_cast<double>(
(cl.data[index_bottom_left_max_2x2_subcluster] +
cl.data[index_bottom_left_max_2x2_subcluster + ClusterSizeX]));
eta.c = c; // TODO only supported for 2x2 and 3x3 clusters -> at least no
// underyling enum class
return eta;
}
// calculates Eta3 for 3x3 cluster based on code from analyze_cluster
// TODO only supported for 3x3 Clusters
template <typename T> Eta2<T> calculate_eta3(const Cluster<T, 3, 3> &cl) {
Eta2<T> eta{};
T sum = 0;
std::for_each(std::begin(cl.data), std::end(cl.data),
[&sum](T x) { sum += x; });
eta.sum = sum;
eta.c = corner::cBottomLeft;
if ((cl.data[3] + cl.data[4] + cl.data[5]) != 0)
eta.x = static_cast<double>(-cl.data[3] + cl.data[3 + 2]) /
(cl.data[3] + cl.data[4] + cl.data[5]);
if ((cl.data[1] + cl.data[4] + cl.data[7]) != 0)
eta.y = static_cast<double>(-cl.data[1] + cl.data[2 * 3 + 1]) /
(cl.data[1] + cl.data[4] + cl.data[7]);
return eta;
}
} // namespace aare
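
Not part of the diff above: a minimal usage sketch of calculate_eta2, assuming the headers shown here and C++17.

```cpp
#include "aare/CalculateEta.hpp"
#include "aare/Cluster.hpp"

#include <algorithm>
#include <iostream>
#include <iterator>

int main() {
    // Fill a 3x3 cluster with test data (same values as the benchmark fixture above).
    aare::Cluster<int, 3, 3> cl{};
    cl.x = 9;
    cl.y = 23;
    int data[9] = {1, 2, 3, 1, 3, 4, 5, 1, 20};
    std::copy(std::begin(data), std::end(data), std::begin(cl.data));

    // Uses the calculate_eta2 overload that matches a 3x3 cluster; eta.sum
    // carries the highest 2x2 subcluster sum, eta.c its index.
    auto eta = aare::calculate_eta2(cl);
    std::cout << "eta.x = " << eta.x << ", eta.y = " << eta.y
              << ", sum of best 2x2 = " << eta.sum << '\n';
}
```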

include/aare/Cluster.hpp (new file)

@ -0,0 +1,121 @@
/************************************************
* @file Cluster.hpp
* @short definition of cluster, where CoordType (x,y) give
* the cluster center coordinates and data the actual cluster data
* cluster size is given as template parameters
***********************************************/
#pragma once
#include <algorithm>
#include <array>
#include <cstdint>
#include <numeric>
#include <type_traits>
namespace aare {
template <typename T, uint8_t ClusterSizeX, uint8_t ClusterSizeY,
typename CoordType = int16_t>
constexpr bool is_valid_cluster =
std::is_arithmetic_v<T> && std::is_integral_v<CoordType> &&
(ClusterSizeX > 0) && (ClusterSizeY > 0);
// requires clause c++20 maybe update
template <typename T, uint8_t ClusterSizeX, uint8_t ClusterSizeY,
typename CoordType = int16_t,
typename Enable = std::enable_if_t<
is_valid_cluster<T, ClusterSizeX, ClusterSizeY, CoordType>>>
struct Cluster {
CoordType x;
CoordType y;
T data[ClusterSizeX * ClusterSizeY];
T sum() const {
return std::accumulate(data, data + ClusterSizeX * ClusterSizeY, 0);
}
std::pair<T, int> max_sum_2x2() const {
constexpr size_t num_2x2_subclusters =
(ClusterSizeX - 1) * (ClusterSizeY - 1);
std::array<T, num_2x2_subclusters> sum_2x2_subcluster;
for (size_t i = 0; i < ClusterSizeY - 1; ++i) {
for (size_t j = 0; j < ClusterSizeX - 1; ++j)
sum_2x2_subcluster[i * (ClusterSizeX - 1) + j] =
data[i * ClusterSizeX + j] +
data[i * ClusterSizeX + j + 1] +
data[(i + 1) * ClusterSizeX + j] +
data[(i + 1) * ClusterSizeX + j + 1];
}
int index = std::max_element(sum_2x2_subcluster.begin(),
sum_2x2_subcluster.end()) -
sum_2x2_subcluster.begin();
return std::make_pair(sum_2x2_subcluster[index], index);
}
};
// Specialization for 2x2 clusters (only one sum exists)
template <typename T> struct Cluster<T, 2, 2, int16_t> {
int16_t x;
int16_t y;
T data[4];
T sum() const { return std::accumulate(data, data + 4, 0); }
std::pair<T, int> max_sum_2x2() const {
return std::make_pair(data[0] + data[1] + data[2] + data[3],
0); // Only one possible 2x2 sum
}
};
// Specialization for 3x3 clusters
template <typename T> struct Cluster<T, 3, 3, int16_t> {
int16_t x;
int16_t y;
T data[9];
T sum() const { return std::accumulate(data, data + 9, 0); }
std::pair<T, int> max_sum_2x2() const {
std::array<T, 4> sum_2x2_subclusters;
sum_2x2_subclusters[0] = data[0] + data[1] + data[3] + data[4];
sum_2x2_subclusters[1] = data[1] + data[2] + data[4] + data[5];
sum_2x2_subclusters[2] = data[3] + data[4] + data[6] + data[7];
sum_2x2_subclusters[3] = data[4] + data[5] + data[7] + data[8];
int index = std::max_element(sum_2x2_subclusters.begin(),
sum_2x2_subclusters.end()) -
sum_2x2_subclusters.begin();
return std::make_pair(sum_2x2_subclusters[index], index);
}
};
// Type Traits for is_cluster_type
template <typename T>
struct is_cluster : std::false_type {}; // Default case: Not a Cluster
template <typename T, uint8_t X, uint8_t Y, typename CoordType>
struct is_cluster<Cluster<T, X, Y, CoordType>> : std::true_type {}; // Cluster
template <typename T> constexpr bool is_cluster_v = is_cluster<T>::value;
template <typename ClusterType,
typename = std::enable_if_t<is_cluster_v<ClusterType>>>
struct extract_template_arguments; // Forward declaration
// helper struct to extract template argument
template <typename T, uint8_t ClusterSizeX, uint8_t ClusterSizeY,
typename CoordType>
struct extract_template_arguments<
Cluster<T, ClusterSizeX, ClusterSizeY, CoordType>> {
using value_type = T;
static constexpr int cluster_size_x = ClusterSizeX;
static constexpr int cluster_size_y = ClusterSizeY;
using coordtype = CoordType;
};
} // namespace aare
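
Again not part of the diff: a short sketch of filling a Cluster and querying its sums, assuming the header above and C++17.

```cpp
#include "aare/Cluster.hpp"

#include <algorithm>
#include <iostream>
#include <iterator>

int main() {
    static_assert(aare::is_cluster_v<aare::Cluster<int, 3, 3>>,
                  "the type trait recognises Cluster instantiations");

    aare::Cluster<int, 3, 3> cl{};
    cl.x = 5;
    cl.y = 7;
    int vals[9] = {1, 2, 3, 1, 3, 4, 5, 1, 20};
    std::copy(std::begin(vals), std::end(vals), std::begin(cl.data));

    // sum() adds all pixels; max_sum_2x2() returns the highest 2x2 sum and the
    // index of that subcluster (always 0 for the 2x2 specialization).
    auto [best, index] = cl.max_sum_2x2();
    std::cout << "total = " << cl.sum() << ", best 2x2 = " << best
              << " (subcluster " << index << ")\n";
}
```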


@@ -2,29 +2,31 @@
 #include <atomic>
 #include <thread>
 
-#include "aare/ProducerConsumerQueue.hpp"
-#include "aare/ClusterVector.hpp"
 #include "aare/ClusterFinderMT.hpp"
+#include "aare/ClusterVector.hpp"
+#include "aare/ProducerConsumerQueue.hpp"
 
 namespace aare {
 
-class ClusterCollector{
-    ProducerConsumerQueue<ClusterVector<int>>* m_source;
-    std::atomic<bool> m_stop_requested{false};
-    std::atomic<bool> m_stopped{true};
-    std::chrono::milliseconds m_default_wait{1};
-    std::thread m_thread;
-    std::vector<ClusterVector<int>> m_clusters;
+template <typename ClusterType,
+          typename = std::enable_if_t<is_cluster_v<ClusterType>>>
+class ClusterCollector {
+    ProducerConsumerQueue<ClusterVector<ClusterType>> *m_source;
+    std::atomic<bool> m_stop_requested{false};
+    std::atomic<bool> m_stopped{true};
+    std::chrono::milliseconds m_default_wait{1};
+    std::thread m_thread;
+    std::vector<ClusterVector<ClusterType>> m_clusters;
 
-    void process(){
+    void process() {
         m_stopped = false;
         fmt::print("ClusterCollector started\n");
         while (!m_stop_requested || !m_source->isEmpty()) {
-            if (ClusterVector<int> *clusters = m_source->frontPtr();
+            if (ClusterVector<ClusterType> *clusters = m_source->frontPtr();
                 clusters != nullptr) {
                 m_clusters.push_back(std::move(*clusters));
                 m_source->popFront();
-            }else{
+            } else {
                 std::this_thread::sleep_for(m_default_wait);
             }
         }

@@ -32,21 +34,21 @@ class ClusterCollector{
         m_stopped = true;
     }
 
   public:
-    ClusterCollector(ClusterFinderMT<uint16_t, double, int32_t>* source){
+    ClusterCollector(ClusterFinderMT<ClusterType, uint16_t, double> *source) {
         m_source = source->sink();
         m_thread = std::thread(&ClusterCollector::process, this);
     }
-    void stop(){
+    void stop() {
         m_stop_requested = true;
         m_thread.join();
     }
-    std::vector<ClusterVector<int>> steal_clusters(){
-        if(!m_stopped){
+    std::vector<ClusterVector<ClusterType>> steal_clusters() {
+        if (!m_stopped) {
             throw std::runtime_error("ClusterCollector is still running");
         }
         return std::move(m_clusters);
     }
 };
 } // namespace aare
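
A rough wiring sketch for the collector. Only sink(), stop() and steal_clusters() are taken from the code above; the ClusterFinderMT constructor and the frame-pushing/stop calls used here are assumptions for illustration.

```cpp
#include "aare/ClusterCollector.hpp"
#include "aare/ClusterFinderMT.hpp"

using namespace aare;

void collect(NDView<uint16_t, 2> frame) {
    using ClusterType = Cluster<int32_t, 3, 3>;

    // Hypothetical construction: image size is a placeholder and the exact
    // ClusterFinderMT constructor signature is assumed here.
    ClusterFinderMT<ClusterType, uint16_t, double> finder({512, 1024});

    // The collector drains the finder's sink queue on its own thread.
    ClusterCollector<ClusterType> collector(&finder);

    finder.find_clusters(frame); // assumed frame-pushing entry point
    finder.stop();               // assumed: flush and join the finder threads
    collector.stop();            // from the code above: joins the collector thread

    std::vector<ClusterVector<ClusterType>> results = collector.steal_clusters();
}
```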


@ -1,51 +1,16 @@
#pragma once #pragma once
#include "aare/Cluster.hpp"
#include "aare/ClusterVector.hpp" #include "aare/ClusterVector.hpp"
#include "aare/GainMap.hpp"
#include "aare/NDArray.hpp" #include "aare/NDArray.hpp"
#include "aare/defs.hpp" #include "aare/defs.hpp"
#include <filesystem> #include <filesystem>
#include <fstream> #include <fstream>
#include <optional>
namespace aare { namespace aare {
struct Cluster3x3 {
int16_t x;
int16_t y;
int32_t data[9];
};
typedef enum {
cBottomLeft = 0,
cBottomRight = 1,
cTopLeft = 2,
cTopRight = 3
} corner;
typedef enum {
pBottomLeft = 0,
pBottom = 1,
pBottomRight = 2,
pLeft = 3,
pCenter = 4,
pRight = 5,
pTopLeft = 6,
pTop = 7,
pTopRight = 8
} pixel;
struct Eta2 {
double x;
double y;
corner c;
};
struct ClusterAnalysis {
uint32_t c;
int32_t tot;
double etax;
double etay;
};
/* /*
Binary cluster file. Expects data to be layed out as: Binary cluster file. Expects data to be layed out as:
int32_t frame_number int32_t frame_number
@ -56,6 +21,8 @@ uint32_t number_of_clusters
.... ....
*/ */
// TODO: change to support any type of clusters, e.g. header line with
// clsuter_size_x, cluster_size_y,
/** /**
* @brief Class to read and write cluster files * @brief Class to read and write cluster files
* Expects data to be laid out as: * Expects data to be laid out as:
@ -63,16 +30,23 @@ uint32_t number_of_clusters
* *
* int32_t frame_number * int32_t frame_number
* uint32_t number_of_clusters * uint32_t number_of_clusters
* int16_t x, int16_t y, int32_t data[9] x number_of_clusters * int16_t x, int16_t y, int32_t data[9] * number_of_clusters
* int32_t frame_number * int32_t frame_number
* uint32_t number_of_clusters * uint32_t number_of_clusters
* etc. * etc.
*/ */
template <typename ClusterType,
typename Enable = std::enable_if_t<is_cluster_v<ClusterType>>>
class ClusterFile { class ClusterFile {
FILE *fp{}; FILE *fp{};
uint32_t m_num_left{}; uint32_t m_num_left{}; /*Number of photons left in frame*/
size_t m_chunk_size{}; size_t m_chunk_size{}; /*Number of clusters to read at a time*/
const std::string m_mode; const std::string m_mode; /*Mode to open the file in*/
std::optional<ROI> m_roi; /*Region of interest, will be applied if set*/
std::optional<NDArray<int32_t, 2>>
m_noise_map; /*Noise map to cut photons, will be applied if set*/
std::optional<GainMap> m_gain_map; /*Gain map to apply to the clusters, will
be applied if set*/
public: public:
/** /**
@ -87,7 +61,6 @@ class ClusterFile {
ClusterFile(const std::filesystem::path &fname, size_t chunk_size = 1000, ClusterFile(const std::filesystem::path &fname, size_t chunk_size = 1000,
const std::string &mode = "r"); const std::string &mode = "r");
~ClusterFile(); ~ClusterFile();
/** /**
@ -95,41 +68,388 @@ class ClusterFile {
* If EOF is reached the returned vector will have less than n_clusters * If EOF is reached the returned vector will have less than n_clusters
* clusters * clusters
*/ */
ClusterVector<int32_t> read_clusters(size_t n_clusters); ClusterVector<ClusterType> read_clusters(size_t n_clusters);
/** /**
* @brief Read a single frame from the file and return the clusters. The * @brief Read a single frame from the file and return the clusters. The
* cluster vector will have the frame number set. * cluster vector will have the frame number set.
* @throws std::runtime_error if the file is not opened for reading or the file pointer not * @throws std::runtime_error if the file is not opened for reading or the
* at the beginning of a frame * file pointer not at the beginning of a frame
*/ */
ClusterVector<int32_t> read_frame(); ClusterVector<ClusterType> read_frame();
void write_frame(const ClusterVector<ClusterType> &clusters);
void write_frame(const ClusterVector<int32_t> &clusters);
// Need to be migrated to support NDArray and return a ClusterVector
// std::vector<Cluster3x3>
// read_cluster_with_cut(size_t n_clusters, double *noise_map, int nx, int ny);
/** /**
* @brief Return the chunk size * @brief Return the chunk size
*/ */
size_t chunk_size() const { return m_chunk_size; } size_t chunk_size() const { return m_chunk_size; }
/**
* @brief Set the region of interest to use when reading clusters. If set
* only clusters within the ROI will be read.
*/
void set_roi(ROI roi);
/** /**
* @brief Close the file. If not closed the file will be closed in the destructor * @brief Set the noise map to use when reading clusters. If set clusters
* below the noise level will be discarded. Selection criteria one of:
* Central pixel above noise, highest 2x2 sum above 2 * noise, total sum
* above 3 * noise.
*/
void set_noise_map(const NDView<int32_t, 2> noise_map);
/**
* @brief Set the gain map to use when reading clusters. If set the gain map
* will be applied to the clusters that pass ROI and noise_map selection.
*/
void set_gain_map(const NDView<double, 2> gain_map);
void set_gain_map(const GainMap &gain_map);
void set_gain_map(const GainMap &&gain_map);
/**
* @brief Close the file. If not closed the file will be closed in the
* destructor
*/ */
void close(); void close();
private:
ClusterVector<ClusterType> read_clusters_with_cut(size_t n_clusters);
ClusterVector<ClusterType> read_clusters_without_cut(size_t n_clusters);
ClusterVector<ClusterType> read_frame_with_cut();
ClusterVector<ClusterType> read_frame_without_cut();
bool is_selected(ClusterType &cl);
ClusterType read_one_cluster();
}; };
int analyze_data(int32_t *data, int32_t *t2, int32_t *t3, char *quad, template <typename ClusterType, typename Enable>
double *eta2x, double *eta2y, double *eta3x, double *eta3y); ClusterFile<ClusterType, Enable>::ClusterFile(
int analyze_cluster(Cluster3x3 &cl, int32_t *t2, int32_t *t3, char *quad, const std::filesystem::path &fname, size_t chunk_size,
double *eta2x, double *eta2y, double *eta3x, double *eta3y); const std::string &mode)
: m_chunk_size(chunk_size), m_mode(mode) {
NDArray<double, 2> calculate_eta2(ClusterVector<int> &clusters); if (mode == "r") {
Eta2 calculate_eta2(Cluster3x3 &cl); fp = fopen(fname.c_str(), "rb");
if (!fp) {
throw std::runtime_error("Could not open file for reading: " +
fname.string());
}
} else if (mode == "w") {
fp = fopen(fname.c_str(), "wb");
if (!fp) {
throw std::runtime_error("Could not open file for writing: " +
fname.string());
}
} else if (mode == "a") {
fp = fopen(fname.c_str(), "ab");
if (!fp) {
throw std::runtime_error("Could not open file for appending: " +
fname.string());
}
} else {
throw std::runtime_error("Unsupported mode: " + mode);
}
}
template <typename ClusterType, typename Enable>
ClusterFile<ClusterType, Enable>::~ClusterFile() {
close();
}
template <typename ClusterType, typename Enable>
void ClusterFile<ClusterType, Enable>::close() {
if (fp) {
fclose(fp);
fp = nullptr;
}
}
template <typename ClusterType, typename Enable>
void ClusterFile<ClusterType, Enable>::set_roi(ROI roi) {
m_roi = roi;
}
template <typename ClusterType, typename Enable>
void ClusterFile<ClusterType, Enable>::set_noise_map(
const NDView<int32_t, 2> noise_map) {
m_noise_map = NDArray<int32_t, 2>(noise_map);
}
template <typename ClusterType, typename Enable>
void ClusterFile<ClusterType, Enable>::set_gain_map(
const NDView<double, 2> gain_map) {
m_gain_map = GainMap(gain_map);
}
template <typename ClusterType, typename Enable>
void ClusterFile<ClusterType, Enable>::set_gain_map(const GainMap &gain_map) {
m_gain_map = gain_map;
}
template <typename ClusterType, typename Enable>
void ClusterFile<ClusterType, Enable>::set_gain_map(const GainMap &&gain_map) {
m_gain_map = gain_map;
}
// TODO generally supported for all clsuter types
template <typename ClusterType, typename Enable>
void ClusterFile<ClusterType, Enable>::write_frame(
const ClusterVector<ClusterType> &clusters) {
if (m_mode != "w" && m_mode != "a") {
throw std::runtime_error("File not opened for writing");
}
if (!(clusters.cluster_size_x() == 3) &&
!(clusters.cluster_size_y() == 3)) {
throw std::runtime_error("Only 3x3 clusters are supported");
}
int32_t frame_number = clusters.frame_number();
fwrite(&frame_number, sizeof(frame_number), 1, fp);
uint32_t n_clusters = clusters.size();
fwrite(&n_clusters, sizeof(n_clusters), 1, fp);
fwrite(clusters.data(), clusters.item_size(), clusters.size(), fp);
}
template <typename ClusterType, typename Enable>
ClusterVector<ClusterType>
ClusterFile<ClusterType, Enable>::read_clusters(size_t n_clusters) {
if (m_mode != "r") {
throw std::runtime_error("File not opened for reading");
}
if (m_noise_map || m_roi) {
return read_clusters_with_cut(n_clusters);
} else {
return read_clusters_without_cut(n_clusters);
}
}
template <typename ClusterType, typename Enable>
ClusterVector<ClusterType>
ClusterFile<ClusterType, Enable>::read_clusters_without_cut(size_t n_clusters) {
if (m_mode != "r") {
throw std::runtime_error("File not opened for reading");
}
ClusterVector<ClusterType> clusters(n_clusters);
int32_t iframe = 0; // frame number needs to be 4 bytes!
size_t nph_read = 0;
uint32_t nn = m_num_left;
uint32_t nph = m_num_left; // number of clusters in frame needs to be 4
// auto buf = reinterpret_cast<Cluster3x3 *>(clusters.data());
auto buf = clusters.data();
// if there are photons left from previous frame read them first
if (nph) {
if (nph > n_clusters) {
// if we have more photons left in the frame then photons to read we
// read directly the requested number
nn = n_clusters;
} else {
nn = nph;
}
nph_read += fread((buf + nph_read * clusters.item_size()),
clusters.item_size(), nn, fp);
m_num_left = nph - nn; // write back the number of photons left
}
if (nph_read < n_clusters) {
// keep on reading frames and photons until reaching n_clusters
while (fread(&iframe, sizeof(iframe), 1, fp)) {
clusters.set_frame_number(iframe);
// read number of clusters in frame
if (fread(&nph, sizeof(nph), 1, fp)) {
if (nph > (n_clusters - nph_read))
nn = n_clusters - nph_read;
else
nn = nph;
nph_read += fread((buf + nph_read * clusters.item_size()),
clusters.item_size(), nn, fp);
m_num_left = nph - nn;
}
if (nph_read >= n_clusters)
break;
}
}
// Resize the vector to the number of clusters.
// No new allocation, only change bounds.
clusters.resize(nph_read);
if (m_gain_map)
m_gain_map->apply_gain_map(clusters);
return clusters;
}
template <typename ClusterType, typename Enable>
ClusterVector<ClusterType>
ClusterFile<ClusterType, Enable>::read_clusters_with_cut(size_t n_clusters) {
ClusterVector<ClusterType> clusters;
clusters.reserve(n_clusters);
// if there are photons left from previous frame read them first
if (m_num_left) {
while (m_num_left && clusters.size() < n_clusters) {
ClusterType c = read_one_cluster();
if (is_selected(c)) {
clusters.push_back(c);
}
}
}
// we did not have enough clusters left in the previous frame
// keep on reading frames until reaching n_clusters
if (clusters.size() < n_clusters) {
// sanity check
if (m_num_left) {
throw std::runtime_error(
LOCATION + "Entered second loop with clusters left\n");
}
int32_t frame_number = 0; // frame number needs to be 4 bytes!
while (fread(&frame_number, sizeof(frame_number), 1, fp)) {
if (fread(&m_num_left, sizeof(m_num_left), 1, fp)) {
clusters.set_frame_number(
frame_number); // cluster vector will hold the last frame
// number
while (m_num_left && clusters.size() < n_clusters) {
ClusterType c = read_one_cluster();
if (is_selected(c)) {
clusters.push_back(c);
}
}
}
// we have enough clusters, break out of the outer while loop
if (clusters.size() >= n_clusters)
break;
}
}
if (m_gain_map)
m_gain_map->apply_gain_map(clusters);
return clusters;
}
template <typename ClusterType, typename Enable>
ClusterType ClusterFile<ClusterType, Enable>::read_one_cluster() {
ClusterType c;
auto rc = fread(&c, sizeof(c), 1, fp);
if (rc != 1) {
throw std::runtime_error(LOCATION + "Could not read cluster");
}
--m_num_left;
return c;
}
template <typename ClusterType, typename Enable>
ClusterVector<ClusterType> ClusterFile<ClusterType, Enable>::read_frame() {
if (m_mode != "r") {
throw std::runtime_error(LOCATION + "File not opened for reading");
}
if (m_noise_map || m_roi) {
return read_frame_with_cut();
} else {
return read_frame_without_cut();
}
}
template <typename ClusterType, typename Enable>
ClusterVector<ClusterType>
ClusterFile<ClusterType, Enable>::read_frame_without_cut() {
if (m_mode != "r") {
throw std::runtime_error("File not opened for reading");
}
if (m_num_left) {
throw std::runtime_error(
"There are still photons left in the last frame");
}
int32_t frame_number;
if (fread(&frame_number, sizeof(frame_number), 1, fp) != 1) {
throw std::runtime_error(LOCATION + "Could not read frame number");
}
int32_t n_clusters; // Saved as 32bit integer in the cluster file
if (fread(&n_clusters, sizeof(n_clusters), 1, fp) != 1) {
throw std::runtime_error(LOCATION +
"Could not read number of clusters");
}
ClusterVector<ClusterType> clusters(n_clusters);
clusters.set_frame_number(frame_number);
if (fread(clusters.data(), clusters.item_size(), n_clusters, fp) !=
static_cast<size_t>(n_clusters)) {
throw std::runtime_error(LOCATION + "Could not read clusters");
}
clusters.resize(n_clusters);
if (m_gain_map)
m_gain_map->apply_gain_map(clusters);
return clusters;
}
template <typename ClusterType, typename Enable>
ClusterVector<ClusterType>
ClusterFile<ClusterType, Enable>::read_frame_with_cut() {
if (m_mode != "r") {
throw std::runtime_error("File not opened for reading");
}
if (m_num_left) {
throw std::runtime_error(
"There are still photons left in the last frame");
}
int32_t frame_number;
if (fread(&frame_number, sizeof(frame_number), 1, fp) != 1) {
throw std::runtime_error("Could not read frame number");
}
if (fread(&m_num_left, sizeof(m_num_left), 1, fp) != 1) {
throw std::runtime_error("Could not read number of clusters");
}
ClusterVector<ClusterType> clusters;
clusters.reserve(m_num_left);
clusters.set_frame_number(frame_number);
while (m_num_left) {
ClusterType c = read_one_cluster();
if (is_selected(c)) {
clusters.push_back(c);
}
}
if (m_gain_map)
m_gain_map->apply_gain_map(clusters);
return clusters;
}
template <typename ClusterType, typename Enable>
bool ClusterFile<ClusterType, Enable>::is_selected(ClusterType &cl) {
// Should fail fast
if (m_roi) {
if (!(m_roi->contains(cl.x, cl.y))) {
return false;
}
}
auto cluster_size_x = extract_template_arguments<
std::remove_reference_t<decltype(cl)>>::cluster_size_x;
auto cluster_size_y = extract_template_arguments<
std::remove_reference_t<decltype(cl)>>::cluster_size_y;
size_t cluster_center_index =
(cluster_size_x / 2) + (cluster_size_y / 2) * cluster_size_x;
if (m_noise_map) {
auto sum_1x1 = cl.data[cluster_center_index]; // central pixel
auto sum_2x2 = cl.max_sum_2x2().first; // highest sum of 2x2 subclusters
auto total_sum = cl.sum(); // sum of all pixels
auto noise =
(*m_noise_map)(cl.y, cl.x); // TODO! check if this is correct
if (sum_1x1 <= noise || sum_2x2 <= 2 * noise ||
total_sum <= 3 * noise) {
return false;
}
}
// we passed all checks
return true;
}
} // namespace aare } // namespace aare
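
For orientation, a usage sketch of the templated ClusterFile. The member functions are the ones declared above; the file name, ROI field layout and the NDArray constructor are assumptions.

```cpp
#include "aare/ClusterFile.hpp"

using namespace aare;

int main() {
    using ClusterType = Cluster<int32_t, 3, 3>;

    // Placeholder file name; chunk size and mode use the defaults (1000, "r").
    ClusterFile<ClusterType> f("clusters.clust");

    // Optional cuts, applied in the order ROI -> noise -> gain.
    ROI roi{};
    roi.xmin = 0; roi.xmax = 100; roi.ymin = 0; roi.ymax = 100; // assumed field names
    f.set_roi(roi);

    NDArray<int32_t, 2> noise({512, 1024}, 0); // assumed shape-and-value constructor
    f.set_noise_map(noise.view());

    auto clusters = f.read_clusters(1000); // at most 1000 clusters that pass the cuts
    f.close();
}
```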


@ -3,35 +3,41 @@
#include <filesystem> #include <filesystem>
#include <thread> #include <thread>
#include "aare/ProducerConsumerQueue.hpp"
#include "aare/ClusterVector.hpp"
#include "aare/ClusterFinderMT.hpp" #include "aare/ClusterFinderMT.hpp"
#include "aare/ClusterVector.hpp"
#include "aare/ProducerConsumerQueue.hpp"
namespace aare{ namespace aare {
class ClusterFileSink{ template <typename ClusterType,
ProducerConsumerQueue<ClusterVector<int>>* m_source; typename = std::enable_if_t<is_cluster_v<ClusterType>>>
class ClusterFileSink {
ProducerConsumerQueue<ClusterVector<ClusterType>> *m_source;
std::atomic<bool> m_stop_requested{false}; std::atomic<bool> m_stop_requested{false};
std::atomic<bool> m_stopped{true}; std::atomic<bool> m_stopped{true};
std::chrono::milliseconds m_default_wait{1}; std::chrono::milliseconds m_default_wait{1};
std::thread m_thread; std::thread m_thread;
std::ofstream m_file; std::ofstream m_file;
void process() {
void process(){
m_stopped = false; m_stopped = false;
fmt::print("ClusterFileSink started\n"); fmt::print("ClusterFileSink started\n");
while (!m_stop_requested || !m_source->isEmpty()) { while (!m_stop_requested || !m_source->isEmpty()) {
if (ClusterVector<int> *clusters = m_source->frontPtr(); if (ClusterVector<ClusterType> *clusters = m_source->frontPtr();
clusters != nullptr) { clusters != nullptr) {
// Write clusters to file // Write clusters to file
int32_t frame_number = clusters->frame_number(); //TODO! Should we store frame number already as int? int32_t frame_number =
clusters->frame_number(); // TODO! Should we store frame
// number already as int?
uint32_t num_clusters = clusters->size(); uint32_t num_clusters = clusters->size();
m_file.write(reinterpret_cast<const char*>(&frame_number), sizeof(frame_number)); m_file.write(reinterpret_cast<const char *>(&frame_number),
m_file.write(reinterpret_cast<const char*>(&num_clusters), sizeof(num_clusters)); sizeof(frame_number));
m_file.write(reinterpret_cast<const char*>(clusters->data()), clusters->size() * clusters->item_size()); m_file.write(reinterpret_cast<const char *>(&num_clusters),
sizeof(num_clusters));
m_file.write(reinterpret_cast<const char *>(clusters->data()),
clusters->size() * clusters->item_size());
m_source->popFront(); m_source->popFront();
}else{ } else {
std::this_thread::sleep_for(m_default_wait); std::this_thread::sleep_for(m_default_wait);
} }
} }
@ -39,18 +45,18 @@ class ClusterFileSink{
m_stopped = true; m_stopped = true;
} }
public: public:
ClusterFileSink(ClusterFinderMT<uint16_t, double, int32_t>* source, const std::filesystem::path& fname){ ClusterFileSink(ClusterFinderMT<ClusterType, uint16_t, double> *source,
m_source = source->sink(); const std::filesystem::path &fname) {
m_thread = std::thread(&ClusterFileSink::process, this); m_source = source->sink();
m_file.open(fname, std::ios::binary); m_thread = std::thread(&ClusterFileSink::process, this);
} m_file.open(fname, std::ios::binary);
void stop(){ }
m_stop_requested = true; void stop() {
m_thread.join(); m_stop_requested = true;
m_file.close(); m_thread.join();
} m_file.close();
}
}; };
} // namespace aare } // namespace aare


@ -1,15 +1,16 @@
#pragma once #pragma once
#include "aare/core/defs.hpp" #include "aare/core/defs.hpp"
#include <filesystem> #include <filesystem>
#include <string>
#include <fmt/format.h> #include <fmt/format.h>
#include <string>
namespace aare { namespace aare {
struct ClusterHeader { struct ClusterHeader {
int32_t frame_number; int32_t frame_number;
int32_t n_clusters; int32_t n_clusters;
std::string to_string() const { std::string to_string() const {
return "frame_number: " + std::to_string(frame_number) + ", n_clusters: " + std::to_string(n_clusters); return "frame_number: " + std::to_string(frame_number) +
", n_clusters: " + std::to_string(n_clusters);
} }
}; };
@ -24,7 +25,8 @@ struct ClusterV2_ {
data_str += std::to_string(d) + ", "; data_str += std::to_string(d) + ", ";
} }
data_str += "]"; data_str += "]";
return "x: " + std::to_string(x) + ", y: " + std::to_string(y) + ", data: " + data_str; return "x: " + std::to_string(x) + ", y: " + std::to_string(y) +
", data: " + data_str;
} }
return "x: " + std::to_string(x) + ", y: " + std::to_string(y); return "x: " + std::to_string(x) + ", y: " + std::to_string(y);
} }
@ -34,13 +36,15 @@ struct ClusterV2 {
ClusterV2_ cluster; ClusterV2_ cluster;
int32_t frame_number; int32_t frame_number;
std::string to_string() const { std::string to_string() const {
return "frame_number: " + std::to_string(frame_number) + ", " + cluster.to_string(); return "frame_number: " + std::to_string(frame_number) + ", " +
cluster.to_string();
} }
}; };
/** /**
* @brief * @brief
* important not: fp always points to the clusters header and does not point to individual clusters * important not: fp always points to the clusters header and does not point to
* individual clusters
* *
*/ */
class ClusterFileV2 { class ClusterFileV2 {
@ -48,13 +52,15 @@ class ClusterFileV2 {
std::string m_mode; std::string m_mode;
FILE *fp{nullptr}; FILE *fp{nullptr};
void check_open(){ void check_open() {
if (!fp) if (!fp)
throw std::runtime_error(fmt::format("File: {} not open", m_fpath.string())); throw std::runtime_error(
fmt::format("File: {} not open", m_fpath.string()));
} }
public: public:
ClusterFileV2(std::filesystem::path const &fpath, std::string const &mode): m_fpath(fpath), m_mode(mode) { ClusterFileV2(std::filesystem::path const &fpath, std::string const &mode)
: m_fpath(fpath), m_mode(mode) {
if (m_mode != "r" && m_mode != "w") if (m_mode != "r" && m_mode != "w")
throw std::invalid_argument("mode must be 'r' or 'w'"); throw std::invalid_argument("mode must be 'r' or 'w'");
if (m_mode == "r" && !std::filesystem::exists(m_fpath)) if (m_mode == "r" && !std::filesystem::exists(m_fpath))


@ -10,17 +10,21 @@
namespace aare { namespace aare {
template <typename FRAME_TYPE = uint16_t, typename PEDESTAL_TYPE = double, template <typename ClusterType = Cluster<int32_t, 3, 3>,
typename CT = int32_t> typename FRAME_TYPE = uint16_t, typename PEDESTAL_TYPE = double>
class ClusterFinder { class ClusterFinder {
Shape<2> m_image_size; Shape<2> m_image_size;
const int m_cluster_sizeX;
const int m_cluster_sizeY;
const PEDESTAL_TYPE m_nSigma; const PEDESTAL_TYPE m_nSigma;
const PEDESTAL_TYPE c2; const PEDESTAL_TYPE c2;
const PEDESTAL_TYPE c3; const PEDESTAL_TYPE c3;
Pedestal<PEDESTAL_TYPE> m_pedestal; Pedestal<PEDESTAL_TYPE> m_pedestal;
ClusterVector<CT> m_clusters; ClusterVector<ClusterType> m_clusters;
static const uint8_t ClusterSizeX =
extract_template_arguments<ClusterType>::cluster_size_x;
static const uint8_t ClusterSizeY =
extract_template_arguments<ClusterType>::cluster_size_x;
using CT = typename extract_template_arguments<ClusterType>::value_type;
public: public:
/** /**
@ -31,15 +35,12 @@ class ClusterFinder {
* @param capacity initial capacity of the cluster vector * @param capacity initial capacity of the cluster vector
* *
*/ */
ClusterFinder(Shape<2> image_size, Shape<2> cluster_size, ClusterFinder(Shape<2> image_size, PEDESTAL_TYPE nSigma = 5.0,
PEDESTAL_TYPE nSigma = 5.0, size_t capacity = 1000000) size_t capacity = 1000000)
: m_image_size(image_size), m_cluster_sizeX(cluster_size[0]), : m_image_size(image_size), m_nSigma(nSigma),
m_cluster_sizeY(cluster_size[1]), c2(sqrt((ClusterSizeY + 1) / 2 * (ClusterSizeX + 1) / 2)),
m_nSigma(nSigma), c3(sqrt(ClusterSizeX * ClusterSizeY)),
c2(sqrt((m_cluster_sizeY + 1) / 2 * (m_cluster_sizeX + 1) / 2)), m_pedestal(image_size[0], image_size[1]), m_clusters(capacity) {};
c3(sqrt(m_cluster_sizeX * m_cluster_sizeY)),
m_pedestal(image_size[0], image_size[1]),
m_clusters(m_cluster_sizeX, m_cluster_sizeY, capacity) {};
void push_pedestal_frame(NDView<FRAME_TYPE, 2> frame) { void push_pedestal_frame(NDView<FRAME_TYPE, 2> frame) {
m_pedestal.push(frame); m_pedestal.push(frame);
@ -56,23 +57,29 @@ class ClusterFinder {
* same capacity as the old one * same capacity as the old one
* *
*/ */
ClusterVector<CT> steal_clusters(bool realloc_same_capacity = false) { ClusterVector<ClusterType>
ClusterVector<CT> tmp = std::move(m_clusters); steal_clusters(bool realloc_same_capacity = false) {
ClusterVector<ClusterType> tmp = std::move(m_clusters);
if (realloc_same_capacity) if (realloc_same_capacity)
m_clusters = ClusterVector<CT>(m_cluster_sizeX, m_cluster_sizeY, m_clusters = ClusterVector<ClusterType>(tmp.capacity());
tmp.capacity());
else else
m_clusters = ClusterVector<CT>(m_cluster_sizeX, m_cluster_sizeY); m_clusters = ClusterVector<ClusterType>{};
return tmp; return tmp;
} }
void find_clusters(NDView<FRAME_TYPE, 2> frame, uint64_t frame_number = 0) { void find_clusters(NDView<FRAME_TYPE, 2> frame, uint64_t frame_number = 0) {
// // TODO! deal with even size clusters // // TODO! deal with even size clusters
// // currently 3,3 -> +/- 1 // // currently 3,3 -> +/- 1
// // 4,4 -> +/- 2 // // 4,4 -> +/- 2
int dy = m_cluster_sizeY / 2; int dy = ClusterSizeY / 2;
int dx = m_cluster_sizeX / 2; int dx = ClusterSizeX / 2;
int has_center_pixel_x =
ClusterSizeX %
2; // for even sized clusters there is no proper cluster center and
// even amount of pixels around the center
int has_center_pixel_y = ClusterSizeY % 2;
m_clusters.set_frame_number(frame_number); m_clusters.set_frame_number(frame_number);
std::vector<CT> cluster_data(m_cluster_sizeX * m_cluster_sizeY); std::vector<CT> cluster_data(ClusterSizeX * ClusterSizeY);
for (int iy = 0; iy < frame.shape(0); iy++) { for (int iy = 0; iy < frame.shape(0); iy++) {
for (int ix = 0; ix < frame.shape(1); ix++) { for (int ix = 0; ix < frame.shape(1); ix++) {
@ -87,8 +94,8 @@ class ClusterFinder {
continue; // NEGATIVE_PEDESTAL go to next pixel continue; // NEGATIVE_PEDESTAL go to next pixel
// TODO! No pedestal update??? // TODO! No pedestal update???
for (int ir = -dy; ir < dy + 1; ir++) { for (int ir = -dy; ir < dy + has_center_pixel_y; ir++) {
for (int ic = -dx; ic < dx + 1; ic++) { for (int ic = -dx; ic < dx + has_center_pixel_x; ic++) {
if (ix + ic >= 0 && ix + ic < frame.shape(1) && if (ix + ic >= 0 && ix + ic < frame.shape(1) &&
iy + ir >= 0 && iy + ir < frame.shape(0)) { iy + ir >= 0 && iy + ir < frame.shape(0)) {
PEDESTAL_TYPE val = PEDESTAL_TYPE val =
@ -109,8 +116,12 @@ class ClusterFinder {
// pass // pass
} else { } else {
// m_pedestal.push(iy, ix, frame(iy, ix)); // Safe option // m_pedestal.push(iy, ix, frame(iy, ix)); // Safe option
m_pedestal.push_fast(iy, ix, frame(iy, ix)); // Assume we have reached n_samples in the pedestal, slight performance improvement m_pedestal.push_fast(
continue; // It was a pedestal value nothing to store iy, ix,
frame(iy,
ix)); // Assume we have reached n_samples in the
// pedestal, slight performance improvement
continue; // It was a pedestal value nothing to store
} }
// Store cluster // Store cluster
@ -122,13 +133,14 @@ class ClusterFinder {
// It's worth redoing the look since most of the time we // It's worth redoing the look since most of the time we
// don't have a photon // don't have a photon
int i = 0; int i = 0;
for (int ir = -dy; ir < dy + 1; ir++) { for (int ir = -dy; ir < dy + has_center_pixel_y; ir++) {
for (int ic = -dx; ic < dx + 1; ic++) { for (int ic = -dx; ic < dx + has_center_pixel_x; ic++) {
if (ix + ic >= 0 && ix + ic < frame.shape(1) && if (ix + ic >= 0 && ix + ic < frame.shape(1) &&
iy + ir >= 0 && iy + ir < frame.shape(0)) { iy + ir >= 0 && iy + ir < frame.shape(0)) {
CT tmp = CT tmp =
static_cast<CT>(frame(iy + ir, ix + ic)) - static_cast<CT>(frame(iy + ir, ix + ic)) -
m_pedestal.mean(iy + ir, ix + ic); static_cast<CT>(
m_pedestal.mean(iy + ir, ix + ic));
cluster_data[i] = cluster_data[i] =
tmp; // Watch for out of bounds access tmp; // Watch for out of bounds access
i++; i++;
@ -136,10 +148,13 @@ class ClusterFinder {
} }
} }
ClusterType new_cluster{};
new_cluster.x = ix;
new_cluster.y = iy;
std::copy(cluster_data.begin(), cluster_data.end(),
new_cluster.data);
// Add the cluster to the output ClusterVector // Add the cluster to the output ClusterVector
m_clusters.push_back( m_clusters.push_back(new_cluster);
ix, iy,
reinterpret_cast<std::byte *>(cluster_data.data()));
} }
} }
} }
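For orientation, a minimal sketch of how the refactored, cluster-type templated ClusterFinder could be driven; the image size, frame source, and cluster type below are illustrative assumptions, not part of this change.

```cpp
#include "aare/ClusterFinder.hpp"
#include "aare/NDArray.hpp"

using namespace aare;

// Sketch only: feed a pedestal frame, search one data frame, take the clusters.
void example_single_threaded(NDArray<uint16_t, 2> &pedestal_frame,
                             NDArray<uint16_t, 2> &data_frame) {
    using C3x3 = Cluster<int32_t, 3, 3>; // pixel type and cluster extent

    ClusterFinder<C3x3, uint16_t, double> finder(Shape<2>{400, 400},
                                                 /*nSigma=*/5.0);

    finder.push_pedestal_frame(pedestal_frame.view());
    finder.find_clusters(data_frame.view(), /*frame_number=*/1);

    // Take ownership of the found clusters; the finder keeps an empty vector.
    ClusterVector<C3x3> clusters = finder.steal_clusters();
}
```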

View File

@ -30,14 +30,16 @@ struct FrameWrapper {
* @tparam PEDESTAL_TYPE type of the pedestal data * @tparam PEDESTAL_TYPE type of the pedestal data
* @tparam CT type of the cluster data * @tparam CT type of the cluster data
*/ */
template <typename FRAME_TYPE = uint16_t, typename PEDESTAL_TYPE = double, template <typename ClusterType = Cluster<int32_t, 3, 3>,
typename CT = int32_t> typename FRAME_TYPE = uint16_t, typename PEDESTAL_TYPE = double>
class ClusterFinderMT { class ClusterFinderMT {
using CT = typename extract_template_arguments<ClusterType>::value_type;
size_t m_current_thread{0}; size_t m_current_thread{0};
size_t m_n_threads{0}; size_t m_n_threads{0};
using Finder = ClusterFinder<FRAME_TYPE, PEDESTAL_TYPE, CT>; using Finder = ClusterFinder<ClusterType, FRAME_TYPE, PEDESTAL_TYPE>;
using InputQueue = ProducerConsumerQueue<FrameWrapper>; using InputQueue = ProducerConsumerQueue<FrameWrapper>;
using OutputQueue = ProducerConsumerQueue<ClusterVector<int>>; using OutputQueue = ProducerConsumerQueue<ClusterVector<ClusterType>>;
std::vector<std::unique_ptr<InputQueue>> m_input_queues; std::vector<std::unique_ptr<InputQueue>> m_input_queues;
std::vector<std::unique_ptr<OutputQueue>> m_output_queues; std::vector<std::unique_ptr<OutputQueue>> m_output_queues;
@ -66,7 +68,8 @@ class ClusterFinderMT {
switch (frame->type) { switch (frame->type) {
case FrameType::DATA: case FrameType::DATA:
cf->find_clusters(frame->data.view(), frame->frame_number); cf->find_clusters(frame->data.view(), frame->frame_number);
m_output_queues[thread_id]->write(cf->steal_clusters(realloc_same_capacity)); m_output_queues[thread_id]->write(
cf->steal_clusters(realloc_same_capacity));
break; break;
case FrameType::PEDESTAL: case FrameType::PEDESTAL:
@ -114,28 +117,31 @@ class ClusterFinderMT {
* expected number of clusters in a frame per frame. * expected number of clusters in a frame per frame.
* @param n_threads number of threads to use * @param n_threads number of threads to use
*/ */
ClusterFinderMT(Shape<2> image_size, Shape<2> cluster_size, ClusterFinderMT(Shape<2> image_size, PEDESTAL_TYPE nSigma = 5.0,
PEDESTAL_TYPE nSigma = 5.0, size_t capacity = 2000, size_t capacity = 2000, size_t n_threads = 3)
size_t n_threads = 3)
: m_n_threads(n_threads) { : m_n_threads(n_threads) {
for (size_t i = 0; i < n_threads; i++) { for (size_t i = 0; i < n_threads; i++) {
m_cluster_finders.push_back( m_cluster_finders.push_back(
std::make_unique<ClusterFinder<FRAME_TYPE, PEDESTAL_TYPE, CT>>( std::make_unique<
image_size, cluster_size, nSigma, capacity)); ClusterFinder<ClusterType, FRAME_TYPE, PEDESTAL_TYPE>>(
image_size, nSigma, capacity));
} }
for (size_t i = 0; i < n_threads; i++) { for (size_t i = 0; i < n_threads; i++) {
m_input_queues.emplace_back(std::make_unique<InputQueue>(200)); m_input_queues.emplace_back(std::make_unique<InputQueue>(200));
m_output_queues.emplace_back(std::make_unique<OutputQueue>(200)); m_output_queues.emplace_back(std::make_unique<OutputQueue>(200));
} }
//TODO! Should we start automatically? // TODO! Should we start automatically?
start(); start();
} }
/** /**
* @brief Return the sink queue where all the clusters are collected * @brief Return the sink queue where all the clusters are collected
* @warning You need to empty this queue otherwise the cluster finder will wait forever * @warning You need to empty this queue otherwise the cluster finder will
* wait forever
*/ */
ProducerConsumerQueue<ClusterVector<int>> *sink() { return &m_sink; } ProducerConsumerQueue<ClusterVector<ClusterType>> *sink() {
return &m_sink;
}
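A sketch of the corresponding multi-threaded flow; it assumes the ProducerConsumerQueue returned by sink() offers a folly-style bool read(T&), everything else follows the interface shown in this hunk.

```cpp
#include "aare/ClusterFinderMT.hpp"

using namespace aare;

// Sketch only: push frames, then drain the sink queue before stopping.
void example_multi_threaded(NDArray<uint16_t, 2> &frame) {
    using C3x3 = Cluster<int32_t, 3, 3>;

    ClusterFinderMT<C3x3, uint16_t, double> finder(
        Shape<2>{400, 400}, /*nSigma=*/5.0, /*capacity=*/2000, /*n_threads=*/3);

    finder.push_pedestal_frame(frame.view());
    finder.find_clusters(frame.view(), /*frame_number=*/1);
    finder.sync(); // wait until all queued frames have been processed

    // The sink has to be emptied, otherwise the finder blocks (see the warning).
    ClusterVector<C3x3> clusters;
    while (finder.sink()->read(clusters)) { // assumed folly-style read(T&)
        // process clusters ...
    }
    finder.stop();
}
```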
/** /**
* @brief Start all processing threads * @brief Start all processing threads

View File

@ -1,4 +1,5 @@
#pragma once #pragma once
#include "aare/Cluster.hpp" //TODO maybe store in seperate file !!!
#include <algorithm> #include <algorithm>
#include <array> #include <array>
#include <cstddef> #include <cstddef>
@ -8,8 +9,15 @@
#include <fmt/core.h> #include <fmt/core.h>
#include "aare/Cluster.hpp"
#include "aare/NDView.hpp"
namespace aare { namespace aare {
template <typename ClusterType,
typename = std::enable_if_t<is_cluster_v<ClusterType>>>
class ClusterVector; // Forward declaration
/** /**
* @brief ClusterVector is a container for clusters of various sizes. It uses a * @brief ClusterVector is a container for clusters of various sizes. It uses a
* contiguous memory buffer to store the clusters. It is templated on the data * contiguous memory buffer to store the clusters. It is templated on the data
@ -21,15 +29,16 @@ namespace aare {
* @tparam CoordType data type of the x and y coordinates of the cluster * @tparam CoordType data type of the x and y coordinates of the cluster
* (normally int16_t) * (normally int16_t)
*/ */
template <typename T, typename CoordType = int16_t> class ClusterVector { #if 0
using value_type = T; template <typename T, uint8_t ClusterSizeX, uint8_t ClusterSizeY,
size_t m_cluster_size_x; typename CoordType>
size_t m_cluster_size_y; class ClusterVector<Cluster<T, ClusterSizeX, ClusterSizeY, CoordType>> {
std::byte *m_data{}; std::byte *m_data{};
size_t m_size{0}; size_t m_size{0};
size_t m_capacity; size_t m_capacity;
uint64_t m_frame_number{0}; // TODO! Check frame number size and type uint64_t m_frame_number{0}; // TODO! Check frame number size and type
/* /**
Format string used in the python bindings to create a numpy Format string used in the python bindings to create a numpy
array from the buffer array from the buffer
= - native byte order = - native byte order
@ -40,29 +49,26 @@ template <typename T, typename CoordType = int16_t> class ClusterVector {
constexpr static char m_fmt_base[] = "=h:x:\nh:y:\n({},{}){}:data:"; constexpr static char m_fmt_base[] = "=h:x:\nh:y:\n({},{}){}:data:";
public: public:
using value_type = T;
using ClusterType = Cluster<T, ClusterSizeX, ClusterSizeY, CoordType>;
/** /**
* @brief Construct a new ClusterVector object * @brief Construct a new ClusterVector object
* @param cluster_size_x size of the cluster in x direction
* @param cluster_size_y size of the cluster in y direction
* @param capacity initial capacity of the buffer in number of clusters * @param capacity initial capacity of the buffer in number of clusters
* @param frame_number frame number of the clusters. Default is 0, which is * @param frame_number frame number of the clusters. Default is 0, which is
* also used to indicate that the clusters come from many frames * also used to indicate that the clusters come from many frames
*/ */
ClusterVector(size_t cluster_size_x = 3, size_t cluster_size_y = 3, ClusterVector(size_t capacity = 1024, uint64_t frame_number = 0)
size_t capacity = 1024, uint64_t frame_number = 0) : m_capacity(capacity), m_frame_number(frame_number) {
: m_cluster_size_x(cluster_size_x), m_cluster_size_y(cluster_size_y), allocate_buffer(m_capacity);
m_capacity(capacity), m_frame_number(frame_number) {
allocate_buffer(capacity);
} }
~ClusterVector() { delete[] m_data; } ~ClusterVector() { delete[] m_data; }
// Move constructor // Move constructor
ClusterVector(ClusterVector &&other) noexcept ClusterVector(ClusterVector &&other) noexcept
: m_cluster_size_x(other.m_cluster_size_x), : m_data(other.m_data), m_size(other.m_size),
m_cluster_size_y(other.m_cluster_size_y), m_data(other.m_data), m_capacity(other.m_capacity), m_frame_number(other.m_frame_number) {
m_size(other.m_size), m_capacity(other.m_capacity),
m_frame_number(other.m_frame_number) {
other.m_data = nullptr; other.m_data = nullptr;
other.m_size = 0; other.m_size = 0;
other.m_capacity = 0; other.m_capacity = 0;
@ -72,8 +78,6 @@ template <typename T, typename CoordType = int16_t> class ClusterVector {
ClusterVector &operator=(ClusterVector &&other) noexcept { ClusterVector &operator=(ClusterVector &&other) noexcept {
if (this != &other) { if (this != &other) {
delete[] m_data; delete[] m_data;
m_cluster_size_x = other.m_cluster_size_x;
m_cluster_size_y = other.m_cluster_size_y;
m_data = other.m_data; m_data = other.m_data;
m_size = other.m_size; m_size = other.m_size;
m_capacity = other.m_capacity; m_capacity = other.m_capacity;
@ -100,26 +104,22 @@ template <typename T, typename CoordType = int16_t> class ClusterVector {
/** /**
* @brief Add a cluster to the vector * @brief Add a cluster to the vector
* @param x x-coordinate of the cluster
* @param y y-coordinate of the cluster
* @param data pointer to the data of the cluster
* @warning The data pointer must point to a buffer of size cluster_size_x *
* cluster_size_y * sizeof(T)
*/ */
void push_back(CoordType x, CoordType y, const std::byte *data) { void push_back(const ClusterType &cluster) {
if (m_size == m_capacity) { if (m_size == m_capacity) {
allocate_buffer(m_capacity * 2); allocate_buffer(m_capacity * 2);
} }
std::byte *ptr = element_ptr(m_size); std::byte *ptr = element_ptr(m_size);
*reinterpret_cast<CoordType *>(ptr) = x; *reinterpret_cast<CoordType *>(ptr) = cluster.x;
ptr += sizeof(CoordType); ptr += sizeof(CoordType);
*reinterpret_cast<CoordType *>(ptr) = y; *reinterpret_cast<CoordType *>(ptr) = cluster.y;
ptr += sizeof(CoordType); ptr += sizeof(CoordType);
std::copy(data, data + m_cluster_size_x * m_cluster_size_y * sizeof(T), std::memcpy(ptr, cluster.data, ClusterSizeX * ClusterSizeY * sizeof(T));
ptr);
m_size++; m_size++;
} }
ClusterVector &operator+=(const ClusterVector &other) { ClusterVector &operator+=(const ClusterVector &other) {
if (m_size + other.m_size > m_capacity) { if (m_size + other.m_size > m_capacity) {
allocate_buffer(m_capacity + other.m_size); allocate_buffer(m_capacity + other.m_size);
@ -134,10 +134,11 @@ template <typename T, typename CoordType = int16_t> class ClusterVector {
* @brief Sum the pixels in each cluster * @brief Sum the pixels in each cluster
* @return std::vector<T> vector of sums for each cluster * @return std::vector<T> vector of sums for each cluster
*/ */
/*
std::vector<T> sum() { std::vector<T> sum() {
std::vector<T> sums(m_size); std::vector<T> sums(m_size);
const size_t stride = item_size(); const size_t stride = item_size();
const size_t n_pixels = m_cluster_size_x * m_cluster_size_y; const size_t n_pixels = ClusterSizeX * ClusterSizeY;
std::byte *ptr = m_data + 2 * sizeof(CoordType); // skip x and y std::byte *ptr = m_data + 2 * sizeof(CoordType); // skip x and y
for (size_t i = 0; i < m_size; i++) { for (size_t i = 0; i < m_size; i++) {
@ -148,43 +149,33 @@ template <typename T, typename CoordType = int16_t> class ClusterVector {
} }
return sums; return sums;
} }
*/
/** /**
* @brief Return the maximum sum of the 2x2 subclusters in each cluster * @brief Sum the pixels in the 2x2 subcluster with the biggest pixel sum in
* each cluster
* @return std::vector<T> vector of sums for each cluster * @return std::vector<T> vector of sums for each cluster
* @throws std::runtime_error if the cluster size is not 3x3 */ //TODO if underlying container is a vector use std::for_each
* @warning Only 3x3 clusters are supported for the 2x2 sum. /*
*/
std::vector<T> sum_2x2() { std::vector<T> sum_2x2() {
std::vector<T> sums(m_size); std::vector<T> sums_2x2(m_size);
const size_t stride = item_size();
if (m_cluster_size_x != 3 || m_cluster_size_y != 3) {
throw std::runtime_error(
"Only 3x3 clusters are supported for the 2x2 sum.");
}
std::byte *ptr = m_data + 2 * sizeof(CoordType); // skip x and y
for (size_t i = 0; i < m_size; i++) { for (size_t i = 0; i < m_size; i++) {
std::array<T, 4> total; sums_2x2[i] = at(i).max_sum_2x2;
auto T_ptr = reinterpret_cast<T *>(ptr);
total[0] = T_ptr[0] + T_ptr[1] + T_ptr[3] + T_ptr[4];
total[1] = T_ptr[1] + T_ptr[2] + T_ptr[4] + T_ptr[5];
total[2] = T_ptr[3] + T_ptr[4] + T_ptr[6] + T_ptr[7];
total[3] = T_ptr[4] + T_ptr[5] + T_ptr[7] + T_ptr[8];
sums[i] = *std::max_element(total.begin(), total.end());
ptr += stride;
} }
return sums_2x2;
return sums;
} }
*/
/** /**
* @brief Return the number of clusters in the vector * @brief Return the number of clusters in the vector
*/ */
size_t size() const { return m_size; } size_t size() const { return m_size; }
uint8_t cluster_size_x() const { return ClusterSizeX; }
uint8_t cluster_size_y() const { return ClusterSizeY; }
/** /**
* @brief Return the capacity of the buffer in number of clusters. This is * @brief Return the capacity of the buffer in number of clusters. This is
* the number of clusters that can be stored in the current buffer without * the number of clusters that can be stored in the current buffer without
@ -196,8 +187,7 @@ template <typename T, typename CoordType = int16_t> class ClusterVector {
* @brief Return the size in bytes of a single cluster * @brief Return the size in bytes of a single cluster
*/ */
size_t item_size() const { size_t item_size() const {
return 2 * sizeof(CoordType) + return 2 * sizeof(CoordType) + ClusterSizeX * ClusterSizeY * sizeof(T);
m_cluster_size_x * m_cluster_size_y * sizeof(T);
} }
/** /**
@ -217,9 +207,6 @@ template <typename T, typename CoordType = int16_t> class ClusterVector {
return m_data + element_offset(i); return m_data + element_offset(i);
} }
size_t cluster_size_x() const { return m_cluster_size_x; }
size_t cluster_size_y() const { return m_cluster_size_y; }
std::byte *data() { return m_data; } std::byte *data() { return m_data; }
std::byte const *data() const { return m_data; } std::byte const *data() const { return m_data; }
@ -227,8 +214,16 @@ template <typename T, typename CoordType = int16_t> class ClusterVector {
* @brief Return a reference to the i-th cluster casted to type V * @brief Return a reference to the i-th cluster casted to type V
* @tparam V type of the cluster * @tparam V type of the cluster
*/ */
template <typename V> V &at(size_t i) { ClusterType &at(size_t i) {
return *reinterpret_cast<V *>(element_ptr(i)); return *reinterpret_cast<ClusterType *>(element_ptr(i));
}
const ClusterType &at(size_t i) const {
return *reinterpret_cast<const ClusterType *>(element_ptr(i));
}
template <typename V> const V &at(size_t i) const {
return *reinterpret_cast<const V *>(element_ptr(i));
} }
const std::string_view fmt_base() const { const std::string_view fmt_base() const {
@ -271,5 +266,124 @@ template <typename T, typename CoordType = int16_t> class ClusterVector {
m_capacity = new_capacity; m_capacity = new_capacity;
} }
}; };
#endif
/**
* @brief ClusterVector is a container for clusters of various sizes. It
* uses a contiguous memory buffer to store the clusters. It is templated on
* the data type and the coordinate type of the clusters.
* @note push_back can invalidate pointers to elements in the container
* @warning ClusterVector is currently move only to catch unintended copies,
* but this might change since there are probably use cases where copying is
* needed.
* @tparam T data type of the pixels in the cluster
* @tparam ClusterSizeX cluster size in the x direction
* @tparam ClusterSizeY cluster size in the y direction
* @tparam CoordType data type of the x and y coordinates of the cluster
* (normally int16_t)
*/
template <typename T, uint8_t ClusterSizeX, uint8_t ClusterSizeY,
typename CoordType>
class ClusterVector<Cluster<T, ClusterSizeX, ClusterSizeY, CoordType>> {
std::vector<Cluster<T, ClusterSizeX, ClusterSizeY, CoordType>> m_data{};
uint64_t m_frame_number{0}; // TODO! Check frame number size and type
public:
using value_type = T;
using ClusterType = Cluster<T, ClusterSizeX, ClusterSizeY, CoordType>;
/**
* @brief Construct a new ClusterVector object
* @param capacity initial capacity of the buffer in number of clusters
* @param frame_number frame number of the clusters. Default is 0, which is
* also used to indicate that the clusters come from many frames
*/
ClusterVector(size_t capacity = 1024, uint64_t frame_number = 0)
: m_frame_number(frame_number) {
m_data.reserve(capacity);
}
// Move constructor
ClusterVector(ClusterVector &&other) noexcept
    : m_data(std::move(other.m_data)),
      m_frame_number(other.m_frame_number) {
    other.m_data.clear();
}
// Move assignment operator
ClusterVector &operator=(ClusterVector &&other) noexcept {
    if (this != &other) {
        m_data = std::move(other.m_data);
        m_frame_number = other.m_frame_number;
        other.m_data.clear();
        other.m_frame_number = 0;
    }
    return *this;
}
/**
* @brief Reserve space for at least capacity clusters
* @param capacity number of clusters to reserve space for
* @note If capacity is less than the current capacity, the function does
* nothing.
*/
void reserve(size_t capacity) { m_data.reserve(capacity); }
void resize(size_t size) { m_data.resize(size); }
void push_back(const ClusterType &cluster) { m_data.push_back(cluster); }
ClusterVector &operator+=(const ClusterVector &other) {
m_data.insert(m_data.end(), other.begin(), other.end());
return *this;
}
/**
* @brief Return the number of clusters in the vector
*/
size_t size() const { return m_data.size(); }
uint8_t cluster_size_x() const { return ClusterSizeX; }
uint8_t cluster_size_y() const { return ClusterSizeY; }
/**
* @brief Return the capacity of the buffer in number of clusters. This is
* the number of clusters that can be stored in the current buffer without
* reallocation.
*/
size_t capacity() const { return m_data.capacity(); }
auto begin() const { return m_data.begin(); }
auto end() const { return m_data.end(); }
/**
* @brief Return the size in bytes of a single cluster
*/
size_t item_size() const {
return 2 * sizeof(CoordType) + ClusterSizeX * ClusterSizeY * sizeof(T);
}
ClusterType *data() { return m_data.data(); }
ClusterType const *data() const { return m_data.data(); }
/**
* @brief Return a reference to the i-th cluster
*/
ClusterType &at(size_t i) { return m_data[i]; }
const ClusterType &at(size_t i) const { return m_data[i]; }
/**
* @brief Return the frame number of the clusters. 0 is used to indicate
* that the clusters come from many frames
*/
uint64_t frame_number() const { return m_frame_number; }
void set_frame_number(uint64_t frame_number) {
m_frame_number = frame_number;
}
};
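A short usage sketch of the new std::vector-backed specialization; the coordinates and pixel values are arbitrary.

```cpp
#include "aare/ClusterVector.hpp"

using namespace aare;

void example_cluster_vector() {
    using C3x3 = Cluster<int32_t, 3, 3>;

    ClusterVector<C3x3> cv(/*capacity=*/1024, /*frame_number=*/7);

    C3x3 cl{};
    cl.x = 10;
    cl.y = 20;
    for (int i = 0; i < 9; ++i)
        cl.data[i] = i; // fill the 3x3 payload

    cv.push_back(cl); // whole clusters are now pushed, not raw bytes

    auto &first = cv.at(0); // typed access, no reinterpret_cast needed
    (void)first;
}
```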
} // namespace aare } // namespace aare

View File

@ -17,6 +17,14 @@ NDArray<double, 1> pol1(NDView<double, 1> x, NDView<double, 1> par);
} // namespace func } // namespace func
/**
* @brief Estimate the initial parameters for a Gaussian fit
*/
std::array<double, 3> gaus_init_par(const NDView<double, 1> x, const NDView<double, 1> y);
std::array<double, 2> pol1_init_par(const NDView<double, 1> x, const NDView<double, 1> y);
static constexpr int DEFAULT_NUM_THREADS = 4; static constexpr int DEFAULT_NUM_THREADS = 4;
/** /**
@ -33,7 +41,11 @@ NDArray<double, 1> fit_gaus(NDView<double, 1> x, NDView<double, 1> y);
* @param y y values, layout [row, col, values] * @param y y values, layout [row, col, values]
* @param n_threads number of threads to use * @param n_threads number of threads to use
*/ */
NDArray<double, 3> fit_gaus(NDView<double, 1> x, NDView<double, 3> y, int n_threads = DEFAULT_NUM_THREADS);
NDArray<double, 3> fit_gaus(NDView<double, 1> x, NDView<double, 3> y,
int n_threads = DEFAULT_NUM_THREADS);
/** /**
@ -45,10 +57,12 @@ NDArray<double, 3> fit_gaus(NDView<double, 1> x, NDView<double, 3> y, int n_thre
* @param par_err_out output error parameters * @param par_err_out output error parameters
*/ */
void fit_gaus(NDView<double, 1> x, NDView<double, 1> y, NDView<double, 1> y_err, void fit_gaus(NDView<double, 1> x, NDView<double, 1> y, NDView<double, 1> y_err,
NDView<double, 1> par_out, NDView<double, 1> par_err_out); NDView<double, 1> par_out, NDView<double, 1> par_err_out,
double& chi2);
/** /**
* @brief Fit a 1D Gaussian to each pixel with error estimates. Data layout [row, col, values] * @brief Fit a 1D Gaussian to each pixel with error estimates. Data layout
* [row, col, values]
* @param x x values * @param x x values
* @param y y values, layout [row, col, values] * @param y y values, layout [row, col, values]
* @param y_err error in y, layout [row, col, values] * @param y_err error in y, layout [row, col, values]
@ -57,20 +71,22 @@ void fit_gaus(NDView<double, 1> x, NDView<double, 1> y, NDView<double, 1> y_err,
* @param n_threads number of threads to use * @param n_threads number of threads to use
*/ */
void fit_gaus(NDView<double, 1> x, NDView<double, 3> y, NDView<double, 3> y_err, void fit_gaus(NDView<double, 1> x, NDView<double, 3> y, NDView<double, 3> y_err,
NDView<double, 3> par_out, NDView<double, 3> par_err_out, int n_threads = DEFAULT_NUM_THREADS); NDView<double, 3> par_out, NDView<double, 3> par_err_out, NDView<double, 2> chi2_out,
int n_threads = DEFAULT_NUM_THREADS
);
NDArray<double, 1> fit_pol1(NDView<double, 1> x, NDView<double, 1> y); NDArray<double, 1> fit_pol1(NDView<double, 1> x, NDView<double, 1> y);
NDArray<double, 3> fit_pol1(NDView<double, 1> x, NDView<double, 3> y, int n_threads = DEFAULT_NUM_THREADS); NDArray<double, 3> fit_pol1(NDView<double, 1> x, NDView<double, 3> y,
int n_threads = DEFAULT_NUM_THREADS);
void fit_pol1(NDView<double, 1> x, NDView<double, 1> y, void fit_pol1(NDView<double, 1> x, NDView<double, 1> y, NDView<double, 1> y_err,
NDView<double, 1> y_err, NDView<double, 1> par_out, NDView<double, 1> par_out, NDView<double, 1> par_err_out, double& chi2);
NDView<double, 1> par_err_out);
// TODO! not sure we need to offer the different version in C++
void fit_pol1(NDView<double, 1> x, NDView<double, 3> y, NDView<double, 3> y_err,
NDView<double, 3> par_out, NDView<double, 3> par_err_out,NDView<double, 2> chi2_out,
int n_threads = DEFAULT_NUM_THREADS);
//TODO! not sure we need to offer the different version in C++
void fit_pol1(NDView<double, 1> x, NDView<double, 3> y,
NDView<double, 3> y_err, NDView<double, 3> par_out,
NDView<double, 3> par_err_out, int n_threads = DEFAULT_NUM_THREADS);
} // namespace aare } // namespace aare
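An illustrative call of the updated fit API with the new chi2 output; the views are placeholders and their shapes follow the doc comments above.

```cpp
#include "aare/Fit.hpp"
#include "aare/NDArray.hpp"
#include <array>

using namespace aare;

void example_fit(NDView<double, 1> x, NDView<double, 1> y,
                 NDView<double, 1> y_err, NDView<double, 1> par_out,
                 NDView<double, 1> par_err_out) {
    // Starting values estimated from the data (new helper declared above)
    std::array<double, 3> p0 = gaus_init_par(x, y);
    (void)p0;

    double chi2 = 0.0;
    fit_gaus(x, y, y_err, par_out, par_err_out, chi2);

    // Convenience overload that allocates and returns the parameters
    NDArray<double, 1> par = fit_gaus(x, y);
    (void)par;
}
```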

include/aare/GainMap.hpp (new file, 58 lines)
View File

@ -0,0 +1,58 @@
/************************************************
* @file GainMap.hpp
* @short applies a gain map (of image size) to a vector of clusters
***********************************************/
#pragma once
#include "aare/Cluster.hpp"
#include "aare/ClusterVector.hpp"
#include "aare/NDArray.hpp"
#include "aare/NDView.hpp"
#include <memory>
namespace aare {
class GainMap {
public:
explicit GainMap(const NDArray<double, 2> &gain_map)
: m_gain_map(gain_map) {};
explicit GainMap(const NDView<double, 2> gain_map) {
m_gain_map = NDArray<double, 2>(gain_map);
}
template <typename ClusterType,
typename = std::enable_if_t<is_cluster_v<ClusterType>>>
void apply_gain_map(ClusterVector<ClusterType> &clustervec) {
// in principle we need to know the size of the image for this lookup
size_t ClusterSizeX = clustervec.cluster_size_x();
size_t ClusterSizeY = clustervec.cluster_size_y();
using T = typename ClusterVector<ClusterType>::value_type;
int64_t index_cluster_center_x = ClusterSizeX / 2;
int64_t index_cluster_center_y = ClusterSizeY / 2;
for (size_t i = 0; i < clustervec.size(); i++) {
auto &cl = clustervec.at(i);
if (cl.x > 0 && cl.y > 0 && cl.x < m_gain_map.shape(1) - 1 &&
cl.y < m_gain_map.shape(0) - 1) {
for (size_t j = 0; j < ClusterSizeX * ClusterSizeY; j++) {
size_t x = cl.x + j % ClusterSizeX - index_cluster_center_x;
size_t y = cl.y + j / ClusterSizeX - index_cluster_center_y;
cl.data[j] = cl.data[j] * static_cast<T>(m_gain_map(y, x));
}
} else {
memset(cl.data, 0,
ClusterSizeX * ClusterSizeY *
sizeof(T)); // clear edge clusters
}
}
}
private:
NDArray<double, 2> m_gain_map{};
};
} // end of namespace aare
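Illustrative use of the new GainMap helper; the gain values and cluster type are placeholders.

```cpp
#include "aare/GainMap.hpp"

using namespace aare;

void example_gain_map(NDView<double, 2> gains,
                      ClusterVector<Cluster<double, 3, 3>> &clusters) {
    GainMap gain_map(gains);
    gain_map.apply_gain_map(clusters); // scales each pixel, zeroes edge clusters
}
```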

View File

@ -0,0 +1,132 @@
#pragma once
#include "aare/CalculateEta.hpp"
#include "aare/Cluster.hpp"
#include "aare/ClusterFile.hpp" //Cluster_3x3
#include "aare/ClusterVector.hpp"
#include "aare/NDArray.hpp"
#include "aare/NDView.hpp"
#include "aare/algorithm.hpp"
namespace aare {
struct Photon {
double x;
double y;
double energy;
};
class Interpolator {
NDArray<double, 3> m_ietax;
NDArray<double, 3> m_ietay;
NDArray<double, 1> m_etabinsx;
NDArray<double, 1> m_etabinsy;
NDArray<double, 1> m_energy_bins;
public:
Interpolator(NDView<double, 3> etacube, NDView<double, 1> xbins,
NDView<double, 1> ybins, NDView<double, 1> ebins);
NDArray<double, 3> get_ietax() { return m_ietax; }
NDArray<double, 3> get_ietay() { return m_ietay; }
template <typename ClusterType,
typename Enable = std::enable_if_t<is_cluster_v<ClusterType>>>
std::vector<Photon> interpolate(const ClusterVector<ClusterType> &clusters);
};
// TODO: generalize to support any clustertype!!! otherwise add std::enable_if_t
// to only take Cluster2x2 and Cluster3x3
template <typename ClusterType, typename Enable>
std::vector<Photon>
Interpolator::interpolate(const ClusterVector<ClusterType> &clusters) {
std::vector<Photon> photons;
photons.reserve(clusters.size());
if (clusters.cluster_size_x() == 3 || clusters.cluster_size_y() == 3) {
for (size_t i = 0; i < clusters.size(); i++) {
auto cluster = clusters.at(i);
auto eta = calculate_eta2(cluster);
Photon photon;
photon.x = cluster.x;
photon.y = cluster.y;
photon.energy = eta.sum;
// auto ie = nearest_index(m_energy_bins, photon.energy)-1;
// auto ix = nearest_index(m_etabinsx, eta.x)-1;
// auto iy = nearest_index(m_etabinsy, eta.y)-1;
// Finding the index of the last element that is smaller
// should work fine as long as we have many bins
auto ie = last_smaller(m_energy_bins, photon.energy);
auto ix = last_smaller(m_etabinsx, eta.x);
auto iy = last_smaller(m_etabinsy, eta.y);
// fmt::print("ex: {}, ix: {}, iy: {}\n", ie, ix, iy);
double dX, dY;
// cBottomLeft = 0,
// cBottomRight = 1,
// cTopLeft = 2,
// cTopRight = 3
switch (eta.c) {
case cTopLeft:
dX = -1.;
dY = 0;
break;
case cTopRight:
dX = 0;
dY = 0;
break;
case cBottomLeft:
dX = -1.;
dY = -1.;
break;
case cBottomRight:
dX = 0.;
dY = -1.;
break;
}
photon.x += m_ietax(ix, iy, ie) * 2 + dX;
photon.y += m_ietay(ix, iy, ie) * 2 + dY;
photons.push_back(photon);
}
} else if (clusters.cluster_size_x() == 2 ||
clusters.cluster_size_y() == 2) {
for (size_t i = 0; i < clusters.size(); i++) {
auto cluster = clusters.at(i);
auto eta = calculate_eta2(cluster);
Photon photon;
photon.x = cluster.x;
photon.y = cluster.y;
photon.energy = eta.sum;
// Now do some actual interpolation.
// Find which energy bin the cluster is in
// auto ie = nearest_index(m_energy_bins, photon.energy)-1;
// auto ix = nearest_index(m_etabinsx, eta.x)-1;
// auto iy = nearest_index(m_etabinsy, eta.y)-1;
// Finding the index of the last element that is smaller
// should work fine as long as we have many bins
auto ie = last_smaller(m_energy_bins, photon.energy);
auto ix = last_smaller(m_etabinsx, eta.x);
auto iy = last_smaller(m_etabinsy, eta.y);
photon.x += m_ietax(ix, iy, ie) *
2; // eta goes between 0 and 1 but we could move the hit
// anywhere in the 2x2
photon.y += m_ietay(ix, iy, ie) * 2;
photons.push_back(photon);
}
} else {
throw std::runtime_error(
"Only 3x3 and 2x2 clusters are supported for interpolation");
}
return photons;
}
} // namespace aare
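A sketch of driving the interpolator from C++; the eta cube and bin edges are assumed to come from a previously filled eta histogram.

```cpp
#include "aare/Interpolator.hpp"

using namespace aare;

void example_interpolation(NDView<double, 3> etacube, NDView<double, 1> xbins,
                           NDView<double, 1> ybins, NDView<double, 1> ebins,
                           const ClusterVector<Cluster<int32_t, 3, 3>> &clusters) {
    Interpolator interp(etacube, xbins, ybins, ebins);

    std::vector<Photon> photons = interp.interpolate(clusters);
    for (const auto &ph : photons) {
        // ph.x / ph.y are interpolated pixel coordinates, ph.energy is the eta sum
        (void)ph;
    }
}
```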

View File

@ -69,6 +69,11 @@ class NDArray : public ArrayExpr<NDArray<T, Ndim>, Ndim> {
std::copy(v.begin(), v.end(), begin()); std::copy(v.begin(), v.end(), begin());
} }
template<size_t Size>
NDArray(const std::array<T, Size>& arr) : NDArray<T,1>({Size}) {
std::copy(arr.begin(), arr.end(), begin());
}
// Move constructor // Move constructor
NDArray(NDArray &&other) noexcept NDArray(NDArray &&other) noexcept
: shape_(other.shape_), strides_(c_strides<Ndim>(shape_)), : shape_(other.shape_), strides_(c_strides<Ndim>(shape_)),
@ -97,6 +102,9 @@ class NDArray : public ArrayExpr<NDArray<T, Ndim>, Ndim> {
auto begin() { return data_; } auto begin() { return data_; }
auto end() { return data_ + size_; } auto end() { return data_ + size_; }
auto begin() const { return data_; }
auto end() const { return data_ + size_; }
using value_type = T; using value_type = T;
NDArray &operator=(NDArray &&other) noexcept; // Move assign NDArray &operator=(NDArray &&other) noexcept; // Move assign
@ -105,6 +113,20 @@ class NDArray : public ArrayExpr<NDArray<T, Ndim>, Ndim> {
NDArray &operator-=(const NDArray &other); NDArray &operator-=(const NDArray &other);
NDArray &operator*=(const NDArray &other); NDArray &operator*=(const NDArray &other);
//Write directly to the data array, or create a new one
template<size_t Size>
NDArray<T,1>& operator=(const std::array<T,Size> &other){
if(Size != size_){
delete[] data_;
size_ = Size;
data_ = new T[size_];
}
for (size_t i = 0; i < Size; ++i) {
data_[i] = other[i];
}
return *this;
}
// NDArray& operator/=(const NDArray& other); // NDArray& operator/=(const NDArray& other);
template <typename V> NDArray &operator/=(const NDArray<V, Ndim> &other) { template <typename V> NDArray &operator/=(const NDArray<V, Ndim> &other) {
@ -135,6 +157,11 @@ class NDArray : public ArrayExpr<NDArray<T, Ndim>, Ndim> {
NDArray &operator&=(const T & /*mask*/); NDArray &operator&=(const T & /*mask*/);
void sqrt() { void sqrt() {
for (int i = 0; i < size_; ++i) { for (int i = 0; i < size_; ++i) {
data_[i] = std::sqrt(data_[i]); data_[i] = std::sqrt(data_[i]);
@ -318,6 +345,9 @@ NDArray<T, Ndim> &NDArray<T, Ndim>::operator+=(const T &value) {
return *this; return *this;
} }
template <typename T, int64_t Ndim> template <typename T, int64_t Ndim>
NDArray<T, Ndim> NDArray<T, Ndim>::operator+(const T &value) { NDArray<T, Ndim> NDArray<T, Ndim>::operator+(const T &value) {
NDArray result = *this; NDArray result = *this;
@ -361,12 +391,12 @@ NDArray<T, Ndim> NDArray<T, Ndim>::operator*(const T &value) {
result *= value; result *= value;
return result; return result;
} }
template <typename T, int64_t Ndim> void NDArray<T, Ndim>::Print() { // template <typename T, int64_t Ndim> void NDArray<T, Ndim>::Print() {
if (shape_[0] < 20 && shape_[1] < 20) // if (shape_[0] < 20 && shape_[1] < 20)
Print_all(); // Print_all();
else // else
Print_some(); // Print_some();
} // }
template <typename T, int64_t Ndim> template <typename T, int64_t Ndim>
std::ostream &operator<<(std::ostream &os, const NDArray<T, Ndim> &arr) { std::ostream &operator<<(std::ostream &os, const NDArray<T, Ndim> &arr) {
@ -418,4 +448,6 @@ NDArray<T, Ndim> load(const std::string &pathname,
return img; return img;
} }
} // namespace aare } // namespace aare
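A small sketch of the new std::array interop added to NDArray; the values are arbitrary.

```cpp
#include "aare/NDArray.hpp"
#include <array>

using namespace aare;

void example_ndarray_from_array() {
    std::array<double, 3> par{1.0, 2.0, 3.0};

    NDArray<double, 1> a(par);                // construct a 1D array from std::array
    a = std::array<double, 3>{4.0, 5.0, 6.0}; // reuses or reallocates the buffer

    const NDArray<double, 1> &ca = a;
    double sum = 0.0;
    for (double v : ca) // uses the newly added const begin()/end()
        sum += v;
    (void)sum;
}
```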

View File

@ -1,5 +1,5 @@
#pragma once #pragma once
#include "aare/defs.hpp"
#include "aare/ArrayExpr.hpp" #include "aare/ArrayExpr.hpp"
#include <algorithm> #include <algorithm>
@ -99,6 +99,15 @@ template <typename T, int64_t Ndim = 2> class NDView : public ArrayExpr<NDView<T
NDView &operator/=(const NDView &other) { return elemenwise(other, std::divides<T>()); } NDView &operator/=(const NDView &other) { return elemenwise(other, std::divides<T>()); }
template<size_t Size>
NDView& operator=(const std::array<T, Size> &arr) {
if(size() != arr.size())
throw std::runtime_error(LOCATION + "Array and NDView size mismatch");
std::copy(arr.begin(), arr.end(), begin());
return *this;
}
NDView &operator=(const T val) { NDView &operator=(const T val) {
for (auto it = begin(); it != end(); ++it) for (auto it = begin(); it != end(); ++it)
*it = val; *it = val;

View File

@ -64,7 +64,7 @@ class RawSubFile {
size_t bytes_per_frame() const { return m_bytes_per_frame; } size_t bytes_per_frame() const { return m_bytes_per_frame; }
size_t pixels_per_frame() const { return m_rows * m_cols; } size_t pixels_per_frame() const { return m_rows * m_cols; }
size_t bytes_per_pixel() const { return m_bitdepth / 8; } size_t bytes_per_pixel() const { return m_bitdepth / bits_per_byte; }
private: private:
template <typename T> template <typename T>

View File

@ -7,7 +7,7 @@
#include "aare/NDArray.hpp" #include "aare/NDArray.hpp"
const int MAX_CLUSTER_SIZE = 200; const int MAX_CLUSTER_SIZE = 50;
namespace aare { namespace aare {
template <typename T> class VarClusterFinder { template <typename T> class VarClusterFinder {

View File

@ -0,0 +1,55 @@
#pragma once
#include <algorithm>
#include <array>
#include <vector>
#include <aare/NDArray.hpp>
namespace aare {
/**
* @brief Find the index of the last element smaller than val.
* Assumes a sorted array.
*/
template <typename T>
size_t last_smaller(const T* first, const T* last, T val) {
for (auto iter = first+1; iter != last; ++iter) {
if (*iter > val) {
return std::distance(first, iter-1);
}
}
return std::distance(first, last-1);
}
template <typename T>
size_t last_smaller(const NDArray<T, 1>& arr, T val) {
return last_smaller(arr.begin(), arr.end(), val);
}
template <typename T>
size_t nearest_index(const T* first, const T* last, T val) {
auto iter = std::min_element(first, last,
[val](T a, T b) {
return std::abs(a - val) < std::abs(b - val);
});
return std::distance(first, iter);
}
template <typename T>
size_t nearest_index(const NDArray<T, 1>& arr, T val) {
return nearest_index(arr.begin(), arr.end(), val);
}
template <typename T>
size_t nearest_index(const std::vector<T>& vec, T val) {
return nearest_index(vec.data(), vec.data()+vec.size(), val);
}
template <typename T, size_t N>
size_t nearest_index(const std::array<T,N>& arr, T val) {
return nearest_index(arr.data(), arr.data()+arr.size(), val);
}
} // namespace aare
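A minimal example of the new helpers; the values are chosen to make the returned indices easy to follow.

```cpp
#include <aare/algorithm.hpp>
#include <array>

using namespace aare;

void example_algorithm() {
    std::array<double, 5> bins{0.0, 1.0, 2.0, 3.0, 4.0};

    // Index of the element closest to 2.3 -> 2
    size_t ni = nearest_index(bins, 2.3);

    // Index of the last element smaller than 2.3 in the sorted array -> 2
    size_t li = last_smaller(bins.data(), bins.data() + bins.size(), 2.3);

    (void)ni;
    (void)li;
}
```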

View File

@ -1,11 +1,9 @@
#pragma once #pragma once
#include "aare/Dtype.hpp" #include "aare/Dtype.hpp"
// #include "aare/utils/logger.hpp"
#include <array> #include <array>
#include <stdexcept> #include <stdexcept>
#include <cassert> #include <cassert>
#include <cstdint> #include <cstdint>
#include <cstring> #include <cstring>
@ -38,9 +36,12 @@
namespace aare { namespace aare {
inline constexpr size_t bits_per_byte = 8;
void assert_failed(const std::string &msg); void assert_failed(const std::string &msg);
class DynamicCluster { class DynamicCluster {
public: public:
int cluster_sizeX; int cluster_sizeX;
@ -213,6 +214,9 @@ struct ROI{
int64_t height() const { return ymax - ymin; } int64_t height() const { return ymax - ymin; }
int64_t width() const { return xmax - xmin; } int64_t width() const { return xmax - xmin; }
bool contains(int64_t x, int64_t y) const {
return x >= xmin && x < xmax && y >= ymin && y < ymax;
}
}; };
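A minimal sketch of the new contains() check; note that xmax/ymax are treated as exclusive, consistent with width() and height().

```cpp
aare::ROI roi{};
roi.xmin = 0; roi.xmax = 10;
roi.ymin = 0; roi.ymax = 10;

bool inside  = roi.contains(9, 9);  // true
bool outside = roi.contains(10, 0); // false, xmax is exclusive
```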

View File

@ -0,0 +1,18 @@
#include <thread>
#include <vector>
#include <utility>
namespace aare {
template<typename F>
void RunInParallel(F func, const std::vector<std::pair<int, int>>& tasks) {
// auto tasks = split_task(0, y.shape(0), n_threads);
std::vector<std::thread> threads;
for (auto &task : tasks) {
threads.push_back(std::thread(func, task.first, task.second));
}
for (auto &thread : threads) {
thread.join();
}
}
} // namespace aare
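A minimal sketch of RunInParallel; the header path and the row ranges are assumptions for illustration.

```cpp
#include "aare/utils/par.hpp" // assumed header name for RunInParallel
#include <utility>
#include <vector>

void example_run_in_parallel() {
    // One (begin, end) row range per thread
    std::vector<std::pair<int, int>> tasks{{0, 100}, {100, 200}, {200, 300}};

    aare::RunInParallel(
        [](int first, int last) {
            for (int row = first; row < last; ++row) {
                // process row
            }
        },
        tasks);
}
```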

View File

@ -0,0 +1,18 @@
diff --git a/CMakeLists.txt b/CMakeLists.txt
index dd3d8eb9..c0187747 100644
--- a/CMakeLists.txt
+++ b/CMakeLists.txt
@@ -1,11 +1,8 @@
# CMake build script for ZeroMQ
project(ZeroMQ)
-if(${CMAKE_SYSTEM_NAME} STREQUAL Darwin)
- cmake_minimum_required(VERSION 3.0.2)
-else()
- cmake_minimum_required(VERSION 2.8.12)
-endif()
+cmake_minimum_required(VERSION 3.15)
+message(STATUS "Patched cmake version")
include(CheckIncludeFiles)
include(CheckCCompilerFlag)

View File

@ -4,7 +4,9 @@ build-backend = "scikit_build_core.build"
[project] [project]
name = "aare" name = "aare"
version = "2025.2.12" version = "2025.4.1"
[tool.scikit-build] [tool.scikit-build]
cmake.verbose = true cmake.verbose = true

View File

@ -50,10 +50,10 @@ set(PYTHON_EXAMPLES
) )
# Copy the python examples to the build directory # Copy the python examples to the build directory
foreach(FILE ${PYTHON_EXAMPLES}) foreach(FILE ${PYTHON_EXAMPLES})
configure_file(${FILE} ${CMAKE_BINARY_DIR}/${FILE} ) configure_file(${FILE} ${CMAKE_BINARY_DIR}/${FILE} )
message(STATUS "Copying ${FILE} to ${CMAKE_BINARY_DIR}/${FILE}")
endforeach(FILE ${PYTHON_EXAMPLES}) endforeach(FILE ${PYTHON_EXAMPLES})

View File

@ -2,16 +2,17 @@
from . import _aare from . import _aare
from ._aare import File, RawMasterFile, RawSubFile # from ._aare import File, RawMasterFile, RawSubFile
from ._aare import Pedestal_d, Pedestal_f, ClusterFinder, VarClusterFinder # from ._aare import Pedestal_d, Pedestal_f, ClusterFinder, VarClusterFinder
from ._aare import DetectorType from ._aare import DetectorType
from ._aare import ClusterFile from ._aare import ClusterFile_Cluster3x3i as ClusterFile
from ._aare import hitmap from ._aare import hitmap
from ._aare import ROI
from ._aare import ClusterFinderMT, ClusterCollector, ClusterFileSink, ClusterVector_i # from ._aare import ClusterFinderMT, ClusterCollector, ClusterFileSink, ClusterVector_i
from ._aare import fit_gaus, fit_pol1 from ._aare import fit_gaus, fit_pol1
from ._aare import Interpolator
from .CtbRawFile import CtbRawFile from .CtbRawFile import CtbRawFile
from .RawFile import RawFile from .RawFile import RawFile
from .ScanParameters import ScanParameters from .ScanParameters import ScanParameters

View File

@ -1,68 +1,79 @@
import sys import sys
sys.path.append('/home/l_msdetect/erik/aare/build') sys.path.append('/home/l_msdetect/erik/aare/build')
#Our normal python imports from aare._aare import ClusterVector_i, Interpolator
from pathlib import Path
import matplotlib.pyplot as plt import pickle
import numpy as np import numpy as np
import matplotlib.pyplot as plt
import boost_histogram as bh import boost_histogram as bh
import torch
import math
import time import time
<<<<<<< HEAD
from aare import File, ClusterFinder, VarClusterFinder, ClusterFile, CtbRawFile
from aare import gaus, fit_gaus
base = Path('/mnt/sls_det_storage/moench_data/Julian/MOENCH05/20250113_first_xrays_redo/raw_files/')
cluster_file = Path('/home/l_msdetect/erik/tmp/Cu.clust')
t0 = time.perf_counter()
offset= -0.5
hist3d = bh.Histogram(
bh.axis.Regular(160, 0+offset, 160+offset), #x
bh.axis.Regular(150, 0+offset, 150+offset), #y
bh.axis.Regular(200, 0, 6000), #ADU
)
total_clusters = 0
with ClusterFile(cluster_file, chunk_size = 1000) as f:
for i, clusters in enumerate(f):
arr = np.array(clusters)
total_clusters += clusters.size
hist3d.fill(arr['y'],arr['x'], clusters.sum_2x2()) #python talks [row, col] cluster finder [x,y]
=======
from aare import RawFile
f = RawFile('/mnt/sls_det_storage/jungfrau_data1/vadym_tests/jf12_M431/laser_scan/laserScan_pedestal_G0_master_0.json')
print(f'{f.frame_number(1)}')
for i in range(10):
header, img = f.read_frame()
print(header['frameNumber'], img.shape)
>>>>>>> developer
t_elapsed = time.perf_counter()-t0 def gaussian_2d(mx, my, sigma = 1, res=100, grid_size = 2):
print(f'Histogram filling took: {t_elapsed:.3f}s {total_clusters/t_elapsed/1e6:.3f}M clusters/s') """
Generate a 2D gaussian at position mx, my, with sigma=sigma.
The gaussian is placed on a 2x2 pixel matrix with resolution
res in one dimension.
"""
x = torch.linspace(0, pixel_size*grid_size, res)
x,y = torch.meshgrid(x,x, indexing="ij")
return 1 / (2*math.pi*sigma**2) * \
torch.exp(-((x - my)**2 / (2*sigma**2) + (y - mx)**2 / (2*sigma**2)))
histogram_data = hist3d.counts() scale = 1000 #Scale factor when converting to integer
x = hist3d.axes[2].edges[:-1] pixel_size = 25 #um
grid = 2
resolution = 100
sigma_um = 10
xa = np.linspace(0,grid*pixel_size,resolution)
ticks = [0, 25, 50]
y = histogram_data[100,100,:] hit = np.array((20,20))
xx = np.linspace(x[0], x[-1]) etahist_fname = "/home/l_msdetect/erik/tmp/test_hist.pkl"
# fig, ax = plt.subplots()
# ax.step(x, y, where = 'post')
y_err = np.sqrt(y) local_resolution = 99
y_err = np.zeros(y.size) grid_size = 3
y_err += 1 xaxis = np.linspace(0,grid_size*pixel_size, local_resolution)
t = gaussian_2d(hit[0],hit[1], grid_size = grid_size, sigma = 10, res = local_resolution)
pixels = t.reshape(grid_size, t.shape[0] // grid_size, grid_size, t.shape[1] // grid_size).sum(axis = 3).sum(axis = 1)
pixels = pixels.numpy()
pixels = (pixels*scale).astype(np.int32)
v = ClusterVector_i(3,3)
v.push_back(1,1, pixels)
# par = fit_gaus2(y,x, y_err) with open(etahist_fname, "rb") as f:
# ax.plot(xx, gaus(xx,par)) hist = pickle.load(f)
# print(par) eta = hist.view().copy()
etabinsx = np.array(hist.axes.edges.T[0].flat)
etabinsy = np.array(hist.axes.edges.T[1].flat)
ebins = np.array(hist.axes.edges.T[2].flat)
p = Interpolator(eta, etabinsx[0:-1], etabinsy[0:-1], ebins[0:-1])
res = fit_gaus(y,x)
res2 = fit_gaus(y,x, y_err)
print(res)
print(res2)
#Generate the hit
tmp = p.interpolate(v)
print(f'tmp:{tmp}')
pos = np.array((tmp['x'], tmp['y']))*25
print(pixels)
fig, ax = plt.subplots(figsize = (7,7))
ax.pcolormesh(xaxis, xaxis, t)
ax.plot(*pos, 'o')
ax.set_xticks([0,25,50,75])
ax.set_yticks([0,25,50,75])
ax.set_xlim(0,75)
ax.set_ylim(0,75)
ax.grid()
print(f'{hit=}')
print(f'{pos=}')

View File

@ -16,141 +16,246 @@
namespace py = pybind11; namespace py = pybind11;
using pd_type = double; using pd_type = double;
template <typename T> using namespace aare;
template <typename Type, uint8_t ClusterSizeX, uint8_t ClusterSizeY,
typename CoordType>
void define_cluster(py::module &m, const std::string &typestr) {
auto class_name = fmt::format("Cluster{}", typestr);
using ClusterType =
Cluster<Type, ClusterSizeX, ClusterSizeY, CoordType, void>;
py::class_<Cluster<Type, ClusterSizeX, ClusterSizeY, CoordType, void>>(
m, class_name.c_str())
.def(py::init([](uint8_t x, uint8_t y, py::array_t<Type> data) {
py::buffer_info buf_info = data.request();
Type *ptr = static_cast<Type *>(buf_info.ptr);
Cluster<Type, ClusterSizeX, ClusterSizeY, CoordType, void> cluster;
cluster.x = x;
cluster.y = y;
std::copy(ptr, ptr + ClusterSizeX * ClusterSizeY,
cluster.data); // Copy array contents
return cluster;
}))
//.def(py::init<>())
.def_readwrite("x", &ClusterType::x)
.def_readwrite("y", &ClusterType::y)
.def_property(
"data",
[](ClusterType &c) -> py::array {
return py::array(py::buffer_info(
c.data, sizeof(Type),
py::format_descriptor<Type>::format(), // Type
// format
1, // Number of dimensions
{static_cast<ssize_t>(ClusterSizeX *
ClusterSizeY)}, // Shape (flattened)
{sizeof(Type)} // Stride (step size between elements)
));
},
[](ClusterType &c, py::array_t<Type> arr) {
py::buffer_info buf_info = arr.request();
Type *ptr = static_cast<Type *>(buf_info.ptr);
std::copy(ptr, ptr + ClusterSizeX * ClusterSizeY,
c.data); // TODO dont iterate over centers!!!
});
}
template <typename Type, uint8_t ClusterSizeX, uint8_t ClusterSizeY,
typename CoordType = uint16_t>
void define_cluster_vector(py::module &m, const std::string &typestr) { void define_cluster_vector(py::module &m, const std::string &typestr) {
using ClusterType =
Cluster<Type, ClusterSizeX, ClusterSizeY, uint16_t, void>;
auto class_name = fmt::format("ClusterVector_{}", typestr); auto class_name = fmt::format("ClusterVector_{}", typestr);
py::class_<ClusterVector<T>>(m, class_name.c_str(), py::buffer_protocol())
.def(py::init<int, int>()) py::class_<ClusterVector<ClusterType>>(m, class_name.c_str(),
.def_property_readonly("size", &ClusterVector<T>::size) py::buffer_protocol())
.def("item_size", &ClusterVector<T>::item_size)
.def(py::init()) // TODO change!!!
/*
.def("push_back",
[](ClusterVector<ClusterType> &self, ClusterType &cl) {
// auto view = make_view_2d(data);
self.push_back(cl);
})
*/
/*
.def(
"push_back",
[](ClusterVector<ClusterType> &self, py::object obj) {
ClusterType &cl = py::cast<ClusterType &>(obj);
self.push_back(cl);
},
py::arg("cluster"))
*/
.def("push_back",
[](ClusterVector<ClusterType> &self, const ClusterType &cluster) {
self.push_back(cluster);
})
//.def("push_back", &ClusterVector<ClusterType>::push_back) //TODO
// implement push_back
.def_property_readonly("size", &ClusterVector<ClusterType>::size)
.def("item_size", &ClusterVector<ClusterType>::item_size)
.def_property_readonly("fmt", .def_property_readonly("fmt",
[typestr](ClusterVector<T> &self) { [typestr]() { return fmt_format<ClusterType>; })
return fmt::format( /*
self.fmt_base(), self.cluster_size_x(),
self.cluster_size_y(), typestr);
})
.def("sum", .def("sum",
[](ClusterVector<T> &self) { [](ClusterVector<ClusterType> &self) {
auto *vec = new std::vector<T>(self.sum()); auto *vec = new std::vector<T>(self.sum());
return return_vector(vec); return return_vector(vec);
}) })
.def("sum_2x2", [](ClusterVector<T> &self) { .def("sum_2x2",
auto *vec = new std::vector<T>(self.sum_2x2()); [](ClusterVector<ClusterType> &self) {
return return_vector(vec); auto *vec = new std::vector<T>(self.sum_2x2());
}) return return_vector(vec);
.def_property_readonly("capacity", &ClusterVector<T>::capacity) })
.def_property("frame_number", &ClusterVector<T>::frame_number, */
&ClusterVector<T>::set_frame_number) .def_property_readonly("cluster_size_x",
.def_buffer([typestr](ClusterVector<T> &self) -> py::buffer_info { &ClusterVector<ClusterType>::cluster_size_x)
return py::buffer_info( .def_property_readonly("cluster_size_y",
self.data(), /* Pointer to buffer */ &ClusterVector<ClusterType>::cluster_size_y)
self.item_size(), /* Size of one scalar */ .def_property_readonly("capacity",
fmt::format(self.fmt_base(), self.cluster_size_x(), &ClusterVector<ClusterType>::capacity)
self.cluster_size_y(), .def_property("frame_number", &ClusterVector<ClusterType>::frame_number,
typestr), /* Format descriptor */ &ClusterVector<ClusterType>::set_frame_number)
1, /* Number of dimensions */ .def_buffer(
{self.size()}, /* Buffer dimensions */ [typestr](ClusterVector<ClusterType> &self) -> py::buffer_info {
{self.item_size()} /* Strides (in bytes) for each index */ return py::buffer_info(
); self.data(), /* Pointer to buffer */
}); self.item_size(), /* Size of one scalar */
fmt_format<ClusterType>, /* Format descriptor */
1, /* Number of dimensions */
{self.size()}, /* Buffer dimensions */
{self.item_size()} /* Strides (in bytes) for each index */
);
});
} }
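A hypothetical sketch of how these templated binding helpers might be instantiated in the module definition; the actual instantiation list is not shown in this hunk, only the "Cluster3x3i" naming is taken from python/aare/__init__.py.

```cpp
// Hypothetical registration (not part of this diff):
define_cluster<int, 3, 3, uint16_t>(m, "3x3i");     // -> Cluster3x3i
define_cluster_vector<int, 3, 3>(m, "Cluster3x3i"); // -> ClusterVector_Cluster3x3i
define_cluster_finder_bindings<Cluster<int, 3, 3>>(m, "Cluster3x3i");
define_cluster_finder_mt_bindings<Cluster<int, 3, 3>>(m, "Cluster3x3i");
```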
void define_cluster_finder_mt_bindings(py::module &m) { template <typename ClusterType>
py::class_<ClusterFinderMT<uint16_t, pd_type>>(m, "ClusterFinderMT") void define_cluster_finder_mt_bindings(py::module &m,
.def(py::init<Shape<2>, Shape<2>, pd_type, size_t, size_t>(), const std::string &typestr) {
py::arg("image_size"), py::arg("cluster_size"), auto class_name = fmt::format("ClusterFinderMT_{}", typestr);
py::arg("n_sigma") = 5.0, py::arg("capacity") = 2048,
py::arg("n_threads") = 3) py::class_<ClusterFinderMT<ClusterType, uint16_t, pd_type>>(
m, class_name.c_str())
.def(py::init<Shape<2>, pd_type, size_t, size_t>(),
py::arg("image_size"), py::arg("n_sigma") = 5.0,
py::arg("capacity") = 2048, py::arg("n_threads") = 3)
.def("push_pedestal_frame", .def("push_pedestal_frame",
[](ClusterFinderMT<uint16_t, pd_type> &self, [](ClusterFinderMT<ClusterType, uint16_t, pd_type> &self,
py::array_t<uint16_t> frame) { py::array_t<uint16_t> frame) {
auto view = make_view_2d(frame); auto view = make_view_2d(frame);
self.push_pedestal_frame(view); self.push_pedestal_frame(view);
}) })
.def( .def(
"find_clusters", "find_clusters",
[](ClusterFinderMT<uint16_t, pd_type> &self, [](ClusterFinderMT<ClusterType, uint16_t, pd_type> &self,
py::array_t<uint16_t> frame, uint64_t frame_number) { py::array_t<uint16_t> frame, uint64_t frame_number) {
auto view = make_view_2d(frame); auto view = make_view_2d(frame);
self.find_clusters(view, frame_number); self.find_clusters(view, frame_number);
return; return;
}, },
py::arg(), py::arg("frame_number") = 0) py::arg(), py::arg("frame_number") = 0)
.def("clear_pedestal", &ClusterFinderMT<uint16_t, pd_type>::clear_pedestal) .def("clear_pedestal",
.def("sync", &ClusterFinderMT<uint16_t, pd_type>::sync) &ClusterFinderMT<ClusterType, uint16_t, pd_type>::clear_pedestal)
.def("stop", &ClusterFinderMT<uint16_t, pd_type>::stop) .def("sync", &ClusterFinderMT<ClusterType, uint16_t, pd_type>::sync)
.def("start", &ClusterFinderMT<uint16_t, pd_type>::start) .def("stop", &ClusterFinderMT<ClusterType, uint16_t, pd_type>::stop)
.def("pedestal", .def("start", &ClusterFinderMT<ClusterType, uint16_t, pd_type>::start)
[](ClusterFinderMT<uint16_t, pd_type> &self, size_t thread_index) { .def(
auto pd = new NDArray<pd_type, 2>{}; "pedestal",
*pd = self.pedestal(thread_index); [](ClusterFinderMT<ClusterType, uint16_t, pd_type> &self,
return return_image_data(pd); size_t thread_index) {
},py::arg("thread_index") = 0) auto pd = new NDArray<pd_type, 2>{};
.def("noise", *pd = self.pedestal(thread_index);
[](ClusterFinderMT<uint16_t, pd_type> &self, size_t thread_index) { return return_image_data(pd);
auto arr = new NDArray<pd_type, 2>{}; },
*arr = self.noise(thread_index); py::arg("thread_index") = 0)
return return_image_data(arr); .def(
},py::arg("thread_index") = 0); "noise",
[](ClusterFinderMT<ClusterType, uint16_t, pd_type> &self,
size_t thread_index) {
auto arr = new NDArray<pd_type, 2>{};
*arr = self.noise(thread_index);
return return_image_data(arr);
},
py::arg("thread_index") = 0);
} }
void define_cluster_collector_bindings(py::module &m) { template <typename ClusterType>
py::class_<ClusterCollector>(m, "ClusterCollector") void define_cluster_collector_bindings(py::module &m,
.def(py::init<ClusterFinderMT<uint16_t, double, int32_t> *>()) const std::string &typestr) {
.def("stop", &ClusterCollector::stop) auto class_name = fmt::format("ClusterCollector_{}", typestr);
py::class_<ClusterCollector<ClusterType>>(m, class_name.c_str())
.def(py::init<ClusterFinderMT<ClusterType, uint16_t, double> *>())
.def("stop", &ClusterCollector<ClusterType>::stop)
.def( .def(
"steal_clusters", "steal_clusters",
[](ClusterCollector &self) { [](ClusterCollector<ClusterType> &self) {
auto v = auto v = new std::vector<ClusterVector<ClusterType>>(
new std::vector<ClusterVector<int>>(self.steal_clusters()); self.steal_clusters());
return v; return v;
}, },
py::return_value_policy::take_ownership); py::return_value_policy::take_ownership);
} }
void define_cluster_file_sink_bindings(py::module &m) { template <typename ClusterType>
py::class_<ClusterFileSink>(m, "ClusterFileSink") void define_cluster_file_sink_bindings(py::module &m,
.def(py::init<ClusterFinderMT<uint16_t, double, int32_t> *, const std::string &typestr) {
auto class_name = fmt::format("ClusterFileSink_{}", typestr);
py::class_<ClusterFileSink<ClusterType>>(m, class_name.c_str())
.def(py::init<ClusterFinderMT<ClusterType, uint16_t, double> *,
const std::filesystem::path &>()) const std::filesystem::path &>())
.def("stop", &ClusterFileSink::stop); .def("stop", &ClusterFileSink<ClusterType>::stop);
} }
void define_cluster_finder_bindings(py::module &m) { template <typename ClusterType>
py::class_<ClusterFinder<uint16_t, pd_type>>(m, "ClusterFinder") void define_cluster_finder_bindings(py::module &m, const std::string &typestr) {
.def(py::init<Shape<2>, Shape<2>, pd_type, size_t>(), auto class_name = fmt::format("ClusterFinder_{}", typestr);
py::arg("image_size"), py::arg("cluster_size"),
py::class_<ClusterFinder<ClusterType, uint16_t, pd_type>>(
m, class_name.c_str())
.def(py::init<Shape<2>, pd_type, size_t>(), py::arg("image_size"),
py::arg("n_sigma") = 5.0, py::arg("capacity") = 1'000'000) py::arg("n_sigma") = 5.0, py::arg("capacity") = 1'000'000)
.def("push_pedestal_frame", .def("push_pedestal_frame",
[](ClusterFinder<uint16_t, pd_type> &self, [](ClusterFinder<ClusterType, uint16_t, pd_type> &self,
py::array_t<uint16_t> frame) { py::array_t<uint16_t> frame) {
auto view = make_view_2d(frame); auto view = make_view_2d(frame);
self.push_pedestal_frame(view); self.push_pedestal_frame(view);
}) })
.def("clear_pedestal", &ClusterFinder<uint16_t, pd_type>::clear_pedestal) .def("clear_pedestal",
.def_property_readonly("pedestal", &ClusterFinder<ClusterType, uint16_t, pd_type>::clear_pedestal)
[](ClusterFinder<uint16_t, pd_type> &self) { .def_property_readonly(
auto pd = new NDArray<pd_type, 2>{}; "pedestal",
*pd = self.pedestal(); [](ClusterFinder<ClusterType, uint16_t, pd_type> &self) {
return return_image_data(pd); auto pd = new NDArray<pd_type, 2>{};
}) *pd = self.pedestal();
.def_property_readonly("noise", return return_image_data(pd);
[](ClusterFinder<uint16_t, pd_type> &self) { })
auto arr = new NDArray<pd_type, 2>{}; .def_property_readonly(
*arr = self.noise(); "noise",
return return_image_data(arr); [](ClusterFinder<ClusterType, uint16_t, pd_type> &self) {
}) auto arr = new NDArray<pd_type, 2>{};
*arr = self.noise();
return return_image_data(arr);
})
.def( .def(
"steal_clusters", "steal_clusters",
[](ClusterFinder<uint16_t, pd_type> &self, [](ClusterFinder<ClusterType, uint16_t, pd_type> &self,
bool realloc_same_capacity) { bool realloc_same_capacity) {
auto v = new ClusterVector<int>( auto v = new ClusterVector<ClusterType>(
self.steal_clusters(realloc_same_capacity)); self.steal_clusters(realloc_same_capacity));
return v; return v;
}, },
py::arg("realloc_same_capacity") = false) py::arg("realloc_same_capacity") = false)
.def( .def(
"find_clusters", "find_clusters",
[](ClusterFinder<uint16_t, pd_type> &self, [](ClusterFinder<ClusterType, uint16_t, pd_type> &self,
py::array_t<uint16_t> frame, uint64_t frame_number) { py::array_t<uint16_t> frame, uint64_t frame_number) {
auto view = make_view_2d(frame); auto view = make_view_2d(frame);
self.find_clusters(view, frame_number); self.find_clusters(view, frame_number);
@ -159,7 +264,7 @@ void define_cluster_finder_bindings(py::module &m) {
py::arg(), py::arg("frame_number") = 0); py::arg(), py::arg("frame_number") = 0);
m.def("hitmap", m.def("hitmap",
[](std::array<size_t, 2> image_size, ClusterVector<int32_t> &cv) { [](std::array<size_t, 2> image_size, ClusterVector<ClusterType> &cv) {
py::array_t<int32_t> hitmap(image_size); py::array_t<int32_t> hitmap(image_size);
auto r = hitmap.mutable_unchecked<2>(); auto r = hitmap.mutable_unchecked<2>();
@ -178,24 +283,4 @@ void define_cluster_finder_bindings(py::module &m) {
} }
return hitmap; return hitmap;
}); });
define_cluster_vector<int>(m, "i");
define_cluster_vector<double>(m, "d");
define_cluster_vector<float>(m, "f");
py::class_<DynamicCluster>(m, "DynamicCluster", py::buffer_protocol())
.def(py::init<int, int, Dtype>())
.def("size", &DynamicCluster::size)
.def("begin", &DynamicCluster::begin)
.def("end", &DynamicCluster::end)
.def_readwrite("x", &DynamicCluster::x)
.def_readwrite("y", &DynamicCluster::y)
.def_buffer([](DynamicCluster &c) -> py::buffer_info {
return py::buffer_info(c.data(), c.dt.bytes(), c.dt.format_descr(),
1, {c.size()}, {c.dt.bytes()});
})
.def("__repr__", [](const DynamicCluster &a) {
return "<DynamicCluster: x: " + std::to_string(a.x) +
", y: " + std::to_string(a.y) + ">";
});
} }

View File

@ -1,3 +1,4 @@
#include "aare/CalculateEta.hpp"
#include "aare/ClusterFile.hpp"
#include "aare/defs.hpp"
@ -10,63 +11,81 @@
#include <pybind11/stl/filesystem.h>
#include <string>

// Disable warnings for unused parameters, as we ignore some
// in the __exit__ method
#pragma GCC diagnostic push
#pragma GCC diagnostic ignored "-Wunused-parameter"

namespace py = pybind11;
using namespace ::aare;

template <typename ClusterType>
void define_cluster_file_io_bindings(py::module &m,
                                     const std::string &typestr) {

    // PYBIND11_NUMPY_DTYPE(Cluster<int, 3, 3>, x, y,
    // data); // is this used - maybe use as cluster type

    auto class_name = fmt::format("ClusterFile_{}", typestr);

    py::class_<ClusterFile<ClusterType>>(m, class_name.c_str())
        .def(py::init<const std::filesystem::path &, size_t,
                      const std::string &>(),
             py::arg(), py::arg("chunk_size") = 1000, py::arg("mode") = "r")
        .def(
            "read_clusters",
            [](ClusterFile<ClusterType> &self, size_t n_clusters) {
                auto v = new ClusterVector<ClusterType>(
                    self.read_clusters(n_clusters));
                return v;
            },
            py::return_value_policy::take_ownership)
        .def("read_frame",
             [](ClusterFile<ClusterType> &self) {
                 auto v = new ClusterVector<ClusterType>(self.read_frame());
                 return v;
             })
        .def("set_roi", &ClusterFile<ClusterType>::set_roi)
        .def(
            "set_noise_map",
            [](ClusterFile<ClusterType> &self, py::array_t<int32_t> noise_map) {
                auto view = make_view_2d(noise_map);
                self.set_noise_map(view);
            })
        .def("set_gain_map",
             [](ClusterFile<ClusterType> &self, py::array_t<double> gain_map) {
                 auto view = make_view_2d(gain_map);
                 self.set_gain_map(view);
             })
        // void set_gain_map(const GainMap &gain_map); //TODO do i need a
        // gainmap constructor?
        .def("close", &ClusterFile<ClusterType>::close)
        .def("write_frame", &ClusterFile<ClusterType>::write_frame)
        .def("__enter__", [](ClusterFile<ClusterType> &self) { return &self; })
        .def("__exit__",
             [](ClusterFile<ClusterType> &self,
                const std::optional<pybind11::type> &exc_type,
                const std::optional<pybind11::object> &exc_value,
                const std::optional<pybind11::object> &traceback) {
                 self.close();
             })
        .def("__iter__", [](ClusterFile<ClusterType> &self) { return &self; })
        .def("__next__", [](ClusterFile<ClusterType> &self) {
            auto v = new ClusterVector<ClusterType>(
                self.read_clusters(self.chunk_size()));
            if (v->size() == 0) {
                throw py::stop_iteration();
            }
            return v;
        });

    m.def("calculate_eta2",
          [](const aare::ClusterVector<ClusterType> &clusters) {
              auto eta2 = new NDArray<double, 2>(calculate_eta2(clusters));
              return return_image_data(eta2);
          });
}

#pragma GCC diagnostic pop
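
A minimal usage sketch of these typed file bindings from Python, assuming the module is built as _aare and the 3x3 int specialization is registered under the Cluster3x3i suffix as done in module.cpp below (the file path is a placeholder):

```python
from _aare import ClusterFile_Cluster3x3i, calculate_eta2

# Open a cluster file and iterate over it; each __next__ yields up to chunk_size clusters
with ClusterFile_Cluster3x3i("clusters.clust", chunk_size=1000) as f:
    for clusters in f:
        eta2 = calculate_eta2(clusters)   # (n_clusters, 2) array of eta_x, eta_y
        print(clusters.size, eta2.shape)
```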


@ -32,7 +32,7 @@ m.def("adc_sar_05_decode64to16", [](py::array_t<uint8_t> input) {
    }

    //Create a 2D output array with the same shape as the input
    std::vector<ssize_t> shape{input.shape(0), input.shape(1)/static_cast<int64_t>(bits_per_byte)};
    py::array_t<uint16_t> output(shape);

    //Create a view of the input and output arrays
@ -53,7 +53,7 @@ m.def("adc_sar_04_decode64to16", [](py::array_t<uint8_t> input) {
    }

    //Create a 2D output array with the same shape as the input
    std::vector<ssize_t> shape{input.shape(0), input.shape(1)/static_cast<int64_t>(bits_per_byte)};
    py::array_t<uint16_t> output(shape);

    //Create a view of the input and output arrays


@ -195,6 +195,8 @@ void define_file_io_bindings(py::module &m) {

    py::class_<ROI>(m, "ROI")
        .def(py::init<>())
        .def(py::init<int64_t, int64_t, int64_t, int64_t>(), py::arg("xmin"),
             py::arg("xmax"), py::arg("ymin"), py::arg("ymax"))
        .def_readwrite("xmin", &ROI::xmin)
        .def_readwrite("xmax", &ROI::xmax)
        .def_readwrite("ymin", &ROI::ymin)


@ -7,6 +7,8 @@
#include "aare/Fit.hpp"

namespace py = pybind11;
using namespace pybind11::literals;

void define_fit_bindings(py::module &m) {
@ -29,7 +31,8 @@ void define_fit_bindings(py::module &m) {
        The points at which to evaluate the Gaussian function.
    par : array_like
        The parameters of the Gaussian function. The first element is the amplitude, the second element is the mean, and the third element is the standard deviation.
    )",
        py::arg("x"), py::arg("par"));

    m.def(
        "pol1",
@ -49,7 +52,9 @@ void define_fit_bindings(py::module &m) {
        The points at which to evaluate the polynomial function.
    par : array_like
        The parameters of the polynomial function. The first element is the intercept, and the second element is the slope.
    )",
        py::arg("x"), py::arg("par"));

    m.def(
        "fit_gaus",
@ -72,7 +77,8 @@ void define_fit_bindings(py::module &m) {
                throw std::runtime_error("Data must be 1D or 3D");
            }
        },
        R"(
Fit a 1D Gaussian to data.

Parameters
@ -90,8 +96,9 @@ n_threads : int, optional
        "fit_gaus",
        [](py::array_t<double, py::array::c_style | py::array::forcecast> x,
           py::array_t<double, py::array::c_style | py::array::forcecast> y,
           py::array_t<double, py::array::c_style | py::array::forcecast> y_err,
           int n_threads) {
            if (y.ndim() == 3) {
                // Allocate memory for the output
                // Need to have pointers to allow python to manage
@ -99,15 +106,20 @@ n_threads : int, optional
                auto par = new NDArray<double, 3>({y.shape(0), y.shape(1), 3});
                auto par_err =
                    new NDArray<double, 3>({y.shape(0), y.shape(1), 3});
                auto chi2 = new NDArray<double, 2>({y.shape(0), y.shape(1)});

                // Make views of the numpy arrays
                auto y_view = make_view_3d(y);
                auto y_view_err = make_view_3d(y_err);
                auto x_view = make_view_1d(x);

                aare::fit_gaus(x_view, y_view, y_view_err, par->view(),
                               par_err->view(), chi2->view(), n_threads);

                return py::dict("par"_a = return_image_data(par),
                                "par_err"_a = return_image_data(par_err),
                                "chi2"_a = return_image_data(chi2),
                                "Ndf"_a = y.shape(2) - 3);
            } else if (y.ndim() == 1) {
                // Allocate memory for the output
                // Need to have pointers to allow python to manage
@ -120,15 +132,21 @@ n_threads : int, optional
                auto y_view_err = make_view_1d(y_err);
                auto x_view = make_view_1d(x);

                double chi2 = 0;

                aare::fit_gaus(x_view, y_view, y_view_err, par->view(),
                               par_err->view(), chi2);

                return py::dict("par"_a = return_image_data(par),
                                "par_err"_a = return_image_data(par_err),
                                "chi2"_a = chi2, "Ndf"_a = y.size() - 3);
            } else {
                throw std::runtime_error("Data must be 1D or 3D");
            }
        },
        R"(
Fit a 1D Gaussian to data with error estimates.

Parameters
@ -172,11 +190,11 @@ n_threads : int, optional
        "fit_pol1",
        [](py::array_t<double, py::array::c_style | py::array::forcecast> x,
           py::array_t<double, py::array::c_style | py::array::forcecast> y,
           py::array_t<double, py::array::c_style | py::array::forcecast> y_err,
           int n_threads) {
            if (y.ndim() == 3) {
                auto par = new NDArray<double, 3>({y.shape(0), y.shape(1), 2});
                auto par_err =
                    new NDArray<double, 3>({y.shape(0), y.shape(1), 2});
@ -184,10 +202,15 @@ n_threads : int, optional
                auto y_view_err = make_view_3d(y_err);
                auto x_view = make_view_1d(x);

                auto chi2 = new NDArray<double, 2>({y.shape(0), y.shape(1)});

                aare::fit_pol1(x_view, y_view, y_view_err, par->view(),
                               par_err->view(), chi2->view(), n_threads);

                return py::dict("par"_a = return_image_data(par),
                                "par_err"_a = return_image_data(par_err),
                                "chi2"_a = return_image_data(chi2),
                                "Ndf"_a = y.shape(2) - 2);
            } else if (y.ndim() == 1) {
                auto par = new NDArray<double, 1>({2});
@ -197,15 +220,19 @@ n_threads : int, optional
                auto y_view_err = make_view_1d(y_err);
                auto x_view = make_view_1d(x);

                double chi2 = 0;

                aare::fit_pol1(x_view, y_view, y_view_err, par->view(),
                               par_err->view(), chi2);

                return py::dict("par"_a = return_image_data(par),
                                "par_err"_a = return_image_data(par_err),
                                "chi2"_a = chi2, "Ndf"_a = y.size() - 2);
            } else {
                throw std::runtime_error("Data must be 1D or 3D");
            }
        },
        R"(
Fit a 1D polynomial to data with error estimates.

Parameters
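
After this change fit_gaus and fit_pol1 return a dict (par, par_err, chi2, Ndf) instead of a tuple. A hedged sketch of the new calling convention from Python, following the 3D branch of the binding above (the data stack is made up; the trailing argument is the thread count):

```python
import numpy as np
from _aare import fit_gaus

x = np.linspace(-5, 5, 50)
y = np.random.default_rng(0).normal(1000.0, 10.0, size=(256, 256, 50))  # placeholder histograms
y_err = np.sqrt(np.abs(y))

res = fit_gaus(x, y, y_err, 4)                 # n_threads passed positionally
print(res["par"].shape, res["par_err"].shape)  # (256, 256, 3) each
print(res["chi2"].shape, res["Ndf"])           # (256, 256), 50 - 3
```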


@ -0,0 +1,81 @@
#include "aare/Interpolator.hpp"
#include "aare/NDArray.hpp"
#include "aare/NDView.hpp"
#include "np_helper.hpp"
#include <cstdint>
#include <filesystem>
#include <pybind11/pybind11.h>
#include <pybind11/stl.h>
namespace py = pybind11;
template <typename ClusterType>
void register_interpolate(py::class_<aare::Interpolator> &interpolator,
const std::string &typestr) {
auto name = fmt::format("interpolate_{}", typestr);
interpolator.def(name.c_str(),
[](aare::Interpolator &self,
const ClusterVector<ClusterType> &clusters) {
auto photons = self.interpolate<ClusterType>(clusters);
auto *ptr = new std::vector<Photon>{photons};
return return_vector(ptr);
});
}
void define_interpolation_bindings(py::module &m) {
PYBIND11_NUMPY_DTYPE(aare::Photon, x, y, energy);
auto interpolator =
py::class_<aare::Interpolator>(m, "Interpolator")
.def(py::init([](py::array_t<double, py::array::c_style |
py::array::forcecast>
etacube,
py::array_t<double> xbins,
py::array_t<double> ybins,
py::array_t<double> ebins) {
return Interpolator(make_view_3d(etacube), make_view_1d(xbins),
make_view_1d(ybins), make_view_1d(ebins));
}))
.def("get_ietax",
[](Interpolator &self) {
auto *ptr = new NDArray<double, 3>{};
*ptr = self.get_ietax();
return return_image_data(ptr);
})
.def("get_ietay", [](Interpolator &self) {
auto *ptr = new NDArray<double, 3>{};
*ptr = self.get_ietay();
return return_image_data(ptr);
});
register_interpolate<Cluster<int, 3, 3>>(interpolator, "Cluster3x3i");
register_interpolate<Cluster<float, 3, 3>>(interpolator, "Cluster3x3f");
register_interpolate<Cluster<double, 3, 3>>(interpolator, "Cluster3x3d");
register_interpolate<Cluster<int, 2, 2>>(interpolator, "Cluster2x2i");
register_interpolate<Cluster<float, 2, 2>>(interpolator, "Cluster2x2f");
register_interpolate<Cluster<double, 2, 2>>(interpolator, "Cluster2x2d");
// TODO! Evaluate without converting to double
m.def(
"hej",
[]() {
// auto boost_histogram = py::module_::import("boost_histogram");
// py::object axis =
// boost_histogram.attr("axis").attr("Regular")(10, 0.0, 10.0);
// py::object histogram = boost_histogram.attr("Histogram")(axis);
// return histogram;
// return h;
},
R"(
Evaluate a 1D Gaussian function for all points in x using parameters par.
Parameters
----------
x : array_like
The points at which to evaluate the Gaussian function.
par : array_like
The parameters of the Gaussian function. The first element is the amplitude, the second element is the mean, and the third element is the standard deviation.
)");
}
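
A rough sketch of how these bindings are meant to be used from Python, mirroring the intent of the commented-out call in the new Python test further down (bin edges and the eta cube content are placeholders, and the shape of the returned structured array is assumed):

```python
import numpy as np
from _aare import Interpolator, ClusterVector_Cluster3x3i, Cluster3x3i

etacube = np.zeros((30, 30, 20))          # eta_x bins x eta_y bins x energy bins
xbins = np.linspace(0, 5, 30)
ybins = np.linspace(0, 5, 30)
ebins = np.linspace(0, 10, 20)
ip = Interpolator(etacube, xbins, ybins, ebins)

cv = ClusterVector_Cluster3x3i()
cv.push_back(Cluster3x3i(0, 0, np.ones(9, dtype=np.int32)))
photons = ip.interpolate_Cluster3x3i(cv)  # structured array with x, y, energy per photon
```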


@ -1,16 +1,17 @@
// Files with bindings to the different classes
#include "cluster.hpp"
#include "cluster_file.hpp"
#include "ctb_raw_file.hpp"
#include "file.hpp"
#include "fit.hpp"
#include "interpolation.hpp"
#include "pedestal.hpp"
#include "pixel_map.hpp"
#include "raw_file.hpp"
#include "raw_master_file.hpp"
#include "var_cluster.hpp"

// Pybind stuff
#include <pybind11/pybind11.h>
#include <pybind11/stl.h>
@ -25,11 +26,55 @@ PYBIND11_MODULE(_aare, m) {
    define_pixel_map_bindings(m);
    define_pedestal_bindings<double>(m, "Pedestal_d");
    define_pedestal_bindings<float>(m, "Pedestal_f");
define_cluster_finder_bindings(m);
define_cluster_finder_mt_bindings(m);
define_cluster_file_io_bindings(m);
define_cluster_collector_bindings(m);
define_cluster_file_sink_bindings(m);
    define_fit_bindings(m);
define_interpolation_bindings(m);
define_cluster_file_io_bindings<Cluster<int, 3, 3>>(m, "Cluster3x3i");
define_cluster_file_io_bindings<Cluster<double, 3, 3>>(m, "Cluster3x3d");
define_cluster_file_io_bindings<Cluster<float, 3, 3>>(m, "Cluster3x3f");
define_cluster_file_io_bindings<Cluster<int, 2, 2>>(m, "Cluster2x2i");
define_cluster_file_io_bindings<Cluster<float, 2, 2>>(m, "Cluster2x2f");
define_cluster_file_io_bindings<Cluster<double, 2, 2>>(m, "Cluster2x2d");
define_cluster_vector<int, 3, 3, uint16_t>(m, "Cluster3x3i");
define_cluster_vector<double, 3, 3, uint16_t>(m, "Cluster3x3d");
define_cluster_vector<float, 3, 3, uint16_t>(m, "Cluster3x3f");
define_cluster_vector<int, 2, 2, uint16_t>(m, "Cluster2x2i");
define_cluster_vector<double, 2, 2, uint16_t>(m, "Cluster2x2d");
define_cluster_vector<float, 2, 2, uint16_t>(m, "Cluster2x2f");
define_cluster_finder_bindings<Cluster<int, 3, 3>>(m, "Cluster3x3i");
define_cluster_finder_bindings<Cluster<double, 3, 3>>(m, "Cluster3x3d");
define_cluster_finder_bindings<Cluster<float, 3, 3>>(m, "Cluster3x3f");
define_cluster_finder_bindings<Cluster<int, 2, 2>>(m, "Cluster2x2i");
define_cluster_finder_bindings<Cluster<double, 2, 2>>(m, "Cluster2x2d");
define_cluster_finder_bindings<Cluster<float, 2, 2>>(m, "Cluster2x2f");
define_cluster_finder_mt_bindings<Cluster<int, 3, 3>>(m, "Cluster3x3i");
define_cluster_finder_mt_bindings<Cluster<double, 3, 3>>(m, "Cluster3x3d");
define_cluster_finder_mt_bindings<Cluster<float, 3, 3>>(m, "Cluster3x3f");
define_cluster_finder_mt_bindings<Cluster<int, 2, 2>>(m, "Cluster2x2i");
define_cluster_finder_mt_bindings<Cluster<double, 2, 2>>(m, "Cluster2x2d");
define_cluster_finder_mt_bindings<Cluster<float, 2, 2>>(m, "Cluster2x2f");
define_cluster_file_sink_bindings<Cluster<int, 3, 3>>(m, "Cluster3x3i");
define_cluster_file_sink_bindings<Cluster<double, 3, 3>>(m, "Cluster3x3d");
define_cluster_file_sink_bindings<Cluster<float, 3, 3>>(m, "Cluster3x3f");
define_cluster_file_sink_bindings<Cluster<int, 2, 2>>(m, "Cluster2x2i");
define_cluster_file_sink_bindings<Cluster<double, 2, 2>>(m, "Cluster2x2d");
define_cluster_file_sink_bindings<Cluster<float, 2, 2>>(m, "Cluster2x2f");
define_cluster_collector_bindings<Cluster<int, 3, 3>>(m, "Cluster3x3i");
define_cluster_collector_bindings<Cluster<double, 3, 3>>(m, "Cluster3x3f");
define_cluster_collector_bindings<Cluster<float, 3, 3>>(m, "Cluster3x3d");
define_cluster_collector_bindings<Cluster<int, 2, 2>>(m, "Cluster2x2i");
define_cluster_collector_bindings<Cluster<double, 2, 2>>(m, "Cluster2x2f");
define_cluster_collector_bindings<Cluster<float, 2, 2>>(m, "Cluster2x2d");
define_cluster<int, 3, 3, uint16_t>(m, "3x3i");
define_cluster<float, 3, 3, uint16_t>(m, "3x3f");
define_cluster<double, 3, 3, uint16_t>(m, "3x3d");
define_cluster<int, 2, 2, uint16_t>(m, "2x2i");
define_cluster<float, 2, 2, uint16_t>(m, "2x2f");
define_cluster<double, 2, 2, uint16_t>(m, "2x2d");
}


@ -10,6 +10,7 @@
#include "aare/NDView.hpp"

namespace py = pybind11;
using namespace aare;

// Pass image data back to python as a numpy array
template <typename T, int64_t Ndim>
@ -40,25 +41,46 @@ template <typename T> py::array return_vector(std::vector<T> *vec) {
}

// todo rewrite generic
template <class T, int Flags>
auto get_shape_3d(const py::array_t<T, Flags> &arr) {
    return aare::Shape<3>{arr.shape(0), arr.shape(1), arr.shape(2)};
}

template <class T, int Flags> auto make_view_3d(py::array_t<T, Flags> &arr) {
    return aare::NDView<T, 3>(arr.mutable_data(), get_shape_3d<T, Flags>(arr));
}

template <class T, int Flags>
auto get_shape_2d(const py::array_t<T, Flags> &arr) {
    return aare::Shape<2>{arr.shape(0), arr.shape(1)};
}

template <class T, int Flags>
auto get_shape_1d(const py::array_t<T, Flags> &arr) {
    return aare::Shape<1>{arr.shape(0)};
}

template <class T, int Flags> auto make_view_2d(py::array_t<T, Flags> &arr) {
    return aare::NDView<T, 2>(arr.mutable_data(), get_shape_2d<T, Flags>(arr));
}

template <class T, int Flags> auto make_view_1d(py::array_t<T, Flags> &arr) {
    return aare::NDView<T, 1>(arr.mutable_data(), get_shape_1d<T, Flags>(arr));
}
template <typename ClusterType> struct fmt_format_trait; // forward declaration
template <typename T, uint8_t ClusterSizeX, uint8_t ClusterSizeY,
typename CoordType>
struct fmt_format_trait<Cluster<T, ClusterSizeX, ClusterSizeY, CoordType>> {
static std::string value() {
return fmt::format("T{{{}:x;{}:y;{}:data;}}",
py::format_descriptor<CoordType>::format(),
py::format_descriptor<CoordType>::format(),
fmt::format("{}{}", ClusterSizeX * ClusterSizeY,
py::format_descriptor<T>::format()));
}
};
template <typename ClusterType>
auto fmt_format = fmt_format_trait<ClusterType>::value();
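
To make the intent of this trait concrete: for Cluster<int, 3, 3, uint16_t> the generated buffer-format string should be "T{H:x;H:y;9i:data;}", i.e. the same layout as the structured NumPy dtype below (a hedged illustration, not code from this commit):

```python
import numpy as np

# Structured dtype equivalent to the Cluster<int, 3, 3, uint16_t> memory layout:
# two uint16 coordinates followed by 9 int32 data values.
cluster3x3i_dtype = np.dtype([("x", np.uint16), ("y", np.uint16), ("data", np.int32, (9,))])
print(cluster3x3i_dtype.itemsize)  # 40 bytes = 2 + 2 + 9 * 4
```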


@ -19,15 +19,24 @@ using namespace::aare;
void define_var_cluster_finder_bindings(py::module &m) {
    PYBIND11_NUMPY_DTYPE(VarClusterFinder<double>::Hit, size, row, col,
                         reserved, energy, max, rows, cols, enes);

    py::class_<VarClusterFinder<double>>(m, "VarClusterFinder")
        .def(py::init<Shape<2>, double>())
        .def("labeled",
             [](VarClusterFinder<double> &self) {
                 auto *ptr = new NDArray<int, 2>(self.labeled());
                 return return_image_data(ptr);
             })
        .def("set_noiseMap",
             [](VarClusterFinder<double> &self,
                py::array_t<double, py::array::c_style | py::array::forcecast>
                    noise_map) {
                 auto noise_map_span = make_view_2d(noise_map);
                 self.set_noiseMap(noise_map_span);
             })
        .def("set_peripheralThresholdFactor",
             &VarClusterFinder<double>::set_peripheralThresholdFactor)
        .def("find_clusters",
             [](VarClusterFinder<double> &self,
                py::array_t<double, py::array::c_style | py::array::forcecast>
@ -35,6 +44,30 @@ void define_var_cluster_finder_bindings(py::module &m) {
                 auto view = make_view_2d(img);
                 self.find_clusters(view);
             })
        .def("find_clusters_X",
             [](VarClusterFinder<double> &self,
                py::array_t<double, py::array::c_style | py::array::forcecast>
                    img) {
                 auto img_span = make_view_2d(img);
                 self.find_clusters_X(img_span);
             })
        .def("single_pass",
             [](VarClusterFinder<double> &self,
                py::array_t<double, py::array::c_style | py::array::forcecast>
                    img) {
                 auto img_span = make_view_2d(img);
                 self.single_pass(img_span);
             })
        .def("hits",
             [](VarClusterFinder<double> &self) {
                 auto ptr = new std::vector<VarClusterFinder<double>::Hit>(
                     self.steal_hits());
                 return return_vector(ptr);
             })
        .def("clear_hits",
             [](VarClusterFinder<double> &self) { self.clear_hits(); })
        .def("steal_hits",
             [](VarClusterFinder<double> &self) {
                 auto ptr = new std::vector<VarClusterFinder<double>::Hit>(


@ -0,0 +1,64 @@
import pytest
import numpy as np
from _aare import ClusterVector_Cluster3x3i, Interpolator, Cluster3x3i, ClusterFinder_Cluster3x3i
def test_ClusterVector():
"""Test ClusterVector"""
clustervector = ClusterVector_Cluster3x3i()
assert clustervector.cluster_size_x == 3
assert clustervector.cluster_size_y == 3
assert clustervector.item_size() == 4+9*4
assert clustervector.frame_number == 0
assert clustervector.capacity == 1024
assert clustervector.size == 0
cluster = Cluster3x3i(0,0,np.ones(9, dtype=np.int32))
clustervector.push_back(cluster)
assert clustervector.size == 1
#push_back - check size
def test_Interpolator():
"""Test Interpolator"""
ebins = np.linspace(0,10, 20, dtype=np.float64)
xbins = np.linspace(0, 5, 30, dtype=np.float64)
ybins = np.linspace(0, 5, 30, dtype=np.float64)
etacube = np.zeros(shape=[30, 30, 20], dtype=np.float64)
interpolator = Interpolator(etacube, xbins, ybins, ebins)
assert interpolator.get_ietax().shape == (30,30,20)
assert interpolator.get_ietay().shape == (30,30,20)
clustervector = ClusterVector_Cluster3x3i()
cluster = Cluster3x3i(0,0, np.ones(9, dtype=np.int32))
#clustervector.push_back(cluster)
#num_clusters = 1;
#assert interpolator.interpolate_Cluster3x3i(clustervector).shape == (num_clusters, 3)
#def test_cluster_file():
#def test_cluster_finder():
#"""Test ClusterFinder"""
#clusterfinder = ClusterFinder_Cluster3x3i([100,100])
#clusterfinder.find_clusters()
#clusters = clusterfinder.steal_clusters()
#print("cluster size: ", clusters.size())

src/CalculateEta.test.cpp (new file, 62 lines)

@ -0,0 +1,62 @@
/************************************************
* @file CalculateEta.test.cpp
* @short test case to calculate_eta2
***********************************************/
#include "aare/CalculateEta.hpp"
#include "aare/Cluster.hpp"
#include "aare/ClusterFile.hpp"
// #include "catch.hpp"
#include <array>
#include <catch2/catch_all.hpp>
#include <catch2/catch_test_macros.hpp>
using namespace aare;
using ClusterTypes =
std::variant<Cluster<int, 2, 2>, Cluster<int, 3, 3>, Cluster<int, 5, 5>,
Cluster<int, 4, 2>, Cluster<int, 2, 3>>;
auto get_test_parameters() {
return GENERATE(
std::make_tuple(ClusterTypes{Cluster<int, 2, 2>{0, 0, {1, 2, 3, 1}}},
Eta2<int>{2. / 3, 3. / 4, corner::cBottomLeft, 7}),
std::make_tuple(
ClusterTypes{Cluster<int, 3, 3>{0, 0, {1, 2, 3, 4, 5, 6, 1, 2, 7}}},
Eta2<int>{6. / 11, 2. / 7, corner::cTopRight, 20}),
std::make_tuple(ClusterTypes{Cluster<int, 5, 5>{
0, 0, {1, 6, 7, 6, 5, 4, 3, 2, 1, 8, 8, 9, 2,
1, 4, 5, 6, 7, 8, 4, 1, 1, 1, 1, 1}}},
Eta2<int>{9. / 17, 5. / 13, 8, 28}),
std::make_tuple(
ClusterTypes{Cluster<int, 4, 2>{0, 0, {1, 4, 7, 2, 5, 6, 4, 3}}},
Eta2<int>{7. / 11, 6. / 10, 1, 21}),
std::make_tuple(
ClusterTypes{Cluster<int, 2, 3>{0, 0, {1, 3, 2, 3, 4, 2}}},
Eta2<int>{3. / 5, 4. / 6, 1, 11}));
}
TEST_CASE("compute_largest_2x2_subcluster", "[.eta_calculation]") {
auto [cluster, expected_eta] = get_test_parameters();
auto [sum, index] = std::visit(
[](const auto &clustertype) { return clustertype.max_sum_2x2(); },
cluster);
CHECK(expected_eta.c == index);
CHECK(expected_eta.sum == sum);
}
TEST_CASE("calculate_eta2", "[.eta_calculation]") {
auto [cluster, expected_eta] = get_test_parameters();
auto eta = std::visit(
[](const auto &clustertype) { return calculate_eta2(clustertype); },
cluster);
CHECK(eta.x == expected_eta.x);
CHECK(eta.y == expected_eta.y);
CHECK(eta.c == expected_eta.c);
CHECK(eta.sum == expected_eta.sum);
}
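
As a sanity check on the first test tuple: for the 2x2 cluster with data {1, 2, 3, 1} the whole cluster is its own largest 2x2 sub-cluster (sum 7), and the eta values are the weight ratios along x and y, which a few lines of Python reproduce (a hedged illustration of the expected numbers, not part of the test itself):

```python
data = [1, 2, 3, 1]                     # 2x2 cluster, row-major: (0,0) (1,0) (0,1) (1,1)
total = sum(data)                       # 7; for a 2x2 cluster max_sum_2x2 is the full sum
eta_x = data[1] / (data[0] + data[1])   # 2/3
eta_y = data[2] / (data[0] + data[2])   # 3/4
print(total, eta_x, eta_y)
```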

src/Cluster.test.cpp (new file, 28 lines)

@ -0,0 +1,28 @@
/************************************************
* @file test-Cluster.cpp
* @short test case for generic Cluster, ClusterVector, and calculate_eta2
***********************************************/
#include "aare/Cluster.hpp"
#include "aare/CalculateEta.hpp"
#include "aare/ClusterFile.hpp"
// #include "catch.hpp"
#include <array>
#include <catch2/catch_all.hpp>
#include <catch2/catch_test_macros.hpp>
using namespace aare;
TEST_CASE("Correct Instantiation of Cluster and ClusterVector",
"[.cluster][.instantiation]") {
CHECK(is_valid_cluster<double, 3, 3>);
CHECK(is_valid_cluster<double, 3, 2>);
CHECK(not is_valid_cluster<int, 0, 0>);
CHECK(not is_valid_cluster<std::string, 2, 2>);
CHECK(not is_valid_cluster<int, 2, 2, double>);
CHECK(not is_cluster_v<int>);
CHECK(is_cluster_v<Cluster<int, 3, 3>>);
}


@ -1,442 +0,0 @@
#include "aare/ClusterFile.hpp"
#include <algorithm>
namespace aare {
ClusterFile::ClusterFile(const std::filesystem::path &fname, size_t chunk_size,
const std::string &mode)
: m_chunk_size(chunk_size), m_mode(mode) {
if (mode == "r") {
fp = fopen(fname.c_str(), "rb");
if (!fp) {
throw std::runtime_error("Could not open file for reading: " +
fname.string());
}
} else if (mode == "w") {
fp = fopen(fname.c_str(), "wb");
if (!fp) {
throw std::runtime_error("Could not open file for writing: " +
fname.string());
}
} else if (mode == "a") {
fp = fopen(fname.c_str(), "ab");
if (!fp) {
throw std::runtime_error("Could not open file for appending: " +
fname.string());
}
} else {
throw std::runtime_error("Unsupported mode: " + mode);
}
}
ClusterFile::~ClusterFile() { close(); }
void ClusterFile::close() {
if (fp) {
fclose(fp);
fp = nullptr;
}
}
void ClusterFile::write_frame(const ClusterVector<int32_t> &clusters) {
if (m_mode != "w" && m_mode != "a") {
throw std::runtime_error("File not opened for writing");
}
if (!(clusters.cluster_size_x() == 3) &&
!(clusters.cluster_size_y() == 3)) {
throw std::runtime_error("Only 3x3 clusters are supported");
}
int32_t frame_number = clusters.frame_number();
fwrite(&frame_number, sizeof(frame_number), 1, fp);
uint32_t n_clusters = clusters.size();
fwrite(&n_clusters, sizeof(n_clusters), 1, fp);
fwrite(clusters.data(), clusters.item_size(), clusters.size(), fp);
}
ClusterVector<int32_t> ClusterFile::read_clusters(size_t n_clusters) {
if (m_mode != "r") {
throw std::runtime_error("File not opened for reading");
}
ClusterVector<int32_t> clusters(3,3, n_clusters);
int32_t iframe = 0; // frame number needs to be 4 bytes!
size_t nph_read = 0;
uint32_t nn = m_num_left;
uint32_t nph = m_num_left; // number of clusters in frame needs to be 4
// auto buf = reinterpret_cast<Cluster3x3 *>(clusters.data());
auto buf = clusters.data();
// if there are photons left from previous frame read them first
if (nph) {
if (nph > n_clusters) {
// if we have more photons left in the frame then photons to read we
// read directly the requested number
nn = n_clusters;
} else {
nn = nph;
}
nph_read += fread((buf + nph_read*clusters.item_size()),
clusters.item_size(), nn, fp);
m_num_left = nph - nn; // write back the number of photons left
}
if (nph_read < n_clusters) {
// keep on reading frames and photons until reaching n_clusters
while (fread(&iframe, sizeof(iframe), 1, fp)) {
// read number of clusters in frame
if (fread(&nph, sizeof(nph), 1, fp)) {
if (nph > (n_clusters - nph_read))
nn = n_clusters - nph_read;
else
nn = nph;
nph_read += fread((buf + nph_read*clusters.item_size()),
clusters.item_size(), nn, fp);
m_num_left = nph - nn;
}
if (nph_read >= n_clusters)
break;
}
}
// Resize the vector to the number of clusters.
// No new allocation, only change bounds.
clusters.resize(nph_read);
return clusters;
}
ClusterVector<int32_t> ClusterFile::read_frame() {
if (m_mode != "r") {
throw std::runtime_error("File not opened for reading");
}
if (m_num_left) {
throw std::runtime_error(
"There are still photons left in the last frame");
}
int32_t frame_number;
if (fread(&frame_number, sizeof(frame_number), 1, fp) != 1) {
throw std::runtime_error("Could not read frame number");
}
int32_t n_clusters; // Saved as 32bit integer in the cluster file
if (fread(&n_clusters, sizeof(n_clusters), 1, fp) != 1) {
throw std::runtime_error("Could not read number of clusters");
}
// std::vector<Cluster3x3> clusters(n_clusters);
ClusterVector<int32_t> clusters(3, 3, n_clusters);
clusters.set_frame_number(frame_number);
if (fread(clusters.data(), clusters.item_size(), n_clusters, fp) !=
static_cast<size_t>(n_clusters)) {
throw std::runtime_error("Could not read clusters");
}
clusters.resize(n_clusters);
return clusters;
}
// std::vector<Cluster3x3> ClusterFile::read_cluster_with_cut(size_t n_clusters,
// double *noise_map,
// int nx, int ny) {
// if (m_mode != "r") {
// throw std::runtime_error("File not opened for reading");
// }
// std::vector<Cluster3x3> clusters(n_clusters);
// // size_t read_clusters_with_cut(FILE *fp, size_t n_clusters, Cluster *buf,
// // uint32_t *n_left, double *noise_map, int
// // nx, int ny) {
// int iframe = 0;
// // uint32_t nph = *n_left;
// uint32_t nph = m_num_left;
// // uint32_t nn = *n_left;
// uint32_t nn = m_num_left;
// size_t nph_read = 0;
// int32_t t2max, tot1;
// int32_t tot3;
// // Cluster *ptr = buf;
// Cluster3x3 *ptr = clusters.data();
// int good = 1;
// double noise;
// // read photons left from previous frame
// if (noise_map)
// printf("Using noise map\n");
// if (nph) {
// if (nph > n_clusters) {
// // if we have more photons left in the frame then photons to
// // read we read directly the requested number
// nn = n_clusters;
// } else {
// nn = nph;
// }
// for (size_t iph = 0; iph < nn; iph++) {
// // read photons 1 by 1
// size_t n_read =
// fread(reinterpret_cast<void *>(ptr), sizeof(Cluster3x3), 1, fp);
// if (n_read != 1) {
// clusters.resize(nph_read);
// return clusters;
// }
// // TODO! error handling on read
// good = 1;
// if (noise_map) {
// if (ptr->x >= 0 && ptr->x < nx && ptr->y >= 0 && ptr->y < ny) {
// tot1 = ptr->data[4];
// analyze_cluster(*ptr, &t2max, &tot3, NULL, NULL, NULL, NULL,
// NULL);
// noise = noise_map[ptr->y * nx + ptr->x];
// if (tot1 > noise || t2max > 2 * noise || tot3 > 3 * noise) {
// ;
// } else {
// good = 0;
// printf("%d %d %f %d %d %d\n", ptr->x, ptr->y, noise,
// tot1, t2max, tot3);
// }
// } else {
// printf("Bad pixel number %d %d\n", ptr->x, ptr->y);
// good = 0;
// }
// }
// if (good) {
// ptr++;
// nph_read++;
// }
// (m_num_left)--;
// if (nph_read >= n_clusters)
// break;
// }
// }
// if (nph_read < n_clusters) {
// // // keep on reading frames and photons until reaching
// // n_clusters
// while (fread(&iframe, sizeof(iframe), 1, fp)) {
// // // printf("%d\n",nph_read);
// if (fread(&nph, sizeof(nph), 1, fp)) {
// // // printf("** %d\n",nph);
// m_num_left = nph;
// for (size_t iph = 0; iph < nph; iph++) {
// // // read photons 1 by 1
// size_t n_read = fread(reinterpret_cast<void *>(ptr),
// sizeof(Cluster3x3), 1, fp);
// if (n_read != 1) {
// clusters.resize(nph_read);
// return clusters;
// // return nph_read;
// }
// good = 1;
// if (noise_map) {
// if (ptr->x >= 0 && ptr->x < nx && ptr->y >= 0 &&
// ptr->y < ny) {
// tot1 = ptr->data[4];
// analyze_cluster(*ptr, &t2max, &tot3, NULL, NULL,
// NULL, NULL, NULL);
// // noise = noise_map[ptr->y * nx + ptr->x];
// noise = noise_map[ptr->y + ny * ptr->x];
// if (tot1 > noise || t2max > 2 * noise ||
// tot3 > 3 * noise) {
// ;
// } else
// good = 0;
// } else {
// printf("Bad pixel number %d %d\n", ptr->x, ptr->y);
// good = 0;
// }
// }
// if (good) {
// ptr++;
// nph_read++;
// }
// (m_num_left)--;
// if (nph_read >= n_clusters)
// break;
// }
// }
// if (nph_read >= n_clusters)
// break;
// }
// }
// // printf("%d\n",nph_read);
// clusters.resize(nph_read);
// return clusters;
// }
NDArray<double, 2> calculate_eta2(ClusterVector<int> &clusters) {
//TOTO! make work with 2x2 clusters
NDArray<double, 2> eta2({static_cast<int64_t>(clusters.size()), 2});
for (size_t i = 0; i < clusters.size(); i++) {
auto e = calculate_eta2(clusters.at<Cluster3x3>(i));
eta2(i, 0) = e.x;
eta2(i, 1) = e.y;
}
return eta2;
}
/**
* @brief Calculate the eta2 values for a 3x3 cluster and return them in a Eta2 struct
* containing etay, etax and the corner of the cluster.
*/
Eta2 calculate_eta2(Cluster3x3 &cl) {
Eta2 eta{};
std::array<int32_t, 4> tot2;
tot2[0] = cl.data[0] + cl.data[1] + cl.data[3] + cl.data[4];
tot2[1] = cl.data[1] + cl.data[2] + cl.data[4] + cl.data[5];
tot2[2] = cl.data[3] + cl.data[4] + cl.data[6] + cl.data[7];
tot2[3] = cl.data[4] + cl.data[5] + cl.data[7] + cl.data[8];
auto c = std::max_element(tot2.begin(), tot2.end()) - tot2.begin();
switch (c) {
case cBottomLeft:
if ((cl.data[3] + cl.data[4]) != 0)
eta.x =
static_cast<double>(cl.data[4]) / (cl.data[3] + cl.data[4]);
if ((cl.data[1] + cl.data[4]) != 0)
eta.y =
static_cast<double>(cl.data[4]) / (cl.data[1] + cl.data[4]);
eta.c = cBottomLeft;
break;
case cBottomRight:
if ((cl.data[2] + cl.data[5]) != 0)
eta.x =
static_cast<double>(cl.data[5]) / (cl.data[4] + cl.data[5]);
if ((cl.data[1] + cl.data[4]) != 0)
eta.y =
static_cast<double>(cl.data[4]) / (cl.data[1] + cl.data[4]);
eta.c = cBottomRight;
break;
case cTopLeft:
if ((cl.data[7] + cl.data[4]) != 0)
eta.x =
static_cast<double>(cl.data[4]) / (cl.data[3] + cl.data[4]);
if ((cl.data[7] + cl.data[4]) != 0)
eta.y =
static_cast<double>(cl.data[7]) / (cl.data[7] + cl.data[4]);
eta.c = cTopLeft;
break;
case cTopRight:
if ((cl.data[5] + cl.data[4]) != 0)
eta.x =
static_cast<double>(cl.data[5]) / (cl.data[5] + cl.data[4]);
if ((cl.data[7] + cl.data[4]) != 0)
eta.y =
static_cast<double>(cl.data[7]) / (cl.data[7] + cl.data[4]);
eta.c = cTopRight;
break;
// no default to allow compiler to warn about missing cases
}
return eta;
}
int analyze_cluster(Cluster3x3 &cl, int32_t *t2, int32_t *t3, char *quad,
double *eta2x, double *eta2y, double *eta3x,
double *eta3y) {
return analyze_data(cl.data, t2, t3, quad, eta2x, eta2y, eta3x, eta3y);
}
int analyze_data(int32_t *data, int32_t *t2, int32_t *t3, char *quad,
double *eta2x, double *eta2y, double *eta3x, double *eta3y) {
int ok = 1;
int32_t tot2[4];
int32_t t2max = 0;
char c = 0;
int32_t val, tot3;
tot3 = 0;
for (int i = 0; i < 4; i++)
tot2[i] = 0;
for (int ix = 0; ix < 3; ix++) {
for (int iy = 0; iy < 3; iy++) {
val = data[iy * 3 + ix];
// printf ("%d ",data[iy * 3 + ix]);
tot3 += val;
if (ix <= 1 && iy <= 1)
tot2[cBottomLeft] += val;
if (ix >= 1 && iy <= 1)
tot2[cBottomRight] += val;
if (ix <= 1 && iy >= 1)
tot2[cTopLeft] += val;
if (ix >= 1 && iy >= 1)
tot2[cTopRight] += val;
}
// printf ("\n");
}
// printf ("\n");
if (t2 || quad) {
t2max = tot2[0];
c = cBottomLeft;
for (int i = 1; i < 4; i++) {
if (tot2[i] > t2max) {
t2max = tot2[i];
c = i;
}
}
// printf("*** %d %d %d %d --
// %d\n",tot2[0],tot2[1],tot2[2],tot2[3],t2max);
if (quad)
*quad = c;
if (t2)
*t2 = t2max;
}
if (t3)
*t3 = tot3;
if (eta2x || eta2y) {
if (eta2x)
*eta2x = 0;
if (eta2y)
*eta2y = 0;
switch (c) {
case cBottomLeft:
if (eta2x && (data[3] + data[4]) != 0)
*eta2x = static_cast<double>(data[4]) / (data[3] + data[4]);
if (eta2y && (data[1] + data[4]) != 0)
*eta2y = static_cast<double>(data[4]) / (data[1] + data[4]);
break;
case cBottomRight:
if (eta2x && (data[2] + data[5]) != 0)
*eta2x = static_cast<double>(data[5]) / (data[4] + data[5]);
if (eta2y && (data[1] + data[4]) != 0)
*eta2y = static_cast<double>(data[4]) / (data[1] + data[4]);
break;
case cTopLeft:
if (eta2x && (data[7] + data[4]) != 0)
*eta2x = static_cast<double>(data[4]) / (data[3] + data[4]);
if (eta2y && (data[7] + data[4]) != 0)
*eta2y = static_cast<double>(data[7]) / (data[7] + data[4]);
break;
case cTopRight:
if (eta2x && t2max != 0)
*eta2x = static_cast<double>(data[5]) / (data[5] + data[4]);
if (eta2y && t2max != 0)
*eta2y = static_cast<double>(data[7]) / (data[7] + data[4]);
break;
default:;
}
}
if (eta3x || eta3y) {
if (eta3x && (data[3] + data[4] + data[5]) != 0)
*eta3x = static_cast<double>(-data[3] + data[3 + 2]) /
(data[3] + data[4] + data[5]);
if (eta3y && (data[1] + data[4] + data[7]) != 0)
*eta3y = static_cast<double>(-data[1] + data[2 * 3 + 1]) /
(data[1] + data[4] + data[7]);
}
return ok;
}
} // namespace aare

src/ClusterFile.test.cpp (new file, 75 lines)

@ -0,0 +1,75 @@
#include "aare/ClusterFile.hpp"
#include "test_config.hpp"
#include "aare/defs.hpp"
#include <catch2/catch_test_macros.hpp>
#include <filesystem>
using aare::Cluster;
using aare::ClusterFile;
TEST_CASE("Read one frame from a a cluster file", "[.integration]") {
// We know that the frame has 97 clusters
auto fpath =
test_data_path() / "clusters" / "single_frame_97_clustrers.clust";
REQUIRE(std::filesystem::exists(fpath));
ClusterFile<Cluster<int32_t, 3, 3>> f(fpath);
auto clusters = f.read_frame();
REQUIRE(clusters.size() == 97);
REQUIRE(clusters.frame_number() == 135);
}
TEST_CASE("Read one frame using ROI", "[.integration]") {
// We know that the frame has 97 clusters
auto fpath =
test_data_path() / "clusters" / "single_frame_97_clustrers.clust";
REQUIRE(std::filesystem::exists(fpath));
ClusterFile<Cluster<int32_t, 3, 3>> f(fpath);
aare::ROI roi;
roi.xmin = 0;
roi.xmax = 50;
roi.ymin = 200;
roi.ymax = 249;
f.set_roi(roi);
auto clusters = f.read_frame();
REQUIRE(clusters.size() == 49);
REQUIRE(clusters.frame_number() == 135);
// Check that all clusters are within the ROI
for (size_t i = 0; i < clusters.size(); i++) {
auto c = clusters.at(i);
REQUIRE(c.x >= roi.xmin);
REQUIRE(c.x <= roi.xmax);
REQUIRE(c.y >= roi.ymin);
REQUIRE(c.y <= roi.ymax);
}
}
TEST_CASE("Read clusters from single frame file", "[.integration]") {
auto fpath =
test_data_path() / "clusters" / "single_frame_97_clustrers.clust";
REQUIRE(std::filesystem::exists(fpath));
SECTION("Read fewer clusters than available") {
ClusterFile<Cluster<int32_t, 3, 3>> f(fpath);
auto clusters = f.read_clusters(50);
REQUIRE(clusters.size() == 50);
REQUIRE(clusters.frame_number() == 135);
}
SECTION("Read more clusters than available") {
ClusterFile<Cluster<int32_t, 3, 3>> f(fpath);
// 100 is the maximum number of clusters read
auto clusters = f.read_clusters(100);
REQUIRE(clusters.size() == 97);
REQUIRE(clusters.frame_number() == 135);
}
SECTION("Read all clusters") {
ClusterFile<Cluster<int32_t, 3, 3>> f(fpath);
auto clusters = f.read_clusters(97);
REQUIRE(clusters.size() == 97);
REQUIRE(clusters.frame_number() == 135);
}
}


@ -1,19 +1,18 @@
#include "aare/ClusterFinder.hpp"
#include "aare/Pedestal.hpp"
#include <catch2/catch_test_macros.hpp>
#include <catch2/matchers/catch_matchers_floating_point.hpp>
#include <chrono>
#include <random>

using namespace aare;

// TODO! Find a way to test the cluster finder

// class ClusterFinderUnitTest : public ClusterFinder {
//   public:
//     ClusterFinderUnitTest(int cluster_sizeX, int cluster_sizeY, double nSigma
//     = 5.0, double threshold = 0.0)
//         : ClusterFinder(cluster_sizeX, cluster_sizeY, nSigma, threshold) {}
//     double get_c2() { return c2; }
//     double get_c3() { return c3; }
@ -37,8 +36,8 @@ using namespace aare;
//     REQUIRE_THAT(cf.get_c3(), Catch::Matchers::WithinRel(c3, 1e-9));
// }

TEST_CASE("Construct a cluster finder") {
    ClusterFinder clusterFinder({400, 400});
    // REQUIRE(clusterFinder.get_cluster_sizeX() == 3);
    // REQUIRE(clusterFinder.get_cluster_sizeY() == 3);
    // REQUIRE(clusterFinder.get_threshold() == 1);
@ -49,16 +48,17 @@ TEST_CASE("Construct a cluster finder"){
//     aare::Pedestal pedestal(10, 10, 5);
//     NDArray<double, 2> frame({10, 10});
//     frame = 0;
//     ClusterFinder clusterFinder(3, 3, 1, 1); // 3x3 cluster, 1 nSigma, 1
//     threshold

//     auto clusters =
//     clusterFinder.find_clusters_without_threshold(frame.span(), pedestal);

//     REQUIRE(clusters.size() == 0);

//     frame(5, 5) = 10;
//     clusters = clusterFinder.find_clusters_without_threshold(frame.span(),
//     pedestal); REQUIRE(clusters.size() == 1); REQUIRE(clusters[0].x == 5);
//     REQUIRE(clusters[0].y == 5);
//     for (int i = 0; i < 3; i++) {
//         for (int j = 0; j < 3; j++) {


@ -1,21 +1,17 @@
#include "aare/ClusterVector.hpp"
#include <cstdint>

#include <catch2/catch_all.hpp>
#include <catch2/catch_test_macros.hpp>
#include <catch2/matchers/catch_matchers_floating_point.hpp>

using aare::Cluster;
using aare::ClusterVector;

TEST_CASE("ClusterVector 2x2 int32_t capacity 4, push back then read",
          "[.ClusterVector]") {

    ClusterVector<Cluster<int32_t, 2, 2>> cv(4);
    REQUIRE(cv.capacity() == 4);
    REQUIRE(cv.size() == 0);
    REQUIRE(cv.cluster_size_x() == 2);
@ -23,112 +19,102 @@ TEST_CASE("ClusterVector 2x2 int32_t capacity 4, push back then read") {
    // int16_t, int16_t, 2x2 int32_t = 20 bytes
    REQUIRE(cv.item_size() == 20);

    // Create a cluster and push back into the vector
    Cluster<int32_t, 2, 2> c1 = {1, 2, {3, 4, 5, 6}};
    cv.push_back(c1);
    REQUIRE(cv.size() == 1);
    REQUIRE(cv.capacity() == 4);

    auto c2 = cv.at(0);

    // Check that the data is the same
    REQUIRE(c1.x == c2.x);
    REQUIRE(c1.y == c2.y);
    for (size_t i = 0; i < 4; i++) {
        REQUIRE(c1.data[i] == c2.data[i]);
    }
}

TEST_CASE("Summing 3x1 clusters of int64", "[.ClusterVector]") {
    ClusterVector<Cluster<int32_t, 3, 1>> cv(2);
    REQUIRE(cv.capacity() == 2);
    REQUIRE(cv.size() == 0);
    REQUIRE(cv.cluster_size_x() == 3);
    REQUIRE(cv.cluster_size_y() == 1);

    // Create a cluster and push back into the vector
    Cluster<int32_t, 3, 1> c1 = {1, 2, {3, 4, 5}};
    cv.push_back(c1);
    REQUIRE(cv.capacity() == 2);
    REQUIRE(cv.size() == 1);

    Cluster<int32_t, 3, 1> c2 = {6, 7, {8, 9, 10}};
    cv.push_back(c2);
    REQUIRE(cv.capacity() == 2);
    REQUIRE(cv.size() == 2);

    Cluster<int32_t, 3, 1> c3 = {11, 12, {13, 14, 15}};
    cv.push_back(c3);
    REQUIRE(cv.capacity() == 4);
    REQUIRE(cv.size() == 3);

    /*
    auto sums = cv.sum();
    REQUIRE(sums.size() == 3);
    REQUIRE(sums[0] == 12);
    REQUIRE(sums[1] == 27);
    REQUIRE(sums[2] == 42);
    */
}

TEST_CASE("Storing floats", "[.ClusterVector]") {
    ClusterVector<Cluster<float, 2, 4>> cv(10);
    REQUIRE(cv.capacity() == 10);
    REQUIRE(cv.size() == 0);
    REQUIRE(cv.cluster_size_x() == 2);
    REQUIRE(cv.cluster_size_y() == 4);

    // Create a cluster and push back into the vector
    Cluster<float, 2, 4> c1 = {1, 2, {3.0, 4.0, 5.0, 6.0, 3.0, 4.0, 5.0, 6.0}};
    cv.push_back(c1);
    REQUIRE(cv.capacity() == 10);
    REQUIRE(cv.size() == 1);

    Cluster<float, 2, 4> c2 = {
        6, 7, {8.0, 9.0, 10.0, 11.0, 8.0, 9.0, 10.0, 11.0}};
    cv.push_back(c2);
    REQUIRE(cv.capacity() == 10);
    REQUIRE(cv.size() == 2);

    /*
    auto sums = cv.sum();
    REQUIRE(sums.size() == 2);
    REQUIRE_THAT(sums[0], Catch::Matchers::WithinAbs(36.0, 1e-6));
    REQUIRE_THAT(sums[1], Catch::Matchers::WithinAbs(76.0, 1e-6));
    */
}

TEST_CASE("Push back more than initial capacity", "[.ClusterVector]") {

    ClusterVector<Cluster<int32_t, 2, 2>> cv(2);
    auto initial_data = cv.data();
    Cluster<int32_t, 2, 2> c1 = {1, 2, {3, 4, 5, 6}};
    cv.push_back(c1);
    REQUIRE(cv.size() == 1);
    REQUIRE(cv.capacity() == 2);

    Cluster<int32_t, 2, 2> c2 = {6, 7, {8, 9, 10, 11}};
    cv.push_back(c2);
    REQUIRE(cv.size() == 2);
    REQUIRE(cv.capacity() == 2);

    Cluster<int32_t, 2, 2> c3 = {11, 12, {13, 14, 15, 16}};
    cv.push_back(c3);
    REQUIRE(cv.size() == 3);
    REQUIRE(cv.capacity() == 4);

    Cluster<int32_t, 2, 2> *ptr =
        reinterpret_cast<Cluster<int32_t, 2, 2> *>(cv.data());
    REQUIRE(ptr[0].x == 1);
    REQUIRE(ptr[0].y == 2);
    REQUIRE(ptr[1].x == 6);
@ -136,29 +122,31 @@ TEST_CASE("Push back more than initial capacity"){
    REQUIRE(ptr[2].x == 11);
    REQUIRE(ptr[2].y == 12);

    // We should have allocated a new buffer, since we outgrew the initial
    // capacity
    REQUIRE(initial_data != cv.data());
}

TEST_CASE("Concatenate two cluster vectors where the first has enough capacity",
          "[.ClusterVector]") {
    ClusterVector<Cluster<int32_t, 2, 2>> cv1(12);
    Cluster<int32_t, 2, 2> c1 = {1, 2, {3, 4, 5, 6}};
    cv1.push_back(c1);
    Cluster<int32_t, 2, 2> c2 = {6, 7, {8, 9, 10, 11}};
    cv1.push_back(c2);

    ClusterVector<Cluster<int32_t, 2, 2>> cv2(2);
    Cluster<int32_t, 2, 2> c3 = {11, 12, {13, 14, 15, 16}};
    cv2.push_back(c3);
    Cluster<int32_t, 2, 2> c4 = {16, 17, {18, 19, 20, 21}};
    cv2.push_back(c4);

    cv1 += cv2;
    REQUIRE(cv1.size() == 4);
    REQUIRE(cv1.capacity() == 12);

    Cluster<int32_t, 2, 2> *ptr =
        reinterpret_cast<Cluster<int32_t, 2, 2> *>(cv1.data());
    REQUIRE(ptr[0].x == 1);
    REQUIRE(ptr[0].y == 2);
    REQUIRE(ptr[1].x == 6);
@ -169,24 +157,26 @@ TEST_CASE("Concatenate two cluster vectors where the first has enough capacity")
    REQUIRE(ptr[3].y == 17);
}

TEST_CASE("Concatenate two cluster vectors where we need to allocate",
          "[.ClusterVector]") {
    ClusterVector<Cluster<int32_t, 2, 2>> cv1(2);
    Cluster<int32_t, 2, 2> c1 = {1, 2, {3, 4, 5, 6}};
    cv1.push_back(c1);
    Cluster<int32_t, 2, 2> c2 = {6, 7, {8, 9, 10, 11}};
    cv1.push_back(c2);

    ClusterVector<Cluster<int32_t, 2, 2>> cv2(2);
    Cluster<int32_t, 2, 2> c3 = {11, 12, {13, 14, 15, 16}};
    cv2.push_back(c3);
    Cluster<int32_t, 2, 2> c4 = {16, 17, {18, 19, 20, 21}};
    cv2.push_back(c4);

    cv1 += cv2;
    REQUIRE(cv1.size() == 4);
    REQUIRE(cv1.capacity() == 4);

    Cluster<int32_t, 2, 2> *ptr =
        reinterpret_cast<Cluster<int32_t, 2, 2> *>(cv1.data());
    REQUIRE(ptr[0].x == 1);
    REQUIRE(ptr[0].y == 2);
    REQUIRE(ptr[1].x == 6);
@ -196,3 +186,48 @@ TEST_CASE("Concatenate two cluster vectors where we need to allocate"){
    REQUIRE(ptr[3].x == 16);
    REQUIRE(ptr[3].y == 17);
}
struct ClusterTestData {
uint8_t ClusterSizeX;
uint8_t ClusterSizeY;
std::vector<int64_t> index_map_x;
std::vector<int64_t> index_map_y;
};
TEST_CASE("Gain Map Calculation Index Map", "[.ClusterVector][.gain_map]") {
auto clustertestdata = GENERATE(
ClusterTestData{3,
3,
{-1, 0, 1, -1, 0, 1, -1, 0, 1},
{-1, -1, -1, 0, 0, 0, 1, 1, 1}},
ClusterTestData{
4,
4,
{-2, -1, 0, 1, -2, -1, 0, 1, -2, -1, 0, 1, -2, -1, 0, 1},
{-2, -2, -2, -2, -1, -1, -1, -1, 0, 0, 0, 0, 1, 1, 1, 1}},
ClusterTestData{2, 2, {-1, 0, -1, 0}, {-1, -1, 0, 0}},
ClusterTestData{5,
5,
{-2, -1, 0, 1, 2, -2, -1, 0, 1, 2, -2, -1, 0,
1, 2, -2, -1, 0, 1, 2, -2, -1, 0, 1, 2},
{-2, -2, -2, -2, -2, -1, -1, -1, -1, -1, 0, 0, 0,
0, 0, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2}});
uint8_t ClusterSizeX = clustertestdata.ClusterSizeX;
uint8_t ClusterSizeY = clustertestdata.ClusterSizeY;
std::vector<int64_t> index_map_x(ClusterSizeX * ClusterSizeY);
std::vector<int64_t> index_map_y(ClusterSizeX * ClusterSizeY);
int64_t index_cluster_center_x = ClusterSizeX / 2;
int64_t index_cluster_center_y = ClusterSizeY / 2;
for (size_t j = 0; j < ClusterSizeX * ClusterSizeY; j++) {
index_map_x[j] = j % ClusterSizeX - index_cluster_center_x;
index_map_y[j] = j / ClusterSizeX - index_cluster_center_y;
}
CHECK(index_map_x == clustertestdata.index_map_x);
CHECK(index_map_y == clustertestdata.index_map_y);
}
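For orientation, the loop above maps pixel j of an N_x by N_y cluster to an offset relative to the centre pixel,

$$ i_x(j) = (j \bmod N_x) - \left\lfloor \tfrac{N_x}{2} \right\rfloor, \qquad i_y(j) = \left\lfloor \tfrac{j}{N_x} \right\rfloor - \left\lfloor \tfrac{N_y}{2} \right\rfloor, $$

so a 3x3 cluster covers offsets -1..1 in both directions, while even sizes are asymmetric (a 2x2 cluster covers -1..0).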


@@ -70,7 +70,7 @@ uint8_t Dtype::bitdepth() const {
 /**
  * @brief Get the number of bytes of the data type
  */
-size_t Dtype::bytes() const { return bitdepth() / 8; }
+size_t Dtype::bytes() const { return bitdepth() / bits_per_byte; }

 /**
  * @brief Construct a DType object from a TypeIndex


@@ -73,7 +73,7 @@ size_t File::tell() const { return file_impl->tell(); }
 size_t File::rows() const { return file_impl->rows(); }
 size_t File::cols() const { return file_impl->cols(); }
 size_t File::bitdepth() const { return file_impl->bitdepth(); }
-size_t File::bytes_per_pixel() const { return file_impl->bitdepth() / 8; }
+size_t File::bytes_per_pixel() const { return file_impl->bitdepth() / bits_per_byte; }
 DetectorType File::detector_type() const { return file_impl->detector_type(); }


@@ -1,11 +1,13 @@
 #include "aare/Fit.hpp"
 #include "aare/utils/task.hpp"
+#include "aare/utils/par.hpp"

 #include <lmcurve2.h>
 #include <lmfit.hpp>
 #include <thread>
+#include <array>

 namespace aare {

 namespace func {
@@ -35,33 +37,11 @@ NDArray<double, 1> pol1(NDView<double, 1> x, NDView<double, 1> par) {
 } // namespace func

 NDArray<double, 1> fit_gaus(NDView<double, 1> x, NDView<double, 1> y) {
-    NDArray<double, 1> result({3}, 0);
-    lm_control_struct control = lm_control_double;
-
-    // Estimate the initial parameters for the fit
-    std::vector<double> start_par{0, 0, 0};
-    auto e = std::max_element(y.begin(), y.end());
-    auto idx = std::distance(y.begin(), e);
-
-    start_par[0] = *e; // For amplitude we use the maximum value
-    start_par[1] =
-        x[idx]; // For the mean we use the x value of the maximum value
-
-    // For sigma we estimate the fwhm and divide by 2.35
-    // assuming equally spaced x values
-    auto delta = x[1] - x[0];
-    start_par[2] =
-        std::count_if(y.begin(), y.end(),
-                      [e, delta](double val) { return val > *e / 2; }) *
-        delta / 2.35;
-
-    lmfit::result_t res(start_par);
-    lmcurve(res.par.size(), res.par.data(), x.size(), x.data(), y.data(),
-            aare::func::gaus, &control, &res.status);
-
-    result(0) = res.par[0];
-    result(1) = res.par[1];
-    result(2) = res.par[2];
+    NDArray<double, 1> result = gaus_init_par(x, y);
+    lm_status_struct status;
+
+    lmcurve(result.size(), result.data(), x.size(), x.data(), y.data(),
+            aare::func::gaus, &lm_control_double, &status);

     return result;
 }
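For reference, the start values produced by gaus_init_par (the same estimate the removed inline code computed) are

$$ A_0 = \max_i y_i, \qquad \mu_0 = x_{\arg\max_i y_i}, \qquad \sigma_0 \approx \frac{\mathrm{FWHM}}{2.35} = \frac{\Delta x \cdot \#\{\, i : y_i > A_0/2 \,\}}{2.35}, $$

i.e. amplitude from the maximum, mean from the position of the maximum, and sigma from the full width at half maximum, assuming equally spaced x values as noted in the original comments.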
@@ -81,65 +61,17 @@ NDArray<double, 3> fit_gaus(NDView<double, 1> x, NDView<double, 3> y,
             }
         }
     };
-    auto tasks = split_task(0, y.shape(0), n_threads);
-    std::vector<std::thread> threads;
-    for (auto &task : tasks) {
-        threads.push_back(std::thread(process, task.first, task.second));
-    }
-    for (auto &thread : threads) {
-        thread.join();
-    }
+    auto tasks = split_task(0, y.shape(0), n_threads);
+    RunInParallel(process, tasks);

     return result;
 }

-void fit_gaus(NDView<double, 1> x, NDView<double, 3> y, NDView<double, 3> y_err,
-              NDView<double, 3> par_out, NDView<double, 3> par_err_out,
-              int n_threads) {
-    auto process = [&](ssize_t first_row, ssize_t last_row) {
-        for (ssize_t row = first_row; row < last_row; row++) {
-            for (ssize_t col = 0; col < y.shape(1); col++) {
-                NDView<double, 1> y_view(&y(row, col, 0), {y.shape(2)});
-                NDView<double, 1> y_err_view(&y_err(row, col, 0),
-                                             {y_err.shape(2)});
-                NDView<double, 1> par_out_view(&par_out(row, col, 0),
-                                               {par_out.shape(2)});
-                NDView<double, 1> par_err_out_view(&par_err_out(row, col, 0),
-                                                   {par_err_out.shape(2)});
-                fit_gaus(x, y_view, y_err_view, par_out_view, par_err_out_view);
-            }
-        }
-    };
-    auto tasks = split_task(0, y.shape(0), n_threads);
-    std::vector<std::thread> threads;
-    for (auto &task : tasks) {
-        threads.push_back(std::thread(process, task.first, task.second));
-    }
-    for (auto &thread : threads) {
-        thread.join();
-    }
-}
-
-void fit_gaus(NDView<double, 1> x, NDView<double, 1> y, NDView<double, 1> y_err,
-              NDView<double, 1> par_out, NDView<double, 1> par_err_out) {
-    // Check that we have the correct sizes
-    if (y.size() != x.size() || y.size() != y_err.size() ||
-        par_out.size() != 3 || par_err_out.size() != 3) {
-        throw std::runtime_error("Data, x, data_err must have the same size "
-                                 "and par_out, par_err_out must have size 3");
-    }
-
-    lm_control_struct control = lm_control_double;
-
-    // Estimate the initial parameters for the fit
-    std::vector<double> start_par{0, 0, 0};
-    std::vector<double> start_par_err{0, 0, 0};
-    std::vector<double> start_cov{0, 0, 0, 0, 0, 0, 0, 0, 0};
-
+std::array<double, 3> gaus_init_par(const NDView<double, 1> x, const NDView<double, 1> y) {
+    std::array<double, 3> start_par{0, 0, 0};
     auto e = std::max_element(y.begin(), y.end());
     auto idx = std::distance(y.begin(), e);
     start_par[0] = *e; // For amplitude we use the maximum value
     start_par[1] =
         x[idx]; // For the mean we use the x value of the maximum value
@@ -152,66 +84,83 @@ void fit_gaus(NDView<double, 1> x, NDView<double, 1> y, NDView<double, 1> y_err,
                       [e, delta](double val) { return val > *e / 2; }) *
         delta / 2.35;

-    lmfit::result_t res(start_par);
-    lmfit::result_t res_err(start_par_err);
-    lmfit::result_t cov(start_cov);
-
-    // TODO can we make lmcurve write the result directly where is should be?
-    lmcurve2(res.par.size(), res.par.data(), res_err.par.data(), cov.par.data(),
-             x.size(), x.data(), y.data(), y_err.data(), aare::func::gaus,
-             &control, &res.status);
-
-    par_out(0) = res.par[0];
-    par_out(1) = res.par[1];
-    par_out(2) = res.par[2];
-    par_err_out(0) = res_err.par[0];
-    par_err_out(1) = res_err.par[1];
-    par_err_out(2) = res_err.par[2];
+    return start_par;
 }

-void fit_pol1(NDView<double, 1> x, NDView<double, 1> y, NDView<double, 1> y_err,
-              NDView<double, 1> par_out, NDView<double, 1> par_err_out) {
+std::array<double, 2> pol1_init_par(const NDView<double, 1> x, const NDView<double, 1> y){
+    // Estimate the initial parameters for the fit
+    std::array<double, 2> start_par{0, 0};
+    auto y2 = std::max_element(y.begin(), y.end());
+    auto x2 = x[std::distance(y.begin(), y2)];
+    auto y1 = std::min_element(y.begin(), y.end());
+    auto x1 = x[std::distance(y.begin(), y1)];
+
+    start_par[0] =
+        (*y2 - *y1) / (x2 - x1); // For amplitude we use the maximum value
+    start_par[1] =
+        *y1 - ((*y2 - *y1) / (x2 - x1)) *
+                  x1; // For the mean we use the x value of the maximum value
+    return start_par;
+}
+
+void fit_gaus(NDView<double, 1> x, NDView<double, 1> y, NDView<double, 1> y_err,
+              NDView<double, 1> par_out, NDView<double, 1> par_err_out,
+              double &chi2) {
     // Check that we have the correct sizes
     if (y.size() != x.size() || y.size() != y_err.size() ||
-        par_out.size() != 2 || par_err_out.size() != 2) {
+        par_out.size() != 3 || par_err_out.size() != 3) {
         throw std::runtime_error("Data, x, data_err must have the same size "
-                                 "and par_out, par_err_out must have size 2");
+                                 "and par_out, par_err_out must have size 3");
     }

-    lm_control_struct control = lm_control_double;
-
-    // Estimate the initial parameters for the fit
-    std::vector<double> start_par{0, 0};
-    std::vector<double> start_par_err{0, 0};
-    std::vector<double> start_cov{0, 0, 0, 0};
-
-    auto y2 = std::max_element(y.begin(), y.end());
-    auto x2 = x[std::distance(y.begin(), y2)];
-    auto y1 = std::min_element(y.begin(), y.end());
-    auto x1 = x[std::distance(y.begin(), y1)];
-
-    start_par[0] =
-        (*y2 - *y1) / (x2 - x1); // For amplitude we use the maximum value
-    start_par[1] =
-        *y1 - ((*y2 - *y1) / (x2 - x1)) *
-                  x1; // For the mean we use the x value of the maximum value
-
-    lmfit::result_t res(start_par);
-    lmfit::result_t res_err(start_par_err);
-    lmfit::result_t cov(start_cov);
-
-    lmcurve2(res.par.size(), res.par.data(), res_err.par.data(), cov.par.data(),
-             x.size(), x.data(), y.data(), y_err.data(), aare::func::pol1,
-             &control, &res.status);
-
-    par_out(0) = res.par[0];
-    par_out(1) = res.par[1];
-    par_err_out(0) = res_err.par[0];
-    par_err_out(1) = res_err.par[1];
+    // /* Collection of output parameters for status info. */
+    // typedef struct {
+    //     double fnorm;  /* norm of the residue vector fvec. */
+    //     int nfev;      /* actual number of iterations. */
+    //     int outcome;   /* Status indicator. Nonnegative values are used as index
+    //                       for the message text lm_infmsg, set in lmmin.c. */
+    //     int userbreak; /* Set when function evaluation requests termination. */
+    // } lm_status_struct;
+
+    lm_status_struct status;
+    par_out = gaus_init_par(x, y);
+    std::array<double, 9> cov{0, 0, 0, 0, 0, 0, 0, 0, 0};
+
+    // void lmcurve2( const int n_par, double *par, double *parerr, double *covar, const int m_dat, const double *t, const double *y, const double *dy, double (*f)( const double ti, const double *par ), const lm_control_struct *control, lm_status_struct *status);
+    // n_par   - Number of free variables. Length of parameter vector par.
+    // par     - Parameter vector. On input, it must contain a reasonable guess. On output, it contains the solution found to minimize ||r||.
+    // parerr  - Parameter uncertainties vector. Array of length n_par or NULL. On output, unless it or covar is NULL, it contains the weighted parameter uncertainties for the found parameters.
+    // covar   - Covariance matrix. Array of length n_par * n_par or NULL. On output, unless it is NULL, it contains the covariance matrix.
+    // m_dat   - Number of data points. Length of vectors t, y, dy. Must statisfy n_par <= m_dat.
+    // t       - Array of length m_dat. Contains the abcissae (time, or "x") for which function f will be evaluated.
+    // y       - Array of length m_dat. Contains the ordinate values that shall be fitted.
+    // dy      - Array of length m_dat. Contains the standard deviations of the values y.
+    // f       - A user-supplied parametric function f(ti;par).
+    // control - Parameter collection for tuning the fit procedure. In most cases, the default &lm_control_double is adequate. If f is only computed with single-precision accuracy, &lm_control_float should be used. Parameters are explained in lmmin2(3).
+    // status  - A record used to return information about the minimization process: For details, see lmmin2(3).
+    lmcurve2(par_out.size(), par_out.data(), par_err_out.data(), cov.data(),
+             x.size(), x.data(), y.data(), y_err.data(), aare::func::gaus,
+             &lm_control_double, &status);
+
+    // Calculate chi2
+    chi2 = 0;
+    for (size_t i = 0; i < y.size(); i++) {
+        chi2 += std::pow((y(i) - func::gaus(x(i), par_out.data())) / y_err(i), 2);
+    }
 }

-void fit_pol1(NDView<double, 1> x, NDView<double, 3> y, NDView<double, 3> y_err,
-              NDView<double, 3> par_out, NDView<double, 3> par_err_out,
+void fit_gaus(NDView<double, 1> x, NDView<double, 3> y, NDView<double, 3> y_err,
+              NDView<double, 3> par_out, NDView<double, 3> par_err_out, NDView<double, 2> chi2_out,
               int n_threads) {
     auto process = [&](ssize_t first_row, ssize_t last_row) {
@@ -224,21 +173,69 @@ void fit_pol1(NDView<double, 1> x, NDView<double, 3> y, NDView<double, 3> y_err,
                                                {par_out.shape(2)});
                 NDView<double, 1> par_err_out_view(&par_err_out(row, col, 0),
                                                    {par_err_out.shape(2)});
-                fit_pol1(x, y_view, y_err_view, par_out_view, par_err_out_view);
+                fit_gaus(x, y_view, y_err_view, par_out_view, par_err_out_view,
+                         chi2_out(row, col));
             }
         }
     };
     auto tasks = split_task(0, y.shape(0), n_threads);
-    std::vector<std::thread> threads;
-    for (auto &task : tasks) {
-        threads.push_back(std::thread(process, task.first, task.second));
-    }
-    for (auto &thread : threads) {
-        thread.join();
-    }
-}
+    RunInParallel(process, tasks);
+}
+
+void fit_pol1(NDView<double, 1> x, NDView<double, 1> y, NDView<double, 1> y_err,
+              NDView<double, 1> par_out, NDView<double, 1> par_err_out, double& chi2) {
+    // Check that we have the correct sizes
+    if (y.size() != x.size() || y.size() != y_err.size() ||
+        par_out.size() != 2 || par_err_out.size() != 2) {
+        throw std::runtime_error("Data, x, data_err must have the same size "
+                                 "and par_out, par_err_out must have size 2");
+    }
+
+    lm_status_struct status;
+    par_out = pol1_init_par(x, y);
+    std::array<double, 4> cov{0, 0, 0, 0};
+
+    lmcurve2(par_out.size(), par_out.data(), par_err_out.data(), cov.data(),
+             x.size(), x.data(), y.data(), y_err.data(), aare::func::pol1,
+             &lm_control_double, &status);
+
+    // Calculate chi2
+    chi2 = 0;
+    for (size_t i = 0; i < y.size(); i++) {
+        chi2 += std::pow((y(i) - func::pol1(x(i), par_out.data())) / y_err(i), 2);
+    }
+}
+
+void fit_pol1(NDView<double, 1> x, NDView<double, 3> y, NDView<double, 3> y_err,
+              NDView<double, 3> par_out, NDView<double, 3> par_err_out, NDView<double, 2> chi2_out,
+              int n_threads) {
+    auto process = [&](ssize_t first_row, ssize_t last_row) {
+        for (ssize_t row = first_row; row < last_row; row++) {
+            for (ssize_t col = 0; col < y.shape(1); col++) {
+                NDView<double, 1> y_view(&y(row, col, 0), {y.shape(2)});
+                NDView<double, 1> y_err_view(&y_err(row, col, 0),
+                                             {y_err.shape(2)});
+                NDView<double, 1> par_out_view(&par_out(row, col, 0),
+                                               {par_out.shape(2)});
+                NDView<double, 1> par_err_out_view(&par_err_out(row, col, 0),
+                                                   {par_err_out.shape(2)});
+                fit_pol1(x, y_view, y_err_view, par_out_view, par_err_out_view, chi2_out(row, col));
+            }
+        }
+    };
+    auto tasks = split_task(0, y.shape(0), n_threads);
+
+    RunInParallel(process, tasks);
+}

 NDArray<double, 1> fit_pol1(NDView<double, 1> x, NDView<double, 1> y) {
     // // Check that we have the correct sizes
     // if (y.size() != x.size() || y.size() != y_err.size() ||
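The chi2 value reported by the new fit_gaus and fit_pol1 overloads is the usual weighted sum of squared residuals computed in the loops above,

$$ \chi^2 = \sum_i \left( \frac{y_i - f(x_i;\,\hat{p})}{\sigma_{y_i}} \right)^2, $$

where f is the fitted model (func::gaus or func::pol1), \hat{p} are the parameters returned by lmcurve2, and \sigma_{y_i} are the uncertainties passed in y_err.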
@@ -246,28 +243,12 @@ NDArray<double, 1> fit_pol1(NDView<double, 1> x, NDView<double, 1> y) {
     //     throw std::runtime_error("Data, x, data_err must have the same size "
     //                              "and par_out, par_err_out must have size 2");
     // }
-    NDArray<double, 1> par({2}, 0);
-    lm_control_struct control = lm_control_double;
-
-    // Estimate the initial parameters for the fit
-    std::vector<double> start_par{0, 0};
-    auto y2 = std::max_element(y.begin(), y.end());
-    auto x2 = x[std::distance(y.begin(), y2)];
-    auto y1 = std::min_element(y.begin(), y.end());
-    auto x1 = x[std::distance(y.begin(), y1)];
-
-    start_par[0] = (*y2 - *y1) / (x2 - x1);
-    start_par[1] = *y1 - ((*y2 - *y1) / (x2 - x1)) * x1;
-
-    lmfit::result_t res(start_par);
-    lmcurve(res.par.size(), res.par.data(), x.size(), x.data(), y.data(),
-            aare::func::pol1, &control, &res.status);
-
-    par(0) = res.par[0];
-    par(1) = res.par[1];
+    NDArray<double, 1> par = pol1_init_par(x, y);
+    lm_status_struct status;
+
+    lmcurve(par.size(), par.data(), x.size(), x.data(), y.data(),
+            aare::func::pol1, &lm_control_double, &status);

     return par;
 }
@@ -287,13 +268,8 @@ NDArray<double, 3> fit_pol1(NDView<double, 1> x, NDView<double, 3> y,
     };

     auto tasks = split_task(0, y.shape(0), n_threads);
-    std::vector<std::thread> threads;
-    for (auto &task : tasks) {
-        threads.push_back(std::thread(process, task.first, task.second));
-    }
-    for (auto &thread : threads) {
-        thread.join();
-    }
+
+    RunInParallel(process, tasks);

     return result;
 }
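RunInParallel comes from the newly included aare/utils/par.hpp and its definition is not part of this diff. A minimal sketch that is consistent with how it is called here, and with the spawn/join loops it replaces, could look like this (an assumption, not the actual implementation):

```cpp
#include <thread>
#include <vector>

// Sketch only: run func(first, last) for every (first, last) task on its own
// std::thread and wait for all of them, mirroring the removed loops above.
template <typename F, typename Tasks>
void RunInParallel(F func, const Tasks &tasks) {
    std::vector<std::thread> threads;
    for (auto &task : tasks)
        threads.emplace_back(func, task.first, task.second);
    for (auto &t : threads)
        t.join();
}
```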

src/Interpolator.cpp (new file, 56 lines)

@@ -0,0 +1,56 @@
#include "aare/Interpolator.hpp"
namespace aare {
Interpolator::Interpolator(NDView<double, 3> etacube, NDView<double, 1> xbins,
NDView<double, 1> ybins, NDView<double, 1> ebins)
: m_ietax(etacube), m_ietay(etacube), m_etabinsx(xbins), m_etabinsy(ybins),
m_energy_bins(ebins) {
if (etacube.shape(0) != xbins.size() || etacube.shape(1) != ybins.size() ||
etacube.shape(2) != ebins.size()) {
throw std::invalid_argument(
"The shape of the etacube does not match the shape of the bins");
}
// Cumulative sum in the x direction
for (ssize_t i = 1; i < m_ietax.shape(0); i++) {
for (ssize_t j = 0; j < m_ietax.shape(1); j++) {
for (ssize_t k = 0; k < m_ietax.shape(2); k++) {
m_ietax(i, j, k) += m_ietax(i - 1, j, k);
}
}
}
// Normalize by the highest row, if norm less than 1 don't do anything
for (ssize_t i = 0; i < m_ietax.shape(0); i++) {
for (ssize_t j = 0; j < m_ietax.shape(1); j++) {
for (ssize_t k = 0; k < m_ietax.shape(2); k++) {
auto val = m_ietax(m_ietax.shape(0) - 1, j, k);
double norm = val < 1 ? 1 : val;
m_ietax(i, j, k) /= norm;
}
}
}
// Cumulative sum in the y direction
for (ssize_t i = 0; i < m_ietay.shape(0); i++) {
for (ssize_t j = 1; j < m_ietay.shape(1); j++) {
for (ssize_t k = 0; k < m_ietay.shape(2); k++) {
m_ietay(i, j, k) += m_ietay(i, j - 1, k);
}
}
}
// Normalize by the highest column, if norm less than 1 don't do anything
for (ssize_t i = 0; i < m_ietay.shape(0); i++) {
for (ssize_t j = 0; j < m_ietay.shape(1); j++) {
for (ssize_t k = 0; k < m_ietay.shape(2); k++) {
auto val = m_ietay(i, m_ietay.shape(1) - 1, k);
double norm = val < 1 ? 1 : val;
m_ietay(i, j, k) /= norm;
}
}
}
}
} // namespace aare
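In effect, for each energy bin k the constructor turns the eta distribution into normalised cumulative distributions along the x and y eta axes, clamping the normalisation at 1 so that nearly empty bins are not blown up:

$$ F_x(i, j, k) = \frac{\sum_{i' \le i} \eta(i', j, k)}{\max\bigl(1,\ \sum_{i'} \eta(i', j, k)\bigr)}, \qquad F_y(i, j, k) = \frac{\sum_{j' \le j} \eta(i, j', k)}{\max\bigl(1,\ \sum_{j'} \eta(i, j', k)\bigr)} $$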


@@ -2,6 +2,7 @@
 #include <array>
 #include <catch2/benchmark/catch_benchmark.hpp>
 #include <catch2/catch_test_macros.hpp>
+#include <numeric>

 using aare::NDArray;
 using aare::NDView;

@@ -34,6 +35,24 @@ TEST_CASE("Construct from an NDView") {
     }
 }
TEST_CASE("3D NDArray from NDView"){
std::vector<int> data(27);
std::iota(data.begin(), data.end(), 0);
NDView<int, 3> view(data.data(), Shape<3>{3, 3, 3});
NDArray<int, 3> image(view);
REQUIRE(image.shape() == view.shape());
REQUIRE(image.size() == view.size());
REQUIRE(image.data() != view.data());
for(int64_t i=0; i<image.shape(0); i++){
for(int64_t j=0; j<image.shape(1); j++){
for(int64_t k=0; k<image.shape(2); k++){
REQUIRE(image(i, j, k) == view(i, j, k));
}
}
}
}
TEST_CASE("1D image") { TEST_CASE("1D image") {
std::array<int64_t, 1> shape{{20}}; std::array<int64_t, 1> shape{{20}};
NDArray<short, 1> img(shape, 3); NDArray<short, 1> img(shape, 3);
@ -380,3 +399,31 @@ TEST_CASE("Elementwise operations on images") {
} }
} }
} }
TEST_CASE("Assign an std::array to a 1D NDArray") {
NDArray<int, 1> a{{5}, 0};
std::array<int, 5> b{1, 2, 3, 4, 5};
a = b;
for (uint32_t i = 0; i < a.size(); ++i) {
REQUIRE(a(i) == b[i]);
}
}
TEST_CASE("Assign an std::array to a 1D NDArray of a different size") {
NDArray<int, 1> a{{3}, 0};
std::array<int, 5> b{1, 2, 3, 4, 5};
a = b;
REQUIRE(a.size() == 5);
for (uint32_t i = 0; i < a.size(); ++i) {
REQUIRE(a(i) == b[i]);
}
}
TEST_CASE("Construct an NDArray from an std::array") {
std::array<int, 5> b{1, 2, 3, 4, 5};
NDArray<int, 1> a(b);
for (uint32_t i = 0; i < a.size(); ++i) {
REQUIRE(a(i) == b[i]);
}
}


@@ -76,8 +76,7 @@ size_t RawFile::n_mod() const { return n_subfile_parts; }

 size_t RawFile::bytes_per_frame() {
-    // return m_rows * m_cols * m_master.bitdepth() / 8;
-    return m_geometry.pixels_x * m_geometry.pixels_y * m_master.bitdepth() / 8;
+    return m_geometry.pixels_x * m_geometry.pixels_y * m_master.bitdepth() / bits_per_byte;
 }

 size_t RawFile::pixels_per_frame() {
     // return m_rows * m_cols;

src/algorithm.test.cpp (new file, 69 lines)

@@ -0,0 +1,69 @@
#include <aare/algorithm.hpp>
#include <catch2/catch_test_macros.hpp>
TEST_CASE("Find the closed index in a 1D array", "[algorithm]") {
aare::NDArray<double, 1> arr({5});
for (size_t i = 0; i < arr.size(); i++) {
arr[i] = i;
}
// arr 0, 1, 2, 3, 4
REQUIRE(aare::nearest_index(arr, 2.3) == 2);
REQUIRE(aare::nearest_index(arr, 2.6) == 3);
REQUIRE(aare::nearest_index(arr, 45.0) == 4);
REQUIRE(aare::nearest_index(arr, 0.0) == 0);
REQUIRE(aare::nearest_index(arr, -1.0) == 0);
}
TEST_CASE("Passing integers to nearest_index works", "[algorithm]") {
aare::NDArray<int, 1> arr({5});
for (size_t i = 0; i < arr.size(); i++) {
arr[i] = i;
}
// arr 0, 1, 2, 3, 4
REQUIRE(aare::nearest_index(arr, 2) == 2);
REQUIRE(aare::nearest_index(arr, 3) == 3);
REQUIRE(aare::nearest_index(arr, 45) == 4);
REQUIRE(aare::nearest_index(arr, 0) == 0);
REQUIRE(aare::nearest_index(arr, -1) == 0);
}
TEST_CASE("nearest_index works with std::vector", "[algorithm]") {
std::vector<double> vec = {0, 1, 2, 3, 4};
REQUIRE(aare::nearest_index(vec, 2.123) == 2);
REQUIRE(aare::nearest_index(vec, 2.66) == 3);
REQUIRE(aare::nearest_index(vec, 4555555.0) == 4);
REQUIRE(aare::nearest_index(vec, 0.0) == 0);
REQUIRE(aare::nearest_index(vec, -10.0) == 0);
}
TEST_CASE("nearest index works with std::array", "[algorithm]") {
std::array<double, 5> arr = {0, 1, 2, 3, 4};
REQUIRE(aare::nearest_index(arr, 2.123) == 2);
REQUIRE(aare::nearest_index(arr, 2.501) == 3);
REQUIRE(aare::nearest_index(arr, 4555555.0) == 4);
REQUIRE(aare::nearest_index(arr, 0.0) == 0);
REQUIRE(aare::nearest_index(arr, -10.0) == 0);
}
TEST_CASE("last smaller", "[algorithm]") {
aare::NDArray<double, 1> arr({5});
for (size_t i = 0; i < arr.size(); i++) {
arr[i] = i;
}
// arr 0, 1, 2, 3, 4
REQUIRE(aare::last_smaller(arr, -10.0) == 0);
REQUIRE(aare::last_smaller(arr, 0.0) == 0);
REQUIRE(aare::last_smaller(arr, 2.3) == 2);
REQUIRE(aare::last_smaller(arr, 253.) == 4);
}
TEST_CASE("returns last bin strictly smaller", "[algorithm]") {
aare::NDArray<double, 1> arr({5});
for (size_t i = 0; i < arr.size(); i++) {
arr[i] = i;
}
// arr 0, 1, 2, 3, 4
REQUIRE(aare::last_smaller(arr, 2.0) == 2);
}
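The new tests pin down the behaviour of aare::nearest_index: it returns the index of the element closest to the requested value, and out-of-range requests are clamped to the first or last element. A reference implementation consistent with these tests (not the actual code in aare/algorithm.hpp) would be:

```cpp
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <cstdlib>
#include <iterator>

// Sketch only: index of the element of c whose value is closest to `value`.
template <typename Container, typename T>
size_t nearest_index_ref(const Container &c, T value) {
    auto it = std::min_element(c.begin(), c.end(), [value](auto a, auto b) {
        return std::abs(a - value) < std::abs(b - value);
    });
    return std::distance(c.begin(), it);
}
```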


@@ -7,6 +7,6 @@ inline auto test_data_path(){
     if(const char* env_p = std::getenv("AARE_TEST_DATA")){
         return std::filesystem::path(env_p);
     }else{
-        throw std::runtime_error("AARE_TEST_DATA_PATH not set");
+        throw std::runtime_error("Path to test data: $AARE_TEST_DATA not set");
     }
 }