Compare commits

141 Commits

Author SHA1 Message Date
Erik Fröjdh
1f2d937d74 test implementation 2025-12-19 14:57:07 +01:00
Erik Fröjdh
c0357e2020 Improvements to NDArray (#258)
All checks were successful
Build on RHEL9 / build (push) Successful in 3m11s
Build on RHEL8 / build (push) Successful in 3m25s
- Removed redundant arr.value(ix,iy...) from NDArray; use arr(ix,iy...)
(see the sketch below)
- Removed Print/Print_some/Print_all from NDArray (operator << still
works)
- Added const* version of .data()
- Comment for documentation
- Some extra tests
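A minimal sketch of the changed access pattern (it assumes only the API
described above and is not taken from the actual test suite):

```cpp
#include "aare/NDArray.hpp"

// Sketch only: exercises the changed element access on an existing array.
void example(aare::NDArray<double, 2> &arr) {
    arr(1, 2) = 5.0;               // element access; arr.value(1, 2) was removed
    const auto &carr = arr;
    const double *p = carr.data(); // new const* overload of .data()
    (void)p;
}
```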
2025-12-19 14:49:41 +01:00
dfb29b719f fixed out of bounds in test (#259)
- fixed test (out of bounds access)
2025-12-19 13:15:49 +01:00
Erik Fröjdh
7f3123d68f Added parsing of exptime and period from master files (#256)
All checks were successful
Build on RHEL9 / build (push) Successful in 3m26s
Build on RHEL8 / build (push) Successful in 3m33s
- New aare::to_string/string_to similar to what we have in
slsDetectorPackage (see the illustrative sketch at the end of this message)
- Added members period and exptime to RawMasterFile
- Parsing exposure time and period for json and raw master file formats
- Parsing of RawMasterFile from string stream to enable test without
files

Comments:

- to_string is at the moment not a public header. We can make it one
later if needed. This gives us full freedom with the API
- FileConfig should probably be deprecated; need to look into it.
Meanwhile, removed the python bindings and string conversion
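As a rough illustration of the kind of string conversion involved, here is a
standalone sketch of parsing a time string such as "10us"; it is not the
aare to_string/string_to implementation and the accepted formats are
assumptions:

```cpp
#include <cassert>
#include <chrono>
#include <cstdint>
#include <string>

// Parse strings like "10us", "2ms" or "1s" into nanoseconds.
// Standalone sketch; the real parsing in aare may differ.
std::chrono::nanoseconds parse_time(const std::string &s) {
    size_t pos = 0;
    double value = std::stod(s, &pos);
    std::string unit = s.substr(pos);
    double factor = 1e9;                 // default: seconds
    if (unit == "ns") factor = 1;
    else if (unit == "us") factor = 1e3;
    else if (unit == "ms") factor = 1e6;
    return std::chrono::nanoseconds(static_cast<int64_t>(value * factor));
}

int main() {
    assert(parse_time("10us") == std::chrono::microseconds(10));
    assert(parse_time("2ms") == std::chrono::milliseconds(2));
}
```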
2025-12-18 17:04:12 +01:00
fb95e518b4 Dev/interpolation documentation (#255)
All checks were successful
Build on RHEL8 / build (push) Successful in 3m5s
Build on RHEL9 / build (push) Successful in 3m21s
- added transform_eta_values for easier debugging and more control for
the user
- updated documentation
2025-12-16 13:06:28 +01:00
Erik Fröjdh
80a2b02345 Dev/decode my302 (#254)
Some checks failed
Build on RHEL8 / build (push) Failing after 0s
Build on RHEL9 / build (push) Failing after 0s
This PR adds support for decoding digital data from the my302 test chip.
- Added BitOffset (strong type)
- Expand 24 bit to 32 bit (see the sketch below)
- Python bindings for decoding my302
- Improved docs
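A standalone sketch of expanding packed 24-bit samples into 32-bit integers
(little-endian packing is assumed; this is not the actual aare decode
routine):

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Expand little-endian packed 24-bit unsigned samples to 32-bit values.
// Standalone sketch, not the aare decode implementation.
std::vector<uint32_t> expand_24_to_32(const uint8_t *src, size_t n_samples) {
    std::vector<uint32_t> out(n_samples);
    for (size_t i = 0; i < n_samples; ++i) {
        const uint8_t *p = src + 3 * i; // each sample occupies 3 bytes
        out[i] = static_cast<uint32_t>(p[0]) |
                 (static_cast<uint32_t>(p[1]) << 8) |
                 (static_cast<uint32_t>(p[2]) << 16);
    }
    return out;
}
```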
2025-12-09 18:27:02 +01:00
e795310b16 fixed tests (#252)
All checks were successful
Build on RHEL8 / build (push) Successful in 3m11s
Build on RHEL9 / build (push) Successful in 3m46s
- fixed failing tests
- removed import of pickle, scipy 
- still requires boost_histogram, pytest_check

Co-authored-by: Erik Fröjdh <erik.frojdh@psi.ch>
2025-11-28 11:28:13 +01:00
1d8b68bf75 Run catch2 tests in CI (#253)
Some checks failed
Build on RHEL9 / build (push) Failing after 0s
Build on RHEL8 / build (push) Successful in 3m28s
- Build and run tests in CI
- Added macOS builds (and tests)
- Renamed workflow to build_with_docs.yml
2025-11-27 08:58:24 +01:00
Erik Fröjdh
cf53922bd3 restrict upload artifact 2025-11-26 20:21:57 +01:00
Erik Fröjdh
13cffb1ea8 WIP 2025-11-26 20:11:38 +01:00
Erik Fröjdh
bea373a112 with file 2025-11-26 20:09:05 +01:00
Erik Fröjdh
a66ce15a6c renamed workflow and added macos-latest 2025-11-26 20:07:32 +01:00
Erik Fröjdh
44fd015cfe added catch2 to dev env 2025-11-26 20:00:03 +01:00
Erik Fröjdh
6fe822d5dd WIP 2025-11-26 19:56:53 +01:00
Erik Fröjdh
8ed874a679 using tests 2025-11-26 19:54:43 +01:00
dd5ed138cf Dev/print filepath in error (#251)
Some checks failed
Build on RHEL8 / build (push) Failing after 0s
Build on RHEL9 / build (push) Successful in 3m15s
2025-11-25 11:25:44 +01:00
8201c5e999 Fix/mythenfilereading (#250)
Some checks failed
Build on RHEL8 / build (push) Failing after 0s
Build on RHEL9 / build (push) Successful in 3m23s
Add counter mask as a member of RawFile
BugFix: temporarily handle -1 for ROI in the mythen file
2025-11-24 12:29:08 +01:00
03af5927ad Release 2025.11.21 (#249)
All checks were successful
Build on RHEL9 / build (push) Successful in 3m19s
Build on RHEL8 / build (push) Successful in 3m27s
- Updated VERSION and script 
- Updated release notes
2025-11-21 16:34:57 +01:00
Erik Fröjdh
452cfcb60f updated release notes 2025-11-21 16:17:04 +01:00
Erik Fröjdh
e8402d9d36 updated version and version script 2025-11-21 16:14:05 +01:00
a9de336817 added license (#247)
- Added LICENSE file
- Added SPDX identifier to source files
2025-11-21 15:11:30 +01:00
6f7cb4ae30 Merge branch 'main' into dev/license 2025-11-21 14:52:54 +01:00
267ca87ab0 Dev/rosenblatttransform (#241)
- added Rosenblatt transform
- added 3x3 eta methods
- interpolation can be used with various eta functions
- added documentation for interpolation and eta calculation
- exposed full eta struct in python
- disabled ClusterFinder for 2x2 clusters
- factory function for ClusterVector

---------

Co-authored-by: Dhanya Thattil <dhanya.thattil@psi.ch>
Co-authored-by: Erik Fröjdh <erik.frojdh@psi.ch>
2025-11-21 14:48:46 +01:00
Erik Fröjdh
5dbb969bcc Merge branch 'dev/license' of github.com:slsdetectorgroup/aare into dev/license 2025-11-21 14:45:12 +01:00
Erik Fröjdh
d58d8ea82f added comment in readme 2025-11-21 14:44:54 +01:00
f61f76ccf7 changed license in update_version.py; added to etc
Some checks failed
Build on RHEL9 / build (push) Failing after 3m26s
Build on RHEL8 / build (push) Failing after 3m36s
2025-11-21 10:29:12 +01:00
Erik Fröjdh
200ae91622 also hpp 2025-11-21 10:14:14 +01:00
Erik Fröjdh
53aed8d8c6 added license 2025-11-20 09:01:28 +01:00
7fb500c44c Dev/expose sum 2x2 to python (#238)
Some checks failed
Build on RHEL8 / build (push) Failing after 3m15s
Build on RHEL9 / build (push) Failing after 3m16s
Saverio requested that max_sum_2x2 expose index information in python
- max_sum_2x2 returns a corner as index (see the sketch below)
- replaced eta corner with a corner enum class
- max_sum_2x2 now returns the index as well in python
- added link to documentation in README

Note: some tests fail in EtaCalculation due to the previous PR about
updating Eta 2x2; will fix in another PR
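A rough sketch of the idea (the corner names follow the release notes; the
data layout and the actual aare signature are assumptions here):

```cpp
#include <array>
#include <cstddef>
#include <utility>

// Assumed corner enum, following the names listed in the release notes.
enum class corner { cBottomLeft = 0, cBottomRight = 1, cTopLeft = 2, cTopRight = 3 };

// Return the largest 2x2 sub-sum of a 3x3 cluster together with its corner.
// Row-major data with row 0 at the bottom is assumed for this sketch.
std::pair<double, corner> max_sum_2x2_sketch(const std::array<double, 9> &d) {
    std::array<double, 4> sums{
        d[0] + d[1] + d[3] + d[4], // bottom left
        d[1] + d[2] + d[4] + d[5], // bottom right
        d[3] + d[4] + d[6] + d[7], // top left
        d[4] + d[5] + d[7] + d[8], // top right
    };
    size_t idx = 0;
    for (size_t i = 1; i < 4; ++i)
        if (sums[i] > sums[idx])
            idx = i;
    return {sums[idx], static_cast<corner>(idx)};
}
```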
2025-10-27 20:04:16 +01:00
8989d2eb4a Merge branch 'main' into dev/expose_sum_2x2_to_python 2025-10-27 19:47:09 +01:00
c8c681faa8 updated release notes 2025-10-27 19:30:43 +01:00
0faaf2bbc7 updated release notes
Some checks failed
Build on RHEL8 / build (push) Failing after 3m9s
Build on RHEL9 / build (push) Failing after 3m16s
2025-10-27 18:45:23 +01:00
Erik Fröjdh
ac83eeff9b added tell and better error checking to cluster file (#239)
Some checks failed
Build on RHEL8 / build (push) Failing after 3m8s
Build on RHEL9 / build (push) Failing after 3m18s
- Check feof and ferror when reading frames
- added tell member function to ClusterFile
2025-10-27 15:46:31 +01:00
df7b9be5a5 added docstrings; wrap struct into tuple
Some checks failed
Build on RHEL8 / build (push) Failing after 3m42s
Build on RHEL9 / build (push) Failing after 3m41s
2025-10-23 19:16:33 +02:00
dbffea15c0 fix: included deleted file 2025-10-23 17:50:17 +02:00
6e38c3259b added documentation link in README 2025-10-23 17:40:55 +02:00
73e8fd31c9 vector class no longer needed 2025-10-23 17:36:29 +02:00
b28abb2668 updated tests 2025-10-23 17:35:16 +02:00
01fa61cf47 index now returns enum type 2025-10-23 17:34:54 +02:00
790dd63ba3 make max_sum_2x2 properly accessible from python 2025-10-23 15:00:52 +02:00
45f506b473 Fix/adapt and test interpolation (#231)
Some checks failed
Build on RHEL8 / build (push) Failing after 3m9s
Build on RHEL9 / build (push) Failing after 3m18s
Adapted eta interpolation:

### Issues with previous interpolation:

#### Eta Calculation:
- the photon hit was previously assumed to be in the bottom left pixel of
the cluster (the photon hit is assumed to be in the bottom right pixel of
the cluster)
- clusters are filled from top left to bottom right (previously assumed:
bottom left to top right)

#### Actual Interpolation:
- photon hits are given in pixel coordinates (the previous interpolation
assumed euclidean coordinates, e.g. a positive distance in the y coordinate
becomes a negative distance in row pixels); see the sketch below
- removed the *2 of the calculated distance

#### General Adaptation:
- max_sum_2x2 returns the subcluster index relative to the cluster center,
e.g. bottomleft, bottomright

#### Added proper test case
- simulated a photon hit with a normal energy distribution
- Note: the test case for the 2x2 cluster fails; we think a uniform photon
hit distribution can't be modeled by a normalized eta distribution for 2x2
clusters
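A tiny standalone sketch of the coordinate-convention point (rows grow
downward, so a positive y displacement maps to a negative row offset; the
values are illustrative only):

```cpp
#include <cstdio>

// Illustrative only: convert a displacement given in "euclidean" detector
// coordinates (y pointing up) into row/column pixel coordinates, where rows
// grow downward.
struct PixelOffset { double drow, dcol; };

PixelOffset to_pixel_coords(double dx, double dy) {
    // columns follow x directly, rows are flipped with respect to y
    return {-dy, dx};
}

int main() {
    PixelOffset o = to_pixel_coords(0.2, 0.3);
    std::printf("drow=%.1f dcol=%.1f\n", o.drow, o.dcol); // drow=-0.3 dcol=0.2
}
```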
2025-10-17 10:44:08 +02:00
6f10afbcdc Merge branch 'main' into fix/adapt_and_test_interpolation 2025-10-17 10:03:26 +02:00
e418986fd2 fix/roi_max (#237)
All checks were successful
Build on RHEL8 / build (push) Successful in 3m3s
Build on RHEL9 / build (push) Successful in 3m13s
roi max should be incremented by 1 for all versions of the file
2025-10-16 16:08:10 +02:00
723c8dd013 add nlohmann_json to support CMake find_package after 3.12 split 2025-10-16 15:30:43 +02:00
351f4626b3 roi max should be incremented by 1 for all versions of the file 2025-10-16 12:26:30 +02:00
516ef88d10 addresses SonarQube comments 2025-10-08 18:19:17 +02:00
5329be816e removed times 2 in calculated photon center distance 2025-10-08 17:01:38 +02:00
72a2604ca5 test for interpolation with simulated normal energy distribution 2025-10-08 16:35:52 +02:00
c78a73ebaf changed default CoordType in Cluster constructor in python bindings to uint16_t 2025-10-07 16:49:06 +02:00
77a9788891 changed eta interpolation to take into account photon center 2025-10-07 16:48:14 +02:00
c0ee17275e Bug/aare file reading (#230)
All checks were successful
Build on RHEL8 / build (push) Successful in 3m10s
Build on RHEL9 / build (push) Successful in 3m12s
MasterFile supports reading the new json file format (backwards
compatible with older versions).
Multiple ROIs are not supported yet.
2025-10-02 10:05:11 +02:00
ad3ef88607 changed default DAC value in ScanParameters 2025-10-01 20:37:40 +02:00
f814b3f4e7 updated release notes 2025-10-01 20:30:25 +02:00
1f46266183 clang-format 2025-10-01 20:25:27 +02:00
d3d9f760b3 updated parse_json to parse new master json file 2025-10-01 20:17:37 +02:00
0891ffb1ee compile with POSITION_INDEPENDENT_CODE=On (#228)
All checks were successful
Build on RHEL9 / build (push) Successful in 3m17s
Build on RHEL8 / build (push) Successful in 3m20s
The python bindings build a shared library and I can't link against
static libraries. Apparently I have to build with
CMAKE_POSITION_INDEPENDENT_CODE=On.
2025-09-30 17:39:43 +02:00
0b74bc25d5 enabled position independent code only for aare_core 2025-09-30 16:29:42 +02:00
3ec40fa809 Merge branch 'main' into fix/cmake_fix_compile_width_position_independent_code
All checks were successful
Build on RHEL8 / build (push) Successful in 3m18s
Build on RHEL9 / build (push) Successful in 3m49s
2025-09-30 10:58:35 +02:00
74280379ce naive implementation of 3x3 and 5x5 reduction (#210)
All checks were successful
Build on RHEL8 / build (push) Successful in 3m10s
Build on RHEL9 / build (push) Successful in 3m16s
- Still quite far from a state where it can be merged
- Reduce 5x5 to 3x3
- Reduce 3x3 to 2x2

Open issues:

- [ ] Can we generalize it? 
- [ ] Which reductions are needed
- [ ] Naming
2025-09-09 09:08:42 +02:00
474c35cc6b Merge branch 'main' into dev/reduce
All checks were successful
Build on RHEL8 / build (push) Successful in 3m16s
Build on RHEL9 / build (push) Successful in 3m35s
2025-09-08 15:39:27 +02:00
e2a97d3c45 General reduce (#223)
Generalized the reduction to 2x2 and 3x3 clusters for clusters of general
size (see the sketch below).
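A rough sketch of what such a generalized reduction can look like (brute
force over all 2x2 windows; the cluster layout here is made up and this is
not the aare implementation):

```cpp
#include <array>
#include <cstddef>

// Assumed minimal cluster layout for illustration only (row-major data).
template <typename T, int Rows, int Cols> struct MiniCluster {
    int x{}, y{};
    std::array<T, Rows * Cols> data{};
};

// Reduce an arbitrary-size cluster to the 2x2 window with the largest sum.
template <typename T, int Rows, int Cols>
MiniCluster<T, 2, 2> reduce_to_2x2_sketch(const MiniCluster<T, Rows, Cols> &c) {
    static_assert(Rows >= 2 && Cols >= 2, "cluster too small");
    int best_r = 0, best_c = 0;
    T best_sum{};
    bool first = true;
    for (int r = 0; r + 1 < Rows; ++r)
        for (int col = 0; col + 1 < Cols; ++col) {
            T s = c.data[r * Cols + col] + c.data[r * Cols + col + 1] +
                  c.data[(r + 1) * Cols + col] + c.data[(r + 1) * Cols + col + 1];
            if (first || s > best_sum) {
                best_sum = s; best_r = r; best_c = col; first = false;
            }
        }
    MiniCluster<T, 2, 2> out;
    out.x = c.x;
    out.y = c.y; // position handling differs in the real code
    out.data = {c.data[best_r * Cols + best_c], c.data[best_r * Cols + best_c + 1],
                c.data[(best_r + 1) * Cols + best_c], c.data[(best_r + 1) * Cols + best_c + 1]};
    return out;
}
```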
2025-09-08 15:22:03 +02:00
bce8e9d5fc Merge branch 'main' into fix/cmake_fix_compile_width_position_independent_code 2025-09-05 14:11:33 +02:00
4c1e276e2c compile with POSITION_INDEPENDENT_CODE=On 2025-09-05 14:02:26 +02:00
12114e7275 added documentation
All checks were successful
Build on RHEL8 / build (push) Successful in 3m10s
Build on RHEL9 / build (push) Successful in 3m12s
2025-09-01 15:29:58 +02:00
7926993bb2 reduction tests for python 2025-09-01 14:15:08 +02:00
ed7fb1f1f9 induce the cluster size of ClusterCollector from ClusterFinderMT - ha… (#225)
All checks were successful
Build on RHEL8 / build (push) Successful in 3m11s
Build on RHEL9 / build (push) Successful in 3m15s
ClusterCollector now induces the cluster size from the ClusterFinderMT
passed in (see the sketch below).
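One way such deduction can work, shown with made-up stand-in types (an
illustrative CTAD sketch, not the actual ClusterCollector constructor):

```cpp
#include <cstdio>

// Illustrative stand-ins for the real aare types.
template <typename ClusterType> struct FinderMT { /* ... */ };

template <typename ClusterType> struct Collector {
    // The cluster type (and hence the cluster size) is deduced from the
    // finder passed in, so the caller no longer has to spell it out.
    explicit Collector(FinderMT<ClusterType> * /*finder*/) {}
};

struct Cluster3x3 { /* ... */ };

int main() {
    FinderMT<Cluster3x3> finder;
    Collector collector(&finder); // CTAD deduces Collector<Cluster3x3>
    (void)collector;
    std::puts("deduced");
}
```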
2025-08-26 09:30:56 +02:00
Erik Fröjdh
8ab98b356b Merge branch 'main' into fix/saverio_cluster_finder
All checks were successful
Build on RHEL9 / build (push) Successful in 3m0s
Build on RHEL8 / build (push) Successful in 3m12s
2025-08-25 09:26:09 +02:00
d908ad3636 removed option to give clustersize
All checks were successful
Build on RHEL8 / build (push) Successful in 3m9s
Build on RHEL9 / build (push) Successful in 3m16s
2025-08-22 15:25:15 +02:00
8733a1d66f added benchmark
All checks were successful
Build on RHEL8 / build (push) Successful in 3m5s
Build on RHEL9 / build (push) Successful in 3m13s
2025-08-22 15:14:05 +02:00
437f7cec89 induce the cluster size of ClusterCollector from ClusterFinderMT - handle backwards compatibility 2025-08-22 10:08:38 +02:00
Erik Fröjdh
6c3524298f bumped version for release
All checks were successful
Build on RHEL8 / build (push) Successful in 3m13s
Build on RHEL9 / build (push) Successful in 3m24s
2025-08-22 09:52:24 +02:00
b59277c4bf 3x3 reduction for general cluster sizes
All checks were successful
Build on RHEL8 / build (push) Successful in 3m8s
Build on RHEL9 / build (push) Successful in 3m9s
2025-08-19 12:37:55 +02:00
cb163c79b4 reduction to 2x2 clusters for general clusters 2025-08-18 18:23:15 +02:00
Erik Fröjdh
a0fb4900f0 Update RELEASE.md
All checks were successful
Build on RHEL8 / build (push) Successful in 3m7s
Build on RHEL9 / build (push) Successful in 3m10s
2025-08-18 12:16:44 +02:00
Erik Fröjdh
91d74110fa specified glibc in conda build (#222)
Fixed a runtime error on older Linux systems, since by mistake we used
glibc from Ubuntu 24. Same code as in slsDetectorPackage now.
2025-08-18 12:14:54 +02:00
f54e76e6bf view is only allowed on l-value frame (#220)
Vadym accidentally called view() directly on an rvalue frame, which
leads to a dangling view pointer.
Adjusted the code so that the compiler throws an error if view() is
called on an rvalue frame (see the sketch below).
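The standard C++ mechanism for this is ref-qualifying the member function; a
minimal sketch with a simplified Frame (not the exact aare interface):

```cpp
#include <cstdint>
#include <vector>

struct View { const uint16_t *data; };

struct Frame {
    std::vector<uint16_t> buf = std::vector<uint16_t>(16);
    View view() & { return {buf.data()}; } // allowed on lvalues only
    View view() && = delete;               // calling on a temporary does not compile
};

Frame make_frame() { return {}; }

int main() {
    Frame f;
    View v = f.view();                 // fine: f is an lvalue
    // View bad = make_frame().view(); // would not compile: deleted rvalue overload
    (void)v;
}
```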

Co-authored-by: Erik Fröjdh <erik.frojdh@psi.ch>
2025-08-18 11:02:05 +02:00
JFMulvey
c6da36d10b Fixed the order of cluster.data being incorrect (#221)
All checks were successful
Build on RHEL8 / build (push) Successful in 3m4s
Build on RHEL9 / build (push) Successful in 3m11s
While using the cluster finder and saving a cluster, pixels which are
out of bounds are skipped. cluster.data should contain the
pedestal-corrected ADU information of each pixel.

However, the counter "i", which keeps track of the position in
cluster.data, is only incremented if the pixel was inside the bounds of
the frame.

This means that any clusters close to the frame's edges are not
constructed properly: if you want to extract a 3x3 from a 9x9 cluster,
it can fail if the cluster data is not properly centered in the pixel.

Fixed by moving i++ outside the bounds check (see the sketch below).
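A simplified, standalone illustration of the fixed loop (not the actual
ClusterFinder code):

```cpp
#include <array>
#include <cstddef>

// Fill a 3x3 cluster around (row, col), skipping out-of-bounds pixels.
// Simplified sketch of the pattern described above.
std::array<double, 9> fill_cluster(const double *frame, int nrows, int ncols,
                                   int row, int col) {
    std::array<double, 9> data{}; // out-of-bounds positions stay 0
    size_t i = 0;
    for (int dr = -1; dr <= 1; ++dr) {
        for (int dc = -1; dc <= 1; ++dc) {
            int r = row + dr, c = col + dc;
            if (r >= 0 && r < nrows && c >= 0 && c < ncols)
                data[i] = frame[r * ncols + c];
            ++i; // previously this increment sat inside the bounds check,
                 // shifting all following pixels for clusters at the edge
        }
    }
    return data;
}
```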

Co-authored-by: Jonathan Mulvey <jonathan.mulvey@psi.ch>
2025-08-14 09:27:02 +02:00
5107513ff5 Pedestal, calibration in g0 and counting pixels (#217)
All checks were successful
Build on RHEL8 / build (push) Successful in 3m5s
Build on RHEL9 / build (push) Successful in 3m8s
- NDView operator()(size_t) now returns a view with one less dimension
- Apply calibration also takes a 2D array and then ignores pixels that
switch
- Calculate pedestal from a dataset which contains all three gains
- G0 variant of the pedestal
- Function to count pixels switching (see the sketch below)
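A sketch of counting gain-switched pixels, assuming the common Jungfrau
encoding where the two most significant bits of each 16-bit sample carry the
gain (that encoding is an assumption here, not read from the aare source):

```cpp
#include <cstddef>
#include <cstdint>

// Count pixels whose gain bits are non-zero, i.e. pixels that switched out
// of G0. Assumes the gain sits in bits 14-15 of each 16-bit raw value.
size_t count_switching_pixels(const uint16_t *raw, size_t n_pixels) {
    size_t n = 0;
    for (size_t i = 0; i < n_pixels; ++i) {
        uint16_t gain = raw[i] >> 14;
        if (gain != 0)
            ++n;
    }
    return n;
}
```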
2025-07-25 13:50:53 +02:00
f7aa66a2c9 templated calculate_pedestal with boolean template argument only_gain… (#218)
Some refactoring for less code duplication; added drop_dimension
functionality to NDArray
2025-07-25 12:25:41 +02:00
3ac94641e3 Move constructor to drop 1st dimension of NDArray (#219)
All checks were successful
Build on RHEL8 / build (push) Successful in 3m3s
Build on RHEL9 / build (push) Successful in 3m4s
- helper function to initialize shape
- helper function to calculate the number of elements
- move constructor to create an NDArray<T, Ndim-1> if the sizes match (see the sketch below)
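Conceptually, the move constructor reuses the buffer when the first
dimension is 1; a rough sketch with a simplified array type (not the aare
NDArray internals):

```cpp
#include <array>
#include <cstddef>
#include <stdexcept>
#include <utility>
#include <vector>

// Simplified stand-in for NDArray to illustrate the idea.
template <typename T, size_t Ndim> struct SimpleArray {
    std::array<size_t, Ndim> shape{};
    std::vector<T> buffer;

    SimpleArray() = default;

    // Construct an Ndim-dimensional array by stealing the buffer of an
    // (Ndim+1)-dimensional array whose first dimension is 1.
    explicit SimpleArray(SimpleArray<T, Ndim + 1> &&other) {
        if (other.shape[0] != 1)
            throw std::runtime_error("first dimension must be 1");
        for (size_t i = 0; i < Ndim; ++i)
            shape[i] = other.shape[i + 1];
        buffer = std::move(other.buffer); // no copy of the data
    }
};

int main() {
    SimpleArray<int, 3> a;
    a.shape = {1, 4, 5};
    a.buffer.resize(20);
    SimpleArray<int, 2> b(std::move(a)); // drops the leading dimension
    return b.shape[0] == 4 ? 0 : 1;
}
```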
2025-07-25 12:03:42 +02:00
froejdh_e
89bb8776ea check Ndim on drop_first_dim 2025-07-25 11:44:27 +02:00
Erik Fröjdh
1527a45cf3 Merge branch 'template_on_gain0' into dev/move_dim 2025-07-25 10:45:20 +02:00
froejdh_e
3d6858ad33 removed data_ref 2025-07-25 10:42:47 +02:00
froejdh_e
d6222027d0 move constructor for Ndim-1 2025-07-25 10:40:32 +02:00
1195a5e100 added drop dimension test, added file calibration.test.cpp 2025-07-25 10:18:55 +02:00
1347158235 templated calculate_pedestal with boolean template argument only_gain0, added drop_dimension to NDArray and reference pointer to data 2025-07-24 15:40:05 +02:00
froejdh_e
8c4d8b687e using make_subview
All checks were successful
Build on RHEL9 / build (push) Successful in 3m2s
Build on RHEL8 / build (push) Successful in 3m5s
2025-07-24 12:16:08 +02:00
froejdh_e
b8e91d0282 zero out switching pixels if 2D calibration is used 2025-07-24 12:10:13 +02:00
froejdh_e
46876bfa73 reduced duplicate code 2025-07-24 10:57:02 +02:00
froejdh_e
348fd0f937 removed unused code 2025-07-24 10:14:29 +02:00
froejdh_e
0fea0f5b0e added safe_divide to NDArray and used it for pedestal 2025-07-24 09:40:38 +02:00
Erik Fröjdh
cb439efb48 added tests
All checks were successful
Build on RHEL8 / build (push) Successful in 3m0s
Build on RHEL9 / build (push) Successful in 3m8s
2025-07-23 11:34:47 +02:00
Erik Fröjdh
5de402f91b added docs 2025-07-23 11:05:44 +02:00
froejdh_e
9a7713e98a added g0 calibration, pedestal and pixel counting 2025-07-22 16:42:09 +02:00
froejdh_e
b898e1c8d0 date also in release
All checks were successful
Build on RHEL9 / build (push) Successful in 3m9s
Build on RHEL8 / build (push) Successful in 3m9s
2025-07-18 10:23:17 +02:00
froejdh_e
4073c0cbe0 bumped version 2025-07-18 10:21:28 +02:00
Erik Fröjdh
9a3694b980 Merge branch 'main' into dev/reduce
All checks were successful
Build on RHEL9 / build (push) Successful in 3m10s
Build on RHEL8 / build (push) Successful in 3m11s
2025-07-18 10:19:42 +02:00
Erik Fröjdh
abae2674a9 Apply calibration to Jungfrau raw data (#216)
- Added function to read calibration file
- Multi-threaded pedestal subtraction and application of the calibration
2025-07-18 10:19:14 +02:00
1414d75320 fix/mh02 map (#214)
All checks were successful
Build on RHEL9 / build (push) Successful in 3m8s
Build on RHEL8 / build (push) Successful in 3m13s
- Fixed bug in reading MH02 files
2025-07-17 15:32:40 +02:00
Erik Fröjdh
85c3bf7bed Merge branch 'main' into dev/reduce
Some checks failed
Build on RHEL8 / build (push) Failing after 1m51s
Build on RHEL9 / build (push) Successful in 3m16s
2025-07-16 17:04:23 +02:00
Erik Fröjdh
fa3b7a5afe Merge branch 'main' into fix/mh02-map 2025-07-16 17:03:31 +02:00
Erik Fröjdh
e95326faa1 Fix/remove cpp (#213)
Some checks failed
Build on RHEL8 / build (push) Failing after 1m52s
Build on RHEL9 / build (push) Successful in 3m11s
- Removed unused ClusterFile.cpp (code from before it was templated)
- Updated the list of .cpp files in CMakeLists.txt to match alphabetic
listing in the browser
2025-07-16 16:43:08 +02:00
froejdh_e
8143524acf updated release notes 2025-07-16 16:41:27 +02:00
froejdh_e
8e2346abf8 fixed pixel map for mh02 2025-07-16 15:54:29 +02:00
Erik Fröjdh
8eb7fec435 Merge branch 'main' into dev/reduce
All checks were successful
Build on RHEL9 / build (push) Successful in 3m4s
Build on RHEL8 / build (push) Successful in 3m7s
2025-07-16 11:13:11 +02:00
Erik Fröjdh
d8952eccc6 Update RELEASE.md
All checks were successful
Build on RHEL9 / build (push) Successful in 2m47s
Build on RHEL8 / build (push) Successful in 3m0s
2025-06-27 17:19:14 +02:00
Erik Fröjdh
83717571c8 Merge branch 'main' into dev/reduce 2025-06-27 17:10:24 +02:00
Erik Fröjdh
97dae4ac60 added empty() to ClusterVector and fixed docs (#209)
- added ClusterVector::empty() to check if the vector is empty
- Fixed generation of missing docs for ClusterVector
2025-06-27 17:00:46 +02:00
froejdh_e
5a9c3b717e naive implementation of 3x3 and 5x5 reduction 2025-06-27 16:36:21 +02:00
Erik Fröjdh
e3f4b34b72 Const element access and fixed comparing bug (#208)
- Added const element access
- Added const data*
- Fixed bug comparing two Views of same size but different shapes

closes #207
2025-06-27 14:13:51 +02:00
Erik Fröjdh
6ec8fbee72 migrated tags for tests and added missing raw files (#206)
All checks were successful
Build on RHEL8 / build (push) Successful in 2m57s
Build on RHEL9 / build (push) Successful in 2m59s
- No changes or evaluation of existing tests
- Tags for including tests that require data have changed to
**[.with-data]** and **--with-data** for C++ and Python respectively (see
the example below)
- Minor update to docs
- Added missing files to the test data repo
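For example, a Catch2 test that needs the test data would be tagged like
this (a sketch; it assumes Catch2 v3 headers and the test name is made up):

```cpp
// SPDX-License-Identifier: MPL-2.0
#include <catch2/catch_test_macros.hpp>

// Hidden tag ".with-data": the test only runs when explicitly selected,
// e.g.  ./tests "[.with-data]"
TEST_CASE("read a raw file from the test data repo", "[.with-data]") {
    // ... open files from the test data location ...
    REQUIRE(true);
}
```

On the Python side the equivalent is passing the --with-data option to pytest.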
2025-06-26 17:11:20 +02:00
30822d9c5f Dev/fix/rawfilereader with roi (#192)
All checks were successful
Build on RHEL8 / build (push) Successful in 2m54s
Build on RHEL9 / build (push) Successful in 3m0s
Geometry is calculated from the master file.
2025-06-24 16:41:28 +02:00
ff7312f45d replaced fmt with LOG 2025-06-24 16:24:25 +02:00
8e7c9eadff fixed cmake merge 2025-06-24 13:49:05 +02:00
d35b7762b4 Merge branch 'main' into dev/fix/rawfilereader_with_roi 2025-06-24 13:43:26 +02:00
df4dbb8fd0 fixed numpy test 2025-06-24 13:39:32 +02:00
c92be4bca2 added eiger quad test
All checks were successful
Build on RHEL8 / build (push) Successful in 2m53s
Build on RHEL9 / build (push) Successful in 3m0s
2025-06-24 11:29:25 +02:00
664055de92 fixed quad structure
All checks were successful
Build on RHEL9 / build (push) Successful in 2m50s
Build on RHEL8 / build (push) Successful in 3m2s
2025-06-23 17:27:13 +02:00
318e640639 only test over the public interface 2025-06-23 17:27:13 +02:00
c6990dabad deleted unused variables 2025-06-23 17:27:13 +02:00
c9fe16b4c2 use target_compile_definitions (#203)
All checks were successful
Build on RHEL9 / build (push) Successful in 2m54s
Build on RHEL8 / build (push) Successful in 2m55s
use target_compile_definitions instead of add_compile_definitions to use
macros across projects
2025-06-23 09:06:25 +02:00
Erik Fröjdh
64438c8803 Merge branch 'main' into dev/fix/rawfilereader_with_roi
All checks were successful
Build on RHEL9 / build (push) Successful in 2m58s
Build on RHEL8 / build (push) Successful in 2m58s
2025-06-23 08:21:39 +02:00
9f8eee5d08 fixed python bindings - only read headers of modules that are in the roi
All checks were successful
Build on RHEL8 / build (push) Successful in 2m51s
Build on RHEL9 / build (push) Successful in 2m53s
2025-06-16 11:07:00 +02:00
35114cde9d fatal error if no subfiles were opened
All checks were successful
Build on RHEL8 / build (push) Successful in 2m56s
Build on RHEL9 / build (push) Successful in 2m59s
2025-06-13 18:12:47 +02:00
b13f864b2b need n_modules 2025-06-13 17:01:13 +02:00
05828baa54 removed n_modules from python bindings 2025-06-13 16:38:46 +02:00
0f56846e3d deleted some commented lines 2025-06-13 16:33:25 +02:00
be67bbab6b extended DetectorGeometry class with find_geometry, update_geometry (refactoring) 2025-06-13 16:16:23 +02:00
Erik Fröjdh
8354439605 dropping version spec on sphinx (#202)
All checks were successful
Build on RHEL8 / build (push) Successful in 2m56s
Build on RHEL9 / build (push) Successful in 2m58s
- Removing the version requirement on sphinx since the latest version
works again
- added numpy and matplotlib to the etc/dev-env.yml since they are
needed to import aare
2025-06-13 15:25:43 +02:00
Erik Fröjdh
11fa95b23c Improved documentation for ClusterFile on the python side (#201)
- Fixed CI job not doing python docs
- added more docs on cluster file 
- fixed generating docs on cluster vector
2025-06-13 10:41:39 +02:00
bd7870e75a review comments
All checks were successful
Build on RHEL9 / build (push) Successful in 2m53s
Build on RHEL8 / build (push) Successful in 2m55s
2025-06-12 18:14:48 +02:00
75f63607fc friend_test macro 2025-06-12 17:46:10 +02:00
cfe7c31fe4 changed data path of test data
All checks were successful
Build on RHEL9 / build (push) Successful in 2m54s
Build on RHEL8 / build (push) Successful in 2m55s
2025-06-12 09:53:15 +02:00
Erik Fröjdh
4976ec1651 added back chunk_size in python (#199)
All checks were successful
Build on RHEL9 / build (push) Successful in 2m52s
Build on RHEL8 / build (push) Successful in 2m57s
When refactoring the dispatch of the python binding for ClusterFile I
forgot chunk_size. Adding it back in.
Excluded from release notes since the bug was introduced after the last
release and now fixed before the next release.

1. added back chunk_size
2. removed a few commented out lines

closes #197
2025-06-12 09:32:42 +02:00
Erik Fröjdh
a9a55fb27d Merge branch 'main' into dev/fix/rawfilereader_with_roi
All checks were successful
Build on RHEL8 / build (push) Successful in 2m57s
Build on RHEL9 / build (push) Successful in 3m1s
2025-06-11 13:23:01 +02:00
19ecc82fff solved merge conflict
All checks were successful
Build on RHEL8 / build (push) Successful in 2m52s
Build on RHEL9 / build (push) Successful in 3m12s
2025-06-10 17:01:40 +02:00
923f7d22b8 Merge branch 'main' into dev/fix/rawfilereader_with_roi 2025-06-10 15:59:52 +02:00
6438a4bef1 updated python bindings
All checks were successful
Build on RHEL9 / build (push) Successful in 2m21s
Build on RHEL8 / build (push) Successful in 2m29s
2025-06-10 12:00:07 +02:00
ad7525cd02 considered num_udp_interfaces for jungfrau and quad structure for eiger 2025-06-10 11:35:15 +02:00
87d8682b1e added test cases 2025-06-06 11:31:39 +02:00
9c6e629298 only files within the ROI are opened & geometry always read in RawMasterFile 2025-06-04 16:34:40 +02:00
212 changed files with 9043 additions and 4522 deletions

View File

@@ -10,7 +10,7 @@ jobs:
strategy:
fail-fast: false
matrix:
platform: [ubuntu-latest, ] # macos-12, windows-2019]
platform: [ubuntu-latest] # macos-12, windows-2019]
python-version: ["3.12",]
runs-on: ${{ matrix.platform }}

View File

@@ -2,11 +2,15 @@ name: Build the package using cmake then documentation
on:
workflow_dispatch:
push:
pull_request:
release:
types:
- published
env:
# Customize the CMake build type here (Release, Debug, RelWithDebInfo, etc.)
BUILD_TYPE: Debug
permissions:
contents: read
@@ -18,7 +22,7 @@ jobs:
strategy:
fail-fast: false
matrix:
platform: [ubuntu-latest, ]
platform: [ubuntu-latest, macos-latest]
python-version: ["3.12",]
runs-on: ${{ matrix.platform }}
@@ -39,15 +43,20 @@ jobs:
channels: conda-forge
conda-remove-defaults: "true"
- name: Build library
- name: Build library and docs
run: |
mkdir build
cd build
cmake .. -DAARE_SYSTEM_LIBRARIES=ON -DAARE_DOCS=ON
make -j 2
cmake .. -DCMAKE_BUILD_TYPE=${{env.BUILD_TYPE}} -DAARE_SYSTEM_LIBRARIES=ON -DAARE_PYTHON_BINDINGS=ON -DAARE_DOCS=ON -DAARE_TESTS=ON
make -j 4
make docs
- name: C++ unit tests
working-directory: ${{github.workspace}}/build
run: ctest -C ${{env.BUILD_TYPE}} -j4
- name: Upload static files as artifact
if: matrix.platform == 'ubuntu-latest'
id: deployment
uses: actions/upload-pages-artifact@v3
with:

View File

@@ -1,3 +1,4 @@
# SPDX-License-Identifier: MPL-2.0
cmake_minimum_required(VERSION 3.15)
project(aare
@@ -53,7 +54,6 @@ option(AARE_DOCS "Build documentation" OFF)
option(AARE_VERBOSE "Verbose output" OFF)
option(AARE_CUSTOM_ASSERT "Use custom assert" OFF)
option(AARE_INSTALL_PYTHONEXT "Install the python extension in the install tree under CMAKE_INSTALL_PREFIX/aare/" OFF)
option(AARE_HDF5 "Hdf5 File Format" OFF)
option(AARE_ASAN "Enable AddressSanitizer" OFF)
# Configure which of the dependencies to use FetchContent for
@@ -78,17 +78,6 @@ if(AARE_SYSTEM_LIBRARIES)
# on conda-forge
endif()
if(AARE_VERBOSE)
add_compile_definitions(AARE_VERBOSE)
add_compile_definitions(AARE_LOG_LEVEL=aare::logDEBUG5)
else()
add_compile_definitions(AARE_LOG_LEVEL=aare::logINFOBLUE)
endif()
if(AARE_CUSTOM_ASSERT)
add_compile_definitions(AARE_CUSTOM_ASSERT)
endif()
if(AARE_BENCHMARKS)
add_subdirectory(benchmarks)
endif()
@@ -336,13 +325,10 @@ if(AARE_ASAN)
)
endif()
if(AARE_TESTS)
enable_testing()
add_subdirectory(tests)
target_compile_definitions(tests PRIVATE AARE_TESTS)
endif()
###------------------------------------------------------------------------------MAIN LIBRARY
@@ -357,10 +343,6 @@ set(PUBLICHEADERS
include/aare/CtbRawFile.hpp
include/aare/ClusterVector.hpp
include/aare/decode.hpp
include/aare/type_traits.hpp
include/aare/scan_parameters.hpp
include/aare/to_string.hpp
include/aare/string_utils.hpp
include/aare/defs.hpp
include/aare/Dtype.hpp
include/aare/File.hpp
@@ -369,7 +351,7 @@ set(PUBLICHEADERS
include/aare/FilePtr.hpp
include/aare/Frame.hpp
include/aare/GainMap.hpp
include/aare/geo_helpers.hpp
include/aare/DetectorGeometry.hpp
include/aare/JungfrauDataFile.hpp
include/aare/logger.hpp
include/aare/NDArray.hpp
@@ -387,44 +369,28 @@ set(PUBLICHEADERS
set(SourceFiles
${CMAKE_CURRENT_SOURCE_DIR}/src/calibration.cpp
${CMAKE_CURRENT_SOURCE_DIR}/src/CtbRawFile.cpp
${CMAKE_CURRENT_SOURCE_DIR}/src/defs.cpp
${CMAKE_CURRENT_SOURCE_DIR}/src/to_string.cpp
${CMAKE_CURRENT_SOURCE_DIR}/src/string_utils.cpp
${CMAKE_CURRENT_SOURCE_DIR}/src/Dtype.cpp
${CMAKE_CURRENT_SOURCE_DIR}/src/decode.cpp
${CMAKE_CURRENT_SOURCE_DIR}/src/Frame.cpp
${CMAKE_CURRENT_SOURCE_DIR}/src/defs.cpp
${CMAKE_CURRENT_SOURCE_DIR}/src/DetectorGeometry.cpp
${CMAKE_CURRENT_SOURCE_DIR}/src/Dtype.cpp
${CMAKE_CURRENT_SOURCE_DIR}/src/File.cpp
${CMAKE_CURRENT_SOURCE_DIR}/src/FilePtr.cpp
${CMAKE_CURRENT_SOURCE_DIR}/src/Fit.cpp
${CMAKE_CURRENT_SOURCE_DIR}/src/geo_helpers.cpp
${CMAKE_CURRENT_SOURCE_DIR}/src/Frame.cpp
${CMAKE_CURRENT_SOURCE_DIR}/src/Interpolator.cpp
${CMAKE_CURRENT_SOURCE_DIR}/src/JungfrauDataFile.cpp
${CMAKE_CURRENT_SOURCE_DIR}/src/NumpyFile.cpp
${CMAKE_CURRENT_SOURCE_DIR}/src/NumpyHelpers.cpp
${CMAKE_CURRENT_SOURCE_DIR}/src/Interpolator.cpp
${CMAKE_CURRENT_SOURCE_DIR}/src/PixelMap.cpp
${CMAKE_CURRENT_SOURCE_DIR}/src/RawFile.cpp
${CMAKE_CURRENT_SOURCE_DIR}/src/RawSubFile.cpp
${CMAKE_CURRENT_SOURCE_DIR}/src/RawMasterFile.cpp
${CMAKE_CURRENT_SOURCE_DIR}/src/RawSubFile.cpp
${CMAKE_CURRENT_SOURCE_DIR}/src/to_string.cpp
${CMAKE_CURRENT_SOURCE_DIR}/src/utils/task.cpp
${CMAKE_CURRENT_SOURCE_DIR}/src/utils/ifstream_helpers.cpp
)
# HDF5
if (AARE_HDF5)
find_package(HDF5 1.10 COMPONENTS CXX REQUIRED)
add_definitions(
${HDF5_DEFINITIONS}
)
list (APPEND PUBLICHEADERS
include/aare/Hdf5File.hpp
include/aare/Hdf5MasterFile.hpp
)
list (APPEND SourceFiles
${CMAKE_CURRENT_SOURCE_DIR}/src/Hdf5File.cpp
${CMAKE_CURRENT_SOURCE_DIR}/src/Hdf5MasterFile.cpp
)
endif (AARE_HDF5)
)
add_library(aare_core STATIC ${SourceFiles})
target_include_directories(aare_core PUBLIC
@@ -448,14 +414,20 @@ target_link_libraries(
)
if (AARE_HDF5 AND HDF5_FOUND)
add_definitions(-DHDF5_FOUND)
target_link_libraries(aare_core PUBLIC
${HDF5_LIBRARIES}
)
target_include_directories(aare_core PUBLIC
${HDF5_INCLUDE_DIRS}
)
set_property(TARGET aare_core PROPERTY POSITION_INDEPENDENT_CODE ON)
if(AARE_TESTS)
target_compile_definitions(aare_core PRIVATE AARE_TESTS)
endif()
if(AARE_VERBOSE)
target_compile_definitions(aare_core PUBLIC AARE_VERBOSE)
target_compile_definitions(aare_core PUBLIC AARE_LOG_LEVEL=aare::logDEBUG5)
else()
target_compile_definitions(aare_core PUBLIC AARE_LOG_LEVEL=aare::logERROR)
endif()
if(AARE_CUSTOM_ASSERT)
target_compile_definitions(aare_core PUBLIC AARE_CUSTOM_ASSERT)
endif()
set_target_properties(aare_core PROPERTIES
@@ -463,20 +435,16 @@ set_target_properties(aare_core PROPERTIES
PUBLIC_HEADER "${PUBLICHEADERS}"
)
if (AARE_PYTHON_BINDINGS)
set_property(TARGET aare_core PROPERTY POSITION_INDEPENDENT_CODE ON)
endif()
if(AARE_TESTS)
set(TestSources
${CMAKE_CURRENT_SOURCE_DIR}/src/algorithm.test.cpp
${CMAKE_CURRENT_SOURCE_DIR}/src/calibration.test.cpp
${CMAKE_CURRENT_SOURCE_DIR}/src/defs.test.cpp
${CMAKE_CURRENT_SOURCE_DIR}/src/to_string.test.cpp
${CMAKE_CURRENT_SOURCE_DIR}/src/scan_parameters.test.cpp
${CMAKE_CURRENT_SOURCE_DIR}/src/decode.test.cpp
${CMAKE_CURRENT_SOURCE_DIR}/src/Dtype.test.cpp
${CMAKE_CURRENT_SOURCE_DIR}/src/Frame.test.cpp
${CMAKE_CURRENT_SOURCE_DIR}/src/geo_helpers.test.cpp
${CMAKE_CURRENT_SOURCE_DIR}/src/DetectorGeometry.test.cpp
${CMAKE_CURRENT_SOURCE_DIR}/src/Interpolation.test.cpp
${CMAKE_CURRENT_SOURCE_DIR}/src/RawMasterFile.test.cpp
${CMAKE_CURRENT_SOURCE_DIR}/src/NDArray.test.cpp
${CMAKE_CURRENT_SOURCE_DIR}/src/NDView.test.cpp
@@ -493,24 +461,13 @@ if(AARE_TESTS)
${CMAKE_CURRENT_SOURCE_DIR}/src/RawFile.test.cpp
${CMAKE_CURRENT_SOURCE_DIR}/src/RawSubFile.test.cpp
${CMAKE_CURRENT_SOURCE_DIR}/src/utils/task.test.cpp
${CMAKE_CURRENT_SOURCE_DIR}/src/to_string.test.cpp
)
if(HDF5_FOUND)
list (APPEND TestSources
${CMAKE_CURRENT_SOURCE_DIR}/src/Hdf5MasterFile.test.cpp
${CMAKE_CURRENT_SOURCE_DIR}/src/Hdf5File.test.cpp
)
endif()
target_sources(tests PRIVATE ${TestSources} )
endif()
###------------------------------------------------------------------------------------------
###------------------------------------------------------------------------------------------
if(AARE_MASTER_PROJECT)
install(TARGETS aare_core aare_compiler_flags
EXPORT "${TARGETS_EXPORT_NAME}"
@@ -520,7 +477,6 @@ if(AARE_MASTER_PROJECT)
)
endif()
set(CMAKE_POSITION_INDEPENDENT_CODE ON)
set(CMAKE_INSTALL_RPATH $ORIGIN)
set(CMAKE_BUILD_WITH_INSTALL_RPATH FALSE)

LICENSE (Normal file, 373 lines)
View File

@@ -0,0 +1,373 @@
Mozilla Public License Version 2.0
==================================
1. Definitions
--------------
1.1. "Contributor"
means each individual or legal entity that creates, contributes to
the creation of, or owns Covered Software.
1.2. "Contributor Version"
means the combination of the Contributions of others (if any) used
by a Contributor and that particular Contributor's Contribution.
1.3. "Contribution"
means Covered Software of a particular Contributor.
1.4. "Covered Software"
means Source Code Form to which the initial Contributor has attached
the notice in Exhibit A, the Executable Form of such Source Code
Form, and Modifications of such Source Code Form, in each case
including portions thereof.
1.5. "Incompatible With Secondary Licenses"
means
(a) that the initial Contributor has attached the notice described
in Exhibit B to the Covered Software; or
(b) that the Covered Software was made available under the terms of
version 1.1 or earlier of the License, but not also under the
terms of a Secondary License.
1.6. "Executable Form"
means any form of the work other than Source Code Form.
1.7. "Larger Work"
means a work that combines Covered Software with other material, in
a separate file or files, that is not Covered Software.
1.8. "License"
means this document.
1.9. "Licensable"
means having the right to grant, to the maximum extent possible,
whether at the time of the initial grant or subsequently, any and
all of the rights conveyed by this License.
1.10. "Modifications"
means any of the following:
(a) any file in Source Code Form that results from an addition to,
deletion from, or modification of the contents of Covered
Software; or
(b) any new file in Source Code Form that contains any Covered
Software.
1.11. "Patent Claims" of a Contributor
means any patent claim(s), including without limitation, method,
process, and apparatus claims, in any patent Licensable by such
Contributor that would be infringed, but for the grant of the
License, by the making, using, selling, offering for sale, having
made, import, or transfer of either its Contributions or its
Contributor Version.
1.12. "Secondary License"
means either the GNU General Public License, Version 2.0, the GNU
Lesser General Public License, Version 2.1, the GNU Affero General
Public License, Version 3.0, or any later versions of those
licenses.
1.13. "Source Code Form"
means the form of the work preferred for making modifications.
1.14. "You" (or "Your")
means an individual or a legal entity exercising rights under this
License. For legal entities, "You" includes any entity that
controls, is controlled by, or is under common control with You. For
purposes of this definition, "control" means (a) the power, direct
or indirect, to cause the direction or management of such entity,
whether by contract or otherwise, or (b) ownership of more than
fifty percent (50%) of the outstanding shares or beneficial
ownership of such entity.
2. License Grants and Conditions
--------------------------------
2.1. Grants
Each Contributor hereby grants You a world-wide, royalty-free,
non-exclusive license:
(a) under intellectual property rights (other than patent or trademark)
Licensable by such Contributor to use, reproduce, make available,
modify, display, perform, distribute, and otherwise exploit its
Contributions, either on an unmodified basis, with Modifications, or
as part of a Larger Work; and
(b) under Patent Claims of such Contributor to make, use, sell, offer
for sale, have made, import, and otherwise transfer either its
Contributions or its Contributor Version.
2.2. Effective Date
The licenses granted in Section 2.1 with respect to any Contribution
become effective for each Contribution on the date the Contributor first
distributes such Contribution.
2.3. Limitations on Grant Scope
The licenses granted in this Section 2 are the only rights granted under
this License. No additional rights or licenses will be implied from the
distribution or licensing of Covered Software under this License.
Notwithstanding Section 2.1(b) above, no patent license is granted by a
Contributor:
(a) for any code that a Contributor has removed from Covered Software;
or
(b) for infringements caused by: (i) Your and any other third party's
modifications of Covered Software, or (ii) the combination of its
Contributions with other software (except as part of its Contributor
Version); or
(c) under Patent Claims infringed by Covered Software in the absence of
its Contributions.
This License does not grant any rights in the trademarks, service marks,
or logos of any Contributor (except as may be necessary to comply with
the notice requirements in Section 3.4).
2.4. Subsequent Licenses
No Contributor makes additional grants as a result of Your choice to
distribute the Covered Software under a subsequent version of this
License (see Section 10.2) or under the terms of a Secondary License (if
permitted under the terms of Section 3.3).
2.5. Representation
Each Contributor represents that the Contributor believes its
Contributions are its original creation(s) or it has sufficient rights
to grant the rights to its Contributions conveyed by this License.
2.6. Fair Use
This License is not intended to limit any rights You have under
applicable copyright doctrines of fair use, fair dealing, or other
equivalents.
2.7. Conditions
Sections 3.1, 3.2, 3.3, and 3.4 are conditions of the licenses granted
in Section 2.1.
3. Responsibilities
-------------------
3.1. Distribution of Source Form
All distribution of Covered Software in Source Code Form, including any
Modifications that You create or to which You contribute, must be under
the terms of this License. You must inform recipients that the Source
Code Form of the Covered Software is governed by the terms of this
License, and how they can obtain a copy of this License. You may not
attempt to alter or restrict the recipients' rights in the Source Code
Form.
3.2. Distribution of Executable Form
If You distribute Covered Software in Executable Form then:
(a) such Covered Software must also be made available in Source Code
Form, as described in Section 3.1, and You must inform recipients of
the Executable Form how they can obtain a copy of such Source Code
Form by reasonable means in a timely manner, at a charge no more
than the cost of distribution to the recipient; and
(b) You may distribute such Executable Form under the terms of this
License, or sublicense it under different terms, provided that the
license for the Executable Form does not attempt to limit or alter
the recipients' rights in the Source Code Form under this License.
3.3. Distribution of a Larger Work
You may create and distribute a Larger Work under terms of Your choice,
provided that You also comply with the requirements of this License for
the Covered Software. If the Larger Work is a combination of Covered
Software with a work governed by one or more Secondary Licenses, and the
Covered Software is not Incompatible With Secondary Licenses, this
License permits You to additionally distribute such Covered Software
under the terms of such Secondary License(s), so that the recipient of
the Larger Work may, at their option, further distribute the Covered
Software under the terms of either this License or such Secondary
License(s).
3.4. Notices
You may not remove or alter the substance of any license notices
(including copyright notices, patent notices, disclaimers of warranty,
or limitations of liability) contained within the Source Code Form of
the Covered Software, except that You may alter any license notices to
the extent required to remedy known factual inaccuracies.
3.5. Application of Additional Terms
You may choose to offer, and to charge a fee for, warranty, support,
indemnity or liability obligations to one or more recipients of Covered
Software. However, You may do so only on Your own behalf, and not on
behalf of any Contributor. You must make it absolutely clear that any
such warranty, support, indemnity, or liability obligation is offered by
You alone, and You hereby agree to indemnify every Contributor for any
liability incurred by such Contributor as a result of warranty, support,
indemnity or liability terms You offer. You may include additional
disclaimers of warranty and limitations of liability specific to any
jurisdiction.
4. Inability to Comply Due to Statute or Regulation
---------------------------------------------------
If it is impossible for You to comply with any of the terms of this
License with respect to some or all of the Covered Software due to
statute, judicial order, or regulation then You must: (a) comply with
the terms of this License to the maximum extent possible; and (b)
describe the limitations and the code they affect. Such description must
be placed in a text file included with all distributions of the Covered
Software under this License. Except to the extent prohibited by statute
or regulation, such description must be sufficiently detailed for a
recipient of ordinary skill to be able to understand it.
5. Termination
--------------
5.1. The rights granted under this License will terminate automatically
if You fail to comply with any of its terms. However, if You become
compliant, then the rights granted under this License from a particular
Contributor are reinstated (a) provisionally, unless and until such
Contributor explicitly and finally terminates Your grants, and (b) on an
ongoing basis, if such Contributor fails to notify You of the
non-compliance by some reasonable means prior to 60 days after You have
come back into compliance. Moreover, Your grants from a particular
Contributor are reinstated on an ongoing basis if such Contributor
notifies You of the non-compliance by some reasonable means, this is the
first time You have received notice of non-compliance with this License
from such Contributor, and You become compliant prior to 30 days after
Your receipt of the notice.
5.2. If You initiate litigation against any entity by asserting a patent
infringement claim (excluding declaratory judgment actions,
counter-claims, and cross-claims) alleging that a Contributor Version
directly or indirectly infringes any patent, then the rights granted to
You by any and all Contributors for the Covered Software under Section
2.1 of this License shall terminate.
5.3. In the event of termination under Sections 5.1 or 5.2 above, all
end user license agreements (excluding distributors and resellers) which
have been validly granted by You or Your distributors under this License
prior to termination shall survive termination.
************************************************************************
* *
* 6. Disclaimer of Warranty *
* ------------------------- *
* *
* Covered Software is provided under this License on an "as is" *
* basis, without warranty of any kind, either expressed, implied, or *
* statutory, including, without limitation, warranties that the *
* Covered Software is free of defects, merchantable, fit for a *
* particular purpose or non-infringing. The entire risk as to the *
* quality and performance of the Covered Software is with You. *
* Should any Covered Software prove defective in any respect, You *
* (not any Contributor) assume the cost of any necessary servicing, *
* repair, or correction. This disclaimer of warranty constitutes an *
* essential part of this License. No use of any Covered Software is *
* authorized under this License except under this disclaimer. *
* *
************************************************************************
************************************************************************
* *
* 7. Limitation of Liability *
* -------------------------- *
* *
* Under no circumstances and under no legal theory, whether tort *
* (including negligence), contract, or otherwise, shall any *
* Contributor, or anyone who distributes Covered Software as *
* permitted above, be liable to You for any direct, indirect, *
* special, incidental, or consequential damages of any character *
* including, without limitation, damages for lost profits, loss of *
* goodwill, work stoppage, computer failure or malfunction, or any *
* and all other commercial damages or losses, even if such party *
* shall have been informed of the possibility of such damages. This *
* limitation of liability shall not apply to liability for death or *
* personal injury resulting from such party's negligence to the *
* extent applicable law prohibits such limitation. Some *
* jurisdictions do not allow the exclusion or limitation of *
* incidental or consequential damages, so this exclusion and *
* limitation may not apply to You. *
* *
************************************************************************
8. Litigation
-------------
Any litigation relating to this License may be brought only in the
courts of a jurisdiction where the defendant maintains its principal
place of business and such litigation shall be governed by laws of that
jurisdiction, without reference to its conflict-of-law provisions.
Nothing in this Section shall prevent a party's ability to bring
cross-claims or counter-claims.
9. Miscellaneous
----------------
This License represents the complete agreement concerning the subject
matter hereof. If any provision of this License is held to be
unenforceable, such provision shall be reformed only to the extent
necessary to make it enforceable. Any law or regulation which provides
that the language of a contract shall be construed against the drafter
shall not be used to construe this License against a Contributor.
10. Versions of the License
---------------------------
10.1. New Versions
Mozilla Foundation is the license steward. Except as provided in Section
10.3, no one other than the license steward has the right to modify or
publish new versions of this License. Each version will be given a
distinguishing version number.
10.2. Effect of New Versions
You may distribute the Covered Software under the terms of the version
of the License under which You originally received the Covered Software,
or under the terms of any subsequent version published by the license
steward.
10.3. Modified Versions
If you create software not governed by this License, and you want to
create a new license for such software, you may create and use a
modified version of this License if you rename the license and remove
any references to the name of the license steward (except to note that
such modified license differs from this License).
10.4. Distributing Source Code Form that is Incompatible With Secondary
Licenses
If You choose to distribute Source Code Form that is Incompatible With
Secondary Licenses under the terms of this version of the License, the
notice described in Exhibit B of this License must be attached.
Exhibit A - Source Code Form License Notice
-------------------------------------------
This Source Code Form is subject to the terms of the Mozilla Public
License, v. 2.0. If a copy of the MPL was not distributed with this
file, You can obtain one at http://mozilla.org/MPL/2.0/.
If it is not possible or desirable to put the notice in a particular
file, then You may include the notice in a location (such as a LICENSE
file in a relevant directory) where a recipient would be likely to look
for such a notice.
You may add additional accurate notices of copyright ownership.
Exhibit B - "Incompatible With Secondary Licenses" Notice
---------------------------------------------------------
This Source Code Form is "Incompatible With Secondary Licenses", as
defined by the Mozilla Public License, v. 2.0.

View File

@@ -1,6 +1,14 @@
# aare
Data analysis library for PSI hybrid detectors
## Documentation
Detailed documentation including installation can be found in [Documentation](https://slsdetectorgroup.github.io/aare/)
## License
This project is licensed under the MPL-2.0 license.
See the LICENSE file or https://www.mozilla.org/en-US/MPL/ for details.
## Build and install

View File

@@ -1,14 +1,82 @@
# Release notes
## head
### head
### New Features:
- Expanding 24 to 32 bit data
- Decoding digital data from Mythen 302
- added ``transform_eta_values``. Function transforms :math:`\eta` to uniform spatial coordinates. Should only be used for easier debugging.
- New to_string, string_to for aare
- Added exptime and period members to RawMasterFile including decoding
- Removed redundant arr.value(ix,iy...) from NDArray; use arr(ix,iy...)
- Removed Print/Print_some/Print_all from NDArray (operator << still works)
- Added const* version of .data()
### 2025.11.21
### New Features:
- Added SPDX-License-Identifier: MPL-2.0 to source files
- Calculate Eta3 supports all cluster types
- interpolation class supports using cross eta3x3 and eta3x3 on full cluster as well as eta2x2 on full cluster
- interpolation class has option to calculate the rosenblatt transform
- reduction operations to reduce Clusters of general size to 2x2 or 3x3 clusters
- `max_sum_2x2` including index of subcluster with highest energy is now available from Python API
- interpolation supports bilinear interpolation of eta values for more fine grained transformed uniform coordinates
- Interpolation is documented
- Added tell to ClusterFile. Returns position in bytes for debugging
### Resolved Features:
- calculate_eta coincides with theoretical definition
### Bugfixes:
- eta calculation assumes correct photon center
- eta transformation to uniform coordinates starts at 0
- Bug in interpolation
- File supports reading new master json file format (multiple ROI's not supported yet)
### API Changes:
- ClusterFinder for 2x2 Cluster disabled
- eta stores corner as enum class cTopLeft, cTopRight, cBottomLeft, cBottomRight indicating the 2x2 subcluster with the largest energy relative to the cluster center
- max_sum_2x2 returns corner as index
### 2025.8.22
Features:
- Apply calibration works in G0 if passed a 2D calibration and pedestal
- count pixels that switch
- calculate pedestal (also g0 version)
- NDArray::view() needs an lvalue to reduce issues with the view outliving the array
Bugfixes:
- Now using glibc 2.17 in conda builds (was using the host)
- Fixed shifted pixels in clusters close to the edge of a frame
### 2025.7.18
Features:
- Cluster finder now works with 5x5, 7x7 and 9x9 clusters
- Added ClusterVector::empty() member
- Added apply_calibration function for Jungfrau data
Bugfixes:
- Fixed reading RawFiles with ROI fully excluding some sub files.
- Decoding of MH02 files placed the pixels in wrong position
- Removed unused file: ClusterFile.cpp
### 2025.05.22
### 2025.5.22
Features:
@@ -18,5 +86,34 @@ Bugfixes:
- Fixed crash when opening raw files with large number of data files
## Download, Documentation & Support
### Download
The Source Code:
https://github.com/slsdetectorgroup/aare
### Documentation
Documentation including installation details:
https://github.com/slsdetectorgroup/aare
### Support
erik.frojdh@psi.ch \
alice.mazzoleni@psi.ch \
dhanya.thattil@psi.ch

View File

@@ -1 +1 @@
2025.5.22
2025.11.21

View File

@@ -1,3 +1,4 @@
# SPDX-License-Identifier: MPL-2.0
include(FetchContent)
@@ -15,7 +16,7 @@ FetchContent_MakeAvailable(benchmark)
add_executable(benchmarks)
target_sources(benchmarks PRIVATE ndarray_benchmark.cpp calculateeta_benchmark.cpp)
target_sources(benchmarks PRIVATE ndarray_benchmark.cpp calculateeta_benchmark.cpp reduce_benchmark.cpp)
# Link Google Benchmark and other necessary libraries
target_link_libraries(benchmarks PRIVATE benchmark::benchmark aare_core aare_compiler_flags)

View File

@@ -1,3 +1,4 @@
// SPDX-License-Identifier: MPL-2.0
#include "aare/CalculateEta.hpp"
#include "aare/ClusterFile.hpp"
#include <benchmark/benchmark.h>
@@ -8,6 +9,7 @@ class ClusterFixture : public benchmark::Fixture {
public:
Cluster<int, 2, 2> cluster_2x2{};
Cluster<int, 3, 3> cluster_3x3{};
Cluster<int, 4, 4> cluster_4x4{};
private:
using benchmark::Fixture::SetUp;
@@ -26,6 +28,13 @@ class ClusterFixture : public benchmark::Fixture {
cluster_3x3.x = 0;
cluster_3x3.y = 0;
int temp_data3[16] = {1, 2, 3, 4, 5, 6, 7, 8,
9, 10, 11, 12, 13, 14, 15, 16};
std::copy(std::begin(temp_data3), std::end(temp_data3),
std::begin(cluster_4x4.data));
cluster_4x4.x = 0;
cluster_4x4.y = 0;
}
// void TearDown(::benchmark::State& state) {
@@ -67,4 +76,29 @@ BENCHMARK_F(ClusterFixture, CalculateGeneralEtaFor3x3Cluster)
benchmark::DoNotOptimize(eta);
}
}
BENCHMARK_F(ClusterFixture, Calculate2x2Etawithreduction)
(benchmark::State &st) {
for (auto _ : st) {
// This code gets timed
auto reduced_cluster = reduce_to_2x2(cluster_4x4);
Eta2 eta = calculate_eta2(reduced_cluster);
auto reduced_cluster_from_3x3 = reduce_to_2x2(cluster_3x3);
Eta2 eta2 = calculate_eta2(reduced_cluster_from_3x3);
benchmark::DoNotOptimize(eta);
benchmark::DoNotOptimize(eta2);
}
}
BENCHMARK_F(ClusterFixture, Calculate2x2Etawithoutreduction)
(benchmark::State &st) {
for (auto _ : st) {
// This code gets timed
Eta2 eta = calculate_eta2(cluster_4x4);
Eta2 eta2 = calculate_eta2(cluster_3x3);
benchmark::DoNotOptimize(eta);
benchmark::DoNotOptimize(eta2);
}
}
// BENCHMARK_MAIN();

View File

@@ -1,3 +1,4 @@
// SPDX-License-Identifier: MPL-2.0
#include "aare/NDArray.hpp"
#include <benchmark/benchmark.h>

View File

@@ -0,0 +1,169 @@
// SPDX-License-Identifier: MPL-2.0
#include "aare/Cluster.hpp"
#include <benchmark/benchmark.h>
using namespace aare;
class ClustersForReduceFixture : public benchmark::Fixture {
public:
Cluster<int, 5, 5> cluster_5x5{};
Cluster<int, 3, 3> cluster_3x3{};
private:
using benchmark::Fixture::SetUp;
void SetUp([[maybe_unused]] const benchmark::State &state) override {
int temp_data[25] = {1, 1, 1, 1, 1, 1, 1, 2, 1, 1,
1, 2, 3, 1, 2, 1, 1, 1, 1, 2};
std::copy(std::begin(temp_data), std::end(temp_data),
std::begin(cluster_5x5.data));
cluster_5x5.x = 5;
cluster_5x5.y = 5;
int temp_data2[9] = {1, 1, 1, 2, 3, 1, 2, 2, 1};
std::copy(std::begin(temp_data2), std::end(temp_data2),
std::begin(cluster_3x3.data));
cluster_3x3.x = 5;
cluster_3x3.y = 5;
}
// void TearDown(::benchmark::State& state) {
// }
};
template <typename T>
Cluster<T, 3, 3, uint16_t> reduce_to_3x3(const Cluster<T, 5, 5, uint16_t> &c) {
Cluster<T, 3, 3, uint16_t> result;
// Write out the sums in the hope that the compiler can optimize this
std::array<T, 9> sum_3x3_subclusters;
// Write out the sums in the hope that the compiler can optimize this
sum_3x3_subclusters[0] = c.data[0] + c.data[1] + c.data[2] + c.data[5] +
c.data[6] + c.data[7] + c.data[10] + c.data[11] +
c.data[12];
sum_3x3_subclusters[1] = c.data[1] + c.data[2] + c.data[3] + c.data[6] +
c.data[7] + c.data[8] + c.data[11] + c.data[12] +
c.data[13];
sum_3x3_subclusters[2] = c.data[2] + c.data[3] + c.data[4] + c.data[7] +
c.data[8] + c.data[9] + c.data[12] + c.data[13] +
c.data[14];
sum_3x3_subclusters[3] = c.data[5] + c.data[6] + c.data[7] + c.data[10] +
c.data[11] + c.data[12] + c.data[15] + c.data[16] +
c.data[17];
sum_3x3_subclusters[4] = c.data[6] + c.data[7] + c.data[8] + c.data[11] +
c.data[12] + c.data[13] + c.data[16] + c.data[17] +
c.data[18];
sum_3x3_subclusters[5] = c.data[7] + c.data[8] + c.data[9] + c.data[12] +
c.data[13] + c.data[14] + c.data[17] + c.data[18] +
c.data[19];
sum_3x3_subclusters[6] = c.data[10] + c.data[11] + c.data[12] + c.data[15] +
c.data[16] + c.data[17] + c.data[20] + c.data[21] +
c.data[22];
sum_3x3_subclusters[7] = c.data[11] + c.data[12] + c.data[13] + c.data[16] +
c.data[17] + c.data[18] + c.data[21] + c.data[22] +
c.data[23];
sum_3x3_subclusters[8] = c.data[12] + c.data[13] + c.data[14] + c.data[17] +
c.data[18] + c.data[19] + c.data[22] + c.data[23] +
c.data[24];
auto index = std::max_element(sum_3x3_subclusters.begin(),
sum_3x3_subclusters.end()) -
sum_3x3_subclusters.begin();
switch (index) {
case 0:
result.x = c.x - 1;
result.y = c.y + 1;
result.data = {c.data[0], c.data[1], c.data[2], c.data[5], c.data[6],
c.data[7], c.data[10], c.data[11], c.data[12]};
break;
case 1:
result.x = c.x;
result.y = c.y + 1;
result.data = {c.data[1], c.data[2], c.data[3], c.data[6], c.data[7],
c.data[8], c.data[11], c.data[12], c.data[13]};
break;
case 2:
result.x = c.x + 1;
result.y = c.y + 1;
result.data = {c.data[2], c.data[3], c.data[4], c.data[7], c.data[8],
c.data[9], c.data[12], c.data[13], c.data[14]};
break;
case 3:
result.x = c.x - 1;
result.y = c.y;
result.data = {c.data[5], c.data[6], c.data[7],
c.data[10], c.data[11], c.data[12],
c.data[15], c.data[16], c.data[17]};
break;
case 4:
        result.x = c.x;
result.y = c.y;
result.data = {c.data[6], c.data[7], c.data[8],
c.data[11], c.data[12], c.data[13],
c.data[16], c.data[17], c.data[18]};
break;
case 5:
result.x = c.x + 1;
result.y = c.y;
result.data = {c.data[7], c.data[8], c.data[9],
c.data[12], c.data[13], c.data[14],
c.data[17], c.data[18], c.data[19]};
break;
case 6:
        result.x = c.x - 1;
result.y = c.y - 1;
result.data = {c.data[10], c.data[11], c.data[12],
c.data[15], c.data[16], c.data[17],
c.data[20], c.data[21], c.data[22]};
break;
case 7:
        result.x = c.x;
result.y = c.y - 1;
result.data = {c.data[11], c.data[12], c.data[13],
c.data[16], c.data[17], c.data[18],
c.data[21], c.data[22], c.data[23]};
break;
case 8:
result.x = c.x + 1;
result.y = c.y - 1;
result.data = {c.data[12], c.data[13], c.data[14],
c.data[17], c.data[18], c.data[19],
c.data[22], c.data[23], c.data[24]};
break;
}
return result;
}
BENCHMARK_F(ClustersForReduceFixture, Reduce2x2)(benchmark::State &st) {
for (auto _ : st) {
// This code gets timed
benchmark::DoNotOptimize(reduce_to_2x2<int, 3, 3, uint16_t>(
cluster_3x3)); // make sure compiler evaluates the expression
}
}
BENCHMARK_F(ClustersForReduceFixture, SpecificReduce2x2)(benchmark::State &st) {
for (auto _ : st) {
// This code gets timed
benchmark::DoNotOptimize(reduce_to_2x2<int>(cluster_3x3));
}
}
BENCHMARK_F(ClustersForReduceFixture, Reduce3x3)(benchmark::State &st) {
for (auto _ : st) {
// This code gets timed
benchmark::DoNotOptimize(
reduce_to_3x3<int, 5, 5, uint16_t>(cluster_5x5));
}
}
BENCHMARK_F(ClustersForReduceFixture, SpecificReduce3x3)(benchmark::State &st) {
for (auto _ : st) {
// This code gets timed
benchmark::DoNotOptimize(reduce_to_3x3<int>(cluster_5x5));
}
}

View File

@@ -3,3 +3,14 @@ python:
- 3.12
- 3.13
c_compiler:
- gcc # [linux]
c_stdlib:
- sysroot # [linux]
cxx_compiler:
- gxx # [linux]
c_stdlib_version: # [linux]
- 2.17 # [linux]

View File

@@ -16,6 +16,8 @@ build:
requirements:
build:
- {{ compiler('c') }}
- {{ stdlib("c") }}
- {{ compiler('cxx') }}
- cmake
- ninja
@@ -47,4 +49,5 @@ test:
- python -m pytest python/tests
about:
license: SPDX-License-Identifier MPL-2.0
summary: Data analysis library for hybrid pixel detectors from PSI

View File

@@ -1,3 +1,4 @@
# SPDX-License-Identifier: MPL-2.0
find_package(Doxygen REQUIRED)
find_package(Sphinx REQUIRED)
@@ -11,14 +12,19 @@ set(SPHINX_SOURCE ${CMAKE_CURRENT_SOURCE_DIR}/src)
set(SPHINX_BUILD ${CMAKE_CURRENT_BINARY_DIR})
file(GLOB SPHINX_SOURCE_FILES CONFIGURE_DEPENDS "src/*.rst")
file(GLOB_RECURSE SPHINX_SOURCE_FILES
CONFIGURE_DEPENDS
RELATIVE "${CMAKE_CURRENT_SOURCE_DIR}/src"
"${CMAKE_CURRENT_SOURCE_DIR}/src/*.rst"
)
foreach(relpath IN LISTS SPHINX_SOURCE_FILES)
set(src "${CMAKE_CURRENT_SOURCE_DIR}/src/${relpath}")
set(dst "${SPHINX_BUILD}/src/${relpath}")
foreach(filename ${SPHINX_SOURCE_FILES})
get_filename_component(fname ${filename} NAME)
message(STATUS "Copying ${filename} to ${SPHINX_BUILD}/src/${fname}")
configure_file(${filename} "${SPHINX_BUILD}/src/${fname}")
endforeach(filename ${SPHINX_SOURCE_FILES})
message(STATUS "Copying ${src} to ${dst}")
configure_file("${src}" "${dst}" COPYONLY)
endforeach()
configure_file(
"${CMAKE_CURRENT_SOURCE_DIR}/conf.py.in"
@@ -26,6 +32,8 @@ configure_file(
@ONLY
)
file(COPY "${CMAKE_CURRENT_SOURCE_DIR}/figures"
DESTINATION "${SPHINX_BUILD}")
configure_file(
"${CMAKE_CURRENT_SOURCE_DIR}/static/extra.css"
@@ -44,12 +52,3 @@ add_custom_target(
COMMENT "Generating documentation with Sphinx"
)
add_custom_target(
rst
COMMAND ${SPHINX_EXECUTABLE} -a -b html
-Dbreathe_projects.aare=${CMAKE_CURRENT_BINARY_DIR}/xml
-c "${SPHINX_BUILD}"
${SPHINX_BUILD}/src
${SPHINX_BUILD}/html
COMMENT "Generating documentation with Sphinx"
)

View File

@@ -886,7 +886,7 @@ EXCLUDE_SYMLINKS = NO
# Note that the wildcards are matched against the file with absolute path, so to
# exclude all test directories for example use the pattern */test/*
EXCLUDE_PATTERNS = */docs/* */tests/* */python/* */manual */slsDetectorServers/* */libs/* */integrationTests *README* */slsDetectorGui/* */ctbGui/* */slsDetectorCalibration/* *TobiSchluter*
EXCLUDE_PATTERNS = *build* */docs/* */tests/* *.test.cpp* */python/* */manual */slsDetectorServers/* */libs/* */integrationTests *README* *_deps* *TobiSchluter*
# The EXCLUDE_SYMBOLS tag can be used to specify one or more symbol names
# (namespaces, classes, functions, etc.) that should be excluded from the

BIN
docs/figures/Eta2x2.pdf Normal file

Binary file not shown.

BIN
docs/figures/Eta2x2.png Normal file

Binary file not shown.


BIN
docs/figures/Eta2x2Full.pdf Normal file

Binary file not shown.

BIN
docs/figures/Eta2x2Full.png Normal file

Binary file not shown.


BIN
docs/figures/Eta3x3.pdf Normal file

Binary file not shown.

BIN
docs/figures/Eta3x3.png Normal file

Binary file not shown.


Binary file not shown.

Binary file not shown.


15
docs/src/Cluster.rst Normal file
View File

@@ -0,0 +1,15 @@
Cluster
========
.. doxygenstruct:: aare::Cluster
:members:
:undoc-members:
:private-members:
**Free Functions:**
.. doxygenfunction:: aare::reduce_to_3x3(const Cluster<T, ClusterSizeX, ClusterSizeY, CoordType>&)
.. doxygenfunction:: aare::reduce_to_2x2(const Cluster<T, ClusterSizeX, ClusterSizeY, CoordType>&)
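A minimal usage sketch of the reduction functions (the values are illustrative; the include path follows ``include/aare/Cluster.hpp``):

.. code-block:: cpp

    #include "aare/Cluster.hpp"

    int main() {
        // Reduce a 3x3 cluster to the 2x2 subcluster with the highest sum.
        aare::Cluster<int, 3, 3> cl{};
        cl.x = 100;
        cl.y = 200;
        cl.data = {0, 1, 0,
                   1, 5, 4,
                   0, 3, 6};

        auto reduced = aare::reduce_to_2x2(cl);
        // reduced.data holds the selected 2x2 block; the cluster coordinates
        // are kept (reduced.x == 100, reduced.y == 200).
        return 0;
    }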

View File

@@ -4,4 +4,5 @@ ClusterFile
.. doxygenclass:: aare::ClusterFile
:members:
:undoc-members:
:private-members:
:private-members:

View File

@@ -3,4 +3,20 @@ ClusterVector
.. doxygenclass:: aare::ClusterVector
:members:
:undoc-members:
:undoc-members:
:private-members:
.. doxygenclass:: aare::ClusterVector< Cluster< T, ClusterSizeX, ClusterSizeY, CoordType > >
:members:
:undoc-members:
:private-members:
**Free Functions:**
.. doxygenfunction:: aare::reduce_to_3x3(const ClusterVector<Cluster<T, ClusterSizeX, ClusterSizeY, CoordType>>&)
.. doxygenfunction:: aare::reduce_to_2x2(const ClusterVector<Cluster<T, ClusterSizeX, ClusterSizeY, CoordType>>&)

View File

@@ -1,8 +0,0 @@
Hdf5File
===============
.. doxygenclass:: aare::Hdf5File
:members:
:undoc-members:
:private-members:

View File

@@ -1,14 +0,0 @@
Hdf5MasterFile
===============
.. doxygenclass:: aare::Hdf5MasterFile
:members:
:undoc-members:
:private-members:
.. doxygenclass:: aare::Hdf5FileNameComponents
:members:
:undoc-members:
:private-members:

160
docs/src/Interpolation.rst Normal file
View File

@@ -0,0 +1,160 @@
.. _Interpolation_C++API:
Interpolation
==============
The Interpolation class implements the :math:`\eta`-interpolation method.
This interpolation technique is based on charge sharing: for detected photon hits (e.g. clusters), it refines the estimated photon hit using information from neighboring pixels.
The method relies on the so-called :math:`\eta`-functions, which describe the relationship between the energy measured in the central cluster pixel (the initially estimated photon hit) and the energies measured in its neighboring pixels.
Depending on how much energy each neighboring pixel receives relative to the central pixel, the estimated photon hit is shifted toward that neighbor, yielding the interpolated photon position :math:`(x, y)` within the pixel.
The mapping between the :math:`\eta` values and the corresponding spatial photon position :math:`(x,y)` can be viewed as an optimal transport problem.
One can readily compute the probability distribution :math:`P_{\eta}` of the :math:`\eta` values by forming a 2D histogram.
However, the probability distribution :math:`P_{x,y}` of the true photon positions is generally unknown unless the detector is illuminated uniformly (i.e. under flat-field conditions).
In a flat-field, the photon positions are uniformly distributed.
With this assumption, the problem reduces to determining a transport map :math:`T:(\eta_x,\eta_y) \rightarrow (x,y)` that pushes forward the distribution of :math:`(\eta_x, \eta_y)` to the known uniform distribution of photon positions in a flat-field.
The map :math:`T` is given by:
.. math::
\begin{align*}
T_1: & F_{x}^{-1} F_{\eta_x|\eta_y} \\
T_2: & F_{y}^{-1} F_{\eta_y|\eta_x},
\end{align*}
where :math:`F_{\eta_x|\eta_y}` and :math:`F_{\eta_y|\eta_x}` are the conditional cumulative distribution functions e.g. :math:`F_{\eta_x|\eta_y}(\eta_x', \eta_y') = P_{\eta_x, \eta_y}(\eta_x \leq \eta_x' | \eta_y = \eta_y')`.
Here :math:`F_{x}` and :math:`F_{y}` are the cumulative distribution functions of :math:`x` and :math:`y`. Note that, since :math:`x` and :math:`y` are uniformly distributed, :math:`F_{x}` and :math:`F_{y}` are identity functions. The map :math:`T` thus simplifies to
.. math::
\begin{align*}
T_1: & F_{\eta_x|\eta_y} \\
T_2: & F_{\eta_y|\eta_x}.
\end{align*}
Note that in the implementation :math:`P_{\eta}` is a joint distribution not only of :math:`\eta_x` and :math:`\eta_y` but also of the estimated photon energy :math:`e`.
The energy correlates slightly with the z-depth: a larger z-depth leads to more charge sharing and therefore a different :math:`\eta` distribution. Hence a separate mapping :math:`T` is created for each energy bin.
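To make the construction above concrete, the following stand-alone sketch builds the table for :math:`T_1 = F_{\eta_x|\eta_y}` from a 2D histogram of :math:`\eta` values. It uses plain ``std::vector`` and illustrative names (``conditional_cdf_x``, ``hist``); it is not the library implementation.

.. code-block:: cpp

    #include <cstddef>
    #include <vector>

    // hist[iy][ix] holds the number of eta entries in bin (eta_x = ix, eta_y = iy).
    // For each eta_y row the cumulative fraction of counts along eta_x is computed.
    // Under the flat-field assumption this fraction is directly the interpolated
    // x position within the pixel, i.e. T_1 = F_{eta_x | eta_y}.
    std::vector<std::vector<double>>
    conditional_cdf_x(const std::vector<std::vector<double>> &hist) {
        std::vector<std::vector<double>> cdf(hist.size());
        for (size_t iy = 0; iy < hist.size(); ++iy) {
            const auto &row = hist[iy];
            double total = 0.0;
            for (double v : row)
                total += v;
            cdf[iy].assign(row.size(), 0.0);
            double running = 0.0;
            for (size_t ix = 0; ix < row.size(); ++ix) {
                running += row[ix];
                cdf[iy][ix] = (total > 0.0) ? running / total : 0.0;
            }
        }
        return cdf;
    }

The table for :math:`T_2 = F_{\eta_y|\eta_x}` is built the same way with the two axes swapped, and in practice one pair of tables is built per energy bin.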
:math:`\eta`-Functions:
---------------------------
.. doxygenstruct:: aare::Eta2
:members:
:undoc-members:
:private-members:
.. note::
The corner value ``c`` is only relevant when one uses ``calculate_eta2`` or ``calculate_full_eta2``. Otherwise its default value is ``cTopLeft``.
Supported are the following :math:`\eta`-functions:
:math:`\eta`-Function on 2x2 Clusters:
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. image:: ../figures/Eta2x2.png
:target: ../figures/Eta2x2.png
:width: 650px
:align: center
:alt: Eta2x2
.. math::
\begin{equation*}
{\color{blue}{\eta_x}} = \frac{Q_{1,1}}{Q_{1,0} + Q_{1,1}} \quad \quad
{\color{green}{\eta_y}} = \frac{Q_{1,1}}{Q_{0,1} + Q_{1,1}}
\end{equation*}
The :math:`\eta` values can range between 0 and 1. Note they only range between 0 and 1 because the position of the center pixel (red) can change.
If the center pixel is in the bottom left, :math:`\eta_x` will be close to zero; if the center pixel is in the bottom right, :math:`\eta_y` will be close to 1.
One can apply this :math:`\eta` not only to 2x2 clusters but to clusters of any size. In that case the 2x2 subcluster with maximum energy is chosen and the :math:`\eta` function is applied to that subcluster, as shown in the sketch below.
.. doxygenfunction:: aare::calculate_eta2(const ClusterVector<ClusterType>&)
.. doxygenfunction:: aare::calculate_eta2(const Cluster<T, ClusterSizeX, ClusterSizeY, CoordType>&)
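A minimal usage sketch (the numbers are made up; only ``aare/Cluster.hpp`` is shown as include, the header declaring the :math:`\eta` functions is assumed to be included as well):

.. code-block:: cpp

    #include "aare/Cluster.hpp"
    // Note: calculate_eta2 is declared in a separate aare header (assumed included).

    int main() {
        aare::Cluster<int, 3, 3> cl{};
        cl.x = 10; // cluster center in pixel coordinates
        cl.y = 20;
        cl.data = {1, 2, 1,
                   2, 9, 3,
                   1, 3, 2};

        // Picks the 2x2 subcluster with the highest sum and computes eta within it.
        auto eta = aare::calculate_eta2(cl);
        // eta.x and eta.y lie in [0, 1], eta.sum is the subcluster sum and
        // eta.c indicates which corner subcluster was used.
        return 0;
    }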
Full :math:`\eta`-Function on 2x2 Clusters:
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. image:: ../figures/Eta2x2Full.png
:target: ../figures/Eta2x2Full.png
:width: 650px
:align: center
:alt: Eta2x2 Full
.. math::
\begin{equation*}
{\color{blue}{\eta_x}} = \frac{Q_{0,1} + Q_{1,1}}{\sum_{i=0}^{1}\sum_{j=0}^{1}Q_{i,j}} \quad \quad
{\textcolor{green}{\eta_y}} = \frac{Q_{1,0} + Q_{1,1}}{\sum_{i=0}^{1}\sum_{j=0}^{1}Q_{i,j}}
\end{equation*}
The :math:`\eta` values can range between 0 and 1. Note they only range between 0 and 1 because the position of the center pixel (red) can change.
If the center pixel is in the bottom left, :math:`\eta_x` will be close to zero; if the center pixel is in the bottom right, :math:`\eta_y` will be close to 1.
.. doxygenfunction:: aare::calculate_full_eta2(const ClusterVector<ClusterType>&)
.. doxygenfunction:: aare::calculate_full_eta2(const Cluster<T, ClusterSizeX, ClusterSizeY, CoordType>&)
Full :math:`\eta`-Function on 3x3 Clusters:
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. image:: ../figures/Eta3x3.png
:target: ../figures/Eta3x3.png
:width: 650px
:align: center
:alt: Eta3x3
.. math::
\begin{equation*}
{\color{blue}{\eta_x}} = \frac{\sum_{i=0}^{2} Q_{i,2} - \sum_{i=0}^{2} Q_{i,0}}{\sum_{i=0}^{2}\sum_{j=0}^{2} Q_{i,j}} \quad \quad
{\color{green}{\eta_y}} = \frac{\sum_{j=0}^{2} Q_{2,j} - \sum_{j=0}^{2} Q_{0,j}}{\sum_{i=0}^{2}\sum_{j=0}^{2} Q_{i,j}}
\end{equation*}
The :math:`\eta` values can range between -0.5 and 0.5.
.. doxygenfunction:: aare::calculate_eta3(const ClusterVector<ClusterType>&)
.. doxygenfunction:: aare::calculate_eta3(const Cluster<T, ClusterSizeX, ClusterSizeY, CoordType>&)
Cross :math:`\eta`-Function on 3x3 Clusters:
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. image:: ../figures/Eta3x3Cross.png
:target: ../figures/Eta3x3Cross.png
:width: 650px
:align: center
:alt: Cross Eta3x3
.. math::
\begin{equation*}
{\color{blue}{\eta_x}} = \frac{Q_{1,2} - Q_{1,0}}{Q_{1,0} + Q_{1,1} + Q_{1,2}} \quad \quad
{\color{green}{\eta_y}} = \frac{Q_{2,1} - Q_{0,1}}{Q_{0,1} + Q_{1,1} + Q_{2,1}}
\end{equation*}
The :math:`\eta` values can range between -0.5 and 0.5.
.. doxygenfunction:: aare::calculate_cross_eta3(const ClusterVector<ClusterType>&)
.. doxygenfunction:: aare::calculate_cross_eta3(const Cluster<T, ClusterSizeX, ClusterSizeY, CoordType>&)
Interpolation class:
---------------------
.. warning::
The interpolation might lead to erroneous photon positions for clusters at the borders of a frame. Make sure to filter out such cases.
.. Warning::
Make sure to use the same :math:`\eta`-function during interpolation as given by the joint :math:`\eta`-distribution passed to the constructor.
.. Note::
Make sure to use reasonable energy bins when constructing the joint distribution. If the data is too sparse for a given energy, the interpolation will lead to erroneous results.
.. doxygenclass:: aare::Interpolator
:members:
:undoc-members:
:private-members:

View File

@@ -2,7 +2,7 @@
Tests
****************
We test the code both from the C++ and Python API. By default only tests that does not require image data is run.
We test the code both from C++ and Python. By default only tests that do not require additional data are run.
C++
~~~~~~~~~~~~~~~~~~
@@ -15,7 +15,7 @@ C++
make -j 4
export AARE_TEST_DATA=/path/to/test/data
./run_test [.files] #or using ctest, [.files] is the option to include tests needing data
./run_test [.with-data] #or using ctest, [.with-data] is the option to include tests needing data
@@ -25,7 +25,7 @@ Python
.. code-block:: bash
#From the root dir of the library
python -m pytest python/tests --files # passing --files will run the tests needing data
python -m pytest python/tests --with-data # passing --with-data will run the tests needing data
@@ -35,7 +35,7 @@ Getting the test data
.. attention ::
The tests needing the test data are not run by default. To make the data available, you need to set the environment variable
AARE_TEST_DATA to the path of the test data directory. Then pass either [.files] for the C++ tests or --files for Python
AARE_TEST_DATA to the path of the test data directory. Then pass either [.with-data] for the C++ tests or --with-data for Python
The image files needed for the test are large and are not included in the repository. They are stored
using GIT LFS in a separate repository. To get the test data, you need to clone the repository.

View File

@@ -37,7 +37,7 @@ unfamiliar steps.
Checklists for deployment
~~~~~~~~~~~~~~~~~~
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
**Feature:**

View File

@@ -22,20 +22,14 @@ AARE
.. toctree::
:caption: Python API
:maxdepth: 1
pyFile
pyCtbRawFile
pyClusterFile
pyClusterVector
pyJungfrauDataFile
pyRawFile
pyRawMasterFile
pyHdf5File
pyHdf5MasterFile
pyVarClusterFinder
:maxdepth: 3
:hidden:
pycalibration
python/cluster/index
python/file/index
pyFit
.. toctree::
@@ -48,17 +42,17 @@ AARE
Frame
File
Dtype
Cluster
ClusterFinder
ClusterFinderMT
ClusterFile
ClusterVector
Interpolation
JungfrauDataFile
Pedestal
RawFile
RawSubFile
RawMasterFile
Hdf5File
Hdf5MasterFile
VarClusterFinder

View File

@@ -1,11 +0,0 @@
ClusterFile
============
.. py:currentmodule:: aare
.. autoclass:: ClusterFile
:members:
:undoc-members:
:show-inheritance:
:inherited-members:

View File

@@ -1,11 +0,0 @@
CtbRawFile
============
.. py:currentmodule:: aare
.. autoclass:: CtbRawFile
:members:
:undoc-members:
:show-inheritance:
:inherited-members:

View File

@@ -1,10 +0,0 @@
Hdf5File
===================
.. py:currentmodule:: aare
.. autoclass:: Hdf5File
:members:
:undoc-members:
:show-inheritance:
:inherited-members:

View File

@@ -1,10 +0,0 @@
Hdf5MasterFile
===================
.. py:currentmodule:: aare
.. autoclass:: Hdf5MasterFile
:members:
:undoc-members:
:show-inheritance:
:inherited-members:

View File

@@ -0,0 +1,40 @@
Calibration
==============
Functions for applying calibration to data.
.. code-block:: python
import aare
# Load calibration data for a single JF module (512x1024 pixels)
calibration = aare.load_calibration('path/to/calibration/file.bin')
raw_data = ... # Load your raw data here
pedestal = ... # Load your pedestal data here
# Apply calibration to raw data to convert from raw ADC values to keV
data = aare.apply_calibration(raw_data, pd=pedestal, cal=calibration)
# If you pass a 2D pedestal and calibration only G0 will be used for the conversion
# Pixels that switched to G1 or G2 will be set to 0
data = aare.apply_calibration(raw_data, pd=pedestal[0], cal=calibration[0])
.. py:currentmodule:: aare
.. autofunction:: apply_calibration
.. autofunction:: load_calibration
.. autofunction:: calculate_pedestal
.. autofunction:: calculate_pedestal_float
.. autofunction:: calculate_pedestal_g0
.. autofunction:: calculate_pedestal_g0_float
.. autofunction:: count_switching_pixels

View File

@@ -0,0 +1,11 @@
Cluster & Interpolation
==========================
.. toctree::
:caption: Cluster & Interpolation
:maxdepth: 1
pyCluster
pyClusterVector
pyInterpolation
pyVarClusterFinder

View File

@@ -0,0 +1,23 @@
Cluster
========
.. py:currentmodule:: aare
.. autoclass:: Cluster
:members:
:undoc-members:
:inherited-members:
Below is the API of a cluster of size :math:`3\times 3` and type ``int`` but all variants share the same API.
.. autoclass:: aare._aare.Cluster3x3i
:special-members: __init__
:members:
:undoc-members:
:show-inheritance:
:inherited-members:
.. note::
More functions can be found in the :ref:`ClusterVector <py_clustervector>` documentation. Generally apply functions directly on the ``ClusterVector`` instead of looping over individual clusters.

View File

@@ -1,9 +1,13 @@
.. _py_clustervector:
ClusterVector
================
The ClusterVector holds clusters from the ClusterFinder. Since it is templated
in C++ we use a suffix indicating the data type in python. The suffix is
``_i`` for integer, ``_f`` for float, and ``_d`` for double.
in C++ we use a suffix indicating the type of cluster it holds. The suffix follows
the same pattern as for ClusterFile i.e. ``ClusterVector_Cluster3x3i``
for a vector holding 3x3 integer clusters.
At the moment the functionality from python is limited and it is not supported
to push_back clusters to the vector. The intended use case is to pass it to
@@ -26,8 +30,29 @@ C++ functions that support the ClusterVector or to view it as a numpy array.
.. py:currentmodule:: aare
.. autoclass:: ClusterVector_i
.. autoclass:: ClusterVector
:members:
:undoc-members:
:inherited-members:
Below is the API of the ClusterVector_Cluster3x3i but all variants share the same API.
.. autoclass:: aare._aare.ClusterVector_Cluster3x3i
:special-members: __init__
:members:
:undoc-members:
:show-inheritance:
:inherited-members:
:inherited-members:
**Free Functions:**
.. autofunction:: reduce_to_3x3
:noindex:
Reduce a single Cluster to 3x3 by taking the 3x3 subcluster with highest photon energy.
.. autofunction:: reduce_to_2x2
:noindex:
Reduce a single Cluster to 2x2 by taking the 2x2 subcluster with highest photon energy.

View File

@@ -0,0 +1,124 @@
Interpolation
==============
The Interpolation class implements the :math:`\eta`-interpolation method.
This interpolation technique is based on charge sharing: for detected photon hits (e.g. clusters), it refines the estimated photon hit using information from neighboring pixels.
See :ref:`Interpolation_C++API` for a more elaborate documentation and explanation of the method.
:math:`\eta`-Functions:
--------------------------
Below is the API of the Eta class of type ``double`` (``Etad``); the ``float`` (``Etaf``) and ``int`` (``Etai``) variants share the same API.
.. autoclass:: aare._aare.Etad
:members:
:private-members:
.. note::
The corner value ``c`` is only relevant when one uses ``calculate_eta2`` or ``calculate_full_eta2``. Otherwise its default value is ``cTopLeft``.
Supported are the following :math:`\eta`-functions:
:math:`\eta`-Function on 2x2 Clusters:
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. py:currentmodule:: aare
.. image:: ../../../figures/Eta2x2.png
:target: ../../../figures/Eta2x2.png
:width: 650px
:align: center
:alt: Eta2x2
.. math::
\begin{equation*}
{\color{blue}{\eta_x}} = \frac{Q_{1,1}}{Q_{1,0} + Q_{1,1}} \quad \quad
{\color{green}{\eta_y}} = \frac{Q_{1,1}}{Q_{0,1} + Q_{1,1}}
\end{equation*}
The :math:`\eta` values can range between 0 and 1. Note they only range between 0 and 1 because the position of the center pixel (red) can change.
If the center pixel is in the bottom left, :math:`\eta_x` will be close to zero; if the center pixel is in the bottom right, :math:`\eta_y` will be close to 1.
.. autofunction:: calculate_eta2
Full :math:`\eta`-Function on 2x2 Clusters:
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. image:: ../../../figures/Eta2x2Full.png
:target: ../../../figures/Eta2x2Full.png
:width: 650px
:align: center
:alt: Eta2x2 Full
.. math::
\begin{equation*}
{\color{blue}{\eta_x}} = \frac{Q_{0,1} + Q_{1,1}}{\sum_{i=0}^{1}\sum_{j=0}^{1}Q_{i,j}} \quad \quad
{\textcolor{green}{\eta_y}} = \frac{Q_{1,0} + Q_{1,1}}{\sum_{i=0}^{1}\sum_{j=0}^{1}Q_{i,j}}
\end{equation*}
The :math:`\eta` values can range between 0 and 1. Note they only range between 0 and 1 because the position of the center pixel (red) can change.
If the center pixel is in the bottom left, :math:`\eta_x` will be close to zero; if the center pixel is in the bottom right, :math:`\eta_y` will be close to 1.
.. autofunction:: calculate_full_eta2
Full :math:`\eta`-Function on 3x3 Clusters:
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. image:: ../../../figures/Eta3x3.png
:target: ../../../figures/Eta3x3.png
:width: 650px
:align: center
:alt: Eta3x3
.. math::
\begin{equation*}
{\color{blue}{\eta_x}} = \frac{\sum_{i=0}^{2} Q_{i,2} - \sum_{i=0}^{2} Q_{i,0}}{\sum_{i=0}^{2}\sum_{j=0}^{2} Q_{i,j}} \quad \quad
{\color{green}{\eta_y}} = \frac{\sum_{j=0}^{2} Q_{2,j} - \sum_{j=0}^{2} Q_{0,j}}{\sum_{i=0}^{2}\sum_{j=0}^{2} Q_{i,j}}
\end{equation*}
The :math:`\eta` values can range between -0.5 and 0.5.
.. autofunction:: calculate_eta3
Cross :math:`\eta`-Function on 3x3 Clusters:
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. image:: ../../../figures/Eta3x3Cross.png
:target: ../../../figures/Eta3x3Cross.png
:width: 650px
:align: center
:alt: Cross Eta3x3
.. math::
\begin{equation*}
{\color{blue}{\eta_x}} = \frac{Q_{1,2} - Q_{1,0}}{Q_{1,0} + Q_{1,1} + Q_{1,2}} \quad \quad
{\color{green}{\eta_y}} = \frac{Q_{2,1} - Q_{0,1}}{Q_{0,1} + Q_{1,1} + Q_{2,1}}
\end{equation*}
The :math:`\eta` values can range between -0.5 and 0.5.
.. autofunction:: calculate_cross_eta3
Interpolation class for :math:`\eta`-Interpolation
----------------------------------------------------
.. Warning::
Make sure to use the same :math:`\eta`-function during interpolation as given by the joint :math:`\eta`-distribution passed to the constructor.
.. Warning::
The interpolation might lead to erroneous photon positions for clusters at the borders of a frame. Make sure to filter out such cases.
.. Note::
Make sure to use reasonable energy bins when constructing the joint distribution. If the data is too sparse for a given energy, the interpolation will lead to erroneous results.
.. py:currentmodule:: aare
.. autoclass:: Interpolator
:special-members: __init__
:members:
:undoc-members:
:inherited-members:

View File

@@ -0,0 +1,14 @@
File I/O
===================
.. toctree::
:caption: File I/O
:maxdepth: 1
pyClusterFile
pyCtbRawFile
pyFile
pyJungfrauDataFile
pyRawFile
pyRawMasterFile
pyTransform

View File

@@ -0,0 +1,26 @@
ClusterFile
============
The :class:`ClusterFile` class is the main interface to read and write clusters in aare. Unfortunately the
old file format does not include metadata like the cluster size and the data type. This means that the
user has to know this information from other sources. Specifying the wrong cluster size or data type
will lead to garbage data being read.
.. py:currentmodule:: aare
.. autoclass:: ClusterFile
:members:
:undoc-members:
:inherited-members:
Below is the API of the ClusterFile_Cluster3x3i but all variants share the same API.
.. autoclass:: aare._aare.ClusterFile_Cluster3x3i
:special-members: __init__
:members:
:undoc-members:
:show-inheritance:
:inherited-members:

View File

@@ -0,0 +1,25 @@
CtbRawFile
============
Read analog, digital and transceiver samples from a raw file containing
data from the Chip Test Board. Uses :mod:`aare.transform` to decode the
data into a format that the user can work with.
.. code:: python
import aare
from aare.transform import Mythen302Transform
my302 = Mythen302Transform(offset = 4)
with aare.CtbRawFile(fname, transform = my302) as f:
for header, data in f:
#do something with the data
.. py:currentmodule:: aare
.. autoclass:: CtbRawFile
:members:
:undoc-members:
:show-inheritance:
:inherited-members:

View File

@@ -0,0 +1,27 @@
Transform
===================
The transform module takes data read by :class:`aare.CtbRawFile` and decodes it
to a useful image format. Depending on the detector it supports both analog
and digital samples.
For convenience the following transform objects are defined with a short name
.. code:: python
moench05 = Moench05Transform()
moench05_1g = Moench05Transform1g()
moench05_old = Moench05TransformOld()
matterhorn02 = Matterhorn02Transform()
adc_sar_04_64to16 = AdcSar04Transform64to16()
adc_sar_05_64to16 = AdcSar05Transform64to16()
.. py:currentmodule:: aare
.. automodule:: aare.transform
:members:
:undoc-members:
:private-members:
:special-members: __call__
:show-inheritance:
:inherited-members:

103
etc/add_license.py Normal file
View File

@@ -0,0 +1,103 @@
#!/usr/bin/env python3
import argparse
import fnmatch
import os
from pathlib import Path
CPP_PATTERNS = ["*.h", "*.hpp", "*.cpp"]
PY_PATTERNS = ["*.py"]
CMAKE_PATTERNS = ["CMakeLists.txt"]
FILE_PATTERNS = CPP_PATTERNS + PY_PATTERNS + CMAKE_PATTERNS
LICENSE_TEXT = "SPDX-License-Identifier: MPL-2.0"
def get_comment_prefix(filename: str) -> str | None:
if any(fnmatch.fnmatch(filename, p) for p in CPP_PATTERNS):
return "// "
if any(fnmatch.fnmatch(filename, p) for p in (PY_PATTERNS + CMAKE_PATTERNS)):
return "# "
return None
def matches_pattern(filename: str) -> bool:
return any(fnmatch.fnmatch(filename, p) for p in FILE_PATTERNS)
def process_file(filepath: Path) -> bool:
filename = filepath.name
prefix = get_comment_prefix(filename)
if not prefix:
return False
license_line = f"{prefix}{LICENSE_TEXT}\n"
try:
with filepath.open("r", encoding="utf-8") as f:
lines = f.readlines()
except Exception as e:
print(f"⚠️ Error reading {filepath}: {e}")
return False
# Skip if SPDX already present anywhere in the file
if any("SPDX-License-Identifier" in line for line in lines):
return False
insert_index = 0
# For Python, keep shebang on the very first line
if filename.endswith(".py") and lines:
if lines[0].startswith("#!"):
insert_index = 1
lines.insert(insert_index, license_line)
try:
with filepath.open("w", encoding="utf-8") as f:
f.writelines(lines)
except Exception as e:
print(f"⚠️ Error writing {filepath}: {e}")
return False
return True
def main() -> None:
parser = argparse.ArgumentParser(
description="Add SPDX-License-Identifier: MPL-2.0 to source files."
)
parser.add_argument(
"path",
help="Root directory to recursively process "
"(*.h, *.cpp, *.py, and CMakeLists.txt).",
)
args = parser.parse_args()
root_path = Path(args.path).expanduser().resolve()
if not root_path.exists():
print(f"Error: Path does not exist: {root_path}")
raise SystemExit(1)
if not root_path.is_dir():
print(f"Error: Path is not a directory: {root_path}")
raise SystemExit(1)
print(f"Processing directory: {root_path}")
modified = 0
for dirpath, _, files in os.walk(root_path):
dirpath = Path(dirpath)
for name in files:
if matches_pattern(name):
fullpath = dirpath / name
if process_file(fullpath):
print(f"✔ Added SPDX: {fullpath}")
modified += 1
print(f"\nDone. Updated {modified} file(s).")
if __name__ == "__main__":
main()

View File

@@ -3,11 +3,16 @@ channels:
- conda-forge
dependencies:
- anaconda-client
- catch2
- conda-build
- doxygen
- sphinx=7.1.2
- sphinx
- breathe
- sphinx_rtd_theme
- furo
- zeromq
- pybind11
- numpy
- matplotlib
- nlohmann_json

View File

@@ -1,4 +1,4 @@
# SPDX-License-Identifier: LGPL-3.0-or-other
# SPDX-License-Identifier: MPL-2.0
# Copyright (C) 2021 Contributors to the Aare Package
"""
Script to update VERSION file with semantic versioning if provided as an argument, or with 0.0.0 if no argument is provided.
@@ -7,11 +7,13 @@ Script to update VERSION file with semantic versioning if provided as an argumen
import sys
import os
import re
from datetime import datetime
from pathlib import Path
from packaging.version import Version, InvalidVersion
SCRIPT_DIR = os.path.dirname(os.path.abspath(__file__))
SCRIPT_DIR = Path(__file__).absolute().parent.parent
def is_integer(value):
try:
@@ -26,9 +28,9 @@ def get_version():
# Check at least one argument is passed
if len(sys.argv) < 2:
return "0.0.0"
version = sys.argv[1]
version = datetime.today().strftime('%Y.%-m.%-d')
else:
version = sys.argv[1]
try:
v = Version(version) # normalize check if version follows PEP 440 specification
@@ -45,7 +47,8 @@ def get_version():
def write_version_to_file(version):
version_file_path = os.path.join(SCRIPT_DIR, "VERSION")
version_file_path = SCRIPT_DIR/"VERSION"
print(version_file_path)
with open(version_file_path, "w") as version_file:
version_file.write(version)
print(f"Version {version} written to VERSION file.")
@@ -54,4 +57,4 @@ def write_version_to_file(version):
if __name__ == "__main__":
version = get_version()
write_version_to_file(version)
write_version_to_file(version)

View File

@@ -1,12 +1,53 @@
// SPDX-License-Identifier: MPL-2.0
#pragma once
#include "aare/defs.hpp"
#include <array>
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <type_traits>
namespace aare {
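// Computes the flat memory offset of an N-dimensional index by recursively
// accumulating i * strides[Dim]; the base case (no indices left) contributes 0.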
template <ssize_t Dim = 0, typename Strides>
ssize_t element_offset(const Strides & /*unused*/) {
return 0;
}
template <ssize_t Dim = 0, typename Strides, typename... Ix>
ssize_t element_offset(const Strides &strides, ssize_t i, Ix... index) {
return i * strides[Dim] + element_offset<Dim + 1>(strides, index...);
}
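// CRTP mixin providing operator() and operator[] element access for any derived
// class that exposes data() and strides().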
template <typename Derived, typename T, ssize_t Ndim>
class NDIndexOps {
public:
template <typename... Ix>
std::enable_if_t<sizeof...(Ix) == Ndim, T &> operator()(Ix... index) {
return derived().data()[element_offset(derived().strides(), index...)];
}
template <typename... Ix>
std::enable_if_t<sizeof...(Ix) == Ndim, const T &> operator()(Ix... index) const {
return derived().data()[element_offset(derived().strides(), index...)];
}
T &operator()(ssize_t i) {
return derived().data()[i];
}
const T &operator()(ssize_t i) const {
return derived().data()[i];
}
T &operator[](ssize_t i) { return derived().data()[i]; }
const T &operator[](ssize_t i) const { return derived().data()[i]; }
private:
Derived &derived() { return static_cast<Derived &>(*this); }
const Derived &derived() const { return static_cast<const Derived &>(*this); }
};
template <typename E, ssize_t Ndim> class ArrayExpr {
public:
static constexpr bool is_leaf = false;
@@ -95,4 +136,4 @@ auto operator/(const ArrayExpr<A, Ndim> &arr1, const ArrayExpr<B, Ndim> &arr2) {
return ArrayDiv<ArrayExpr<A, Ndim>, ArrayExpr<B, Ndim>, Ndim>(arr1, arr2);
}
} // namespace aare
} // namespace aare

View File

@@ -1,18 +1,13 @@
// SPDX-License-Identifier: MPL-2.0
#pragma once
#include "aare/Cluster.hpp"
#include "aare/ClusterVector.hpp"
#include "aare/NDArray.hpp"
#include "aare/defs.hpp"
namespace aare {
enum class corner : int {
cBottomLeft = 0,
cBottomRight = 1,
cTopLeft = 2,
cTopRight = 3
};
enum class pixel : int {
pBottomLeft = 0,
pBottom = 1,
@@ -25,146 +20,428 @@ enum class pixel : int {
pTopRight = 8
};
// TODO: better to have sum after x,y
/**
* eta struct
*/
template <typename T> struct Eta2 {
double x;
double y;
int c;
T sum;
/// @brief eta in x direction
double x{};
/// @brief eta in y direction
double y{};
/// @brief index of subcluster with highest energy value (given as corner
/// relative to cluster center)
corner c{0};
/// @brief photon energy (cluster sum)
T sum{};
};
/**
* @brief Calculate the eta2 values for all clusters in a Clustervector
* @brief Calculate the eta2 values for all clusters in a ClusterVector
*/
template <typename ClusterType,
typename = std::enable_if_t<is_cluster_v<ClusterType>>>
NDArray<double, 2> calculate_eta2(const ClusterVector<ClusterType> &clusters) {
NDArray<double, 2> eta2({static_cast<int64_t>(clusters.size()), 2});
std::vector<Eta2<typename ClusterType::value_type>>
calculate_eta2(const ClusterVector<ClusterType> &clusters) {
std::vector<Eta2<typename ClusterType::value_type>> eta2{};
eta2.reserve(clusters.size());
for (size_t i = 0; i < clusters.size(); i++) {
auto e = calculate_eta2(clusters[i]);
eta2(i, 0) = e.x;
eta2(i, 1) = e.y;
eta2.push_back(e);
}
return eta2;
}
/**
* @brief Calculate the full eta2 values for all clusters in a ClusterVector
*/
template <typename ClusterType,
typename = std::enable_if_t<is_cluster_v<ClusterType>>>
std::vector<Eta2<typename ClusterType::value_type>>
calculate_full_eta2(const ClusterVector<ClusterType> &clusters) {
std::vector<Eta2<typename ClusterType::value_type>> eta2{};
eta2.reserve(clusters.size());
for (size_t i = 0; i < clusters.size(); i++) {
auto e = calculate_full_eta2(clusters[i]);
eta2.push_back(e);
}
return eta2;
}
/**
* @brief Calculate eta3 for all 3x3 clusters in a ClusterVector
*/
template <typename ClusterType,
typename = std::enable_if_t<is_cluster_v<ClusterType>>>
std::vector<Eta2<typename ClusterType::value_type>>
calculate_eta3(const ClusterVector<ClusterType> &clusters) {
std::vector<Eta2<typename ClusterType::value_type>> eta2{};
eta2.reserve(clusters.size());
for (size_t i = 0; i < clusters.size(); i++) {
auto e = calculate_eta3(clusters[i]);
eta2.push_back(e);
}
return eta2;
}
/**
* @brief Calculate cross eta3 for all 3x3 clusters in a ClusterVector
*/
template <typename ClusterType,
typename = std::enable_if_t<is_cluster_v<ClusterType>>>
std::vector<Eta2<typename ClusterType::value_type>>
calculate_cross_eta3(const ClusterVector<ClusterType> &clusters) {
std::vector<Eta2<typename ClusterType::value_type>> eta2{};
eta2.reserve(clusters.size());
for (size_t i = 0; i < clusters.size(); i++) {
auto e = calculate_cross_eta3(clusters[i]);
eta2.push_back(e);
}
return eta2;
}
/**
* @brief helper function to calculate eta2 x and y values
* @param eta reference to the Eta2 object to update
* @param left_x value of the left pixel
* @param right_x value of the right pixel
* @param bottom_y value of the bottom pixel
* @param top_y value of the top pixel
*/
template <typename T>
inline void calculate_eta2(Eta2<T> &eta, const T left_x, const T right_x,
const T bottom_y, const T top_y) {
if ((right_x + left_x) != 0)
eta.x = static_cast<double>(right_x) /
static_cast<double>(right_x + left_x); // between (0,1) the
// closer to zero left
// value probably larger
if ((top_y + bottom_y) != 0)
eta.y = static_cast<double>(top_y) /
static_cast<double>(top_y + bottom_y); // between (0,1) the
// closer to zero bottom
// value probably larger
}
/**
* @brief Calculate the eta2 values for a generic sized cluster and return them
* in a Eta2 struct containing etay, etax and the index of the respective 2x2
* subcluster.
* in a Eta2 struct containing etay, etax and the index (as corner) of the
* respective 2x2 subcluster relative to the cluster center.
*/
template <typename T, uint8_t ClusterSizeX, uint8_t ClusterSizeY,
typename CoordType>
typename CoordType = uint16_t>
Eta2<T>
calculate_eta2(const Cluster<T, ClusterSizeX, ClusterSizeY, CoordType> &cl) {
Eta2<T> eta{};
auto max_sum = cl.max_sum_2x2();
eta.sum = max_sum.first;
auto c = max_sum.second;
static_assert(ClusterSizeX > 1 && ClusterSizeY > 1);
Eta2<T> eta{};
size_t cluster_center_index =
(ClusterSizeX / 2) + (ClusterSizeY / 2) * ClusterSizeX;
size_t index_bottom_left_max_2x2_subcluster =
(int(c / (ClusterSizeX - 1))) * ClusterSizeX + c % (ClusterSizeX - 1);
auto max_sum = cl.max_sum_2x2();
eta.sum = max_sum.sum;
corner c = max_sum.index;
// check that cluster center is in max subcluster
if (cluster_center_index != index_bottom_left_max_2x2_subcluster &&
cluster_center_index != index_bottom_left_max_2x2_subcluster + 1 &&
cluster_center_index !=
index_bottom_left_max_2x2_subcluster + ClusterSizeX &&
cluster_center_index !=
index_bottom_left_max_2x2_subcluster + ClusterSizeX + 1)
throw std::runtime_error("Photon center is not in max 2x2_subcluster");
if ((cluster_center_index - index_bottom_left_max_2x2_subcluster) %
ClusterSizeX ==
0) {
if ((cl.data[cluster_center_index + 1] +
cl.data[cluster_center_index]) != 0)
eta.x = static_cast<double>(cl.data[cluster_center_index + 1]) /
static_cast<double>((cl.data[cluster_center_index + 1] +
cl.data[cluster_center_index]));
} else {
if ((cl.data[cluster_center_index] +
cl.data[cluster_center_index - 1]) != 0)
eta.x = static_cast<double>(cl.data[cluster_center_index]) /
static_cast<double>((cl.data[cluster_center_index - 1] +
cl.data[cluster_center_index]));
// subcluster top right from center
switch (c) {
case (corner::cTopLeft):
calculate_eta2(eta, cl.data[cluster_center_index - 1],
cl.data[cluster_center_index],
cl.data[cluster_center_index - ClusterSizeX],
cl.data[cluster_center_index]);
// dx = -1
// dy = -1
break;
case (corner::cTopRight):
calculate_eta2(eta, cl.data[cluster_center_index],
cl.data[cluster_center_index + 1],
cl.data[cluster_center_index - ClusterSizeX],
cl.data[cluster_center_index]);
// dx = 0
// dy = -1
break;
case (corner::cBottomLeft):
calculate_eta2(eta, cl.data[cluster_center_index - 1],
cl.data[cluster_center_index],
cl.data[cluster_center_index],
cl.data[cluster_center_index + ClusterSizeX]);
// dx = -1
// dy = 0
break;
case (corner::cBottomRight):
calculate_eta2(eta, cl.data[cluster_center_index],
cl.data[cluster_center_index + 1],
cl.data[cluster_center_index],
cl.data[cluster_center_index + ClusterSizeX]);
// dx = 0
// dy = 0
break;
}
if ((cluster_center_index - index_bottom_left_max_2x2_subcluster) /
ClusterSizeX <
1) {
assert(cluster_center_index + ClusterSizeX <
ClusterSizeX * ClusterSizeY); // suppress warning
if ((cl.data[cluster_center_index] +
cl.data[cluster_center_index + ClusterSizeX]) != 0)
eta.y = static_cast<double>(
eta.c = c;
return eta;
}
/**
* @brief Calculate the eta2 values for a generic sized cluster and return them
* in a Eta2 struct containing etay, etax and the index (as corner) of the
* respective 2x2 subcluster relative to the cluster center.
*/
template <typename T, uint8_t ClusterSizeX, uint8_t ClusterSizeY,
typename CoordType>
Eta2<T> calculate_full_eta2(
const Cluster<T, ClusterSizeX, ClusterSizeY, CoordType> &cl) {
static_assert(ClusterSizeX > 1 && ClusterSizeY > 1);
Eta2<T> eta{};
constexpr size_t cluster_center_index =
(ClusterSizeX / 2) + (ClusterSizeY / 2) * ClusterSizeX;
auto max_sum = cl.max_sum_2x2();
eta.sum = max_sum.sum;
corner c = max_sum.index;
// subcluster top right from center
switch (c) {
case (corner::cTopLeft):
if (eta.sum != 0) {
eta.x = static_cast<double>(
cl.data[cluster_center_index] +
cl.data[cluster_center_index - ClusterSizeX]) /
static_cast<double>(eta.sum);
eta.y = static_cast<double>(cl.data[cluster_center_index - 1] +
cl.data[cluster_center_index]) /
static_cast<double>(eta.sum);
}
// dx = -1
// dy = -1
break;
case (corner::cTopRight):
if (eta.sum != 0) {
eta.x = static_cast<double>(
cl.data[cluster_center_index + 1] +
cl.data[cluster_center_index - ClusterSizeX + 1]) /
static_cast<double>(eta.sum);
eta.y = static_cast<double>(cl.data[cluster_center_index] +
cl.data[cluster_center_index + 1]) /
static_cast<double>(eta.sum);
}
// dx = 0
// dy = -1
break;
case (corner::cBottomLeft):
if (eta.sum != 0) {
eta.x = static_cast<double>(
cl.data[cluster_center_index] +
cl.data[cluster_center_index + ClusterSizeX]) /
static_cast<double>(
(cl.data[cluster_center_index] +
cl.data[cluster_center_index + ClusterSizeX]));
} else {
if ((cl.data[cluster_center_index] +
cl.data[cluster_center_index - ClusterSizeX]) != 0)
eta.y = static_cast<double>(cl.data[cluster_center_index]) /
static_cast<double>(
(cl.data[cluster_center_index] +
cl.data[cluster_center_index - ClusterSizeX]));
static_cast<double>(eta.sum);
eta.y = static_cast<double>(
cl.data[cluster_center_index + ClusterSizeX] +
cl.data[cluster_center_index + ClusterSizeX - 1]) /
static_cast<double>(eta.sum);
}
// dx = -1
// dy = 0
break;
case (corner::cBottomRight):
if (eta.sum != 0) {
eta.x = static_cast<double>(
cl.data[cluster_center_index + 1] +
cl.data[cluster_center_index + ClusterSizeX + 1]) /
static_cast<double>(eta.sum);
eta.y = static_cast<double>(
cl.data[cluster_center_index + ClusterSizeX] +
cl.data[cluster_center_index + ClusterSizeX + 1]) /
static_cast<double>(eta.sum);
}
// dx = 0
// dy = 0
break;
}
eta.c = c; // TODO only supported for 2x2 and 3x3 clusters -> at least no
// underyling enum class
eta.c = c;
return eta;
}
// TODO! Look up eta2 calculation - photon center should be top right corner
template <typename T>
Eta2<T> calculate_eta2(const Cluster<T, 2, 2, int16_t> &cl) {
Eta2<T> calculate_eta2(const Cluster<T, 2, 2, uint16_t> &cl) {
Eta2<T> eta{};
if ((cl.data[0] + cl.data[1]) != 0)
eta.x = static_cast<double>(cl.data[1]) / (cl.data[0] + cl.data[1]);
if ((cl.data[0] + cl.data[2]) != 0)
eta.y = static_cast<double>(cl.data[2]) / (cl.data[0] + cl.data[2]);
// TODO: maybe have as member function of cluster
const uint8_t photon_hit_index =
std::max_element(cl.data.begin(), cl.data.end()) - cl.data.begin();
eta.c = static_cast<corner>(3 - photon_hit_index);
switch (eta.c) {
case corner::cTopLeft:
calculate_eta2(eta, cl.data[2], cl.data[3], cl.data[1], cl.data[3]);
break;
case corner::cTopRight:
calculate_eta2(eta, cl.data[2], cl.data[3], cl.data[0], cl.data[2]);
break;
case corner::cBottomLeft:
calculate_eta2(eta, cl.data[0], cl.data[1], cl.data[1], cl.data[3]);
break;
case corner::cBottomRight:
calculate_eta2(eta, cl.data[0], cl.data[1], cl.data[0], cl.data[2]);
break;
}
eta.sum = cl.sum();
eta.c = static_cast<int>(corner::cBottomLeft); // TODO! This is not correct,
// but need to put something
return eta;
}
// calculates Eta3 for 3x3 cluster based on code from analyze_cluster
// TODO only supported for 3x3 Clusters
template <typename T> Eta2<T> calculate_eta3(const Cluster<T, 3, 3> &cl) {
template <typename T>
Eta2<T> calculate_full_eta2(const Cluster<T, 2, 2, uint16_t> &cl) {
Eta2<T> eta{};
T sum = 0;
eta.sum = cl.sum();
std::for_each(std::begin(cl.data), std::end(cl.data),
[&sum](T x) { sum += x; });
const uint8_t photon_hit_index =
std::max_element(cl.data.begin(), cl.data.end()) - cl.data.begin();
eta.sum = sum;
eta.c = static_cast<corner>(3 - photon_hit_index);
eta.c = corner::cBottomLeft;
if (eta.sum != 0) {
eta.x = static_cast<double>(cl.data[1] + cl.data[3]) /
static_cast<double>(eta.sum);
eta.y = static_cast<double>(cl.data[2] + cl.data[3]) /
static_cast<double>(eta.sum);
}
return eta;
}
// TODO generalize
template <typename T>
Eta2<T> calculate_eta2(const Cluster<T, 1, 2, uint16_t> &cl) {
Eta2<T> eta{};
eta.x = 0;
eta.y = static_cast<double>(cl.data[1]) / cl.data[0];
eta.sum = cl.sum();
return eta;
}
template <typename T>
Eta2<T> calculate_eta2(const Cluster<T, 2, 1, uint16_t> &cl) {
Eta2<T> eta{};
eta.x = static_cast<double>(cl.data[1]) / cl.data[0];
eta.y = 0;
eta.sum = cl.sum();
return eta;
}
/**
* @brief calculates cross Eta3 for 3x3 cluster
* cross Eta3 calculates the eta by taking into account only the cross pixels
* {top, bottom, left, right, center}
*/
template <typename T, typename CoordType = uint16_t>
Eta2<T> calculate_cross_eta3(const Cluster<T, 3, 3, CoordType> &cl) {
Eta2<T> eta{};
T photon_energy = cl.sum();
eta.sum = photon_energy;
if ((cl.data[3] + cl.data[4] + cl.data[5]) != 0)
eta.x = static_cast<double>(-cl.data[3] + cl.data[3 + 2]) /
eta.x =
static_cast<double>(-cl.data[3] + cl.data[3 + 2]) /
(cl.data[3] + cl.data[4] + cl.data[5]);
static_cast<double>(cl.data[3] + cl.data[4] + cl.data[5]); // (-1,1)
if ((cl.data[1] + cl.data[4] + cl.data[7]) != 0)
eta.y = static_cast<double>(-cl.data[1] + cl.data[2 * 3 + 1]) /
(cl.data[1] + cl.data[4] + cl.data[7]);
static_cast<double>(cl.data[1] + cl.data[4] + cl.data[7]);
return eta;
}
template <typename T, uint8_t ClusterSizeX, uint8_t ClusterSizeY,
typename CoordType = uint16_t>
Eta2<T> calculate_cross_eta3(
const Cluster<T, ClusterSizeX, ClusterSizeY, CoordType> &cl) {
static_assert(ClusterSizeX > 2 && ClusterSizeY > 2,
"calculate_eta3 only defined for clusters larger than 2x2");
if constexpr (ClusterSizeX != 3 || ClusterSizeY != 3) {
auto reduced_cluster = reduce_to_3x3(cl);
return calculate_cross_eta3(reduced_cluster);
} else {
return calculate_cross_eta3(cl);
}
}
/**
* @brief calculates Eta3 for 3x3 cluster
* It calculates the eta by taking into account all pixels in the 3x3 cluster
*/
template <typename T, typename CoordType = uint16_t>
Eta2<T> calculate_eta3(const Cluster<T, 3, 3, CoordType> &cl) {
Eta2<T> eta{};
T photon_energy = cl.sum();
eta.sum = photon_energy;
// TODO: how do we handle potential arithmetic overflows? - T could be
// uint16
if (photon_energy != 0) {
std::array<T, 2> column_sums{
static_cast<T>(cl.data[0] + cl.data[3] + cl.data[6]),
static_cast<T>(cl.data[2] + cl.data[5] + cl.data[8])};
eta.x = static_cast<double>(-column_sums[0] + column_sums[1]) /
static_cast<double>(photon_energy);
std::array<T, 2> row_sums{
static_cast<T>(cl.data[0] + cl.data[1] + cl.data[2]),
static_cast<T>(cl.data[6] + cl.data[7] + cl.data[8])};
eta.y = static_cast<double>(-row_sums[0] + row_sums[1]) /
static_cast<double>(photon_energy);
}
return eta;
}
template <typename T, uint8_t ClusterSizeX, uint8_t ClusterSizeY,
typename CoordType = uint16_t>
Eta2<T>
calculate_eta3(const Cluster<T, ClusterSizeX, ClusterSizeY, CoordType> &cl) {
static_assert(ClusterSizeX > 2 && ClusterSizeY > 2,
"calculate_eta3 only defined for clusters larger than 2x2");
if constexpr (ClusterSizeX != 3 || ClusterSizeY != 3) {
auto reduced_cluster = reduce_to_3x3(cl);
return calculate_eta3(reduced_cluster);
} else {
return calculate_eta3(cl);
}
}
} // namespace aare

View File

@@ -1,3 +1,4 @@
// SPDX-License-Identifier: MPL-2.0
#pragma once
#include <chrono>

183
include/aare/Cluster.hpp Normal file → Executable file
View File

@@ -1,3 +1,4 @@
// SPDX-License-Identifier: MPL-2.0
/************************************************
* @file Cluster.hpp
@@ -8,6 +9,7 @@
#pragma once
#include "defs.hpp"
#include <algorithm>
#include <array>
#include <cstdint>
@@ -17,8 +19,12 @@
namespace aare {
// requires clause c++20 maybe update
/**
* @brief Cluster struct
*/
template <typename T, uint8_t ClusterSizeX, uint8_t ClusterSizeY,
typename CoordType = int16_t>
typename CoordType = uint16_t>
struct Cluster {
static_assert(std::is_arithmetic_v<T>, "T needs to be an arithmetic type");
@@ -27,8 +33,11 @@ struct Cluster {
static_assert(ClusterSizeX > 0 && ClusterSizeY > 0,
"Cluster sizes must be bigger than zero");
/// @brief Cluster center x coordinate (in pixel coordinates)
CoordType x;
/// @brief Cluster center y coordinate (in pixel coordinates)
CoordType y;
/// @brief Cluster data stored in row-major order starting from top-left
std::array<T, ClusterSizeX * ClusterSizeY> data;
static constexpr uint8_t cluster_size_x = ClusterSizeX;
@@ -36,9 +45,18 @@ struct Cluster {
using value_type = T;
using coord_type = CoordType;
/**
* @brief Sum of all elements in the cluster
*/
T sum() const { return std::accumulate(data.begin(), data.end(), T{}); }
std::pair<T, int> max_sum_2x2() const {
// TODO: handle 1 dimensional clusters
/**
* @brief sum of 2x2 subcluster with highest energy
* @return photon energy of subcluster, 2x2 subcluster index relative to
* cluster center
*/
Sum_index_pair<T, corner> max_sum_2x2() const {
if constexpr (cluster_size_x == 3 && cluster_size_y == 3) {
std::array<T, 4> sum_2x2_subclusters;
@@ -49,31 +67,166 @@ struct Cluster {
int index = std::max_element(sum_2x2_subclusters.begin(),
sum_2x2_subclusters.end()) -
sum_2x2_subclusters.begin();
return std::make_pair(sum_2x2_subclusters[index], index);
return Sum_index_pair<T, corner>{sum_2x2_subclusters[index],
corner{index}};
} else if constexpr (cluster_size_x == 2 && cluster_size_y == 2) {
return std::make_pair(data[0] + data[1] + data[2] + data[3], 0);
return Sum_index_pair<T, corner>{
data[0] + data[1] + data[2] + data[3], corner{0}};
} else {
constexpr size_t num_2x2_subclusters =
(ClusterSizeX - 1) * (ClusterSizeY - 1);
constexpr size_t cluster_center_index =
(ClusterSizeX / 2) + (ClusterSizeY / 2) * ClusterSizeX;
std::array<T, num_2x2_subclusters> sum_2x2_subcluster;
for (size_t i = 0; i < ClusterSizeY - 1; ++i) {
for (size_t j = 0; j < ClusterSizeX - 1; ++j)
sum_2x2_subcluster[i * (ClusterSizeX - 1) + j] =
data[i * ClusterSizeX + j] +
data[i * ClusterSizeX + j + 1] +
data[(i + 1) * ClusterSizeX + j] +
data[(i + 1) * ClusterSizeX + j + 1];
std::array<T, 4> sum_2x2_subcluster{0};
// subcluster top left from center
sum_2x2_subcluster[0] =
data[cluster_center_index] + data[cluster_center_index - 1] +
data[cluster_center_index - ClusterSizeX] +
data[cluster_center_index - 1 - ClusterSizeX];
// subcluster top right from center
if (ClusterSizeX > 2) {
sum_2x2_subcluster[1] =
data[cluster_center_index] +
data[cluster_center_index + 1] +
data[cluster_center_index - ClusterSizeX] +
data[cluster_center_index - ClusterSizeX + 1];
}
// subcluster bottom left from center
if (ClusterSizeY > 2) {
sum_2x2_subcluster[2] =
data[cluster_center_index] +
data[cluster_center_index - 1] +
data[cluster_center_index + ClusterSizeX] +
data[cluster_center_index + ClusterSizeX - 1];
}
// subcluster bottom right from center
if (ClusterSizeX > 2 && ClusterSizeY > 2) {
sum_2x2_subcluster[3] =
data[cluster_center_index] +
data[cluster_center_index + 1] +
data[cluster_center_index + ClusterSizeX] +
data[cluster_center_index + ClusterSizeX + 1];
}
int index = std::max_element(sum_2x2_subcluster.begin(),
sum_2x2_subcluster.end()) -
sum_2x2_subcluster.begin();
return std::make_pair(sum_2x2_subcluster[index], index);
return Sum_index_pair<T, corner>{sum_2x2_subcluster[index],
corner{index}};
}
}
};
/**
* @brief Reduce a cluster to a 2x2 cluster by selecting the 2x2 block with the
* highest sum.
* @param c Cluster to reduce
* @return reduced cluster
* @note The cluster is filled using row major ordering starting at the top-left
* (thus for a max subcluster in the top left corner the photon hit is at
* the fourth position)
*/
template <typename T, uint8_t ClusterSizeX, uint8_t ClusterSizeY,
typename CoordType = uint16_t>
Cluster<T, 2, 2, CoordType>
reduce_to_2x2(const Cluster<T, ClusterSizeX, ClusterSizeY, CoordType> &c) {
static_assert(ClusterSizeX >= 2 && ClusterSizeY >= 2,
"Cluster sizes must be at least 2x2 for reduction to 2x2");
Cluster<T, 2, 2, CoordType> result{};
auto [sum, index] = c.max_sum_2x2();
constexpr int16_t cluster_center_index =
(ClusterSizeX / 2) + (ClusterSizeY / 2) * ClusterSizeX;
int16_t index_top_left_max_2x2_subcluster = cluster_center_index;
switch (index) {
case corner::cTopLeft:
index_top_left_max_2x2_subcluster -= (ClusterSizeX + 1);
break;
case corner::cTopRight:
index_top_left_max_2x2_subcluster -= ClusterSizeX;
break;
case corner::cBottomLeft:
index_top_left_max_2x2_subcluster -= 1;
break;
case corner::cBottomRight:
// no change needed
break;
}
result.x = c.x;
result.y = c.y;
result.data = {
c.data[index_top_left_max_2x2_subcluster],
c.data[index_top_left_max_2x2_subcluster + 1],
c.data[index_top_left_max_2x2_subcluster + ClusterSizeX],
c.data[index_top_left_max_2x2_subcluster + ClusterSizeX + 1]};
return result;
}
template <typename T>
Cluster<T, 2, 2, uint16_t> reduce_to_2x2(const Cluster<T, 3, 3, uint16_t> &c) {
Cluster<T, 2, 2, uint16_t> result{};
auto [s, i] = c.max_sum_2x2();
result.x = c.x;
result.y = c.y;
switch (i) {
case corner::cTopLeft:
result.data = {c.data[0], c.data[1], c.data[3], c.data[4]};
break;
case corner::cTopRight:
result.data = {c.data[1], c.data[2], c.data[4], c.data[5]};
break;
case corner::cBottomLeft:
result.data = {c.data[3], c.data[4], c.data[6], c.data[7]};
break;
case corner::cBottomRight:
result.data = {c.data[4], c.data[5], c.data[7], c.data[8]};
break;
}
return result;
}
/**
* @brief Reduce a cluster to a 3x3 cluster
* @param c Cluster to reduce
* @return reduced cluster
*/
template <typename T, uint8_t ClusterSizeX, uint8_t ClusterSizeY,
typename CoordType = int16_t>
Cluster<T, 3, 3, CoordType>
reduce_to_3x3(const Cluster<T, ClusterSizeX, ClusterSizeY, CoordType> &c) {
static_assert(ClusterSizeX >= 3 && ClusterSizeY >= 3,
"Cluster sizes must be at least 3x3 for reduction to 3x3");
Cluster<T, 3, 3, CoordType> result{};
int16_t cluster_center_index =
(ClusterSizeX / 2) + (ClusterSizeY / 2) * ClusterSizeX;
result.x = c.x;
result.y = c.y;
result.data = {c.data[cluster_center_index - ClusterSizeX - 1],
c.data[cluster_center_index - ClusterSizeX],
c.data[cluster_center_index - ClusterSizeX + 1],
c.data[cluster_center_index - 1],
c.data[cluster_center_index],
c.data[cluster_center_index + 1],
c.data[cluster_center_index + ClusterSizeX - 1],
c.data[cluster_center_index + ClusterSizeX],
c.data[cluster_center_index + ClusterSizeX + 1]};
return result;
}
// Type Traits for is_cluster_type
template <typename T>
struct is_cluster : std::false_type {}; // Default case: Not a Cluster

View File

@@ -1,3 +1,4 @@
// SPDX-License-Identifier: MPL-2.0
#pragma once
#include <atomic>
#include <thread>
@@ -5,6 +6,7 @@
#include "aare/ClusterFinderMT.hpp"
#include "aare/ClusterVector.hpp"
#include "aare/ProducerConsumerQueue.hpp"
#include "aare/defs.hpp"
namespace aare {

View File

@@ -1,3 +1,4 @@
// SPDX-License-Identifier: MPL-2.0
#pragma once
#include "aare/Cluster.hpp"
@@ -14,7 +15,7 @@
namespace aare {
/*
Binary cluster file. Expects data to be layed out as:
Binary cluster file. Expects data to be laid out as:
int32_t frame_number
uint32_t number_of_clusters
int16_t x, int16_t y, int32_t data[9] x number_of_clusters
@@ -189,6 +190,16 @@ class ClusterFile {
}
}
/**
* @brief Return the current position in the file (bytes)
*/
int64_t tell() {
if (!fp) {
throw std::runtime_error(LOCATION + "File not opened");
}
return ftell(fp);
}
/** @brief Open the file in specific mode
*
*/
@@ -354,15 +365,20 @@ template <typename ClusterType, typename Enable>
ClusterVector<ClusterType>
ClusterFile<ClusterType, Enable>::read_frame_without_cut() {
if (m_mode != "r") {
throw std::runtime_error("File not opened for reading");
throw std::runtime_error(LOCATION + "File not opened for reading");
}
if (m_num_left) {
throw std::runtime_error(
"There are still photons left in the last frame");
LOCATION + "There are still photons left in the last frame");
}
int32_t frame_number;
if (fread(&frame_number, sizeof(frame_number), 1, fp) != 1) {
throw std::runtime_error(LOCATION + "Could not read frame number");
if (feof(fp))
throw std::runtime_error(LOCATION + "Unexpected end of file");
else if (ferror(fp))
throw std::runtime_error(LOCATION + "Error reading from file");
throw std::runtime_error(LOCATION + "Unexpected error (not feof or ferror) when reading frame number");
}
int32_t n_clusters; // Saved as 32bit integer in the cluster file
@@ -438,8 +454,8 @@ bool ClusterFile<ClusterType, Enable>::is_selected(ClusterType &cl) {
if (m_noise_map) {
auto sum_1x1 = cl.data[cluster_center_index]; // central pixel
auto sum_2x2 = cl.max_sum_2x2().first; // highest sum of 2x2 subclusters
auto total_sum = cl.sum(); // sum of all pixels
auto sum_2x2 = cl.max_sum_2x2().sum; // highest sum of 2x2 subclusters
auto total_sum = cl.sum(); // sum of all pixels
auto noise =
(*m_noise_map)(cl.y, cl.x); // TODO! check if this is correct

View File

@@ -1,3 +1,4 @@
// SPDX-License-Identifier: MPL-2.0
#pragma once
#include <atomic>
#include <filesystem>
@@ -10,7 +11,8 @@
namespace aare {
template <typename ClusterType,
typename = std::enable_if_t<is_cluster_v<ClusterType>>>
typename = std::enable_if_t<is_cluster_v<ClusterType>>,
typename = std::enable_if_t<no_2x2_cluster<ClusterType>::value>>
class ClusterFileSink {
ProducerConsumerQueue<ClusterVector<ClusterType>> *m_source;
std::atomic<bool> m_stop_requested{false};

View File

@@ -1,3 +1,4 @@
// SPDX-License-Identifier: MPL-2.0
#pragma once
#include "aare/ClusterFile.hpp"
#include "aare/ClusterVector.hpp"
@@ -10,8 +11,16 @@
namespace aare {
template <typename ClusterType,
typename = std::enable_if_t<is_cluster_v<ClusterType>>>
struct no_2x2_cluster {
constexpr static bool value =
ClusterType::cluster_size_x > 2 && ClusterType::cluster_size_y > 2;
};
template <typename ClusterType = Cluster<int32_t, 3, 3>,
typename FRAME_TYPE = uint16_t, typename PEDESTAL_TYPE = double>
typename FRAME_TYPE = uint16_t, typename PEDESTAL_TYPE = double,
typename = std::enable_if_t<no_2x2_cluster<ClusterType>::value>>
class ClusterFinder {
Shape<2> m_image_size;
const PEDESTAL_TYPE m_nSigma;
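The effect of the new no_2x2_cluster constraint can be sketched with a pair of static_asserts (cluster_size_x/cluster_size_y are the constexpr static members the trait reads):
static_assert(aare::no_2x2_cluster<aare::Cluster<int32_t, 3, 3>>::value,
              "3x3 clusters are accepted by ClusterFinder");
static_assert(!aare::no_2x2_cluster<aare::Cluster<int32_t, 2, 2>>::value,
              "2x2 clusters are rejected; ClusterFinder<Cluster<int32_t, 2, 2>> no longer compiles");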
@@ -136,7 +145,7 @@ class ClusterFinder {
// don't have a photon
int i = 0;
for (int ir = -dy; ir < dy + has_center_pixel_y; ir++) {
for (int ic = -dx; ic < dx + has_center_pixel_y; ic++) {
for (int ic = -dx; ic < dx + has_center_pixel_x; ic++) {
if (ix + ic >= 0 && ix + ic < frame.shape(1) &&
iy + ir >= 0 && iy + ir < frame.shape(0)) {
CT tmp =
@@ -145,8 +154,8 @@ class ClusterFinder {
m_pedestal.mean(iy + ir, ix + ic));
cluster.data[i] =
tmp; // Watch for out of bounds access
i++;
}
i++;
}
}

View File

@@ -1,3 +1,4 @@
// SPDX-License-Identifier: MPL-2.0
#pragma once
#include <atomic>
#include <cstdint>
@@ -32,7 +33,8 @@ struct FrameWrapper {
* @tparam CT type of the cluster data
*/
template <typename ClusterType = Cluster<int32_t, 3, 3>,
typename FRAME_TYPE = uint16_t, typename PEDESTAL_TYPE = double>
typename FRAME_TYPE = uint16_t, typename PEDESTAL_TYPE = double,
typename = std::enable_if_t<no_2x2_cluster<ClusterType>::value>>
class ClusterFinderMT {
protected:

View File

@@ -1,3 +1,4 @@
// SPDX-License-Identifier: MPL-2.0
#pragma once
#include "aare/Cluster.hpp" //TODO maybe store in seperate file !!!
#include <algorithm>
@@ -28,7 +29,7 @@ class ClusterVector; // Forward declaration
* needed.
* @tparam T data type of the pixels in the cluster
* @tparam CoordType data type of the x and y coordinates of the cluster
* (normally int16_t)
* (normally uint16_t)
*/
template <typename T, uint8_t ClusterSizeX, uint8_t ClusterSizeY,
typename CoordType>
@@ -86,15 +87,14 @@ class ClusterVector<Cluster<T, ClusterSizeX, ClusterSizeY, CoordType>> {
/**
* @brief Sum the pixels in the 2x2 subcluster with the biggest pixel sum in
* each cluster
* @return std::vector<T> vector of sums for each cluster
* @return vector of sum/index pairs for each cluster
*/
std::vector<T> sum_2x2() {
std::vector<T> sums_2x2(m_data.size());
std::vector<Sum_index_pair<T, corner>> sum_2x2() {
std::vector<Sum_index_pair<T, corner>> sums_2x2(m_data.size());
std::transform(m_data.begin(), m_data.end(), sums_2x2.begin(),
[](const ClusterType &cluster) {
return cluster.max_sum_2x2().first;
});
std::transform(
m_data.begin(), m_data.end(), sums_2x2.begin(),
[](const ClusterType &cluster) { return cluster.max_sum_2x2(); });
return sums_2x2;
}
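A sketch of the changed return type (only the .sum member is visible in this diff, in the is_selected() change above; the name of the accompanying index/corner member is not shown here):
aare::ClusterVector<aare::Cluster<int32_t, 3, 3>> cv;
cv.push_back(aare::Cluster<int32_t, 3, 3>{});    // push_back as used in reduce_to_* below
auto pairs = cv.sum_2x2();                       // std::vector<Sum_index_pair<int32_t, corner>>
for (const auto &p : pairs) {
    auto s = p.sum;   // highest 2x2 subcluster sum, same member ClusterFile::is_selected reads
    (void)s;
}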
@@ -122,6 +122,11 @@ class ClusterVector<Cluster<T, ClusterSizeX, ClusterSizeY, CoordType>> {
*/
size_t size() const { return m_data.size(); }
/**
* @brief Check if the vector is empty
*/
bool empty() const { return m_data.empty(); }
uint8_t cluster_size_x() const { return ClusterSizeX; }
uint8_t cluster_size_y() const { return ClusterSizeY; }
@@ -167,4 +172,42 @@ class ClusterVector<Cluster<T, ClusterSizeX, ClusterSizeY, CoordType>> {
}
};
/**
* @brief Reduce a cluster to a 2x2 cluster by selecting the 2x2 block with the
* highest sum.
* @param cv Clustervector containing clusters to reduce
* @return Clustervector with reduced clusters
* @note The cluster is filled using row major ordering starting at the top-left
* (thus for a max subcluster in the top left corner the photon hit is at
* the fourth position)
*/
template <typename T, uint8_t ClusterSizeX, uint8_t ClusterSizeY,
typename CoordType>
ClusterVector<Cluster<T, 2, 2, CoordType>> reduce_to_2x2(
const ClusterVector<Cluster<T, ClusterSizeX, ClusterSizeY, CoordType>>
&cv) {
ClusterVector<Cluster<T, 2, 2, CoordType>> result;
for (const auto &c : cv) {
result.push_back(reduce_to_2x2(c));
}
return result;
}
/**
* @brief Reduce a cluster to a 3x3 cluster
* @param cv Clustervector containing clusters to reduce
* @return Clustervector with reduced clusters
*/
template <typename T, uint8_t ClusterSizeX, uint8_t ClusterSizeY,
typename CoordType>
ClusterVector<Cluster<T, 3, 3, CoordType>> reduce_to_3x3(
const ClusterVector<Cluster<T, ClusterSizeX, ClusterSizeY, CoordType>>
&cv) {
ClusterVector<Cluster<T, 3, 3, CoordType>> result;
for (const auto &c : cv) {
result.push_back(reduce_to_3x3(c));
}
return result;
}
} // namespace aare
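And a vector-level sketch of the two reductions just added (cluster contents are illustrative):
aare::ClusterVector<aare::Cluster<int32_t, 5, 5>> big;
big.push_back(aare::Cluster<int32_t, 5, 5>{});
auto three = aare::reduce_to_3x3(big);   // ClusterVector<Cluster<int32_t, 3, 3>>, central 3x3 block
auto two   = aare::reduce_to_2x2(big);   // ClusterVector<Cluster<int32_t, 2, 2>>, 2x2 block with highest sum
// x and y of each cluster are preserved by both reductions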

View File

@@ -1,3 +1,4 @@
// SPDX-License-Identifier: MPL-2.0
#pragma once
#include "aare/FileInterface.hpp"

View File

@@ -0,0 +1,82 @@
// SPDX-License-Identifier: MPL-2.0
#pragma once
#include "aare/RawMasterFile.hpp" //ROI refactor away
#include "aare/defs.hpp"
namespace aare {
struct ModuleConfig {
int module_gap_row{};
int module_gap_col{};
bool operator==(const ModuleConfig &other) const {
if (module_gap_col != other.module_gap_col)
return false;
if (module_gap_row != other.module_gap_row)
return false;
return true;
}
};
/**
* @brief Class to hold the geometry of a module. Where pixel 0 is located and
* the size of the module
*/
struct ModuleGeometry {
int origin_x{};
int origin_y{};
int height{};
int width{};
int row_index{};
int col_index{};
};
/**
* @brief Class to hold the geometry of a detector. Number of modules, their
* size and where pixel 0 for each module is located
*/
class DetectorGeometry {
public:
DetectorGeometry(const xy &geometry, const ssize_t module_pixels_x,
const ssize_t module_pixels_y,
const xy udp_interfaces_per_module = xy{1, 1},
const bool quad = false);
~DetectorGeometry() = default;
/**
* @brief Update the detector geometry given a region of interest
*
* @param roi region of interest to restrict the geometry to
*/
void update_geometry_with_roi(ROI roi);
size_t n_modules() const;
size_t n_modules_in_roi() const;
size_t pixels_x() const;
size_t pixels_y() const;
size_t modules_x() const;
size_t modules_y() const;
const std::vector<ssize_t> &get_modules_in_roi() const;
ssize_t get_modules_in_roi(const size_t index) const;
const std::vector<ModuleGeometry> &get_module_geometries() const;
const ModuleGeometry &get_module_geometries(const size_t index) const;
private:
size_t m_modules_x{};
size_t m_modules_y{};
size_t m_pixels_x{};
size_t m_pixels_y{};
static constexpr ModuleConfig cfg{0, 0};
std::vector<ModuleGeometry> module_geometries{};
std::vector<ssize_t> modules_in_roi{};
};
} // namespace aare
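A usage sketch for the new DetectorGeometry class (the meaning of the xy components and the ROI fields are assumptions, they are not spelled out in this diff):
aare::xy det{2, 1};                             // assumed {rows, cols} of modules
aare::DetectorGeometry geo(det, 1024, 512);     // module_pixels_x, module_pixels_y
auto full_width  = geo.pixels_x();              // assembled detector size in pixels
auto full_height = geo.pixels_y();
auto n = geo.n_modules();                       // 2 in this sketch
// geo.update_geometry_with_roi(roi);           // roi is an aare::ROI; its fields are not shown here
(void)full_width; (void)full_height; (void)n;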

View File

@@ -1,3 +1,4 @@
// SPDX-License-Identifier: MPL-2.0
#pragma once
#include <cstdint>
#include <map>

View File

@@ -1,3 +1,4 @@
// SPDX-License-Identifier: MPL-2.0
#pragma once
#include "aare/FileInterface.hpp"
#include <memory>
@@ -7,7 +8,7 @@ namespace aare {
/**
* @brief RAII File class for reading, and in the future potentially writing
* image files in various formats. Minimal generic interface. For special
* functions please use the RawFile, NumpyFile or Hdf5File classes directly. Wraps
* functions please use the RawFile or NumpyFile classes directly. Wraps
* FileInterface to abstract the underlying file format
* @note **frame_number** refers to the frame number sent by the detector while
* **frame_index** is the position of the frame in the file

View File

@@ -1,7 +1,8 @@
// SPDX-License-Identifier: MPL-2.0
#pragma once
#include "aare/Dtype.hpp"
#include "aare/Frame.hpp"
#include "aare/to_string.hpp"
#include "aare/defs.hpp"
#include <filesystem>
#include <vector>
@@ -33,20 +34,20 @@ struct FileConfig {
DetectorType detector_type{DetectorType::Unknown};
int max_frames_per_file{};
size_t total_frames{};
std::string to_string() const {
return "{ dtype: " + dtype.to_string() +
", rows: " + std::to_string(rows) +
", cols: " + std::to_string(cols) +
", geometry: " + geometry.to_string() +
", detector_type: " + ToString(detector_type) +
", max_frames_per_file: " + std::to_string(max_frames_per_file) +
", total_frames: " + std::to_string(total_frames) + " }";
}
// std::string to_string() const {
// return "{ dtype: " + dtype.to_string() +
// ", rows: " + std::to_string(rows) +
// ", cols: " + std::to_string(cols) +
// ", geometry: " + geometry.to_string() +
// ", detector_type: " + ToString(detector_type) +
// ", max_frames_per_file: " + std::to_string(max_frames_per_file) +
// ", total_frames: " + std::to_string(total_frames) + " }";
// }
};
/**
* @brief FileInterface class to define the interface for file operations
* @note parent class for NumpyFile, RawFile and Hdf5File
* @note parent class for NumpyFile and RawFile
* @note all functions are pure virtual and must be implemented by the derived
* classes
*/

View File

@@ -1,3 +1,4 @@
// SPDX-License-Identifier: MPL-2.0
#pragma once
#include <cstdio>
#include <filesystem>

View File

@@ -1,3 +1,4 @@
// SPDX-License-Identifier: MPL-2.0
#pragma once
#include <cmath>

View File

@@ -1,3 +1,4 @@
// SPDX-License-Identifier: MPL-2.0
#pragma once
#include "aare/Dtype.hpp"
#include "aare/NDArray.hpp"
@@ -105,7 +106,7 @@ class Frame {
* @tparam T type of the pixels
* @return NDView<T, 2>
*/
template <typename T> NDView<T, 2> view() {
template <typename T> NDView<T, 2> view() & {
std::array<ssize_t, 2> shape = {static_cast<ssize_t>(m_rows),
static_cast<ssize_t>(m_cols)};
T *data = reinterpret_cast<T *>(m_data);
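The added & ref-qualifier means view() can no longer be called on a temporary Frame, which would return an NDView into memory that is freed as soon as the temporary dies. A sketch (make_frame is a hypothetical factory returning a Frame by value):
aare::Frame f = make_frame();                      // hypothetical factory
auto v = f.view<uint16_t>();                       // fine: f outlives the view
// auto dangling = make_frame().view<uint16_t>();  // no longer compiles with the & qualifier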

View File

@@ -1,3 +1,4 @@
// SPDX-License-Identifier: MPL-2.0
/************************************************
* @file GainMap.hpp
* @short function to apply gain map of image size to a vector of clusters -

View File

@@ -1,211 +0,0 @@
#pragma once
#include "aare/FileInterface.hpp"
#include "aare/Frame.hpp"
#include "aare/Hdf5MasterFile.hpp"
#include "aare/NDArray.hpp" //for pixel map
#include <optional>
namespace aare {
class H5Handles {
std::string file_name;
std::string dataset_name;
H5::H5File file;
H5::DataSet dataset;
H5::DataSpace dataspace;
H5::DataType datatype;
std::unique_ptr<H5::DataSpace> memspace;
std::vector<hsize_t> dims;
std::vector<hsize_t> count;
std::vector<hsize_t> offset;
public:
H5Handles(const std::string &fname, const std::string &dname)
: file_name(fname), dataset_name(dname), file(fname, H5F_ACC_RDONLY),
dataset(file.openDataSet(dname)), dataspace(dataset.getSpace()),
datatype(dataset.getDataType()) {
intialize_dimensions();
initialize_memspace();
}
std::vector<hsize_t> get_dims() const { return dims; }
void seek(size_t frame_index) {
if (frame_index >= dims[0]) {
throw std::runtime_error(LOCATION + "Invalid frame number");
}
offset[0] = static_cast<hsize_t>(frame_index);
}
void get_data_into(size_t frame_index, std::byte *frame_buffer,
size_t n_frames = 1) {
seek(frame_index);
count[0] = static_cast<hsize_t>(n_frames);
// std::cout << "offset:" << ToString(offset) << " count:" <<
// ToString(count) << std::endl;
dataspace.selectHyperslab(H5S_SELECT_SET, count.data(), offset.data());
dataset.read(frame_buffer, datatype, *memspace, dataspace);
}
void get_header_into(size_t frame_index, int part_index,
std::byte *header_buffer) {
seek(frame_index);
offset[1] = static_cast<hsize_t>(part_index);
// std::cout << "offset:" << ToString(offset) << " count:" <<
// ToString(count) << std::endl;
dataspace.selectHyperslab(H5S_SELECT_SET, count.data(), offset.data());
dataset.read(header_buffer, datatype, *memspace, dataspace);
}
private:
void intialize_dimensions() {
int rank = dataspace.getSimpleExtentNdims();
dims.resize(rank);
dataspace.getSimpleExtentDims(dims.data(), nullptr);
}
void initialize_memspace() {
int rank = dataspace.getSimpleExtentNdims();
count.clear();
offset.clear();
// header datasets or header virtual datasets
if (rank == 1 || rank == 2) {
count = std::vector<hsize_t>(rank, 1); // slice 1 value
offset = std::vector<hsize_t>(rank, 0);
memspace = std::make_unique<H5::DataSpace>(H5S_SCALAR);
} else if (rank >= 3) {
// data dataset (frame x height x width)
count = {1, dims[1], dims[2]};
offset = {0, 0, 0};
hsize_t dims_image[2] = {dims[1], dims[2]};
memspace = std::make_unique<H5::DataSpace>(2, dims_image);
} else {
throw std::runtime_error(
LOCATION + "Invalid rank for dataset: " + std::to_string(rank));
}
}
};
template <typename Fn>
void read_hdf5_header_fields(DetectorHeader *header, Fn &&fn_read_field) {
fn_read_field(0, reinterpret_cast<std::byte *>(&(header->frameNumber)));
fn_read_field(1, reinterpret_cast<std::byte *>(&(header->expLength)));
fn_read_field(2, reinterpret_cast<std::byte *>(&(header->packetNumber)));
fn_read_field(3, reinterpret_cast<std::byte *>(&(header->bunchId)));
fn_read_field(4, reinterpret_cast<std::byte *>(&(header->timestamp)));
fn_read_field(5, reinterpret_cast<std::byte *>(&(header->modId)));
fn_read_field(6, reinterpret_cast<std::byte *>(&(header->row)));
fn_read_field(7, reinterpret_cast<std::byte *>(&(header->column)));
fn_read_field(8, reinterpret_cast<std::byte *>(&(header->reserved)));
fn_read_field(9, reinterpret_cast<std::byte *>(&(header->debug)));
fn_read_field(10, reinterpret_cast<std::byte *>(&(header->roundRNumber)));
fn_read_field(11, reinterpret_cast<std::byte *>(&(header->detType)));
fn_read_field(12, reinterpret_cast<std::byte *>(&(header->version)));
fn_read_field(13, reinterpret_cast<std::byte *>(&(header->packetMask)));
}
/**
* @brief Class to read .h5 files. The class will parse the master file
* to find the correct geometry for the frames.
* @note A more generic interface is available in the aare::File class.
* Consider using that unless you need hdf5 file specific functionality.
*/
class Hdf5File : public FileInterface {
Hdf5MasterFile m_master;
size_t m_current_frame{};
size_t m_total_frames{};
size_t m_rows{};
size_t m_cols{};
static const std::string metadata_group_name;
static const std::vector<std::string> header_dataset_names;
std::unique_ptr<H5Handles> m_data_dataset{nullptr};
std::vector<std::unique_ptr<H5Handles>> m_header_datasets{};
public:
/**
* @brief Hdf5File constructor
* @param fname path to the master file (.json)
* @param mode file mode (only "r" is supported at the moment)
*/
Hdf5File(const std::filesystem::path &fname, const std::string &mode = "r");
virtual ~Hdf5File() override;
Frame read_frame() override;
Frame read_frame(size_t frame_number) override;
std::vector<Frame> read_n(size_t n_frames) override;
void read_into(std::byte *image_buf) override;
void read_into(std::byte *image_buf, size_t n_frames) override;
// TODO! do we need to adapt the API?
void read_into(std::byte *image_buf, DetectorHeader *header);
void read_into(std::byte *image_buf, size_t n_frames,
DetectorHeader *header);
size_t frame_number(size_t frame_index) override;
size_t bytes_per_frame() override;
size_t pixels_per_frame() override;
size_t bytes_per_pixel() const;
void seek(size_t frame_index) override;
size_t tell() override;
size_t total_frames() const override;
size_t rows() const override;
size_t cols() const override;
size_t bitdepth() const override;
xy geometry();
size_t n_modules() const;
Hdf5MasterFile master() const;
DetectorType detector_type() const override;
private:
/**
* @brief get the frame at the given frame index
* @param frame_number frame number to read
* @return Frame
*/
Frame get_frame(size_t frame_index);
/**
* @brief read the frame at the given frame index into the image buffer
* @param frame_number frame number to read
* @param n_frames number of frames to read (default is 1)
* @param image_buf buffer to store the frame
*/
void get_frame_into(size_t frame_index, std::byte *frame_buffer,
size_t n_frames = 1, DetectorHeader *header = nullptr);
/**
* @brief read the frame at the given frame index into the image buffer
* @param frame_index frame number to read
* @param n_frames number of frames to read (default is 1)
* @param frame_buffer buffer to store the frame
*/
void get_data_into(size_t frame_index, std::byte *frame_buffer,
size_t n_frames = 1);
/**
* @brief read the header at the given frame index into the header buffer
* @param frame_index frame number to read
* @param part_index part index to read (for virtual datasets)
* @param header buffer to store the header
*/
void get_header_into(size_t frame_index, int part_index,
DetectorHeader *header);
/**
* @brief read the header of the file
* @param fname path to the data subfile
* @return DetectorHeader
*/
static DetectorHeader read_header(const std::filesystem::path &fname);
void open_data_file();
void open_header_files();
};
} // namespace aare

View File

@@ -1,135 +0,0 @@
#pragma once
#include "aare/defs.hpp"
#include "aare/scan_parameters.hpp"
#include "H5Cpp.h"
#include <filesystem>
#include <fmt/format.h>
#include <fstream>
#include <optional>
namespace aare {
using ns = std::chrono::nanoseconds;
/**
* @brief Class for parsing a master file either in our .json format or the old
* .Hdf5 format
*/
class Hdf5MasterFile {
std::filesystem::path m_file_name{};
std::string m_version;
DetectorType m_type;
TimingMode m_timing_mode;
xy m_geometry{};
int m_image_size_in_bytes{};
int m_pixels_y{};
int m_pixels_x{};
int m_max_frames_per_file{};
FrameDiscardPolicy m_frame_discard_policy{};
int m_frame_padding{};
std::optional<ScanParameters> m_scan_parameters{};
size_t m_total_frames_expected{};
std::optional<ns> m_exptime{};
std::optional<ns> m_period{};
std::optional<BurstMode> m_burst_mode{};
std::optional<int> m_number_of_udp_interfaces{};
int m_bitdepth{};
std::optional<bool> m_ten_giga{};
std::optional<int> m_threshold_energy{};
std::optional<std::vector<int>> m_threshold_energy_all{};
std::optional<ns> m_subexptime{};
std::optional<ns> m_subperiod{};
std::optional<bool> m_quad{};
std::optional<int> m_number_of_rows{};
std::optional<std::vector<size_t>> m_rate_corrections{};
std::optional<uint32_t> m_adc_mask{};
bool m_analog_flag{};
std::optional<int> m_analog_samples{};
bool m_digital_flag{};
std::optional<int> m_digital_samples{};
std::optional<int> m_dbit_offset{};
std::optional<size_t> m_dbit_list{};
std::optional<int> m_transceiver_mask{};
bool m_transceiver_flag{};
std::optional<int> m_transceiver_samples{};
// g1 roi - will not be implemented?
std::optional<ROI> m_roi{};
std::optional<int> m_counter_mask{};
std::optional<std::vector<ns>> m_exptime_array{};
std::optional<std::vector<ns>> m_gate_delay_array{};
std::optional<int> m_gates{};
std::optional<std::map<std::string, std::string>>
m_additional_json_header{};
size_t m_frames_in_file{};
// TODO! should these be bool?
public:
Hdf5MasterFile(const std::filesystem::path &fpath);
std::filesystem::path file_name() const;
const std::string &version() const; //!< For example "7.2"
const DetectorType &detector_type() const;
const TimingMode &timing_mode() const;
xy geometry() const;
int image_size_in_bytes() const;
int pixels_y() const;
int pixels_x() const;
int max_frames_per_file() const;
const FrameDiscardPolicy &frame_discard_policy() const;
int frame_padding() const;
std::optional<ScanParameters> scan_parameters() const;
size_t total_frames_expected() const;
std::optional<ns> exptime() const;
std::optional<ns> period() const;
std::optional<BurstMode> burst_mode() const;
std::optional<int> number_of_udp_interfaces() const;
int bitdepth() const;
std::optional<bool> ten_giga() const;
std::optional<int> threshold_energy() const;
std::optional<std::vector<int>> threshold_energy_all() const;
std::optional<ns> subexptime() const;
std::optional<ns> subperiod() const;
std::optional<bool> quad() const;
std::optional<int> number_of_rows() const;
std::optional<std::vector<size_t>> rate_corrections() const;
std::optional<uint32_t> adc_mask() const;
bool analog_flag() const;
std::optional<int> analog_samples() const;
bool digital_flag() const;
std::optional<int> digital_samples() const;
std::optional<int> dbit_offset() const;
std::optional<size_t> dbit_list() const;
std::optional<int> transceiver_mask() const;
bool transceiver_flag() const;
std::optional<int> transceiver_samples() const;
// g1 roi - will not be implemented?
std::optional<ROI> roi() const;
std::optional<int> counter_mask() const;
std::optional<std::vector<ns>> exptime_array() const;
std::optional<std::vector<ns>> gate_delay_array() const;
std::optional<int> gates() const;
std::optional<std::map<std::string, std::string>>
additional_json_header() const;
size_t frames_in_file() const;
size_t n_modules() const;
private:
static const std::string metadata_group_name;
void parse_acquisition_metadata(const std::filesystem::path &fpath);
template <typename T>
T h5_read_scalar_dataset(const H5::DataSet &dataset,
const H5::DataType &data_type);
template <typename T>
T h5_get_scalar_dataset(const H5::H5File &file,
const std::string &dataset_name);
};
template <>
std::string Hdf5MasterFile::h5_read_scalar_dataset<std::string>(
const H5::DataSet &dataset, const H5::DataType &data_type);
} // namespace aare

View File

@@ -1,3 +1,4 @@
// SPDX-License-Identifier: MPL-2.0
#pragma once
#include "aare/CalculateEta.hpp"
@@ -16,8 +17,27 @@ struct Photon {
double energy;
};
struct Coordinate2D {
double x{};
double y{};
};
class Interpolator {
/**
* @brief
* marginal CDF of eta_x (if rosenblatt applied), conditional
* CDF of eta_x conditioned on eta_y
* value at (i, j, e): F(eta_x[i] | eta_y[j], energy[e])
*/
NDArray<double, 3> m_ietax;
/**
* @brief
* conditional CDF of eta_y conditioned on eta_x
* value at (i,j,e): F(eta_y[j] | eta_x[i], energy[e])
*/
NDArray<double, 3> m_ietay;
NDArray<double, 1> m_etabinsx;
@@ -25,103 +45,207 @@ class Interpolator {
NDArray<double, 1> m_energy_bins;
public:
/**
* @brief Constructor for the Interpolator class
* @param etacube joint distribution of etaX, etaY and photon energy (note
* first dimension is etaX, second etaY, third photon energy)
* @param xbins bin edges for etaX
* @param ybins bin edges for etaY
* @param ebins bin edges for photon energy
*/
Interpolator(NDView<double, 3> etacube, NDView<double, 1> xbins,
NDView<double, 1> ybins, NDView<double, 1> ebins);
/**
* @brief Constructor for the Interpolator class
* @param xbins bin edges for etaX
* @param ybins bin edges for etaY
* @param ebins bin edges for photon energy
*/
Interpolator(NDView<double, 1> xbins, NDView<double, 1> ybins,
NDView<double, 1> ebins);
/**
* @brief transforms the joint eta distribution of etaX and etaY to the two
* independent uniform distributions based on the Rosenblatt transform for
* each energy level
* @param etacube joint distribution of etaX, etaY and photon energy (first
* dimension is etaX, second etaY, third photon energy)
*/
void rosenblatttransform(NDView<double, 3> etacube);
NDArray<double, 3> get_ietax() { return m_ietax; }
NDArray<double, 3> get_ietay() { return m_ietay; }
template <typename ClusterType,
/**
* @brief interpolates the cluster centers for all clusters to a better
* precision
* @tparam ClusterType Type of Clusters to interpolate
* @tparam EtaFunction Function object that calculates the desired eta (default:
* calculate_eta2)
* @return interpolated photons (photon positions are given as doubles in
* row/column format, e.g. x=0, y=0 means the top row and first column of the
* frame; an interpolated photon position of (1.5, 2.5) corresponds to an
* estimated photon hit at the pixel center of pixel (1,2))
*/
template <auto EtaFunction = calculate_eta2, typename ClusterType,
typename Eanble = std::enable_if_t<is_cluster_v<ClusterType>>>
std::vector<Photon> interpolate(const ClusterVector<ClusterType> &clusters);
std::vector<Photon>
interpolate(const ClusterVector<ClusterType> &clusters) const;
/**
* @brief transforms the eta values to uniform coordinates based on the CDF
* ieta_x and ieta_y
* @tparam eta Eta to transform
* @return uniform coordinates {x,y}
*/
template <typename T>
Coordinate2D transform_eta_values(const Eta2<T> &eta) const;
private:
/**
* @brief bilinear interpolation of the transformed eta values
* @param ix index of etaX bin
* @param iy index of etaY bin
* @param ie index of energy bin
* @return pair of interpolated transformed eta values (ietax, ietay)
*/
template <typename T>
std::pair<double, double>
bilinear_interpolation(const size_t ix, const size_t iy, const size_t ie,
const Eta2<T> &eta) const;
};
// TODO: generalize to support any clustertype!!! otherwise add std::enable_if_t
// to only take Cluster2x2 and Cluster3x3
template <typename ClusterType, typename Enable>
template <typename T>
std::pair<double, double>
Interpolator::bilinear_interpolation(const size_t ix, const size_t iy,
const size_t ie,
const Eta2<T> &eta) const {
auto next_index_y = static_cast<ssize_t>(iy + 1) >= m_ietax.shape(1)
? m_ietax.shape(1) - 1
: iy + 1;
auto next_index_x = static_cast<ssize_t>(ix + 1) >= m_ietax.shape(0)
? m_ietax.shape(0) - 1
: ix + 1;
// bilinear interpolation
double ietax_interp_left = linear_interpolation(
{m_etabinsy(iy), m_etabinsy(iy + 1)},
{m_ietax(ix, iy, ie), m_ietax(ix, next_index_y, ie)}, eta.y);
double ietax_interp_right =
linear_interpolation({m_etabinsy(iy), m_etabinsy(iy + 1)},
{m_ietax(next_index_x, iy, ie),
m_ietax(next_index_x, next_index_y, ie)},
eta.y);
// transformed photon position x between [0,1]
double ietax_interpolated =
linear_interpolation({m_etabinsx(ix), m_etabinsx(ix + 1)},
{ietax_interp_left, ietax_interp_right}, eta.x);
double ietay_interp_left = linear_interpolation(
{m_etabinsx(ix), m_etabinsx(ix + 1)},
{m_ietay(ix, iy, ie), m_ietay(next_index_x, iy, ie)}, eta.x);
double ietay_interp_right =
linear_interpolation({m_etabinsx(ix), m_etabinsx(ix + 1)},
{m_ietay(ix, next_index_y, ie),
m_ietay(next_index_x, next_index_y, ie)},
eta.x);
// transformed photon position y between [0,1]
double ietay_interpolated =
linear_interpolation({m_etabinsy(iy), m_etabinsy(iy + 1)},
{ietay_interp_left, ietay_interp_right}, eta.y);
return {ietax_interpolated, ietay_interpolated};
}
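For reference, the two-step scheme above is plain bilinear interpolation: interpolate along y at the two bounding x bins, then along x between the results. A self-contained sketch with an illustrative lerp helper (not the library's linear_interpolation, whose exact signature is only visible from the calls above):
#include <array>
// linear interpolation of v between the support points x[0] < x[1], queried at q
inline double lerp2(std::array<double, 2> x, std::array<double, 2> v, double q) {
    double t = (q - x[0]) / (x[1] - x[0]);
    return v[0] + t * (v[1] - v[0]);
}
// bilinear value at (qx, qy) inside the cell [x0,x1] x [y0,y1] with corner values fXY
inline double bilinear(double x0, double x1, double y0, double y1,
                       double f00, double f01, double f10, double f11,
                       double qx, double qy) {
    double left  = lerp2({y0, y1}, {f00, f01}, qy);   // along y at x0
    double right = lerp2({y0, y1}, {f10, f11}, qy);   // along y at x1
    return lerp2({x0, x1}, {left, right}, qx);        // along x between the two
}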
template <typename T>
Coordinate2D Interpolator::transform_eta_values(const Eta2<T> &eta) const {
// Finding the index of the last element that is smaller
// should work fine as long as we have many bins
auto ie = last_smaller(m_energy_bins, static_cast<double>(eta.sum));
auto ix = last_smaller(m_etabinsx, eta.x);
auto iy = last_smaller(m_etabinsy, eta.y);
if (static_cast<ssize_t>(ix) >= m_etabinsx.size() - 1 ||
static_cast<ssize_t>(iy) >= m_etabinsy.size() - 1 ||
static_cast<ssize_t>(ie) >= m_energy_bins.size() - 1)
throw std::runtime_error(
fmt::format("Eta values out of bounds of eta distribution: eta.x = "
"{:.4f}, eta.y = {:.4f}, energy = {:.4f}",
eta.x, eta.y, eta.sum));
// TODO: bilinear interpolation only works if all bins have a size > 1 -
// otherwise bilinear interpolation with zero values which skew the
// results
// TODO: maybe trim the bins at the edges with zero values beforehand
// auto [ietax_interpolated, ietay_interpolated] =
// bilinear_interpolation(ix, iy, ie, eta);
return Coordinate2D{m_ietax(ix, iy, ie), m_ietay(ix, iy, ie)};
}
template <auto EtaFunction, typename ClusterType, typename Enable>
std::vector<Photon>
Interpolator::interpolate(const ClusterVector<ClusterType> &clusters) {
Interpolator::interpolate(const ClusterVector<ClusterType> &clusters) const {
std::vector<Photon> photons;
photons.reserve(clusters.size());
if (clusters.cluster_size_x() == 3 || clusters.cluster_size_y() == 3) {
for (const ClusterType &cluster : clusters) {
for (const ClusterType &cluster : clusters) {
auto eta = calculate_eta2(cluster);
auto eta = EtaFunction(cluster);
Photon photon;
photon.x = cluster.x;
photon.y = cluster.y;
photon.energy = static_cast<decltype(photon.energy)>(eta.sum);
Photon photon;
photon.x = cluster.x;
photon.y = cluster.y;
photon.energy = static_cast<decltype(photon.energy)>(eta.sum);
// auto ie = nearest_index(m_energy_bins, photon.energy)-1;
// auto ix = nearest_index(m_etabinsx, eta.x)-1;
// auto iy = nearest_index(m_etabinsy, eta.y)-1;
// Finding the index of the last element that is smaller
// should work fine as long as we have many bins
auto ie = last_smaller(m_energy_bins, photon.energy);
auto ix = last_smaller(m_etabinsx, eta.x);
auto iy = last_smaller(m_etabinsy, eta.y);
auto uniform_coordinates = transform_eta_values(eta);
// fmt::print("ex: {}, ix: {}, iy: {}\n", ie, ix, iy);
if (EtaFunction == &calculate_eta2<typename ClusterType::value_type,
ClusterType::cluster_size_x,
ClusterType::cluster_size_y,
typename ClusterType::coord_type> ||
EtaFunction ==
&calculate_full_eta2<typename ClusterType::value_type,
ClusterType::cluster_size_x,
ClusterType::cluster_size_y,
typename ClusterType::coord_type>) {
double dX{}, dY{};
double dX, dY;
// cBottomLeft = 0,
// cBottomRight = 1,
// cTopLeft = 2,
// cTopRight = 3
switch (static_cast<corner>(eta.c)) {
// TODO: could also change the sign of the eta calculation
switch (eta.c) {
case corner::cTopLeft:
dX = -1.;
dY = 0;
dX = -1.0;
dY = -1.0;
break;
case corner::cTopRight:;
dX = 0;
dY = 0;
dX = 0.0;
dY = -1.0;
break;
case corner::cBottomLeft:
dX = -1.;
dY = -1.;
dX = -1.0;
dY = 0.0;
break;
case corner::cBottomRight:
dX = 0.;
dY = -1.;
dX = 0.0;
dY = 0.0;
break;
}
photon.x += m_ietax(ix, iy, ie) * 2 + dX;
photon.y += m_ietay(ix, iy, ie) * 2 + dY;
photons.push_back(photon);
}
} else if (clusters.cluster_size_x() == 2 ||
clusters.cluster_size_y() == 2) {
for (const ClusterType &cluster : clusters) {
auto eta = calculate_eta2(cluster);
Photon photon;
photon.x = cluster.x;
photon.y = cluster.y;
photon.energy = static_cast<decltype(photon.energy)>(eta.sum);
// Now do some actual interpolation.
// Find which energy bin the cluster is in
// auto ie = nearest_index(m_energy_bins, photon.energy)-1;
// auto ix = nearest_index(m_etabinsx, eta.x)-1;
// auto iy = nearest_index(m_etabinsy, eta.y)-1;
// Finding the index of the last element that is smaller
// should work fine as long as we have many bins
auto ie = last_smaller(m_energy_bins, photon.energy);
auto ix = last_smaller(m_etabinsx, eta.x);
auto iy = last_smaller(m_etabinsy, eta.y);
photon.x += m_ietax(ix, iy, ie) *
2; // eta goes between 0 and 1 but we could move the hit
// anywhere in the 2x2
photon.y += m_ietay(ix, iy, ie) * 2;
photons.push_back(photon);
photon.x = photon.x + 0.5 + uniform_coordinates.x +
dX; // use pixel center + 0.5
photon.y =
photon.y + 0.5 + uniform_coordinates.y +
dY; // eta2 calculates the ratio between bottom and the sum of
// bottom and top; shift by 1 to add the eta value correctly
} else {
photon.x += uniform_coordinates.x;
photon.y += uniform_coordinates.y;
}
} else {
throw std::runtime_error(
"Only 3x3 and 2x2 clusters are supported for interpolation");
photons.push_back(photon);
}
return photons;
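A usage sketch tying the Interpolator pieces together (array sizes and contents are placeholders; only the constructor and interpolate() call mirror the declarations above, and the eta cube and bin edges must be filled from calibration data before real use):
aare::NDArray<double, 3> etacube(std::array<ssize_t, 3>{100, 100, 1});
aare::NDArray<double, 1> xbins(std::array<ssize_t, 1>{101});
aare::NDArray<double, 1> ybins(std::array<ssize_t, 1>{101});
aare::NDArray<double, 1> ebins(std::array<ssize_t, 1>{2});
// ... fill etacube and the bin edges here ...
aare::Interpolator interp(etacube.view(), xbins.view(), ybins.view(), ebins.view());
aare::ClusterVector<aare::Cluster<int32_t, 3, 3>> clusters;   // e.g. read from a ClusterFile
std::vector<aare::Photon> photons = interp.interpolate(clusters);
// each Photon carries sub-pixel x, y (row/column convention) and energy = eta.sum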

View File

@@ -1,3 +1,4 @@
// SPDX-License-Identifier: MPL-2.0
#pragma once
#include <cstdint>
#include <filesystem>

View File

@@ -1,12 +1,10 @@
// SPDX-License-Identifier: MPL-2.0
//
// Container holding image data, or a time series of image data in contiguous
// memory. Used for all data processing in Aare.
//
#pragma once
/*
Container holding image data, or a time series of image data in contiguous
memory.
TODO! Add expression templates for operators
*/
#include "aare/ArrayExpr.hpp"
#include "aare/NDView.hpp"
@@ -22,18 +20,24 @@ TODO! Add expression templates for operators
namespace aare {
template <typename T, ssize_t Ndim = 2>
class NDArray : public ArrayExpr<NDArray<T, Ndim>, Ndim> {
class NDArray : public ArrayExpr<NDArray<T, Ndim>, Ndim>,
public NDIndexOps<NDArray<T, Ndim>, T, Ndim> {
std::array<ssize_t, Ndim> shape_;
std::array<ssize_t, Ndim> strides_;
size_t size_{};
size_t size_{}; // TODO! do we need to store size when we have shape?
T *data_;
public:
///////////////////////////////////////////////////////////////////////////////
// Constructors
//
///////////////////////////////////////////////////////////////////////////////
/**
* @brief Default constructor. Will construct an empty NDArray.
* @brief Default constructor. Constructs an empty NDArray.
*
*/
NDArray() : shape_(), strides_(c_strides<Ndim>(shape_)), data_(nullptr){};
NDArray() : shape_(), strides_(c_strides<Ndim>(shape_)), data_(nullptr) {};
/**
* @brief Construct a new NDArray object with a given shape.
@@ -43,9 +47,7 @@ class NDArray : public ArrayExpr<NDArray<T, Ndim>, Ndim> {
*/
explicit NDArray(std::array<ssize_t, Ndim> shape)
: shape_(shape), strides_(c_strides<Ndim>(shape_)),
size_(std::accumulate(shape_.begin(), shape_.end(), 1,
std::multiplies<>())),
data_(new T[size_]) {}
size_(num_elements(shape_)), data_(new T[size_]) {}
/**
* @brief Construct a new NDArray object with a shape and value.
@@ -57,6 +59,10 @@ class NDArray : public ArrayExpr<NDArray<T, Ndim>, Ndim> {
this->operator=(value);
}
// Allow NDArray of different type and dimension to be friend classes
// This is needed for the move constructor from NDArray<T,Ndim+1>
template <typename U, ssize_t Dim> friend class NDArray;
/**
* @brief Construct a new NDArray object from a NDView.
* @note The data is copied from the view to the NDArray.
@@ -67,26 +73,67 @@ class NDArray : public ArrayExpr<NDArray<T, Ndim>, Ndim> {
std::copy(v.begin(), v.end(), begin());
}
/**
* @brief Construct a new NDArray object from an std::array.
*/
template <size_t Size>
NDArray(const std::array<T, Size> &arr) : NDArray<T, 1>({Size}) {
std::copy(arr.begin(), arr.end(), begin());
}
// Move constructor
/**
* @brief Move construct a new NDArray object. Cheap since it just
* reassigns the pointer and copy size/strides.
*
* @param other
*/
NDArray(NDArray &&other) noexcept
: shape_(other.shape_), strides_(c_strides<Ndim>(shape_)),
size_(other.size_), data_(other.data_) {
other.reset(); // TODO! is this necessary?
other.reset(); // Needed to avoid double free
}
// Copy constructor
/**
* @brief Move construct a new NDArray object from an array with Ndim + 1.
* Can be used to drop a dimension cheaply.
* @param other
*/
template <ssize_t M, typename = std::enable_if_t<(M == Ndim + 1)>>
NDArray(NDArray<T, M> &&other)
: shape_(drop_first_dim(other.shape())),
strides_(c_strides<Ndim>(shape_)), size_(num_elements(shape_)),
data_(other.data()) {
// For now only allow the move if the size matches, to avoid unreachable
// data. If the use case arises we can remove this check.
if (size() != other.size()) {
data_ = nullptr; // avoid double free, other will clean up the
// memory in its destructor
throw std::runtime_error(
LOCATION +
"Size mismatch in move constructor of NDArray<T, Ndim-1>");
}
other.reset();
}
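A sketch of the new dimension-dropping move: the leading dimension has to be 1, otherwise the size check above throws.
aare::NDArray<int, 3> stack(std::array<ssize_t, 3>{1, 4, 4});   // a single 4x4 frame
aare::NDArray<int, 2> image(std::move(stack));                  // drops the leading dimension, no copy
// image.shape() == {4, 4}; stack has been reset and no longer owns the data
// a stack with shape {2, 4, 4} cannot be collapsed this way and would throw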
/**
* @brief Copy construct a new NDArray object from another NDArray.
*
* @param other
*/
NDArray(const NDArray &other)
: shape_(other.shape_), strides_(c_strides<Ndim>(shape_)),
size_(other.size_), data_(new T[size_]) {
std::copy(other.data_, other.data_ + size_, data_);
}
// Conversion operator from array expression to array
/**
* @brief Conversion from a ArrayExpr to an actual NDArray. Used when
* the expression is evaluated and data needed.
*
* @tparam E
* @param expr
*/
template <typename E>
NDArray(ArrayExpr<E, Ndim> &&expr) : NDArray(expr.shape()) {
for (size_t i = 0; i < size_; ++i) {
@@ -94,23 +141,83 @@ class NDArray : public ArrayExpr<NDArray<T, Ndim>, Ndim> {
}
}
/**
* @brief Destroy the NDArray object. Frees the allocated memory.
*
*/
~NDArray() { delete[] data_; }
auto begin() { return data_; }
auto end() { return data_ + size_; }
///////////////////////////////////////////////////////////////////////////////
// Iterators and indexing
//
///////////////////////////////////////////////////////////////////////////////
auto begin() const { return data_; }
auto end() const { return data_ + size_; }
using NDIndexOps<NDArray<T, Ndim>, T, Ndim>::operator();
using NDIndexOps<NDArray<T, Ndim>, T, Ndim>::operator[];
auto *begin() { return data_; }
const auto *begin() const { return data_; }
auto *end() { return data_ + size_; }
const auto *end() const { return data_ + size_; }
/* @brief Return a raw pointer to the data */
T *data() { return data_; }
/* @brief Return a const raw pointer to the data */
const T *data() const { return data_; }
/* @brief Return a byte pointer to the data. Useful for memcpy like
* operations */
std::byte *buffer() { return reinterpret_cast<std::byte *>(data_); }
/**
* @brief Return the total number of elements in the array as a signed
* integer
*/
ssize_t size() const { return static_cast<ssize_t>(size_); }
/** @brief Return the total number of bytes in the array */
size_t total_bytes() const { return size_ * sizeof(T); }
/** @brief Return the shape of the array */
Shape<Ndim> shape() const noexcept { return shape_; }
/** @brief Return the size of dimension i */
ssize_t shape(ssize_t i) const noexcept { return shape_[i]; }
/** @brief Return the strides of the array */
std::array<ssize_t, Ndim> strides() const noexcept { return strides_; }
/**
* @brief Return the bitdepth of the array. Useful for checking that
* detector data can fit in the array type.
*/
size_t bitdepth() const noexcept { return sizeof(T) * 8; }
/**
* @brief Return the number of bytes to step in each dimension when
* traversing the array.
*/
std::array<ssize_t, Ndim> byte_strides() const noexcept {
auto byte_strides = strides_;
for (auto &val : byte_strides)
val *= sizeof(T);
return byte_strides;
}
using value_type = T;
NDArray &operator=(NDArray &&other) noexcept; // Move assign
NDArray &operator=(const NDArray &other); // Copy assign
NDArray &operator+=(const NDArray &other);
NDArray &operator-=(const NDArray &other);
NDArray &operator*=(const NDArray &other);
///////////////////////////////////////////////////////////////////////////////
// Assignments
//
///////////////////////////////////////////////////////////////////////////////
// Write directly to the data array, or create a new one
/**
* @brief Copy to the NDArray from an std::array. If the size of the array
* is different we reallocate the data.
*
*/
template <size_t Size>
NDArray<T, 1> &operator=(const std::array<T, Size> &other) {
if (Size != size_) {
@@ -124,12 +231,94 @@ class NDArray : public ArrayExpr<NDArray<T, Ndim>, Ndim> {
return *this;
}
// NDArray& operator/=(const NDArray& other);
/**
* @brief Move assignment operator.
*/
NDArray &operator=(NDArray &&other) noexcept {
// TODO! Should we use swap?
if (this != &other) {
delete[] data_;
data_ = other.data_;
shape_ = other.shape_;
size_ = other.size_;
strides_ = other.strides_;
other.reset();
}
return *this;
}
/**
* @brief Copy assignment operator.
*/
NDArray &operator=(const NDArray &other) {
if (this != &other) {
delete[] data_;
shape_ = other.shape_;
strides_ = other.strides_;
size_ = other.size_;
data_ = new T[size_];
std::copy(other.data_, other.data_ + size_, data_);
}
return *this;
}
///////////////////////////////////////////////////////////////////////////////
// Math operators
//
///////////////////////////////////////////////////////////////////////////////
/**
* @brief Add elementwise from another NDArray.
*/
NDArray &operator+=(const NDArray &other) {
if (shape_ != other.shape_)
throw(std::runtime_error(
"Shape of NDArray must match for operator +="));
for (size_t i = 0; i < size_; ++i) {
data_[i] += other.data_[i];
}
return *this;
}
/**
* @brief Subtract elementwise with another NDArray.
*/
NDArray &operator-=(const NDArray &other) {
if (shape_ != other.shape_)
throw(std::runtime_error(
"Shape of NDArray must match for operator -="));
for (size_t i = 0; i < size_; ++i) {
data_[i] -= other.data_[i];
}
return *this;
}
/**
* @brief Multiply elementwise with another NDArray.
*/
NDArray &operator*=(const NDArray &other) {
if (shape_ != other.shape_)
throw(std::runtime_error(
"Shape of NDArray must match for operator *="));
for (size_t i = 0; i < size_; ++i) {
data_[i] *= other.data_[i];
}
return *this;
}
/**
* @brief Divide elementwise by another NDArray. Templated to allow division
* with different types.
*
* TODO! Why is this templated when the others are not?
*/
template <typename V> NDArray &operator/=(const NDArray<V, Ndim> &other) {
// check shape
if (shape_ == other.shape()) {
for (uint32_t i = 0; i < size_; ++i) {
for (size_t i = 0; i < size_; ++i) {
data_[i] /= other(i);
}
return *this;
@@ -137,67 +326,139 @@ class NDArray : public ArrayExpr<NDArray<T, Ndim>, Ndim> {
throw(std::runtime_error("Shape of NDArray must match"));
}
NDArray<bool, Ndim> operator>(const NDArray &other);
/**
* @brief Assign a scalar value to all elements in the NDArray.
*/
NDArray &operator=(const T &value) {
std::fill_n(data_, size_, value);
return *this;
}
bool operator==(const NDArray &other) const;
bool operator!=(const NDArray &other) const;
/**
* @brief Add a scalar value to all elements in the NDArray.
*/
NDArray &operator+=(const T &value) {
for (size_t i = 0; i < size_; ++i)
data_[i] += value;
return *this;
}
NDArray &operator=(const T & /*value*/);
NDArray &operator+=(const T & /*value*/);
NDArray operator+(const T & /*value*/);
NDArray &operator-=(const T & /*value*/);
NDArray operator-(const T & /*value*/);
NDArray &operator*=(const T & /*value*/);
NDArray operator*(const T & /*value*/);
NDArray &operator/=(const T & /*value*/);
NDArray operator/(const T & /*value*/);
/**
* @brief Subtract a scalar value from all elements in the NDArray.
*/
NDArray &operator-=(const T &value) {
for (size_t i = 0; i < size_; ++i)
data_[i] -= value;
return *this;
}
NDArray &operator&=(const T & /*mask*/);
/**
* @brief Multiply all elements in the NDArray by a scalar value
*/
NDArray &operator*=(const T &value) {
for (size_t i = 0; i < size_; ++i)
data_[i] *= value;
return *this;
}
/**
* @brief Divide all elements in the NDArray by a scalar value
*/
NDArray &operator/=(const T &value) {
for (size_t i = 0; i < size_; ++i)
data_[i] /= value;
return *this;
}
/**
* @brief Bitwise AND all elements in the NDArray with a scalar mask.
* Used for example to mask out gain bits for Jungfrau detectors.
*/
NDArray &operator&=(const T &mask) {
for (auto it = begin(); it != end(); ++it)
*it &= mask;
return *this;
}
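As a concrete case for the comment above, a sketch of stripping gain bits (the 0x3FFF mask keeps the lower 14 data bits and is illustrative):
aare::NDArray<uint16_t, 2> raw(std::array<ssize_t, 2>{512, 1024});
// ... read detector data into raw ...
raw &= 0x3FFF;   // keep the 14 data bits, drop the gain bits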
/**
* @brief Operator + with a scalar value. Returns a new NDArray.
*
* TODO! Expression template version of this?
*/
NDArray operator+(const T &value) {
NDArray result = *this;
result += value;
return result;
}
/**
* @brief Operator - with a scalar value. Returns a new NDArray.
*
* TODO! Expression template version of this?
*/
NDArray operator-(const T &value) {
NDArray result = *this;
result -= value;
return result;
}
/**
* @brief Operator * with a scalar value. Returns a new NDArray.
*
* TODO! Expression template version of this?
*/
NDArray operator*(const T &value) {
NDArray result = *this;
result *= value;
return result;
}
/**
* @brief Operator / with a scalar value. Returns a new NDArray.
*
* TODO! Expression template version of this?
*/
NDArray operator/(const T &value) {
NDArray result = *this;
result /= value;
return result;
}
/**
* @brief Compare two NDArrays elementwise for equality.
*/
bool operator==(const NDArray &other) const {
if (shape_ != other.shape_)
return false;
for (size_t i = 0; i != size_; ++i)
if (data_[i] != other.data_[i])
return false;
return true;
}
/**
* @brief Compare two NDArrays elementwise for non-equality.
*/
bool operator!=(const NDArray &other) const { return !((*this) == other); }
/**
* @brief Compute the square root of all elements in the NDArray.
*/
void sqrt() {
for (int i = 0; i < size_; ++i) {
for (size_t i = 0; i < size_; ++i) {
data_[i] = std::sqrt(data_[i]);
}
}
NDArray &operator++(); // pre inc
template <typename... Ix>
std::enable_if_t<sizeof...(Ix) == Ndim, T &> operator()(Ix... index) {
return data_[element_offset(strides_, index...)];
}
template <typename... Ix>
std::enable_if_t<sizeof...(Ix) == Ndim, T &> operator()(Ix... index) const {
return data_[element_offset(strides_, index...)];
}
template <typename... Ix>
std::enable_if_t<sizeof...(Ix) == Ndim, T> value(Ix... index) {
return data_[element_offset(strides_, index...)];
}
// TODO! is int the right type for index?
T &operator()(ssize_t i) { return data_[i]; }
const T &operator()(ssize_t i) const { return data_[i]; }
T &operator[](ssize_t i) { return data_[i]; }
const T &operator[](ssize_t i) const { return data_[i]; }
T *data() { return data_; }
std::byte *buffer() { return reinterpret_cast<std::byte *>(data_); }
ssize_t size() const { return static_cast<ssize_t>(size_); }
size_t total_bytes() const { return size_ * sizeof(T); }
std::array<ssize_t, Ndim> shape() const noexcept { return shape_; }
ssize_t shape(ssize_t i) const noexcept { return shape_[i]; }
std::array<ssize_t, Ndim> strides() const noexcept { return strides_; }
size_t bitdepth() const noexcept { return sizeof(T) * 8; }
std::array<ssize_t, Ndim> byte_strides() const noexcept {
auto byte_strides = strides_;
for (auto &val : byte_strides)
val *= sizeof(T);
return byte_strides;
/*
* @brief Prefix increment operator. Increments all elements by 1.
*/
NDArray &operator++() {
for (size_t i = 0; i < size_; ++i)
data_[i] += T{1};
return *this;
}
/**
@@ -207,10 +468,12 @@ class NDArray : public ArrayExpr<NDArray<T, Ndim>, Ndim> {
*/
NDView<T, Ndim> view() const { return NDView<T, Ndim>{data_, shape_}; }
void Print();
void Print_all();
void Print_some();
private:
/**
* @brief Reset the NDArray to an empty state. Dropping the ownership of
* the data. Used internally for move operations to avoid double free or
* dangling pointers.
*/
void reset() {
data_ = nullptr;
size_ = 0;
@@ -219,173 +482,10 @@ class NDArray : public ArrayExpr<NDArray<T, Ndim>, Ndim> {
}
};
// Move assign
template <typename T, ssize_t Ndim>
NDArray<T, Ndim> &
NDArray<T, Ndim>::operator=(NDArray<T, Ndim> &&other) noexcept {
if (this != &other) {
delete[] data_;
data_ = other.data_;
shape_ = other.shape_;
size_ = other.size_;
strides_ = other.strides_;
other.reset();
}
return *this;
}
template <typename T, ssize_t Ndim>
NDArray<T, Ndim> &NDArray<T, Ndim>::operator+=(const NDArray<T, Ndim> &other) {
// check shape
if (shape_ == other.shape_) {
for (size_t i = 0; i < size_; ++i) {
data_[i] += other.data_[i];
}
return *this;
}
throw(std::runtime_error("Shape of ImageDatas must match"));
}
template <typename T, ssize_t Ndim>
NDArray<T, Ndim> &NDArray<T, Ndim>::operator-=(const NDArray<T, Ndim> &other) {
// check shape
if (shape_ == other.shape_) {
for (uint32_t i = 0; i < size_; ++i) {
data_[i] -= other.data_[i];
}
return *this;
}
throw(std::runtime_error("Shape of ImageDatas must match"));
}
template <typename T, ssize_t Ndim>
NDArray<T, Ndim> &NDArray<T, Ndim>::operator*=(const NDArray<T, Ndim> &other) {
// check shape
if (shape_ == other.shape_) {
for (uint32_t i = 0; i < size_; ++i) {
data_[i] *= other.data_[i];
}
return *this;
}
throw(std::runtime_error("Shape of ImageDatas must match"));
}
template <typename T, ssize_t Ndim>
NDArray<T, Ndim> &NDArray<T, Ndim>::operator&=(const T &mask) {
for (auto it = begin(); it != end(); ++it)
*it &= mask;
return *this;
}
template <typename T, ssize_t Ndim>
NDArray<bool, Ndim> NDArray<T, Ndim>::operator>(const NDArray &other) {
if (shape_ == other.shape_) {
NDArray<bool, Ndim> result{shape_};
for (int i = 0; i < size_; ++i) {
result(i) = (data_[i] > other.data_[i]);
}
return result;
}
throw(std::runtime_error("Shape of ImageDatas must match"));
}
template <typename T, ssize_t Ndim>
NDArray<T, Ndim> &NDArray<T, Ndim>::operator=(const NDArray<T, Ndim> &other) {
if (this != &other) {
delete[] data_;
shape_ = other.shape_;
strides_ = other.strides_;
size_ = other.size_;
data_ = new T[size_];
std::copy(other.data_, other.data_ + size_, data_);
}
return *this;
}
template <typename T, ssize_t Ndim>
bool NDArray<T, Ndim>::operator==(const NDArray<T, Ndim> &other) const {
if (shape_ != other.shape_)
return false;
for (uint32_t i = 0; i != size_; ++i)
if (data_[i] != other.data_[i])
return false;
return true;
}
template <typename T, ssize_t Ndim>
bool NDArray<T, Ndim>::operator!=(const NDArray<T, Ndim> &other) const {
return !((*this) == other);
}
template <typename T, ssize_t Ndim>
NDArray<T, Ndim> &NDArray<T, Ndim>::operator++() {
for (uint32_t i = 0; i < size_; ++i)
data_[i] += 1;
return *this;
}
template <typename T, ssize_t Ndim>
NDArray<T, Ndim> &NDArray<T, Ndim>::operator=(const T &value) {
std::fill_n(data_, size_, value);
return *this;
}
template <typename T, ssize_t Ndim>
NDArray<T, Ndim> &NDArray<T, Ndim>::operator+=(const T &value) {
for (uint32_t i = 0; i < size_; ++i)
data_[i] += value;
return *this;
}
template <typename T, ssize_t Ndim>
NDArray<T, Ndim> NDArray<T, Ndim>::operator+(const T &value) {
NDArray result = *this;
result += value;
return result;
}
template <typename T, ssize_t Ndim>
NDArray<T, Ndim> &NDArray<T, Ndim>::operator-=(const T &value) {
for (uint32_t i = 0; i < size_; ++i)
data_[i] -= value;
return *this;
}
template <typename T, ssize_t Ndim>
NDArray<T, Ndim> NDArray<T, Ndim>::operator-(const T &value) {
NDArray result = *this;
result -= value;
return result;
}
template <typename T, ssize_t Ndim>
NDArray<T, Ndim> &NDArray<T, Ndim>::operator/=(const T &value) {
for (uint32_t i = 0; i < size_; ++i)
data_[i] /= value;
return *this;
}
template <typename T, ssize_t Ndim>
NDArray<T, Ndim> NDArray<T, Ndim>::operator/(const T &value) {
NDArray result = *this;
result /= value;
return result;
}
template <typename T, ssize_t Ndim>
NDArray<T, Ndim> &NDArray<T, Ndim>::operator*=(const T &value) {
for (uint32_t i = 0; i < size_; ++i)
data_[i] *= value;
return *this;
}
template <typename T, ssize_t Ndim>
NDArray<T, Ndim> NDArray<T, Ndim>::operator*(const T &value) {
NDArray result = *this;
result *= value;
return result;
}
// template <typename T, ssize_t Ndim> void NDArray<T, Ndim>::Print() {
// if (shape_[0] < 20 && shape_[1] < 20)
// Print_all();
// else
// Print_some();
// }
///////////////////////////////////////////////////////////////////////////////
// Free functions closely related to NDArray
//
///////////////////////////////////////////////////////////////////////////////
template <typename T, ssize_t Ndim>
std::ostream &operator<<(std::ostream &os, const NDArray<T, Ndim> &arr) {
@@ -399,27 +499,9 @@ std::ostream &operator<<(std::ostream &os, const NDArray<T, Ndim> &arr) {
return os;
}
template <typename T, ssize_t Ndim> void NDArray<T, Ndim>::Print_all() {
for (auto row = 0; row < shape_[0]; ++row) {
for (auto col = 0; col < shape_[1]; ++col) {
std::cout << std::setw(3);
std::cout << (*this)(row, col) << " ";
}
std::cout << "\n";
}
}
template <typename T, ssize_t Ndim> void NDArray<T, Ndim>::Print_some() {
for (auto row = 0; row < 5; ++row) {
for (auto col = 0; col < 5; ++col) {
std::cout << std::setw(7);
std::cout << (*this)(row, col) << " ";
}
std::cout << "\n";
}
}
template <typename T, ssize_t Ndim>
void save(NDArray<T, Ndim> &img, std::string &pathname) {
[[deprecated("Saving of raw arrays without metadata is deprecated")]] void
save(NDArray<T, Ndim> &img, std::string &pathname) {
std::ofstream f;
f.open(pathname, std::ios::binary);
f.write(img.buffer(), img.size() * sizeof(T));
@@ -427,8 +509,9 @@ void save(NDArray<T, Ndim> &img, std::string &pathname) {
}
template <typename T, ssize_t Ndim>
NDArray<T, Ndim> load(const std::string &pathname,
std::array<ssize_t, Ndim> shape) {
[[deprecated(
"Loading of raw arrays without metadata is deprecated")]] NDArray<T, Ndim>
load(const std::string &pathname, std::array<ssize_t, Ndim> shape) {
NDArray<T, Ndim> img{shape};
std::ifstream f;
f.open(pathname, std::ios::binary);
@@ -437,4 +520,37 @@ NDArray<T, Ndim> load(const std::string &pathname,
return img;
}
} // namespace aare
/**
* @brief Free function to safely divide two NDArrays elementwise, handling
* division by zero. Uses static_cast to convert types as needed.
*
* @tparam RT Result type
* @tparam NT Numerator type
* @tparam DT Denominator type
* @tparam Ndim Number of dimensions
* @param numerator The numerator NDArray
* @param denominator The denominator NDArray
* @return NDArray<RT, Ndim> Resulting NDArray after safe division
* @throws std::runtime_error if the shapes of the numerator and denominator do
* not match
*/
template <typename RT, typename NT, typename DT, ssize_t Ndim>
NDArray<RT, Ndim> safe_divide(const NDArray<NT, Ndim> &numerator,
const NDArray<DT, Ndim> &denominator) {
if (numerator.shape() != denominator.shape()) {
throw std::runtime_error(
"Shapes of numerator and denominator must match");
}
NDArray<RT, Ndim> result(numerator.shape());
for (ssize_t i = 0; i < numerator.size(); ++i) {
if (denominator[i] != 0) {
result[i] =
static_cast<RT>(numerator[i]) / static_cast<RT>(denominator[i]);
} else {
result[i] = RT{0}; // or handle division by zero as needed
}
}
return result;
}
} // namespace aare
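A short usage sketch of safe_divide (values are illustrative):
aare::NDArray<int, 2> counts(std::array<ssize_t, 2>{256, 256});
aare::NDArray<int, 2> exposures(std::array<ssize_t, 2>{256, 256});
counts = 10;                                              // scalar assignment fills every element
exposures = 2;
auto rate = aare::safe_divide<double>(counts, exposures); // NDArray<double, 2>, all elements 5.0
// any pixel with a zero denominator would come out as 0.0 instead of dividing by zero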

View File

@@ -1,3 +1,4 @@
// SPDX-License-Identifier: MPL-2.0
#pragma once
#include "aare/ArrayExpr.hpp"
#include "aare/defs.hpp"
@@ -26,14 +27,31 @@ Shape<Ndim> make_shape(const std::vector<size_t> &shape) {
return arr;
}
template <ssize_t Dim = 0, typename Strides>
ssize_t element_offset(const Strides & /*unused*/) {
return 0;
/**
* @brief Helper function to drop the first dimension of a shape.
* This is useful when you want to create a 2D view from a 3D array.
* @param shape The shape to drop the first dimension from.
* @return A new shape with the first dimension dropped.
*/
template<size_t Ndim>
Shape<Ndim-1> drop_first_dim(const Shape<Ndim> &shape) {
static_assert(Ndim > 1, "Cannot drop first dimension from a 1D shape");
Shape<Ndim - 1> new_shape;
std::copy(shape.begin() + 1, shape.end(), new_shape.begin());
return new_shape;
}
template <ssize_t Dim = 0, typename Strides, typename... Ix>
ssize_t element_offset(const Strides &strides, ssize_t i, Ix... index) {
return i * strides[Dim] + element_offset<Dim + 1>(strides, index...);
/**
* @brief Helper function used when constructing an NDArray/NDView. Calculates the number
* of elements in the resulting array from a shape.
* @param shape The shape to calculate the number of elements for.
* @return The number of elements in an NDArray/NDView of that shape.
*/
template <size_t Ndim>
size_t num_elements(const Shape<Ndim> &shape) {
return std::accumulate(shape.begin(), shape.end(), 1,
std::multiplies<size_t>());
}
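A small sketch of the two helpers just added (Shape<N> is assumed to be the std::array alias used in the signatures above):
aare::Shape<3> s{2, 3, 4};
auto s2 = aare::drop_first_dim(s);   // Shape<2>{3, 4}
auto n  = aare::num_elements(s);     // 24
auto m  = aare::num_elements(s2);    // 12
(void)n; (void)m;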
template <ssize_t Ndim>
@@ -55,7 +73,8 @@ std::array<ssize_t, Ndim> make_array(const std::vector<ssize_t> &vec) {
}
template <typename T, ssize_t Ndim = 2>
class NDView : public ArrayExpr<NDView<T, Ndim>, Ndim> {
class NDView : public ArrayExpr<NDView<T, Ndim>, Ndim>,
public NDIndexOps<NDView<T, Ndim>, T, Ndim> {
public:
NDView() = default;
~NDView() = default;
@@ -67,22 +86,9 @@ class NDView : public ArrayExpr<NDView<T, Ndim>, Ndim> {
size_(std::accumulate(std::begin(shape), std::end(shape), 1,
std::multiplies<>())) {}
// NDView(T *buffer, const std::vector<ssize_t> &shape)
// : buffer_(buffer),
// strides_(c_strides<Ndim>(make_array<Ndim>(shape))),
// shape_(make_array<Ndim>(shape)),
// size_(std::accumulate(std::begin(shape), std::end(shape), 1,
// std::multiplies<>())) {}
using NDIndexOps<NDView<T, Ndim>, T, Ndim>::operator();
using NDIndexOps<NDView<T, Ndim>, T, Ndim>::operator[];
template <typename... Ix>
std::enable_if_t<sizeof...(Ix) == Ndim, T &> operator()(Ix... index) {
return buffer_[element_offset(strides_, index...)];
}
template <typename... Ix>
std::enable_if_t<sizeof...(Ix) == Ndim, T &> operator()(Ix... index) const {
return buffer_[element_offset(strides_, index...)];
}
ssize_t size() const { return static_cast<ssize_t>(size_); }
size_t total_bytes() const { return size_ * sizeof(T); }
@@ -92,13 +98,17 @@ class NDView : public ArrayExpr<NDView<T, Ndim>, Ndim> {
T *end() { return buffer_ + size_; }
T const *begin() const { return buffer_; }
T const *end() const { return buffer_ + size_; }
T &operator()(ssize_t i) const { return buffer_[i]; }
T &operator[](ssize_t i) const { return buffer_[i]; }
bool operator==(const NDView &other) const {
if (size_ != other.size_)
return false;
for (uint64_t i = 0; i != size_; ++i) {
if (shape_ != other.shape_)
return false;
for (size_t i = 0; i != size_; ++i) {
if (buffer_[i] != other.buffer_[i])
return false;
}
@@ -157,8 +167,25 @@ class NDView : public ArrayExpr<NDView<T, Ndim>, Ndim> {
auto shape(ssize_t i) const { return shape_[i]; }
T *data() { return buffer_; }
const T *data() const { return buffer_; }
void print_all() const;
/**
* @brief Create a subview of a range of the first dimension.
* This is useful for splitting batches of frames for parallel processing.
* @param first The first index of the subview (inclusive).
* @param last The last index of the subview (exclusive).
* @return A new NDView that is a subview of the current view.
* @throws std::runtime_error if the range is invalid.
*/
NDView sub_view(ssize_t first, ssize_t last) const {
if (first < 0 || last > shape_[0] || first >= last)
throw std::runtime_error(LOCATION + "Invalid sub_view range");
auto new_shape = shape_;
new_shape[0] = last - first;
return NDView(buffer_ + first * strides_[0], new_shape);
}
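A sketch of splitting a frame stack with the new sub_view (the parallel work itself is omitted):
aare::NDArray<uint16_t, 3> stack(std::array<ssize_t, 3>{100, 512, 1024});
auto all = stack.view();                    // NDView<uint16_t, 3> over all 100 frames
auto first_half  = all.sub_view(0, 50);     // frames [0, 50)
auto second_half = all.sub_view(50, 100);   // frames [50, 100)
// both subviews alias the memory owned by stack; no data is copied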
private:
T *buffer_{nullptr};
std::array<ssize_t, Ndim> strides_{};
@@ -180,6 +207,7 @@ class NDView : public ArrayExpr<NDView<T, Ndim>, Ndim> {
return *this;
}
};
template <typename T, ssize_t Ndim> void NDView<T, Ndim>::print_all() const {
for (auto row = 0; row < shape_[0]; ++row) {
for (auto col = 0; col < shape_[1]; ++col) {
@@ -206,4 +234,4 @@ template <typename T> NDView<T, 1> make_view(std::vector<T> &vec) {
return NDView<T, 1>(vec.data(), {static_cast<ssize_t>(vec.size())});
}
} // namespace aare
} // namespace aare
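A minimal usage sketch of the new sub_view, assuming an existing NDView<uint16_t, 3> named frames over 100 frames (name and sizes are illustrative):
auto first_half = frames.sub_view(0, 50);    // frames [0, 50)
auto second_half = frames.sub_view(50, 100); // frames [50, 100)
// Both subviews alias the original buffer; no data is copied.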

View File

@@ -1,3 +1,4 @@
// SPDX-License-Identifier: MPL-2.0
#pragma once
#include "aare/Dtype.hpp"
#include "aare/FileInterface.hpp"

View File

@@ -1,3 +1,4 @@
// SPDX-License-Identifier: MPL-2.0
#pragma once
#include <algorithm>

View File

@@ -1,3 +1,4 @@
// SPDX-License-Identifier: MPL-2.0
#pragma once
#include "aare/Frame.hpp"
#include "aare/NDArray.hpp"

View File

@@ -1,3 +1,4 @@
// SPDX-License-Identifier: MPL-2.0
#pragma once
#include "aare/NDArray.hpp"

View File

@@ -1,3 +1,4 @@
// SPDX-License-Identifier: MPL-2.0
/*
* Copyright (c) Meta Platforms, Inc. and affiliates.
*

View File

@@ -1,27 +1,20 @@
// SPDX-License-Identifier: MPL-2.0
#pragma once
#include "aare/DetectorGeometry.hpp"
#include "aare/FileInterface.hpp"
#include "aare/Frame.hpp"
#include "aare/NDArray.hpp" //for pixel map
#include "aare/RawMasterFile.hpp"
#include "aare/RawSubFile.hpp"
#ifdef AARE_TESTS
#include "../tests/friend_test.hpp"
#endif
#include <optional>
namespace aare {
struct ModuleConfig {
int module_gap_row{};
int module_gap_col{};
bool operator==(const ModuleConfig &other) const {
if (module_gap_col != other.module_gap_col)
return false;
if (module_gap_row != other.module_gap_row)
return false;
return true;
}
};
/**
* @brief Class to read .raw files. The class will parse the master file
* to find the correct geometry for the frames.
@@ -29,11 +22,12 @@ struct ModuleConfig {
* Consider using that unless you need raw file specific functionality.
*/
class RawFile : public FileInterface {
std::vector<std::unique_ptr<RawSubFile>> m_subfiles;
ModuleConfig cfg{0, 0};
RawMasterFile m_master;
size_t m_current_frame{};
size_t m_current_subfile{};
DetectorGeometry m_geometry;
public:
@@ -67,13 +61,21 @@ class RawFile : public FileInterface {
size_t rows() const override;
size_t cols() const override;
size_t bitdepth() const override;
xy geometry();
size_t n_modules() const;
size_t n_modules_in_roi() const;
xy geometry() const;
RawMasterFile master() const;
DetectorType detector_type() const override;
/**
* @brief read the header of the file
* @param fname path to the data subfile
* @return DetectorHeader
*/
static DetectorHeader read_header(const std::filesystem::path &fname);
private:
/**
* @brief read the frame at the given frame index into the image buffer
@@ -91,15 +93,7 @@ class RawFile : public FileInterface {
*/
Frame get_frame(size_t frame_index);
/**
* @brief read the header of the file
* @param fname path to the data subfile
* @return DetectorHeader
*/
static DetectorHeader read_header(const std::filesystem::path &fname);
void open_subfiles();
void find_geometry();
};
} // namespace aare
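A small sketch of the now-public static read_header; the file name is purely illustrative:
auto hdr = RawFile::read_header("run_d0_f0_0.raw"); // DetectorHeader of one subfile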

View File

@@ -1,11 +1,12 @@
// SPDX-License-Identifier: MPL-2.0
#pragma once
#include "aare/defs.hpp"
#include "aare/scan_parameters.hpp"
#include <algorithm>
#include <filesystem>
#include <fmt/format.h>
#include <fstream>
#include <optional>
#include <chrono>
#include <nlohmann/json.hpp>
using json = nlohmann::json;
@@ -41,6 +42,31 @@ class RawFileNameComponents {
void set_old_scheme(bool old_scheme);
};
class ScanParameters {
bool m_enabled = false;
DACIndex m_dac{};
int m_start = 0;
int m_stop = 0;
int m_step = 0;
int64_t m_settleTime = 0; // [ns]
public:
ScanParameters(const std::string &par);
ScanParameters(const bool enabled, const DACIndex dac, const int start,
const int stop, const int step, const int64_t settleTime);
ScanParameters() = default;
ScanParameters(const ScanParameters &) = default;
ScanParameters &operator=(const ScanParameters &) = default;
ScanParameters(ScanParameters &&) = default;
int start() const;
int stop() const;
int step() const;
DACIndex dac() const;
bool enabled() const;
int64_t settleTime() const;
void increment_stop();
};
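A sketch of constructing ScanParameters from the bracketed string format documented in the removed scan_parameters.hpp; whether the new constructor also parses the settleTime line is not shown in this diff:
ScanParameters sp("[enabled\ndac dac 4\nstart 500\nstop 2200\nstep 5\nsettleTime 100us\n]");
if (sp.enabled()) {
    int n_steps = (sp.stop() - sp.start()) / sp.step(); // 340 for the values above
}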
/**
* @brief Class for parsing a master file either in our .json format or the old
* .raw format
@@ -57,8 +83,13 @@ class RawMasterFile {
size_t m_pixels_y{};
size_t m_pixels_x{};
size_t m_bitdepth{};
uint8_t m_quad = 0;
std::optional<std::chrono::nanoseconds> m_exptime;
std::chrono::nanoseconds m_period{0};
xy m_geometry{};
xy m_udp_interfaces_per_module{1, 1};
size_t m_max_frames_per_file{};
// uint32_t m_adc_mask{}; // TODO! implement reading
@@ -76,12 +107,13 @@ class RawMasterFile {
std::optional<size_t> m_digital_samples;
std::optional<size_t> m_transceiver_samples;
std::optional<size_t> m_number_of_rows;
std::optional<uint8_t> m_quad;
std::optional<uint8_t> m_counter_mask;
std::optional<ROI> m_roi;
public:
RawMasterFile(const std::filesystem::path &fpath);
RawMasterFile(std::istream &is, const std::string &fname); // for testing
std::filesystem::path data_fname(size_t mod_id, size_t file_id) const;
@@ -95,25 +127,31 @@ class RawMasterFile {
size_t max_frames_per_file() const;
size_t bitdepth() const;
size_t frame_padding() const;
xy udp_interfaces_per_module() const;
const FrameDiscardPolicy &frame_discard_policy() const;
size_t total_frames_expected() const;
xy geometry() const;
size_t n_modules() const;
uint8_t quad() const;
std::optional<size_t> analog_samples() const;
std::optional<size_t> digital_samples() const;
std::optional<size_t> transceiver_samples() const;
std::optional<size_t> number_of_rows() const;
std::optional<uint8_t> quad() const;
std::optional<uint8_t> counter_mask() const;
std::optional<ROI> roi() const;
ScanParameters scan_parameters() const;
std::optional<std::chrono::nanoseconds> exptime() const { return m_exptime; }
std::chrono::nanoseconds period() const { return m_period; }
private:
void parse_json(const std::filesystem::path &fpath);
void parse_raw(const std::filesystem::path &fpath);
void parse_json(std::istream &is);
void parse_raw(std::istream &is);
void retrieve_geometry();
};
} // namespace aare
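A sketch of reading the newly parsed timing members; the master file path is illustrative:
RawMasterFile m("run_master_0.json");
if (auto t = m.exptime())
    fmt::print("exptime: {} ns\n", t->count());
fmt::print("period:  {} ns\n", m.period().count());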

View File

@@ -1,3 +1,4 @@
// SPDX-License-Identifier: MPL-2.0
#pragma once
#include "aare/Frame.hpp"
#include "aare/defs.hpp"

View File

@@ -1,3 +1,4 @@
// SPDX-License-Identifier: MPL-2.0
#pragma once
#include <algorithm>
@@ -124,7 +125,7 @@ template <typename T> int VarClusterFinder<T>::check_neighbours(int i, int j) {
const auto row = i + di[k];
const auto col = j + dj[k];
if (row >= 0 && col >= 0 && row < shape_[0] && col < shape_[1]) {
auto tmp = labeled_.value(i + di[k], j + dj[k]);
auto tmp = labeled_(i + di[k], j + dj[k]);
if (tmp != 0)
neighbour_labels.push_back(tmp);
}
@@ -240,14 +241,14 @@ template <typename T> void VarClusterFinder<T>::first_pass() {
for (ssize_t i = 0; i < original_.size(); ++i) {
if (use_noise_map)
threshold_ = 5 * noiseMap(i);
binary_(i) = (original_(i) > threshold_);
threshold_ = 5 * noiseMap[i];
binary_[i] = (original_[i] > threshold_);
}
for (int i = 0; i < shape_[0]; ++i) {
for (int j = 0; j < shape_[1]; ++j) {
// do we have someting to process?
// do we have something to process?
if (binary_(i, j)) {
auto tmp = check_neighbours(i, j);
if (tmp != 0) {

View File

@@ -1,3 +1,4 @@
// SPDX-License-Identifier: MPL-2.0
#pragma once
#include <aare/NDArray.hpp>
@@ -109,4 +110,19 @@ template <typename Container> bool all_equal(const Container &c) {
return false;
}
/**
* linear interpolation
* @param bin_edge left and right bin edges
* @param bin_values function values at bin edges
* @param coord coordinate to interpolate at
* @return interpolated value at coord
*/
inline double linear_interpolation(const std::pair<double, double> &bin_edge,
const std::pair<double, double> &bin_values,
const double coord) {
const double bin_width = bin_edge.second - bin_edge.first;
return bin_values.first * (1 - (coord - bin_edge.first) / bin_width) +
bin_values.second * (coord - bin_edge.first) / bin_width;
}
} // namespace aare
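A worked example of linear_interpolation (values chosen for illustration):
double v = linear_interpolation({0.0, 1.0}, {2.0, 4.0}, 0.25);
// bin_width = 1.0, v = 2.0 * 0.75 + 4.0 * 0.25 = 2.5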

View File

@@ -0,0 +1,210 @@
// SPDX-License-Identifier: MPL-2.0
#pragma once
#include "aare/NDArray.hpp"
#include "aare/NDView.hpp"
#include "aare/defs.hpp"
#include "aare/utils/par.hpp"
#include "aare/utils/task.hpp"
#include <cstdint>
#include <future>
namespace aare {
// Really try to convince the compiler to inline this function
// TODO! Clang?
#if (defined(_MSC_VER) || defined(__INTEL_COMPILER))
#define STRONG_INLINE __forceinline
#else
#define STRONG_INLINE inline
#endif
#if defined(__GNUC__)
#define ALWAYS_INLINE __attribute__((always_inline)) inline
#else
#define ALWAYS_INLINE STRONG_INLINE
#endif
/**
* @brief Get the gain from the raw ADC value. In Jungfrau the gain is
* encoded in the leftmost 2 bits of the raw value.
* 00 -> gain 0
* 01 -> gain 1
* 11 -> gain 2
* @param raw the raw ADC value
* @return the gain as an integer
*/
ALWAYS_INLINE int get_gain(uint16_t raw) {
switch (raw >> 14) {
case 0:
return 0;
case 1:
return 1;
case 3:
return 2;
default:
return 0;
}
}
ALWAYS_INLINE uint16_t get_value(uint16_t raw) { return raw & ADC_MASK; }
ALWAYS_INLINE std::pair<uint16_t, int16_t> get_value_and_gain(uint16_t raw) {
static_assert(
sizeof(std::pair<uint16_t, int16_t>) ==
sizeof(uint16_t) + sizeof(int16_t),
"Size of pair<uint16_t, int16_t> does not match expected size");
return {get_value(raw), get_gain(raw)};
}
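A quick sketch of decoding a single raw Jungfrau word with the helpers above (the input value is illustrative):
uint16_t raw = 0xC123;                       // gain bits 0b11, ADC value 0x0123
auto [value, gain] = get_value_and_gain(raw);
// value == 0x0123 (raw & ADC_MASK), gain == 2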
template <class T>
void apply_calibration_impl(NDView<T, 3> res, NDView<uint16_t, 3> raw_data,
NDView<T, 3> ped, NDView<T, 3> cal, int start,
int stop) {
for (int frame_nr = start; frame_nr != stop; ++frame_nr) {
for (int row = 0; row != raw_data.shape(1); ++row) {
for (int col = 0; col != raw_data.shape(2); ++col) {
auto [value, gain] =
get_value_and_gain(raw_data(frame_nr, row, col));
// Using multiplication does not seem to speed up the code here
// ADU/keV is the standard unit for the calibration which
// means rewriting the formula is not worth it.
res(frame_nr, row, col) =
(value - ped(gain, row, col)) / cal(gain, row, col);
}
}
}
}
template <class T>
void apply_calibration_impl(NDView<T, 3> res, NDView<uint16_t, 3> raw_data,
NDView<T, 2> ped, NDView<T, 2> cal, int start,
int stop) {
for (int frame_nr = start; frame_nr != stop; ++frame_nr) {
for (int row = 0; row != raw_data.shape(1); ++row) {
for (int col = 0; col != raw_data.shape(2); ++col) {
auto [value, gain] =
get_value_and_gain(raw_data(frame_nr, row, col));
// Using multiplication does not seem to speed up the code here
// ADU/keV is the standard unit for the calibration which
// means rewriting the formula is not worth it.
// Set the value to 0 if the gain is not 0
if (gain == 0)
res(frame_nr, row, col) =
(value - ped(row, col)) / cal(row, col);
else
res(frame_nr, row, col) = 0;
}
}
}
}
template <class T, ssize_t Ndim = 3>
void apply_calibration(NDView<T, 3> res, NDView<uint16_t, 3> raw_data,
NDView<T, Ndim> ped, NDView<T, Ndim> cal,
ssize_t n_threads = 4) {
std::vector<std::future<void>> futures;
futures.reserve(n_threads);
auto limits = split_task(0, raw_data.shape(0), n_threads);
for (const auto &lim : limits)
futures.push_back(std::async(
static_cast<void (*)(NDView<T, 3>, NDView<uint16_t, 3>,
NDView<T, Ndim>, NDView<T, Ndim>, int, int)>(
apply_calibration_impl),
res, raw_data, ped, cal, lim.first, lim.second));
for (auto &f : futures)
f.get();
}
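A usage sketch for apply_calibration, assuming NDView inputs named raw, ped and cal and the NDArray::view() accessor used elsewhere in the library (all names and shapes are illustrative):
// raw: NDView<uint16_t, 3> (n_frames x 512 x 1024)
// ped, cal: NDView<double, 3> (3 gains x 512 x 1024)
NDArray<double, 3> res({raw.shape(0), raw.shape(1), raw.shape(2)});
apply_calibration(res.view(), raw, ped, cal, 4); // 4 worker threads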
template <bool only_gain0>
std::pair<NDArray<size_t, 3>, NDArray<size_t, 3>>
sum_and_count_per_gain(NDView<uint16_t, 3> raw_data) {
constexpr ssize_t num_gains = only_gain0 ? 1 : 3;
NDArray<size_t, 3> accumulator(
std::array<ssize_t, 3>{num_gains, raw_data.shape(1), raw_data.shape(2)},
0);
NDArray<size_t, 3> count(
std::array<ssize_t, 3>{num_gains, raw_data.shape(1), raw_data.shape(2)},
0);
for (int frame_nr = 0; frame_nr != raw_data.shape(0); ++frame_nr) {
for (int row = 0; row != raw_data.shape(1); ++row) {
for (int col = 0; col != raw_data.shape(2); ++col) {
auto [value, gain] =
get_value_and_gain(raw_data(frame_nr, row, col));
if (gain != 0 && only_gain0)
continue;
accumulator(gain, row, col) += value;
count(gain, row, col) += 1;
}
}
}
return {std::move(accumulator), std::move(count)};
}
template <typename T, bool only_gain0 = false>
NDArray<T, 3 - static_cast<ssize_t>(only_gain0)>
calculate_pedestal(NDView<uint16_t, 3> raw_data, ssize_t n_threads) {
constexpr ssize_t num_gains = only_gain0 ? 1 : 3;
std::vector<std::future<std::pair<NDArray<size_t, 3>, NDArray<size_t, 3>>>>
futures;
futures.reserve(n_threads);
auto subviews = make_subviews(raw_data, n_threads);
for (auto view : subviews) {
futures.push_back(std::async(
static_cast<std::pair<NDArray<size_t, 3>, NDArray<size_t, 3>> (*)(
NDView<uint16_t, 3>)>(&sum_and_count_per_gain<only_gain0>),
view));
}
Shape<3> shape{num_gains, raw_data.shape(1), raw_data.shape(2)};
NDArray<size_t, 3> accumulator(shape, 0);
NDArray<size_t, 3> count(shape, 0);
// Combine the results from the futures
for (auto &f : futures) {
auto [acc, cnt] = f.get();
accumulator += acc;
count += cnt;
}
// Will move to a NDArray<T, 3 - static_cast<ssize_t>(only_gain0)>
// if only_gain0 is true
return safe_divide<T>(accumulator, count);
}
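A short sketch of calculate_pedestal on dark frames, where raw_darks is an assumed NDView<uint16_t, 3>:
auto ped    = calculate_pedestal<double>(raw_darks, 4);    // all three gains
auto ped_g0 = calculate_pedestal_g0<double>(raw_darks, 4); // gain 0 only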
/**
* @brief Count the number of switching pixels in the raw data.
* This function counts the number of pixels that switch between G1 and G2 gain.
* It returns an NDArray with the switching count per pixel.
* @param raw_data The NDView containing the raw data
* @return An NDArray with the switching count per pixel
*/
NDArray<int, 2> count_switching_pixels(NDView<uint16_t, 3> raw_data);
/**
* @brief Count the number of switching pixels in the raw data.
* This function counts the number of pixels that switch between G1 and G2 gain.
* It returns an NDArray with the switching count per pixel.
* @param raw_data The NDView containing the raw data
* @param n_threads The number of threads to use for parallel processing
* @return An NDArray with the switching count per pixel
*/
NDArray<int, 2> count_switching_pixels(NDView<uint16_t, 3> raw_data,
ssize_t n_threads);
template <typename T>
auto calculate_pedestal_g0(NDView<uint16_t, 3> raw_data, ssize_t n_threads) {
return calculate_pedestal<T, true>(raw_data, n_threads);
}
} // namespace aare

View File

@@ -1,10 +1,12 @@
// SPDX-License-Identifier: MPL-2.0
#pragma once
#include "aare/defs.hpp"
#include <aare/NDView.hpp>
#include <cstdint>
#include <vector>
namespace aare {
uint16_t adc_sar_05_decode64to16(uint64_t input);
uint16_t adc_sar_04_decode64to16(uint64_t input);
void adc_sar_05_decode64to16(NDView<uint64_t, 2> input,
@@ -12,6 +14,25 @@ void adc_sar_05_decode64to16(NDView<uint64_t, 2> input,
void adc_sar_04_decode64to16(NDView<uint64_t, 2> input,
NDView<uint16_t, 2> output);
/**
* @brief Called with a 32-bit unsigned integer, shifts by offset
* and then returns the lower 24 bits as a 32-bit integer
* @param input 32-bit input value
* @param offset (should be in range 0-7 to allow for full 24 bits)
* @return uint32_t
*/
uint32_t mask32to24bits(uint32_t input, BitOffset offset={});
/**
* @brief Expand 24-bit values in an 8-bit buffer to 32-bit unsigned integers.
* Used for detectors with 24-bit counters in combination with the CTB.
*
* @param input View of the 24-bit data as uint8_t (no native 24-bit data type exists)
* @param output Destination of the expanded data (32-bit, unsigned)
* @param offset Offset within the first byte to where the data starts (0-7 bits)
*/
void expand24to32bit(NDView<uint8_t,1> input, NDView<uint32_t,1> output, BitOffset offset={});
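A sketch of expanding packed 24-bit data with make_view from NDView.hpp (buffer sizes are illustrative):
std::vector<uint8_t> packed(12 * 3); // 12 packed 24-bit values, 3 bytes each
std::vector<uint32_t> out(12);
expand24to32bit(make_view(packed), make_view(out)); // default BitOffset{} = offset 0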
/**
* @brief Apply custom weights to a 16-bit input value. Will sum up
* weights[i]**i for each bit i that is set in the input value.

View File

@@ -1,15 +1,12 @@
// SPDX-License-Identifier: MPL-2.0
#pragma once
#include "aare/Dtype.hpp"
#include "aare/type_traits.hpp"
#include <algorithm>
#include <array>
#include <cassert>
#include <cstdint>
#include <cstring>
#include <iostream>
#include <sstream>
#include <stdexcept>
#include <string>
#include <string_view>
@@ -178,35 +175,6 @@ template <typename T> struct t_xy {
};
using xy = t_xy<uint32_t>;
/**
* @brief Class to hold the geometry of a module. Where pixel 0 is located and
* the size of the module
*/
struct ModuleGeometry {
int origin_x{};
int origin_y{};
int height{};
int width{};
int row_index{};
int col_index{};
};
/**
* @brief Class to hold the geometry of a detector. Number of modules, their
* size and where pixel 0 for each module is located
*/
struct DetectorGeometry {
int modules_x{};
int modules_y{};
int pixels_x{};
int pixels_y{};
int module_gap_row{};
int module_gap_col{};
std::vector<ModuleGeometry> module_pixel_0;
auto size() const { return module_pixel_0.size(); }
};
struct ROI {
ssize_t xmin{};
ssize_t xmax{};
@@ -248,15 +216,153 @@ enum class DetectorType {
Unknown
};
/**
* @brief Enum to define the Digital to Analog Converter (DAC) index.
* The values are the same as in slsDetectorPackage
*/
enum DACIndex {
DAC_0,
DAC_1,
DAC_2,
DAC_3,
DAC_4,
DAC_5,
DAC_6,
DAC_7,
DAC_8,
DAC_9,
DAC_10,
DAC_11,
DAC_12,
DAC_13,
DAC_14,
DAC_15,
DAC_16,
DAC_17,
VSVP,
VTRIM,
VRPREAMP,
VRSHAPER,
VSVN,
VTGSTV,
VCMP_LL,
VCMP_LR,
VCAL,
VCMP_RL,
RXB_RB,
RXB_LB,
VCMP_RR,
VCP,
VCN,
VISHAPER,
VTHRESHOLD,
IO_DELAY,
VREF_DS,
VOUT_CM,
VIN_CM,
VREF_COMP,
VB_COMP,
VDD_PROT,
VIN_COM,
VREF_PRECH,
VB_PIXBUF,
VB_DS,
VREF_H_ADC,
VB_COMP_FE,
VB_COMP_ADC,
VCOM_CDS,
VREF_RSTORE,
VB_OPA_1ST,
VREF_COMP_FE,
VCOM_ADC1,
VREF_L_ADC,
VREF_CDS,
VB_CS,
VB_OPA_FD,
VCOM_ADC2,
VCASSH,
VTH2,
VRSHAPER_N,
VIPRE_OUT,
VTH3,
VTH1,
VICIN,
VCAS,
VCAL_N,
VIPRE,
VCAL_P,
VDCSH,
VBP_COLBUF,
VB_SDA,
VCASC_SFP,
VIPRE_CDS,
IBIAS_SFP,
ADC_VPP,
HIGH_VOLTAGE,
TEMPERATURE_ADC,
TEMPERATURE_FPGA,
TEMPERATURE_FPGAEXT,
TEMPERATURE_10GE,
TEMPERATURE_DCDC,
TEMPERATURE_SODL,
TEMPERATURE_SODR,
TEMPERATURE_FPGA2,
TEMPERATURE_FPGA3,
TRIMBIT_SCAN,
V_POWER_A = 100,
V_POWER_B = 101,
V_POWER_C = 102,
V_POWER_D = 103,
V_POWER_IO = 104,
V_POWER_CHIP = 105,
I_POWER_A = 106,
I_POWER_B = 107,
I_POWER_C = 108,
I_POWER_D = 109,
I_POWER_IO = 110,
V_LIMIT = 111,
SLOW_ADC0 = 1000,
SLOW_ADC1,
SLOW_ADC2,
SLOW_ADC3,
SLOW_ADC4,
SLOW_ADC5,
SLOW_ADC6,
SLOW_ADC7,
SLOW_ADC_TEMP
};
// helper pair class to easily expose in python
template <typename T1, typename T2> struct Sum_index_pair {
T1 sum;
T2 index;
};
enum class corner : int {
cTopLeft = 0,
cTopRight = 1,
cBottomLeft = 2,
cBottomRight = 3
};
enum class TimingMode { Auto, Trigger };
enum class FrameDiscardPolicy { NoDiscard, Discard, DiscardPartial };
enum class BurstMode {
Burst_Interal,
Burst_External,
Continuous_Internal,
Continuous_External
};
using DataTypeVariants = std::variant<uint16_t, uint32_t>;
constexpr uint16_t ADC_MASK =
0x3FFF; // used to mask out the gain bits in Jungfrau
class BitOffset {
uint8_t m_offset{};
public:
BitOffset() = default;
explicit BitOffset(uint32_t offset);
uint8_t value() const { return m_offset; }
bool operator==(const BitOffset &other) const;
bool operator<(const BitOffset &other) const;
};
} // namespace aare

View File

@@ -1,15 +0,0 @@
#pragma once
#include "aare/RawMasterFile.hpp" //ROI refactor away
#include "aare/defs.hpp"
namespace aare {
/**
* @brief Update the detector geometry given a region of interest
*
* @param geo
* @param roi
* @return DetectorGeometry
*/
DetectorGeometry update_geometry_with_roi(DetectorGeometry geo, ROI roi);
} // namespace aare

View File

@@ -1,3 +1,4 @@
// SPDX-License-Identifier: MPL-2.0
#pragma once
/*Utility to log to console*/
@@ -105,7 +106,7 @@ class Logger {
}
std::ostringstream &Get() {
os << Color(m_level) << "- " << Timestamp() << " " << ToString(m_level)
os << Color(m_level) << "- " << Timestamp() << " " << Logger::ToString(m_level)
<< ": ";
return os;
}

View File

@@ -1,51 +0,0 @@
#pragma once
#include <string>
#include <sstream>
namespace aare {
class ScanParameters {
bool m_enabled = false;
std::string m_dac;
int m_start = 0;
int m_stop = 0;
int m_step = 0;
// ns m_dac_settle_time{0};
// TODO! add settleTime, requires string to time conversion
public:
// "[enabled\ndac dac 4\nstart 500\nstop 2200\nstep 5\nsettleTime 100us\n]"
// TODO: use StringTo<ScanParameters> and move this to to_string
// add ways of setting the members of the class
ScanParameters(const std::string &par) {
std::istringstream iss(par.substr(1, par.size() - 2));
std::string line;
while (std::getline(iss, line)) {
if (line == "enabled") {
m_enabled = true;
} else if (line.find("dac") != std::string::npos) {
m_dac = line.substr(4);
} else if (line.find("start") != std::string::npos) {
m_start = std::stoi(line.substr(6));
} else if (line.find("stop") != std::string::npos) {
m_stop = std::stoi(line.substr(5));
} else if (line.find("step") != std::string::npos) {
m_step = std::stoi(line.substr(5));
}
}
};
ScanParameters() = default;
ScanParameters(const ScanParameters &) = default;
ScanParameters &operator=(const ScanParameters &) = default;
ScanParameters(ScanParameters &&) = default;
int start() const { return m_start; };
int stop() const { return m_stop; };
int step() const { return m_step; };
const std::string &dac() const { return m_dac; };
bool enabled() const { return m_enabled; };
void increment_stop() { m_stop += 1; };
};
} // namespace aare

View File

@@ -1,11 +0,0 @@
#pragma once
#include <string>
namespace aare {
std::string RemoveUnit(std::string &str);
void TrimWhiteSpaces(std::string &s);
} // namespace aare

View File

@@ -1,288 +0,0 @@
#pragma once
#include "aare/defs.hpp"
#include "aare/scan_parameters.hpp"
#include "aare/string_utils.hpp"
#include <optional>
#include <chrono>
namespace aare {
// generic
template <class T, typename = std::enable_if_t<!is_duration<T>::value>>
std::string ToString(T arg) {
return T(arg);
}
template <typename T,
std::enable_if_t<!is_duration<T>::value && !is_container<T>::value,
int> = 0>
T StringTo(const std::string &arg) {
return T(arg);
}
// time
/** Convert std::chrono::duration with specified output unit */
template <typename T, typename Rep = double>
typename std::enable_if<is_duration<T>::value, std::string>::type
ToString(T t, const std::string &unit) {
using std::chrono::duration;
using std::chrono::duration_cast;
std::ostringstream os;
if (unit == "ns")
os << duration_cast<duration<Rep, std::nano>>(t).count() << unit;
else if (unit == "us")
os << duration_cast<duration<Rep, std::micro>>(t).count() << unit;
else if (unit == "ms")
os << duration_cast<duration<Rep, std::milli>>(t).count() << unit;
else if (unit == "s")
os << duration_cast<duration<Rep>>(t).count() << unit;
else
throw std::runtime_error("Unknown unit: " + unit);
return os.str();
}
/** Convert std::chrono::duration automatically selecting the unit */
template <typename From>
typename std::enable_if<is_duration<From>::value, std::string>::type
ToString(From t) {
using std::chrono::abs;
using std::chrono::duration_cast;
using std::chrono::microseconds;
using std::chrono::milliseconds;
using std::chrono::nanoseconds;
auto tns = duration_cast<nanoseconds>(t);
if (abs(tns) < microseconds(1)) {
return ToString(tns, "ns");
} else if (abs(tns) < milliseconds(1)) {
return ToString(tns, "us");
} else if (abs(tns) < milliseconds(99)) {
return ToString(tns, "ms");
} else {
return ToString(tns, "s");
}
}
template <class Rep, class Period>
std::ostream &operator<<(std::ostream &os,
const std::chrono::duration<Rep, Period> &d) {
return os << ToString(d);
}
template <typename T>
T StringTo(const std::string &t, const std::string &unit) {
double tval{0};
try {
tval = std::stod(t);
} catch (const std::invalid_argument &e) {
throw std::invalid_argument("[ERROR] Could not convert string to time");
}
using std::chrono::duration;
using std::chrono::duration_cast;
if (unit == "ns") {
return duration_cast<T>(duration<double, std::nano>(tval));
} else if (unit == "us") {
return duration_cast<T>(duration<double, std::micro>(tval));
} else if (unit == "ms") {
return duration_cast<T>(duration<double, std::milli>(tval));
} else if (unit == "s" || unit.empty()) {
return duration_cast<T>(std::chrono::duration<double>(tval));
} else {
throw std::invalid_argument("[ERROR] Invalid unit in conversion from "
"string to std::chrono::duration");
}
}
template <typename T, std::enable_if_t<is_duration<T>::value, int> = 0>
T StringTo(const std::string &t) {
std::string tmp{t};
auto unit = RemoveUnit(tmp);
return StringTo<T>(tmp, unit);
}
template <> inline bool StringTo(const std::string &s) {
int i = std::stoi(s, nullptr, 10);
switch (i) {
case 0:
return false;
case 1:
return true;
default:
throw std::runtime_error("Unknown boolean. Expecting be 0 or 1.");
}
}
template <> inline uint8_t StringTo(const std::string &s) {
int base = s.find("0x") != std::string::npos ? 16 : 10;
int value = std::stoi(s, nullptr, base);
if (value < std::numeric_limits<uint8_t>::min() ||
value > std::numeric_limits<uint8_t>::max()) {
throw std::runtime_error("Cannot scan uint8_t from string '" + s +
"'. Value must be in range 0 - 255.");
}
return static_cast<uint8_t>(value);
}
template <> inline uint16_t StringTo(const std::string &s) {
int base = s.find("0x") != std::string::npos ? 16 : 10;
int value = std::stoi(s, nullptr, base);
if (value < std::numeric_limits<uint16_t>::min() ||
value > std::numeric_limits<uint16_t>::max()) {
throw std::runtime_error("Cannot scan uint16_t from string '" + s +
"'. Value must be in range 0 - 65535.");
}
return static_cast<uint16_t>(value);
}
template <> inline uint32_t StringTo(const std::string &s) {
int base = s.find("0x") != std::string::npos ? 16 : 10;
return std::stoul(s, nullptr, base);
}
template <> inline uint64_t StringTo(const std::string &s) {
int base = s.find("0x") != std::string::npos ? 16 : 10;
return std::stoull(s, nullptr, base);
}
template <> inline int StringTo(const std::string &s) {
int base = s.find("0x") != std::string::npos ? 16 : 10;
return std::stoi(s, nullptr, base);
}
/*template <> inline size_t StringTo(const std::string &s) {
int base = s.find("0x") != std::string::npos ? 16 : 10;
return std::stoull(s, nullptr, base);
}*/
// vector
template <typename T> std::string ToString(const std::vector<T> &vec) {
std::ostringstream oss;
oss << "[";
for (size_t i = 0; i < vec.size(); ++i) {
oss << vec[i];
if (i != vec.size() - 1)
oss << ", ";
}
oss << "]";
return oss.str();
}
template <typename T>
std::ostream &operator<<(std::ostream &os, const std::vector<T> &v) {
return os << ToString(v);
}
template <typename Container,
std::enable_if_t<is_container<Container>::value &&
!is_std_string_v<Container> /*&&
!is_map_v<Container>*/
,
int> = 0>
Container StringTo(const std::string &s) {
using Value = typename Container::value_type;
// strip outer brackets
std::string str = s;
str.erase(
std::remove_if(str.begin(), str.end(),
[](unsigned char c) { return c == '[' || c == ']'; }),
str.end());
std::stringstream ss(str);
std::string item;
Container result;
while (std::getline(ss, item, ',')) {
TrimWhiteSpaces(item);
if (!item.empty()) {
result.push_back(StringTo<Value>(item));
}
}
return result;
}
// map
template <typename KeyType, typename ValueType>
std::string ToString(const std::map<KeyType, ValueType> &m) {
std::ostringstream os;
os << '{';
if (!m.empty()) {
auto it = m.cbegin();
os << ToString(it->first) << ": " << ToString(it->second);
it++;
while (it != m.cend()) {
os << ", " << ToString(it->first) << ": " << ToString(it->second);
it++;
}
}
os << '}';
return os.str();
}
template <>
inline std::map<std::string, std::string> StringTo(const std::string &s) {
std::map<std::string, std::string> result;
std::string str = s;
// Remove outer braces if present
if (!str.empty() && str.front() == '{' && str.back() == '}') {
str = str.substr(1, str.size() - 2);
}
std::stringstream ss(str);
std::string item;
while (std::getline(ss, item, ',')) {
auto colon_pos = item.find(':');
if (colon_pos == std::string::npos)
throw std::runtime_error("Missing ':' in item: " + item);
std::string key = item.substr(0, colon_pos);
std::string value = item.substr(colon_pos + 1);
TrimWhiteSpaces(key);
TrimWhiteSpaces(value);
result[key] = value;
}
return result;
}
// optional
template <class T> std::string ToString(const std::optional<T> &opt) {
return opt ? ToString(*opt) : "nullopt";
}
template <typename T>
std::ostream &operator<<(std::ostream &os, const std::optional<T> &opt) {
if (opt)
os << *opt;
else
os << "nullopt";
return os;
}
// enums
template <> std::string ToString(DetectorType arg);
template <> DetectorType StringTo(const std::string & /*name*/);
template <> std::string ToString(TimingMode arg);
template <> TimingMode StringTo(const std::string & /*mode*/);
template <> std::string ToString(FrameDiscardPolicy arg);
template <> FrameDiscardPolicy StringTo(const std::string & /*mode*/);
template <> std::string ToString(BurstMode arg);
template <> BurstMode StringTo(const std::string & /*mode*/);
template <> std::string ToString(ROI arg);
std::ostream &operator<<(std::ostream &os, const ROI &roi);
template <> std::string ToString(ScanParameters arg);
std::ostream &operator<<(std::ostream &os, const ScanParameters &r);
} // namespace aare

View File

@@ -1,72 +0,0 @@
#pragma once
#include <type_traits>
namespace aare {
/**
* Type trait to check if a template parameter is a std::chrono::duration
*/
template <typename T, typename _ = void>
struct is_duration : std::false_type {};
template <typename... Ts> struct is_duration_helper {};
template <typename T>
struct is_duration<T,
typename std::conditional<
false,
is_duration_helper<typename T::rep, typename T::period,
decltype(std::declval<T>().min()),
decltype(std::declval<T>().max()),
decltype(std::declval<T>().zero())>,
void>::type> : public std::true_type {};
/**
* Type trait to evaluate if template parameter is
* complying with a standard container
*/
template <typename T, typename _ = void>
struct is_container : std::false_type {};
template <typename... Ts> struct is_container_helper {};
template <typename T>
struct is_container<
T, typename std::conditional<
false,
is_container_helper<
typename std::remove_reference<T>::type::value_type,
typename std::remove_reference<T>::type::size_type,
typename std::remove_reference<T>::type::iterator,
typename std::remove_reference<T>::type::const_iterator,
decltype(std::declval<T>().size()),
decltype(std::declval<T>().begin()),
decltype(std::declval<T>().end()),
decltype(std::declval<T>().cbegin()),
decltype(std::declval<T>().cend()),
decltype(std::declval<T>().empty())>,
void>::type> : public std::true_type {};
/**
* Type trait to evaluate if template parameter is
* complying with a std::string
*/
template <typename T>
inline constexpr bool is_std_string_v =
std::is_same_v<std::decay_t<T>, std::string>;
/**
* Type trait to evaluate if template parameter is
* complying with std::map
*/
template <typename T> struct is_map : std::false_type {};
template <typename K, typename V, typename... Args>
struct is_map<std::map<K, V, Args...>> : std::true_type {};
template <typename T>
inline constexpr bool is_map_v = is_map<std::decay_t<T>>::value;
} // namespace aare

View File

@@ -1,3 +1,4 @@
// SPDX-License-Identifier: MPL-2.0
#pragma once
#include <fstream>

View File

@@ -1,7 +1,11 @@
// SPDX-License-Identifier: MPL-2.0
#pragma once
#include <thread>
#include <utility>
#include <vector>
#include "aare/utils/task.hpp"
namespace aare {
template <typename F>
@@ -15,4 +19,17 @@ void RunInParallel(F func, const std::vector<std::pair<int, int>> &tasks) {
thread.join();
}
}
template <typename T>
std::vector<NDView<T,3>> make_subviews(NDView<T, 3> &data, ssize_t n_threads) {
std::vector<NDView<T, 3>> subviews;
subviews.reserve(n_threads);
auto limits = split_task(0, data.shape(0), n_threads);
for (const auto &lim : limits) {
subviews.push_back(data.sub_view(lim.first, lim.second));
}
return subviews;
}
} // namespace aare
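A sketch of make_subviews, assuming data is an NDView<uint16_t, 3> over a stack of frames:
auto chunks = make_subviews(data, 4); // std::vector<NDView<uint16_t, 3>>, split along dim 0
for (auto &v : chunks) {
    // process v in its own thread / task
}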

View File

@@ -1,4 +1,5 @@
// SPDX-License-Identifier: MPL-2.0
#pragma once
#include <utility>
#include <vector>

Some files were not shown because too many files have changed in this diff.