169 Commits

Author SHA1 Message Date
119ca96a52 added test
All checks were successful
Build on RHEL9 / build (push) Successful in 2m57s
Build on RHEL8 / build (push) Successful in 2m57s
2025-06-13 11:24:16 +02:00
053536d135 fixed adc mask and roi, also the test for hdf5 for jungfrau 2025-06-13 10:20:33 +02:00
286b2888ca wip, test fails at scanparameters
All checks were successful
Build on RHEL9 / build (push) Successful in 2m52s
Build on RHEL8 / build (push) Successful in 2m55s
2025-06-12 17:40:45 +02:00
52aa1d4d9b added some tostring tests from package 2025-06-12 17:02:30 +02:00
8556ab6564 minor changes in includes 2025-06-12 16:33:08 +02:00
51a87e2a1e scan parameters into a separate test
All checks were successful
Build on RHEL9 / build (push) Successful in 2m55s
Build on RHEL8 / build (push) Successful in 2m57s
2025-06-12 14:30:38 +02:00
e6dd1f3ec2 restructured to have a separate file and test for to_string, string_utils, scan_parameters 2025-06-12 13:57:53 +02:00
65672d06f3 removed hdf5 components. not needed
All checks were successful
Build on RHEL8 / build (push) Successful in 2m55s
Build on RHEL9 / build (push) Successful in 3m1s
2025-06-11 23:45:37 +02:00
bceefe6d64 merge fix from before
All checks were successful
Build on RHEL9 / build (push) Successful in 2m54s
Build on RHEL8 / build (push) Successful in 2m55s
2025-06-11 16:01:47 +02:00
cc57cc7c27 formatted my changes 2025-06-11 15:13:35 +02:00
d89530ed22 merge from formatted main 2025-06-11 15:11:47 +02:00
7917e6f81a works for multi jungfrau and m3 2025-06-11 15:01:12 +02:00
a26073fb41 added all the parameters
All checks were successful
Build on RHEL8 / build (push) Successful in 2m55s
Build on RHEL9 / build (push) Successful in 2m59s
2025-06-11 14:56:24 +02:00
3cc44f780f Added branching strategy etc. to docs (#191)
All checks were successful
Build on RHEL9 / build (push) Successful in 2m56s
Build on RHEL8 / build (push) Successful in 2m57s
Added a section on the ideas behind the library and also explaining the
branching strategy.

---------

Co-authored-by: Dhanya Thattil <dhanya.thattil@psi.ch>
2025-06-11 13:21:21 +02:00
f3f3e2af6a map of strings 2025-06-11 12:04:56 +02:00
031d9503d8 fixed size_t for consistency, done everything except array of ns and map
All checks were successful
Build on RHEL8 / build (push) Successful in 2m54s
Build on RHEL9 / build (push) Successful in 3m0s
2025-06-10 17:20:56 +02:00
2a069f3b6e formatted main branch (#195)
All checks were successful
Build on RHEL8 / build (push) Successful in 2m53s
Build on RHEL9 / build (push) Successful in 3m7s
2025-06-10 16:24:11 +02:00
f9751902a2 formatted main branch 2025-06-10 16:09:06 +02:00
cba2e46e2f threshold energy
All checks were successful
Build on RHEL9 / build (push) Successful in 2m54s
Build on RHEL8 / build (push) Successful in 2m55s
2025-06-10 11:02:13 +02:00
b4a9b4caec minor refactoring 2025-06-10 10:49:53 +02:00
be7f510775 fix for burst mode when not in file 2025-06-10 10:42:43 +02:00
56fa6f6bfb added counter mask, fixed adc mask data type, removed redundant scan parameter parsing 2025-06-10 09:25:20 +02:00
ca4d392b2f dbit offset and transceiver mask
All checks were successful
Build on RHEL8 / build (push) Successful in 2m52s
Build on RHEL9 / build (push) Successful in 2m58s
2025-06-09 16:03:55 +02:00
3b65e92cb7 added num interfaces and ten giga enable 2025-06-09 15:14:35 +02:00
755a8fb2b7 added exptime, period in hdf5, also added print for chrono and StringTo 2025-06-09 14:41:07 +02:00
dc7f6d44f2 fixed master h5
All checks were successful
Build on RHEL8 / build (push) Successful in 2m59s
Build on RHEL9 / build (push) Successful in 3m0s
2025-06-09 00:41:21 +02:00
480e28c927 wip at fixing hdf5 master file
Some checks failed
Build on RHEL9 / build (push) Failing after 1m27s
Build on RHEL8 / build (push) Failing after 1m36s
2025-06-06 16:36:40 +02:00
d7242671b2 Merge branch 'main' into dev/hdf5
All checks were successful
Build on RHEL8 / build (push) Successful in 2m58s
Build on RHEL9 / build (push) Successful in 3m1s
2025-06-05 16:18:05 +02:00
a6a02249bc refactoring, removing redundant functions to read header fields 2025-06-05 16:17:22 +02:00
efd2338f54 deploy docs on release only
All checks were successful
Build on RHEL8 / build (push) Successful in 2m56s
Build on RHEL9 / build (push) Successful in 2m57s
2025-06-05 14:55:00 +02:00
b97f1e24f9 merged developer 2025-06-05 14:42:37 +02:00
1bc2fd770a Binding 5x5, 7x7 and 9x9 clusters in python (#188)
All checks were successful
Build on RHEL8 / build (push) Successful in 2m55s
Build on RHEL9 / build (push) Successful in 2m58s
- New binding code with macros to bind all cluster templates
- Simplified factory function on the python side
- 5x5, 7x7 and 9x9 bindings in python
2025-06-05 08:57:59 +02:00
a3f831dc9e efficiently read in one hyperslab read instead of multiple reads in a loop
All checks were successful
Build on RHEL9 / build (push) Successful in 2m24s
Build on RHEL8 / build (push) Successful in 2m26s
2025-06-05 00:38:27 +02:00
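The commit above replaces a per-frame read loop with a single hyperslab read. A minimal sketch of that idea using the plain HDF5 C++ API is shown below; the dataset path and pixel type are assumptions for illustration, not taken from aare.

```cpp
#include <H5Cpp.h>
#include <cstdint>
#include <string>
#include <vector>

// Read n consecutive frames with one hyperslab selection and a single
// dset.read() call, instead of reading frame by frame in a loop.
std::vector<uint16_t> read_frames(const std::string &fname, hsize_t first,
                                  hsize_t n, hsize_t rows, hsize_t cols) {
    H5::H5File file(fname, H5F_ACC_RDONLY);
    H5::DataSet dset = file.openDataSet("/data"); // dataset name is hypothetical

    H5::DataSpace file_space = dset.getSpace();
    hsize_t offset[3] = {first, 0, 0};
    hsize_t count[3] = {n, rows, cols};
    file_space.selectHyperslab(H5S_SELECT_SET, count, offset);

    H5::DataSpace mem_space(3, count);
    std::vector<uint16_t> buffer(n * rows * cols);
    dset.read(buffer.data(), H5::PredType::NATIVE_UINT16, mem_space, file_space);
    return buffer;
}
```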
76b8872fe6 refactored a bit 2025-06-05 00:11:24 +02:00
55236ce6cc todo minor
All checks were successful
Build on RHEL9 / build (push) Successful in 2m22s
Build on RHEL8 / build (push) Successful in 2m28s
2025-06-04 16:58:09 +02:00
e7d3e667b0 should work for other multiple frame reads 2025-06-04 16:54:18 +02:00
d9cbf0f481 able to get headers from multiple modules as well 2025-06-04 15:59:27 +02:00
5681e18403 merge from latest developer
All checks were successful
Build on RHEL9 / build (push) Successful in 2m21s
Build on RHEL8 / build (push) Successful in 2m40s
2025-06-03 11:24:37 +02:00
69964e08d5 Refactor cluster bindings (#185)
All checks were successful
Build on RHEL9 / build (push) Successful in 2m19s
Build on RHEL8 / build (push) Successful in 2m34s
- Split up the file for cluster bindings
- new file names according to bind_ClassName.hpp
2025-06-03 08:43:40 +02:00
9ecf4f4b44 merge
All checks were successful
Build on RHEL9 / build (push) Successful in 2m22s
Build on RHEL8 / build (push) Successful in 2m30s
2025-05-22 11:23:57 +02:00
f2a024644b bumped version upload on release 2025-05-22 11:10:23 +02:00
9e1b8731b0 RawSubFile support multi file access (#173)
This PR is a fix/improvement to a problem that Jonathan had (#156). The
original implementation opened all subfiles at once, which works for
normal-sized datasets but fails at a certain point (thousands of files).

- This solution uses RawSubFile to manage the different file indices
and only opens the file we need
- Added logger.h from slsDetectorPackage for debug printing (in
production no messages should be visible)
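A minimal sketch of the approach described above, with hypothetical names (this is not the actual RawFile/RawSubFile code): map the requested frame index to a subfile index and open only that file on demand.

```cpp
#include <cstddef>
#include <cstdint>

// Sketch only: instead of holding thousands of subfiles open, compute which
// subfile a frame index falls into and switch files only when needed.
struct LazySubFileReader {
    size_t frames_per_file = 1000;     // e.g. max frames per data file
    size_t current_subfile = SIZE_MAX; // nothing opened yet

    size_t subfile_index(size_t frame_index) const {
        return frame_index / frames_per_file;
    }

    void seek(size_t frame_index) {
        size_t idx = subfile_index(frame_index);
        if (idx != current_subfile) {
            // close the currently open subfile and open subfile `idx` here
            current_subfile = idx;
        }
    }
};
```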
2025-05-22 11:00:03 +02:00
a6eebbe9bd removed extra const on return type, added cast (#177)
All checks were successful
Build on RHEL9 / build (push) Successful in 2m31s
Build on RHEL8 / build (push) Successful in 2m34s
Fixed warnings on apple clang:

- removed extra const on return type
- added cast to suppress a float to double conversion warning
2025-05-20 15:27:38 +02:00
81588fba3b linking to threads and removed extra ; (#176)
All checks were successful
Build on RHEL9 / build (push) Successful in 2m14s
Build on RHEL8 / build (push) Successful in 2m32s
- Fixing broken build of tests on RH8 by linking pthreads
- Removed extra ; causing warnings with -Wpedantic
2025-05-06 17:18:54 +02:00
276283ff14 automated versioning (#175)
Some checks failed
Build on RHEL9 / build (push) Successful in 2m20s
Build on RHEL8 / build (push) Failing after 2m24s
Co-authored-by: mazzol_a <mazzol_a@pc17378.psi.ch>
Co-authored-by: Erik Fröjdh <erik.frojdh@psi.ch>
2025-05-06 14:48:54 +02:00
cf158e2dcd Added scurve fitting (#168)
Some checks failed
Build on RHEL9 / build (push) Successful in 2m21s
Build on RHEL8 / build (push) Failing after 2m26s
- added scurve fitting with two different signs (scurve, scurve2)
- at the moment no option to set initial parameters

---------

Co-authored-by: JulianHeymes <julian.heymes@psi.ch>
2025-05-05 11:40:04 +02:00
12ae1424fb consistent use of ssize_t instead of int64_t (#167)
Some checks failed
Build on RHEL9 / build (push) Successful in 2m10s
Build on RHEL8 / build (push) Failing after 2m33s
- Consistent use of ssize_t to avoid issues on 32-bit platforms and also on
mac (where ssize_t is long long int)
2025-04-25 15:52:02 +02:00
6db201f397 updated conda environment (#169)
- updated dev-env.yml conda environment file
- added boost-histogram as a requirement for the python tests
- added environment file in conda build process
2025-04-25 15:24:45 +02:00
d5226909fe Api cluster vector (#147)
Cluster is newly templated on ClusterSize, Cluster data type and cluster coordinate type, accepting arbitrary cluster sizes.
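A rough sketch of what such a templated cluster could look like; this is an illustration of the idea only, and the actual aare definition may differ.

```cpp
#include <cstdint>

// Illustration only, not the aare definition: the cluster is parameterized on
// the pixel data type, the cluster size and the coordinate type.
template <typename T, int ClusterSizeX, int ClusterSizeY,
          typename CoordType = int16_t>
struct Cluster {
    CoordType x;
    CoordType y;
    T data[ClusterSizeX * ClusterSizeY];
};

// Matches the Cluster3x3i name used in the python example further down
using Cluster3x3i = Cluster<int32_t, 3, 3>;
```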
2025-04-25 12:29:39 +02:00
eb6862ff99 changed name of GainMap to InvertedGainMap
Some checks failed
Build on RHEL9 / build (push) Successful in 2m16s
Build on RHEL8 / build (push) Failing after 2m34s
2025-04-25 12:03:59 +02:00
f06e722dce changes from PR review 2025-04-25 11:38:56 +02:00
2e0424254c removed unnecessary conda numpy variants (#165)
With numpy 2.0 we no longer need to build against every supported numpy
version. This way we can save up to 6 builds.

- https://numpy.org/doc/stable/dev/depending_on_numpy.html
-
https://conda-forge.org/docs/maintainer/knowledge_base/#building-against-numpy
2025-04-25 10:31:40 +02:00
7b5e32a824 Api extra (#166)
Changes to be able to run the example notebooks: 

- Invert gain map on setting (multiplication is faster but user supplies
ADU/energy)
- Cast after applying the gain map not to lose precision (important for
int32 clusters)
- "factor" for ClusterFileSink 
- Cluster size available to be able to create the right file sink
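A short sketch of the gain handling described above (illustrative only, not the aare code): the map supplied in ADU/energy is inverted once when it is set, each value is multiplied by the inverted gain in double precision, and the cast back happens only afterwards.

```cpp
#include <cstdint>

// Sketch only: invert once on setting, multiply per value, cast last so that
// int32 clusters do not lose precision during the conversion.
struct InvertedGain {
    double inv; // 1.0 / (ADU per unit energy), computed when the map is set

    explicit InvertedGain(double adu_per_energy) : inv(1.0 / adu_per_energy) {}

    int32_t apply(int32_t adu) const {
        return static_cast<int32_t>(adu * inv); // cast after applying the gain
    }
};
```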
2025-04-25 10:31:16 +02:00
86d343f5f5 merged with developer
Some checks failed
Build on RHEL9 / build (push) Successful in 2m9s
Build on RHEL8 / build (push) Failing after 2m32s
2025-04-23 11:45:04 +02:00
129e7e9f9d Merge branch 'developer' of github.com:slsdetectorgroup/aare into developer
All checks were successful
Build on RHEL9 / build (push) Successful in 1m59s
Build on RHEL8 / build (push) Successful in 2m29s
2025-04-22 16:24:32 +02:00
58c934d9cf added mpl to conda specs 2025-04-22 16:24:15 +02:00
4088b0889d Merge branch 'main' into developer 2025-04-22 16:18:48 +02:00
d5f8daf194 removed debug option in CMakelist
All checks were successful
Build on RHEL9 / buildh (push) Successful in 2m36s
2025-04-22 16:16:31 +02:00
c6e8e5f6a1 inverted gain map 2025-04-22 16:16:27 +02:00
b501c31e38 added missed commit 2025-04-22 15:22:47 +02:00
326941e2b4 Custom base for decoding ADC data (#163)
New function apply_custom_weights (can we find a better name?) that takes
a uint16 and an NDView<double,1> of bases for the conversion. For each
supplied weight it is used as the base (instead of 2) to convert from bits
to a double.
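As described, each supplied weight replaces 2 as the base for the corresponding bit. A small illustrative sketch of that conversion follows; it is not the aare implementation itself.

```cpp
#include <cmath>
#include <cstdint>
#include <vector>

// Illustration of the described conversion: bit i contributes base[i]^i
// instead of the usual 2^i, accumulated as a double.
double decode_with_custom_bases(uint16_t raw, const std::vector<double> &base) {
    double value = 0.0;
    for (int i = 0; i < 16 && i < static_cast<int>(base.size()); ++i) {
        if (raw & (1u << i))
            value += std::pow(base[i], i);
    }
    return value;
}
```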

---------

Co-authored-by: siebsi <sieb.patr@gmail.com>
2025-04-22 15:20:46 +02:00
84aafa75f6 Building wheels and uploading to pypi (#160)
All checks were successful
Build on RHEL9 / build (push) Successful in 1m56s
Build on RHEL8 / build (push) Successful in 2m13s
Still to be resolved in another PR: 

- Consistent versioning across compiled code, conda and pypi
2025-04-22 08:36:34 +02:00
177459c98a added multithreaded cluster finder test
All checks were successful
Build on RHEL9 / buildh (push) Successful in 2m20s
2025-04-17 17:09:53 +02:00
c49a2fdf8e removed cluster_2x2 and cluster3x3 specializations
All checks were successful
Build on RHEL9 / buildh (push) Successful in 1m58s
2025-04-16 16:40:42 +02:00
14211047ff added function wrapper around ClusterFinderMT and ClusterCollector to construct object 2025-04-16 14:22:44 +02:00
acd9d5d487 moved parts of ClusterFile implementation into declaration
All checks were successful
Build on RHEL9 / buildh (push) Successful in 1m55s
2025-04-15 15:15:34 +02:00
d4050ec557 enum is now enum class 2025-04-15 14:57:25 +02:00
fca9d5d2fa replaced extract template parameters 2025-04-15 14:40:09 +02:00
1174f7f434 fixed calculate eta 2025-04-15 13:18:25 +02:00
2bb7d360bf Adding more tests, fixing hitmap and reading with cuts (#161)
- Fix for hitmap
- Fix for reading clusters with cut
- Added more tests around eta
- Added factory function for creating the cluster finder
2025-04-15 12:25:01 +02:00
a90e532b21 removed extra sum after merge
Some checks failed
Build on RHEL9 / buildh (push) Failing after 1m57s
2025-04-15 08:08:59 +02:00
8d8182c632 Merge branch 'testing_clusters' of github.com:slsdetectorgroup/aare into testing_clusters 2025-04-15 08:05:12 +02:00
5f34ab6df1 minor comment 2025-04-15 08:05:05 +02:00
5c8a5099fd Merge branch 'api_cluster_vector' into testing_clusters 2025-04-14 16:40:47 +02:00
7c93632605 tests and fix 2025-04-14 16:38:25 +02:00
54def26334 added ClusterFile tests, fixed some bugs in ClusterFile
All checks were successful
Build on RHEL9 / buildh (push) Successful in 1m55s
2025-04-14 15:48:09 +02:00
a59e9656be Making RawSubFile usable from Python (#158)
All checks were successful
Build on RHEL8 / build (push) Successful in 1m55s
Build on RHEL9 / build (push) Successful in 1m44s
- Removed a printout left from debugging
- return also header when reading
- added read_n 
- check for error in ifstream
2025-04-11 16:54:21 +02:00
3f753ec900 Some fixes (need more testing later) (#159)
Some checks failed
Build on RHEL9 / buildh (push) Failing after 1m51s
- Change of pointer size caused out of bounds write
- UB to write to memory reserved by std::vector::reserve --> allocate
dummy clusters by using resize instead
   - but now we can't reserve like we want to, need a fix. 
- format string not working, fixed
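The reserve/resize issue mentioned above is a general std::vector pitfall; a minimal illustration (not the aare code):

```cpp
#include <cstddef>
#include <cstdio>
#include <vector>

// reserve() only allocates capacity and does not create elements, so filling
// that memory with fread/memcpy is undefined behaviour. resize() value-
// initializes the elements first, after which writing through data() is fine.
void read_items(std::FILE *fp, std::size_t n, std::size_t item_size) {
    std::vector<char> buf;
    // buf.reserve(n * item_size);   // UB if we then fread into buf.data()
    buf.resize(n * item_size);       // elements now exist
    std::size_t items_read = std::fread(buf.data(), item_size, n, fp);
    (void)items_read; // error handling omitted in this sketch
}
```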
2025-04-11 14:43:12 +02:00
15e52565a9 dont convert to byte 2025-04-11 14:35:20 +02:00
e71569b15e resize before read 2025-04-11 13:38:33 +02:00
92f5421481 np test 2025-04-10 16:58:47 +02:00
113f34cc98 fixes 2025-04-10 16:50:04 +02:00
53a90e197e added additional tests
All checks were successful
Build on RHEL9 / buildh (push) Successful in 1m52s
2025-04-10 10:41:58 +02:00
6e4db45b57 Activated RH8 build on PSI gitea (#155)
All checks were successful
Build on RHEL8 / build (push) Successful in 1m56s
Build on RHEL9 / build (push) Successful in 1m44s
2025-04-10 10:17:16 +02:00
76f050f69f solved merge conflict
Some checks failed
Build on RHEL9 / buildh (push) Failing after 1m22s
2025-04-10 09:21:50 +02:00
a13affa4d3 changed template arguments added tests 2025-04-10 09:13:58 +02:00
8b0eee1e66 fixed warnings and removed ambiguous read_frame (#154)
All checks were successful
Build on RHEL9 / buildh (push) Successful in 1m47s
Fixed warnings:
- unused variable in Interpolator
- Narrowing conversions uint64-->int64

Removed an ambiguous function from JungfrauDataFile
- NDArray read_frame(header&=nullptr)
- Frame read_frame()

NDArray and NDView size() is now signed
2025-04-09 17:54:55 +02:00
894065fe9c added utility plot
All checks were successful
Build on RHEL9 / buildh (push) Successful in 1m48s
2025-04-09 12:19:14 +02:00
f16273a566 Adding support for Jungfrau .dat files (#152)
All checks were successful
Build on RHEL9 / buildh (push) Successful in 1m48s
closes #150 

**Not addressed in this PR:** 

- pixels_per_frame, bytes_per_frame and tell should be made const in
FileInterface
2025-04-08 15:31:04 +02:00
20d1d02fda function signature for push back (#153)
Some checks failed
Build the package using cmake then documentation / build (ubuntu-latest, 3.12) (push) Failing after 48s
This example now works:
```python
cl = Cluster3x3i(5,7,np.array((1,2,3,4,5,6,7,8,9), dtype = np.int32))
cv = ClusterVector_Cluster3x3i()
cv.push_back(cl)
```
2025-04-07 17:18:17 +02:00
10e4e10431 function signature for push back 2025-04-07 15:33:37 +02:00
017960d963 added push_back property
Some checks failed
Build the package using cmake then documentation / build (ubuntu-latest, 3.12) (push) Failing after 37s
2025-04-07 13:41:14 +02:00
a12e43b176 underlying container of ClusterVector is now a std::vector 2025-04-07 12:27:44 +02:00
9de84a7f87 added some python tests
Some checks failed
Build the package using cmake then documentation / build (ubuntu-latest, 3.12) (push) Failing after 41s
2025-04-04 17:19:15 +02:00
885309d97c fix build
Some checks failed
Build the package using cmake then documentation / build (ubuntu-latest, 3.12) (push) Failing after 43s
2025-04-03 17:14:28 +02:00
e24ed68416 fixed include 2025-04-03 16:50:02 +02:00
248d25486f refactored python files 2025-04-03 16:38:12 +02:00
7db1ae4d94 Dev/gitea ci (#151)
All checks were successful
Build on RHEL9 / buildh (push) Successful in 1m41s
Build and test on internal PSI gitea
2025-04-03 13:18:55 +02:00
a24bbd9cf9 started to do python refactoring
Some checks failed
Build the package using cmake then documentation / build (ubuntu-latest, 3.12) (push) Failing after 44s
2025-04-03 11:56:25 +02:00
d7ef9bb1d8 missed some refactoring of datatypes
Some checks failed
Build the package using cmake then documentation / build (ubuntu-latest, 3.12) (push) Failing after 49s
2025-04-03 11:36:15 +02:00
de9fc16e89 generalize is_selected 2025-04-03 09:28:54 +02:00
85a6b5b95e suppress compiler warnings 2025-04-03 09:28:02 +02:00
50eeba4005 restructured GainMap to have its own class and generalized it
Some checks failed
Build the package using cmake then documentation / build (ubuntu-latest, 3.12) (push) Failing after 40s
2025-04-02 17:58:26 +02:00
98d2d6098e refactored other cpp files 2025-04-02 16:00:46 +02:00
61af1105a1 templated eta and updated test 2025-04-02 14:42:38 +02:00
240960d3e7 generalized FindCluster to read in general cluster sizes - assuming that finding the cluster center is the same for all clusters 2025-04-02 12:05:16 +02:00
04728929cb implemented sum_2x2() for general clusters, only one calculate_eta2 function for all clusters
Some checks failed
Build the package using cmake then documentation / build (ubuntu-latest, 3.12) (push) Failing after 37s
2025-04-01 18:29:08 +02:00
3083d51699 merge conflict 2025-04-01 17:50:11 +02:00
4240942cec solved merge conflict 2025-04-01 17:48:48 +02:00
745d09fbe9 changed push_back to take Cluster as input argument 2025-04-01 15:30:10 +02:00
8cad7a50a6 fixed py
Some checks failed
Build the package using cmake then documentation / build (ubuntu-latest, 3.12) (push) Failing after 42s
2025-04-01 15:00:03 +02:00
9d8e803474 Merge branch 'main' into developer 2025-04-01 14:35:27 +02:00
a42c0d645b added roi, noise and gain (#143)
- Moved definitions of Cluster_2x2 and Cluster_3x3 to their own file
- Added optional members for ROI, noise_map and gain_map in ClusterFile

**API:**

After creating the ClusterFile the user can set one or all of: roi,
noise_map, gain_map

```python
f = ClusterFile(fname)
f.set_roi(roi) #aare.ROI
f.set_noise_map(noise_map) #numpy array
f.set_gain_map(gain_map) #numpy array
```

**When reading clusters, they are evaluated in the following order:**

1. If ROI is enabled check that the cluster is within the ROI
1. If noise_map is enabled check that the cluster meets one of the
conditions
    - Center pixel above noise
    - Highest 2x2 sum above 2x noise
    - 3x3 sum above 3x noise
1. If gain_map is set apply the gain map before returning the clusters
(not used for noise cut)
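A compact sketch of the noise cut listed above (illustrative, not the aare implementation); the center value and the two sums are assumed to be precomputed from the cluster data:

```cpp
// Returns true if the cluster survives the noise cut: center pixel above
// noise, or highest 2x2 sum above 2x noise, or 3x3 sum above 3x noise.
bool passes_noise_cut(double center, double max_sum_2x2, double sum_3x3,
                      double noise) {
    return center > noise || max_sum_2x2 > 2 * noise || sum_3x3 > 3 * noise;
}
```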

**Open questions:**
1. Check for out of bounds access in noise and gain map?

closes #139 
closes #135 
closes #90
2025-04-01 14:31:25 +02:00
508adf5016 refactoring of remaining files
Some checks failed
Build the package using cmake then documentation / build (ubuntu-latest, 3.12) (push) Failing after 40s
Build the package using cmake then documentation / deploy (push) Has been skipped
2025-04-01 10:01:23 +02:00
e038bd1646 refactored and put calculate_eta function in separate file 2025-03-31 17:35:39 +02:00
7e5f91c6ec added benchmark to time generalized calculate_eta - twice as long, so will keep the specific versions for 2x2 and 3x3 clusters 2025-03-31 17:04:57 +02:00
ed9ef7c600 removed analyze_cluster function as not used anymore
Some checks failed
Build the package using cmake then documentation / build (ubuntu-latest, 3.12) (push) Failing after 52s
Build the package using cmake then documentation / deploy (push) Has been skipped
2025-03-31 12:26:29 +02:00
57bb6c71ae ClusterSize should be larger than 1
Some checks failed
Build the package using cmake then documentation / build (ubuntu-latest, 3.12) (push) Failing after 51s
Build the package using cmake then documentation / deploy (push) Has been skipped
2025-03-28 14:49:55 +01:00
f8f98b6ec3 Generalized calculate_eta2 function to work with general cluster types 2025-03-28 14:29:20 +01:00
0876b6891a cpp Cluster and ClusterVector and ClusterFile are templated now, they support generic cluster types 2025-03-25 21:42:50 +01:00
6ad76f63c1 Fixed reading clusters with ROI (#142)
Some checks failed
Build the package using cmake then documentation / build (ubuntu-latest, 3.12) (push) Failing after 9s
Fixed incorrect reading of clusters with ROI


closes #141
2025-03-24 14:28:10 +01:00
6e7e81b36b complete mess but need to install RedHat 9 2025-03-21 16:32:54 +01:00
b529b6d33b Merge branch 'main' into developer
All checks were successful
Build the package using cmake then documentation / build (ubuntu-latest, 3.12) (push) Successful in 1m33s
2025-03-19 19:29:15 +01:00
602b04e49f bumped version number
All checks were successful
Build the package using cmake then documentation / build (ubuntu-latest, 3.12) (push) Successful in 1m35s
2025-03-18 17:47:05 +01:00
11cd2ec654 Interpolate (#137)
- added eta based interpolation
2025-03-18 17:45:38 +01:00
e59a361b51 removed workspace
Some checks failed
Build the package using cmake then documentation / build (ubuntu-latest, 3.12) (push) Failing after 48s
Build the package using cmake then documentation / deploy (push) Has been skipped
2025-03-17 15:23:55 +01:00
1ad362ccfc added action for gitea (#136)
All checks were successful
Build the package using cmake then documentation / build (ubuntu-latest, 3.12) (push) Successful in 1m30s
2025-03-17 15:21:59 +01:00
332bdeb02b modified algo 2025-03-14 11:07:09 +01:00
3a987319d4 WIP 2025-03-05 21:51:23 +01:00
5614cb4673 WIP 2025-03-05 17:40:08 +01:00
8ae6bb76f8 removed warnings added clang-tidy 2025-02-21 11:18:39 +01:00
1d2c38c1d4 Enable VarClusterFinder (#134)
Co-authored-by: xiangyu.xie <xiangyu.xie@psi.ch>
2025-02-19 16:11:24 +01:00
fc1c9f35d6 Merge branch 'main' into developer 2025-02-18 21:52:20 +01:00
5d2f25a6e9 bumped version number 2025-02-18 21:44:03 +01:00
6a83988485 Added chi2 to fit results (#131)
- fit_gaus and fit_pol1 now return a dict
- calculate chi2 after fit
- cleaned up code
2025-02-18 21:13:27 +01:00
8abfc68138 fixed linking to lmfit (#130)
using "$<BUILD_INTERFACE:lmfit>" to exclude the target lmfit from being
included in the installed aare target
2025-02-18 15:54:52 +01:00
8ff6f9f506 fixed linking to lmfit 2025-02-18 15:49:46 +01:00
dcb9a98faa bumped version 2025-02-12 16:49:30 +01:00
7309cff47c Added fitting with lmfit (#128)
- added standalone fitting using:
https://jugit.fz-juelich.de/mlz/lmfit.git
- fit_gaus, fit_pol1 with and without errors
- multi threaded fitting

---------

Co-authored-by: JulianHeymes <julian.heymes@psi.ch>
2025-02-12 16:35:48 +01:00
c0c5e07ad8 added decoding of adc_sar_04 (#127) 2025-02-12 16:17:32 +01:00
2faa317bdf removed debug line 2025-02-12 10:59:18 +01:00
f7031d7f87 Update CMakeLists.txt
Removed flto=auto which caused issues with gcc 8.5
2025-02-12 10:52:55 +01:00
d86cb533c8 Fix minor warnings (#126)
- Unused variables
- signed vs. unsigned
- added -flto=auto
2025-02-11 11:48:01 +01:00
4c750cc3be Fixing ROI read of RawFile (#125)
- Bugfixes
- New abstraction for detector geometry
- Tests for updating geo with ROI
2025-02-11 11:08:22 +01:00
e96fe31f11 removed main and token 2025-02-05 15:55:55 +01:00
cd5a738696 disable upload on dev 2025-02-05 15:44:45 +01:00
1ba43b69d3 fix 2025-02-05 15:16:16 +01:00
fff536782b disable auto upload 2025-02-05 15:13:53 +01:00
5a3ca2ae2d Decoding for ADC SAR 05 64->16bit (#124)
Co-authored-by: Patrick <patrick.sieberer@psi.ch>
2025-02-05 14:40:26 +01:00
078e5d81ec docs 2025-01-15 16:40:34 +01:00
6cde968c60 summing 2x2 2025-01-15 16:12:06 +01:00
f6d736facd docs for ClusterFile 2025-01-15 09:15:41 +01:00
e1cc774d6c Multi threaded cluster finder (#117) 2025-01-14 21:36:25 +01:00
d0f435a7ab bounds checking on subfiles 2025-01-10 19:02:50 +01:00
7ce02006f2 clear pedestal 2025-01-10 17:26:23 +01:00
7550a2cb97 fixing read bug 2025-01-10 15:33:56 +01:00
caf7b4ecdb added docs for ClusterFinderMT 2025-01-10 10:22:04 +01:00
72d10b7735 Multi threaded cluster finder. (#115)
Added a prototype for the multi threaded cluster finder including python
bindings
2025-01-09 16:55:35 +01:00
cc95561eda MultiThreaded Cluster finder 2025-01-09 16:53:22 +01:00
dc9e10016d WIP 2025-01-08 16:45:24 +01:00
21ce7a3efa bumped version 2025-01-07 16:33:16 +01:00
acdce8454b moved pd to double 2025-01-07 15:01:43 +01:00
d07da42745 bitdepths 2025-01-07 12:27:01 +01:00
0b252709bd rank of virtual parameters is 2 and not 1 as in single module, single file acquisition 2024-12-05 01:02:48 +01:00
e5df929a9a minor semicolon typo and fixed error message 2024-12-05 00:15:07 +01:00
b7337fc6c5 including header 2024-12-04 15:38:10 +01:00
09de69c090 python works 2024-12-04 15:02:57 +01:00
b23e697e26 works for hdf5, needs refactoring 2024-12-04 00:52:36 +01:00
4233509615 first draft of hdf5, reads master metadata, reads data dims, process of hyperslab 2024-12-03 21:16:58 +01:00
120 changed files with 4848 additions and 2093 deletions

View File

@ -2,7 +2,10 @@ name: Build the package using cmake then documentation
on:
workflow_dispatch:
push:
pull_request:
release:
types:
- published
permissions:
@ -55,7 +58,7 @@ jobs:
url: ${{ steps.deployment.outputs.page_url }}
runs-on: ubuntu-latest
needs: build
if: github.ref == 'refs/heads/main'
if: (github.event_name == 'release' && github.event.action == 'published') || (github.event_name == 'workflow_dispatch' )
steps:
- name: Deploy to GitHub Pages
id: deployment

View File

@ -53,6 +53,7 @@ option(AARE_DOCS "Build documentation" OFF)
option(AARE_VERBOSE "Verbose output" OFF)
option(AARE_CUSTOM_ASSERT "Use custom assert" OFF)
option(AARE_INSTALL_PYTHONEXT "Install the python extension in the install tree under CMAKE_INSTALL_PREFIX/aare/" OFF)
option(AARE_HDF5 "Hdf5 File Format" OFF)
option(AARE_ASAN "Enable AddressSanitizer" OFF)
# Configure which of the dependencies to use FetchContent for
@ -81,7 +82,7 @@ if(AARE_VERBOSE)
add_compile_definitions(AARE_VERBOSE)
add_compile_definitions(AARE_LOG_LEVEL=aare::logDEBUG5)
else()
add_compile_definitions(AARE_LOG_LEVEL=aare::logERROR)
add_compile_definitions(AARE_LOG_LEVEL=aare::logINFOBLUE)
endif()
if(AARE_CUSTOM_ASSERT)
@ -356,6 +357,10 @@ set(PUBLICHEADERS
include/aare/CtbRawFile.hpp
include/aare/ClusterVector.hpp
include/aare/decode.hpp
include/aare/type_traits.hpp
include/aare/scan_parameters.hpp
include/aare/to_string.hpp
include/aare/string_utils.hpp
include/aare/defs.hpp
include/aare/Dtype.hpp
include/aare/File.hpp
@ -366,6 +371,7 @@ set(PUBLICHEADERS
include/aare/GainMap.hpp
include/aare/geo_helpers.hpp
include/aare/JungfrauDataFile.hpp
include/aare/logger.hpp
include/aare/NDArray.hpp
include/aare/NDView.hpp
include/aare/NumpyFile.hpp
@ -383,6 +389,8 @@ set(PUBLICHEADERS
set(SourceFiles
${CMAKE_CURRENT_SOURCE_DIR}/src/CtbRawFile.cpp
${CMAKE_CURRENT_SOURCE_DIR}/src/defs.cpp
${CMAKE_CURRENT_SOURCE_DIR}/src/to_string.cpp
${CMAKE_CURRENT_SOURCE_DIR}/src/string_utils.cpp
${CMAKE_CURRENT_SOURCE_DIR}/src/Dtype.cpp
${CMAKE_CURRENT_SOURCE_DIR}/src/decode.cpp
${CMAKE_CURRENT_SOURCE_DIR}/src/Frame.cpp
@ -402,6 +410,22 @@ set(SourceFiles
${CMAKE_CURRENT_SOURCE_DIR}/src/utils/ifstream_helpers.cpp
)
# HDF5
if (AARE_HDF5)
find_package(HDF5 1.10 COMPONENTS CXX REQUIRED)
add_definitions(
${HDF5_DEFINITIONS}
)
list (APPEND PUBLICHEADERS
include/aare/Hdf5File.hpp
include/aare/Hdf5MasterFile.hpp
)
list (APPEND SourceFiles
${CMAKE_CURRENT_SOURCE_DIR}/src/Hdf5File.cpp
${CMAKE_CURRENT_SOURCE_DIR}/src/Hdf5MasterFile.cpp
)
endif (AARE_HDF5)
add_library(aare_core STATIC ${SourceFiles})
target_include_directories(aare_core PUBLIC
"$<BUILD_INTERFACE:${CMAKE_CURRENT_SOURCE_DIR}/include>"
@ -424,6 +448,16 @@ target_link_libraries(
)
if (AARE_HDF5 AND HDF5_FOUND)
add_definitions(-DHDF5_FOUND)
target_link_libraries(aare_core PUBLIC
${HDF5_LIBRARIES}
)
target_include_directories(aare_core PUBLIC
${HDF5_INCLUDE_DIRS}
)
endif()
set_target_properties(aare_core PROPERTIES
ARCHIVE_OUTPUT_DIRECTORY ${CMAKE_BINARY_DIR}
PUBLIC_HEADER "${PUBLICHEADERS}"
@ -437,6 +471,8 @@ if(AARE_TESTS)
set(TestSources
${CMAKE_CURRENT_SOURCE_DIR}/src/algorithm.test.cpp
${CMAKE_CURRENT_SOURCE_DIR}/src/defs.test.cpp
${CMAKE_CURRENT_SOURCE_DIR}/src/to_string.test.cpp
${CMAKE_CURRENT_SOURCE_DIR}/src/scan_parameters.test.cpp
${CMAKE_CURRENT_SOURCE_DIR}/src/decode.test.cpp
${CMAKE_CURRENT_SOURCE_DIR}/src/Dtype.test.cpp
${CMAKE_CURRENT_SOURCE_DIR}/src/Frame.test.cpp
@ -459,11 +495,18 @@ if(AARE_TESTS)
${CMAKE_CURRENT_SOURCE_DIR}/src/utils/task.test.cpp
)
if(HDF5_FOUND)
list (APPEND TestSources
${CMAKE_CURRENT_SOURCE_DIR}/src/Hdf5MasterFile.test.cpp
${CMAKE_CURRENT_SOURCE_DIR}/src/Hdf5File.test.cpp
)
endif()
target_sources(tests PRIVATE ${TestSources} )
endif()
###------------------------------------------------------------------------------------------
###------------------------------------------------------------------------------------------

22
RELEASE.md Normal file
View File

@ -0,0 +1,22 @@
# Release notes
### head
Features:
- Cluster finder now works with 5x5, 7x7 and 9x9 clusters
### 2025.05.22
Features:
- Added scurve fitting
Bugfixes:
- Fixed crash when opening raw files with a large number of data files

View File

@ -41,8 +41,8 @@ BENCHMARK_F(ClusterFixture, Calculate2x2Eta)(benchmark::State &st) {
}
// almost takes double the time
BENCHMARK_F(ClusterFixture,
CalculateGeneralEtaFor2x2Cluster)(benchmark::State &st) {
BENCHMARK_F(ClusterFixture, CalculateGeneralEtaFor2x2Cluster)
(benchmark::State &st) {
for (auto _ : st) {
// This code gets timed
Eta2 eta = calculate_eta2<int, 2, 2>(cluster_2x2);
@ -59,8 +59,8 @@ BENCHMARK_F(ClusterFixture, Calculate3x3Eta)(benchmark::State &st) {
}
// almost takes double the time
BENCHMARK_F(ClusterFixture,
CalculateGeneralEtaFor3x3Cluster)(benchmark::State &st) {
BENCHMARK_F(ClusterFixture, CalculateGeneralEtaFor3x3Cluster)
(benchmark::State &st) {
for (auto _ : st) {
// This code gets timed
Eta2 eta = calculate_eta2<int, 3, 3>(cluster_3x3);

View File

@ -1,19 +1,18 @@
#include <benchmark/benchmark.h>
#include "aare/NDArray.hpp"
#include <benchmark/benchmark.h>
using aare::NDArray;
constexpr ssize_t size = 1024;
class TwoArrays : public benchmark::Fixture {
public:
NDArray<int,2> a{{size,size},0};
NDArray<int,2> b{{size,size},0};
void SetUp(::benchmark::State& state) {
for(uint32_t i = 0; i < size; i++){
for(uint32_t j = 0; j < size; j++){
a(i, j)= i*j+1;
b(i, j)= i*j+1;
public:
NDArray<int, 2> a{{size, size}, 0};
NDArray<int, 2> b{{size, size}, 0};
void SetUp(::benchmark::State &state) {
for (uint32_t i = 0; i < size; i++) {
for (uint32_t j = 0; j < size; j++) {
a(i, j) = i * j + 1;
b(i, j) = i * j + 1;
}
}
}
@ -22,20 +21,17 @@ public:
// }
};
BENCHMARK_F(TwoArrays, AddWithOperator)(benchmark::State& st) {
BENCHMARK_F(TwoArrays, AddWithOperator)(benchmark::State &st) {
for (auto _ : st) {
// This code gets timed
NDArray<int,2> res = a+b;
NDArray<int, 2> res = a + b;
benchmark::DoNotOptimize(res);
}
}
BENCHMARK_F(TwoArrays, AddWithIndex)(benchmark::State& st) {
BENCHMARK_F(TwoArrays, AddWithIndex)(benchmark::State &st) {
for (auto _ : st) {
// This code gets timed
NDArray<int,2> res(a.shape());
NDArray<int, 2> res(a.shape());
for (uint32_t i = 0; i < a.size(); i++) {
res(i) = a(i) + b(i);
}
@ -43,17 +39,17 @@ BENCHMARK_F(TwoArrays, AddWithIndex)(benchmark::State& st) {
}
}
BENCHMARK_F(TwoArrays, SubtractWithOperator)(benchmark::State& st) {
BENCHMARK_F(TwoArrays, SubtractWithOperator)(benchmark::State &st) {
for (auto _ : st) {
// This code gets timed
NDArray<int,2> res = a-b;
NDArray<int, 2> res = a - b;
benchmark::DoNotOptimize(res);
}
}
BENCHMARK_F(TwoArrays, SubtractWithIndex)(benchmark::State& st) {
BENCHMARK_F(TwoArrays, SubtractWithIndex)(benchmark::State &st) {
for (auto _ : st) {
// This code gets timed
NDArray<int,2> res(a.shape());
NDArray<int, 2> res(a.shape());
for (uint32_t i = 0; i < a.size(); i++) {
res(i) = a(i) - b(i);
}
@ -61,17 +57,17 @@ BENCHMARK_F(TwoArrays, SubtractWithIndex)(benchmark::State& st) {
}
}
BENCHMARK_F(TwoArrays, MultiplyWithOperator)(benchmark::State& st) {
BENCHMARK_F(TwoArrays, MultiplyWithOperator)(benchmark::State &st) {
for (auto _ : st) {
// This code gets timed
NDArray<int,2> res = a*b;
NDArray<int, 2> res = a * b;
benchmark::DoNotOptimize(res);
}
}
BENCHMARK_F(TwoArrays, MultiplyWithIndex)(benchmark::State& st) {
BENCHMARK_F(TwoArrays, MultiplyWithIndex)(benchmark::State &st) {
for (auto _ : st) {
// This code gets timed
NDArray<int,2> res(a.shape());
NDArray<int, 2> res(a.shape());
for (uint32_t i = 0; i < a.size(); i++) {
res(i) = a(i) * b(i);
}
@ -79,17 +75,17 @@ BENCHMARK_F(TwoArrays, MultiplyWithIndex)(benchmark::State& st) {
}
}
BENCHMARK_F(TwoArrays, DivideWithOperator)(benchmark::State& st) {
BENCHMARK_F(TwoArrays, DivideWithOperator)(benchmark::State &st) {
for (auto _ : st) {
// This code gets timed
NDArray<int,2> res = a/b;
NDArray<int, 2> res = a / b;
benchmark::DoNotOptimize(res);
}
}
BENCHMARK_F(TwoArrays, DivideWithIndex)(benchmark::State& st) {
BENCHMARK_F(TwoArrays, DivideWithIndex)(benchmark::State &st) {
for (auto _ : st) {
// This code gets timed
NDArray<int,2> res(a.shape());
NDArray<int, 2> res(a.shape());
for (uint32_t i = 0; i < a.size(); i++) {
res(i) = a(i) / b(i);
}
@ -97,17 +93,17 @@ BENCHMARK_F(TwoArrays, DivideWithIndex)(benchmark::State& st) {
}
}
BENCHMARK_F(TwoArrays, FourAddWithOperator)(benchmark::State& st) {
BENCHMARK_F(TwoArrays, FourAddWithOperator)(benchmark::State &st) {
for (auto _ : st) {
// This code gets timed
NDArray<int,2> res = a+b+a+b;
NDArray<int, 2> res = a + b + a + b;
benchmark::DoNotOptimize(res);
}
}
BENCHMARK_F(TwoArrays, FourAddWithIndex)(benchmark::State& st) {
BENCHMARK_F(TwoArrays, FourAddWithIndex)(benchmark::State &st) {
for (auto _ : st) {
// This code gets timed
NDArray<int,2> res(a.shape());
NDArray<int, 2> res(a.shape());
for (uint32_t i = 0; i < a.size(); i++) {
res(i) = a(i) + b(i) + a(i) + b(i);
}
@ -115,17 +111,17 @@ BENCHMARK_F(TwoArrays, FourAddWithIndex)(benchmark::State& st) {
}
}
BENCHMARK_F(TwoArrays, MultiplyAddDivideWithOperator)(benchmark::State& st) {
BENCHMARK_F(TwoArrays, MultiplyAddDivideWithOperator)(benchmark::State &st) {
for (auto _ : st) {
// This code gets timed
NDArray<int,2> res = a*a+b/a;
NDArray<int, 2> res = a * a + b / a;
benchmark::DoNotOptimize(res);
}
}
BENCHMARK_F(TwoArrays, MultiplyAddDivideWithIndex)(benchmark::State& st) {
BENCHMARK_F(TwoArrays, MultiplyAddDivideWithIndex)(benchmark::State &st) {
for (auto _ : st) {
// This code gets timed
NDArray<int,2> res(a.shape());
NDArray<int, 2> res(a.shape());
for (uint32_t i = 0; i < a.size(); i++) {
res(i) = a(i) * a(i) + b(i) / a(i);
}

View File

@ -14,7 +14,6 @@ set(SPHINX_BUILD ${CMAKE_CURRENT_BINARY_DIR})
file(GLOB SPHINX_SOURCE_FILES CONFIGURE_DEPENDS "src/*.rst")
foreach(filename ${SPHINX_SOURCE_FILES})
get_filename_component(fname ${filename} NAME)
message(STATUS "Copying ${filename} to ${SPHINX_BUILD}/src/${fname}")

8
docs/src/Hdf5File.rst Normal file
View File

@ -0,0 +1,8 @@
Hdf5File
===============
.. doxygenclass:: aare::Hdf5File
:members:
:undoc-members:
:private-members:

View File

@ -0,0 +1,14 @@
Hdf5MasterFile
===============
.. doxygenclass:: aare::Hdf5MasterFile
:members:
:undoc-members:
:private-members:
.. doxygenclass:: aare::Hdf5FileNameComponents
:members:
:undoc-members:
:private-members:

47
docs/src/Philosophy.rst Normal file
View File

@ -0,0 +1,47 @@
****************
Philosophy
****************
Fast code with a simple interface
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Aare should be fast and efficient, but also easy to use. We strive to keep a simple interface that feels intuitive.
Internally we use C++ for performance and the ability to integrate the library in other programs, but we see most
users using the Python interface.
Live at head
~~~~~~~~~~~~~~~~~~
As a user of the library you should be able to, and are expected to, use the latest version. Bug fixes will rarely be backported
to older releases. By upgrading frequently you will benefit from the latest features and minimize the effort of maintaining your scripts/code,
doing several small upgrades instead of one big upgrade.
API
~~~~~~~~~~~~~~~~~~
We aim to keep the API stable and only break it for good reasons. But especially now, in the early stages of development,
the API will change. On those occasions it will be clearly stated in the release notes. However, the norm should be a
backward compatible API.
Documentation
~~~~~~~~~~~~~~~~~~
Being a library it is important to have a well documented API. We use Doxygen to generate the C++ documentation
and Sphinx for the Python part. Breathe is used to integrate the two into one Sphinx html site. The documentation is built
automatically on release by the CI and published to GitHub pages. In addition to the generated API documentation,
certain classes might need more descriptions of the usage. This is then placed in the .rst files in the docs/src directory.
.. attention::
The code should be well documented, but using descriptive names is more important. In the same spirit
if a function is called `getNumberOfFrames()` you don't need to write a comment saying that it gets the
number of frames.
Dependencies
~~~~~~~~~~~~~~~~~~
Deployment in the scientific community is often tricky. Either due to old OS versions or the lack of package managers.
We strive to keep the dependencies to a minimum and will vendor some libraries to simplify deployment even though it comes
at the cost of longer build times.

View File

@ -2,18 +2,21 @@ Requirements
==============================================
- C++17 compiler (gcc 8/clang 7)
- CMake 3.14+
- CMake 3.15+
**Internally used libraries**
.. note ::
These can also be picked up from the system/conda environment by specifying:
To save compile time some of the dependencies can also be picked up from the system/conda environment by specifying:
-DAARE_SYSTEM_LIBRARIES=ON during the cmake configuration.
- pybind11
To simplify deployment we build and statically link a few libraries.
- fmt
- lmfit - https://jugit.fz-juelich.de/mlz/lmfit
- nlohmann_json
- pybind11
- ZeroMQ
**Extra dependencies for building documentation**

86
docs/src/Workflow.rst Normal file
View File

@ -0,0 +1,86 @@
****************
Workflow
****************
This page describes how we develop aare.
GitHub centric
~~~~~~~~~~~~~~~~~~
We use GitHub for all development. Issues and pull requests provide a platform for collaboration as well
as a record of the development process. Even if we discuss things in person, we record the outcome in an issue.
If a particular implementation is chosen over another, the reason should be recorded in the pull request.
Branches
~~~~~~~~~~~~~~~~~~
We aim for as lightweight a branching strategy as possible. Short-lived feature branches are merged back into main.
The main branch is expected to always be in a releasable state. A release is simply a tag on main which provides a
reference and triggers the CI to build the release artifacts (conda, pypi etc.). For large features consider merging
smaller chunks into main as they are completed, rather than waiting for the entire feature to be finished. At worst,
make sure your feature branch merges with main regularly to avoid large merge conflicts later on.
.. note::
The main branch is expected to always work. Feel free to pull from main instead of sticking to a
release
Releases
~~~~~~~~~~~~~~~~~~
Release early, release often. As soon as "enough" new features have been implemented, a release is created.
A release should not be a big thing, rather a routine part of development that does not require any special person or
unfamiliar steps.
Checklists for deployment
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
**Feature:**
#. Create a new issue for the feature (label feature)
#. Create a new branch from main.
#. Implement the feature including test and documentation
#. Add the feature to RELEASE.md under head
#. Create a pull request linked to the issue
#. Code is reviewed by at least one other person
#. Once approved, the branch is merged into main
**BugFix:**
Essentially the same as for a feature, if possible start with
a failing test that demonstrates the bug.
#. Create a new issue for the bug (label bug)
#. Create a new branch from main.
#. **Write a test that fails for the bug**
#. Implement the fix
#. **Run the test to ensure it passes**
#. Add the bugfix to RELEASE.md under head
#. Create a pull request linked to the issue.
#. Code is reviewed by at least one other person
#. Once approved, the branch is merged into main
**Release:**
#. Once "enough" new features have been implemented, a release is created
#. Update RELEASE.md with the tag of the release and verify that it is complete
#. Create the release in GitHub describing the new features and bug fixes
#. CI makes magic
**Update documentation only:**
.. attention::
It's possible to update the documentation without changing the code, but take
care since the docs will reflect the code in main and not the latest release.
#. Create a PR to main with the documentation changes
#. Create a pull request linked to the issue.
#. Code is reviewed by at least one other person
#. Once merged you can manually trigger the CI workflow for documentation

View File

@ -31,6 +31,8 @@ AARE
pyJungfrauDataFile
pyRawFile
pyRawMasterFile
pyHdf5File
pyHdf5MasterFile
pyVarClusterFinder
pyFit
@ -55,6 +57,8 @@ AARE
RawFile
RawSubFile
RawMasterFile
Hdf5File
Hdf5MasterFile
VarClusterFinder
@ -63,4 +67,6 @@ AARE
:caption: Developer
:maxdepth: 3
Philosophy
Workflow
Tests

10
docs/src/pyHdf5File.rst Normal file
View File

@ -0,0 +1,10 @@
Hdf5File
===================
.. py:currentmodule:: aare
.. autoclass:: Hdf5File
:members:
:undoc-members:
:show-inheritance:
:inherited-members:

View File

@ -0,0 +1,10 @@
Hdf5MasterFile
===================
.. py:currentmodule:: aare
.. autoclass:: Hdf5MasterFile
:members:
:undoc-members:
:show-inheritance:
:inherited-members:

View File

@ -1,10 +1,9 @@
#pragma once
#include <cstdint>
#include <cstddef>
#include "aare/defs.hpp"
#include <array>
#include <cassert>
#include "aare/defs.hpp"
#include <cstddef>
#include <cstdint>
namespace aare {
@ -15,7 +14,9 @@ template <typename E, ssize_t Ndim> class ArrayExpr {
auto operator[](size_t i) const { return static_cast<E const &>(*this)[i]; }
auto operator()(size_t i) const { return static_cast<E const &>(*this)[i]; }
auto size() const { return static_cast<E const &>(*this).size(); }
std::array<ssize_t, Ndim> shape() const { return static_cast<E const &>(*this).shape(); }
std::array<ssize_t, Ndim> shape() const {
return static_cast<E const &>(*this).shape();
}
};
template <typename A, typename B, ssize_t Ndim>
@ -47,7 +48,7 @@ class ArraySub : public ArrayExpr<ArraySub<A, B, Ndim>, Ndim> {
};
template <typename A, typename B, ssize_t Ndim>
class ArrayMul : public ArrayExpr<ArrayMul<A, B, Ndim>,Ndim> {
class ArrayMul : public ArrayExpr<ArrayMul<A, B, Ndim>, Ndim> {
const A &arr1_;
const B &arr2_;
@ -74,15 +75,13 @@ class ArrayDiv : public ArrayExpr<ArrayDiv<A, B, Ndim>, Ndim> {
std::array<ssize_t, Ndim> shape() const { return arr1_.shape(); }
};
template <typename A, typename B, ssize_t Ndim>
auto operator+(const ArrayExpr<A, Ndim> &arr1, const ArrayExpr<B, Ndim> &arr2) {
return ArrayAdd<ArrayExpr<A, Ndim>, ArrayExpr<B, Ndim>, Ndim>(arr1, arr2);
}
template <typename A, typename B, ssize_t Ndim>
auto operator-(const ArrayExpr<A,Ndim> &arr1, const ArrayExpr<B, Ndim> &arr2) {
auto operator-(const ArrayExpr<A, Ndim> &arr1, const ArrayExpr<B, Ndim> &arr2) {
return ArraySub<ArrayExpr<A, Ndim>, ArrayExpr<B, Ndim>, Ndim>(arr1, arr2);
}
@ -96,6 +95,4 @@ auto operator/(const ArrayExpr<A, Ndim> &arr1, const ArrayExpr<B, Ndim> &arr2) {
return ArrayDiv<ArrayExpr<A, Ndim>, ArrayExpr<B, Ndim>, Ndim>(arr1, arr2);
}
} // namespace aare

View File

@ -17,7 +17,8 @@ template <class ItemType> class CircularFifo {
public:
CircularFifo() : CircularFifo(100){};
CircularFifo(uint32_t size) : fifo_size(size), free_slots(size + 1), filled_slots(size + 1) {
CircularFifo(uint32_t size)
: fifo_size(size), free_slots(size + 1), filled_slots(size + 1) {
// TODO! how do we deal with alignment for writing? alignas???
// Do we give the user a chance to provide memory locations?
@ -55,7 +56,8 @@ template <class ItemType> class CircularFifo {
bool try_pop_free(ItemType &v) { return free_slots.read(v); }
ItemType pop_value(std::chrono::nanoseconds wait, std::atomic<bool> &stopped) {
ItemType pop_value(std::chrono::nanoseconds wait,
std::atomic<bool> &stopped) {
ItemType v;
while (!filled_slots.read(v) && !stopped) {
std::this_thread::sleep_for(wait);

View File

@ -5,6 +5,8 @@
#include "aare/GainMap.hpp"
#include "aare/NDArray.hpp"
#include "aare/defs.hpp"
#include "aare/logger.hpp"
#include <filesystem>
#include <fstream>
#include <optional>
@ -369,11 +371,15 @@ ClusterFile<ClusterType, Enable>::read_frame_without_cut() {
"Could not read number of clusters");
}
LOG(logDEBUG1) << "Reading " << n_clusters << " clusters from frame "
<< frame_number;
ClusterVector<ClusterType> clusters(n_clusters);
clusters.set_frame_number(frame_number);
clusters.resize(n_clusters);
LOG(logDEBUG1) << "clusters.item_size(): " << clusters.item_size();
if (fread(clusters.data(), clusters.item_size(), n_clusters, fp) !=
static_cast<size_t>(n_clusters)) {
throw std::runtime_error(LOCATION + "Could not read clusters");

View File

@ -21,7 +21,7 @@ class ClusterFileSink {
void process() {
m_stopped = false;
fmt::print("ClusterFileSink started\n");
LOG(logDEBUG) << "ClusterFileSink started";
while (!m_stop_requested || !m_source->isEmpty()) {
if (ClusterVector<ClusterType> *clusters = m_source->frontPtr();
clusters != nullptr) {
@ -41,13 +41,16 @@ class ClusterFileSink {
std::this_thread::sleep_for(m_default_wait);
}
}
fmt::print("ClusterFileSink stopped\n");
LOG(logDEBUG) << "ClusterFileSink stopped";
m_stopped = true;
}
public:
ClusterFileSink(ClusterFinderMT<ClusterType, uint16_t, double> *source,
const std::filesystem::path &fname) {
LOG(logDEBUG) << "ClusterFileSink: "
<< "source: " << source->sink()
<< ", file: " << fname.string();
m_source = source->sink();
m_thread = std::thread(&ClusterFileSink::process, this);
m_file.open(fname, std::ios::binary);

View File

@ -38,7 +38,11 @@ class ClusterFinder {
: m_image_size(image_size), m_nSigma(nSigma),
c2(sqrt((ClusterSizeY + 1) / 2 * (ClusterSizeX + 1) / 2)),
c3(sqrt(ClusterSizeX * ClusterSizeY)),
m_pedestal(image_size[0], image_size[1]), m_clusters(capacity) {};
m_pedestal(image_size[0], image_size[1]), m_clusters(capacity) {
LOG(logDEBUG) << "ClusterFinder: "
<< "image_size: " << image_size[0] << "x" << image_size[1]
<< ", nSigma: " << nSigma << ", capacity: " << capacity;
}
void push_pedestal_frame(NDView<FRAME_TYPE, 2> frame) {
m_pedestal.push(frame);

View File

@ -8,6 +8,7 @@
#include "aare/ClusterFinder.hpp"
#include "aare/NDArray.hpp"
#include "aare/ProducerConsumerQueue.hpp"
#include "aare/logger.hpp"
namespace aare {
@ -123,6 +124,12 @@ class ClusterFinderMT {
size_t capacity = 2000, size_t n_threads = 3)
: m_n_threads(n_threads) {
LOG(logDEBUG1) << "ClusterFinderMT: "
<< "image_size: " << image_size[0] << "x"
<< image_size[1] << ", nSigma: " << nSigma
<< ", capacity: " << capacity
<< ", n_threads: " << n_threads;
for (size_t i = 0; i < n_threads; i++) {
m_cluster_finders.push_back(
std::make_unique<

View File

@ -1,25 +1,25 @@
#pragma once
#include "aare/FileInterface.hpp"
#include "aare/RawMasterFile.hpp"
#include "aare/Frame.hpp"
#include "aare/RawMasterFile.hpp"
#include <filesystem>
#include <fstream>
namespace aare{
namespace aare {
class CtbRawFile{
class CtbRawFile {
RawMasterFile m_master;
std::ifstream m_file;
size_t m_current_frame{0};
size_t m_current_subfile{0};
size_t m_num_subfiles{0};
public:
public:
CtbRawFile(const std::filesystem::path &fname);
void read_into(std::byte *image_buf, DetectorHeader* header = nullptr);
void read_into(std::byte *image_buf, DetectorHeader *header = nullptr);
void seek(size_t frame_index); //!< seek to the given frame index
size_t tell() const; //!< get the frame index of the file pointer
@ -29,13 +29,13 @@ public:
size_t frames_in_file() const;
RawMasterFile master() const;
private:
private:
void find_subfiles();
size_t sub_file_index(size_t frame_index) const {
return frame_index / m_master.max_frames_per_file();
}
void open_data_file(size_t subfile_index);
};
}
} // namespace aare

View File

@ -6,31 +6,37 @@
namespace aare {
// The format descriptor is a single character that specifies the type of the data
// The format descriptor is a single character that specifies the type of the
// data
// - python documentation: https://docs.python.org/3/c-api/arg.html#numbers
// - py::format_descriptor<T>::format() (in pybind11) does not return the same format as
// - py::format_descriptor<T>::format() (in pybind11) does not return the same
// format as
// written in python.org documentation.
// - numpy also doesn't use the same format. and also numpy associates the format
// with variable bitdepth types. (e.g. long is int64 on linux64 and int32 on win64)
// https://numpy.org/doc/stable/reference/arrays.scalars.html
// - numpy also doesn't use the same format. and also numpy associates the
// format
// with variable bitdepth types. (e.g. long is int64 on linux64 and int32 on
// win64) https://numpy.org/doc/stable/reference/arrays.scalars.html
//
// github issue discussing this:
// https://github.com/pybind/pybind11/issues/1908#issuecomment-658358767
//
// [IN LINUX] the difference is for int64 (long) and uint64 (unsigned long). The format
// descriptor is 'q' and 'Q' respectively and in the documentation it is 'l' and 'k'.
// [IN LINUX] the difference is for int64 (long) and uint64 (unsigned long). The
// format descriptor is 'q' and 'Q' respectively and in the documentation it is
// 'l' and 'k'.
// in practice numpy doesn't seem to care when reading buffer info: the library
// interprets 'q' or 'l' as int64 and 'Q' or 'L' as uint64.
// for this reason we decided to use the same format descriptor as pybind to avoid
// any further discrepancies.
// for this reason we decided to use the same format descriptor as pybind to
// avoid any further discrepancies.
// in the following order:
// int8, uint8, int16, uint16, int32, uint32, int64, uint64, float, double
const char DTYPE_FORMAT_DSC[] = {'b', 'B', 'h', 'H', 'i', 'I', 'q', 'Q', 'f', 'd'};
const char DTYPE_FORMAT_DSC[] = {'b', 'B', 'h', 'H', 'i',
'I', 'q', 'Q', 'f', 'd'};
// on linux64 & apple
const char NUMPY_FORMAT_DSC[] = {'b', 'B', 'h', 'H', 'i', 'I', 'l', 'L', 'f', 'd'};
const char NUMPY_FORMAT_DSC[] = {'b', 'B', 'h', 'H', 'i',
'I', 'l', 'L', 'f', 'd'};
/**
* @brief enum class to define the endianess of the system
*/
@ -52,12 +58,29 @@ enum class endian {
*/
class Dtype {
public:
enum TypeIndex { INT8, UINT8, INT16, UINT16, INT32, UINT32, INT64, UINT64, FLOAT, DOUBLE, ERROR, NONE };
enum TypeIndex {
INT8,
UINT8,
INT16,
UINT16,
INT32,
UINT32,
INT64,
UINT64,
FLOAT,
DOUBLE,
ERROR,
NONE
};
uint8_t bitdepth() const;
size_t bytes() const;
std::string format_descr() const { return std::string(1, DTYPE_FORMAT_DSC[static_cast<int>(m_type)]); }
std::string numpy_descr() const { return std::string(1, NUMPY_FORMAT_DSC[static_cast<int>(m_type)]); }
std::string format_descr() const {
return std::string(1, DTYPE_FORMAT_DSC[static_cast<int>(m_type)]);
}
std::string numpy_descr() const {
return std::string(1, NUMPY_FORMAT_DSC[static_cast<int>(m_type)]);
}
explicit Dtype(const std::type_info &t);
explicit Dtype(std::string_view sv);

View File

@ -6,11 +6,11 @@ namespace aare {
/**
* @brief RAII File class for reading, and in the future potentially writing
* image files in various formats. Minimal generic interface. For specail fuctions
* plase use the RawFile or NumpyFile classes directly.
* Wraps FileInterface to abstract the underlying file format
* @note **frame_number** refers the the frame number sent by the detector while **frame_index**
* is the position of the frame in the file
* image files in various formats. Minimal generic interface. For special
* functions please use the RawFile, NumpyFile or Hdf5File classes directly. Wraps
* FileInterface to abstract the underlying file format
* @note **frame_number** refers to the frame number sent by the detector while
* **frame_index** is the position of the frame in the file
*/
class File {
std::unique_ptr<FileInterface> file_impl;
@ -25,29 +25,35 @@ class File {
* @throws std::invalid_argument if the file mode is not supported
*
*/
File(const std::filesystem::path &fname, const std::string &mode="r", const FileConfig &cfg = {});
File(const std::filesystem::path &fname, const std::string &mode = "r",
const FileConfig &cfg = {});
/**Since the object is responsible for managing the file we disable copy construction */
/**Since the object is responsible for managing the file we disable copy
* construction */
File(File const &other) = delete;
/**The same goes for copy assignment */
File& operator=(File const &other) = delete;
File &operator=(File const &other) = delete;
File(File &&other) noexcept;
File& operator=(File &&other) noexcept;
File &operator=(File &&other) noexcept;
~File() = default;
// void close(); //!< close the file
Frame read_frame(); //!< read one frame from the file at the current position
Frame read_frame(size_t frame_index); //!< read one frame at the position given by frame number
std::vector<Frame> read_n(size_t n_frames); //!< read n_frames from the file at the current position
Frame
read_frame(); //!< read one frame from the file at the current position
Frame read_frame(size_t frame_index); //!< read one frame at the position
//!< given by frame number
std::vector<Frame> read_n(size_t n_frames); //!< read n_frames from the file
//!< at the current position
void read_into(std::byte *image_buf);
void read_into(std::byte *image_buf, size_t n_frames);
size_t frame_number(); //!< get the frame number at the current position
size_t frame_number(size_t frame_index); //!< get the frame number at the given frame index
size_t frame_number(
size_t frame_index); //!< get the frame number at the given frame index
size_t bytes_per_frame() const;
size_t pixels_per_frame() const;
size_t bytes_per_pixel() const;
@ -59,8 +65,6 @@ class File {
size_t cols() const;
DetectorType detector_type() const;
};
} // namespace aare

View File

@ -1,7 +1,7 @@
#pragma once
#include "aare/Dtype.hpp"
#include "aare/Frame.hpp"
#include "aare/defs.hpp"
#include "aare/to_string.hpp"
#include <filesystem>
#include <vector>
@ -20,8 +20,10 @@ struct FileConfig {
uint64_t rows{};
uint64_t cols{};
bool operator==(const FileConfig &other) const {
return dtype == other.dtype && rows == other.rows && cols == other.cols && geometry == other.geometry &&
detector_type == other.detector_type && max_frames_per_file == other.max_frames_per_file;
return dtype == other.dtype && rows == other.rows &&
cols == other.cols && geometry == other.geometry &&
detector_type == other.detector_type &&
max_frames_per_file == other.max_frames_per_file;
}
bool operator!=(const FileConfig &other) const { return !(*this == other); }
@ -32,8 +34,11 @@ struct FileConfig {
int max_frames_per_file{};
size_t total_frames{};
std::string to_string() const {
return "{ dtype: " + dtype.to_string() + ", rows: " + std::to_string(rows) + ", cols: " + std::to_string(cols) +
", geometry: " + geometry.to_string() + ", detector_type: " + ToString(detector_type) +
return "{ dtype: " + dtype.to_string() +
", rows: " + std::to_string(rows) +
", cols: " + std::to_string(cols) +
", geometry: " + geometry.to_string() +
", detector_type: " + ToString(detector_type) +
", max_frames_per_file: " + std::to_string(max_frames_per_file) +
", total_frames: " + std::to_string(total_frames) + " }";
}
@ -41,8 +46,9 @@ struct FileConfig {
/**
* @brief FileInterface class to define the interface for file operations
* @note parent class for NumpyFile and RawFile
* @note all functions are pure virtual and must be implemented by the derived classes
* @note parent class for NumpyFile, RawFile and Hdf5File
* @note all functions are pure virtual and must be implemented by the derived
* classes
*/
class FileInterface {
public:
@ -64,17 +70,20 @@ class FileInterface {
* @param n_frames number of frames to read
* @return vector of frames
*/
virtual std::vector<Frame> read_n(size_t n_frames) = 0; // Is this the right interface?
virtual std::vector<Frame>
read_n(size_t n_frames) = 0; // Is this the right interface?
/**
* @brief read one frame from the file at the current position and store it in the provided buffer
* @brief read one frame from the file at the current position and store it
* in the provided buffer
* @param image_buf buffer to store the frame
* @return void
*/
virtual void read_into(std::byte *image_buf) = 0;
/**
* @brief read n_frames from the file at the current position and store them
* in the provided buffer
* @param image_buf buffer to store the frames
* @param n_frames number of frames to read
* @return void
@ -134,7 +143,6 @@ class FileInterface {
*/
virtual size_t bitdepth() const = 0;
virtual DetectorType detector_type() const = 0;
// function to query the data type of the file

View File

@ -12,7 +12,7 @@ class FilePtr {
public:
FilePtr() = default;
FilePtr(const std::filesystem::path &fname, const std::string &mode);
FilePtr(const FilePtr &) = delete; // we don't want a copy
FilePtr &operator=(const FilePtr &) = delete; // since we handle a resource
FilePtr(FilePtr &&other);

View File

@ -23,16 +23,19 @@ NDArray<double, 1> scurve2(NDView<double, 1> x, NDView<double, 1> par);
} // namespace func
/**
* @brief Estimate the initial parameters for a Gaussian fit
*/
std::array<double, 3> gaus_init_par(const NDView<double, 1> x,
const NDView<double, 1> y);
std::array<double, 2> pol1_init_par(const NDView<double, 1> x,
const NDView<double, 1> y);
std::array<double, 6> scurve_init_par(const NDView<double, 1> x,
const NDView<double, 1> y);
std::array<double, 6> scurve2_init_par(const NDView<double, 1> x,
const NDView<double, 1> y);
static constexpr int DEFAULT_NUM_THREADS = 4;
@ -43,7 +46,6 @@ static constexpr int DEFAULT_NUM_THREADS = 4;
*/
NDArray<double, 1> fit_gaus(NDView<double, 1> x, NDView<double, 1> y);
/**
* @brief Fit a 1D Gaussian to each pixel. Data layout [row, col, values]
* @param x x values
@ -54,9 +56,6 @@ NDArray<double, 1> fit_gaus(NDView<double, 1> x, NDView<double, 1> y);
NDArray<double, 3> fit_gaus(NDView<double, 1> x, NDView<double, 3> y,
int n_threads = DEFAULT_NUM_THREADS);
/**
* @brief Fit a 1D Gaussian with error estimates
* @param x x values
@ -67,7 +66,7 @@ NDArray<double, 3> fit_gaus(NDView<double, 1> x, NDView<double, 3> y,
*/
void fit_gaus(NDView<double, 1> x, NDView<double, 1> y, NDView<double, 1> y_err,
NDView<double, 1> par_out, NDView<double, 1> par_err_out,
double &chi2);
/**
* @brief Fit a 1D Gaussian to each pixel with error estimates. Data layout
@ -80,9 +79,8 @@ void fit_gaus(NDView<double, 1> x, NDView<double, 1> y, NDView<double, 1> y_err,
* @param n_threads number of threads to use
*/
void fit_gaus(NDView<double, 1> x, NDView<double, 3> y, NDView<double, 3> y_err,
NDView<double, 3> par_out, NDView<double, 3> par_err_out,
NDView<double, 2> chi2_out, int n_threads = DEFAULT_NUM_THREADS);
NDArray<double, 1> fit_pol1(NDView<double, 1> x, NDView<double, 1> y);
@ -90,26 +88,33 @@ NDArray<double, 3> fit_pol1(NDView<double, 1> x, NDView<double, 3> y,
int n_threads = DEFAULT_NUM_THREADS);
void fit_pol1(NDView<double, 1> x, NDView<double, 1> y, NDView<double, 1> y_err,
NDView<double, 1> par_out, NDView<double, 1> par_err_out,
double &chi2);
// TODO! not sure we need to offer the different version in C++
void fit_pol1(NDView<double, 1> x, NDView<double, 3> y, NDView<double, 3> y_err,
NDView<double, 3> par_out, NDView<double, 3> par_err_out,
NDView<double, 2> chi2_out, int n_threads = DEFAULT_NUM_THREADS);
NDArray<double, 1> fit_scurve(NDView<double, 1> x, NDView<double, 1> y);
NDArray<double, 3> fit_scurve(NDView<double, 1> x, NDView<double, 3> y,
int n_threads);
void fit_scurve(NDView<double, 1> x, NDView<double, 1> y,
NDView<double, 1> y_err, NDView<double, 1> par_out,
NDView<double, 1> par_err_out, double &chi2);
void fit_scurve(NDView<double, 1> x, NDView<double, 3> y,
NDView<double, 3> y_err, NDView<double, 3> par_out,
NDView<double, 3> par_err_out, NDView<double, 2> chi2_out,
int n_threads);
NDArray<double, 1> fit_scurve2(NDView<double, 1> x, NDView<double, 1> y);
NDArray<double, 3> fit_scurve2(NDView<double, 1> x, NDView<double, 3> y,
int n_threads);
void fit_scurve2(NDView<double, 1> x, NDView<double, 1> y,
NDView<double, 1> y_err, NDView<double, 1> par_out,
NDView<double, 1> par_err_out, double &chi2);
void fit_scurve2(NDView<double, 1> x, NDView<double, 3> y,
NDView<double, 3> y_err, NDView<double, 3> par_out,
NDView<double, 3> par_err_out, NDView<double, 2> chi2_out,
int n_threads);
} // namespace aare
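A small sketch (not part of the diff) of calling the 1D fit entry points declared above; the data points are made up and the parameter ordering (amplitude, mean, sigma) is an assumption.

#include "aare/Fit.hpp"
#include "aare/NDView.hpp"
#include <vector>

int main() {
    std::vector<double> x{0, 1, 2, 3, 4, 5, 6};
    std::vector<double> y{1, 5, 18, 42, 17, 6, 1}; // roughly Gaussian-shaped counts
    auto xv = aare::make_view(x);
    auto yv = aare::make_view(y);
    auto start = aare::gaus_init_par(xv, yv); // initial guess used by the fit
    auto par = aare::fit_gaus(xv, yv);        // fitted parameters, 3 values
    return (start.size() == 3 && par.size() == 3) ? 0 : 1;
}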

View File

@ -19,7 +19,7 @@ class Frame {
uint32_t m_cols;
Dtype m_dtype;
std::byte *m_data;
// TODO! Add frame number?
public:
/**
@ -39,7 +39,7 @@ class Frame {
* @param dtype data type of the pixels
*/
Frame(const std::byte *bytes, uint32_t rows, uint32_t cols, Dtype dtype);
~Frame() { delete[] m_data; };
/** @warning Copy is disabled to ensure performance when passing
* frames around. Can discuss enabling it.
@ -52,7 +52,6 @@ class Frame {
Frame &operator=(Frame &&other) noexcept;
Frame(Frame &&other) noexcept;
Frame clone() const; //<- Explicit copy
uint32_t rows() const;
@ -93,7 +92,7 @@ class Frame {
if (row >= m_rows || col >= m_cols) {
throw std::out_of_range("Invalid row or column index");
}
// TODO! add tests then reimplement using pixel_ptr
T data;
std::memcpy(&data, m_data + (row * m_cols + col) * m_dtype.bytes(),
m_dtype.bytes());

include/aare/Hdf5File.hpp Normal file
View File

@ -0,0 +1,211 @@
#pragma once
#include "aare/FileInterface.hpp"
#include "aare/Frame.hpp"
#include "aare/Hdf5MasterFile.hpp"
#include "aare/NDArray.hpp" //for pixel map
#include <optional>
namespace aare {
class H5Handles {
std::string file_name;
std::string dataset_name;
H5::H5File file;
H5::DataSet dataset;
H5::DataSpace dataspace;
H5::DataType datatype;
std::unique_ptr<H5::DataSpace> memspace;
std::vector<hsize_t> dims;
std::vector<hsize_t> count;
std::vector<hsize_t> offset;
public:
H5Handles(const std::string &fname, const std::string &dname)
: file_name(fname), dataset_name(dname), file(fname, H5F_ACC_RDONLY),
dataset(file.openDataSet(dname)), dataspace(dataset.getSpace()),
datatype(dataset.getDataType()) {
intialize_dimensions();
initialize_memspace();
}
std::vector<hsize_t> get_dims() const { return dims; }
void seek(size_t frame_index) {
if (frame_index >= dims[0]) {
throw std::runtime_error(LOCATION + "Invalid frame number");
}
offset[0] = static_cast<hsize_t>(frame_index);
}
void get_data_into(size_t frame_index, std::byte *frame_buffer,
size_t n_frames = 1) {
seek(frame_index);
count[0] = static_cast<hsize_t>(n_frames);
// std::cout << "offset:" << ToString(offset) << " count:" <<
// ToString(count) << std::endl;
dataspace.selectHyperslab(H5S_SELECT_SET, count.data(), offset.data());
dataset.read(frame_buffer, datatype, *memspace, dataspace);
}
void get_header_into(size_t frame_index, int part_index,
std::byte *header_buffer) {
seek(frame_index);
offset[1] = static_cast<hsize_t>(part_index);
// std::cout << "offset:" << ToString(offset) << " count:" <<
// ToString(count) << std::endl;
dataspace.selectHyperslab(H5S_SELECT_SET, count.data(), offset.data());
dataset.read(header_buffer, datatype, *memspace, dataspace);
}
private:
void intialize_dimensions() {
int rank = dataspace.getSimpleExtentNdims();
dims.resize(rank);
dataspace.getSimpleExtentDims(dims.data(), nullptr);
}
void initialize_memspace() {
int rank = dataspace.getSimpleExtentNdims();
count.clear();
offset.clear();
// header datasets or header virtual datasets
if (rank == 1 || rank == 2) {
count = std::vector<hsize_t>(rank, 1); // slice 1 value
offset = std::vector<hsize_t>(rank, 0);
memspace = std::make_unique<H5::DataSpace>(H5S_SCALAR);
} else if (rank >= 3) {
// data dataset (frame x height x width)
count = {1, dims[1], dims[2]};
offset = {0, 0, 0};
hsize_t dims_image[2] = {dims[1], dims[2]};
memspace = std::make_unique<H5::DataSpace>(2, dims_image);
} else {
throw std::runtime_error(
LOCATION + "Invalid rank for dataset: " + std::to_string(rank));
}
}
};
template <typename Fn>
void read_hdf5_header_fields(DetectorHeader *header, Fn &&fn_read_field) {
fn_read_field(0, reinterpret_cast<std::byte *>(&(header->frameNumber)));
fn_read_field(1, reinterpret_cast<std::byte *>(&(header->expLength)));
fn_read_field(2, reinterpret_cast<std::byte *>(&(header->packetNumber)));
fn_read_field(3, reinterpret_cast<std::byte *>(&(header->bunchId)));
fn_read_field(4, reinterpret_cast<std::byte *>(&(header->timestamp)));
fn_read_field(5, reinterpret_cast<std::byte *>(&(header->modId)));
fn_read_field(6, reinterpret_cast<std::byte *>(&(header->row)));
fn_read_field(7, reinterpret_cast<std::byte *>(&(header->column)));
fn_read_field(8, reinterpret_cast<std::byte *>(&(header->reserved)));
fn_read_field(9, reinterpret_cast<std::byte *>(&(header->debug)));
fn_read_field(10, reinterpret_cast<std::byte *>(&(header->roundRNumber)));
fn_read_field(11, reinterpret_cast<std::byte *>(&(header->detType)));
fn_read_field(12, reinterpret_cast<std::byte *>(&(header->version)));
fn_read_field(13, reinterpret_cast<std::byte *>(&(header->packetMask)));
}
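// Illustration (not part of the header): the helper above maps the DetectorHeader
// fields to header-dataset indices 0..13, so a caller only supplies a per-field
// reader. A hedged sketch, with header_datasets standing in for the vector of
// H5Handles a reader would hold:
//
//   DetectorHeader hdr{};
//   read_hdf5_header_fields(&hdr, [&](int i, std::byte *dst) {
//       header_datasets[i]->get_header_into(frame_index, part_index, dst);
//   });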
/**
* @brief Class to read .h5 files. The class will parse the master file
* to find the correct geometry for the frames.
* @note A more generic interface is available in the aare::File class.
* Consider using that unless you need hdf5 file specific functionality.
*/
class Hdf5File : public FileInterface {
Hdf5MasterFile m_master;
size_t m_current_frame{};
size_t m_total_frames{};
size_t m_rows{};
size_t m_cols{};
static const std::string metadata_group_name;
static const std::vector<std::string> header_dataset_names;
std::unique_ptr<H5Handles> m_data_dataset{nullptr};
std::vector<std::unique_ptr<H5Handles>> m_header_datasets{};
public:
/**
* @brief Hdf5File constructor
* @param fname path to the master file (.h5)
* @param mode file mode (only "r" is supported at the moment)
*/
Hdf5File(const std::filesystem::path &fname, const std::string &mode = "r");
virtual ~Hdf5File() override;
Frame read_frame() override;
Frame read_frame(size_t frame_number) override;
std::vector<Frame> read_n(size_t n_frames) override;
void read_into(std::byte *image_buf) override;
void read_into(std::byte *image_buf, size_t n_frames) override;
// TODO! do we need to adapt the API?
void read_into(std::byte *image_buf, DetectorHeader *header);
void read_into(std::byte *image_buf, size_t n_frames,
DetectorHeader *header);
size_t frame_number(size_t frame_index) override;
size_t bytes_per_frame() override;
size_t pixels_per_frame() override;
size_t bytes_per_pixel() const;
void seek(size_t frame_index) override;
size_t tell() override;
size_t total_frames() const override;
size_t rows() const override;
size_t cols() const override;
size_t bitdepth() const override;
xy geometry();
size_t n_modules() const;
Hdf5MasterFile master() const;
DetectorType detector_type() const override;
private:
/**
* @brief get the frame at the given frame index
* @param frame_number frame number to read
* @return Frame
*/
Frame get_frame(size_t frame_index);
/**
* @brief read the frame at the given frame index into the image buffer
* @param frame_number frame number to read
* @param n_frames number of frames to read (default is 1)
* @param image_buf buffer to store the frame
*/
void get_frame_into(size_t frame_index, std::byte *frame_buffer,
size_t n_frames = 1, DetectorHeader *header = nullptr);
/**
* @brief read the frame at the given frame index into the image buffer
* @param frame_index frame number to read
* @param n_frames number of frames to read (default is 1)
* @param frame_buffer buffer to store the frame
*/
void get_data_into(size_t frame_index, std::byte *frame_buffer,
size_t n_frames = 1);
/**
* @brief read the header at the given frame index into the header buffer
* @param frame_index frame number to read
* @param part_index part index to read (for virtual datasets)
* @param header buffer to store the header
*/
void get_header_into(size_t frame_index, int part_index,
DetectorHeader *header);
/**
* @brief read the header of the file
* @param fname path to the data subfile
* @return DetectorHeader
*/
static DetectorHeader read_header(const std::filesystem::path &fname);
void open_data_file();
void open_header_files();
};
} // namespace aare
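A hedged usage sketch of the new Hdf5File reader through the interface declared above; the master-file path is hypothetical.

#include "aare/Hdf5File.hpp"
#include <cstddef>
#include <vector>

int main() {
    aare::Hdf5File f("/data/scan_master_0.h5", "r");
    aare::Frame frame = f.read_frame(); // frame at the current position
    std::vector<std::byte> buf(f.bytes_per_frame());
    aare::DetectorHeader hdr{};
    f.read_into(buf.data(), &hdr); // next frame plus its header
    return f.total_frames() > 0 ? 0 : 1;
}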

View File

@ -0,0 +1,135 @@
#pragma once
#include "aare/defs.hpp"
#include "aare/scan_parameters.hpp"
#include "H5Cpp.h"
#include <filesystem>
#include <fmt/format.h>
#include <fstream>
#include <optional>
namespace aare {
using ns = std::chrono::nanoseconds;
/**
* @brief Class for parsing an HDF5 (.h5) master file
*/
class Hdf5MasterFile {
std::filesystem::path m_file_name{};
std::string m_version;
DetectorType m_type;
TimingMode m_timing_mode;
xy m_geometry{};
int m_image_size_in_bytes{};
int m_pixels_y{};
int m_pixels_x{};
int m_max_frames_per_file{};
FrameDiscardPolicy m_frame_discard_policy{};
int m_frame_padding{};
std::optional<ScanParameters> m_scan_parameters{};
size_t m_total_frames_expected{};
std::optional<ns> m_exptime{};
std::optional<ns> m_period{};
std::optional<BurstMode> m_burst_mode{};
std::optional<int> m_number_of_udp_interfaces{};
int m_bitdepth{};
std::optional<bool> m_ten_giga{};
std::optional<int> m_threshold_energy{};
std::optional<std::vector<int>> m_threshold_energy_all{};
std::optional<ns> m_subexptime{};
std::optional<ns> m_subperiod{};
std::optional<bool> m_quad{};
std::optional<int> m_number_of_rows{};
std::optional<std::vector<size_t>> m_rate_corrections{};
std::optional<uint32_t> m_adc_mask{};
bool m_analog_flag{};
std::optional<int> m_analog_samples{};
bool m_digital_flag{};
std::optional<int> m_digital_samples{};
std::optional<int> m_dbit_offset{};
std::optional<size_t> m_dbit_list{};
std::optional<int> m_transceiver_mask{};
bool m_transceiver_flag{};
std::optional<int> m_transceiver_samples{};
// g1 roi - will not be implemented?
std::optional<ROI> m_roi{};
std::optional<int> m_counter_mask{};
std::optional<std::vector<ns>> m_exptime_array{};
std::optional<std::vector<ns>> m_gate_delay_array{};
std::optional<int> m_gates{};
std::optional<std::map<std::string, std::string>>
m_additional_json_header{};
size_t m_frames_in_file{};
// TODO! should these be bool?
public:
Hdf5MasterFile(const std::filesystem::path &fpath);
std::filesystem::path file_name() const;
const std::string &version() const; //!< For example "7.2"
const DetectorType &detector_type() const;
const TimingMode &timing_mode() const;
xy geometry() const;
int image_size_in_bytes() const;
int pixels_y() const;
int pixels_x() const;
int max_frames_per_file() const;
const FrameDiscardPolicy &frame_discard_policy() const;
int frame_padding() const;
std::optional<ScanParameters> scan_parameters() const;
size_t total_frames_expected() const;
std::optional<ns> exptime() const;
std::optional<ns> period() const;
std::optional<BurstMode> burst_mode() const;
std::optional<int> number_of_udp_interfaces() const;
int bitdepth() const;
std::optional<bool> ten_giga() const;
std::optional<int> threshold_energy() const;
std::optional<std::vector<int>> threshold_energy_all() const;
std::optional<ns> subexptime() const;
std::optional<ns> subperiod() const;
std::optional<bool> quad() const;
std::optional<int> number_of_rows() const;
std::optional<std::vector<size_t>> rate_corrections() const;
std::optional<uint32_t> adc_mask() const;
bool analog_flag() const;
std::optional<int> analog_samples() const;
bool digital_flag() const;
std::optional<int> digital_samples() const;
std::optional<int> dbit_offset() const;
std::optional<size_t> dbit_list() const;
std::optional<int> transceiver_mask() const;
bool transceiver_flag() const;
std::optional<int> transceiver_samples() const;
// g1 roi - will not be implemented?
std::optional<ROI> roi() const;
std::optional<int> counter_mask() const;
std::optional<std::vector<ns>> exptime_array() const;
std::optional<std::vector<ns>> gate_delay_array() const;
std::optional<int> gates() const;
std::optional<std::map<std::string, std::string>>
additional_json_header() const;
size_t frames_in_file() const;
size_t n_modules() const;
private:
static const std::string metadata_group_name;
void parse_acquisition_metadata(const std::filesystem::path &fpath);
template <typename T>
T h5_read_scalar_dataset(const H5::DataSet &dataset,
const H5::DataType &data_type);
template <typename T>
T h5_get_scalar_dataset(const H5::H5File &file,
const std::string &dataset_name);
};
template <>
std::string Hdf5MasterFile::h5_read_scalar_dataset<std::string>(
const H5::DataSet &dataset, const H5::DataType &data_type);
} // namespace aare
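A short sketch (not part of the header) of inspecting the parsed metadata; the path is hypothetical and only a few of the accessors above are shown.

#include "aare/Hdf5MasterFile.hpp"
#include <fmt/format.h>

int main() {
    aare::Hdf5MasterFile m("/data/scan_master_0.h5");
    fmt::print("version {}, bitdepth {}, {} frames in file\n", m.version(),
               m.bitdepth(), m.frames_in_file());
    if (auto t = m.exptime()) // optional fields are only set when present
        fmt::print("exptime: {} ns\n", t->count());
}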

View File

@ -3,14 +3,13 @@
#include <filesystem>
#include <vector>
#include "aare/FileInterface.hpp"
#include "aare/FilePtr.hpp"
#include "aare/NDArray.hpp"
#include "aare/defs.hpp"
namespace aare {
struct JungfrauDataHeader {
uint64_t framenum;
uint64_t bunchid;
};
@ -18,44 +17,50 @@ struct JungfrauDataHeader{
class JungfrauDataFile : public FileInterface {
size_t m_rows{}; //!< number of rows in the image, from find_frame_size();
size_t
m_cols{}; //!< number of columns in the image, from find_frame_size();
size_t m_bytes_per_frame{}; //!< number of bytes per frame excluding header
size_t m_total_frames{}; //!< total number of frames in the series of files
size_t m_offset{}; //!< file index of the first file, allow starting at non
//!< zero file
size_t m_current_file_index{}; //!< The index of the open file
size_t m_current_frame_index{}; //!< The index of the current frame (with
//!< reference to all files)
std::vector<size_t>
m_last_frame_in_file{}; //!< Used for seeking to the correct file
std::filesystem::path m_path; //!< path to the files
std::string m_base_name; //!< base name used for formatting file names
FilePtr m_fp; //!< RAII wrapper for a FILE*
using pixel_type = uint16_t;
static constexpr size_t header_size = sizeof(JungfrauDataHeader);
static constexpr size_t n_digits_in_file_index =
6; //!< to format file names
public:
JungfrauDataFile(const std::filesystem::path &fname);
std::string base_name()
const; //!< get the base name of the file (without path and extension)
size_t bytes_per_frame() override;
size_t pixels_per_frame() override;
size_t bytes_per_pixel() const;
size_t bitdepth() const override;
void seek(size_t frame_index)
override; //!< seek to the given frame index (note not byte offset)
size_t tell() override; //!< get the frame index of the file pointer
size_t total_frames() const override;
size_t rows() const override;
size_t cols() const override;
std::array<ssize_t, 2> shape() const;
size_t n_files() const; //!< get the number of files in the series.
// Extra functions needed for FileInterface
Frame read_frame() override;
Frame read_frame(size_t frame_number) override;
std::vector<Frame> read_n(size_t n_frames = 0) override;
void read_into(std::byte *image_buf) override;
void read_into(std::byte *image_buf, size_t n_frames) override;
size_t frame_number(size_t frame_index) override;
@ -63,44 +68,48 @@ class JungfrauDataFile : public FileInterface {
/**
* @brief Read a single frame from the file into the given buffer.
* @param image_buf buffer to read the frame into. (Note the caller is
* responsible for allocating the buffer)
* @param header pointer to a JungfrauDataHeader or nullptr to skip the header
*/
void read_into(std::byte *image_buf, JungfrauDataHeader *header = nullptr);
/**
* @brief Read multiple frames from the file into the given buffer.
* @param image_buf buffer to read the frame into. (Note the caller is
* responsible for allocating the buffer)
* @param n_frames number of frames to read
* @param header pointer to a JungfrauDataHeader or nullptr to skip the header
*/
void read_into(std::byte *image_buf, size_t n_frames,
JungfrauDataHeader *header = nullptr);
/**
* @brief Read a single frame from the file into the given NDArray
* @param image NDArray to read the frame into.
*/
void read_into(NDArray<uint16_t> *image,
JungfrauDataHeader *header = nullptr);
JungfrauDataHeader read_header();
std::filesystem::path current_file() const {
return fpath(m_current_file_index + m_offset);
}
private:
/**
* @brief Find the size of the frame in the file. (256x256, 256x1024,
* 512x1024)
* @param fname path to the file
* @throws std::runtime_error if the file is empty or the size cannot be
* determined
*/
void find_frame_size(const std::filesystem::path &fname);
void parse_fname(const std::filesystem::path &fname);
void scan_files();
void open_file(size_t file_index);
std::filesystem::path fpath(size_t frame_index) const;
};
} // namespace aare
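A hedged usage sketch of the Jungfrau .dat reader above; the file name is an assumption following the base-name plus zero-padded index scheme the class describes.

#include "aare/JungfrauDataFile.hpp"
#include "aare/NDArray.hpp"
#include <cstdint>

int main() {
    aare::JungfrauDataFile f("/data/run_d0_f000000.dat");
    aare::NDArray<uint16_t> image(f.shape()); // 2D array sized from the file
    aare::JungfrauDataHeader hdr{};
    f.read_into(&image, &hdr); // one frame plus its header
    return hdr.framenum > 0 ? 0 : 1;
}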

View File

@ -21,7 +21,6 @@ TODO! Add expression templates for operators
namespace aare {
template <typename T, ssize_t Ndim = 2>
class NDArray : public ArrayExpr<NDArray<T, Ndim>, Ndim> {
std::array<ssize_t, Ndim> shape_;
@ -34,7 +33,7 @@ class NDArray : public ArrayExpr<NDArray<T, Ndim>, Ndim> {
* @brief Default constructor. Will construct an empty NDArray.
*
*/
NDArray() : shape_(), strides_(c_strides<Ndim>(shape_)), data_(nullptr){};
/**
* @brief Construct a new NDArray object with a given shape.
@ -48,7 +47,6 @@ class NDArray : public ArrayExpr<NDArray<T, Ndim>, Ndim> {
std::multiplies<>())),
data_(new T[size_]) {}
/**
* @brief Construct a new NDArray object with a shape and value.
*
@ -69,8 +67,8 @@ class NDArray : public ArrayExpr<NDArray<T, Ndim>, Ndim> {
std::copy(v.begin(), v.end(), begin());
}
template <size_t Size>
NDArray(const std::array<T, Size> &arr) : NDArray<T, 1>({Size}) {
std::copy(arr.begin(), arr.end(), begin());
}
@ -79,7 +77,6 @@ class NDArray : public ArrayExpr<NDArray<T, Ndim>, Ndim> {
: shape_(other.shape_), strides_(c_strides<Ndim>(shape_)),
size_(other.size_), data_(other.data_) {
other.reset(); // TODO! is this necessary?
}
// Copy constructor
@ -113,10 +110,10 @@ class NDArray : public ArrayExpr<NDArray<T, Ndim>, Ndim> {
NDArray &operator-=(const NDArray &other);
NDArray &operator*=(const NDArray &other);
// Write directly to the data array, or create a new one
template <size_t Size>
NDArray<T, 1> &operator=(const std::array<T, Size> &other) {
if (Size != size_) {
delete[] data_;
size_ = Size;
data_ = new T[size_];
@ -157,11 +154,6 @@ class NDArray : public ArrayExpr<NDArray<T, Ndim>, Ndim> {
NDArray &operator&=(const T & /*mask*/);
void sqrt() {
for (int i = 0; i < size_; ++i) {
data_[i] = std::sqrt(data_[i]);
@ -345,9 +337,6 @@ NDArray<T, Ndim> &NDArray<T, Ndim>::operator+=(const T &value) {
return *this;
}
template <typename T, ssize_t Ndim>
NDArray<T, Ndim> NDArray<T, Ndim>::operator+(const T &value) {
NDArray result = *this;
@ -448,6 +437,4 @@ NDArray<T, Ndim> load(const std::string &pathname,
return img;
}
} // namespace aare
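A tiny sketch of the std::array interplay added above (construction from a std::array and assignment back from one); the values are arbitrary.

#include "aare/NDArray.hpp"
#include <array>

int main() {
    std::array<double, 4> values{1.0, 2.0, 3.0, 4.0};
    aare::NDArray<double, 1> a(values);            // 1D array of size 4
    a += 1.0;                                      // element-wise add
    a = std::array<double, 4>{5.0, 6.0, 7.0, 8.0}; // reuses or reallocates storage
    return a.size() == 4 ? 0 : 1;
}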

View File

@ -1,6 +1,6 @@
#pragma once
#include "aare/ArrayExpr.hpp"
#include "aare/defs.hpp"
#include <algorithm>
#include <array>
@ -17,7 +17,8 @@ namespace aare {
template <ssize_t Ndim> using Shape = std::array<ssize_t, Ndim>;
// TODO! fix mismatch between signed and unsigned
template <ssize_t Ndim>
Shape<Ndim> make_shape(const std::vector<size_t> &shape) {
if (shape.size() != Ndim)
throw std::runtime_error("Shape size mismatch");
Shape<Ndim> arr;
@ -25,14 +26,18 @@ template <ssize_t Ndim> Shape<Ndim> make_shape(const std::vector<size_t> &shape)
return arr;
}
template <ssize_t Dim = 0, typename Strides>
ssize_t element_offset(const Strides & /*unused*/) {
return 0;
}
template <ssize_t Dim = 0, typename Strides, typename... Ix>
ssize_t element_offset(const Strides &strides, ssize_t i, Ix... index) {
return i * strides[Dim] + element_offset<Dim + 1>(strides, index...);
}
template <ssize_t Ndim>
std::array<ssize_t, Ndim> c_strides(const std::array<ssize_t, Ndim> &shape) {
std::array<ssize_t, Ndim> strides{};
std::fill(strides.begin(), strides.end(), 1);
for (ssize_t i = Ndim - 1; i > 0; --i) {
@ -41,14 +46,16 @@ template <ssize_t Ndim> std::array<ssize_t, Ndim> c_strides(const std::array<ssi
return strides;
}
template <ssize_t Ndim>
std::array<ssize_t, Ndim> make_array(const std::vector<ssize_t> &vec) {
assert(vec.size() == Ndim);
std::array<ssize_t, Ndim> arr{};
std::copy_n(vec.begin(), Ndim, arr.begin());
return arr;
}
template <typename T, ssize_t Ndim = 2>
class NDView : public ArrayExpr<NDView<T, Ndim>, Ndim> {
public:
NDView() = default;
~NDView() = default;
@ -57,17 +64,23 @@ template <typename T, ssize_t Ndim = 2> class NDView : public ArrayExpr<NDView<T
NDView(T *buffer, std::array<ssize_t, Ndim> shape)
: buffer_(buffer), strides_(c_strides<Ndim>(shape)), shape_(shape),
size_(std::accumulate(std::begin(shape), std::end(shape), 1,
std::multiplies<>())) {}
// : buffer_(buffer),
// strides_(c_strides<Ndim>(make_array<Ndim>(shape))),
// shape_(make_array<Ndim>(shape)),
// size_(std::accumulate(std::begin(shape), std::end(shape), 1,
// std::multiplies<>())) {}
template <typename... Ix>
std::enable_if_t<sizeof...(Ix) == Ndim, T &> operator()(Ix... index) {
return buffer_[element_offset(strides_, index...)];
}
template <typename... Ix>
std::enable_if_t<sizeof...(Ix) == Ndim, T &> operator()(Ix... index) const {
return buffer_[element_offset(strides_, index...)];
}
@ -94,16 +107,21 @@ template <typename T, ssize_t Ndim = 2> class NDView : public ArrayExpr<NDView<T
NDView &operator+=(const T val) { return elemenwise(val, std::plus<T>()); }
NDView &operator-=(const T val) { return elemenwise(val, std::minus<T>()); }
NDView &operator*=(const T val) {
return elemenwise(val, std::multiplies<T>());
}
NDView &operator/=(const T val) {
return elemenwise(val, std::divides<T>());
}
NDView &operator/=(const NDView &other) {
return elemenwise(other, std::divides<T>());
}
template <size_t Size> NDView &operator=(const std::array<T, Size> &arr) {
if (size() != static_cast<ssize_t>(arr.size()))
throw std::runtime_error(LOCATION +
"Array and NDView size mismatch");
std::copy(arr.begin(), arr.end(), begin());
return *this;
}
@ -147,13 +165,15 @@ template <typename T, ssize_t Ndim = 2> class NDView : public ArrayExpr<NDView<T
std::array<ssize_t, Ndim> shape_{};
uint64_t size_{};
template <class BinaryOperation>
NDView &elemenwise(T val, BinaryOperation op) {
for (uint64_t i = 0; i != size_; ++i) {
buffer_[i] = op(buffer_[i], val);
}
return *this;
}
template <class BinaryOperation>
NDView &elemenwise(const NDView &other, BinaryOperation op) {
for (uint64_t i = 0; i != size_; ++i) {
buffer_[i] = op(buffer_[i], other.buffer_[i]);
}
@ -170,9 +190,8 @@ template <typename T, ssize_t Ndim> void NDView<T, Ndim>::print_all() const {
}
}
template <typename T, ssize_t Ndim>
std::ostream &operator<<(std::ostream &os, const NDView<T, Ndim> &arr) {
for (auto row = 0; row < arr.shape(0); ++row) {
for (auto col = 0; col < arr.shape(1); ++col) {
os << std::setw(3);
@ -183,10 +202,8 @@ std::ostream& operator <<(std::ostream& os, const NDView<T, Ndim>& arr){
return os;
}
template <typename T> NDView<T, 1> make_view(std::vector<T> &vec) {
return NDView<T, 1>(vec.data(), {static_cast<ssize_t>(vec.size())});
}
} // namespace aare
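A small sketch of viewing an existing buffer with NDView and the helpers above; the numbers are arbitrary.

#include "aare/NDView.hpp"
#include <vector>

int main() {
    std::vector<int> data{1, 2, 3, 4, 5, 6};
    auto v = aare::make_view(data); // non-owning 1D view
    v *= 2;                         // element-wise, writes through to data
    aare::NDView<int, 2> img(data.data(), {2, 3}); // same buffer seen as 2x3
    return img(1, 2) == 12 ? 0 : 1;
}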

View File

@ -1,9 +1,8 @@
#pragma once
#include "aare/Dtype.hpp"
#include "aare/FileInterface.hpp"
#include "aare/NumpyHelpers.hpp"
#include "aare/defs.hpp"
#include <filesystem>
#include <iostream>
@ -11,13 +10,12 @@
namespace aare {
/**
* @brief NumpyFile class to read and write numpy files
* @note derived from FileInterface
* @note implements all the pure virtual functions from FileInterface
* @note documentation for the functions can also be found in the FileInterface
* class
*/
class NumpyFile : public FileInterface {
@ -28,26 +26,35 @@ class NumpyFile : public FileInterface {
* @param mode file mode (r, w)
* @param cfg file configuration
*/
explicit NumpyFile(const std::filesystem::path &fname,
const std::string &mode = "r", FileConfig cfg = {});
void write(Frame &frame);
Frame read_frame() override { return get_frame(this->current_frame++); }
Frame read_frame(size_t frame_number) override {
return get_frame(frame_number);
}
std::vector<Frame> read_n(size_t n_frames) override;
void read_into(std::byte *image_buf) override {
return get_frame_into(this->current_frame++, image_buf);
}
void read_into(std::byte *image_buf, size_t n_frames) override;
size_t frame_number(size_t frame_index) override { return frame_index; };
size_t bytes_per_frame() override;
size_t pixels_per_frame() override;
void seek(size_t frame_number) override {
this->current_frame = frame_number;
}
size_t tell() override { return this->current_frame; }
size_t total_frames() const override { return m_header.shape[0]; }
size_t rows() const override { return m_header.shape[1]; }
size_t cols() const override { return m_header.shape[2]; }
size_t bitdepth() const override { return m_header.dtype.bitdepth(); }
DetectorType detector_type() const override {
return DetectorType::Unknown;
}
/**
* @brief get the data type of the numpy file
@ -70,7 +77,8 @@ class NumpyFile : public FileInterface {
template <typename T, size_t NDim> NDArray<T, NDim> load() {
NDArray<T, NDim> arr(make_shape<NDim>(m_header.shape));
if (fseek(fp, static_cast<long>(header_size), SEEK_SET)) {
throw std::runtime_error(LOCATION +
"Error seeking to the start of the data");
}
size_t rc = fread(arr.data(), sizeof(T), arr.size(), fp);
if (rc != static_cast<size_t>(arr.size())) {
@ -78,16 +86,20 @@ class NumpyFile : public FileInterface {
}
return arr;
}
template <typename A, typename TYPENAME, A Ndim>
void write(NDView<TYPENAME, Ndim> &frame) {
write_impl(frame.data(), frame.total_bytes());
}
template <typename A, typename TYPENAME, A Ndim>
void write(NDArray<TYPENAME, Ndim> &frame) {
write_impl(frame.data(), frame.total_bytes());
}
template <typename A, typename TYPENAME, A Ndim>
void write(NDView<TYPENAME, Ndim> &&frame) {
write_impl(frame.data(), frame.total_bytes());
}
template <typename A, typename TYPENAME, A Ndim>
void write(NDArray<TYPENAME, Ndim> &&frame) {
write_impl(frame.data(), frame.total_bytes());
}
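A hedged sketch of reading a .npy file with the class above; the path and the stored array's type and shape are assumptions.

#include "aare/NumpyFile.hpp"

int main() {
    aare::NumpyFile f("/data/frames.npy"); // default mode is read
    auto stack = f.load<double, 3>();      // whole file as NDArray<double, 3>
    aare::Frame frame = f.read_frame();    // or frame-by-frame via FileInterface
    return stack.size() > 0 ? 0 : 1;
}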

View File

@ -40,15 +40,18 @@ bool parse_bool(const std::string &in);
std::string get_value_from_map(const std::string &mapstr);
std::unordered_map<std::string, std::string>
parse_dict(std::string in, const std::vector<std::string> &keys);
template <typename T, size_t N>
bool in_array(T val, const std::array<T, N> &arr) {
return std::find(std::begin(arr), std::end(arr), val) != std::end(arr);
}
bool is_digits(const std::string &str);
aare::Dtype parse_descr(std::string typestring);
size_t write_header(const std::filesystem::path &fname,
const NumpyHeader &header);
size_t write_header(std::ostream &out, const NumpyHeader &header);
} // namespace NumpyHelpers

View File

@ -19,13 +19,13 @@ template <typename SUM_TYPE = double> class Pedestal {
uint32_t m_samples;
NDArray<uint32_t, 2> m_cur_samples;
// TODO! in case of int needs to be changed to uint64_t
NDArray<SUM_TYPE, 2> m_sum;
NDArray<SUM_TYPE, 2> m_sum2;
// Cache mean since it is used over and over in the ClusterFinder
// This optimization is related to the access pattern of the ClusterFinder
// Relies on having more reads than pushes to the pedestal
NDArray<SUM_TYPE, 2> m_mean;
public:
@ -42,9 +42,7 @@ template <typename SUM_TYPE = double> class Pedestal {
}
~Pedestal() = default;
NDArray<SUM_TYPE, 2> mean() { return m_mean; }
SUM_TYPE mean(const uint32_t row, const uint32_t col) const {
return m_mean(row, col);
@ -71,8 +69,6 @@ template <typename SUM_TYPE = double> class Pedestal {
return variance_array;
}
NDArray<SUM_TYPE, 2> std() {
NDArray<SUM_TYPE, 2> standard_deviation_array({m_rows, m_cols});
for (uint32_t i = 0; i < m_rows * m_cols; i++) {
@ -83,8 +79,6 @@ template <typename SUM_TYPE = double> class Pedestal {
return standard_deviation_array;
}
void clear() {
m_sum = 0;
m_sum2 = 0;
@ -92,8 +86,6 @@ template <typename SUM_TYPE = double> class Pedestal {
m_mean = 0;
}
void clear(const uint32_t row, const uint32_t col) {
m_sum(row, col) = 0;
m_sum2(row, col) = 0;
@ -101,8 +93,6 @@ template <typename SUM_TYPE = double> class Pedestal {
m_mean(row, col) = 0;
}
template <typename T> void push(NDView<T, 2> frame) {
assert(frame.size() == m_rows * m_cols);
@ -140,9 +130,6 @@ template <typename SUM_TYPE = double> class Pedestal {
}
}
template <typename T> void push(Frame &frame) {
assert(frame.rows() == static_cast<size_t>(m_rows) &&
frame.cols() == static_cast<size_t>(m_cols));
@ -170,7 +157,8 @@ template <typename SUM_TYPE = double> class Pedestal {
m_sum(row, col) += val - m_sum(row, col) / m_samples;
m_sum2(row, col) += val * val - m_sum2(row, col) / m_samples;
}
// Since we just did a push we know that m_cur_samples(row, col) is at
// least 1
m_mean(row, col) = m_sum(row, col) / m_cur_samples(row, col);
}
@ -183,7 +171,8 @@ template <typename SUM_TYPE = double> class Pedestal {
m_cur_samples(row, col)++;
} else {
m_sum(row, col) += val - m_sum(row, col) / m_cur_samples(row, col);
m_sum2(row, col) +=
val * val - m_sum2(row, col) / m_cur_samples(row, col);
}
}
@ -191,19 +180,16 @@ template <typename SUM_TYPE = double> class Pedestal {
* @brief Update the mean of the pedestal. This is used after having done
* push_no_update. It is not necessary to call this function after push.
*/
void update_mean() { m_mean = m_sum / m_cur_samples; }
template <typename T>
void push_fast(const uint32_t row, const uint32_t col, const T val_) {
// Assume we reached the steady state where all pixels have
// m_samples samples
SUM_TYPE val = static_cast<SUM_TYPE>(val_);
m_sum(row, col) += val - m_sum(row, col) / m_samples;
m_sum2(row, col) += val * val - m_sum2(row, col) / m_samples;
m_mean(row, col) = m_sum(row, col) / m_samples;
}
};
} // namespace aare
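A usage sketch of the Pedestal accumulator above; the constructor argument order and the frame contents are assumptions.

#include "aare/NDView.hpp"
#include "aare/Pedestal.hpp"
#include <cstdint>
#include <vector>

int main() {
    constexpr ssize_t rows = 512, cols = 1024;
    aare::Pedestal<double> ped(rows, cols, 1000); // assumed: rows, cols, n_samples
    std::vector<uint16_t> buf(rows * cols, 100);  // one flat frame worth of samples
    aare::NDView<uint16_t, 2> frame(buf.data(), {rows, cols});
    ped.push(frame); // accumulate one frame
    return ped.mean(0, 0) > 0 ? 0 : 1;
}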

View File

@ -1,7 +1,7 @@
#pragma once
#include "aare/NDArray.hpp"
#include "aare/defs.hpp"
namespace aare {
@ -10,11 +10,11 @@ NDArray<ssize_t, 2> GenerateMoench05PixelMap();
NDArray<ssize_t, 2> GenerateMoench05PixelMap1g();
NDArray<ssize_t, 2> GenerateMoench05PixelMapOld();
// Matterhorn02
NDArray<ssize_t, 2> GenerateMH02SingleCounterPixelMap();
NDArray<ssize_t, 3> GenerateMH02FourCounterPixelMap();
// Eiger
NDArray<ssize_t, 2> GenerateEigerFlipRowsPixelMap();
} // namespace aare

View File

@ -18,9 +18,9 @@
// @author Jordan DeLong (delong.j@fb.com)
// Changes made by PSD Detector Group:
// Copied: Line 34 constexpr std::size_t hardware_destructive_interference_size
// = 128; from folly/lang/Align.h Changed extension to .hpp Changed namespace to
// aare
#pragma once
@ -45,15 +45,14 @@ template <class T> struct ProducerConsumerQueue {
ProducerConsumerQueue(const ProducerConsumerQueue &) = delete;
ProducerConsumerQueue &operator=(const ProducerConsumerQueue &) = delete;
ProducerConsumerQueue(ProducerConsumerQueue &&other) {
size_ = other.size_;
records_ = other.records_;
other.records_ = nullptr;
readIndex_ = other.readIndex_.load(std::memory_order_acquire);
writeIndex_ = other.writeIndex_.load(std::memory_order_acquire);
}
ProducerConsumerQueue &operator=(ProducerConsumerQueue &&other) {
size_ = other.size_;
records_ = other.records_;
other.records_ = nullptr;
@ -62,15 +61,16 @@ template <class T> struct ProducerConsumerQueue {
return *this;
}
ProducerConsumerQueue() : ProducerConsumerQueue(2){};
// size must be >= 2.
//
// Also, note that the number of usable slots in the queue at any
// given time is actually (size-1), so if you start with an empty queue,
// isFull() will return true after size-1 insertions.
explicit ProducerConsumerQueue(uint32_t size)
: size_(size),
records_(static_cast<T *>(std::malloc(sizeof(T) * size))),
readIndex_(0), writeIndex_(0) {
assert(size >= 2);
if (!records_) {
throw std::bad_alloc();
@ -154,7 +154,8 @@ template <class T> struct ProducerConsumerQueue {
}
bool isEmpty() const {
return readIndex_.load(std::memory_order_acquire) ==
writeIndex_.load(std::memory_order_acquire);
}
bool isFull() const {
@ -175,7 +176,8 @@ template <class T> struct ProducerConsumerQueue {
// be removing items concurrently).
// * It is undefined to call this from any other thread.
size_t sizeGuess() const {
int ret = writeIndex_.load(std::memory_order_acquire) -
readIndex_.load(std::memory_order_acquire);
if (ret < 0) {
ret += size_;
}
@ -192,7 +194,7 @@ template <class T> struct ProducerConsumerQueue {
// const uint32_t size_;
uint32_t size_;
// T *const records_;
T *records_;
alignas(hardware_destructive_interference_size) AtomicIndex readIndex_;
alignas(hardware_destructive_interference_size) AtomicIndex writeIndex_;

View File

@ -1,11 +1,10 @@
#pragma once
#include "aare/FileInterface.hpp"
#include "aare/Frame.hpp"
#include "aare/NDArray.hpp" //for pixel map
#include "aare/RawMasterFile.hpp"
#include "aare/RawSubFile.hpp"
#include <optional>
namespace aare {
@ -53,10 +52,10 @@ class RawFile : public FileInterface {
void read_into(std::byte *image_buf) override;
void read_into(std::byte *image_buf, size_t n_frames) override;
// TODO! do we need to adapt the API?
void read_into(std::byte *image_buf, DetectorHeader *header);
void read_into(std::byte *image_buf, size_t n_frames,
DetectorHeader *header);
size_t frame_number(size_t frame_index) override;
size_t bytes_per_frame() override;
@ -73,20 +72,17 @@ class RawFile : public FileInterface {
RawMasterFile master() const;
DetectorType detector_type() const override;
private:
/**
* @brief read the frame at the given frame index into the image buffer
* @param frame_number frame number to read
* @param image_buf buffer to store the frame
*/
void get_frame_into(size_t frame_index, std::byte *frame_buffer,
DetectorHeader *header = nullptr);
/**
* @brief get the frame at the given frame index
@ -95,8 +91,6 @@ class RawFile : public FileInterface {
*/
Frame get_frame(size_t frame_index);
/**
* @brief read the header of the file
* @param fname path to the data subfile
@ -108,5 +102,4 @@ class RawFile : public FileInterface {
void find_geometry();
};
} // namespace aare

View File

@ -1,5 +1,7 @@
#pragma once
#include "aare/defs.hpp"
#include "aare/scan_parameters.hpp"
#include <filesystem>
#include <fmt/format.h>
#include <fstream>
@ -39,29 +41,6 @@ class RawFileNameComponents {
void set_old_scheme(bool old_scheme);
};
class ScanParameters {
bool m_enabled = false;
std::string m_dac;
int m_start = 0;
int m_stop = 0;
int m_step = 0;
//TODO! add settleTime, requires string to time conversion
public:
ScanParameters(const std::string &par);
ScanParameters() = default;
ScanParameters(const ScanParameters &) = default;
ScanParameters &operator=(const ScanParameters &) = default;
ScanParameters(ScanParameters &&) = default;
int start() const;
int stop() const;
int step() const;
const std::string &dac() const;
bool enabled() const;
void increment_stop();
};
/**
* @brief Class for parsing a master file either in our .json format or the old
* .raw format
@ -101,7 +80,6 @@ class RawMasterFile {
std::optional<ROI> m_roi;
public:
RawMasterFile(const std::filesystem::path &fpath);
@ -129,10 +107,8 @@ class RawMasterFile {
std::optional<size_t> number_of_rows() const;
std::optional<uint8_t> quad() const;
std::optional<ROI> roi() const;
ScanParameters scan_parameters() const;
private:

View File

@ -10,8 +10,9 @@
namespace aare {
/**
* @brief Class to read a single subfile written in .raw format. Used from
* RawFile to read the entire detector. Can be used directly to read part of the
* image.
*/
class RawSubFile {
protected:
@ -20,22 +21,23 @@ class RawSubFile {
size_t m_bitdepth;
std::filesystem::path m_path; //!< path to the subfile
std::string m_base_name; //!< base name used for formatting file names
size_t m_offset{}; //!< file index of the first file, allow starting at non
//!< zero file
size_t m_total_frames{}; //!< total number of frames in the series of files
size_t m_rows{};
size_t m_cols{};
size_t m_bytes_per_frame{};
int m_module_index{};
size_t m_current_file_index{}; //!< The index of the open file
size_t m_current_frame_index{}; //!< The index of the current frame (with
//!< reference to all files)
std::vector<size_t>
m_last_frame_in_file{}; //!< Used for seeking to the correct file
uint32_t m_pos_row{};
uint32_t m_pos_col{};
std::optional<NDArray<ssize_t, 2>> m_pixel_map;
public:
@ -49,12 +51,14 @@ class RawSubFile {
* @throws std::invalid_argument if the detector,type pair is not supported
*/
RawSubFile(const std::filesystem::path &fname, DetectorType detector,
size_t rows, size_t cols, size_t bitdepth, uint32_t pos_row = 0,
uint32_t pos_col = 0);
~RawSubFile() = default;
/**
* @brief Seek to the given frame number
* @note Puts the file pointer at the start of the header, not the start of
* the data
* @param frame_index frame position in file to seek to
* @throws std::runtime_error if the frame number is out of range
*/
@ -62,7 +66,8 @@ class RawSubFile {
size_t tell();
void read_into(std::byte *image_buf, DetectorHeader *header = nullptr);
void read_into(std::byte *image_buf, size_t n_frames,
DetectorHeader *header = nullptr);
void get_part(std::byte *buffer, size_t frame_index);
void read_header(DetectorHeader *header);
@ -78,15 +83,13 @@ class RawSubFile {
size_t frames_in_file() const { return m_total_frames; }
private:
template <typename T> void read_with_map(std::byte *image_buf);
void parse_fname(const std::filesystem::path &fname);
void scan_files();
void open_file(size_t file_index);
std::filesystem::path fpath(size_t file_index) const;
};
} // namespace aare

View File

@ -38,8 +38,10 @@ template <typename T> class VarClusterFinder {
bool use_noise_map = false;
int peripheralThresholdFactor_ = 5;
int current_label;
const std::array<int, 4> di{
{0, -1, -1, -1}}; // row ### 8-neighbour by scanning from left to right
const std::array<int, 4> dj{
{-1, -1, 0, 1}}; // col ### 8-neighbour by scanning from top to bottom
const std::array<int, 8> di_{{0, 0, -1, 1, -1, 1, -1, 1}}; // row
const std::array<int, 8> dj_{{-1, 1, 0, 0, 1, -1, -1, 1}}; // col
std::map<int, int> child; // hierarchy: key: child; val: parent
@ -50,7 +52,8 @@ template <typename T> class VarClusterFinder {
public:
VarClusterFinder(Shape<2> shape, T threshold)
: shape_(shape), labeled_(shape, 0), peripheral_labeled_(shape, 0),
binary_(shape), threshold_(threshold) {
hits.reserve(2000);
}
@ -60,7 +63,9 @@ template <typename T> class VarClusterFinder {
noiseMap = noise_map;
use_noise_map = true;
}
void set_peripheralThresholdFactor(int factor) {
peripheralThresholdFactor_ = factor;
}
void find_clusters(NDView<T, 2> img);
void find_clusters_X(NDView<T, 2> img);
void rec_FillHit(int clusterIndex, int i, int j);
@ -144,7 +149,8 @@ template <typename T> int VarClusterFinder<T>::check_neighbours(int i, int j) {
}
}
template <typename T>
void VarClusterFinder<T>::find_clusters(NDView<T, 2> img) {
original_ = img;
labeled_ = 0;
peripheral_labeled_ = 0;
@ -156,7 +162,8 @@ template <typename T> void VarClusterFinder<T>::find_clusters(NDView<T, 2> img)
store_clusters();
}
template <typename T>
void VarClusterFinder<T>::find_clusters_X(NDView<T, 2> img) {
original_ = img;
int clusterIndex = 0;
for (int i = 0; i < shape_[0]; ++i) {
@ -175,7 +182,8 @@ template <typename T> void VarClusterFinder<T>::find_clusters_X(NDView<T, 2> img
h_size.clear();
}
template <typename T>
void VarClusterFinder<T>::rec_FillHit(int clusterIndex, int i, int j) {
// printf("original_(%d, %d)=%f\n", i, j, original_(i,j));
// printf("h_size[%d].size=%d\n", clusterIndex, h_size[clusterIndex].size);
if (h_size[clusterIndex].size < MAX_CLUSTER_SIZE) {
@ -203,11 +211,15 @@ template <typename T> void VarClusterFinder<T>::rec_FillHit(int clusterIndex, in
} else {
// if (h_size[clusterIndex].size < MAX_CLUSTER_SIZE){
// h_size[clusterIndex].size += 1;
// h_size[clusterIndex].rows[h_size[clusterIndex].size] =
// row; h_size[clusterIndex].cols[h_size[clusterIndex].size]
// = col;
// h_size[clusterIndex].enes[h_size[clusterIndex].size] =
// original_(row, col);
// }// ? whether to include peripheral pixels
original_(row, col) =
0; // remove peripheral pixels, to avoid potential influence
// for pedestal updating
}
}
}
@ -275,8 +287,8 @@ template <typename T> void VarClusterFinder<T>::store_clusters() {
for (int i = 0; i < shape_[0]; ++i) {
for (int j = 0; j < shape_[1]; ++j) {
if (labeled_(i, j) != 0 || false
// (i-1 >= 0 and labeled_(i-1, j) != 0) or // another circle of
// peripheral pixels (j-1 >= 0 and labeled_(i, j-1) != 0) or
// (i+1 < shape_[0] and labeled_(i+1, j) != 0) or
// (j+1 < shape_[1] and labeled_(i, j+1) != 0)
) {

View File

@ -1,9 +1,9 @@
#pragma once
#include <aare/NDArray.hpp>
#include <algorithm>
#include <array>
#include <vector>
namespace aare {
/**
@ -18,23 +18,21 @@ namespace aare {
*
*/
template <typename T>
size_t last_smaller(const T *first, const T *last, T val) {
for (auto iter = first + 1; iter != last; ++iter) {
if (*iter >= val) {
return std::distance(first, iter - 1);
}
}
return std::distance(first, last - 1);
}
template <typename T> size_t last_smaller(const NDArray<T, 1> &arr, T val) {
return last_smaller(arr.begin(), arr.end(), val);
}
template <typename T> size_t last_smaller(const std::vector<T> &vec, T val) {
return last_smaller(vec.data(), vec.data() + vec.size(), val);
}
/**
@ -48,65 +46,59 @@ size_t last_smaller(const std::vector<T>& vec, T val) {
* @return index of the first element that is larger than val
*/
template <typename T>
size_t first_larger(const T *first, const T *last, T val) {
for (auto iter = first; iter != last; ++iter) {
if (*iter > val) {
return std::distance(first, iter);
}
}
return std::distance(first, last - 1);
}
template <typename T> size_t first_larger(const NDArray<T, 1> &arr, T val) {
return first_larger(arr.begin(), arr.end(), val);
}
template <typename T> size_t first_larger(const std::vector<T> &vec, T val) {
return first_larger(vec.data(), vec.data() + vec.size(), val);
}
/**
* @brief Index of the nearest element to val.
* Requires a sorted array. If there is no difference it takes the first
* element.
* @param first iterator to the first element
* @param last iterator to the last element
* @param val value to compare
* @return index of the nearest element
*/
template <typename T>
size_t nearest_index(const T *first, const T *last, T val) {
auto iter = std::min_element(first, last, [val](T a, T b) {
return std::abs(a - val) < std::abs(b - val);
});
return std::distance(first, iter);
}
template <typename T>
size_t nearest_index(const NDArray<T, 1>& arr, T val) {
template <typename T> size_t nearest_index(const NDArray<T, 1> &arr, T val) {
return nearest_index(arr.begin(), arr.end(), val);
}
template <typename T>
size_t nearest_index(const std::vector<T>& vec, T val) {
return nearest_index(vec.data(), vec.data()+vec.size(), val);
template <typename T> size_t nearest_index(const std::vector<T> &vec, T val) {
return nearest_index(vec.data(), vec.data() + vec.size(), val);
}
template <typename T, size_t N>
size_t nearest_index(const std::array<T,N>& arr, T val) {
return nearest_index(arr.data(), arr.data()+arr.size(), val);
size_t nearest_index(const std::array<T, N> &arr, T val) {
return nearest_index(arr.data(), arr.data() + arr.size(), val);
}
template <typename T>
std::vector<T> cumsum(const std::vector<T>& vec) {
template <typename T> std::vector<T> cumsum(const std::vector<T> &vec) {
std::vector<T> result(vec.size());
std::partial_sum(vec.begin(), vec.end(), result.begin());
return result;
}
template <typename Container> bool all_equal(const Container &c) {
if (!c.empty() &&
std::all_of(begin(c), end(c),
@ -117,6 +109,4 @@ template <typename Container> bool all_equal(const Container &c) {
return false;
}
} // namespace aare
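For orientation, a minimal usage sketch of the index helpers collected in this header (the include path and the sample values are assumptions; all three helpers expect sorted input):

#include <aare/algorithm.hpp> // header shown above, path assumed
#include <cassert>
#include <vector>

int main() {
    std::vector<double> edges{1.0, 2.0, 4.0, 8.0}; // must be sorted
    assert(aare::last_smaller(edges, 5.0) == 2);   // last element smaller than 5.0 is 4.0
    assert(aare::first_larger(edges, 2.0) == 2);   // first element larger than 2.0 is 4.0
    assert(aare::nearest_index(edges, 3.1) == 2);  // 4.0 is the closest element to 3.1
    auto partial = aare::cumsum(edges);            // {1.0, 3.0, 7.0, 15.0}
    assert(partial.back() == 15.0);
    assert(aare::all_equal(std::vector<int>{5, 5, 5}));
    return 0;
}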


@ -1,26 +1,27 @@
#pragma once
#include <aare/NDView.hpp>
#include <cstdint>
#include <vector>
#include <aare/NDView.hpp>
namespace aare {
uint16_t adc_sar_05_decode64to16(uint64_t input);
uint16_t adc_sar_04_decode64to16(uint64_t input);
void adc_sar_05_decode64to16(NDView<uint64_t, 2> input, NDView<uint16_t,2> output);
void adc_sar_04_decode64to16(NDView<uint64_t, 2> input, NDView<uint16_t,2> output);
void adc_sar_05_decode64to16(NDView<uint64_t, 2> input,
NDView<uint16_t, 2> output);
void adc_sar_04_decode64to16(NDView<uint64_t, 2> input,
NDView<uint16_t, 2> output);
/**
* @brief Apply custom weights to a 16-bit input value. Will sum up weights[i]**i
* for each bit i that is set in the input value.
* @brief Apply custom weights to a 16-bit input value. Will sum up
* weights[i]**i for each bit i that is set in the input value.
* @throws std::out_of_range if weights.size() < 16
* @param input 16-bit input value
* @param weights vector of weights, size must be at least 16 (one per bit of the input)
*/
double apply_custom_weights(uint16_t input, const NDView<double, 1> weights);
void apply_custom_weights(NDView<uint16_t, 1> input, NDView<double, 1> output, const NDView<double, 1> weights);
void apply_custom_weights(NDView<uint16_t, 1> input, NDView<double, 1> output,
const NDView<double, 1> weights);
} // namespace aare
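The doc comment above is terse, so here is a hedged sketch of the scalar overload; the NDView constructor signature (pointer plus shape) and the weights[i]**i reading of the comment are assumptions taken from the declarations shown, not a verified result:

#include "aare/decode.hpp"
#include <array>
#include <iostream>

int main() {
    std::array<double, 16> w;
    w.fill(2.0); // with all weights equal to 2.0, the documented sum of weights[i]**i
                 // over the set bits reduces to the plain binary value
    aare::NDView<double, 1> weights(w.data(), {static_cast<ssize_t>(w.size())});
    double v = aare::apply_custom_weights(0b0101, weights); // bits 0 and 2 set
    std::cout << v << '\n'; // expected 2**0 + 2**2 = 5 under that reading
    return 0;
}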


@ -1,18 +1,21 @@
#pragma once
#include "aare/Dtype.hpp"
#include "aare/type_traits.hpp"
#include <algorithm>
#include <array>
#include <stdexcept>
#include <cassert>
#include <cstdint>
#include <cstring>
#include <iostream>
#include <sstream>
#include <stdexcept>
#include <string>
#include <string_view>
#include <variant>
#include <vector>
/**
* @brief LOCATION macro to get the current location in the code
*/
@ -20,28 +23,24 @@
std::string(__FILE__) + std::string(":") + std::to_string(__LINE__) + \
":" + std::string(__func__) + ":"
#ifdef AARE_CUSTOM_ASSERT
#define AARE_ASSERT(expr)\
if (expr)\
{}\
else\
#define AARE_ASSERT(expr) \
if (expr) { \
} else \
aare::assert_failed(LOCATION + " Assertion failed: " + #expr + "\n");
#else
#define AARE_ASSERT(cond)\
do { (void)sizeof(cond); } while(0)
#define AARE_ASSERT(cond) \
do { \
(void)sizeof(cond); \
} while (0)
#endif
namespace aare {
inline constexpr size_t bits_per_byte = 8;
void assert_failed(const std::string &msg);
class DynamicCluster {
public:
int cluster_sizeX;
@ -179,11 +178,11 @@ template <typename T> struct t_xy {
};
using xy = t_xy<uint32_t>;
/**
* @brief Class to hold the geometry of a module. Where pixel 0 is located and the size of the module
* @brief Class to hold the geometry of a module. Where pixel 0 is located and
* the size of the module
*/
struct ModuleGeometry{
struct ModuleGeometry {
int origin_x{};
int origin_y{};
int height{};
@ -193,10 +192,10 @@ struct ModuleGeometry{
};
/**
* @brief Class to hold the geometry of a detector. Number of modules, their size and where pixel 0
* for each module is located
* @brief Class to hold the geometry of a detector. Number of modules, their
* size and where pixel 0 for each module is located
*/
struct DetectorGeometry{
struct DetectorGeometry {
int modules_x{};
int modules_y{};
int pixels_x{};
@ -208,7 +207,7 @@ struct DetectorGeometry{
auto size() const { return module_pixel_0.size(); }
};
struct ROI{
struct ROI {
ssize_t xmin{};
ssize_t xmax{};
ssize_t ymin{};
@ -219,12 +218,11 @@ struct ROI{
bool contains(ssize_t x, ssize_t y) const {
return x >= xmin && x < xmax && y >= ymin && y < ymax;
}
};
using dynamic_shape = std::vector<ssize_t>;
//TODO! Can we uniform enums between the libraries?
// TODO! Can we uniform enums between the libraries?
/**
* @brief Enum class to identify different detectors.
@ -232,7 +230,7 @@ using dynamic_shape = std::vector<ssize_t>;
* Different spelling to avoid confusion with the slsDetectorPackage
*/
enum class DetectorType {
//Standard detectors match the enum values from slsDetectorPackage
// Standard detectors match the enum values from slsDetectorPackage
Generic,
Eiger,
Gotthard,
@ -243,25 +241,21 @@ enum class DetectorType {
Gotthard2,
Xilinx_ChipTestBoard,
//Additional detectors used for defining processing. Variants of the standard ones.
Moench03=100,
// Additional detectors used for defining processing. Variants of the
// standard ones.
Moench03 = 100,
Moench03_old,
Unknown
};
enum class TimingMode { Auto, Trigger };
enum class FrameDiscardPolicy { NoDiscard, Discard, DiscardPartial };
template <class T> T StringTo(const std::string &arg) { return T(arg); }
template <class T> std::string ToString(T arg) { return T(arg); }
template <> DetectorType StringTo(const std::string & /*name*/);
template <> std::string ToString(DetectorType arg);
template <> TimingMode StringTo(const std::string & /*mode*/);
template <> FrameDiscardPolicy StringTo(const std::string & /*mode*/);
enum class BurstMode {
Burst_Interal,
Burst_External,
Continuous_Internal,
Continuous_External
};
using DataTypeVariants = std::variant<uint16_t, uint32_t>;
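A small sketch of the half-open semantics of ROI::contains from the struct shown above (aggregate initialization in member order is assumed):

#include "aare/defs.hpp"
#include <cassert>

int main() {
    aare::ROI roi{0, 10, 0, 20}; // xmin, xmax, ymin, ymax
    assert(roi.contains(0, 0));   // lower bounds are inclusive
    assert(roi.contains(9, 19));
    assert(!roi.contains(10, 5)); // upper bounds are exclusive
    assert(!roi.contains(5, 20));
    return 0;
}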


@ -1,7 +1,7 @@
#pragma once
#include "aare/defs.hpp"
#include "aare/RawMasterFile.hpp" //ROI refactor away
namespace aare{
#include "aare/defs.hpp"
namespace aare {
/**
* @brief Update the detector geometry given a region of interest
@ -12,5 +12,4 @@ namespace aare{
*/
DetectorGeometry update_geometry_with_roi(DetectorGeometry geo, ROI roi);
} // namespace aare


@ -1,7 +1,6 @@
#pragma once
/*Utility to log to console*/
#include <iostream>
#include <sstream>
#include <sys/time.h>
@ -27,7 +26,6 @@ namespace aare {
#define RESET "\x1b[0m"
#define BOLD "\x1b[1m"
enum TLogLevel {
logERROR,
logWARNING,
@ -37,7 +35,8 @@ enum TLogLevel {
logINFOCYAN,
logINFOMAGENTA,
logINFO,
logDEBUG,
logDEBUG, // constructors, destructors etc. should still give too much
// output
logDEBUG1,
logDEBUG2,
logDEBUG3,
@ -47,7 +46,9 @@ enum TLogLevel {
// Compiler should optimize away anything below this value
#ifndef AARE_LOG_LEVEL
#define AARE_LOG_LEVEL "LOG LEVEL NOT SET IN CMAKE" //This is configured in the main CMakeLists.txt
#define AARE_LOG_LEVEL \
"LOG LEVEL NOT SET IN CMAKE" // This is configured in the main
// CMakeLists.txt
#endif
#define __AT__ \
@ -72,7 +73,8 @@ class Logger {
std::clog << os.str() << std::flush; // Single write
}
static TLogLevel &ReportingLevel() { // singleton, eh? TODO! Do we need a runtime option?
static TLogLevel &
ReportingLevel() { // singleton, eh? TODO! Do we need a runtime option?
static TLogLevel reportingLevel = logDEBUG5;
return reportingLevel;
}


@ -0,0 +1,51 @@
#pragma once
#include <string>
#include <sstream>
namespace aare {
class ScanParameters {
bool m_enabled = false;
std::string m_dac;
int m_start = 0;
int m_stop = 0;
int m_step = 0;
// ns m_dac_settle_time{0};
// TODO! add settleTime, requires string to time conversion
public:
// "[enabled\ndac dac 4\nstart 500\nstop 2200\nstep 5\nsettleTime 100us\n]"
// TODO: use StringTo<ScanParameters> and move this to to_string
// add ways of setting the members of the class
ScanParameters(const std::string &par) {
std::istringstream iss(par.substr(1, par.size() - 2));
std::string line;
while (std::getline(iss, line)) {
if (line == "enabled") {
m_enabled = true;
} else if (line.find("dac") != std::string::npos) {
m_dac = line.substr(4);
} else if (line.find("start") != std::string::npos) {
m_start = std::stoi(line.substr(6));
} else if (line.find("stop") != std::string::npos) {
m_stop = std::stoi(line.substr(5));
} else if (line.find("step") != std::string::npos) {
m_step = std::stoi(line.substr(5));
}
}
};
ScanParameters() = default;
ScanParameters(const ScanParameters &) = default;
ScanParameters &operator=(const ScanParameters &) = default;
ScanParameters(ScanParameters &&) = default;
int start() const { return m_start; };
int stop() const { return m_stop; };
int step() const { return m_step; };
const std::string &dac() const { return m_dac; };
bool enabled() const { return m_enabled; };
void increment_stop() { m_stop += 1; };
};
} // namespace aare
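A parsing sketch for the bracketed field format quoted in the comment above; the dac name is illustrative:

#include "aare/scan_parameters.hpp"
#include <cassert>
#include <string>

int main() {
    std::string field = "[enabled\ndac vth1\nstart 500\nstop 2200\nstep 5\n]";
    aare::ScanParameters sp(field);
    assert(sp.enabled());
    assert(sp.dac() == "vth1"); // substr(4) keeps everything after "dac "
    assert(sp.start() == 500);
    assert(sp.stop() == 2200);
    assert(sp.step() == 5);
    sp.increment_stop();        // bumps stop by one
    assert(sp.stop() == 2201);
    return 0;
}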


@ -0,0 +1,11 @@
#pragma once
#include <string>
namespace aare {
std::string RemoveUnit(std::string &str);
void TrimWhiteSpaces(std::string &s);
} // namespace aare

include/aare/to_string.hpp

@ -0,0 +1,288 @@
#pragma once
#include "aare/defs.hpp"
#include "aare/scan_parameters.hpp"
#include "aare/string_utils.hpp"
#include <optional>
#include <chrono>
namespace aare {
// generic
template <class T, typename = std::enable_if_t<!is_duration<T>::value>>
std::string ToString(T arg) {
return T(arg);
}
template <typename T,
std::enable_if_t<!is_duration<T>::value && !is_container<T>::value,
int> = 0>
T StringTo(const std::string &arg) {
return T(arg);
}
// time
/** Convert std::chrono::duration with specified output unit */
template <typename T, typename Rep = double>
typename std::enable_if<is_duration<T>::value, std::string>::type
ToString(T t, const std::string &unit) {
using std::chrono::duration;
using std::chrono::duration_cast;
std::ostringstream os;
if (unit == "ns")
os << duration_cast<duration<Rep, std::nano>>(t).count() << unit;
else if (unit == "us")
os << duration_cast<duration<Rep, std::micro>>(t).count() << unit;
else if (unit == "ms")
os << duration_cast<duration<Rep, std::milli>>(t).count() << unit;
else if (unit == "s")
os << duration_cast<duration<Rep>>(t).count() << unit;
else
throw std::runtime_error("Unknown unit: " + unit);
return os.str();
}
/** Convert std::chrono::duration automatically selecting the unit */
template <typename From>
typename std::enable_if<is_duration<From>::value, std::string>::type
ToString(From t) {
using std::chrono::abs;
using std::chrono::duration_cast;
using std::chrono::microseconds;
using std::chrono::milliseconds;
using std::chrono::nanoseconds;
auto tns = duration_cast<nanoseconds>(t);
if (abs(tns) < microseconds(1)) {
return ToString(tns, "ns");
} else if (abs(tns) < milliseconds(1)) {
return ToString(tns, "us");
} else if (abs(tns) < milliseconds(99)) {
return ToString(tns, "ms");
} else {
return ToString(tns, "s");
}
}
template <class Rep, class Period>
std::ostream &operator<<(std::ostream &os,
const std::chrono::duration<Rep, Period> &d) {
return os << ToString(d);
}
template <typename T>
T StringTo(const std::string &t, const std::string &unit) {
double tval{0};
try {
tval = std::stod(t);
} catch (const std::invalid_argument &e) {
throw std::invalid_argument("[ERROR] Could not convert string to time");
}
using std::chrono::duration;
using std::chrono::duration_cast;
if (unit == "ns") {
return duration_cast<T>(duration<double, std::nano>(tval));
} else if (unit == "us") {
return duration_cast<T>(duration<double, std::micro>(tval));
} else if (unit == "ms") {
return duration_cast<T>(duration<double, std::milli>(tval));
} else if (unit == "s" || unit.empty()) {
return duration_cast<T>(std::chrono::duration<double>(tval));
} else {
throw std::invalid_argument("[ERROR] Invalid unit in conversion from "
"string to std::chrono::duration");
}
}
template <typename T, std::enable_if_t<is_duration<T>::value, int> = 0>
T StringTo(const std::string &t) {
std::string tmp{t};
auto unit = RemoveUnit(tmp);
return StringTo<T>(tmp, unit);
}
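A round-trip sketch for the duration helpers above; RemoveUnit is assumed to split the numeric part from its unit suffix, as its use here suggests:

#include "aare/to_string.hpp"
#include <cassert>
#include <chrono>

int main() {
    using namespace std::chrono;
    assert(aare::ToString(microseconds(1500), "ms") == "1.5ms"); // explicit unit
    assert(aare::ToString(nanoseconds(2500)) == "2.5us");        // automatic unit selection
    auto t = aare::StringTo<nanoseconds>("100us");
    assert(t == microseconds(100));
    return 0;
}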
template <> inline bool StringTo(const std::string &s) {
int i = std::stoi(s, nullptr, 10);
switch (i) {
case 0:
return false;
case 1:
return true;
default:
throw std::runtime_error("Unknown boolean. Expecting 0 or 1.");
}
}
template <> inline uint8_t StringTo(const std::string &s) {
int base = s.find("0x") != std::string::npos ? 16 : 10;
int value = std::stoi(s, nullptr, base);
if (value < std::numeric_limits<uint8_t>::min() ||
value > std::numeric_limits<uint8_t>::max()) {
throw std::runtime_error("Cannot scan uint8_t from string '" + s +
"'. Value must be in range 0 - 255.");
}
return static_cast<uint8_t>(value);
}
template <> inline uint16_t StringTo(const std::string &s) {
int base = s.find("0x") != std::string::npos ? 16 : 10;
int value = std::stoi(s, nullptr, base);
if (value < std::numeric_limits<uint16_t>::min() ||
value > std::numeric_limits<uint16_t>::max()) {
throw std::runtime_error("Cannot scan uint16_t from string '" + s +
"'. Value must be in range 0 - 65535.");
}
return static_cast<uint16_t>(value);
}
template <> inline uint32_t StringTo(const std::string &s) {
int base = s.find("0x") != std::string::npos ? 16 : 10;
return std::stoul(s, nullptr, base);
}
template <> inline uint64_t StringTo(const std::string &s) {
int base = s.find("0x") != std::string::npos ? 16 : 10;
return std::stoull(s, nullptr, base);
}
template <> inline int StringTo(const std::string &s) {
int base = s.find("0x") != std::string::npos ? 16 : 10;
return std::stoi(s, nullptr, base);
}
/*template <> inline size_t StringTo(const std::string &s) {
int base = s.find("0x") != std::string::npos ? 16 : 10;
return std::stoull(s, nullptr, base);
}*/
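The integer and bool specializations accept decimal as well as 0x-prefixed hexadecimal input; a short sketch:

#include "aare/to_string.hpp"
#include <cassert>
#include <cstdint>

int main() {
    assert(aare::StringTo<int>("0x10") == 16);
    assert(aare::StringTo<uint16_t>("65535") == 65535);
    assert(aare::StringTo<uint8_t>("0xff") == 255);
    assert(aare::StringTo<bool>("1"));
    assert(!aare::StringTo<bool>("0"));
    return 0;
}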
// vector
template <typename T> std::string ToString(const std::vector<T> &vec) {
std::ostringstream oss;
oss << "[";
for (size_t i = 0; i < vec.size(); ++i) {
oss << vec[i];
if (i != vec.size() - 1)
oss << ", ";
}
oss << "]";
return oss.str();
}
template <typename T>
std::ostream &operator<<(std::ostream &os, const std::vector<T> &v) {
return os << ToString(v);
}
template <typename Container,
std::enable_if_t<is_container<Container>::value &&
!is_std_string_v<Container> /*&&
!is_map_v<Container>*/
,
int> = 0>
Container StringTo(const std::string &s) {
using Value = typename Container::value_type;
// strip outer brackets
std::string str = s;
str.erase(
std::remove_if(str.begin(), str.end(),
[](unsigned char c) { return c == '[' || c == ']'; }),
str.end());
std::stringstream ss(str);
std::string item;
Container result;
while (std::getline(ss, item, ',')) {
TrimWhiteSpaces(item);
if (!item.empty()) {
result.push_back(StringTo<Value>(item));
}
}
return result;
}
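A vector round-trip sketch; TrimWhiteSpaces is assumed to strip surrounding whitespace, per its declaration in string_utils.hpp:

#include "aare/to_string.hpp"
#include <cassert>
#include <vector>

int main() {
    std::vector<int> v{1, 2, 3};
    assert(aare::ToString(v) == "[1, 2, 3]");
    auto back = aare::StringTo<std::vector<int>>("[1, 2, 3]");
    assert(back == v);
    return 0;
}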
// map
template <typename KeyType, typename ValueType>
std::string ToString(const std::map<KeyType, ValueType> &m) {
std::ostringstream os;
os << '{';
if (!m.empty()) {
auto it = m.cbegin();
os << ToString(it->first) << ": " << ToString(it->second);
it++;
while (it != m.cend()) {
os << ", " << ToString(it->first) << ": " << ToString(it->second);
it++;
}
}
os << '}';
return os.str();
}
template <>
inline std::map<std::string, std::string> StringTo(const std::string &s) {
std::map<std::string, std::string> result;
std::string str = s;
// Remove outer braces if present
if (!str.empty() && str.front() == '{' && str.back() == '}') {
str = str.substr(1, str.size() - 2);
}
std::stringstream ss(str);
std::string item;
while (std::getline(ss, item, ',')) {
auto colon_pos = item.find(':');
if (colon_pos == std::string::npos)
throw std::runtime_error("Missing ':' in item: " + item);
std::string key = item.substr(0, colon_pos);
std::string value = item.substr(colon_pos + 1);
TrimWhiteSpaces(key);
TrimWhiteSpaces(value);
result[key] = value;
}
return result;
}
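A map round-trip sketch for the string-to-string specialization above (keys print in map order):

#include "aare/to_string.hpp"
#include <cassert>
#include <map>
#include <string>

int main() {
    std::map<std::string, std::string> m{{"exptime", "10us"}, {"period", "1ms"}};
    assert(aare::ToString(m) == "{exptime: 10us, period: 1ms}");
    auto back = aare::StringTo<std::map<std::string, std::string>>(
        "{exptime: 10us, period: 1ms}");
    assert(back == m);
    return 0;
}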
// optional
template <class T> std::string ToString(const std::optional<T> &opt) {
return opt ? ToString(*opt) : "nullopt";
}
template <typename T>
std::ostream &operator<<(std::ostream &os, const std::optional<T> &opt) {
if (opt)
os << *opt;
else
os << "nullopt";
return os;
}
// enums
template <> std::string ToString(DetectorType arg);
template <> DetectorType StringTo(const std::string & /*name*/);
template <> std::string ToString(TimingMode arg);
template <> TimingMode StringTo(const std::string & /*mode*/);
template <> std::string ToString(FrameDiscardPolicy arg);
template <> FrameDiscardPolicy StringTo(const std::string & /*mode*/);
template <> std::string ToString(BurstMode arg);
template <> BurstMode StringTo(const std::string & /*mode*/);
template <> std::string ToString(ROI arg);
std::ostream &operator<<(std::ostream &os, const ROI &roi);
template <> std::string ToString(ScanParameters arg);
std::ostream &operator<<(std::ostream &os, const ScanParameters &r);
} // namespace aare


@ -0,0 +1,72 @@
#pragma once
#include <type_traits>
namespace aare {
/**
* Type trait to check if a template parameter is a std::chrono::duration
*/
template <typename T, typename _ = void>
struct is_duration : std::false_type {};
template <typename... Ts> struct is_duration_helper {};
template <typename T>
struct is_duration<T,
typename std::conditional<
false,
is_duration_helper<typename T::rep, typename T::period,
decltype(std::declval<T>().min()),
decltype(std::declval<T>().max()),
decltype(std::declval<T>().zero())>,
void>::type> : public std::true_type {};
/**
* Type trait to evaluate if template parameter is
* complying with a standard container
*/
template <typename T, typename _ = void>
struct is_container : std::false_type {};
template <typename... Ts> struct is_container_helper {};
template <typename T>
struct is_container<
T, typename std::conditional<
false,
is_container_helper<
typename std::remove_reference<T>::type::value_type,
typename std::remove_reference<T>::type::size_type,
typename std::remove_reference<T>::type::iterator,
typename std::remove_reference<T>::type::const_iterator,
decltype(std::declval<T>().size()),
decltype(std::declval<T>().begin()),
decltype(std::declval<T>().end()),
decltype(std::declval<T>().cbegin()),
decltype(std::declval<T>().cend()),
decltype(std::declval<T>().empty())>,
void>::type> : public std::true_type {};
/**
* Type trait to evaluate if template parameter is
* complying with a std::string
*/
template <typename T>
inline constexpr bool is_std_string_v =
std::is_same_v<std::decay_t<T>, std::string>;
/**
* Type trait to evaluate if template parameter is
* complying with std::map
*/
template <typename T> struct is_map : std::false_type {};
template <typename K, typename V, typename... Args>
struct is_map<std::map<K, V, Args...>> : std::true_type {};
template <typename T>
inline constexpr bool is_map_v = is_map<std::decay_t<T>>::value;
} // namespace aare
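A compile-time sketch of what these traits accept; note that std::string also passes the container check, which is why is_std_string_v exists as a separate guard (standard headers are included explicitly in case the trait header does not pull them in):

#include <chrono>
#include <map>
#include <string>
#include <vector>
#include "aare/type_traits.hpp"

static_assert(aare::is_duration<std::chrono::milliseconds>::value, "");
static_assert(!aare::is_duration<int>::value, "");
static_assert(aare::is_container<std::vector<int>>::value, "");
static_assert(aare::is_container<std::string>::value, "std::string is also a container");
static_assert(aare::is_std_string_v<std::string>, "");
static_assert(aare::is_map_v<std::map<std::string, int>>, "");
static_assert(!aare::is_map_v<std::vector<int>>, "");

int main() { return 0; }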


@ -6,7 +6,7 @@ namespace aare {
/**
* @brief Get the error message from an ifstream object
*/
std::string ifstream_error_msg(std::ifstream &ifs);
} // namespace aare


@ -1,11 +1,11 @@
#include <thread>
#include <vector>
#include <utility>
#include <vector>
namespace aare {
template<typename F>
void RunInParallel(F func, const std::vector<std::pair<int, int>>& tasks) {
template <typename F>
void RunInParallel(F func, const std::vector<std::pair<int, int>> &tasks) {
// auto tasks = split_task(0, y.shape(0), n_threads);
std::vector<std::thread> threads;
for (auto &task : tasks) {
@ -14,5 +14,5 @@ namespace aare {
for (auto &thread : threads) {
thread.join();
}
}
}
} // namespace aare


@ -1,22 +1,47 @@
from ._aare import ClusterFinder_Cluster3x3i, ClusterFinder_Cluster2x2i, ClusterFinderMT_Cluster3x3i, ClusterFinderMT_Cluster2x2i, ClusterCollector_Cluster3x3i, ClusterCollector_Cluster2x2i
# from ._aare import ClusterFinder_Cluster3x3i, ClusterFinder_Cluster2x2i, ClusterFinderMT_Cluster3x3i, ClusterFinderMT_Cluster2x2i, ClusterCollector_Cluster3x3i, ClusterCollector_Cluster2x2i
from ._aare import ClusterFileSink_Cluster3x3i, ClusterFileSink_Cluster2x2i
# from ._aare import ClusterFileSink_Cluster3x3i, ClusterFileSink_Cluster2x2i
from . import _aare
import numpy as np
_supported_cluster_sizes = [(2,2), (3,3), (5,5), (7,7), (9,9),]
# def _get_class()
def _type_to_char(dtype):
if dtype == np.int32:
return 'i'
elif dtype == np.float32:
return 'f'
elif dtype == np.float64:
return 'd'
else:
raise ValueError(f"Unsupported dtype: {dtype}. Only np.int32, np.float32, and np.float64 are supported.")
def _get_class(name, cluster_size, dtype):
"""
Helper function to get the class based on the name, cluster size, and dtype.
"""
try:
class_name = f"{name}_Cluster{cluster_size[0]}x{cluster_size[1]}{_type_to_char(dtype)}"
cls = getattr(_aare, class_name)
except AttributeError:
raise ValueError(f"Unsupported combination of type and cluster size: {dtype}/{cluster_size} when requesting {class_name}")
return cls
def ClusterFinder(image_size, cluster_size, n_sigma=5, dtype = np.int32, capacity = 1024):
"""
Factory function to create a ClusterFinder object. Provides a cleaner syntax for
the templated ClusterFinder in C++.
"""
if dtype == np.int32 and cluster_size == (3,3):
return ClusterFinder_Cluster3x3i(image_size, n_sigma = n_sigma, capacity=capacity)
elif dtype == np.int32 and cluster_size == (2,2):
return ClusterFinder_Cluster2x2i(image_size, n_sigma = n_sigma, capacity=capacity)
else:
#TODO! add the other formats
raise ValueError(f"Unsupported dtype: {dtype}. Only np.int32 is supported.")
cls = _get_class("ClusterFinder", cluster_size, dtype)
return cls(image_size, n_sigma=n_sigma, capacity=capacity)
def ClusterFinderMT(image_size, cluster_size = (3,3), dtype=np.int32, n_sigma=5, capacity = 1024, n_threads = 3):
@ -25,15 +50,9 @@ def ClusterFinderMT(image_size, cluster_size = (3,3), dtype=np.int32, n_sigma=5,
the templated ClusterFinderMT in C++.
"""
if dtype == np.int32 and cluster_size == (3,3):
return ClusterFinderMT_Cluster3x3i(image_size, n_sigma = n_sigma,
capacity = capacity, n_threads = n_threads)
elif dtype == np.int32 and cluster_size == (2,2):
return ClusterFinderMT_Cluster2x2i(image_size, n_sigma = n_sigma,
capacity = capacity, n_threads = n_threads)
else:
#TODO! add the other formats
raise ValueError(f"Unsupported dtype: {dtype}. Only np.int32 is supported.")
cls = _get_class("ClusterFinderMT", cluster_size, dtype)
return cls(image_size, n_sigma=n_sigma, capacity=capacity, n_threads=n_threads)
def ClusterCollector(clusterfindermt, cluster_size = (3,3), dtype=np.int32):
@ -42,14 +61,8 @@ def ClusterCollector(clusterfindermt, cluster_size = (3,3), dtype=np.int32):
the templated ClusterCollector in C++.
"""
if dtype == np.int32 and cluster_size == (3,3):
return ClusterCollector_Cluster3x3i(clusterfindermt)
elif dtype == np.int32 and cluster_size == (2,2):
return ClusterCollector_Cluster2x2i(clusterfindermt)
else:
#TODO! add the other formats
raise ValueError(f"Unsupported dtype: {dtype}. Only np.int32 is supported.")
cls = _get_class("ClusterCollector", cluster_size, dtype)
return cls(clusterfindermt)
def ClusterFileSink(clusterfindermt, cluster_file, dtype=np.int32):
"""
@ -57,11 +70,15 @@ def ClusterFileSink(clusterfindermt, cluster_file, dtype=np.int32):
the templated ClusterCollector in C++.
"""
if dtype == np.int32 and clusterfindermt.cluster_size == (3,3):
return ClusterFileSink_Cluster3x3i(clusterfindermt, cluster_file)
elif dtype == np.int32 and clusterfindermt.cluster_size == (2,2):
return ClusterFileSink_Cluster2x2i(clusterfindermt, cluster_file)
cls = _get_class("ClusterFileSink", clusterfindermt.cluster_size, dtype)
return cls(clusterfindermt, cluster_file)
else:
#TODO! add the other formats
raise ValueError(f"Unsupported dtype: {dtype}. Only np.int32 is supported.")
def ClusterFile(fname, cluster_size=(3,3), dtype=np.int32):
"""
Factory function to create a ClusterFile object. Provides a cleaner syntax for
the templated ClusterFile in C++.
"""
cls = _get_class("ClusterFile", cluster_size, dtype)
return cls(fname)

python/aare/Hdf5File.py

@ -0,0 +1,66 @@
from . import _aare
import numpy as np
#from .ScanParameters import ScanParameters
class Hdf5File(_aare.Hdf5File):
def __init__(self, fname, chunk_size = 1):
super().__init__(fname)
self._chunk_size = chunk_size
def read(self) -> tuple:
"""Read the entire file.
Seeks to the beginning of the file before reading.
Returns:
tuple: header, data
"""
self.seek(0)
return self.read_n(self.total_frames)
# @property
# def scan_parameters(self):
# """Return the scan parameters.
# Returns:
# ScanParameters: Scan parameters.
# """
# return ScanParameters(self.master.scan_parameters)
@property
def master(self):
"""Return the master file.
Returns:
Hdf5MasterFile: Master file.
"""
return super().master
def __len__(self) -> int:
"""Return the number of frames in the file.
Returns:
int: Number of frames in file.
"""
return super().frames_in_file
def __enter__(self):
return self
def __exit__(self, exc_type, exc_value, traceback):
pass
def __iter__(self):
return self
def __next__(self):
try:
if self._chunk_size == 1:
return self.read_frame()
else:
return self.read_n(self._chunk_size)
except RuntimeError:
# TODO! find a good way to check that we actually have the right exception
raise StopIteration


@ -2,16 +2,15 @@
from . import _aare
from ._aare import File, RawMasterFile, RawSubFile, JungfrauDataFile
from ._aare import File, RawMasterFile, RawSubFile, Hdf5MasterFile, JungfrauDataFile
from ._aare import Pedestal_d, Pedestal_f, ClusterFinder_Cluster3x3i, VarClusterFinder
from ._aare import DetectorType
from ._aare import ClusterFile_Cluster3x3i as ClusterFile
from ._aare import hitmap
from ._aare import ROI
# from ._aare import ClusterFinderMT, ClusterCollector, ClusterFileSink, ClusterVector_i
from .ClusterFinder import ClusterFinder, ClusterCollector, ClusterFinderMT, ClusterFileSink
from .ClusterFinder import ClusterFinder, ClusterCollector, ClusterFinderMT, ClusterFileSink, ClusterFile
from .ClusterVector import ClusterVector
@ -24,6 +23,7 @@ from ._aare import apply_custom_weights
from .CtbRawFile import CtbRawFile
from .RawFile import RawFile
from .Hdf5File import Hdf5File
from .ScanParameters import ScanParameters
from .utils import random_pixels, random_pixel, flat_list, add_colorbar


@ -0,0 +1,64 @@
#include "aare/Cluster.hpp"
#include <cstdint>
#include <filesystem>
#include <fmt/format.h>
#include <pybind11/numpy.h>
#include <pybind11/pybind11.h>
#include <pybind11/stl.h>
#include <pybind11/stl_bind.h>
namespace py = pybind11;
using pd_type = double;
using namespace aare;
#pragma GCC diagnostic push
#pragma GCC diagnostic ignored "-Wunused-parameter"
template <typename Type, uint8_t ClusterSizeX, uint8_t ClusterSizeY,
typename CoordType>
void define_Cluster(py::module &m, const std::string &typestr) {
auto class_name = fmt::format("Cluster{}", typestr);
py::class_<Cluster<Type, ClusterSizeX, ClusterSizeY, CoordType>>(
m, class_name.c_str(), py::buffer_protocol())
.def(py::init([](uint8_t x, uint8_t y, py::array_t<Type> data) {
py::buffer_info buf_info = data.request();
Cluster<Type, ClusterSizeX, ClusterSizeY, CoordType> cluster;
cluster.x = x;
cluster.y = y;
auto r = data.template unchecked<1>(); // no bounds checks
for (py::ssize_t i = 0; i < data.size(); ++i) {
cluster.data[i] = r(i);
}
return cluster;
}));
/*
//TODO! Review if to keep or not
.def_property(
"data",
[](ClusterType &c) -> py::array {
return py::array(py::buffer_info(
c.data, sizeof(Type),
py::format_descriptor<Type>::format(), // Type
// format
1, // Number of dimensions
{static_cast<ssize_t>(ClusterSizeX *
ClusterSizeY)}, // Shape (flattened)
{sizeof(Type)} // Stride (step size between elements)
));
},
[](ClusterType &c, py::array_t<Type> arr) {
py::buffer_info buf_info = arr.request();
Type *ptr = static_cast<Type *>(buf_info.ptr);
std::copy(ptr, ptr + ClusterSizeX * ClusterSizeY,
c.data); // TODO dont iterate over centers!!!
});
*/
}
#pragma GCC diagnostic pop


@ -0,0 +1,44 @@
#include "aare/ClusterCollector.hpp"
#include "aare/ClusterFileSink.hpp"
#include "aare/ClusterFinder.hpp"
#include "aare/ClusterFinderMT.hpp"
#include "aare/ClusterVector.hpp"
#include "aare/NDView.hpp"
#include "aare/Pedestal.hpp"
#include "np_helper.hpp"
#include <cstdint>
#include <filesystem>
#include <pybind11/pybind11.h>
#include <pybind11/stl.h>
#include <pybind11/stl_bind.h>
namespace py = pybind11;
using pd_type = double;
using namespace aare;
#pragma GCC diagnostic push
#pragma GCC diagnostic ignored "-Wunused-parameter"
template <typename T, uint8_t ClusterSizeX, uint8_t ClusterSizeY,
typename CoordType = uint16_t>
void define_ClusterCollector(py::module &m, const std::string &typestr) {
auto class_name = fmt::format("ClusterCollector_{}", typestr);
using ClusterType = Cluster<T, ClusterSizeX, ClusterSizeY, CoordType>;
py::class_<ClusterCollector<ClusterType>>(m, class_name.c_str())
.def(py::init<ClusterFinderMT<ClusterType, uint16_t, double> *>())
.def("stop", &ClusterCollector<ClusterType>::stop)
.def(
"steal_clusters",
[](ClusterCollector<ClusterType> &self) {
auto v = new std::vector<ClusterVector<ClusterType>>(
self.steal_clusters());
return v; // TODO change!!!
},
py::return_value_policy::take_ownership);
}
#pragma GCC diagnostic pop


@ -21,8 +21,7 @@ using namespace ::aare;
template <typename Type, uint8_t CoordSizeX, uint8_t CoordSizeY,
typename CoordType = uint16_t>
void define_cluster_file_io_bindings(py::module &m,
const std::string &typestr) {
void define_ClusterFile(py::module &m, const std::string &typestr) {
using ClusterType = Cluster<Type, CoordSizeX, CoordSizeY, CoordType>;


@ -0,0 +1,37 @@
#include "aare/ClusterCollector.hpp"
#include "aare/ClusterFileSink.hpp"
#include "aare/ClusterFinder.hpp"
#include "aare/ClusterFinderMT.hpp"
#include "aare/ClusterVector.hpp"
#include "aare/NDView.hpp"
#include "aare/Pedestal.hpp"
#include "np_helper.hpp"
#include <cstdint>
#include <filesystem>
#include <pybind11/pybind11.h>
#include <pybind11/stl.h>
#include <pybind11/stl_bind.h>
namespace py = pybind11;
using pd_type = double;
using namespace aare;
#pragma GCC diagnostic push
#pragma GCC diagnostic ignored "-Wunused-parameter"
template <typename T, uint8_t ClusterSizeX, uint8_t ClusterSizeY,
typename CoordType = uint16_t>
void define_ClusterFileSink(py::module &m, const std::string &typestr) {
auto class_name = fmt::format("ClusterFileSink_{}", typestr);
using ClusterType = Cluster<T, ClusterSizeX, ClusterSizeY, CoordType>;
py::class_<ClusterFileSink<ClusterType>>(m, class_name.c_str())
.def(py::init<ClusterFinderMT<ClusterType, uint16_t, double> *,
const std::filesystem::path &>())
.def("stop", &ClusterFileSink<ClusterType>::stop);
}
#pragma GCC diagnostic pop


@ -0,0 +1,77 @@
#include "aare/ClusterCollector.hpp"
#include "aare/ClusterFileSink.hpp"
#include "aare/ClusterFinder.hpp"
#include "aare/ClusterFinderMT.hpp"
#include "aare/ClusterVector.hpp"
#include "aare/NDView.hpp"
#include "aare/Pedestal.hpp"
#include "np_helper.hpp"
#include <cstdint>
#include <filesystem>
#include <pybind11/pybind11.h>
#include <pybind11/stl.h>
#include <pybind11/stl_bind.h>
namespace py = pybind11;
using pd_type = double;
using namespace aare;
#pragma GCC diagnostic push
#pragma GCC diagnostic ignored "-Wunused-parameter"
template <typename T, uint8_t ClusterSizeX, uint8_t ClusterSizeY,
typename CoordType = uint16_t>
void define_ClusterFinder(py::module &m, const std::string &typestr) {
auto class_name = fmt::format("ClusterFinder_{}", typestr);
using ClusterType = Cluster<T, ClusterSizeX, ClusterSizeY, CoordType>;
py::class_<ClusterFinder<ClusterType, uint16_t, pd_type>>(
m, class_name.c_str())
.def(py::init<Shape<2>, pd_type, size_t>(), py::arg("image_size"),
py::arg("n_sigma") = 5.0, py::arg("capacity") = 1'000'000)
.def("push_pedestal_frame",
[](ClusterFinder<ClusterType, uint16_t, pd_type> &self,
py::array_t<uint16_t> frame) {
auto view = make_view_2d(frame);
self.push_pedestal_frame(view);
})
.def("clear_pedestal",
&ClusterFinder<ClusterType, uint16_t, pd_type>::clear_pedestal)
.def_property_readonly(
"pedestal",
[](ClusterFinder<ClusterType, uint16_t, pd_type> &self) {
auto pd = new NDArray<pd_type, 2>{};
*pd = self.pedestal();
return return_image_data(pd);
})
.def_property_readonly(
"noise",
[](ClusterFinder<ClusterType, uint16_t, pd_type> &self) {
auto arr = new NDArray<pd_type, 2>{};
*arr = self.noise();
return return_image_data(arr);
})
.def(
"steal_clusters",
[](ClusterFinder<ClusterType, uint16_t, pd_type> &self,
bool realloc_same_capacity) {
ClusterVector<ClusterType> clusters =
self.steal_clusters(realloc_same_capacity);
return clusters;
},
py::arg("realloc_same_capacity") = false)
.def(
"find_clusters",
[](ClusterFinder<ClusterType, uint16_t, pd_type> &self,
py::array_t<uint16_t> frame, uint64_t frame_number) {
auto view = make_view_2d(frame);
self.find_clusters(view, frame_number);
return;
},
py::arg(), py::arg("frame_number") = 0);
}
#pragma GCC diagnostic pop


@ -0,0 +1,81 @@
#include "aare/ClusterCollector.hpp"
#include "aare/ClusterFileSink.hpp"
#include "aare/ClusterFinder.hpp"
#include "aare/ClusterFinderMT.hpp"
#include "aare/ClusterVector.hpp"
#include "aare/NDView.hpp"
#include "aare/Pedestal.hpp"
#include "np_helper.hpp"
#include <cstdint>
#include <filesystem>
#include <pybind11/pybind11.h>
#include <pybind11/stl.h>
#include <pybind11/stl_bind.h>
namespace py = pybind11;
using pd_type = double;
using namespace aare;
#pragma GCC diagnostic push
#pragma GCC diagnostic ignored "-Wunused-parameter"
template <typename T, uint8_t ClusterSizeX, uint8_t ClusterSizeY,
typename CoordType = uint16_t>
void define_ClusterFinderMT(py::module &m, const std::string &typestr) {
auto class_name = fmt::format("ClusterFinderMT_{}", typestr);
using ClusterType = Cluster<T, ClusterSizeX, ClusterSizeY, CoordType>;
py::class_<ClusterFinderMT<ClusterType, uint16_t, pd_type>>(
m, class_name.c_str())
.def(py::init<Shape<2>, pd_type, size_t, size_t>(),
py::arg("image_size"), py::arg("n_sigma") = 5.0,
py::arg("capacity") = 2048, py::arg("n_threads") = 3)
.def("push_pedestal_frame",
[](ClusterFinderMT<ClusterType, uint16_t, pd_type> &self,
py::array_t<uint16_t> frame) {
auto view = make_view_2d(frame);
self.push_pedestal_frame(view);
})
.def(
"find_clusters",
[](ClusterFinderMT<ClusterType, uint16_t, pd_type> &self,
py::array_t<uint16_t> frame, uint64_t frame_number) {
auto view = make_view_2d(frame);
self.find_clusters(view, frame_number);
return;
},
py::arg(), py::arg("frame_number") = 0)
.def_property_readonly(
"cluster_size",
[](ClusterFinderMT<ClusterType, uint16_t, pd_type> &self) {
return py::make_tuple(ClusterSizeX, ClusterSizeY);
})
.def("clear_pedestal",
&ClusterFinderMT<ClusterType, uint16_t, pd_type>::clear_pedestal)
.def("sync", &ClusterFinderMT<ClusterType, uint16_t, pd_type>::sync)
.def("stop", &ClusterFinderMT<ClusterType, uint16_t, pd_type>::stop)
.def("start", &ClusterFinderMT<ClusterType, uint16_t, pd_type>::start)
.def(
"pedestal",
[](ClusterFinderMT<ClusterType, uint16_t, pd_type> &self,
size_t thread_index) {
auto pd = new NDArray<pd_type, 2>{};
*pd = self.pedestal(thread_index);
return return_image_data(pd);
},
py::arg("thread_index") = 0)
.def(
"noise",
[](ClusterFinderMT<ClusterType, uint16_t, pd_type> &self,
size_t thread_index) {
auto arr = new NDArray<pd_type, 2>{};
*arr = self.noise(thread_index);
return return_image_data(arr);
},
py::arg("thread_index") = 0);
}
#pragma GCC diagnostic pop


@ -44,7 +44,8 @@ void define_ClusterVector(py::module &m, const std::string &typestr) {
auto *vec = new std::vector<Type>(self.sum());
return return_vector(vec);
})
.def("sum_2x2", [](ClusterVector<ClusterType> &self){
.def("sum_2x2",
[](ClusterVector<ClusterType> &self) {
auto *vec = new std::vector<Type>(self.sum_2x2());
return return_vector(vec);
})
@ -102,3 +103,5 @@ void define_ClusterVector(py::module &m, const std::string &typestr) {
return hitmap;
});
}
#pragma GCC diagnostic pop


@ -1,211 +0,0 @@
#include "aare/ClusterCollector.hpp"
#include "aare/ClusterFileSink.hpp"
#include "aare/ClusterFinder.hpp"
#include "aare/ClusterFinderMT.hpp"
#include "aare/ClusterVector.hpp"
#include "aare/NDView.hpp"
#include "aare/Pedestal.hpp"
#include "np_helper.hpp"
#include <cstdint>
#include <filesystem>
#include <pybind11/pybind11.h>
#include <pybind11/stl.h>
#include <pybind11/stl_bind.h>
namespace py = pybind11;
using pd_type = double;
using namespace aare;
#pragma GCC diagnostic push
#pragma GCC diagnostic ignored "-Wunused-parameter"
template <typename Type, uint8_t ClusterSizeX, uint8_t ClusterSizeY,
typename CoordType>
void define_cluster(py::module &m, const std::string &typestr) {
auto class_name = fmt::format("Cluster{}", typestr);
py::class_<Cluster<Type, ClusterSizeX, ClusterSizeY, CoordType>>(
m, class_name.c_str(), py::buffer_protocol())
.def(py::init([](uint8_t x, uint8_t y, py::array_t<Type> data) {
py::buffer_info buf_info = data.request();
Cluster<Type, ClusterSizeX, ClusterSizeY, CoordType> cluster;
cluster.x = x;
cluster.y = y;
auto r = data.template unchecked<1>(); // no bounds checks
for (py::ssize_t i = 0; i < data.size(); ++i) {
cluster.data[i] = r(i);
}
return cluster;
}));
/*
.def_property(
"data",
[](ClusterType &c) -> py::array {
return py::array(py::buffer_info(
c.data, sizeof(Type),
py::format_descriptor<Type>::format(), // Type
// format
1, // Number of dimensions
{static_cast<ssize_t>(ClusterSizeX *
ClusterSizeY)}, // Shape (flattened)
{sizeof(Type)} // Stride (step size between elements)
));
},
[](ClusterType &c, py::array_t<Type> arr) {
py::buffer_info buf_info = arr.request();
Type *ptr = static_cast<Type *>(buf_info.ptr);
std::copy(ptr, ptr + ClusterSizeX * ClusterSizeY,
c.data); // TODO dont iterate over centers!!!
});
*/
}
template <typename T, uint8_t ClusterSizeX, uint8_t ClusterSizeY,
typename CoordType = uint16_t>
void define_cluster_finder_mt_bindings(py::module &m,
const std::string &typestr) {
auto class_name = fmt::format("ClusterFinderMT_{}", typestr);
using ClusterType = Cluster<T, ClusterSizeX, ClusterSizeY, CoordType>;
py::class_<ClusterFinderMT<ClusterType, uint16_t, pd_type>>(
m, class_name.c_str())
.def(py::init<Shape<2>, pd_type, size_t, size_t>(),
py::arg("image_size"), py::arg("n_sigma") = 5.0,
py::arg("capacity") = 2048, py::arg("n_threads") = 3)
.def("push_pedestal_frame",
[](ClusterFinderMT<ClusterType, uint16_t, pd_type> &self,
py::array_t<uint16_t> frame) {
auto view = make_view_2d(frame);
self.push_pedestal_frame(view);
})
.def(
"find_clusters",
[](ClusterFinderMT<ClusterType, uint16_t, pd_type> &self,
py::array_t<uint16_t> frame, uint64_t frame_number) {
auto view = make_view_2d(frame);
self.find_clusters(view, frame_number);
return;
},
py::arg(), py::arg("frame_number") = 0)
.def_property_readonly("cluster_size", [](ClusterFinderMT<ClusterType, uint16_t, pd_type> &self){
return py::make_tuple(ClusterSizeX, ClusterSizeY);
})
.def("clear_pedestal",
&ClusterFinderMT<ClusterType, uint16_t, pd_type>::clear_pedestal)
.def("sync", &ClusterFinderMT<ClusterType, uint16_t, pd_type>::sync)
.def("stop", &ClusterFinderMT<ClusterType, uint16_t, pd_type>::stop)
.def("start", &ClusterFinderMT<ClusterType, uint16_t, pd_type>::start)
.def(
"pedestal",
[](ClusterFinderMT<ClusterType, uint16_t, pd_type> &self,
size_t thread_index) {
auto pd = new NDArray<pd_type, 2>{};
*pd = self.pedestal(thread_index);
return return_image_data(pd);
},
py::arg("thread_index") = 0)
.def(
"noise",
[](ClusterFinderMT<ClusterType, uint16_t, pd_type> &self,
size_t thread_index) {
auto arr = new NDArray<pd_type, 2>{};
*arr = self.noise(thread_index);
return return_image_data(arr);
},
py::arg("thread_index") = 0);
}
template <typename T, uint8_t ClusterSizeX, uint8_t ClusterSizeY,
typename CoordType = uint16_t>
void define_cluster_collector_bindings(py::module &m,
const std::string &typestr) {
auto class_name = fmt::format("ClusterCollector_{}", typestr);
using ClusterType = Cluster<T, ClusterSizeX, ClusterSizeY, CoordType>;
py::class_<ClusterCollector<ClusterType>>(m, class_name.c_str())
.def(py::init<ClusterFinderMT<ClusterType, uint16_t, double> *>())
.def("stop", &ClusterCollector<ClusterType>::stop)
.def(
"steal_clusters",
[](ClusterCollector<ClusterType> &self) {
auto v = new std::vector<ClusterVector<ClusterType>>(
self.steal_clusters());
return v; // TODO change!!!
},
py::return_value_policy::take_ownership);
}
template <typename T, uint8_t ClusterSizeX, uint8_t ClusterSizeY,
typename CoordType = uint16_t>
void define_cluster_file_sink_bindings(py::module &m,
const std::string &typestr) {
auto class_name = fmt::format("ClusterFileSink_{}", typestr);
using ClusterType = Cluster<T, ClusterSizeX, ClusterSizeY, CoordType>;
py::class_<ClusterFileSink<ClusterType>>(m, class_name.c_str())
.def(py::init<ClusterFinderMT<ClusterType, uint16_t, double> *,
const std::filesystem::path &>())
.def("stop", &ClusterFileSink<ClusterType>::stop);
}
template <typename T, uint8_t ClusterSizeX, uint8_t ClusterSizeY,
typename CoordType = uint16_t>
void define_cluster_finder_bindings(py::module &m, const std::string &typestr) {
auto class_name = fmt::format("ClusterFinder_{}", typestr);
using ClusterType = Cluster<T, ClusterSizeX, ClusterSizeY, CoordType>;
py::class_<ClusterFinder<ClusterType, uint16_t, pd_type>>(
m, class_name.c_str())
.def(py::init<Shape<2>, pd_type, size_t>(), py::arg("image_size"),
py::arg("n_sigma") = 5.0, py::arg("capacity") = 1'000'000)
.def("push_pedestal_frame",
[](ClusterFinder<ClusterType, uint16_t, pd_type> &self,
py::array_t<uint16_t> frame) {
auto view = make_view_2d(frame);
self.push_pedestal_frame(view);
})
.def("clear_pedestal",
&ClusterFinder<ClusterType, uint16_t, pd_type>::clear_pedestal)
.def_property_readonly(
"pedestal",
[](ClusterFinder<ClusterType, uint16_t, pd_type> &self) {
auto pd = new NDArray<pd_type, 2>{};
*pd = self.pedestal();
return return_image_data(pd);
})
.def_property_readonly(
"noise",
[](ClusterFinder<ClusterType, uint16_t, pd_type> &self) {
auto arr = new NDArray<pd_type, 2>{};
*arr = self.noise();
return return_image_data(arr);
})
.def(
"steal_clusters",
[](ClusterFinder<ClusterType, uint16_t, pd_type> &self,
bool realloc_same_capacity) {
ClusterVector<ClusterType> clusters =
self.steal_clusters(realloc_same_capacity);
return clusters;
},
py::arg("realloc_same_capacity") = false)
.def(
"find_clusters",
[](ClusterFinder<ClusterType, uint16_t, pd_type> &self,
py::array_t<uint16_t> frame, uint64_t frame_number) {
auto view = make_view_2d(frame);
self.find_clusters(view, frame_number);
return;
},
py::arg(), py::arg("frame_number") = 0);
}
#pragma GCC diagnostic pop


@ -6,8 +6,8 @@
#include "aare/RawMasterFile.hpp"
#include "aare/RawSubFile.hpp"
#include "aare/defs.hpp"
#include "aare/decode.hpp"
#include "aare/defs.hpp"
// #include "aare/fClusterFileV2.hpp"
#include "np_helper.hpp"
@ -26,68 +26,76 @@ using namespace ::aare;
void define_ctb_raw_file_io_bindings(py::module &m) {
m.def("adc_sar_05_decode64to16", [](py::array_t<uint8_t> input) {
if(input.ndim() != 2){
throw std::runtime_error("Only 2D arrays are supported at this moment");
m.def("adc_sar_05_decode64to16", [](py::array_t<uint8_t> input) {
if (input.ndim() != 2) {
throw std::runtime_error(
"Only 2D arrays are supported at this moment");
}
//Create a 2D output array with the same shape as the input
std::vector<ssize_t> shape{input.shape(0), input.shape(1)/static_cast<ssize_t>(bits_per_byte)};
// Create a 2D output array with the same shape as the input
std::vector<ssize_t> shape{input.shape(0),
input.shape(1) /
static_cast<ssize_t>(bits_per_byte)};
py::array_t<uint16_t> output(shape);
//Create a view of the input and output arrays
NDView<uint64_t, 2> input_view(reinterpret_cast<uint64_t*>(input.mutable_data()), {output.shape(0), output.shape(1)});
NDView<uint16_t, 2> output_view(output.mutable_data(), {output.shape(0), output.shape(1)});
// Create a view of the input and output arrays
NDView<uint64_t, 2> input_view(
reinterpret_cast<uint64_t *>(input.mutable_data()),
{output.shape(0), output.shape(1)});
NDView<uint16_t, 2> output_view(output.mutable_data(),
{output.shape(0), output.shape(1)});
adc_sar_05_decode64to16(input_view, output_view);
return output;
});
m.def("adc_sar_04_decode64to16", [](py::array_t<uint8_t> input) {
if(input.ndim() != 2){
throw std::runtime_error("Only 2D arrays are supported at this moment");
m.def("adc_sar_04_decode64to16", [](py::array_t<uint8_t> input) {
if (input.ndim() != 2) {
throw std::runtime_error(
"Only 2D arrays are supported at this moment");
}
//Create a 2D output array with the same shape as the input
std::vector<ssize_t> shape{input.shape(0), input.shape(1)/static_cast<ssize_t>(bits_per_byte)};
// Create a 2D output array with the same shape as the input
std::vector<ssize_t> shape{input.shape(0),
input.shape(1) /
static_cast<ssize_t>(bits_per_byte)};
py::array_t<uint16_t> output(shape);
//Create a view of the input and output arrays
NDView<uint64_t, 2> input_view(reinterpret_cast<uint64_t*>(input.mutable_data()), {output.shape(0), output.shape(1)});
NDView<uint16_t, 2> output_view(output.mutable_data(), {output.shape(0), output.shape(1)});
// Create a view of the input and output arrays
NDView<uint64_t, 2> input_view(
reinterpret_cast<uint64_t *>(input.mutable_data()),
{output.shape(0), output.shape(1)});
NDView<uint16_t, 2> output_view(output.mutable_data(),
{output.shape(0), output.shape(1)});
adc_sar_04_decode64to16(input_view, output_view);
return output;
});
m.def(
"apply_custom_weights",
[](py::array_t<uint16_t, py::array::c_style | py::array::forcecast> &input,
m.def("apply_custom_weights",
[](py::array_t<uint16_t, py::array::c_style | py::array::forcecast>
&input,
py::array_t<double, py::array::c_style | py::array::forcecast>
&weights) {
// Create new array with same shape as the input array (uninitialized values)
// Create new array with same shape as the input array
// (uninitialized values)
py::buffer_info buf = input.request();
py::array_t<double> output(buf.shape);
// Use NDViews to call into the C++ library
auto weights_view = make_view_1d(weights);
NDView<uint16_t, 1> input_view(input.mutable_data(), {input.size()});
NDView<double, 1> output_view(output.mutable_data(), {output.size()});
NDView<uint16_t, 1> input_view(input.mutable_data(),
{input.size()});
NDView<double, 1> output_view(output.mutable_data(),
{output.size()});
apply_custom_weights(input_view, output_view, weights_view);
return output;
});
py::class_<CtbRawFile>(m, "CtbRawFile")
.def(py::init<const std::filesystem::path &>())
.def("read_frame",
[](CtbRawFile &self) {
@ -103,7 +111,8 @@ py::class_<CtbRawFile>(m, "CtbRawFile")
// always read bytes
image = py::array_t<uint8_t>(shape);
self.read_into(reinterpret_cast<std::byte *>(image.mutable_data()),
self.read_into(
reinterpret_cast<std::byte *>(image.mutable_data()),
header.mutable_data());
return py::make_tuple(header, image);
@ -116,5 +125,4 @@ py::class_<CtbRawFile>(m, "CtbRawFile")
&CtbRawFile::image_size_in_bytes)
.def_property_readonly("frames_in_file", &CtbRawFile::frames_in_file);
}


@ -5,6 +5,11 @@
#include "aare/RawMasterFile.hpp"
#include "aare/RawSubFile.hpp"
#ifdef HDF5_FOUND
#include "aare/Hdf5File.hpp"
#include "aare/Hdf5MasterFile.hpp"
#endif
#include "aare/defs.hpp"
// #include "aare/fClusterFileV2.hpp"
@ -20,17 +25,13 @@
namespace py = pybind11;
using namespace ::aare;
//Disable warnings for unused parameters, as we ignore some
//in the __exit__ method
// Disable warnings for unused parameters, as we ignore some
// in the __exit__ method
#pragma GCC diagnostic push
#pragma GCC diagnostic ignored "-Wunused-parameter"
void define_file_io_bindings(py::module &m) {
py::enum_<DetectorType>(m, "DetectorType")
.value("Jungfrau", DetectorType::Jungfrau)
.value("Eiger", DetectorType::Eiger)
@ -41,13 +42,10 @@ void define_file_io_bindings(py::module &m) {
.value("ChipTestBoard", DetectorType::ChipTestBoard)
.value("Unknown", DetectorType::Unknown);
PYBIND11_NUMPY_DTYPE(DetectorHeader, frameNumber, expLength, packetNumber,
bunchId, timestamp, modId, row, column, reserved,
debug, roundRNumber, detType, version, packetMask);
py::class_<File>(m, "File")
.def(py::init([](const std::filesystem::path &fname) {
return File(fname, "r", {});
@ -112,10 +110,12 @@ void define_file_io_bindings(py::module &m) {
reinterpret_cast<std::byte *>(image.mutable_data()));
return image;
})
.def("read_n", [](File &self, size_t n_frames) {
//adjust for actual frames left in the file
n_frames = std::min(n_frames, self.total_frames()-self.tell());
if(n_frames == 0){
.def("read_n",
[](File &self, size_t n_frames) {
// adjust for actual frames left in the file
n_frames =
std::min(n_frames, self.total_frames() - self.tell());
if (n_frames == 0) {
throw std::runtime_error("No frames left in file");
}
std::vector<size_t> shape{n_frames, self.rows(), self.cols()};
@ -129,22 +129,21 @@ void define_file_io_bindings(py::module &m) {
} else if (item_size == 4) {
image = py::array_t<uint32_t>(shape);
}
self.read_into(reinterpret_cast<std::byte *>(image.mutable_data()),
self.read_into(
reinterpret_cast<std::byte *>(image.mutable_data()),
n_frames);
return image;
})
.def("__enter__", [](File &self) { return &self; })
.def("__exit__",
[](File &self,
const std::optional<pybind11::type> &exc_type,
[](File &self, const std::optional<pybind11::type> &exc_type,
const std::optional<pybind11::object> &exc_value,
const std::optional<pybind11::object> &traceback) {
// self.close();
})
.def("__iter__", [](File &self) { return &self; })
.def("__next__", [](File &self) {
try{
try {
const uint8_t item_size = self.bytes_per_pixel();
py::array image;
std::vector<ssize_t> shape;
@ -161,12 +160,11 @@ void define_file_io_bindings(py::module &m) {
self.read_into(
reinterpret_cast<std::byte *>(image.mutable_data()));
return image;
}catch(std::runtime_error &e){
} catch (std::runtime_error &e) {
throw py::stop_iteration();
}
});
py::class_<FileConfig>(m, "FileConfig")
.def(py::init<>())
.def_readwrite("rows", &FileConfig::rows)
@ -183,8 +181,6 @@ void define_file_io_bindings(py::module &m) {
return "<FileConfig: " + a.to_string() + ">";
});
py::class_<ScanParameters>(m, "ScanParameters")
.def(py::init<const std::string &>())
.def(py::init<const ScanParameters &>())
@ -195,7 +191,6 @@ void define_file_io_bindings(py::module &m) {
.def_property_readonly("stop", &ScanParameters::stop)
.def_property_readonly("step", &ScanParameters::step);
py::class_<ROI>(m, "ROI")
.def(py::init<>())
.def(py::init<ssize_t, ssize_t, ssize_t, ssize_t>(), py::arg("xmin"),
@ -204,23 +199,21 @@ void define_file_io_bindings(py::module &m) {
.def_readwrite("xmax", &ROI::xmax)
.def_readwrite("ymin", &ROI::ymin)
.def_readwrite("ymax", &ROI::ymax)
.def("__str__", [](const ROI& self){
return fmt::format("ROI: xmin: {} xmax: {} ymin: {} ymax: {}", self.xmin, self.xmax, self.ymin, self.ymax);
.def("__str__",
[](const ROI &self) {
return fmt::format("ROI: xmin: {} xmax: {} ymin: {} ymax: {}",
self.xmin, self.xmax, self.ymin, self.ymax);
})
.def("__repr__", [](const ROI& self){
return fmt::format("<ROI: xmin: {} xmax: {} ymin: {} ymax: {}>", self.xmin, self.xmax, self.ymin, self.ymax);
.def("__repr__",
[](const ROI &self) {
return fmt::format(
"<ROI: xmin: {} xmax: {} ymin: {} ymax: {}>", self.xmin,
self.xmax, self.ymin, self.ymax);
})
.def("__iter__", [](const ROI &self) {
return py::make_iterator(&self.xmin, &self.ymax+1); //NOLINT
return py::make_iterator(&self.xmin, &self.ymax + 1); // NOLINT
});
#pragma GCC diagnostic pop
// py::class_<ClusterHeader>(m, "ClusterHeader")
// .def(py::init<>())


@ -9,7 +9,6 @@
namespace py = pybind11;
using namespace pybind11::literals;
void define_fit_bindings(py::module &m) {
// TODO! Evaluate without converting to double
@ -61,7 +60,8 @@ void define_fit_bindings(py::module &m) {
py::array_t<double, py::array::c_style | py::array::forcecast> par) {
auto x_view = make_view_1d(x);
auto par_view = make_view_1d(par);
auto y = new NDArray<double, 1>{aare::func::scurve(x_view, par_view)};
auto y =
new NDArray<double, 1>{aare::func::scurve(x_view, par_view)};
return return_image_data(y);
},
R"(
@ -82,7 +82,8 @@ void define_fit_bindings(py::module &m) {
py::array_t<double, py::array::c_style | py::array::forcecast> par) {
auto x_view = make_view_1d(x);
auto par_view = make_view_1d(par);
auto y = new NDArray<double, 1>{aare::func::scurve2(x_view, par_view)};
auto y =
new NDArray<double, 1>{aare::func::scurve2(x_view, par_view)};
return return_image_data(y);
},
R"(
@ -139,7 +140,6 @@ n_threads : int, optional
py::array_t<double, py::array::c_style | py::array::forcecast> y,
py::array_t<double, py::array::c_style | py::array::forcecast> y_err,
int n_threads) {
if (y.ndim() == 3) {
// Allocate memory for the output
// Need to have pointers to allow python to manage
@ -173,7 +173,6 @@ n_threads : int, optional
auto y_view_err = make_view_1d(y_err);
auto x_view = make_view_1d(x);
double chi2 = 0;
aare::fit_gaus(x_view, y_view, y_view_err, par->view(),
par_err->view(), chi2);
@ -252,7 +251,6 @@ n_threads : int, optional
"chi2"_a = return_image_data(chi2),
"Ndf"_a = y.shape(2) - 2);
} else if (y.ndim() == 1) {
auto par = new NDArray<double, 1>({2});
auto par_err = new NDArray<double, 1>({2});
@ -289,7 +287,7 @@ n_threads : int, optional
)",
py::arg("x"), py::arg("y"), py::arg("y_err"), py::arg("n_threads") = 4);
//=========
//=========
m.def(
"fit_scurve",
[](py::array_t<double, py::array::c_style | py::array::forcecast> x,
@ -339,7 +337,6 @@ n_threads : int, optional
"chi2"_a = return_image_data(chi2),
"Ndf"_a = y.shape(2) - 2);
} else if (y.ndim() == 1) {
auto par = new NDArray<double, 1>({2});
auto par_err = new NDArray<double, 1>({2});
@ -376,7 +373,6 @@ n_threads : int, optional
)",
py::arg("x"), py::arg("y"), py::arg("y_err"), py::arg("n_threads") = 4);
m.def(
"fit_scurve2",
[](py::array_t<double, py::array::c_style | py::array::forcecast> x,
@ -426,7 +422,6 @@ n_threads : int, optional
"chi2"_a = return_image_data(chi2),
"Ndf"_a = y.shape(2) - 2);
} else if (y.ndim() == 1) {
auto par = new NDArray<double, 1>({6});
auto par_err = new NDArray<double, 1>({6});

python/src/hdf5_file.hpp

@ -0,0 +1,106 @@
#include "H5Cpp.h"
#include "aare/File.hpp"
#include "aare/Frame.hpp"
#include "aare/Hdf5File.hpp"
#include "aare/Hdf5MasterFile.hpp"
#include "aare/defs.hpp"
// #include "aare/fClusterFileV2.hpp"
#include <cstdint>
#include <filesystem>
#include <pybind11/iostream.h>
#include <pybind11/numpy.h>
#include <pybind11/pybind11.h>
#include <pybind11/stl.h>
#include <pybind11/stl/filesystem.h>
#include <string>
namespace py = pybind11;
using namespace ::aare;
void define_hdf5_file_io_bindings(py::module &m) {
py::class_<Hdf5File>(m, "Hdf5File")
.def(py::init<const std::filesystem::path &>())
.def("read_frame",
[](Hdf5File &self) {
py::array image;
std::vector<ssize_t> shape;
shape.reserve(2);
shape.push_back(self.rows());
shape.push_back(self.cols());
// return headers from all subfiles
py::array_t<DetectorHeader> header(self.n_mod());
const uint8_t item_size = self.bytes_per_pixel();
if (item_size == 1) {
image = py::array_t<uint8_t>(shape);
} else if (item_size == 2) {
image = py::array_t<uint16_t>(shape);
} else if (item_size == 4) {
image = py::array_t<uint32_t>(shape);
}
self.read_into(
reinterpret_cast<std::byte *>(image.mutable_data()),
header.mutable_data());
return py::make_tuple(header, image);
})
.def(
"read_n",
[](Hdf5File &self, size_t n_frames) {
// adjust for actual frames left in the file
n_frames =
std::min(n_frames, self.total_frames() - self.tell());
if (n_frames == 0) {
throw std::runtime_error("No frames left in file");
}
std::vector<size_t> shape{n_frames, self.rows(), self.cols()};
// return headers from all subfiles
py::array_t<DetectorHeader> header;
if (self.n_mod() == 1) {
header = py::array_t<DetectorHeader>(n_frames);
} else {
header =
py::array_t<DetectorHeader>({self.n_mod(), n_frames});
}
// py::array_t<DetectorHeader> header({self.n_mod(), n_frames});
py::array image;
const uint8_t item_size = self.bytes_per_pixel();
if (item_size == 1) {
image = py::array_t<uint8_t>(shape);
} else if (item_size == 2) {
image = py::array_t<uint16_t>(shape);
} else if (item_size == 4) {
image = py::array_t<uint32_t>(shape);
}
self.read_into(
reinterpret_cast<std::byte *>(image.mutable_data()),
n_frames, header.mutable_data());
return py::make_tuple(header, image);
},
R"(
Read n frames from the file.
)")
.def("frame_number", &Hdf5File::frame_number)
.def_property_readonly("bytes_per_frame", &Hdf5File::bytes_per_frame)
.def_property_readonly("pixels_per_frame", &Hdf5File::pixels_per_frame)
.def_property_readonly("bytes_per_pixel", &Hdf5File::bytes_per_pixel)
.def("seek", &Hdf5File::seek, R"(
Seek to a frame index in the file.
)")
.def("tell", &Hdf5File::tell, R"(
Return the current frame index.)")
.def_property_readonly("total_frames", &Hdf5File::total_frames)
.def_property_readonly("rows", &Hdf5File::rows)
.def_property_readonly("cols", &Hdf5File::cols)
.def_property_readonly("bitdepth", &Hdf5File::bitdepth)
.def_property_readonly("geometry", &Hdf5File::geometry)
.def_property_readonly("n_mod", &Hdf5File::n_mod)
.def_property_readonly("detector_type", &Hdf5File::detector_type)
.def_property_readonly("master", &Hdf5File::master);
}
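A minimal usage sketch for the Hdf5File bindings above, assuming the extension was built with HDF5 support and is importable as aare._aare; the file name is a placeholder.

from aare import _aare  # assumption: import path

f = _aare.Hdf5File("run_master_0.h5")   # placeholder file name
print(f.rows, f.cols, f.bitdepth, f.n_mod, f.total_frames)

header, image = f.read_frame()          # per-module headers plus one assembled frame
print(image.dtype, image.shape)         # dtype follows bytes_per_pixel (1, 2 or 4 bytes)

f.seek(0)
headers, stack = f.read_n(min(10, f.total_frames))
print(stack.shape)                      # (n_frames, rows, cols)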


@ -0,0 +1,86 @@
#include "aare/File.hpp"
#include "aare/Frame.hpp"
#include "aare/Hdf5File.hpp"
#include "aare/Hdf5MasterFile.hpp"
#include "aare/defs.hpp"
// #include "aare/fClusterFileV2.hpp"
#include <cstdint>
#include <filesystem>
#include <pybind11/iostream.h>
#include <pybind11/numpy.h>
#include <pybind11/pybind11.h>
#include <pybind11/stl.h>
#include <pybind11/stl/filesystem.h>
#include <string>
namespace py = pybind11;
using namespace ::aare;
void define_hdf5_master_file_bindings(py::module &m) {
py::class_<Hdf5MasterFile>(m, "Hdf5MasterFile")
.def(py::init<const std::filesystem::path &>())
.def("data_fname", &Hdf5MasterFile::data_fname, R"(
Parameters
------------
module_index : int
module index (d0, d1 .. dN)
file_index : int
file index (f0, f1 .. fN)
Returns
----------
os.PathLike
The name of the data file.
)")
.def_property_readonly("version", &Hdf5MasterFile::version)
.def_property_readonly("detector_type", &Hdf5MasterFile::detector_type)
.def_property_readonly("timing_mode", &Hdf5MasterFile::timing_mode)
.def_property_readonly("image_size_in_bytes",
&Hdf5MasterFile::image_size_in_bytes)
.def_property_readonly("frames_in_file",
&Hdf5MasterFile::frames_in_file)
.def_property_readonly("pixels_y", &Hdf5MasterFile::pixels_y)
.def_property_readonly("pixels_x", &Hdf5MasterFile::pixels_x)
.def_property_readonly("max_frames_per_file",
&Hdf5MasterFile::max_frames_per_file)
.def_property_readonly("bitdepth", &Hdf5MasterFile::bitdepth)
.def_property_readonly("frame_padding", &Hdf5MasterFile::frame_padding)
.def_property_readonly("frame_discard_policy",
&Hdf5MasterFile::frame_discard_policy)
.def_property_readonly("total_frames_expected",
&Hdf5MasterFile::total_frames_expected)
.def_property_readonly("geometry", &Hdf5MasterFile::geometry)
.def_property_readonly("analog_samples",
&Hdf5MasterFile::analog_samples, R"(
Number of analog samples
Returns
----------
int | None
The number of analog samples in the file (or None if not enabled)
)")
.def_property_readonly("digital_samples",
&Hdf5MasterFile::digital_samples, R"(
Number of digital samples
Returns
----------
int | None
The number of digital samples in the file (or None if not enabled)
)")
.def_property_readonly("transceiver_samples",
&Hdf5MasterFile::transceiver_samples)
.def_property_readonly("number_of_rows",
&Hdf5MasterFile::number_of_rows)
.def_property_readonly("quad", &Hdf5MasterFile::quad);
//.def_property_readonly("scan_parameters",
// &Hdf5MasterFile::scan_parameters)
//.def_property_readonly("roi", &Hdf5MasterFile::roi);
}
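A matching sketch for the master-file bindings; the attribute names come straight from the .def_property_readonly calls above, while the import path and the file name are placeholders.

from aare import _aare  # assumption: import path

m = _aare.Hdf5MasterFile("run_master_0.h5")   # placeholder file name
print(m.version, m.detector_type, m.timing_mode)
print(m.frames_in_file, m.pixels_x, m.pixels_y, m.bitdepth)
print(m.analog_samples)                       # None when the analog branch is not enabled
print(m.data_fname(0, 0))                     # data file for module d0, file index f0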


@ -21,10 +21,7 @@ using namespace ::aare;
auto read_dat_frame(JungfrauDataFile &self) {
py::array_t<JungfrauDataHeader> header(1);
py::array_t<uint16_t> image({
self.rows(),
self.cols()
});
py::array_t<uint16_t> image({self.rows(), self.cols()});
self.read_into(reinterpret_cast<std::byte *>(image.mutable_data()),
header.mutable_data());
@ -40,9 +37,7 @@ auto read_n_dat_frames(JungfrauDataFile &self, size_t n_frames) {
}
py::array_t<JungfrauDataHeader> header(n_frames);
py::array_t<uint16_t> image({
n_frames, self.rows(),
self.cols()});
py::array_t<uint16_t> image({n_frames, self.rows(), self.cols()});
self.read_into(reinterpret_cast<std::byte *>(image.mutable_data()),
n_frames, header.mutable_data());


@ -1,22 +1,30 @@
// Files with bindings to the different classes
//New style file naming
// New style file naming
#include "bind_Cluster.hpp"
#include "bind_ClusterCollector.hpp"
#include "bind_ClusterFile.hpp"
#include "bind_ClusterFileSink.hpp"
#include "bind_ClusterFinder.hpp"
#include "bind_ClusterFinderMT.hpp"
#include "bind_ClusterVector.hpp"
//TODO! migrate the other names
#include "cluster.hpp"
#include "cluster_file.hpp"
// TODO! migrate the other names
#include "ctb_raw_file.hpp"
#include "file.hpp"
#include "fit.hpp"
#include "interpolation.hpp"
#include "raw_sub_file.hpp"
#include "raw_master_file.hpp"
#include "raw_file.hpp"
#include "pixel_map.hpp"
#include "var_cluster.hpp"
#include "pedestal.hpp"
#ifdef HDF5_FOUND
#include "hdf5_file.hpp"
#include "hdf5_master_file.hpp"
#endif
#include "jungfrau_data_file.hpp"
#include "pedestal.hpp"
#include "pixel_map.hpp"
#include "raw_file.hpp"
#include "raw_master_file.hpp"
#include "raw_sub_file.hpp"
#include "var_cluster.hpp"
// Pybind stuff
#include <pybind11/pybind11.h>
@ -24,12 +32,36 @@
namespace py = pybind11;
/* MACRO that defines Cluster bindings for a specific size and type
T - Storage type of the cluster data (int, float, double)
N - Number of rows in the cluster
M - Number of columns in the cluster
U - Type of the pixel data (e.g., uint16_t)
TYPE_CODE - A character representing the type code (e.g., 'i' for int, 'd' for
double, 'f' for float)
*/
#define DEFINE_CLUSTER_BINDINGS(T, N, M, U, TYPE_CODE) \
define_ClusterFile<T, N, M, U>(m, "Cluster" #N "x" #M #TYPE_CODE); \
define_ClusterVector<T, N, M, U>(m, "Cluster" #N "x" #M #TYPE_CODE); \
define_ClusterFinder<T, N, M, U>(m, "Cluster" #N "x" #M #TYPE_CODE); \
define_ClusterFinderMT<T, N, M, U>(m, "Cluster" #N "x" #M #TYPE_CODE); \
define_ClusterFileSink<T, N, M, U>(m, "Cluster" #N "x" #M #TYPE_CODE); \
define_ClusterCollector<T, N, M, U>(m, "Cluster" #N "x" #M #TYPE_CODE); \
define_Cluster<T, N, M, U>(m, #N "x" #M #TYPE_CODE); \
register_calculate_eta<T, N, M, U>(m);
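The TYPE_CODE argument only feeds the string that names the Python-side types; define_Cluster receives the shorter "NxM + code" form without the "Cluster" prefix. A plain-Python sketch of the suffixes generated by the macro invocations further down (3x3, 2x2, 5x5, 7x7 and 9x9, each in int/double/float flavour):

sizes = [(3, 3), (2, 2), (5, 5), (7, 7), (9, 9)]   # the N, M pairs instantiated below
type_codes = ["i", "d", "f"]                        # TYPE_CODE for int, double, float
names = [f"Cluster{n}x{m}{c}" for (n, m) in sizes for c in type_codes]
print(len(names), names[:3])                        # 15 combinations, starting with 'Cluster3x3i'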
PYBIND11_MODULE(_aare, m) {
define_file_io_bindings(m);
define_raw_file_io_bindings(m);
define_raw_sub_file_io_bindings(m);
define_ctb_raw_file_io_bindings(m);
define_raw_master_file_bindings(m);
#ifdef HDF5_FOUND
define_hdf5_file_io_bindings(m);
define_hdf5_master_file_bindings(m);
#endif
define_var_cluster_finder_bindings(m);
define_pixel_map_bindings(m);
define_pedestal_bindings<double>(m, "Pedestal_d");
@ -38,59 +70,23 @@ PYBIND11_MODULE(_aare, m) {
define_interpolation_bindings(m);
define_jungfrau_data_file_io_bindings(m);
define_cluster_file_io_bindings<int, 3, 3, uint16_t>(m, "Cluster3x3i");
define_cluster_file_io_bindings<double, 3, 3, uint16_t>(m, "Cluster3x3d");
define_cluster_file_io_bindings<float, 3, 3, uint16_t>(m, "Cluster3x3f");
define_cluster_file_io_bindings<int, 2, 2, uint16_t>(m, "Cluster2x2i");
define_cluster_file_io_bindings<float, 2, 2, uint16_t>(m, "Cluster2x2f");
define_cluster_file_io_bindings<double, 2, 2, uint16_t>(m, "Cluster2x2d");
DEFINE_CLUSTER_BINDINGS(int, 3, 3, uint16_t, i);
DEFINE_CLUSTER_BINDINGS(double, 3, 3, uint16_t, d);
DEFINE_CLUSTER_BINDINGS(float, 3, 3, uint16_t, f);
define_ClusterVector<int, 3, 3, uint16_t>(m, "Cluster3x3i");
define_ClusterVector<double, 3, 3, uint16_t>(m, "Cluster3x3d");
define_ClusterVector<float, 3, 3, uint16_t>(m, "Cluster3x3f");
define_ClusterVector<int, 2, 2, uint16_t>(m, "Cluster2x2i");
define_ClusterVector<double, 2, 2, uint16_t>(m, "Cluster2x2d");
define_ClusterVector<float, 2, 2, uint16_t>(m, "Cluster2x2f");
DEFINE_CLUSTER_BINDINGS(int, 2, 2, uint16_t, i);
DEFINE_CLUSTER_BINDINGS(double, 2, 2, uint16_t, d);
DEFINE_CLUSTER_BINDINGS(float, 2, 2, uint16_t, f);
define_cluster_finder_bindings<int, 3, 3, uint16_t>(m, "Cluster3x3i");
define_cluster_finder_bindings<double, 3, 3, uint16_t>(m, "Cluster3x3d");
define_cluster_finder_bindings<float, 3, 3, uint16_t>(m, "Cluster3x3f");
define_cluster_finder_bindings<int, 2, 2, uint16_t>(m, "Cluster2x2i");
define_cluster_finder_bindings<double, 2, 2, uint16_t>(m, "Cluster2x2d");
define_cluster_finder_bindings<float, 2, 2, uint16_t>(m, "Cluster2x2f");
DEFINE_CLUSTER_BINDINGS(int, 5, 5, uint16_t, i);
DEFINE_CLUSTER_BINDINGS(double, 5, 5, uint16_t, d);
DEFINE_CLUSTER_BINDINGS(float, 5, 5, uint16_t, f);
define_cluster_finder_mt_bindings<int, 3, 3, uint16_t>(m, "Cluster3x3i");
define_cluster_finder_mt_bindings<double, 3, 3, uint16_t>(m, "Cluster3x3d");
define_cluster_finder_mt_bindings<float, 3, 3, uint16_t>(m, "Cluster3x3f");
define_cluster_finder_mt_bindings<int, 2, 2, uint16_t>(m, "Cluster2x2i");
define_cluster_finder_mt_bindings<double, 2, 2, uint16_t>(m, "Cluster2x2d");
define_cluster_finder_mt_bindings<float, 2, 2, uint16_t>(m, "Cluster2x2f");
DEFINE_CLUSTER_BINDINGS(int, 7, 7, uint16_t, i);
DEFINE_CLUSTER_BINDINGS(double, 7, 7, uint16_t, d);
DEFINE_CLUSTER_BINDINGS(float, 7, 7, uint16_t, f);
define_cluster_file_sink_bindings<int, 3, 3, uint16_t>(m, "Cluster3x3i");
define_cluster_file_sink_bindings<double, 3, 3, uint16_t>(m, "Cluster3x3d");
define_cluster_file_sink_bindings<float, 3, 3, uint16_t>(m, "Cluster3x3f");
define_cluster_file_sink_bindings<int, 2, 2, uint16_t>(m, "Cluster2x2i");
define_cluster_file_sink_bindings<double, 2, 2, uint16_t>(m, "Cluster2x2d");
define_cluster_file_sink_bindings<float, 2, 2, uint16_t>(m, "Cluster2x2f");
define_cluster_collector_bindings<int, 3, 3, uint16_t>(m, "Cluster3x3i");
define_cluster_collector_bindings<double, 3, 3, uint16_t>(m, "Cluster3x3f");
define_cluster_collector_bindings<float, 3, 3, uint16_t>(m, "Cluster3x3d");
define_cluster_collector_bindings<int, 2, 2, uint16_t>(m, "Cluster2x2i");
define_cluster_collector_bindings<double, 2, 2, uint16_t>(m, "Cluster2x2f");
define_cluster_collector_bindings<float, 2, 2, uint16_t>(m, "Cluster2x2d");
define_cluster<int, 3, 3, uint16_t>(m, "3x3i");
define_cluster<float, 3, 3, uint16_t>(m, "3x3f");
define_cluster<double, 3, 3, uint16_t>(m, "3x3d");
define_cluster<int, 2, 2, uint16_t>(m, "2x2i");
define_cluster<float, 2, 2, uint16_t>(m, "2x2f");
define_cluster<double, 2, 2, uint16_t>(m, "2x2d");
register_calculate_eta<int, 3, 3, uint16_t>(m);
register_calculate_eta<float, 3, 3, uint16_t>(m);
register_calculate_eta<double, 3, 3, uint16_t>(m);
register_calculate_eta<int, 2, 2, uint16_t>(m);
register_calculate_eta<float, 2, 2, uint16_t>(m);
register_calculate_eta<double, 2, 2, uint16_t>(m);
DEFINE_CLUSTER_BINDINGS(int, 9, 9, uint16_t, i);
DEFINE_CLUSTER_BINDINGS(double, 9, 9, uint16_t, d);
DEFINE_CLUSTER_BINDINGS(float, 9, 9, uint16_t, f);
}


@ -9,7 +9,8 @@
namespace py = pybind11;
template <typename SUM_TYPE> void define_pedestal_bindings(py::module &m, const std::string &name) {
template <typename SUM_TYPE>
void define_pedestal_bindings(py::module &m, const std::string &name) {
py::class_<Pedestal<SUM_TYPE>>(m, name.c_str())
.def(py::init<int, int, int>())
.def(py::init<int, int>())
@ -19,12 +20,14 @@ template <typename SUM_TYPE> void define_pedestal_bindings(py::module &m, const
*mea = self.mean();
return return_image_data(mea);
})
.def("variance", [](Pedestal<SUM_TYPE> &self) {
.def("variance",
[](Pedestal<SUM_TYPE> &self) {
auto var = new NDArray<SUM_TYPE, 2>{};
*var = self.variance();
return return_image_data(var);
})
.def("std", [](Pedestal<SUM_TYPE> &self) {
.def("std",
[](Pedestal<SUM_TYPE> &self) {
auto std = new NDArray<SUM_TYPE, 2>{};
*std = self.std();
return return_image_data(std);
@ -39,14 +42,19 @@ template <typename SUM_TYPE> void define_pedestal_bindings(py::module &m, const
[&](Pedestal<SUM_TYPE> &pedestal) {
return Pedestal<SUM_TYPE>(pedestal);
})
//TODO! add push for other data types
.def("push", [](Pedestal<SUM_TYPE> &pedestal, py::array_t<uint16_t> &f) {
// TODO! add push for other data types
.def("push",
[](Pedestal<SUM_TYPE> &pedestal, py::array_t<uint16_t> &f) {
auto v = make_view_2d(f);
pedestal.push(v);
})
.def("push_no_update", [](Pedestal<SUM_TYPE> &pedestal, py::array_t<uint16_t, py::array::c_style> &f) {
.def(
"push_no_update",
[](Pedestal<SUM_TYPE> &pedestal,
py::array_t<uint16_t, py::array::c_style> &f) {
auto v = make_view_2d(f);
pedestal.push_no_update(v);
}, py::arg().noconvert())
},
py::arg().noconvert())
.def("update_mean", &Pedestal<SUM_TYPE>::update_mean);
}
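A short usage sketch for the double-precision instantiation registered as "Pedestal_d" in module.cpp. The (rows, cols) reading of the two-int constructor and the import path are assumptions; the method names are taken from the .def calls above.

import numpy as np
from aare import _aare  # assumption: import path

ped = _aare.Pedestal_d(512, 1024)    # assumed (rows, cols); a three-int overload also exists
for _ in range(100):
    frame = np.random.poisson(100, size=(512, 1024)).astype(np.uint16)
    ped.push(frame)                  # push() takes a 2-D uint16 array

mean, std = ped.mean(), ped.std()    # per-pixel statistics as 2-D arrays
ped.push_no_update(frame)            # accumulate without recomputing the statistics
ped.update_mean()                    # then refresh explicitly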


@ -1,41 +1,46 @@
#include "aare/PixelMap.hpp"
#include "np_helper.hpp"
#include <cstdint>
#include <pybind11/numpy.h>
#include <pybind11/pybind11.h>
#include <pybind11/stl.h>
namespace py = pybind11;
using namespace::aare;
using namespace ::aare;
void define_pixel_map_bindings(py::module &m) {
m.def("GenerateMoench03PixelMap", []() {
auto ptr = new NDArray<ssize_t,2>(GenerateMoench03PixelMap());
m.def("GenerateMoench03PixelMap",
[]() {
auto ptr = new NDArray<ssize_t, 2>(GenerateMoench03PixelMap());
return return_image_data(ptr);
})
.def("GenerateMoench05PixelMap", []() {
auto ptr = new NDArray<ssize_t,2>(GenerateMoench05PixelMap());
.def("GenerateMoench05PixelMap",
[]() {
auto ptr = new NDArray<ssize_t, 2>(GenerateMoench05PixelMap());
return return_image_data(ptr);
})
.def("GenerateMoench05PixelMap1g", []() {
auto ptr = new NDArray<ssize_t,2>(GenerateMoench05PixelMap1g());
.def("GenerateMoench05PixelMap1g",
[]() {
auto ptr =
new NDArray<ssize_t, 2>(GenerateMoench05PixelMap1g());
return return_image_data(ptr);
})
.def("GenerateMoench05PixelMapOld", []() {
auto ptr = new NDArray<ssize_t,2>(GenerateMoench05PixelMapOld());
.def("GenerateMoench05PixelMapOld",
[]() {
auto ptr =
new NDArray<ssize_t, 2>(GenerateMoench05PixelMapOld());
return return_image_data(ptr);
})
.def("GenerateMH02SingleCounterPixelMap", []() {
auto ptr = new NDArray<ssize_t,2>(GenerateMH02SingleCounterPixelMap());
.def("GenerateMH02SingleCounterPixelMap",
[]() {
auto ptr = new NDArray<ssize_t, 2>(
GenerateMH02SingleCounterPixelMap());
return return_image_data(ptr);
})
.def("GenerateMH02FourCounterPixelMap", []() {
auto ptr = new NDArray<ssize_t,3>(GenerateMH02FourCounterPixelMap());
auto ptr =
new NDArray<ssize_t, 3>(GenerateMH02FourCounterPixelMap());
return return_image_data(ptr);
});
}


@ -64,7 +64,8 @@ void define_raw_file_io_bindings(py::module &m) {
if (self.n_modules() == 1) {
header = py::array_t<DetectorHeader>(n_frames);
} else {
header = py::array_t<DetectorHeader>({self.n_modules(), n_frames});
header = py::array_t<DetectorHeader>(
{self.n_modules(), n_frames});
}
// py::array_t<DetectorHeader> header({self.n_mod(), n_frames});


@ -57,7 +57,8 @@ void define_raw_master_file_bindings(py::module &m) {
.def_property_readonly("total_frames_expected",
&RawMasterFile::total_frames_expected)
.def_property_readonly("geometry", &RawMasterFile::geometry)
.def_property_readonly("analog_samples", &RawMasterFile::analog_samples, R"(
.def_property_readonly("analog_samples", &RawMasterFile::analog_samples,
R"(
Number of analog samples
Returns


@ -43,11 +43,9 @@ auto read_frame_from_RawSubFile(RawSubFile &self) {
auto read_n_frames_from_RawSubFile(RawSubFile &self, size_t n_frames) {
py::array_t<DetectorHeader> header(n_frames);
const uint8_t item_size = self.bytes_per_pixel();
std::vector<ssize_t> shape{
static_cast<ssize_t>(n_frames),
std::vector<ssize_t> shape{static_cast<ssize_t>(n_frames),
static_cast<ssize_t>(self.rows()),
static_cast<ssize_t>(self.cols())
};
static_cast<ssize_t>(self.cols())};
py::array image;
if (item_size == 1) {
@ -57,15 +55,14 @@ auto read_n_frames_from_RawSubFile(RawSubFile &self, size_t n_frames) {
} else if (item_size == 4) {
image = py::array_t<uint32_t>(shape);
}
self.read_into(reinterpret_cast<std::byte *>(image.mutable_data()), n_frames,
header.mutable_data());
self.read_into(reinterpret_cast<std::byte *>(image.mutable_data()),
n_frames, header.mutable_data());
return py::make_tuple(header, image);
}
//Disable warnings for unused parameters, as we ignore some
//in the __exit__ method
// Disable warnings for unused parameters, as we ignore some
// in the __exit__ method
#pragma GCC diagnostic push
#pragma GCC diagnostic ignored "-Wunused-parameter"
@ -84,18 +81,17 @@ void define_raw_sub_file_io_bindings(py::module &m) {
.def_property_readonly("frames_in_file", &RawSubFile::frames_in_file)
.def("read_frame", &read_frame_from_RawSubFile)
.def("read_n", &read_n_frames_from_RawSubFile)
.def("read", [](RawSubFile &self){
.def("read",
[](RawSubFile &self) {
self.seek(0);
auto n_frames = self.frames_in_file();
return read_n_frames_from_RawSubFile(self, n_frames);
})
.def("__enter__", [](RawSubFile &self) { return &self; })
.def("__exit__",
[](RawSubFile &self,
const std::optional<pybind11::type> &exc_type,
[](RawSubFile &self, const std::optional<pybind11::type> &exc_type,
const std::optional<pybind11::object> &exc_value,
const std::optional<pybind11::object> &traceback) {
})
const std::optional<pybind11::object> &traceback) {})
.def("__iter__", [](RawSubFile &self) { return &self; })
.def("__next__", [](RawSubFile &self) {
try {
@ -104,7 +100,6 @@ void define_raw_sub_file_io_bindings(py::module &m) {
throw py::stop_iteration();
}
});
}
#pragma GCC diagnostic pop
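The __enter__/__exit__ and __iter__/__next__ definitions above make RawSubFile usable as a context manager and as a frame iterator. A hedged sketch; the constructor arguments lie outside this hunk, so the instances are assumed to exist already.

# `sub_file` is assumed to be an already constructed _aare.RawSubFile
with sub_file as f:                        # __enter__ simply returns the object
    headers, frames = f.read()             # rewinds to frame 0 and reads the whole sub file
    print(f.frames_in_file, frames.shape)  # frames is shaped (n_frames, rows, cols)

# a fresh instance can also be iterated frame by frame; __next__ yields what
# read_frame returns (exact layout not shown in this hunk) and the loop ends
# when StopIteration is raised at the end of the file
for item in fresh_sub_file:                # fresh_sub_file: another assumed instance
    pass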


@ -12,10 +12,8 @@
// #include <pybind11/stl/filesystem.h>
// #include <string>
namespace py = pybind11;
using namespace::aare;
using namespace ::aare;
void define_var_cluster_finder_bindings(py::module &m) {
PYBIND11_NUMPY_DTYPE(VarClusterFinder<double>::Hit, size, row, col,
@ -65,9 +63,7 @@ void define_var_cluster_finder_bindings(py::module &m) {
return return_vector(ptr);
})
.def("clear_hits",
[](VarClusterFinder<double> &self) {
self.clear_hits();
})
[](VarClusterFinder<double> &self) { self.clear_hits(); })
.def("steal_hits",
[](VarClusterFinder<double> &self) {
auto ptr = new std::vector<VarClusterFinder<double>::Hit>(
@ -75,5 +71,4 @@ void define_var_cluster_finder_bindings(py::module &m) {
return return_vector(ptr);
})
.def("total_clusters", &VarClusterFinder<double>::total_clusters);
}


@ -31,15 +31,13 @@ ClusterFile::ClusterFile(const std::filesystem::path &fname, size_t chunk_size,
}
}
void ClusterFile::set_roi(ROI roi){
m_roi = roi;
}
void ClusterFile::set_roi(ROI roi) { m_roi = roi; }
void ClusterFile::set_noise_map(const NDView<int32_t, 2> noise_map){
void ClusterFile::set_noise_map(const NDView<int32_t, 2> noise_map) {
m_noise_map = NDArray<int32_t, 2>(noise_map);
}
void ClusterFile::set_gain_map(const NDView<double, 2> gain_map){
void ClusterFile::set_gain_map(const NDView<double, 2> gain_map) {
m_gain_map = NDArray<double, 2>(gain_map);
// Gain map is passed as ADU/keV to avoid dividing when applying the gain
@ -66,42 +64,44 @@ void ClusterFile::write_frame(const ClusterVector<int32_t> &clusters) {
!(clusters.cluster_size_y() == 3)) {
throw std::runtime_error("Only 3x3 clusters are supported");
}
//First write the frame number - 4 bytes
// First write the frame number - 4 bytes
int32_t frame_number = clusters.frame_number();
if(fwrite(&frame_number, sizeof(frame_number), 1, fp)!=1){
if (fwrite(&frame_number, sizeof(frame_number), 1, fp) != 1) {
throw std::runtime_error(LOCATION + "Could not write frame number");
}
//Then write the number of clusters - 4 bytes
// Then write the number of clusters - 4 bytes
uint32_t n_clusters = clusters.size();
if(fwrite(&n_clusters, sizeof(n_clusters), 1, fp)!=1){
throw std::runtime_error(LOCATION + "Could not write number of clusters");
if (fwrite(&n_clusters, sizeof(n_clusters), 1, fp) != 1) {
throw std::runtime_error(LOCATION +
"Could not write number of clusters");
}
//Now write the clusters in the frame
if(fwrite(clusters.data(), clusters.item_size(), clusters.size(), fp)!=clusters.size()){
// Now write the clusters in the frame
if (fwrite(clusters.data(), clusters.item_size(), clusters.size(), fp) !=
clusters.size()) {
throw std::runtime_error(LOCATION + "Could not write clusters");
}
}
ClusterVector<int32_t> ClusterFile::read_clusters(size_t n_clusters){
ClusterVector<int32_t> ClusterFile::read_clusters(size_t n_clusters) {
if (m_mode != "r") {
throw std::runtime_error("File not opened for reading");
}
if (m_noise_map || m_roi){
if (m_noise_map || m_roi) {
return read_clusters_with_cut(n_clusters);
}else{
} else {
return read_clusters_without_cut(n_clusters);
}
}
ClusterVector<int32_t> ClusterFile::read_clusters_without_cut(size_t n_clusters) {
ClusterVector<int32_t>
ClusterFile::read_clusters_without_cut(size_t n_clusters) {
if (m_mode != "r") {
throw std::runtime_error("File not opened for reading");
}
ClusterVector<int32_t> clusters(3,3, n_clusters);
ClusterVector<int32_t> clusters(3, 3, n_clusters);
int32_t iframe = 0; // frame number needs to be 4 bytes!
size_t nph_read = 0;
@ -119,7 +119,7 @@ ClusterVector<int32_t> ClusterFile::read_clusters_without_cut(size_t n_clusters)
} else {
nn = nph;
}
nph_read += fread((buf + nph_read*clusters.item_size()),
nph_read += fread((buf + nph_read * clusters.item_size()),
clusters.item_size(), nn, fp);
m_num_left = nph - nn; // write back the number of photons left
}
@ -135,7 +135,7 @@ ClusterVector<int32_t> ClusterFile::read_clusters_without_cut(size_t n_clusters)
else
nn = nph;
nph_read += fread((buf + nph_read*clusters.item_size()),
nph_read += fread((buf + nph_read * clusters.item_size()),
clusters.item_size(), nn, fp);
m_num_left = nph - nn;
}
@ -147,22 +147,22 @@ ClusterVector<int32_t> ClusterFile::read_clusters_without_cut(size_t n_clusters)
// Resize the vector to the number of clusters.
// No new allocation, only change bounds.
clusters.resize(nph_read);
if(m_gain_map)
if (m_gain_map)
clusters.apply_gain_map(m_gain_map->view());
return clusters;
}
ClusterVector<int32_t> ClusterFile::read_clusters_with_cut(size_t n_clusters) {
ClusterVector<int32_t> clusters(3,3);
ClusterVector<int32_t> clusters(3, 3);
clusters.reserve(n_clusters);
// if there are photons left from previous frame read them first
if (m_num_left) {
while(m_num_left && clusters.size() < n_clusters){
while (m_num_left && clusters.size() < n_clusters) {
Cluster3x3 c = read_one_cluster();
if(is_selected(c)){
clusters.push_back(c.x, c.y, reinterpret_cast<std::byte*>(c.data));
if (is_selected(c)) {
clusters.push_back(c.x, c.y,
reinterpret_cast<std::byte *>(c.data));
}
}
}
@ -172,17 +172,21 @@ ClusterVector<int32_t> ClusterFile::read_clusters_with_cut(size_t n_clusters) {
if (clusters.size() < n_clusters) {
// sanity check
if (m_num_left) {
throw std::runtime_error(LOCATION + "Entered second loop with clusters left\n");
throw std::runtime_error(
LOCATION + "Entered second loop with clusters left\n");
}
int32_t frame_number = 0; // frame number needs to be 4 bytes!
while (fread(&frame_number, sizeof(frame_number), 1, fp)) {
if (fread(&m_num_left, sizeof(m_num_left), 1, fp)) {
clusters.set_frame_number(frame_number); //cluster vector will hold the last frame number
while(m_num_left && clusters.size() < n_clusters){
clusters.set_frame_number(
frame_number); // cluster vector will hold the last frame
// number
while (m_num_left && clusters.size() < n_clusters) {
Cluster3x3 c = read_one_cluster();
if(is_selected(c)){
clusters.push_back(c.x, c.y, reinterpret_cast<std::byte*>(c.data));
if (is_selected(c)) {
clusters.push_back(
c.x, c.y, reinterpret_cast<std::byte *>(c.data));
}
}
}
@ -191,15 +195,14 @@ ClusterVector<int32_t> ClusterFile::read_clusters_with_cut(size_t n_clusters) {
if (clusters.size() >= n_clusters)
break;
}
}
if(m_gain_map)
if (m_gain_map)
clusters.apply_gain_map(m_gain_map->view());
return clusters;
}
Cluster3x3 ClusterFile::read_one_cluster(){
Cluster3x3 ClusterFile::read_one_cluster() {
Cluster3x3 c;
auto rc = fread(&c, sizeof(c), 1, fp);
if (rc != 1) {
@ -209,13 +212,13 @@ Cluster3x3 ClusterFile::read_one_cluster(){
return c;
}
ClusterVector<int32_t> ClusterFile::read_frame(){
ClusterVector<int32_t> ClusterFile::read_frame() {
if (m_mode != "r") {
throw std::runtime_error(LOCATION + "File not opened for reading");
}
if (m_noise_map || m_roi){
if (m_noise_map || m_roi) {
return read_frame_with_cut();
}else{
} else {
return read_frame_without_cut();
}
}
@ -235,7 +238,8 @@ ClusterVector<int32_t> ClusterFile::read_frame_without_cut() {
int32_t n_clusters; // Saved as 32bit integer in the cluster file
if (fread(&n_clusters, sizeof(n_clusters), 1, fp) != 1) {
throw std::runtime_error(LOCATION + "Could not read number of clusters");
throw std::runtime_error(LOCATION +
"Could not read number of clusters");
}
ClusterVector<int32_t> clusters(3, 3, n_clusters);
@ -264,7 +268,6 @@ ClusterVector<int32_t> ClusterFile::read_frame_with_cut() {
throw std::runtime_error("Could not read frame number");
}
if (fread(&m_num_left, sizeof(m_num_left), 1, fp) != 1) {
throw std::runtime_error("Could not read number of clusters");
}
@ -272,10 +275,10 @@ ClusterVector<int32_t> ClusterFile::read_frame_with_cut() {
ClusterVector<int32_t> clusters(3, 3);
clusters.reserve(m_num_left);
clusters.set_frame_number(frame_number);
while(m_num_left){
while (m_num_left) {
Cluster3x3 c = read_one_cluster();
if(is_selected(c)){
clusters.push_back(c.x, c.y, reinterpret_cast<std::byte*>(c.data));
if (is_selected(c)) {
clusters.push_back(c.x, c.y, reinterpret_cast<std::byte *>(c.data));
}
}
if (m_gain_map)
@ -283,31 +286,30 @@ ClusterVector<int32_t> ClusterFile::read_frame_with_cut() {
return clusters;
}
bool ClusterFile::is_selected(Cluster3x3 &cl) {
//Should fail fast
// Should fail fast
if (m_roi) {
if (!(m_roi->contains(cl.x, cl.y))) {
return false;
}
}
if (m_noise_map){
if (m_noise_map) {
int32_t sum_1x1 = cl.data[4]; // central pixel
int32_t sum_2x2 = cl.sum_2x2(); // highest sum of 2x2 subclusters
int32_t sum_3x3 = cl.sum(); // sum of all pixels
auto noise = (*m_noise_map)(cl.y, cl.x); //TODO! check if this is correct
auto noise =
(*m_noise_map)(cl.y, cl.x); // TODO! check if this is correct
if (sum_1x1 <= noise || sum_2x2 <= 2 * noise || sum_3x3 <= 3 * noise) {
return false;
}
}
//we passed all checks
// we passed all checks
return true;
}
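In short, with n = noise_map(y, x), a cluster survives the noise cut above only if

\[ \text{sum}_{1\times 1} > n \quad\text{and}\quad \text{sum}_{2\times 2} > 2n \quad\text{and}\quad \text{sum}_{3\times 3} > 3n \]

where sum_1x1 is the central pixel, sum_2x2 the highest 2x2 sub-cluster sum and sum_3x3 the full cluster sum; the ROI test, when configured, must already have passed.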
NDArray<double, 2> calculate_eta2(ClusterVector<int> &clusters) {
//TODO! make work with 2x2 clusters
// TODO! make work with 2x2 clusters
NDArray<double, 2> eta2({static_cast<int64_t>(clusters.size()), 2});
if (clusters.cluster_size_x() == 3 || clusters.cluster_size_y() == 3) {
@ -316,13 +318,14 @@ NDArray<double, 2> calculate_eta2(ClusterVector<int> &clusters) {
eta2(i, 0) = e.x;
eta2(i, 1) = e.y;
}
}else if(clusters.cluster_size_x() == 2 || clusters.cluster_size_y() == 2){
} else if (clusters.cluster_size_x() == 2 ||
clusters.cluster_size_y() == 2) {
for (size_t i = 0; i < clusters.size(); i++) {
auto e = calculate_eta2(clusters.at<Cluster2x2>(i));
eta2(i, 0) = e.x;
eta2(i, 1) = e.y;
}
}else{
} else {
throw std::runtime_error("Only 3x3 and 2x2 clusters are supported");
}
@ -330,9 +333,9 @@ NDArray<double, 2> calculate_eta2(ClusterVector<int> &clusters) {
}
/**
* @brief Calculate the eta2 values for a 3x3 cluster and return them in a Eta2 struct
* containing etay, etax and the corner of the cluster.
*/
* @brief Calculate the eta2 values for a 3x3 cluster and return them in a Eta2
* struct containing etay, etax and the corner of the cluster.
*/
Eta2 calculate_eta2(Cluster3x3 &cl) {
Eta2 eta{};
@ -347,38 +350,30 @@ Eta2 calculate_eta2(Cluster3x3 &cl) {
switch (c) {
case cBottomLeft:
if ((cl.data[3] + cl.data[4]) != 0)
eta.x =
static_cast<double>(cl.data[4]) / (cl.data[3] + cl.data[4]);
eta.x = static_cast<double>(cl.data[4]) / (cl.data[3] + cl.data[4]);
if ((cl.data[1] + cl.data[4]) != 0)
eta.y =
static_cast<double>(cl.data[4]) / (cl.data[1] + cl.data[4]);
eta.y = static_cast<double>(cl.data[4]) / (cl.data[1] + cl.data[4]);
eta.c = cBottomLeft;
break;
case cBottomRight:
if ((cl.data[2] + cl.data[5]) != 0)
eta.x =
static_cast<double>(cl.data[5]) / (cl.data[4] + cl.data[5]);
eta.x = static_cast<double>(cl.data[5]) / (cl.data[4] + cl.data[5]);
if ((cl.data[1] + cl.data[4]) != 0)
eta.y =
static_cast<double>(cl.data[4]) / (cl.data[1] + cl.data[4]);
eta.y = static_cast<double>(cl.data[4]) / (cl.data[1] + cl.data[4]);
eta.c = cBottomRight;
break;
case cTopLeft:
if ((cl.data[7] + cl.data[4]) != 0)
eta.x =
static_cast<double>(cl.data[4]) / (cl.data[3] + cl.data[4]);
eta.x = static_cast<double>(cl.data[4]) / (cl.data[3] + cl.data[4]);
if ((cl.data[7] + cl.data[4]) != 0)
eta.y =
static_cast<double>(cl.data[7]) / (cl.data[7] + cl.data[4]);
eta.y = static_cast<double>(cl.data[7]) / (cl.data[7] + cl.data[4]);
eta.c = cTopLeft;
break;
case cTopRight:
if ((cl.data[5] + cl.data[4]) != 0)
eta.x =
static_cast<double>(cl.data[5]) / (cl.data[5] + cl.data[4]);
eta.x = static_cast<double>(cl.data[5]) / (cl.data[5] + cl.data[4]);
if ((cl.data[7] + cl.data[4]) != 0)
eta.y =
static_cast<double>(cl.data[7]) / (cl.data[7] + cl.data[4]);
eta.y = static_cast<double>(cl.data[7]) / (cl.data[7] + cl.data[4]);
eta.c = cTopRight;
break;
// no default to allow compiler to warn about missing cases
@ -386,17 +381,15 @@ Eta2 calculate_eta2(Cluster3x3 &cl) {
return eta;
}
Eta2 calculate_eta2(Cluster2x2 &cl) {
Eta2 eta{};
if ((cl.data[0] + cl.data[1]) != 0)
eta.x = static_cast<double>(cl.data[1]) / (cl.data[0] + cl.data[1]);
if ((cl.data[0] + cl.data[2]) != 0)
eta.y = static_cast<double>(cl.data[2]) / (cl.data[0] + cl.data[2]);
eta.sum = cl.data[0] + cl.data[1] + cl.data[2]+ cl.data[3];
eta.c = cBottomLeft; //TODO! This is not correct, but need to put something
eta.sum = cl.data[0] + cl.data[1] + cl.data[2] + cl.data[3];
eta.c = cBottomLeft; // TODO! This is not correct, but need to put something
return eta;
}
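Restating the arithmetic above, with d_k denoting the flattened cluster data: for a 2x2 cluster

\[ \eta_x = \frac{d_1}{d_0 + d_1}, \qquad \eta_y = \frac{d_2}{d_0 + d_2}, \qquad \mathrm{sum} = d_0 + d_1 + d_2 + d_3 \]

and for a 3x3 cluster whose strongest 2x2 corner is the bottom-left one

\[ \eta_x = \frac{d_4}{d_3 + d_4}, \qquad \eta_y = \frac{d_4}{d_1 + d_4} \]

with the other three corners following the analogous pattern in the switch and every denominator guarded against zero.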
} // namespace aare


@ -10,9 +10,8 @@ using aare::Cluster;
using aare::ClusterFile;
using aare::ClusterVector;
TEST_CASE("Read one frame from a cluster file", "[.files]") {
//We know that the frame has 97 clusters
// We know that the frame has 97 clusters
auto fpath = test_data_path() / "clust" / "single_frame_97_clustrers.clust";
REQUIRE(std::filesystem::exists(fpath));
@ -27,7 +26,6 @@ TEST_CASE("Read one frame from a cluster file", "[.files]") {
std::begin(expected_cluster_data)));
}
TEST_CASE("Read one frame using ROI", "[.files]") {
// We know that the frame has 97 clusters
auto fpath = test_data_path() / "clust" / "single_frame_97_clustrers.clust";
@ -60,8 +58,6 @@ TEST_CASE("Read one frame using ROI", "[.files]") {
std::begin(expected_cluster_data)));
}
TEST_CASE("Read clusters from single frame file", "[.files]") {
// frame_number, num_clusters [135] 97


@ -14,22 +14,24 @@ CtbRawFile::CtbRawFile(const std::filesystem::path &fname) : m_master(fname) {
m_file.open(m_master.data_fname(0, 0), std::ios::binary);
}
void CtbRawFile::read_into(std::byte *image_buf, DetectorHeader* header) {
if(m_current_frame >= m_master.frames_in_file()){
void CtbRawFile::read_into(std::byte *image_buf, DetectorHeader *header) {
if (m_current_frame >= m_master.frames_in_file()) {
throw std::runtime_error(LOCATION + " End of file reached");
}
if(m_current_frame != 0 && m_current_frame % m_master.max_frames_per_file() == 0){
open_data_file(m_current_subfile+1);
if (m_current_frame != 0 &&
m_current_frame % m_master.max_frames_per_file() == 0) {
open_data_file(m_current_subfile + 1);
}
if(header){
if (header) {
m_file.read(reinterpret_cast<char *>(header), sizeof(DetectorHeader));
}else{
} else {
m_file.seekg(sizeof(DetectorHeader), std::ios::cur);
}
m_file.read(reinterpret_cast<char *>(image_buf), m_master.image_size_in_bytes());
m_file.read(reinterpret_cast<char *>(image_buf),
m_master.image_size_in_bytes());
m_current_frame++;
}
@ -38,13 +40,16 @@ void CtbRawFile::seek(size_t frame_number) {
open_data_file(index);
}
size_t frame_number_in_file = frame_number % m_master.max_frames_per_file();
m_file.seekg((sizeof(DetectorHeader)+m_master.image_size_in_bytes()) * frame_number_in_file);
m_file.seekg((sizeof(DetectorHeader) + m_master.image_size_in_bytes()) *
frame_number_in_file);
m_current_frame = frame_number;
}
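Put as a formula, once the correct data file is open the seek above lands at byte offset

\[ \text{offset} = \bigl(\mathrm{sizeof}(\text{DetectorHeader}) + \text{image\_size\_in\_bytes}\bigr) \cdot \bigl(\text{frame\_number} \bmod \text{max\_frames\_per\_file}\bigr) \]

within that file.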
size_t CtbRawFile::tell() const { return m_current_frame; }
size_t CtbRawFile::image_size_in_bytes() const { return m_master.image_size_in_bytes(); }
size_t CtbRawFile::image_size_in_bytes() const {
return m_master.image_size_in_bytes();
}
size_t CtbRawFile::frames_in_file() const { return m_master.frames_in_file(); }
@ -63,12 +68,11 @@ void CtbRawFile::open_data_file(size_t subfile_index) {
throw std::runtime_error(LOCATION + "Subfile index out of range");
}
m_current_subfile = subfile_index;
m_file = std::ifstream(m_master.data_fname(0, subfile_index), std::ios::binary); // only one module for CTB
m_file = std::ifstream(m_master.data_fname(0, subfile_index),
std::ios::binary); // only one module for CTB
if (!m_file.is_open()) {
throw std::runtime_error(LOCATION + "Could not open data file");
}
}
} // namespace aare


@ -10,7 +10,8 @@ namespace aare {
* @brief Construct a DType object from a type_info object
* @param t type_info object
* @throw runtime_error if the type is not supported
* @note supported types are: int8_t, uint8_t, int16_t, uint16_t, int32_t, uint32_t, int64_t, uint64_t, float, double
* @note supported types are: int8_t, uint8_t, int16_t, uint16_t, int32_t,
* uint32_t, int64_t, uint64_t, float, double
* @note the type_info object is obtained using typeid (e.g. typeid(int))
*/
Dtype::Dtype(const std::type_info &t) {
@ -35,7 +36,8 @@ Dtype::Dtype(const std::type_info &t) {
else if (t == typeid(double))
m_type = TypeIndex::DOUBLE;
else
throw std::runtime_error("Could not construct data type. Type not supported.");
throw std::runtime_error(
"Could not construct data type. Type not supported.");
}
/**
@ -63,7 +65,8 @@ uint8_t Dtype::bitdepth() const {
case TypeIndex::NONE:
return 0;
default:
throw std::runtime_error(LOCATION + "Could not get bitdepth. Type not supported.");
throw std::runtime_error(LOCATION +
"Could not get bitdepth. Type not supported.");
}
}
@ -138,7 +141,8 @@ Dtype Dtype::from_bitdepth(uint8_t bitdepth) {
case 64:
return Dtype(TypeIndex::UINT64);
default:
throw std::runtime_error("Could not construct data type from bitdepth.");
throw std::runtime_error(
"Could not construct data type from bitdepth.");
}
}
/**
@ -175,17 +179,27 @@ std::string Dtype::to_string() const {
case TypeIndex::DOUBLE:
return "f8";
case TypeIndex::ERROR:
throw std::runtime_error("Could not get string representation. Type not supported.");
throw std::runtime_error(
"Could not get string representation. Type not supported.");
case TypeIndex::NONE:
throw std::runtime_error("Could not get string representation. Type not supported.");
throw std::runtime_error(
"Could not get string representation. Type not supported.");
}
return {};
}
bool Dtype::operator==(const Dtype &other) const noexcept { return m_type == other.m_type; }
bool Dtype::operator!=(const Dtype &other) const noexcept { return !(*this == other); }
bool Dtype::operator==(const Dtype &other) const noexcept {
return m_type == other.m_type;
}
bool Dtype::operator!=(const Dtype &other) const noexcept {
return !(*this == other);
}
bool Dtype::operator==(const std::type_info &t) const { return Dtype(t) == *this; }
bool Dtype::operator!=(const std::type_info &t) const { return Dtype(t) != *this; }
bool Dtype::operator==(const std::type_info &t) const {
return Dtype(t) == *this;
}
bool Dtype::operator!=(const std::type_info &t) const {
return Dtype(t) != *this;
}
} // namespace aare


@ -51,4 +51,6 @@ TEST_CASE("Construct from string with endianess") {
REQUIRE_THROWS(Dtype(">i4") == typeid(int32_t));
}
TEST_CASE("Convert to string") { REQUIRE(Dtype(typeid(int)).to_string() == "<i4"); }
TEST_CASE("Convert to string") {
REQUIRE(Dtype(typeid(int)).to_string() == "<i4");
}
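The strings produced by Dtype::to_string ("<i4", "f8", ...) follow the numpy array-interface typestr convention, which is presumably the intent; a quick cross-check from Python needs nothing but numpy:

import numpy as np

print(np.dtype("<i4"))   # int32, matching the test expectation above
print(np.dtype("f8"))    # float64, the DOUBLE case in Dtype::to_string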


@ -1,4 +1,7 @@
#include "aare/File.hpp"
#ifdef HDF5_FOUND
#include "aare/Hdf5File.hpp"
#endif
#include "aare/JungfrauDataFile.hpp"
#include "aare/NumpyFile.hpp"
#include "aare/RawFile.hpp"
@ -24,23 +27,29 @@ File::File(const std::filesystem::path &fname, const std::string &mode,
if (fname.extension() == ".raw" || fname.extension() == ".json") {
// file_impl = new RawFile(fname, mode, cfg);
file_impl = std::make_unique<RawFile>(fname, mode);
}
else if (fname.extension() == ".npy") {
} else if (fname.extension() == ".npy") {
// file_impl = new NumpyFile(fname, mode, cfg);
file_impl = std::make_unique<NumpyFile>(fname, mode, cfg);
}else if(fname.extension() == ".dat"){
}
#ifdef HDF5_FOUND
else if (fname.extension() == ".h5") {
file_impl = std::make_unique<Hdf5File>(fname, mode);
}
#else
else if (fname.extension() == ".h5") {
throw std::runtime_error("Enable HDF5 compile option: AARE_HDF5=ON");
}
#endif
else if (fname.extension() == ".dat") {
file_impl = std::make_unique<JungfrauDataFile>(fname);
} else {
throw std::runtime_error("Unsupported file type");
}
}
File::File(File &&other) noexcept { std::swap(file_impl, other.file_impl); }
File::File(File &&other) noexcept{
std::swap(file_impl, other.file_impl);
}
File& File::operator=(File &&other) noexcept {
File &File::operator=(File &&other) noexcept {
if (this != &other) {
File tmp(std::move(other));
std::swap(file_impl, tmp.file_impl);
@ -70,15 +79,16 @@ size_t File::frame_number(size_t frame_index) {
}
size_t File::bytes_per_frame() const { return file_impl->bytes_per_frame(); }
size_t File::pixels_per_frame() const{ return file_impl->pixels_per_frame(); }
size_t File::pixels_per_frame() const { return file_impl->pixels_per_frame(); }
void File::seek(size_t frame_index) { file_impl->seek(frame_index); }
size_t File::tell() const { return file_impl->tell(); }
size_t File::rows() const { return file_impl->rows(); }
size_t File::cols() const { return file_impl->cols(); }
size_t File::bitdepth() const { return file_impl->bitdepth(); }
size_t File::bytes_per_pixel() const { return file_impl->bitdepth() / bits_per_byte; }
size_t File::bytes_per_pixel() const {
return file_impl->bitdepth() / bits_per_byte;
}
DetectorType File::detector_type() const { return file_impl->detector_type(); }
} // namespace aare


@ -6,10 +6,12 @@
namespace aare {
FilePtr::FilePtr(const std::filesystem::path& fname, const std::string& mode = "rb") {
FilePtr::FilePtr(const std::filesystem::path &fname,
const std::string &mode = "rb") {
fp_ = fopen(fname.c_str(), mode.c_str());
if (!fp_)
throw std::runtime_error(fmt::format("Could not open: {}", fname.c_str()));
throw std::runtime_error(
fmt::format("Could not open: {}", fname.c_str()));
}
FilePtr::FilePtr(FilePtr &&other) { std::swap(fp_, other.fp_); }
@ -24,7 +26,8 @@ FILE *FilePtr::get() { return fp_; }
ssize_t FilePtr::tell() {
auto pos = ftell(fp_);
if (pos == -1)
throw std::runtime_error(fmt::format("Error getting file position: {}", error_msg()));
throw std::runtime_error(
fmt::format("Error getting file position: {}", error_msg()));
return pos;
}
FilePtr::~FilePtr() {
@ -32,7 +35,7 @@ FilePtr::~FilePtr() {
fclose(fp_); // check?
}
std::string FilePtr::error_msg(){
std::string FilePtr::error_msg() {
if (feof(fp_)) {
return "End of file reached";
}


@ -1,13 +1,12 @@
#include "aare/Fit.hpp"
#include "aare/utils/task.hpp"
#include "aare/utils/par.hpp"
#include "aare/utils/task.hpp"
#include <lmcurve2.h>
#include <lmfit.hpp>
#include <thread>
#include <array>
namespace aare {
namespace func {
@ -34,8 +33,10 @@ NDArray<double, 1> pol1(NDView<double, 1> x, NDView<double, 1> par) {
return y;
}
double scurve(const double x, const double * par) {
return (par[0] + par[1] * x) + 0.5 * (1 + erf((x - par[2]) / (sqrt(2) * par[3]))) * (par[4] + par[5] * (x - par[2]));
double scurve(const double x, const double *par) {
return (par[0] + par[1] * x) +
0.5 * (1 + erf((x - par[2]) / (sqrt(2) * par[3]))) *
(par[4] + par[5] * (x - par[2]));
}
NDArray<double, 1> scurve(NDView<double, 1> x, NDView<double, 1> par) {
@ -46,8 +47,10 @@ NDArray<double, 1> scurve(NDView<double, 1> x, NDView<double, 1> par) {
return y;
}
double scurve2(const double x, const double * par) {
return (par[0] + par[1] * x) + 0.5 * (1 - erf((x - par[2]) / (sqrt(2) * par[3]))) * (par[4] + par[5] * (x - par[2]));
double scurve2(const double x, const double *par) {
return (par[0] + par[1] * x) +
0.5 * (1 - erf((x - par[2]) / (sqrt(2) * par[3]))) *
(par[4] + par[5] * (x - par[2]));
}
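Written out, the model implemented by scurve above is

\[ f(x) = p_0 + p_1 x + \tfrac{1}{2}\Bigl(1 + \operatorname{erf}\tfrac{x - p_2}{\sqrt{2}\,p_3}\Bigr)\bigl(p_4 + p_5 (x - p_2)\bigr) \]

and scurve2 is the falling counterpart with (1 - erf(...)) in place of (1 + erf(...)).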
NDArray<double, 1> scurve2(NDView<double, 1> x, NDView<double, 1> par) {
@ -91,7 +94,8 @@ NDArray<double, 3> fit_gaus(NDView<double, 1> x, NDView<double, 3> y,
return result;
}
std::array<double, 3> gaus_init_par(const NDView<double, 1> x, const NDView<double, 1> y) {
std::array<double, 3> gaus_init_par(const NDView<double, 1> x,
const NDView<double, 1> y) {
std::array<double, 3> start_par{0, 0, 0};
auto e = std::max_element(y.begin(), y.end());
auto idx = std::distance(y.begin(), e);
@ -103,20 +107,18 @@ std::array<double, 3> gaus_init_par(const NDView<double, 1> x, const NDView<doub
// For sigma we estimate the fwhm and divide by 2.35
// assuming equally spaced x values
auto delta = x[1] - x[0];
start_par[2] =
std::count_if(y.begin(), y.end(),
start_par[2] = std::count_if(y.begin(), y.end(),
[e](double val) { return val > *e / 2; }) *
delta / 2.35;
return start_par;
}
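The sigma estimate above in numpy form, for reference; the peak height and position guesses sit in context lines not shown in this hunk, and 2.35 approximates 2*sqrt(2*ln 2), the Gaussian FWHM-to-sigma factor.

import numpy as np

def estimate_sigma(x, y):
    # count samples above half of the maximum to approximate the FWHM,
    # then convert to sigma, assuming equally spaced x values
    delta = x[1] - x[0]
    return np.count_nonzero(y > y.max() / 2) * delta / 2.35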
std::array<double, 2> pol1_init_par(const NDView<double, 1> x, const NDView<double, 1> y){
std::array<double, 2> pol1_init_par(const NDView<double, 1> x,
const NDView<double, 1> y) {
// Estimate the initial parameters for the fit
std::array<double, 2> start_par{0, 0};
auto y2 = std::max_element(y.begin(), y.end());
auto x2 = x[std::distance(y.begin(), y2)];
auto y1 = std::min_element(y.begin(), y.end());
@ -141,7 +143,6 @@ void fit_gaus(NDView<double, 1> x, NDView<double, 1> y, NDView<double, 1> y_err,
"and par_out, par_err_out must have size 3");
}
// /* Collection of output parameters for status info. */
// typedef struct {
// double fnorm; /* norm of the residue vector fvec. */
@ -153,23 +154,32 @@ void fit_gaus(NDView<double, 1> x, NDView<double, 1> y, NDView<double, 1> y_err,
// */
// } lm_status_struct;
lm_status_struct status;
par_out = gaus_init_par(x, y);
std::array<double, 9> cov{0, 0, 0, 0, 0, 0, 0 , 0 , 0};
std::array<double, 9> cov{0, 0, 0, 0, 0, 0, 0, 0, 0};
// void lmcurve2( const int n_par, double *par, double *parerr, double *covar, const int m_dat, const double *t, const double *y, const double *dy, double (*f)( const double ti, const double *par ), const lm_control_struct *control, lm_status_struct *status);
// n_par - Number of free variables. Length of parameter vector par.
// par - Parameter vector. On input, it must contain a reasonable guess. On output, it contains the solution found to minimize ||r||.
// parerr - Parameter uncertainties vector. Array of length n_par or NULL. On output, unless it or covar is NULL, it contains the weighted parameter uncertainties for the found parameters.
// covar - Covariance matrix. Array of length n_par * n_par or NULL. On output, unless it is NULL, it contains the covariance matrix.
// m_dat - Number of data points. Length of vectors t, y, dy. Must satisfy n_par <= m_dat.
// t - Array of length m_dat. Contains the abscissae (time, or "x") for which function f will be evaluated.
// y - Array of length m_dat. Contains the ordinate values that shall be fitted.
// dy - Array of length m_dat. Contains the standard deviations of the values y.
// f - A user-supplied parametric function f(ti;par).
// control - Parameter collection for tuning the fit procedure. In most cases, the default &lm_control_double is adequate. If f is only computed with single-precision accuracy, &lm_control_float should be used. Parameters are explained in lmmin2(3).
// status - A record used to return information about the minimization process: For details, see lmmin2(3).
// void lmcurve2( const int n_par, double *par, double *parerr, double
// *covar, const int m_dat, const double *t, const double *y, const double
// *dy, double (*f)( const double ti, const double *par ), const
// lm_control_struct *control, lm_status_struct *status); n_par - Number of
// free variables. Length of parameter vector par. par - Parameter vector.
// On input, it must contain a reasonable guess. On output, it contains the
// solution found to minimize ||r||. parerr - Parameter uncertainties
// vector. Array of length n_par or NULL. On output, unless it or covar is
// NULL, it contains the weighted parameter uncertainties for the found
// parameters. covar - Covariance matrix. Array of length n_par * n_par or
// NULL. On output, unless it is NULL, it contains the covariance matrix.
// m_dat - Number of data points. Length of vectors t, y, dy. Must satisfy
// n_par <= m_dat. t - Array of length m_dat. Contains the abscissae (time,
// or "x") for which function f will be evaluated. y - Array of length
// m_dat. Contains the ordinate values that shall be fitted. dy - Array of
// length m_dat. Contains the standard deviations of the values y. f - A
// user-supplied parametric function f(ti;par). control - Parameter
// collection for tuning the fit procedure. In most cases, the default
// &lm_control_double is adequate. If f is only computed with
// single-precision accuracy, &lm_control_float should be used. Parameters
// are explained in lmmin2(3). status - A record used to return information
// about the minimization process: For details, see lmmin2(3).
lmcurve2(par_out.size(), par_out.data(), par_err_out.data(), cov.data(),
x.size(), x.data(), y.data(), y_err.data(), aare::func::gaus,
@ -178,12 +188,14 @@ void fit_gaus(NDView<double, 1> x, NDView<double, 1> y, NDView<double, 1> y_err,
// Calculate chi2
chi2 = 0;
for (ssize_t i = 0; i < y.size(); i++) {
chi2 += std::pow((y(i) - func::gaus(x(i), par_out.data())) / y_err(i), 2);
chi2 +=
std::pow((y(i) - func::gaus(x(i), par_out.data())) / y_err(i), 2);
}
}
void fit_gaus(NDView<double, 1> x, NDView<double, 3> y, NDView<double, 3> y_err,
NDView<double, 3> par_out, NDView<double, 3> par_err_out, NDView<double, 2> chi2_out,
NDView<double, 3> par_out, NDView<double, 3> par_err_out,
NDView<double, 2> chi2_out,
int n_threads) {
@ -200,7 +212,6 @@ void fit_gaus(NDView<double, 1> x, NDView<double, 3> y, NDView<double, 3> y_err,
fit_gaus(x, y_view, y_err_view, par_out_view, par_err_out_view,
chi2_out(row, col));
}
}
};
@ -210,7 +221,8 @@ void fit_gaus(NDView<double, 1> x, NDView<double, 3> y, NDView<double, 3> y_err,
}
void fit_pol1(NDView<double, 1> x, NDView<double, 1> y, NDView<double, 1> y_err,
NDView<double, 1> par_out, NDView<double, 1> par_err_out, double& chi2) {
NDView<double, 1> par_out, NDView<double, 1> par_err_out,
double &chi2) {
// Check that we have the correct sizes
if (y.size() != x.size() || y.size() != y_err.size() ||
@ -230,13 +242,14 @@ void fit_pol1(NDView<double, 1> x, NDView<double, 1> y, NDView<double, 1> y_err,
// Calculate chi2
chi2 = 0;
for (ssize_t i = 0; i < y.size(); i++) {
chi2 += std::pow((y(i) - func::pol1(x(i), par_out.data())) / y_err(i), 2);
chi2 +=
std::pow((y(i) - func::pol1(x(i), par_out.data())) / y_err(i), 2);
}
}
void fit_pol1(NDView<double, 1> x, NDView<double, 3> y, NDView<double, 3> y_err,
NDView<double, 3> par_out, NDView<double, 3> par_err_out, NDView<double, 2> chi2_out,
int n_threads) {
NDView<double, 3> par_out, NDView<double, 3> par_err_out,
NDView<double, 2> chi2_out, int n_threads) {
auto process = [&](ssize_t first_row, ssize_t last_row) {
for (ssize_t row = first_row; row < last_row; row++) {
@ -249,15 +262,14 @@ void fit_pol1(NDView<double, 1> x, NDView<double, 3> y, NDView<double, 3> y_err,
NDView<double, 1> par_err_out_view(&par_err_out(row, col, 0),
{par_err_out.shape(2)});
fit_pol1(x, y_view, y_err_view, par_out_view, par_err_out_view, chi2_out(row, col));
fit_pol1(x, y_view, y_err_view, par_out_view, par_err_out_view,
chi2_out(row, col));
}
}
};
auto tasks = split_task(0, y.shape(0), n_threads);
RunInParallel(process, tasks);
}
NDArray<double, 1> fit_pol1(NDView<double, 1> x, NDView<double, 1> y) {
@ -300,7 +312,8 @@ NDArray<double, 3> fit_pol1(NDView<double, 1> x, NDView<double, 3> y,
// ~~ S-CURVES ~~
// SCURVE --
std::array<double, 6> scurve_init_par(const NDView<double, 1> x, const NDView<double, 1> y){
std::array<double, 6> scurve_init_par(const NDView<double, 1> x,
const NDView<double, 1> y) {
// Estimate the initial parameters for the fit
std::array<double, 6> start_par{0, 0, 0, 0, 0, 0};
@ -308,7 +321,8 @@ std::array<double, 6> scurve_init_par(const NDView<double, 1> x, const NDView<do
auto ymin = std::min_element(y.begin(), y.end());
start_par[4] = *ymin + (*ymax - *ymin) / 2;
// Find the first x where the corresponding y value is above the threshold (start_par[4])
// Find the first x where the corresponding y value is above the threshold
// (start_par[4])
for (ssize_t i = 0; i < y.size(); ++i) {
if (y[i] >= start_par[4]) {
start_par[2] = x[i];
@ -334,7 +348,8 @@ NDArray<double, 1> fit_scurve(NDView<double, 1> x, NDView<double, 1> y) {
return result;
}
NDArray<double, 3> fit_scurve(NDView<double, 1> x, NDView<double, 3> y, int n_threads) {
NDArray<double, 3> fit_scurve(NDView<double, 1> x, NDView<double, 3> y,
int n_threads) {
NDArray<double, 3> result({y.shape(0), y.shape(1), 6}, 0);
auto process = [&x, &y, &result](ssize_t first_row, ssize_t last_row) {
@ -358,8 +373,9 @@ NDArray<double, 3> fit_scurve(NDView<double, 1> x, NDView<double, 3> y, int n_th
}
// - Error
void fit_scurve(NDView<double, 1> x, NDView<double, 1> y, NDView<double, 1> y_err,
NDView<double, 1> par_out, NDView<double, 1> par_err_out, double& chi2) {
void fit_scurve(NDView<double, 1> x, NDView<double, 1> y,
NDView<double, 1> y_err, NDView<double, 1> par_out,
NDView<double, 1> par_err_out, double &chi2) {
// Check that we have the correct sizes
if (y.size() != x.size() || y.size() != y_err.size() ||
@ -380,12 +396,14 @@ void fit_scurve(NDView<double, 1> x, NDView<double, 1> y, NDView<double, 1> y_er
// Calculate chi2
chi2 = 0;
for (ssize_t i = 0; i < y.size(); i++) {
chi2 += std::pow((y(i) - func::pol1(x(i), par_out.data())) / y_err(i), 2);
chi2 +=
std::pow((y(i) - func::pol1(x(i), par_out.data())) / y_err(i), 2);
}
}
void fit_scurve(NDView<double, 1> x, NDView<double, 3> y, NDView<double, 3> y_err,
NDView<double, 3> par_out, NDView<double, 3> par_err_out, NDView<double, 2> chi2_out,
void fit_scurve(NDView<double, 1> x, NDView<double, 3> y,
NDView<double, 3> y_err, NDView<double, 3> par_out,
NDView<double, 3> par_err_out, NDView<double, 2> chi2_out,
int n_threads) {
auto process = [&](ssize_t first_row, ssize_t last_row) {
@ -399,20 +417,20 @@ void fit_scurve(NDView<double, 1> x, NDView<double, 3> y, NDView<double, 3> y_er
NDView<double, 1> par_err_out_view(&par_err_out(row, col, 0),
{par_err_out.shape(2)});
fit_scurve(x, y_view, y_err_view, par_out_view, par_err_out_view, chi2_out(row, col));
fit_scurve(x, y_view, y_err_view, par_out_view,
par_err_out_view, chi2_out(row, col));
}
}
};
auto tasks = split_task(0, y.shape(0), n_threads);
RunInParallel(process, tasks);
}
// SCURVE2 ---
std::array<double, 6> scurve2_init_par(const NDView<double, 1> x, const NDView<double, 1> y){
std::array<double, 6> scurve2_init_par(const NDView<double, 1> x,
const NDView<double, 1> y) {
// Estimate the initial parameters for the fit
std::array<double, 6> start_par{0, 0, 0, 0, 0, 0};
@ -420,7 +438,8 @@ std::array<double, 6> scurve2_init_par(const NDView<double, 1> x, const NDView<d
auto ymin = std::min_element(y.begin(), y.end());
start_par[4] = *ymin + (*ymax - *ymin) / 2;
// Find the first x where the corresponding y value is above the threshold (start_par[4])
// Find the first x where the corresponding y value is above the threshold
// (start_par[4])
for (ssize_t i = 0; i < y.size(); ++i) {
if (y[i] <= start_par[4]) {
start_par[2] = x[i];
@ -446,7 +465,8 @@ NDArray<double, 1> fit_scurve2(NDView<double, 1> x, NDView<double, 1> y) {
return result;
}
NDArray<double, 3> fit_scurve2(NDView<double, 1> x, NDView<double, 3> y, int n_threads) {
NDArray<double, 3> fit_scurve2(NDView<double, 1> x, NDView<double, 3> y,
int n_threads) {
NDArray<double, 3> result({y.shape(0), y.shape(1), 6}, 0);
auto process = [&x, &y, &result](ssize_t first_row, ssize_t last_row) {
@ -470,8 +490,9 @@ NDArray<double, 3> fit_scurve2(NDView<double, 1> x, NDView<double, 3> y, int n_t
}
// - Error
void fit_scurve2(NDView<double, 1> x, NDView<double, 1> y, NDView<double, 1> y_err,
NDView<double, 1> par_out, NDView<double, 1> par_err_out, double& chi2) {
void fit_scurve2(NDView<double, 1> x, NDView<double, 1> y,
NDView<double, 1> y_err, NDView<double, 1> par_out,
NDView<double, 1> par_err_out, double &chi2) {
// Check that we have the correct sizes
if (y.size() != x.size() || y.size() != y_err.size() ||
@ -492,12 +513,14 @@ void fit_scurve2(NDView<double, 1> x, NDView<double, 1> y, NDView<double, 1> y_e
// Calculate chi2
chi2 = 0;
for (ssize_t i = 0; i < y.size(); i++) {
chi2 += std::pow((y(i) - func::pol1(x(i), par_out.data())) / y_err(i), 2);
chi2 +=
std::pow((y(i) - func::pol1(x(i), par_out.data())) / y_err(i), 2);
}
}
void fit_scurve2(NDView<double, 1> x, NDView<double, 3> y, NDView<double, 3> y_err,
NDView<double, 3> par_out, NDView<double, 3> par_err_out, NDView<double, 2> chi2_out,
void fit_scurve2(NDView<double, 1> x, NDView<double, 3> y,
NDView<double, 3> y_err, NDView<double, 3> par_out,
NDView<double, 3> par_err_out, NDView<double, 2> chi2_out,
int n_threads) {
auto process = [&](ssize_t first_row, ssize_t last_row) {
@ -511,15 +534,14 @@ void fit_scurve2(NDView<double, 1> x, NDView<double, 3> y, NDView<double, 3> y_e
NDView<double, 1> par_err_out_view(&par_err_out(row, col, 0),
{par_err_out.shape(2)});
fit_scurve2(x, y_view, y_err_view, par_out_view, par_err_out_view, chi2_out(row, col));
fit_scurve2(x, y_view, y_err_view, par_out_view,
par_err_out_view, chi2_out(row, col));
}
}
};
auto tasks = split_task(0, y.shape(0), n_threads);
RunInParallel(process, tasks);
}
} // namespace aare


@ -29,8 +29,7 @@ uint64_t Frame::size() const { return m_rows * m_cols; }
size_t Frame::bytes() const { return m_rows * m_cols * m_dtype.bytes(); }
std::byte *Frame::data() const { return m_data; }
std::byte *Frame::pixel_ptr(uint32_t row, uint32_t col) const{
std::byte *Frame::pixel_ptr(uint32_t row, uint32_t col) const {
if ((row >= m_rows) || (col >= m_cols)) {
std::cerr << "Invalid row or column index" << '\n';
return nullptr;
@ -38,7 +37,6 @@ std::byte *Frame::pixel_ptr(uint32_t row, uint32_t col) const{
return m_data + (row * m_cols + col) * (m_dtype.bytes());
}
Frame &Frame::operator=(Frame &&other) noexcept {
if (this == &other) {
return *this;
@ -70,5 +68,4 @@ Frame Frame::clone() const {
return frame;
}
} // namespace aare


@ -65,7 +65,8 @@ TEST_CASE("Set a value in a 64 bit frame") {
// only the value we did set should be non-zero
for (size_t i = 0; i < rows; i++) {
for (size_t j = 0; j < cols; j++) {
uint64_t *data = reinterpret_cast<uint64_t *>(frame.pixel_ptr(i, j));
uint64_t *data =
reinterpret_cast<uint64_t *>(frame.pixel_ptr(i, j));
REQUIRE(data != nullptr);
if (i == 5 && j == 7) {
REQUIRE(*data == value);
@ -150,4 +151,3 @@ TEST_CASE("test explicit copy constructor") {
REQUIRE(frame2.bytes() == rows * cols * bitdepth / 8);
REQUIRE(frame2.data() != data);
}

src/Hdf5File.cpp (new file, 234 lines added)

@ -0,0 +1,234 @@
#include "aare/Hdf5File.hpp"
#include "aare/PixelMap.hpp"
#include "aare/defs.hpp"
#include "aare/logger.hpp"
#include <fmt/format.h>
namespace aare {
Hdf5File::Hdf5File(const std::filesystem::path &fname, const std::string &mode)
: m_master(fname) {
m_mode = mode;
if (mode == "r") {
open_data_file();
open_header_files();
} else {
throw std::runtime_error(LOCATION +
"Unsupported mode. Can only read Hdf5Files.");
}
}
Frame Hdf5File::read_frame() { return get_frame(m_current_frame++); }
Frame Hdf5File::read_frame(size_t frame_number) {
seek(frame_number);
return read_frame();
}
std::vector<Frame> Hdf5File::read_n(size_t n_frames) {
// TODO: implement this in a more efficient way
std::vector<Frame> frames;
for (size_t i = 0; i < n_frames; i++) {
frames.push_back(this->get_frame(m_current_frame));
m_current_frame++;
}
return frames;
}
void Hdf5File::read_into(std::byte *image_buf, size_t n_frames) {
get_frame_into(m_current_frame++, image_buf, n_frames);
}
void Hdf5File::read_into(std::byte *image_buf) {
get_frame_into(m_current_frame++, image_buf);
}
void Hdf5File::read_into(std::byte *image_buf, DetectorHeader *header) {
get_frame_into(m_current_frame, image_buf, 1, header);
}
void Hdf5File::read_into(std::byte *image_buf, size_t n_frames,
DetectorHeader *header) {
get_frame_into(m_current_frame++, image_buf, n_frames, header);
}
size_t Hdf5File::frame_number(size_t frame_index) {
// TODO: check if it should check total_Frames() at any point
// check why this->read_into.. as in RawFile
// refactor multiple frame reads into a single one using hyperslab
if (frame_index >= m_master.frames_in_file()) {
throw std::runtime_error(LOCATION + " Frame number out of range");
}
uint64_t fnum{0};
int part_index = 0; // assuming first part
m_header_datasets[0]->get_header_into(frame_index, part_index,
reinterpret_cast<std::byte *>(&fnum));
return fnum;
}
size_t Hdf5File::bytes_per_frame() {
return m_rows * m_cols * m_master.bitdepth() / 8;
}
size_t Hdf5File::pixels_per_frame() { return m_rows * m_cols; }
size_t Hdf5File::bytes_per_pixel() const { return m_master.bitdepth() / 8; }
void Hdf5File::seek(size_t frame_index) {
m_data_dataset->seek(frame_index);
for (size_t i = 0; i != header_dataset_names.size(); ++i) {
m_header_datasets[i]->seek(frame_index);
}
m_current_frame = frame_index;
}
size_t Hdf5File::tell() { return m_current_frame; }
size_t Hdf5File::total_frames() const { return m_total_frames; }
size_t Hdf5File::rows() const { return m_rows; }
size_t Hdf5File::cols() const { return m_cols; }
size_t Hdf5File::bitdepth() const { return m_master.bitdepth(); }
xy Hdf5File::geometry() { return m_master.geometry(); }
size_t Hdf5File::n_modules() const { return m_master.n_modules(); }
Hdf5MasterFile Hdf5File::master() const { return m_master; }
DetectorType Hdf5File::detector_type() const {
return m_master.detector_type();
}
Frame Hdf5File::get_frame(size_t frame_index) {
auto f = Frame(m_rows, m_cols, Dtype::from_bitdepth(m_master.bitdepth()));
std::byte *frame_buffer = f.data();
get_frame_into(frame_index, frame_buffer);
return f;
}
void Hdf5File::get_frame_into(size_t frame_index, std::byte *frame_buffer,
size_t n_frames, DetectorHeader *header) {
if ((frame_index + n_frames - 1) >= m_master.frames_in_file()) {
throw std::runtime_error(LOCATION + "Frame number out of range");
}
get_data_into(frame_index, frame_buffer);
m_current_frame += n_frames;
if (header) {
for (size_t i = 0; i < n_frames; i++) {
for (size_t part_idx = 0; part_idx != m_master.n_modules();
++part_idx) {
get_header_into(frame_index + i, part_idx, header);
header++;
}
}
}
}
void Hdf5File::get_data_into(size_t frame_index, std::byte *frame_buffer,
size_t n_frames) {
m_data_dataset->get_data_into(frame_index, frame_buffer, n_frames);
}
void Hdf5File::get_header_into(size_t frame_index, int part_index,
DetectorHeader *header) {
try {
read_hdf5_header_fields(header, [&](size_t iParameter,
std::byte *dest) {
m_header_datasets[iParameter]->get_header_into(frame_index,
part_index, dest);
});
LOG(logDEBUG5) << "Read 1D header for frame " << frame_index;
} catch (const H5::Exception &e) {
fmt::print("Exception type: {}\n", typeid(e).name());
e.printErrorStack();
throw std::runtime_error(
LOCATION + "\nCould not access header datasets in given file.");
}
}
DetectorHeader Hdf5File::read_header(const std::filesystem::path &fname) {
DetectorHeader h{};
std::vector<std::unique_ptr<H5Handles>> handles;
try {
for (size_t i = 0; i != header_dataset_names.size(); ++i) {
handles.push_back(std::make_unique<H5Handles>(
fname.string(), metadata_group_name + header_dataset_names[i]));
}
read_hdf5_header_fields(&h, [&](size_t iParameter, std::byte *dest) {
handles[iParameter]->get_header_into(0, 0, dest);
});
LOG(logDEBUG5) << "Read 1D header for frame 0";
} catch (const H5::Exception &e) {
handles.clear();
fmt::print("Exception type: {}\n", typeid(e).name());
e.printErrorStack();
throw std::runtime_error(
LOCATION + "\nCould not access header datasets in given file.");
}
return h;
}
Hdf5File::~Hdf5File() {}
const std::string Hdf5File::metadata_group_name = "/entry/data/";
const std::vector<std::string> Hdf5File::header_dataset_names = {
"frame number",
"exp length or sub exposure time",
"packets caught",
"detector specific 1",
"timestamp",
"mod id",
"row",
"column",
"detector specific 2",
"detector specific 3",
"detector specific 4",
"detector type",
"detector header version",
"packets caught bit mask"};
void Hdf5File::open_data_file() {
if (m_mode != "r")
throw std::runtime_error(LOCATION +
"Unsupported mode. Can only read Hdf5 files.");
try {
m_data_dataset = std::make_unique<H5Handles>(
m_master.file_name().string(), metadata_group_name + "/data");
m_total_frames = m_data_dataset->get_dims()[0];
m_rows = m_data_dataset->get_dims()[1];
m_cols = m_data_dataset->get_dims()[2];
// fmt::print("Data Dataset dimensions: frames = {}, rows = {}, cols =
// {}\n",
// m_total_frames, m_rows, m_cols);
} catch (const H5::Exception &e) {
m_data_dataset.reset();
fmt::print("Exception type: {}\n", typeid(e).name());
e.printErrorStack();
throw std::runtime_error(
LOCATION + "\nCould not access 'data' dataset in master file.");
}
}
void Hdf5File::open_header_files() {
if (m_mode != "r")
throw std::runtime_error(LOCATION +
"Unsupported mode. Can only read Hdf5 files.");
try {
for (size_t i = 0; i != header_dataset_names.size(); ++i) {
m_header_datasets.push_back(std::make_unique<H5Handles>(
m_master.file_name().string(),
metadata_group_name + header_dataset_names[i]));
LOG(logDEBUG) << header_dataset_names[i]
<< " Dataset dimensions: size = "
<< m_header_datasets[i]->get_dims()[0];
}
} catch (const H5::Exception &e) {
m_header_datasets.clear();
m_data_dataset.reset();
fmt::print("Exception type: {}\n", typeid(e).name());
e.printErrorStack();
throw std::runtime_error(
LOCATION + "\nCould not access header datasets in master file.");
}
}
} // namespace aare
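For orientation, a minimal usage sketch of the read API introduced above (not part of the commit): the master file name is taken from the tests further down, everything else is illustrative.

#include "aare/Hdf5File.hpp"
#include <cstddef>
#include <fmt/format.h>
#include <vector>

int main() {
    // "r" is currently the only supported mode; the constructor opens the data
    // and header datasets referenced by the master file.
    aare::Hdf5File f("two_modules_master_0.h5", "r");
    fmt::print("frames: {}, rows: {}, cols: {}, bitdepth: {}\n",
               f.total_frames(), f.rows(), f.cols(), f.bitdepth());

    aare::Frame frame = f.read_frame(); // reads at the current position and advances it
    f.seek(0);                          // random access via seek() or read_frame(index)

    // Read directly into a caller-owned buffer instead of allocating a Frame.
    std::vector<std::byte> buf(f.bytes_per_frame());
    f.read_into(buf.data());
    return 0;
}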

src/Hdf5File.test.cpp (new, empty file)

src/Hdf5MasterFile.cpp (new file, 565 lines)

@ -0,0 +1,565 @@
#include "aare/Hdf5MasterFile.hpp"
#include "aare/logger.hpp"
#include "aare/to_string.hpp"
#include <iomanip>
#include <sstream>
namespace aare {
Hdf5MasterFile::Hdf5MasterFile(const std::filesystem::path &fpath)
: m_file_name(fpath) {
if (!std::filesystem::exists(fpath)) {
throw std::runtime_error(LOCATION + " File does not exist");
}
parse_acquisition_metadata(fpath);
}
std::filesystem::path Hdf5MasterFile::file_name() const {
return m_file_name;
}
const std::string &Hdf5MasterFile::version() const { return m_version; }
const DetectorType &Hdf5MasterFile::detector_type() const { return m_type; }
const TimingMode &Hdf5MasterFile::timing_mode() const { return m_timing_mode; }
xy Hdf5MasterFile::geometry() const { return m_geometry; }
int Hdf5MasterFile::image_size_in_bytes() const {
return m_image_size_in_bytes;
}
int Hdf5MasterFile::pixels_y() const { return m_pixels_y; }
int Hdf5MasterFile::pixels_x() const { return m_pixels_x; }
int Hdf5MasterFile::max_frames_per_file() const {
return m_max_frames_per_file;
}
const FrameDiscardPolicy &Hdf5MasterFile::frame_discard_policy() const {
return m_frame_discard_policy;
}
int Hdf5MasterFile::frame_padding() const { return m_frame_padding; }
std::optional<ScanParameters> Hdf5MasterFile::scan_parameters() const {
return m_scan_parameters;
}
size_t Hdf5MasterFile::total_frames_expected() const {
return m_total_frames_expected;
}
std::optional<ns> Hdf5MasterFile::exptime() const { return m_exptime; }
std::optional<ns> Hdf5MasterFile::period() const { return m_period; }
std::optional<BurstMode> Hdf5MasterFile::burst_mode() const {
return m_burst_mode;
}
std::optional<int> Hdf5MasterFile::number_of_udp_interfaces() const {
return m_number_of_udp_interfaces;
}
int Hdf5MasterFile::bitdepth() const { return m_bitdepth; }
std::optional<bool> Hdf5MasterFile::ten_giga() const { return m_ten_giga; }
std::optional<int> Hdf5MasterFile::threshold_energy() const {
return m_threshold_energy;
}
std::optional<std::vector<int>> Hdf5MasterFile::threshold_energy_all() const {
return m_threshold_energy_all;
}
std::optional<ns> Hdf5MasterFile::subexptime() const { return m_subexptime; }
std::optional<ns> Hdf5MasterFile::subperiod() const { return m_subperiod; }
std::optional<bool> Hdf5MasterFile::quad() const { return m_quad; }
std::optional<int> Hdf5MasterFile::number_of_rows() const {
return m_number_of_rows;
}
std::optional<std::vector<size_t>> Hdf5MasterFile::rate_corrections() const {
return m_rate_corrections;
}
std::optional<uint32_t> Hdf5MasterFile::adc_mask() const { return m_adc_mask; }
bool Hdf5MasterFile::analog_flag() const { return m_analog_flag; }
std::optional<int> Hdf5MasterFile::analog_samples() const {
return m_analog_samples;
}
bool Hdf5MasterFile::digital_flag() const { return m_digital_flag; }
std::optional<int> Hdf5MasterFile::digital_samples() const {
return m_digital_samples;
}
std::optional<int> Hdf5MasterFile::dbit_offset() const { return m_dbit_offset; }
std::optional<size_t> Hdf5MasterFile::dbit_list() const { return m_dbit_list; }
std::optional<int> Hdf5MasterFile::transceiver_mask() const {
return m_transceiver_mask;
}
bool Hdf5MasterFile::transceiver_flag() const { return m_transceiver_flag; }
std::optional<int> Hdf5MasterFile::transceiver_samples() const {
return m_transceiver_samples;
}
// g1 roi
std::optional<ROI> Hdf5MasterFile::roi() const { return m_roi; }
std::optional<int> Hdf5MasterFile::counter_mask() const {
return m_counter_mask;
}
std::optional<std::vector<ns>> Hdf5MasterFile::exptime_array() const {
return m_exptime_array;
}
std::optional<std::vector<ns>> Hdf5MasterFile::gate_delay_array() const {
return m_gate_delay_array;
}
std::optional<int> Hdf5MasterFile::gates() const { return m_gates; }
std::optional<std::map<std::string, std::string>>
Hdf5MasterFile::additional_json_header() const {
return m_additional_json_header;
}
size_t Hdf5MasterFile::frames_in_file() const { return m_frames_in_file; }
size_t Hdf5MasterFile::n_modules() const {
return m_geometry.row * m_geometry.col;
}
// optional values, these may or may not be present in the master file
// and are therefore modeled as std::optional
const std::string Hdf5MasterFile::metadata_group_name =
"/entry/instrument/detector/";
template <typename T>
T Hdf5MasterFile::h5_read_scalar_dataset(const H5::DataSet &dataset,
const H5::DataType &data_type) {
T value;
dataset.read(&value, data_type);
return value;
}
template <>
std::string Hdf5MasterFile::h5_read_scalar_dataset<std::string>(
const H5::DataSet &dataset, const H5::DataType &data_type) {
size_t size = data_type.getSize();
std::vector<char> buffer(size + 1, 0);
dataset.read(buffer.data(), data_type);
return std::string(buffer.data());
}
template <typename T>
T Hdf5MasterFile::h5_get_scalar_dataset(const H5::H5File &file,
const std::string &dataset_name) {
H5::DataSet dataset = file.openDataSet(dataset_name);
H5::DataSpace dataspace = dataset.getSpace();
if (dataspace.getSimpleExtentNdims() != 0) {
throw std::runtime_error(LOCATION + "Expected " + dataset_name +
" to be a scalar dataset");
}
H5::DataType data_type = dataset.getDataType();
return h5_read_scalar_dataset<T>(dataset, data_type);
}
void Hdf5MasterFile::parse_acquisition_metadata(
const std::filesystem::path &fpath) {
try {
H5::H5File file(fpath, H5F_ACC_RDONLY);
// Attribute - version
double dVersion{0.0};
{
H5::Attribute attr = file.openAttribute("version");
H5::DataType attr_type = attr.getDataType();
attr.read(attr_type, &dVersion);
std::ostringstream oss;
oss << std::fixed << std::setprecision(1) << dVersion;
m_version = oss.str();
LOG(logDEBUG) << "Version: " << m_version;
}
// Scalar Dataset
H5::Exception::dontPrint();
// Detector Type
m_type = StringTo<DetectorType>(h5_get_scalar_dataset<std::string>(
file, std::string(metadata_group_name + "Detector Type")));
LOG(logDEBUG) << "Detector Type: " << ToString(m_type);
// Timing Mode
m_timing_mode = StringTo<TimingMode>(h5_get_scalar_dataset<std::string>(
file, std::string(metadata_group_name + "Timing Mode")));
LOG(logDEBUG) << "Timing Mode: " << ToString(m_timing_mode);
// Geometry
m_geometry.row = h5_get_scalar_dataset<int>(
file, std::string(metadata_group_name + "Geometry in y axis"));
m_geometry.col = h5_get_scalar_dataset<int>(
file, std::string(metadata_group_name + "Geometry in x axis"));
LOG(logDEBUG) << "Geometry: " << m_geometry.to_string();
// Image Size
m_image_size_in_bytes = h5_get_scalar_dataset<int>(
file, std::string(metadata_group_name + "Image Size"));
LOG(logDEBUG) << "Image size: " << m_image_size_in_bytes;
// Pixels y
m_pixels_y = h5_get_scalar_dataset<int>(
file,
std::string(metadata_group_name + "Number of pixels in y axis"));
LOG(logDEBUG) << "Pixels in y: " << m_pixels_y;
// Pixels x
m_pixels_x = h5_get_scalar_dataset<int>(
file,
std::string(metadata_group_name + "Number of pixels in x axis"));
LOG(logDEBUG) << "Pixels in x: " << m_pixels_x;
// Max Frames Per File
m_max_frames_per_file = h5_get_scalar_dataset<int>(
file, std::string(metadata_group_name + "Maximum frames per file"));
LOG(logDEBUG) << "Max frames per File: " << m_max_frames_per_file;
// Frame Discard Policy
m_frame_discard_policy =
StringTo<FrameDiscardPolicy>(h5_get_scalar_dataset<std::string>(
file,
std::string(metadata_group_name + "Frame Discard Policy")));
LOG(logDEBUG) << "Frame Discard Policy: "
<< ToString(m_frame_discard_policy);
// Frame Padding
m_frame_padding = h5_get_scalar_dataset<int>(
file, std::string(metadata_group_name + "Frame Padding"));
LOG(logDEBUG) << "Frame Padding: " << m_frame_padding;
// Scan Parameters
try {
std::string scan_parameters = h5_get_scalar_dataset<std::string>(
file, std::string(metadata_group_name + "Scan Parameters"));
m_scan_parameters = ScanParameters(scan_parameters);
if (dVersion < 6.61) {
m_scan_parameters
->increment_stop(); // adjust for endpoint being included
}
LOG(logDEBUG) << "Scan Parameters: " << ToString(m_scan_parameters);
} catch (H5::FileIException &e) {
// keep the optional empty
}
// Total Frames Expected
m_total_frames_expected = h5_get_scalar_dataset<size_t>(
file, std::string(metadata_group_name + "Total Frames"));
LOG(logDEBUG) << "Total Frames: " << m_total_frames_expected;
// Exptime
try {
m_exptime = StringTo<ns>(h5_get_scalar_dataset<std::string>(
file, std::string(metadata_group_name + "Exposure Time")));
LOG(logDEBUG) << "Exptime: " << ToString(m_exptime);
} catch (H5::FileIException &e) {
// keep the optional empty
}
// Period
try {
m_period = StringTo<ns>(h5_get_scalar_dataset<std::string>(
file, std::string(metadata_group_name + "Acquisition Period")));
LOG(logDEBUG) << "Period: " << ToString(m_period);
} catch (H5::FileIException &e) {
// keep the optional empty
}
// burst mode
try {
m_burst_mode =
StringTo<BurstMode>(h5_get_scalar_dataset<std::string>(
file, std::string(metadata_group_name + "Burst Mode")));
LOG(logDEBUG) << "Burst Mode: " << ToString(m_burst_mode);
} catch (H5::FileIException &e) {
// keep the optional empty
}
// Number of UDP Interfaces
// Not all detectors write the Number of UDP Interfaces, so read it only if present
try {
m_number_of_udp_interfaces = h5_get_scalar_dataset<int>(
file,
std::string(metadata_group_name + "Number of UDP Interfaces"));
LOG(logDEBUG) << "Number of UDP Interfaces: "
<< m_number_of_udp_interfaces;
} catch (H5::FileIException &e) {
// keep the optional empty
}
// Bit Depth
// Not all detectors write the bitdepth; if it is
// not present it defaults to 16
try {
m_bitdepth = h5_get_scalar_dataset<int>(
file, std::string(metadata_group_name + "Dynamic Range"));
LOG(logDEBUG) << "Bit Depth: " << m_bitdepth;
} catch (H5::FileIException &e) {
m_bitdepth = 16;
}
// Ten Giga
try {
m_ten_giga = h5_get_scalar_dataset<bool>(
file, std::string(metadata_group_name + "Ten Giga Enable"));
LOG(logDEBUG) << "Ten Giga Enable: " << m_ten_giga;
} catch (H5::FileIException &e) {
// keep the optional empty
}
// Threshold Energy
try {
m_threshold_energy = h5_get_scalar_dataset<int>(
file, std::string(metadata_group_name + "Threshold Energy"));
LOG(logDEBUG) << "Threshold Energy: " << m_threshold_energy;
} catch (H5::FileIException &e) {
// keep the optional empty
}
// Threshold All Energy
try {
m_threshold_energy_all =
StringTo<std::vector<int>>(h5_get_scalar_dataset<std::string>(
file,
std::string(metadata_group_name + "Threshold Energies")));
LOG(logDEBUG) << "Threshold Energies: "
<< ToString(m_threshold_energy_all);
} catch (H5::FileIException &e) {
// keep the optional empty
}
// Subexptime
try {
m_subexptime = StringTo<ns>(h5_get_scalar_dataset<std::string>(
file, std::string(metadata_group_name + "Sub Exposure Time")));
LOG(logDEBUG) << "Subexptime: " << ToString(m_subexptime);
} catch (H5::FileIException &e) {
// keep the optional empty
}
// Subperiod
try {
m_subperiod = StringTo<ns>(h5_get_scalar_dataset<std::string>(
file, std::string(metadata_group_name + "Sub Period")));
LOG(logDEBUG) << "Subperiod: " << ToString(m_subperiod);
} catch (H5::FileIException &e) {
// keep the optional empty
}
// Quad
try {
m_quad = h5_get_scalar_dataset<bool>(
file, std::string(metadata_group_name + "Quad"));
LOG(logDEBUG) << "Quad: " << m_quad;
} catch (H5::FileIException &e) {
// keep the optional empty
}
// Number of Rows
// Not all detectors write the Number of rows, so read it only if present
try {
m_number_of_rows = h5_get_scalar_dataset<int>(
file, std::string(metadata_group_name + "Number of rows"));
LOG(logDEBUG) << "Number of rows: " << m_number_of_rows;
} catch (H5::FileIException &e) {
// keep the optional empty
}
// Rate Corrections
try {
m_rate_corrections = StringTo<std::vector<size_t>>(
h5_get_scalar_dataset<std::string>(
file,
std::string(metadata_group_name + "Rate Corrections")));
LOG(logDEBUG) << "Rate Corrections: "
<< ToString(m_rate_corrections);
} catch (H5::FileIException &e) {
// keep the optional empty
}
// ADC Mask
try {
m_adc_mask = h5_get_scalar_dataset<uint32_t>(
file, std::string(metadata_group_name + "ADC Mask"));
LOG(logDEBUG) << "ADC Mask: " << m_adc_mask;
} catch (H5::FileIException &e) {
// keep the optional empty
}
// Analog Flag
// ----------------------------------------------------------------
// Special treatment of analog flag because of Moench03
try {
m_analog_flag = h5_get_scalar_dataset<uint8_t>(
file, std::string(metadata_group_name + "Analog Flag"));
LOG(logDEBUG) << "Analog Flag: " << m_analog_flag;
} catch (H5::FileIException &e) {
// if it doesn't work, still set it to 1
// to try to decode analog samples (Old Moench03)
m_analog_flag = 1;
}
// Analog Samples
try {
if (m_analog_flag) {
m_analog_samples = h5_get_scalar_dataset<int>(
file, std::string(metadata_group_name + "Analog Samples"));
LOG(logDEBUG) << "Analog Samples: " << m_analog_samples;
}
} catch (H5::FileIException &e) {
// keep the optional empty
// and set analog flag to 0
m_analog_flag = false;
}
//-----------------------------------------------------------------
// Digital Flag, Digital Samples
try {
m_digital_flag = h5_get_scalar_dataset<bool>(
file, std::string(metadata_group_name + "Digital Flag"));
LOG(logDEBUG) << "Digital Flag: " << m_digital_flag;
if (m_digital_flag) {
m_digital_samples = h5_get_scalar_dataset<int>(
file, std::string(metadata_group_name + "Digital Samples"));
}
LOG(logDEBUG) << "Digital Samples: " << m_digital_samples;
} catch (H5::FileIException &e) {
m_digital_flag = false;
}
// Dbit Offset
try {
m_dbit_offset = h5_get_scalar_dataset<int>(
file, std::string(metadata_group_name + "Dbit Offset"));
LOG(logDEBUG) << "Dbit Offset: " << m_dbit_offset;
} catch (H5::FileIException &e) {
// keep the optional empty
}
// Dbit List
try {
m_dbit_list = h5_get_scalar_dataset<size_t>(
file, std::string(metadata_group_name + "Dbit Bitset List"));
LOG(logDEBUG) << "Dbit list: " << m_dbit_list;
} catch (H5::FileIException &e) {
// keep the optional empty
}
// Transceiver Mask
try {
m_transceiver_mask = h5_get_scalar_dataset<int>(
file, std::string(metadata_group_name + "Transceiver Mask"));
LOG(logDEBUG) << "Transceiver Mask: " << m_transceiver_mask;
} catch (H5::FileIException &e) {
// keep the optional empty
}
// Transceiver Flag, Transceiver Samples
try {
m_transceiver_flag = h5_get_scalar_dataset<bool>(
file, std::string(metadata_group_name + "Transceiver Flag"));
LOG(logDEBUG) << "Transceiver Flag: " << m_transceiver_flag;
if (m_transceiver_flag) {
m_transceiver_samples = h5_get_scalar_dataset<int>(
file,
std::string(metadata_group_name + "Transceiver Samples"));
LOG(logDEBUG)
<< "Transceiver Samples: " << m_transceiver_samples;
}
} catch (H5::FileIException &e) {
m_transceiver_flag = false;
}
// Rx ROI
try {
ROI tmp_roi;
tmp_roi.xmin = h5_get_scalar_dataset<int>(
file, std::string(metadata_group_name + "receiver roi xmin"));
tmp_roi.xmax = h5_get_scalar_dataset<int>(
file, std::string(metadata_group_name + "receiver roi xmax"));
tmp_roi.ymin = h5_get_scalar_dataset<int>(
file, std::string(metadata_group_name + "receiver roi ymin"));
tmp_roi.ymax = h5_get_scalar_dataset<int>(
file, std::string(metadata_group_name + "receiver roi ymax"));
// if any of the values are set update the roi
if (tmp_roi.xmin != -1 || tmp_roi.xmax != -1 ||
tmp_roi.ymin != -1 || tmp_roi.ymax != -1) {
// why?? TODO
//if (dVersion < 6.6) {
tmp_roi.xmax++;
tmp_roi.ymax++;
//}
m_roi = tmp_roi;
}
LOG(logDEBUG) << "ROI: " << m_roi;
} catch (H5::FileIException &e) {
// keep the optional empty
}
// Update detector type for Moench
// TODO! How does this work with old .h5 master files?
#ifdef AARE_VERBOSE
fmt::print("Detecting Moench03: m_pixels_y: {}, "
"m_analog_samples: {}\n",
m_pixels_y, m_analog_samples.value_or(0));
#endif
if (m_type == DetectorType::Moench && !m_analog_samples &&
m_pixels_y == 400) {
m_type = DetectorType::Moench03;
} else if (m_type == DetectorType::Moench && m_pixels_y == 400 &&
m_analog_samples == 5000) {
m_type = DetectorType::Moench03_old;
}
// Counter Mask
try {
m_counter_mask = h5_get_scalar_dataset<int>(
file, std::string(metadata_group_name + "Counter Mask"));
LOG(logDEBUG) << "Counter Mask: " << m_counter_mask;
} catch (H5::FileIException &e) {
// keep the optional empty
}
// Exposure Time Array
try {
m_exptime_array =
StringTo<std::vector<ns>>(h5_get_scalar_dataset<std::string>(
file, std::string(metadata_group_name + "Exposure Times")));
LOG(logDEBUG) << "Exposure Times: " << ToString(m_exptime_array);
} catch (H5::FileIException &e) {
// keep the optional empty
}
// Gate Delay Array
try {
m_gate_delay_array =
StringTo<std::vector<ns>>(h5_get_scalar_dataset<std::string>(
file, std::string(metadata_group_name + "Gate Delays")));
LOG(logDEBUG) << "Gate Delays: " << ToString(m_gate_delay_array);
} catch (H5::FileIException &e) {
// keep the optional empty
}
// Gates
try {
m_gates = h5_get_scalar_dataset<int>(
file, std::string(metadata_group_name + "Gates"));
LOG(logDEBUG) << "Gates: " << m_gates;
} catch (H5::FileIException &e) {
// keep the optional empty
}
// Additional Json Header
try {
m_additional_json_header =
StringTo<std::map<std::string, std::string>>(
h5_get_scalar_dataset<std::string>(
file, std::string(metadata_group_name +
"Additional JSON Header")));
LOG(logDEBUG) << "Additional JSON Header: "
<< ToString(m_additional_json_header);
} catch (H5::FileIException &e) {
// keep the optional empty
}
// Frames in File
m_frames_in_file = h5_get_scalar_dataset<size_t>(
file, std::string(metadata_group_name + "Frames in File"));
LOG(logDEBUG) << "Frames in File: " << m_frames_in_file;
H5Eset_auto(H5E_DEFAULT, reinterpret_cast<H5E_auto2_t>(H5Eprint2),
stderr);
} catch (const H5::Exception &e) {
fmt::print("Exception type: {}\n", typeid(e).name());
e.printErrorStack();
throw std::runtime_error(LOCATION + "\nCould not parse master file");
}
}
} // namespace aare
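As a hedged sketch (not part of the commit), this is how the parsed metadata is meant to be consumed: mandatory fields are plain getters, while detector-dependent fields come back as std::optional and stay empty when the corresponding dataset was absent (the try/catch blocks above). The file name is illustrative and aare::ns is assumed to be a std::chrono duration.

#include "aare/Hdf5MasterFile.hpp"
#include <fmt/format.h>

int main() {
    aare::Hdf5MasterFile m("two_modules_master_0.h5");

    // Always present
    fmt::print("frames in file: {}, bitdepth: {}\n", m.frames_in_file(),
               m.bitdepth());

    // Present only for some detectors / acquisitions
    if (auto t = m.exptime())
        fmt::print("exptime: {} ns\n", t->count());
    if (!m.threshold_energy())
        fmt::print("no threshold energy stored for this detector\n");
    return 0;
}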

src/Hdf5MasterFile.test.cpp (new file, 168 lines)

@ -0,0 +1,168 @@
#include "aare/Hdf5MasterFile.hpp"
#include "aare/to_string.hpp"
#include "test_config.hpp"
#include <catch2/catch_test_macros.hpp>
using namespace aare;
TEST_CASE("Parse a multi module jungfrau master file in .h5 format", "[.integration][.hdf5]") {
auto fpath = test_data_path() / "hdf5" / "virtual" / "jungfrau" /
"two_modules_master_0.h5";
REQUIRE(std::filesystem::exists(fpath));
Hdf5MasterFile f(fpath);
REQUIRE(f.version() == "6.6");
// "Timestamp": "Tue Feb 20 08:28:24 2024",
REQUIRE(f.detector_type() == DetectorType::Jungfrau);
REQUIRE(f.timing_mode() == TimingMode::Auto);
REQUIRE(f.geometry().col == 1);
REQUIRE(f.geometry().row == 2);
REQUIRE(f.image_size_in_bytes() == 1048576);
REQUIRE(f.pixels_x() == 1024);
REQUIRE(f.pixels_y() == 512);
REQUIRE(f.max_frames_per_file() == 10000);
REQUIRE(f.frame_discard_policy() == FrameDiscardPolicy::NoDiscard);
REQUIRE(f.frame_padding() == 1);
REQUIRE(f.scan_parameters()->enabled() == false);
REQUIRE(f.total_frames_expected() == 5);
REQUIRE(f.exptime() == std::chrono::microseconds(10));
REQUIRE(f.period() == std::chrono::milliseconds(2));
REQUIRE_FALSE(f.burst_mode().has_value());
REQUIRE(f.number_of_udp_interfaces() == 1);
// Jungfrau doesn't write the bitdepth, but it defaults to 16
REQUIRE(f.bitdepth() == 16);
REQUIRE_FALSE(f.ten_giga().has_value());
REQUIRE_FALSE(f.threshold_energy().has_value());
REQUIRE_FALSE(f.threshold_energy_all().has_value());
REQUIRE_FALSE(f.subexptime().has_value());
REQUIRE_FALSE(f.subperiod().has_value());
REQUIRE_FALSE(f.quad().has_value());
REQUIRE(f.number_of_rows() == 512);
REQUIRE_FALSE(f.rate_corrections().has_value());
REQUIRE_FALSE(f.adc_mask().has_value());
REQUIRE_FALSE(f.analog_flag());
REQUIRE_FALSE(f.analog_samples().has_value());
REQUIRE_FALSE(f.digital_flag());
REQUIRE_FALSE(f.digital_samples().has_value());
REQUIRE_FALSE(f.dbit_offset().has_value());
REQUIRE_FALSE(f.dbit_list().has_value());
REQUIRE_FALSE(f.transceiver_mask().has_value());
REQUIRE_FALSE(f.transceiver_flag());
REQUIRE_FALSE(f.transceiver_samples().has_value());
REQUIRE_FALSE(f.roi().has_value());
REQUIRE_FALSE(f.counter_mask().has_value());
REQUIRE_FALSE(f.exptime_array().has_value());
REQUIRE_FALSE(f.gate_delay_array().has_value());
REQUIRE_FALSE(f.gates().has_value());
REQUIRE_FALSE(f.additional_json_header().has_value());
REQUIRE(f.frames_in_file() == 5);
REQUIRE(f.n_modules() == 2);
}
TEST_CASE("Parse a single module jungfrau master file in .h5 format", "[.integration][.hdf5]") {
auto fpath = test_data_path() / "hdf5" / "virtual" / "jungfrau" /
"single_module_master_2.h5";
REQUIRE(std::filesystem::exists(fpath));
Hdf5MasterFile f(fpath);
REQUIRE(f.version() == "6.6");
// "Timestamp": "Tue Feb 20 08:28:24 2024",
REQUIRE(f.detector_type() == DetectorType::Jungfrau);
REQUIRE(f.timing_mode() == TimingMode::Auto);
REQUIRE(f.geometry().col == 1);
REQUIRE(f.geometry().row == 1);
REQUIRE(f.image_size_in_bytes() == 1048576);
REQUIRE(f.pixels_x() == 1024);
REQUIRE(f.pixels_y() == 512);
REQUIRE(f.max_frames_per_file() == 10000);
REQUIRE(f.frame_discard_policy() == FrameDiscardPolicy::NoDiscard);
REQUIRE(f.frame_padding() == 1);
REQUIRE(f.scan_parameters()->enabled() == false);
REQUIRE(f.total_frames_expected() == 5);
REQUIRE(f.exptime() == std::chrono::microseconds(10));
REQUIRE(f.period() == std::chrono::milliseconds(2));
REQUIRE_FALSE(f.burst_mode().has_value());
REQUIRE(f.number_of_udp_interfaces() == 1);
// Jungfrau doesn't write the bitdepth, but it defaults to 16
REQUIRE(f.bitdepth() == 16);
REQUIRE_FALSE(f.ten_giga().has_value());
REQUIRE_FALSE(f.threshold_energy().has_value());
REQUIRE_FALSE(f.threshold_energy_all().has_value());
REQUIRE_FALSE(f.subexptime().has_value());
REQUIRE_FALSE(f.subperiod().has_value());
REQUIRE_FALSE(f.quad().has_value());
REQUIRE(f.number_of_rows() == 512);
REQUIRE_FALSE(f.rate_corrections().has_value());
REQUIRE_FALSE(f.adc_mask().has_value());
REQUIRE_FALSE(f.analog_flag());
REQUIRE_FALSE(f.analog_samples().has_value());
REQUIRE_FALSE(f.digital_flag());
REQUIRE_FALSE(f.digital_samples().has_value());
REQUIRE_FALSE(f.dbit_offset().has_value());
REQUIRE_FALSE(f.dbit_list().has_value());
REQUIRE_FALSE(f.transceiver_mask().has_value());
REQUIRE_FALSE(f.transceiver_flag());
REQUIRE_FALSE(f.transceiver_samples().has_value());
REQUIRE_FALSE(f.roi().has_value());
REQUIRE_FALSE(f.counter_mask().has_value());
REQUIRE_FALSE(f.exptime_array().has_value());
REQUIRE_FALSE(f.gate_delay_array().has_value());
REQUIRE_FALSE(f.gates().has_value());
REQUIRE_FALSE(f.additional_json_header().has_value());
REQUIRE(f.frames_in_file() == 5);
REQUIRE(f.n_modules() == 1);
}
TEST_CASE("Parse a mythen3 master file in .h5 format", "[.integration][.hdf5]") {
auto fpath = test_data_path() / "hdf5" / "virtual" / "mythen3" /
"one_module_master_0.h5";
REQUIRE(std::filesystem::exists(fpath));
Hdf5MasterFile f(fpath);
REQUIRE(f.version() == "6.7");
// "Timestamp": "Tue Feb 20 08:28:24 2024",
REQUIRE(f.detector_type() == DetectorType::Mythen3);
REQUIRE(f.timing_mode() == TimingMode::Auto);
REQUIRE(f.geometry().col == 1);
REQUIRE(f.geometry().row == 1);
REQUIRE(f.image_size_in_bytes() == 15360);
REQUIRE(f.pixels_x() == 3840);
REQUIRE(f.pixels_y() == 1);
REQUIRE(f.max_frames_per_file() == 10000);
REQUIRE(f.frame_discard_policy() == FrameDiscardPolicy::NoDiscard);
REQUIRE(f.frame_padding() == 1);
REQUIRE(f.scan_parameters()->enabled() == false);
REQUIRE(f.total_frames_expected() == 1);
REQUIRE_FALSE(f.exptime().has_value());
REQUIRE(f.period() == std::chrono::nanoseconds(0));
REQUIRE_FALSE(f.burst_mode().has_value());
REQUIRE_FALSE(f.number_of_udp_interfaces().has_value());
REQUIRE(f.bitdepth() == 32);
REQUIRE(f.ten_giga() == 1);
REQUIRE_FALSE(f.threshold_energy().has_value());
REQUIRE(ToString(f.threshold_energy_all()) == "[-1, -1, -1]");
REQUIRE_FALSE(f.subexptime().has_value());
REQUIRE_FALSE(f.subperiod().has_value());
REQUIRE_FALSE(f.quad().has_value());
REQUIRE_FALSE(f.number_of_rows().has_value());
REQUIRE_FALSE(f.rate_corrections().has_value());
REQUIRE_FALSE(f.adc_mask().has_value());
REQUIRE_FALSE(f.analog_flag());
REQUIRE_FALSE(f.analog_samples().has_value());
REQUIRE_FALSE(f.digital_flag());
REQUIRE_FALSE(f.digital_samples().has_value());
REQUIRE_FALSE(f.dbit_offset().has_value());
REQUIRE_FALSE(f.dbit_list().has_value());
REQUIRE_FALSE(f.transceiver_mask().has_value());
REQUIRE_FALSE(f.transceiver_flag());
REQUIRE_FALSE(f.transceiver_samples().has_value());
REQUIRE_FALSE(f.roi().has_value());
REQUIRE(f.counter_mask() == 0x7);
REQUIRE(ToString(f.exptime_array()) == "[0.1s, 0.1s, 0.1s]");
REQUIRE(ToString(f.gate_delay_array()) == "[0ns, 0ns, 0ns]");
REQUIRE(f.gates() == 1);
REQUIRE_FALSE(f.additional_json_header().has_value());
REQUIRE(f.frames_in_file() == 1);
REQUIRE(f.n_modules() == 1);
}


@ -19,16 +19,15 @@ JungfrauDataFile::JungfrauDataFile(const std::filesystem::path &fname) {
open_file(m_current_file_index);
}
// FileInterface
Frame JungfrauDataFile::read_frame(){
Frame JungfrauDataFile::read_frame() {
Frame f(rows(), cols(), Dtype::UINT16);
read_into(reinterpret_cast<std::byte *>(f.data()), nullptr);
return f;
}
Frame JungfrauDataFile::read_frame(size_t frame_number){
Frame JungfrauDataFile::read_frame(size_t frame_number) {
seek(frame_number);
Frame f(rows(), cols(), Dtype::UINT16);
read_into(reinterpret_cast<std::byte *>(f.data()), nullptr);
@ -37,7 +36,7 @@ Frame JungfrauDataFile::read_frame(size_t frame_number){
std::vector<Frame> JungfrauDataFile::read_n(size_t n_frames) {
std::vector<Frame> frames;
for(size_t i = 0; i < n_frames; ++i){
for (size_t i = 0; i < n_frames; ++i) {
frames.push_back(read_frame());
}
return frames;
@ -59,7 +58,9 @@ std::array<ssize_t, 2> JungfrauDataFile::shape() const {
return {static_cast<ssize_t>(rows()), static_cast<ssize_t>(cols())};
}
DetectorType JungfrauDataFile::detector_type() const { return DetectorType::Jungfrau; }
DetectorType JungfrauDataFile::detector_type() const {
return DetectorType::Jungfrau;
}
std::string JungfrauDataFile::base_name() const { return m_base_name; }
@ -196,21 +197,22 @@ void JungfrauDataFile::read_into(std::byte *image_buf, size_t n_frames,
if (header) {
for (size_t i = 0; i < n_frames; ++i)
read_into(image_buf + i * m_bytes_per_frame, header + i);
}else{
} else {
for (size_t i = 0; i < n_frames; ++i)
read_into(image_buf + i * m_bytes_per_frame, nullptr);
}
}
void JungfrauDataFile::read_into(NDArray<uint16_t>* image, JungfrauDataHeader* header) {
if(image->shape()!=shape()){
throw std::runtime_error(LOCATION +
"Image shape does not match file size: " + std::to_string(rows()) + "x" + std::to_string(cols()));
void JungfrauDataFile::read_into(NDArray<uint16_t> *image,
JungfrauDataHeader *header) {
if (image->shape() != shape()) {
throw std::runtime_error(
LOCATION + "Image shape does not match file size: " +
std::to_string(rows()) + "x" + std::to_string(cols()));
}
read_into(reinterpret_cast<std::byte *>(image->data()), header);
}
JungfrauDataHeader JungfrauDataFile::read_header() {
JungfrauDataHeader header;
if (auto rc = fread(&header, 1, sizeof(header), m_fp.get());


@ -1,14 +1,14 @@
#include "aare/JungfrauDataFile.hpp"
#include <catch2/catch_test_macros.hpp>
#include "test_config.hpp"
#include <catch2/catch_test_macros.hpp>
using aare::JungfrauDataFile;
using aare::JungfrauDataHeader;
TEST_CASE("Open a Jungfrau data file", "[.files]") {
//we know we have 4 files with 7, 7, 7, and 3 frames
//firs frame number if 1 and the bunch id is frame_number**2
//so we can check the header
// we know we have 4 files with 7, 7, 7, and 3 frames
// first frame number is 1 and the bunch id is frame_number**2
// so we can check the header
auto fpath = test_data_path() / "dat" / "AldoJF500k_000000.dat";
REQUIRE(std::filesystem::exists(fpath));
@ -25,7 +25,7 @@ TEST_CASE("Open a Jungfrau data file", "[.files]") {
REQUIRE(f.total_frames() == 24);
REQUIRE(f.current_file() == fpath);
//Check that the frame number and buch id is read correctly
// Check that the frame number and bunch id are read correctly
for (size_t i = 0; i < 24; ++i) {
JungfrauDataHeader header;
aare::NDArray<uint16_t> image(f.shape());
@ -37,65 +37,64 @@ TEST_CASE("Open a Jungfrau data file", "[.files]") {
}
}
TEST_CASE("Seek in a JungfrauDataFile", "[.files]"){
TEST_CASE("Seek in a JungfrauDataFile", "[.files]") {
auto fpath = test_data_path() / "dat" / "AldoJF65k_000000.dat";
REQUIRE(std::filesystem::exists(fpath));
JungfrauDataFile f(fpath);
//The file should have 113 frames
// The file should have 113 frames
f.seek(19);
REQUIRE(f.tell() == 19);
auto h = f.read_header();
REQUIRE(h.framenum == 19+1);
REQUIRE(h.framenum == 19 + 1);
//Reading again does not change the file pointer
// Reading again does not change the file pointer
auto h2 = f.read_header();
REQUIRE(h2.framenum == 19+1);
REQUIRE(h2.framenum == 19 + 1);
f.seek(59);
REQUIRE(f.tell() == 59);
auto h3 = f.read_header();
REQUIRE(h3.framenum == 59+1);
REQUIRE(h3.framenum == 59 + 1);
JungfrauDataHeader h4;
aare::NDArray<uint16_t> image(f.shape());
f.read_into(&image, &h4);
REQUIRE(h4.framenum == 59+1);
REQUIRE(h4.framenum == 59 + 1);
//now we should be on the next frame
// now we should be on the next frame
REQUIRE(f.tell() == 60);
REQUIRE(f.read_header().framenum == 60+1);
REQUIRE(f.read_header().framenum == 60 + 1);
REQUIRE_THROWS(f.seek(86356)); //out of range
REQUIRE_THROWS(f.seek(86356)); // out of range
}
TEST_CASE("Open a Jungfrau data file with non zero file index", "[.files]"){
TEST_CASE("Open a Jungfrau data file with non zero file index", "[.files]") {
auto fpath = test_data_path() / "dat" / "AldoJF65k_000003.dat";
REQUIRE(std::filesystem::exists(fpath));
JungfrauDataFile f(fpath);
//18 files per data file, opening the 3rd file we ignore the first 3
REQUIRE(f.total_frames() == 113-18*3);
// 18 frames per data file; opening the file with index 3 we skip the first 3 files
REQUIRE(f.total_frames() == 113 - 18 * 3);
REQUIRE(f.tell() == 0);
//Frame numbers start at 1 in the first file
REQUIRE(f.read_header().framenum == 18*3+1);
// Frame numbers start at 1 in the first file
REQUIRE(f.read_header().framenum == 18 * 3 + 1);
// moving relative to the third file
f.seek(5);
REQUIRE(f.read_header().framenum == 18*3+1+5);
REQUIRE(f.read_header().framenum == 18 * 3 + 1 + 5);
// ignoring the first 3 files
REQUIRE(f.n_files() == 4);
REQUIRE(f.current_file().stem() == "AldoJF65k_000003");
}
TEST_CASE("Read into throws if size doesn't match", "[.files]"){
TEST_CASE("Read into throws if size doesn't match", "[.files]") {
auto fpath = test_data_path() / "dat" / "AldoJF65k_000000.dat";
REQUIRE(std::filesystem::exists(fpath));
@ -109,6 +108,4 @@ TEST_CASE("Read into throws if size doesn't match", "[.files]"){
REQUIRE_THROWS(f.read_into(&image));
REQUIRE(f.tell() == 0);
}


@ -35,7 +35,7 @@ TEST_CASE("Construct from an NDView") {
}
}
TEST_CASE("3D NDArray from NDView"){
TEST_CASE("3D NDArray from NDView") {
std::vector<int> data(27);
std::iota(data.begin(), data.end(), 0);
NDView<int, 3> view(data.data(), Shape<3>{3, 3, 3});
@ -44,9 +44,9 @@ TEST_CASE("3D NDArray from NDView"){
REQUIRE(image.size() == view.size());
REQUIRE(image.data() != view.data());
for(ssize_t i=0; i<image.shape(0); i++){
for(ssize_t j=0; j<image.shape(1); j++){
for(ssize_t k=0; k<image.shape(2); k++){
for (ssize_t i = 0; i < image.shape(0); i++) {
for (ssize_t j = 0; j < image.shape(1); j++) {
for (ssize_t k = 0; k < image.shape(2); k++) {
REQUIRE(image(i, j, k) == view(i, j, k));
}
}
@ -132,7 +132,7 @@ TEST_CASE("Elementwise multiplication of 3D image") {
NDArray<int> MultiplyNDArrayUsingOperator(NDArray<int> &a, NDArray<int> &b) {
// return a * a * b * b;
NDArray<int>c = a*b;
NDArray<int> c = a * b;
return c;
}
@ -162,7 +162,6 @@ NDArray<int> AddNDArrayUsingIndex(NDArray<int> &a, NDArray<int> &b) {
return res;
}
TEST_CASE("Compare two images") {
NDArray<int> a;
NDArray<int> b;
@ -222,7 +221,6 @@ TEST_CASE("Bitwise and on data") {
REQUIRE(a(2) == 384);
}
TEST_CASE("Elementwise operations on images") {
std::array<ssize_t, 2> shape{5, 5};
double a_val = 3.0;
@ -258,7 +256,8 @@ TEST_CASE("Elementwise operations on images") {
NDArray<double> A(shape, a_val);
NDArray<double> B(shape, b_val);
NDArray<double> C = A - B;
// auto C = A - B; // This works but the result is a lazy ArraySub object
// auto C = A - B; // This works but the result is a lazy ArraySub
// object
// Value of C matches
for (uint32_t i = 0; i < C.size(); ++i) {
@ -282,7 +281,8 @@ TEST_CASE("Elementwise operations on images") {
SECTION("Multiply two images") {
NDArray<double> A(shape, a_val);
NDArray<double> B(shape, b_val);
// auto C = A * B; // This works but the result is a lazy ArrayMul object
// auto C = A * B; // This works but the result is a lazy ArrayMul
// object
NDArray<double> C = A * B;
// Value of C matches
@ -307,7 +307,8 @@ TEST_CASE("Elementwise operations on images") {
SECTION("Divide two images") {
NDArray<double> A(shape, a_val);
NDArray<double> B(shape, b_val);
// auto C = A / B; // This works but the result is a lazy ArrayDiv object
// auto C = A / B; // This works but the result is a lazy ArrayDiv
// object
NDArray<double> C = A / B;
// Value of C matches


@ -2,8 +2,8 @@
#include <catch2/catch_test_macros.hpp>
#include <iostream>
#include <vector>
#include <numeric>
#include <vector>
using aare::NDView;
using aare::Shape;
@ -151,8 +151,10 @@ TEST_CASE("divide with another span") {
std::vector<int> vec1{3, 2, 1};
std::vector<int> result{3, 6, 3};
NDView<int, 1> data0(vec0.data(), Shape<1>{static_cast<ssize_t>(vec0.size())});
NDView<int, 1> data1(vec1.data(), Shape<1>{static_cast<ssize_t>(vec1.size())});
NDView<int, 1> data0(vec0.data(),
Shape<1>{static_cast<ssize_t>(vec0.size())});
NDView<int, 1> data1(vec1.data(),
Shape<1>{static_cast<ssize_t>(vec1.size())});
data0 /= data1;
@ -181,8 +183,7 @@ TEST_CASE("compare two views") {
REQUIRE((view1 == view2));
}
TEST_CASE("Create a view over a vector"){
TEST_CASE("Create a view over a vector") {
std::vector<int> vec(12);
std::iota(vec.begin(), vec.end(), 0);
auto v = aare::make_view(vec);


@ -4,16 +4,16 @@
namespace aare {
NumpyFile::NumpyFile(const std::filesystem::path &fname, const std::string &mode, FileConfig cfg) {
NumpyFile::NumpyFile(const std::filesystem::path &fname,
const std::string &mode, FileConfig cfg) {
// TODO! add opts to constructor
m_mode = mode;
if (mode == "r") {
fp = fopen(fname.string().c_str(), "rb");
if (!fp) {
throw std::runtime_error(fmt::format("Could not open: {} for reading", fname.string()));
throw std::runtime_error(
fmt::format("Could not open: {} for reading", fname.string()));
}
load_metadata();
} else if (mode == "w") {
@ -24,11 +24,15 @@ NumpyFile::NumpyFile(const std::filesystem::path &fname, const std::string &mode
m_header.shape = {0, cfg.rows, cfg.cols};
fp = fopen(fname.string().c_str(), "wb");
if (!fp) {
throw std::runtime_error(fmt::format("Could not open: {} for reading", fname.string()));
throw std::runtime_error(
fmt::format("Could not open: {} for writing", fname.string()));
}
initial_header_len = aare::NumpyHelpers::write_header(std::filesystem::path(fname.c_str()), m_header);
initial_header_len = aare::NumpyHelpers::write_header(
std::filesystem::path(fname.c_str()), m_header);
}
m_pixels_per_frame = std::accumulate(m_header.shape.begin() + 1, m_header.shape.end(), 1, std::multiplies<>());
m_pixels_per_frame =
std::accumulate(m_header.shape.begin() + 1, m_header.shape.end(), 1,
std::multiplies<>());
m_bytes_per_frame = m_header.dtype.bitdepth() / 8 * m_pixels_per_frame;
}
@ -63,7 +67,8 @@ void NumpyFile::get_frame_into(size_t frame_number, std::byte *image_buf) {
if (frame_number > m_header.shape[0]) {
throw std::invalid_argument("Frame number out of range");
}
if (fseek(fp, header_size + frame_number * m_bytes_per_frame, SEEK_SET)) // NOLINT
if (fseek(fp, header_size + frame_number * m_bytes_per_frame,
SEEK_SET)) // NOLINT
throw std::runtime_error("Could not seek to frame");
size_t const rc = fread(image_buf, m_bytes_per_frame, 1, fp);
@ -113,7 +118,8 @@ NumpyFile::~NumpyFile() noexcept {
// write header
size_t const rc = fwrite(header_str.c_str(), header_str.size(), 1, fp);
if (rc != 1) {
std::cout << "Error writing header to numpy file in destructor" << std::endl;
std::cout << "Error writing header to numpy file in destructor"
<< std::endl;
}
}
@ -140,8 +146,10 @@ void NumpyFile::load_metadata() {
}
// read version
rc = fread(reinterpret_cast<char *>(&major_ver_), sizeof(major_ver_), 1, fp);
rc += fread(reinterpret_cast<char *>(&minor_ver_), sizeof(minor_ver_), 1, fp);
rc =
fread(reinterpret_cast<char *>(&major_ver_), sizeof(major_ver_), 1, fp);
rc +=
fread(reinterpret_cast<char *>(&minor_ver_), sizeof(minor_ver_), 1, fp);
if (rc != 2) {
throw std::runtime_error("Error reading numpy version");
}
@ -159,7 +167,8 @@ void NumpyFile::load_metadata() {
if (rc != 1) {
throw std::runtime_error("Error reading header length");
}
header_size = aare::NumpyHelpers::magic_string_length + 2 + header_len_size + header_len;
header_size = aare::NumpyHelpers::magic_string_length + 2 +
header_len_size + header_len;
if (header_size % 16 != 0) {
fmt::print("Warning: header length is not a multiple of 16\n");
}


@ -1,8 +1,8 @@
#include "aare/NumpyFile.hpp"
#include "aare/NDArray.hpp"
#include <catch2/catch_test_macros.hpp>
#include "test_config.hpp"
#include <catch2/catch_test_macros.hpp>
using aare::Dtype;
using aare::NumpyFile;


@ -29,7 +29,8 @@ namespace aare {
std::string NumpyHeader::to_string() const {
std::stringstream sstm;
sstm << "dtype: " << dtype.to_string() << ", fortran_order: " << fortran_order << ' ';
sstm << "dtype: " << dtype.to_string()
<< ", fortran_order: " << fortran_order << ' ';
sstm << "shape: (";
for (auto item : shape)
sstm << item << ',';
@ -37,10 +38,10 @@ std::string NumpyHeader::to_string() const {
return sstm.str();
}
namespace NumpyHelpers {
std::unordered_map<std::string, std::string> parse_dict(std::string in, const std::vector<std::string> &keys) {
std::unordered_map<std::string, std::string>
parse_dict(std::string in, const std::vector<std::string> &keys) {
std::unordered_map<std::string, std::string> map;
if (keys.empty())
return map;
@ -100,7 +101,8 @@ aare::Dtype parse_descr(std::string typestring) {
constexpr char little_endian_char = '<';
constexpr char big_endian_char = '>';
constexpr char no_endian_char = '|';
constexpr std::array<char, 3> endian_chars = {little_endian_char, big_endian_char, no_endian_char};
constexpr std::array<char, 3> endian_chars = {
little_endian_char, big_endian_char, no_endian_char};
constexpr std::array<char, 4> numtype_chars = {'f', 'i', 'u', 'c'};
const char byteorder_c = typestring[0];
@ -139,7 +141,9 @@ std::string get_value_from_map(const std::string &mapstr) {
return trim(tmp);
}
bool is_digits(const std::string &str) { return std::all_of(str.begin(), str.end(), ::isdigit); }
bool is_digits(const std::string &str) {
return std::all_of(str.begin(), str.end(), ::isdigit);
}
std::vector<std::string> parse_tuple(std::string in) {
std::vector<std::string> v;
@ -215,20 +219,25 @@ inline std::string write_boolean(bool b) {
return "False";
}
inline std::string write_header_dict(const std::string &descr, bool fortran_order, const std::vector<size_t> &shape) {
inline std::string write_header_dict(const std::string &descr,
bool fortran_order,
const std::vector<size_t> &shape) {
std::string const s_fortran_order = write_boolean(fortran_order);
std::string const shape_s = write_tuple(shape);
return "{'descr': '" + descr + "', 'fortran_order': " + s_fortran_order + ", 'shape': " + shape_s + ", }";
return "{'descr': '" + descr + "', 'fortran_order': " + s_fortran_order +
", 'shape': " + shape_s + ", }";
}
size_t write_header(const std::filesystem::path &fname, const NumpyHeader &header) {
size_t write_header(const std::filesystem::path &fname,
const NumpyHeader &header) {
std::ofstream out(fname, std::ios::binary | std::ios::out);
return write_header(out, header);
}
size_t write_header(std::ostream &out, const NumpyHeader &header) {
std::string const header_dict = write_header_dict(header.dtype.to_string(), header.fortran_order, header.shape);
std::string const header_dict = write_header_dict(
header.dtype.to_string(), header.fortran_order, header.shape);
size_t length = magic_string_length + 2 + 2 + header_dict.length() + 1;
@ -247,17 +256,22 @@ size_t write_header(std::ostream &out, const NumpyHeader &header) {
// write header length
if (version_major == 1 && version_minor == 0) {
auto header_len = static_cast<uint16_t>(header_dict.length() + padding.length() + 1);
auto header_len =
static_cast<uint16_t>(header_dict.length() + padding.length() + 1);
std::array<uint8_t, 2> header_len_le16{static_cast<uint8_t>((header_len >> 0) & 0xff),
std::array<uint8_t, 2> header_len_le16{
static_cast<uint8_t>((header_len >> 0) & 0xff),
static_cast<uint8_t>((header_len >> 8) & 0xff)};
out.write(reinterpret_cast<char *>(header_len_le16.data()), 2);
} else {
auto header_len = static_cast<uint32_t>(header_dict.length() + padding.length() + 1);
auto header_len =
static_cast<uint32_t>(header_dict.length() + padding.length() + 1);
std::array<uint8_t, 4> header_len_le32{
static_cast<uint8_t>((header_len >> 0) & 0xff), static_cast<uint8_t>((header_len >> 8) & 0xff),
static_cast<uint8_t>((header_len >> 16) & 0xff), static_cast<uint8_t>((header_len >> 24) & 0xff)};
static_cast<uint8_t>((header_len >> 0) & 0xff),
static_cast<uint8_t>((header_len >> 8) & 0xff),
static_cast<uint8_t>((header_len >> 16) & 0xff),
static_cast<uint8_t>((header_len >> 24) & 0xff)};
out.write(reinterpret_cast<char *>(header_len_le32.data()), 4);
}


@ -19,7 +19,9 @@ TEST_CASE("Check for quotes and return stripped string") {
REQUIRE(parse_str("''") == "");
}
TEST_CASE("parsing a string without quotes throws") { REQUIRE_THROWS(parse_str("hej")); }
TEST_CASE("parsing a string without quotes throws") {
REQUIRE_THROWS(parse_str("hej"));
}
TEST_CASE("trim whitespace") {
REQUIRE(trim(" hej ") == "hej");
@ -53,7 +55,8 @@ TEST_CASE("is element in array") {
}
TEST_CASE("Parse numpy dict") {
std::string in = "{'descr': '<f4', 'fortran_order': False, 'shape': (3, 4)}";
std::string in =
"{'descr': '<f4', 'fortran_order': False, 'shape': (3, 4)}";
std::vector<std::string> keys{"descr", "fortran_order", "shape"};
auto map = parse_dict(in, keys);
REQUIRE(map["descr"] == "'<f4'");


@ -1,8 +1,7 @@
#include "aare/Pedestal.hpp"
#include <catch2/matchers/catch_matchers_floating_point.hpp>
#include <catch2/catch_test_macros.hpp>
#include <catch2/matchers/catch_matchers_floating_point.hpp>
#include <chrono>
#include <random>
@ -58,7 +57,8 @@ TEST_CASE("test pedestal push") {
if (k < 5) {
REQUIRE(pedestal.cur_samples()(i, j) == k + 1);
REQUIRE(pedestal.get_sum()(i, j) == (k + 1) * (i + j));
REQUIRE(pedestal.get_sum2()(i, j) == (k + 1) * (i + j) * (i + j));
REQUIRE(pedestal.get_sum2()(i, j) ==
(k + 1) * (i + j) * (i + j));
} else {
REQUIRE(pedestal.cur_samples()(i, j) == 5);
REQUIRE(pedestal.get_sum()(i, j) == 5 * (i + j));
@ -95,9 +95,12 @@ TEST_CASE("test pedestal with normal distribution") {
for (int i = 0; i < 3; i++) {
for (int j = 0; j < 5; j++) {
REQUIRE_THAT(mean(i, j), Catch::Matchers::WithinAbs(MEAN, MEAN * TOLERANCE));
REQUIRE_THAT(variance(i, j), Catch::Matchers::WithinAbs(VAR, VAR * TOLERANCE));
REQUIRE_THAT(standard_deviation(i, j), Catch::Matchers::WithinAbs(STD, STD * TOLERANCE));
REQUIRE_THAT(mean(i, j),
Catch::Matchers::WithinAbs(MEAN, MEAN * TOLERANCE));
REQUIRE_THAT(variance(i, j),
Catch::Matchers::WithinAbs(VAR, VAR * TOLERANCE));
REQUIRE_THAT(standard_deviation(i, j),
Catch::Matchers::WithinAbs(STD, STD * TOLERANCE));
}
}
}


@ -42,9 +42,9 @@ NDArray<ssize_t, 2> GenerateMoench05PixelMap() {
int adc_nr = adc_numbers[i_sc];
int i_analog = n_pixel * 12 + adc_nr;
// analog_frame[row * 150 + col] = analog_data[i_analog] & 0x3FFF;
// analog_frame[row * 150 + col] = analog_data[i_analog] &
// 0x3FFF;
order_map(row, col) = i_analog;
}
}
}
@ -63,10 +63,9 @@ NDArray<ssize_t, 2> GenerateMoench05PixelMap1g() {
int adc_nr = adc_numbers[i_sc];
int i_analog = n_pixel * 3 + adc_nr;
// analog_frame[row * 150 + col] = analog_data[i_analog] & 0x3FFF;
// analog_frame[row * 150 + col] = analog_data[i_analog] &
// 0x3FFF;
order_map(row, col) = i_analog;
}
}
}
@ -85,42 +84,42 @@ NDArray<ssize_t, 2> GenerateMoench05PixelMapOld() {
int adc_nr = adc_numbers[i_sc];
int i_analog = n_pixel * 32 + adc_nr;
// analog_frame[row * 150 + col] = analog_data[i_analog] & 0x3FFF;
// analog_frame[row * 150 + col] = analog_data[i_analog] &
// 0x3FFF;
order_map(row, col) = i_analog;
}
}
}
return order_map;
}
NDArray<ssize_t, 2>GenerateEigerFlipRowsPixelMap(){
NDArray<ssize_t, 2> GenerateEigerFlipRowsPixelMap() {
NDArray<ssize_t, 2> order_map({256, 512});
for(int row = 0; row < 256; row++){
for(int col = 0; col < 512; col++){
order_map(row, col) = 255*512-row*512 + col;
for (int row = 0; row < 256; row++) {
for (int col = 0; col < 512; col++) {
order_map(row, col) = 255 * 512 - row * 512 + col;
}
}
return order_map;
}
NDArray<ssize_t, 2>GenerateMH02SingleCounterPixelMap(){
NDArray<ssize_t, 2> GenerateMH02SingleCounterPixelMap() {
NDArray<ssize_t, 2> order_map({48, 48});
for(int row = 0; row < 48; row++){
for(int col = 0; col < 48; col++){
order_map(row, col) = row*48 + col;
for (int row = 0; row < 48; row++) {
for (int col = 0; col < 48; col++) {
order_map(row, col) = row * 48 + col;
}
}
return order_map;
}
NDArray<ssize_t, 3> GenerateMH02FourCounterPixelMap(){
NDArray<ssize_t, 3> GenerateMH02FourCounterPixelMap() {
NDArray<ssize_t, 3> order_map({4, 48, 48});
for (int counter=0; counter<4; counter++){
for(int row = 0; row < 48; row++){
for(int col = 0; col < 48; col++){
order_map(counter, row, col) = counter*48*48 + row*48 + col;
for (int counter = 0; counter < 4; counter++) {
for (int row = 0; row < 48; row++) {
for (int col = 0; col < 48; col++) {
order_map(counter, row, col) =
counter * 48 * 48 + row * 48 + col;
}
}
}
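To make the purpose of these maps concrete, a hypothetical sketch (not part of the commit) of applying an order map: each entry order_map(row, col) is the index of pixel (row, col) in the raw, digitizer-ordered data stream, so reordering is a single gather. The reorder() helper and the raw_data layout are assumptions.

#include "aare/NDArray.hpp"
#include "aare/PixelMap.hpp"
#include <cstdint>
#include <vector>

aare::NDArray<uint16_t, 2> reorder(const std::vector<uint16_t> &raw_data) {
    auto map = aare::GenerateEigerFlipRowsPixelMap(); // shape {256, 512}
    aare::NDArray<uint16_t, 2> frame({256, 512});
    for (ssize_t row = 0; row < map.shape(0); ++row)
        for (ssize_t col = 0; col < map.shape(1); ++col)
            frame(row, col) = raw_data[map(row, col)]; // gather from the raw stream
    return frame;
}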

Some files were not shown because too many files have changed in this diff.