Compare commits

...

29 Commits

Author SHA1 Message Date
1b8657c524 No idea. Bug Fixes probably
2025-10-01 10:09:36 +02:00
de1fd62e66 Added some debug code (Commented out though) 2025-08-15 12:26:20 +02:00
6b894a5083 Fixed cluster bug. cluster.data is not saved in the correct order (makes extracting 3x3s from 9x9 impossible). This is a bug in all branches
2025-08-13 14:17:12 +02:00
faaa831238 Separated cluster size for finding and saving. You now specify the cluster size you want when searching through a frame, but you can also now specify a cluster size to save afterwards. For example, you search for 3x3s but save 9x9s 2025-08-13 11:46:24 +02:00
12498dacaa Fixed slow cluster finding with new chunked pedestals (10x slower with multithreading). The pedestal data is now accessed as a 1D pointer 2025-08-12 16:04:31 +02:00
7ea20c6b9d Initial implementation of multithreading for chunked pedestal 2025-08-12 11:58:21 +02:00
29a2374446 Removed check on center pixel (Allows negative clusters). We'll see how it pans out. 2025-08-12 00:01:37 +02:00
efb16ea8c1 Fixed Chunked Pedestal. Now should work as intended, giving sensible results compared to the previous version 2025-08-11 16:44:21 +02:00
7aa3fcfcd0 Bug Fixed for Chunked Pedestal 2025-08-11 15:24:42 +02:00
836dddbc26 First commit of new chunkedpedestal branch. Introduced new pedestal class and the necessary changes to the cluster finder class. This does not include the multithreaded version yet. 2025-08-10 19:03:10 +02:00
5107513ff5 Pedestal, calibration in g0 and counting pixels (#217)
- NDView operator()(size_t) now returns a view with one less dimension
- Apply calibration also takes a 2D array and then ignores pixels that switch
- Calculate pedestal from a dataset which contains all three gains 
- G0 variant of pedestal
- Function to count pixels switching
2025-07-25 13:50:53 +02:00
f7aa66a2c9 templated calculate_pedestal with boolean template argument only_gain… (#218)
some refactoring for less code duplication, added functionality
drop_dimension in NDArray
2025-07-25 12:25:41 +02:00
3ac94641e3 Move constructor to drop 1st dimension of NDArray (#219)
- helper function to initialize shape
- helper function to calculate the number of elements
- move constructor to create a NDArray<T, Ndim-1> if sizes match
2025-07-25 12:03:42 +02:00
froejdh_e
89bb8776ea check Ndim on drop_first_dim 2025-07-25 11:44:27 +02:00
Erik Fröjdh
1527a45cf3 Merge branch 'template_on_gain0' into dev/move_dim 2025-07-25 10:45:20 +02:00
froejdh_e
3d6858ad33 removed data_ref 2025-07-25 10:42:47 +02:00
froejdh_e
d6222027d0 move constructor for Ndim-1 2025-07-25 10:40:32 +02:00
1195a5e100 added drop dimension test, added file calibration.test.cpp 2025-07-25 10:18:55 +02:00
1347158235 templated calculate_pedestal with boolean template argument only_gain0, added drop_dimension to NDArray and reference pointer to data 2025-07-24 15:40:05 +02:00
froejdh_e
8c4d8b687e using make_subview
2025-07-24 12:16:08 +02:00
froejdh_e
b8e91d0282 zero out switching pixels if 2D calibration is used 2025-07-24 12:10:13 +02:00
froejdh_e
46876bfa73 reduced duplicate code 2025-07-24 10:57:02 +02:00
froejdh_e
348fd0f937 removed unused code 2025-07-24 10:14:29 +02:00
froejdh_e
0fea0f5b0e added safe_divide to NDArray and used it for pedestal 2025-07-24 09:40:38 +02:00
Erik Fröjdh
cb439efb48 added tests
2025-07-23 11:34:47 +02:00
Erik Fröjdh
5de402f91b added docs 2025-07-23 11:05:44 +02:00
froejdh_e
9a7713e98a added g0 calibration, pedestal and pixel counting 2025-07-22 16:42:09 +02:00
froejdh_e
b898e1c8d0 date also in release
2025-07-18 10:23:17 +02:00
froejdh_e
4073c0cbe0 bumped version 2025-07-18 10:21:28 +02:00
22 changed files with 964 additions and 94 deletions

View File

@@ -368,6 +368,7 @@ set(PUBLICHEADERS
set(SourceFiles
${CMAKE_CURRENT_SOURCE_DIR}/src/calibration.cpp
${CMAKE_CURRENT_SOURCE_DIR}/src/CtbRawFile.cpp
${CMAKE_CURRENT_SOURCE_DIR}/src/decode.cpp
${CMAKE_CURRENT_SOURCE_DIR}/src/defs.cpp
@@ -437,6 +438,7 @@ endif()
if(AARE_TESTS)
set(TestSources
${CMAKE_CURRENT_SOURCE_DIR}/src/algorithm.test.cpp
${CMAKE_CURRENT_SOURCE_DIR}/src/calibration.test.cpp
${CMAKE_CURRENT_SOURCE_DIR}/src/defs.test.cpp
${CMAKE_CURRENT_SOURCE_DIR}/src/decode.test.cpp
${CMAKE_CURRENT_SOURCE_DIR}/src/Dtype.test.cpp

View File

@@ -5,6 +5,15 @@
Features:
- Apply calibration works in G0 if passed a 2D calibration and pedestal
- count pixels that switch
- calculate pedestal (also g0 version)
### 2025.07.18
Features:
- Cluster finder now works with 5x5, 7x7 and 9x9 clusters
- Added ClusterVector::empty() member
- Added apply_calibration function for Jungfrau data

View File

@@ -1 +1 @@
2025.5.22
2025.7.18

View File

@@ -17,8 +17,24 @@ Functions for applying calibration to data.
# Apply calibration to raw data to convert from raw ADC values to keV
data = aare.apply_calibration(raw_data, pd=pedestal, cal=calibration)
# If you pass a 2D pedestal and calibration only G0 will be used for the conversion
# Pixels that switched to G1 or G2 will be set to 0
data = aare.apply_calibration(raw_data, pd=pedestal[0], cal=calibration[0])
.. py:currentmodule:: aare
.. autofunction:: apply_calibration
.. autofunction:: load_calibration
.. autofunction:: calculate_pedestal
.. autofunction:: calculate_pedestal_float
.. autofunction:: calculate_pedestal_g0
.. autofunction:: calculate_pedestal_g0_float
.. autofunction:: count_switching_pixels

View File

@@ -0,0 +1,152 @@
#pragma once
#include "aare/Frame.hpp"
#include "aare/NDArray.hpp"
#include "aare/NDView.hpp"
#include <cstddef>
//JMulvey
//This is a new way to do pedestals (inspired by Dominic's cluster finder)
//Instead of pedestal tracking, we split the data (photon data) up into chunks (say 50K frames)
//For each chunk, we look at the spectra and fit to the noise peak. When we run the cluster finder, we then use this chunked pedestal data
//The smaller the chunk size, the more accurate, but also the longer it takes to process.
//It is essentially a pre-processing step.
//Ideally this new class will do that processing.
//But for now we will just implement a method to pass in the chunked pedestal values directly (I have my own script which does it for now)
//I've cut this down a lot, knowing full well it'll need changing if we want to merge it with main (happy to do that once I get it working for what I need)
namespace aare {
/**
* @brief Holds per-chunk pedestal (mean and std) values for a series of
* frames. Can be used standalone but is mostly used in the ClusterFinder.
*
* @tparam SUM_TYPE type of the sum
*/
template <typename SUM_TYPE = double> class ChunkedPedestal {
uint32_t m_rows;
uint32_t m_cols;
uint32_t m_n_chunks;
uint64_t m_current_frame_number;
uint64_t m_current_chunk_number;
NDArray<SUM_TYPE, 3> m_mean;
NDArray<SUM_TYPE, 3> m_std;
uint32_t m_chunk_size;
public:
ChunkedPedestal(uint32_t rows, uint32_t cols, uint32_t chunk_size = 50000, uint32_t n_chunks = 10)
: m_rows(rows), m_cols(cols), m_chunk_size(chunk_size), m_n_chunks(n_chunks),
m_mean(NDArray<SUM_TYPE, 3>({n_chunks, rows, cols})), m_std(NDArray<SUM_TYPE, 3>({n_chunks, rows, cols})) {
assert(rows > 0 && cols > 0 && chunk_size > 0);
m_mean = 0;
m_std = 0;
m_current_frame_number = 0;
m_current_chunk_number = 0;
}
~ChunkedPedestal() = default;
NDArray<SUM_TYPE, 3> mean() { return m_mean; }
NDArray<SUM_TYPE, 3> std() { return m_std; }
void set_frame_number (uint64_t frame_number) {
m_current_frame_number = frame_number;
m_current_chunk_number = std::floor(frame_number / m_chunk_size);
//Debug
// if (frame_number % 10000 == 0)
// {
// std::cout << "frame_number: " << frame_number << " -> chunk_number: " << m_current_chunk_number << " pedestal at (100, 100): " << m_mean(m_current_chunk_number, 100, 100) << std::endl;
// }
if (m_current_chunk_number >= m_n_chunks)
{
m_current_chunk_number = 0;
throw std::runtime_error(
"Chunk number exceeds the number of chunks");
}
}
SUM_TYPE mean(const uint32_t row, const uint32_t col) const {
return m_mean(m_current_chunk_number, row, col);
}
SUM_TYPE std(const uint32_t row, const uint32_t col) const {
return m_std(m_current_chunk_number, row, col);
}
SUM_TYPE* get_mean_chunk_ptr() {
return &m_mean(m_current_chunk_number, 0, 0);
}
SUM_TYPE* get_std_chunk_ptr() {
return &m_std(m_current_chunk_number, 0, 0);
}
void clear() {
m_mean = 0;
m_std = 0;
m_n_chunks = 0;
}
//Probably don't need to do this one at a time, but let's keep it simple for now
template <typename T> void push_mean(NDView<T, 2> frame, uint32_t chunk_number) {
assert(frame.size() == m_rows * m_cols);
if (chunk_number >= m_n_chunks)
throw std::runtime_error(
"Chunk number is larger than the number of chunks");
// TODO! move away from m_rows, m_cols
if (frame.shape() != std::array<ssize_t, 2>{m_rows, m_cols}) {
throw std::runtime_error(
"Frame shape does not match pedestal shape");
}
for (size_t row = 0; row < m_rows; row++) {
for (size_t col = 0; col < m_cols; col++) {
push_mean<T>(row, col, chunk_number, frame(row, col));
}
}
}
template <typename T> void push_std(NDView<T, 2> frame, uint32_t chunk_number) {
assert(frame.size() == m_rows * m_cols);
if (chunk_number >= m_n_chunks)
throw std::runtime_error(
"Chunk number is larger than the number of chunks");
// TODO! move away from m_rows, m_cols
if (frame.shape() != std::array<ssize_t, 2>{m_rows, m_cols}) {
throw std::runtime_error(
"Frame shape does not match pedestal shape");
}
for (size_t row = 0; row < m_rows; row++) {
for (size_t col = 0; col < m_cols; col++) {
push_std<T>(row, col, chunk_number, frame(row, col));
}
}
}
// pixel level operations (should be refactored to allow users to implement
// their own pixel level operations)
template <typename T>
void push_mean(const uint32_t row, const uint32_t col, const uint32_t chunk_number, const T val_) {
m_mean(chunk_number, row, col) = val_;
}
template <typename T>
void push_std(const uint32_t row, const uint32_t col, const uint32_t chunk_number, const T val_) {
m_std(chunk_number, row, col) = val_;
}
// getter functions
uint32_t rows() const { return m_rows; }
uint32_t cols() const { return m_cols; }
};
} // namespace aare
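
A minimal usage sketch of ChunkedPedestal, based only on the interface shown in this hunk (push_mean/push_std per chunk, set_frame_number, per-pixel mean/std). The detector geometry and the flat pedestal values are placeholders; in practice the per-chunk maps come from an external noise-peak fit, as the header comment above describes.

#include "aare/ChunkedPedestal.hpp"
#include "aare/NDArray.hpp"
#include <array>
#include <cstdint>

using namespace aare;

int main() {
    // Placeholder geometry: 512x1024 detector, 10 chunks of 50k frames each
    ChunkedPedestal<double> ped(512, 1024, 50000, 10);

    // Externally fitted noise-peak position/width, one 2D map per chunk
    NDArray<double, 2> mean_map(std::array<ssize_t, 2>{512, 1024}, 0.0);
    NDArray<double, 2> std_map(std::array<ssize_t, 2>{512, 1024}, 10.0);
    for (uint32_t chunk = 0; chunk < 10; ++chunk) {
        ped.push_mean(mean_map.view(), chunk);
        ped.push_std(std_map.view(), chunk);
    }

    // While processing photon data, the frame number selects the chunk
    ped.set_frame_number(123456); // 123456 / 50000 -> chunk 2
    double m = ped.mean(100, 100);
    double s = ped.std(100, 100);
    return (m == 0.0 && s == 10.0) ? 0 : 1;
}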

View File

@@ -4,9 +4,11 @@
#include "aare/Dtype.hpp"
#include "aare/NDArray.hpp"
#include "aare/NDView.hpp"
#include "aare/Pedestal.hpp"
// #include "aare/Pedestal.hpp"
#include "aare/ChunkedPedestal.hpp"
#include "aare/defs.hpp"
#include <cstddef>
#include <iostream>
namespace aare {
@@ -17,11 +19,13 @@ class ClusterFinder {
const PEDESTAL_TYPE m_nSigma;
const PEDESTAL_TYPE c2;
const PEDESTAL_TYPE c3;
Pedestal<PEDESTAL_TYPE> m_pedestal;
ChunkedPedestal<PEDESTAL_TYPE> m_pedestal;
ClusterVector<ClusterType> m_clusters;
const uint32_t ClusterSizeX;
const uint32_t ClusterSizeY;
static const uint8_t ClusterSizeX = ClusterType::cluster_size_x;
static const uint8_t ClusterSizeY = ClusterType::cluster_size_y;
static const uint8_t SavedClusterSizeX = ClusterType::cluster_size_x;
static const uint8_t SavedClusterSizeY = ClusterType::cluster_size_y;
using CT = typename ClusterType::value_type;
public:
@@ -34,25 +38,36 @@ class ClusterFinder {
*
*/
ClusterFinder(Shape<2> image_size, PEDESTAL_TYPE nSigma = 5.0,
size_t capacity = 1000000)
size_t capacity = 1000000,
uint32_t chunk_size = 50000, uint32_t n_chunks = 10,
uint32_t cluster_size_x = 3, uint32_t cluster_size_y = 3)
: m_image_size(image_size), m_nSigma(nSigma),
c2(sqrt((ClusterSizeY + 1) / 2 * (ClusterSizeX + 1) / 2)),
c3(sqrt(ClusterSizeX * ClusterSizeY)),
m_pedestal(image_size[0], image_size[1]), m_clusters(capacity) {
c2(sqrt((cluster_size_y + 1) / 2 * (cluster_size_x + 1) / 2)),
c3(sqrt(cluster_size_x * cluster_size_y)),
ClusterSizeX(cluster_size_x), ClusterSizeY(cluster_size_y),
m_pedestal(image_size[0], image_size[1], chunk_size, n_chunks), m_clusters(capacity) {
LOG(logDEBUG) << "ClusterFinder: "
<< "image_size: " << image_size[0] << "x" << image_size[1]
<< ", nSigma: " << nSigma << ", capacity: " << capacity;
}
void push_pedestal_frame(NDView<FRAME_TYPE, 2> frame) {
m_pedestal.push(frame);
// void push_pedestal_frame(NDView<FRAME_TYPE, 2> frame) {
// m_pedestal.push(frame);
// }
void push_pedestal_mean(NDView<PEDESTAL_TYPE, 2> frame, uint32_t chunk_number) {
m_pedestal.push_mean(frame, chunk_number);
}
void push_pedestal_std(NDView<PEDESTAL_TYPE, 2> frame, uint32_t chunk_number) {
m_pedestal.push_std(frame, chunk_number);
}
//This is here purely to keep the compiler happy for now
void push_pedestal_frame(NDView<FRAME_TYPE, 2> frame) {}
NDArray<PEDESTAL_TYPE, 2> pedestal() { return m_pedestal.mean(); }
NDArray<PEDESTAL_TYPE, 2> noise() { return m_pedestal.std(); }
void clear_pedestal() { m_pedestal.clear(); }
/**
/**
* @brief Move the clusters from the ClusterVector in the ClusterFinder to a
* new ClusterVector and return it.
* @param realloc_same_capacity if true the new ClusterVector will have the
@@ -69,11 +84,13 @@ class ClusterFinder {
return tmp;
}
void find_clusters(NDView<FRAME_TYPE, 2> frame, uint64_t frame_number = 0) {
// // TODO! deal with even size clusters
// // currently 3,3 -> +/- 1
// // 4,4 -> +/- 2
int dy = ClusterSizeY / 2;
int dx = ClusterSizeX / 2;
int dy2 = SavedClusterSizeY / 2;
int dx2 = SavedClusterSizeX / 2;
int has_center_pixel_x =
ClusterSizeX %
2; // for even sized clusters there is no proper cluster center and
@@ -81,27 +98,39 @@ class ClusterFinder {
int has_center_pixel_y = ClusterSizeY % 2;
m_clusters.set_frame_number(frame_number);
m_pedestal.set_frame_number(frame_number);
auto mean_ptr = m_pedestal.get_mean_chunk_ptr();
auto std_ptr = m_pedestal.get_std_chunk_ptr();
for (int iy = 0; iy < frame.shape(0); iy++) {
size_t row_offset = iy * frame.shape(1);
for (int ix = 0; ix < frame.shape(1); ix++) {
// PEDESTAL_TYPE rms = m_pedestal.std(iy, ix);
PEDESTAL_TYPE rms = std_ptr[row_offset + ix];
if (rms == 0) continue;
PEDESTAL_TYPE max = std::numeric_limits<FRAME_TYPE>::min();
PEDESTAL_TYPE total = 0;
// What can we short circuit here?
PEDESTAL_TYPE rms = m_pedestal.std(iy, ix);
PEDESTAL_TYPE value = (frame(iy, ix) - m_pedestal.mean(iy, ix));
// What can we short circuit here?
// PEDESTAL_TYPE value = (frame(iy, ix) - m_pedestal.mean(iy, ix));
PEDESTAL_TYPE value = (frame(iy, ix) - mean_ptr[row_offset + ix]);
if (value < -m_nSigma * rms)
continue; // NEGATIVE_PEDESTAL go to next pixel
// TODO! No pedestal update???
for (int ir = -dy; ir < dy + has_center_pixel_y; ir++) {
size_t inner_row_offset = row_offset + (ir * frame.shape(1));
for (int ic = -dx; ic < dx + has_center_pixel_x; ic++) {
if (ix + ic >= 0 && ix + ic < frame.shape(1) &&
iy + ir >= 0 && iy + ir < frame.shape(0)) {
PEDESTAL_TYPE val =
frame(iy + ir, ix + ic) -
m_pedestal.mean(iy + ir, ix + ic);
// if (m_pedestal.std(iy + ir, ix + ic) == 0) continue;
if (std_ptr[inner_row_offset + ix + ic] == 0) continue;
// PEDESTAL_TYPE val = frame(iy + ir, ix + ic) - m_pedestal.mean(iy + ir, ix + ic);
PEDESTAL_TYPE val = frame(iy + ir, ix + ic) - mean_ptr[inner_row_offset + ix + ic];
total += val;
max = std::max(max, val);
@@ -109,24 +138,64 @@ class ClusterFinder {
}
}
if ((max > m_nSigma * rms)) {
if (value < max)
continue; // Not max go to the next pixel
// but also no pedestal update
} else if (total > c3 * m_nSigma * rms) {
// if (frame_number < 1)
// if ( (ix == 115 && iy == 122) )
// if ( (ix == 175 && iy == 175) )
// {
// // std::cout << std::endl;
// // std::cout << std::endl;
// // std::cout << "frame_number: " << frame_number << std::endl;
// // std::cout << "(" << ix << ", " << iy << "): " << std::endl;
// // std::cout << "frame.shape: (" << frame.shape(0) << ", " << frame.shape(1) << "): " << std::endl;
// // std::cout << "frame(175, 175): " << frame(175, 175) << std::endl;
// // std::cout << "frame(77, 98): " << frame(77, 98) << std::endl;
// // std::cout << "frame(82, 100): " << frame(82, 100) << std::endl;
// // std::cout << "frame(iy, ix): " << frame(iy, ix) << std::endl;
// // std::cout << "mean_ptr[row_offset + ix]: " << mean_ptr[row_offset + ix] << std::endl;
// // std::cout << "total: " << total << std::endl;
// std::cout << "(" << ix << ", " << iy << "): " << frame(iy, ix) << std::endl;
// }
// if ((max > m_nSigma * rms)) {
// if (value < max)
// continue; // Not max go to the next pixel
// // but also no pedestal update
// } else
if (total > c3 * m_nSigma * rms) {
// pass
} else {
// m_pedestal.push(iy, ix, frame(iy, ix)); // Safe option
m_pedestal.push_fast(
iy, ix,
frame(iy,
ix)); // Assume we have reached n_samples in the
// pedestal, slight performance improvement
//Not needed for chunked pedestal
// m_pedestal.push_fast(
// iy, ix,
// frame(iy,
// ix)); // Assume we have reached n_samples in the
// // pedestal, slight performance improvement
continue; // It was a pedestal value nothing to store
}
// Store cluster
if (value == max) {
// if (total < 0)
// {
// std::cout << "" << std::endl;
// std::cout << "frame_number: " << frame_number << std::endl;
// std::cout << "ix: " << ix << std::endl;
// std::cout << "iy: " << iy << std::endl;
// std::cout << "frame(iy, ix): " << frame(iy, ix) << std::endl;
// std::cout << "m_pedestal.mean(iy, ix): " << m_pedestal.mean(iy, ix) << std::endl;
// std::cout << "m_pedestal.std(iy, ix): " << m_pedestal.std(iy, ix) << std::endl;
// std::cout << "max: " << max << std::endl;
// std::cout << "value: " << value << std::endl;
// std::cout << "m_nSigma * rms: " << (m_nSigma * rms) << std::endl;
// std::cout << "total: " << total << std::endl;
// std::cout << "c3 * m_nSigma * rms: " << (c3 * m_nSigma * rms) << std::endl;
// }
ClusterType cluster{};
cluster.x = ix;
cluster.y = iy;
@@ -135,18 +204,24 @@ class ClusterFinder {
// It's worth redoing the loop since most of the time we
// don't have a photon
int i = 0;
for (int ir = -dy; ir < dy + has_center_pixel_y; ir++) {
for (int ic = -dx; ic < dx + has_center_pixel_y; ic++) {
for (int ir = -dy2; ir < dy2 + has_center_pixel_y; ir++) {
size_t inner_row_offset = row_offset + (ir * frame.shape(1));
for (int ic = -dx2; ic < dx2 + has_center_pixel_y; ic++) {
if (ix + ic >= 0 && ix + ic < frame.shape(1) &&
iy + ir >= 0 && iy + ir < frame.shape(0)) {
CT tmp =
static_cast<CT>(frame(iy + ir, ix + ic)) -
static_cast<CT>(
m_pedestal.mean(iy + ir, ix + ic));
cluster.data[i] =
tmp; // Watch for out of bounds access
i++;
// if (m_pedestal.std(iy + ir, ix + ic) == 0) continue;
// if (std_ptr[inner_row_offset + ix + ic] == 0) continue;
// CT tmp = static_cast<CT>(frame(iy + ir, ix + ic)) - static_cast<CT>(m_pedestal.mean(iy + ir, ix + ic));
// CT tmp = 0;
if (std_ptr[inner_row_offset + ix + ic] != 0)
{
CT tmp = static_cast<CT>(frame(iy + ir, ix + ic)) - static_cast<CT>(mean_ptr[inner_row_offset + ix + ic]);
cluster.data[i] = tmp; // Watch for out of bounds access
}
}
i++;
}
}
@@ -158,4 +233,4 @@ class ClusterFinder {
}
};
} // namespace aare
} // namespace aare
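
A sketch of the new ClusterFinder workflow end to end, assuming the seven-argument constructor and the push_pedestal_mean/push_pedestal_std methods shown above. The Cluster<int32_t, 9, 9> saved-cluster type and its header are assumptions about the rest of the library (not part of this diff); the checked cluster size is passed separately, matching the 'search 3x3, save 9x9' use case from the commit messages.

#include "aare/Cluster.hpp" // assumed header for the Cluster template
#include "aare/ClusterFinder.hpp"
#include "aare/NDArray.hpp"
#include <array>
#include <cstdint>

using namespace aare;

int main() {
    Shape<2> image_size{512, 1024};
    // Saved clusters are 9x9 (from the template), checked clusters are 3x3
    ClusterFinder<Cluster<int32_t, 9, 9>, uint16_t, double> cf(
        image_size, 5.0, 1'000'000, /*chunk_size=*/50000, /*n_chunks=*/10,
        /*cluster_size_x=*/3, /*cluster_size_y=*/3);

    // Pre-computed chunked pedestal maps (placeholder values)
    NDArray<double, 2> mean_map(std::array<ssize_t, 2>{512, 1024}, 0.0);
    NDArray<double, 2> std_map(std::array<ssize_t, 2>{512, 1024}, 10.0);
    for (uint32_t chunk = 0; chunk < 10; ++chunk) {
        cf.push_pedestal_mean(mean_map.view(), chunk);
        cf.push_pedestal_std(std_map.view(), chunk);
    }

    // The frame number passed to find_clusters selects the pedestal chunk
    NDArray<uint16_t, 2> frame(std::array<ssize_t, 2>{512, 1024}, 0);
    for (uint64_t frame_nr = 0; frame_nr < 100; ++frame_nr)
        cf.find_clusters(frame.view(), frame_nr);
    return 0;
}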

View File

@@ -20,9 +20,15 @@ enum class FrameType {
struct FrameWrapper {
FrameType type;
uint64_t frame_number;
// NDArray<T, 2> data;
NDArray<uint16_t, 2> data;
// NDArray<double, 2> data;
// void* data_ptr;
// std::type_index data_type;
uint32_t chunk_number;
};
/**
* @brief ClusterFinderMT is a multi-threaded version of ClusterFinder. It uses
* a producer-consumer queue to distribute the frames to the threads. The
@@ -68,6 +74,7 @@ class ClusterFinderMT {
while (!m_stop_requested || !q->isEmpty()) {
if (FrameWrapper *frame = q->frontPtr(); frame != nullptr) {
switch (frame->type) {
case FrameType::DATA:
cf->find_clusters(frame->data.view(), frame->frame_number);
@@ -121,7 +128,9 @@ class ClusterFinderMT {
* @param n_threads number of threads to use
*/
ClusterFinderMT(Shape<2> image_size, PEDESTAL_TYPE nSigma = 5.0,
size_t capacity = 2000, size_t n_threads = 3)
size_t capacity = 2000, size_t n_threads = 3,
uint32_t chunk_size = 50000, uint32_t n_chunks = 10,
uint32_t cluster_size_x = 3, uint32_t cluster_size_y = 3)
: m_n_threads(n_threads) {
LOG(logDEBUG1) << "ClusterFinderMT: "
@@ -134,7 +143,7 @@ class ClusterFinderMT {
m_cluster_finders.push_back(
std::make_unique<
ClusterFinder<ClusterType, FRAME_TYPE, PEDESTAL_TYPE>>(
image_size, nSigma, capacity));
image_size, nSigma, capacity, chunk_size, n_chunks, cluster_size_x, cluster_size_y));
}
for (size_t i = 0; i < n_threads; i++) {
m_input_queues.emplace_back(std::make_unique<InputQueue>(200));
@@ -208,7 +217,7 @@ class ClusterFinderMT {
*/
void push_pedestal_frame(NDView<FRAME_TYPE, 2> frame) {
FrameWrapper fw{FrameType::PEDESTAL, 0,
NDArray(frame)}; // TODO! copies the data!
NDArray(frame), 0}; // TODO! copies the data!
for (auto &queue : m_input_queues) {
while (!queue->write(fw)) {
@@ -217,6 +226,23 @@ class ClusterFinderMT {
}
}
void push_pedestal_mean(NDView<PEDESTAL_TYPE, 2> frame, uint32_t chunk_number) {
if (!m_processing_threads_stopped) {
throw std::runtime_error("ClusterFinderMT is still running");
}
for (auto &cf : m_cluster_finders) {
cf->push_pedestal_mean(frame, chunk_number);
}
}
void push_pedestal_std(NDView<PEDESTAL_TYPE, 2> frame, uint32_t chunk_number) {
if (!m_processing_threads_stopped) {
throw std::runtime_error("ClusterFinderMT is still running");
}
for (auto &cf : m_cluster_finders) {
cf->push_pedestal_std(frame, chunk_number);
}
}
/**
* @brief Push the frame to the queue of the next available thread. Function
* returns once the frame is in a queue.
@@ -224,7 +250,10 @@ class ClusterFinderMT {
*/
void find_clusters(NDView<FRAME_TYPE, 2> frame, uint64_t frame_number = 0) {
FrameWrapper fw{FrameType::DATA, frame_number,
NDArray(frame)}; // TODO! copies the data!
NDArray(frame), 0}; // TODO! copies the data!
// std::cout << "frame(122, 115): " << frame(122, 115) << std::endl;
while (!m_input_queues[m_current_thread % m_n_threads]->write(fw)) {
std::this_thread::sleep_for(m_default_wait);
}
@@ -281,4 +310,4 @@ class ClusterFinderMT {
// }
};
} // namespace aare
} // namespace aare

View File

@@ -25,7 +25,7 @@ template <typename T, ssize_t Ndim = 2>
class NDArray : public ArrayExpr<NDArray<T, Ndim>, Ndim> {
std::array<ssize_t, Ndim> shape_;
std::array<ssize_t, Ndim> strides_;
size_t size_{};
size_t size_{}; //TODO! do we need to store size when we have shape?
T *data_;
public:
@@ -33,7 +33,7 @@ class NDArray : public ArrayExpr<NDArray<T, Ndim>, Ndim> {
* @brief Default constructor. Will construct an empty NDArray.
*
*/
NDArray() : shape_(), strides_(c_strides<Ndim>(shape_)), data_(nullptr){};
NDArray() : shape_(), strides_(c_strides<Ndim>(shape_)), data_(nullptr) {};
/**
* @brief Construct a new NDArray object with a given shape.
@@ -43,8 +43,7 @@ class NDArray : public ArrayExpr<NDArray<T, Ndim>, Ndim> {
*/
explicit NDArray(std::array<ssize_t, Ndim> shape)
: shape_(shape), strides_(c_strides<Ndim>(shape_)),
size_(std::accumulate(shape_.begin(), shape_.end(), 1,
std::multiplies<>())),
size_(num_elements(shape_)),
data_(new T[size_]) {}
/**
@@ -79,6 +78,24 @@ class NDArray : public ArrayExpr<NDArray<T, Ndim>, Ndim> {
other.reset(); // TODO! is this necessary?
}
//Move constructor from an array with Ndim + 1
template <ssize_t M, typename = std::enable_if_t<(M == Ndim + 1)>>
NDArray(NDArray<T, M> &&other)
: shape_(drop_first_dim(other.shape())),
strides_(c_strides<Ndim>(shape_)), size_(num_elements(shape_)),
data_(other.data()) {
// For now only allow move if the size matches, to avoid unreachable data
// if the use case arises we can remove this check
if(size() != other.size()) {
data_ = nullptr; // avoid double free, other will clean up the memory in its destructor
throw std::runtime_error(LOCATION +
"Size mismatch in move constructor of NDArray<T, Ndim-1>");
}
other.reset();
}
// Copy constructor
NDArray(const NDArray &other)
: shape_(other.shape_), strides_(c_strides<Ndim>(shape_)),
@@ -380,12 +397,6 @@ NDArray<T, Ndim> NDArray<T, Ndim>::operator*(const T &value) {
result *= value;
return result;
}
// template <typename T, ssize_t Ndim> void NDArray<T, Ndim>::Print() {
// if (shape_[0] < 20 && shape_[1] < 20)
// Print_all();
// else
// Print_some();
// }
template <typename T, ssize_t Ndim>
std::ostream &operator<<(std::ostream &os, const NDArray<T, Ndim> &arr) {
@@ -437,4 +448,23 @@ NDArray<T, Ndim> load(const std::string &pathname,
return img;
}
template <typename RT, typename NT, typename DT, ssize_t Ndim>
NDArray<RT, Ndim> safe_divide(const NDArray<NT, Ndim> &numerator,
const NDArray<DT, Ndim> &denominator) {
if (numerator.shape() != denominator.shape()) {
throw std::runtime_error(
"Shapes of numerator and denominator must match");
}
NDArray<RT, Ndim> result(numerator.shape());
for (ssize_t i = 0; i < numerator.size(); ++i) {
if (denominator[i] != 0) {
result[i] =
static_cast<RT>(numerator[i]) / static_cast<RT>(denominator[i]);
} else {
result[i] = RT{0}; // or handle division by zero as needed
}
}
return result;
}
} // namespace aare
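
A small sketch of safe_divide, which is what keeps the pedestal calculation well defined for pixels with zero samples: element-wise division where zero denominators map to RT{0}. The shapes and values are placeholders.

#include "aare/NDArray.hpp"
#include <array>
#include <cstddef>

using namespace aare;

int main() {
    // Per-pixel ADC sum and sample count; (0,1) and (1,0) never saw a sample
    NDArray<size_t, 2> accumulator(std::array<ssize_t, 2>{2, 2}, 0);
    NDArray<size_t, 2> count(std::array<ssize_t, 2>{2, 2}, 0);
    accumulator(0, 0) = 600; count(0, 0) = 3; // mean 200
    accumulator(1, 1) = 75;  count(1, 1) = 1; // mean 75

    // 0/0 pixels come out as 0.0 instead of NaN
    NDArray<double, 2> pedestal = safe_divide<double>(accumulator, count);
    return (pedestal(0, 0) == 200.0 && pedestal(0, 1) == 0.0) ? 0 : 1;
}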

View File

@@ -26,6 +26,33 @@ Shape<Ndim> make_shape(const std::vector<size_t> &shape) {
return arr;
}
/**
* @brief Helper function to drop the first dimension of a shape.
* This is useful when you want to create a 2D view from a 3D array.
* @param shape The shape to drop the first dimension from.
* @return A new shape with the first dimension dropped.
*/
template<size_t Ndim>
Shape<Ndim-1> drop_first_dim(const Shape<Ndim> &shape) {
static_assert(Ndim > 1, "Cannot drop first dimension from a 1D shape");
Shape<Ndim - 1> new_shape;
std::copy(shape.begin() + 1, shape.end(), new_shape.begin());
return new_shape;
}
/**
* @brief Helper function when constructing NDArray/NDView. Calculates the number
* of elements in the resulting array from a shape.
* @param shape The shape to calculate the number of elements for.
* @return The number of elements in an NDArray/NDView of that shape.
*/
template <size_t Ndim>
size_t num_elements(const Shape<Ndim> &shape) {
return std::accumulate(shape.begin(), shape.end(), 1,
std::multiplies<size_t>());
}
template <ssize_t Dim = 0, typename Strides>
ssize_t element_offset(const Strides & /*unused*/) {
return 0;
@@ -66,17 +93,28 @@ class NDView : public ArrayExpr<NDView<T, Ndim>, Ndim> {
: buffer_(buffer), strides_(c_strides<Ndim>(shape)), shape_(shape),
size_(std::accumulate(std::begin(shape), std::end(shape), 1,
std::multiplies<>())) {}
template <typename... Ix>
std::enable_if_t<sizeof...(Ix) == Ndim, T &> operator()(Ix... index) {
return buffer_[element_offset(strides_, index...)];
}
template <typename... Ix>
const std::enable_if_t<sizeof...(Ix) == Ndim, T &> operator()(Ix... index) const {
std::enable_if_t<sizeof...(Ix) == 1 && (Ndim > 1), NDView<T, Ndim - 1>> operator()(Ix... index) {
// return a view of the next dimension
std::array<ssize_t, Ndim - 1> new_shape{};
std::copy_n(shape_.begin() + 1, Ndim - 1, new_shape.begin());
return NDView<T, Ndim - 1>(&buffer_[element_offset(strides_, index...)],
new_shape);
}
template <typename... Ix>
std::enable_if_t<sizeof...(Ix) == Ndim, const T &> operator()(Ix... index) const {
return buffer_[element_offset(strides_, index...)];
}
ssize_t size() const { return static_cast<ssize_t>(size_); }
size_t total_bytes() const { return size_ * sizeof(T); }
std::array<ssize_t, Ndim> strides() const noexcept { return strides_; }
@@ -85,9 +123,19 @@ class NDView : public ArrayExpr<NDView<T, Ndim>, Ndim> {
T *end() { return buffer_ + size_; }
T const *begin() const { return buffer_; }
T const *end() const { return buffer_ + size_; }
T &operator()(ssize_t i) { return buffer_[i]; }
/**
* @brief Access element at index i.
*/
T &operator[](ssize_t i) { return buffer_[i]; }
const T &operator()(ssize_t i) const { return buffer_[i]; }
/**
* @brief Access element at index i.
*/
const T &operator[](ssize_t i) const { return buffer_[i]; }
bool operator==(const NDView &other) const {
@@ -157,6 +205,22 @@ class NDView : public ArrayExpr<NDView<T, Ndim>, Ndim> {
const T *data() const { return buffer_; }
void print_all() const;
/**
* @brief Create a subview of a range of the first dimension.
* This is useful for splitting a batch of frames for parallel processing.
* @param first The first index of the subview (inclusive).
* @param last The last index of the subview (exclusive).
* @return A new NDView that is a subview of the current view.
* @throws std::runtime_error if the range is invalid.
*/
NDView sub_view(ssize_t first, ssize_t last) const {
if (first < 0 || last > shape_[0] || first >= last)
throw std::runtime_error(LOCATION + "Invalid sub_view range");
auto new_shape = shape_;
new_shape[0] = last - first;
return NDView(buffer_ + first * strides_[0], new_shape);
}
private:
T *buffer_{nullptr};
std::array<ssize_t, Ndim> strides_{};
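
A short sketch of the new NDView access paths, assuming (per the commit description in this compare) that the single-index operator() now returns a view with one less dimension, flat element access moved to operator[], and sub_view slices a range along the first dimension. Array sizes are placeholders.

#include "aare/NDArray.hpp"
#include "aare/NDView.hpp"
#include <array>
#include <cstdint>

using namespace aare;

int main() {
    // A stack of 4 frames of 2x3 pixels
    NDArray<uint16_t, 3> stack(std::array<ssize_t, 3>{4, 2, 3}, 0);
    NDView<uint16_t, 3> view = stack.view();

    // Single index drops a dimension: a 2D view of frame 1
    NDView<uint16_t, 2> frame1 = view(1);
    frame1(0, 2) = 42; // writes through to the underlying stack

    // Flat element access over the whole buffer
    uint16_t first = view[0];

    // Half-open range along the first dimension, e.g. frames [1, 3) for one worker
    NDView<uint16_t, 3> batch = view.sub_view(1, 3);
    return (batch.shape(0) == 2 && first == 0 && stack(1, 0, 2) == 42) ? 0 : 1;
}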

View File

@@ -240,14 +240,14 @@ template <typename T> void VarClusterFinder<T>::first_pass() {
for (ssize_t i = 0; i < original_.size(); ++i) {
if (use_noise_map)
threshold_ = 5 * noiseMap(i);
binary_(i) = (original_(i) > threshold_);
threshold_ = 5 * noiseMap[i];
binary_[i] = (original_[i] > threshold_);
}
for (int i = 0; i < shape_[0]; ++i) {
for (int j = 0; j < shape_[1]; ++j) {
// do we have someting to process?
// do we have something to process?
if (binary_(i, j)) {
auto tmp = check_neighbours(i, j);
if (tmp != 0) {

View File

@@ -1,6 +1,9 @@
#pragma once
#include "aare/NDArray.hpp"
#include "aare/NDView.hpp"
#include "aare/defs.hpp"
#include "aare/utils/par.hpp"
#include "aare/utils/task.hpp"
#include <cstdint>
#include <future>
@@ -55,32 +58,152 @@ ALWAYS_INLINE std::pair<uint16_t, int16_t> get_value_and_gain(uint16_t raw) {
template <class T>
void apply_calibration_impl(NDView<T, 3> res, NDView<uint16_t, 3> raw_data,
NDView<T, 3> ped, NDView<T, 3> cal, int start,
int stop) {
NDView<T, 3> ped, NDView<T, 3> cal, int start,
int stop) {
for (int frame_nr = start; frame_nr != stop; ++frame_nr) {
for (int row = 0; row != raw_data.shape(1); ++row) {
for (int col = 0; col != raw_data.shape(2); ++col) {
auto [value, gain] = get_value_and_gain(raw_data(frame_nr, row, col));
auto [value, gain] =
get_value_and_gain(raw_data(frame_nr, row, col));
// Using multiplication does not seem to speed up the code here
// ADU/keV is the standard unit for the calibration which
// means rewriting the formula is not worth it.
res(frame_nr, row, col) =
(value - ped(gain, row, col)) / cal(gain, row, col); //TODO! use multiplication
(value - ped(gain, row, col)) / cal(gain, row, col);
}
}
}
}
template <class T>
void apply_calibration_impl(NDView<T, 3> res, NDView<uint16_t, 3> raw_data,
NDView<T, 2> ped, NDView<T, 2> cal, int start,
int stop) {
for (int frame_nr = start; frame_nr != stop; ++frame_nr) {
for (int row = 0; row != raw_data.shape(1); ++row) {
for (int col = 0; col != raw_data.shape(2); ++col) {
auto [value, gain] =
get_value_and_gain(raw_data(frame_nr, row, col));
// Using multiplication does not seem to speed up the code here
// ADU/keV is the standard unit for the calibration which
// means rewriting the formula is not worth it.
// Set the value to 0 if the gain is not 0
if (gain == 0)
res(frame_nr, row, col) =
(value - ped(row, col)) / cal(row, col);
else
res(frame_nr, row, col) = 0;
}
}
}
}
template <class T, ssize_t Ndim = 3>
void apply_calibration(NDView<T, 3> res, NDView<uint16_t, 3> raw_data,
NDView<T, 3> ped, NDView<T, 3> cal,
NDView<T, Ndim> ped, NDView<T, Ndim> cal,
ssize_t n_threads = 4) {
std::vector<std::future<void>> futures;
futures.reserve(n_threads);
auto limits = split_task(0, raw_data.shape(0), n_threads);
for (const auto &lim : limits)
futures.push_back(std::async(&apply_calibration_impl<T>, res, raw_data, ped, cal,
lim.first, lim.second));
futures.push_back(std::async(
static_cast<void (*)(NDView<T, 3>, NDView<uint16_t, 3>,
NDView<T, Ndim>, NDView<T, Ndim>, int, int)>(
apply_calibration_impl),
res, raw_data, ped, cal, lim.first, lim.second));
for (auto &f : futures)
f.get();
}
template <bool only_gain0>
std::pair<NDArray<size_t, 3>, NDArray<size_t, 3>>
sum_and_count_per_gain(NDView<uint16_t, 3> raw_data) {
constexpr ssize_t num_gains = only_gain0 ? 1 : 3;
NDArray<size_t, 3> accumulator(
std::array<ssize_t, 3>{num_gains, raw_data.shape(1), raw_data.shape(2)},
0);
NDArray<size_t, 3> count(
std::array<ssize_t, 3>{num_gains, raw_data.shape(1), raw_data.shape(2)},
0);
for (int frame_nr = 0; frame_nr != raw_data.shape(0); ++frame_nr) {
for (int row = 0; row != raw_data.shape(1); ++row) {
for (int col = 0; col != raw_data.shape(2); ++col) {
auto [value, gain] =
get_value_and_gain(raw_data(frame_nr, row, col));
if (gain != 0 && only_gain0)
continue;
accumulator(gain, row, col) += value;
count(gain, row, col) += 1;
}
}
}
return {std::move(accumulator), std::move(count)};
}
template <typename T, bool only_gain0 = false>
NDArray<T, 3 - static_cast<ssize_t>(only_gain0)>
calculate_pedestal(NDView<uint16_t, 3> raw_data, ssize_t n_threads) {
constexpr ssize_t num_gains = only_gain0 ? 1 : 3;
std::vector<std::future<std::pair<NDArray<size_t, 3>, NDArray<size_t, 3>>>>
futures;
futures.reserve(n_threads);
auto subviews = make_subviews(raw_data, n_threads);
for (auto view : subviews) {
futures.push_back(std::async(
static_cast<std::pair<NDArray<size_t, 3>, NDArray<size_t, 3>> (*)(
NDView<uint16_t, 3>)>(&sum_and_count_per_gain<only_gain0>),
view));
}
Shape<3> shape{num_gains, raw_data.shape(1), raw_data.shape(2)};
NDArray<size_t, 3> accumulator(shape, 0);
NDArray<size_t, 3> count(shape, 0);
// Combine the results from the futures
for (auto &f : futures) {
auto [acc, cnt] = f.get();
accumulator += acc;
count += cnt;
}
// Will move to a NDArray<T, 3 - static_cast<ssize_t>(only_gain0)>
// if only_gain0 is true
return safe_divide<T>(accumulator, count);
}
/**
* @brief Count the number of switching pixels in the raw data.
* This function counts, per pixel, the number of frames in which the pixel
* switched to G1 or G2, and returns an NDArray with that count for every pixel.
* @param raw_data The NDView containing the raw data
* @return An NDArray with the number of switching pixels per pixel
*/
NDArray<int, 2> count_switching_pixels(NDView<uint16_t, 3> raw_data);
/**
* @brief Count the number of switching pixels in the raw data.
* This function counts, per pixel, the number of frames in which the pixel
* switched to G1 or G2, and returns an NDArray with that count for every pixel.
* @param raw_data The NDView containing the raw data
* @param n_threads The number of threads to use for parallel processing
* @return An NDArray with the number of switching pixels per pixel
*/
NDArray<int, 2> count_switching_pixels(NDView<uint16_t, 3> raw_data,
ssize_t n_threads);
template <typename T>
auto calculate_pedestal_g0(NDView<uint16_t, 3> raw_data, ssize_t n_threads) {
return calculate_pedestal<T, true>(raw_data, n_threads);
}
} // namespace aare
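
A compact sketch tying the new calibration helpers together: a G0-only pedestal, the switching-pixel count, and apply_calibration with a 2D pedestal/calibration so that only G0 is converted and switched pixels are zeroed. The array shapes and the flat calibration constant are placeholders; real calibration data would normally be loaded from file.

#include "aare/NDArray.hpp"
#include "aare/calibration.hpp"
#include <array>

using namespace aare;

int main() {
    // Raw data: (frames, rows, cols), gain bits in the two MSBs of each word
    NDArray<uint16_t, 3> raw(std::array<ssize_t, 3>{100, 64, 128}, 0);

    // G0-only pedestal (G1/G2 samples are ignored) and switching statistics
    NDArray<double, 2> ped_g0 = calculate_pedestal_g0<double>(raw.view(), 4);
    NDArray<int, 2> switched = count_switching_pixels(raw.view(), 4);

    // Placeholder flat G0 calibration in ADU/keV
    NDArray<double, 2> cal_g0(std::array<ssize_t, 2>{64, 128}, 41.0);

    // With a 2D pedestal/calibration only G0 is converted, switched pixels -> 0
    NDArray<double, 3> data(raw.shape());
    apply_calibration<double, 2>(data.view(), raw.view(), ped_g0.view(),
                                 cal_g0.view(), 4);
    return (switched(0, 0) == 0) ? 0 : 1;
}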

View File

@@ -1,7 +1,10 @@
#pragma once
#include <thread>
#include <utility>
#include <vector>
#include "aare/utils/task.hpp"
namespace aare {
template <typename F>
@@ -15,4 +18,17 @@ void RunInParallel(F func, const std::vector<std::pair<int, int>> &tasks) {
thread.join();
}
}
template <typename T>
std::vector<NDView<T,3>> make_subviews(NDView<T, 3> &data, ssize_t n_threads) {
std::vector<NDView<T, 3>> subviews;
subviews.reserve(n_threads);
auto limits = split_task(0, data.shape(0), n_threads);
for (const auto &lim : limits) {
subviews.push_back(data.sub_view(lim.first, lim.second));
}
return subviews;
}
} // namespace aare

View File

@@ -1,4 +1,4 @@
#pragma once
#include <utility>
#include <vector>

View File

@@ -26,24 +26,24 @@ def _get_class(name, cluster_size, dtype):
def ClusterFinder(image_size, cluster_size, n_sigma=5, dtype = np.int32, capacity = 1024):
def ClusterFinder(image_size, saved_cluster_size, checked_cluster_size, n_sigma=5, dtype = np.int32, capacity = 1024, chunk_size=50000, n_chunks = 10):
"""
Factory function to create a ClusterFinder object. Provides a cleaner syntax for
the templated ClusterFinder in C++.
"""
cls = _get_class("ClusterFinder", cluster_size, dtype)
return cls(image_size, n_sigma=n_sigma, capacity=capacity)
cls = _get_class("ClusterFinder", saved_cluster_size, dtype)
return cls(image_size, n_sigma=n_sigma, capacity=capacity, chunk_size=chunk_size, n_chunks=n_chunks, cluster_size_x=checked_cluster_size[0], cluster_size_y=checked_cluster_size[1])
def ClusterFinderMT(image_size, cluster_size = (3,3), dtype=np.int32, n_sigma=5, capacity = 1024, n_threads = 3):
def ClusterFinderMT(image_size, saved_cluster_size = (3,3), checked_cluster_size = (3,3), dtype=np.int32, n_sigma=5, capacity = 1024, n_threads = 3, chunk_size=50000, n_chunks = 10):
"""
Factory function to create a ClusterFinderMT object. Provides a cleaner syntax for
the templated ClusterFinderMT in C++.
"""
cls = _get_class("ClusterFinderMT", cluster_size, dtype)
return cls(image_size, n_sigma=n_sigma, capacity=capacity, n_threads=n_threads)
cls = _get_class("ClusterFinderMT", saved_cluster_size, dtype)
return cls(image_size, n_sigma=n_sigma, capacity=capacity, n_threads=n_threads, chunk_size=chunk_size, n_chunks=n_chunks, cluster_size_x=checked_cluster_size[0], cluster_size_y=checked_cluster_size[1])

View File

@@ -32,6 +32,7 @@ from .utils import random_pixels, random_pixel, flat_list, add_colorbar
from .func import *
from .calibration import *
from ._aare import apply_calibration
from ._aare import apply_calibration, count_switching_pixels
from ._aare import calculate_pedestal, calculate_pedestal_float, calculate_pedestal_g0, calculate_pedestal_g0_float
from ._aare import VarClusterFinder

View File

@@ -30,14 +30,30 @@ void define_ClusterFinder(py::module &m, const std::string &typestr) {
py::class_<ClusterFinder<ClusterType, uint16_t, pd_type>>(
m, class_name.c_str())
.def(py::init<Shape<2>, pd_type, size_t>(), py::arg("image_size"),
py::arg("n_sigma") = 5.0, py::arg("capacity") = 1'000'000)
.def(py::init<Shape<2>, pd_type, size_t, uint32_t, uint32_t, uint32_t, uint32_t>(),
py::arg("image_size"), py::arg("n_sigma") = 5.0, py::arg("capacity") = 1'000'000,
py::arg("chunk_size") = 50'000, py::arg("n_chunks") = 10,
py::arg("cluster_size_x") = 3, py::arg("cluster_size_y") = 3)
.def("push_pedestal_frame",
[](ClusterFinder<ClusterType, uint16_t, pd_type> &self,
py::array_t<uint16_t> frame) {
auto view = make_view_2d(frame);
self.push_pedestal_frame(view);
})
.def("push_pedestal_mean",
[](ClusterFinder<ClusterType, uint16_t, pd_type> &self,
py::array_t<double> frame, uint32_t chunk_number) {
auto view = make_view_2d(frame);
self.push_pedestal_mean(view, chunk_number);
})
.def("push_pedestal_std",
[](ClusterFinder<ClusterType, uint16_t, pd_type> &self,
py::array_t<double> frame, uint32_t chunk_number) {
auto view = make_view_2d(frame);
self.push_pedestal_std(view, chunk_number);
})
.def("clear_pedestal",
&ClusterFinder<ClusterType, uint16_t, pd_type>::clear_pedestal)
.def_property_readonly(

View File

@@ -30,15 +30,31 @@ void define_ClusterFinderMT(py::module &m, const std::string &typestr) {
py::class_<ClusterFinderMT<ClusterType, uint16_t, pd_type>>(
m, class_name.c_str())
.def(py::init<Shape<2>, pd_type, size_t, size_t>(),
.def(py::init<Shape<2>, pd_type, size_t, size_t, uint32_t, uint32_t, uint32_t, uint32_t>(),
py::arg("image_size"), py::arg("n_sigma") = 5.0,
py::arg("capacity") = 2048, py::arg("n_threads") = 3)
py::arg("capacity") = 2048, py::arg("n_threads") = 3,
py::arg("chunk_size") = 50'000, py::arg("n_chunks") = 10,
py::arg("cluster_size_x") = 3, py::arg("cluster_size_y") = 3)
.def("push_pedestal_frame",
[](ClusterFinderMT<ClusterType, uint16_t, pd_type> &self,
py::array_t<uint16_t> frame) {
auto view = make_view_2d(frame);
self.push_pedestal_frame(view);
})
.def("push_pedestal_mean",
[](ClusterFinderMT<ClusterType, uint16_t, pd_type> &self,
py::array_t<double> frame, uint32_t chunk_number) {
auto view = make_view_2d(frame);
self.push_pedestal_mean(view, chunk_number);
})
.def("push_pedestal_std",
[](ClusterFinderMT<ClusterType, uint16_t, pd_type> &self,
py::array_t<double> frame, uint32_t chunk_number) {
auto view = make_view_2d(frame);
self.push_pedestal_std(view, chunk_number);
})
.def(
"find_clusters",
[](ClusterFinderMT<ClusterType, uint16_t, pd_type> &self,

View File

@@ -17,27 +17,137 @@ py::array_t<DataType> pybind_apply_calibration(
calibration,
int n_threads = 4) {
auto data_span = make_view_3d(data);
auto ped = make_view_3d(pedestal);
auto cal = make_view_3d(calibration);
auto data_span = make_view_3d(data); // data is always 3D
/* No pointer is passed, so NumPy will allocate the buffer */
auto result = py::array_t<DataType>(data_span.shape());
auto res = make_view_3d(result);
aare::apply_calibration<DataType>(res, data_span, ped, cal, n_threads);
if (data.ndim() == 3 && pedestal.ndim() == 3 && calibration.ndim() == 3) {
auto ped = make_view_3d(pedestal);
auto cal = make_view_3d(calibration);
aare::apply_calibration<DataType, 3>(res, data_span, ped, cal,
n_threads);
} else if (data.ndim() == 3 && pedestal.ndim() == 2 &&
calibration.ndim() == 2) {
auto ped = make_view_2d(pedestal);
auto cal = make_view_2d(calibration);
aare::apply_calibration<DataType, 2>(res, data_span, ped, cal,
n_threads);
} else {
throw std::runtime_error(
"Invalid number of dimensions for data, pedestal or calibration");
}
return result;
}
py::array_t<int> pybind_count_switching_pixels(
py::array_t<uint16_t, py::array::c_style | py::array::forcecast> data,
ssize_t n_threads = 4) {
auto data_span = make_view_3d(data);
auto arr = new NDArray<int, 2>{};
*arr = aare::count_switching_pixels(data_span, n_threads);
return return_image_data(arr);
}
template <typename T>
py::array_t<T> pybind_calculate_pedestal(
py::array_t<uint16_t, py::array::c_style | py::array::forcecast> data,
ssize_t n_threads) {
auto data_span = make_view_3d(data);
auto arr = new NDArray<T, 3>{};
*arr = aare::calculate_pedestal<T, false>(data_span, n_threads);
return return_image_data(arr);
}
template <typename T>
py::array_t<T> pybind_calculate_pedestal_g0(
py::array_t<uint16_t, py::array::c_style | py::array::forcecast> data,
ssize_t n_threads) {
auto data_span = make_view_3d(data);
auto arr = new NDArray<T, 2>{};
*arr = aare::calculate_pedestal<T, true>(data_span, n_threads);
return return_image_data(arr);
}
void bind_calibration(py::module &m) {
m.def("apply_calibration", &pybind_apply_calibration<double>,
py::arg("raw_data").noconvert(), py::kw_only(),
py::arg("pd").noconvert(), py::arg("cal").noconvert(),
py::arg("n_threads") = 4);
m.def("apply_calibration", &pybind_apply_calibration<float>,
py::arg("raw_data").noconvert(), py::kw_only(),
py::arg("pd").noconvert(), py::arg("cal").noconvert(),
py::arg("n_threads") = 4);
m.def("apply_calibration", &pybind_apply_calibration<double>,
m.def("count_switching_pixels", &pybind_count_switching_pixels,
R"(
Count the number of times each pixel switches to G1 or G2.
Parameters
----------
raw_data : array_like
3D array of shape (frames, rows, cols) to count the switching pixels from.
n_threads : int
The number of threads to use for the calculation.
)",
py::arg("raw_data").noconvert(), py::kw_only(),
py::arg("pd").noconvert(), py::arg("cal").noconvert(),
py::arg("n_threads") = 4);
m.def("calculate_pedestal", &pybind_calculate_pedestal<double>,
R"(
Calculate the pedestal for all three gains and return the result as a 3D array of doubles.
Parameters
----------
raw_data : array_like
3D array of shape (frames, rows, cols) to calculate the pedestal from.
Needs to contain data for all three gains (G0, G1, G2).
n_threads : int
The number of threads to use for the calculation.
)",
py::arg("raw_data").noconvert(), py::arg("n_threads") = 4);
m.def("calculate_pedestal_float", &pybind_calculate_pedestal<float>,
R"(
Same as `calculate_pedestal` but returns a 3D array of floats.
Parameters
----------
raw_data : array_like
3D array of shape (frames, rows, cols) to calculate the pedestal from.
Needs to contain data for all three gains (G0, G1, G2).
n_threads : int
The number of threads to use for the calculation.
)",
py::arg("raw_data").noconvert(), py::arg("n_threads") = 4);
m.def("calculate_pedestal_g0", &pybind_calculate_pedestal_g0<double>,
R"(
Calculate the pedestal for G0 and return the result as a 2D array of doubles.
Pixels in G1 and G2 are ignored.
Parameters
----------
raw_data : array_like
3D array of shape (frames, rows, cols) to calculate the pedestal from.
n_threads : int
The number of threads to use for the calculation.
)",
py::arg("raw_data").noconvert(), py::arg("n_threads") = 4);
m.def("calculate_pedestal_g0_float", &pybind_calculate_pedestal_g0<float>,
R"(
Same as `calculate_pedestal_g0` but returns a 2D array of floats.
Parameters
----------
raw_data : array_like
3D array of shape (frames, rows, cols) to calculate the pedestal from.
n_threads : int
The number of threads to use for the calculation.
)",
py::arg("raw_data").noconvert(), py::arg("n_threads") = 4);
}

View File

@@ -1,6 +1,7 @@
import pytest
import numpy as np
from aare import apply_calibration
import aare
def test_apply_calibration_small_data():
# The raw data consists of 10 4x5 images
@@ -27,7 +28,7 @@ def test_apply_calibration_small_data():
data = apply_calibration(raw, pd = pedestal, cal = calibration)
data = aare.apply_calibration(raw, pd = pedestal, cal = calibration)
# The formula that is applied is:
@@ -41,3 +42,94 @@ def test_apply_calibration_small_data():
assert data[2,2,2] == 0
assert data[0,1,1] == 0
assert data[1,3,0] == 0
@pytest.fixture
def raw_data_3x2x2():
raw = np.zeros((3, 2, 2), dtype=np.uint16)
raw[0, 0, 0] = 100
raw[1,0, 0] = 200
raw[2, 0, 0] = 300
raw[0, 0, 1] = (1<<14) + 100
raw[1, 0, 1] = (1<<14) + 200
raw[2, 0, 1] = (1<<14) + 300
raw[0, 1, 0] = (1<<14) + 37
raw[1, 1, 0] = 38
raw[2, 1, 0] = (3<<14) + 39
raw[0, 1, 1] = (3<<14) + 100
raw[1, 1, 1] = (3<<14) + 200
raw[2, 1, 1] = (3<<14) + 300
return raw
def test_calculate_pedestal(raw_data_3x2x2):
# Calculate the pedestal
pd = aare.calculate_pedestal(raw_data_3x2x2)
assert pd.shape == (3, 2, 2)
assert pd.dtype == np.float64
assert pd[0, 0, 0] == 200
assert pd[1, 0, 0] == 0
assert pd[2, 0, 0] == 0
assert pd[0, 0, 1] == 0
assert pd[1, 0, 1] == 200
assert pd[2, 0, 1] == 0
assert pd[0, 1, 0] == 38
assert pd[1, 1, 0] == 37
assert pd[2, 1, 0] == 39
assert pd[0, 1, 1] == 0
assert pd[1, 1, 1] == 0
assert pd[2, 1, 1] == 200
def test_calculate_pedestal_float(raw_data_3x2x2):
#results should be the same for float
pd2 = aare.calculate_pedestal_float(raw_data_3x2x2)
assert pd2.shape == (3, 2, 2)
assert pd2.dtype == np.float32
assert pd2[0, 0, 0] == 200
assert pd2[1, 0, 0] == 0
assert pd2[2, 0, 0] == 0
assert pd2[0, 0, 1] == 0
assert pd2[1, 0, 1] == 200
assert pd2[2, 0, 1] == 0
assert pd2[0, 1, 0] == 38
assert pd2[1, 1, 0] == 37
assert pd2[2, 1, 0] == 39
assert pd2[0, 1, 1] == 0
assert pd2[1, 1, 1] == 0
assert pd2[2, 1, 1] == 200
def test_calculate_pedestal_g0(raw_data_3x2x2):
pd = aare.calculate_pedestal_g0(raw_data_3x2x2)
assert pd.shape == (2, 2)
assert pd.dtype == np.float64
assert pd[0, 0] == 200
assert pd[1, 0] == 38
assert pd[0, 1] == 0
assert pd[1, 1] == 0
def test_calculate_pedestal_g0_float(raw_data_3x2x2):
pd = aare.calculate_pedestal_g0_float(raw_data_3x2x2)
assert pd.shape == (2, 2)
assert pd.dtype == np.float32
assert pd[0, 0] == 200
assert pd[1, 0] == 38
assert pd[0, 1] == 0
assert pd[1, 1] == 0
def test_count_switching_pixels(raw_data_3x2x2):
# Count the number of pixels that switched gain
count = aare.count_switching_pixels(raw_data_3x2x2)
assert count.shape == (2, 2)
assert count.sum() == 8
assert count[0, 0] == 0
assert count[1, 0] == 2
assert count[0, 1] == 3
assert count[1, 1] == 3

View File

@@ -25,13 +25,13 @@ TEST_CASE("Construct from an NDView") {
REQUIRE(image.data() != view.data());
for (uint32_t i = 0; i < image.size(); ++i) {
REQUIRE(image(i) == view(i));
REQUIRE(image[i] == view[i]);
}
// Changing the image doesn't change the view
image = 43;
for (uint32_t i = 0; i < image.size(); ++i) {
REQUIRE(image(i) != view(i));
REQUIRE(image[i] != view[i]);
}
}
@@ -427,4 +427,30 @@ TEST_CASE("Construct an NDArray from an std::array") {
for (uint32_t i = 0; i < a.size(); ++i) {
REQUIRE(a(i) == b[i]);
}
}
}
TEST_CASE("Move construct from an array with Ndim + 1") {
NDArray<int, 3> a({{1,2,2}}, 0);
a(0, 0, 0) = 1;
a(0, 0, 1) = 2;
a(0, 1, 0) = 3;
a(0, 1, 1) = 4;
NDArray<int, 2> b(std::move(a));
REQUIRE(b.shape() == Shape<2>{2,2});
REQUIRE(b.size() == 4);
REQUIRE(b(0, 0) == 1);
REQUIRE(b(0, 1) == 2);
REQUIRE(b(1, 0) == 3);
REQUIRE(b(1, 1) == 4);
}
TEST_CASE("Move construct from an array with Ndim + 1 throws on size mismatch") {
NDArray<int, 3> a({{2,2,2}}, 0);
REQUIRE_THROWS(NDArray<int, 2>(std::move(a)));
}

src/calibration.cpp (new file, 44 lines)
View File

@@ -0,0 +1,44 @@
#include "aare/calibration.hpp"
namespace aare {
NDArray<int, 2> count_switching_pixels(NDView<uint16_t, 3> raw_data) {
NDArray<int, 2> switched(
std::array<ssize_t, 2>{raw_data.shape(1), raw_data.shape(2)}, 0);
for (int frame_nr = 0; frame_nr != raw_data.shape(0); ++frame_nr) {
for (int row = 0; row != raw_data.shape(1); ++row) {
for (int col = 0; col != raw_data.shape(2); ++col) {
auto [value, gain] =
get_value_and_gain(raw_data(frame_nr, row, col));
if (gain != 0) {
switched(row, col) += 1;
}
}
}
}
return switched;
}
NDArray<int, 2> count_switching_pixels(NDView<uint16_t, 3> raw_data,
ssize_t n_threads) {
NDArray<int, 2> switched(
std::array<ssize_t, 2>{raw_data.shape(1), raw_data.shape(2)}, 0);
std::vector<std::future<NDArray<int, 2>>> futures;
futures.reserve(n_threads);
auto subviews = make_subviews(raw_data, n_threads);
for (auto view : subviews) {
futures.push_back(
std::async(static_cast<NDArray<int, 2> (*)(NDView<uint16_t, 3>)>(
&count_switching_pixels),
view));
}
for (auto &f : futures) {
switched += f.get();
}
return switched;
}
} // namespace aare

src/calibration.test.cpp (new file, 49 lines)
View File

@@ -0,0 +1,49 @@
/************************************************
* @file calibration.test.cpp
* @short test cases for pedestal calculation and the calibration helpers
***********************************************/
#include "aare/calibration.hpp"
// #include "catch.hpp"
#include <array>
#include <catch2/catch_all.hpp>
#include <catch2/catch_test_macros.hpp>
using namespace aare;
TEST_CASE("Test Pedestal Generation", "[.calibration]") {
NDArray<uint16_t, 3> raw(std::array<ssize_t, 3>{3, 2, 2}, 0);
// gain 0
raw(0, 0, 0) = 100;
raw(1, 0, 0) = 200;
raw(2, 0, 0) = 300;
// gain 1
raw(0, 0, 1) = (1 << 14) + 100;
raw(1, 0, 1) = (1 << 14) + 200;
raw(2, 0, 1) = (1 << 14) + 300;
raw(0, 1, 0) = (1 << 14) + 37;
raw(1, 1, 0) = 38;
raw(2, 1, 0) = (3 << 14) + 39;
// gain 2
raw(0, 1, 1) = (3 << 14) + 100;
raw(1, 1, 1) = (3 << 14) + 200;
raw(2, 1, 1) = (3 << 14) + 300;
auto pedestal = calculate_pedestal<double>(raw.view(), 4);
REQUIRE(pedestal.size() == raw.size());
CHECK(pedestal(0, 0, 0) == 200);
CHECK(pedestal(1, 0, 0) == 0);
CHECK(pedestal(1, 0, 1) == 200);
auto pedestal_gain0 = calculate_pedestal_g0<double>(raw.view(), 4);
REQUIRE(pedestal_gain0.size() == 4);
CHECK(pedestal_gain0(0, 0) == 200);
CHECK(pedestal_gain0(1, 0) == 38);
}