23 Commits

Author SHA1 Message Date
eb6862ff99 changed name of GainMap to InvertedGainMap
Some checks failed
Build on RHEL9 / build (push) Successful in 2m16s
Build on RHEL8 / build (push) Failing after 2m34s
2025-04-25 12:03:59 +02:00
f06e722dce changes from PR review 2025-04-25 11:38:56 +02:00
7b5e32a824 Api extra (#166)
Changes to be able to run the example notebooks: 

- Invert the gain map on setting (multiplication is faster, but the user
supplies ADU/energy); see the sketch after this entry
- Cast after applying the gain map so as not to lose precision (important
for int32 clusters)
- "factor" for ClusterFileSink
- Cluster size made available to be able to create the right file sink
2025-04-25 10:31:16 +02:00
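A minimal sketch of the intended usage, based on the ClusterFile API shown further down in this diff (header paths, file name, image shape and gain values are assumptions for illustration only):

#include "aare/Cluster.hpp"
#include "aare/ClusterFile.hpp"
#include "aare/NDView.hpp"
#include <cstdint>
#include <vector>

using namespace aare;

int main() {
    // The gain map is supplied in ADU/energy; internally it is stored as an
    // InvertedGainMap so clusters can be scaled by multiplication.
    std::vector<double> gain(512 * 1024, 42.0); // illustrative ADU/keV values
    NDView<double, 2> gain_view(gain.data(), {512, 1024});

    ClusterFile<Cluster<int32_t, 3, 3>> f("clusters.clust"); // mode defaults to "r"
    f.set_gain_map(gain_view);             // inverted once on setting
    auto clusters = f.read_clusters(1000); // values cast back to int32_t after scaling
    // clusters may hold fewer than 1000 entries if EOF is reached
}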
86d343f5f5 merged with developer
Some checks failed
Build on RHEL9 / build (push) Successful in 2m9s
Build on RHEL8 / build (push) Failing after 2m32s
2025-04-23 11:45:04 +02:00
129e7e9f9d Merge branch 'developer' of github.com:slsdetectorgroup/aare into developer
All checks were successful
Build on RHEL9 / build (push) Successful in 1m59s
Build on RHEL8 / build (push) Successful in 2m29s
2025-04-22 16:24:32 +02:00
58c934d9cf added mpl to conda specs 2025-04-22 16:24:15 +02:00
4088b0889d Merge branch 'main' into developer 2025-04-22 16:18:48 +02:00
d5f8daf194 removed debug option in CMakeLists
All checks were successful
Build on RHEL9 / buildh (push) Successful in 2m36s
2025-04-22 16:16:31 +02:00
c6e8e5f6a1 inverted gain map 2025-04-22 16:16:27 +02:00
b501c31e38 added missed commit 2025-04-22 15:22:47 +02:00
326941e2b4 Custom base for decoding ADC data (#163)
New function apply_custom_weights (can we find a better name?) that takes
a uint16 and an NDView<double,1> of bases for the conversion. Each supplied
weight is used as the base (instead of 2) when converting the corresponding
bit to a double; see the sketch after this entry.

---------

Co-authored-by: siebsi <sieb.patr@gmail.com>
2025-04-22 15:20:46 +02:00
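Conceptually the conversion replaces the fixed base 2 of a binary value with a per-bit base. A minimal sketch of the idea (an illustration only, not the library implementation; the real entry point is aare::apply_custom_weights, declared further down in decode.hpp):

#include <cmath>
#include <cstdint>
#include <vector>

// For every bit i that is set in `input`, add weights[i]^i instead of the
// usual binary weight 2^i. Assumes weights.size() >= 16, matching the
// requirement documented for aare::apply_custom_weights.
double weighted_decode(uint16_t input, const std::vector<double> &weights) {
    double result = 0.0;
    for (int i = 0; i < 16; ++i) {
        if (input & (1u << i))
            result += std::pow(weights[i], i);
    }
    return result;
}
// With all weights equal to 2.0 this reduces to the plain integer value of input.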
84aafa75f6 Building wheels and uploading to pypi (#160)
All checks were successful
Build on RHEL9 / build (push) Successful in 1m56s
Build on RHEL8 / build (push) Successful in 2m13s
Still to be resolved in another PR: 

- Consistent versioning across compiled code, conda and pypi
2025-04-22 08:36:34 +02:00
177459c98a added multithreaded cluster finder test
All checks were successful
Build on RHEL9 / buildh (push) Successful in 2m20s
2025-04-17 17:09:53 +02:00
c49a2fdf8e removed cluster_2x2 and cluster3x3 specializations
All checks were successful
Build on RHEL9 / buildh (push) Successful in 1m58s
2025-04-16 16:40:42 +02:00
14211047ff added function wrapper around ClusterFinderMT and ClusterCollector to construct objects 2025-04-16 14:22:44 +02:00
acd9d5d487 moved parts of ClusterFile implementation into declaration
All checks were successful
Build on RHEL9 / buildh (push) Successful in 1m55s
2025-04-15 15:15:34 +02:00
d4050ec557 enum is now enum class 2025-04-15 14:57:25 +02:00
fca9d5d2fa replaced extract template parameters 2025-04-15 14:40:09 +02:00
1174f7f434 fixed calculate eta 2025-04-15 13:18:25 +02:00
2bb7d360bf Adding more tests, fixing hitmap and reading with cuts (#161)
- Fix for hitmap
- Fix for reading clusters with cut
- Added more tests around eta
- Added factory function for creating the cluster finder
2025-04-15 12:25:01 +02:00
a59e9656be Making RawSubFile usable from Python (#158)
All checks were successful
Build on RHEL8 / build (push) Successful in 1m55s
Build on RHEL9 / build (push) Successful in 1m44s
- Removed a printout left from debugging
- Also return the header when reading
- Added read_n
- Check for errors in the ifstream
2025-04-11 16:54:21 +02:00
6e4db45b57 Activated RH8 build on PSI gitea (#155)
All checks were successful
Build on RHEL8 / build (push) Successful in 1m56s
Build on RHEL9 / build (push) Successful in 1m44s
2025-04-10 10:17:16 +02:00
e1533282f1 Cluster cuts (#146)
Some checks failed
Build the package using cmake then documentation / build (ubuntu-latest, 3.12) (push) Failing after 43s
Co-authored-by: Patrick <patrick.sieberer@psi.ch>
Co-authored-by: JulianHeymes <julian.heymes@psi.ch>
Co-authored-by: Dhanya Thattil <dhanya.thattil@psi.ch>
Co-authored-by: Xiangyu Xie <45243914+xiangyuxie@users.noreply.github.com>
Co-authored-by: xiangyu.xie <xiangyu.xie@psi.ch>
2025-04-01 15:15:54 +02:00
41 changed files with 1139 additions and 574 deletions

View File

@ -1,18 +1,24 @@
name: Build on RHEL8
on:
push:
workflow_dispatch:
permissions:
contents: read
jobs:
buildh:
build:
runs-on: "ubuntu-latest"
container:
image: gitea.psi.ch/images/rhel8-developer-gitea-actions
steps:
- uses: actions/checkout@v4
# workaround until actions/checkout@v4 is available for RH8
# - uses: actions/checkout@v4
- name: Clone repository
run: |
echo Cloning ${{ github.ref_name }}
git clone https://${{secrets.GITHUB_TOKEN}}@gitea.psi.ch/${{ github.repository }}.git --branch=${{ github.ref_name }} .
- name: Install dependencies
@ -22,7 +28,7 @@ jobs:
- name: Build library
run: |
mkdir build && cd build
cmake .. -DAARE_PYTHON_BINDINGS=ON -DAARE_TESTS=ON
cmake .. -DAARE_PYTHON_BINDINGS=ON -DAARE_TESTS=ON -DPython_FIND_VIRTUALENV=FIRST
make -j 2
- name: C++ unit tests

View File

@ -8,7 +8,7 @@ permissions:
contents: read
jobs:
buildh:
build:
runs-on: "ubuntu-latest"
container:
image: gitea.psi.ch/images/rhel9-developer-gitea-actions

.github/workflows/build_wheel.yml (new file)
View File

@ -0,0 +1,64 @@
name: Build wheel
on:
workflow_dispatch:
pull_request:
push:
branches:
- main
release:
types:
- published
jobs:
build_wheels:
name: Build wheels on ${{ matrix.os }}
runs-on: ${{ matrix.os }}
strategy:
matrix:
os: [ubuntu-latest,]
steps:
- uses: actions/checkout@v4
- name: Build wheels
run: pipx run cibuildwheel==2.23.0
- uses: actions/upload-artifact@v4
with:
name: cibw-wheels-${{ matrix.os }}-${{ strategy.job-index }}
path: ./wheelhouse/*.whl
build_sdist:
name: Build source distribution
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Build sdist
run: pipx run build --sdist
- uses: actions/upload-artifact@v4
with:
name: cibw-sdist
path: dist/*.tar.gz
upload_pypi:
needs: [build_wheels, build_sdist]
runs-on: ubuntu-latest
environment: pypi
permissions:
id-token: write
if: github.event_name == 'release' && github.event.action == 'published'
# or, alternatively, upload to PyPI on every tag starting with 'v' (remove on: release above to use this)
# if: github.event_name == 'push' && startsWith(github.ref, 'refs/tags/v')
steps:
- uses: actions/download-artifact@v4
with:
# unpacks all CIBW artifacts into dist/
pattern: cibw-*
path: dist
merge-multiple: true
- uses: pypa/gh-action-pypi-publish@release/v1

.gitignore
View File

@ -17,7 +17,8 @@ Testing/
ctbDict.cpp
ctbDict.h
wheelhouse/
dist/
*.pyc
*/__pycache__/*

View File

@ -1,4 +1,4 @@
cmake_minimum_required(VERSION 3.14)
cmake_minimum_required(VERSION 3.15)
project(aare
VERSION 1.0.0
@ -11,6 +11,14 @@ set(CMAKE_CXX_STANDARD 17)
set(CMAKE_CXX_STANDARD_REQUIRED ON)
set(CMAKE_CXX_EXTENSIONS OFF)
execute_process(
COMMAND git log -1 --format=%h
WORKING_DIRECTORY ${CMAKE_CURRENT_LIST_DIR}
OUTPUT_VARIABLE GIT_HASH
OUTPUT_STRIP_TRAILING_WHITESPACE
)
message(STATUS "Building from git hash: ${GIT_HASH}")
if (${CMAKE_VERSION} VERSION_GREATER "3.24")
cmake_policy(SET CMP0135 NEW) #Fetch content download timestamp
endif()
@ -381,7 +389,9 @@ set(SourceFiles
${CMAKE_CURRENT_SOURCE_DIR}/src/RawFile.cpp
${CMAKE_CURRENT_SOURCE_DIR}/src/RawSubFile.cpp
${CMAKE_CURRENT_SOURCE_DIR}/src/RawMasterFile.cpp
${CMAKE_CURRENT_SOURCE_DIR}/src/utils/task.cpp
${CMAKE_CURRENT_SOURCE_DIR}/src/utils/ifstream_helpers.cpp
)
add_library(aare_core STATIC ${SourceFiles})
@ -415,6 +425,7 @@ if(AARE_TESTS)
set(TestSources
${CMAKE_CURRENT_SOURCE_DIR}/src/algorithm.test.cpp
${CMAKE_CURRENT_SOURCE_DIR}/src/defs.test.cpp
${CMAKE_CURRENT_SOURCE_DIR}/src/decode.test.cpp
${CMAKE_CURRENT_SOURCE_DIR}/src/Dtype.test.cpp
${CMAKE_CURRENT_SOURCE_DIR}/src/Frame.test.cpp
${CMAKE_CURRENT_SOURCE_DIR}/src/geo_helpers.test.cpp
@ -426,6 +437,7 @@ if(AARE_TESTS)
${CMAKE_CURRENT_SOURCE_DIR}/src/Cluster.test.cpp
${CMAKE_CURRENT_SOURCE_DIR}/src/CalculateEta.test.cpp
${CMAKE_CURRENT_SOURCE_DIR}/src/ClusterFile.test.cpp
${CMAKE_CURRENT_SOURCE_DIR}/src/ClusterFinderMT.test.cpp
${CMAKE_CURRENT_SOURCE_DIR}/src/Pedestal.test.cpp
${CMAKE_CURRENT_SOURCE_DIR}/src/JungfrauDataFile.test.cpp
${CMAKE_CURRENT_SOURCE_DIR}/src/NumpyFile.test.cpp

View File

@ -1,6 +1,7 @@
package:
name: aare
version: 2025.4.1 #TODO! how to not duplicate this?
version: 2025.4.22 #TODO! how to not duplicate this?
@ -38,6 +39,7 @@ requirements:
run:
- python {{python}}
- numpy {{ numpy }}
- matplotlib
test:

View File

@ -6,14 +6,14 @@
namespace aare {
typedef enum {
enum class corner : int {
cBottomLeft = 0,
cBottomRight = 1,
cTopLeft = 2,
cTopRight = 3
} corner;
};
typedef enum {
enum class pixel : int {
pBottomLeft = 0,
pBottom = 1,
pBottomRight = 2,
@ -23,7 +23,7 @@ typedef enum {
pTopLeft = 6,
pTop = 7,
pTopRight = 8
} pixel;
};
template <typename T> struct Eta2 {
double x;
@ -41,7 +41,7 @@ NDArray<double, 2> calculate_eta2(const ClusterVector<ClusterType> &clusters) {
NDArray<double, 2> eta2({static_cast<int64_t>(clusters.size()), 2});
for (size_t i = 0; i < clusters.size(); i++) {
auto e = calculate_eta2(clusters.at(i));
auto e = calculate_eta2(clusters[i]);
eta2(i, 0) = e.x;
eta2(i, 1) = e.y;
}
@ -64,31 +64,79 @@ calculate_eta2(const Cluster<T, ClusterSizeX, ClusterSizeY, CoordType> &cl) {
eta.sum = max_sum.first;
auto c = max_sum.second;
size_t cluster_center_index =
(ClusterSizeX / 2) + (ClusterSizeY / 2) * ClusterSizeX;
size_t index_bottom_left_max_2x2_subcluster =
(int(c / (ClusterSizeX - 1))) * ClusterSizeX + c % (ClusterSizeX - 1);
if ((cl.data[index_bottom_left_max_2x2_subcluster] +
cl.data[index_bottom_left_max_2x2_subcluster + 1]) != 0)
eta.x = static_cast<double>(
cl.data[index_bottom_left_max_2x2_subcluster + 1]) /
static_cast<double>(
(cl.data[index_bottom_left_max_2x2_subcluster] +
cl.data[index_bottom_left_max_2x2_subcluster + 1]));
// check that cluster center is in max subcluster
if (cluster_center_index != index_bottom_left_max_2x2_subcluster &&
cluster_center_index != index_bottom_left_max_2x2_subcluster + 1 &&
cluster_center_index !=
index_bottom_left_max_2x2_subcluster + ClusterSizeX &&
cluster_center_index !=
index_bottom_left_max_2x2_subcluster + ClusterSizeX + 1)
throw std::runtime_error("Photon center is not in max 2x2_subcluster");
if ((cl.data[index_bottom_left_max_2x2_subcluster] +
cl.data[index_bottom_left_max_2x2_subcluster + ClusterSizeX]) != 0)
eta.y =
if ((cluster_center_index - index_bottom_left_max_2x2_subcluster) %
ClusterSizeX ==
0) {
if ((cl.data[cluster_center_index + 1] +
cl.data[cluster_center_index]) != 0)
eta.x = static_cast<double>(cl.data[cluster_center_index + 1]) /
static_cast<double>((cl.data[cluster_center_index + 1] +
cl.data[cluster_center_index]));
} else {
if ((cl.data[cluster_center_index] +
cl.data[cluster_center_index - 1]) != 0)
eta.x = static_cast<double>(cl.data[cluster_center_index]) /
static_cast<double>((cl.data[cluster_center_index - 1] +
cl.data[cluster_center_index]));
}
if ((cluster_center_index - index_bottom_left_max_2x2_subcluster) /
ClusterSizeX <
1) {
assert(cluster_center_index + ClusterSizeX <
ClusterSizeX * ClusterSizeY); // suppress warning
if ((cl.data[cluster_center_index] +
cl.data[cluster_center_index + ClusterSizeX]) != 0)
eta.y = static_cast<double>(
cl.data[cluster_center_index + ClusterSizeX]) /
static_cast<double>(
cl.data[index_bottom_left_max_2x2_subcluster + ClusterSizeX]) /
(cl.data[cluster_center_index] +
cl.data[cluster_center_index + ClusterSizeX]));
} else {
if ((cl.data[cluster_center_index] +
cl.data[cluster_center_index - ClusterSizeX]) != 0)
eta.y = static_cast<double>(cl.data[cluster_center_index]) /
static_cast<double>(
(cl.data[index_bottom_left_max_2x2_subcluster] +
cl.data[index_bottom_left_max_2x2_subcluster + ClusterSizeX]));
(cl.data[cluster_center_index] +
cl.data[cluster_center_index - ClusterSizeX]));
}
eta.c = c; // TODO only supported for 2x2 and 3x3 clusters -> at least no
// underyling enum class
return eta;
}
// TODO! Look up eta2 calculation - photon center should be top right corner
template <typename T>
Eta2<T> calculate_eta2(const Cluster<T, 2, 2, int16_t> &cl) {
Eta2<T> eta{};
if ((cl.data[0] + cl.data[1]) != 0)
eta.x = static_cast<double>(cl.data[1]) / (cl.data[0] + cl.data[1]);
if ((cl.data[0] + cl.data[2]) != 0)
eta.y = static_cast<double>(cl.data[2]) / (cl.data[0] + cl.data[2]);
eta.sum = cl.sum();
eta.c = static_cast<int>(corner::cBottomLeft); // TODO! This is not correct,
// but need to put something
return eta;
}
// calculates Eta3 for 3x3 cluster based on code from analyze_cluster
// TODO only supported for 3x3 Clusters
template <typename T> Eta2<T> calculate_eta3(const Cluster<T, 3, 3> &cl) {

View File

@ -16,28 +16,43 @@
namespace aare {
template <typename T, uint8_t ClusterSizeX, uint8_t ClusterSizeY,
typename CoordType = int16_t>
constexpr bool is_valid_cluster =
std::is_arithmetic_v<T> && std::is_integral_v<CoordType> &&
(ClusterSizeX > 0) && (ClusterSizeY > 0);
// requires clause c++20 maybe update
template <typename T, uint8_t ClusterSizeX, uint8_t ClusterSizeY,
typename CoordType = int16_t,
typename Enable = std::enable_if_t<
is_valid_cluster<T, ClusterSizeX, ClusterSizeY, CoordType>>>
typename CoordType = int16_t>
struct Cluster {
static_assert(std::is_arithmetic_v<T>, "T needs to be an arithmetic type");
static_assert(std::is_integral_v<CoordType>,
"CoordType needs to be an integral type");
static_assert(ClusterSizeX > 0 && ClusterSizeY > 0,
"Cluster sizes must be bigger than zero");
CoordType x;
CoordType y;
T data[ClusterSizeX * ClusterSizeY];
std::array<T, ClusterSizeX * ClusterSizeY> data;
T sum() const {
return std::accumulate(data, data + ClusterSizeX * ClusterSizeY, 0);
}
static constexpr uint8_t cluster_size_x = ClusterSizeX;
static constexpr uint8_t cluster_size_y = ClusterSizeY;
using value_type = T;
using coord_type = CoordType;
T sum() const { return std::accumulate(data.begin(), data.end(), T{}); }
std::pair<T, int> max_sum_2x2() const {
if constexpr (cluster_size_x == 3 && cluster_size_y == 3) {
std::array<T, 4> sum_2x2_subclusters;
sum_2x2_subclusters[0] = data[0] + data[1] + data[3] + data[4];
sum_2x2_subclusters[1] = data[1] + data[2] + data[4] + data[5];
sum_2x2_subclusters[2] = data[3] + data[4] + data[6] + data[7];
sum_2x2_subclusters[3] = data[4] + data[5] + data[7] + data[8];
int index = std::max_element(sum_2x2_subclusters.begin(),
sum_2x2_subclusters.end()) -
sum_2x2_subclusters.begin();
return std::make_pair(sum_2x2_subclusters[index], index);
} else if constexpr (cluster_size_x == 2 && cluster_size_y == 2) {
return std::make_pair(data[0] + data[1] + data[2] + data[3], 0);
} else {
constexpr size_t num_2x2_subclusters =
(ClusterSizeX - 1) * (ClusterSizeY - 1);
@ -56,40 +71,6 @@ struct Cluster {
sum_2x2_subcluster.begin();
return std::make_pair(sum_2x2_subcluster[index], index);
}
};
// Specialization for 2x2 clusters (only one sum exists)
template <typename T> struct Cluster<T, 2, 2, int16_t> {
int16_t x;
int16_t y;
T data[4];
T sum() const { return std::accumulate(data, data + 4, 0); }
std::pair<T, int> max_sum_2x2() const {
return std::make_pair(data[0] + data[1] + data[2] + data[3],
0); // Only one possible 2x2 sum
}
};
// Specialization for 3x3 clusters
template <typename T> struct Cluster<T, 3, 3, int16_t> {
int16_t x;
int16_t y;
T data[9];
T sum() const { return std::accumulate(data, data + 9, 0); }
std::pair<T, int> max_sum_2x2() const {
std::array<T, 4> sum_2x2_subclusters;
sum_2x2_subclusters[0] = data[0] + data[1] + data[3] + data[4];
sum_2x2_subclusters[1] = data[1] + data[2] + data[4] + data[5];
sum_2x2_subclusters[2] = data[3] + data[4] + data[6] + data[7];
sum_2x2_subclusters[3] = data[4] + data[5] + data[7] + data[8];
int index = std::max_element(sum_2x2_subclusters.begin(),
sum_2x2_subclusters.end()) -
sum_2x2_subclusters.begin();
return std::make_pair(sum_2x2_subclusters[index], index);
}
};
@ -102,20 +83,4 @@ struct is_cluster<Cluster<T, X, Y, CoordType>> : std::true_type {}; // Cluster
template <typename T> constexpr bool is_cluster_v = is_cluster<T>::value;
template <typename ClusterType,
typename = std::enable_if_t<is_cluster_v<ClusterType>>>
struct extract_template_arguments; // Forward declaration
// helper struct to extract template argument
template <typename T, uint8_t ClusterSizeX, uint8_t ClusterSizeY,
typename CoordType>
struct extract_template_arguments<
Cluster<T, ClusterSizeX, ClusterSizeY, CoordType>> {
using value_type = T;
static constexpr int cluster_size_x = ClusterSizeX;
static constexpr int cluster_size_y = ClusterSizeY;
using coordtype = CoordType;
};
} // namespace aare

View File

@ -37,7 +37,11 @@ class ClusterCollector {
public:
ClusterCollector(ClusterFinderMT<ClusterType, uint16_t, double> *source) {
m_source = source->sink();
m_thread = std::thread(&ClusterCollector::process, this);
m_thread =
std::thread(&ClusterCollector::process,
this); // only one process does that so why isnt it
// automatically written to m_cluster in collect
// - instead of writing first to m_sink?
}
void stop() {
m_stop_requested = true;

View File

@ -46,8 +46,8 @@ class ClusterFile {
std::optional<ROI> m_roi; /*Region of interest, will be applied if set*/
std::optional<NDArray<int32_t, 2>>
m_noise_map; /*Noise map to cut photons, will be applied if set*/
std::optional<GainMap> m_gain_map; /*Gain map to apply to the clusters, will
be applied if set*/
std::optional<InvertedGainMap> m_gain_map; /*Gain map to apply to the
clusters, will be applied if set*/
public:
/**
@ -60,80 +60,8 @@ class ClusterFile {
* @throws std::runtime_error if the file could not be opened
*/
ClusterFile(const std::filesystem::path &fname, size_t chunk_size = 1000,
const std::string &mode = "r");
const std::string &mode = "r")
~ClusterFile();
/**
* @brief Read n_clusters clusters from the file discarding frame numbers.
* If EOF is reached the returned vector will have less than n_clusters
* clusters
*/
ClusterVector<ClusterType> read_clusters(size_t n_clusters);
/**
* @brief Read a single frame from the file and return the clusters. The
* cluster vector will have the frame number set.
* @throws std::runtime_error if the file is not opened for reading or the
* file pointer not at the beginning of a frame
*/
ClusterVector<ClusterType> read_frame();
void write_frame(const ClusterVector<ClusterType> &clusters);
/**
* @brief Return the chunk size
*/
size_t chunk_size() const { return m_chunk_size; }
/**
* @brief Set the region of interest to use when reading clusters. If set
* only clusters within the ROI will be read.
*/
void set_roi(ROI roi);
/**
* @brief Set the noise map to use when reading clusters. If set clusters
* below the noise level will be discarded. Selection criteria one of:
* Central pixel above noise, highest 2x2 sum above 2 * noise, total sum
* above 3 * noise.
*/
void set_noise_map(const NDView<int32_t, 2> noise_map);
/**
* @brief Set the gain map to use when reading clusters. If set the gain map
* will be applied to the clusters that pass ROI and noise_map selection.
*/
void set_gain_map(const NDView<double, 2> gain_map);
void set_gain_map(const GainMap &gain_map);
void set_gain_map(const GainMap &&gain_map);
/**
* @brief Close the file. If not closed the file will be closed in the
* destructor
*/
void close();
/** @brief Open the file in specific mode
*
*/
void open(const std::string &mode);
private:
ClusterVector<ClusterType> read_clusters_with_cut(size_t n_clusters);
ClusterVector<ClusterType> read_clusters_without_cut(size_t n_clusters);
ClusterVector<ClusterType> read_frame_with_cut();
ClusterVector<ClusterType> read_frame_without_cut();
bool is_selected(ClusterType &cl);
ClusterType read_one_cluster();
};
template <typename ClusterType, typename Enable>
ClusterFile<ClusterType, Enable>::ClusterFile(
const std::filesystem::path &fname, size_t chunk_size,
const std::string &mode)
: m_filename(fname.string()), m_chunk_size(chunk_size), m_mode(mode) {
if (mode == "r") {
@ -157,23 +85,112 @@ ClusterFile<ClusterType, Enable>::ClusterFile(
} else {
throw std::runtime_error("Unsupported mode: " + mode);
}
}
}
template <typename ClusterType, typename Enable>
ClusterFile<ClusterType, Enable>::~ClusterFile() {
close();
}
~ClusterFile() { close(); }
template <typename ClusterType, typename Enable>
void ClusterFile<ClusterType, Enable>::close() {
/**
* @brief Read n_clusters clusters from the file discarding
* frame numbers. If EOF is reached the returned vector will
* have less than n_clusters clusters
*/
ClusterVector<ClusterType> read_clusters(size_t n_clusters) {
if (m_mode != "r") {
throw std::runtime_error("File not opened for reading");
}
if (m_noise_map || m_roi) {
return read_clusters_with_cut(n_clusters);
} else {
return read_clusters_without_cut(n_clusters);
}
}
/**
* @brief Read a single frame from the file and return the
* clusters. The cluster vector will have the frame number
* set.
* @throws std::runtime_error if the file is not opened for
* reading or the file pointer not at the beginning of a
* frame
*/
ClusterVector<ClusterType> read_frame() {
if (m_mode != "r") {
throw std::runtime_error(LOCATION + "File not opened for reading");
}
if (m_noise_map || m_roi) {
return read_frame_with_cut();
} else {
return read_frame_without_cut();
}
}
void write_frame(const ClusterVector<ClusterType> &clusters) {
if (m_mode != "w" && m_mode != "a") {
throw std::runtime_error("File not opened for writing");
}
int32_t frame_number = clusters.frame_number();
fwrite(&frame_number, sizeof(frame_number), 1, fp);
uint32_t n_clusters = clusters.size();
fwrite(&n_clusters, sizeof(n_clusters), 1, fp);
fwrite(clusters.data(), clusters.item_size(), clusters.size(), fp);
}
/**
* @brief Return the chunk size
*/
size_t chunk_size() const { return m_chunk_size; }
/**
* @brief Set the region of interest to use when reading
* clusters. If set only clusters within the ROI will be
* read.
*/
void set_roi(ROI roi) { m_roi = roi; }
/**
* @brief Set the noise map to use when reading clusters. If
* set clusters below the noise level will be discarded.
* Selection criteria one of: Central pixel above noise,
* highest 2x2 sum above 2 * noise, total sum above 3 *
* noise.
*/
void set_noise_map(const NDView<int32_t, 2> noise_map) {
m_noise_map = NDArray<int32_t, 2>(noise_map);
}
/**
* @brief Set the gain map to use when reading clusters. If set the gain map
* will be applied to the clusters that pass ROI and noise_map selection.
* The gain map is expected to be in ADU/energy.
*/
void set_gain_map(const NDView<double, 2> gain_map) {
m_gain_map = InvertedGainMap(gain_map);
}
void set_gain_map(const InvertedGainMap &gain_map) {
m_gain_map = gain_map;
}
void set_gain_map(const InvertedGainMap &&gain_map) {
m_gain_map = gain_map;
}
/**
* @brief Close the file. If not closed the file will be
* closed in the destructor
*/
void close() {
if (fp) {
fclose(fp);
fp = nullptr;
}
}
}
template <typename ClusterType, typename Enable>
void ClusterFile<ClusterType, Enable>::open(const std::string &mode) {
/** @brief Open the file in specific mode
*
*/
void open(const std::string &mode) {
if (fp) {
close();
}
@ -202,60 +219,16 @@ void ClusterFile<ClusterType, Enable>::open(const std::string &mode) {
} else {
throw std::runtime_error("Unsupported mode: " + mode);
}
}
template <typename ClusterType, typename Enable>
void ClusterFile<ClusterType, Enable>::set_roi(ROI roi) {
m_roi = roi;
}
template <typename ClusterType, typename Enable>
void ClusterFile<ClusterType, Enable>::set_noise_map(
const NDView<int32_t, 2> noise_map) {
m_noise_map = NDArray<int32_t, 2>(noise_map);
}
template <typename ClusterType, typename Enable>
void ClusterFile<ClusterType, Enable>::set_gain_map(
const NDView<double, 2> gain_map) {
m_gain_map = GainMap(gain_map);
}
template <typename ClusterType, typename Enable>
void ClusterFile<ClusterType, Enable>::set_gain_map(const GainMap &gain_map) {
m_gain_map = gain_map;
}
template <typename ClusterType, typename Enable>
void ClusterFile<ClusterType, Enable>::set_gain_map(const GainMap &&gain_map) {
m_gain_map = gain_map;
}
// TODO generally supported for all clsuter types
template <typename ClusterType, typename Enable>
void ClusterFile<ClusterType, Enable>::write_frame(
const ClusterVector<ClusterType> &clusters) {
if (m_mode != "w" && m_mode != "a") {
throw std::runtime_error("File not opened for writing");
}
int32_t frame_number = clusters.frame_number();
fwrite(&frame_number, sizeof(frame_number), 1, fp);
uint32_t n_clusters = clusters.size();
fwrite(&n_clusters, sizeof(n_clusters), 1, fp);
fwrite(clusters.data(), clusters.item_size(), clusters.size(), fp);
}
template <typename ClusterType, typename Enable>
ClusterVector<ClusterType>
ClusterFile<ClusterType, Enable>::read_clusters(size_t n_clusters) {
if (m_mode != "r") {
throw std::runtime_error("File not opened for reading");
}
if (m_noise_map || m_roi) {
return read_clusters_with_cut(n_clusters);
} else {
return read_clusters_without_cut(n_clusters);
}
}
private:
ClusterVector<ClusterType> read_clusters_with_cut(size_t n_clusters);
ClusterVector<ClusterType> read_clusters_without_cut(size_t n_clusters);
ClusterVector<ClusterType> read_frame_with_cut();
ClusterVector<ClusterType> read_frame_without_cut();
bool is_selected(ClusterType &cl);
ClusterType read_one_cluster();
};
template <typename ClusterType, typename Enable>
ClusterVector<ClusterType>
@ -276,8 +249,8 @@ ClusterFile<ClusterType, Enable>::read_clusters_without_cut(size_t n_clusters) {
// if there are photons left from previous frame read them first
if (nph) {
if (nph > n_clusters) {
// if we have more photons left in the frame then photons to read we
// read directly the requested number
// if we have more photons left in the frame then photons to
// read we read directly the requested number
nn = n_clusters;
} else {
nn = nph;
@ -343,8 +316,8 @@ ClusterFile<ClusterType, Enable>::read_clusters_with_cut(size_t n_clusters) {
while (fread(&frame_number, sizeof(frame_number), 1, fp)) {
if (fread(&m_num_left, sizeof(m_num_left), 1, fp)) {
clusters.set_frame_number(
frame_number); // cluster vector will hold the last frame
// number
frame_number); // cluster vector will hold the last
// frame number
while (m_num_left && clusters.size() < n_clusters) {
ClusterType c = read_one_cluster();
if (is_selected(c)) {
@ -375,18 +348,6 @@ ClusterType ClusterFile<ClusterType, Enable>::read_one_cluster() {
return c;
}
template <typename ClusterType, typename Enable>
ClusterVector<ClusterType> ClusterFile<ClusterType, Enable>::read_frame() {
if (m_mode != "r") {
throw std::runtime_error(LOCATION + "File not opened for reading");
}
if (m_noise_map || m_roi) {
return read_frame_with_cut();
} else {
return read_frame_without_cut();
}
}
template <typename ClusterType, typename Enable>
ClusterVector<ClusterType>
ClusterFile<ClusterType, Enable>::read_frame_without_cut() {
@ -465,13 +426,9 @@ bool ClusterFile<ClusterType, Enable>::is_selected(ClusterType &cl) {
}
}
auto cluster_size_x = extract_template_arguments<
std::remove_reference_t<decltype(cl)>>::cluster_size_x;
auto cluster_size_y = extract_template_arguments<
std::remove_reference_t<decltype(cl)>>::cluster_size_y;
size_t cluster_center_index =
(cluster_size_x / 2) + (cluster_size_y / 2) * cluster_size_x;
(ClusterType::cluster_size_x / 2) +
(ClusterType::cluster_size_y / 2) * ClusterType::cluster_size_x;
if (m_noise_map) {
auto sum_1x1 = cl.data[cluster_center_index]; // central pixel
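For context, the noise-map cut documented at set_noise_map() above keeps a cluster when at least one of the three criteria holds. A sketch of that selection (the names are illustrative, not the library's is_selected(); noise is the noise-map value at the cluster centre):

#include <cstdint>

bool passes_noise_cut(int32_t central_pixel, int32_t best_2x2_sum,
                      int32_t total_sum, int32_t noise) {
    return central_pixel > noise      // central pixel above noise
        || best_2x2_sum > 2 * noise   // highest 2x2 sum above 2 * noise
        || total_sum > 3 * noise;     // total sum above 3 * noise
}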

View File

@ -20,11 +20,9 @@ class ClusterFinder {
Pedestal<PEDESTAL_TYPE> m_pedestal;
ClusterVector<ClusterType> m_clusters;
static const uint8_t ClusterSizeX =
extract_template_arguments<ClusterType>::cluster_size_x;
static const uint8_t ClusterSizeY =
extract_template_arguments<ClusterType>::cluster_size_x;
using CT = typename extract_template_arguments<ClusterType>::value_type;
static const uint8_t ClusterSizeX = ClusterType::cluster_size_x;
static const uint8_t ClusterSizeY = ClusterType::cluster_size_y;
using CT = typename ClusterType::value_type;
public:
/**
@ -79,7 +77,6 @@ class ClusterFinder {
int has_center_pixel_y = ClusterSizeY % 2;
m_clusters.set_frame_number(frame_number);
std::vector<CT> cluster_data(ClusterSizeX * ClusterSizeY);
for (int iy = 0; iy < frame.shape(0); iy++) {
for (int ix = 0; ix < frame.shape(1); ix++) {
@ -126,8 +123,9 @@ class ClusterFinder {
// Store cluster
if (value == max) {
// Zero out the cluster data
std::fill(cluster_data.begin(), cluster_data.end(), 0);
ClusterType cluster{};
cluster.x = ix;
cluster.y = iy;
// Fill the cluster data since we have a photon to store
// It's worth redoing the look since most of the time we
@ -141,20 +139,15 @@ class ClusterFinder {
static_cast<CT>(frame(iy + ir, ix + ic)) -
static_cast<CT>(
m_pedestal.mean(iy + ir, ix + ic));
cluster_data[i] =
cluster.data[i] =
tmp; // Watch for out of bounds access
i++;
}
}
}
ClusterType new_cluster{};
new_cluster.x = ix;
new_cluster.y = iy;
std::copy(cluster_data.begin(), cluster_data.end(),
new_cluster.data);
// Add the cluster to the output ClusterVector
m_clusters.push_back(new_cluster);
m_clusters.push_back(cluster);
}
}
}

View File

@ -34,7 +34,8 @@ template <typename ClusterType = Cluster<int32_t, 3, 3>,
typename FRAME_TYPE = uint16_t, typename PEDESTAL_TYPE = double>
class ClusterFinderMT {
using CT = typename extract_template_arguments<ClusterType>::value_type;
protected:
using CT = typename ClusterType::value_type;
size_t m_current_thread{0};
size_t m_n_threads{0};
using Finder = ClusterFinder<ClusterType, FRAME_TYPE, PEDESTAL_TYPE>;
@ -50,6 +51,7 @@ class ClusterFinderMT {
std::thread m_collect_thread;
std::chrono::milliseconds m_default_wait{1};
private:
std::atomic<bool> m_stop_requested{false};
std::atomic<bool> m_processing_threads_stopped{true};
@ -120,6 +122,7 @@ class ClusterFinderMT {
ClusterFinderMT(Shape<2> image_size, PEDESTAL_TYPE nSigma = 5.0,
size_t capacity = 2000, size_t n_threads = 3)
: m_n_threads(n_threads) {
for (size_t i = 0; i < n_threads; i++) {
m_cluster_finders.push_back(
std::make_unique<

View File

@ -47,7 +47,7 @@ class ClusterVector<Cluster<T, ClusterSizeX, ClusterSizeY, CoordType>> {
* @param frame_number frame number of the clusters. Default is 0, which is
* also used to indicate that the clusters come from many frames
*/
ClusterVector(size_t capacity = 300, uint64_t frame_number = 0)
ClusterVector(size_t capacity = 1024, uint64_t frame_number = 0)
: m_frame_number(frame_number) {
m_data.reserve(capacity);
}
@ -76,9 +76,10 @@ class ClusterVector<Cluster<T, ClusterSizeX, ClusterSizeY, CoordType>> {
std::vector<T> sum() {
std::vector<T> sums(m_data.size());
for (size_t i = 0; i < m_data.size(); i++) {
sums[i] = at(i).sum();
}
std::transform(
m_data.begin(), m_data.end(), sums.begin(),
[](const ClusterType &cluster) { return cluster.sum(); });
return sums;
}
@ -86,13 +87,15 @@ class ClusterVector<Cluster<T, ClusterSizeX, ClusterSizeY, CoordType>> {
* @brief Sum the pixels in the 2x2 subcluster with the biggest pixel sum in
* each cluster
* @return std::vector<T> vector of sums for each cluster
*/ //TODO if underlying container is a vector use std::for_each
*/
std::vector<T> sum_2x2() {
std::vector<T> sums_2x2(m_data.size());
for (size_t i = 0; i < m_data.size(); i++) {
sums_2x2[i] = at(i).max_sum_2x2().first;
}
std::transform(m_data.begin(), m_data.end(), sums_2x2.begin(),
[](const ClusterType &cluster) {
return cluster.max_sum_2x2().first;
});
return sums_2x2;
}
@ -149,9 +152,9 @@ class ClusterVector<Cluster<T, ClusterSizeX, ClusterSizeY, CoordType>> {
* @brief Return a reference to the i-th cluster casted to type V
* @tparam V type of the cluster
*/
ClusterType &at(size_t i) { return m_data[i]; }
ClusterType &operator[](size_t i) { return m_data[i]; }
const ClusterType &at(size_t i) const { return m_data[i]; }
const ClusterType &operator[](size_t i) const { return m_data[i]; }
/**
* @brief Return the frame number of the clusters. 0 is used to indicate

View File

@ -1,6 +1,7 @@
/************************************************
* @file ApplyGainMap.hpp
* @short function to apply gain map of image size to a vector of clusters
* @file GainMap.hpp
* @short function to apply gain map of image size to a vector of clusters -
*note stored gainmap is inverted for efficient aaplication to images
***********************************************/
#pragma once
@ -12,14 +13,21 @@
namespace aare {
class GainMap {
class InvertedGainMap {
public:
explicit GainMap(const NDArray<double, 2> &gain_map)
: m_gain_map(gain_map) {};
explicit InvertedGainMap(const NDArray<double, 2> &gain_map)
: m_gain_map(gain_map) {
for (auto &item : m_gain_map) {
item = 1.0 / item;
}
};
explicit GainMap(const NDView<double, 2> gain_map) {
explicit InvertedGainMap(const NDView<double, 2> gain_map) {
m_gain_map = NDArray<double, 2>(gain_map);
for (auto &item : m_gain_map) {
item = 1.0 / item;
}
}
template <typename ClusterType,
@ -34,19 +42,21 @@ class GainMap {
int64_t index_cluster_center_x = ClusterSizeX / 2;
int64_t index_cluster_center_y = ClusterSizeY / 2;
for (size_t i = 0; i < clustervec.size(); i++) {
auto &cl = clustervec.at(i);
auto &cl = clustervec[i];
if (cl.x > 0 && cl.y > 0 && cl.x < m_gain_map.shape(1) - 1 &&
cl.y < m_gain_map.shape(0) - 1) {
for (size_t j = 0; j < ClusterSizeX * ClusterSizeY; j++) {
size_t x = cl.x + j % ClusterSizeX - index_cluster_center_x;
size_t y = cl.y + j / ClusterSizeX - index_cluster_center_y;
cl.data[j] = cl.data[j] * static_cast<T>(m_gain_map(y, x));
cl.data[j] = static_cast<T>(
static_cast<double>(cl.data[j]) *
m_gain_map(
y, x)); // cast after conversion to keep precision
}
} else {
memset(cl.data, 0,
ClusterSizeX * ClusterSizeY *
sizeof(T)); // clear edge clusters
// clear edge clusters
cl.data.fill(0);
}
}
}

View File

@ -44,9 +44,8 @@ Interpolator::interpolate(const ClusterVector<ClusterType> &clusters) {
photons.reserve(clusters.size());
if (clusters.cluster_size_x() == 3 || clusters.cluster_size_y() == 3) {
for (size_t i = 0; i < clusters.size(); i++) {
for (const ClusterType &cluster : clusters) {
auto cluster = clusters.at(i);
auto eta = calculate_eta2(cluster);
Photon photon;
@ -70,20 +69,20 @@ Interpolator::interpolate(const ClusterVector<ClusterType> &clusters) {
// cBottomRight = 1,
// cTopLeft = 2,
// cTopRight = 3
switch (eta.c) {
case cTopLeft:
switch (static_cast<corner>(eta.c)) {
case corner::cTopLeft:
dX = -1.;
dY = 0;
break;
case cTopRight:;
case corner::cTopRight:;
dX = 0;
dY = 0;
break;
case cBottomLeft:
case corner::cBottomLeft:
dX = -1.;
dY = -1.;
break;
case cBottomRight:
case corner::cBottomRight:
dX = 0.;
dY = -1.;
break;
@ -94,8 +93,7 @@ Interpolator::interpolate(const ClusterVector<ClusterType> &clusters) {
}
} else if (clusters.cluster_size_x() == 2 ||
clusters.cluster_size_y() == 2) {
for (size_t i = 0; i < clusters.size(); i++) {
auto cluster = clusters.at(i);
for (const ClusterType &cluster : clusters) {
auto eta = calculate_eta2(cluster);
Photon photon;

View File

@ -184,4 +184,9 @@ std::ostream& operator <<(std::ostream& os, const NDView<T, Ndim>& arr){
}
template <typename T>
NDView<T,1> make_view(std::vector<T>& vec){
return NDView<T,1>(vec.data(), {static_cast<int64_t>(vec.size())});
}
} // namespace aare

View File

@ -22,7 +22,7 @@ class RawSubFile {
size_t m_rows{};
size_t m_cols{};
size_t m_bytes_per_frame{};
size_t n_frames{};
size_t m_num_frames{};
uint32_t m_pos_row{};
uint32_t m_pos_col{};
@ -53,6 +53,7 @@ class RawSubFile {
size_t tell();
void read_into(std::byte *image_buf, DetectorHeader *header = nullptr);
void read_into(std::byte *image_buf, size_t n_frames, DetectorHeader *header= nullptr);
void get_part(std::byte *buffer, size_t frame_index);
void read_header(DetectorHeader *header);
@ -66,6 +67,8 @@ class RawSubFile {
size_t pixels_per_frame() const { return m_rows * m_cols; }
size_t bytes_per_pixel() const { return m_bitdepth / bits_per_byte; }
size_t frames_in_file() const { return m_num_frames; }
private:
template <typename T>
void read_with_map(std::byte *image_buf);

View File

@ -1,6 +1,7 @@
#pragma once
#include <cstdint>
#include <vector>
#include <aare/NDView.hpp>
namespace aare {
@ -10,4 +11,16 @@ uint16_t adc_sar_04_decode64to16(uint64_t input);
void adc_sar_05_decode64to16(NDView<uint64_t, 2> input, NDView<uint16_t,2> output);
void adc_sar_04_decode64to16(NDView<uint64_t, 2> input, NDView<uint16_t,2> output);
/**
* @brief Apply custom weights to a 16-bit input value. Will sum up weights[i]**i
* for each bit i that is set in the input value.
* @throws std::out_of_range if weights.size() < 16
* @param input 16-bit input value
* @param weights vector of weights, size must be less than or equal to 16
*/
double apply_custom_weights(uint16_t input, const NDView<double, 1> weights);
void apply_custom_weights(NDView<uint16_t, 1> input, NDView<double, 1> output, const NDView<double, 1> weights);
} // namespace aare
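A short usage sketch of the buffer overload declared above, using the make_view() helper added to NDView.hpp in this change set (the sample values are illustrative):

#include "aare/NDView.hpp"
#include "aare/decode.hpp"
#include <cstdint>
#include <vector>

int main() {
    std::vector<uint16_t> raw = {0x0001, 0x0003, 0x00ff}; // illustrative samples
    std::vector<double> decoded(raw.size());
    std::vector<double> weights(16, 2.0); // all bases 2 -> plain binary value

    aare::apply_custom_weights(aare::make_view(raw), aare::make_view(decoded),
                               aare::make_view(weights));
}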

View File

@ -0,0 +1,12 @@
#pragma once
#include <fstream>
#include <string>
namespace aare {
/**
* @brief Get the error message from an ifstream object
*/
std::string ifstream_error_msg(std::ifstream &ifs);
} // namespace aare

View File

@ -4,19 +4,31 @@ build-backend = "scikit_build_core.build"
[project]
name = "aare"
version = "2025.4.1"
version = "2025.4.22"
requires-python = ">=3.11"
dependencies = [
"numpy",
"matplotlib",
]
[tool.cibuildwheel]
build = "cp{311,312,313}-manylinux_x86_64"
[tool.scikit-build]
cmake.verbose = true
build.verbose = true
cmake.build-type = "Release"
install.components = ["python"]
[tool.scikit-build.cmake.define]
AARE_PYTHON_BINDINGS = "ON"
AARE_SYSTEM_LIBRARIES = "ON"
AARE_INSTALL_PYTHONEXT = "ON"
[tool.pytest.ini_options]
markers = [
"files: marks tests that need additional data (deselect with '-m \"not files\"')",

View File

@ -1,12 +1,13 @@
find_package (Python 3.10 COMPONENTS Interpreter Development REQUIRED)
find_package (Python 3.10 COMPONENTS Interpreter Development.Module REQUIRED)
set(PYBIND11_FINDPYTHON ON) # Needed for RH8
# Download or find pybind11 depending on configuration
if(AARE_FETCH_PYBIND11)
FetchContent_Declare(
pybind11
GIT_REPOSITORY https://github.com/pybind/pybind11
GIT_TAG v2.13.0
GIT_TAG v2.13.6
)
FetchContent_MakeAvailable(pybind11)
else()
@ -62,10 +63,16 @@ endforeach(FILE ${PYTHON_EXAMPLES})
if(AARE_INSTALL_PYTHONEXT)
install(TARGETS _aare
install(
TARGETS _aare
EXPORT "${TARGETS_EXPORT_NAME}"
LIBRARY DESTINATION aare
COMPONENT python
)
install(FILES ${PYTHON_FILES} DESTINATION aare)
install(
FILES ${PYTHON_FILES}
DESTINATION aare
COMPONENT python
)
endif()

View File

@ -1,5 +1,8 @@
from ._aare import ClusterFinder_Cluster3x3i
from ._aare import ClusterFinder_Cluster3x3i, ClusterFinder_Cluster2x2i, ClusterFinderMT_Cluster3x3i, ClusterFinderMT_Cluster2x2i, ClusterCollector_Cluster3x3i, ClusterCollector_Cluster2x2i
from ._aare import ClusterFileSink_Cluster3x3i, ClusterFileSink_Cluster2x2i
import numpy as np
def ClusterFinder(image_size, cluster_size, n_sigma=5, dtype = np.int32, capacity = 1024):
@ -9,6 +12,56 @@ def ClusterFinder(image_size, cluster_size, n_sigma=5, dtype = np.int32, capacit
"""
if dtype == np.int32 and cluster_size == (3,3):
return ClusterFinder_Cluster3x3i(image_size, n_sigma = n_sigma, capacity=capacity)
elif dtype == np.int32 and cluster_size == (2,2):
return ClusterFinder_Cluster2x2i(image_size, n_sigma = n_sigma, capacity=capacity)
else:
#TODO! add the other formats
raise ValueError(f"Unsupported dtype: {dtype}. Only np.int32 is supported.")
def ClusterFinderMT(image_size, cluster_size = (3,3), dtype=np.int32, n_sigma=5, capacity = 1024, n_threads = 3):
"""
Factory function to create a ClusterFinderMT object. Provides a cleaner syntax for
the templated ClusterFinderMT in C++.
"""
if dtype == np.int32 and cluster_size == (3,3):
return ClusterFinderMT_Cluster3x3i(image_size, n_sigma = n_sigma,
capacity = capacity, n_threads = n_threads)
elif dtype == np.int32 and cluster_size == (2,2):
return ClusterFinderMT_Cluster2x2i(image_size, n_sigma = n_sigma,
capacity = capacity, n_threads = n_threads)
else:
#TODO! add the other formats
raise ValueError(f"Unsupported dtype: {dtype}. Only np.int32 is supported.")
def ClusterCollector(clusterfindermt, cluster_size = (3,3), dtype=np.int32):
"""
Factory function to create a ClusterCollector object. Provides a cleaner syntax for
the templated ClusterCollector in C++.
"""
if dtype == np.int32 and cluster_size == (3,3):
return ClusterCollector_Cluster3x3i(clusterfindermt)
elif dtype == np.int32 and cluster_size == (2,2):
return ClusterCollector_Cluster2x2i(clusterfindermt)
else:
#TODO! add the other formats
raise ValueError(f"Unsupported dtype: {dtype}. Only np.int32 is supported.")
def ClusterFileSink(clusterfindermt, cluster_file, dtype=np.int32):
"""
Factory function to create a ClusterCollector object. Provides a cleaner syntax for
the templated ClusterCollector in C++.
"""
if dtype == np.int32 and clusterfindermt.cluster_size == (3,3):
return ClusterFileSink_Cluster3x3i(clusterfindermt, cluster_file)
elif dtype == np.int32 and clusterfindermt.cluster_size == (2,2):
return ClusterFileSink_Cluster2x2i(clusterfindermt, cluster_file)
else:
#TODO! add the other formats
raise ValueError(f"Unsupported dtype: {dtype}. Only np.int32 is supported.")

View File

@ -11,12 +11,17 @@ from ._aare import ROI
# from ._aare import ClusterFinderMT, ClusterCollector, ClusterFileSink, ClusterVector_i
from .ClusterFinder import ClusterFinder
from .ClusterFinder import ClusterFinder, ClusterCollector, ClusterFinderMT, ClusterFileSink
from .ClusterVector import ClusterVector
from ._aare import fit_gaus, fit_pol1
from ._aare import Interpolator
from ._aare import calculate_eta2
from ._aare import apply_custom_weights
from .CtbRawFile import CtbRawFile
from .RawFile import RawFile
from .ScanParameters import ScanParameters

View File

@ -21,16 +21,14 @@ using namespace aare;
#pragma GCC diagnostic push
#pragma GCC diagnostic ignored "-Wunused-parameter"
template <typename Type, uint8_t ClusterSizeX, uint8_t ClusterSizeY,
typename CoordType = uint16_t>
void define_ClusterVector(py::module &m, const std::string &typestr) {
using ClusterType =
Cluster<Type, ClusterSizeX, ClusterSizeY, CoordType, void>;
using ClusterType = Cluster<Type, ClusterSizeX, ClusterSizeY, CoordType>;
auto class_name = fmt::format("ClusterVector_{}", typestr);
py::class_<ClusterVector<
Cluster<Type, ClusterSizeX, ClusterSizeY, CoordType, void>, void>>(
Cluster<Type, ClusterSizeX, ClusterSizeY, CoordType>, void>>(
m, class_name.c_str(),
py::buffer_protocol())
@ -41,10 +39,15 @@ void define_ClusterVector(py::module &m, const std::string &typestr) {
self.push_back(cluster);
})
.def("sum", [](ClusterVector<ClusterType> &self) {
.def("sum",
[](ClusterVector<ClusterType> &self) {
auto *vec = new std::vector<Type>(self.sum());
return return_vector(vec);
})
.def("sum_2x2", [](ClusterVector<ClusterType> &self){
auto *vec = new std::vector<Type>(self.sum_2x2());
return return_vector(vec);
})
.def_property_readonly("size", &ClusterVector<ClusterType>::size)
.def("item_size", &ClusterVector<ClusterType>::item_size)
.def_property_readonly("fmt",
@ -75,7 +78,6 @@ void define_ClusterVector(py::module &m, const std::string &typestr) {
// Free functions using ClusterVector
m.def("hitmap",
[](std::array<size_t, 2> image_size, ClusterVector<ClusterType> &cv) {
// Create a numpy array to hold the hitmap
// The shape of the array is (image_size[0], image_size[1])
// note that the python array is passed as [row, col] which
@ -88,13 +90,12 @@ void define_ClusterVector(py::module &m, const std::string &typestr) {
for (py::ssize_t j = 0; j < r.shape(1); j++)
r(i, j) = 0;
// Loop over the clusters and increment the hitmap
// Skip out of bound clusters
for (const auto& cluster : cv) {
for (const auto &cluster : cv) {
auto x = cluster.x;
auto y = cluster.y;
if(x<image_size[1] && y<image_size[0])
if (x < image_size[1] && y < image_size[0])
r(cluster.y, cluster.x) += 1;
}

View File

@ -26,17 +26,18 @@ template <typename Type, uint8_t ClusterSizeX, uint8_t ClusterSizeY,
void define_cluster(py::module &m, const std::string &typestr) {
auto class_name = fmt::format("Cluster{}", typestr);
py::class_<Cluster<Type, ClusterSizeX, ClusterSizeY, CoordType, void>>(
py::class_<Cluster<Type, ClusterSizeX, ClusterSizeY, CoordType>>(
m, class_name.c_str(), py::buffer_protocol())
.def(py::init([](uint8_t x, uint8_t y, py::array_t<Type> data) {
py::buffer_info buf_info = data.request();
Type *ptr = static_cast<Type *>(buf_info.ptr);
Cluster<Type, ClusterSizeX, ClusterSizeY, CoordType, void> cluster;
Cluster<Type, ClusterSizeX, ClusterSizeY, CoordType> cluster;
cluster.x = x;
cluster.y = y;
std::copy(ptr, ptr + ClusterSizeX * ClusterSizeY,
cluster.data); // Copy array contents
auto r = data.template unchecked<1>(); // no bounds checks
for (py::ssize_t i = 0; i < data.size(); ++i) {
cluster.data[i] = r(i);
}
return cluster;
}));
@ -64,9 +65,6 @@ void define_cluster(py::module &m, const std::string &typestr) {
*/
}
template <typename T, uint8_t ClusterSizeX, uint8_t ClusterSizeY,
typename CoordType = uint16_t>
void define_cluster_finder_mt_bindings(py::module &m,
@ -95,6 +93,9 @@ void define_cluster_finder_mt_bindings(py::module &m,
return;
},
py::arg(), py::arg("frame_number") = 0)
.def_property_readonly("cluster_size", [](ClusterFinderMT<ClusterType, uint16_t, pd_type> &self){
return py::make_tuple(ClusterSizeX, ClusterSizeY);
})
.def("clear_pedestal",
&ClusterFinderMT<ClusterType, uint16_t, pd_type>::clear_pedestal)
.def("sync", &ClusterFinderMT<ClusterType, uint16_t, pd_type>::sync)
@ -206,6 +207,5 @@ void define_cluster_finder_bindings(py::module &m, const std::string &typestr) {
return;
},
py::arg(), py::arg("frame_number") = 0);
}
#pragma GCC diagnostic pop

View File

@ -59,9 +59,6 @@ void define_cluster_file_io_bindings(py::module &m,
self.set_gain_map(view);
})
// void set_gain_map(const GainMap &gain_map); //TODO do i need a
// gainmap constructor?
.def("close", &ClusterFile<ClusterType>::close)
.def("write_frame", &ClusterFile<ClusterType>::write_frame)
.def("__enter__", [](ClusterFile<ClusterType> &self) { return &self; })

View File

@ -10,6 +10,8 @@
#include "aare/decode.hpp"
// #include "aare/fClusterFileV2.hpp"
#include "np_helper.hpp"
#include <cstdint>
#include <filesystem>
#include <pybind11/iostream.h>
@ -65,7 +67,27 @@ m.def("adc_sar_04_decode64to16", [](py::array_t<uint8_t> input) {
return output;
});
py::class_<CtbRawFile>(m, "CtbRawFile")
m.def(
"apply_custom_weights",
[](py::array_t<uint16_t, py::array::c_style | py::array::forcecast> &input,
py::array_t<double, py::array::c_style | py::array::forcecast>
&weights) {
// Create new array with same shape as the input array (uninitialized values)
py::buffer_info buf = input.request();
py::array_t<double> output(buf.shape);
// Use NDViews to call into the C++ library
auto weights_view = make_view_1d(weights);
NDView<uint16_t, 1> input_view(input.mutable_data(), {input.size()});
NDView<double, 1> output_view(output.mutable_data(), {output.size()});
apply_custom_weights(input_view, output_view, weights_view);
return output;
});
py::class_<CtbRawFile>(m, "CtbRawFile")
.def(py::init<const std::filesystem::path &>())
.def("read_frame",
[](CtbRawFile &self) {
@ -81,8 +103,7 @@ m.def("adc_sar_04_decode64to16", [](py::array_t<uint8_t> input) {
// always read bytes
image = py::array_t<uint8_t>(shape);
self.read_into(
reinterpret_cast<std::byte *>(image.mutable_data()),
self.read_into(reinterpret_cast<std::byte *>(image.mutable_data()),
header.mutable_data());
return py::make_tuple(header, image);

View File

@ -20,6 +20,9 @@
namespace py = pybind11;
using namespace ::aare;
//Disable warnings for unused parameters, as we ignore some
//in the __exit__ method
#pragma GCC diagnostic push
@ -214,36 +217,9 @@ void define_file_io_bindings(py::module &m) {
py::class_<RawSubFile>(m, "RawSubFile")
.def(py::init<const std::filesystem::path &, DetectorType, size_t,
size_t, size_t>())
.def_property_readonly("bytes_per_frame", &RawSubFile::bytes_per_frame)
.def_property_readonly("pixels_per_frame",
&RawSubFile::pixels_per_frame)
.def("seek", &RawSubFile::seek)
.def("tell", &RawSubFile::tell)
.def_property_readonly("rows", &RawSubFile::rows)
.def_property_readonly("cols", &RawSubFile::cols)
.def("read_frame",
[](RawSubFile &self) {
const uint8_t item_size = self.bytes_per_pixel();
py::array image;
std::vector<ssize_t> shape;
shape.reserve(2);
shape.push_back(self.rows());
shape.push_back(self.cols());
if (item_size == 1) {
image = py::array_t<uint8_t>(shape);
} else if (item_size == 2) {
image = py::array_t<uint16_t>(shape);
} else if (item_size == 4) {
image = py::array_t<uint32_t>(shape);
}
fmt::print("item_size: {} rows: {} cols: {}\n", item_size, self.rows(), self.cols());
self.read_into(
reinterpret_cast<std::byte *>(image.mutable_data()));
return image;
});
#pragma GCC diagnostic pop
// py::class_<ClusterHeader>(m, "ClusterHeader")

View File

@ -10,12 +10,12 @@
#include "file.hpp"
#include "fit.hpp"
#include "interpolation.hpp"
#include "pedestal.hpp"
#include "pixel_map.hpp"
#include "raw_file.hpp"
#include "raw_sub_file.hpp"
#include "raw_master_file.hpp"
#include "raw_file.hpp"
#include "pixel_map.hpp"
#include "var_cluster.hpp"
#include "pedestal.hpp"
#include "jungfrau_data_file.hpp"
// Pybind stuff
@ -27,6 +27,7 @@ namespace py = pybind11;
PYBIND11_MODULE(_aare, m) {
define_file_io_bindings(m);
define_raw_file_io_bindings(m);
define_raw_sub_file_io_bindings(m);
define_ctb_raw_file_io_bindings(m);
define_raw_master_file_bindings(m);
define_var_cluster_finder_bindings(m);

python/src/raw_sub_file.hpp (new file)
View File

@ -0,0 +1,110 @@
#include "aare/CtbRawFile.hpp"
#include "aare/File.hpp"
#include "aare/Frame.hpp"
#include "aare/RawFile.hpp"
#include "aare/RawMasterFile.hpp"
#include "aare/RawSubFile.hpp"
#include "aare/defs.hpp"
// #include "aare/fClusterFileV2.hpp"
#include <cstdint>
#include <filesystem>
#include <pybind11/iostream.h>
#include <pybind11/numpy.h>
#include <pybind11/pybind11.h>
#include <pybind11/stl.h>
#include <pybind11/stl/filesystem.h>
#include <string>
namespace py = pybind11;
using namespace ::aare;
auto read_frame_from_RawSubFile(RawSubFile &self) {
py::array_t<DetectorHeader> header(1);
const uint8_t item_size = self.bytes_per_pixel();
std::vector<ssize_t> shape{static_cast<ssize_t>(self.rows()),
static_cast<ssize_t>(self.cols())};
py::array image;
if (item_size == 1) {
image = py::array_t<uint8_t>(shape);
} else if (item_size == 2) {
image = py::array_t<uint16_t>(shape);
} else if (item_size == 4) {
image = py::array_t<uint32_t>(shape);
}
self.read_into(reinterpret_cast<std::byte *>(image.mutable_data()),
header.mutable_data());
return py::make_tuple(header, image);
}
auto read_n_frames_from_RawSubFile(RawSubFile &self, size_t n_frames) {
py::array_t<DetectorHeader> header(n_frames);
const uint8_t item_size = self.bytes_per_pixel();
std::vector<ssize_t> shape{
static_cast<ssize_t>(n_frames),
static_cast<ssize_t>(self.rows()),
static_cast<ssize_t>(self.cols())
};
py::array image;
if (item_size == 1) {
image = py::array_t<uint8_t>(shape);
} else if (item_size == 2) {
image = py::array_t<uint16_t>(shape);
} else if (item_size == 4) {
image = py::array_t<uint32_t>(shape);
}
self.read_into(reinterpret_cast<std::byte *>(image.mutable_data()), n_frames,
header.mutable_data());
return py::make_tuple(header, image);
}
//Disable warnings for unused parameters, as we ignore some
//in the __exit__ method
#pragma GCC diagnostic push
#pragma GCC diagnostic ignored "-Wunused-parameter"
void define_raw_sub_file_io_bindings(py::module &m) {
py::class_<RawSubFile>(m, "RawSubFile")
.def(py::init<const std::filesystem::path &, DetectorType, size_t,
size_t, size_t>())
.def_property_readonly("bytes_per_frame", &RawSubFile::bytes_per_frame)
.def_property_readonly("pixels_per_frame",
&RawSubFile::pixels_per_frame)
.def_property_readonly("bytes_per_pixel", &RawSubFile::bytes_per_pixel)
.def("seek", &RawSubFile::seek)
.def("tell", &RawSubFile::tell)
.def_property_readonly("rows", &RawSubFile::rows)
.def_property_readonly("cols", &RawSubFile::cols)
.def_property_readonly("frames_in_file", &RawSubFile::frames_in_file)
.def("read_frame", &read_frame_from_RawSubFile)
.def("read_n", &read_n_frames_from_RawSubFile)
.def("read", [](RawSubFile &self){
self.seek(0);
auto n_frames = self.frames_in_file();
return read_n_frames_from_RawSubFile(self, n_frames);
})
.def("__enter__", [](RawSubFile &self) { return &self; })
.def("__exit__",
[](RawSubFile &self,
const std::optional<pybind11::type> &exc_type,
const std::optional<pybind11::object> &exc_value,
const std::optional<pybind11::object> &traceback) {
})
.def("__iter__", [](RawSubFile &self) { return &self; })
.def("__next__", [](RawSubFile &self) {
try {
return read_frame_from_RawSubFile(self);
} catch (std::runtime_error &e) {
throw py::stop_iteration();
}
});
}
#pragma GCC diagnostic pop

View File

@ -0,0 +1,36 @@
import pytest
import numpy as np
from aare import RawSubFile, DetectorType
@pytest.mark.files
def test_read_a_jungfrau_RawSubFile(test_data_path):
with RawSubFile(test_data_path / "raw/jungfrau/jungfrau_single_d0_f1_0.raw", DetectorType.Jungfrau, 512, 1024, 16) as f:
assert f.frames_in_file == 3
headers, frames = f.read()
assert headers.size == 3
assert frames.shape == (3, 512, 1024)
# Frame numbers in this file should be 4, 5, 6
for i,h in zip(range(4,7,1), headers):
assert h["frameNumber"] == i
# Compare to canned data using numpy
data = np.load(test_data_path / "raw/jungfrau/jungfrau_single_0.npy")
assert np.all(data[3:6] == frames)
@pytest.mark.files
def test_iterate_over_a_jungfrau_RawSubFile(test_data_path):
data = np.load(test_data_path / "raw/jungfrau/jungfrau_single_0.npy")
with RawSubFile(test_data_path / "raw/jungfrau/jungfrau_single_d0_f0_0.raw", DetectorType.Jungfrau, 512, 1024, 16) as f:
i = 0
for header, frame in f:
assert header["frameNumber"] == i+1
assert np.all(frame == data[i])
i += 1
assert i == 3
assert header["frameNumber"] == 3

View File

@ -21,20 +21,22 @@ using ClusterTypes =
auto get_test_parameters() {
return GENERATE(
std::make_tuple(ClusterTypes{Cluster<int, 2, 2>{0, 0, {1, 2, 3, 1}}},
Eta2<int>{2. / 3, 3. / 4, corner::cBottomLeft, 7}),
Eta2<int>{2. / 3, 3. / 4,
static_cast<int>(corner::cBottomLeft), 7}),
std::make_tuple(
ClusterTypes{Cluster<int, 3, 3>{0, 0, {1, 2, 3, 4, 5, 6, 1, 2, 7}}},
Eta2<int>{6. / 11, 2. / 7, corner::cTopRight, 20}),
Eta2<int>{6. / 11, 2. / 7, static_cast<int>(corner::cTopRight),
20}),
std::make_tuple(ClusterTypes{Cluster<int, 5, 5>{
0, 0, {1, 6, 7, 6, 5, 4, 3, 2, 1, 8, 8, 9, 2,
0, 0, {1, 6, 7, 6, 5, 4, 3, 2, 1, 2, 8, 9, 8,
1, 4, 5, 6, 7, 8, 4, 1, 1, 1, 1, 1}}},
Eta2<int>{9. / 17, 5. / 13, 8, 28}),
Eta2<int>{8. / 17, 7. / 15, 9, 30}),
std::make_tuple(
ClusterTypes{Cluster<int, 4, 2>{0, 0, {1, 4, 7, 2, 5, 6, 4, 3}}},
Eta2<int>{7. / 11, 6. / 10, 1, 21}),
Eta2<int>{4. / 10, 4. / 11, 1, 21}),
std::make_tuple(
ClusterTypes{Cluster<int, 2, 3>{0, 0, {1, 3, 2, 3, 4, 2}}},
Eta2<int>{3. / 5, 4. / 6, 1, 11}));
Eta2<int>{3. / 5, 2. / 5, 1, 11}));
}
TEST_CASE("compute_largest_2x2_subcluster", "[eta_calculation]") {
@ -61,14 +63,13 @@ TEST_CASE("calculate_eta2", "[eta_calculation]") {
CHECK(eta.sum == expected_eta.sum);
}
//3x3 cluster layout (rotated to match the cBottomLeft enum):
// 3x3 cluster layout (rotated to match the cBottomLeft enum):
// 6, 7, 8
// 3, 4, 5
// 0, 1, 2
TEST_CASE("Calculate eta2 for a 3x3 int32 cluster with the largest 2x2 sum in the bottom left",
TEST_CASE("Calculate eta2 for a 3x3 int32 cluster with the largest 2x2 sum in "
"the bottom left",
"[eta_calculation]") {
// Create a 3x3 cluster
@ -90,39 +91,37 @@ TEST_CASE("Calculate eta2 for a 3x3 int32 cluster with the largest 2x2 sum in th
// 30, 23, 5
auto eta = calculate_eta2(cl);
CHECK(eta.c == corner::cBottomLeft);
CHECK(eta.c == static_cast<int>(corner::cBottomLeft));
CHECK(eta.x == 50.0 / (20 + 50)); // 4/(3+4)
CHECK(eta.y == 50.0 / (23 + 50)); // 4/(1+4)
CHECK(eta.sum == 30+23+20+50);
CHECK(eta.sum == 30 + 23 + 20 + 50);
}
TEST_CASE("Calculate eta2 for a 3x3 int32 cluster with the largest 2x2 sum in the top left",
TEST_CASE("Calculate eta2 for a 3x3 int32 cluster with the largest 2x2 sum in "
"the top left",
"[eta_calculation]") {
// Create a 3x3 cluster
Cluster<int32_t, 3, 3> cl;
cl.x = 0;
cl.y = 0;
cl.data[0] = 8;
cl.data[1] = 12;
cl.data[2] = 5;
cl.data[3] = 77;
cl.data[4] = 80;
cl.data[5] = 3;
cl.data[6] = 82;
cl.data[7] = 91;
cl.data[8] = 3;
// Create a 3x3 cluster
Cluster<int32_t, 3, 3> cl;
cl.x = 0;
cl.y = 0;
cl.data[0] = 8;
cl.data[1] = 12;
cl.data[2] = 5;
cl.data[3] = 77;
cl.data[4] = 80;
cl.data[5] = 3;
cl.data[6] = 82;
cl.data[7] = 91;
cl.data[8] = 3;
// 82, 91, 3
// 77, 80, 3
// 8, 12, 5
auto eta = calculate_eta2(cl);
CHECK(eta.c == corner::cTopLeft);
CHECK(eta.x == 80. / (77 + 80)); // 4/(3+4)
CHECK(eta.y == 91.0 / (91 + 80)); // 7/(7+4)
CHECK(eta.sum == 77+80+82+91);
// 82, 91, 3
// 77, 80, 3
// 8, 12, 5
auto eta = calculate_eta2(cl);
CHECK(eta.c == static_cast<int>(corner::cTopLeft));
CHECK(eta.x == 80. / (77 + 80)); // 4/(3+4)
CHECK(eta.y == 91.0 / (91 + 80)); // 7/(7+4)
CHECK(eta.sum == 77 + 80 + 82 + 91);
}

View File

@ -14,19 +14,6 @@
using namespace aare;
TEST_CASE("Correct Instantiation of Cluster and ClusterVector",
"[.cluster][.instantiation]") {
CHECK(is_valid_cluster<double, 3, 3>);
CHECK(is_valid_cluster<double, 3, 2>);
CHECK(not is_valid_cluster<int, 0, 0>);
CHECK(not is_valid_cluster<std::string, 2, 2>);
CHECK(not is_valid_cluster<int, 2, 2, double>);
CHECK(not is_cluster_v<int>);
CHECK(is_cluster_v<Cluster<int, 3, 3>>);
}
TEST_CASE("Test sum of Cluster", "[.cluster]") {
Cluster<int, 2, 2> cluster{0, 0, {1, 2, 3, 4}};

View File

@ -10,8 +10,9 @@ using aare::Cluster;
using aare::ClusterFile;
using aare::ClusterVector;
TEST_CASE("Read one frame from a cluster file", "[.files]") {
// We know that the frame has 97 clusters
//We know that the frame has 97 clusters
auto fpath = test_data_path() / "clust" / "single_frame_97_clustrers.clust";
REQUIRE(std::filesystem::exists(fpath));
@ -19,14 +20,14 @@ TEST_CASE("Read one frame from a cluster file", "[.files]") {
auto clusters = f.read_frame();
CHECK(clusters.size() == 97);
CHECK(clusters.frame_number() == 135);
CHECK(clusters.at(0).x == 1);
CHECK(clusters.at(0).y == 200);
CHECK(clusters[0].x == 1);
CHECK(clusters[0].y == 200);
int32_t expected_cluster_data[] = {0, 1, 2, 3, 4, 5, 6, 7, 8};
CHECK(std::equal(std::begin(clusters.at(0).data),
std::end(clusters.at(0).data),
CHECK(std::equal(std::begin(clusters[0].data), std::end(clusters[0].data),
std::begin(expected_cluster_data)));
}
TEST_CASE("Read one frame using ROI", "[.files]") {
// We know that the frame has 97 clusters
auto fpath = test_data_path() / "clust" / "single_frame_97_clustrers.clust";
@ -45,21 +46,22 @@ TEST_CASE("Read one frame using ROI", "[.files]") {
// Check that all clusters are within the ROI
for (size_t i = 0; i < clusters.size(); i++) {
auto c = clusters.at(i);
auto c = clusters[i];
REQUIRE(c.x >= roi.xmin);
REQUIRE(c.x <= roi.xmax);
REQUIRE(c.y >= roi.ymin);
REQUIRE(c.y <= roi.ymax);
}
CHECK(clusters.at(0).x == 1);
CHECK(clusters.at(0).y == 200);
CHECK(clusters[0].x == 1);
CHECK(clusters[0].y == 200);
int32_t expected_cluster_data[] = {0, 1, 2, 3, 4, 5, 6, 7, 8};
CHECK(std::equal(std::begin(clusters.at(0).data),
std::end(clusters.at(0).data),
CHECK(std::equal(std::begin(clusters[0].data), std::end(clusters[0].data),
std::begin(expected_cluster_data)));
}
TEST_CASE("Read clusters from single frame file", "[.files]") {
// frame_number, num_clusters [135] 97
@ -162,6 +164,7 @@ TEST_CASE("Read clusters from single frame file", "[.files]") {
// [ 97 296] [864 865 866 867 868 869 870 871 872]
auto fpath = test_data_path() / "clust" / "single_frame_97_clustrers.clust";
REQUIRE(std::filesystem::exists(fpath));
SECTION("Read fewer clusters than available") {
@ -170,10 +173,10 @@ TEST_CASE("Read clusters from single frame file", "[.files]") {
REQUIRE(clusters.size() == 50);
REQUIRE(clusters.frame_number() == 135);
int32_t expected_cluster_data[] = {0, 1, 2, 3, 4, 5, 6, 7, 8};
REQUIRE(clusters.at(0).x == 1);
REQUIRE(clusters.at(0).y == 200);
CHECK(std::equal(std::begin(clusters.at(0).data),
std::end(clusters.at(0).data),
REQUIRE(clusters[0].x == 1);
REQUIRE(clusters[0].y == 200);
CHECK(std::equal(std::begin(clusters[0].data),
std::end(clusters[0].data),
std::begin(expected_cluster_data)));
}
SECTION("Read more clusters than available") {
@ -183,10 +186,10 @@ TEST_CASE("Read clusters from single frame file", "[.files]") {
REQUIRE(clusters.size() == 97);
REQUIRE(clusters.frame_number() == 135);
int32_t expected_cluster_data[] = {0, 1, 2, 3, 4, 5, 6, 7, 8};
REQUIRE(clusters.at(0).x == 1);
REQUIRE(clusters.at(0).y == 200);
CHECK(std::equal(std::begin(clusters.at(0).data),
std::end(clusters.at(0).data),
REQUIRE(clusters[0].x == 1);
REQUIRE(clusters[0].y == 200);
CHECK(std::equal(std::begin(clusters[0].data),
std::end(clusters[0].data),
std::begin(expected_cluster_data)));
}
SECTION("Read all clusters") {
@ -194,11 +197,11 @@ TEST_CASE("Read clusters from single frame file", "[.files]") {
auto clusters = f.read_clusters(97);
REQUIRE(clusters.size() == 97);
REQUIRE(clusters.frame_number() == 135);
REQUIRE(clusters.at(0).x == 1);
REQUIRE(clusters.at(0).y == 200);
REQUIRE(clusters[0].x == 1);
REQUIRE(clusters[0].y == 200);
int32_t expected_cluster_data[] = {0, 1, 2, 3, 4, 5, 6, 7, 8};
CHECK(std::equal(std::begin(clusters.at(0).data),
std::end(clusters.at(0).data),
CHECK(std::equal(std::begin(clusters[0].data),
std::end(clusters[0].data),
std::begin(expected_cluster_data)));
}
}
@ -220,11 +223,10 @@ TEST_CASE("Read clusters from single frame file with ROI", "[.files]") {
CHECK(clusters.size() == 10);
CHECK(clusters.frame_number() == 135);
CHECK(clusters.at(0).x == 1);
CHECK(clusters.at(0).y == 200);
CHECK(clusters[0].x == 1);
CHECK(clusters[0].y == 200);
int32_t expected_cluster_data[] = {0, 1, 2, 3, 4, 5, 6, 7, 8};
CHECK(std::equal(std::begin(clusters.at(0).data),
std::end(clusters.at(0).data),
CHECK(std::equal(std::begin(clusters[0].data), std::end(clusters[0].data),
std::begin(expected_cluster_data)));
}
@ -309,20 +311,20 @@ TEST_CASE("Write cluster with potential padding", "[.files][.ClusterFile]") {
CHECK(read_cluster_vector.size() == 2);
CHECK(read_cluster_vector.frame_number() == 0);
CHECK(read_cluster_vector.at(0).x == clustervec.at(0).x);
CHECK(read_cluster_vector.at(0).y == clustervec.at(0).y);
CHECK(std::equal(clustervec.at(0).data, clustervec.at(0).data + 9,
read_cluster_vector.at(0).data, [](double a, double b) {
return std::abs(a - b) <
std::numeric_limits<double>::epsilon();
CHECK(read_cluster_vector[0].x == clustervec[0].x);
CHECK(read_cluster_vector[0].y == clustervec[0].y);
CHECK(std::equal(
clustervec[0].data.begin(), clustervec[0].data.end(),
read_cluster_vector[0].data.begin(), [](double a, double b) {
return std::abs(a - b) < std::numeric_limits<double>::epsilon();
}));
CHECK(read_cluster_vector.at(1).x == clustervec.at(1).x);
CHECK(read_cluster_vector.at(1).y == clustervec.at(1).y);
CHECK(std::equal(clustervec.at(1).data, std::end(clustervec.at(1).data),
read_cluster_vector.at(1).data, [](double a, double b) {
return std::abs(a - b) <
std::numeric_limits<double>::epsilon();
CHECK(read_cluster_vector[1].x == clustervec[1].x);
CHECK(read_cluster_vector[1].y == clustervec[1].y);
CHECK(std::equal(
clustervec[1].data.begin(), clustervec[1].data.end(),
read_cluster_vector[1].data.begin(), [](double a, double b) {
return std::abs(a - b) < std::numeric_limits<double>::epsilon();
}));
}
@ -341,10 +343,9 @@ TEST_CASE("Read frame and modify cluster data", "[.files][.ClusterFile]") {
Cluster<int32_t, 3, 3>{0, 0, {0, 1, 2, 3, 4, 5, 6, 7, 8}});
CHECK(clusters.size() == 98);
CHECK(clusters.at(0).x == 1);
CHECK(clusters.at(0).y == 200);
CHECK(clusters[0].x == 1);
CHECK(clusters[0].y == 200);
CHECK(std::equal(std::begin(clusters.at(0).data),
std::end(clusters.at(0).data),
CHECK(std::equal(std::begin(clusters[0].data), std::end(clusters[0].data),
std::begin(expected_cluster_data)));
}

View File

@ -0,0 +1,99 @@
#include "aare/ClusterFinderMT.hpp"
#include "aare/Cluster.hpp"
#include "aare/ClusterCollector.hpp"
#include "aare/File.hpp"
#include "test_config.hpp"
#include <catch2/catch_test_macros.hpp>
#include <filesystem>
#include <memory>
using namespace aare;
// wrapper class to access protected member variables for testing
template <typename ClusterType, typename FRAME_TYPE = uint16_t,
typename PEDESTAL_TYPE = double>
class ClusterFinderMTWrapper
: public ClusterFinderMT<ClusterType, FRAME_TYPE, PEDESTAL_TYPE> {
public:
ClusterFinderMTWrapper(Shape<2> image_size, PEDESTAL_TYPE nSigma = 5.0,
size_t capacity = 2000, size_t n_threads = 3)
: ClusterFinderMT<ClusterType, FRAME_TYPE, PEDESTAL_TYPE>(
image_size, nSigma, capacity, n_threads) {}
size_t get_m_input_queues_size() const {
return this->m_input_queues.size();
}
size_t get_m_output_queues_size() const {
return this->m_output_queues.size();
}
size_t get_m_cluster_finders_size() const {
return this->m_cluster_finders.size();
}
bool m_output_queues_are_empty() const {
for (auto &queue : this->m_output_queues) {
if (!queue->isEmpty())
return false;
}
return true;
}
bool m_input_queues_are_empty() const {
for (auto &queue : this->m_input_queues) {
if (!queue->isEmpty())
return false;
}
return true;
}
bool m_sink_is_empty() const { return this->m_sink.isEmpty(); }
size_t m_sink_size() const { return this->m_sink.sizeGuess(); }
};
TEST_CASE("multithreaded cluster finder", "[.files][.ClusterFinder]") {
auto fpath = "/mnt/sls_det_storage/matterhorn_data/aare_test_data/"
"Moench03new/cu_half_speed_master_4.json";
File file(fpath);
size_t n_threads = 2;
size_t n_frames_pd = 10;
using ClusterType = Cluster<int32_t, 3, 3>;
ClusterFinderMTWrapper<ClusterType> cf(
{static_cast<int64_t>(file.rows()), static_cast<int64_t>(file.cols())},
5, 2000, n_threads); // frame type not specified; defaults to uint16_t
CHECK(cf.get_m_input_queues_size() == n_threads);
CHECK(cf.get_m_output_queues_size() == n_threads);
CHECK(cf.get_m_cluster_finders_size() == n_threads);
CHECK(cf.m_output_queues_are_empty() == true);
CHECK(cf.m_input_queues_are_empty() == true);
for (size_t i = 0; i < n_frames_pd; ++i) {
cf.find_clusters(file.read_frame().view<uint16_t>());
}
cf.stop();
CHECK(cf.m_output_queues_are_empty() == true);
CHECK(cf.m_input_queues_are_empty() == true);
CHECK(cf.m_sink_size() == n_frames_pd);
ClusterCollector<ClusterType> clustercollector(&cf);
clustercollector.stop();
CHECK(cf.m_sink_size() == 0);
auto clustervec = clustercollector.steal_clusters();
// CHECK(clustervec.size() == ) // don't know how many clusters to expect
}
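For orientation, the same pipeline can be driven outside the test using only the calls exercised above. This is a minimal sketch rather than repository code: the file path and the frame count are placeholders, and the template and constructor arguments simply mirror the ones used in this test.
#include "aare/Cluster.hpp"
#include "aare/ClusterCollector.hpp"
#include "aare/ClusterFinderMT.hpp"
#include "aare/File.hpp"
int main() {
    using namespace aare;
    using ClusterType = Cluster<int32_t, 3, 3>;
    File file("path/to/master_file.json"); // placeholder path
    ClusterFinderMT<ClusterType> cf(
        {static_cast<int64_t>(file.rows()), static_cast<int64_t>(file.cols())},
        5, 2000, 3); // nSigma, capacity, n_threads
    for (int i = 0; i < 10; ++i) // placeholder frame count
        cf.find_clusters(file.read_frame().view<uint16_t>());
    cf.stop(); // flush the per-thread finders into the sink
    ClusterCollector<ClusterType> collector(&cf); // drains the sink
    collector.stop();
    auto clusters = collector.steal_clusters(); // ClusterVector with the results
}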

View File

@ -60,7 +60,7 @@ TEST_CASE("ClusterVector 2x2 int32_t capacity 4, push back then read",
REQUIRE(cv.size() == 1);
REQUIRE(cv.capacity() == 4);
auto c2 = cv.at(0);
auto c2 = cv[0];
// Check that the data is the same
REQUIRE(c1.x == c2.x);

View File

@ -3,6 +3,7 @@
#include <iostream>
#include <vector>
#include <numeric>
using aare::NDView;
using aare::Shape;
@ -21,10 +22,8 @@ TEST_CASE("Element reference 1D") {
}
TEST_CASE("Element reference 2D") {
std::vector<int> vec;
for (int i = 0; i != 12; ++i) {
vec.push_back(i);
}
std::vector<int> vec(12);
std::iota(vec.begin(), vec.end(), 0);
NDView<int, 2> data(vec.data(), Shape<2>{3, 4});
REQUIRE(vec.size() == static_cast<size_t>(data.size()));
@ -58,10 +57,8 @@ TEST_CASE("Element reference 3D") {
}
TEST_CASE("Plus and miuns with single value") {
std::vector<int> vec;
for (int i = 0; i != 12; ++i) {
vec.push_back(i);
}
std::vector<int> vec(12);
std::iota(vec.begin(), vec.end(), 0);
NDView<int, 2> data(vec.data(), Shape<2>{3, 4});
data += 5;
int i = 0;
@ -116,10 +113,8 @@ TEST_CASE("elementwise assign") {
}
TEST_CASE("iterators") {
std::vector<int> vec;
for (int i = 0; i != 12; ++i) {
vec.push_back(i);
}
std::vector<int> vec(12);
std::iota(vec.begin(), vec.end(), 0);
NDView<int, 1> data(vec.data(), Shape<1>{12});
int i = 0;
for (const auto item : data) {
@ -167,27 +162,31 @@ TEST_CASE("divide with another span") {
}
TEST_CASE("Retrieve shape") {
std::vector<int> vec;
for (int i = 0; i != 12; ++i) {
vec.push_back(i);
}
std::vector<int> vec(12);
std::iota(vec.begin(), vec.end(), 0);
NDView<int, 2> data(vec.data(), Shape<2>{3, 4});
REQUIRE(data.shape()[0] == 3);
REQUIRE(data.shape()[1] == 4);
}
TEST_CASE("compare two views") {
std::vector<int> vec1;
for (int i = 0; i != 12; ++i) {
vec1.push_back(i);
}
std::vector<int> vec1(12);
std::iota(vec1.begin(), vec1.end(), 0);
NDView<int, 2> view1(vec1.data(), Shape<2>{3, 4});
std::vector<int> vec2;
for (int i = 0; i != 12; ++i) {
vec2.push_back(i);
}
std::vector<int> vec2(12);
std::iota(vec2.begin(), vec2.end(), 0);
NDView<int, 2> view2(vec2.data(), Shape<2>{3, 4});
REQUIRE((view1 == view2));
}
TEST_CASE("Create a view over a vector"){
std::vector<int> vec(12);
std::iota(vec.begin(), vec.end(), 0);
auto v = aare::make_view(vec);
REQUIRE(v.shape()[0] == 12);
REQUIRE(v[0] == 0);
REQUIRE(v[11] == 11);
}

View File

@ -1,9 +1,12 @@
#include "aare/RawSubFile.hpp"
#include "aare/PixelMap.hpp"
#include "aare/utils/ifstream_helpers.hpp"
#include <cstring> // memcpy
#include <fmt/core.h>
#include <iostream>
namespace aare {
RawSubFile::RawSubFile(const std::filesystem::path &fname,
@ -20,7 +23,7 @@ RawSubFile::RawSubFile(const std::filesystem::path &fname,
}
if (std::filesystem::exists(fname)) {
n_frames = std::filesystem::file_size(fname) /
m_num_frames = std::filesystem::file_size(fname) /
(sizeof(DetectorHeader) + rows * cols * bitdepth / 8);
} else {
throw std::runtime_error(
@ -35,7 +38,7 @@ RawSubFile::RawSubFile(const std::filesystem::path &fname,
}
#ifdef AARE_VERBOSE
fmt::print("Opened file: {} with {} frames\n", m_fname.string(), n_frames);
fmt::print("Opened file: {} with {} frames\n", m_fname.string(), m_num_frames);
fmt::print("m_rows: {}, m_cols: {}, m_bitdepth: {}\n", m_rows, m_cols,
m_bitdepth);
fmt::print("file size: {}\n", std::filesystem::file_size(fname));
@ -43,8 +46,8 @@ RawSubFile::RawSubFile(const std::filesystem::path &fname,
}
void RawSubFile::seek(size_t frame_index) {
if (frame_index >= n_frames) {
throw std::runtime_error(LOCATION + fmt::format("Frame index {} out of range in a file with {} frames", frame_index, n_frames));
if (frame_index >= m_num_frames) {
throw std::runtime_error(LOCATION + fmt::format("Frame index {} out of range in a file with {} frames", frame_index, m_num_frames));
}
m_file.seekg((sizeof(DetectorHeader) + bytes_per_frame()) * frame_index);
}
@ -60,6 +63,10 @@ void RawSubFile::read_into(std::byte *image_buf, DetectorHeader *header) {
m_file.seekg(sizeof(DetectorHeader), std::ios::cur);
}
if (m_file.fail()){
throw std::runtime_error(LOCATION + ifstream_error_msg(m_file));
}
// TODO! expand support for different bitdepths
if (m_pixel_map) {
// read into a temporary buffer and then copy the data to the buffer
@ -79,8 +86,24 @@ void RawSubFile::read_into(std::byte *image_buf, DetectorHeader *header) {
// read directly into the buffer
m_file.read(reinterpret_cast<char *>(image_buf), bytes_per_frame());
}
if (m_file.fail()){
throw std::runtime_error(LOCATION + ifstream_error_msg(m_file));
}
}
void RawSubFile::read_into(std::byte *image_buf, size_t n_frames, DetectorHeader *header) {
for (size_t i = 0; i < n_frames; i++) {
read_into(image_buf, header);
image_buf += bytes_per_frame();
if (header) {
++header;
}
}
}
template <typename T>
void RawSubFile::read_with_map(std::byte *image_buf) {
auto part_buffer = new std::byte[bytes_per_frame()];

View File

@ -1,5 +1,5 @@
#include "aare/decode.hpp"
#include <cmath>
namespace aare {
uint16_t adc_sar_05_decode64to16(uint64_t input){
@ -22,6 +22,10 @@ uint16_t adc_sar_05_decode64to16(uint64_t input){
}
void adc_sar_05_decode64to16(NDView<uint64_t, 2> input, NDView<uint16_t,2> output){
if(input.shape() != output.shape()){
throw std::invalid_argument(LOCATION + " input and output shapes must match");
}
for(int64_t i = 0; i < input.shape(0); i++){
for(int64_t j = 0; j < input.shape(1); j++){
output(i,j) = adc_sar_05_decode64to16(input(i,j));
@ -49,6 +53,9 @@ uint16_t adc_sar_04_decode64to16(uint64_t input){
}
void adc_sar_04_decode64to16(NDView<uint64_t, 2> input, NDView<uint16_t,2> output){
if(input.shape() != output.shape()){
throw std::invalid_argument(LOCATION + " input and output shapes must match");
}
for(int64_t i = 0; i < input.shape(0); i++){
for(int64_t j = 0; j < input.shape(1); j++){
output(i,j) = adc_sar_04_decode64to16(input(i,j));
@ -56,6 +63,40 @@ void adc_sar_04_decode64to16(NDView<uint64_t, 2> input, NDView<uint16_t,2> outpu
}
}
double apply_custom_weights(uint16_t input, const NDView<double, 1> weights) {
if(weights.size() > 16){
throw std::invalid_argument("weights size must be less than or equal to 16");
}
double result = 0.0;
for (ssize_t i = 0; i < weights.size(); ++i) {
result += ((input >> i) & 1) * std::pow(weights[i], i);
}
return result;
}
void apply_custom_weights(NDView<uint16_t, 1> input, NDView<double, 1> output, const NDView<double,1> weights) {
if(input.shape() != output.shape()){
throw std::invalid_argument(LOCATION + " input and output shapes must match");
}
// Precompute weight powers to avoid repeatedly calling std::pow
std::vector<double> weights_powers(weights.size());
for (ssize_t i = 0; i < weights.size(); ++i) {
weights_powers[i] = std::pow(weights[i], i);
}
// Apply custom weights to each element in the input array
for (ssize_t i = 0; i < input.shape(0); i++) {
double result = 0.0;
for (size_t bit_index = 0; bit_index < weights_powers.size(); ++bit_index) {
result += ((input(i) >> bit_index) & 1) * weights_powers[bit_index];
}
output(i) = result;
}
}
} // namespace aare
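To make the weighting above concrete: each set bit i of the input contributes pow(weights[i], i) to the result (plain binary decoding would use base 2 for every bit). A stand-alone sketch using the same illustrative weights as the test file below:
#include <cmath>
#include <cstdio>
int main() {
    double weights[] = {1.7, 2.1, 1.8};
    unsigned input = 0b111; // bits 0, 1 and 2 set
    double result = 0.0;
    for (int i = 0; i < 3; ++i)
        result += ((input >> i) & 1) * std::pow(weights[i], i);
    // 1.7^0 + 2.1^1 + 1.8^2 = 1 + 2.1 + 3.24 = 6.34
    std::printf("%.2f\n", result); // prints 6.34
}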

src/decode.test.cpp Normal file
View File

@ -0,0 +1,80 @@
#include "aare/decode.hpp"
#include <catch2/matchers/catch_matchers_floating_point.hpp>
#include <catch2/catch_test_macros.hpp>
#include "aare/NDArray.hpp"
using Catch::Matchers::WithinAbs;
#include <vector>
TEST_CASE("test_adc_sar_05_decode64to16"){
uint64_t input = 0;
uint16_t output = aare::adc_sar_05_decode64to16(input);
CHECK(output == 0);
// bit 29 of the input is bit 0 of the output
input = 1UL << 29;
output = aare::adc_sar_05_decode64to16(input);
CHECK(output == 1);
// test all bits by iterating through the bitlist
std::vector<int> bitlist = {29, 19, 28, 18, 31, 21, 27, 20, 24, 23, 25, 22};
for (size_t i = 0; i < bitlist.size(); i++) {
input = 1UL << bitlist[i];
output = aare::adc_sar_05_decode64to16(input);
CHECK(output == (1 << i));
}
// test a few "random" values
input = 0;
input |= (1UL << 29);
input |= (1UL << 19);
input |= (1UL << 28);
output = aare::adc_sar_05_decode64to16(input);
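// bits 29, 19, 28 map to output bits 0, 1, 2: 1 + 2 + 4 = 7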
CHECK(output == 7UL);
input = 0;
input |= (1UL << 18);
input |= (1UL << 27);
input |= (1UL << 25);
output = aare::adc_sar_05_decode64to16(input);
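// bits 18, 27, 25 map to output bits 3, 6, 10: 8 + 64 + 1024 = 1096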
CHECK(output == 1096UL);
input = 0;
input |= (1UL << 25);
input |= (1UL << 22);
output = aare::adc_sar_05_decode64to16(input);
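// bits 25, 22 map to output bits 10, 11: 1024 + 2048 = 3072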
CHECK(output == 3072UL);
}
TEST_CASE("test_apply_custom_weights") {
uint16_t input = 1;
aare::NDArray<double, 1> weights_data({3}, 0.0);
weights_data(0) = 1.7;
weights_data(1) = 2.1;
weights_data(2) = 1.8;
auto weights = weights_data.view();
double output = aare::apply_custom_weights(input, weights);
CHECK_THAT(output, WithinAbs(1.0, 0.001));
input = 1 << 1;
output = aare::apply_custom_weights(input, weights);
CHECK_THAT(output, WithinAbs(2.1, 0.001));
input = 1 << 2;
output = aare::apply_custom_weights(input, weights);
CHECK_THAT(output, WithinAbs(3.24, 0.001));
input = 0b111;
output = aare::apply_custom_weights(input, weights);
CHECK_THAT(output, WithinAbs(6.34, 0.001));
}

View File

@ -0,0 +1,18 @@
#include "aare/utils/ifstream_helpers.hpp"
namespace aare {
std::string ifstream_error_msg(std::ifstream &ifs) {
std::ios_base::iostate state = ifs.rdstate();
if (state & std::ios_base::eofbit) {
return " End of file reached";
} else if (state & std::ios_base::badbit) {
return " Bad file stream";
} else if (state & std::ios_base::failbit) {
return " File read failed";
}else{
return " Unknown/no error";
}
}
} // namespace aare