first commit

Dmitry Ozerov
2023-09-28 16:10:55 +02:00
parent a63e69291e
commit 942eea9808
11 changed files with 1388 additions and 78 deletions

23
Makefile Normal file

@@ -0,0 +1,23 @@
all: peakfinder8_extension.so

peakfinder8_extension.so: peakfinder8/libpeakfinder8.so
	python setup.py build_ext --inplace

clean:
	rm -rf build
	rm -rf peakfinder8/libpeakfinder8.so peakfinder8/peakfinders.o
	rm -rf peakfinder8_extension.*.so
	rm -rf dist peakfinder8_extension.egg-info
	rm -rf peakfinder8/cython/peakfinder8_extension.cpp

peakfinder8/peakfinders.o: peakfinder8/cheetahmodules.h peakfinder8/peakfinders.cpp peakfinder8/peakfinders.h
	gcc -fPIC -I ./peakfinder8 -c peakfinder8/peakfinders.cpp -o peakfinder8/peakfinders.o

peakfinder8/libpeakfinder8.so: peakfinder8/peakfinders.o
	gcc -shared -o peakfinder8/libpeakfinder8.so peakfinder8/peakfinders.o -lc

install: all
	cp peakfinder8/libpeakfinder8.so ${CONDA_PREFIX}/lib/.
	python setup.py install

README.md

@@ -3,91 +3,34 @@
Detector Analysis Pipeline
Runs on the detector data stream produced by sf-daq
## Name
DAP is a Detector Analysis Pipeline which runs on the detector data stream produced by sf-daq.
## Installation
At PSI/SwissFEL a pre-installed conda environment is available and can be activated with:
```
source /sf/jungfrau/applications/miniconda3/etc/profile.d/conda.sh
conda activate dap
```
### To install from source
Create a conda environment:
```
conda create -n test-dap cython numpy pyzmq jungfrau_utils
conda activate test-dap
```
Clone the dap code and install the peakfinder8_extension into the conda environment:
```
git clone https://gitlab.psi.ch/sf-daq/dap.git
cd dap
make install
```
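
To check that the build and installation succeeded, the extension can be imported directly (a minimal check; `peakfinder_8` is the only function defined in `peakfinder8/cython/peakfinder8_extension.pyx`):
```
from peakfinder8_extension import peakfinder_8
print(peakfinder_8)
```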
## Acknowledgment
Initially DAP was made for SFX analysis, to run a peak finder on a stream of detector data. peakfinder8 from Cheetah was used for this purpose; big thanks to Valerio Mariani, who provided the Cython implementation of peakfinder8.

66
ap/accumulator.py Normal file

@@ -0,0 +1,66 @@
import zmq
import os
import argparse

flags = 0

OUTPUT_DIR_NAME = "/gpfs/photonics/swissfel/buffer/dap/data"


def main():
    parser = argparse.ArgumentParser()
    parser.add_argument("--accumulator", default="localhost", help="name of host where accumulator works")
    parser.add_argument("--accumulator_port", default=13002, type=int, help="accumulator port")
    args = parser.parse_args()

    FA_HOST_ACCUMULATE = args.accumulator
    FA_PORT_ACCUMULATE = args.accumulator_port

    zmq_context = zmq.Context(io_threads=4)
    poller = zmq.Poller()

    # Accumulator
    if True:
        accumulator_socket = zmq_context.socket(zmq.PULL)
        accumulator_socket.bind('tcp://*:%s' % FA_PORT_ACCUMULATE)
        poller.register(accumulator_socket, zmq.POLLIN)

    n_frames_received = 0

    run_name_before = 'very_first_start'
    fNameOutput = f'{OUTPUT_DIR_NAME}/{run_name_before}.dap'
    if not os.path.isdir(os.path.dirname(fNameOutput)):
        os.makedirs(os.path.dirname(fNameOutput))
    outputDap = open(fNameOutput, 'a')

    while True:
        events = dict(poller.poll(10))  # in accumulator check for worker output every 0.01 seconds

        if accumulator_socket in events:
            results = accumulator_socket.recv_json(flags)
            n_frames_received += 1

            pulse_id = results.get('pulse_id', 0)
            run_name = str(pulse_id // 10000 * 10000)
            detector = results.get('detector_name', "")

            if run_name != run_name_before:
                run_name_before = run_name
                outputDap.close()
                fNameOutput = f'{OUTPUT_DIR_NAME}/{detector}/{run_name_before}.dap'
                if not os.path.isdir(os.path.dirname(fNameOutput)):
                    os.makedirs(os.path.dirname(fNameOutput))
                outputDap = open(fNameOutput, 'a')

            pr_rois = results.get("roi_intensities", [])

            print(pulse_id, results.get('is_good_frame', -1), results.get('is_hit_frame', False),
                  results.get('number_of_spots', -1), results.get("laser_on", False), *pr_rois, file=outputDap)
        else:
            outputDap.flush()  # may be too intensive


if __name__ == "__main__":
    main()
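
The accumulator only consumes JSON messages pushed by the workers; the handful of keys it reads (pulse_id, detector_name, is_good_frame, is_hit_frame, number_of_spots, laser_on, roi_intensities) end up as one line per frame in the .dap output file. A minimal sketch of a compatible test producer (the field values below are made up for illustration):
```
import zmq

# hypothetical test message; the field names match what ap/accumulator.py reads
result = {
    "pulse_id": 18000012345,
    "detector_name": "JF06T08V04",
    "is_good_frame": 1,
    "is_hit_frame": False,
    "number_of_spots": 0,
    "laser_on": True,
    "roi_intensities": [1.0, 2.0],
}

context = zmq.Context()
socket = context.socket(zmq.PUSH)
socket.connect("tcp://localhost:13002")  # default --accumulator_port
socket.send_json(result)
```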

501
ap/worker.py Normal file

@@ -0,0 +1,501 @@
from copy import copy
import numpy as np
import zmq
from time import time, sleep
from datetime import datetime
import os
import argparse
from random import randint, gauss
from math import exp
import json

from peakfinder8_extension import peakfinder_8

import jungfrau_utils as ju

flags = 0

MODULE_SIZE_X = 1024
MODULE_SIZE_Y = 512


def radial_profile(data, r, nr, keep_pixels=None):
    if keep_pixels is not None:
        tbin = np.bincount(r, data[keep_pixels].ravel())
    else:
        tbin = np.bincount(r, data.ravel())
    radialprofile = tbin / nr
    return radialprofile


def prepare_radial_profile(data, center, keep_pixels=None):
    y, x = np.indices((data.shape))
    r = np.sqrt((x - center[0])**2 + (y - center[1])**2)
    if keep_pixels is not None:
        r = r[keep_pixels].astype(int).ravel()
    else:
        r = r.astype(int).ravel()
    nr = np.bincount(r)
    return r, nr


def mask_corner_pixels(pixel_mask, n_pixels):
    return pixel_mask

def main():
    parser = argparse.ArgumentParser()
    parser.add_argument("--backend", default=None, help="backend address")
    parser.add_argument("--accumulator", default="localhost", help="name of host where accumulator works")
    parser.add_argument("--accumulator_port", default=13002, type=int, help="accumulator port")
    parser.add_argument("--visualisation", default="localhost", help="name of host where visualisation works")
    parser.add_argument("--visualisation_port", default=13002, type=int, help="visualisation port")
    parser.add_argument("--peakfinder_parameters", default=None, help="json file with peakfinder parameters")
    parser.add_argument("--skip_frames_rate", default=1, type=int, help="send to streamvis each of skip_frames_rate frames")
    args = parser.parse_args()

    if args.backend:
        BACKEND_ADDRESS = args.backend
    else:
        print("no backend address defined, exit")
        exit()

    FA_HOST_ACCUMULATE = args.accumulator
    FA_PORT_ACCUMULATE = args.accumulator_port

    FA_HOST_VISUALISATION = args.visualisation
    FA_PORT_VISUALISATION = args.visualisation_port

    skip_frames_rate = args.skip_frames_rate

    peakfinder_parameters = {}
    peakfinder_parameters_time = -1
    if args.peakfinder_parameters is not None and os.path.exists(args.peakfinder_parameters):
        with open(args.peakfinder_parameters, "r") as read_file:
            peakfinder_parameters = json.load(read_file)
        peakfinder_parameters_time = os.path.getmtime(args.peakfinder_parameters)

    pulseid = 0

    ju_stream_adapter = ju.StreamAdapter()

    zmq_context = zmq.Context(io_threads=4)
    poller = zmq.Poller()

    # all the normal workers
    if True:
        worker = 1

        # receive from backend:
        backend_socket = zmq_context.socket(zmq.PULL)
        backend_socket.connect(BACKEND_ADDRESS)
        poller.register(backend_socket, zmq.POLLIN)

        accumulator_socket = zmq_context.socket(zmq.PUSH)
        accumulator_socket.connect('tcp://%s:%s' % (FA_HOST_ACCUMULATE, FA_PORT_ACCUMULATE))

        visualisation_socket = zmq_context.socket(zmq.PUB)
        visualisation_socket.connect('tcp://%s:%s' % (FA_HOST_VISUALISATION, FA_PORT_VISUALISATION))

        # in case of problem with communication to visualisation, keep in 0mq buffer only few messages
        visualisation_socket.set_hwm(10)

    keep_pixels = None
    r_radial_integration = None
    center_radial_integration = None

    results = {}

    pedestal_file_name_saved = None

    pixel_mask_corrected = None
    pixel_mask_pf = None

    n_aggregated_images = 1
    data_summed = None
    while True:

        # check if peakfinder parameters changed and then re-read it
        try:
            if peakfinder_parameters_time > 0:
                new_time = os.path.getmtime(args.peakfinder_parameters)
                if (new_time - peakfinder_parameters_time) > 2.0:
                    old_peakfinder_parameters = peakfinder_parameters
                    sleep(0.5)
                    with open(args.peakfinder_parameters, "r") as read_file:
                        peakfinder_parameters = json.load(read_file)
                    peakfinder_parameters_time = new_time
                    center_radial_integration = None
                    if worker == 0:
                        print(f'({pulseid}) update peakfinder parameters {old_peakfinder_parameters}', flush=True)
                        print(f' --> {peakfinder_parameters}', flush=True)
                        print("", flush=True)
        except:
            print(f'({pulseid}) problem to read peakfinder parameters file, worker : {worker}', flush=True)

        events = dict(poller.poll(2000))  # check every 2 seconds in each worker

        if backend_socket in events:

            metadata = backend_socket.recv_json(flags)
            image = backend_socket.recv(flags, copy=False, track=False)
            image = np.frombuffer(image, dtype=metadata['type']).reshape(metadata['shape'])

            results = copy(metadata)

            if results['shape'][0] == 2 and results['shape'][1] == 2:
                continue

            pulseid = results.get('pulse_id', 0)

            results.update(peakfinder_parameters)

            detector = results.get('detector_name', "")

            results['laser_on'] = False
            results['number_of_spots'] = 0
            results['is_hit_frame'] = False

            daq_rec = results.get("daq_rec", 0)

            event_laser = bool((daq_rec >> 16) & 1)
            event_darkshot = bool((daq_rec >> 17) & 1)
            event_fel = bool((daq_rec >> 18) & 1)
            event_ppicker = bool((daq_rec >> 19) & 1)

            #
            if not event_darkshot:
                results['laser_on'] = event_laser

            # special settings for Cristallina JF16 detector
            # if detector in ["JF16T03V01"]:
            #     if not event_ppicker:
            #         continue

            # special settings for p20270, only few shots were opened by pulse-picker
            # if detector in ["JF06T32V02"]:
            #     if event_ppicker:
            #         results['number_of_spots'] = 50
            #         results['is_hit_frame'] = True

            double_pixels = results.get('double_pixels', "mask")

            pedestal_file_name = metadata.get("pedestal_name", None)
            if pedestal_file_name is not None and pedestal_file_name != pedestal_file_name_saved:
                n_corner_pixels_mask = results.get("n_corner_pixels_mask", 0)
                pixel_mask_current = ju_stream_adapter.handler.pixel_mask
                pixel_mask_current = mask_corner_pixels(pixel_mask_current, n_corner_pixels_mask)
                ju_stream_adapter.handler.pixel_mask = pixel_mask_current
                pedestal_file_name_saved = pedestal_file_name
            data = ju_stream_adapter.process(image, metadata, double_pixels=double_pixels)

            # pedestal file is not in stream, skip this frame
            if ju_stream_adapter.handler.pedestal_file is None or ju_stream_adapter.handler.pedestal_file == "":
                continue

            data = np.ascontiguousarray(data)

            # starting from ju 3.3.1 pedestal file is cached in library, re-calculated only if parameters (and/or pedestal file) are changed
            id_pixel_mask_1 = id(pixel_mask_corrected)
            pixel_mask_corrected = ju_stream_adapter.handler.get_pixel_mask(geometry=True, gap_pixels=True, double_pixels=double_pixels)
            id_pixel_mask_2 = id(pixel_mask_corrected)

            if id_pixel_mask_1 != id_pixel_mask_2:
                keep_pixels = None
                r_radial_integration = None
                if pixel_mask_corrected is not None:
                    #pixel_mask_corrected = np.ascontiguousarray(pixel_mask_corrected)
                    pixel_mask_pf = np.ascontiguousarray(pixel_mask_corrected).astype(np.int8, copy=False)
                else:
                    pixel_mask_pf = None

            #
            disabled_modules = results.get("disabled_modules", [])

            # add additional mask at the edge of modules for JF06T08
            apply_additional_mask = (results.get("apply_additional_mask", 0) == 1)

            if detector == "JF06T08V04" and apply_additional_mask:
                # edge pixels
                pixel_mask_pf[67:1097, 1063] = 0
                pixel_mask_pf[0:1030, 1100] = 0
                pixel_mask_pf[1106:2136, 1131] = 0
                pixel_mask_pf[1039:2069, 1168] = 0
                pixel_mask_pf[1039:2069, 1718] = 0
                pixel_mask_pf[1039:2069, 1681] = 0
                pixel_mask_pf[1106:2136, 618] = 0
                pixel_mask_pf[1106:2136, 581] = 0
                pixel_mask_pf[67:1097, 513] = 0
                pixel_mask_pf[67:1097, 550] = 0
                pixel_mask_pf[0:1030, 1650] = 0
                pixel_mask_pf[0:1030, 1613] = 0

                pixel_mask_pf[1106, 68:582] = 0
                pixel_mask_pf[1096, 550:1064] = 0
                pixel_mask_pf[1106, 618:1132] = 0
                pixel_mask_pf[1029, 1100:1614] = 0
                pixel_mask_pf[1039, 1168:1682] = 0
                pixel_mask_pf[1039, 1718:2230] = 0
                pixel_mask_pf[1096, 0:513] = 0
                pixel_mask_pf[1029, 1650:2163] = 0
                pixel_mask_pf[2068, 1168:2232] = 0
                pixel_mask_pf[67, 0:1063] = 0

                # bad region in left bottom inner module
                pixel_mask_pf[842:1097, 669:671] = 0

                # second bad region in left bottom inner module
                pixel_mask_pf[1094, 620:807] = 0

                # vertical line in upper left bottom module
                pixel_mask_pf[842:1072, 87:90] = 0

                pixel_mask_pf[1794, 1503:1550] = 0

            if detector == "JF17T16V01" and apply_additional_mask:
                # mask module 11
                pixel_mask_pf[2619:3333, 1577:2607] = 0

            if pixel_mask_corrected is not None:
                data_s = copy(image)
                saturated_pixels_coordinates = ju_stream_adapter.handler.get_saturated_pixels(data_s, mask=True, geometry=True, gap_pixels=True, double_pixels=double_pixels)
                results['saturated_pixels'] = len(saturated_pixels_coordinates[0])
                results['saturated_pixels_x'] = saturated_pixels_coordinates[1].tolist()
                results['saturated_pixels_y'] = saturated_pixels_coordinates[0].tolist()
            # pump probe analysis
            do_radial_integration = results.get("do_radial_integration", 0)

            if do_radial_integration != 0:

                data_copy_1 = np.copy(data)

                if keep_pixels is None and pixel_mask_pf is not None:
                    keep_pixels = pixel_mask_pf != 0
                if center_radial_integration is None:
                    center_radial_integration = [results['beam_center_x'], results['beam_center_y']]
                    r_radial_integration = None
                if r_radial_integration is None:
                    r_radial_integration, nr_radial_integration = prepare_radial_profile(data_copy_1, center_radial_integration, keep_pixels)
                    r_min_max = [int(np.min(r_radial_integration)), int(np.max(r_radial_integration)) + 1]

                apply_threshold = results.get('apply_threshold', 0)

                if (apply_threshold != 0) and all(k in results for k in ('threshold_min', 'threshold_max')):
                    threshold_min = float(results['threshold_min'])
                    threshold_max = float(results['threshold_max'])
                    data_copy_1[data_copy_1 < threshold_min] = np.nan
                    if threshold_max > threshold_min:
                        data_copy_1[data_copy_1 > threshold_max] = np.nan

                rp = radial_profile(data_copy_1, r_radial_integration, nr_radial_integration, keep_pixels)

                silent_region_min = results.get("radial_integration_silent_min", None)
                silent_region_max = results.get("radial_integration_silent_max", None)

                if (silent_region_min is not None and silent_region_max is not None and
                        silent_region_max > silent_region_min and
                        silent_region_min > r_min_max[0] and silent_region_max < r_min_max[1]):

                    integral_silent_region = np.sum(rp[silent_region_min:silent_region_max])
                    rp = rp / integral_silent_region
                    results['radint_normalised'] = [silent_region_min, silent_region_max]

                results['radint_I'] = list(rp[r_min_max[0]:])
                results['radint_q'] = r_min_max
            # copy image to work with peakfinder, just in case
            d = np.copy(data)

            # make all masked pixels values nan's
            if pixel_mask_pf is not None:
                d[pixel_mask_pf != 1] = np.nan

            apply_threshold = results.get('apply_threshold', 0)
            threshold_value_choice = results.get('threshold_value', "NaN")
            threshold_value = 0 if threshold_value_choice == "0" else np.nan

            if (apply_threshold != 0) and all(k in results for k in ('threshold_min', 'threshold_max')):
                threshold_min = float(results['threshold_min'])
                threshold_max = float(results['threshold_max'])
                d[d < threshold_min] = threshold_value
                if threshold_max > threshold_min:
                    d[d > threshold_max] = threshold_value

            # if roi calculation request is present, make it
            roi_x1 = results.get('roi_x1', [])
            roi_x2 = results.get('roi_x2', [])
            roi_y1 = results.get('roi_y1', [])
            roi_y2 = results.get('roi_y2', [])

            if len(roi_x1) > 0 and len(roi_x1) == len(roi_x2) and len(roi_x1) == len(roi_y1) and len(roi_x1) == len(roi_y2):

                roi_results = [0] * len(roi_x1)
                roi_results_normalised = [0] * len(roi_x1)

                if pixel_mask_pf is not None:

                    results['roi_intensities_x'] = []
                    results['roi_intensities_proj_x'] = []

                    for iRoi in range(len(roi_x1)):
                        data_roi = np.copy(d[roi_y1[iRoi]:roi_y2[iRoi], roi_x1[iRoi]:roi_x2[iRoi]])

                        roi_results[iRoi] = np.nansum(data_roi)
                        if threshold_value_choice == "NaN":
                            roi_results_normalised[iRoi] = roi_results[iRoi] / ((roi_y2[iRoi] - roi_y1[iRoi]) * (roi_x2[iRoi] - roi_x1[iRoi]))
                        else:
                            roi_results_normalised[iRoi] = np.nanmean(data_roi)

                        results['roi_intensities_x'].append([roi_x1[iRoi], roi_x2[iRoi]])
                        results['roi_intensities_proj_x'].append(np.nansum(data_roi, axis=0).tolist())

                results['roi_intensities'] = [float(r) for r in roi_results]
                results['roi_intensities_normalised'] = [float(r) for r in roi_results_normalised]
            # SPI analysis
            do_spi_analysis = results.get("do_spi_analysis", 0)

            if (do_spi_analysis != 0) and 'roi_intensities_normalised' in results and len(results['roi_intensities_normalised']) >= 2:

                if 'spi_limit' in results and len(results['spi_limit']) == 2:

                    number_of_spots = 0
                    if results['roi_intensities_normalised'][0] >= results['spi_limit'][0]:
                        number_of_spots += 25
                    if results['roi_intensities_normalised'][1] >= results['spi_limit'][1]:
                        number_of_spots += 50

                    results['number_of_spots'] = number_of_spots
                    if number_of_spots > 0:
                        results['is_hit_frame'] = True

            # in case all needed parameters are present, make peakfinding
            do_peakfinder_analysis = results.get('do_peakfinder_analysis', 0)

            if (do_peakfinder_analysis != 0) and pixel_mask_pf is not None and all(k in results for k in ('beam_center_x', 'beam_center_y', 'hitfinder_min_snr', 'hitfinder_min_pix_count', 'hitfinder_adc_thresh')):

                x_beam = results['beam_center_x'] - 0.5  # to coordinates where position of first pixel/point is 0.5, 0.5
                y_beam = results['beam_center_y'] - 0.5  # to coordinates where position of first pixel/point is 0.5, 0.5

                hitfinder_min_snr = results['hitfinder_min_snr']
                hitfinder_min_pix_count = int(results['hitfinder_min_pix_count'])
                hitfinder_adc_thresh = results['hitfinder_adc_thresh']

                asic_ny, asic_nx = d.shape
                nasics_y, nasics_x = 1, 1

                hitfinder_max_pix_count = 100
                max_num_peaks = 10000

                # usually don't need to change this value, rather robust
                hitfinder_local_bg_radius = 20.

                # in case of further modification with the mask, make a new one, independent from real mask
                maskPr = np.copy(pixel_mask_pf)

                y, x = np.indices(d.shape)
                pix_r = np.sqrt((x - x_beam)**2 + (y - y_beam)**2)

                peak_list_x, peak_list_y, peak_list_value = peakfinder_8(max_num_peaks,
                                                                         d.astype(np.float32),
                                                                         maskPr.astype(np.int8),
                                                                         pix_r.astype(np.float32),
                                                                         asic_nx, asic_ny,
                                                                         nasics_x, nasics_y,
                                                                         hitfinder_adc_thresh,
                                                                         hitfinder_min_snr,
                                                                         hitfinder_min_pix_count,
                                                                         hitfinder_max_pix_count,
                                                                         hitfinder_local_bg_radius)

                number_of_spots = len(peak_list_x)
                results['number_of_spots'] = number_of_spots

                if number_of_spots != 0:
                    results['spot_x'] = [-1.0] * number_of_spots
                    results['spot_y'] = [-1.0] * number_of_spots
                    results['spot_intensity'] = copy(peak_list_value)
                    for i in range(number_of_spots):
                        results['spot_x'][i] = peak_list_x[i] + 0.5
                        results['spot_y'][i] = peak_list_y[i] + 0.5
                else:
                    results['spot_x'] = []
                    results['spot_y'] = []
                    results['spot_intensity'] = []

                npeaks_threshold_hit = results.get('npeaks_threshold_hit', 15)

                if number_of_spots >= npeaks_threshold_hit:
                    results['is_hit_frame'] = True
            forceSendVisualisation = False

            if data.dtype != np.uint16:

                apply_threshold = results.get('apply_threshold', 0)
                apply_aggregation = results.get('apply_aggregation', 0)

                if apply_aggregation == 0:
                    data_summed = None
                    n_aggregated_images = 1

                if (apply_threshold != 0) or (apply_aggregation != 0):

                    if (apply_threshold != 0) and all(k in results for k in ('threshold_min', 'threshold_max')):
                        threshold_min = float(results['threshold_min'])
                        threshold_max = float(results['threshold_max'])
                        data[data < threshold_min] = 0.0
                        if threshold_max > threshold_min:
                            data[data > threshold_max] = 0.0

                    if (apply_aggregation != 0) and 'aggregation_max' in results:
                        if data_summed is not None:
                            data += data_summed
                            n_aggregated_images += 1
                        data_summed = data.copy()
                        data_summed[data == -np.nan] = -np.nan

                        results['aggregated_images'] = n_aggregated_images
                        results['worker'] = worker

                        if n_aggregated_images >= results['aggregation_max']:
                            forceSendVisualisation = True
                            data_summed = None
                            n_aggregated_images = 1

                    data[pixel_mask_pf == 0] = np.NaN

            else:
                data = image

            results['type'] = str(data.dtype)
            results['shape'] = data.shape

            frame_number = metadata['frame']

            #
            accumulator_socket.send_json(results, flags)

            if (apply_aggregation != 0) and 'aggregation_max' in results:
                if forceSendVisualisation:
                    visualisation_socket.send_json(results, flags | zmq.SNDMORE)
                    visualisation_socket.send(data, flags, copy=True, track=True)
                else:
                    data = np.empty((2, 2), dtype=np.uint16)
                    results['type'] = str(data.dtype)
                    results['shape'] = data.shape
                    visualisation_socket.send_json(results, flags | zmq.SNDMORE)
                    visualisation_socket.send(data, flags, copy=True, track=True)
            else:
                if results['is_good_frame'] and (results['is_hit_frame'] or randint(1, skip_frames_rate) == 1):
                    visualisation_socket.send_json(results, flags | zmq.SNDMORE)
                    visualisation_socket.send(data, flags, copy=True, track=True)
                else:
                    data = np.empty((2, 2), dtype=np.uint16)
                    results['type'] = str(data.dtype)
                    results['shape'] = data.shape
                    visualisation_socket.send_json(results, flags | zmq.SNDMORE)
                    visualisation_socket.send(data, flags, copy=True, track=True)


if __name__ == "__main__":
    main()
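
The worker expects a two-part ZMQ message from the backend: a JSON metadata header followed by the raw image buffer, which it reinterprets via metadata['type'] and metadata['shape']. A minimal sketch of a standalone sender matching this receive logic (the address and all field values are made up for illustration):
```
import zmq
import numpy as np

# hypothetical sender mimicking the backend message layout read by ap/worker.py
context = zmq.Context()
socket = context.socket(zmq.PUSH)
socket.bind("tcp://*:9999")  # the worker would then be started with --backend tcp://<this-host>:9999

image = np.zeros((512, 1024), dtype=np.uint16)
metadata = {
    "pulse_id": 18000012345,
    "detector_name": "JF06T08V04",
    "is_good_frame": 1,
    "daq_rec": 0,
    "frame": 0,
    "pedestal_name": "pedestal_20230928.h5",  # hypothetical file name
    "type": str(image.dtype),
    "shape": image.shape,
}

socket.send_json(metadata, zmq.SNDMORE)  # part 1: metadata header
socket.send(image, copy=True)            # part 2: raw pixel buffer
```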

5
peakfinder8/.gitignore vendored Normal file

@@ -0,0 +1,5 @@
build/
cython/peakfinder8_extension.cpp
peakfinder8_extension.cpython-*-x86_64-linux-gnu.so
libcheetah.so
peakfinders.o

46
peakfinder8/cheetahmodules.h Normal file

@@ -0,0 +1,46 @@
/*
* cheetahmodules.h
* cheetah
*
* Created by Anton Barty on 7/2/11.
* Copyright 2011 CFEL. All rights reserved.
*
*/
int peakfinder3(tPeakList*, float*, char*, long, long, long, long, float, float, long, long, long);
int peakfinder6(tPeakList*, float*, char*, long, long, long, long, float, float, long, long, long, float);
int peakfinder8(tPeakList*, float*, char*, float*, long, long, long, long, float, float, long, long, long);
int peakfinder8old(tPeakList*, float*, char*, float*, long, long, long, long, float, float, long, long, long);
int killNearbyPeaks(tPeakList*, float );
#include <stdint.h>
// from detectorObject.h file
//--------------------------------------------------------------------------------------------------------------------
/*
* Bits for pixel masks
* Oriented along conventions defined for CXI file format ( https://github.com/FilipeMaia/CXI/raw/master/cxi_file_format.pdf )
* CONVENTIONS:
* - All options are dominantly inherited during assembly and pixel integration (see assembleImage.cpp)
* - The default value for all options is "false"
* */
static const uint16_t PIXEL_IS_PERFECT = 0; // Remember to change this value if necessary after adding a new option
static const uint16_t PIXEL_IS_INVALID = 1; // bit 0
static const uint16_t PIXEL_IS_SATURATED = 2; // bit 1
static const uint16_t PIXEL_IS_HOT = 4; // bit 2
static const uint16_t PIXEL_IS_DEAD = 8; // bit 3
static const uint16_t PIXEL_IS_SHADOWED = 16; // bit 4
static const uint16_t PIXEL_IS_IN_PEAKMASK = 32; // bit 5
static const uint16_t PIXEL_IS_TO_BE_IGNORED = 64; // bit 6
static const uint16_t PIXEL_IS_BAD = 128; // bit 7
static const uint16_t PIXEL_IS_OUT_OF_RESOLUTION_LIMITS = 256; // bit 8
static const uint16_t PIXEL_IS_MISSING = 512; // bit 9
static const uint16_t PIXEL_IS_NOISY = 1024; // bit 10
static const uint16_t PIXEL_IS_ARTIFACT_CORRECTED = 2048; // bit 11
static const uint16_t PIXEL_FAILED_ARTIFACT_CORRECTION = 4096; // bit 12
static const uint16_t PIXEL_IS_PEAK_FOR_HITFINDER = 8192; // bit 13
static const uint16_t PIXEL_IS_PHOTON_BACKGROUND_CORRECTED = 16384; // bit 14
static const uint16_t PIXEL_IS_ALL = PIXEL_IS_INVALID | PIXEL_IS_SATURATED | PIXEL_IS_HOT | PIXEL_IS_DEAD | PIXEL_IS_SHADOWED | PIXEL_IS_IN_PEAKMASK | PIXEL_IS_TO_BE_IGNORED | PIXEL_IS_BAD | PIXEL_IS_OUT_OF_RESOLUTION_LIMITS | PIXEL_IS_MISSING | PIXEL_IS_NOISY | PIXEL_IS_ARTIFACT_CORRECTED | PIXEL_FAILED_ARTIFACT_CORRECTION | PIXEL_IS_PEAK_FOR_HITFINDER | PIXEL_IS_PHOTON_BACKGROUND_CORRECTED; // all bits
//
inline bool isAnyOfBitOptionsSet(uint16_t value, uint16_t option) {return ((value & option)!=0);}
//--------------------------------------------------------------------------------------------------------------------
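
The pixel-mask bits above are meant to be combined bitwise and tested with isAnyOfBitOptionsSet(). A small illustration of the same test (the constant values are copied from the header; Python is used here purely for illustration):
```
# pixel-mask bit values copied from cheetahmodules.h
PIXEL_IS_SATURATED = 2   # bit 1
PIXEL_IS_HOT = 4         # bit 2
PIXEL_IS_DEAD = 8        # bit 3

pixel_flags = PIXEL_IS_SATURATED | PIXEL_IS_HOT   # a pixel marked both saturated and hot

# equivalent of isAnyOfBitOptionsSet(value, option): true if any requested bit is set
def is_any_of_bit_options_set(value, option):
    return (value & option) != 0

print(is_any_of_bit_options_set(pixel_flags, PIXEL_IS_DEAD))                  # False
print(is_any_of_bit_options_set(pixel_flags, PIXEL_IS_HOT | PIXEL_IS_DEAD))   # True
```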

1
peakfinder8/peakfinder8.hh Normal file

@@ -0,0 +1 @@
int peakfinder8(tPeakList*, float*, char*, float*, long, long, long, long, float, float, long, long, long);

90
peakfinder8/cython/peakfinder8_extension.pyx Normal file

@@ -0,0 +1,90 @@
cimport numpy
from libcpp.vector cimport vector
from libc.stdlib cimport malloc, free

#
# PEAKFINDER 8
#

cdef extern from "peakfinders.h":

    ctypedef struct tPeakList:
        long nPeaks
        long nHot
        float peakResolution
        float peakResolutionA
        float peakDensity
        float peakNpix
        float peakTotal
        int memoryAllocated
        long nPeaks_max

        float *peak_maxintensity
        float *peak_totalintensity
        float *peak_sigma
        float *peak_snr
        float *peak_npix
        float *peak_com_x
        float *peak_com_y
        long *peak_com_index
        float *peak_com_x_assembled
        float *peak_com_y_assembled
        float *peak_com_r_assembled
        float *peak_com_q
        float *peak_com_res

    void allocatePeakList(tPeakList* peak_list, long max_num_peaks)
    void freePeakList(tPeakList peak_list)


cdef extern from "peakfinder8.hh":

    int peakfinder8(tPeakList *peaklist,
                    float *data, char *mask, float *pix_r, long asic_nx, long asic_ny,
                    long nasics_x, long nasics_y, float ADCthresh, float hitfinderMinSNR,
                    long hitfinderMinPixCount, long hitfinderMaxPixCount,
                    long hitfinderLocalBGRadius)


def peakfinder_8(int max_num_peaks,
                 numpy.ndarray[numpy.float32_t, ndim=2, mode="c"] data,
                 numpy.ndarray[numpy.int8_t, ndim=2, mode="c"] mask,
                 numpy.ndarray[numpy.float32_t, ndim=2, mode="c"] pix_r,
                 long asic_nx, long asic_ny,
                 long nasics_x, long nasics_y, float adc_thresh, float hitfinder_min_snr,
                 long hitfinder_min_pix_count, long hitfinder_max_pix_count,
                 long hitfinder_local_bg_radius):

    cdef numpy.int8_t *mask_pointer = &mask[0, 0]
    cdef char *mask_char_pointer = <char*> mask_pointer

    cdef tPeakList peak_list
    allocatePeakList(&peak_list, max_num_peaks)

    peakfinder8(&peak_list, &data[0, 0], mask_char_pointer, &pix_r[0, 0],
                asic_nx, asic_ny, nasics_x, nasics_y,
                adc_thresh, hitfinder_min_snr, hitfinder_min_pix_count,
                hitfinder_max_pix_count, hitfinder_local_bg_radius)

    cdef int i
    cdef float peak_x, peak_y, peak_value
    cdef vector[double] peak_list_x
    cdef vector[double] peak_list_y
    cdef vector[double] peak_list_value

    num_peaks = peak_list.nPeaks

    if num_peaks > max_num_peaks:
        num_peaks = max_num_peaks

    for i in range(0, num_peaks):
        peak_x = peak_list.peak_com_x[i]
        peak_y = peak_list.peak_com_y[i]
        peak_value = peak_list.peak_totalintensity[i]
        peak_list_x.push_back(peak_x)
        peak_list_y.push_back(peak_y)
        peak_list_value.push_back(peak_value)

    freePeakList(peak_list)

    return (peak_list_x, peak_list_y, peak_list_value)
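
A minimal sketch of calling this binding on synthetic single-panel data (the parameter values are only illustrative; in this commit the real call on detector data is made in ap/worker.py):
```
import numpy as np
from peakfinder8_extension import peakfinder_8

ny, nx = 512, 1024
data = np.random.default_rng(0).normal(0.0, 1.0, (ny, nx)).astype(np.float32)
data[100:103, 200:203] = 500.0               # one artificial bright blob
mask = np.ones((ny, nx), dtype=np.int8)      # 1 = good pixel, 0 = masked
y, x = np.indices(data.shape)
pix_r = np.sqrt((x - nx / 2)**2 + (y - ny / 2)**2).astype(np.float32)

peak_x, peak_y, peak_intensity = peakfinder_8(
    10000,        # max_num_peaks
    data, mask, pix_r,
    nx, ny,       # asic_nx, asic_ny
    1, 1,         # nasics_x, nasics_y
    4.0,          # adc_thresh
    7.0,          # hitfinder_min_snr
    2,            # hitfinder_min_pix_count
    100,          # hitfinder_max_pix_count
    20.0)         # hitfinder_local_bg_radius
print(len(peak_x), "peaks found")
```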

566
peakfinder8/peakfinders.cpp Normal file

@@ -0,0 +1,566 @@
//
// peakfinders.cpp
// cheetah
//
// Created by Anton Barty on 23/3/13.
//
//
#include <stdio.h>
#include <string.h>
#include <pthread.h>
#include <math.h>
#include <stdlib.h>
#include "peakfinders.h"
#include "cheetahmodules.h"
/*
* Create arrays for remembering Bragg peak data
*/
void allocatePeakList(tPeakList *peak, long NpeaksMax) {
peak->nPeaks = 0;
peak->nPeaks_max = NpeaksMax;
peak->nHot = 0;
peak->peakResolution = 0;
peak->peakResolutionA = 0;
peak->peakDensity = 0;
peak->peakNpix = 0;
peak->peakTotal = 0;
peak->peak_maxintensity = (float *) calloc(NpeaksMax, sizeof(float));
peak->peak_totalintensity = (float *) calloc(NpeaksMax, sizeof(float));
peak->peak_sigma = (float *) calloc(NpeaksMax, sizeof(float));
peak->peak_snr = (float *) calloc(NpeaksMax, sizeof(float));
peak->peak_npix = (float *) calloc(NpeaksMax, sizeof(float));
peak->peak_com_x = (float *) calloc(NpeaksMax, sizeof(float));
peak->peak_com_y = (float *) calloc(NpeaksMax, sizeof(float));
peak->peak_com_index = (long *) calloc(NpeaksMax, sizeof(long));
peak->peak_com_x_assembled = (float *) calloc(NpeaksMax, sizeof(float));
peak->peak_com_y_assembled = (float *) calloc(NpeaksMax, sizeof(float));
peak->peak_com_r_assembled = (float *) calloc(NpeaksMax, sizeof(float));
peak->peak_com_q = (float *) calloc(NpeaksMax, sizeof(float));
peak->peak_com_res = (float *) calloc(NpeaksMax, sizeof(float));
peak->memoryAllocated = 1;
}
/*
* Clean up Bragg peak arrays
*/
void freePeakList(tPeakList peak) {
free(peak.peak_maxintensity);
free(peak.peak_totalintensity);
free(peak.peak_sigma);
free(peak.peak_snr);
free(peak.peak_npix);
free(peak.peak_com_x);
free(peak.peak_com_y);
free(peak.peak_com_index);
free(peak.peak_com_x_assembled);
free(peak.peak_com_y_assembled);
free(peak.peak_com_r_assembled);
free(peak.peak_com_q);
free(peak.peak_com_res);
peak.memoryAllocated = 0;
}
/*
* Peakfinder 8
* Version before modifications during Cherezov December 2014 LE80
* Count peaks by searching for connected pixels above threshold
* Anton Barty
*/
int peakfinder8(tPeakList *peaklist, float *data, char *mask, float *pix_r, long asic_nx, long asic_ny, long nasics_x, long nasics_y, float ADCthresh, float hitfinderMinSNR, long hitfinderMinPixCount, long hitfinderMaxPixCount, long hitfinderLocalBGRadius) {
// Derived values
long pix_nx = asic_nx*nasics_x;
long pix_ny = asic_ny*nasics_y;
long pix_nn = pix_nx*pix_ny;
//long asic_nn = asic_nx*asic_ny;
long hitfinderNpeaksMax = peaklist->nPeaks_max;
peaklist->nPeaks = 0;
peaklist->peakNpix = 0;
peaklist->peakTotal = 0;
// Variables for this hitfinder
long nat = 0;
long lastnat = 0;
//long counter=0;
float total;
int search_x[] = {0,-1,0,1,-1,1,-1,0,1};
int search_y[] = {0,-1,-1,-1,0,0,1,1,1};
int search_n = 9;
long e;
long *inx = (long *) calloc(pix_nn, sizeof(long));
long *iny = (long *) calloc(pix_nn, sizeof(long));
float thisI, thisIraw;
float totI,totIraw;
float maxI, maxIraw;
float snr;
float peak_com_x;
float peak_com_y;
long thisx;
long thisy;
long fs, ss;
float com_x, com_y, com_e;
float thisADCthresh;
nat = 0;
//counter = 0;
total = 0.0;
snr=0;
maxI = 0;
/*
* Create a buffer for image data so we don't nuke the main image by mistake
*/
float *temp = (float*) malloc(pix_nn*sizeof(float));
memcpy(temp, data, pix_nn*sizeof(float));
/*
* Apply mask (multiply data by 0 to ignore regions - this makes data below threshold for peak finding)
*/
for(long i=0;i<pix_nn;i++){
temp[i] *= mask[i];
}
/*
* Determine noise and offset as a function of radius
*/
float fminr, fmaxr;
long lminr, lmaxr;
fminr = 1e9;
fmaxr = -1e9;
// Figure out radius bounds
for(long i=0;i<pix_nn;i++){
if (pix_r[i] > fmaxr)
fmaxr = pix_r[i];
if (pix_r[i] < fminr)
fminr = pix_r[i];
}
lmaxr = (int)ceil(fmaxr)+1;
lminr = 0;
// Allocate and zero arrays
float *rsigma = (float*) malloc(lmaxr*sizeof(float));
float *roffset = (float*) malloc(lmaxr*sizeof(float));
long *rcount = (long*) malloc(lmaxr*sizeof(long));
float *rthreshold = (float*) malloc(lmaxr*sizeof(float));
long *peakpixels = (long *) calloc(hitfinderMaxPixCount, sizeof(long));
char *peakpixel = (char *) calloc(pix_nn, sizeof(char));
char *rthreshold_changed = (char *) malloc(lmaxr*sizeof(char));
int *pix_rint = (int *) malloc(pix_nn*sizeof(int));
long *pixels_check = (long *) malloc(pix_nn*sizeof(long));
long peakCounter = 0;
for(long i=0; i<lmaxr; i++) {
rthreshold[i] = 1e9;
rthreshold_changed[i] = 1;
}
for(long i=0;i<pix_nn;i++){
pix_rint[i] = lrint(pix_r[i]);
pixels_check[i] = i;
}
long n_pixels_check = pix_nn;
// Compute sigma and average of data values at each radius
// From this, compute the ADC threshold to be applied at each radius
// Iterate a few times to reduce the effect of positive outliers (ie: peaks)
long thisr;
float thisoffset, thissigma;
float thisthreshold;
int counter = 0;
bool rthreshold_converged = false;
//goto END;
// for(long counter=0; counter<5; counter++) {
while ( !rthreshold_converged && counter < 10 ) {
//printf("counter %i %i\n", counter, n_pixels_check);
counter++;
//for(long i=0; i<lmaxr; i++) {
// roffset[i] = 0;
// rsigma[i] = 0;
// rcount[i] = 0;
//}
memset(roffset,0,lmaxr*sizeof(float));
memset(rsigma, 0,lmaxr*sizeof(float));
memset(rcount, 0,lmaxr*sizeof(long));
long new_pixels_check=0;
//for(long i=0;i<pix_nn;i++){
for(long i_pixel=0; i_pixel<n_pixels_check; i_pixel++) {
long i = pixels_check[i_pixel];
thisr = pix_rint[i];
if ( rthreshold_changed[thisr] == 1 ) {
if(mask[i] != 0) {
if(temp[i] < rthreshold[thisr]) {
roffset[thisr] += temp[i];
rsigma[thisr] += (temp[i]*temp[i]);
rcount[thisr] += 1;
}
pixels_check[new_pixels_check] = i;
new_pixels_check++;
}
}
}
n_pixels_check = new_pixels_check;
rthreshold_converged = true;
for(long i=0; i<lmaxr; i++) {
if ( rthreshold_changed[i] == 1 ) {
if(rcount[i] == 0) {
roffset[i] = 0;
rsigma[i] = 0;
thisthreshold = 1e9;
//rthreshold[i] = ADCthresh; // For testing
}
else {
thisoffset = roffset[i]/rcount[i];
thissigma = sqrt(rsigma[i]/rcount[i] - (thisoffset)*(thisoffset));
roffset[i] = thisoffset;
rsigma[i] = thissigma;
thisthreshold = roffset[i] + hitfinderMinSNR*rsigma[i];
if(thisthreshold < ADCthresh)
thisthreshold = ADCthresh;
//rthreshold[i] = ADCthresh; // For testing
}
rthreshold_changed[i] = 0;
if ( fabs(thisthreshold-rthreshold[i]) > 0.1*rsigma[i] ) {
rthreshold_changed[i] = 1;
rthreshold_converged = false;
}
rthreshold[i] = thisthreshold;
}
}
}
com_x=0;
com_y=0;
//goto END;
for(long mj=0; mj<nasics_y; mj++){
for(long mi=0; mi<nasics_x; mi++){
// Loop over pixels within a module
for(long j=1; j<asic_ny-1; j++){
for(long i=1; i<asic_nx-1; i++){
ss = (j+mj*asic_ny)*pix_nx;
fs = i+mi*asic_nx;
e = ss + fs;
if(e > pix_nn) {
printf("Array bounds error: e=%li\n",e);
exit(1);
}
thisr = pix_rint[e];
thisADCthresh = rthreshold[thisr];
if(temp[e] > thisADCthresh && peakpixel[e] == 0){
// This might be the start of a new peak - start searching
inx[0] = i;
iny[0] = j;
peakpixels[0] = e;
nat = 1;
totI = 0;
totIraw = 0;
maxI = 0;
maxIraw = 0;
peak_com_x = 0;
peak_com_y = 0;
// Keep looping until the pixel count within this peak does not change
do {
lastnat = nat;
// Loop through points known to be within this peak
for(long p=0; p<nat; p++){
// Loop through search pattern
for(long k=0; k<search_n; k++){
// Array bounds check
if((inx[p]+search_x[k]) < 0)
continue;
if((inx[p]+search_x[k]) >= asic_nx)
continue;
if((iny[p]+search_y[k]) < 0)
continue;
if((iny[p]+search_y[k]) >= asic_ny)
continue;
// Neighbour point in big array
thisx = inx[p]+search_x[k]+mi*asic_nx;
thisy = iny[p]+search_y[k]+mj*asic_ny;
e = thisx + thisy*pix_nx;
//if(e < 0 || e >= pix_nn){
// printf("Array bounds error: e=%i\n",e);
// continue;
//}
thisr = pix_rint[e];
thisADCthresh = rthreshold[thisr];
// Above threshold?
if(temp[e] > thisADCthresh && peakpixel[e] == 0 && mask[e] != 0){
//if(nat < 0 || nat >= global->pix_nn) {
// printf("Array bounds error: nat=%i\n",nat);
// break
//}
thisI = temp[e] - roffset[thisr];
totI += thisI; // add to integrated intensity
totIraw += temp[e];
peak_com_x += thisI*( (float) thisx ); // for center of mass x
peak_com_y += thisI*( (float) thisy ); // for center of mass y
//temp[e] = 0; // zero out this intensity so that we don't count it again
inx[nat] = inx[p]+search_x[k];
iny[nat] = iny[p]+search_y[k];
peakpixel[e] = 1;
if(nat < hitfinderMaxPixCount)
peakpixels[nat] = e;
if (thisI > maxI)
maxI = thisI;
if (thisI > maxIraw)
maxIraw = temp[e];
nat++;
}
}
}
} while(lastnat != nat);
// Too many or too few pixels means ignore this 'peak'; move on now
if(nat<hitfinderMinPixCount || nat>hitfinderMaxPixCount) {
continue;
}
/*
* Calculate center of mass for this peak from initial peak search
*/
com_x = peak_com_x/fabs(totI);
com_y = peak_com_y/fabs(totI);
com_e = lrint(com_x) + lrint(com_y)*pix_nx;
long com_xi = lrint(com_x) - mi*asic_nx;
long com_yi = lrint(com_y) - mj*asic_ny;
/*
* Calculate the local signal-to-noise ratio and local background in an annulus around this peak
* (excluding pixels which look like they might be part of another peak)
*/
float localSigma=0;
float localOffset=0;
long ringWidth = 2*hitfinderLocalBGRadius;
float sumI = 0;
float sumIsquared = 0;
long np_sigma = 0;
long np_counted = 0;
float fbgr;
float backgroundMaxI=0;
float fBackgroundThresh=0;
for(long bj=-ringWidth; bj<ringWidth; bj++){
for(long bi=-ringWidth; bi<ringWidth; bi++){
// Within-ASIC check
if((com_xi+bi) < 0)
continue;
if((com_xi+bi) >= asic_nx)
continue;
if((com_yi+bj) < 0)
continue;
if((com_yi+bj) >= asic_ny)
continue;
// Within outer ring check
fbgr = sqrt( bi*bi + bj*bj );
if( fbgr > ringWidth )// || fbgr <= hitfinderLocalBGRadius ) // || fbgr > hitfinderLocalBGRadius)
continue;
// Position of this point in data stream
thisx = com_xi + bi + mi*asic_nx;
thisy = com_yi + bj + mj*asic_ny;
e = thisx + thisy*pix_nx;
thisr = pix_rint[e];
thisADCthresh = rthreshold[thisr];
// Intensity above background
thisI = temp[e];
// If above ADC threshold, this could be part of another peak
//if (temp[e] > thisADCthresh)
// continue;
// Keep track of value and value-squared for offset and sigma calculation
// if(peakpixel[e] == 0 && mask[e] != 0) {
if(temp[e] < thisADCthresh && peakpixel[e] == 0 && mask[e] != 0) {
np_sigma++;
sumI += thisI;
sumIsquared += (thisI*thisI);
if(thisI > backgroundMaxI) {
backgroundMaxI = thisI;
}
}
np_counted += 1;
}
}
// Calculate local background and standard deviation
if (np_sigma != 0) {
localOffset = sumI/np_sigma;
localSigma = sqrt(sumIsquared/np_sigma - ((sumI/np_sigma)*(sumI/np_sigma)));
}
else {
localOffset = roffset[pix_rint[lrint(com_e)]];
localSigma = 0.01;
}
/*
* Re-integrate (and re-centroid) peak using local background estimates
*/
totI = 0;
totIraw = 0;
maxI = 0;
maxIraw = 0;
peak_com_x = 0;
peak_com_y = 0;
for(long counter=1; counter<nat && counter <= hitfinderMaxPixCount; counter++) {
e = peakpixels[counter];
thisIraw = temp[e];
thisI = thisIraw - localOffset;
totI += thisI;
totIraw += thisIraw;
// Remember that e = thisx + thisy*pix_nx;
ldiv_t xy = ldiv(e, pix_nx);
thisx = xy.rem;
thisy = xy.quot;
peak_com_x += thisI*( (float) thisx ); // for center of mass x
peak_com_y += thisI*( (float) thisy ); // for center of mass y
if (thisIraw > maxIraw)
maxIraw = thisIraw;
if (thisI > maxI)
maxI = thisI;
}
com_x = peak_com_x/fabs(totI);
com_y = peak_com_y/fabs(totI);
com_e = lrint(com_x) + lrint(com_y)*pix_nx;
/*
* Calculate signal-to-noise and apply SNR criteria
*/
snr = (float) (totI)/localSigma;
//snr = (float) (maxI)/localSigma;
//snr = (float) (totIraw-nat*localOffset)/localSigma;
//snr = (float) (maxIraw-localOffset)/localSigma;
// The more pixels there are in the peak, the more relaxed we are about this criterion
if( snr < hitfinderMinSNR ) // - nat +hitfinderMinPixCount
continue;
// Is the maximum intensity in the peak enough above intensity in background region to be a peak and not noise?
// The more pixels there are in the peak, the more relaxed we are about this criterion
//fBackgroundThresh = hitfinderMinSNR - nat;
//if(fBackgroundThresh > 4) fBackgroundThresh = 4;
fBackgroundThresh = 1;
fBackgroundThresh *= (backgroundMaxI-localOffset);
if( maxI < fBackgroundThresh)
continue;
// This is a peak? If so, add info to peak list
if(nat>=hitfinderMinPixCount && nat<=hitfinderMaxPixCount ) {
// This CAN happen!
if(totI == 0)
continue;
//com_x = peak_com_x/fabs(totI);
//com_y = peak_com_y/fabs(totI);
e = lrint(com_x) + lrint(com_y)*pix_nx;
if(e < 0 || e >= pix_nn){
printf("Array bounds error: e=%ld\n",e);
continue;
}
// Remember peak information
if (peakCounter < hitfinderNpeaksMax) {
peaklist->peakNpix += nat;
peaklist->peakTotal += totI;
peaklist->peak_com_index[peakCounter] = e;
peaklist->peak_npix[peakCounter] = nat;
peaklist->peak_com_x[peakCounter] = com_x;
peaklist->peak_com_y[peakCounter] = com_y;
peaklist->peak_totalintensity[peakCounter] = totI;
peaklist->peak_maxintensity[peakCounter] = maxI;
peaklist->peak_sigma[peakCounter] = localSigma;
peaklist->peak_snr[peakCounter] = snr;
peakCounter++;
peaklist->nPeaks = peakCounter;
}
else {
peakCounter++;
}
}
}
}
}
}
}
//END: ;
free(temp);
free(inx);
free(iny);
free(peakpixel);
free(peakpixels);
free(roffset);
free(rsigma);
free(rcount);
free(rthreshold);
free(pix_rint);
free(pixels_check);
free(rthreshold_changed);
peaklist->nPeaks = peakCounter;
return(peaklist->nPeaks);
/*************************************************/
}
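
peakfinder8 iterates the per-radius statistics until the thresholds stop changing: for every integer radius it accumulates the mean and sigma of the pixels currently below the threshold, then sets the new threshold to offset + hitfinderMinSNR*sigma, clamped from below by ADCthresh. A compact NumPy re-statement of that update rule, for illustration only (the C++ above additionally tracks which radii changed so it can skip converged bins, and uses a per-bin convergence test):
```
import numpy as np

def radial_thresholds(values, radii, adc_thresh, min_snr, n_iter=10):
    """Illustrative re-statement of the per-radius threshold iteration in peakfinder8()."""
    nbins = int(np.ceil(radii.max())) + 1
    thresholds = np.full(nbins, 1e9)
    rint = np.rint(radii).astype(int)
    for _ in range(n_iter):
        new_thresholds = thresholds.copy()
        for r in range(nbins):
            sel = values[(rint == r) & (values < thresholds[r])]
            if sel.size == 0:
                new_thresholds[r] = 1e9
                continue
            offset = sel.mean()
            sigma = sel.std()
            new_thresholds[r] = max(offset + min_snr * sigma, adc_thresh)
        if np.allclose(new_thresholds, thresholds):
            break
        thresholds = new_thresholds
    return thresholds

# tiny usage example with random data
rng = np.random.default_rng(1)
vals = rng.normal(10.0, 2.0, size=10000)
rads = rng.uniform(0.0, 50.0, size=10000)
print(radial_thresholds(vals, rads, adc_thresh=5.0, min_snr=8.0)[:5])
```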

54
peakfinder8/peakfinders.h Normal file

@@ -0,0 +1,54 @@
//
// peakfinders.h
// cheetah
//
// Created by Anton Barty on 23/3/13.
//
//
#ifndef cheetah_peakfinders_h
#define cheetah_peakfinders_h
typedef struct {
public:
long nPeaks;
long nHot;
float peakResolution; // Radius of 80% of peaks
float peakResolutionA; // Radius of 80% of peaks
float peakDensity; // Density of peaks within this 80% figure
float peakNpix; // Number of pixels in peaks
float peakTotal; // Total integrated intensity in peaks
int memoryAllocated;
long nPeaks_max;
float *peak_maxintensity; // Maximum intensity in peak
float *peak_totalintensity; // Integrated intensity in peak
float *peak_sigma; // Standard deviation of the local background around the peak
float *peak_snr; // Signal-to-noise ratio of peak
float *peak_npix; // Number of pixels in peak
float *peak_com_x; // peak center of mass x (in raw layout)
float *peak_com_y; // peak center of mass y (in raw layout)
long *peak_com_index; // closest pixel corresponding to peak
float *peak_com_x_assembled; // peak center of mass x (in assembled layout)
float *peak_com_y_assembled; // peak center of mass y (in assembled layout)
float *peak_com_r_assembled; // peak center of mass r (in assembled layout)
float *peak_com_q; // Scattering vector of this peak
float *peak_com_res; // Resolution of this peak
} tPeakList;
//void cPeakList::cPeakList(void) {
//}
//void cPeakList::~cPeakList(void) {
// free();
//}
void allocatePeakList(tPeakList*, long);
void freePeakList(tPeakList);
#endif

15
setup.py Normal file

@@ -0,0 +1,15 @@
from distutils.core import setup, Extension
from Cython.Build import cythonize
import numpy

peakfinder8_include_dir = 'peakfinder8/'
peakfinder8_library_dir = 'peakfinder8/'

peakfinder8_ext = Extension("peakfinder8_extension",
                            sources=["peakfinder8/cython/peakfinder8_extension.pyx"],
                            include_dirs=[peakfinder8_include_dir, numpy.get_include()],
                            library_dirs=[peakfinder8_library_dir],
                            libraries=["peakfinder8"],
                            language="c++")

setup(name="peakfinder8_extension", ext_modules=cythonize(peakfinder8_ext))