61 Commits

Author SHA1 Message Date
9c164ea19f test(live_processing): change test mode to something more unlikely to be implemented 2025-06-23 18:34:56 +02:00
gac-x02da
096ac3ff43 Added process_average 2025-06-23 18:15:16 +02:00
gac-x02da
8043785885 Test fixes 2025-06-18 18:15:14 +02:00
gac-x05la
9aa4db3b3b Finally works 2025-06-18 18:15:14 +02:00
gac-x05la
06d0b5ae59 Tests pass in parts due to positionerror 2025-06-18 18:15:14 +02:00
gac-x05la
739811959e Task tests passing 2025-06-18 18:15:14 +02:00
gac-x05la
44b93c529e WIP 2025-06-18 18:15:14 +02:00
gac-x05la
9633006fb4 WIP on tests 2025-06-18 18:15:14 +02:00
gac-x02da
2a0b5c2a61 fix: fix std_daq live processing and simple scans 2025-06-18 15:55:24 +02:00
c5799cb37e fix(std_daq): temporarily handle file_path type in update_reference_with_file 2025-06-16 21:00:10 +02:00
8a1ceecc1b fix(std_daq): prevent division by zero in flat correction 2025-06-16 20:59:57 +02:00
gac-x02da
44d02790f4 fix(scan components): separate complete and restore config 2025-06-16 20:29:59 +02:00
gac-x02da
ad6eb97804 fix(std_daq processing): replace internal daq mount path with VM mount 2025-06-16 20:28:45 +02:00
gac-x02da
f722b76306 fix(pco): fix h5 data entry path for pco 2025-06-16 20:27:18 +02:00
gac-x02da
e566bfaf8d fix(pco): fix type conversion for statuscode 2025-06-16 20:26:41 +02:00
gac-x02da
b098109880 perf(std_daq): only reset backend if status is not idle 2025-06-16 20:26:09 +02:00
gac-x05la
ae85d179f5 refactor: std daq integration
Major refactor of the std daq integration for the PCO Edge camera and Gigafrost camera.
New live processing capabilities have been added, and the code has been cleaned up for better maintainability.
2025-06-16 16:59:08 +02:00
gac-x02da
c82e7c3d56 Added config file 2025-06-16 16:59:08 +02:00
gac-x05la
bbb61d71a6 chore: add tomcat as default session 2025-06-16 16:59:08 +02:00
96ca9c32d1 Update copier template source to github 2025-06-11 16:13:43 +02:00
4c342de5c4 chore: upgrade to copier template v1 2025-05-19 11:21:58 +02:00
gac-x05la
f688cfca0f Flaking 2025-04-30 13:31:10 +02:00
gac-x05la
5d5eef11f6 Fixed threading mutex 2025-04-30 13:10:53 +02:00
gac-x05la
b3e0a64bf1 Found out stdDAQ filename problem 2025-04-28 17:37:17 +02:00
gac-x05la
672cb60e72 Works without stdDAQ 2025-04-17 14:21:27 +02:00
gac-x05la
2d23c5e071 Verifying scan compatibility 2025-04-17 13:01:22 +02:00
gac-x05la
9a878db49a Device code to 9.6 2025-04-16 15:47:28 +02:00
gac-x05la
3689ec0677 Merging development 2025-04-16 15:28:54 +02:00
gac-x05la
f3961322e3 WIP 2025-04-16 15:23:25 +02:00
gac-x05la
4437bb13b8 WIP 2025-04-16 15:01:06 +02:00
bea5f95503 Merge branch 'main' into 'feature/revisiting-aerotech'
# Conflicts:
#   tomcat_bec/device_configs/microxas_test_bed.yaml
#   tomcat_bec/devices/aerotech/AerotechDriveDataCollection.py
#   tomcat_bec/devices/gigafrost/gigafrostcamera.py
#   tomcat_bec/devices/gigafrost/pcoedgecamera.py
#   tomcat_bec/scans/__init__.py
#   tomcat_bec/scans/tutorial_fly_scan.py
2025-04-16 14:10:42 +02:00
gac-x05la
56f4a6a61e Cutting back on unittests 2025-04-16 14:03:22 +02:00
gac-x05la
6dd03d24a0 WIP 2025-04-16 12:42:24 +02:00
gac-x05la
38fe391654 WIP 2025-04-16 12:37:55 +02:00
gac-x05la
faaccafec6 WIP 2025-04-16 12:32:27 +02:00
gac-x05la
5f5bf291a1 Upgraed DDC 2025-04-16 12:15:27 +02:00
gac-x05la
84bc0d692f BEC free consumer 2025-03-31 12:24:10 +02:00
gac-x05la
8a2d9c3569 WIP 2025-03-24 13:41:56 +01:00
gac-x05la
5726b4349a I'm up against higher powers 2025-03-21 17:37:49 +01:00
gac-x05la
f64d603cff Zombie thread fix 2025-03-21 17:00:38 +01:00
gac-x05la
eff10e1386 WIP 2025-03-21 16:22:03 +01:00
gac-x05la
bb5c2316e1 WIP 2025-03-21 16:11:10 +01:00
gac-x05la
045f348322 GF seems done, working on PCO 2025-03-21 15:42:40 +01:00
gac-x05la
32b976f9d6 Starting to look good 2025-03-20 17:46:51 +01:00
gac-x05la
49e8c64433 Run control works 2025-03-19 14:37:30 +01:00
gac-x05la
20dcd1849a WIP 2025-03-19 09:52:34 +01:00
gac-x05la
c9a2ce0dc5 Livestream with access mutex 2025-03-18 11:18:06 +01:00
gac-x05la
9051e1a9ee WIP 2025-03-17 18:27:47 +01:00
gac-x05la
6c0405ec7a Fix triggering order 2025-03-17 17:21:39 +01:00
gac-x05la
1c5e4a691a GF camera part seems working 2025-03-17 17:21:21 +01:00
gac-x05la
ab72fa3ffa WIP 2025-03-17 17:21:03 +01:00
gac-x05la
e02bb5892e WIP 2025-03-03 12:20:56 +01:00
gac-x05la
4266798e30 Bump 2025-02-25 14:31:46 +01:00
gac-x05la
f15fd00712 After session with Klaus 2025-02-20 12:42:33 +01:00
gac-x05la
76cf6ac447 New additions from klaus 2025-02-19 16:58:25 +01:00
gac-x05la
0bc3778d3f Before redeployment 2025-02-19 10:18:57 +01:00
gac-x05la
9d104173bd Work on fede scans 2025-02-18 18:50:04 +01:00
gac-x05la
f6fecfdc3f WIP with Fede's scans 2025-02-17 17:46:52 +01:00
gac-x05la
e8b3aedb10 Trigger mode notes 2025-02-14 17:46:27 +01:00
gac-x05la
c3cf12b8e2 SnapNStep works, Sequence fails on Aerotech axis 2025-02-14 17:39:36 +01:00
gac-x05la
d1e072b8d9 Delayed start on Aerotech tasks 2025-02-14 13:48:47 +01:00
53 changed files with 4915 additions and 2678 deletions

.copier-answers.yml Normal file

@@ -0,0 +1,9 @@
+# Do not edit this file!
+# It is needed to track the repo template version, and editing may break things.
+# This file will be overwritten by copier on template updates.
+
+_commit: v1.0.0
+_src_path: https://github.com/bec-project/plugin_copier_template.git
+make_commit: false
+project_name: tomcat_bec
+widget_plugins_input: []

LICENSE

@@ -1,6 +1,7 @@
 BSD 3-Clause License
 
 Copyright (c) 2024, Paul Scherrer Institute
+Copyright (c) 2025, Paul Scherrer Institute
 
 Redistribution and use in source and binary forms, with or without
 modification, are permitted provided that the following conditions are met:
@@ -25,4 +26,4 @@ DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
 SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
 CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
 OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
-OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

bin/.gitignore vendored Normal file

@@ -0,0 +1 @@
+# Add anything you don't want to check in to git, e.g. very large files

pyproject.toml

@@ -5,7 +5,7 @@ build-backend = "hatchling.build"
 [project]
 name = "tomcat_bec"
 version = "0.0.0"
-description = "Custom device implementations based on the ophyd hardware abstraction layer"
+description = "The TOMCAT plugin repository for BEC"
 requires-python = ">=3.10"
 classifiers = [
     "Development Status :: 3 - Alpha",
@@ -24,6 +24,7 @@ dependencies = [
 [project.optional-dependencies]
 dev = [
     "black",
+    "copier",
     "isort",
     "coverage",
     "pylint",
@@ -32,6 +33,7 @@ dev = [
     "ophyd_devices",
     "bec_server",
     "requests-mock",
+    "fakeredis",
 ]
 
 [project.entry-points."bec"]
@@ -46,12 +48,15 @@ plugin_file_writer = "tomcat_bec.file_writer"
 [project.entry-points."bec.scans"]
 plugin_scans = "tomcat_bec.scans"
 
+[project.entry-points."bec.scans.metadata_schema"]
+plugin_metadata_schema = "tomcat_bec.scans.metadata_schema"
+
 [project.entry-points."bec.ipython_client_startup"]
 plugin_ipython_client_pre = "tomcat_bec.bec_ipython_client.startup.pre_startup"
 plugin_ipython_client_post = "tomcat_bec.bec_ipython_client.startup"
 
 [project.entry-points."bec.widgets.auto_updates"]
-plugin_widgets_update = "tomcat_bec.bec_widgets.auto_updates:PlotUpdate"
+plugin_widgets_update = "tomcat_bec.bec_widgets.auto_updates"
 
 [project.entry-points."bec.widgets.user_widgets"]
 plugin_widgets = "tomcat_bec.bec_widgets.widgets"


@@ -1,31 +1,34 @@
 # Getting Started with Testing using pytest
 
-BEC is using the [pytest](https://docs.pytest.org/en/8.0.x/) framework.
-It can be install via
-``` bash
+BEC is using the [pytest](https://docs.pytest.org/en/latest/) framework.
+It can be installed via
+```bash
 pip install pytest
 ```
-in your *python environment*.
+in your _python environment_.
 We note that pytest is part of the optional-dependencies `[dev]` of the plugin package.
 
 ## Introduction
 
 Tests in this package should be stored in the `tests` directory.
 We suggest to sort tests of different submodules, i.e. `scans` or `devices` in the respective folder structure, and to folow a naming convention of `<test_module_name.py>`.
 It is mandatory for test files to begin with `test_` for pytest to discover them.
-To run all tests, navigate to the directory of the plugin from the command line, and run the command 
-``` bash
+To run all tests, navigate to the directory of the plugin from the command line, and run the command
+```bash
 pytest -v --random-order ./tests
 ```
 Note, the python environment needs to be active.
 The additional arg `-v` allows pytest to run in verbose mode which provides more detailed information about the tests being run.
-The argument `--random-order` instructs pytest to run the tests in random order, which is the default in the CI pipelines. 
+The argument `--random-order` instructs pytest to run the tests in random order, which is the default in the CI pipelines.
 
 ## Test examples
 
-Writing tests can be quite specific for the given function. 
+Writing tests can be quite specific for the given function.
 We recommend writing tests as isolated as possible, i.e. try to test single functions instead of full classes.
 A very useful class to enable isolated testing is [MagicMock](https://docs.python.org/3/library/unittest.mock.html).
 In addition, we also recommend to take a look at the [How-to guides from pytest](https://docs.pytest.org/en/8.0.x/how-to/index.html).
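The MagicMock recommendation can be illustrated with a minimal, self-contained sketch; the `read_temperature` function and its device are hypothetical, not part of this repository:

```python
from unittest import mock

def read_temperature(device):
    """Toy function under test: scales a raw device reading (hypothetical)."""
    return device.read() / 10.0

def test_read_temperature():
    # MagicMock stands in for the hardware, so the test runs without a device
    device = mock.MagicMock()
    device.read.return_value = 215
    assert read_temperature(device) == 21.5
    device.read.assert_called_once()

test_read_temperature()
```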

@@ -0,0 +1,34 @@
+# Getting Started with Testing using pytest
+
+BEC is using the [pytest](https://docs.pytest.org/en/latest/) framework.
+It can be installed via
+```bash
+pip install pytest
+```
+in your _python environment_.
+We note that pytest is part of the optional-dependencies `[dev]` of the plugin package.
+
+## Introduction
+
+Tests in this package should be stored in the `tests` directory.
+We suggest to sort tests of different submodules, i.e. `scans` or `devices` in the respective folder structure, and to folow a naming convention of `<test_module_name.py>`.
+It is mandatory for test files to begin with `test_` for pytest to discover them.
+To run all tests, navigate to the directory of the plugin from the command line, and run the command
+```bash
+pytest -v --random-order ./tests
+```
+Note, the python environment needs to be active.
+The additional arg `-v` allows pytest to run in verbose mode which provides more detailed information about the tests being run.
+The argument `--random-order` instructs pytest to run the tests in random order, which is the default in the CI pipelines.
+
+## Test examples
+
+Writing tests can be quite specific for the given function.
+We recommend writing tests as isolated as possible, i.e. try to test single functions instead of full classes.
+A very useful class to enable isolated testing is [MagicMock](https://docs.python.org/3/library/unittest.mock.html).
+In addition, we also recommend to take a look at the [How-to guides from pytest](https://docs.pytest.org/en/8.0.x/how-to/index.html).



@@ -0,0 +1,337 @@
+from unittest import mock

import pytest

from tomcat_bec.devices.gigafrost.gigafrost_base import GigaFrostBase
from tomcat_bec.devices.gigafrost.gigafrostcamera import GigaFrostCamera, default_config
from tomcat_bec.devices.std_daq.std_daq_client import StdDaqClient
from tomcat_bec.devices.std_daq.std_daq_preview import StdDaqPreview


@pytest.fixture()
def gfcam_base():
    gfcam = GigaFrostCamera(
        "X02DA-CAM-GF2:",
        name="gfcam",
        std_daq_rest="http://example.com/rest",
        std_daq_ws="ws://example.com/ws",
    )
    for component in gfcam.component_names:
        type.__setattr__(GigaFrostCamera, component, mock.MagicMock())
    yield gfcam


def test_gfcam_init_raises_without_rest_ws():
    with pytest.raises(ValueError) as excinfo:
        GigaFrostCamera("X02DA-CAM-GF2:", name="gfcam")
    excinfo.match("std_daq_rest and std_daq_ws must be provided")


def test_gfcam_init():
    gfcam = GigaFrostCamera(
        "X02DA-CAM-GF2:",
        name="gfcam",
        std_daq_rest="http://example.com/rest",
        std_daq_ws="ws://example.com/ws",
    )
    assert gfcam.name == "gfcam"
    assert isinstance(gfcam.backend, StdDaqClient)
    assert gfcam.live_preview is None


def test_gfcam_init_with_live_preview():
    gfcam = GigaFrostCamera(
        "X02DA-CAM-GF2:",
        name="gfcam",
        std_daq_rest="http://example.com/rest",
        std_daq_ws="ws://example.com/ws",
        std_daq_live="http://example.com/live_preview",
    )
    assert gfcam.live_preview is not None
    assert isinstance(gfcam.live_preview, StdDaqPreview)


def test_gfcam_configure(gfcam_base):
    with mock.patch.object(gfcam_base, "stop_camera") as stop_camera:
        with mock.patch.object(gfcam_base.backend, "set_config") as set_config:
            with mock.patch.object(GigaFrostBase, "configure") as base_configure:
                gfcam_base.configure({})
                stop_camera.assert_called_once()
                stop_camera().wait.assert_called_once()
                set_config.assert_not_called()
                config = default_config()
                base_configure.assert_called_once_with(config)


def test_gfcam_default_config_copies():
    assert isinstance(default_config(), dict)
    assert id(default_config()) != id(default_config())


def test_gfcam_configure_sets_exp_time_in_ms(gfcam_base):
    with mock.patch.object(gfcam_base, "stop_camera") as stop_camera:
        with mock.patch.object(gfcam_base.backend, "set_config") as set_config:
            with mock.patch.object(GigaFrostBase, "configure") as base_configure:
                gfcam_base.configure({"exp_time": 0.1})
                stop_camera.assert_called_once()
                stop_camera().wait.assert_called_once()
                set_config.assert_not_called()
                config = default_config()
                config.update({"exposure": 100})  # in ms
                base_configure.assert_called_once_with(config)


def test_gfcam_set_acquisition_mode_invalid(gfcam_base):
    """Test setting invalid acquisition mode"""
    with pytest.raises(RuntimeError) as excinfo:
        gfcam_base.set_acquisition_mode("invalid_mode")
    excinfo.match("Unsupported acquisition mode: invalid_mode")


@pytest.mark.parametrize(
    "mode_soft, mode_external, mode_always, expected_result",
    [
        (0, 0, 0, None),  # No enable mode set
        (1, 0, 0, "soft"),  # Only soft mode enabled
        (0, 1, 0, "external"),  # Only external mode enabled
        (1, 1, 0, "soft+ext"),  # Both soft and external enabled
        (0, 0, 1, "always"),  # Always mode enabled
        (1, 0, 1, "always"),  # Always overrides soft
        (0, 1, 1, "always"),  # Always overrides external
        (1, 1, 1, "always"),  # Always overrides both soft and external
    ],
)
def test_gfcam_enable_mode_property(
    gfcam_base, mode_soft, mode_external, mode_always, expected_result
):
    """Test that the enable_mode property returns the correct mode based on signal values"""
    # Configure the mock return values for the mode signals
    gfcam_base.mode_endbl_soft.get.return_value = mode_soft
    gfcam_base.mode_enbl_ext.get.return_value = mode_external
    gfcam_base.mode_enbl_auto.get.return_value = mode_always
    # Check that the property returns the expected result
    assert gfcam_base.enable_mode == expected_result


@pytest.mark.parametrize(
    "mode,expected_settings",
    [
        ("soft", {"mode_enbl_ext": 0, "mode_endbl_soft": 1, "mode_enbl_auto": 0}),
        ("external", {"mode_enbl_ext": 1, "mode_endbl_soft": 0, "mode_enbl_auto": 0}),
        ("soft+ext", {"mode_enbl_ext": 1, "mode_endbl_soft": 1, "mode_enbl_auto": 0}),
        ("always", {"mode_enbl_ext": 0, "mode_endbl_soft": 0, "mode_enbl_auto": 1}),
    ],
)
def test_gfcam_enable_mode_setter(gfcam_base, mode, expected_settings):
    """Test setting the enable mode of the GigaFRoST camera"""
    # Mock the const.gf_valid_enable_modes to avoid importing the constants
    with mock.patch(
        "tomcat_bec.devices.gigafrost.gigafrostcamera.const.gf_valid_enable_modes",
        ["soft", "external", "soft+ext", "always"],
    ):
        # Set the enable mode
        gfcam_base.enable_mode = mode
        # Verify the correct signals were set
        gfcam_base.mode_enbl_ext.set.assert_called_once_with(expected_settings["mode_enbl_ext"])
        gfcam_base.mode_endbl_soft.set.assert_called_once_with(expected_settings["mode_endbl_soft"])
        gfcam_base.mode_enbl_auto.set.assert_called_once_with(expected_settings["mode_enbl_auto"])
        # Verify wait was called on each set operation
        gfcam_base.mode_enbl_ext.set().wait.assert_called_once()
        gfcam_base.mode_endbl_soft.set().wait.assert_called_once()
        gfcam_base.mode_enbl_auto.set().wait.assert_called_once()
        # Verify parameters were committed
        gfcam_base.set_param.set.assert_called_once_with(1)
        gfcam_base.set_param.set().wait.assert_called_once()


def test_gfcam_enable_mode_setter_invalid(gfcam_base):
    """Test setting an invalid enable mode raises an error"""
    # Mock the const.gf_valid_enable_modes to avoid importing the constants
    with mock.patch(
        "tomcat_bec.devices.gigafrost.gigafrostcamera.const.gf_valid_enable_modes",
        ["soft", "external", "soft+ext", "always"],
    ):
        with pytest.raises(ValueError) as excinfo:
            gfcam_base.enable_mode = "invalid_mode"
        assert "Invalid enable mode invalid_mode!" in str(excinfo.value)
        assert "Valid modes are:" in str(excinfo.value)
        # Verify no signals were set
        gfcam_base.mode_enbl_ext.set.assert_not_called()
        gfcam_base.mode_endbl_soft.set.assert_not_called()
        gfcam_base.mode_enbl_auto.set.assert_not_called()
        gfcam_base.set_param.set.assert_not_called()


@pytest.mark.parametrize(
    "mode_auto, mode_soft, mode_timer, mode_external, expected_result",
    [
        (0, 0, 0, 0, None),  # No trigger mode set
        (1, 0, 0, 0, "auto"),  # Only auto mode enabled
        (0, 1, 0, 0, "soft"),  # Only soft mode enabled
        (0, 0, 1, 0, "timer"),  # Only timer mode enabled
        (0, 0, 0, 1, "external"),  # Only external mode enabled
        (1, 1, 0, 0, "auto"),  # Auto takes precedence over soft
        (1, 0, 1, 0, "auto"),  # Auto takes precedence over timer
        (1, 0, 0, 1, "auto"),  # Auto takes precedence over external
        (0, 1, 1, 0, "soft"),  # Soft takes precedence over timer
        (0, 1, 0, 1, "soft"),  # Soft takes precedence over external
        (0, 0, 1, 1, "timer"),  # Timer takes precedence over external
        (1, 1, 1, 1, "auto"),  # Auto takes precedence over all
    ],
)
def test_gfcam_trigger_mode_property(
    gfcam_base, mode_auto, mode_soft, mode_timer, mode_external, expected_result
):
    """Test that the trigger_mode property returns the correct mode based on signal values"""
    # Configure the mock return values for the mode signals
    gfcam_base.mode_trig_auto.get.return_value = mode_auto
    gfcam_base.mode_trig_soft.get.return_value = mode_soft
    gfcam_base.mode_trig_timer.get.return_value = mode_timer
    gfcam_base.mode_trig_ext.get.return_value = mode_external
    # Check that the property returns the expected result
    assert gfcam_base.trigger_mode == expected_result


@pytest.mark.parametrize(
    "mode,expected_settings",
    [
        (
            "auto",
            {"mode_trig_auto": 1, "mode_trig_soft": 0, "mode_trig_timer": 0, "mode_trig_ext": 0},
        ),
        (
            "soft",
            {"mode_trig_auto": 0, "mode_trig_soft": 1, "mode_trig_timer": 0, "mode_trig_ext": 0},
        ),
        (
            "timer",
            {"mode_trig_auto": 0, "mode_trig_soft": 0, "mode_trig_timer": 1, "mode_trig_ext": 0},
        ),
        (
            "external",
            {"mode_trig_auto": 0, "mode_trig_soft": 0, "mode_trig_timer": 0, "mode_trig_ext": 1},
        ),
    ],
)
def test_gfcam_trigger_mode_setter(gfcam_base, mode, expected_settings):
    """Test setting the trigger mode of the GigaFRoST camera"""
    # Set the trigger mode
    gfcam_base.trigger_mode = mode
    # Verify the correct signals were set
    gfcam_base.mode_trig_auto.set.assert_called_with(expected_settings["mode_trig_auto"])
    gfcam_base.mode_trig_soft.set.assert_called_with(expected_settings["mode_trig_soft"])
    gfcam_base.mode_trig_timer.set.assert_called_with(expected_settings["mode_trig_timer"])
    gfcam_base.mode_trig_ext.set.assert_called_with(expected_settings["mode_trig_ext"])
    # Verify wait was called on each set operation
    gfcam_base.mode_trig_auto.set().wait.assert_called_once()
    gfcam_base.mode_trig_soft.set().wait.assert_called_once()
    gfcam_base.mode_trig_timer.set().wait.assert_called_once()
    gfcam_base.mode_trig_ext.set().wait.assert_called_once()
    # Verify parameters were committed
    gfcam_base.set_param.set.assert_called_once_with(1)
    gfcam_base.set_param.set().wait.assert_called_once()


def test_gfcam_trigger_mode_setter_invalid(gfcam_base):
    """Test setting an invalid trigger mode raises an error"""
    with pytest.raises(ValueError) as excinfo:
        gfcam_base.trigger_mode = "invalid_mode"
    assert "Invalid trigger mode!" in str(excinfo.value)
    assert "Valid modes are: ['auto', 'external', 'timer', 'soft']" in str(excinfo.value)
    # Verify no signals were set
    gfcam_base.mode_trig_auto.set.assert_not_called()
    gfcam_base.mode_trig_soft.set.assert_not_called()
    gfcam_base.mode_trig_timer.set.assert_not_called()
    gfcam_base.mode_trig_ext.set.assert_not_called()
    gfcam_base.set_param.set.assert_not_called()


@pytest.mark.parametrize(
    "start_bit, end_bit, expected_result",
    [
        (0, 0, "off"),  # Both bits off
        (1, 0, "start"),  # Only start bit on
        (0, 1, "end"),  # Only end bit on
        (1, 1, "start+end"),  # Both bits on
    ],
)
def test_gfcam_fix_nframes_mode_property(gfcam_base, start_bit, end_bit, expected_result):
    """Test that the fix_nframes_mode property returns the correct mode based on bit values"""
    # Configure the mock return values for the bits
    gfcam_base.cnt_startbit.get.return_value = start_bit
    # Note: The original code has a bug here - it calls cnt_startbit.get() twice instead of cnt_endbit.get()
    # For testing purposes, we'll mock both appropriately
    gfcam_base.cnt_endbit.get.return_value = end_bit
    # Check that the property returns the expected result
    assert gfcam_base.fix_nframes_mode == expected_result


@pytest.mark.parametrize(
    "mode, expected_settings",
    [
        ("off", {"cnt_startbit": 0, "cnt_endbit": 0}),
        ("start", {"cnt_startbit": 1, "cnt_endbit": 0}),
        ("end", {"cnt_startbit": 0, "cnt_endbit": 1}),
        ("start+end", {"cnt_startbit": 1, "cnt_endbit": 1}),
    ],
)
def test_gfcam_fix_nframes_mode_setter(gfcam_base, mode, expected_settings):
    """Test setting the fixed number of frames mode of the GigaFRoST camera"""
    # Mock the const.gf_valid_fix_nframe_modes to avoid importing the constants
    with mock.patch(
        "tomcat_bec.devices.gigafrost.gigafrostcamera.const.gf_valid_fix_nframe_modes",
        ["off", "start", "end", "start+end"],
    ):
        # Set the mode
        gfcam_base.fix_nframes_mode = mode
        # Verify the class attribute was set
        assert gfcam_base._fix_nframes_mode == mode
        # Verify the correct signals were set
        gfcam_base.cnt_startbit.set.assert_called_once_with(expected_settings["cnt_startbit"])
        gfcam_base.cnt_endbit.set.assert_called_once_with(expected_settings["cnt_endbit"])
        # Verify wait was called on each set operation
        gfcam_base.cnt_startbit.set().wait.assert_called_once()
        gfcam_base.cnt_endbit.set().wait.assert_called_once()
        # Verify parameters were committed
        gfcam_base.set_param.set.assert_called_once_with(1)
        gfcam_base.set_param.set().wait.assert_called_once()


def test_gfcam_fix_nframes_mode_setter_invalid(gfcam_base):
    """Test setting an invalid fixed number of frames mode raises an error"""
    # Mock the const.gf_valid_fix_nframe_modes to avoid importing the constants
    with mock.patch(
        "tomcat_bec.devices.gigafrost.gigafrostcamera.const.gf_valid_fix_nframe_modes",
        ["off", "start", "end", "start+end"],
    ):
        with pytest.raises(ValueError) as excinfo:
            gfcam_base.fix_nframes_mode = "invalid_mode"
        assert "Invalid fixed frame number mode!" in str(excinfo.value)
        assert "Valid modes are:" in str(excinfo.value)
        # Verify no signals were set
        gfcam_base.cnt_startbit.set.assert_not_called()
        gfcam_base.cnt_endbit.set.assert_not_called()
        gfcam_base.set_param.set.assert_not_called()
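The fixture at the top of this test module swaps every ophyd component of the camera class for a `MagicMock` via `type.__setattr__`, so all signal access is recordable without EPICS. A stripped-down sketch of that pattern, with a dummy class standing in for the real camera (the class and component names here are illustrative, not the repository's):

```python
from unittest import mock

class DummyCamera:
    """Stand-in for an ophyd device; real components would talk to EPICS."""
    component_names = ("mode_enbl_ext", "mode_enbl_auto")
    mode_enbl_ext = None
    mode_enbl_auto = None

cam = DummyCamera()
# Replace each class-level component with a MagicMock, mirroring the fixture.
# Patching at the class level replaces the attribute for every access path,
# which matters when components are descriptors defined on the class.
for component in cam.component_names:
    type.__setattr__(DummyCamera, component, mock.MagicMock())

# The mocks now record calls and return configurable values
cam.mode_enbl_ext.get.return_value = 1
assert cam.mode_enbl_ext.get() == 1
cam.mode_enbl_auto.set(0)
cam.mode_enbl_auto.set.assert_called_once_with(0)
```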


@@ -0,0 +1,25 @@
+from unittest import mock

import pytest

from tomcat_bec.devices.pco_edge.pcoedgecamera import PcoEdge5M


@pytest.fixture()
def pcocam_base():
    gfcam = PcoEdge5M(
        "X02DA-CAM-GF2:",
        name="pco_edge_camera",
        std_daq_rest="http://example.com/rest",
        std_daq_ws="ws://example.com/ws",
    )
    for component in gfcam.component_names:
        type.__setattr__(PcoEdge5M, component, mock.MagicMock())
    yield gfcam


def test_pcocam_init_raises_without_rest_ws():
    with pytest.raises(ValueError) as excinfo:
        PcoEdge5M("X02DA-CAM-GF2:", name="pco_edge_camera")
    excinfo.match("std_daq_rest and std_daq_ws must be provided")


@@ -0,0 +1,172 @@
from unittest import mock
import fakeredis
import h5py
import numpy as np
import pytest
from bec_lib.redis_connector import RedisConnector
from ophyd_devices.interfaces.base_classes.psi_device_base import PSIDeviceBase
from typeguard import TypeCheckError
from tomcat_bec.devices.std_daq.std_daq_live_processing import StdDaqLiveProcessing
def fake_redis_server(host, port):
redis = fakeredis.FakeRedis()
return redis
@pytest.fixture
def connected_connector():
connector = RedisConnector("localhost:1", redis_cls=fake_redis_server) # type: ignore
connector._redis_conn.flushall()
try:
yield connector
finally:
connector.shutdown()
class MockPSIDeviceBase(PSIDeviceBase):
def __init__(self, *args, device_manager=None, **kwargs):
super().__init__(*args, **kwargs)
self.device_manager = device_manager
@pytest.fixture
def mock_device(connected_connector):
device_manager = mock.Mock()
device_manager.connector = connected_connector
device = MockPSIDeviceBase(name="mock_device", device_manager=device_manager)
yield device
@pytest.fixture
def std_daq_live_processing(mock_device):
signal = mock.Mock()
signal2 = mock.Mock()
live_processing = StdDaqLiveProcessing(mock_device, signal, signal2)
yield live_processing
def test_std_daq_live_processing_set_mode(std_daq_live_processing):
std_daq_live_processing.set_mode("sum")
assert std_daq_live_processing.get_mode() == "sum"
with pytest.raises(TypeCheckError):
std_daq_live_processing.set_mode("mode_that_does_not_exist")
with pytest.raises(TypeCheckError):
std_daq_live_processing.set_mode(123)
@pytest.fixture(params=["flat", "dark"])
def reference_type(request):
return request.param
def test_std_daq_live_processing_flat_default(std_daq_live_processing, reference_type):
with mock.patch.object(
std_daq_live_processing, "_get_from_redis", return_value=None
) as mock_get_from_redis:
get_method = (
std_daq_live_processing.get_flat
if reference_type == "flat"
else std_daq_live_processing.get_dark
)
out = get_method((100, 100))
mock_get_from_redis.assert_called_once_with(
std_daq_live_processing._redis_endpoint_name(ref_type=reference_type, shape=(100, 100))
)
assert isinstance(out, np.ndarray)
assert out.shape == (100, 100)
if reference_type == "flat":
assert np.all(out == 1), "Default should be all ones"
else:
assert np.all(out == 0), "Default should be all zeros"
@pytest.mark.parametrize("value", [np.random.rand(100, 100), np.random.rand(3, 100, 100)])
def test_std_daq_live_processing_fetch(tmp_path, std_daq_live_processing, value, reference_type):
with h5py.File(tmp_path / "test_data.h5", "w") as f:
f.create_dataset("tomcat-pco/data", data=value)
status = std_daq_live_processing.update_reference_with_file(
reference_type, tmp_path / "test_data.h5", "tomcat-pco/data"
)
status.wait()
get_method = (
std_daq_live_processing.get_flat
if reference_type == "flat"
else std_daq_live_processing.get_dark
)
out = get_method((100, 100))
assert isinstance(out, np.ndarray)
assert out.shape == (100, 100)
# Check that the data is cached locally
assert np.array_equal(
std_daq_live_processing.references[f"{reference_type}_(100, 100)"], out
), "Cached flat data should match fetched data"
redis_data = std_daq_live_processing._get_from_redis(
std_daq_live_processing._redis_endpoint_name(ref_type=reference_type, shape=(100, 100))
)
assert isinstance(redis_data, np.ndarray)
assert redis_data.shape == (100, 100)
assert np.array_equal(redis_data, out), "Redis data should match the locally cached data"
def test_std_daq_live_processing_apply_flat_dark_correction(std_daq_live_processing):
# Create a mock image
image = np.random.rand(100, 100)
# Set flat and dark references
std_daq_live_processing.references["flat_(100, 100)"] = np.ones((100, 100))
std_daq_live_processing.references["dark_(100, 100)"] = np.zeros((100, 100))
# Apply flat and dark correction
corrected_image = std_daq_live_processing.apply_flat_dark_correction(image)
assert isinstance(corrected_image, np.ndarray)
assert corrected_image.shape == (100, 100)
assert np.all(corrected_image >= 0), "Corrected image should not have negative values"
def test_std_daq_live_processing_apply_flat_dark_correction_with_dark(std_daq_live_processing):
# Create a mock image
image = np.random.rand(100, 100) * 1000 # Scale to simulate a realistic image
dark = np.random.rand(100, 100) * 100 # Simulate a dark reference
image += dark # Add dark to the image to simulate a realistic scenario
# Set flat and dark references
std_daq_live_processing.references["flat_(100, 100)"] = np.ones((100, 100))
std_daq_live_processing.references["dark_(100, 100)"] = dark
# Apply flat and dark correction
corrected_image = std_daq_live_processing.apply_flat_dark_correction(image)
assert isinstance(corrected_image, np.ndarray)
assert corrected_image.shape == (100, 100)
assert np.all(corrected_image >= 0), "Corrected image should not have negative values"
def test_std_daq_live_processing_apply_flat_correction_zero_division(std_daq_live_processing):
# Create a mock image
image = np.random.rand(100, 100) * 1000 + 10 # Scale to simulate a realistic image
# Set flat and dark references to identical values so that flat - dark is zero
flat = np.ones((100, 100)) * 2
std_daq_live_processing.references["flat_(100, 100)"] = flat
dark = np.ones((100, 100)) * 2
std_daq_live_processing.references["dark_(100, 100)"] = dark
# Apply flat correction
corrected_image = std_daq_live_processing.apply_flat_dark_correction(image)
assert isinstance(corrected_image, np.ndarray)
assert corrected_image.shape == (100, 100)
assert np.all(corrected_image >= 0), "Corrected image should not have negative values"
assert np.all(np.isfinite(corrected_image)), "Corrected image should not have infinite values"
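The corrections exercised by these tests follow the standard normalization `(image - dark) / (flat - dark)`. A minimal sketch of such a correction; the epsilon guard against zero division is our assumption of how the fix in commit 8a1ceecc1b might look, not the actual implementation:

```python
import numpy as np

def flat_dark_correct(image, flat, dark, eps=1e-9):
    # Standard flat/dark-field normalization; clamp the denominator so the
    # division stays finite when flat == dark (the zero-division test case).
    denom = flat - dark
    denom = np.where(np.abs(denom) < eps, eps, denom)
    out = (image - dark) / denom
    # Clip negatives so the corrected image stays non-negative
    return np.clip(out, 0, None)

img = np.full((4, 4), 10.0)
flat = np.full((4, 4), 2.0)
dark = np.zeros((4, 4))
corrected = flat_dark_correct(img, flat, dark)  # (10 - 0) / 2 == 5 everywhere
```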

View File

@@ -8,7 +8,7 @@ import typeguard
from ophyd import StatusBase
from websockets import WebSocketException
from tomcat_bec.devices.gigafrost.std_daq_client import (
from tomcat_bec.devices.std_daq.std_daq_client import (
StdDaqClient,
StdDaqConfig,
StdDaqError,
@@ -60,11 +60,14 @@ def test_stddaq_client(client):
def test_stddaq_client_get_daq_config(client, full_config):
with requests_mock.Mocker() as m:
response = full_config
m.get("http://localhost:5000/api/config/get?user=ioc", json=response.model_dump())
m.get(
"http://localhost:5000/api/config/get?user=ioc",
json=response.model_dump(exclude_defaults=True),
)
out = client.get_config()
# Check that the returned config matches the JSON response
assert out == response.model_dump()
assert out == response.model_dump(exclude_defaults=True)
assert client._config == response
@@ -80,7 +83,7 @@ def test_stddaq_client_set_config_pydantic(client, full_config):
client.set_config(config)
# Verify the last request
assert m.last_request.json() == full_config.model_dump()
assert m.last_request.json() == full_config.model_dump(exclude_defaults=True)
def test_std_daq_client_set_config_dict(client, full_config):
@@ -95,7 +98,7 @@ def test_std_daq_client_set_config_dict(client, full_config):
config_dict = full_config.model_dump()
with mock.patch.object(client, "_pre_restart"), mock.patch.object(client, "_post_restart"):
client.set_config(config_dict)
assert m.last_request.json() == full_config.model_dump()
assert m.last_request.json() == full_config.model_dump(exclude_defaults=True)
def test_stddaq_client_set_config_ignores_extra_keys(client, full_config):
@@ -111,7 +114,7 @@ def test_stddaq_client_set_config_ignores_extra_keys(client, full_config):
config_dict["extra_key"] = "extra_value"
with mock.patch.object(client, "_pre_restart"), mock.patch.object(client, "_post_restart"):
client.set_config(config_dict)
assert m.last_request.json() == full_config.model_dump()
assert m.last_request.json() == full_config.model_dump(exclude_defaults=True)
def test_stddaq_client_set_config_error(client, full_config):
@@ -146,7 +149,7 @@ def test_stddaq_client_status(client):
def test_stddaq_client_start(client):
with mock.patch("tomcat_bec.devices.gigafrost.std_daq_client.StatusBase") as StatusBase:
with mock.patch("tomcat_bec.devices.std_daq.std_daq_client.StatusBase") as StatusBase:
client.start(file_path="test_file_path", file_prefix="test_file_prefix", num_images=10)
out = client._send_queue.get()
assert out == {
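The recurring change in this diff, from `model_dump()` to `model_dump(exclude_defaults=True)`, means the serialized config only contains fields whose values differ from their declared defaults. The effect can be sketched without pydantic (helper name and fields hypothetical):

```python
def dump_excluding_defaults(values: dict, defaults: dict) -> dict:
    # Mimics pydantic's model_dump(exclude_defaults=True): keep only the
    # fields whose value differs from the declared default.
    return {k: v for k, v in values.items() if defaults.get(k) != v}

config = {"rest_url": "http://localhost:5000", "timeout": 10, "retries": 3}
defaults = {"timeout": 10, "retries": 3}
print(dump_excluding_defaults(config, defaults))  # {'rest_url': 'http://localhost:5000'}
```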

View File

@@ -1,31 +1,34 @@
# Getting Started with Testing using pytest
BEC is using the [pytest](https://docs.pytest.org/en/8.0.x/) framework.
It can be install via
``` bash
BEC is using the [pytest](https://docs.pytest.org/en/latest/) framework.
It can be installed via
```bash
pip install pytest
```
in your *python environment*.
in your _python environment_.
We note that pytest is part of the optional-dependencies `[dev]` of the plugin package.
## Introduction
Tests in this package should be stored in the `tests` directory.
We suggest sorting tests for different submodules, e.g. `scans` or `devices`, into the corresponding folder structure, and following a naming convention of `test_<module_name>.py`.
It is mandatory for test files to begin with `test_` for pytest to discover them.
To run all tests, navigate to the directory of the plugin from the command line, and run the command
To run all tests, navigate to the directory of the plugin from the command line, and run the command
``` bash
```bash
pytest -v --random-order ./tests
```
Note that the Python environment needs to be active.
The additional argument `-v` runs pytest in verbose mode, which provides more detailed information about the tests being run.
The argument `--random-order` instructs pytest to run the tests in random order, which is the default in the CI pipelines.
The argument `--random-order` instructs pytest to run the tests in random order, which is the default in the CI pipelines.
## Test examples
Writing tests can be quite specific for the given function.
Writing tests can be quite specific for the given function.
We recommend writing tests that are as isolated as possible, i.e. testing single functions rather than full classes.
A very useful class to enable isolated testing is [MagicMock](https://docs.python.org/3/library/unittest.mock.html).
In addition, we also recommend taking a look at the [How-to guides from pytest](https://docs.pytest.org/en/8.0.x/how-to/index.html).
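As an illustration of the isolated-testing advice above, a minimal sketch using `MagicMock` to test a single method without constructing a full device (all names hypothetical):

```python
from unittest.mock import MagicMock

def get_exposure_ms(camera) -> float:
    # Function under test: converts the camera's exposure signal to ms
    return camera.exposure.get() * 1000

def test_get_exposure_ms():
    # MagicMock stands in for the full camera object, so only the
    # conversion logic is exercised
    camera = MagicMock()
    camera.exposure.get.return_value = 0.25
    assert get_exposure_ms(camera) == 250.0
    camera.exposure.get.assert_called_once()

test_get_exposure_ms()
```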

View File

@@ -10,7 +10,7 @@ While command-line arguments have to be set in the pre-startup script, the
post-startup script can be used to load beamline specific information and
to setup the prompts.
from bec_lib import bec_logger
from bec_lib.logger import bec_logger
logger = bec_logger.logger
@@ -34,3 +34,6 @@ to setup the prompts.
"""
# pylint: disable=invalid-name, unused-import, import-error, undefined-variable, unused-variable, unused-argument, no-name-in-module
bec._ip.prompts.session_name = "TOMCAT"
bec._ip.prompts.status = 1

View File

@@ -1,6 +1,6 @@
"""
Pre-startup script for BEC client. This script is executed before the BEC client
is started. It can be used to add additional command line arguments.
is started. It can be used to add additional command line arguments.
"""
from bec_lib.service_config import ServiceConfig

View File

@@ -0,0 +1,208 @@
gfcam:
description: GigaFrost camera client
deviceClass: tomcat_bec.devices.GigaFrostCamera
deviceConfig:
prefix: "X02DA-CAM-GF2:"
backend_url: "http://sls-daq-001:8080"
auto_soft_enable: true
std_daq_live: "tcp://129.129.95.111:20000"
std_daq_ws: "ws://129.129.95.111:8080"
std_daq_rest: "http://129.129.95.111:5000"
deviceTags:
- camera
- trigger
- gfcam
enabled: true
onFailure: buffer
readOnly: false
readoutPriority: monitored
softwareTrigger: true
fe_sldi_centerx_mm:
description: FE slit horizontal center position in mm
deviceClass: ophyd.EpicsMotor
deviceConfig:
prefix: X02DA-FE-SLDI:CENTERX
deviceTags:
- ??
enabled: true
onFailure: buffer
readOnly: false
readoutPriority: baseline
softwareTrigger: false
fe_sldi_centerx_mrad:
description: FE slit horizontal center position in mrad
deviceClass: ophyd.EpicsMotor
deviceConfig:
prefix: X02DA-FE-SLDI:CENTERX_MRAD
deviceTags:
- ??
enabled: true
onFailure: buffer
readOnly: false
readoutPriority: baseline
softwareTrigger: false
fe_sldi_centery_mm:
description: FE slit vertical center position in mm
deviceClass: ophyd.EpicsMotor
deviceConfig:
prefix: X02DA-FE-SLDI:CENTERY
deviceTags:
- ??
enabled: true
onFailure: buffer
readOnly: false
readoutPriority: baseline
softwareTrigger: false
fe_sldi_centery_mrad:
description: FE slit vertical center position in mrad
deviceClass: ophyd.EpicsMotor
deviceConfig:
prefix: X02DA-FE-SLDI:CENTERY_MRAD
deviceTags:
- ??
enabled: true
onFailure: buffer
readOnly: false
readoutPriority: baseline
softwareTrigger: false
fe_sldi_sizex_mm:
description: FE slit horizontal size in mm
deviceClass: ophyd.EpicsMotor
deviceConfig:
prefix: X02DA-FE-SLDI:SIZEX
deviceTags:
- ??
enabled: true
onFailure: buffer
readOnly: false
readoutPriority: baseline
softwareTrigger: false
fe_sldi_sizex_mrad:
description: FE slit horizontal size in mrad
deviceClass: ophyd.EpicsMotor
deviceConfig:
prefix: X02DA-FE-SLDI:SIZEX_MRAD
deviceTags:
- ??
enabled: true
onFailure: buffer
readOnly: false
readoutPriority: baseline
softwareTrigger: false
fe_sldi_sizey_mm:
description: FE slit vertical size in mm
deviceClass: ophyd.EpicsMotor
deviceConfig:
prefix: X02DA-FE-SLDI:SIZEY
deviceTags:
- ??
enabled: true
onFailure: buffer
readOnly: false
readoutPriority: baseline
softwareTrigger: false
fe_sldi_sizey_mrad:
description: FE slit vertical size in mrad
deviceClass: ophyd.EpicsMotor
deviceConfig:
prefix: X02DA-FE-SLDI:SIZEY_MRAD
deviceTags:
- ??
enabled: true
onFailure: buffer
readOnly: false
readoutPriority: baseline
softwareTrigger: false
fe_sldi_trxr:
description: Ring FE slit X motion
deviceClass: ophyd.EpicsMotor
deviceConfig:
prefix: X02DA-FE-SLDI:TRXR
deviceTags:
- ??
enabled: true
onFailure: buffer
readOnly: false
readoutPriority: baseline
softwareTrigger: false
fe_sldi_trxw:
description: Wall FE slit X motion
deviceClass: ophyd.EpicsMotor
deviceConfig:
prefix: X02DA-FE-SLDI:TRXW
deviceTags:
- ??
enabled: true
onFailure: buffer
readOnly: false
readoutPriority: baseline
softwareTrigger: false
fe_sldi_tryb:
description: Bottom FE slit Y motion
deviceClass: ophyd.EpicsMotor
deviceConfig:
prefix: X02DA-FE-SLDI:TRYB
deviceTags:
- ??
enabled: true
onFailure: buffer
readOnly: false
readoutPriority: baseline
softwareTrigger: false
fe_sldi_tryt:
description: Top FE slit Y motion
deviceClass: ophyd.EpicsMotor
deviceConfig:
prefix: X02DA-FE-SLDI:TRYT
deviceTags:
- ??
enabled: true
onFailure: buffer
readOnly: false
readoutPriority: baseline
softwareTrigger: false
op_fi1:
description: Optics Filter 1
deviceClass: ophyd.EpicsMotor
deviceConfig:
prefix: X02DA-OP-FI1:TRY
deviceTags:
- ??
enabled: true
onFailure: buffer
readOnly: false
readoutPriority: baseline
softwareTrigger: false
op_fi2:
description: Optics Filter 2
deviceClass: ophyd.EpicsMotor
deviceConfig:
prefix: X02DA-OP-FI2:TRY
deviceTags:
- ??
enabled: true
onFailure: buffer
readOnly: false
readoutPriority: baseline
softwareTrigger: false
# pcocam:
# description: PCO.edge camera client
# deviceClass: tomcat_bec.devices.PcoEdge5M
# deviceConfig:
# prefix: 'X02DA-CCDCAM2:'
# std_daq_live: 'tcp://129.129.95.111:20010'
# std_daq_ws: 'ws://129.129.95.111:8081'
# std_daq_rest: 'http://129.129.95.111:5010'
# deviceTags:
# - camera
# - trigger
# - pcocam
# enabled: true
# onFailure: buffer
# readOnly: false
# readoutPriority: monitored
# softwareTrigger: true

View File

@@ -40,7 +40,7 @@ femto_mean_curr:
es1_roty:
readoutPriority: monitored
description: 'Test rotation stage'
description: "Test rotation stage"
deviceClass: ophyd.EpicsMotor
deviceConfig:
prefix: X02DA-ES1-SMP1:ROTY
@@ -49,62 +49,72 @@ es1_roty:
onFailure: buffer
enabled: true
readOnly: false
softwareTrigger: false
softwareTrigger: false
# es1_ismc:
# description: 'Automation1 iSMC interface'
# deviceClass: tomcat_bec.devices.aa1Controller
# deviceConfig:
# prefix: 'X02DA-ES1-SMP1:CTRL:'
# deviceTags:
# - es1
# enabled: true
# onFailure: buffer
# readOnly: false
# readoutPriority: monitored
# softwareTrigger: false
es1_trx:
readoutPriority: monitored
description: "Test translation stage"
deviceClass: ophyd.EpicsMotor
deviceConfig:
prefix: X02DA-ES1-SMP1:TRX
deviceTags:
- es1-sam
onFailure: buffer
enabled: true
readOnly: false
softwareTrigger: false
# es1_tasks:
# description: 'Automation1 task management interface'
# deviceClass: tomcat_bec.devices.aa1Tasks
# deviceConfig:
# prefix: 'X02DA-ES1-SMP1:TASK:'
# deviceTags:
# - es1
# enabled: true
# onFailure: buffer
# readOnly: false
# readoutPriority: monitored
# softwareTrigger: false
es1_ismc:
description: "Automation1 iSMC interface"
deviceClass: tomcat_bec.devices.aa1Controller
deviceConfig:
prefix: "X02DA-ES1-SMP1:CTRL:"
deviceTags:
- es1
enabled: true
onFailure: buffer
readOnly: false
readoutPriority: monitored
softwareTrigger: false
es1_tasks:
description: "Automation1 task management interface"
deviceClass: tomcat_bec.devices.aa1Tasks
deviceConfig:
prefix: "X02DA-ES1-SMP1:TASK:"
deviceTags:
- es1
enabled: false
onFailure: buffer
readOnly: false
readoutPriority: monitored
softwareTrigger: false
# es1_psod:
# description: 'AA1 PSO output interface (trigger)'
# deviceClass: tomcat_bec.devices.aa1AxisPsoDistance
# deviceConfig:
# prefix: 'X02DA-ES1-SMP1:ROTY:PSO:'
# deviceTags:
# - es1
# enabled: true
# onFailure: buffer
# readOnly: false
# readoutPriority: monitored
# softwareTrigger: true
# es1_ddaq:
# description: 'Automation1 position recording interface'
# deviceClass: tomcat_bec.devices.aa1AxisDriveDataCollection
# deviceConfig:
# prefix: 'X02DA-ES1-SMP1:ROTY:DDC:'
# deviceTags:
# - es1
# enabled: true
# onFailure: buffer
# readOnly: false
# readoutPriority: monitored
# softwareTrigger: false
es1_psod:
description: "AA1 PSO output interface (trigger)"
deviceClass: tomcat_bec.devices.aa1AxisPsoDistance
deviceConfig:
prefix: "X02DA-ES1-SMP1:ROTY:PSO:"
deviceTags:
- es1
enabled: true
onFailure: buffer
readOnly: false
readoutPriority: monitored
softwareTrigger: false
es1_ddaq:
description: "Automation1 position recording interface"
deviceClass: tomcat_bec.devices.aa1AxisDriveDataCollection
deviceConfig:
prefix: "X02DA-ES1-SMP1:ROTY:DDC:"
deviceTags:
- es1
enabled: true
onFailure: buffer
readOnly: false
readoutPriority: monitored
softwareTrigger: false
#camera:
# description: Grashopper Camera
@@ -119,108 +129,130 @@ es1_roty:
# readoutPriority: monitored
# softwareTrigger: true
# gfcam:
# description: GigaFrost camera client
# deviceClass: tomcat_bec.devices.GigaFrostCamera
# deviceConfig:
# prefix: 'X02DA-CAM-GF2:'
# backend_url: 'http://sls-daq-001:8080'
# auto_soft_enable: true
# deviceTags:
# - camera
# - trigger
# - gfcam
# enabled: true
# onFailure: buffer
# readOnly: false
# readoutPriority: monitored
# softwareTrigger: true
gfcam:
description: GigaFrost camera client
deviceClass: tomcat_bec.devices.GigaFrostCamera
deviceConfig:
prefix: 'X02DA-CAM-GF2:'
backend_url: 'http://sls-daq-001:8080'
prefix: "X02DA-CAM-GF2:"
backend_url: "http://sls-daq-001:8080"
auto_soft_enable: true
std_daq_live: "tcp://129.129.95.111:20000"
std_daq_ws: "ws://129.129.95.111:8080"
std_daq_rest: "http://129.129.95.111:5000"
deviceTags:
- camera
- trigger
- gfcam
- camera
- trigger
- gfcam
enabled: true
onFailure: buffer
readOnly: false
readoutPriority: monitored
softwareTrigger: true
# gfdaq:
# description: GigaFrost stdDAQ client
# deviceClass: tomcat_bec.devices.StdDaqClient
# deviceConfig:
# ws_url: 'ws://129.129.95.111:8080'
# rest_url: 'http://129.129.95.111:5000'
# data_source_name: 'gfcam'
# deviceTags:
# - std-daq
# - gfcam
# enabled: false
# onFailure: buffer
# readOnly: false
# readoutPriority: monitored
# softwareTrigger: false
gfdaq:
description: GigaFrost stdDAQ client
deviceClass: tomcat_bec.devices.StdDaqClient
deviceConfig:
ws_url: 'ws://129.129.95.111:8080'
rest_url: 'http://129.129.95.111:5000'
data_source_name: 'gfcam'
deviceTags:
- std-daq
- gfcam
enabled: true
onFailure: buffer
readOnly: false
readoutPriority: monitored
softwareTrigger: false
# gf_stream0:
# description: stdDAQ preview (2 every 555)
# deviceClass: tomcat_bec.devices.StdDaqPreviewDetector
# deviceConfig:
# url: 'tcp://129.129.95.111:20000'
# deviceTags:
# - std-daq
# - gfcam
# enabled: false
# onFailure: buffer
# readOnly: false
# readoutPriority: monitored
# softwareTrigger: false
daq_stream0:
description: stdDAQ preview (2 every 555)
deviceClass: tomcat_bec.devices.StdDaqPreviewDetector
deviceConfig:
url: 'tcp://129.129.95.111:20000'
deviceTags:
- std-daq
- gfcam
enabled: true
onFailure: buffer
readOnly: false
readoutPriority: monitored
softwareTrigger: false
# pcocam:
# description: PCO.edge camera client
# deviceClass: tomcat_bec.devices.PcoEdge5M
# deviceConfig:
# prefix: 'X02DA-CCDCAM2:'
# deviceTags:
# - camera
# - trigger
# - pcocam
# enabled: true
# onFailure: buffer
# readOnly: false
# readoutPriority: monitored
# softwareTrigger: true
daq_stream1:
description: stdDAQ preview (1 at 5 Hz)
deviceClass: tomcat_bec.devices.StdDaqPreviewDetector
deviceConfig:
url: 'tcp://129.129.95.111:20001'
deviceTags:
- std-daq
- gfcam
enabled: true
onFailure: buffer
readOnly: false
readoutPriority: monitored
softwareTrigger: false
# pcocam:
# description: PCO.edge camera client
# deviceClass: tomcat_bec.devices.PcoEdge5M
# deviceConfig:
# prefix: 'X02DA-CCDCAM2:'
# std_daq_live: 'tcp://129.129.95.111:20010'
# std_daq_ws: 'ws://129.129.95.111:8081'
# std_daq_rest: 'http://129.129.95.111:5010'
# deviceTags:
# - camera
# - trigger
# - pcocam
# enabled: true
# onFailure: buffer
# readOnly: false
# readoutPriority: monitored
# softwareTrigger: true
# pcodaq:
# description: GigaFrost stdDAQ client
# deviceClass: tomcat_bec.devices.StdDaqClient
# deviceConfig:
# ws_url: 'ws://129.129.95.111:8081'
# rest_url: 'http://129.129.95.111:5010'
# deviceTags:
# - std-daq
# - pcocam
# enabled: true
# onFailure: buffer
# readOnly: false
# readoutPriority: monitored
# softwareTrigger: false
pcocam:
description: PCO.edge camera client
deviceClass: tomcat_bec.devices.PcoEdge5M
deviceConfig:
prefix: 'X02DA-CCDCAM2:'
deviceTags:
- camera
- trigger
- pcocam
enabled: true
onFailure: buffer
readOnly: false
readoutPriority: monitored
softwareTrigger: true
pcodaq:
description: PCO.edge stdDAQ client
deviceClass: tomcat_bec.devices.StdDaqClient
deviceConfig:
ws_url: 'ws://129.129.95.111:8081'
rest_url: 'http://129.129.95.111:5010'
deviceTags:
- std-daq
- pcocam
enabled: true
onFailure: buffer
readOnly: false
readoutPriority: monitored
softwareTrigger: false
pco_stream0:
description: stdDAQ preview (2 every 555)
deviceClass: tomcat_bec.devices.StdDaqPreviewDetector
deviceConfig:
url: 'tcp://129.129.95.111:20010'
deviceTags:
- std-daq
- pcocam
enabled: true
onFailure: buffer
readOnly: false
readoutPriority: monitored
softwareTrigger: false
# pco_stream0:
# description: stdDAQ preview (2 every 555)
# deviceClass: tomcat_bec.devices.StdDaqPreviewDetector
# deviceConfig:
# url: 'tcp://129.129.95.111:20010'
# deviceTags:
# - std-daq
# - pcocam
# enabled: true
# onFailure: buffer
# readOnly: false
# readoutPriority: monitored
# softwareTrigger: false

View File

@@ -21,7 +21,7 @@ eyex:
# onFailure: buffer
# enabled: true
# readOnly: false
# softwareTrigger: false
# softwareTrigger: false
# eyez:
# readoutPriority: baseline
# description: X-ray eye axis Z
@@ -50,24 +50,24 @@ femto_mean_curr:
es1_roty:
readoutPriority: baseline
description: 'Test rotation stage'
description: "Test rotation stage"
deviceClass: tomcat_bec.devices.psimotor.EpicsMotorMR
deviceConfig:
prefix: X02DA-ES1-SMP1:ROTY
deviceTags:
- es1-sam
onFailure: buffer
enabled: true
enabled: false
readOnly: false
softwareTrigger: false
softwareTrigger: false
es1_ismc:
description: 'Automation1 iSMC interface'
deviceClass: tomcat_bec.devices.aa1Controller
deviceConfig:
prefix: 'X02DA-ES1-SMP1:CTRL:'
description: "Automation1 iSMC interface"
deviceClass: tomcat_bec.devices.aa1Controller
deviceConfig:
prefix: "X02DA-ES1-SMP1:CTRL:"
deviceTags:
- es1
- es1
enabled: true
onFailure: buffer
readOnly: false
@@ -75,47 +75,44 @@ es1_ismc:
softwareTrigger: false
es1_tasks:
description: 'Automation1 task management interface'
deviceClass: tomcat_bec.devices.aa1Tasks
deviceConfig:
prefix: 'X02DA-ES1-SMP1:TASK:'
description: "Automation1 task management interface"
deviceClass: tomcat_bec.devices.aa1Tasks
deviceConfig:
prefix: "X02DA-ES1-SMP1:TASK:"
deviceTags:
- es1
enabled: true
- es1
enabled: false
onFailure: buffer
readOnly: false
readoutPriority: monitored
softwareTrigger: false
es1_psod:
description: 'AA1 PSO output interface (trigger)'
deviceClass: tomcat_bec.devices.aa1AxisPsoDistance
description: "AA1 PSO output interface (trigger)"
deviceClass: tomcat_bec.devices.aa1AxisPsoDistance
deviceConfig:
prefix: 'X02DA-ES1-SMP1:ROTY:PSO:'
prefix: "X02DA-ES1-SMP1:ROTY:PSO:"
deviceTags:
- es1
- es1
enabled: true
onFailure: buffer
readOnly: false
readoutPriority: monitored
softwareTrigger: true
es1_ddaq:
description: 'Automation1 position recording interface'
deviceClass: tomcat_bec.devices.aa1AxisDriveDataCollection
description: "Automation1 position recording interface"
deviceClass: tomcat_bec.devices.aa1AxisDriveDataCollection
deviceConfig:
prefix: 'X02DA-ES1-SMP1:ROTY:DDC:'
prefix: "X02DA-ES1-SMP1:ROTY:DDC:"
deviceTags:
- es1
enabled: true
- es1
enabled: false
onFailure: buffer
readOnly: false
readoutPriority: monitored
softwareTrigger: false
#camera:
# description: Grashopper Camera
# deviceClass: tomcat_bec.devices.GrashopperTOMCAT
@@ -133,54 +130,53 @@ gfcam:
description: GigaFrost camera client
deviceClass: tomcat_bec.devices.GigaFrostCamera
deviceConfig:
prefix: 'X02DA-CAM-GF2:'
backend_url: 'http://sls-daq-001:8080'
prefix: "X02DA-CAM-GF2:"
backend_url: "http://sls-daq-001:8080"
auto_soft_enable: true
deviceTags:
- camera
- trigger
- camera
- trigger
enabled: true
onFailure: buffer
readOnly: false
readoutPriority: monitored
softwareTrigger: true
# gfdaq:
# description: GigaFrost stdDAQ client
# deviceClass: tomcat_bec.devices.StdDaqClient
# deviceConfig:
# ws_url: 'ws://129.129.95.111:8080'
# rest_url: 'http://129.129.95.111:5000'
# deviceTags:
# - std-daq
# enabled: true
# onFailure: buffer
# readOnly: false
# readoutPriority: monitored
# softwareTrigger: false
gfdaq:
description: GigaFrost stdDAQ client
deviceClass: tomcat_bec.devices.StdDaqClient
deviceConfig:
ws_url: 'ws://129.129.95.111:8080'
rest_url: 'http://129.129.95.111:5000'
deviceTags:
- std-daq
enabled: true
onFailure: buffer
readOnly: false
readoutPriority: monitored
softwareTrigger: false
# daq_stream0:
# description: stdDAQ preview (2 every 555)
# deviceClass: tomcat_bec.devices.StdDaqPreviewDetector
# deviceConfig:
# url: 'tcp://129.129.95.111:20000'
# deviceTags:
# - std-daq
# enabled: true
# onFailure: buffer
# readOnly: false
# readoutPriority: monitored
# softwareTrigger: false
daq_stream0:
description: stdDAQ preview (2 every 555)
deviceClass: tomcat_bec.devices.StdDaqPreviewDetector
deviceConfig:
url: 'tcp://129.129.95.111:20000'
deviceTags:
- std-daq
enabled: true
onFailure: buffer
readOnly: false
readoutPriority: monitored
softwareTrigger: false
daq_stream1:
description: stdDAQ preview (1 at 5 Hz)
deviceClass: tomcat_bec.devices.StdDaqPreviewDetector
deviceConfig:
url: 'tcp://129.129.95.111:20001'
deviceTags:
- std-daq
enabled: true
onFailure: buffer
readOnly: false
readoutPriority: monitored
softwareTrigger: false
# daq_stream1:
# description: stdDAQ preview (1 at 5 Hz)
# deviceClass: tomcat_bec.devices.StdDaqPreviewDetector
# deviceConfig:
# url: 'tcp://129.129.95.111:20001'
# deviceTags:
# - std-daq
# enabled: true
# onFailure: buffer
# readOnly: false
# readoutPriority: monitored
# softwareTrigger: false

View File

@@ -5,13 +5,8 @@ from .aerotech import (
aa1GlobalVariableBindings,
aa1GlobalVariables,
aa1Tasks,
aa1Controller
)
from .grashopper_tomcat import GrashopperTOMCAT
from .psimotor import EpicsMotorMR, EpicsMotorEC
from .gigafrost.gigafrostcamera import GigaFrostCamera
from .gigafrost.pcoedgecamera import PcoEdge5M
from .gigafrost.stddaq_client import StdDaqClient
from .gigafrost.stddaq_preview import StdDaqPreviewDetector
from .grashopper_tomcat import GrashopperTOMCAT
from .pco_edge.pcoedgecamera import PcoEdge5M
from .psimotor import EpicsMotorEC, EpicsMotorMR

View File

@@ -35,21 +35,22 @@ class aa1Controller(Device):
"""Ophyd proxy class for the Aerotech Automation 1's core controller functionality"""
# ToDo: Add error subscription
controllername = Component(EpicsSignalRO, "NAME", kind=Kind.config)
serialnumber = Component(EpicsSignalRO, "SN", kind=Kind.config)
apiversion = Component(EpicsSignalRO, "API_VERSION", kind=Kind.config)
controllername = Component(EpicsSignalRO, "NAME", string=True, kind=Kind.config)
serialnumber = Component(EpicsSignalRO, "SN", string=True, kind=Kind.config)
apiversion = Component(EpicsSignalRO, "API_VERSION", string=True, kind=Kind.config)
axiscount = Component(EpicsSignalRO, "AXISCOUNT", kind=Kind.config)
taskcount = Component(EpicsSignalRO, "TASKCOUNT", kind=Kind.config)
fastpoll = Component(EpicsSignalRO, "POLLTIME", auto_monitor=True, kind=Kind.normal)
slowpoll = Component(EpicsSignalRO, "DRVPOLLTIME", auto_monitor=True, kind=Kind.normal)
errno = Component(EpicsSignalRO, "ERRNO", auto_monitor=True, kind=Kind.normal)
errnmsg = Component(EpicsSignalRO, "ERRMSG", auto_monitor=True, kind=Kind.normal)
errnmsg = Component(EpicsSignalRO, "ERRMSG", string=True, auto_monitor=True, kind=Kind.normal)
_set_ismc = Component(EpicsSignal, "SET", put_complete=True, kind=Kind.omitted)
USER_ACCESS = ["reset"]
def reset(self):
""" Resets the Automation1 iSMC reloading the default configuration"""
"""Resets the Automation1 iSMC reloading the default configuration. Note that this will
erase all settings that were set during startup and not saved to the MCD file."""
self._set_ismc.set(3).wait()

View File

@@ -8,68 +8,17 @@ drive data collection (DDC) interface.
import time
from collections import OrderedDict
from ophyd import Component, EpicsSignal, EpicsSignalRO, Kind
from ophyd import Device, Component, EpicsSignal, EpicsSignalRO, Kind
from ophyd.status import SubscriptionStatus
from ophyd_devices.interfaces.base_classes.psi_detector_base import PSIDetectorBase as PSIDeviceBase
from ophyd_devices.interfaces.base_classes.psi_detector_base import (
CustomDetectorMixin as CustomDeviceMixin,
)
from ophyd_devices.interfaces.base_classes.psi_device_base import PSIDeviceBase
from bec_lib import bec_logger
logger = bec_logger.logger
class AerotechDriveDataCollectionMixin(CustomDeviceMixin):
"""Mixin class for self-configuration and staging
NOTE: scripted scans start drive data collection internally
"""
# parent : aa1Tasks
def on_stage(self) -> None:
"""Configuration and staging"""
# Fish out configuration from scaninfo (does not need to be full configuration)
d = {}
if "kwargs" in self.parent.scaninfo.scan_msg.info:
scanargs = self.parent.scaninfo.scan_msg.info["kwargs"]
# NOTE: Scans don't have to fully configure the device
if "ddc_trigger" in scanargs:
d["ddc_trigger"] = scanargs["ddc_trigger"]
if "ddc_num_points" in scanargs:
d["num_points_total"] = scanargs["ddc_num_points"]
else:
# Try to figure out number of points
num_points = 1
points_valid = False
if "steps" in scanargs and scanargs['steps'] is not None:
num_points *= scanargs["steps"]
points_valid = True
elif "exp_burst" in scanargs and scanargs['exp_burst'] is not None:
num_points *= scanargs["exp_burst"]
points_valid = True
elif "repeats" in scanargs and scanargs['repeats'] is not None:
num_points *= scanargs["repeats"]
points_valid = True
if points_valid:
d["num_points_total"] = num_points
# Perform bluesky-style configuration
if len(d) > 0:
logger.warning(f"[{self.parent.name}] Configuring with:\n{d}")
self.parent.configure(d=d)
# Stage the data collection if not in internally launched mode
# NOTE: Scripted scans start acquiring from the scripts
if self.parent.scaninfo.scan_type not in ("script", "scripted"):
self.parent.bluestage()
def on_unstage(self):
"""Standard bluesky unstage"""
self.parent._switch.set("Stop", settle_time=0.2).wait()
class aa1AxisDriveDataCollection(PSIDeviceBase):
class aa1AxisDriveDataCollection(PSIDeviceBase, Device):
"""Axis data collection
This class provides convenience wrappers around the Aerotech API's axis
@@ -88,9 +37,10 @@ class aa1AxisDriveDataCollection(PSIDeviceBase):
...
ret = yield from ddc.collect()
NOTE: scripted scans start drive data collection internally
NOTE: Expected behavior is that the device is disabled when not in use,
i.e. there's avtive enable/disable management.
i.e. there's active enable/disable management.
"""
# ########################################################################
@@ -111,8 +61,32 @@ class aa1AxisDriveDataCollection(PSIDeviceBase):
_buffer0 = Component(EpicsSignalRO, "BUFFER0", auto_monitor=True, kind=Kind.normal)
_buffer1 = Component(EpicsSignalRO, "BUFFER1", auto_monitor=True, kind=Kind.normal)
custom_prepare_cls = AerotechDriveDataCollectionMixin
USER_ACCESS = ["configure", "reset"]
USER_ACCESS = ["configure", "reset", "arm", "disarm"]
# pylint: disable=duplicate-code, too-many-arguments
def __init__(
self,
prefix="",
*,
name,
kind=None,
read_attrs=None,
configuration_attrs=None,
parent=None,
scan_info=None,
**kwargs,
):
# Forward all arguments, including scan_info, to the base class constructors
super().__init__(
prefix=prefix,
name=name,
kind=kind,
read_attrs=read_attrs,
configuration_attrs=configuration_attrs,
parent=parent,
scan_info=scan_info,
**kwargs,
)
def configure(self, d: dict = None) -> tuple:
"""Configure data capture
@@ -128,21 +102,74 @@ class aa1AxisDriveDataCollection(PSIDeviceBase):
if "num_points_total" in d:
self.npoints.set(d["num_points_total"]).wait()
if "ddc_trigger" in d:
self._trigger.set(d['ddc_trigger']).wait()
self._trigger.set(d["ddc_trigger"]).wait()
if "ddc_source0" in d:
self._input0.set(d['ddc_source0']).wait()
self._input0.set(d["ddc_source0"]).wait()
if "ddc_source1" in d:
self._input1.set(d['ddc_source1']).wait()
self._input1.set(d["ddc_source1"]).wait()
# Reset incremental readback
self._switch.set("ResetRB", settle_time=0.1).wait()
new = self.read_configuration()
return (old, new)
def bluestage(self) -> None:
def on_stage(self) -> None:
"""Configuration and staging"""
# Fish out configuration from scaninfo (does not need to be full configuration)
d = {}
scan_args = {
**self.scan_info.msg.request_inputs["inputs"],
**self.scan_info.msg.request_inputs["kwargs"],
**self.scan_info.msg.scan_parameters,
}
# NOTE: Scans don't have to fully configure the device
if "ddc_trigger" in scan_args:
d["ddc_trigger"] = scan_args["ddc_trigger"]
if "ddc_num_points" in scan_args:
d["num_points_total"] = scan_args["ddc_num_points"]
else:
# Try to figure out number of points
num_points = 1
points_valid = False
if "steps" in scan_args and scan_args["steps"] is not None:
num_points *= scan_args["steps"]
points_valid = True
if "exp_burst" in scan_args and scan_args["exp_burst"] is not None:
num_points *= scan_args["exp_burst"]
points_valid = True
if "repeats" in scan_args and scan_args["repeats"] is not None:
num_points *= scan_args["repeats"]
points_valid = True
if "burst_at_each_point" in scan_args and scan_args["burst_at_each_point"] is not None:
num_points *= scan_args["burst_at_each_point"]
points_valid = True
if points_valid:
d["num_points_total"] = num_points
# Perform bluesky-style configuration
if d:
self.configure(d=d)
# Stage the data collection if not in internally launched mode
# NOTE: Scripted scans start acquiring from the scripts at kickoff
self.arm()
# Reset readback
self.reset()
def on_unstage(self):
"""Standard bluesky unstage"""
self.disarm()
def arm(self) -> None:
"""Bluesky-style stage"""
self._switch.set("ResetRB", settle_time=0.1).wait()
self._switch.set("Start", settle_time=0.2).wait()
def disarm(self):
"""Standard bluesky unstage"""
self._switch.set("Stop", settle_time=0.2).wait()
def reset(self):
"""Reset incremental readback"""
self._switch.set("ResetRB", settle_time=0.1).wait()
@@ -164,20 +191,22 @@ class aa1AxisDriveDataCollection(PSIDeviceBase):
timestamp_ = timestamp
return result
status = None
if index == 0:
status = SubscriptionStatus(self._readstatus0, neg_edge, settle_time=0.5)
self._readback0.set(1).wait()
elif index == 1:
status = SubscriptionStatus(self._readstatus1, neg_edge, settle_time=0.5)
self._readback1.set(1).wait()
else:
raise RuntimeError(f"Unsupported drive data collection channel: {index}")
# Start asynchronous readback
status.wait()
return status
def describe_collect(self) -> OrderedDict:
"""Describes collected array format according to JSONschema
"""
"""Describes collected array format according to JSONschema"""
ret = OrderedDict()
ret["buffer0"] = {
"source": "internal",

View File

@@ -7,78 +7,17 @@ synchronized output (PSO) interface.
"""
from time import sleep
import numpy as np
from ophyd import Component, EpicsSignal, EpicsSignalRO, Kind
from ophyd import Device, Component, EpicsSignal, EpicsSignalRO, Kind
from ophyd.status import DeviceStatus
from ophyd_devices.interfaces.base_classes.psi_detector_base import PSIDetectorBase as PSIDeviceBase
from ophyd_devices.interfaces.base_classes.psi_detector_base import (
CustomDetectorMixin as CustomDeviceMixin,
)
from ophyd_devices.interfaces.base_classes.psi_device_base import PSIDeviceBase
from bec_lib import bec_logger
logger = bec_logger.logger
class AerotechPsoDistanceMixin(CustomDeviceMixin):
"""Mixin class for self-configuration and staging
"""
# parent : aa1Tasks
def on_stage(self) -> None:
"""Configuration and staging
NOTE: Scans don't have to fully configure the device, that can be done
manually outside. However we expect that the device is disabled
when not in use. I.e. this method is not expected to be called when
PSO is not needed or when it'd conflict with other devices.
"""
# Fish out configuration from scaninfo (does not need to be full configuration)
d = {}
if "kwargs" in self.parent.scaninfo.scan_msg.info:
scanargs = self.parent.scaninfo.scan_msg.info["kwargs"]
if "pso_distance" in scanargs:
d["pso_distance"] = scanargs["pso_distance"]
if "pso_wavemode" in scanargs:
d["pso_wavemode"] = scanargs["pso_wavemode"]
if "pso_w_pulse" in scanargs:
d["pso_w_pulse"] = scanargs["pso_w_pulse"]
if "pso_t_pulse" in scanargs:
d["pso_t_pulse"] = scanargs["pso_t_pulse"]
if "pso_n_pulse" in scanargs:
d["pso_n_pulse"] = scanargs["pso_n_pulse"]
# Perform bluesky-style configuration
if len(d) > 0:
logger.info(f"[{self.parent.name}] Configuring with:\n{d}")
self.parent.configure(d=d)
# Stage the PSO distance module
self.parent.bluestage()
def on_unstage(self):
"""Standard bluesky unstage"""
# Ensure output is set to low
# if self.parent.output.value:
# self.parent.toggle()
# Turn off window mode
self.parent.winOutput.set("Off").wait()
self.parent.winEvents.set("Off").wait()
# Turn off distance mode
self.parent.dstEventsEna.set("Off").wait()
self.parent.dstCounterEna.set("Off").wait()
# Disable output
self.parent.outSource.set("None").wait()
# Sleep for one poll period
sleep(0.2)
def on_trigger(self) -> None | DeviceStatus:
"""Fire a single PSO event (i.e. manual software trigger)"""
# Only trigger if distance was set to invalid
logger.warning(f"[{self.parent.name}] Triggering...")
if self.parent.dstDistanceVal.get() == 0:
status = self.parent._eventSingle.set(1, settle_time=0.1)
return status
class aa1AxisPsoBase(PSIDeviceBase):
class AerotechPsoBase(PSIDeviceBase, Device):
"""Position Sensitive Output - Base class
This class provides convenience wrappers around the Aerotech IOC's PSO
@@ -100,6 +39,7 @@ class aa1AxisPsoBase(PSIDeviceBase):
output = Component(EpicsSignalRO, "OUTPUT-RBV", auto_monitor=True, kind=Kind.normal)
address = Component(EpicsSignalRO, "ARRAY-ADDR", kind=Kind.config)
_eventSingle = Component(EpicsSignal, "EVENT:SINGLE", put_complete=True, kind=Kind.omitted)
switch = Component(EpicsSignal, "SWITCH", put_complete=True, kind=Kind.omitted)
# ########################################################################
# PSO Distance event module
@@ -152,8 +92,34 @@ class aa1AxisPsoBase(PSIDeviceBase):
outPin = Component(EpicsSignalRO, "PIN", auto_monitor=True, kind=Kind.config)
outSource = Component(EpicsSignal, "SOURCE", put_complete=True, kind=Kind.config)
def trigger(self, settle_time=0.1) -> DeviceStatus:
# pylint: disable=duplicate-code, too-many-arguments
def __init__(
self,
prefix="",
*,
name,
kind=None,
read_attrs=None,
configuration_attrs=None,
parent=None,
scan_info=None,
**kwargs,
):
# Need to call super() to call the mixin class
super().__init__(
prefix=prefix,
name=name,
kind=kind,
read_attrs=read_attrs,
configuration_attrs=configuration_attrs,
parent=parent,
scan_info=scan_info,
**kwargs,
)
def fire(self, settle_time=0.1) -> None | DeviceStatus:
"""Fire a single PSO event (i.e. manual software trigger)"""
# Only trigger if distance was set to invalid
self._eventSingle.set(1, settle_time=settle_time).wait()
status = DeviceStatus(self)
status.set_finished()
@@ -163,7 +129,7 @@ class aa1AxisPsoBase(PSIDeviceBase):
"""Toggle waveform outup"""
orig_wave_mode = self.waveMode.get()
self.waveMode.set("Toggle").wait()
self.trigger(0.1)
self.fire(0.1)
self.waveMode.set(orig_wave_mode).wait()
def configure(self, d: dict):
@@ -173,6 +139,8 @@ class aa1AxisPsoBase(PSIDeviceBase):
w_pulse = d.get("pso_w_pulse", 200)
n_pulse = d.get("pso_n_pulse", 1)
self.switch.set("Manual").wait()
# Configure the pulsed/toggled waveform
if wmode in ["toggle", "toggled"]:
# Switching to simple toggle mode
@@ -195,15 +163,9 @@ class aa1AxisPsoBase(PSIDeviceBase):
else:
raise RuntimeError(f"Unsupported window mode: {wmode}")
# Set PSO output data source
# FIXME : This is essentially staging...
if wmode in ["toggle", "toggled", "pulse", "pulsed"]:
self.outSource.set("Waveform").wait()
elif wmode in ["output", "flag"]:
self.outSource.set("Window").wait()
class aa1AxisPsoDistance(aa1AxisPsoBase):
class aa1AxisPsoDistance(AerotechPsoBase):
"""Position Sensitive Output - Distance mode
This class provides convenience wrappers around the Aerotech API's PSO functionality in
@@ -232,13 +194,12 @@ class aa1AxisPsoDistance(aa1AxisPsoBase):
```
"""
custom_prepare_cls = AerotechPsoDistanceMixin
USER_ACCESS = ["configure", "prepare", "toggle"]
USER_ACCESS = ["configure", "fire", "toggle", "arm", "disarm"]
_distance_value = None
# ########################################################################
# PSO high level interface
def configure(self, d: dict = {}) -> tuple:
def configure(self, d: dict = None) -> tuple:
"""Simplified configuration interface to access the most common
functionality for distance mode PSO.
@@ -262,10 +223,6 @@ class aa1AxisPsoDistance(aa1AxisPsoBase):
raise RuntimeError(f"Unsupported distance triggering mode: {pso_wavemode}")
old = self.read_configuration()
# Disable everything
self.winEvents.set("Off").wait()
self.dstCounterEna.set("Off").wait()
self.dstEventsEna.set("Off").wait()
# Configure distance generator (also resets counter to 0)
self._distance_value = pso_distance
@@ -282,14 +239,66 @@ class aa1AxisPsoDistance(aa1AxisPsoBase):
logger.info(f"[{self.name}] PSO configured to {pso_wavemode} mode")
return (old, new)
def bluestage(self) -> None:
"""Bluesky style stage"""
def on_stage(self) -> None:
"""Configuration and staging
NOTE: Scans don't have to fully configure the device, that can be done
manually outside. However we expect that the device is disabled
when not in use. I.e. this method is not expected to be called when
PSO is not needed or when it'd conflict with other devices.
"""
# Fish out configuration from scaninfo (does not need to be full configuration)
d = {}
scan_args = {
**self.scan_info.msg.request_inputs["inputs"],
**self.scan_info.msg.request_inputs["kwargs"],
**self.scan_info.msg.scan_parameters,
}
if "pso_distance" in scan_args:
d["pso_distance"] = scan_args["pso_distance"]
if "pso_wavemode" in scan_args:
d["pso_wavemode"] = scan_args["pso_wavemode"]
if "pso_w_pulse" in scan_args:
d["pso_w_pulse"] = scan_args["pso_w_pulse"]
if "pso_t_pulse" in scan_args:
d["pso_t_pulse"] = scan_args["pso_t_pulse"]
if "pso_n_pulse" in scan_args:
d["pso_n_pulse"] = scan_args["pso_n_pulse"]
# Perform bluesky-style configuration
if d:
self.configure(d=d)
# Stage the PSO distance module
self.arm()
def on_unstage(self):
"""Turn off the PSO module"""
self.disarm()
def on_trigger(self, settle_time=0.1) -> None | DeviceStatus:
"""Fire a single PSO event (i.e. manual software trigger)"""
# Only trigger if distance was set to invalid
# if self.dstDistanceVal.get() == 0:
# logger.warning(f"[{self.name}] Triggering...")
return self.fire(settle_time)
def arm(self) -> None:
"""Bluesky style stage
It re-arms the distance array and re-sets the distance counter at current position
"""
# Stage the PSO distance module and zero counter
if isinstance(self._distance_value, (np.ndarray, list, tuple)):
self.dstArrayRearm.set(1).wait()
# Wait for polling
sleep(0.5)
# Start monitoring the counters if distance is valid
if self.dstDistanceVal.get() > 0:
self.dstEventsEna.set("On").wait()
self.dstCounterEna.set("On").wait()
# Set distance and wait for polling
self.switch.set("Distance", settle_time=0.2).wait()
def disarm(self):
"""Standard bluesky unstage"""
self.switch.set("Manual", settle_time=0.2).wait()
def launch(self):
"""Re-set the counters"""
if isinstance(self._distance_value, (np.ndarray, list, tuple)):
self.dstArrayRearm.set(1).wait()
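The `on_stage` hook above merges the scan request's positional inputs, keyword arguments, and scan parameters into one lookup, then copies only the PSO keys that are present. A minimal sketch of that merge-and-cherry-pick step with plain dicts (`msg` is a hand-written stand-in for the BEC scan message; key names follow the code above):

```python
# Hypothetical scan message payload, shaped like scan_info.msg in the code above
msg = {
    "request_inputs": {
        "inputs": {"pso_distance": 0.1},
        "kwargs": {"pso_wavemode": "toggle"},
    },
    "scan_parameters": {"exp_time": 0.05},
}

# Merge all three sources; later entries win on key collisions
scan_args = {
    **msg["request_inputs"]["inputs"],
    **msg["request_inputs"]["kwargs"],
    **msg["scan_parameters"],
}

# Scans don't have to fully configure the device: keep only the keys present
pso_keys = ("pso_distance", "pso_wavemode", "pso_w_pulse", "pso_t_pulse", "pso_n_pulse")
d = {k: scan_args[k] for k in pso_keys if k in scan_args}
print(d)  # {'pso_distance': 0.1, 'pso_wavemode': 'toggle'}
```

The resulting partial dict is then passed to `configure(d=d)`, so unset keys fall back to the device's current configuration.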

View File

@@ -33,8 +33,9 @@ program
//////////////////////////////////////////////////////////////////////////
// Internal parameters - don't use
var $axis as axis = ROTY
var $ii as integer
var $axis as axis = {{ scan.rotaxis or 'ROTY' }}
var $ii as integer
var $axisFaults as integer = 0
var $iDdcSafeSpace as integer = 4096
// Set acceleration
@@ -126,7 +127,12 @@ program
if $eScanType == ScanType.POS || $eScanType == ScanType.NEG
PsoDistanceConfigureArrayDistances($axis, $iPsoArrayPosAddr, $iPsoArrayPosSize, 0)
MoveAbsolute($axis, $fPosEnd, $fVelScan)
WaitForInPosition($axis)
WaitForMotionDone($axis)
$axisFaults = StatusGetAxisItem($axis, AxisDataSignal.AxisFault)
if $axisFaults
TaskSetError(TaskGetIndex(), "AxisFault on axis ROTY")
end
elseif $eScanType == ScanType.POSNEG || $eScanType == ScanType.NEGPOS
for $ii = 0 to ($iNumRepeat-1)
// Feedback on progress
@@ -134,11 +140,15 @@ program
if ($ii % 2) == 0
PsoDistanceConfigureArrayDistances($axis, $iPsoArrayPosAddr, $iPsoArrayPosSize, 0)
MoveAbsolute($axis, $fPosEnd, $fVelScan)
WaitForInPosition($axis)
elseif ($ii % 2) == 1
PsoDistanceConfigureArrayDistances($axis, $iPsoArrayNegAddr, $iPsoArrayNegSize, 0)
MoveAbsolute($axis, $fPosStart, $fVelScan)
WaitForInPosition($axis)
end
WaitForMotionDone($axis)
$axisFaults = StatusGetAxisItem($axis, AxisDataSignal.AxisFault)
if $axisFaults
TaskSetError(TaskGetIndex(), "AxisFault on axis ROTY")
end
Dwell(0.2)
end

View File

@@ -20,7 +20,7 @@ program
// Internal
var $axis as axis = ROTY
var $axis as axis = {{ scan.rotaxis or 'ROTY' }}
var $ii as integer
var $axisFaults as integer = 0
@@ -78,6 +78,7 @@ program
///////////////////////////////////////////////////////////
$iglobal[2] = $iNumSteps
for $ii = 0 to ($iNumSteps-1)
$rglobal[4] = $ii
MoveAbsolute($axis, $fStartPosition + $ii * $fStepSize, $fVelScan)
WaitForMotionDone($axis)
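The step loop above moves to `$fStartPosition + $ii * $fStepSize` for each step, so the targets are an arithmetic sequence of `$iNumSteps` points starting at the start position. A quick check of the positions it produces (start, step size, and count are illustrative values, not taken from a real scan):

```python
# Example values only: 4 steps of 0.5 starting at 10.0
start, step, nsteps = 10.0, 0.5, 4

# Same arithmetic as the AeroScript loop: start + ii * step for ii in 0..N-1
targets = [start + i * step for i in range(nsteps)]
print(targets)  # [10.0, 10.5, 11.0, 11.5]
```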

View File

@@ -5,67 +5,18 @@ interface.
@author: mohacsi_i
"""
from time import sleep
from ophyd import Component, EpicsSignal, EpicsSignalRO, Kind
from ophyd.status import DeviceStatus, SubscriptionStatus
import time
from ophyd import Device, Component, EpicsSignal, EpicsSignalRO, Kind
from ophyd.status import SubscriptionStatus
from ophyd_devices.interfaces.base_classes.psi_device_base import PSIDeviceBase
from ophyd_devices.interfaces.base_classes.psi_detector_base import PSIDetectorBase as PSIDeviceBase
from ophyd_devices.interfaces.base_classes.psi_detector_base import (
CustomDetectorMixin as CustomDeviceMixin,
)
from bec_lib import bec_logger
logger = bec_logger.logger
class AerotechTasksMixin(CustomDeviceMixin):
"""Mixin class for self-configuration and staging
"""
# parent : aa1Tasks
def on_stage(self) -> None:
"""Configuration and staging
In the BEC model ophyd devices must fish out their own configuration from the 'scaninfo'.
I.e. they need to know which parameters are relevant for them at each scan.
NOTE: Scans don't have to fully configure the device, that can be done
manually outside. However we expect that the device is disabled
when not in use. I.e. this method is not expected to be called when
PSO is not needed or when it'd conflict with other devices.
"""
# logger.warning(self.parent.scaninfo.scan_msg.info['kwargs'].keys())
# Fish out our configuration from scaninfo (via explicit or generic addressing)
d = {}
if "kwargs" in self.parent.scaninfo.scan_msg.info:
scanargs = self.parent.scaninfo.scan_msg.info["kwargs"]
if self.parent.scaninfo.scan_type in ("script", "scripted"):
# NOTE: Scans don't have to fully configure the device
if "script_text" in scanargs and scanargs["script_text"] is not None:
d["script_text"] = scanargs["script_text"]
if "script_file" in scanargs and scanargs["script_file"] is not None:
d["script_file"] = scanargs["script_file"]
if "script_task" in scanargs and scanargs["script_task"] is not None:
d["script_task"] = scanargs["script_task"]
# Perform bluesky-style configuration
if len(d) > 0:
logger.warning(f"[{self.parent.name}] Configuring with:\n{d}")
self.parent.configure(d=d)
# The actual staging
self.parent.bluestage()
def on_unstage(self):
"""Stop the currently selected task"""
self.parent.switch.set("Stop").wait()
def on_stop(self):
"""Stop the currently selected task"""
self.parent.switch.set("Stop").wait()
class aa1Tasks(PSIDeviceBase):
class aa1Tasks(PSIDeviceBase, Device):
"""Task management API
The place to manage tasks and AeroScript user files on the controller.
@@ -96,7 +47,7 @@ class aa1Tasks(PSIDeviceBase):
"""
custom_prepare_cls = AerotechTasksMixin
USER_ACCESS = ["arm", "disarm", "launch", "kickoff"]
_failure = Component(EpicsSignalRO, "FAILURE", auto_monitor=True, kind=Kind.normal)
errStatus = Component(EpicsSignalRO, "ERRW", auto_monitor=True, kind=Kind.normal)
@@ -109,55 +60,156 @@ class aa1Tasks(PSIDeviceBase):
_executeReply = Component(EpicsSignalRO, "EXECUTE_RBV", string=True, auto_monitor=True)
fileName = Component(EpicsSignal, "FILENAME", string=True, kind=Kind.omitted, put_complete=True)
# _fileRead = Component(EpicsPassiveRO, "FILEREAD", string=True, kind=Kind.omitted)
_fileWrite = Component(
EpicsSignal, "FILEWRITE", string=True, kind=Kind.omitted, put_complete=True
)
# pylint: disable=duplicate-code, too-many-arguments
def __init__(
self,
prefix="",
*,
name,
kind=None,
read_attrs=None,
configuration_attrs=None,
parent=None,
scan_info=None,
**kwargs,
):
# Need to call super() to call the mixin class
super().__init__(
prefix=prefix,
name=name,
kind=kind,
read_attrs=read_attrs,
configuration_attrs=configuration_attrs,
parent=parent,
scan_info=scan_info,
**kwargs,
)
def configure(self, d: dict) -> tuple:
"""Configuration interface for flying"""
"""Configure the scripting interface
Handles AeroScript loading and the launching of existing script files
on the Automation1 iSMC. The interface is meant to be used for flying.
"""
# Common operations
old = self.read_configuration()
self.switch.set("Reset").wait()
# Check what we got
if "script_task" in d:
if d['script_task'] < 3 or d['script_task'] > 21:
if d["script_task"] < 3 or d["script_task"] > 21:
raise RuntimeError(f"Invalid task index: {d['script_task']}")
self.taskIndex.set(d['script_task']).wait()
self.taskIndex.set(d["script_task"]).wait()
if "script_file" in d:
self.fileName.set(d["script_file"]).wait()
if "script_text" in d:
# Compile text for syntax checking
# NOTE: This will load to 'script_file'
self._fileWrite.set(d['script_text'], settle_time=0.2).wait()
self.switch.set("Load").wait()
# Check the result of load
if self._failure.value:
raise RuntimeError("Failed to load script, perhaps a syntax error?")
# NOTE: This will overwrite 'script_file'
self._fileWrite.set(d["script_text"], settle_time=0.2).wait()
new = self.read_configuration()
return (old, new)
def bluestage(self) -> None:
"""Bluesky style stage"""
def on_stage(self) -> None:
"""Configuration and staging
In the BEC model ophyd devices must fish out their own configuration from the 'scaninfo'.
I.e. they need to know which parameters are relevant for them at each scan.
NOTE: Scans don't have to fully configure the device, that can be done
manually outside. However we expect that the device is disabled
when not in use. I.e. this method is not expected to be called when
PSO is not needed or when it'd conflict with other devices.
"""
# Fish out our configuration from scaninfo (via explicit or generic addressing)
d = {}
scan_args = {
**self.scan_info.msg.request_inputs["inputs"],
**self.scan_info.msg.request_inputs["kwargs"],
**self.scan_info.msg.scan_parameters,
}
# if self.scan_info.scan_type in ("script", "scripted"):
# NOTE: Scans don't have to fully configure the device
if "script_text" in scan_args and scan_args["script_text"] is not None:
d["script_text"] = scan_args["script_text"]
if "script_file" in scan_args and scan_args["script_file"] is not None:
d["script_file"] = scan_args["script_file"]
if "script_task" in scan_args and scan_args["script_task"] is not None:
d["script_task"] = scan_args["script_task"]
# FIXME: The above should be exchanged with:
# d = self.scan_info.scan_msg.scan_parameters.get("aerotech_config")
# Perform bluesky-style configuration
if d:
self.configure(d=d)
# The actual staging
self.arm()
def on_unstage(self):
"""Stop the currently selected task"""
self.disarm()
def on_stop(self):
"""Stop the currently selected task"""
self.unstage()
def on_kickoff(self):
"""Start execution of the selected task"""
self.launch()
def arm(self) -> None:
"""Bluesky style stage, prepare, but does not execute"""
if self.taskIndex.get() in (0, 1, 2):
logger.error(f"[{self.name}] Launching AeroScript on system task. Daring today are we?")
# Launch and check success
status = self.switch.set("Run", settle_time=0.2)
logger.error(f"[{self.name}] Loading AeroScript on system task. Daring today are we?")
if len(self.fileName.get()) == 0:
self.fileName.set("bec.ascript", settle_time=0.1).wait()
self._failure.read()
ts_old = float(self._failure.timestamp)
def wait_until_new(*_, old_value, value, timestamp, **__):
nonlocal ts_old
result = (ts_old != timestamp) and (value != -1)
# print(f"{old_value}\t{value}\t{ts_old}\t{timestamp}\t{result}")
return result
# Subscribe and wait for update
status = SubscriptionStatus(self._failure, wait_until_new, settle_time=0.1)
# Load and check success
self.switch.set("Load", settle_time=0.0).wait()
status.wait()
if self._failure.value:
raise RuntimeError("Failed to kick off task, please check the Aerotech IOC")
raise RuntimeError("Failed to load task, please check the Aerotech IOC")
return status
##########################################################################
# Bluesky flyer interface
def complete(self) -> DeviceStatus:
def disarm(self):
"""Bluesky style unstage, stops execution"""
self.switch.set("Stop").wait()
def launch(self):
"""Bluesky style kickoff"""
# Launch and check success
status = self.switch.set("Start", settle_time=0.2)
status.wait()
if self._failure.value:
raise RuntimeError("Failed to start task, please check the Aerotech IOC")
return status
def complete(self) -> SubscriptionStatus:
"""Wait for a RUNNING task"""
# Sleep for one poll period
time.sleep(1)
timestamp_ = 0
task_idx = int(self.taskIndex.get())
def not_running(*args, value, timestamp, **kwargs):
def not_running(*, value, timestamp, **_):
nonlocal timestamp_
# print(f"State {value[task_idx]} in {value}")
result = value[task_idx] not in ["Running", 4]
timestamp_ = timestamp
return result
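The `arm()` method above gates on a timestamp, not a value: the load is considered finished only once the FAILURE PV has actually updated (fresh timestamp) and does not carry the sentinel `-1`. That avoids acting on a stale monitor value left over from a previous load. A plain-Python sketch of the predicate, with an illustrative recorded timestamp:

```python
# Timestamp recorded before issuing "Load" (illustrative value)
ts_old = 1000.0

def wait_until_new(*, value, timestamp, **_):
    # Complete only on a fresh update that is not the -1 sentinel
    return (timestamp != ts_old) and (value != -1)

print(wait_until_new(value=0, timestamp=1000.0))   # False: stale update
print(wait_until_new(value=-1, timestamp=1001.0))  # False: sentinel value
print(wait_until_new(value=0, timestamp=1001.0))   # True: fresh result
```

In the device this predicate is handed to `SubscriptionStatus` on the `_failure` signal before `switch` is set to "Load", so no update can be missed between the write and the wait.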

View File

@@ -1,4 +1,13 @@
from .AerotechTasks import aa1Tasks
from .AerotechPso import aa1AxisPsoDistance
from .AerotechDriveDataCollection import aa1AxisDriveDataCollection
from .AerotechAutomation1 import aa1Controller, aa1GlobalVariables, aa1GlobalVariableBindings, aa1AxisIo
from .AerotechAutomation1 import (
aa1Controller,
aa1GlobalVariables,
aa1GlobalVariableBindings,
aa1AxisIo,
)
from .AerotechAutomation1Enums import (
DriveDataCaptureInput,
DriveDataCaptureTrigger,
)

File diff suppressed because it is too large Load Diff

View File

@@ -1,13 +1,18 @@
// Test program for zig-zag line scanning with PSO window output "enable"
// signal and DDC synchronized to external trigger input.
// Test program for simple zig-zag line scanning with PSO window output
// "enable" signal and DDC synchronized to external trigger input.
// The file expects external parameter validation
// The PSO locations arrays are set externally from EPICS PV
//
#define DDC_ADDR 0x800000
#define PSO_ADDR 0x0
enum ScanType
Pos = 0
Neg = 1
PosNeg = 2
NegPos = 3
POS = 0
NEG = 1
POSNEG = 2
NEGPOS = 3
end
@@ -15,74 +20,98 @@ program
//////////////////////////////////////////////////////////////////////////
// External parameters - USE THESE
var $fStartPosition as real = {{ scan.startpos }}
var $fScanRange as real = $fStartPosition + {{ scan.scanrange }}
var $fScanRange as real = {{ scan.scanrange }}
var $iNumRepeat as integer = {{ scan.nrepeat }}
var $eScanType as ScanType = ScanType.{{ scan.scandir or 'Pos' }}
var $eScanType as ScanType = ScanType.{{ scan.scandir or 'POS' }}
var $iNumDdcRead as integer = {{ scan.npoints }}
var $fVelJog as real = {{ scan.jogvel or 200 }}
var $fVelScan as real = {{ scan.scanvel }}
var $fAcceleration = {{ scan.scanacc or 500 }}
var $fSafeDist = 10.0
var $fAcceleration = {{ scan.scanacc }}
var $fAccDistance as real = {{ scan.accdist }}
var $eDdcTrigger as DriveDataCaptureTrigger = DriveDataCaptureTrigger.{{ scan.psotrigger }}
//////////////////////////////////////////////////////////////////////////
// Internal parameters - dont use
// Internal parameters - don't use
var $axis as axis = ROTY
var $ii as integer
var $axis as axis = {{ scan.rotaxis or 'ROTY' }}
var $ii as integer
var $axisFaults as integer = 0
var $iDdcSafeSpace as integer = 4096
// Set acceleration
SetupAxisRampType($axis, RampType.Linear)
SetupAxisRampValue($axis,0,$fAcceleration)
var $fAccDistance as real = 0.5 * $fVelScan * $fVelScan / $fAcceleration + $fSafeDist
SetupAxisRampValue($axis, 0, $fAcceleration)
// set the actual scan range
var $fPosStart as real
var $fPosEnd as real
if $eScanType == ScanType.Pos
if $eScanType == ScanType.POS
$fPosStart = $fStartPosition - $fAccDistance
$fPosEnd = $fStartPosition + $fScanRange + $fAccDistance
elseif $eScanType == ScanType.Neg
elseif $eScanType == ScanType.NEG
$fPosStart = $fStartPosition + $fAccDistance
$fPosEnd = $fStartPosition - $fScanRange - $fAccDistance
elseif $eScanType == ScanType.PosNeg
elseif $eScanType == ScanType.POSNEG
$fPosStart = $fStartPosition - $fAccDistance
$fPosEnd = $fStartPosition + $fScanRange + $fAccDistance
elseif $eScanType == ScanType.NegPos
elseif $eScanType == ScanType.NEGPOS
$fPosStart = $fStartPosition + $fAccDistance
$fPosEnd = $fStartPosition - $fScanRange - $fAccDistance
end
// Move to start position before the scan
// NOTE: Also wait for GigaFrost to start, otherwise early triggers might be missed
MoveAbsolute($axis, $fPosStart, $fVelJog)
WaitForInPosition($axis)
Dwell(2)
// Set globals for feedback
$rglobal[2] = $fPosStart
$rglobal[3] = $fPosEnd
// Configure PSO
// FIXME : When the controller is restarted
PsoDistanceConfigureInputs($axis, [PsoDistanceInput.XC4PrimaryFeedback])
PsoDistanceCounterOff($axis)
PsoDistanceEventsOff($axis)
PsoWindowConfigureEvents($axis, PsoWindowEventMode.None)
PsoWaveformOff($axis)
PsoOutputConfigureSource($axis, PsoOutputSource.Waveform)
var $iPsoArrayAddr as integer = 0
var $iPsoArray[] as real = [UnitsToCounts($axis, $fAccDistance), UnitsToCounts($axis, $fScanRange)]
DriveArrayWrite($axis, $iPsoArray, $iPsoArrayAddr, length($iPsoArray), DriveArrayType.PsoDistanceEventDistances)
PsoDistanceConfigureArrayDistances($axis, $iPsoArrayAddr, 2, 0)
// Simple PSO trigger pattern
var $iPsoArrayPosAddr as integer = PSO_ADDR
var $iPsoArrayPos[] as real = [{% for psoDist in scan.psoBoundsPos[:-1] %} UnitsToCounts($axis, {{ psoDist }}), {% endfor %} UnitsToCounts($axis, {{ scan.psoBoundsPos[-1] }}) ]
DriveArrayWrite($axis, $iPsoArrayPos, $iPsoArrayPosAddr, length($iPsoArrayPos), DriveArrayType.PsoDistanceEventDistances)
{% if scan.psoBoundsNeg|length > 0 %}
var $iPsoArrayNegAddr as integer = ($iPsoArrayPosAddr + length($iPsoArrayPos)) * 4
var $iPsoArrayNeg[] as real = [{% for psoDist in scan.psoBoundsNeg[:-1] %} UnitsToCounts($axis, {{ psoDist }}), {% endfor %} UnitsToCounts($axis, {{ scan.psoBoundsNeg[-1] }}) ]
DriveArrayWrite($axis, $iPsoArrayNeg, $iPsoArrayNegAddr, length($iPsoArrayNeg), DriveArrayType.PsoDistanceEventDistances)
{% else %}
var $iPsoArrayNegAddr as integer = ($iPsoArrayPosAddr + length($iPsoArrayPos)) * 4
var $iPsoArrayNegSize as integer = 0
var $iPsoArrayNeg[] as real = []
// DriveArrayWrite($axis, $iPsoArrayNeg, $iPsoArrayNegAddr, length($iPsoArrayNeg), DriveArrayType.PsoDistanceEventDistances)
{% endif %}
PsoDistanceConfigureArrayDistances($axis, $iPsoArrayPosAddr, length($iPsoArrayPos), 0)
PsoDistanceCounterOn($axis)
PsoDistanceEventsOn($axis)
PsoWaveformConfigureMode($axis, PsoWaveformMode.Toggle)
PsoWaveformOn($axis)
PsoWaveformOn($axis)
PsoOutputConfigureSource($axis, PsoOutputSource.Waveform)
// Configure Drive Data Collection
var $iDdcArrayAddr as integer = 8388608
var $iDdcArraySize as integer = 5000
var $iDdcArraySize as integer = $iNumDdcRead
DriveDataCaptureConfigureInput($axis, 0, DriveDataCaptureInput.PrimaryFeedback);
DriveDataCaptureConfigureInput($axis, 1, DriveDataCaptureInput.AnalogInput0 );
DriveDataCaptureConfigureTrigger($axis, 0, DriveDataCaptureTrigger.PsoOutput );
DriveDataCaptureConfigureTrigger($axis, 1, DriveDataCaptureTrigger.PsoOutput );
DriveDataCaptureConfigureTrigger($axis, 0, $eDdcTrigger );
DriveDataCaptureConfigureTrigger($axis, 1, $eDdcTrigger );
DriveDataCaptureConfigureArray($axis, 0, $iDdcArrayAddr, $iDdcArraySize);
DriveDataCaptureConfigureArray($axis, 1, $iDdcArrayAddr + $iDdcSafeSpace + 8 * $iDdcArraySize, $iDdcArraySize);
DriveDataCaptureConfigureArray($axis, 0, DDC_ADDR, $iDdcArraySize);
DriveDataCaptureConfigureArray($axis, 1, DDC_ADDR + $iDdcSafeSpace + 8 * $iDdcArraySize, $iDdcArraySize);
// Directly before scan
PsoDistanceCounterOn($axis)
@@ -92,41 +121,44 @@ program
///////////////////////////////////////////////////////////
// Start the actual scanning
///////////////////////////////////////////////////////////
for $ii = 0 to ($iNumRepeat-1)
PsoDistanceConfigureArrayDistances($axis, $iPsoArrayAddr, 2, 0)
if $eScanType == ScanType.Pos
MoveAbsolute($axis, $fPosEnd, $fVelScan)
WaitForInPosition($axis)
MoveAbsolute($axis, $fPosStart, $fVelScan)
WaitForInPosition($axis)
elseif $eScanType == ScanType.Neg
MoveAbsolute($axis, $fPosEnd, $fVelScan)
WaitForInPosition($axis)
MoveAbsolute($axis, $fPosStart, $fVelScan)
WaitForInPosition($axis)
elseif $eScanType == ScanType.PosNeg
if ($ii % 2) == 0
MoveAbsolute($axis, $fPosEnd, $fVelScan)
WaitForInPosition($axis)
elseif ($ii % 2) == 1
MoveAbsolute($axis, $fPosStart, $fVelScan)
WaitForInPosition($axis)
end
elseif $eScanType == ScanType.NegPos
if ($ii % 2) == 0
MoveAbsolute($axis, $fPosEnd, $fVelScan)
WaitForInPosition($axis)
elseif ($ii % 2) == 1
MoveAbsolute($axis, $fPosStart, $fVelScan)
WaitForInPosition($axis)
end
if $eScanType == ScanType.POS || $eScanType == ScanType.NEG
PsoDistanceConfigureArrayDistances($axis, $iPsoArrayPosAddr, length($iPsoArrayPos), 0)
MoveAbsolute($axis, $fPosEnd, $fVelScan)
WaitForMotionDone($axis)
$axisFaults = StatusGetAxisItem($axis, AxisDataSignal.AxisFault)
if $axisFaults
TaskSetError(TaskGetIndex(), "AxisFault on axis ROTY")
end
elseif $eScanType == ScanType.POSNEG || $eScanType == ScanType.NEGPOS
for $ii = 0 to ($iNumRepeat-1)
// Feedback on progress
$rglobal[4] = $ii
if ($ii % 2) == 0
PsoDistanceConfigureArrayDistances($axis, $iPsoArrayPosAddr, length($iPsoArrayPos), 0)
MoveAbsolute($axis, $fPosEnd, $fVelScan)
elseif ($ii % 2) == 1
PsoDistanceConfigureArrayDistances($axis, $iPsoArrayNegAddr, length($iPsoArrayNeg), 0)
MoveAbsolute($axis, $fPosStart, $fVelScan)
end
WaitForMotionDone($axis)
$axisFaults = StatusGetAxisItem($axis, AxisDataSignal.AxisFault)
if $axisFaults
TaskSetError(TaskGetIndex(), "AxisFault on axis ROTY")
end
Dwell(0.2)
end
Dwell(0.2)
end
// Directly after scan
PsoDistanceCounterOff($axis)
DriveDataCaptureOff($axis, 0)
DriveDataCaptureOff($axis, 1)
// move back to start position
MoveAbsolute($axis, $fPosStart, $fVelJog)
WaitForInPosition($axis)
Dwell(2)
end
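Earlier revisions computed the ramp-in distance in-script (the removed `$fAccDistance` line above); the new template takes `scan.accdist` as an external parameter, but the relation is the constant-acceleration stopping distance `d = v^2 / (2a)` plus a safety margin. A worked check with illustrative values (not taken from a real scan):

```python
# Illustrative values: scan velocity, ramp acceleration, safety margin
v_scan, accel, safe = 20.0, 500.0, 10.0  # units/s, units/s^2, units

# Distance needed to reach v_scan at constant acceleration, plus margin
acc_dist = 0.5 * v_scan * v_scan / accel + safe  # 0.4 + 10.0 = 10.4

# POS scan: pad both ends of the range so the axis crosses it at speed
start, scan_range = 0.0, 180.0
pos_start = start - acc_dist
pos_end = start + scan_range + acc_dist
print(pos_start, pos_end)  # -10.4 190.4
```

Padding both ends this way ensures the axis is already at constant velocity when the first PSO event fires inside the scan range.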

View File

@@ -1,5 +1,5 @@
"""
This module contains the PV definitions for the Gigafrost camera at Tomcat. It
This module contains the PV definitions for the Gigafrost camera at Tomcat. It
does not contain any logic to control the camera.
"""
@@ -91,7 +91,7 @@ class GigaFrostBase(Device):
sync_swhw = Cpt(EpicsSignal, "SYNC_SWHW.PROC", put_complete=True, kind=Kind.omitted)
start_cam = Cpt(EpicsSignal, "START_CAM", put_complete=True, kind=Kind.omitted)
set_param = Cpt(EpicsSignal, "SET_PARAM.PROC", put_complete=True, kind=Kind.omitted)
acqmode = Cpt(EpicsSignal, "ACQMODE", put_complete=True, kind=Kind.config)
acqmode = Cpt(EpicsSignal, "ACQMODE", auto_monitor=True, put_complete=True, kind=Kind.config)
array_size = DynamicDeviceComponent(
{
@@ -102,10 +102,14 @@ class GigaFrostBase(Device):
)
# UDP header
ports = Cpt(EpicsSignal, "PORTS", put_complete=True, kind=Kind.config)
framenum = Cpt(EpicsSignal, "FRAMENUM", put_complete=True, kind=Kind.config)
ht_offset = Cpt(EpicsSignal, "HT_OFFSET", put_complete=True, kind=Kind.config)
write_srv = Cpt(EpicsSignal, "WRITE_SRV.PROC", put_complete=True, kind=Kind.omitted)
ports = Cpt(EpicsSignal, "PORTS", auto_monitor=True, put_complete=True, kind=Kind.config)
framenum = Cpt(EpicsSignal, "FRAMENUM", auto_monitor=True, put_complete=True, kind=Kind.config)
ht_offset = Cpt(
EpicsSignal, "HT_OFFSET", auto_monitor=True, put_complete=True, kind=Kind.config
)
write_srv = Cpt(
EpicsSignal, "WRITE_SRV.PROC", auto_monitor=True, put_complete=True, kind=Kind.omitted
)
# Standard camera configs
exposure = Cpt(EpicsSignal, "EXPOSURE", put_complete=True, auto_monitor=True, kind=Kind.config)
@@ -135,9 +139,9 @@ class GigaFrostBase(Device):
)
# Software signals
soft_enable = Cpt(EpicsSignal, "SOFT_ENABLE", put_complete=True)
soft_enable = Cpt(EpicsSignal, "SOFT_ENABLE", put_complete=True, auto_monitor=True)
soft_trig = Cpt(EpicsSignal, "SOFT_TRIG.PROC", put_complete=True, kind=Kind.omitted)
soft_exp = Cpt(EpicsSignal, "SOFT_EXP", put_complete=True)
soft_exp = Cpt(EpicsSignal, "SOFT_EXP", put_complete=True, auto_monitor=True)
###############################################################################################
# Enable schemes
@@ -146,6 +150,7 @@ class GigaFrostBase(Device):
EpicsSignal,
"MODE_ENBL_EXP_RBV",
write_pv="MODE_ENBL_EXP",
auto_monitor=True,
put_complete=True,
kind=Kind.config,
)
@@ -154,6 +159,7 @@ class GigaFrostBase(Device):
EpicsSignal,
"MODE_ENBL_EXT_RBV",
write_pv="MODE_ENBL_EXT",
auto_monitor=True,
put_complete=True,
kind=Kind.config,
)
@@ -161,6 +167,7 @@ class GigaFrostBase(Device):
EpicsSignal,
"MODE_ENBL_SOFT_RBV",
write_pv="MODE_ENBL_SOFT",
auto_monitor=True,
put_complete=True,
kind=Kind.config,
)
@@ -168,6 +175,7 @@ class GigaFrostBase(Device):
EpicsSignal,
"MODE_ENBL_AUTO_RBV",
write_pv="MODE_ENBL_AUTO",
auto_monitor=True,
put_complete=True,
kind=Kind.config,
)
@@ -178,6 +186,7 @@ class GigaFrostBase(Device):
EpicsSignal,
"MODE_TRIG_EXT_RBV",
write_pv="MODE_TRIG_EXT",
auto_monitor=True,
put_complete=True,
kind=Kind.config,
)
@@ -185,6 +194,7 @@ class GigaFrostBase(Device):
EpicsSignal,
"MODE_TRIG_SOFT_RBV",
write_pv="MODE_TRIG_SOFT",
auto_monitor=True,
put_complete=True,
kind=Kind.config,
)
@@ -192,6 +202,7 @@ class GigaFrostBase(Device):
EpicsSignal,
"MODE_TRIG_TIMER_RBV",
write_pv="MODE_TRIG_TIMER",
auto_monitor=True,
put_complete=True,
kind=Kind.config,
)
@@ -199,6 +210,7 @@ class GigaFrostBase(Device):
EpicsSignal,
"MODE_TRIG_AUTO_RBV",
write_pv="MODE_TRIG_AUTO",
auto_monitor=True,
put_complete=True,
kind=Kind.config,
)
@@ -210,6 +222,7 @@ class GigaFrostBase(Device):
EpicsSignal,
"MODE_EXP_EXT_RBV",
write_pv="MODE_EXP_EXT",
auto_monitor=True,
put_complete=True,
kind=Kind.config,
)
@@ -217,6 +230,7 @@ class GigaFrostBase(Device):
EpicsSignal,
"MODE_EXP_SOFT_RBV",
write_pv="MODE_EXP_SOFT",
auto_monitor=True,
put_complete=True,
kind=Kind.config,
)
@@ -224,6 +238,7 @@ class GigaFrostBase(Device):
EpicsSignal,
"MODE_EXP_TIMER_RBV",
write_pv="MODE_EXP_TIMER",
auto_monitor=True,
put_complete=True,
kind=Kind.config,
)
@@ -235,19 +250,32 @@ class GigaFrostBase(Device):
EpicsSignal,
"CNT_STARTBIT_RBV",
write_pv="CNT_STARTBIT",
auto_monitor=True,
put_complete=True,
kind=Kind.config,
)
cnt_endbit = Cpt(
EpicsSignal, "CNT_ENDBIT_RBV", write_pv="CNT_ENDBIT", put_complete=True, kind=Kind.config
EpicsSignal,
"CNT_ENDBIT_RBV",
write_pv="CNT_ENDBIT",
auto_monitor=True,
put_complete=True,
kind=Kind.config,
)
# Line swap selection
ls_sw = Cpt(EpicsSignal, "LS_SW", put_complete=True, kind=Kind.config)
ls_nw = Cpt(EpicsSignal, "LS_NW", put_complete=True, kind=Kind.config)
ls_se = Cpt(EpicsSignal, "LS_SE", put_complete=True, kind=Kind.config)
ls_ne = Cpt(EpicsSignal, "LS_NE", put_complete=True, kind=Kind.config)
conn_parm = Cpt(EpicsSignal, "CONN_PARM", string=True, put_complete=True, kind=Kind.config)
ls_sw = Cpt(EpicsSignal, "LS_SW", put_complete=True, auto_monitor=True, kind=Kind.config)
ls_nw = Cpt(EpicsSignal, "LS_NW", put_complete=True, auto_monitor=True, kind=Kind.config)
ls_se = Cpt(EpicsSignal, "LS_SE", put_complete=True, auto_monitor=True, kind=Kind.config)
ls_ne = Cpt(EpicsSignal, "LS_NE", put_complete=True, auto_monitor=True, kind=Kind.config)
conn_parm = Cpt(
EpicsSignal,
"CONN_PARM",
string=True,
auto_monitor=True,
put_complete=True,
kind=Kind.config,
)
# HW settings as read only
pixrate = Cpt(EpicsSignalRO, "PIXRATE", auto_monitor=True, kind=Kind.config)

View File

@@ -1,40 +1,36 @@
# -*- coding: utf-8 -*-
"""
GigaFrost camera class module
from __future__ import annotations
Created on Thu Jun 27 17:28:43 2024
import os
from typing import TYPE_CHECKING, Literal, cast
@author: mohacsi_i
"""
from time import sleep
from typing import Literal
import numpy as np
from bec_lib.logger import bec_logger
from ophyd import DeviceStatus
from ophyd import Component as Cpt
from ophyd import DeviceStatus, Kind, Signal, StatusBase
from ophyd_devices.interfaces.base_classes.psi_device_base import PSIDeviceBase
from ophyd_devices.utils.bec_signals import PreviewSignal, ProgressSignal
import tomcat_bec.devices.gigafrost.gfconstants as const
from tomcat_bec.devices.gigafrost.gfutils import extend_header_table
from tomcat_bec.devices.gigafrost.gigafrost_base import GigaFrostBase
from tomcat_bec.devices.gigafrost.std_daq_client import (
from tomcat_bec.devices.std_daq.std_daq_client import (
StdDaqClient,
StdDaqConfigPartial,
StdDaqStatus,
)
from tomcat_bec.devices.std_daq.std_daq_live_processing import StdDaqLiveProcessing
from tomcat_bec.devices.std_daq.std_daq_preview import StdDaqPreview
from tomcat_bec.devices.gigafrost.std_daq_preview import StdDaqPreview
if TYPE_CHECKING:
from bec_lib.devicemanager import DeviceManagerBase
logger = bec_logger.logger
"""
TBD:
- Why is mode_enbl_exp not set during the enable_mode setter, only in the set_acquisition_mode?
- Why is set_acquisition_mode a method and not a property?
- When is a call to set_param necessary?
- Which access pattern is more common, setting the signal directly or using the method / property?
If the latter, we may want to change the inheritance structure to 'hide' the signals in a sub-component.
"""
def default_config() -> dict:
"""
Minimal configuration for the GigaFrost camera.
"""
return {"corr_mode": 5, "scan_id": 0} # default correction mode # default scan id
class GigaFrostCamera(PSIDeviceBase, GigaFrostBase):
@@ -69,10 +65,16 @@ class GigaFrostCamera(PSIDeviceBase, GigaFrostBase):
"enable_mode",
"backend",
"acq_done",
"live_preview"
"live_preview",
"live_processing",
]
_initialized = False
analysis_signal = Cpt(Signal, name="analysis_signal", kind=Kind.hinted, doc="Analysis Signal")
analysis_signal2 = Cpt(Signal, name="analysis_signal2", kind=Kind.hinted, doc="Analysis Signal")
preview = Cpt(PreviewSignal, name="preview", ndim=2, doc="Preview signal of the gfcam")
progress = Cpt(ProgressSignal, name="progress")
def __init__(
self,
prefix="",
@@ -88,9 +90,10 @@ class GigaFrostCamera(PSIDeviceBase, GigaFrostBase):
std_daq_rest: str | None = None,
std_daq_ws: str | None = None,
std_daq_live: str | None = None,
device_manager: DeviceManagerBase | None = None,
**kwargs,
):
# Ugly hack to pass values before on_init()
self.device_manager = device_manager
self._signals_to_be_set = {}
self._signals_to_be_set["auto_soft_enable"] = auto_soft_enable
self._signals_to_be_set["backend_url"] = backend_url
@@ -108,13 +111,17 @@ class GigaFrostCamera(PSIDeviceBase, GigaFrostBase):
)
if std_daq_rest is None or std_daq_ws is None:
raise ValueError("Both std_daq_rest and std_daq_ws must be provided")
self.live_processing = StdDaqLiveProcessing(
parent=self, signal=self.analysis_signal, signal2=self.analysis_signal2
)
self.backend = StdDaqClient(parent=self, ws_url=std_daq_ws, rest_url=std_daq_rest)
self.backend.add_count_callback(self._on_count_update)
self.live_preview = None
self.acq_configs = {}
if std_daq_live is not None:
self.live_preview = StdDaqPreview(url=std_daq_live, cb=self._on_preview_update)
def configure(self, d: dict = None):
def configure(self, d: dict | None = None):
"""Configure the next scan with the GigaFRoST camera
Parameters as 'd' dictionary
@@ -147,14 +154,18 @@ class GigaFrostCamera(PSIDeviceBase, GigaFrostBase):
acq_mode : str, optional
Select one of the pre-configured trigger behavior
"""
if d is None:
return
# Stop acquisition
self.set_idle()
self.stop_camera().wait(timeout=10)
backend_config = StdDaqConfigPartial(**d)
self.backend.update_config(backend_config)
# Update all specified ophyd signals
config = {}
config = default_config()
for key in self.component_names:
val = d.get(key)
if val is not None:
@@ -163,10 +174,6 @@ class GigaFrostCamera(PSIDeviceBase, GigaFrostBase):
if d.get("exp_time", 0) > 0:
config["exposure"] = d["exp_time"] * 1000 # exposure time in ms
if "corr_mode" not in config:
config["corr_mode"] = 5
if "scan_id" not in config:
config["scan_id"] = 0
super().configure(config)
# If the acquisition mode is specified, set it
@@ -176,7 +183,9 @@ class GigaFrostCamera(PSIDeviceBase, GigaFrostBase):
# Commit parameters
self.set_param.set(1).wait()
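The hunk above replaces hard-coded fallbacks (`corr_mode`, `scan_id`) with a `default_config()` baseline that user-supplied keys override. A minimal standalone sketch of that merge pattern (function and key names mirror the diff, but this is illustrative, not the device code itself):

```python
def default_config() -> dict:
    # Baseline values applied when the caller does not override them
    return {"corr_mode": 5, "scan_id": 0}


def merge_config(user: dict, component_names: list[str]) -> dict:
    """Start from the defaults, then copy over any recognised, non-None keys."""
    config = default_config()
    for key in component_names:
        val = user.get(key)
        if val is not None:
            config[key] = val
    return config


# "ignored" is not a known component, so it is dropped; corr_mode overrides the default
merged = merge_config({"corr_mode": 7, "ignored": 1}, ["corr_mode", "exposure"])
```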
def set_acquisition_mode(self, acq_mode):
def set_acquisition_mode(
self, acq_mode: Literal["default", "ext_enable", "soft", "ext", "external"]
):
"""Set acquisition mode
Utility function to quickly select between pre-configured and tested
@@ -236,7 +245,7 @@ class GigaFrostCamera(PSIDeviceBase, GigaFrostBase):
raise RuntimeError(f"Unsupported acquisition mode: {acq_mode}")
@property
def exposure_mode(self):
def exposure_mode(self) -> str | None:
"""Returns the current exposure mode of the GigaFRost camera.
Returns
@@ -258,7 +267,7 @@ class GigaFrostCamera(PSIDeviceBase, GigaFrostBase):
return None
@exposure_mode.setter
def exposure_mode(self, exp_mode):
def exposure_mode(self, exp_mode: Literal["external", "timer", "soft"]):
"""Apply the exposure mode for the GigaFRoST camera.
Parameters
@@ -295,7 +304,7 @@ class GigaFrostCamera(PSIDeviceBase, GigaFrostBase):
The camera's currently active fixed-number-of-frames mode.
"""
start_bit = self.cnt_startbit.get()
end_bit = self.cnt_startbit.get()
end_bit = self.cnt_endbit.get()
if not start_bit and not end_bit:
return "off"
@@ -522,6 +531,8 @@ class GigaFrostCamera(PSIDeviceBase, GigaFrostBase):
def destroy(self):
self.backend.shutdown()
if self.live_preview:
self.live_preview.stop()
super().destroy()
def _build_udp_header_table(self):
@@ -551,8 +562,24 @@ class GigaFrostCamera(PSIDeviceBase, GigaFrostBase):
return udp_header_table
def _on_preview_update(self, img:np.ndarray):
self._run_subs(sub_type=self.SUB_DEVICE_MONITOR_2D, obj=self, value=img)
def _on_preview_update(self, img: np.ndarray):
corrected_img = self.live_processing.apply_flat_dark_correction(img)
self.live_processing.on_new_data(corrected_img)
self.preview.put(corrected_img)
self._run_subs(sub_type=self.SUB_DEVICE_MONITOR_2D, obj=self, value=corrected_img)
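`_on_preview_update` now routes every frame through `apply_flat_dark_correction` before publishing it. The method's body is not shown in this diff; a plausible sketch of the standard flat/dark correction, including the division-by-zero guard mentioned in the commit history (function signature and `eps` are assumptions):

```python
import numpy as np


def apply_flat_dark_correction(img, dark, flat, eps=1e-6):
    """Standard correction (img - dark) / (flat - dark), guarding zero denominators."""
    denom = flat.astype(np.float64) - dark
    # Pixels where flat == dark (e.g. dead pixels) would divide by zero
    denom[np.abs(denom) < eps] = 1.0
    return (img.astype(np.float64) - dark) / denom


raw = np.array([[110.0, 10.0]])
dark = np.array([[10.0, 10.0]])
flat = np.array([[210.0, 10.0]])  # second pixel: flat == dark -> guarded
corrected = apply_flat_dark_correction(raw, dark, flat)
```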
def _on_count_update(self, count: int):
"""
Callback for the count update from the backend.
Updates the progress signal.
Args:
count (int): The current count of images acquired by the camera.
"""
expected_counts = cast(int, self.num_images.get())
self.progress.put(
value=count, max_value=expected_counts, done=bool(count == expected_counts)
)
def acq_done(self) -> DeviceStatus:
"""
@@ -572,6 +599,143 @@ class GigaFrostCamera(PSIDeviceBase, GigaFrostBase):
)
return status
def restart_with_new_config(
self,
name: str,
file_path: str = "",
file_prefix: str = "",
num_images: int | None = None,
frames_per_trigger: int | None = None,
) -> DeviceStatus:
"""
Restart the camera with a new configuration.
This method allows changing the file path, file prefix, and number of images.
Args:
name (str): Name of the configuration to be saved.
file_path (str): New file path for the acquisition. If empty, the current file path is used.
file_prefix (str): New file prefix for the acquisition. If empty, the current file prefix is used.
num_images (int | None): New number of images to acquire. If None, the current number of images is used.
frames_per_trigger (int | None): New number of frames per trigger. If None, the current value is used.
Returns:
DeviceStatus: The status of the restart operation. It resolves when the camera is ready to receive the first image.
"""
self.acq_configs[name] = {}
conf = {}
if file_path:
self.acq_configs[name]["file_path"] = self.file_path.get()
conf["file_path"] = file_path
if file_prefix:
self.acq_configs[name]["file_prefix"] = self.file_prefix.get()
conf["file_prefix"] = file_prefix
if num_images is not None:
self.acq_configs[name]["num_images"] = self.num_images.get()
conf["num_images"] = num_images
if frames_per_trigger is not None:
self.acq_configs[name]["cnt_num"] = self.cnt_num.get()
conf["cnt_num"] = frames_per_trigger
# Stop the camera and wait for it to become idle
status = self.stop_camera()
status.wait(timeout=10)
# update the configuration
self.configure(conf)
# Restart the camera with the new configuration
return self.start_camera()
def restore_config(self, name: str) -> None:
"""
Restore a previously saved configuration and restart the camera.
Args:
name (str): Name of the configuration to restore.
"""
status = self.stop_camera()
status.wait(timeout=10)
config = self.acq_configs.pop(name, {})
self.configure(config)
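`restart_with_new_config` and `restore_config` implement a named save-then-override pattern: the current value of each overridden signal is stashed under `name` in `acq_configs`, and `restore_config` pops it back. Stripped of the ophyd layer, the bookkeeping looks roughly like this (a hypothetical minimal model, not the device class):

```python
class ConfigStash:
    """Minimal model of the acq_configs save/override/restore bookkeeping."""

    def __init__(self, current: dict):
        self.current = current   # stands in for the ophyd signal values
        self.acq_configs = {}    # named snapshots of overridden values

    def restart_with_new_config(self, name: str, **overrides):
        self.acq_configs[name] = {}
        for key, val in overrides.items():
            if val is not None:
                # Remember the old value so it can be restored later
                self.acq_configs[name][key] = self.current[key]
                self.current[key] = val

    def restore_config(self, name: str):
        # pop() with a default mirrors the device code: an unknown name restores nothing
        self.current.update(self.acq_configs.pop(name, {}))


cam = ConfigStash({"file_prefix": "scan_", "num_images": 100})
cam.restart_with_new_config("flat", file_prefix="flat_", num_images=20)
cam.restore_config("flat")
```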
def update_live_processing_reference(
self, reference_type: Literal["dark", "flat"]
) -> StatusBase:
"""
Update the flat or dark reference for the live processing.
Args:
reference_type (Literal["dark", "flat"]): Type of the reference to update.
If 'dark', the dark reference will be updated; if 'flat', the flat reference will be updated.
Returns:
StatusBase: The status of the update operation.
"""
if reference_type not in ["dark", "flat"]:
raise ValueError("Invalid reference type! Must be 'dark' or 'flat'.")
# Use the current acquisition to update the reference
if self.live_processing is None:
raise RuntimeError("Live processing is not available. Cannot update reference.")
status = self.live_processing.update_reference_with_file(
reference_type=reference_type,
file_path=self.target_file,
entry="tomcat-gf/data", # type: ignore
wait=False, # Do not wait for the update to finish
)
return status
def start_camera(self) -> DeviceStatus:
"""
Start the camera and the backend.
Returns:
DeviceStatus: The status of the startup. It resolves when the backend is ready to receive the first image.
"""
status = DeviceStatus(self)
self.backend.add_status_callback(
status,
success=[StdDaqStatus.WAITING_FOR_FIRST_IMAGE],
error=[StdDaqStatus.REJECTED, StdDaqStatus.ERROR],
)
self.backend.start(
file_path=self.file_path.get(), # type: ignore
file_prefix=self.file_prefix.get(), # type: ignore
num_images=self.num_images.get(), # type: ignore
)
self.start_cam.set(1).wait()
def _emit_file_event(_status: DeviceStatus):
"""
Emit a file event when the camera is ready.
"""
self._run_subs(
sub_type=self.SUB_FILE_EVENT,
file_path=self.target_file,
done=False,
successful=False,
hinted_h5_entries={"data": "data"},
)
status.add_callback(_emit_file_event)
return status
def stop_camera(self) -> DeviceStatus:
"""Stop the camera acquisition and set it to idle state."""
self.set_idle()
status = DeviceStatus(self)
self.backend.add_status_callback(
status, success=[StdDaqStatus.IDLE], error=[StdDaqStatus.REJECTED, StdDaqStatus.ERROR]
)
self.backend.stop()
return status
@property
def target_file(self) -> str:
"""Return the target file path for the current acquisition."""
file_path = cast(str, self.file_path.get())
file_prefix = cast(str, self.file_prefix.get())
return os.path.join(file_path, f"{file_prefix.removesuffix('_')}.h5")
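The `target_file` property derives the HDF5 path from the configured directory and prefix, trimming a trailing underscore from the prefix before appending `.h5`. The same expression in isolation (the example path is made up):

```python
import os


def target_file(file_path: str, file_prefix: str) -> str:
    """Join directory and prefix into the final .h5 path, dropping a trailing '_'."""
    return os.path.join(file_path, f"{file_prefix.removesuffix('_')}.h5")


# A prefix like "run0001_" becomes "run0001.h5" inside the acquisition directory
path = target_file("/gpfs/test/test-beamline", "run0001_")
```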
########################################
# Beamline Specific Implementations #
########################################
@@ -590,10 +754,10 @@ class GigaFrostCamera(PSIDeviceBase, GigaFrostBase):
Default values for signals should be set here.
"""
# TODO: check if this can be moved to the config file
# pylint: disable=protected-access
self.auto_soft_enable._metadata["write_access"] = False
self.backend_url._metadata["write_access"] = False
# # TODO: check if this can be moved to the config file
# # pylint: disable=protected-access
# self.auto_soft_enable._metadata["write_access"] = False
# self.backend_url._metadata["write_access"] = False
self.auto_soft_enable.put(self._signals_to_be_set["auto_soft_enable"], force=True)
self.backend_url.put(self._signals_to_be_set["backend_url"], force=True)
@@ -601,25 +765,41 @@ class GigaFrostCamera(PSIDeviceBase, GigaFrostBase):
self.backend.connect()
if self.live_preview:
self.live_preview.start()
def on_stage(self) -> DeviceStatus | None:
"""
Called while staging the device.
Information about the upcoming scan can be accessed from the scan_info object.
"""
# Gigafrost can finish a run without explicit unstaging
# If the camera is busy, stop it first
if self.busy_stat.value:
logger.warning("Camera is already running, unstaging it first!")
self.unstage()
sleep(0.5)
self.stop_camera()
scan_msg = self.scan_info.msg
if scan_msg is None or scan_msg.request_inputs is None or scan_msg.scan_parameters is None:
# I don't think this can happen outside of tests, but just in case
logger.warning(
f"[{self.name}] Scan message is not available or incomplete. "
"Cannot configure the GigaFrost camera."
)
self.acq_configs = {}
return
scan_args = {
**scan_msg.request_inputs["inputs"],
**scan_msg.request_inputs["kwargs"],
**scan_msg.request_inputs.get("inputs", {}),
**scan_msg.request_inputs.get("kwargs", {}),
**scan_msg.scan_parameters,
}
if "file_path" not in scan_args:
scan_args["file_path"] = (
"/gpfs/test/test-beamline" # FIXME: This should be from the scan message
)
if "file_prefix" not in scan_args:
scan_args["file_prefix"] = scan_msg.info["file_components"][0].split("/")[-1] + "_"
self.configure(scan_args)
# Sync if out of sync
@@ -634,29 +814,26 @@ class GigaFrostCamera(PSIDeviceBase, GigaFrostBase):
)
self.num_images.set(num_points).wait()
# reset the acquisition configs
self.acq_configs = {}
def on_unstage(self) -> DeviceStatus | None:
"""Called while unstaging the device."""
# Switch to idle
self.set_idle()
logger.info(f"StdDaq status on unstage: {self.backend.status}")
self.backend.stop()
return self.stop_camera()
def on_pre_scan(self) -> DeviceStatus | None:
"""Called right before the scan starts on all devices automatically."""
# Switch to acquiring
self.backend.start(
file_path=self.file_path.get(),
file_prefix=self.file_prefix.get(),
num_images=self.num_images.get(),
)
self.start_cam.set(1).wait()
return self.start_camera()
def on_trigger(self) -> DeviceStatus | None:
def on_trigger(self) -> DeviceStatus | StatusBase | None:
"""Called when the device is triggered."""
if self.busy_stat.get() in (0, "IDLE"):
raise RuntimeError("GigaFrost must be running before triggering")
logger.warning(f"[{self.name}] SW triggering gigafrost")
logger.info(f"[{self.name}] SW triggering gigafrost")
# Soft triggering based on operation mode
if (
@@ -666,24 +843,38 @@ class GigaFrostCamera(PSIDeviceBase, GigaFrostBase):
):
# BEC teststand operation mode: posedge of SoftEnable if Started
self.soft_enable.set(0).wait()
self.soft_enable.set(1).wait()
else:
self.soft_trig.set(1).wait()
return self.soft_enable.set(1)
return self.soft_trig.set(1)
def on_complete(self) -> DeviceStatus | None:
"""Called to inquire if a device has completed a scans."""
return self.acq_done()
def _create_dataset(_status: DeviceStatus):
self.backend.create_virtual_datasets(
self.file_path.get(), file_prefix=self.file_prefix.get() # type: ignore
)
self._run_subs(
sub_type=self.SUB_FILE_EVENT,
file_path=self.target_file,
done=True,
successful=True,
hinted_location={"data": "data"},
)
status = self.acq_done()
status.add_callback(_create_dataset)
return status
def on_kickoff(self) -> DeviceStatus | None:
"""Called to kickoff a device for a fly scan. Has to be called explicitly."""
def on_stop(self) -> None:
def on_stop(self) -> DeviceStatus:
"""Called when the device is stopped."""
return self.on_unstage()
return self.stop_camera()
# Automatically connect to MicroSAXS testbench if directly invoked
if __name__ == "__main__":
if __name__ == "__main__": # pragma: no cover
gf = GigaFrostCamera(
"X02DA-CAM-GF2:", name="gf2", backend_url="http://xbl-daq-28:8080", auto_soft_enable=True
)

View File

@@ -1,187 +0,0 @@
# -*- coding: utf-8 -*-
"""
Standard DAQ preview image stream module
Created on Thu Jun 27 17:28:43 2024
@author: mohacsi_i
"""
from time import sleep, time
from threading import Thread
import zmq
from ophyd import Device, Signal, Component, Kind
from ophyd_devices.interfaces.base_classes.psi_detector_base import (
CustomDetectorMixin,
PSIDetectorBase,
)
from bec_lib import bec_logger
logger = bec_logger.logger
ZMQ_TOPIC_FILTER = b""
class PcoTestConsumerMixin(CustomDetectorMixin):
"""Setup class for the standard DAQ preview stream
Parent class: CustomDetectorMixin
"""
# pylint: disable=protected-access
def on_stage(self):
"""Start listening for preview data stream"""
if self.parent._mon is not None:
self.parent.unstage()
sleep(0.5)
self.parent.connect()
self._stop_polling = False
self.parent._mon = Thread(target=self.poll, daemon=True)
self.parent._mon.start()
def on_unstage(self):
"""Stop a running preview"""
if self.parent._mon is not None:
self._stop_polling = True
# Might hang on recv_multipart
self.parent._mon.join(timeout=1)
# So also disconnect the socket
self.parent.disconnect()
def on_stop(self):
"""Stop a running preview"""
self.parent.disconnect()
def poll(self):
"""Collect streamed updates"""
try:
t_last = time()
while True:
try:
# Exit loop and finish monitoring
if self._stop_polling:
logger.info(f"[{self.parent.name}]\tDetaching monitor")
break
# pylint: disable=no-member
r = self.parent._socket.recv()
# Length and throttling checks
t_curr = time()
t_elapsed = t_curr - t_last
if t_elapsed < self.parent.throttle.get():
continue
# # Unpack the Array V1 reply to metadata and array data
# meta, data = r
# print(meta)
# # Update image and update subscribers
# header = json.loads(meta)
# if header["type"] == "uint16":
# image = np.frombuffer(data, dtype=np.uint16)
# if image.size != np.prod(header['shape']):
# err = f"Unexpected array size of {image.size} for header: {header}"
# raise ValueError(err)
# image = image.reshape(header['shape'])
# # Update image and update subscribers
# self.parent.frame.put(header['frame'], force=True)
# self.parent.image_shape.put(header['shape'], force=True)
# self.parent.image.put(image, force=True)
# self.parent._last_image = image
# self.parent._run_subs(sub_type=self.parent.SUB_MONITOR, value=image)
t_last = t_curr
# logger.info(
# f"[{self.parent.name}] Updated frame {header['frame']}\t"
# f"Shape: {header['shape']}\tMean: {np.mean(image):.3f}"
# )
except ValueError:
# Happens when ZMQ partially delivers the multipart message
pass
except zmq.error.Again:
# Happens when receive queue is empty
sleep(0.1)
except Exception as ex:
logger.info(f"[{self.parent.name}]\t{str(ex)}")
raise
finally:
try:
self.parent._socket.disconnect()
except RuntimeError:
pass
self.parent._mon = None
logger.info(f"[{self.parent.name}]\tDetaching monitor")
class PcoTestConsumer(PSIDetectorBase):
"""Detector wrapper class around the StdDaq preview image stream.
This was meant to provide a live image stream directly from the StdDAQ.
Note that the preview stream must already be throttled in order to cope
with the incoming data; the Python class might throttle it further.
You can add a preview widget to the dock by:
cam_widget = gui.add_dock('cam_dock1').add_widget('BECFigure').image('daq_stream1')
"""
# Subscriptions for plotting image
USER_ACCESS = ["get_last_image"]
SUB_MONITOR = "device_monitor_2d"
_default_sub = SUB_MONITOR
custom_prepare_cls = PcoTestConsumerMixin
# Status attributes
url = Component(Signal, kind=Kind.config)
throttle = Component(Signal, value=0.25, kind=Kind.config)
frame = Component(Signal, kind=Kind.hinted)
image_shape = Component(Signal, kind=Kind.normal)
# FIXME: The BEC client caches the read()s from the last 50 scans
image = Component(Signal, kind=Kind.omitted)
_last_image = None
_mon = None
_socket = None
def __init__(
self, *args, url: str = "tcp://129.129.95.38:20000", parent: Device = None, **kwargs
) -> None:
super().__init__(*args, parent=parent, **kwargs)
self.url._metadata["write_access"] = False
self.image._metadata["write_access"] = False
self.frame._metadata["write_access"] = False
self.image_shape._metadata["write_access"] = False
self.url.set(url, force=True).wait()
def connect(self):
"""Connect to te StDAQs PUB-SUB streaming interface
StdDAQ may reject connection for a few seconds when it restarts,
so if it fails, wait a bit and try to connect again.
"""
# pylint: disable=no-member
# Socket to talk to server
context = zmq.Context()
self._socket = context.socket(zmq.PULL)
try:
self._socket.connect(self.url.get())
except ConnectionRefusedError:
sleep(1)
self._socket.connect(self.url.get())
def disconnect(self):
"""Disconnect"""
try:
if self._socket is not None:
self._socket.disconnect(self.url.get())
except zmq.ZMQError:
pass
finally:
self._socket = None
def get_image(self):
return self._last_image
# Automatically connect to MicroSAXS testbench if directly invoked
if __name__ == "__main__":
daq = PcoTestConsumerMixin(url="tcp://129.129.106.124:8080", name="preview")
daq.wait_for_connection()

View File

@@ -1,473 +0,0 @@
# -*- coding: utf-8 -*-
"""
Created on Wed Dec 6 11:33:54 2023
@author: mohacsi_i
"""
import time
from ophyd import Component, EpicsSignal, EpicsSignalRO, Kind
from ophyd.status import SubscriptionStatus, DeviceStatus
from ophyd_devices import BECDeviceBase
from ophyd_devices.interfaces.base_classes.psi_detector_base import (
CustomDetectorMixin as CustomPrepare,
)
try:
from bec_lib import bec_logger
logger = bec_logger.logger
except ModuleNotFoundError:
import logging
logger = logging.getLogger("PcoEdgeCam")
class PcoEdgeCameraMixin(CustomPrepare):
"""Mixin class to setup the Helge camera bae class.
This class will be called by the custom_prepare_cls attribute of the detector class.
"""
# pylint: disable=protected-access
def on_stage(self) -> None:
"""Configure and arm PCO.Edge camera for acquisition"""
# PCO can finish a run without explicit unstaging
if self.parent.state not in ("IDLE"):
logger.warning(
f"Trying to stage the camera from state {self.parent.state}, unstaging it first!"
)
self.parent.unstage()
time.sleep(0.5)
# Fish out our configuration from scaninfo (via explicit or generic addressing)
scanparam = self.parent.scaninfo.scan_msg.info
d = {}
if "kwargs" in scanparam:
scanargs = scanparam["kwargs"]
if "exp_burst" in scanargs and scanargs["exp_burst"] is not None:
d["exposure_num_burst"] = scanargs["exp_burst"]
if "image_width" in scanargs and scanargs["image_width"] is not None:
d["image_width"] = scanargs["image_width"]
if "image_height" in scanargs and scanargs["image_height"] is not None:
d["image_height"] = scanargs["image_height"]
if "exp_time" in scanargs and scanargs["exp_time"] is not None:
d["exposure_time_ms"] = scanargs["exp_time"]
if "exp_period" in scanargs and scanargs["exp_period"] is not None:
d["exposure_period_ms"] = scanargs["exp_period"]
# if 'exp_burst' in scanargs and scanargs['exp_burst'] is not None:
# d['exposure_num_burst'] = scanargs['exp_burst']
# if 'acq_mode' in scanargs and scanargs['acq_mode'] is not None:
# d['acq_mode'] = scanargs['acq_mode']
# elif self.parent.scaninfo.scan_type == "step":
# d['acq_mode'] = "default"
if "pco_store_mode" in scanargs and scanargs["pco_store_mode"] is not None:
d["store_mode"] = scanargs["pco_store_mode"]
if "pco_data_format" in scanargs and scanargs["pco_data_format"] is not None:
d["data_format"] = scanargs["pco_data_format"]
# Perform bluesky-style configuration
if len(d) > 0:
logger.warning(f"[{self.parent.name}] Configuring with:\n{d}")
self.parent.configure(d=d)
# ARM the camera
self.parent.bluestage()
def on_unstage(self) -> None:
"""Disarm the PCO.Edge camera"""
self.parent.blueunstage()
def on_stop(self) -> None:
"""Stop the PCO.Edge camera"""
self.parent.blueunstage()
def on_trigger(self) -> None | DeviceStatus:
"""Trigger mode operation
Use it to repeatedly record a fixed number of frames and send them to the stdDAQ. The method waits
for the acquisition and data transfer to complete.
NOTE: Maciej confirmed that sparse data is no problem for the stdDAQ.
TODO: Optimize data transfer to launch at end and check completion at the beginning.
"""
# Ensure that previous data transfer finished
# def sentIt(*args, value, timestamp, **kwargs):
# return value==0
# status = SubscriptionStatus(self.parent.file_savebusy, sentIt, timeout=120)
# status.wait()
# Not sure if it always sends the first batch of images or the newest
def wait_bufferreset(*, old_value, value, timestamp, **_):
return (value < old_value) or (value == 0)
self.parent.buffer_clear.set(1).wait()
status = SubscriptionStatus(self.parent.buffer_used, wait_bufferreset, timeout=5)
status.wait()
t_expected = (
self.parent.acquire_time.get() + self.parent.acquire_delay.get()
) * self.parent.file_savestop.get()
# Wait until the buffer fills up with enough images
def wait_acquisition(*, value, timestamp, **_):
num_target = self.parent.file_savestop.get()
# logger.warning(f"{value} of {num_target}")
return bool(value >= num_target)
max_wait = max(5, 5 * t_expected)
status = SubscriptionStatus(
self.parent.buffer_used, wait_acquisition, timeout=max_wait, settle_time=0.2
)
status.wait()
# Then start file transfer (need to get the save busy flag update)
# self.parent.file_transfer.set(1, settle_time=0.2).wait()
self.parent.file_transfer.set(1).wait()
# And wait until the images have been sent
# NOTE: this does not wait for new value, the first check will be
# against values from the previous cycle, i.e. pass automatically.
t_start = time.time()
def wait_sending(*args, old_value, value, timestamp, **kwargs):
t_elapsed = timestamp - t_start
# logger.warning(f"{old_value}\t{value}\t{t_elapsed}")
return old_value == 1 and value == 0 and t_elapsed > 0
status = SubscriptionStatus(
self.parent.file_savebusy, wait_sending, timeout=120, settle_time=0.2
)
status.wait()
class HelgeCameraBase(BECDeviceBase):
"""Ophyd baseclass for Helge camera IOCs
This class provides wrappers for Helge's camera IOCs around SwissFEL and
for high performance SLS 2.0 cameras. The IOC's operation is a bit arcane
and there are different versions and cameras all around. So this device
only covers the absolute basics.
Probably the most important part is the configuration state machine. Since
SET_PARAMS takes care of buffer allocations it might take some time, and a
full re-configuration is required every time we change the binning, ROI,
etc. This is performed automatically upon starting an exposure (if it
hasn't been done already).
The status flag state machine during re-configuration is:
BUSY low, SET low -> BUSY high, SET low -> BUSY low, SET high -> BUSY low, SET low
UPDATE: Data sending operation modes
- Switch to ZMQ streaming by setting FILEFORMAT to ZEROMQ
- Set SAVESTART and SAVESTOP to select a ROI of image indices
- Start file transfer with FTRANSFER.
The ZMQ connection operates in PUSH-PULL mode, i.e. it needs an incoming connection.
STOREMODE sets the acquisition mode:
if STOREMODE == Recorder
Fills up the buffer with images. Here SAVESTART and SAVESTOP select a ROI
of image indices to be streamed out (i.e. at most buffer_size images)
if STOREMODE == FIFO buffer
Continuously streams out data using the buffer as a FIFO queue.
Here SAVESTART and SAVESTOP select a ROI of image indices to be streamed continuously
(i.e. a large SAVESTOP streams indefinitely). Note that in FIFO mode buffer reads are
destructive; to prevent this, we don't have an EPICS preview
"""
# ########################################################################
# General hardware info (in AD nomenclature)
manufacturer = Component(EpicsSignalRO, "QUERY", kind=Kind.config, doc="Camera model info")
model = Component(EpicsSignalRO, "BOARD", kind=Kind.omitted, doc="Camera board info")
# ########################################################################
# Acquisition commands
camStatusCmd = Component(EpicsSignal, "CAMERASTATUS", put_complete=True, kind=Kind.config)
# ########################################################################
# Acquisition configuration (in AD nomenclature)
acquire_time = Component(
EpicsSignal, "EXPOSURE", put_complete=True, auto_monitor=True, kind=Kind.config
)
acquire_delay = Component(
EpicsSignal, "DELAY", put_complete=True, auto_monitor=True, kind=Kind.config
)
trigger_mode = Component(
EpicsSignal, "TRIGGER", put_complete=True, auto_monitor=True, kind=Kind.config
)
# ########################################################################
# Image size configuration (in AD nomenclature)
bin_x = Component(EpicsSignal, "BINX", put_complete=True, auto_monitor=True, kind=Kind.config)
bin_y = Component(EpicsSignal, "BINY", put_complete=True, auto_monitor=True, kind=Kind.config)
array_size_x = Component(
EpicsSignalRO, "WIDTH", auto_monitor=True, kind=Kind.config, doc="Final image width"
)
array_size_y = Component(
EpicsSignalRO, "HEIGHT", auto_monitor=True, kind=Kind.config, doc="Final image height"
)
# ########################################################################
# General hardware info
camError = Component(EpicsSignalRO, "ERRCODE", auto_monitor=True, kind=Kind.config)
camWarning = Component(EpicsSignalRO, "WARNCODE", auto_monitor=True, kind=Kind.config)
# ########################################################################
# Buffer configuration
bufferRecMode = Component(EpicsSignalRO, "RECMODE", auto_monitor=True, kind=Kind.config)
bufferStoreMode = Component(EpicsSignal, "STOREMODE", auto_monitor=True, kind=Kind.config)
fileRecMode = Component(EpicsSignalRO, "RECMODE", auto_monitor=True, kind=Kind.config)
buffer_used = Component(EpicsSignalRO, "PIC_BUFFER", auto_monitor=True, kind=Kind.normal)
buffer_size = Component(EpicsSignalRO, "PIC_MAX", auto_monitor=True, kind=Kind.normal)
buffer_clear = Component(EpicsSignal, "CLEARMEM", put_complete=True, kind=Kind.omitted)
# ########################################################################
# File saving interface
cam_data_rate = Component(EpicsSignalRO, "CAMRATE", auto_monitor=True, kind=Kind.normal)
file_data_rate = Component(EpicsSignalRO, "FILERATE", auto_monitor=True, kind=Kind.normal)
file_savestart = Component(EpicsSignal, "SAVESTART", put_complete=True, kind=Kind.config)
file_savestop = Component(EpicsSignal, "SAVESTOP", put_complete=True, kind=Kind.config)
file_format = Component(EpicsSignal, "FILEFORMAT", put_complete=True, kind=Kind.config)
file_transfer = Component(EpicsSignal, "FTRANSFER", put_complete=True, kind=Kind.config)
file_savebusy = Component(EpicsSignalRO, "FILESAVEBUSY", auto_monitor=True, kind=Kind.normal)
# ########################################################################
# Configuration state machine with separate transition states
camStatusCode = Component(EpicsSignalRO, "STATUSCODE", auto_monitor=True, kind=Kind.config)
camSetParam = Component(EpicsSignal, "SET_PARAM", auto_monitor=True, kind=Kind.config)
camSetParamBusy = Component(
EpicsSignalRO, "BUSY_SET_PARAM", auto_monitor=True, kind=Kind.config
)
camCamera = Component(EpicsSignalRO, "CAMERA", auto_monitor=True, kind=Kind.config)
camCameraBusy = Component(EpicsSignalRO, "BUSY_CAMERA", auto_monitor=True, kind=Kind.config)
camInit = Component(EpicsSignalRO, "INIT", auto_monitor=True, kind=Kind.config)
camInitBusy = Component(EpicsSignalRO, "BUSY_INIT", auto_monitor=True, kind=Kind.config)
# ########################################################################
# Throttled image preview
image = Component(EpicsSignalRO, "FPICTURE", kind=Kind.omitted, doc="Throttled image preview")
# ########################################################################
# Misc PVs
# camRemoval = Component(EpicsSignalRO, "REMOVAL", auto_monitor=True, kind=Kind.config)
camStateString = Component(
EpicsSignalRO, "SS_CAMERA", string=True, auto_monitor=True, kind=Kind.config
)
@property
def state(self) -> str:
"""Single word camera state"""
if self.camSetParamBusy.value:
return "BUSY"
if self.camStatusCode.value == 2 and self.camInit.value == 1:
return "IDLE"
if self.camStatusCode.value == 6 and self.camInit.value == 1:
return "RUNNING"
# if self.camRemoval.value==0 and self.camInit.value==0:
if self.camInit.value == 0:
return "OFFLINE"
# if self.camRemoval.value:
# return "REMOVED"
return "UNKNOWN"
@state.setter
def state(self, value):
raise RuntimeError("State is a ReadOnly property")
def configure(self, d: dict = None) -> tuple:
"""Configure the base Helge camera device
Parameters as 'd' dictionary
----------------------------
num_images : int
Number of images to be taken during each scan. Meaning depends on
store mode.
exposure_time_ms : float
Exposure time [ms], usually gets set back to 20 ms
exposure_period_ms : float
Exposure period [ms], up to 200 ms.
store_mode : str
Buffer operation mode
*'Recorder' to record in buffer
*'FIFO buffer' for continuous streaming
data_format : str
Usually set to 'ZEROMQ'
"""
if self.state not in ("IDLE",):
raise RuntimeError(f"Can't change configuration from state {self.state}")
# If Bluesky style configure
if d is not None:
# Commonly changed settings
if "exposure_num_burst" in d:
self.file_savestop.set(d["exposure_num_burst"]).wait()
if "exposure_time_ms" in d:
self.acquire_time.set(d["exposure_time_ms"]).wait()
if "exposure_period_ms" in d:
self.acquire_delay.set(d["exposure_period_ms"]).wait()
if "store_mode" in d:
self.bufferStoreMode.set(d["store_mode"]).wait()
if "data_format" in d:
self.file_format.set(d["data_format"]).wait()
# State machine
# Initial: BUSY and SET both low
# 0. Write 1 to SET_PARAM
# 1. BUSY goes high, SET stays low
# 2. BUSY goes low, SET goes high
# 3. BUSY stays low, SET goes low
# So we need a 'negedge' on SET_PARAM
self.camSetParam.set(1).wait()
def negedge(*, old_value, value, timestamp, **_):
return bool(old_value and not value)
# Subscribe and wait for update
status = SubscriptionStatus(self.camSetParam, negedge, timeout=5, settle_time=0.5)
status.wait()
def bluestage(self):
"""Bluesky style stage: arm the detector"""
logger.warning("Staging PCO")
# Acquisition is only allowed when the IOC is not busy
if self.state in ("OFFLINE", "BUSY", "REMOVED", "RUNNING"):
raise RuntimeError(f"Camera is in state: {self.state}")
if (
self.bufferStoreMode.get() in ("Recorder", 0)
and self.file_savestop.get() > self.buffer_size.get()
):
logger.warning(
f"You'll send empty images, {self.file_savestop.get()} is above buffer size"
)
# Start the acquisition (this sets parameters and starts acquisition)
self.camStatusCmd.set("Running").wait()
# Subscribe and wait for update
def is_running(*, value, timestamp, **_):
return bool(value == 6)
status = SubscriptionStatus(self.camStatusCode, is_running, timeout=5, settle_time=0.2)
status.wait()
def blueunstage(self):
"""Bluesky style unstage: stop the detector"""
self.camStatusCmd.set("Idle").wait()
# Data streaming is stopped by setting the max index to 0
# FIXME: This might interrupt data transfer
self.file_savestop.set(0).wait()
def bluekickoff(self):
"""Start data transfer
TODO: Need to revisit this once triggering is complete
"""
self.file_transfer.set(1).wait()
class PcoEdge5M(HelgeCameraBase):
"""Ophyd baseclass for PCO.Edge cameras
This class provides wrappers for Helge's camera IOCs around SwissFEL and
for high performance SLS 2.0 cameras. These are mostly PCO cameras running
on a special Windows IOC host with lots of RAM and CPU power.
"""
custom_prepare_cls = PcoEdgeCameraMixin
USER_ACCESS = ["bluestage", "blueunstage", "bluekickoff"]
# ########################################################################
# Additional status info
busy = Component(EpicsSignalRO, "BUSY", auto_monitor=True, kind=Kind.config)
camState = Component(EpicsSignalRO, "SS_CAMERA", auto_monitor=True, kind=Kind.config)
camProgress = Component(EpicsSignalRO, "CAMPROGRESS", auto_monitor=True, kind=Kind.config)
camRate = Component(EpicsSignalRO, "CAMRATE", auto_monitor=True, kind=Kind.config)
# ########################################################################
# Acquisition configuration
acqMode = Component(EpicsSignalRO, "ACQMODE", auto_monitor=True, kind=Kind.config)
acqDelay = Component(EpicsSignalRO, "DELAY", auto_monitor=True, kind=Kind.config)
acqTriggerEna = Component(EpicsSignalRO, "TRIGGER", auto_monitor=True, kind=Kind.config)
# acqTriggerSource = Component(
# EpicsSignalRO, "TRIGGERSOURCE", auto_monitor=True, kind=Kind.config)
# acqTriggerEdge = Component(EpicsSignalRO, "TRIGGEREDGE", auto_monitor=True, kind=Kind.config)
# ########################################################################
# Image size settings
# Priority is: binning -> roi -> final size
pxRoiX_lo = Component(
EpicsSignal, "REGIONX_START", put_complete=True, auto_monitor=True, kind=Kind.config
)
pxRoiX_hi = Component(
EpicsSignal, "REGIONX_END", put_complete=True, auto_monitor=True, kind=Kind.config
)
pxRoiY_lo = Component(
EpicsSignal, "REGIONY_START", put_complete=True, auto_monitor=True, kind=Kind.config
)
pxRoiY_hi = Component(
EpicsSignal, "REGIONY_END", put_complete=True, auto_monitor=True, kind=Kind.config
)
def configure(self, d: dict = None) -> tuple:
"""
Camera configuration instructions:
After setting the corresponding PVs, one needs to process SET_PARAM and wait until
BUSY_SET_PARAM goes high and low, followed by SET_PARAM goes high and low. This will
both send the settings to the camera and allocate the necessary buffers in the correct
size and shape (that takes time). Starting the exposure with CAMERASTATUS will also
call SET_PARAM, but it might take long.
NOTE:
The camera IOC will automatically round up RoiX coordinates to the
next multiple of 160. This means that configure can only change the image
width in steps of 320 pixels (or 160 when setting the ROI manually).
Parameters as 'd' dictionary
----------------------------
exposure_time_ms : float, optional
Exposure time [ms].
exposure_period_ms : float, optional
Exposure period [ms], ignored in soft trigger mode.
image_width : int, optional
ROI size in the x-direction, multiple of 320 [pixels]
image_height : int, optional
ROI size in the y-direction, multiple of 2 [pixels]
image_binx : int, optional
Binning along image width [pixels]
image_biny: int, optional
Binning along image height [pixels]
acq_mode : str, not yet implemented
Select one of the pre-configured trigger behavior
"""
if d is not None:
# Need to be smart how we set the ROI....
# Image sensor is 2560x2160 (X and Y)
# Values are rounded to multiples of 16
if "image_width" in d and d["image_width"] is not None:
width = d["image_width"]
self.pxRoiX_lo.set(2560 / 2 - width / 2).wait()
self.pxRoiX_hi.set(2560 / 2 + width / 2).wait()
if "image_height" in d and d["image_height"] is not None:
height = d["image_height"]
self.pxRoiY_lo.set(2160 / 2 - height / 2).wait()
self.pxRoiY_hi.set(2160 / 2 + height / 2).wait()
if "image_binx" in d and d["image_binx"] is not None:
self.bin_x.set(d["image_binx"]).wait()
if "image_biny" in d and d["image_biny"] is not None:
self.bin_y.set(d["image_biny"]).wait()
# Call super() to commit the changes
return super().configure(d)
# Automatically connect to test camera if directly invoked
if __name__ == "__main__":
# Drive data collection
cam = PcoEdge5M("X02DA-CCDCAM2:", name="mcpcam")
cam.wait_for_connection()
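The SET_PARAM handshake used by `configure()` above (BUSY low/SET low, then BUSY high, then SET high, then both low, with the configuration committed on the falling edge of SET) can be illustrated with a small offline sketch. This is a hypothetical, standalone helper for illustration only, not part of the IOC interface; `handshake_complete` and `EXPECTED_SEQUENCE` are names invented here.

```python
# Offline sketch of the SET_PARAM handshake described in configure().
# Expected (BUSY, SET) flag sequence after writing 1 to SET_PARAM:
# (0, 0) -> (1, 0) -> (0, 1) -> (0, 0)
# The configuration is committed on the falling edge ("negedge") of SET.

EXPECTED_SEQUENCE = [(0, 0), (1, 0), (0, 1), (0, 0)]


def negedge(old_value, value):
    """Mirror of the subscription callback: True on a 1 -> 0 transition."""
    return bool(old_value and not value)


def handshake_complete(flag_trace):
    """Return True once a (BUSY, SET) trace contains the full handshake."""
    if flag_trace[: len(EXPECTED_SEQUENCE)] != EXPECTED_SEQUENCE:
        return False
    # The negedge of SET marks the end of the transaction
    set_values = [s for _, s in flag_trace]
    return any(negedge(a, b) for a, b in zip(set_values, set_values[1:]))
```

This mirrors why the ophyd code subscribes with a negedge predicate: only the final SET transition signals that the camera has accepted the parameters and finished (re)allocating buffers.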


@@ -1,512 +0,0 @@
# -*- coding: utf-8 -*-
"""
Standard DAQ control interface module through the websocket API
Created on Thu Jun 27 17:28:43 2024
@author: mohacsi_i
"""
import json
from time import sleep
from threading import Thread
import requests
import os
from ophyd import Signal, Component, Kind
from ophyd.status import SubscriptionStatus
from websockets.sync.client import connect, ClientConnection
from websockets.exceptions import ConnectionClosedOK, ConnectionClosedError
from ophyd_devices.interfaces.base_classes.psi_detector_base import PSIDetectorBase as PSIDeviceBase
from ophyd_devices.interfaces.base_classes.psi_detector_base import (
CustomDetectorMixin as CustomDeviceMixin,
)
from bec_lib import bec_logger
logger = bec_logger.logger
class StdDaqMixin(CustomDeviceMixin):
# pylint: disable=protected-access
_mon = None
def on_stage(self) -> None:
"""Configuration and staging
In the BEC model ophyd devices must fish out their own configuration from the 'scaninfo'.
I.e. they need to know which parameters are relevant for them at each scan.
NOTE: Tomcat might use multiple cameras with their own separate DAQ instances.
"""
# Fish out our configuration from scaninfo (via explicit or generic addressing)
# NOTE: Scans don't have to fully configure the device
d = {}
if "kwargs" in self.parent.scaninfo.scan_msg.info:
scanargs = self.parent.scaninfo.scan_msg.info["kwargs"]
if "image_width" in scanargs and scanargs["image_width"] is not None:
d["image_width"] = scanargs["image_width"]
if "image_height" in scanargs and scanargs["image_height"] is not None:
d["image_height"] = scanargs["image_height"]
if "nr_writers" in scanargs and scanargs["nr_writers"] is not None:
d["nr_writers"] = scanargs["nr_writers"]
if "file_path" in scanargs and scanargs["file_path"] is not None:
self.parent.file_path.set(scanargs["file_path"].replace("data", "gpfs")).wait()
logger.info(scanargs["file_path"])
if not os.path.isdir(scanargs["file_path"]):
logger.info("Creating data directory")
try:
os.makedirs(scanargs["file_path"], 0o777)
os.system("chmod -R 777 " + scanargs["base_path"])
except OSError:
logger.warning("Problem with creating data directory")
if "file_prefix" in scanargs and scanargs["file_prefix"] is not None:
self.parent.file_prefix.set(scanargs["file_prefix"]).wait()
if "daq_num_points" in scanargs:
d["num_points_total"] = scanargs["daq_num_points"]
else:
# Try to figure out number of points
num_points = 1
points_valid = False
if "steps" in scanargs and scanargs["steps"] is not None:
num_points *= scanargs["steps"]
points_valid = True
if "exp_burst" in scanargs and scanargs["exp_burst"] is not None:
num_points *= scanargs["exp_burst"]
points_valid = True
if "repeats" in scanargs and scanargs["repeats"] is not None:
num_points *= scanargs["repeats"]
points_valid = True
if points_valid:
d["num_points_total"] = num_points
# Perform bluesky-style configuration
if len(d) > 0:
# Configure new run (will restart the stdDAQ)
logger.warning(f"[{self.parent.name}] stdDAQ needs reconfiguring with:\n{d}")
self.parent.configure(d=d)
# Wait for REST API to kill the DAQ
sleep(0.5)
# Try to start a new run (reconnects)
self.parent.bluestage()
# And start status monitoring
self._mon = Thread(target=self.monitor, daemon=True)
self._mon.start()
def on_unstage(self):
"""Stop a running acquisition and close connection"""
logger.info("Creating virtual dataset")
self.parent.create_virtual_dataset()
self.parent.blueunstage()
def on_stop(self):
"""Stop a running acquisition and close connection"""
self.parent.blueunstage()
def monitor(self) -> None:
"""Monitor status messages while the connection is open. This blocks
reply monitoring, so a later unstage() might throw. Status updates are sent
every second, but finishing an acquisition means the StdDAQ closes the
connection, so there is no idle-state polling.
"""
try:
sleep(0.2)
for msg in self.parent._wsclient:
message = json.loads(msg)
self.parent.runstatus.put(message["status"], force=True)
# logger.info(f"[{self.parent.name}] Pushed status: {message['status']}")
except (ConnectionClosedError, ConnectionClosedOK, AssertionError):
# The library raises these after the connection is closed
return
except Exception as ex:
logger.warning(f"[{self.parent.name}] Exception in polling: {ex}")
return
finally:
self._mon = None
class StdDaqClient(PSIDeviceBase):
"""StdDaq API
This class combines the new websocket and REST interfaces of the stdDAQ. The websocket
interface starts and stops the acquisition and provides status, while the REST interface
can read and write the JSON configuration file. The stdDAQ needs to restart all services
to reconfigure with a new config, which might corrupt
the currently written files (fix is underway).
Example:
```
daq = StdDaqClient(name="daq", ws_url="ws://xbl-daq-29:8080", rest_url="http://xbl-daq-29:5000")
```
"""
# pylint: disable=too-many-instance-attributes
custom_prepare_cls = StdDaqMixin
USER_ACCESS = [
"set_daq_config",
"get_daq_config",
"nuke",
"connect",
"message",
"state",
"bluestage",
"blueunstage",
]
_wsclient = None
# Status attributes
ws_url = Component(Signal, kind=Kind.config, metadata={"write_access": False})
runstatus = Component(
Signal, value="unknown", kind=Kind.normal, metadata={"write_access": False}
)
num_images = Component(Signal, value=10000, kind=Kind.config)
file_path = Component(Signal, value="/gpfs/test/test-beamline", kind=Kind.config)
file_prefix = Component(Signal, value="file", kind=Kind.config)
# Configuration attributes
rest_url = Component(Signal, kind=Kind.config, metadata={"write_access": False})
cfg_detector_name = Component(Signal, kind=Kind.config)
cfg_detector_type = Component(Signal, kind=Kind.config)
cfg_bit_depth = Component(Signal, kind=Kind.config)
cfg_pixel_height = Component(Signal, kind=Kind.config)
cfg_pixel_width = Component(Signal, kind=Kind.config)
cfg_nr_writers = Component(Signal, kind=Kind.config)
def __init__(
self,
prefix="",
*,
name,
kind=None,
read_attrs=None,
configuration_attrs=None,
parent=None,
device_manager=None,
ws_url: str = "ws://localhost:8080",
rest_url: str = "http://localhost:5000",
data_source_name=None,
**kwargs,
) -> None:
super().__init__(
prefix=prefix,
name=name,
kind=kind,
read_attrs=read_attrs,
configuration_attrs=configuration_attrs,
parent=parent,
device_manager=device_manager,
**kwargs,
)
self.ws_url.set(ws_url, force=True).wait()
self.rest_url.set(rest_url, force=True).wait()
self.data_source_name = data_source_name
# Connect to the DAQ and initialize values
try:
self.get_daq_config(update=True)
except Exception as ex:
logger.error(f"Failed to connect to the stdDAQ REST API\n{ex}")
def connect(self) -> ClientConnection:
"""Connect to the StdDAQ's websockets interface
StdDAQ may reject connections for a few seconds after a restart,
so if it fails, wait a bit and try to connect again.
"""
num_retry = 0
while num_retry < 5:
try:
logger.debug(f"[{self.name}] Connecting to stdDAQ at {self.ws_url.get()}")
connection = connect(self.ws_url.get())
logger.debug(f"[{self.name}] Connected to stdDAQ after {num_retry} tries")
return connection
except ConnectionRefusedError:
num_retry += 1
sleep(2)
raise ConnectionRefusedError("The stdDAQ websocket interface refused connection 5 times.")
def message(self, message: dict, timeout=1, wait_reply=True, client=None) -> None | str:
"""Send a message to the StdDAQ and receive a reply
Note: finishing acquisition means StdDAQ will close connection, so
there's no idle state polling.
"""
# Prepare message
msg = json.dumps(message) if isinstance(message, dict) else str(message)
# Connect if client was destroyed
if self._wsclient is None:
self._wsclient = self.connect()
# Send message (reopen connection if needed)
try:
self._wsclient.send(msg)
except (ConnectionClosedError, ConnectionClosedOK, AttributeError) as ex:
# Re-connect if the connection was closed
self._wsclient = self.connect()
self._wsclient.send(msg)
# Wait for reply
reply = None
if wait_reply:
try:
reply = self._wsclient.recv(timeout)
return reply
except (ConnectionClosedError, ConnectionClosedOK) as ex:
self._wsclient = None
logger.error(f"[{self.name}] WS connection was closed before reply: {ex}")
except (TimeoutError, RuntimeError) as ex:
logger.error(f"[{self.name}] Error in receiving ws reply: {ex}")
return reply
def configure(self, d: dict = None):
"""Configure the next scan with the stdDAQ
Parameters as 'd' dictionary, the default is unchanged.
----------------------------
num_points_total : int, optional
Number of images to be taken during each scan. Set to -1 for an unlimited number of
images (limited by the ringbuffer size and backend speed).
file_path: str, optional
File path to save the data, usually GPFS.
image_width : int, optional
ROI size in the x-direction [pixels].
image_height : int, optional
ROI size in the y-direction [pixels].
bit_depth: int, optional
Image bit depth for cameras that can change it [int].
nr_writers: int, optional
Number of writers [int].
"""
if d is None:
d = {}
# Configuration parameters
if "image_width" in d and d["image_width"] is not None:
self.cfg_pixel_width.set(d["image_width"]).wait()
if "image_height" in d and d["image_height"] is not None:
self.cfg_pixel_height.set(d["image_height"]).wait()
if "bit_depth" in d:
self.cfg_bit_depth.set(d["bit_depth"]).wait()
if "nr_writers" in d and d["nr_writers"] is not None:
self.cfg_nr_writers.set(d["nr_writers"]).wait()
# Run parameters
if "num_points_total" in d:
self.num_images.set(d["num_points_total"]).wait()
# Restart the DAQ if resolution changed
cfg = self.get_daq_config()
if (
cfg["image_pixel_height"] != self.cfg_pixel_height.get()
or cfg["image_pixel_width"] != self.cfg_pixel_width.get()
or cfg["bit_depth"] != self.cfg_bit_depth.get()
or cfg["number_of_writers"] != self.cfg_nr_writers.get()
):
# Stop if current status is not idle
if self.state() != "idle":
logger.warning(f"[{self.name}] stdDAQ reconfiguration might corrupt files")
# Update retrieved config
cfg["image_pixel_height"] = int(self.cfg_pixel_height.get())
cfg["image_pixel_width"] = int(self.cfg_pixel_width.get())
cfg["bit_depth"] = int(self.cfg_bit_depth.get())
cfg["number_of_writers"] = int(self.cfg_nr_writers.get())
self.set_daq_config(cfg)
sleep(1)
self.get_daq_config(update=True)
def bluestage(self):
"""Stages the stdDAQ
Opens a new connection to the stdDAQ, sends the start command with
the current configuration. It waits for the first reply and checks
it for obvious failures.
"""
# Can't stage into a running exposure
if self.state() != "idle":
raise RuntimeError(f"[{self.name}] stdDAQ can't stage from state: {self.state()}")
# Must make sure that image size matches the data source
if self.data_source_name is not None:
cam_img_w = self.device_manager.devices[self.data_source_name].cfgRoiX.get()
cam_img_h = self.device_manager.devices[self.data_source_name].cfgRoiY.get()
daq_img_w = self.cfg_pixel_width.get()
daq_img_h = self.cfg_pixel_height.get()
if not (daq_img_w == cam_img_w and daq_img_h == cam_img_h):
raise RuntimeError(
f"[{self.name}] stdDAQ image resolution ({daq_img_w}, {daq_img_h}) does not match camera resolution ({cam_img_w}, {cam_img_h})"
)
logger.debug(
f"[{self.name}] stdDAQ image resolution ({daq_img_w}, {daq_img_h}) matches camera resolution ({cam_img_w}, {cam_img_h})"
)
file_path = self.file_path.get()
num_images = self.num_images.get()
file_prefix = self.file_prefix.get()
logger.debug(f"[{self.name}] Using file prefix: {file_prefix}")
# New connection
self._wsclient = self.connect()
message = {
"command": "start",
"path": file_path,
"file_prefix": file_prefix,
"n_image": num_images,
}
reply = self.message(message)
if reply is not None:
reply = json.loads(reply)
self.runstatus.set(reply["status"], force=True).wait()
logger.info(f"[{self.name}] Start DAQ reply: {reply}")
# Give it more time to reconfigure
if reply["status"] == "rejected":
# FIXME: running exposure is a nogo
if reply["reason"] == "driver is busy!":
raise RuntimeError(
f"[{self.name}] Start stdDAQ command rejected: already running"
)
else:
# Give it more time to consolidate
sleep(1)
else:
# Success!!!
logger.info(f"[{self.name}] Started stdDAQ in state: {reply['status']}")
return
raise RuntimeError(f"[{self.name}] Failed to start the stdDAQ, last reply: {reply}")
def blueunstage(self):
"""Unstages the stdDAQ
Opens a new connection to the stdDAQ, sends the stop command and
waits for the idle state.
"""
ii = 0
while ii < 10:
# Stop the DAQ (will close connection) - reply is always "success"
self._wsclient = self.connect()
self.message({"command": "stop_all"}, wait_reply=False)
# Let it consolidate
sleep(0.2)
# Check final status (from new connection)
self._wsclient = self.connect()
reply = self.message({"command": "status"})
if reply is not None:
logger.info(f"[{self.name}] DAQ status reply: {reply}")
reply = json.loads(reply)
if reply["status"] in ("idle", "error"):
# Only the final 'idle' and 'error' states are accepted
logger.info(f"DAQ stopped on try {ii}")
return
elif reply["status"] == "stop":
# Give it more time to stop
sleep(0.5)
elif ii >= 6:
raise RuntimeError(f"Failed to stop StdDAQ: {reply}")
ii += 1
raise RuntimeError("Failed to stop StdDAQ in time")
##########################################################################
# Bluesky flyer interface
def complete(self) -> SubscriptionStatus:
"""Wait for the current run to finish in 'idle', 'file_saved' or 'error' state."""
def is_done(*args, value, timestamp, **kwargs):
return value in ("idle", "file_saved", "error")
status = SubscriptionStatus(self.runstatus, is_done, settle_time=0.5)
return status
def get_daq_config(self, update=False) -> dict:
"""Read the current configuration from the DAQ"""
r = requests.get(self.rest_url.get() + "/api/config/get", params={"user": "ioc"}, timeout=2)
if r.status_code != 200:
raise ConnectionError(f"[{self.name}] Error {r.status_code}:\t{r.text}")
cfg = r.json()
if update:
self.cfg_detector_name.set(cfg["detector_name"]).wait()
self.cfg_detector_type.set(cfg["detector_type"]).wait()
self.cfg_bit_depth.set(cfg["bit_depth"]).wait()
self.cfg_pixel_height.set(cfg["image_pixel_height"]).wait()
self.cfg_pixel_width.set(cfg["image_pixel_width"]).wait()
self.cfg_nr_writers.set(cfg["number_of_writers"]).wait()
return cfg
def set_daq_config(self, config, settle_time=1):
"""Write a full configuration to the DAQ"""
url = self.rest_url.get() + "/api/config/set"
r = requests.post(
url,
params={"user": "ioc"},
json=config,
timeout=2,
headers={"Content-Type": "application/json"},
)
if r.status_code != 200:
raise ConnectionError(f"[{self.name}] Error {r.status_code}:\t{r.text}")
# Wait for service to restart (and connect to make sure)
# sleep(settle_time)
self.connect()
return r.json()
def create_virtual_dataset(self):
"""Combine the stddaq written files in a given folder in an interleaved
h5 virtual dataset
"""
url = self.rest_url.get() + "/api/h5/create_interleaved_vds"
file_path = self.file_path.get()
file_prefix = self.file_prefix.get()
r = requests.post(
url,
params={"user": "ioc"},
json={
"base_path": file_path,
"file_prefix": file_prefix,
"output_file": file_prefix.rstrip("_") + ".h5",
},
timeout=2,
headers={"Content-Type": "application/json"},
)
if r.status_code != 200:
raise ConnectionError(f"[{self.name}] Error {r.status_code}:\t{r.text}")
def nuke(self, restarttime=5):
"""Reconfigure the stdDAQ to restart its services. This causes
systemd to kill the current DAQ service and restart it with the same
configuration, which might corrupt the file currently being written.
"""
cfg = self.get_daq_config()
self.set_daq_config(cfg)
sleep(restarttime)
def state(self) -> str | None:
"""Query the current system status"""
try:
wsclient = self.connect()
wsclient.send(json.dumps({"command": "status"}))
r = wsclient.recv(timeout=1)
r = json.loads(r)
return r["status"]
except ConnectionRefusedError:
raise
# Automatically connect to microXAS testbench if directly invoked
if __name__ == "__main__":
daq = StdDaqClient(
name="daq", ws_url="ws://sls-daq-001:8080", rest_url="http://sls-daq-001:5000"
)
daq.wait_for_connection()
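The websocket command protocol used by `bluestage()` and `blueunstage()` above (a JSON "start" command carrying path, file_prefix and n_image, and replies carrying a "status" field plus a "reason" on rejection) can be factored out into pure message helpers. The following is a minimal, hypothetical sketch based only on the message shapes visible in this module; `build_start_message` and `parse_reply` are names invented here, not part of the StdDAQ API.

```python
import json

# Hypothetical helpers mirroring the websocket message protocol used by
# StdDaqClient: "start" carries path, file_prefix and n_image, and every
# reply carries a "status" field (plus a "reason" when rejected).


def build_start_message(path: str, file_prefix: str, n_image: int) -> str:
    """Serialize a start command as sent by bluestage()."""
    return json.dumps(
        {"command": "start", "path": path, "file_prefix": file_prefix, "n_image": n_image}
    )


def parse_reply(raw: str):
    """Decode a reply and report whether it signals a rejection."""
    reply = json.loads(raw)
    rejected = reply.get("status") == "rejected"
    return reply, rejected
```

Keeping the (de)serialization separate from the socket handling makes the retry logic in `bluestage()` easier to test without a live DAQ.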


@@ -1,196 +0,0 @@
# -*- coding: utf-8 -*-
"""
Standard DAQ preview image stream module
Created on Thu Jun 27 17:28:43 2024
@author: mohacsi_i
"""
import json
import enum
from time import sleep, time
from threading import Thread
import zmq
import numpy as np
from ophyd import Device, Signal, Component, Kind, DeviceStatus
from ophyd_devices.interfaces.base_classes.psi_detector_base import (
CustomDetectorMixin,
PSIDetectorBase,
)
from bec_lib import bec_logger
logger = bec_logger.logger
ZMQ_TOPIC_FILTER = b''
class StdDaqPreviewState(enum.IntEnum):
"""Standard DAQ ophyd device states"""
UNKNOWN = 0
DETACHED = 1
MONITORING = 2
class StdDaqPreviewMixin(CustomDetectorMixin):
"""Setup class for the standard DAQ preview stream
Parent class: CustomDetectorMixin
"""
_mon = None
def on_stage(self):
"""Start listening for preview data stream"""
if self._mon is not None:
self.parent.unstage()
sleep(0.5)
self.parent.connect()
self._stop_polling = False
self._mon = Thread(target=self.poll, daemon=True)
self._mon.start()
def on_unstage(self):
"""Stop a running preview"""
if self._mon is not None:
self._stop_polling = True
# Might hang on recv_multipart
self._mon.join(timeout=1)
# So also disconnect the socket
self.parent._socket.disconnect(self.parent.url.get())
def on_stop(self):
"""Stop a running preview"""
self.on_unstage()
def poll(self):
"""Collect streamed updates"""
self.parent.status.set(StdDaqPreviewState.MONITORING, force=True)
try:
t_last = time()
while True:
try:
# Exit loop and finish monitoring
if self._stop_polling:
logger.info(f"[{self.parent.name}]\tDetaching monitor")
break
# pylint: disable=no-member
r = self.parent._socket.recv_multipart(flags=zmq.NOBLOCK)
# Length and throttling checks
if len(r) != 2:
logger.warning(
f"[{self.parent.name}] Received malformed array of length {len(r)}")
t_curr = time()
t_elapsed = t_curr - t_last
if t_elapsed < self.parent.throttle.get():
sleep(0.1)
continue
# Unpack the Array V1 reply to metadata and array data
meta, data = r
# Update image and update subscribers
header = json.loads(meta)
if header["type"] == "uint16":
image = np.frombuffer(data, dtype=np.uint16)
if image.size != np.prod(header['shape']):
err = f"Unexpected array size of {image.size} for header: {header}"
raise ValueError(err)
image = image.reshape(header['shape'])
# Update image and update subscribers
self.parent.frame.put(header['frame'], force=True)
self.parent.image_shape.put(header['shape'], force=True)
self.parent.image.put(image, force=True)
self.parent._last_image = image
self.parent._run_subs(sub_type=self.parent.SUB_MONITOR, value=image)
t_last = t_curr
logger.info(
f"[{self.parent.name}] Updated frame {header['frame']}\t"
f"Shape: {header['shape']}\tMean: {np.mean(image):.3f}"
)
except ValueError:
# Happens when ZMQ partially delivers the multipart message
pass
except zmq.error.Again:
# Happens when receive queue is empty
sleep(0.1)
except Exception as ex:
logger.info(f"[{self.parent.name}]\t{str(ex)}")
raise
finally:
self._mon = None
self.parent.status.set(StdDaqPreviewState.DETACHED, force=True)
logger.info(f"[{self.parent.name}]\tDetaching monitor")
class StdDaqPreviewDetector(PSIDetectorBase):
"""Detector wrapper class around the StdDaq preview image stream.
This was meant to provide live image stream directly from the StdDAQ.
Note that the preview stream must already be throttled in order to cope
with the incoming data; this Python class might throttle it further.
You can add a preview widget to the dock by:
cam_widget = gui.add_dock('cam_dock1').add_widget('BECFigure').image('daq_stream1')
"""
# Subscriptions for plotting image
USER_ACCESS = ["kickoff", "get_last_image"]
SUB_MONITOR = "device_monitor_2d"
_default_sub = SUB_MONITOR
custom_prepare_cls = StdDaqPreviewMixin
# Status attributes
url = Component(Signal, kind=Kind.config)
throttle = Component(Signal, value=0.25, kind=Kind.config)
status = Component(Signal, value=StdDaqPreviewState.UNKNOWN, kind=Kind.omitted)
frame = Component(Signal, kind=Kind.hinted)
image_shape = Component(Signal, kind=Kind.normal)
# FIXME: The BEC client caches the read()s from the last 50 scans
image = Component(Signal, kind=Kind.omitted)
_last_image = None
def __init__(
self, *args, url: str = "tcp://129.129.95.38:20000", parent: Device = None, **kwargs
) -> None:
super().__init__(*args, parent=parent, **kwargs)
self.url._metadata["write_access"] = False
self.status._metadata["write_access"] = False
self.image._metadata["write_access"] = False
self.frame._metadata["write_access"] = False
self.image_shape._metadata["write_access"] = False
self.url.set(url, force=True).wait()
# Connect to the DAQ
self.connect()
def connect(self):
"""Connect to the StdDAQ's PUB-SUB streaming interface
StdDAQ may reject connections for a few seconds when it restarts,
so if it fails, wait a bit and try to connect again.
"""
# pylint: disable=no-member
# Socket to talk to server
context = zmq.Context()
self._socket = context.socket(zmq.SUB)
self._socket.setsockopt(zmq.SUBSCRIBE, ZMQ_TOPIC_FILTER)
try:
self._socket.connect(self.url.get())
except ConnectionRefusedError:
sleep(1)
self._socket.connect(self.url.get())
def get_image(self):
return self._last_image
def kickoff(self) -> DeviceStatus:
"""The DAQ was not meant to be toggled"""
return DeviceStatus(self, done=True, success=True, settle_time=0.1)
# Automatically connect to MicroSAXS testbench if directly invoked
if __name__ == "__main__":
daq = StdDaqPreviewDetector(url="tcp://129.129.95.111:20000", name="preview")
daq.wait_for_connection()


@@ -0,0 +1,247 @@
# -*- coding: utf-8 -*-
"""
Created on Wed Dec 6 11:33:54 2023
@author: mohacsi_i
"""
import enum
from ophyd import Component as Cpt
from ophyd import Device, DynamicDeviceComponent, EpicsSignal, EpicsSignalRO, Kind, Signal
class TriggerMode(str, enum.Enum):
AUTO_TRIGGER = "auto trigger"
SOFT_TRIGGER = "soft trigger"
EXTERNAL_EXP_TRIGGER = "ext.exp sfttrg"
EXTERNAL_EXP_CONTR = "ext.exp contr"
class CameraStatus(str, enum.Enum):
OFFLINE = "Offline"
IDLE = "Idle"
RUNNING = "Running"
class RecMode(str, enum.Enum):
SEQUENCE = "Sequence"
RING_BUFFER = "Ring buffer"
class StoreMode(str, enum.Enum):
RECORDER = "Recorder"
FIFO_BUFFER = "FIFO buffer"
class CameraInitStatus(str, enum.Enum):
OFFLINE = "OFFLINE"
INIT = "INIT"
class CameraStatusCode(enum.IntEnum):
IDLE = 2
RUNNING = 6
class PcoEdgeBase(Device):
"""Ophyd baseclass for Helge camera IOCs
This class provides wrappers for Helge's camera IOCs around SwissFEL and
for high performance SLS 2.0 cameras. The IOC's operation is a bit arcane
and there are different versions and cameras all around. So this device
only covers the absolute basics.
Probably the most important part is the configuration state machine. As
the SET_PARAMS takes care of buffer allocations it might take some time,
as well as a full re-configuration is required every time we change the
binning, roi, etc... This is automatically performed upon starting an
exposure (if it hasn't been done before).
The status flag state machine during re-configuration is:
BUSY low, SET low -> BUSY high, SET low -> BUSY low, SET high -> BUSY low, SET low
UPDATE: Data sending operation modes
- Switch to ZMQ streaming by setting FILEFORMAT to ZEROMQ
- Set SAVESTART and SAVESTOP to select a ROI of image indices
- Start file transfer with FTRANSFER.
The ZMQ connection operates in PUSH-PULL mode, i.e. it needs an incoming connection.
STOREMODE sets the acquisition mode:
if STOREMODE == Recorder
Fills up the buffer with images. Here SAVESTART and SAVESTOP select a range
of image indices to be streamed out (i.e. at most buffer_size images)
if STOREMODE == FIFO buffer
Continuously streams out data using the buffer as a FIFO queue.
Here SAVESTART and SAVESTOP select a range of image indices to be streamed continuously
(i.e. a large SAVESTOP streams indefinitely). Note that in FIFO mode buffer reads are
destructive; to prevent data loss, there is no EPICS preview in this mode.
"""
# ########################################################################
# General hardware info (in AD nomenclature)
query = Cpt(EpicsSignalRO, "QUERY", kind=Kind.config, doc="Camera manufacturer info")
board = Cpt(EpicsSignalRO, "BOARD", kind=Kind.omitted, doc="Camera board info")
# ########################################################################
# Acquisition configuration (in AD nomenclature)
camera_status = Cpt(
EpicsSignal,
"CAMERASTATUS",
put_complete=True,
kind=Kind.omitted,
string=True,
doc="Camera acquisition status, either 'Offline', 'Idle' or 'Running'",
)
exposure = Cpt(
EpicsSignal,
"EXPOSURE",
put_complete=True,
auto_monitor=True,
kind=Kind.config,
doc="Exposure time in milliseconds.",
)
delay = Cpt(
EpicsSignal,
"DELAY",
put_complete=True,
auto_monitor=True,
kind=Kind.config,
doc="Delay time in milliseconds.",
)
# trigger_mode cannot be called 'trigger' as it is a reserved method in ophyd.Device
# and it would override the Device.trigger() method
trigger_mode = Cpt(
EpicsSignal,
"TRIGGER",
put_complete=True,
auto_monitor=True,
kind=Kind.config,
string=True,
doc="Trigger mode. Must be either 'auto trigger', 'soft trigger', "
"'ext.exp sfttrg' or 'ext.exp contr'",
)
array_size = DynamicDeviceComponent(
{
"array_size_x": (EpicsSignal, "WIDTH", {"auto_monitor": True, "put_complete": True}),
"array_size_y": (EpicsSignal, "HEIGHT", {"auto_monitor": True, "put_complete": True}),
},
doc="Size of the array in the XY dimensions",
)
# ########################################################################
# Image size configuration (in AD nomenclature)
bin_x = Cpt(EpicsSignal, "BINX", put_complete=True, auto_monitor=True, kind=Kind.config)
bin_y = Cpt(EpicsSignal, "BINY", put_complete=True, auto_monitor=True, kind=Kind.config)
# ########################################################################
# Additional status info
busy = Cpt(EpicsSignalRO, "BUSY", auto_monitor=True, kind=Kind.config)
ss_camera = Cpt(EpicsSignalRO, "SS_CAMERA", auto_monitor=True, kind=Kind.config)
cam_progress = Cpt(EpicsSignalRO, "CAMPROGRESS", auto_monitor=True, kind=Kind.config)
# ########################################################################
# Configuration state machine with separate transition states
set_param = Cpt(
EpicsSignal,
"BUSY_SET_PARAM",
write_pv="SET_PARAM",
put_complete=True,
auto_monitor=True,
kind=Kind.config,
)
statuscode = Cpt(EpicsSignalRO, "STATUSCODE", auto_monitor=True, kind=Kind.config)
init = Cpt(
EpicsSignalRO,
"INIT",
auto_monitor=True,
kind=Kind.config,
string=True,
doc="Camera initialization status, either 'OFFLINE' or 'INIT'.",
)
camera_init_busy = Cpt(EpicsSignalRO, "BUSY_INIT", auto_monitor=True, kind=Kind.config)
# ########################################################################
# Acquisition configuration
acqmode = Cpt(EpicsSignalRO, "ACQMODE", auto_monitor=True, kind=Kind.config)
# ########################################################################
# Buffer configuration
rec_mode = Cpt(
EpicsSignalRO,
"RECMODE",
auto_monitor=True,
kind=Kind.config,
string=True,
doc="Recording mode of the camera, either 'Sequence' or 'Ring buffer'",
)
store_mode = Cpt(
EpicsSignal,
"STOREMODE",
auto_monitor=True,
kind=Kind.config,
string=True,
doc="Store mode of the camera, either 'Recorder' or 'FIFO buffer'",
)
pic_buffer = Cpt(EpicsSignalRO, "PIC_BUFFER", auto_monitor=True, kind=Kind.normal)
pic_max = Cpt(EpicsSignalRO, "PIC_MAX", auto_monitor=True, kind=Kind.normal)
clear_mem = Cpt(EpicsSignal, "CLEARMEM", put_complete=True, kind=Kind.omitted)
# ########################################################################
# File saving/streaming interface
cam_rate = Cpt(EpicsSignalRO, "CAMRATE", auto_monitor=True, kind=Kind.normal)
file_rate = Cpt(EpicsSignalRO, "FILERATE", auto_monitor=True, kind=Kind.normal)
save_start = Cpt(EpicsSignal, "SAVESTART", put_complete=True, kind=Kind.config)
save_stop = Cpt(EpicsSignal, "SAVESTOP", put_complete=True, kind=Kind.config)
file_format = Cpt(EpicsSignal, "FILEFORMAT", put_complete=True, kind=Kind.config)
file_transfer = Cpt(EpicsSignal, "FTRANSFER", put_complete=True, kind=Kind.config)
file_savebusy = Cpt(EpicsSignalRO, "FILESAVEBUSY", auto_monitor=True, kind=Kind.normal)
# ########################################################################
# Throttled image preview
image = Cpt(EpicsSignalRO, "FPICTURE", kind=Kind.omitted, doc="Throttled image preview")
# ########################################################################
# General hardware info
camError = Cpt(EpicsSignalRO, "ERRCODE", auto_monitor=True, kind=Kind.config)
camWarning = Cpt(EpicsSignalRO, "WARNCODE", auto_monitor=True, kind=Kind.config)
# DAQ parameters
file_path = Cpt(Signal, kind=Kind.config, value="/gpfs/test/test-beamline")
file_prefix = Cpt(Signal, kind=Kind.config, value="scan_")
num_images = Cpt(Signal, kind=Kind.config, value=1000)
frames_per_trigger = Cpt(Signal, kind=Kind.config, value=1)
num_images_counter = Cpt(Signal, kind=Kind.hinted, value=0)
@property
def state(self) -> str:
"""Single word camera state"""
if self.set_param.value:
return "BUSY"
if self.statuscode.value == CameraStatusCode.IDLE and self.init.value == 1:
return "IDLE"
if self.statuscode.value == CameraStatusCode.RUNNING and self.init.value == 1:
return "RUNNING"
# if self.camRemoval.value==0 and self.camInit.value==0:
if self.init.value == 0:
return "OFFLINE"
# if self.camRemoval.value:
# return "REMOVED"
return "UNKNOWN"
@state.setter
def state(self, value):
raise RuntimeError("State is a read-only property")
# Automatically connect to test camera if directly invoked
if __name__ == "__main__":
# Drive data collection
cam = PcoEdgeBase("X02DA-CCDCAM2:", name="mcpcam")
cam.wait_for_connection()
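The SET_PARAM handshake documented above ends with SET_PARAM falling back low, so "configuration finished" is a falling edge on that signal. A standalone sketch of such an edge predicate (the keyword-only `old_value`/`value` arguments follow ophyd's subscription-callback convention; the replayed sequence below is illustrative):

```python
def negedge(*, old_value, value, **_) -> bool:
    """True only on a high -> low transition of the monitored signal."""
    return bool(old_value and not value)


# Replay the documented SET flag sequence:
# BUSY low/SET low -> BUSY high/SET low -> BUSY low/SET high -> BUSY low/SET low
sequence = [0, 0, 1, 0]
edges = [negedge(old_value=a, value=b) for a, b in zip(sequence, sequence[1:])]
# Only the final SET high -> low transition fires the predicate.
```

In the device this predicate would be handed to `SubscriptionStatus(self.set_param, negedge, ...)` so the status resolves exactly once, at the end of the re-configuration cycle.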


@@ -0,0 +1,589 @@
from __future__ import annotations
import os
from collections import deque
from typing import TYPE_CHECKING, Literal, cast
import numpy as np
from bec_lib.logger import bec_logger
from ophyd import Component as Cpt
from ophyd import Kind, Signal
from ophyd.status import AndStatus, DeviceStatus, SubscriptionStatus
from ophyd_devices.interfaces.base_classes.psi_device_base import PSIDeviceBase
from ophyd_devices.utils.bec_signals import PreviewSignal, ProgressSignal
from tomcat_bec.devices.pco_edge.pcoedge_base import CameraStatus, CameraStatusCode, PcoEdgeBase
from tomcat_bec.devices.std_daq.std_daq_client import (
StdDaqClient,
StdDaqConfigPartial,
StdDaqStatus,
)
from tomcat_bec.devices.std_daq.std_daq_live_processing import StdDaqLiveProcessing
from tomcat_bec.devices.std_daq.std_daq_preview import StdDaqPreview
logger = bec_logger.logger
if TYPE_CHECKING: # pragma: no cover
from bec_lib.devicemanager import DeviceManagerBase
from ophyd import StatusBase
# pylint: disable=too-many-instance-attributes
class PcoEdge5M(PSIDeviceBase, PcoEdgeBase):
"""Ophyd baseclass for Helge camera IOCs
This class provides wrappers for Helge's camera IOCs around SwissFEL and
for high performance SLS 2.0 cameras. The IOC's operation is a bit arcane
and there are different versions and cameras all around. So this device
only covers the absolute basics.
Probably the most important part is the configuration state machine. As
the SET_PARAMS takes care of buffer allocations it might take some time,
as well as a full re-configuration is required every time we change the
binning, roi, etc... This is automatically performed upon starting an
exposure (if it hasn't been done before).
The status flag state machine during re-configuration is:
BUSY low, SET low -> BUSY high, SET low -> BUSY low, SET high -> BUSY low, SET low
UPDATE: Data sending operation modes
- Switch to ZMQ streaming by setting FILEFORMAT to ZEROMQ
- Set SAVESTART and SAVESTOP to select a ROI of image indices
- Start file transfer with FTRANSFER.
The ZMQ connection operates in PUSH-PULL mode, i.e. it needs an incoming connection.
STOREMODE sets the acquisition mode:
if STOREMODE == Recorder
Fills up the buffer with images. Here SAVESTART and SAVESTOP select a range
of image indices to be streamed out (i.e. at most buffer_size images)
if STOREMODE == FIFO buffer
Continuously streams out data using the buffer as a FIFO queue.
Here SAVESTART and SAVESTOP select a range of image indices to be streamed continuously
(i.e. a large SAVESTOP streams indefinitely). Note that in FIFO mode buffer reads are
destructive; to prevent data loss, there is no EPICS preview in this mode.
"""
USER_ACCESS = ["complete", "backend", "live_preview", "arm", "disarm"]
analysis_signal = Cpt(Signal, name="analysis_signal", kind=Kind.hinted, doc="Analysis Signal")
analysis_signal2 = Cpt(Signal, name="analysis_signal2", kind=Kind.hinted, doc="Analysis Signal")
preview = Cpt(
PreviewSignal,
ndim=2,
name="preview",
doc="Camera raw data preview signal",
num_rotation_90=1,
transpose=False,
)
preview_corrected = Cpt(
PreviewSignal,
ndim=2,
name="preview_corrected",
doc="Camera preview signal with flat and dark correction",
num_rotation_90=1,
transpose=False,
)
progress = Cpt(
ProgressSignal,
name="progress",
doc="Camera progress signal, used to monitor the acquisition progress",
)
# pylint: disable=too-many-arguments
def __init__(
self,
prefix="",
*,
name,
kind=None,
read_attrs=None,
configuration_attrs=None,
parent=None,
scan_info=None,
std_daq_rest: str | None = None,
std_daq_ws: str | None = None,
std_daq_live: str | None = None,
device_manager: DeviceManagerBase | None = None,
**kwargs,
):
self.device_manager = device_manager
self.connector = device_manager.connector if device_manager else None
super().__init__(
prefix=prefix,
name=name,
kind=kind,
read_attrs=read_attrs,
configuration_attrs=configuration_attrs,
parent=parent,
scan_info=scan_info,
**kwargs,
)
# Configure the stdDAQ client
if std_daq_rest is None or std_daq_ws is None:
raise ValueError("Both std_daq_rest and std_daq_ws must be provided")
self.live_processing = StdDaqLiveProcessing(
parent=self, signal=self.analysis_signal, signal2=self.analysis_signal2
)
self.backend = StdDaqClient(parent=self, ws_url=std_daq_ws, rest_url=std_daq_rest)
self.backend.add_count_callback(self._on_count_update)
self.live_preview = None
self.converted_files = deque(maxlen=100)  # Store the last 100 converted files
self.target_files = deque(maxlen=100)  # Store the last 100 target files
self.acq_configs = {}
if std_daq_live is not None:
self.live_preview = StdDaqPreview(url=std_daq_live, cb=self._on_preview_update)
def configure(self, d: dict | None = None):
"""Configure the base Helge camera device
Parameters as 'd' dictionary
----------------------------
num_images : int
Number of images to be taken during each scan. Meaning depends on
store mode.
exposure_time_ms : float
Exposure time [ms], usually gets set back to 20 ms
exposure_period_ms : float
Exposure period [ms], up to 200 ms.
store_mode : str
Buffer operation mode
*'Recorder' to record in buffer
*'FIFO buffer' for continuous streaming
data_format : str
Usually set to 'ZEROMQ'
acq_mode : str
Store mode and data format according to preconfigured settings
"""
if d is None:
return
# Stop acquisition
self.stop_camera().wait(timeout=10)
backend_config = StdDaqConfigPartial(**d)
self.backend.update_config(backend_config)
config = {}
for key in self.component_names:
val = d.get(key)
if val is not None:
config[key] = val
if d.get("exp_time", 0) > 0:
config["exposure"] = d["exp_time"] * 1000 # exposure time in ms
super().configure(config)
# If a pre-configured acquisition mode is specified, set it
if "acq_mode" in d:
self.set_acquisition_mode(d["acq_mode"])
# State machine
# Initial: BUSY and SET both low
# 0. Write 1 to SET_PARAM
# 1. BUSY goes high, SET stays low
# 2. BUSY goes low, SET goes high
# 3. BUSY stays low, SET goes low
# So we need a 'negedge' on SET_PARAM
def negedge(*, old_value, value, timestamp, **_):
return bool(old_value and not value)
# Subscribe and wait for update
status = SubscriptionStatus(self.set_param, negedge, timeout=5, settle_time=0.5)
self.set_param.set(1).wait()
status.wait()
def set_acquisition_mode(self, acq_mode):
"""Set acquisition mode
Utility function to quickly select between pre-configured and tested
acquisition modes.
"""
if acq_mode in ["default", "step"]:
# NOTE: Trigger duration requires a consumer
self.store_mode.set("FIFO Buffer").wait()
if acq_mode in ["stream"]:
# NOTE: Trigger duration requires a consumer
self.store_mode.set("FIFO Buffer").wait()
else:
raise RuntimeError(f"Unsupported acquisition mode: {acq_mode}")
def destroy(self):
self.backend.shutdown()
if self.live_preview:
self.live_preview.stop()
super().destroy()
def _on_preview_update(self, img: np.ndarray):
corrected_img = self.live_processing.apply_flat_dark_correction(img)
self.live_processing.on_new_data(corrected_img)
self.preview.put(img)
self.preview_corrected.put(corrected_img)
def _on_count_update(self, count: int):
"""
Callback for the count update from the backend.
Updates the progress signal.
Args:
count (int): The current count of images acquired by the camera.
"""
expected_counts = cast(int, self.num_images.get())
self.progress.put(
value=count, max_value=expected_counts, done=bool(count == expected_counts)
)
def acq_done(self) -> DeviceStatus:
"""
Check if the acquisition is done. This is determined from the status of
the std_daq backend, as the camera itself does not report when a full
acquisition has completed.
Returns:
DeviceStatus: The status of the acquisition
"""
status = DeviceStatus(self)
if self.backend is not None:
self.backend.add_status_callback(
status,
success=[StdDaqStatus.IDLE, StdDaqStatus.FILE_SAVED],
error=[StdDaqStatus.REJECTED, StdDaqStatus.ERROR],
)
return status
def restart_with_new_config(
self,
name: str,
file_path: str = "",
file_name: str | None = None,
file_suffix: str = "",
num_images: int | None = None,
frames_per_trigger: int | None = None,
) -> StatusBase:
"""
Restart the camera with a new configuration.
This method allows changing the file path, file name or suffix, and the number of images.
Args:
name (str): Name of the configuration to be saved.
file_path (str): New file path for the acquisition. If empty, the current file path is used.
file_name (str | None): New file name replacing the current file prefix. If None, the prefix is kept.
file_suffix (str): Suffix appended to the current file prefix. Mutually exclusive with file_name.
num_images (int | None): New number of images to acquire. If None, the current number of images is used.
frames_per_trigger (int | None): New number of frames per trigger. If None, the current value is used.
Returns:
DeviceStatus: The status of the restart operation. It resolves when the camera is ready to receive the first image.
"""
if file_name is not None and file_suffix:
raise ValueError("Both file_name and file_suffix are specified. Please choose one.")
self.acq_configs[name] = {}
conf = {}
if file_path:
self.acq_configs[name]["file_path"] = self.file_path.get()
conf["file_path"] = file_path
if file_suffix:
self.acq_configs[name]["file_prefix"] = self.file_prefix.get()
conf["file_prefix"] = "_".join([self.file_prefix.get(), file_suffix])
if file_name:
self.acq_configs[name]["file_prefix"] = self.file_prefix.get()
conf["file_prefix"] = file_name
if num_images is not None:
self.acq_configs[name]["num_images"] = self.num_images.get()
conf["num_images"] = num_images
if frames_per_trigger is not None:
self.acq_configs[name]["frames_per_trigger"] = self.frames_per_trigger.get()
conf["frames_per_trigger"] = frames_per_trigger
# Stop the camera and wait for it to become idle
status = self.stop_camera()
status.wait(timeout=10)
# update the configuration
self.configure(conf)
# Restart the camera with the new configuration
return self.start_camera()
def restore_config(self, name: str) -> None:
"""
Restore a previously saved configuration and restart the camera.
Args:
name (str): Name of the configuration to restore.
"""
status = self.stop_camera()
status.wait(timeout=10)
config = self.acq_configs.pop(name, {})
self.configure(config)
def update_live_processing_reference(
self, reference_type: Literal["dark", "flat"]
) -> StatusBase:
"""
Update the flat or dark reference for the live processing.
Args:
reference_type (Literal["dark", "flat"]): Type of the reference to update.
If 'dark', the dark reference will be updated, if 'flat', the flat reference will be updated.
Returns:
StatusBase: The status of the update operation.
"""
if reference_type not in ["dark", "flat"]:
raise ValueError("Invalid reference type! Must be 'dark' or 'flat'.")
# Use the current acquisition to update the reference
if self.live_processing is None:
raise RuntimeError("Live processing is not available. Cannot update reference.")
status = self.live_processing.update_reference_with_file(
reference_type=reference_type,
file_path=self.target_file,
entry="tomcat-pco/data", # type: ignore
wait=False, # Do not wait for the update to finish
)
return status
def start_camera(self) -> StatusBase:
"""
Start the camera and the backend.
Returns:
DeviceStatus: The status of the startup. It resolves when the backend is ready to receive the first image.
"""
status = DeviceStatus(self)
self.backend.add_status_callback(
status,
success=[StdDaqStatus.WAITING_FOR_FIRST_IMAGE],
error=[StdDaqStatus.REJECTED, StdDaqStatus.ERROR],
)
self.backend.start(
file_path=self.file_path.get(), # type: ignore
file_prefix=self.file_prefix.get(), # type: ignore
num_images=self.num_images.get(), # type: ignore
)
self.camera_status.set(CameraStatus.RUNNING).wait()
self.target_files.append(self.target_file)
def is_running(*, value, timestamp, **_):
return bool(value == CameraStatusCode.RUNNING)
camera_running_status = SubscriptionStatus(
self.statuscode, is_running, timeout=5, settle_time=0.2
)
self.cancel_on_stop(camera_running_status)
return AndStatus(status, camera_running_status)
def set_idle(self) -> AndStatus:
"""Set the camera to idle state"""
cam_status = self.camera_status.set(CameraStatus.IDLE)
save_stop = self.save_stop.set(0)
return AndStatus(cam_status, save_stop)
def stop_camera(self) -> DeviceStatus:
"""Stop the camera acquisition and set it to idle state"""
self.set_idle().wait()
status = DeviceStatus(self)
if self.backend.status != StdDaqStatus.IDLE:
self.backend.add_status_callback(
status,
success=[StdDaqStatus.IDLE],
error=[StdDaqStatus.REJECTED, StdDaqStatus.ERROR],
)
self.backend.stop()
else:
status.set_finished()
return status
@property
def target_file(self) -> str:
"""Return the target file path for the current acquisition."""
file_path = cast(str, self.file_path.get())
file_prefix = cast(str, self.file_prefix.get())
return os.path.join(file_path, f"{file_prefix.removesuffix('_')}.h5")
########################################
# Beamline Specific Implementations #
########################################
def on_init(self) -> None:
"""
Called when the device is initialized.
No signals are connected at this point,
thus should not be set here but in on_connected instead.
"""
def on_connected(self) -> None:
"""
Called after the device is connected and its signals are connected.
Default values for signals should be set here.
"""
self.backend.connect()
if self.live_preview:
self.live_preview.start()
# pylint: disable=protected-access
def on_stage(self) -> None:
"""Configure and arm PCO.Edge camera for acquisition"""
# If the camera is busy, stop it first
if self.statuscode.get() != CameraStatusCode.IDLE:
self.stop_camera()
scan_msg = self.scan_info.msg
if scan_msg is None or scan_msg.request_inputs is None or scan_msg.scan_parameters is None:
# This should not happen outside of tests, but guard against it anyway
logger.warning(
f"[{self.name}] Scan message is not available or incomplete. "
"Cannot configure the PCO Edge camera."
)
self.acq_configs = {}
return
scan_args = {
**scan_msg.request_inputs.get("inputs", {}),
**scan_msg.request_inputs.get("kwargs", {}),
**scan_msg.scan_parameters,
}
if "file_path" not in scan_args:
scan_args["file_path"] = (
"/gpfs/test/test-beamline" # FIXME: This should be from the scan message
)
if "file_prefix" not in scan_args:
file_base = scan_msg.info["file_components"][0].split("/")[-1]
file_suffix = scan_msg.info.get("file_suffix") or ""
comps = [file_base, self.name]
if file_suffix:
comps.append(file_suffix)
scan_args["file_prefix"] = "_".join(comps)
self.configure(scan_args)
if scan_msg.scan_type == "step":
num_points = self.frames_per_trigger.get() * max(scan_msg.num_points, 1) # type: ignore
else:
num_points = self.frames_per_trigger.get()
self.num_images.set(num_points).wait()
# reset the acquisition configs
self.acq_configs = {}
def on_unstage(self) -> DeviceStatus | None:
"""Called while unstaging the device."""
return self.stop_camera()
def on_pre_scan(self) -> StatusBase:
"""Called right before the scan starts on all devices automatically."""
return self.start_camera()
def on_trigger(self) -> None | DeviceStatus:
"""Trigger mode operation
Use it to repeatedly record a fixed number of frames and send them to the stdDAQ.
The method waits for the acquisition and data transfer to complete.
NOTE: Maciej confirmed that sparse data poses no problem for the stdDAQ.
TODO: Optimize data transfer to launch at end and check completion at the beginning.
"""
# Ensure that previous data transfer finished
# def sentIt(*args, value, timestamp, **kwargs):
# return value==0
# status = SubscriptionStatus(self.file_savebusy, sentIt, timeout=120)
# status.wait()
scan_msg = self.scan_info.msg
if scan_msg.scan_type == "step":
# The PCO Edge does not support software triggering. As a result, we have to 'simulate'
# the software triggering mechanism by leveraging the PCO's readout buffer: We limit the buffer
# readout size (save_start/save_stop) to the number of frames we want per trigger, clear the
buffer and then wait for the buffer to fill up again before transferring the files to the
file writer (std_daq).
# Set the readout per step scan point to the requested frames per trigger
self.save_stop.set(self.frames_per_trigger.get()).wait()
# Reset the buffer
self.clear_mem.set(1, settle_time=0.1).wait()
# Wait until the buffer fills up with enough images
t_expected = (self.exposure.get() + self.delay.get()) / 1000 * self.save_stop.get()  # seconds
def wait_acquisition(*, value, timestamp, **_):
num_target = self.save_stop.get()
# logger.warning(f"{value} of {num_target}")
return bool(value >= num_target)
max_wait = max(5, 5 * t_expected)
buffer_filled_status = SubscriptionStatus(
self.pic_buffer, wait_acquisition, timeout=max_wait, settle_time=0.2
)
self.cancel_on_stop(buffer_filled_status)
buffer_filled_status.wait()
logger.info(f"file savebusy before: {self.file_savebusy.get()}")
def wait_sending(*, old_value, value, timestamp, **_):
logger.info(f"old_value {old_value}, new value: {value}")
return old_value == 1 and value == 0
savebusy_status = SubscriptionStatus(
self.file_savebusy, wait_sending, timeout=120, settle_time=0.2
)
self.cancel_on_stop(savebusy_status)
self.file_transfer.set(1).wait()
savebusy_status.wait()
else:
raise RuntimeError("Triggering for fly scans is not yet implemented.")
def on_complete(self) -> DeviceStatus | None:
"""Called to inquire if a device has completed a scans."""
def _create_dataset(_status: DeviceStatus):
if (
self.target_file in self.converted_files
or self.target_file not in self.target_files
):
logger.info(f"File {self.target_file} already processed or not in target files.")
return
self.backend.create_virtual_datasets(
self.file_path.get(), file_prefix=self.file_prefix.get() # type: ignore
)
self._run_subs(
sub_type=self.SUB_FILE_EVENT,
file_path=self.target_file,
done=True,
successful=True,
hinted_location={"data": "tomcat-pco/data"},
)
self.converted_files.append(self.target_file)
logger.info(f"Finished writing to {self.target_file}")
status = self.acq_done()
status.add_callback(_create_dataset)
return status
def on_kickoff(self) -> DeviceStatus | None:
"""Called to kickoff a device for a fly scan. Has to be called explicitly."""
def on_stop(self) -> DeviceStatus:
"""Called when the device is stopped."""
return self.stop_camera()
# Automatically connect to test camera if directly invoked
if __name__ == "__main__":
# Drive data collection
cam = PcoEdge5M(
"X02DA-CCDCAM2:",
name="mcpcam",
std_daq_ws="ws://129.129.95.111:8081",
std_daq_rest="http://129.129.95.111:5010",
std_daq_live="tcp://129.129.95.111:20010",
)
cam.wait_for_connection()
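`restart_with_new_config` and `restore_config` above follow a stash-and-override pattern: before a field is changed, its current value is saved under a configuration name, and restoring pops that stash back on top of the running config. The bookkeeping, stripped of the signal plumbing (a sketch with plain dicts; the helper names are hypothetical):

```python
def apply_with_backup(current: dict, backups: dict, name: str, **overrides) -> dict:
    """Stash the current value of every overridden key under `name`,
    then return the configuration with the overrides applied."""
    backups[name] = {k: current[k] for k in overrides if k in current}
    return {**current, **overrides}


def restore(current: dict, backups: dict, name: str) -> dict:
    """Pop the stash saved under `name` and merge it back.
    Unknown names restore nothing (mirrors acq_configs.pop(name, {}))."""
    return {**current, **backups.pop(name, {})}
```

This is why `on_stage` resets `acq_configs` to an empty dict: stale stashes from a previous scan must not leak into the next restore.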


@@ -19,7 +19,7 @@ from websockets.exceptions import WebSocketException
from websockets.sync.client import ClientConnection, connect
if TYPE_CHECKING: # pragma: no cover
from ophyd import Device
logger = bec_logger.logger
@@ -50,25 +50,40 @@ class StdDaqStatus(str, enum.Enum):
class StdDaqConfig(BaseModel):
"""
Configuration for the StdDAQ.
More information can be found here: https://controls-ci.gitpages.psi.ch/std_detector_buffer/docs/Interfaces/configfile
"""
# Mandatory fields
detector_name: str = Field(
description="Name of deployment - used as identifier in logging, "
"part of the name of zmq sockets and shared memory."
)
detector_type: Literal["gigafrost", "eiger", "pco", "jungfrau-raw", "jungfrau-converted"]
image_pixel_height: int
image_pixel_width: int
bit_depth: int
n_modules: int
start_udp_port: int
module_positions: dict
number_of_writers: int
# Optional fields
max_number_of_forwarders_spawned: int | None = None
use_all_forwarders: bool | None = None
module_sync_queue_size: int | None = None
ram_buffer_gb: float | None = None
delay_filter_timeout: float | None = None
writer_user_id: int | None = None
live_stream_configs: dict[str, dict[Literal["type", "config"], str | list]]
log_level: Literal["debug", "info", "warning", "error", "off"] | None = Field(
default=None,
description="Log level for the StdDAQ. Defaults to info. Sets the logging level for services - possible values: debug, info, warning, error, off.",
)
stats_collection_period: float | None = Field(
default=None,
description="Period in seconds for printing stats into journald that are shipped to elastic. Defaults to 10. Warning too high frequency will affect the performance of the system",
)
model_config = ConfigDict(extra="ignore")
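Dumping the config with `exclude_defaults=True` and `exclude_none=True` (as `set_config` below does) ensures a partial update only carries fields the user actually changed, so untouched optional fields never overwrite live DAQ settings. The equivalent filtering on a plain dict (a sketch; pydantic's `model_dump` performs this natively on the model above):

```python
def partial_update(defaults: dict, submitted: dict) -> dict:
    """Keep only keys that are not None and differ from their defaults,
    so a partial config never clobbers live settings with placeholders."""
    return {
        k: v
        for k, v in submitted.items()
        if v is not None and defaults.get(k) != v
    }
```

Only the surviving keys would be sent to the StdDAQ REST interface.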
@@ -114,6 +129,7 @@ class StdDaqWsResponse(BaseModel):
status: StdDaqStatus
reason: str | None = None
count: int | None = None
model_config = ConfigDict(extra="allow")
@@ -134,8 +150,9 @@ class StdDaqClient:
self._daq_is_running = threading.Event()
self._config: StdDaqConfig | None = None
self._status_callbacks: dict[
str, tuple[StatusBase, list[StdDaqStatus], list[StdDaqStatus]]
] = {}
self._count_callbacks: dict[int, Callable[[int], None]] = {}
self._send_queue = queue.Queue()
self._daq_is_running.set()
@@ -147,20 +164,49 @@ class StdDaqClient:
return self._status
def add_status_callback(
self, status: StatusBase, success: list[StdDaqStatus], error: list[StdDaqStatus]
):
"""
Add a StatusBase callback for the StdDAQ. The status will be updated when the StdDAQ status changes and
set to finished when the status matches one of the specified success statuses and to exception when the status
matches one of the specified error statuses.
Args:
status (StatusBase): StatusBase object
success (list[StdDaqStatus]): list of statuses that indicate success
error (list[StdDaqStatus]): list of statuses that indicate error
"""
self._status_callbacks[id(status)] = (status, success, error)
def add_count_callback(self, callback: Callable[[int], None]) -> int:
"""
Add a callback for the count of images acquired by the StdDAQ. The callback will be called with the count
whenever the StdDAQ status changes and the count is available.
Args:
callback (Callable[[int], None]): callback function that takes an integer as argument
Returns:
int: ID of the callback, which can be used to remove the callback later
"""
if not callable(callback):
raise TypeError("Callback must be a callable function")
max_cb_id = max(self._count_callbacks.keys(), default=0)
self._count_callbacks[max_cb_id + 1] = callback
return max_cb_id + 1
def remove_count_callback(self, cb_id: int):
"""
Remove a count callback by its ID.
Args:
cb_id (int): ID of the callback to remove
"""
if cb_id in self._count_callbacks:
del self._count_callbacks[cb_id]
else:
logger.warning(f"Callback with ID {cb_id} not found in StdDAQ count callbacks.")
@typechecked
def start(
self, file_path: str, file_prefix: str, num_images: int, timeout: float = 20, wait=True
@@ -185,7 +231,7 @@ class StdDaqClient:
}
self._send_queue.put(message)
if wait:
status.wait(timeout=timeout)
return status
@@ -217,7 +263,7 @@ class StdDaqClient:
)
response.raise_for_status()
self._config = StdDaqConfig(**response.json())
return self._config.model_dump(exclude_defaults=True)
def set_config(self, config: StdDaqConfig | dict, timeout: float = 2) -> None:
"""
@@ -230,7 +276,7 @@ class StdDaqClient:
if not isinstance(config, StdDaqConfig):
config = StdDaqConfig(**config)
out = config.model_dump(exclude_defaults=True, exclude_none=True)
if not out:
logger.info(
"The provided config does not contain relevant values for the StdDaq. Skipping set_config."
@@ -340,6 +386,8 @@ class StdDaqClient:
Connect to the StdDAQ. This method should be called after the client is created. It will
launch a background thread to exchange data with the StdDAQ.
"""
if self._ws_update_thread is not None and self._ws_update_thread.is_alive():
return
self._ws_update_thread = threading.Thread(
target=self._ws_update_loop, name=f"{self.parent.name}_stddaq_ws_loop", daemon=True
)
@@ -349,6 +397,7 @@ class StdDaqClient:
"""
Shutdown the StdDAQ client.
"""
self._shutdown_event.set()
if self._ws_update_thread is not None:
self._ws_update_thread.join()
if self.ws_client is not None:
@@ -374,7 +423,7 @@ class StdDaqClient:
msg = self._send_queue.get(block=False)
logger.trace(f"Sending to stddaq ws: {msg}")
self.ws_client.send(json.dumps(msg))
logger.trace(f"Sent to stddaq ws: {msg}")
logger.info(f"Sent to stddaq ws: {msg}")
except queue.Empty:
pass
try:
@@ -407,19 +456,36 @@ class StdDaqClient:
content = traceback.format_exc()
logger.warning(f"Failed to decode websocket message: {content}")
return
if data.status != self._status:
logger.info(f"std_daq_client status changed from [{self._status}] to [{data.status}]")
self._status = data.status
if data.count is not None:
self._run_count_callbacks(data.count)
self._run_status_callbacks()
def _run_count_callbacks(self, count: int):
"""
Run the count callbacks with the given count.
The callbacks will be called with the count as argument.
"""
for cb in self._count_callbacks.values():
try:
cb(count)
except Exception as exc:
logger.error(f"Error in StdDAQ count callback: {exc}")
def _run_status_callbacks(self):
"""
Update the DeviceStatus objects based on the current status of the StdDAQ.
If the status matches one of the success or error statuses, the DeviceStatus object will be set to finished
Update the StatusBase objects based on the current status of the StdDAQ.
If the status matches one of the success or error statuses, the StatusBase object will be set to finished
or exception, respectively, and removed from the list of callbacks.
"""
status = self._status
completed_callbacks = []
for dev_status, success, error in self._status_callbacks.values():
if dev_status.done:
logger.warning("Status object already resolved. Skipping StdDaq callback.")
continue
if status in success:
dev_status.set_finished()
logger.info(f"StdDaq status is {status}")

View File

@@ -0,0 +1,315 @@
from __future__ import annotations
import pathlib
import threading
from typing import TYPE_CHECKING, Literal
import h5py
import numpy as np
from bec_lib import messages
from bec_lib.endpoints import MessageEndpoints
from bec_lib.logger import bec_logger
from ophyd import StatusBase
from typeguard import typechecked
if TYPE_CHECKING: # pragma: no cover
from bec_lib.redis_connector import RedisConnector
from ophyd import Signal
from ophyd_devices.interfaces.base_classes.psi_device_base import PSIDeviceBase
logger = bec_logger.logger
class StdDaqLiveProcessing:
USER_ACCESS = ["set_enabled", "set_mode", "get_mode"]
def __init__(self, parent: PSIDeviceBase, signal: Signal, signal2: Signal):
self.parent = parent
self.signal = signal
self.signal2 = signal2
self._enabled = False
self._mode = "sum"
self.connector: RedisConnector | None = (
self.parent.device_manager.connector if self.parent.device_manager else None
)
self.references: dict[str, np.ndarray] = {}
def get_mode(self) -> str:
"""
Get the current processing mode.
Returns:
str: Current processing mode, e.g., "sum".
"""
return self._mode
@typechecked
def set_mode(self, mode: Literal["sum", "average"]):
"""
Set the processing mode.
Args:
mode (str): Processing mode, currently only "sum" and "average" are supported.
"""
if mode not in ["sum", "average"]:
raise ValueError("Unsupported mode. Only 'sum' and 'average' are currently supported.")
self._mode = mode
def set_enabled(self, value: bool):
"""
Enable or disable live processing.
Args:
value (bool): True to enable, False to disable.
"""
if not isinstance(value, bool):
raise ValueError("Enabled must be a boolean value.")
self._enabled = value
#########################################
## Live Data Processing #################
#########################################
def on_new_data(self, data: np.ndarray):
"""
Process new data if live processing is enabled.
Args:
data (np.ndarray): New data to process.
"""
if not self._enabled:
logger.info("Live processing disabled; skipping data processing.")
return
match self._mode:
case "sum":
self.process_sum(data)
case "average":
self.process_average(data)
case _:
raise ValueError(f"Unknown mode: {self._mode}")
def process_sum(self, data: np.ndarray):
"""
Process data by summing it.
Args:
data (np.ndarray): Data to sum.
"""
if not isinstance(data, np.ndarray):
raise ValueError("Data must be a numpy array.")
summed_data = np.sum(data)
self.signal.put(summed_data)
def process_average(self, data: np.ndarray):
"""
Process data by averaging it.
Args:
data (np.ndarray): Data to average.
"""
if not isinstance(data, np.ndarray):
raise ValueError("Data must be a numpy array.")
averaged_data = np.mean(data)
self.signal.put(averaged_data)
########################################
## Flat and Dark Field References ######
########################################
def apply_flat_dark_correction(self, data: np.ndarray) -> np.ndarray:
"""
Apply flat and dark field correction to the data.
Args:
data (np.ndarray): Data to correct.
Returns:
np.ndarray: Corrected data.
"""
if not isinstance(data, np.ndarray):
raise ValueError("Data must be a numpy array.")
flat = self.get_flat(data.shape) # type: ignore # ndarray.shape is of type _ShapeType, which is just a generic of Any
dark = self.get_dark(data.shape) # type: ignore
# If flat is just ones, we simply subtract dark from data
if np.all(flat == 1):
corrected_data = data - dark
return corrected_data
# Ensure that the division does not lead to division by zero
flat_corr = np.abs(flat - dark)
corrected_data = np.divide(
data - dark, flat_corr, out=np.zeros_like(data, dtype=np.float32), where=flat_corr != 0
)
return np.clip(corrected_data, a_min=0, a_max=None)
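As a worked example, the correction above, `(data - dark) / |flat - dark|` with a division-by-zero guard and clipping, can be reproduced with plain numpy. The pixel values here are made up for illustration:

```python
import numpy as np

def flat_dark_correct(data, flat, dark):
    """Mirror apply_flat_dark_correction: subtract dark, divide by the
    dark-corrected flat where it is nonzero, and clip negatives."""
    if np.all(flat == 1):
        # No real flat available: dark subtraction only
        return data - dark
    flat_corr = np.abs(flat - dark)
    corrected = np.divide(
        data - dark,
        flat_corr,
        out=np.zeros_like(data, dtype=np.float32),
        where=flat_corr != 0,  # pixels where flat == dark stay 0
    )
    return np.clip(corrected, a_min=0, a_max=None)

data = np.array([[10.0, 4.0], [2.0, 6.0]])
dark = np.array([[2.0, 2.0], [2.0, 2.0]])
flat = np.array([[6.0, 2.0], [2.0, 10.0]])  # two pixels where flat == dark
out = flat_dark_correct(data, flat, dark)
```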
@typechecked
def _load_and_update_reference(
self,
ref_type: Literal["flat", "dark"],
file_path: str | pathlib.PosixPath,
entry: str,
status: StatusBase | None = None,
) -> None:
"""
Update the reference field with data from a file.
Args:
ref_type (str): Type of reference, either "flat" or "dark".
file_path (str): Path to the file containing the reference data.
entry (str): Entry name in the file to read the data from.
status (StatusBase | None): Status object to report progress.
Raises:
ValueError: If the file path is not a string or if the entry is not found.
Exception: If there is an error reading the file or processing the data.
"""
try:
########################################################
# Remove these lines once the mount is fixed
if not isinstance(file_path, str):
file_path = str(file_path)
if file_path.startswith("/gpfs/test"):
file_path = file_path.replace("/gpfs/test", "/data/test")
########################################################
with h5py.File(file_path, "r") as file:
if entry not in file:
raise ValueError(f"Entry '{entry}' not found in the file.")
data = file[entry][:] # type: ignore
if not isinstance(data, np.ndarray):
raise ValueError("Data in the file must be a numpy array.")
if data.ndim == 2:
self.references[f"{ref_type}_{data.shape}"] = data # type: ignore
elif data.ndim == 3:
# For 3D data, we take the mean across the first axis
data = np.mean(data, axis=0)
self.references[f"{ref_type}_{data.shape}"] = data
else:
raise ValueError("Data must be a 2D or 3D numpy array.")
self._publish_to_redis(data, self._redis_endpoint_name(ref_type, data.shape)) # type: ignore
if status is not None:
status.set_finished()
except Exception as exc:
if status is not None:
status.set_exception(exc)
else:
logger.error(f"Failed to update {ref_type} field reference from {file_path}: {exc}")
raise
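The 2D-versus-3D handling reduces to: keep a 2D frame as-is, average a 3D stack over the frame axis. A numpy-only sketch with the h5py I/O omitted; `normalize_reference` is a hypothetical helper name:

```python
import numpy as np

def normalize_reference(data: np.ndarray) -> np.ndarray:
    """Collapse a reference to a single 2D frame, as in
    _load_and_update_reference: 2D passes through, a 3D stack is
    averaged over the first (frame) axis."""
    if data.ndim == 2:
        return data
    if data.ndim == 3:
        return np.mean(data, axis=0)
    raise ValueError("Data must be a 2D or 3D numpy array.")

# Two 2x2 frames with values 1.0 and 3.0 -> averaged frame of 2.0
stack = np.stack([np.full((2, 2), 1.0), np.full((2, 2), 3.0)])
ref = normalize_reference(stack)
```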
def update_reference_with_file(
self,
reference_type: Literal["dark", "flat"],
file_path: str | pathlib.PosixPath,
entry: str,
wait: bool = False,
) -> StatusBase:
"""
Update the reference with a new file.
Args:
reference_type (Literal["dark", "flat"]): Type of reference to update.
file_path (str | pathlib.PosixPath): Path to the file containing the reference data.
entry (str): Entry name in the file to read the data from.
wait (bool): Whether to wait for the update to complete.
"""
status = StatusBase()
if not wait:
# If not waiting, run the update in a separate thread
threading.Thread(
target=self._load_and_update_reference,
args=(reference_type, file_path, entry, status),
).start()
return status
self._load_and_update_reference(reference_type, file_path, entry, status=status)
status.wait()
return status
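The wait/no-wait split above — run synchronously, or hand the work to a thread and return a status object immediately — can be sketched with `threading.Event` standing in for ophyd's `StatusBase`. `SimpleStatus` and `load_reference` are hypothetical stand-ins, not part of the module:

```python
from __future__ import annotations

import threading

class SimpleStatus:
    """Minimal stand-in for ophyd's StatusBase: a done flag plus an
    optional stored exception."""

    def __init__(self):
        self._event = threading.Event()
        self.exception: Exception | None = None

    def set_finished(self):
        self._event.set()

    def set_exception(self, exc: Exception):
        self.exception = exc
        self._event.set()

    def wait(self, timeout: float | None = None) -> bool:
        return self._event.wait(timeout)

def load_reference(status: SimpleStatus):
    try:
        # ... file reading would happen here ...
        status.set_finished()
    except Exception as exc:
        status.set_exception(exc)

def update_reference(wait: bool = False) -> SimpleStatus:
    status = SimpleStatus()
    if not wait:
        # Fire and forget: the caller can wait on the returned status later
        threading.Thread(target=load_reference, args=(status,)).start()
        return status
    load_reference(status)
    status.wait()
    return status

st = update_reference(wait=False)
st.wait(timeout=5)
```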
def get_flat(self, shape: tuple[int, int]) -> np.ndarray:
"""
Get the flat field reference for a specific shape.
Args:
shape (tuple[int, int]): Shape of the flat field reference to retrieve.
Returns:
np.ndarray: Flat field reference for the specified shape.
"""
if not isinstance(shape, tuple) or len(shape) != 2:
raise ValueError("Shape must be a tuple of two integers.")
key = f"flat_{shape}"
if key not in self.references:
# if the reference is not found, check Redis for it
redis_data = self._get_from_redis(self._redis_endpoint_name("flat", shape))
if redis_data is not None:
self.references[key] = redis_data
else:
# If not found in Redis, create a default flat field reference
self.references[key] = np.ones(shape) # Default to ones if not found
return self.references[key]
def get_dark(self, shape: tuple[int, int]) -> np.ndarray:
"""
Get the dark field reference for a specific shape.
Args:
shape (tuple[int, int]): Shape of the dark field reference to retrieve.
Returns:
np.ndarray: Dark field reference for the specified shape.
"""
if not isinstance(shape, tuple) or len(shape) != 2:
raise ValueError("Shape must be a tuple of two integers.")
key = f"dark_{shape}"
if key not in self.references:
redis_data = self._get_from_redis(self._redis_endpoint_name("dark", shape))
if redis_data is not None:
self.references[key] = redis_data
else:
self.references[key] = np.zeros(shape)
return self.references[key]
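Both getters follow the same three-tier lookup: local cache, then Redis, then a synthesized default (ones for flat, zeros for dark). A minimal sketch of the pattern, with `fetch_remote` as a hypothetical stand-in for the Redis lookup:

```python
import numpy as np

def get_reference(cache, fetch_remote, shape, ref_type):
    """Three-tier lookup mirroring get_flat/get_dark: local cache,
    then a remote store, then a synthesized default."""
    key = f"{ref_type}_{shape}"
    if key not in cache:
        remote = fetch_remote(key)
        if remote is not None:
            cache[key] = remote
        elif ref_type == "flat":
            cache[key] = np.ones(shape)   # neutral flat: no scaling
        else:
            cache[key] = np.zeros(shape)  # neutral dark: no offset
    return cache[key]

cache = {}
# Remote store has nothing, so the flat falls back to ones
ref = get_reference(cache, lambda key: None, (2, 3), "flat")
```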
def _redis_endpoint_name(self, ref_type: str, shape: tuple[int, int]) -> str:
return f"{self.parent.name}_{ref_type}_{shape}"
def _publish_to_redis(self, data: np.ndarray, name: str) -> None:
"""
Publish processed data to Redis.
Args:
data (np.ndarray): Data to publish.
name (str): Name of the data for Redis.
"""
if self.connector is None:
logger.warning("Redis connector is not set. Cannot publish data.")
return
msg = messages.ProcessedDataMessage(data={"data": data, "name": name, "shape": data.shape})
self.connector.xadd(
MessageEndpoints.processed_data(process_id=name), msg_dict={"data": msg}, max_size=1
)
def _get_from_redis(self, name: str) -> np.ndarray | None:
"""
Retrieve data from Redis.
Args:
name (str): Name of the data to retrieve.
Returns:
np.ndarray: Retrieved data.
"""
if self.connector is None:
logger.warning("Redis connector is not set. Cannot retrieve data.")
return None
msg = self.connector.get_last(MessageEndpoints.processed_data(process_id=name))
if not msg:
return None
if isinstance(msg, dict):
msg = msg.get("data")
if not isinstance(msg, messages.ProcessedDataMessage):
logger.error(f"Received unexpected message type: {type(msg)}")
return None
if isinstance(msg.data, list):
data = msg.data[0].get("data")
else:
data = msg.data.get("data")
if not isinstance(data, np.ndarray):
logger.error("Data retrieved from Redis is not a numpy array.")
return None
return data
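The defensive unwrapping above (dict-wrapped stream entries, a data field that may be a single dict or a list of dicts) can be exercised without Redis. `FakeMessage` is a hypothetical stand-in for `ProcessedDataMessage`:

```python
from __future__ import annotations

from dataclasses import dataclass

import numpy as np

@dataclass
class FakeMessage:
    """Stand-in for bec_lib's ProcessedDataMessage (illustration only)."""
    data: dict | list

def unwrap(msg) -> np.ndarray | None:
    """Mirror the unwrapping in _get_from_redis."""
    if not msg:
        return None
    if isinstance(msg, dict):  # stream entries may arrive dict-wrapped
        msg = msg.get("data")
    if not isinstance(msg, FakeMessage):
        return None
    # The data field is either a single dict or a list of dicts
    entry = msg.data[0] if isinstance(msg.data, list) else msg.data
    data = entry.get("data")
    return data if isinstance(data, np.ndarray) else None

arr = np.arange(4).reshape(2, 2)
wrapped = {"data": FakeMessage(data=[{"data": arr, "name": "flat_(2, 2)"}])}
result = unwrap(wrapped)
```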

View File

@@ -1,2 +1,10 @@
from .tutorial_fly_scan import AcquireDark, AcquireWhite, AcquireRefs, AcquireProjections, TutorialFlyScanContLine
from .tomcat_scans import TomcatSnapNStep, TomcatSimpleSequence
from .simple_scans import AcquireDark, AcquireFlat, AcquireReferences, TomoFlyScan, TomoScan
from .tomcat_scans import TomcatSimpleSequence, TomcatSnapNStep
# from .tutorial_fly_scan import (
# # AcquireDark,
# AcquireProjections,
# AcquireRefs,
# AcquireWhite,
# TutorialFlyScanContLine,
# )

View File

@@ -0,0 +1,410 @@
import time
import numpy as np
from bec_lib.device import DeviceBase
from bec_server.scan_server.scans import Acquire, AsyncFlyScanBase
from bec_lib import bec_logger
logger = bec_logger.logger
class Shutter:
"""Shutter status"""
CLOSED = 0
OPEN = 1
class AcquireDarkV2(Acquire):
scan_name = "acquire_dark_v2"
required_kwargs = ["exp_burst"]
gui_config = {"Acquisition parameters": ["exp_burst"]}
def __init__(self, exp_burst: int, file_prefix="", **kwargs):
"""
Acquire dark images. This scan is used to acquire dark images. Dark images are images taken with the shutter
closed and no beam on the sample. Dark images are used to correct the data images for dark current.
NOTE: this scan has a special `direct` operation mode that does not call `stage`, since no ScanInfo is emitted for direct calls.
Args:
exp_burst : int
Number of dark images to acquire (no default)
file_prefix : str
File prefix
Examples:
>>> scans.acquire_dark_v2(5)
"""
super().__init__(exp_burst=exp_burst, file_prefix=file_prefix, **kwargs)
self.burst_at_each_point = 1  # number of individual triggers at each point
self.exp_burst = exp_burst
self.file_prefix = file_prefix
def pre_scan(self):
"""Close the shutter before scan"""
yield from self.stubs.set(device=["eyex"], value=[Shutter.CLOSED])
return super().pre_scan()
def direct(self):
"""Direct scan procedure call"""
# Collect relevant devices
self.cams = [
cam.name
for cam in self.device_manager.devices.get_devices_with_tags("camera")
if cam.enabled
]
self.prev = [
pre.name
for pre in self.device_manager.devices.get_devices_with_tags("preview")
if pre.enabled
]
self.daqs = [
daq.name
for daq in self.device_manager.devices.get_devices_with_tags("daq")
if daq.enabled
]
# Do not call stage, as there's no ScanInfo emitted for direct call
for daq in self.daqs:
cam = yield from self.stubs.send_rpc_and_wait(daq, "datasource.get")
prefix = f"{self.file_prefix}_{cam}_dark"
yield from self.stubs.send_rpc_and_wait(daq, "file_prefix.set", prefix)
yield from self.stubs.send_rpc_and_wait(daq, "num_images.set", self.exp_burst)
yield from self.stubs.send_rpc_and_wait(daq, "arm")
for prev in self.prev:
yield from self.stubs.send_rpc_and_wait(prev, "arm")
for cam in self.cams:
yield from self.stubs.send_rpc_and_wait(
cam, "configure", {"exposure_num_burst": self.exp_burst}
)
yield from self.stubs.send_rpc_and_wait(cam, "arm")
yield from self.pre_scan()
yield from self.scan_core()
yield from self.finalize()
yield from self.unstage()
yield from self.cleanup()
class AcquireWhiteV2(Acquire):
scan_name = "acquire_white_v2"
gui_config = {"Acquisition parameters": ["exp_burst"]}
def __init__(
self,
motor: DeviceBase,
exp_burst: int,
sample_position_out: float,
sample_angle_out: float,
file_prefix: str = "",
**kwargs,
):
"""
Acquire flat field images. This scan is used to acquire flat field images. The flat field
image is an image taken with the shutter open but the sample out of the beam. Flat field
images are used to correct the data images for non-uniformity in the detector.
Args:
motor : DeviceBase
Motor to be moved to move the sample out of beam
exp_burst : int
Number of flat field images to acquire (no default)
sample_position_out : float
Position to move the sample stage out of beam and take flat field images
sample_angle_out : float
Angular position where to take the flat field images
Examples:
>>> scans.acquire_white_v2(dev.samx, 5, 20, 0)
"""
super().__init__(exp_burst=exp_burst, **kwargs)
self.exp_burst = exp_burst
self.file_prefix = file_prefix
self.burst_at_each_point = 1
self.scan_motors = [motor, "eyez"]
# self.scan_motors = [motor, "es1_roty"]
self.out_position = [sample_position_out, sample_angle_out]
def pre_scan(self):
"""Open the shutter before scan"""
# Move sample out
yield from self._move_scan_motors_and_wait(self.out_position)
# Open the main shutter (TODO change to the correct shutter device)
yield from self.stubs.set(device=["eyex"], value=[Shutter.OPEN])
return super().pre_scan()
def cleanup(self):
"""Close the shutter after scan"""
# Close fast shutter
yield from self.stubs.set(device=["eyex"], value=[Shutter.CLOSED])
return super().cleanup()
def direct(self):
"""Direct scan procedure call"""
# Collect relevant devices
self.cams = [
cam.name
for cam in self.device_manager.devices.get_devices_with_tags("camera")
if cam.enabled
]
self.prev = [
pre.name
for pre in self.device_manager.devices.get_devices_with_tags("preview")
if pre.enabled
]
self.daqs = [
daq.name
for daq in self.device_manager.devices.get_devices_with_tags("daq")
if daq.enabled
]
# Do not call stage, as there's no ScanInfo emitted for direct call
for daq in self.daqs:
cam = yield from self.stubs.send_rpc_and_wait(daq, "datasource.get")
prefix = f"{self.file_prefix}_{cam}_white"
yield from self.stubs.send_rpc_and_wait(daq, "file_prefix.set", prefix)
yield from self.stubs.send_rpc_and_wait(daq, "num_images.set", self.exp_burst)
yield from self.stubs.send_rpc_and_wait(daq, "arm")
for prev in self.prev:
yield from self.stubs.send_rpc_and_wait(prev, "arm")
for cam in self.cams:
yield from self.stubs.send_rpc_and_wait(
cam, "configure", {"exposure_num_burst": self.exp_burst}
)
yield from self.stubs.send_rpc_and_wait(cam, "arm")
yield from self.pre_scan()
yield from self.scan_core()
yield from self.finalize()
yield from self.unstage()
yield from self.cleanup()
# class AcquireProjections(AsyncFlyScanBase):
# scan_name = "acquire_projections"
# gui_config = {
# "Motor": ["motor"],
# "Acquisition parameters": ["sample_position_in", "start_angle", "angular_range" ],
# "Camera": ["exp_time", "exp_burst"]
# }
# def __init__(self,
# motor: DeviceBase,
# exp_burst: int,
# sample_position_in: float,
# start_angle: float,
# angular_range: float,
# exp_time:float,
# **kwargs):
# """
# Acquire projection images.
# Args:
# motor : DeviceBase
# Motor to move continuously from start to stop position
# exp_burst : int
# Number of flat field images to acquire (no default)
# sample_position_in : float
# Position to move the sample stage to position the sample in the beam
# start_angle : float
# Angular start position for the scan
# angular_range : float
# Angular range
# exp_time : float, optional
# Exposure time [ms]. If not specified, the currently configured value on the camera will be used
# exp_period : float, optional
# Exposure period [ms]. If not specified, the currently configured value on the camera will be used
# image_width : int, optional
# ROI size in the x-direction [pixels]. If not specified, the currently configured value on the camera will be used
# image_height : int, optional
# ROI size in the y-direction [pixels]. If not specified, the currently configured value on the camera will be used
# acq_mode : str, optional
# Predefined acquisition mode (default= 'default')
# file_path : str, optional
# File path for standard daq
# ddc_trigger : int, optional
# Drive Data Capture Trigger
# ddc_source0 : int, optional
# Drive Data capture Input0
# Returns:
# ScanReport
# Examples:
# >>> scans.acquire_projections()
# """
# self.motor = motor
# super().__init__(exp_time=exp_time,**kwargs)
# self.burst_at_each_point = 1
# self.sample_position_in = sample_position_in
# self.start_angle = start_angle
# self.angular_range = angular_range
# self.dark_shutter_pos_out = 1 ### change with a variable
# self.dark_shutter_pos_in = 0 ### change with a variable
# def update_scan_motors(self):
# return [self.motor]
# def prepare_positions(self):
# self.positions = np.array([[self.start_angle], [self.start_angle+self.angular_range]])
# self.num_pos = None
# yield from self._set_position_offset()
# def scan_core(self):
# # move to in position and go to start angular position
# yield from self.stubs.set(device=["eyez", self.motor], value=[self.sample_position_in, self.positions[0][0]])
# # open the shutter
# yield from self.stubs.set(device="eyex", value=self.dark_shutter_pos_out)
# # TODO add opening of fast shutter
# # start the flyer
# flyer_request = yield from self.stubs.set(
# device=self.motor, value=self.positions[1][0], wait=False
# )
# self.connector.send_client_info(
# "Starting the scan", show_asap=True, rid=self.metadata.get("RID")
# )
# yield from self.stubs.trigger()
# while not flyer_request.done:
# yield from self.stubs.read(
# group="monitored", point_id=self.point_id
# )
# time.sleep(1)
# # increase the point id
# self.point_id += 1
# self.num_pos = self.point_id
class AcquireRefsV2(Acquire):
scan_name = "acquire_refs_v2"
gui_config = {}
def __init__(
self,
motor: DeviceBase,
num_darks: int = 0,
num_flats: int = 0,
sample_angle_out: float = 0,
sample_position_in: float = 0,
sample_position_out: float = 1,
file_prefix_dark: str = "tmp_dark",
file_prefix_white: str = "tmp_white",
**kwargs,
):
"""
Acquire reference images (darks + whites) and return to beam position.
Reference images are acquired automatically in an optimized sequence and
the sample is returned to the sample_position_in afterwards.
Args:
motor : DeviceBase
Motor to be moved to move the sample out of beam
num_darks : int , optional
Number of dark field images to acquire
num_flats : int , optional
Number of white field images to acquire
sample_angle_out : float , optional
Angular position where to take the flat field images
sample_position_in : float , optional
Sample stage X position for sample in beam [um]
sample_position_out : float ,optional
Sample stage X position for sample out of the beam [um]
exp_time : float, optional
Exposure time [ms]. If not specified, the currently configured value
on the camera will be used
exp_period : float, optional
Exposure period [ms]. If not specified, the currently configured value
on the camera will be used
image_width : int, optional
ROI size in the x-direction [pixels]. If not specified, the currently
configured value on the camera will be used
image_height : int, optional
ROI size in the y-direction [pixels]. If not specified, the currently
configured value on the camera will be used
acq_mode : str, optional
Predefined acquisition mode (default= 'default')
file_path : str, optional
File path for standard daq
Returns:
ScanReport
Examples:
>>> scans.acquire_refs_v2(sample_angle_out=90, sample_position_in=10, num_darks=5, num_flats=5, exp_time=0.1)
"""
self.motor = motor
super().__init__(**kwargs)
self.sample_position_in = sample_position_in
self.sample_position_out = sample_position_out
self.sample_angle_out = sample_angle_out
self.num_darks = num_darks
self.num_flats = num_flats
self.file_prefix_dark = file_prefix_dark
self.file_prefix_white = file_prefix_white
self.scan_parameters["std_daq_params"] = {"reconnect": False}
def stage(self):
"""Wrapped scan doesn't need staging"""
yield None
def scan_core(self):
if self.num_darks:
msg = f"Acquiring {self.num_darks} dark images"
logger.warning(msg)
self.connector.send_client_info(msg, show_asap=True, rid=self.metadata.get("RID"))
darks = AcquireDarkV2(
exp_burst=self.num_darks,
# file_prefix=self.file_prefix_dark,
device_manager=self.device_manager,
metadata=self.metadata,
instruction_handler=self.stubs._instruction_handler,
**self.caller_kwargs,
)
yield from darks.direct()
self.point_id = darks.point_id
if self.num_flats:
msg = f"Acquiring {self.num_flats} flat field images"
logger.warning(msg)
self.connector.send_client_info(msg, show_asap=True, rid=self.metadata.get("RID"))
logger.warning("Calling AcquireWhite")
flats = AcquireWhiteV2(
motor=self.motor,
exp_burst=self.num_flats,
sample_position_out=self.sample_position_out,
# sample_angle_out=self.sample_angle_out,
device_manager=self.device_manager,
metadata=self.metadata,
instruction_handler=self.stubs._instruction_handler,
**self.caller_kwargs,
)
flats.point_id = self.point_id
yield from flats.direct()
self.point_id = flats.point_id
## TODO move sample in beam and do not wait
## TODO move rotation to angle and do not wait
logger.warning("[AcquireRefsV2] Done with scan_core")

View File

@@ -0,0 +1,12 @@
# from .metadata_schema_template import ExampleSchema
METADATA_SCHEMA_REGISTRY = {
# Add models which should be used to validate scan metadata here.
# Make a model according to the template, and import it as above
# Then associate it with a scan like so:
# "example_scan": ExampleSchema
}
# Define a default schema type which should be used as the fallback for everything:
DEFAULT_SCHEMA = None

View File

@@ -0,0 +1,34 @@
# # By inheriting from BasicScanMetadata you can define a schema by which metadata
# # supplied to a scan must be validated.
# # This schema is a Pydantic model: https://docs.pydantic.dev/latest/concepts/models/
# # but by default it will still allow you to add any arbitrary information to it.
# # That is to say, when you run a scan with which such a model has been associated in the
# # metadata_schema_registry, you can supply any python dictionary with strings as keys
# # and built-in python types (strings, integers, floats) as values, and these will be
# # added to the experiment metadata, but it *must* contain the keys and values of the
# # types defined in the schema class.
# #
# #
# # For example, say that you would like to enforce recording information about sample
# # pretreatment, you could define the following:
# #
#
# from bec_lib.metadata_schema import BasicScanMetadata
#
#
# class ExampleSchema(BasicScanMetadata):
# treatment_description: str
# treatment_temperature_k: int
#
#
# # If this was used according to the example in metadata_schema_registry.py,
# # then when calling the scan, the user would need to write something like:
# >>> scans.example_scan(
# >>> motor,
# >>> 1,
# >>> 2,
# >>> 3,
# >>> metadata={"treatment_description": "oven overnight", "treatment_temperature_k": 575},
# >>> )
#
# # And the additional metadata would be saved in the HDF5 file created for the scan.

View File

@@ -0,0 +1,448 @@
from __future__ import annotations
import time
from typing import Literal
import numpy as np
from bec_lib.device import DeviceBase
from bec_lib.logger import bec_logger
from bec_server.scan_server.scans import AsyncFlyScanBase, LineScan, ScanBase
logger = bec_logger.logger
class TomoComponents:
def __init__(self, scan: ScanBase):
self.scan = scan
self.stubs = scan.stubs
self.device_manager = scan.device_manager
self.connector = scan.device_manager.connector
# Update the available cameras for the current scan
self.cameras = self._get_cameras()
def _get_cameras(self) -> list[DeviceBase]:
return [
cam.name
for cam in self.device_manager.devices.get_devices_with_tags("camera")
if cam.enabled
]
def open_shutter(self):
"""
Open the shutter if it is closed.
"""
logger.info("Opening shutter.")
yield from self.stubs.set(device=["shutter"], value=[1])
def close_shutter(self):
"""
Close the shutter if it is open.
"""
yield from self.stubs.set(device=["shutter"], value=[0])
def restart_cameras(
self,
name: str,
num_images: int,
file_suffix: str = "",
file_path: str = "",
frames_per_trigger: int = 1,
):
"""
Restart the cameras with a new configuration.
This is typically used to reset the cameras during another scan, e.g. before acquiring dark or flat images.
Args:
name (str): Name of the configuration to restart with.
num_images (int): Number of images to acquire.
file_suffix (str): Suffix for the file names.
file_path (str): Path where the files will be saved.
frames_per_trigger (int): Number of frames to acquire per trigger.
"""
for cam in self.cameras:
yield from self.stubs.send_rpc_and_wait(
device=cam,
func_name="restart_with_new_config",
name=name,
file_suffix=file_suffix,
file_path=file_path,
num_images=num_images,
frames_per_trigger=frames_per_trigger,
)
def scan_report_instructions(self):
"""
Generate scan report instructions for the acquisition.
This method provides the necessary instructions to listen to the camera progress during the scan.
"""
if not self.cameras:
return
# Use the first camera or "gfcam" if available for reporting
report_camera = "gfcam" if "gfcam" in self.cameras else self.cameras[0]
yield from self.stubs.scan_report_instruction({"device_progress": [report_camera]})
def complete(self):
"""
Complete the acquisition by sending an RPC to each camera.
This method is typically called after the acquisition is done to finalize the process and start
writing the virtual dataset.
"""
for cam in self.cameras:
yield from self.stubs.send_rpc_and_wait(device=cam, func_name="on_complete")
def restore_configs(self, name: str):
"""
Restore the camera configurations after an acquisition.
Args:
name (str): Name of the configuration to restore.
"""
for cam in self.cameras:
yield from self.stubs.send_rpc_and_wait(
device=cam, func_name="restore_config", name=name
)
def update_live_processing_references(self, ref_type: Literal["dark", "flat"]):
"""
Update the live processing references for dark or flat images.
Args:
ref_type (Literal["dark", "flat"]): Type of reference to update.
"""
if ref_type not in ["dark", "flat"]:
raise ValueError("ref_type must be either 'dark' or 'flat'.")
logger.info(f"Updating live processing references for {ref_type} images.")
for cam in self.cameras:
yield from self.stubs.send_rpc_and_wait(
device=cam, func_name="update_live_processing_reference", reference_type=ref_type
)
def acquire_dark(self, num_images: int, exposure_time: float, name="dark", restart=True, restore=True):
"""
Acquire dark images.
Args:
num_images (int): Number of dark images to acquire.
exposure_time (float): Exposure time for each dark image in seconds.
"""
if not num_images:
return
logger.info(f"Acquiring {num_images} dark images with exposure time {exposure_time}s.")
self.connector.send_client_info(f"Acquiring {num_images} dark images.")
if restart:
yield from self.restart_cameras(
name=name, file_suffix=name, num_images=num_images, frames_per_trigger=num_images
)
# yield from self.close_shutter()
yield from self.stubs.trigger(min_wait=exposure_time * num_images)
yield from self.complete()
yield from self.update_live_processing_references(ref_type="dark")
if restore:
yield from self.restore_configs(name=name)
# yield from self.open_shutter()
self.connector.send_client_info("")
logger.info("Dark image acquisition complete.")
def acquire_flat(self, num_images: int, exposure_time: float, name="flat", restart=True, restore=True):
"""
Acquire flat images.
Args:
num_images (int): Number of flat images to acquire.
exposure_time (float): Exposure time for each flat image in seconds.
"""
if not num_images:
return
logger.info(f"Acquiring {num_images} flat images with exposure time {exposure_time}s.")
self.connector.send_client_info(f"Acquiring {num_images} flat images.")
if restart:
yield from self.restart_cameras(
name=name, file_suffix=name, num_images=num_images, frames_per_trigger=num_images
)
# yield from self.open_shutter()
yield from self.stubs.trigger(min_wait=exposure_time * num_images)
yield from self.complete()
yield from self.update_live_processing_references(ref_type="flat")
if restore:
yield from self.restore_configs(name=name)
logger.info("Flat image acquisition complete.")
self.connector.send_client_info("")
def acquire_references(self, num_darks: int, num_flats: int, exp_time: float, restart=True, restore=True):
yield from self.acquire_dark(num_darks, exposure_time=exp_time, restart=restart, restore=restore)
yield from self.acquire_flat(num_flats, exposure_time=exp_time, restart=restart, restore=restore)
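`acquire_references` simply chains the two acquisitions with `yield from`, which runs each sub-generator to completion in order. Stripped of device I/O, the delegation pattern looks like this (function and step names are illustrative only):

```python
def acquire_dark(num):
    for i in range(num):
        yield ("dark", i)

def acquire_flat(num):
    for i in range(num):
        yield ("flat", i)

def acquire_references(num_darks, num_flats):
    # yield from exhausts each sub-sequence before moving on,
    # exactly as the scan component chains dark then flat acquisition
    yield from acquire_dark(num_darks)
    yield from acquire_flat(num_flats)

steps = list(acquire_references(2, 1))
```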
class AcquireDark(ScanBase):
scan_name = "acquire_dark"
gui_config = {"Acquisition Parameters": ["num_images", "exp_time"]}
def __init__(self, num_images: int, exp_time: float, **kwargs):
"""
Acquire dark images.
Args:
num_images (int): Number of dark images to acquire.
exp_time (float): Exposure time for each dark image in seconds.
Returns:
ScanReport
"""
frames_per_trigger = num_images if num_images > 0 else 1
super().__init__(frames_per_trigger=frames_per_trigger, exp_time=exp_time, **kwargs)
self.components = TomoComponents(self)
def scan_report_instructions(self):
yield from self.components.scan_report_instructions()
def scan_core(self):
yield from self.components.acquire_dark(
self.frames_per_trigger, self.exp_time, restart=False
)
class AcquireFlat(ScanBase):
scan_name = "acquire_flat"
gui_config = {"Acquisition Parameters": ["num_images", "exp_time"]}
def __init__(self, num_images: int, exp_time: float, **kwargs):
"""
Acquire flat images.
Args:
num_images (int): Number of flat images to acquire.
exp_time (float): Exposure time for each flat image in seconds.
Returns:
ScanReport
"""
frames_per_trigger = num_images if num_images > 0 else 1
super().__init__(frames_per_trigger=frames_per_trigger, exp_time=exp_time, **kwargs)
self.components = TomoComponents(self)
def scan_report_instructions(self):
yield from self.components.scan_report_instructions()
def scan_core(self):
yield from self.components.acquire_flat(
self.frames_per_trigger, self.exp_time, restart=False
)
class AcquireReferences(ScanBase):
scan_name = "acquire_refs"
gui_config = {"Acquisition Parameters": ["num_darks", "num_flats", "exp_time"]}
def __init__(self, num_darks: int, num_flats: int, exp_time: float, **kwargs):
"""
Acquire darks and flats (reference images).
Args:
num_darks (int): Number of dark images to acquire.
num_flats (int): Number of flat images to acquire.
exp_time (float): Exposure time for each flat image in seconds.
Returns:
ScanReport
"""
super().__init__(exp_time=exp_time, **kwargs)
self.num_darks = num_darks
self.num_flats = num_flats
self.components = TomoComponents(self)
def scan_report_instructions(self):
yield from self.components.scan_report_instructions()
def pre_scan(self):
yield from self.components.acquire_references(self.num_darks, self.num_flats, self.exp_time)
def scan_core(self):
yield None
class TomoScan(LineScan):
scan_name = "tomo_line_scan"
def __init__(
self,
*args,
exp_time: float = 0,
steps: int = None,
relative: bool = False,
burst_at_each_point: int = 1,
num_darks: int = 0,
num_flats: int = 0,
**kwargs,
):
"""
A line scan for one or more motors.
Args:
*args (Device, float, float): pairs of device / start position / end position
exp_time (float): exposure time in s. Default: 0
steps (int): number of steps. Default: 10
relative (bool): if True, the start and end positions are relative to the current position. Default: False
burst_at_each_point (int): number of acquisitions per point. Default: 1
num_darks (int): number of dark images to acquire before the scan. Default: 0
num_flats (int): number of flat images to acquire before the scan. Default: 0
Returns:
ScanReport
Examples:
>>> scans.tomo_line_scan(dev.motor1, -5, 5, dev.motor2, -5, 5, steps=10, exp_time=0.1, relative=True)
"""
super().__init__(
*args,
exp_time=exp_time,
steps=steps,
relative=relative,
burst_at_each_point=burst_at_each_point,
**kwargs,
)
self.num_darks = num_darks
self.num_flats = num_flats
self.components = TomoComponents(self)
def pre_scan(self):
yield from self.components.acquire_dark(self.num_darks, self.exp_time, name="pre_scan_dark")
yield from self.components.acquire_flat(self.num_flats, self.exp_time, name="pre_scan_flat")
yield from super().pre_scan()
# def finalize(self):
# yield from super().finalize()
# yield from self.components.acquire_dark(
# self.num_darks, self.exp_time, name="post_scan_dark"
# )
# yield from self.components.acquire_flat(
# self.num_flats, self.exp_time, name="post_scan_flat"
# )
class TomoFlyScan(AsyncFlyScanBase):
scan_name = "tomo_fly_scan"
gui_config = {
"Motor": ["motor"],
"Acquisition parameters": ["sample_in"],
"Camera": ["exp_time"],
}
def __init__(
self,
motor: DeviceBase,
start: float,
stop: float,
sample_in: float,
sample_out: float,
num_darks: int = 0,
num_flats: int = 0,
exp_time: float = 0,
relative: bool = False,
**kwargs,
):
"""
A fly scan for a single motor.
Args:
motor (DeviceBase): The motor to scan.
start (float): Start position.
stop (float): Stop position.
sample_in (float): Sample in position.
sample_out (float): Sample out position.
num_darks (int): Number of dark images to acquire. Default: 0
num_flats (int): Number of flat images to acquire. Default: 0
exp_time (float): Exposure time in seconds. Default: 0
relative (bool): If True, the start and stop positions are relative to the current position. Default: False
Returns:
ScanReport
Examples:
>>> scans.tomo_fly_scan(dev.motor1, 0, 10, sample_in=5, sample_out=7, exp_time=0.1, num_darks=5, num_flats=5)
"""
super().__init__(relative=relative, exp_time=exp_time, **kwargs)
self.motor = motor
self.start = start
self.stop = stop
self.sample_in = sample_in
self.sample_out = sample_out
self.num_darks = num_darks
self.num_flats = num_flats
self.sample_stage = "samy" # change to the correct sample stage device
self.shutter = "hx" # change to the correct shutter device
self.components = TomoComponents(self)
def scan_report_instructions(self):
"""
Generate scan report instructions for the fly scan.
This method provides the necessary instructions to listen to the camera progress during the scan.
"""
# If no cameras are available, fall back to the default scan report instructions
if not self.components.cameras:
yield from super().scan_report_instructions()
return
# Use the first camera or "gfcam" if available for reporting
report_camera = (
"gfcam" if "gfcam" in self.components.cameras else self.components.cameras[0]
)
yield from self.stubs.scan_report_instruction({"device_progress": [report_camera]})
def prepare_positions(self):
self.positions = np.array([[self.start], [self.stop]])
self.num_pos = None
yield from self._set_position_offset()
def pre_scan(self):
yield from self.components.acquire_dark(self.num_darks, self.exp_time, name="pre_scan_dark")
yield from self.components.acquire_flat(self.num_flats, self.exp_time, name="pre_scan_flat")
yield from super().pre_scan()
def scan_core(self):
"""
Core scanning logic for the fly scan.
"""
# Open the shutter
# yield from self.components.open_shutter()
# Move the sample stage to the sample in position
sample_in_status = yield from self.stubs.set(
device=self.sample_stage, value=[self.sample_in], wait=False
)
# Move the rotation stage to the start position
motor_start_status = yield from self.stubs.set(
device=self.motor, value=[self.start], wait=False
)
# Wait for both movements to complete
sample_in_status.wait()
motor_start_status.wait()
# Kickoff the rotation stage to start the fly scan
flyer_status = yield from self.stubs.set(device=self.motor, value=[self.stop], wait=False)
# Send a single trigger to kick off the camera acquisition
yield from self.stubs.trigger()
# Monitor the flyer status whilst reading out monitored devices (e.g. temperatures)
while not flyer_status.done:
yield from self.stubs.read(group="monitored", point_id=self.point_id)
self.point_id += 1
time.sleep(1)
# Close the shutter after the scan is complete
# yield from self.components.close_shutter()
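The kickoff-and-poll pattern in `scan_core` (start the flyer without waiting, then read monitored devices until it reports done) can be sketched with a stand-in status object; `FakeStatus` is hypothetical and only imitates the `done` flag of a BEC device status:

```python
class FakeStatus:
    """Hypothetical device status: reports done after a fixed number of polls."""

    def __init__(self, polls_until_done):
        self._remaining = polls_until_done

    @property
    def done(self):
        if self._remaining > 0:
            self._remaining -= 1
            return False
        return True

def monitor_flyer(flyer_status):
    # Mirrors the while-loop in scan_core: one monitored readout
    # per poll until the flyer completes (the sleep is omitted here).
    point_id = 0
    while not flyer_status.done:
        point_id += 1
    return point_id
```

A flyer that needs three polls produces three monitored readouts; a flyer that is already done produces none.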


@@ -18,95 +18,11 @@ import os
import time
from bec_lib import bec_logger
from bec_server.scan_server.scans import AsyncFlyScanBase, ScanBase, ScanArgType
from bec_server.scan_server.scans import AsyncFlyScanBase, ScanArgType
logger = bec_logger.logger
class TomcatStepScan(ScanBase):
"""Simple software step scan for Tomcat
Example class for a simple BEC-based step scan using the low-level API. It is essentially a standard 'line_scan', except that it overrides the burst behaviour to use a camera burst instead
of individual software triggers.
NOTE: As decided by Tomcat, the scans should not manage the scope of devices
- All enabled devices are expected to be configured for acquisition by the end of stage
- Some devices can configure themselves from mandatory scan parameters (steps, burst)
- Other devices can be optionally configured by keyword arguments
- Devices will try to stage using whatever was set on them before
Example:
--------
>>> scans.tomcatstepscan(scan_start=-25, scan_end=155, steps=180, exp_time=0.005, exp_burst=5)
Common keyword arguments:
-------------------------
image_width : int
image_height : int
ddc_trigger : str
"""
scan_name = "tomcatstepscan"
scan_type = "step"
required_kwargs = ["scan_start", "scan_end", "steps"]
gui_config = {
"Movement parameters": ["steps"],
"Acquisition parameters": ["exp_time", "exp_burst"],
}
def update_scan_motors(self):
self.scan_motors = ["es1_roty"]
def _get_scan_motors(self):
self.scan_motors = ["es1_roty"]
def __init__(
self,
scan_start: float,
scan_end: float,
steps: int,
exp_time: float=0.005,
settling_time: float=0.2,
exp_burst: int=1,
**kwargs,
):
# Converting generic kwargs to tomcat device configuration parameters
super().__init__(
exp_time=exp_time,
settling_time=settling_time,
burst_at_each_point=1,
optim_trajectory=None,
**kwargs,
)
# For position calculation
self.motor = "es1_roty"
self.scan_start = scan_start
self.scan_end = scan_end
self.scan_steps = steps
self.scan_stepsize = (scan_end - scan_start) / steps
def _calculate_positions(self) -> None:
"""Pre-calculate scan positions"""
for ii in range(self.scan_steps + 1):
self.positions.append(self.scan_start + ii * self.scan_stepsize)
def _at_each_point(self, ind=None, pos=None):
""" Overriden at_each_point, using detector burst instaead of manual triggering"""
yield from self._move_scan_motors_and_wait(pos)
time.sleep(self.settling_time)
trigger_time = 0.001*self.exp_time * self.burst_at_each_point
yield from self.stubs.trigger(min_wait=trigger_time)
# yield from self.stubs.trigger(group='trigger', point_id=self.point_id)
# time.sleep(trigger_time)
yield from self.stubs.read(group="monitored", point_id=self.point_id)
# yield from self.stubs.read(group="monitored", point_id=self.point_id, wait_group=None)
self.point_id += 1
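`_calculate_positions` above produces `steps + 1` evenly spaced positions, with both endpoints included. A standalone version of the same arithmetic:

```python
def calculate_positions(scan_start, scan_end, steps):
    # steps intervals give steps + 1 positions, endpoints included,
    # matching _calculate_positions above.
    stepsize = (scan_end - scan_start) / steps
    return [scan_start + ii * stepsize for ii in range(steps + 1)]
```

For the docstring example (`scan_start=-25`, `scan_end=155`, `steps=180`) this gives 181 positions, one degree apart.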
class TomcatSnapNStep(AsyncFlyScanBase):
"""Simple software step scan forTomcat
@@ -115,21 +31,21 @@ class TomcatSnapNStep(AsyncFlyScanBase):
Example
-------
>>> scans.tomcatsnapnstepscan(scan_start=-25, scan_end=155, steps=180, exp_time=0.005, exp_burst=5)
>>> scans.tomcatsnapnstepscan(scan_start=-25, scan_end=155, steps=180, acq_time=5, exp_burst=5)
"""
scan_name = "tomcatsnapnstepscan"
scan_type = "scripted"
# scan_type = "scripted"
# arg_input = {"camera" : ScanArgType.DEVICE,
# "exp_time" : ScanArgType.FLOAT}
# arg_bundle_size= {"bundle": len(arg_input), "min": 1, "max": None}
required_kwargs = ["scan_start", "scan_end", "steps"]
gui_config = {
"Movement parameters": ["steps"],
"Acquisition parameters": ["exp_time", "exp_burst"],
"Acquisition parameters": ["acq_time", "exp_burst"],
}
def _get_scan_motors(self):
def _update_scan_motors(self):
self.scan_motors = ["es1_roty"]
def __init__(
@@ -137,8 +53,8 @@ class TomcatSnapNStep(AsyncFlyScanBase):
scan_start: float,
scan_end: float,
steps: int,
exp_time:float=0.005,
settling_time:float=0.2,
acq_time:float=5.0,
exp_burst:int=1,
sync:str="event",
**kwargs,
@@ -147,25 +63,30 @@ class TomcatSnapNStep(AsyncFlyScanBase):
self.scan_start = scan_start
self.scan_end = scan_end
self.scan_steps = steps
self.scan_stepsize = (scan_end - scan_start) / steps
self.scan_ntotal = exp_burst * (steps + 1)
self.exp_time = exp_time
self.exp_time = acq_time
self.exp_burst = exp_burst
self.settling_time = settling_time
self.scan_sync = sync
# General device configuration
# Gigafrost trigger mode
kwargs["parameter"]["kwargs"]["acq_mode"] = "ext_enable"
# Used for Aeroscript file substitutions for the task interface
filename = "AerotechSnapAndStepTemplate.ascript"
filesubs = self.get_filesubs()
filetext = self.render_file(filename, filesubs)
kwargs["parameter"]["kwargs"]["script_text"] = filetext
kwargs["parameter"]["kwargs"]["script_file"] = "bec.ascript"
self.scan_parameters["aerotech_config"] = {
"script_text":filetext,
"script_file":"bec.ascript",
"script_task": 4,
}
# self.scan_parameters["script_file"] = "bec.ascript"
# kwargs["parameter"]["kwargs"]["script_text"] = filetext
# kwargs["parameter"]["kwargs"]["script_file"] = "bec.ascript"
super().__init__(
exp_time=exp_time,
acq_time=acq_time,
exp_burst=exp_burst,
settling_time=settling_time,
burst_at_each_point=1,
optim_trajectory=None,
@@ -182,12 +103,12 @@ class TomcatSnapNStep(AsyncFlyScanBase):
}
filesubs = {
"startpos": self.scan_start,
"stepsize": self.scan_stepsize,
"stepsize": (self.scan_end - self.scan_start) / self.scan_steps,
"numsteps": self.scan_steps,
"exptime": 2 * self.exp_time * self.exp_burst,
"exptime": 0.002 * self.exp_time * self.exp_burst,
"settling": self.settling_time,
"psotrigger": p_modes[self.scan_sync],
"npoints": self.scan_ntotal,
"npoints": self.exp_burst * (self.scan_steps + 1),
}
return filesubs
@@ -196,7 +117,7 @@ class TomcatSnapNStep(AsyncFlyScanBase):
# Load the test file
filename = os.path.join(os.path.dirname(__file__), "../devices/aerotech/" + filename)
logger.info(f"Attempting to load file {filename}")
with open(filename) as f:
with open(filename, "r", encoding="utf-8") as f:
templatetext = f.read()
# Substitute jinja template
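`render_file` fills an AeroScript template with the values from `get_filesubs`. The real code renders a Jinja template; as a rough stdlib-only stand-in, a naive `{{ name }}` replacement illustrates the substitution flow (the function name and placeholder style here are illustrative, not the actual template syntax):

```python
def render_template(text, filesubs):
    # Replace every "{{ key }}" placeholder with its value.
    for key, value in filesubs.items():
        text = text.replace("{{ " + key + " }}", str(value))
    return text

filesubs = {"startpos": -25, "stepsize": 1.0, "numsteps": 180}
script = render_template(
    "var $fStartPosition as real = {{ startpos }}\n"
    "var $fStepSize as real = {{ stepsize }}\n",
    filesubs,
)
```

Jinja supports far more (loops, conditionals, filters); this sketch only covers the plain value substitution used by the file substitutions above.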
@@ -208,12 +129,15 @@ class TomcatSnapNStep(AsyncFlyScanBase):
"""The actual scan routine"""
print("TOMCAT Running scripted scan (via Jinjad AeroScript)")
t_start = time.time()
# Kickoff
yield from self.stubs.send_rpc_and_wait("es1_tasks", "kickoff")
# Complete
# FIXME: this will swallow errors
# yield from self.stubs.complete(device="es1_tasks")
st = yield from self.stubs.send_rpc_and_wait("es1_tasks", "complete")
# st.wait()
yield from self.stubs.send_rpc_and_wait("es1_tasks", "complete")
# Check final state
task_states = yield from self.stubs.send_rpc_and_wait("es1_tasks", "taskStates.get")
if task_states[4] == 8:
raise RuntimeError(f"Task {4} finished in ERROR state")
@@ -243,11 +167,11 @@ class TomcatSimpleSequence(AsyncFlyScanBase):
Example
-------
>>> scans.tomcatsimplesequencescan(33, 180, 180, exp_time=0.005, exp_burst=1800, repeats=10)
>>> scans.tomcatsimplesequencescan(33, 180, 180, acq_time=5, exp_burst=1800, repeats=10)
"""
scan_name = "tomcatsimplesequencescan"
scan_type = "scripted"
# scan_type = "scripted"
scan_report_hint = "table"
required_kwargs = ["scan_start", "gate_high", "gate_low"]
gui_config = {
@@ -255,7 +179,7 @@ class TomcatSimpleSequence(AsyncFlyScanBase):
"Acquisition parameters": [
"gate_high",
"gate_low",
"exp_time",
"acq_time",
"exp_burst",
"sync",
],
@@ -271,7 +195,7 @@ class TomcatSimpleSequence(AsyncFlyScanBase):
gate_low: float,
repeats: int = 1,
repmode: str = "PosNeg",
exp_time: float = 0.005,
acq_time: float = 5,
exp_burst: float = 180,
sync: str = "pso",
**kwargs,
@@ -282,13 +206,13 @@ class TomcatSimpleSequence(AsyncFlyScanBase):
self.gate_low = gate_low
self.scan_repnum = repeats
self.scan_repmode = repmode.upper()
self.exp_time = exp_time
self.exp_time = acq_time
self.exp_burst = exp_burst
self.scan_sync = sync
# Synthetic values
self.scan_ntotal = exp_burst * repeats
self.scan_velocity = gate_high / (exp_time * exp_burst)
self.scan_velocity = gate_high / (0.001*acq_time * exp_burst)
self.scan_acceleration = 500
self.scan_safedistance = 10
self.scan_accdistance = (
@@ -314,7 +238,7 @@ class TomcatSimpleSequence(AsyncFlyScanBase):
kwargs["parameter"]["kwargs"]["script_file"] = "bec.ascript"
super().__init__(
exp_time=exp_time,
acq_time=acq_time,
settling_time=0.5,
relative=False,
burst_at_each_point=1,
@@ -327,7 +251,7 @@ class TomcatSimpleSequence(AsyncFlyScanBase):
# Load the test file
filename = os.path.join(os.path.dirname(__file__), "../devices/aerotech/" + filename)
logger.info(f"Attempting to load file {filename}")
with open(filename) as f:
with open(filename, 'r', encoding="utf-8") as f:
templatetext = f.read()
# Substitute jinja template
@@ -339,12 +263,14 @@ class TomcatSimpleSequence(AsyncFlyScanBase):
"""The actual scan routine"""
print("TOMCAT Running scripted scan (via Jinjad AeroScript)")
t_start = time.time()
# Kickoff
yield from self.stubs.send_rpc_and_wait("es1_tasks", "kickoff")
# Complete
# FIXME: this will swallow errors
# yield from self.stubs.complete(device="es1_tasks")
st = yield from self.stubs.send_rpc_and_wait("es1_tasks", "complete")
# st.wait()
yield from self.stubs.send_rpc_and_wait("es1_tasks", "complete")
task_states = yield from self.stubs.send_rpc_and_wait("es1_tasks", "taskStates.get")
if task_states[4] == 8:
raise RuntimeError(f"Task {4} finished in ERROR state")


@@ -340,7 +340,7 @@ class AcquireRefs(Acquire):
# to set signals on a device
darks = AcquireDark(
exp_burst=self.num_darks,
file_prefix=self.file_prefix_dark,
# file_prefix=self.file_prefix_dark,
device_manager=self.device_manager,
metadata=self.metadata,
instruction_handler=self.stubs._instruction_handler,
@@ -389,6 +389,7 @@ class AcquireRefs(Acquire):
self.point_id = flats.point_id
## TODO move sample in beam and do not wait
## TODO move rotation to angle and do not wait
logger.warning("Done with scan_core")
class TutorialFlyScanContLine(AsyncFlyScanBase):


@@ -16,6 +16,7 @@ class Measurement:
self.nimages_white = 100
self.start_angle = 0
self.angular_range = 180
self.sample_angle_out = 0
self.sample_position_in = 0
self.sample_position_out = 1
@@ -43,6 +44,10 @@ class Measurement:
self.exposure_period = self.det.cfgFramerate.get()
self.roix = self.det.cfgRoiX.get()
self.roiy = self.det.cfgRoiY.get()
self.get_position_rb()
#self.position_rb = False
#self.disable_position_rb_device()
def build_filename(self, acquisition_type='data'):
@@ -74,7 +79,8 @@ class Measurement:
exposure_period=None, roix=None, roiy=None,nimages=None,
nimages_dark=None, nimages_white=None,
start_angle=None, sample_angle_out=None,
sample_position_in=None, sample_position_out=None):
sample_position_in=None, sample_position_out=None,
position_rb=None):
"""
Reconfigure the measurement with any number of new parameters
@@ -110,6 +116,8 @@ class Measurement:
sample_position_out : float, optional
Sample stage X position for sample out of the beam [um]
(default = None)
position_rb : bool, optional
Enable position readback (default = None)
"""
if sample_name != None:
@@ -138,6 +146,13 @@ class Measurement:
self.sample_position_in = sample_position_in
if sample_position_out != None:
self.sample_position_out = sample_position_out
if position_rb != None:
if position_rb == True:
self.enable_position_rb_device()
elif position_rb == False:
self.disable_position_rb_device()
else:
print("WARNING! Position readback should be either True, False or None")
self.build_filename()
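The `position_rb` handling above is a tri-state toggle: `True` enables the readback device, `False` disables it, `None` leaves it untouched, and anything else only warns. A self-contained sketch of that dispatch, with callbacks standing in for the enable/disable methods:

```python
def apply_position_rb(position_rb, enable, disable):
    # None means "leave the device as it is", as in reconfigure() above.
    if position_rb is None:
        return "unchanged"
    if position_rb is True:
        enable()
        return "enabled"
    if position_rb is False:
        disable()
        return "disabled"
    # Mirrors the WARNING printed in reconfigure() above.
    return "warning"
```

The explicit `is True` / `is False` checks keep truthy non-boolean values (e.g. a stray `1` or string) on the warning path instead of silently toggling the device.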
@@ -168,6 +183,12 @@ class Measurement:
self.device_name = self.enabled_detectors[0].name
self.build_filename()
def disable_position_rb_device(self):
"Disable position readback device"
dev.es1_ddaq.enabled = False
self.position_rb = False
def enable_detector(self, detector_name):
"""
@@ -197,7 +218,12 @@ class Measurement:
self.device_name = self.enabled_detectors[0].name
self.build_filename()
def enable_position_rb_device(self):
"Enable position readback device"
dev.es1_ddaq.enabled = True
self.position_rb = True
def get_available_detectors(self):
"""
@@ -215,6 +241,14 @@ class Measurement:
"""
self.enabled_detectors = [obj for obj in dev.get_devices_with_tags('camera') if obj.enabled]
def get_position_rb(self):
"""
Get the position readback state from the device
"""
if dev.es1_ddaq.enabled == True:
self.position_rb = True
else:
self.position_rb = False
def show_available_detectors(self):
"""
@@ -242,39 +276,41 @@ class Measurement:
TODO: make it work for multiple devices
"""
print("Sample name: " + self.sample_name)
print("Data path: " + self.data_path)
print("Number of images: " + str(self.nimages))
print("Number of darks: " + str(self.nimages_dark))
print("Number of flats: " + str(self.nimages_white))
print("Sample name (sample_name): " + self.sample_name)
print("Data path (data_path): " + self.data_path)
print("Number of images (nimages): " + str(self.nimages))
print("Number of darks (nimages_dark): " + str(self.nimages_dark))
print("Number of flats (nimages_flat): " + str(self.nimages_white))
if self.exposure_time == None:
print("Exposure time: " + str(self.det.cfgExposure.get()))
print("Exposure time (exposure_time): " + str(self.det.cfgExposure.get()))
self.exposure_time = self.det.cfgExposure.get()
else:
print("Exposure time: " + str(self.exposure_time))
print("Exposure time (exposure_time): " + str(self.exposure_time))
if self.exposure_period == None:
print("Exposure period: " + str(self.det.cfgFramerate.get()))
print("Exposure period (exposure_period): " + str(self.det.cfgFramerate.get()))
self.exposure_period = self.det.cfgFramerate.get()
else:
print("Exposure period: " + str(self.exposure_period))
print("Exposure period (exposure_period): " + str(self.exposure_period))
if self.roix == None:
print("Roix: " + str(self.det.cfgRoiX.get()))
print("Roix (roix): " + str(self.det.cfgRoiX.get()))
self.roix = self.det.cfgRoiX.get()
else:
print("Roix: " + str(self.roix))
print("Roix (roix): " + str(self.roix))
if self.roiy == None:
print("Roiy: " + str(self.det.cfgRoiY.get()))
print("Roiy (roiy): " + str(self.det.cfgRoiY.get()))
self.roiy = self.det.cfgRoiY.get()
else:
print("Roiy: " + str(self.roiy))
print("Start angle: " + str(self.start_angle))
print("Sample angle out: " + str(self.sample_angle_out))
print("Sample position in: " + str(self.sample_position_in))
print("Sample position out: " + str(self.sample_position_out))
print("Roiy (roiy): " + str(self.roiy))
print("Start angle (start_angle): " + str(self.start_angle))
print("Angular range (angular_range): " + str(self.angular_range))
print("Sample angle out (sample_angle_out): " + str(self.sample_angle_out))
print("Sample position in (sample_position_in): " + str(self.sample_position_in))
print("Sample position out (sample_position_out): " + str(self.sample_position_out))
print("Position readback (position_rb): " + str(self.position_rb))
def acquire_darks(self,nimages_dark=None, exposure_time=None, exposure_period=None,
roix=None, roiy=None, acq_mode=None):
roix=None, roiy=None, acq_mode=None, **kwargs):
"""
Acquire a set of dark images with shutters closed.
@@ -315,17 +351,19 @@ class Measurement:
print("Handing over to 'scans.acquire_dark")
scans.acquire_dark(exp_burst=self.nimages_dark, exp_time=self.exposure_time, exp_period=self.exposure_period, image_width=self.roix,
image_height=self.roiy, acq_mode=acq_mode, file_path=self.file_path, nr_writers=2, base_path=self.base_path,
file_prefix=self.file_prefix)
file_prefix=self.file_prefix, ddc_trigger=4, ddc_source0=1, **kwargs)
def acquire_whites(self,nimages_white=None, sample_angle_out=None, sample_position_out=None,
def acquire_whites(self,motor="eyez", nimages_white=None, sample_angle_out=None, sample_position_out=None,
exposure_time=None, exposure_period=None,
roix=None, roiy=None, acq_mode=None):
roix=None, roiy=None, acq_mode=None, **kwargs):
"""
Acquire a set of white images with the shutter open and the sample out of the beam.
Parameters
----------
motor : DeviceBase
Motor to be moved to move the sample out of beam
nimages_white : int, optional
Number of white images to acquire (no default)
sample_angle_out : float, optional
@@ -348,6 +386,7 @@ class Measurement:
m.acquire_whites(nimages_white=100, exposure_time=5)
"""
self.motor_sample = motor
if nimages_white != None:
self.nimages_white = nimages_white
if sample_angle_out != None:
@@ -367,16 +406,76 @@ class Measurement:
### TODO: camera reset
print("Handing over to 'scans.acquire_whites")
scans.acquire_white(exp_burst=self.nimages_white, sample_angle_out=self.sample_angle_out, sample_position_out= self.sample_position_out,
scans.acquire_white(motor=self.motor_sample, exp_burst=self.nimages_white, sample_angle_out=self.sample_angle_out, sample_position_out= self.sample_position_out,
exp_time=self.exposure_time, exp_period=self.exposure_period, image_width=self.roix,
image_height=self.roiy, acq_mode=acq_mode, file_path=self.file_path, nr_writers=2, base_path=self.base_path,
file_prefix=self.file_prefix)
file_prefix=self.file_prefix, ddc_trigger=4, ddc_source0=1, **kwargs)
def acquire_projections(self, nimages=None, sample_position_in=None,
start_angle=None, angular_range=None,
exposure_time=None, exposure_period=None,
roix=None, roiy=None, acq_mode=None, **kwargs):
"""
Acquire a set of projection images with the shutter open and the sample in the beam.
def acquire_refs(self,nimages_dark=None, nimages_white=None, sample_angle_out=None,
Parameters
----------
nimages : int, optional
Number of projection images to acquire (no default)
sample_position_in : float, optional
Sample stage X position for sample in the beam [um]
start_angle : float, optional
Starting angular position [deg]
angular_range : float, optional
Angular range [deg]
exposure_time : float, optional
Exposure time [ms]. If not specified, the currently configured value on the camera will be used
exposure_period : float, optional
Exposure period [ms]
roix : int, optional
ROI size in the x-direction [pixels]
roiy : int, optional
ROI size in the y-direction [pixels]
acq_mode : str, optional
Predefined acquisition mode (default=None)
Example:
--------
m.acquire_projections(nimages=100, exposure_time=5)
"""
if nimages != None:
self.nimages = nimages
if sample_position_in != None:
self.sample_position_in = sample_position_in
if start_angle != None:
self.start_angle = start_angle
if angular_range != None:
self.angular_range = angular_range
if exposure_time != None:
self.exposure_time = exposure_time
if exposure_period != None:
self.exposure_period = exposure_period
if roix != None:
self.roix = roix
if roiy != None:
self.roiy = roiy
self.build_filename(acquisition_type='data')
### TODO: camera reset
print("Handing over to 'scans.acquire_projections")
scans.acquire_projections(motor="es1_roty", exp_burst=self.nimages, sample_position_in= self.sample_position_in,
start_angle = self.start_angle, angular_range = self.angular_range,
exp_time=self.exposure_time, exp_period=self.exposure_period, image_width=self.roix,
image_height=self.roiy, acq_mode=acq_mode, file_path=self.file_path, nr_writers=2,
base_path=self.base_path,file_prefix=self.file_prefix, ddc_trigger=4, ddc_source0=1, **kwargs)
def acquire_refs(self, motor="eyez", nimages_dark=None, nimages_white=None, sample_angle_out=None,
sample_position_in=None, sample_position_out=None,
exposure_time=None, exposure_period=None,
roix=None, roiy=None, acq_mode=None):
roix=None, roiy=None, acq_mode=None, **kwargs):
"""
Acquire reference images (darks + whites) and return to beam position.
@@ -385,6 +484,8 @@ class Measurement:
Parameters
----------
motor : DeviceBase
Motor to be moved to move the sample out of beam
nimages_dark : int, optional
Number of dark images to acquire (no default)
nimages_white : int, optional
@@ -435,10 +536,30 @@ class Measurement:
self.build_filename(acquisition_type='white')
file_prefix_white = self.file_prefix
print(file_prefix_dark)
print(file_prefix_white)
### TODO: camera reset
print("Handing over to 'scans.acquire_refs")
scans.acquire_refs(num_darks=self.nimages_dark, num_flats=self.nimages_white, sample_angle_out=self.sample_angle_out,
scans.acquire_refs(motor=motor, num_darks=self.nimages_dark, num_flats=self.nimages_white, sample_angle_out=self.sample_angle_out,
sample_position_in=self.sample_position_in, sample_position_out=self.sample_position_out,
exp_time=self.exposure_time, exp_period=self.exposure_period, image_width=self.roix,
image_height=self.roiy, acq_mode='default', file_path=self.file_path, nr_writers=2, base_path=self.base_path,
file_prefix_dark=file_prefix_dark, file_prefix_white=file_prefix_white)
file_prefix_dark=file_prefix_dark, file_prefix_white=file_prefix_white,
ddc_trigger=4, ddc_source0=1, **kwargs)
def start_preview(self, exposure_time=None, exposure_period=None,
preview_strategy='', preview_parameters=200, **kwargs):
"""
Start the camera in preview mode; no data will be written.
Parameters
----------
exposure_time : float, optional
Exposure time [ms]. Defaults to the currently configured value.
exposure_period : float, optional
Exposure period [ms]. Defaults to 50 for previews.
"""
if exposure_time is None:
exposure_time = self.exposure_time
if exposure_period is None:
exposure_period = 50 # no need to go faster for a preview


@@ -0,0 +1,158 @@
// Test program for simple zig-zag line scanning with PSO window output
// "enable" signal and DDC synchronized to external trigger input.
// The file expects external parameter validation
// The PSO locations arrays are set externally from EPICS PV
//
#define DDC_ADDR 0x800000
#define PSO_ADDR 0x0
enum ScanType
POS = 0
NEG = 1
POSNEG = 2
NEGPOS = 3
end
program
//////////////////////////////////////////////////////////////////////////
// External parameters - USE THESE
var $fStartPosition as real = 42
var $fScanRange as real = 180
var $iNumRepeat as integer = 13
var $eScanType as ScanType = ScanType.NEGPOS
var $iNumDdcRead as integer = 1300
var $fVelJog as real = 200
var $fVelScan as real = 150
var $fAcceleration = 500
var $fAccDistance as real = 22.5
var $eDdcTrigger as DriveDataCaptureTrigger = DriveDataCaptureTrigger.PsoEvent
//////////////////////////////////////////////////////////////////////////
// Internal parameters - don't use
var $axis as axis = ROTY
var $ii as integer
var $axisFaults as integer = 0
var $iDdcSafeSpace as integer = 4096
// Set acceleration
SetupAxisRampType($axis, RampType.Linear)
SetupAxisRampValue($axis, 0, $fAcceleration)
// set the actual scan range
var $fPosStart as real
var $fPosEnd as real
if $eScanType == ScanType.POS
$fPosStart = $fStartPosition - $fAccDistance
$fPosEnd = $fStartPosition + $fScanRange + $fAccDistance
elseif $eScanType == ScanType.NEG
$fPosStart = $fStartPosition + $fAccDistance
$fPosEnd = $fStartPosition - $fScanRange - $fAccDistance
elseif $eScanType == ScanType.POSNEG
$fPosStart = $fStartPosition - $fAccDistance
$fPosEnd = $fStartPosition + $fScanRange + $fAccDistance
elseif $eScanType == ScanType.NEGPOS
$fPosStart = $fStartPosition + $fAccDistance
$fPosEnd = $fStartPosition - $fScanRange - $fAccDistance
end
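The AeroScript block above pads the commanded move by the acceleration distance on each side, so the axis is at constant velocity over the whole PSO-triggered range. The same arithmetic in Python (note that the POS/POSNEG and NEG/NEGPOS pairs share their start/end formulas):

```python
def scan_endpoints(start, scan_range, acc_distance, scan_type):
    # POS/POSNEG scan upwards first; NEG/NEGPOS scan downwards first.
    if scan_type in ("POS", "POSNEG"):
        pos_start = start - acc_distance
        pos_end = start + scan_range + acc_distance
    elif scan_type in ("NEG", "NEGPOS"):
        pos_start = start + acc_distance
        pos_end = start - scan_range - acc_distance
    else:
        raise ValueError(f"unknown scan type: {scan_type}")
    return pos_start, pos_end
```

With the script defaults (`start=42`, `range=180`, `acc=22.5`) a POS scan runs from 19.5 to 244.5, while a NEGPOS scan starts at 64.5 and ends at -160.5.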
// Move to start position before the scan
// NOTE: Also wait for GigaFrost to start, otherwise early triggers might be missed
MoveAbsolute($axis, $fPosStart, $fVelJog)
WaitForInPosition($axis)
Dwell(2)
// Set globals for feedback
$rglobal[2] = $fPosStart
$rglobal[3] = $fPosEnd
// Configure PSO
// FIXME : When the controller is restarted
PsoDistanceConfigureInputs($axis, [PsoDistanceInput.XC4PrimaryFeedback])
PsoDistanceCounterOff($axis)
PsoDistanceEventsOff($axis)
PsoWindowConfigureEvents($axis, PsoWindowEventMode.None)
PsoWaveformOff($axis)
// Simple PSO trigger pattern
var $iPsoArrayPosAddr as integer = PSO_ADDR
var $iPsoArrayPos[] as real = [ UnitsToCounts($axis, 22.5), UnitsToCounts($axis, 180) ]
DriveArrayWrite($axis, $iPsoArrayPos, $iPsoArrayPosAddr, length($iPsoArrayPos), DriveArrayType.PsoDistanceEventDistances)
var $iPsoArrayNegAddr as integer = ($iPsoArrayPosAddr + length($iPsoArrayPos)) * 4
var $iPsoArrayNeg[] as real = [ UnitsToCounts($axis, 22.5), UnitsToCounts($axis, 180) ]
DriveArrayWrite($axis, $iPsoArrayNeg, $iPsoArrayNegAddr, length($iPsoArrayNeg), DriveArrayType.PsoDistanceEventDistances)
PsoDistanceConfigureArrayDistances($axis, $iPsoArrayPosAddr, length($iPsoArrayPos), 0)
PsoDistanceCounterOn($axis)
PsoDistanceEventsOn($axis)
PsoWaveformConfigureMode($axis, PsoWaveformMode.Toggle)
PsoWaveformOn($axis)
PsoOutputConfigureSource($axis, PsoOutputSource.Waveform)
// Configure Drive Data Collection
var $iDdcArraySize as integer = $iNumDdcRead
DriveDataCaptureConfigureInput($axis, 0, DriveDataCaptureInput.PrimaryFeedback);
DriveDataCaptureConfigureInput($axis, 1, DriveDataCaptureInput.AnalogInput0 );
DriveDataCaptureConfigureTrigger($axis, 0, $eDdcTrigger );
DriveDataCaptureConfigureTrigger($axis, 1, $eDdcTrigger );
DriveDataCaptureConfigureArray($axis, 0, DDC_ADDR, $iDdcArraySize);
DriveDataCaptureConfigureArray($axis, 1, DDC_ADDR + $iDdcSafeSpace + 8 * $iDdcArraySize, $iDdcArraySize);
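The two DDC capture arrays share drive memory: the second array starts after the first array's data region (8 bytes per sample, per the `8 * $iDdcArraySize` offset above) plus the `$iDdcSafeSpace` safety gap. A sketch of that layout calculation; the 8-byte sample size is an assumption about this particular controller:

```python
def ddc_array_addresses(base_addr, array_size, safe_space=4096, bytes_per_sample=8):
    # First array at the base address; second array after the first
    # array's data region plus the safety gap.
    addr0 = base_addr
    addr1 = base_addr + safe_space + bytes_per_sample * array_size
    return addr0, addr1
```

For the script's values (`DDC_ADDR = 0x800000`, 1300 samples) the second array lands 4096 + 10400 bytes above the base address.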
// Directly before scan
PsoDistanceCounterOn($axis)
DriveDataCaptureOn($axis, 0)
DriveDataCaptureOn($axis, 1)
///////////////////////////////////////////////////////////
// Start the actual scanning
///////////////////////////////////////////////////////////
if $eScanType == ScanType.POS || $eScanType == ScanType.NEG
PsoDistanceConfigureArrayDistances($axis, $iPsoArrayPosAddr, length($iPsoArrayPos), 0)
MoveAbsolute($axis, $fPosEnd, $fVelScan)
WaitForMotionDone($axis)
$axisFaults = StatusGetAxisItem($axis, AxisDataSignal.AxisFault)
if $axisFaults
TaskSetError(TaskGetIndex(), "AxisFault on axis ROTY")
end
elseif $eScanType == ScanType.POSNEG || $eScanType == ScanType.NEGPOS
for $ii = 0 to ($iNumRepeat-1)
// Feedback on progress
$rglobal[4] = $ii
if ($ii % 2) == 0
PsoDistanceConfigureArrayDistances($axis, $iPsoArrayPosAddr, length($iPsoArrayPos), 0)
MoveAbsolute($axis, $fPosEnd, $fVelScan)
elseif ($ii % 2) == 1
PsoDistanceConfigureArrayDistances($axis, $iPsoArrayNegAddr, length($iPsoArrayNeg), 0)
MoveAbsolute($axis, $fPosStart, $fVelScan)
end
WaitForMotionDone($axis)
$axisFaults = StatusGetAxisItem($axis, AxisDataSignal.AxisFault)
if $axisFaults
TaskSetError(TaskGetIndex(), "AxisFault on axis ROTY")
end
Dwell(0.2)
end
end
// Directly after scan
PsoDistanceCounterOff($axis)
DriveDataCaptureOff($axis, 0)
DriveDataCaptureOff($axis, 1)
// move back to start position
MoveAbsolute($axis, $fPosStart, $fVelJog)
WaitForInPosition($axis)
Dwell(2)
end