Compare commits


38 Commits

Author SHA1 Message Date
d19fe8b66a Merge pull request #1239 from slsdetectorgroup/dev/wheels
added workflow for python wheels
2025-06-13 17:13:07 +02:00
f4345a91a1 added workflow for python wheels 2025-06-13 16:13:17 +02:00
ec67617e5c Dev/update test framesynchronizer (#1221)
* raise an exception if the pull socket python script had errors at startup (for eg if pyzmq was not installed)

* minor changes that got lost in the merge of automate_version_part 2 PR

---------

Co-authored-by: Erik Fröjdh <erik.frojdh@gmail.com>
2025-06-13 14:20:01 +02:00
bab6a5e9e1 added docs for SLSDETNAME (#1228)
* added docs for SLSDETNAME

* clarification on hostname

* added examples on module index

* fixes

* fixed typo
2025-06-05 14:01:08 +02:00
f84454fbc1 tests for bool in ToString/StringTo (#1230)
- Added tests for ToString/StringTo<bool>
- Added overload for ToString of bool (previously went through int)
2025-06-03 08:36:29 +02:00
c92830f854 updates files/variants for pmods for 9.2.0 (#1233)
2025-06-02 15:16:39 +02:00
e77fd8d85d Dev/add numpy (#1227)
* added numpy dependency
* added build specifications for python version and platform
2025-05-30 08:17:45 +02:00
cd0fb1b7bb Merge pull request #1224 from slsdetectorgroup/dev/920/doc
Dev: documentaion for pip
2025-05-28 15:22:00 +02:00
50ab20317d updated documentation for pip installation as well 2025-05-28 10:25:07 +02:00
d0a946a919 Merge pull request #1209 from slsdetectorgroup/dev/automate_version_part2
Dev/automate version part2
2025-05-27 10:17:17 +02:00
ed142aa34e added expat to host section 2025-05-27 09:10:03 +02:00
1227574590 Merge branch 'developer' into dev/automate_version_part2
2025-05-26 11:08:42 +02:00
3ac7b579a0 formatted and updated versionAPI.h 2025-05-26 11:01:49 +02:00
feb1b0868e dummy commit for versionAPI 2025-05-26 09:13:45 +02:00
6d2f34ef1d adresses review comments 2025-05-23 11:41:56 +02:00
b36a5b9933 updating pmods
2025-05-22 13:50:26 +02:00
d8ce5eabb8 and now with link
2025-05-22 10:39:32 +02:00
ceecb0ca27 added extra fs link and fixed execute_program warning
2025-05-22 10:20:16 +02:00
ac3670dcd2 Merge pull request #1215 from slsdetectorgroup/dev/doc_frame_sync
Dev//documentation for slsFrameSynchronizer
2025-05-22 08:43:53 +02:00
1c7bc61531 minor 2025-05-21 17:27:50 +02:00
58245a62a4 minor aesthetics 2025-05-21 17:06:08 +02:00
a464262558 typo 2025-05-21 16:47:15 +02:00
995d3e0034 rearranged receiver topics, differentiated btween receiver variants and added info about slsFrameSynchronizer 2025-05-21 16:47:07 +02:00
90d57cb6a9 Merge pull request #1214 from slsdetectorgroup/dev/fixStaticVector
dev: rewrote end() for StaticVector
2025-05-21 16:34:49 +02:00
30eab42294 rewrote end() for StaticVector 2025-05-21 16:26:32 +02:00
1d0eeea7ee Merge pull request #1207 from slsdetectorgroup/fix_blackfin_read_access
ctb: fix bug in blackfin read access to firmware registers
2025-05-19 16:18:24 +02:00
d7c012d306 formatting 2025-05-19 13:20:03 +02:00
1665937540 refactoring code and compiling binary 2025-05-19 13:19:32 +02:00
9343e3c667 merged developer
2025-05-15 17:20:52 +02:00
b4c8fc1765 updated all makefiles 2025-05-15 17:08:27 +02:00
3ad4e01a5d updates api version based on version file & converted shell script files to python 2025-05-15 16:35:09 +02:00
015b4add65 Merge pull request #1206 from slsdetectorgroup/dev/fix_frame_sync
dev/fix_frame_sync
2025-05-12 01:00:08 +02:00
9051dae787 fix bug in blackfin read access to firmware registers 2025-05-08 15:40:13 +02:00
68bdd75c9c moving the erasure of the fnum to after sending the zmg packets and also deleteing all old frames when end of acquisition 2025-05-07 16:33:50 +02:00
a53873b695 typo 2025-05-07 16:04:12 +02:00
77a39b4ef2 minor 2025-05-07 15:58:28 +02:00
64be8b1e89 better error messageS 2025-05-07 15:56:41 +02:00
68bd9fb4f7 frame synchonrizer fixes: typo of iterator for loop and zmg_msg_t list cleaned up before sending multi part zmq; test written for the frame synchronizer, test_simulator.py rewritten for more robustness and refactoring commonality between both scripts 2025-05-07 15:44:32 +02:00
60 changed files with 1390 additions and 687 deletions

.github/workflows/build_wheel.yml (new file, 64 lines)

@ -0,0 +1,64 @@
name: Build wheel
on:
workflow_dispatch:
pull_request:
push:
branches:
- main
release:
types:
- published
jobs:
build_wheels:
name: Build wheels on ${{ matrix.os }}
runs-on: ${{ matrix.os }}
strategy:
matrix:
os: [ubuntu-latest,]
steps:
- uses: actions/checkout@v4
- name: Build wheels
run: pipx run cibuildwheel==2.23.0
- uses: actions/upload-artifact@v4
with:
name: cibw-wheels-${{ matrix.os }}-${{ strategy.job-index }}
path: ./wheelhouse/*.whl
build_sdist:
name: Build source distribution
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Build sdist
run: pipx run build --sdist
- uses: actions/upload-artifact@v4
with:
name: cibw-sdist
path: dist/*.tar.gz
upload_pypi:
needs: [build_wheels, build_sdist]
runs-on: ubuntu-latest
environment: pypi
permissions:
id-token: write
if: github.event_name == 'release' && github.event.action == 'published'
# or, alternatively, upload to PyPI on every tag starting with 'v' (remove on: release above to use this)
# if: github.event_name == 'push' && startsWith(github.ref, 'refs/tags/v')
steps:
- uses: actions/download-artifact@v4
with:
# unpacks all CIBW artifacts into dist/
pattern: cibw-*
path: dist
merge-multiple: true
- uses: pypa/gh-action-pypi-publish@release/v1


@ -21,6 +21,7 @@ if (${CMAKE_VERSION} VERSION_GREATER "3.24")
endif()
include(cmake/project_version.cmake)
include(cmake/SlsAddFlag.cmake)
include(cmake/helpers.cmake)
@ -193,11 +194,9 @@ find_package(ClangFormat)
set(CMAKE_EXPORT_COMPILE_COMMANDS ON)
if (NOT CMAKE_BUILD_TYPE AND NOT CMAKE_CONFIGURATION_TYPES)
message(STATUS "No build type selected, default to Release")
set(CMAKE_BUILD_TYPE "Release" CACHE STRING "Build type (default Release)" FORCE)
endif()
default_build_type("Release")
set_std_fs_lib()
message(STATUS "Extra linking to fs lib:${STD_FS_LIB}")
#Enable LTO if available
include(CheckIPOSupported)


@ -25,7 +25,9 @@ mark_as_advanced(
ClangFormat_BIN)
if(ClangFormat_FOUND)
exec_program(${ClangFormat_BIN} ${CMAKE_CURRENT_SOURCE_DIR} ARGS --version OUTPUT_VARIABLE CLANG_VERSION_TEXT)
execute_process(COMMAND ${ClangFormat_BIN} --version
WORKING_DIRECTORY ${CMAKE_CURRENT_SOURCE_DIR}
OUTPUT_VARIABLE CLANG_VERSION_TEXT)
string(REGEX MATCH "([0-9]+)\\.[0-9]+\\.[0-9]+" CLANG_VERSION ${CLANG_VERSION_TEXT})
if((${CLANG_VERSION} GREATER "9") OR (${CLANG_VERSION} EQUAL "9"))
# A CMake script to find all source files and setup clang-format targets for them

cmake/helpers.cmake (new file, 46 lines)

@ -0,0 +1,46 @@
function(default_build_type val)
if (NOT CMAKE_BUILD_TYPE AND NOT CMAKE_CONFIGURATION_TYPES)
message(STATUS "No build type selected, default to Release")
set(CMAKE_BUILD_TYPE ${val} CACHE STRING "Build type (default ${val})" FORCE)
endif()
endfunction()
function(set_std_fs_lib)
# from pybind11
# Check if we need to add -lstdc++fs or -lc++fs or nothing
if(DEFINED CMAKE_CXX_STANDARD AND CMAKE_CXX_STANDARD LESS 17)
set(STD_FS_NO_LIB_NEEDED TRUE)
elseif(MSVC)
set(STD_FS_NO_LIB_NEEDED TRUE)
else()
file(
WRITE ${CMAKE_CURRENT_BINARY_DIR}/main.cpp
"#include <filesystem>\nint main(int argc, char ** argv) {\n std::filesystem::path p(argv[0]);\n return p.string().length();\n}"
)
try_compile(
STD_FS_NO_LIB_NEEDED ${CMAKE_CURRENT_BINARY_DIR}
SOURCES ${CMAKE_CURRENT_BINARY_DIR}/main.cpp
COMPILE_DEFINITIONS -std=c++17)
try_compile(
STD_FS_NEEDS_STDCXXFS ${CMAKE_CURRENT_BINARY_DIR}
SOURCES ${CMAKE_CURRENT_BINARY_DIR}/main.cpp
COMPILE_DEFINITIONS -std=c++17
LINK_LIBRARIES stdc++fs)
try_compile(
STD_FS_NEEDS_CXXFS ${CMAKE_CURRENT_BINARY_DIR}
SOURCES ${CMAKE_CURRENT_BINARY_DIR}/main.cpp
COMPILE_DEFINITIONS -std=c++17
LINK_LIBRARIES c++fs)
endif()
if(${STD_FS_NEEDS_STDCXXFS})
set(STD_FS_LIB stdc++fs PARENT_SCOPE)
elseif(${STD_FS_NEEDS_CXXFS})
set(STD_FS_LIB c++fs PARENT_SCOPE)
elseif(${STD_FS_NO_LIB_NEEDED})
set(STD_FS_LIB "" PARENT_SCOPE)
else()
message(WARNING "Unknown C++17 compiler - not passing -lstdc++fs")
set(STD_FS_LIB "")
endif()
endfunction()


@ -28,7 +28,8 @@ requirements:
- libgl-devel # [linux]
- libtiff
- zlib
- expat
run:
- libstdcxx-ng
- libgcc-ng
@ -57,6 +58,7 @@ outputs:
- {{ compiler('c') }}
- {{compiler('cxx')}}
- {{ pin_subpackage('slsdetlib', exact=True) }}
run:
- {{ pin_subpackage('slsdetlib', exact=True) }}


@ -42,6 +42,7 @@ set(SPHINX_SOURCE_FILES
src/pyexamples.rst
src/pyPatternGenerator.rst
src/servers.rst
src/multidet.rst
src/receiver_api.rst
src/result.rst
src/type_traits.rst


@ -6,6 +6,8 @@ Usage
The syntax is *'[detector index]-[module index]:[command]'*, where the indices are by default '0', when not specified.
.. _cl-module-index-label:
Module index
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Modules are indexed based on their order in the hostname command. They are used to configure a specific module within a detector and are followed by a ':' in syntax.


@ -28,6 +28,12 @@ Welcome to slsDetectorPackage's documentation!
receiver_api
examples
.. toctree::
:caption: how to
:maxdepth: 2
multidet
.. toctree::
:caption: Python API
:maxdepth: 2
@ -81,8 +87,9 @@ Welcome to slsDetectorPackage's documentation!
:caption: Receiver
:maxdepth: 2
receivers
slsreceiver
receivers
.. toctree::
:caption: Receiver Files


@ -6,18 +6,33 @@
Installation
===============
One can either install pre-built binaries using conda or build from source.
.. contents::
:local:
:depth: 2
:backlinks: top
Overview
--------------
The ``slsDetectorPackage`` provides core detector software implemented in C++, along with Python bindings packaged as the ``slsdet`` Python extension module. Choose the option that best fits your environment and use case.
:ref:`conda pre-built binaries`:
Install pre-built binaries for the C++ client, receiver, GUI and the Python API (``slsdet``), simplifying setup across platforms.
:ref:`pip`:
Install only the Python extension module, either by downloading the pre-built library from PyPI or by building the extension locally from source. Available only from v9.2.0 onwards.
:ref:`build from source`:
Compile the entire package yourself, including both the C++ core and the Python bindings, for maximum control and customization. However, make sure that you have the :doc:`dependencies <../dependencies>` installed. If installing using conda, conda will manage the dependencies. Avoid installing packages with pip and conda simultaneously.
.. warning ::
Before building from source make sure that you have the
:doc:`dependencies <../dependencies>` installed. If installing using conda, conda will
manage the dependencies. Avoid also installing packages with pip.
.. _conda pre-built binaries:
Install binaries using conda
----------------------------------
1. Install pre-built binaries using conda (Recommended)
--------------------------------------------------------
Conda is not only useful to manage python environments but can also
be used as a user space package manager. Dates in the tag (for eg. 2020.07.23.dev0)
@ -63,12 +78,29 @@ We have four different packages available:
conda search moenchzmq
.. _pip:
2. Pip
-------
The Python extension module ``slsdet`` can be installed using pip. This is available from v9.2.0 onwards.
.. code-block:: bash
#Install the Python extension module from PyPI
pip install slsdet
# or install the python extension locally from source
git clone https://github.com/slsdetectorgroup/slsDetectorPackage.git --branch 9.2.0
cd slsDetectorPackage
pip install .
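To verify the installation, a quick import check can be run (a minimal sketch, assuming only the ``slsdet`` module installed above):

.. code-block:: python

# Sanity check that the slsdet extension module can be imported
import slsdet
from slsdet import Detector

print("slsdet imported from:", slsdet.__file__)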
Build from source
----------------------
.. _build from source:
1. Download Source Code from github
3. Build from source
-------------------------
3.1. Download Source Code from github
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. code-block:: bash
@ -83,12 +115,12 @@ Build from source
2. Build from Source
3.2. Build from Source
^^^^^^^^^^^^^^^^^^^^^^^^^^
One can either build using cmake or use the in-built cmk.sh script.
Build using CMake
3.2.1. Build using CMake
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. code-block:: bash
@ -135,7 +167,7 @@ Example cmake options Comment
For v7.x.x of slsDetectorPackage and older, refer :ref:`zeromq notes for cmake option to hint library location. <zeromq for different slsDetectorPackage versions>`
Build using in-built cmk.sh script
3.2.2. Build using in-built cmk.sh script
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
@ -185,8 +217,8 @@ Build using in-built cmk.sh script
For v7.x.x of slsDetectorPackage and older, refer :ref:`zeromq notes for cmk script option to hint library location. <zeromq for different slsDetectorPackage versions>`
Build on old distributions
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
3.3. Build on old distributions using conda
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
If your linux distribution doesn't come with a C++11 compiler (gcc>4.8) then
it's possible to install a newer gcc using conda and build the slsDetectorPackage
@ -210,7 +242,7 @@ using this compiler
Build slsDetectorGui (Qt5)
3.4. Build slsDetectorGui (Qt5)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
1. Using pre-built binary on conda
@ -271,7 +303,7 @@ Build slsDetectorGui (Qt5)
Build this documentation
3.5. Build this documentation
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The documentation for the slsDetectorPackage is build using a combination
@ -294,8 +326,8 @@ is to use conda
make rst # rst only, saves time in case the API did not change
Pybind and Zeromq
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
4. Pybind and Zeromq
-------------------------
.. _pybind for different slsDetectorPackage versions:

docs/src/multidet.rst (new file, 228 lines)

@ -0,0 +1,228 @@
Using multiple detectors
==========================
The slsDetectorPackage supports using several detectors on the same computer.
This can either be two users who need to use the same computer without interfering
with each other, or the same user who wants to use multiple detectors at the same time.
The detectors in turn can consist of multiple modules. For example, a 9M Jungfrau detector
consists of 18 modules, which are typically addressed together as a single detector.
.. note ::
To address a single module of a multi-module detector you can use the module index.
- Command line: :ref:`cl-module-index-label`
- Python: :ref:`py-module-index-label`
Coming back to multiple detectors, we have two tools at our disposal:
#. Detector index
#. The SLSDETNAME environment variable
They can be used together or separately depending on the use case.
Detector index
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
When configuring a detector you can specify a detector index. The default is 0.
**Command line**
.. code-block:: bash
# Given that we have two detectors (my-det and my-det2) that we want to use,
# we can configure them with different indices.
# Configure the first detector with index 0
$ sls_detector_put hostname my-det
# Set number of frames for detector 0 to 10
$ sls_detector_put frames 10
#
#Configure the second detector with index 1 (notice the 1- before hostname)
$ sls_detector_put 1-hostname my-det2
# Further configuration
...
# Set number of frames for detector 1 to 19
$ sls_detector_put 1-frames 19
# Note that if we call sls_detector_get without specifying the index,
# it will return the configuration of detector 0
$ sls_detector_get frames
10
The detector index is added to the name of the shared memory segment, so in this case
the shared memory segments would be:
.. code-block:: bash
#First detector
/dev/shm/slsDetectorPackage_detector_0
/dev/shm/slsDetectorPackage_detector_0_module_0
#Second detector
/dev/shm/slsDetectorPackage_detector_1
/dev/shm/slsDetectorPackage_detector_1_module_0
**Python**
The main difference between the command line and the Python API is that you set the index
when you create the detector object and you don't have to repeat it for every call.
The C++ API works in the same way.
.. code-block:: python
from slsdet import Detector
# The same can be achieved in Python by creating a detector object with an index.
# Again we have two detectors (my-det and my-det2) that we want to use:
# Configure detector with index 0
d = Detector()
# If the detector has already been configured and has a shared memory
# segment, you can omit setting the hostname again
d.hostname = 'my-det'
#Further configuration
...
# Configure a second detector with index 1
d2 = Detector(1)
d2.hostname = 'my-det2'
d.frames = 10
d2.frames = 19
$SLSDETNAME
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
To avoid interfering with other users on shared PCs it is best to always set the SLSDETNAME environment variable.
Imagining a fictitious user, Anna, we can set SLSDETNAME from the shell before configuring the detector:
**Command line**
.. code-block:: bash
# Set the SLSDETNAME variable
$ export SLSDETNAME=Anna
# You can check that it is set
$ echo $SLSDETNAME
Anna
# Now configures a detector with index 0 and prefixed with the name Anna
# /dev/shm/slsDetectorPackage_detector_0_Anna
$ sls_detector_put hostname my-det
.. tip ::
Set SLSDETNAME in your .bashrc in order to not forget it when opening a new terminal.
**Python**
With Python, the best way is to set SLSDETNAME from the command line before starting the Python interpreter.
Bash:
.. code-block:: bash
$ export SLSDETNAME=Anna
Python:
.. code-block:: python
from slsdet import Detector
# Now configures a detector with index 0 and prefixed with the name Anna
# /dev/shm/slsDetectorPackage_detector_0_Anna
d = Detector()
d.hostname = 'my-det'
You can also set SLSDETNAME from within the Python interpreter, but you have to be aware that it will only
affect the current process and not the whole shell session.
.. code-block:: python
import os
os.environ['SLSDETNAME'] = 'Anna'
# You can check that it is set
print(os.environ['SLSDETNAME']) # Output: Anna
#Now SLSDETNAME is set to Anna but as soon as you exit the python interpreter
# it will not be set anymore
.. note ::
Python has two ways of reading environment variables: `os.environ` as shown above, which throws a
KeyError if the variable is not set, and `os.getenv('SLSDETNAME')`, which returns None if the variable is not set.
For more details see the official Python documentation: https://docs.python.org/3/library/os.html#os.environ
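For example, the difference can be seen with a few lines of standard Python (a minimal sketch, no extra dependencies):

.. code-block:: python

import os

# os.getenv returns None (or a given default) if SLSDETNAME is not set
print(os.getenv('SLSDETNAME'))
print(os.getenv('SLSDETNAME', 'not set'))

# os.environ raises a KeyError if SLSDETNAME is not set
try:
    print(os.environ['SLSDETNAME'])
except KeyError:
    print('SLSDETNAME is not set')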
Checking for other detectors
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
If using shared accounts on a shared computer (which you should avoid anyway), it is good practice to check
whether other users have already configured detectors before configuring your own.
You can do this by listing the files in the shared memory directory `/dev/shm/` that start with `sls`. In this
example we can see that two single-module detectors are configured, one with index 0 and one with index 1.
SLSDETNAME is set to `Anna`, so it makes sense to assume that she is the user who configured these detectors.
.. code-block:: bash
# List the files in /dev/shm that starts with sls
$ ls /dev/shm/sls*
/dev/shm/slsDetectorPackage_detector_0_Anna
/dev/shm/slsDetectorPackage_detector_0_module_0_Anna
/dev/shm/slsDetectorPackage_detector_1_Anna
/dev/shm/slsDetectorPackage_detector_1_module_0_Anna
We also provide a command, `user`, which reports information about the shared memory segment that
the client points to without making any changes.
.. code-block:: bash
#in this case 3 simulated Mythen3 modules
$ sls_detector_get user
user
Hostname: localhost+localhost+localhost+
Type: Mythen3
PID: 1226078
User: l_msdetect
Date: Mon Jun 2 05:46:20 PM CEST 2025
Other considerations
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The shared memory is not the only way to interfere with other users. You also need to make sure that you are not
using the same:
* rx_tcpport
* combination of udp_dstip and udp_dstport
* rx_zmqport
* zmqport
.. attention ::
The computer that you are using needs to have enough resources to run multiple detectors at the same time.
This includes CPU and network bandwidth. Please coordinate with the other users!
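As an illustration, two detector instances can be kept on separate ports from Python. This is a minimal sketch with arbitrary example port numbers, and it assumes the Python properties mirror the command names listed above (rx_tcpport, udp_dstport, rx_zmqport, zmqport):

.. code-block:: python

from slsdet import Detector

# Two independent detector instances (shared memory indices 0 and 1)
d = Detector()
d2 = Detector(1)

# Different receiver TCP ports
d.rx_tcpport = 1954
d2.rx_tcpport = 1955

# Different UDP destination ports and ZMQ ports
d.udp_dstport = 50001
d2.udp_dstport = 50011
d.rx_zmqport = 30001
d2.rx_zmqport = 30011
d.zmqport = 40001
d2.zmqport = 40011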


@ -123,6 +123,47 @@ in a large detector.
# Set exposure time for module 1, 5 and 7
d.setExptime(0.1, [1,5,7])
.. _py-module-index-label:
----------------------------------
Accessing individual modules
----------------------------------
Using the C++ like API you can access individual modules in a large detector
by passing in the module index as an argument to the function.
::
# Read the UDP destination port for all modules
>>> d.getDestinationUDPPort()
[50001, 50002, 50003]
# Read it for module 0 and 1
>>> d.getDestinationUDPPort([0, 1])
[50001, 50002]
>>> d.setDestinationUDPPort(50010, 1)
>>> d.getDestinationUDPPort()
[50001, 50010, 50003]
With the more pythonic API there is no way to read from only one module, but you can read
all modules and then use list slicing to get the values for the modules you are interested in.
::
>>> d.udp_dstport
[50001, 50010, 50003]
>>> d.udp_dstport[0]
50001
#For some but not all properties you can also pass in a dictionary with module index as key
>>> ip = IpAddr('127.0.0.1')
>>> d.udp_dstip = {1:ip}
--------------------
Finding functions
--------------------


@ -1,25 +1,25 @@
Receivers
Custom Receiver
=================
Receiver processes can be run on the same or different machines as the client, and receive the data from the detector (via UDP packets).
When using the slsReceiver/ slsMultiReceiver, they can be further configured by the client control software (via TCP/IP) to set file name, file path, progress of acquisition etc.
The receiver essentially listens to UDP data packets sent out by the detector.
To know more about detector receiver setup in the config file, please check out :ref:`the detector-receiver UDP configuration in the config file<detector udp header config>` and the :ref:`detector udp format<detector udp header>`.
To know more about detector receiver configuration, please check out :ref:`detector udp header and udp commands in the config file <detector udp header>`
| Please note the following when using a custom receiver:
Custom Receiver
----------------
* **udp_dstmac** must be configured in the config file. This parameter is not required when using an in-built receiver.
| When using custom receiver with our package, ensure that **udp_dstmac** is also configured in the config file. This parameter is not required when using slsReceiver.
* Cannot use "auto" for **udp_dstip**.
| Cannot use "auto" for **udp_dstip**.
* No **rx_** commands in the config file. These commands are for configuring the slsReceiver.
| Also ensure that there are no **rx_** commands in the config file. These commands are for configuring the slsReceiver.
The main difference is the lack of **rx_** commands or file commands (eg. **f**write, **f**path) and the **udp_dstmac** is required in config file.
Example of a custom receiver config file
* The main difference is the lack of **rx_** commands or file commands (eg. fwrite, fpath) and the udp_dstmac is required in config file.
.. code-block:: bash
# detector hostname


@ -1,4 +1,5 @@
.. _Detector Server Upgrade:
Upgrade
========


@ -1,16 +1,55 @@
slsReceiver/ slsMultiReceiver
In-built Receiver
================================
| One has to start the slsReceiver before loading config file or using any receiver commands (prefix: **rx_** )
The receiver essentially listens to UDP data packets sent out by the detector. Its main features are:
- **Listening**: Receives UDP data from the detector.
- **Writing to File**: Optionally writes received data to disk.
- **Streaming via ZMQ**: Optionally streams out the data using ZeroMQ.
Each of these operations runs asynchronously and in parallel for each UDP port.
.. note ::
* Can be run on the same or different machine as the client.
* Can be configured by the client. (set file name/ discard policy, get progress etc.)
* Has to be started before the client runs any receiver specific command.
Receiver Variants
-----------------
There are three main receiver types. How to start them is described :ref:`below<Starting up the Receiver>`.
+----------------------+--------------------+-----------------------------------------+--------------------------------+
| Receiver Type | slsReceiver | slsMultiReceiver |slsFrameSynchronizer |
+======================+====================+=========================================+================================+
| Modules Supported | 1 | Multiple | Multiple |
+----------------------+--------------------+-----------------------------------------+--------------------------------+
| Internal Architecture| Threads per port   | Multiple child processes of slsReceiver | Multi-threading of slsReceiver |
+----------------------+--------------------+-----------------------------------------+--------------------------------+
| ZMQ Streaming | Disabled by default| Disabled by default | Enabled, not optional |
+----------------------+--------------------+-----------------------------------------+--------------------------------+
| ZMQ Synchronization | No | No | Yes, across ports |
+----------------------+--------------------+-----------------------------------------+--------------------------------+
| Image Reconstruction | No | No | No |
+----------------------+--------------------+-----------------------------------------+--------------------------------+
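Since slsFrameSynchronizer always streams out synchronized frames over ZMQ, a client only needs a pull socket to consume them. The following is a minimal sketch of such a consumer, assuming pyzmq is installed, a hypothetical endpoint address, and that header parts are JSON objects carrying an "htype" field (header, module, series_end), as counted in the tests shipped with the package:

.. code-block:: python

import json
import zmq

context = zmq.Context()
socket = context.socket(zmq.PULL)
# hypothetical endpoint; use the address and port the synchronizer pushes to
socket.connect("tcp://localhost:5555")

while True:
    parts = socket.recv_multipart()
    for part in parts:
        try:
            header = json.loads(part)
            print(header.get("htype"))  # e.g. header, module or series_end
        except (UnicodeDecodeError, json.JSONDecodeError):
            pass  # raw image data part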
.. _Starting up the Receiver:
Starting up the Receiver
-------------------------
For a Single Module
.. code-block:: bash
slsReceiver # default port 1954
# default port 1954
slsReceiver
# custom port 2012
slsReceiver -t2012
slsReceiver -t2012 # custom port 2012
For Multiple Modules
@ -18,57 +57,66 @@ For Multiple Modules
# each receiver (for each module) requires a unique tcp port (if all on same machine)
# using slsReceiver in multiple consoles
# option 1 (one for each module)
slsReceiver
slsReceiver -t1955
# slsMultiReceiver [starting port] [number of receivers]
# option 2
slsMultiReceiver 2012 2
# slsMultiReceiver [starting port] [number of receivers] [print each frame header for debugging]
slsMultiReceiver 2012 2 1
# option 3
slsFrameSynchronizer 2012 2
Client Commands
-----------------
| One can remove **udp_dstmac** from the config file, as the slsReceiver fetches this from the **udp_ip**.
* Client commands to the receiver begin with **rx_** or **f_** (file commands).
| One can use "auto" for **udp_dstip** if one wants to use default ip of **rx_hostname**.
* **rx_hostname** has to be the first command to the receiver so the client knows which receiver process to communicate with.
| The first command to the receiver (**rx_** commands) should be **rx_hostname**. The following are the different ways to establish contact.
* Can use 'auto' for **udp_dstip** if using 1GbE interface or the :ref:`virtual simulators<Virtual Detector Servers>`.
To know more about detector receiver setup in the config file, please check out :ref:`the detector-receiver UDP configuration in the config file<detector udp header config>` and the :ref:`detector udp format<detector udp header>`.
The following are the different ways to establish contact using **rx_hostname** command.
.. code-block:: bash
# default receiver tcp port (1954)
# ---single module---
# default receiver port at 1954
rx_hostname xxx
# custom receiver port
rx_hostname xxx:1957 # option 1
rx_tcpport 1957 # option 2
rx_hostname xxx
# custom receiver port
rx_hostname xxx:1957
# custom receiver port
rx_tcpport 1954
rx_hostname xxx
# ---multi module---
# multi modules with custom ports
rx_hostname xxx:1955+xxx:1956+
# multi modules using increasing tcp ports when using multi detector command
# using increasing tcp ports
rx_tcpport 1955
rx_hostname xxx
# or specify multi modules with custom ports on same rxr pc
0:rx_tcpport 1954
# custom ports
rx_hostname xxx:1955+xxx:1958+ # option 1
0:rx_tcpport 1954 # option 2
1:rx_tcpport 1955
2:rx_tcpport 1956
rx_hostname xxx
# multi modules with custom ports on different rxr pc
# custom ports on different receiver machines
0:rx_tcpport 1954
0:rx_hostname xxx
1:rx_tcpport 1955
1:rx_hostname yyy
1:rx_hostname yyyrxr
| Example commands:
@ -91,6 +139,32 @@ Client Commands
sls_detector_get -h rx_framescaught
Example of a config file using in-built receiver
.. code-block:: bash
# detector hostname
hostname bchip052+bchip053+
# udp destination port (receiver)
# sets increasing destination udp ports starting at 50004
udp_dstport 50004
# udp destination ip (receiver)
0:udp_dstip 10.0.1.100
1:udp_dstip 10.0.2.100
# udp source ip (same subnet as udp_dstip)
0:udp_srcip 10.0.1.184
1:udp_srcip 10.0.2.184
# udp destination mac - not required (picked up from udp_dstip)
#udp_dstmac 22:47:d5:48:ad:ef
# connects to receivers at increasing tcp port starting at 1954
rx_hostname mpc3434
# same as rx_hostname mpc3434:1954+mpc3434:1955+
Performance


@ -1,4 +1,4 @@
.. _detector udp header:
.. _detector udp header config:
Config file


@ -1,4 +1,5 @@
.. _Virtual Detector Servers:
Simulators
===========


@ -13,5 +13,5 @@ slsDetectorPackage/8.0.2_rh7 stable cmake/3.15.5 Qt/5.12.10
slsDetectorPackage/8.0.2_rh8 stable cmake/3.15.5 Qt/5.12.10
slsDetectorPackage/9.0.0_rh8 stable cmake/3.15.5 Qt/5.12.10
slsDetectorPackage/9.1.0_rh8 stable cmake/3.15.5 Qt/5.12.10
slsDetectorPackage/9.1.1_rh8 stable cmake/3.15.5 Qt/5.12.10
slsDetectorPackage/9.2.0_rh8 stable cmake/3.15.5 Qt/5.12.10


@ -11,9 +11,13 @@ build-backend = "scikit_build_core.build"
[project]
name = "slsdet"
dynamic = ["version"]
dependencies = [
"numpy",
]
[tool.cibuildwheel]
before-all = "uname -a"
build = "cp{311,312,313}-manylinux_x86_64"
[tool.scikit-build.build]
verbose = true


@ -65,6 +65,9 @@ configure_file( scripts/test_virtual.py
${CMAKE_BINARY_DIR}/test_virtual.py
)
configure_file(scripts/frameSynchronizerPullSocket.py
${CMAKE_BINARY_DIR}/bin/frameSynchronizerPullSocket.py COPYONLY)
configure_file( ${CMAKE_CURRENT_SOURCE_DIR}/../VERSION
${CMAKE_BINARY_DIR}/bin/slsdet/VERSION
)

slsDetectorServers/compileAllServers.sh (28 lines changed, Normal file → Executable file)

@ -1,16 +1,19 @@
# SPDX-License-Identifier: LGPL-3.0-or-other
# Copyright (C) 2021 Contributors to the SLS Detector Package
# empty branch = developer branch in updateAPIVersion.sh
branch=""
det_list=("ctbDetectorServer
gotthard2DetectorServer
jungfrauDetectorServer
mythen3DetectorServer
moenchDetectorServer
xilinx_ctbDetectorServer"
xilinx_ctbDetectorServer"
)
usage="\nUsage: compileAllServers.sh [server|all(opt)] [branch(opt)]. \n\tNo arguments mean all servers with 'developer' branch. \n\tNo 'branch' input means 'developer branch'"
usage="\nUsage: compileAllServers.sh [server|all(opt)] [update_api(opt)]. \n\tNo arguments mean all servers with 'developer' branch. \n\tupdate_api if true updates the api to version in VERSION file"
update_api=true
target=version
# arguments
if [ $# -eq 0 ]; then
@ -34,15 +37,12 @@ elif [ $# -eq 1 ] || [ $# -eq 2 ]; then
declare -a det=("${1}")
#echo "Compiling only $1"
fi
# branch
if [ $# -eq 2 ]; then
# arg in list
if [[ $det_list == *$2* ]]; then
echo -e "Invalid argument 2: $2. $usage"
return 1
update_api=$2
if not $update_api ; then
target=clean
fi
branch+=$2
#echo "with branch $branch"
fi
else
echo -e "Too many arguments.$usage"
@ -53,6 +53,9 @@ declare -a deterror=("OK" "OK" "OK" "OK" "OK" "OK")
echo -e "list is ${det[@]}"
if $update_api; then
echo "updating api to $(cat ../VERSION)"
fi
# compile each server
idet=0
for i in ${det[@]}
@ -62,14 +65,13 @@ do
echo -e "Compiling $dir [$file]"
cd $dir
make clean
if make version API_BRANCH=$branch; then
if make $target; then
deterror[$idet]="OK"
else
deterror[$idet]="FAIL"
fi
mv bin/$dir bin/$file
git add -f bin/$file
cp bin/$file /tftpboot/
cd ..
echo -e "\n\n"
((++idet))


@ -1,86 +0,0 @@
# SPDX-License-Identifier: LGPL-3.0-or-other
# Copyright (C) 2021 Contributors to the SLS Detector Package
# empty branch = developer branch in updateAPIVersion.sh
branch=""
det_list=("ctbDetectorServer"
"gotthard2DetectorServer"
"jungfrauDetectorServer"
"mythen3DetectorServer"
"moenchDetectorServer"
"xilinx_ctbDetectorServer"
)
usage="\nUsage: compileAllServers.sh [server|all(opt)] [branch(opt)]. \n\tNo arguments mean all servers with 'developer' branch. \n\tNo 'branch' input means 'developer branch'"
# arguments
if [ $# -eq 0 ]; then
# no argument, all servers
declare -a det=${det_list[@]}
echo "Compiling all servers"
elif [ $# -eq 1 ] || [ $# -eq 2 ]; then
# 'all' servers
if [[ $1 == "all" ]]; then
declare -a det=${det_list[@]}
echo "Compiling all servers"
else
# only one server
# arg not in list
if [[ $det_list != *$1* ]]; then
echo -e "Invalid argument 1: $1. $usage"
return -1
fi
declare -a det=("${1}")
#echo "Compiling only $1"
fi
# branch
if [ $# -eq 2 ]; then
# arg in list
if [[ $det_list == *$2* ]]; then
echo -e "Invalid argument 2: $2. $usage"
return -1
fi
branch+=$2
#echo "with branch $branch"
fi
else
echo -e "Too many arguments.$usage"
return -1
fi
declare -a deterror=("OK" "OK" "OK" "OK" "OK" "OK")
echo -e "list is ${det[@]}"
# compile each server
idet=0
for i in ${det[@]}
do
dir=$i
file="${i}_developer"
echo -e "Compiling $dir [$file]"
cd $dir
make clean
if make API_BRANCH=$branch; then
deterror[$idet]="OK"
else
deterror[$idet]="FAIL"
fi
mv bin/$dir bin/$file
git add -f bin/$file
cp bin/$file /tftpboot/
cd ..
echo -e "\n\n"
((++idet))
done
echo -e "Results:"
idet=0
for i in ${det[@]}
do
printf "%s\t\t= %s\n" "$i" "${deterror[$idet]}"
((++idet))
done

slsDetectorServers/compileEigerServer.sh (23 lines changed, Normal file → Executable file)

@ -3,21 +3,31 @@
deterror="OK"
dir="eigerDetectorServer"
file="${dir}_developer"
branch=""
usage="\nUsage: compileAllServers.sh [update_api(opt)]. \n\t update_api if true updates the api to version in VERSION file"
update_api=true
target=version
# arguments
if [ $# -eq 1 ]; then
branch+=$1
#echo "with branch $branch"
update_api=$1
if not $update_api ; then
target=clean
fi
elif [ ! $# -eq 0 ]; then
echo -e "Only one optional argument allowed for branch."
echo -e "Only one optional argument allowed for update_api."
return -1
fi
if $update_api; then
echo "updating api to $(cat ../VERSION)"
fi
echo -e "Compiling $dir [$file]"
cd $dir
make clean
if make version API_BRANCH=$branch; then
if make $target; then
deterror="OK"
else
deterror="FAIL"
@ -25,7 +35,6 @@ fi
mv bin/$dir bin/$file
git add -f bin/$file
cp bin/$file /tftpboot/
cd ..
echo -e "\n\n"
printf "Result:\t\t= %s\n" "${deterror}"


@ -25,11 +25,10 @@ version: clean versioning $(PROGS)
boot: $(OBJS)
version_branch=$(API_BRANCH)
version_name=APICTB
version_path=slsDetectorServers/ctbDetectorServer
versioning:
cd ../../ && echo $(PWD) && echo `tput setaf 6; ./updateAPIVersion.sh $(version_name) $(version_path) $(version_branch); tput sgr0;`
cd ../../ && echo $(PWD) && echo `tput setaf 6; python updateAPIVersion.py $(version_name) $(version_path); tput sgr0;`
$(PROGS): $(OBJS)


@ -15,6 +15,7 @@
#include "loadPattern.h"
#include <netinet/in.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h> // usleep
@ -93,6 +94,10 @@ void basictests() {
LOG(logINFOBLUE, ("********* Chip Test Board Virtual Server *********\n"));
#else
LOG(logINFOBLUE, ("************* Chip Test Board Server *************\n"));
initError = enableBlackfinAMCExternalAccessExtension(initErrorMessage);
if (initError == FAIL) {
return;
}
initError = defineGPIOpins(initErrorMessage);
if (initError == FAIL) {
return;
@ -438,6 +443,32 @@ uint32_t getDetectorIP() {
return res;
}
int enableBlackfinAMCExternalAccessExtension(char *mess) {
unsigned int value;
const char *file_path = BFIN_AMC_ACCESS_EXTENSION_FNAME;
FILE *file = fopen(file_path, "r");
if (!file) {
strcpy(mess, "Failed to enable blackfin AMC access extension. Could "
"not read EBIU_AMBCTL1\n");
LOG(logERROR, (mess));
return FAIL;
}
fscanf(file, "%x", &value);
fclose(file);
value |= BFIN_AMC_ACCESS_EXTENSION_ENA_VAL;
file = fopen(file_path, "w");
if (!file) {
strcpy(mess, "Failed to enable blackfin AMC access extension. Could "
"not write EBIU_AMBCTL1\n");
LOG(logERROR, (mess));
return FAIL;
}
fprintf(file, "0x%x", value);
fclose(file);
return OK;
}
/* initialization */
void initControlServer() {


@ -25,11 +25,10 @@ version: clean versioning $(PROGS) #hv9m_blackfin_server
boot: $(OBJS)
version_branch=$(API_BRANCH)
version_name=APIEIGER
version_path=slsDetectorServers/eigerDetectorServer
versioning:
cd ../../ && echo $(PWD) && echo `tput setaf 6; ./updateAPIVersion.sh $(version_name) $(version_path) $(version_branch); tput sgr0;`
cd ../../ && echo $(PWD) && echo `tput setaf 6; python updateAPIVersion.py $(version_name) $(version_path); tput sgr0;`
$(PROGS): $(OBJS)


@ -24,11 +24,10 @@ version: clean versioning $(PROGS)
boot: $(OBJS)
version_branch=$(API_BRANCH)
version_name=APIGOTTHARD2
version_path=slsDetectorServers/gotthard2DetectorServer
versioning:
cd ../../ && echo $(PWD) && echo `tput setaf 6; ./updateAPIVersion.sh $(version_name) $(version_path) $(version_branch); tput sgr0;`
cd ../../ && echo $(PWD) && echo `tput setaf 6; python updateAPIVersion.py $(version_name) $(version_path); tput sgr0;`
$(PROGS): $(OBJS)


@ -24,11 +24,10 @@ version: clean versioning $(PROGS)
boot: $(OBJS)
version_branch=$(API_BRANCH)
version_name=APIJUNGFRAU
version_path=slsDetectorServers/jungfrauDetectorServer
versioning:
cd ../../ && echo $(PWD) && echo `tput setaf 6; ./updateAPIVersion.sh $(version_name) $(version_path) $(version_branch); tput sgr0;`
cd ../../ && echo $(PWD) && echo `tput setaf 6; python updateAPIVersion.py $(version_name) $(version_path); tput sgr0;`
$(PROGS): $(OBJS)


@ -24,11 +24,10 @@ version: clean versioning $(PROGS)
boot: $(OBJS)
version_branch=$(API_BRANCH)
version_name=APIMOENCH
version_path=slsDetectorServers/moenchDetectorServer
versioning:
cd ../../ && echo $(PWD) && echo `tput setaf 6; ./updateAPIVersion.sh $(version_name) $(version_path) $(version_branch); tput sgr0;`
cd ../../ && echo $(PWD) && echo `tput setaf 6; python updateAPIVersion.py $(version_name) $(version_path); tput sgr0;`
$(PROGS): $(OBJS)


@ -25,11 +25,10 @@ version: clean versioning $(PROGS)
boot: $(OBJS)
version_branch=$(API_BRANCH)
version_name=APIMYTHEN3
version_path=slsDetectorServers/mythen3DetectorServer
versioning:
cd ../../ && echo $(PWD) && echo `tput setaf 6; ./updateAPIVersion.sh $(version_name) $(version_path) $(version_branch); tput sgr0;`
cd ../../ && echo $(PWD) && echo `tput setaf 6; python updateAPIVersion.py $(version_name) $(version_path); tput sgr0;`
$(PROGS): $(OBJS)


@ -5,6 +5,23 @@
#include <inttypes.h>
#include <sys/types.h>
/** enable support for ARDY signal on interface to FPGA
* needed to properly translate avalon_mm_waitrequest in the CTB firmware
* https://www.analog.com/media/en/dsp-documentation/processor-manuals/bf537_hwr_Rev3.2.pdf
* page 274
* */
#define BFIN_EBIU_AMBCTL1_B2_ARDY_ENA_OFST (0)
#define BFIN_EBIU_AMBCTL1_B2_ARDY_ENA_MSK \
(1 << BFIN_EBIU_AMBCTL1_B2_ARDY_ENA_OFST)
#define BFIN_EBIU_AMBCTL1_B2_ARDY_POL_OFST (1)
#define BFIN_EBIU_AMBCTL1_B2_ARDY_POL_MSK \
(1 << BFIN_EBIU_AMBCTL1_B2_ARDY_POL_OFST)
#define BFIN_AMC_ACCESS_EXTENSION_ENA_VAL \
(BFIN_EBIU_AMBCTL1_B2_ARDY_ENA_MSK | BFIN_EBIU_AMBCTL1_B2_ARDY_POL_MSK)
#define BFIN_AMC_ACCESS_EXTENSION_FNAME \
"/sys/kernel/debug/blackfin/ebiu_amc/EBIU_AMBCTL1"
/** I2C defines */
#define I2C_CLOCK_MHZ (131.25)


@ -113,6 +113,10 @@ void setModuleId(int modid);
u_int64_t getDetectorMAC();
u_int32_t getDetectorIP();
#if defined(CHIPTESTBOARDD)
int enableBlackfinAMCExternalAccessExtension(char *mess);
#endif
// initialization
void initControlServer();
void initStopServer();


@ -36,11 +36,10 @@ version: clean versioning $(PROGS)
boot: $(OBJS)
version_branch=$(API_BRANCH)
version_name=APIXILINXCTB
version_path=slsDetectorServers/xilinx_ctbDetectorServer
versioning:
cd ../../ && echo $(PWD) && echo `tput setaf 6; ./updateAPIVersion.sh $(version_name) $(version_path) $(version_branch); tput sgr0;`
cd ../../ && echo $(PWD) && echo `tput setaf 6; python updateAPIVersion.py $(version_name) $(version_path); tput sgr0;`
$(PROGS): $(OBJS)


@ -100,7 +100,6 @@ if(SLS_USE_TEXTCLIENT)
target_link_libraries(${val1}
slsDetectorStatic
pthread
rt
)
SET_SOURCE_FILES_PROPERTIES( src/Caller.cpp PROPERTIES COMPILE_FLAGS "-Wno-unused-variable -Wno-unused-but-set-variable")


@ -8,7 +8,6 @@ target_sources(tests PRIVATE
${CMAKE_CURRENT_SOURCE_DIR}/Caller/test-Caller.cpp
${CMAKE_CURRENT_SOURCE_DIR}/Caller/test-Caller-rx.cpp
${CMAKE_CURRENT_SOURCE_DIR}/Caller/test-Caller-rx-running.cpp
${CMAKE_CURRENT_SOURCE_DIR}/Caller/test-Caller-pattern.cpp
${CMAKE_CURRENT_SOURCE_DIR}/Caller/test-Caller-eiger.cpp
${CMAKE_CURRENT_SOURCE_DIR}/Caller/test-Caller-jungfrau.cpp


@ -1,72 +0,0 @@
// SPDX-License-Identifier: LGPL-3.0-or-other
// Copyright (C) 2025 Contributors to the SLS Detector Package
#include "Caller.h"
#include "catch.hpp"
#include "sls/Detector.h"
#include "tests/globals.h"
#include <sstream>
namespace sls {
using test::PUT;
/*
detfuncs::F_RECEIVER_SET_NUM_FRAMES
detfuncs::F_RECEIVER_SET_NUM_TRIGGERS
detfuncs::F_RECEIVER_SET_NUM_BURSTS
detfuncs::F_RECEIVER_SET_TIMING_MODE
detfuncs::F_RECEIVER_SET_BURST_MODE
detfuncs::F_RECEIVER_SET_NUM_ANALOG_SAMPLES
detfuncs::F_RECEIVER_SET_NUM_DIGITAL_SAMPLES
detfuncs::F_RECEIVER_SET_NUM_TRANSCEIVER_SAMPLES
detfuncs::F_RECEIVER_SET_DYNAMIC_RANGE
detfuncs::F_RECEIVER_SET_STREAMING_FREQUENCY
detfuncs::F_RECEIVER_SET_FILE_INDEX
detfuncs::F_RECEIVER_SET_FILE_WRITE
detfuncs::F_RECEIVER_SET_MASTER_FILE_WRITE
detfuncs::F_RECEIVER_SET_OVERWRITE
detfuncs::F_RECEIVER_ENABLE_TENGIGA
detfuncs::F_RECEIVER_SET_FIFO_DEPTH
detfuncs::F_RECEIVER_SET_ACTIVATE
detfuncs::F_RECEIVER_SET_STREAMING
detfuncs::F_RECEIVER_SET_STREAMING_TIMER
detfuncs::F_RECEIVER_SET_FLIP_ROWS
detfuncs::F_RECEIVER_SET_DBIT_LIST
detfuncs::F_RECEIVER_SET_DBIT_OFFSET
detfuncs::F_RECEIVER_SET_DBIT_REORDE
*/
auto get_test_parameters() {
return GENERATE(
std::make_tuple("asamples", std::vector<std::string>{"5"}),
std::make_tuple("tsamples", std::vector<std::string>{"2"}),
std::make_tuple("dsamples", std::vector<std::string>{"100"}),
std::make_tuple("frames", std::vector<std::string>{"10"}),
std::make_tuple("triggers", std::vector<std::string>{"5"}),
std::make_tuple("rx_dbitlist", std::vector<std::string>{"{1,2,10}"}),
std::make_tuple("rx_dbitreorder", std::vector<std::string>{"0"}),
std::make_tuple("rx_dbitoffset", std::vector<std::string>{"5"}),
std::make_tuple("findex", std::vector<std::string>{"2"}),
std::make_tuple("fwrite", std::vector<std::string>{"0"}),
std::make_tuple("bursts", std::vector<std::string>{"20"}));
}
TEST_CASE("cant put if receiver is not idle", "[.cmdcall][.rx]") {
auto [command, function_arguments] = get_test_parameters();
Detector det;
Caller caller(&det);
// start receiver
std::ostringstream oss;
caller.call("rx_start", {}, -1, PUT, oss);
REQUIRE(oss.str() == "rx_start successful\n");
REQUIRE_THROWS(caller.call(command, function_arguments, -1, PUT, oss, -1));
std::ostringstream oss_stop;
caller.call("rx_stop", {}, -1, PUT, oss_stop);
REQUIRE(oss_stop.str() == "rx_stop successful\n");
}
} // namespace sls


@ -551,7 +551,6 @@ int ClientInterface::set_num_analog_samples(Interface &socket) {
if (detType != CHIPTESTBOARD && detType != XILINX_CHIPTESTBOARD) {
functionNotImplemented();
}
verifyIdle(socket);
try {
impl()->setNumberofAnalogSamples(value);
} catch (const std::exception &e) {
@ -568,7 +567,6 @@ int ClientInterface::set_num_digital_samples(Interface &socket) {
if (detType != CHIPTESTBOARD && detType != XILINX_CHIPTESTBOARD) {
functionNotImplemented();
}
verifyIdle(socket);
try {
impl()->setNumberofDigitalSamples(value);
} catch (const std::exception &e) {
@ -1738,7 +1736,6 @@ int ClientInterface::set_num_transceiver_samples(Interface &socket) {
if (detType != CHIPTESTBOARD && detType != XILINX_CHIPTESTBOARD) {
functionNotImplemented();
}
verifyIdle(socket);
try {
impl()->setNumberofTransceiverSamples(value);
} catch (const std::exception &e) {


@ -168,7 +168,7 @@ class DataProcessor : private virtual slsDetectorDefs, public ThreadObject {
uint32_t streamingTimerInMs;
uint32_t streamingStartFnum;
uint32_t currentFreqCount{0};
struct timespec timerbegin{};
struct timespec timerbegin {};
bool framePadding;
std::atomic<bool> startedFlag{false};
std::atomic<uint64_t> firstIndex{0};


@ -104,11 +104,11 @@ void zmq_free(void *data, void *hint) { delete[] static_cast<char *>(data); }
void print_frames(const PortFrameMap &frame_port_map) {
LOG(sls::logDEBUG) << "Printing frames";
for (const auto &it : frame_port_map) {
uint16_t udpPort = it.first;
const uint16_t udpPort = it.first;
const auto &frame_map = it.second;
LOG(sls::logDEBUG) << "UDP port: " << udpPort;
for (const auto &frame : frame_map) {
uint64_t fnum = frame.first;
const uint64_t fnum = frame.first;
const auto &msg_list = frame.second;
LOG(sls::logDEBUG)
<< " acq index: " << fnum << '[' << msg_list.size() << ']';
@ -127,30 +127,26 @@ std::set<uint64_t> get_valid_fnums(const PortFrameMap &port_frame_map) {
// collect all unique frame numbers from all ports
std::set<uint64_t> unique_fnums;
for (auto it = port_frame_map.begin(); it != port_frame_map.begin(); ++it) {
const FrameMap &frame_map = it->second;
for (auto frame = frame_map.begin(); frame != frame_map.end();
++frame) {
unique_fnums.insert(frame->first);
for (const auto &it : port_frame_map) {
const FrameMap &frame_map = it.second;
for (const auto &frame : frame_map) {
unique_fnums.insert(frame.first);
}
}
// collect valid frame numbers
for (auto &fnum : unique_fnums) {
bool is_valid = true;
for (auto it = port_frame_map.begin(); it != port_frame_map.end();
++it) {
uint16_t port = it->first;
const FrameMap &frame_map = it->second;
for (const auto &it : port_frame_map) {
const uint16_t port = it.first;
const FrameMap &frame_map = it.second;
auto frame = frame_map.find(fnum);
// invalid: fnum missing in one port
if (frame == frame_map.end()) {
LOG(sls::logDEBUG)
<< "Fnum " << fnum << " is missing in port " << port;
// invalid: fnum greater than all in that port
auto last_frame = std::prev(frame_map.end());
auto last_fnum = last_frame->first;
if (fnum > last_fnum) {
auto upper_frame = frame_map.upper_bound(fnum);
if (upper_frame == frame_map.end()) {
LOG(sls::logDEBUG) << "And no larger fnum found. Fnum "
<< fnum << " is invalid.\n";
is_valid = false;
@ -220,18 +216,26 @@ void Correlate(FrameStatus *stat) {
// sending all valid fnum data packets
for (const auto &fnum : valid_fnums) {
ZmqMsgList msg_list;
PortFrameMap &port_frame_map = stat->frames;
for (auto it = port_frame_map.begin();
it != port_frame_map.end(); ++it) {
uint16_t port = it->first;
const FrameMap &frame_map = it->second;
for (const auto &it : stat->frames) {
const uint16_t port = it.first;
const FrameMap &frame_map = it.second;
auto frame = frame_map.find(fnum);
if (frame != frame_map.end()) {
msg_list.insert(msg_list.end(),
stat->frames[port][fnum].begin(),
stat->frames[port][fnum].end());
// clean up
for (zmq_msg_t *msg : stat->frames[port][fnum]) {
}
}
LOG(printHeadersLevel)
<< "Sending data packets for fnum " << fnum;
zmq_send_multipart(socket, msg_list);
// clean up
for (const auto &it : stat->frames) {
const uint16_t port = it.first;
const FrameMap &frame_map = it.second;
auto frame = frame_map.find(fnum);
if (frame != frame_map.end()) {
for (zmq_msg_t *msg : frame->second) {
if (msg) {
zmq_msg_close(msg);
delete msg;
@ -240,9 +244,6 @@ void Correlate(FrameStatus *stat) {
stat->frames[port].erase(fnum);
}
}
LOG(printHeadersLevel)
<< "Sending data packets for fnum " << fnum;
zmq_send_multipart(socket, msg_list);
}
}
// sending all end packets
@ -256,6 +257,21 @@ void Correlate(FrameStatus *stat) {
}
}
stat->ends.clear();
// clean up old frames
for (auto &it : stat->frames) {
FrameMap &frame_map = it.second;
for (auto &frame : frame_map) {
for (zmq_msg_t *msg : frame.second) {
if (msg) {
zmq_msg_close(msg);
delete msg;
}
}
frame.second.clear();
}
frame_map.clear();
}
stat->frames.clear();
}
}
}


@ -63,8 +63,8 @@ class GeneralData {
slsDetectorDefs::frameDiscardPolicy frameDiscardMode{
slsDetectorDefs::NO_DISCARD};
GeneralData() {};
virtual ~GeneralData() {};
GeneralData(){};
virtual ~GeneralData(){};
// Returns the pixel depth in byte, 4 bits being 0.5 byte
float GetPixelDepth() { return float(dynamicRange) / 8; }


@ -47,8 +47,8 @@ class GeneralDataTest : public GeneralData {
// dummy DataProcessor class for testing
class DataProcessorTest : public DataProcessor {
public:
DataProcessorTest() : DataProcessor(0) {};
~DataProcessorTest() {};
DataProcessorTest() : DataProcessor(0){};
~DataProcessorTest(){};
void ArrangeDbitData(size_t &size, char *data) {
DataProcessor::ArrangeDbitData(size, data);
}


@ -88,6 +88,7 @@ message(STATUS "RAPID: ${SLS_INTERNAL_RAPIDJSON_DIR}")
target_link_libraries(slsSupportObject
PUBLIC
slsProjectOptions
${STD_FS_LIB} # from helpers.cmake
PRIVATE
slsProjectWarnings


@ -113,10 +113,10 @@ template <typename T, size_t Capacity> class StaticVector {
// auto begin() noexcept -> decltype(data_.begin()) { return data_.begin();
// }
const_iterator begin() const noexcept { return data_.begin(); }
iterator end() noexcept { return &data_[current_size]; }
const_iterator end() const noexcept { return &data_[current_size]; }
iterator end() noexcept { return data_.begin()+current_size; }
const_iterator end() const noexcept { return data_.begin()+current_size; }
const_iterator cbegin() const noexcept { return data_.cbegin(); }
const_iterator cend() const noexcept { return &data_[current_size]; }
const_iterator cend() const noexcept { return data_.cbegin()+current_size; }
void size_check(size_type s) const {
if (s > Capacity) {


@ -47,6 +47,8 @@ std::string ToString(const defs::polarity s);
std::string ToString(const defs::timingInfoDecoder s);
std::string ToString(const defs::collectionMode s);
std::string ToString(bool value);
std::string ToString(const slsDetectorDefs::xy &coord);
std::ostream &operator<<(std::ostream &os, const slsDetectorDefs::xy &coord);
std::string ToString(const slsDetectorDefs::ROI &roi);


@ -11,7 +11,7 @@ class Version {
private:
std::string version_;
std::string date_;
const std::string defaultBranch_ = "developer";
inline static const std::string defaultVersion_[] = {"developer", "0.0.0"};
public:
explicit Version(const std::string &s);


@ -1,12 +1,13 @@
// SPDX-License-Identifier: LGPL-3.0-or-other
// Copyright (C) 2021 Contributors to the SLS Detector Package
/** API versions */
#define APILIB "developer 0x241122"
#define APIRECEIVER "developer 0x241122"
#define APICTB "developer 0x250310"
#define APIGOTTHARD2 "developer 0x250310"
#define APIMOENCH "developer 0x250310"
#define APIEIGER "developer 0x250310"
#define APIXILINXCTB "developer 0x250311"
#define APIJUNGFRAU "developer 0x250318"
#define APIMYTHEN3 "developer 0x250409"
#define APILIB "0.0.0 0x250523"
#define APIRECEIVER "0.0.0 0x250523"
#define APICTB "0.0.0 0x250523"
#define APIGOTTHARD2 "0.0.0 0x250523"
#define APIMOENCH "0.0.0 0x250523"
#define APIEIGER "0.0.0 0x250523"
#define APIXILINXCTB "0.0.0 0x250523"
#define APIJUNGFRAU "0.0.0 0x250523"
#define APIMYTHEN3 "0.0.0 0x250523"


@ -5,6 +5,13 @@
namespace sls {
std::string ToString(bool value) {
return value ? "1" : "0";
}
std::string ToString(const slsDetectorDefs::xy &coord) {
std::ostringstream oss;
oss << '[' << coord.x << ", " << coord.y << ']';


@ -21,7 +21,8 @@ Version::Version(const std::string &s) {
}
bool Version::hasSemanticVersioning() const {
return version_ != defaultBranch_;
return (version_ != defaultVersion_[0]) && (version_ != defaultVersion_[1]);
}
std::string Version::getVersion() const { return version_; }


@ -14,6 +14,7 @@ target_sources(tests PRIVATE
${CMAKE_CURRENT_SOURCE_DIR}/test-TypeTraits.cpp
${CMAKE_CURRENT_SOURCE_DIR}/test-UdpRxSocket.cpp
${CMAKE_CURRENT_SOURCE_DIR}/test-logger.cpp
${CMAKE_CURRENT_SOURCE_DIR}/test-Version.cpp
${CMAKE_CURRENT_SOURCE_DIR}/test-ZmqSocket.cpp
)


@ -8,12 +8,15 @@
#include <sstream>
#include <vector>
namespace sls {
using sls::StaticVector;
TEST_CASE("StaticVector is a container") {
REQUIRE(is_container<StaticVector<int, 7>>::value == true);
REQUIRE(sls::is_container<StaticVector<int, 7>>::value == true);
}
TEST_CASE("Comparing StaticVector containers") {
StaticVector<int, 5> a{0, 1, 2};
StaticVector<int, 5> b{0, 1, 2};
@ -90,10 +93,17 @@ TEST_CASE("Copy construct from array") {
REQUIRE(fcc == arr);
}
TEST_CASE("Construct from a smaller StaticVector") {
StaticVector<int, 3> sv{1, 2, 3};
StaticVector<int, 5> sv2{sv};
REQUIRE(sv == sv2);
}
TEST_CASE("Free function and method gives the same iterators") {
StaticVector<int, 3> fcc{1, 2, 3};
REQUIRE(std::begin(fcc) == fcc.begin());
}
SCENARIO("StaticVectors can be sized and resized", "[support]") {
GIVEN("A default constructed container") {
@ -246,23 +256,23 @@ SCENARIO("Sorting, removing and other manipulation of a container",
REQUIRE(a[3] == 90);
}
}
// WHEN("Sorting is done using free function for begin and end") {
// std::sort(begin(a), end(a));
// THEN("it also works") {
// REQUIRE(a[0] == 12);
// REQUIRE(a[1] == 12);
// REQUIRE(a[2] == 14);
// REQUIRE(a[3] == 90);
// }
// }
// WHEN("Erasing elements of a certain value") {
// a.erase(std::remove(begin(a), end(a), 12));
// THEN("all elements of that value are removed") {
// REQUIRE(a.size() == 2);
// REQUIRE(a[0] == 14);
// REQUIRE(a[1] == 90);
// }
// }
WHEN("Sorting is done using free function for begin and end") {
std::sort(std::begin(a), std::end(a));
THEN("it also works") {
REQUIRE(a[0] == 12);
REQUIRE(a[1] == 12);
REQUIRE(a[2] == 14);
REQUIRE(a[3] == 90);
}
}
WHEN("Erasing elements of a certain value") {
a.erase(std::remove(std::begin(a), std::end(a), 12));
THEN("all elements of that value are removed") {
REQUIRE(a.size() == 2);
REQUIRE(a[0] == 14);
REQUIRE(a[1] == 90);
}
}
}
}
@ -335,4 +345,3 @@ TEST_CASE("StaticVector stream") {
REQUIRE(oss.str() == "[33, 85667, 2]");
}
} // namespace sls


@ -16,6 +16,17 @@ namespace sls {
using namespace sls::time;
TEST_CASE("Convert bool to string", "[support]") {
REQUIRE(ToString(true) == "1");
REQUIRE(ToString(false) == "0");
}
TEST_CASE("Convert string to bool", "[support]") {
REQUIRE(StringTo<bool>("1") == true);
REQUIRE(StringTo<bool>("0") == false);
}
TEST_CASE("Integer conversions", "[support]") {
REQUIRE(ToString(0) == "0");
REQUIRE(ToString(1) == "1");


@ -0,0 +1,19 @@
// SPDX-License-Identifier: LGPL-3.0-or-other
// Copyright (C) 2021 Contributors to the SLS Detector Package
#include "catch.hpp"
#include "sls/Version.h"
namespace sls {
TEST_CASE("check if version is semantic", "[.version]") {
auto [version_string, has_semantic_version] =
GENERATE(std::make_tuple("developer 0x250512", false),
std::make_tuple("0.0.0 0x250512", false));
Version version(version_string);
CHECK(version.hasSemanticVersioning() == has_semantic_version);
}
} // namespace sls

View File

@ -60,3 +60,5 @@ include(Catch)
catch_discover_tests(tests)
configure_file(scripts/test_simulators.py ${CMAKE_BINARY_DIR}/bin/test_simulators.py COPYONLY)
configure_file(scripts/test_frame_synchronizer.py ${CMAKE_BINARY_DIR}/bin/test_frame_synchronizer.py COPYONLY)
configure_file(scripts/utils_for_test.py ${CMAKE_BINARY_DIR}/bin/utils_for_test.py COPYONLY)

View File

@ -0,0 +1,141 @@
# SPDX-License-Identifier: LGPL-3.0-or-other
# Copyright (C) 2021 Contributors to the SLS Detector Package
'''
This file starts up the simulators, the frame synchronizer and the pull sockets, runs an acquisition, tests the results and finally kills all started processes.
'''
import sys, time
import traceback, json
from slsdet import Detector
from slsdet.defines import DEFAULT_TCP_RX_PORTNO
from utils_for_test import (
Log,
LogLevel,
RuntimeException,
checkIfProcessRunning,
killProcess,
cleanup,
cleanSharedmemory,
startProcessInBackground,
startProcessInBackgroundWithLogFile,
checkLogForErrors,
startDetectorVirtualServer,
loadConfig,
ParseArguments
)
LOG_PREFIX_FNAME = '/tmp/slsFrameSynchronizer_test'
MAIN_LOG_FNAME = LOG_PREFIX_FNAME + '_log.txt'
PULL_SOCKET_PREFIX_FNAME = LOG_PREFIX_FNAME + '_pull_socket_'
def startFrameSynchronizerPullSocket(name, fp):
fname = PULL_SOCKET_PREFIX_FNAME + name + '.txt'
cmd = ['python', '-u', 'frameSynchronizerPullSocket.py']
startProcessInBackgroundWithLogFile(cmd, fp, fname)
time.sleep(1)
checkLogForErrors(fp, fname)
def startFrameSynchronizer(num_mods, fp):
cmd = ['slsFrameSynchronizer', str(DEFAULT_TCP_RX_PORTNO), str(num_mods)]
# in 10.0.0
#cmd = ['slsFrameSynchronizer', '-p', str(DEFAULT_TCP_RX_PORTNO), '-n', str(num_mods)]
startProcessInBackground(cmd, fp)
time.sleep(1)
def acquire(fp, det):
Log(LogLevel.INFO, 'Acquiring')
Log(LogLevel.INFO, 'Acquiring', fp)
det.acquire()
def testFramesCaught(name, det, num_frames, fp):
fnum = det.rx_framescaught[0]
if fnum != num_frames:
raise RuntimeException(f"{name} caught only {fnum}. Expected {num_frames}")
Log(LogLevel.INFOGREEN, f'Frames caught test passed for {name}')
Log(LogLevel.INFOGREEN, f'Frames caught test passed for {name}', fp)
def testZmqHeaderTypeCount(name, det, num_mods, num_frames, fp):
Log(LogLevel.INFO, f"Testing Zmq Header type count for {name}")
Log(LogLevel.INFO, f"Testing Zmq Header type count for {name}", fp)
htype_counts = {
"header": 0,
"series_end": 0,
"module": 0
}
try:
# get a count of each htype from file
pull_socket_fname = PULL_SOCKET_PREFIX_FNAME + name + '.txt'
with open(pull_socket_fname, 'r') as log_fp:
for line in log_fp:
line = line.strip()
if not line or not line.startswith('{'):
continue
try:
data = json.loads(line)
htype = data.get("htype")
if htype in htype_counts:
htype_counts[htype] += 1
except json.JSONDecodeError:
continue
# test if file contents matches expected counts
num_ports_per_module = 1 if name == "gotthard2" else det.numinterfaces
total_num_frame_parts = num_ports_per_module * num_mods * num_frames
for htype, expected_count in [("header", num_mods), ("series_end", num_mods), ("module", total_num_frame_parts)]:
if htype_counts[htype] != expected_count:
msg = f"Expected {expected_count} '{htype}' entries, found {htype_counts[htype]}"
raise RuntimeException(msg)
except Exception as e:
raise RuntimeException(f'Failed to get zmq header type count. Error: {str(e)}') from e
Log(LogLevel.INFOGREEN, f"Zmq Header type count test passed for {name}")
Log(LogLevel.INFOGREEN, f"Zmq Header type count test passed for {name}", fp)
def startTestsForAll(args, fp):
for server in args.servers:
try:
Log(LogLevel.INFOBLUE, f'Synchronizer Tests for {server}')
Log(LogLevel.INFOBLUE, f'Synchronizer Tests for {server}', fp)
cleanup(fp)
startDetectorVirtualServer(server, args.num_mods, fp)
startFrameSynchronizerPullSocket(server, fp)
startFrameSynchronizer(args.num_mods, fp)
d = loadConfig(name=server, rx_hostname=args.rx_hostname, settingsdir=args.settingspath, fp=fp, num_mods=args.num_mods, num_frames=args.num_frames)
acquire(fp, d)
testFramesCaught(server, d, args.num_frames, fp)
testZmqHeaderTypeCount(server, d, args.num_mods, args.num_frames, fp)
Log(LogLevel.INFO, '\n')
except Exception as e:
raise RuntimeException(f'Synchronizer Tests failed for {server}') from e
Log(LogLevel.INFOGREEN, 'Passed all synchronizer tests for all detectors \n' + str(args.servers))
if __name__ == '__main__':
args = ParseArguments(description='Automated tests to test frame synchronizer', default_num_mods=2)
Log(LogLevel.INFOBLUE, '\nLog File: ' + MAIN_LOG_FNAME + '\n')
with open(MAIN_LOG_FNAME, 'w') as fp:
try:
startTestsForAll(args, fp)
cleanup(fp)
except Exception as e:
with open(MAIN_LOG_FNAME, 'a') as fp_error:
traceback.print_exc(file=fp_error)
cleanup(fp)
Log(LogLevel.ERROR, f'Tests Failed.')
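A possible invocation (arguments are illustrative only), assuming the script is run from the build bin directory where CMake copies it together with utils_for_test.py, that frameSynchronizerPullSocket.py is available in the working directory, and that slsFrameSynchronizer and the virtual detector servers are on the PATH:
python -u test_frame_synchronizer.py localhost ../../settingsdir -n 2 -f 5 -s jungfrau gotthard2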

View File

@ -4,244 +4,86 @@
This file starts up the simulators and receivers, runs all the tests on them and finally kills the simulators and receivers.
'''
import argparse
import os, sys, subprocess, time, colorama
import sys, subprocess, time, traceback
from colorama import Fore, Style
from slsdet import Detector, detectorType, detectorSettings
from slsdet.defines import DEFAULT_TCP_CNTRL_PORTNO, DEFAULT_TCP_RX_PORTNO, DEFAULT_UDP_DST_PORTNO
HALFMOD2_TCP_CNTRL_PORTNO=1955
HALFMOD2_TCP_RX_PORTNO=1957
from slsdet import Detector
from slsdet.defines import DEFAULT_TCP_RX_PORTNO
colorama.init(autoreset=True)
def Log(color, message):
print(f"{color}{message}{Style.RESET_ALL}", flush=True)
class RuntimeException (Exception):
def __init__ (self, message):
super().__init__(Log(Fore.RED, message))
def checkIfProcessRunning(processName):
cmd = f"pgrep {processName}"
res = subprocess.getoutput(cmd)
return res.strip().splitlines()
from utils_for_test import (
Log,
LogLevel,
RuntimeException,
checkIfProcessRunning,
killProcess,
cleanup,
cleanSharedmemory,
startProcessInBackground,
runProcessWithLogFile,
startDetectorVirtualServer,
loadConfig,
ParseArguments
)
def killProcess(name):
pids = checkIfProcessRunning(name)
if pids:
Log(Fore.GREEN, f"Killing '{name}' processes with PIDs: {', '.join(pids)}")
for pid in pids:
try:
p = subprocess.run(['kill', pid])
if p.returncode != 0 and bool(checkIfProcessRunning(name)):
raise RuntimeException(f"Could not kill {name} with pid {pid}")
except Exception as e:
Log(Fore.RED, f"Failed to kill process {name} pid:{pid}. Exception occured: [code:{e}, msg:{e.stderr}]")
raise
#else:
# Log(Fore.WHITE, 'process not running : ' + name)
LOG_PREFIX_FNAME = '/tmp/slsDetectorPackage_virtual_test'
MAIN_LOG_FNAME = LOG_PREFIX_FNAME + '_log.txt'
GENERAL_TESTS_LOG_FNAME = LOG_PREFIX_FNAME + '_results_general.txt'
CMD_TEST_LOG_PREFIX_FNAME = LOG_PREFIX_FNAME + '_results_cmd_'
def cleanup(fp):
'''
kill both servers, receivers and clean shared memory
'''
Log(Fore.GREEN, 'Cleaning up...')
killProcess('DetectorServer_virtual')
killProcess('slsReceiver')
killProcess('slsMultiReceiver')
cleanSharedmemory(fp)
def cleanSharedmemory(fp):
Log(Fore.GREEN, 'Cleaning up shared memory...')
try:
p = subprocess.run(['sls_detector_get', 'free'], stdout=fp, stderr=fp)
except:
Log(Fore.RED, 'Could not free shared memory')
raise
def startProcessInBackground(name):
try:
# in background and dont print output
p = subprocess.Popen(name.split(), stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL, restore_signals=False)
Log(Fore.GREEN, 'Starting up ' + name + ' ...')
except Exception as e:
Log(Fore.RED, f'Could not start {name}:{e}')
raise
def startServer(name):
startProcessInBackground(name + 'DetectorServer_virtual')
# second half
if name == 'eiger':
startProcessInBackground(name + 'DetectorServer_virtual -p' + str(HALFMOD2_TCP_CNTRL_PORTNO))
tStartup = 6
Log(Fore.WHITE, 'Takes ' + str(tStartup) + ' seconds... Please be patient')
time.sleep(tStartup)
def startReceiver(name):
startProcessInBackground('slsReceiver')
# second half
if name == 'eiger':
startProcessInBackground('slsReceiver -t' + str(HALFMOD2_TCP_RX_PORTNO))
time.sleep(2)
def loadConfig(name, rx_hostname, settingsdir):
Log(Fore.GREEN, 'Loading config')
try:
d = Detector()
if name == 'eiger':
d.hostname = 'localhost:' + str(DEFAULT_TCP_CNTRL_PORTNO) + '+localhost:' + str(HALFMOD2_TCP_CNTRL_PORTNO)
#d.udp_dstport = {2: 50003}
# will set up for every module
d.udp_dstport = DEFAULT_UDP_DST_PORTNO
d.udp_dstport2 = DEFAULT_UDP_DST_PORTNO + 1
d.rx_hostname = rx_hostname + ':' + str(DEFAULT_TCP_RX_PORTNO) + '+' + rx_hostname + ':' + str(HALFMOD2_TCP_RX_PORTNO)
d.udp_dstip = 'auto'
d.trimen = [4500, 5400, 6400]
d.settingspath = settingsdir + '/eiger/'
d.setThresholdEnergy(4500, detectorSettings.STANDARD)
else:
d.hostname = 'localhost'
d.rx_hostname = rx_hostname
d.udp_dstip = 'auto'
d.udp_srcip = 'auto'
if d.type == detectorType.JUNGFRAU or d.type == detectorType.MOENCH or d.type == detectorType.XILINX_CHIPTESTBOARD:
d.powerchip = 1
if d.type == detectorType.XILINX_CHIPTESTBOARD:
d.configureTransceiver()
except:
Log(Fore.RED, 'Could not load config for ' + name)
raise
def startCmdTests(name, fp, fname):
Log(Fore.GREEN, 'Cmd Tests for ' + name)
cmd = 'tests --abort [.cmdcall] -s -o ' + fname
try:
subprocess.run(cmd.split(), stdout=fp, stderr=fp, check=True, text=True)
except subprocess.CalledProcessError as e:
pass
with open (fname, 'r') as f:
for line in f:
if "FAILED" in line:
msg = 'Cmd tests failed for ' + name + '!!!'
sys.stdout = original_stdout
Log(Fore.RED, msg)
Log(Fore.RED, line)
sys.stdout = fp
raise Exception(msg)
Log(Fore.GREEN, 'Cmd Tests successful for ' + name)
def startGeneralTests(fp, fname):
Log(Fore.GREEN, 'General Tests')
cmd = 'tests --abort -s -o ' + fname
try:
subprocess.run(cmd.split(), stdout=fp, stderr=fp, check=True, text=True)
except subprocess.CalledProcessError as e:
pass
with open (fname, 'r') as f:
for line in f:
if "FAILED" in line:
msg = 'General tests failed !!!'
sys.stdout = original_stdout
Log(Fore.RED, msg + '\n' + line)
sys.stdout = fp
raise Exception(msg)
Log(Fore.GREEN, 'General Tests successful')
# parse cmd line for rx_hostname and settingspath using the argparse library
parser = argparse.ArgumentParser(description = 'automated tests with the virtual detector servers')
parser.add_argument('rx_hostname', nargs='?', default='localhost', help = 'hostname/ip of the current machine')
parser.add_argument('settingspath', nargs='?', default='../../settingsdir', help = 'Relative or absolut path to the settingspath')
parser.add_argument('-s', '--servers', help='Detector servers to run', nargs='*')
args = parser.parse_args()
if args.servers is None:
servers = [
'eiger',
'jungfrau',
'mythen3',
'gotthard2',
'ctb',
'moench',
'xilinx_ctb'
]
else:
servers = args.servers
Log(Fore.WHITE, 'Arguments:\nrx_hostname: ' + args.rx_hostname + '\nsettingspath: \'' + args.settingspath + '\nservers: \'' + ' '.join(servers) + '\'')
# redirect to file
prefix_fname = '/tmp/slsDetectorPackage_virtual_test'
original_stdout = sys.stdout
original_stderr = sys.stderr
fname = prefix_fname + '_log.txt'
Log(Fore.BLUE, '\nLog File: ' + fname)
with open(fname, 'w') as fp:
# general tests
file_results = prefix_fname + '_results_general.txt'
Log(Fore.BLUE, 'General tests (results: ' + file_results + ')')
sys.stdout = fp
sys.stderr = fp
Log(Fore.BLUE, 'General tests (results: ' + file_results + ')')
def startReceiver(num_mods, fp):
if num_mods == 1:
cmd = ['slsReceiver']
else:
cmd = ['slsMultiReceiver', str(DEFAULT_TCP_RX_PORTNO), str(num_mods)]
# in 10.0.0
#cmd = ['slsMultiReceiver', '-p', str(DEFAULT_TCP_RX_PORTNO), '-n', str(num_mods)]
startProcessInBackground(cmd, fp)
time.sleep(1)
def startGeneralTests(fp):
fname = GENERAL_TESTS_LOG_FNAME
cmd = ['tests', '--abort', '-s']
try:
cleanup(fp)
startGeneralTests(fp, file_results)
testError = False
for server in servers:
try:
# print to terminal for progress
sys.stdout = original_stdout
sys.stderr = original_stderr
file_results = prefix_fname + '_results_cmd_' + server + '.txt'
Log(Fore.BLUE, 'Cmd tests for ' + server + ' (results: ' + file_results + ')')
sys.stdout = fp
sys.stderr = fp
Log(Fore.BLUE, 'Cmd tests for ' + server + ' (results: ' + file_results + ')')
# cmd tests for det
cleanup(fp)
startServer(server)
startReceiver(server)
loadConfig(server, args.rx_hostname, args.settingspath)
startCmdTests(server, fp, file_results)
cleanup(fp)
except Exception as e:
# redirect to terminal
sys.stdout = original_stdout
sys.stderr = original_stderr
Log(Fore.RED, f'Exception caught while testing {server}. Cleaning up...')
testError = True
break
# redirect to terminal
sys.stdout = original_stdout
sys.stderr = original_stderr
if not testError:
Log(Fore.GREEN, 'Passed all tests for virtual detectors \n' + str(servers))
runProcessWithLogFile('General Tests', cmd, fp, fname)
except Exception as e:
# redirect to terminal
sys.stdout = original_stdout
sys.stderr = original_stderr
Log(Fore.RED, f'Exception caught with general testing. Cleaning up...')
cleanSharedmemory(sys.stdout)
raise RuntimeException(f'General tests failed.') from e
def startCmdTestsForAll(args, fp):
for server in args.servers:
try:
num_mods = 2 if server == 'eiger' else 1
fname = CMD_TEST_LOG_PREFIX_FNAME + server + '.txt'
cmd = ['tests', '--abort', '[.cmdcall]', '-s']
Log(LogLevel.INFOBLUE, f'Starting Cmd Tests for {server}')
cleanup(fp)
startDetectorVirtualServer(name=server, num_mods=num_mods, fp=fp)
startReceiver(num_mods, fp)
loadConfig(name=server, rx_hostname=args.rx_hostname, settingsdir=args.settingspath, fp=fp, num_mods=num_mods)
runProcessWithLogFile('Cmd Tests for ' + server, cmd, fp, fname)
except Exception as e:
raise RuntimeException(f'Cmd Tests failed for {server}.') from e
Log(LogLevel.INFOGREEN, 'Passed all tests for all detectors \n' + str(args.servers))
if __name__ == '__main__':
args = ParseArguments('Automated tests with the virtual detector servers')
if args.num_mods > 1:
raise RuntimeException(f'Cannot support multiple modules at the moment (except Eiger).')
Log(LogLevel.INFOBLUE, '\nLog File: ' + MAIN_LOG_FNAME + '\n')
with open(MAIN_LOG_FNAME, 'w') as fp:
try:
startGeneralTests(fp)
startCmdTestsForAll(args, fp)
cleanup(fp)
except Exception as e:
with open(MAIN_LOG_FNAME, 'a') as fp_error:
traceback.print_exc(file=fp_error)
cleanup(fp)
Log(LogLevel.ERROR, f'Tests Failed.')
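A possible invocation (arguments are illustrative only), assuming the script is run from the build bin directory with the tests binary, slsReceiver/slsMultiReceiver and the virtual detector servers on the PATH; rx_hostname and settingspath default to localhost and ../../settingsdir when omitted:
python -u test_simulators.py localhost ../../settingsdir -s eiger mythen3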

View File

@ -0,0 +1,259 @@
# SPDX-License-Identifier: LGPL-3.0-or-other
# Copyright (C) 2021 Contributors to the SLS Detector Package
'''
Common utilities shared by the integration tests between simulators and receivers.
'''
import sys, subprocess, time, argparse
from enum import Enum
from colorama import Fore, Style, init
from slsdet import Detector, detectorSettings
from slsdet.defines import DEFAULT_TCP_RX_PORTNO, DEFAULT_UDP_DST_PORTNO
SERVER_START_PORTNO=1900
init(autoreset=True)
class LogLevel(Enum):
INFO = 0
INFORED = 1
INFOGREEN = 2
INFOBLUE = 3
WARNING = 4
ERROR = 5
DEBUG = 6
LOG_LABELS = {
LogLevel.WARNING: "WARNING: ",
LogLevel.ERROR: "ERROR: ",
LogLevel.DEBUG: "DEBUG: "
}
LOG_COLORS = {
LogLevel.INFO: Fore.WHITE,
LogLevel.INFORED: Fore.RED,
LogLevel.INFOGREEN: Fore.GREEN,
LogLevel.INFOBLUE: Fore.BLUE,
LogLevel.WARNING: Fore.YELLOW,
LogLevel.ERROR: Fore.RED,
LogLevel.DEBUG: Fore.CYAN
}
def Log(level: LogLevel, message: str, stream=sys.stdout):
color = LOG_COLORS.get(level, Fore.WHITE)
label = LOG_LABELS.get(level, "")
print(f"{color}{label}{message}{Style.RESET_ALL}", file=stream, flush=True)
class RuntimeException (Exception):
def __init__ (self, message):
Log(LogLevel.ERROR, message)
super().__init__(message)
def checkIfProcessRunning(processName):
cmd = f"pgrep -f {processName}"
res = subprocess.getoutput(cmd)
return res.strip().splitlines()
def killProcess(name, fp):
pids = checkIfProcessRunning(name)
if pids:
Log(LogLevel.INFO, f"Killing '{name}' processes with PIDs: {', '.join(pids)}", fp)
for pid in pids:
try:
p = subprocess.run(['kill', pid])
if p.returncode != 0 and bool(checkIfProcessRunning(name)):
raise RuntimeException(f"Could not kill {name} with pid {pid}")
except Exception as e:
raise RuntimeException(f"Failed to kill process {name} pid:{pid}. Error: {str(e)}") from e
#else:
# Log(LogLevel.INFO, 'process not running : ' + name)
def cleanSharedmemory(fp):
Log(LogLevel.INFO, 'Cleaning up shared memory', fp)
try:
p = subprocess.run(['sls_detector_get', 'free'], stdout=fp, stderr=fp)
except:
raise RuntimeException('Could not free shared memory')
def cleanup(fp):
Log(LogLevel.INFO, 'Cleaning up')
Log(LogLevel.INFO, 'Cleaning up', fp)
killProcess('DetectorServer_virtual', fp)
killProcess('slsReceiver', fp)
killProcess('slsMultiReceiver', fp)
killProcess('slsFrameSynchronizer', fp)
killProcess('frameSynchronizerPullSocket', fp)
cleanSharedmemory(fp)
def startProcessInBackground(cmd, fp):
Log(LogLevel.INFO, 'Starting up ' + ' '.join(cmd))
Log(LogLevel.INFO, 'Starting up ' + ' '.join(cmd), fp)
try:
p = subprocess.Popen(cmd, stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL, restore_signals=False)
except Exception as e:
raise RuntimeException(f'Failed to start {cmd}:{str(e)}') from e
def startProcessInBackgroundWithLogFile(cmd, fp, log_file_name: str):
Log(LogLevel.INFOBLUE, 'Starting up ' + ' '.join(cmd) + '. Log: ' + log_file_name)
Log(LogLevel.INFOBLUE, 'Starting up ' + ' '.join(cmd) + '. Log: ' + log_file_name, fp)
try:
with open(log_file_name, 'w') as log_fp:
subprocess.Popen(cmd, stdout=log_fp, stderr=log_fp, text=True)
except Exception as e:
raise RuntimeException(f'Failed to start {cmd}:{str(e)}') from e
def checkLogForErrors(fp, log_file_path: str):
try:
with open(log_file_path, 'r') as log_file:
for line in log_file:
if 'Error' in line:
Log(LogLevel.ERROR, f"Error found in log: {line.strip()}")
Log(LogLevel.ERROR, f"Error found in log: {line.strip()}", fp)
raise RuntimeException("Error found in log file")
except FileNotFoundError:
print(f"Log file not found: {log_file_path}")
raise
except Exception as e:
print(f"Exception while reading log: {e}")
raise
def runProcessWithLogFile(name, cmd, fp, log_file_name):
Log(LogLevel.INFOBLUE, 'Running ' + name + '. Log: ' + log_file_name)
Log(LogLevel.INFOBLUE, 'Running ' + name + '. Log: ' + log_file_name, fp)
Log(LogLevel.INFOBLUE, 'Cmd: ' + ' '.join(cmd), fp)
try:
with open(log_file_name, 'w') as log_fp:
subprocess.run(cmd, stdout=log_fp, stderr=log_fp, check=True, text=True)
except subprocess.CalledProcessError as e:
pass
except Exception as e:
Log(LogLevel.ERROR, f'Failed to run {name}:{str(e)}', fp)
raise RuntimeException(f'Failed to run {name}:{str(e)}')
with open (log_file_name, 'r') as f:
for line in f:
if "FAILED" in line:
raise RuntimeException(f'{line}')
Log(LogLevel.INFOGREEN, name + ' successful!\n')
Log(LogLevel.INFOGREEN, name + ' successful!\n', fp)
def startDetectorVirtualServer(name :str, num_mods, fp):
for i in range(num_mods):
port_no = SERVER_START_PORTNO + (i * 2)
cmd = [name + 'DetectorServer_virtual', '-p', str(port_no)]
startProcessInBackgroundWithLogFile(cmd, fp, "/tmp/virtual_det_" + name + str(i) + ".txt")
match name:
case 'jungfrau':
time.sleep(7)
case 'gotthard2':
time.sleep(5)
case _:
time.sleep(3)
def connectToVirtualServers(name, num_mods):
try:
d = Detector()
except Exception as e:
raise RuntimeException(f'Could not create Detector object for {name}. Error: {str(e)}') from e
counts_sec = 5
while (counts_sec != 0):
try:
d.virtual = [num_mods, SERVER_START_PORTNO]
break
except Exception as e:
# server still not up, wait a bit longer
if "Cannot connect to" in str(e):
Log(LogLevel.WARNING, f'Still waiting for {name} virtual server to be up...{counts_sec}s left')
time.sleep(1)
counts_sec -= 1
else:
raise
return d
def loadConfig(name, rx_hostname, settingsdir, fp, num_mods = 1, num_frames = 1):
Log(LogLevel.INFO, 'Loading config')
Log(LogLevel.INFO, 'Loading config', fp)
try:
d = connectToVirtualServers(name, num_mods)
d.udp_dstport = DEFAULT_UDP_DST_PORTNO
if name == 'eiger':
d.udp_dstport2 = DEFAULT_UDP_DST_PORTNO + 1
d.rx_hostname = rx_hostname
d.udp_dstip = 'auto'
if name != "eiger":
d.udp_srcip = 'auto'
if name == "jungfrau" or name == "moench" or name == "xilinx_ctb":
d.powerchip = 1
if name == "xilinx_ctb":
d.configureTransceiver()
if name == "eiger":
d.trimen = [4500, 5400, 6400]
d.settingspath = settingsdir + '/eiger/'
d.setThresholdEnergy(4500, detectorSettings.STANDARD)
d.frames = num_frames
except Exception as e:
raise RuntimeException(f'Could not load config for {name}. Error: {str(e)}') from e
return d
def ParseArguments(description, default_num_mods=1):
parser = argparse.ArgumentParser(description)
parser.add_argument('rx_hostname', nargs='?', default='localhost',
help='Hostname/IP of the current machine')
parser.add_argument('settingspath', nargs='?', default='../../settingsdir',
help='Relative or absolute path to the settings directory')
parser.add_argument('-n', '--num-mods', nargs='?', default=default_num_mods, type=int,
help='Number of modules to test with')
parser.add_argument('-f', '--num-frames', nargs='?', default=1, type=int,
help='Number of frames to test with')
parser.add_argument('-s', '--servers', nargs='*',
help='Detector servers to run')
args = parser.parse_args()
# Set default server list if not provided
if args.servers is None:
args.servers = [
'eiger',
'jungfrau',
'mythen3',
'gotthard2',
'ctb',
'moench',
'xilinx_ctb'
]
Log(LogLevel.INFO, 'Arguments:\n' +
'rx_hostname: ' + args.rx_hostname +
'\nsettingspath: \'' + args.settingspath + '\'' +
'\nservers: \'' + ' '.join(args.servers) + '\'' +
'\nnum_mods: ' + str(args.num_mods) +
'\nnum_frames: ' + str(args.num_frames))
return args
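A minimal, hypothetical sketch of a script reusing these helpers, assuming it sits next to utils_for_test.py in the build bin directory and that slsReceiver and the virtual server binaries are on the PATH; run it with e.g. '-s jungfrau', since the default server list starts with eiger, which expects two modules and two receivers:
import time
from utils_for_test import (Log, LogLevel, cleanup, startProcessInBackground,
                            startDetectorVirtualServer, loadConfig, ParseArguments)

if __name__ == '__main__':
    # e.g. run as: python smoke_test.py localhost ../../settingsdir -s jungfrau
    args = ParseArguments(description='Hypothetical smoke test')
    with open('/tmp/sls_smoke_test_log.txt', 'w') as fp:
        cleanup(fp)  # kill leftover processes and free shared memory
        startDetectorVirtualServer(args.servers[0], args.num_mods, fp)
        startProcessInBackground(['slsReceiver'], fp)  # single-module receiver
        time.sleep(1)
        d = loadConfig(name=args.servers[0], rx_hostname=args.rx_hostname,
                       settingsdir=args.settingspath, fp=fp,
                       num_mods=args.num_mods, num_frames=args.num_frames)
        d.acquire()
        Log(LogLevel.INFOGREEN, f'Frames caught: {d.rx_framescaught}', fp)
        cleanup(fp)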

81
updateAPIVersion.py Normal file
View File

@ -0,0 +1,81 @@
# SPDX-License-Identifier: LGPL-3.0-or-other
# Copyright (C) 2025 Contributors to the SLS Detector Package
"""
Script to update the API version in versionAPI.h based on the version in the VERSION file.
"""
import argparse
import sys
import os
import re
import time
from datetime import datetime
SCRIPT_DIR = os.path.dirname(os.path.abspath(__file__))
API_FILE = SCRIPT_DIR + "/slsSupportLib/include/sls/versionAPI.h"
VERSION_FILE = SCRIPT_DIR + "/VERSION"
parser = argparse.ArgumentParser(description = 'updates API version')
parser.add_argument('api_module_name', choices=["APILIB", "APIRECEIVER", "APICTB", "APIGOTTHARD2", "APIMOENCH", "APIEIGER", "APIXILINXCTB", "APIJUNGFRAU", "APIMYTHEN3"], help = 'name of the module whose API version should be updated')
parser.add_argument('api_dir', help = 'Relative or absolute path to the module code')
def update_api_file(new_api : str, api_module_name : str, api_file_name : str):
regex_pattern = re.compile(rf'#define\s+{api_module_name}\s+')
with open(api_file_name, "r") as api_file:
lines = api_file.readlines()
with open(api_file_name, "w") as api_file:
for line in lines:
if regex_pattern.match(line):
api_file.write(f'#define {api_module_name} "{new_api}"\n')
else:
api_file.write(line)
def get_latest_modification_date(directory : str):
latest_time = 0
latest_date = None
for root, dirs, files in os.walk(directory):
for file in files:
if file.endswith(".o"):
continue
full_path = os.path.join(root, file)
try:
mtime = os.path.getmtime(full_path)
if mtime > latest_time:
latest_time = mtime
except FileNotFoundError:
continue
latest_date = datetime.fromtimestamp(latest_time).strftime("%y%m%d")
return latest_date
def update_api_version(api_module_name : str, api_dir : str):
api_date = get_latest_modification_date(api_dir)
api_date = "0x"+str(api_date)
with open(VERSION_FILE, "r") as version_file:
api_version = version_file.read().strip()
api_version = api_version + " " + api_date #not sure if we should give an argument option version_branch
update_api_file(api_version, api_module_name, API_FILE)
print(f"updated {api_module_name} api version to: {api_version}")
if __name__ == "__main__":
args = parser.parse_args()
api_dir = SCRIPT_DIR + "/" + args.api_dir
update_api_version(args.api_module_name, api_dir)
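For example, to refresh the detector-library API date (the api_dir argument is resolved relative to the script's own directory, i.e. the repository root), one possible call is:
python updateAPIVersion.py APILIB slsDetectorSoftware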

View File

@ -1,65 +0,0 @@
# SPDX-License-Identifier: LGPL-3.0-or-other
# Copyright (C) 2021 Contributors to the SLS Detector Package
usage="\nUsage: updateAPIVersion.sh [API_NAME] [API_DIR] [API_BRANCH(opt)]."
if [ $# -lt 2 ]; then
echo -e "Requires atleast 2 arguments. $usage"
return [-1]
fi
API_NAME=$1
PACKAGE_DIR=$PWD
API_DIR=$PACKAGE_DIR/$2
API_FILE=$PACKAGE_DIR/slsSupportLib/include/sls/versionAPI.h
CURR_DIR=$PWD
if [ ! -d "$API_DIR" ]; then
echo "[API_DIR] does not exist. $usage"
return [-1]
fi
#go to directory
cd $API_DIR
#deleting line from file
NUM=$(sed -n '/'$API_NAME' /=' $API_FILE)
#echo $NUM
if [ "$NUM" -gt 0 ]; then
sed -i ${NUM}d $API_FILE
fi
#find new API date
API_DATE="find . -printf \"%T@ %CY-%Cm-%Cd\n\"| sort -nr | cut -d' ' -f2- | egrep -v '(\.)o' | head -n 1"
API_DATE=`eval $API_DATE`
API_DATE=$(sed "s/-//g" <<< $API_DATE | awk '{print $1;}' )
#extracting only date
API_DATE=${API_DATE:2:6}
#prefix of 0x
API_DATE=${API_DATE/#/0x}
echo "date="$API_DATE
# API_VAL concatenates branch and date
API_VAL=""
# API branch is defined (3rd argument)
if [ $# -eq 3 ]; then
API_BRANCH=$3
echo "branch="$API_BRANCH
API_VAL+="\"$API_BRANCH $API_DATE\""
else
# API branch not defined (default is developer)
echo "branch=developer"
API_VAL+="\"developer $API_DATE\""
fi
#copy it to versionAPI.h
echo "#define "$API_NAME $API_VAL >> $API_FILE
#go back to original directory
cd $CURR_DIR

34
updateClientAPIVersion.py Normal file
View File

@ -0,0 +1,34 @@
# SPDX-License-Identifier: LGPL-3.0-or-other
# Copyright (C) 2025 Contributors to the SLS Detector Package
"""
Script to update the client API version for slsDetectorSoftware and/or slsReceiverSoftware
"""
import argparse
import os
from updateAPIVersion import update_api_version
SCRIPT_DIR = os.path.dirname(os.path.abspath(__file__))
parser = argparse.ArgumentParser(description = 'updates API version')
parser.add_argument('module_name', nargs="?", choices=["slsDetectorSoftware", "slsReceiverSoftware", "all"], default="all", help = 'name of the module whose API version should be updated')
if __name__ == "__main__":
args = parser.parse_args()
if args.module_name == "all":
client_names = ["APILIB", "APIRECEIVER"]
client_directories = [SCRIPT_DIR+"/slsDetectorSoftware", SCRIPT_DIR+"/slsReceiverSoftware"]
elif args.module_name == "slsDetectorSoftware":
client_names = ["APILIB"]
client_directories = [SCRIPT_DIR+"/slsDetectorSoftware"]
else:
client_names = ["APIRECEIVER"]
client_directories = [SCRIPT_DIR+"/slsReceiverSoftware"]
for client_name, client_directory in zip(client_names, client_directories):
update_api_version(client_name, client_directory)
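For example, to update only the receiver client (omitting the argument updates both slsDetectorSoftware and slsReceiverSoftware):
python updateClientAPIVersion.py slsReceiverSoftware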

View File

@ -1,59 +0,0 @@
# SPDX-License-Identifier: LGPL-3.0-or-other
# Copyright (C) 2021 Contributors to the SLS Detector Package
branch=""
client_list=("slsDetectorSoftware" "slsReceiverSoftware")
usage="\nUsage: updateClientAPI.sh [all|slsDetectorSoftware|slsReceiverSoftware] [branch]. \n\tNo arguments means all with 'developer' branch. \n\tNo 'branch' input means 'developer branch'"
# arguments
if [ $# -eq 0 ]; then
declare -a client=${client_list[@]}
echo "API Versioning all"
elif [ $# -eq 1 ] || [ $# -eq 2 ]; then
# 'all' client
if [[ $1 == "all" ]]; then
declare -a client=${client_list[@]}
echo "API Versioning all"
else
# only one server
if [[ $client_list != *$1* ]]; then
echo -e "Invalid argument 1: $1. $usage"
return -1
fi
declare -a client=("${1}")
#echo "Versioning only $1"
fi
if [ $# -eq 2 ]; then
if [[ $client_list == *$2* ]]; then
echo -e "Invalid argument 2: $2. $usage"
return -1
fi
branch+=$2
#echo "with branch $branch"
fi
else
echo -e "Too many arguments.$usage"
return -1
fi
#echo "list is: ${client[@]}"
# versioning each client
for i in ${client[@]}
do
dir=$i
case $dir in
slsDetectorSoftware)
declare -a name=APILIB
;;
slsReceiverSoftware)
declare -a name=APIRECEIVER
;;
*)
echo -n "unknown client argument $i"
return -1
;;
esac
echo -e "Versioning $dir [$name]"
./updateAPIVersion.sh $name $dir $branch
done