Compare commits


786 Commits

SHA1 Message Date
98f0b967e3 suppressed warning in qwt + added reason 2022-11-23 09:13:12 +01:00
a36670df42 added comments 2022-11-21 16:47:06 +01:00
670ec18c3b cleanup 2022-11-21 16:26:37 +01:00
0ae15e5aa2 static linking of qwt 2022-11-21 16:12:04 +01:00
b08bf08ea9 using external qwt 2022-11-21 11:41:12 +01:00
f6241f7a5e formatting 2022-11-18 14:26:58 +01:00
2dcf4a7144 fixed warnings 2022-11-18 11:19:01 +01:00
33e2155db1 Added plotPattern.py for the new pattern generator for the new firmware (6 loops, 6 waits) 2022-11-18 10:30:25 +01:00
39a87ffcd9 release notes 2022-11-17 16:57:35 +01:00
5ea2cdb83d fix to allow all clks for pll (totaldiv was float instead of int) (#579) 2022-11-17 16:39:17 +01:00
4bb1a612f1 changed detector header version to detector specific field names and incremented version number, also for zmq (#574) 2022-11-17 15:24:12 +01:00
f108ec82ea destruct arping thread in Arping destructor (#578) 2022-11-17 15:22:54 +01:00
74a2f07c7d merge from developer, locking complete set of start acq steps and stop steps in eiger server as it shouldn't be interrupted, moved prepare into startstatemachine (#577) 2022-11-17 14:58:53 +01:00
a921368dea Updated generator.c: patioctrl, loops and timeouts are only generated if they exist in the pattern source 2022-11-17 10:57:42 +01:00
b432c70076 Moved location of patternGenerator directory.
Updated generator.c to include 3 additional timers and loops
2022-11-16 17:47:54 +01:00
06fc48b115 update release notes 2022-11-11 17:16:34 +01:00
bf3333b97b show fpga temp in developer tab in gui, virtual servers just show a warning if there is no temperature file in /tmp to read (don't throw an exception) (#573) 2022-11-11 17:15:15 +01:00
38cd10d4e6 Qt5 built in qwt (#570)
- qt4->qt5
- in-built qwt 6.1.5 because RHEL7 is not up to date with Qt5, removed FindQwt.cmake
- made a fix in the qwt lib (qwt_plot_layout.h) to work with 5.15 and lower versions for the QRect constructor
- Qt5 forms fixed; many hard-coded Qt4 widgets switched to forms, including QTabWidget, scroll areas etc.; fonts moved to forms
- docking option enabled by default, removed the option to disable the docking feature from "Mode"
- added qVersionResolve utility functions to handle compatibility before and after Qt 5.12
- qtplots (Ian's code) takes in a gain mode enable to set some settings within the class, with proper gain plot ticks
- ensure gain plots have no zooming of the z axis in 2d and the y axis in 1d
- removed placeholder text in QPalette in the main window form as it is not supported until 5.12 (so using the Qt 5.9 designer instead of Qt 5.15 to cope)
- tab order
Servers:
- fixed some error messages that were empty on function failure (mostly minor; if these errors occur, there are bigger issues)
2022-11-11 15:15:10 +01:00
05f657c106 Versioning (#568)
- removed getClientServerAPIVersion in server (not used)
- removed rxr-side client version compatibility check (and its enum), as it is now done on the client side
- versionAPI.h
   - GITBRANCH changed to RELEASE
   - dates for all APIs changed to "sem_version date"; scripts to compile servers modified for this. An empty "branch" name ends up with "developer" as sem_version.

- Version class with a constructor taking the long version (APILIB date). Other member functions include concise() (gives sem_version for new releases and the date for old releases).

- bypassing initial tests now also bypasses the client-rxr compatibility check (at the rx_hostname command)

- previously, compatibility between client and detector required both to have the same detector API (e.g. the same APIJUNGFRAU)
   - now, compatibility only checks that APILIB (client side) and the detector API (e.g. APIJUNGFRAU, detector side) have the same major version. Only a backward compatibility test is done; the rest is up to the user to ensure.
   - if the server is from an older release, dates are compared as in the previous implementation (APIJUNGFRAU from both client and detector)

- previously, compatibility between client and rxr required both to have the same APIRECEIVER
   - now, compatibility only checks that APILIB (client side) and APIRECEIVER (rxr side) have the same major version. Only a backward compatibility test is done; the rest is up to the user to ensure.
   - if the rxr is from an older release, dates are compared as in the previous implementation (APIRECEIVER from both client and rxr)

- removed APIGUI, evalVersionVariables.sh, genVersionHeader.sh (not needed or not used)

- clientversion, rxrversion and detectorserverversion all return strings now instead of integers (in hex). Depending on whether semantic versioning is available, they print that, or the date if the release is too old.

- fixed in python (strings for versions)
- check_version function in the detector server changed to "initial checks", as it only checks server-firmware compatibility and initial server checks; client compatibility checks are moved to the client side
- --version gives sem_version and date (is the date needed as well?). The clientversion, detserverversion and rxrversion APIs give only sem_version (no date)
- formatting
2022-11-09 11:13:09 +01:00
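The compatibility scheme described above boils down to a major-version check with a date fallback for old releases. A minimal sketch (hypothetical Version type and function names; not the package's actual class):

    #include <cstdint>
    #include <stdexcept>
    #include <string>

    // Hypothetical illustration of the rule described above, not the package's class.
    struct Version {
        int major_{0}, minor_{0};
        int64_t date{0}; // date-style version used by old releases
        bool hasSemanticVersioning() const { return major_ > 0; }
        std::string concise() const {
            // sem_version for new releases, the date for old releases
            return hasSemanticVersioning()
                       ? std::to_string(major_) + "." + std::to_string(minor_)
                       : std::to_string(date);
        }
    };

    void checkCompatibility(const Version &clientApiLib, const Version &serverApi) {
        if (serverApi.hasSemanticVersioning()) {
            // backward-compatibility test only: the major versions must match
            if (clientApiLib.major_ != serverApi.major_)
                throw std::runtime_error("incompatible versions: " +
                                         clientApiLib.concise() + " vs " +
                                         serverApi.concise());
        } else {
            // server from an older release: fall back to comparing dates
            if (serverApi.date < clientApiLib.date)
                throw std::runtime_error("detector server too old");
        }
    }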
b150db0fb3 Merge branch 'developer' of github.com:slsdetectorgroup/slsDetectorPackage into developer 2022-11-08 11:27:54 +01:00
f0c0f07701 changed JF structures and tiff writer for rectangular detectors 2022-11-08 11:27:31 +01:00
4a95ee8362 format 2022-11-08 10:25:12 +01:00
16e9b272c7 release notes 2022-11-07 12:49:04 +01:00
615d66d493 eiger server workaround fix for fw stop done signal (#571)
- eiger server: fix for the fw workaround where the stop-acquisition processing-done signal does not come up: removed the reset in stop acquisition and wait up to 2 seconds for the feb processing-done signal to go down; if it doesn't, throw if the status is not idle.
- error messages not setup for some eiger server errors
- quad fix (chip signals to trim quad, both left and right registers can be different)
- minor logical error of no consequence (stop acquisition returns a different enum than expected)
2022-11-07 12:42:54 +01:00
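The workaround in the first bullet amounts to polling the firmware's processing-done signal for up to two seconds instead of resetting. A rough sketch under the assumption that the signal and status can be read back (placeholder function names; the real Eiger server is C code talking to the FEB):

    #include <chrono>
    #include <stdexcept>
    #include <thread>

    // Placeholders standing in for reads of firmware status registers.
    bool febProcessingDone() { return false; }
    bool detectorIdle() { return true; }

    // Sketch of the wait-instead-of-reset workaround: poll the processing-done
    // signal for up to 2 seconds; only throw if it never clears and the
    // detector is not idle.
    void stopAcquisitionWorkaround() {
        using namespace std::chrono;
        const auto deadline = steady_clock::now() + seconds(2);
        while (febProcessingDone() && steady_clock::now() < deadline)
            std::this_thread::sleep_for(milliseconds(50));
        if (febProcessingDone() && !detectorIdle())
            throw std::runtime_error(
                "feb processing-done signal did not clear and status is not idle");
    }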
9f906a779e M3clk (#548)
* m3: clock update, cannot set clk 4 and 5 anymore
* updated firmware version
2022-11-07 11:21:37 +01:00
3560f81d8e cpupower added (#569) 2022-10-27 11:55:09 +02:00
4230517e69 pybind documentation (update) (#563)
* document pybind11 update and included in package

* updating doc for pybind11
2022-10-27 10:50:00 +02:00
4f21ad5122 Corrected mistake for compiling moench04 with digital bits 2022-10-25 09:55:11 +02:00
2696b95014 Removed an extra bracket in slsDetectorCalibration/singlePhotonDetector.h wrongfully inserted in previous commit 2022-10-24 14:22:07 +02:00
bb0201385f fixed issue with cluster finder 2022-10-24 12:39:37 +02:00
22b16de6b5 added binary for moench04 and gain bits 2022-10-24 12:36:24 +02:00
9232776577 Merge branch 'developer' of https://github.com/slsdetectorgroup/slsDetectorPackage into developer 2022-10-24 12:11:27 +02:00
5096119ac3 some changes for JF with LGADs 2022-10-24 12:09:52 +02:00
876977392b Modified moench04CtbZmq10GbData.h for analog digital mode 2022-10-24 12:08:42 +02:00
9efb19a9b5 some changes for JF with LGADs 2022-10-24 12:07:26 +02:00
65ad0bae51 update release notes 2022-10-20 13:52:29 +02:00
71c50e5f66 CtbGui (Root) removed printout of pixel value on draw 2022-10-20 12:06:03 +02:00
3fa3e05bfd Adcvpp2 (#567)
* fix for adcvpp p
2022-10-20 11:53:23 +02:00
f9f485c99e moench04CtbZmq10GbData.h aoff guess (l. 115 or 116) 2022-10-20 11:07:44 +02:00
b7cdfbb4d2 Adcvpp (#566)
* added api in detector class for adcvpp, taken out of dac list
2022-10-20 10:58:56 +02:00
af300e0276 fix compiler warning for ctb gui for moench getGain: aoff not declared (#565) 2022-10-20 10:50:31 +02:00
a84fcd2c82 pybind11 release update 2022-10-18 17:24:47 +02:00
6fc43fa06a binaries in 2022-10-18 15:52:05 +02:00
e7879ee365 g2 and m3 round robin (#559)
* g2 and m3: round robin
2022-10-18 15:51:23 +02:00
46bb9bc2d7 nios temp (#557)
* fixed temp read nios

* divide for eiger and don't print
2022-10-18 15:47:23 +02:00
4a7cd051c1 Gainzoom (#556)
* gain plot: don't allow zoom, only zoom on main plot

* fixed gain plot zooming

* fixing panning for gainplots
2022-10-18 15:27:44 +02:00
d2c4827b31 increasing default rx tcp port (#562)
* increasing rx tcp port by default when creating shm
2022-10-18 15:24:04 +02:00
d9e34e1657 Pllreset (#560)
* ctb, moench and jungfrau: pll reset at start was not happening as the defines were missing
2022-09-29 14:03:26 +02:00
bac32dcba9 ctb 1g non blocking acquire (#555)
* allowing ctb and moench 1g non-blocking acquisition to also send data; refactoring wait-for-acquisition-finished for all the others
2022-09-16 17:45:51 +02:00
e385618d09 gui crash qtimer (#551)
* added locks for when the qtimer runs out at the same time as acquisitionfinished

* fix for newer gcc
2022-09-16 17:02:03 +02:00
1df4aba8ec updated g2 version for new fw (#554)
* updated g2 version for new fw
2022-09-12 15:04:11 +02:00
fdfdfd9f43 Option to install slsdet python extension in the install tree (#553)
* added option to install python ext
2022-09-09 13:47:27 +02:00
c0c8c8e21a Hardcopy of pybind11 instead of using git submodules (#552)
* removed pybind as submodule
* added hardcopy of pybind11 2.10.0
* rename pybind11 folder to avoid conflicts when changing branch

Co-authored-by: Dhanya Thattil <dhanya.thattil@psi.ch>
2022-09-09 10:42:43 +02:00
236f00c810 only get dacnames if modules are attached (#550) 2022-09-08 13:48:05 +02:00
afff85b44b fix for broken build of zmq and ctb 2022-09-07 11:58:26 +02:00
1425382dbb G2clkdiv (#547)
* g2 changing clkdivs 2 3 4 to defaults for burst and cw mode
2022-09-05 17:00:01 +02:00
76c18f3303 setting rx_hostname (or udp_dstip with rx_hostname not none) will always set udp_dstmac; solves the problem of changing udp_dstip while udp_dstmac stays the same (#544) 2022-09-05 16:57:54 +02:00
7de6f157b5 M3badchannels (#526)
* badchannels for m3 and modify for g2 (file from single and multi)

* m3: invert polarity of bit 7 and 11 signals from setmodule, allow commas in bad channel file

* badchannel file can take commas, colons and comments (also taking care of spaces at the end of channel numbers)

* tests 'badchannels' and 'Channel file reading' added, removing duplicates in badchannel list, defining macro for num counters in client side

* fix segfault when list from file is empty, 

* fix tests assertion for ctbconfig (adding message) for c++11

* fixed badchannels in m3server (clocking in trimming) 

* badchannel tests can be run from any folder (finds the file)
2022-09-01 15:30:04 +02:00
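A rough sketch of reading a bad-channel file as described above (commas and spaces as separators, '#' comments, duplicates removed); this is illustrative only, not the package's parser, and the assumption that "a:b" denotes a range comes from the "colons" bullet:

    #include <algorithm>
    #include <fstream>
    #include <set>
    #include <sstream>
    #include <string>
    #include <vector>

    // Illustrative sketch of parsing a bad-channel file: '#' starts a comment,
    // entries may be separated by commas or spaces, "a:b" is assumed to be a range.
    std::vector<int> readBadChannels(const std::string &fname) {
        std::set<int> channels; // a set removes duplicates and keeps the list sorted
        std::ifstream file(fname);
        std::string line;
        while (std::getline(file, line)) {
            line = line.substr(0, line.find('#'));            // strip comments
            std::replace(line.begin(), line.end(), ',', ' '); // commas act as separators
            std::istringstream iss(line);
            std::string token;
            while (iss >> token) { // stream extraction also swallows stray spaces
                auto colon = token.find(':');
                if (colon == std::string::npos) {
                    channels.insert(std::stoi(token));
                } else {
                    int first = std::stoi(token.substr(0, colon));
                    int last = std::stoi(token.substr(colon + 1));
                    for (int i = first; i <= last; ++i)
                        channels.insert(i);
                }
            }
        }
        return {channels.begin(), channels.end()};
    }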
02322bb3c2 jungfrau reset core and usleep (fix for 6.1.1 now fixed in firmware) removed (#545) 2022-09-01 15:05:41 +02:00
4913c9c0b0 incorrect check for fwrite fail (#541) 2022-08-31 15:53:24 +02:00
d73d8994c0 g2 hdi type tolerance changed from 5 to 10 (#542) 2022-08-30 15:54:32 +02:00
aeb3e222c6 format 2022-08-30 15:31:38 +02:00
045a28b5de Nanosecond times in Python (#522)
* initial implementation

* datetime replaced with sls::Duration in Python C bindings

* using custom type caster

* fix for conversion to seconds

* added set_count in python

* common header for pybind11 includes

authored-by: Erik Frojdh <erik.frojdh@psi.ch>
2022-08-26 11:48:40 +02:00
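The "custom type caster" mentioned above would look roughly like the following pybind11 sketch, assuming sls::Duration behaves like a std::chrono duration and that plain numbers of seconds are accepted from Python; this is an illustration, not the caster actually used in the bindings:

    #include <pybind11/pybind11.h>
    #include <chrono>

    // Assumption for this sketch: sls::Duration is a std::chrono-style duration.
    using Duration = std::chrono::nanoseconds;

    namespace pybind11 {
    namespace detail {
    template <> struct type_caster<Duration> {
      public:
        PYBIND11_TYPE_CASTER(Duration, _("Duration"));

        // Python -> C++: accept a plain number of seconds (int or float)
        bool load(handle src, bool) {
            PyObject *obj = src.ptr();
            if (!PyFloat_Check(obj) && !PyLong_Check(obj))
                return false;
            double seconds = PyFloat_AsDouble(obj);
            if (PyErr_Occurred())
                return false;
            value = std::chrono::duration_cast<Duration>(
                std::chrono::duration<double>(seconds));
            return true;
        }

        // C++ -> Python: return the duration as a float of seconds
        static handle cast(Duration src, return_value_policy, handle) {
            return PyFloat_FromDouble(std::chrono::duration<double>(src).count());
        }
    };
    } // namespace detail
    } // namespace pybind11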
3970ed2560 Suppress -Wdeprecated-declarations for gcc (#536)
* Suppress gcc warning from rapidjson include

Co-authored-by: Erik Frojdh <erik.frojdh@psi.ch>
2022-08-26 09:23:05 +02:00
5a22e8b926 ctb and moench fw fixed to work with pattern length so pattern command works (#535) 2022-08-25 15:01:59 +02:00
4638bf7cf8 Jungfrausync (#519)
* jungfrau sync
2022-08-23 10:29:16 +02:00
da16c1101c G2 hdi id (#521)
* g2: hdi value in fpga is 3 bits, so using appropriate enum for it

* hdi id bits are determined

* fw date changed (g2)
2022-08-23 08:02:11 +02:00
67eea7ac36 Patdefault (#524)
* m3, ctb, moench set wait and loop addresses to 0x1fff

* update default pattern file for moench
2022-08-23 07:59:39 +02:00
c8e31ae6d7 formatting 2022-08-16 09:53:09 +02:00
809b0bdeb8 Jungfraumaster (#518)
* set jungfrau master only from client
* added tests, fixed a bug in ctb and moench (infinite recursion) that will never happen atm
2022-08-16 09:51:18 +02:00
01696ca89b Jungfrautrigger (#516)
* jungfrau trigger added
* added blocking trigger
2022-08-16 09:41:47 +02:00
1bc4994be6 G2parallel (#514)
* g2: non parallel added
2022-08-16 09:35:39 +02:00
22b9562629 G2hdi (#510)
* g2: new hdi values, write hdi value to reg, set slave/master to reg, able to set master from server config file, server command line and client

* print versions for virtual as well
2022-08-16 09:31:13 +02:00
409a3977db Merge branch 'developer' of github.com:slsdetectorgroup/slsDetectorPackage into developer 2022-08-15 12:58:18 +02:00
81fbc45488 moench04CtbZmq10GbData.h corrected 2022-08-15 12:57:55 +02:00
9980a419f3 G2minor (#512)
* minor refactoring from old code

* binaries in
2022-08-11 09:17:35 +02:00
e0207cfac1 Udpsocket (#507)
* udp socket refactor from reuss: closing the socket before throwing on a bind error in the constructor; closing the socket in the destructor and not at shutdown, to allow another thread to read (getting a -1) so the object can still be accessed

* check for size of packet for every detector

* assigning nullptr to the unique_ptr to call its destructor, now that deallocating resources has moved out of shutdown

* minor
2022-08-11 09:14:29 +02:00
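The ownership pattern described above (close before throwing on a bind error in the constructor, close in the destructor rather than in shutdown, reset the owning unique_ptr to run the destructor) is essentially RAII. A sketch with made-up names, not the package's actual socket class:

    #include <cstddef>
    #include <cstdint>
    #include <memory>
    #include <netinet/in.h>
    #include <stdexcept>
    #include <sys/socket.h>
    #include <unistd.h>

    // Hypothetical sketch of the RAII idea behind the udp socket refactor.
    class UdpSocket {
        int fd_{-1};

      public:
        explicit UdpSocket(uint16_t port) {
            fd_ = ::socket(AF_INET, SOCK_DGRAM, 0);
            if (fd_ < 0)
                throw std::runtime_error("could not create udp socket");
            sockaddr_in addr{};
            addr.sin_family = AF_INET;
            addr.sin_addr.s_addr = INADDR_ANY;
            addr.sin_port = htons(port);
            if (::bind(fd_, reinterpret_cast<sockaddr *>(&addr), sizeof(addr)) < 0) {
                ::close(fd_); // close before throwing on a bind error
                throw std::runtime_error("could not bind udp socket");
            }
        }
        ~UdpSocket() {
            if (fd_ >= 0)
                ::close(fd_); // closed at destruction, not at shutdown
        }
        ssize_t receive(char *buf, std::size_t n) {
            return ::recv(fd_, buf, n, 0); // returns -1 once the fd becomes invalid
        }
    };

    // Resetting the owning pointer runs the destructor when resources are released:
    //     auto sock = std::make_unique<UdpSocket>(50001);
    //     sock.reset(); // equivalent of assigning nullptr to the unique_ptr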
07ff28e9e5 help should not create detector object (#508)
* help does not create a detector object; instead nullptr is passed to the Proxy
2022-08-10 12:27:17 +02:00
6bf9dbf6d3 Format (#506)
Formatted package
2022-08-05 15:39:34 +02:00
7173785b29 Hdf5 para (#505)
* incorrect dimensions for virtual hdf5 parameter set

* fix for corner case bug in hdf5 virtual parameter dataset when frames caught is not a multiple of framesperfile

* refactoring for readability and removing error-prone hard-coded numbers
2022-08-05 09:09:00 +02:00
89e293cb5a Rxpointers (#504)
* gui message doesn't show if it has a '>' symbol in the error msg

* minor refactoring for readability (size_t calc fifo size)

* refactoring listening udp socket code: activated and datastream don't create udp sockets anyway, rc<=- should be discarded in any case

* wip

* refactoring memory structure access

* wip: bugfix write header + data to binary

* wip

* wip

* wip

* wip

* wip

* wip

* wip

* wip

* wip

* portRoi no roi effect on progress

* fail at receiver progress, wip

* segfaults for char pointer in struct

* reference to header to get header and data

* refactoring

* use const defined for size of header of fifo

* updated release notes

* remove pointer in callback for sls_receiver_header pointer

* rx same name arguments in constructors

* rx: same name arguments in constructor

* rx: removing the '_' suffix in class data members

* wip

* wip

* wip

* wip

* wip

* wip

* wip

* wip

* wip

* wip

* diff undo for clang later

* wip

* Wip

* const string&
2022-08-05 09:08:18 +02:00
9ac8dab8af Rxclassmembers (#503)
* gui message doesn't show if it has a '>' symbol in the error msg

* minor refactoring for readability (size_t calc fifo size)

* refactoring listening udp socket code: activated and datastream don't create udp sockets anyway, rc<=- should be discarded in any case

* wip

* refactoring memory structure access

* wip: bugfix write header + data to binary

* wip

* wip

* wip

* wip

* wip

* wip

* wip

* wip

* wip

* portRoi no roi effect on progress

* fail at receiver progress, wip

* segfaults for char pointer in struct

* reference to header to get header and data

* refactoring

* use const defined for size of header of fifo

* updated release notes

* remove pointer in callback for sls_receiver_header pointer

* rx same name arguments in constructors

* rx: same name arguments in constructor

* rx: removing the '_' suffix in class data members

* merge fix

* merge fix

* review fix refactoring
2022-07-25 14:02:11 +02:00
d132ad8d02 Callback rxheader (#502)
* gui message doesn't show if it has a '>' symbol in the error msg

* minor refactoring for readability (size_t calc fifo size)

* refactoring listening udp socket code: activated and datastream don't create udp sockets anyway, rc<=- should be discarded in any case

* wip

* refactoring memory structure access

* wip: bugfix write header + data to binary

* wip

* wip

* wip

* wip

* wip

* wip

* wip

* wip

* wip

* portRoi no roi effect on progress

* fail at receiver progress, wip

* segfaults for char pointer in struct

* reference to header to get header and data

* refactoring

* use const defined for size of header of fifo

* updated release notes

* remove pointer in callback for sls_receiver_header pointer

* passing reference header for callback instead of copying it
2022-07-22 16:15:21 +02:00
4117cda79b Rx: refactor memory structure and listener (#496)
* gui message doesn't show if it has a '>' symbol in the error msg

* minor refactoring for readability (size_t calc fifo size)

* refactoring listening udp socket code: activated and datastream don't create udp sockets anyway, rc<=- should be discarded in any case

* wip

* refactoring memory structure access

* wip: bugfix write header + data to binary

* wip

* wip

* wip

* wip

* wip

* wip

* wip

* wip

* wip

* portRoi no roi effect on progress

* fail at receiver progress, wip

* segfaults for char pointer in struct

* reference to header to get header and data

* refactoring

* use const defined for size of header of fifo

* updated release notes

* refactoring from review: fwrite, static_cast
2022-07-22 15:32:41 +02:00
26cbfbdb30 Merge branch 'developer' of github.com:slsdetectorgroup/slsDetectorPackage into developer 2022-07-14 12:19:45 +02:00
3b9df5868e Added JF strixels cluster finder and gain bit for Moench04 2022-07-14 12:18:34 +02:00
8fcec81a67 Pattern 6 levels (#493)
* separating pattern levels from command name: command line done

* separated pattern level from command in examples and default pattern files in servers

* command line and server works

* python: patnloop not verified, wip

* works except for patloop (set, and get does not list properly)

* minor

* fixed tests

* added 3 more levels for ctb and moench

* wip

* minor err msg

* minor

* binaries in

* separating pattern levels from command name: command line done

* separated pattern level from command in examples and default pattern files in servers

* command line and server works

* python: patnloop not verified, wip

* works except for patloop (set, and get does not list properly)

* minor

* fixed tests

* added 3 more levels for ctb and moench

* wip

* minor err msg

* minor

* binaries in

* python working

* import fix

* changed fw version for ctb and moench. binaries in

Co-authored-by: Erik Frojdh <erik.frojdh@gmail.com>
2022-07-14 12:00:07 +02:00
5be50785fb rx noRoi should not create files for port (#499) 2022-07-14 10:09:48 +02:00
3d3e70c718 added defines for python (#498) 2022-07-14 10:07:26 +02:00
f1051246ad Configfree (#492)
* prevent mem size check before Detector constructed (for loading config files), allowing freeing shm for submodules

* back to not allowing free submodules - consistency issues
2022-07-12 11:01:44 +02:00
0fc76a81b0 Zmqhint (#495)
* sorted the arguments of cmk.sh

* sorted arguments

* sorting arguments

* added q for zmq hint
2022-07-12 11:01:00 +02:00
b6db097800 udp_cleardst in fpga (#490)
* set number of destinations in fpga as well when using udp_cleardst

* ensures fpga destinations are also cleared up

* binaries in
2022-07-05 16:09:41 +02:00
c414d92b86 help script to update submodule 2022-07-05 14:52:23 +02:00
0358838dfb fixed ctb gui for namespace fix 2022-07-05 11:29:23 +02:00
704b67c846 Merge branch 'developer' of github.com:slsdetectorgroup/slsDetectorPackage into developer 2022-06-13 16:52:39 +02:00
9ecafbf8d5 some updates to the moench data processing 2022-06-13 16:50:30 +02:00
aa93aed4ed updated client versioning 2022-06-09 13:50:41 +02:00
5490daa0a1 storagecells not updated in rx and allowing it in idle mode (#485) 2022-06-09 13:42:18 +02:00
8ca8185d41 H5 one dataset name (#484)
* rename all datasets in hdf5 files to just 'data'

* removing the global qualifier H5

* update release notes
2022-06-09 12:35:33 +02:00
89aa0760c6 Hdf5fix (#483)
* hdf5 fix for string reference

* fix hdf5 compilation after namespace change
2022-06-09 11:42:32 +02:00
3cee36a3db gui hv moved to settings tab from developer tab, allowed also for eiger (#482) 2022-06-08 17:06:08 +02:00
364e0c6268 Udp srcip auto (#480)
* able to set udp_srcip and udp_srcip2 to auto

* minor

* minor
2022-06-08 12:26:49 +02:00
728cb35c37 rxMemsize (#479)
* changing fifo header size to 16 so that together with the rxr header (112) it makes a power of 2 (128), to make it more efficient and reduce packet loss

* release notes
2022-06-08 12:03:13 +02:00
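The arithmetic behind the first bullet: a 16-byte fifo header in front of the 112-byte receiver header gives 128 bytes, a power of two, so each fifo element is better aligned. A tiny sketch using only the numbers from the commit message:

    #include <cstddef>

    // Numbers from the commit message: 112-byte receiver header + 16-byte fifo header.
    constexpr std::size_t kRxHeaderSize   = 112;
    constexpr std::size_t kFifoHeaderSize = 16; // changed to 16 in this commit
    constexpr std::size_t kTotal          = kRxHeaderSize + kFifoHeaderSize;

    constexpr bool isPowerOfTwo(std::size_t n) { return n != 0 && (n & (n - 1)) == 0; }
    static_assert(isPowerOfTwo(kTotal), "112 + 16 = 128, a power of two");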
1fb90ab98c M3threshold (#475)
* vicin default changed to 800, only setting vthx directly allows to set dac even if counter disabled, else disable counter, setallthresholdenergy if an energy is -1, get module value, fix that reg was replaced by isettings

* vth3 disabled for interpolation enable, interpolation disable sets counter mask to what it was before (updating old mask when setting counter mask except for setting all counters for interpolation enable) and enabling vth3 if counter was enabled

* refactor and test for previous commit

* pump probe only has vth2 enabled, handles both pump probe mode and interpolation mode as well

* wip

* refactored pump probe and interpolation and added to setmodule

* check dacs and trimbits out of range for setmodule (not just threshold)

* binaries in

* m3: pump probe and interpolation mutually exclusive

* minor
2022-06-07 16:55:33 +02:00
25b5b02302 m3 and g2: change in system clock (clkdiv2) should also change the time settings(exptime, period, gate delay etc.), g2: sys freq same irrespective of external or internal timing source (#470) 2022-06-02 11:02:10 +02:00
365ac835eb Get trimbits (#462)
* added the possibility to save settings file for m3 and eiger

* added save trimbits to gui

* update release notes

* python wip

* moved location of trimbits save option in gui

* python works

* updating getModule with all its parameters in the server side

* updating binaries
2022-05-24 11:08:08 +02:00
d61741c28b tests add to namespace sls (#466)
* tests add to namespace sls

* fixed for tests

* finish up namespace sls for tests
2022-05-23 16:17:32 +02:00
8656eeec25 Revert "tests add to namespace sls (#464)" (#465)
This reverts commit f5745fcf18.
2022-05-20 15:43:48 +02:00
f5745fcf18 tests add to namespace sls (#464) 2022-05-20 15:41:37 +02:00
c7ba79644a minor 2022-05-20 12:01:42 +02:00
70aed474dc fedora can't compile as make_unique in Implementation points to the std and not the sls namespace, but it compiles for DetectorImpl (#463) 2022-05-20 12:00:24 +02:00
f3edd4dc56 Not allowing / or spaces in file name prefix (#461) 2022-05-18 12:37:44 +02:00
a4bd2f1be7 updated release 2022-05-18 12:19:32 +02:00
73a45e1b5c nCe made high before programming blackfin to ensure seamless programming (#460) 2022-05-18 12:17:44 +02:00
4259363169 rxr sls namespace (#457)
* rxr src files and classes (detectordata, ZmqSocket) added to sls namespace

* moving defines inside namespace

* moving defines inside namespace, added helpdacs to namespace

* added namespace to gui

* gui also updated

* removed unnecessary sls:: when already in the sls namespace for slsDetectorSoftware, slsReceiverSoftware, slsDetectorGui and slsSupportLib
2022-05-18 11:48:38 +02:00
fcc7f7aef8 Rx roi (#428)
* roi structure expanded to have ymin and ymax

* compile with 'detector roi'

* wip

* wip, rx_roi, rx_clearroi

* wip rxroi

* rxroi wip

* wip rxroi

* merge fix

* wip

* rx_roi works, impl wip, test

* tests in, impl left

* wip, rxroi impl

* wip, rxroi impl

* wip

* setrx_Roi works, getrx_roi, wip

* rx_roi impl done

* wip, rxroi

* wip, getrx_roi rxr ports

* fix ports

* wip

* wip

* fix positions on server side

* wip

* numports wip

* wip

* jungfrau top inner interface row increment

* x, y detpos, wip

* removed eiger row indices flipping in gui (bottom flipping maintained)

* wip

* wip, jungfrau numinterfaces2

* jungfrau virtual works

* eiger, jungfrau, g2 virtual server works

* eiger positions fix, wip

* binaries in

* minor printout

* binaries in

* merge fix

* merge fix

* removing getposition

* setrxroi wip

* set upto port

* get messed, wip

* roi multi to module works, wip

* wip

* roi: don't return -1

* added rxroi metadata in master file

* added rxroifromshm, not yet in detector

* rx roi in gui with box, also for gap pixels (gappixels for jungfrau mess)

* fix for segfault in gui with detaching roi box in gui

* wip

* m3 gui: slave timing modes should be discarded when squashing

* fixed m3 virtual data, and fixed counter aesthetics in gui

* m3 roi works

* wip, g2

* wip

* handling g225um boards, and showing roi for gainplot as well

* update python functions

* fix for 1d and a2d roi written

* fixed actual roi written to file

* no virtual hdf5 when handling rx roi

* test

* minor

* binaries in
2022-05-16 12:35:06 +02:00
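The "roi structure expanded to have ymin and ymax" bullet at the top of this Rx roi (#428) entry presumably turns an x-only range into a full rectangle, roughly as below (field names and the -1 convention are assumptions based on the surrounding bullets):

    // Sketch of the expanded ROI: y limits added next to the x limits.
    struct ROI {
        int xmin{-1};
        int xmax{-1};
        int ymin{-1}; // new
        int ymax{-1}; // new
        // -1 everywhere is assumed to mean "no ROI set" (whole detector)
        bool noRoi() const {
            return xmin == -1 && xmax == -1 && ymin == -1 && ymax == -1;
        }
    };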
9808376207 release notes 2022-05-16 12:33:07 +02:00
be617577c9 always enable gap pixels if they exist for detector in gui (#450) 2022-05-16 12:31:46 +02:00
e30ee445a1 m3 gui: slave timing modes should be discarded when squashing (#449) 2022-05-16 12:29:17 +02:00
8d6b8d66cc size of shm needs to be only sometimes checked when opening shared memory (#443) 2022-05-16 12:27:48 +02:00
88649a00b6 M3vtrim min (#454)
* Remove the vtrim min 600 check when setting threshold energy. vth1-3:200 - 2400, others 0-2800

* compile fix
2022-05-12 11:41:05 +02:00
b122c2fbdf Revert "Remove the vtrim min 600 check when setting threshold energy. vth1-3:200 - 2400, others 0-2800 (#452)" (#453)
This reverts commit 7d574375b4.
2022-05-12 11:38:34 +02:00
7d574375b4 Remove the vtrim min 600 check when setting threshold energy. vth1-3:200 - 2400, others 0-2800 (#452) 2022-05-12 11:06:14 +02:00
466d431081 update release notes 2022-05-10 15:35:44 +02:00
cd4520b051 not validating settings for dacs temporarily (#447) 2022-05-10 15:30:34 +02:00
0129c2c686 M3: master starts twice (non blocking) part 2 (#445)
* slaves and master vectors empty means all positions included: fixing double acquisition in masters

* debug print out
2022-05-10 15:23:39 +02:00
f55bdd6eae M3: master starts twice (non blocking) (#444)
* start acq for master m3 was sent twice (non blocking), removed redundant code, check that there is only one master

* m3 can have more than 1 master (when many master modules used independently)

* fix for single mod m3 or other dets
2022-05-10 14:27:40 +02:00
36a1159f38 updated pybind11 from 2.6.2 to 2.9.2 2022-04-28 16:39:39 +02:00
98e2ddbb74 indent 2022-04-28 16:37:52 +02:00
fa12ab2858 Support external builds of python bindings, gui, ctbgui and moench stuff (#440)
Use an already installed version of the slsDetectorPackage. Assumes that the library has already been built and installed, either in a system-wide location or one pointed to by CMAKE_PREFIX_PATH
2022-04-28 16:35:29 +02:00
afeee5501c Fixpositions (#436)
* fix positions on server side

* wip

* numports wip

* wip

* jungfrau top inner interface row increment

* x, y detpos, wip

* removed eiger row indices flipping in gui (bottom flipping maintained)

* wip

* wip, jungfrau numinterfaces2

* jungfrau virtual works

* eiger, jungfrau, g2 virtual server works

* eiger positions fix, wip

* binaries in

* minor printout

* binaries in

* pointer bug

* comment to define test_mod_geometry define
2022-04-28 16:32:26 +02:00
b7153fe3e0 cmake fix 2022-04-27 11:29:04 +02:00
2db2694660 m3 rxr: inconsistent generaldata default (#435)
* inconsistent copy with generalData and implementation members, especially for m3 (non default rxr generic values), issue caught on second configure with non m3 default values, eg tengiga 0

* removing test
2022-04-22 16:02:10 +02:00
e1642cf37c bugfix cmake zeromq 2022-04-19 20:11:31 +02:00
086d22f1a3 Cleaning up the find zmq (#431) 2022-04-19 17:17:31 +02:00
52882cba20 M3: polarity, interpolation, pump probe (#421)
* wip, adding m3 functions: polarity, interpolation, pumpprobe

* added interpol, polarity, pump probe, analog pulsing, digital pulsing

* tests

* binaries in

* update release

* added python polarity enum

* fixed python and minor readability in mythen3.c

* binaries in

* added all the m3 funcs also in list.c and enabling all counters for enabling interpolation

* binaries in
2022-04-08 15:18:01 +02:00
27c7fd9a97 Merge pull request #423 from slsdetectorgroup/rmoldserver
copy detector server: rm old server
2022-04-08 14:56:50 +02:00
5d16ba7e16 update release 2022-04-08 11:10:15 +02:00
d8c6f9141d Fixed crash on gendoc (#430)
Fixed by checking for help action before using the detector
added test that checks that for all helps this doesn't crash
Disabled Timer tests by default since they take ~2s
2022-04-07 16:20:54 +02:00
e9dc3d8c38 minor changes (#429)
Various small changes to the data processor
2022-04-07 14:39:26 +02:00
62418c1316 sls_receiver_header* in callbacks (#425)
* char* to sls_receiver_header* in receiver data call backs

* uint32_t to size_t in callbacks

* string to const std::string & for callbacks
2022-04-07 10:19:47 +02:00
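Taken together, the three bullets amount to a callback signature change along these lines (only the type changes are from the commit; the surrounding parameter lists are assumptions for illustration):

    #include <cstddef>
    #include <string>

    struct sls_receiver_header; // defined in the package's public headers

    // Before (sketch): untyped header pointer, 32-bit size, std::string by value
    // void dataCallback(char *header, char *data, uint32_t size, void *userArg);
    // void fileCallback(std::string filePath, std::string fileName, ...);

    // After (sketch): typed header pointer, size_t size, const std::string&
    void dataCallback(sls_receiver_header *header, char *data, std::size_t size,
                      void *userArg);
    void fileCallback(const std::string &filePath, const std::string &fileName,
                      std::size_t fileIndex, void *userArg);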
835aa575b0 remaining changes for removing copydetectorserver 2022-04-05 14:55:45 +02:00
b42d65c5e2 merge fix 2022-04-05 14:33:43 +02:00
150d27cf95 removed copydetectorserver 2022-04-05 14:11:04 +02:00
95ed9551c0 Merge pull request #427 from slsdetectorgroup/m3mastertrigger
m3: bug fix: slaves should always have trigger 2022-04-04 17:09:07 +02:00
2022-04-04 17:09:07 +02:00
0f2ec47b5f binaries in 2022-04-04 17:08:38 +02:00
fe895cd782 m3: bug fix: slaves should always have trigger 2022-04-04 17:06:38 +02:00
11bf6a5c58 update release notes 2022-04-04 15:44:34 +02:00
8bce87c082 delete old servers also for copy detector server (via tftp) 2022-04-04 15:41:58 +02:00
61f38bf5a9 clearer error message for unknown detector type when hostname error 2022-04-04 13:10:27 +02:00
cbc7066620 Merge branch 'rmoldserver' of github.com:slsdetectorgroup/slsDetectorPackage into rmoldserver 2022-04-04 12:47:19 +02:00
509ed9101f comments 2022-04-04 12:47:12 +02:00
191cfa0abe binaries in 2022-04-04 12:46:14 +02:00
f712847061 minor 2022-04-04 12:45:19 +02:00
b875a95bd5 binaries in 2022-04-04 12:44:03 +02:00
45f57ebeb7 compile, wip 2022-04-04 12:41:30 +02:00
0309eba3c6 redundant of getting abs path starting with '/' 2022-04-04 12:40:10 +02:00
f0448b3cec binaries in 2022-04-04 12:36:47 +02:00
a18af0b726 fix inittab to minimum, wip 2022-04-04 12:31:26 +02:00
6aa5cb8d3e abs path of abs path, wip 2022-04-04 12:26:02 +02:00
479906a9eb minor 2022-04-04 12:16:24 +02:00
28a503ed5a minor 2022-04-04 12:12:21 +02:00
b3c5a431d0 resolve abs path in root dir, wip 2022-04-04 11:58:36 +02:00
43cde3609a minor 2022-04-04 11:27:34 +02:00
9d2d8fe1d7 resolve for double slashes, wip 2022-04-04 11:17:41 +02:00
1826dd46cb minor 2022-04-04 10:57:27 +02:00
cf6423dbbe delete old servers, wip 2022-04-04 10:45:44 +02:00
8b1851e652 wip, copy server delete old name 2022-04-01 17:52:27 +02:00
5913864cbb Merge pull request #419 from slsdetectorgroup/jsonmaster
Jsonmaster
2022-03-31 15:10:44 +02:00
c2ef6d700e merge fix 2022-03-31 15:09:58 +02:00
76296507ff updated release 2022-03-31 15:09:34 +02:00
03d2158472 Merge branch 'developer' into jsonmaster 2022-03-30 16:49:41 +02:00
bb7b676ca2 Merge pull request #420 from slsdetectorgroup/framesinfilehdf5
frames in file hdf5
2022-03-30 16:49:17 +02:00
e1988bf088 fixes 2022-03-30 16:47:43 +02:00
8ef1a209c9 added release notes 2022-03-30 16:06:04 +02:00
28572af3ab resetting frames in file when creating a new hdf5 file 2022-03-30 15:39:20 +02:00
c57e528447 wip, prettywriter 2022-03-30 10:43:43 +02:00
74e325edb4 removing \n in timestamp for json file 2022-03-30 09:06:09 +02:00
e68499bb09 Merge branch 'developer' into jsonmaster 2022-03-29 16:29:55 +02:00
8ce6868e46 fixed compilation 2022-03-29 16:13:33 +02:00
f5cca7a98f removed binary master file as well 2022-03-29 13:30:06 +02:00
b9aa0f46e4 wip, hdf5 refactored 2022-03-29 13:02:57 +02:00
6cd780ae99 wip 2022-03-29 11:49:30 +02:00
f2be834d55 wip, binary file refactored for stringbuffer, hdf5 mid way, masterattributes to be de (inherited) :) 2022-03-28 17:43:58 +02:00
e55e18d5e9 Refactoring of SharedMemory.h (#418)
Function names match Detector.h
Removed double print due to LOG then throw
file descriptor not kept as a member variable
2022-03-28 16:13:56 +02:00
66900da476 Minor fixes to dacnames 2022-03-28 14:39:31 +02:00
13ec32c79a With corrections 2022-03-28 14:30:45 +02:00
1ff35edb99 Setting dac names for CTB (C++ and Python) (#413)
# Setting DAC names for CTB
* Introduced new shared memory for CTB only
* Prepared for additional functionality 
* Works from C++ and Python

Co-authored-by: Dhanya Thattil <dhanya.thattil@psi.ch>
2022-03-28 14:27:47 +02:00
9a969c1549 Merge branch 'developer' into jsonmaster 2022-03-28 12:32:21 +02:00
039e1fd829 merge fix with developer 2022-03-28 10:26:28 +02:00
fc41d4313f Merge pull request #414 from slsdetectorgroup/specialfile
Specialfile
2022-03-28 10:22:46 +02:00
4bd4364a3a minor. binaries in 2022-03-28 10:21:19 +02:00
4b697dd9db binaries in 2022-03-28 09:47:43 +02:00
6470277e43 use S_ISCHR, not S_ISBLK 2022-03-28 09:47:32 +02:00
2453390cc3 merge fix 2022-03-28 09:11:32 +02:00
fa694dbc4c Merge branch 'developer' of github.com:slsdetectorgroup/slsDetectorPackage into developer 2022-03-25 18:21:12 +01:00
ea1222ac5b Solved problem in photon finder 2022-03-25 18:20:32 +01:00
cbed2e88c6 wip, binary json master for other dets 2022-03-25 15:34:39 +01:00
4cce1dbd7f added missing files 2022-03-25 15:13:08 +01:00
1710177af4 Merge branch 'dacnames' of github.com:slsdetectorgroup/slsDetectorPackage into dacnames 2022-03-25 15:10:47 +01:00
5c79a1a1e8 Moved to class implementation 2022-03-25 15:08:50 +01:00
0f02ffdc9a binary master json done, hdf5 left wip 2022-03-25 13:29:03 +01:00
83d76267f9 Merge pull request #411 from slsdetectorgroup/numudp
num udp interfaces in shm
2022-03-25 12:13:47 +01:00
9ff43efdc5 Merge branch 'developer' into numudp 2022-03-25 10:50:51 +01:00
21c21a423d update release 2022-03-25 08:18:29 +01:00
e479b7d4be updated releasetxt 2022-03-25 08:17:12 +01:00
8f0398681e updated releasetxt 2022-03-25 08:15:57 +01:00
b112cf81c4 fixed daclist setter 2022-03-24 17:46:20 +01:00
589124845a working implementation 2022-03-24 17:40:44 +01:00
90d1d0f8b8 wip, json master 2022-03-24 17:15:23 +01:00
1e564a1b33 binaries in. fixed 2022-03-24 12:33:19 +01:00
5fe10c19a1 binaries in 2022-03-24 09:26:29 +01:00
de5c298d99 Merge pull request #373 from slsdetectorgroup/ghdf5
Gotthard25um: virtual hdf5 image
2022-03-23 12:08:40 +01:00
2a1f6dc544 Merge pull request #371 from slsdetectorgroup/g225gui
Gotthard25um: gui image
2022-03-23 12:06:27 +01:00
fc21a6763d fix and minor removing comments 2022-03-23 12:03:51 +01:00
fd8e1b2ef7 merge fix 2022-03-23 11:42:16 +01:00
0803f1bc1f help in cmdline, fixed get for daclist in python 2022-03-23 11:27:52 +01:00
9995b74217 added setdaclist in cmd line 2022-03-23 10:53:29 +01:00
2823451c9d Hacky implementation 2022-03-22 16:45:02 +01:00
0f4bcf3a9d test if special file when updating kernel (solution: reboot only); --force-delete-normal-file used to force delete the bfin fpga drive if it is a normal file and create the proper device tree 2022-03-22 16:44:12 +01:00
74d55db3f0 Merge pull request #398 from slsdetectorgroup/rxacqIndices
Receiver frame indices and progress
2022-03-22 11:23:07 +01:00
2b35101b17 moved shm numUdpInterfaces initialization up front, moved updating this value in DetectorImpl::setHostname to DetectorImpl::addModule for more readability, renamed getNumberofUdpInterfaces to an updateNumberofUdpInterfaces as the shm was being updated and is used only in setHostname, everywhere else getNumberofUdpInterfaces is replaced by getNumberofUdpInterfacesFromShm 2022-03-22 10:23:22 +01:00
f538b8b10b binaries in 2022-03-21 15:52:19 +01:00
fb012aa9e9 Merge branch 'specialfile' of github.com:slsdetectorgroup/slsDetectorPackage into specialfile 2022-03-21 15:51:39 +01:00
82bad7fec6 special file check fix 2022-03-21 15:46:34 +01:00
717922f380 Merge branch 'developer' into specialfile 2022-03-21 14:27:28 +01:00
3250dda7eb fix warning 2022-03-21 10:56:10 +01:00
dffac3014e Merge branch 'developer' into g225gui 2022-03-18 12:12:44 +01:00
586149f3e7 Merge branch 'developer' into ghdf5 2022-03-18 12:12:25 +01:00
088dd2c9f8 merge fix 2022-03-18 12:12:08 +01:00
89395bd990 merge fix 2022-03-18 12:11:40 +01:00
3144f40068 binaries in after merge 2022-03-18 12:10:59 +01:00
2fe24c108b Merge pull request #409 from slsdetectorgroup/serverhelpsize
Serverhelpsize
2022-03-18 12:09:08 +01:00
bbfe3b278f binaries in 2022-03-18 12:07:47 +01:00
c9fd8ba569 wip 2022-03-18 12:07:15 +01:00
faa4f09a82 wip 2022-03-18 12:03:34 +01:00
1ca2e61a85 wip 2022-03-18 12:02:53 +01:00
7fa51e2a8e check if size exceeds capacity in server command line help 2022-03-18 12:01:16 +01:00
adbd2b853d binaries in 2022-03-17 17:00:28 +01:00
0e6d92118f compile fix 2022-03-17 16:57:08 +01:00
3796182eb1 check if drive is a normal file or special block file 2022-03-17 13:43:55 +01:00
570651a9f8 minor 2022-03-17 13:09:54 +01:00
afbc414afe works for both g25 and normal 2022-03-17 13:08:40 +01:00
ca0aa7144c hdf5 for g25um 2022-03-17 12:53:32 +01:00
3e5b8840b4 wip 2022-03-17 12:21:29 +01:00
f1da831e10 Merge branch 'developer' into ghdf5 2022-03-17 12:18:20 +01:00
06281ccae9 firmware reverses slave channels 2022-03-17 12:11:51 +01:00
509946d964 Merge branch 'developer' into g225gui 2022-03-17 11:52:30 +01:00
95f9da9d70 Merge branch 'developer' into rxacqIndices 2022-03-17 11:30:01 +01:00
4a663e9e50 Merge pull request #408 from slsdetectorgroup/metadataGeometry
Metadata geometry
2022-03-17 11:29:23 +01:00
b02dec8157 Merge branch 'developer' into metadataGeometry 2022-03-17 11:29:15 +01:00
c361b9517c Merge pull request #407 from slsdetectorgroup/stopfnum
Stopfnum
2022-03-17 11:28:32 +01:00
78823760b3 more checks in generate functions 2022-03-17 09:52:39 +01:00
d80006a024 Merge branch 'developer' into rxacqIndices 2022-03-17 09:03:32 +01:00
561777dad6 fixed clang-format version 2022-03-17 09:02:40 +01:00
c1895c4bc8 Merge branch 'developer' into rxacqIndices 2022-03-17 08:48:15 +01:00
7b66466186 Merge branch 'developer' into metadataGeometry 2022-03-17 08:47:34 +01:00
ef1c52ddc1 merge conflict fix 2022-03-17 08:46:04 +01:00
7bd4b9d9d9 Merge pull request #397 from slsdetectorgroup/setmaster
Setmaster
2022-03-17 08:42:06 +01:00
39d3ee2b15 merge fix 2022-03-17 08:41:49 +01:00
401467c700 updated scripts and generated detector.cpp 2022-03-16 18:29:33 +01:00
c9769579e3 Merge pull request #388 from slsdetectorgroup/eiger12
eiger 12 bit mode
2022-03-16 16:50:41 +01:00
7663d4ef53 updated release notes 2022-03-16 16:12:58 +01:00
9c1bc262e5 added geometry to master file 2022-03-16 16:09:50 +01:00
c17914e0a1 split MasterAttributes into cpp 2022-03-16 15:49:06 +01:00
7d91a15834 Merge branch 'setmaster' of github.com:slsdetectorgroup/slsDetectorPackage into setmaster 2022-03-16 14:50:40 +01:00
43c46841c1 minor 2022-03-16 14:44:26 +01:00
fc6c6985e6 binary in 2022-03-16 13:26:28 +01:00
14c63810d6 minor 2022-03-16 13:23:23 +01:00
6f0eebfbb8 fix 2022-03-16 13:18:59 +01:00
c8bed64b91 Merge branch 'setmaster' of github.com:slsdetectorgroup/slsDetectorPackage into setmaster 2022-03-16 13:16:31 +01:00
e1762605e8 fix for eiger server actual detector 2022-03-16 13:16:25 +01:00
62fff64d87 eiger binary in 2022-03-16 12:51:53 +01:00
7a39822813 fixes for set top, masterin api 2022-03-16 12:49:22 +01:00
de9e83fd61 set slave for m3 virtual 2022-03-16 12:07:03 +01:00
80d31bbb10 added tests 2022-03-16 11:35:27 +01:00
45171d82a4 minor 2022-03-16 09:56:03 +01:00
9e050060f3 eiger binary in 2022-03-15 17:18:56 +01:00
ed5a1cdf1c eiger: get nextframenumber for 10g fixed (was connected to 1g registers for get); eiger/jungfrau/ctb/moench: if after a stop the next framenumbers are inconsistent, it will get their max value and set to +1 2022-03-15 17:17:28 +01:00
34588356e8 added top 2022-02-28 17:05:24 +01:00
b6d63a8381 master tests 2022-02-28 16:28:15 +01:00
dd8aebb0ab eiger server binaries 2022-02-28 14:56:40 +01:00
261ac78743 wip 2022-02-28 14:49:02 +01:00
0437bd0584 Wip 2022-02-28 12:28:03 +01:00
46578d1447 wip 2022-02-25 17:50:32 +01:00
5566cfd24f configuring master from command line 2022-02-25 16:03:11 +01:00
5869c25658 wip, top, master command line 2022-02-24 17:06:10 +01:00
0b7c202f98 datastream 10g Receiver (#401)
* datastream not updated when tengiga enabled in receiver
2022-02-24 16:16:48 +01:00
4db34effda fixed tests 2022-02-24 11:57:00 +01:00
5a5d4eadf1 minor 2022-02-24 11:27:29 +01:00
b9016fad12 reverting to normal command parsing for missing packets 2022-02-24 11:26:17 +01:00
a1ee681135 - framescaught and frameindex now returns a vector for each port
- progress looks at activated or enabled ports, so progress does not stagnate
- (eiger) disable datastreaming also for virtual servers only for 10g
- missing packets also takes care of disabled ports
2022-02-24 11:15:03 +01:00
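The first bullet above means per-port results instead of one aggregated number, and the second means progress is computed only over activated/enabled ports so a disabled port cannot make it stagnate. A hypothetical sketch, not the actual Receiver API:

    #include <cstddef>
    #include <cstdint>
    #include <vector>

    // Hypothetical sketch: frames caught / frame index reported per UDP port.
    struct ReceiverProgress {
        std::vector<int64_t> framesCaught; // one entry per port
        std::vector<int64_t> frameIndex;   // one entry per port
    };

    // Progress only counts ports that are activated/enabled, so a disabled
    // port does not make the percentage stagnate.
    double progress(const ReceiverProgress &p, const std::vector<bool> &portEnabled,
                    int64_t framesExpectedPerPort) {
        int64_t caught = 0;
        int64_t expected = 0;
        for (std::size_t i = 0; i < p.framesCaught.size(); ++i) {
            if (!portEnabled[i])
                continue;
            caught += p.framesCaught[i];
            expected += framesExpectedPerPort;
        }
        return expected ? 100.0 * static_cast<double>(caught) /
                              static_cast<double>(expected)
                        : 0.0;
    }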
219318a52e wip, removed extra virtual server binaries for eiger, --ignore-config for command line 2022-02-23 17:31:46 +01:00
89edf58f41 wip, setmaster 2022-02-23 12:26:37 +01:00
ef3df36e55 merge fix 2022-02-23 11:40:50 +01:00
6d2302bcc1 Merge branch 'developer' into eiger12 2022-02-23 10:32:22 +01:00
939fc70284 bug fix in startDetector vector 2022-02-23 10:32:06 +01:00
6695b10354 binaries in 2022-02-23 10:30:05 +01:00
94adba72bf bugfix for 12bit to 16 bit expansion in server 2022-02-23 10:26:37 +01:00
1063e8b929 warnings 2022-02-23 10:09:06 +01:00
c9abeace8f warnings 2022-02-23 10:08:50 +01:00
5bfdbf59a2 warnings 2022-02-23 10:07:00 +01:00
1cd347a54d minor printing 2022-02-23 10:03:23 +01:00
76eb09eb04 binaries in 2022-02-23 09:59:41 +01:00
a936cc26cc bugfix, always 12 dr 2022-02-23 09:58:55 +01:00
92beb3aa2a bug fix from an earlier PR, slaves.begin() 2022-02-23 09:32:25 +01:00
543eb7bb60 merge fixed 2022-02-23 09:23:00 +01:00
2034362eca Merge pull request #396 from slsdetectorgroup/ctbupdate
updatemode
2022-02-23 09:16:00 +01:00
7245db5cc8 binaries in 2022-02-23 09:15:20 +01:00
8c4a4b7182 merge fix 2022-02-23 09:14:43 +01:00
11ad019d47 changing manual list size entry for allowed funcs in server update mode 2022-02-23 09:12:51 +01:00
e29d73251e Merge pull request #391 from slsdetectorgroup/missingsigned
signed num missing packets
2022-02-23 09:01:43 +01:00
c14fb92c16 hostname command failed when connecting to servers in update mode 2022-02-22 16:50:12 +01:00
e4b80703ae wip 2022-02-22 15:27:27 +01:00
2b2533f465 allowing setmaster for eiger 2022-02-22 15:23:04 +01:00
8f632db2a0 get number of missing packets now returns signed so negative numbers mean extra packets 2022-02-22 10:27:22 +01:00
daa536077d merge fix 2022-02-21 15:14:47 +01:00
bf1df92303 Merge pull request #390 from slsdetectorgroup/startmodular
startdetector
2022-02-21 15:14:01 +01:00
5e97bcde7f startdetector (non blocking) is allowed at modular level 2022-02-21 09:42:24 +01:00
aa7dee1011 release notes 2022-02-21 09:01:14 +01:00
54313af2f8 revert cmake 2022-02-18 16:17:34 +01:00
ab302a5160 hdf5 works 2022-02-18 16:16:46 +01:00
7a607c6dd1 refactored 2022-02-18 16:08:04 +01:00
83ff4ab112 hdf5 doesn't work yet, wip 2022-02-18 15:51:54 +01:00
fb631187fa binaries in 2022-02-18 14:29:46 +01:00
8770c9f6fb fix, wip 2022-02-18 14:27:50 +01:00
6e0d7b91bd wip, fix for dr 12 2022-02-18 14:19:21 +01:00
6433704086 fixed virtual server fake data for 12 bit mode 2022-02-18 13:21:21 +01:00
aea0efa1b6 Merge branch 'eiger12' of github.com:slsdetectorgroup/slsDetectorPackage into eiger12 2022-02-18 11:45:11 +01:00
8ffa2c1d65 dr>0 fix in server 2022-02-18 11:45:03 +01:00
47f9ab4027 binaries in 2022-02-18 11:39:42 +01:00
0d521b64b6 fix 2022-02-18 11:38:37 +01:00
0fb6c8b823 updating dr 12 in server, changing signature to get fail for getdynamicrange 2022-02-18 10:32:07 +01:00
d2731c77a3 gui 12 bit mode done 2022-02-17 17:33:16 +01:00
40a9dce7e0 virtual server sends 12 bit mode 2022-02-17 16:49:56 +01:00
2ba89b8a45 wip 2022-02-16 17:40:30 +01:00
4cfb35c176 receiver 12 bit, wip 2022-02-16 16:53:25 +01:00
4107938921 adding 12 bit mode for eiger, WIP 2022-02-16 15:03:25 +01:00
6d794cdf4b Merge pull request #386 from slsdetectorgroup/fwrite0
fwrite0
2022-02-15 15:52:02 +01:00
29cd944c11 file write disabled by default 2022-02-15 15:34:50 +01:00
e5ec218e5f Merge pull request #378 from slsdetectorgroup/localhost
localhostMac
2022-02-15 12:07:40 +01:00
0ac20a3bc8 Merge pull request #384 from slsdetectorgroup/ctbgui
Ctbgui
2022-02-15 11:06:48 +01:00
c38f292613 udp_srcip defaulted to 127.0.0.1 for virtual servers 2022-02-15 10:48:14 +01:00
faa9ecf97c allowing virtual servers to also have mac 0 2022-02-15 10:16:22 +01:00
6e32679def ctbgui compiles 2022-02-14 15:58:36 +01:00
649451f824 removing tiff from cmake 2022-02-14 15:46:37 +01:00
83a65f85ab allowing localhost for virtual server 2022-02-14 10:25:34 +01:00
8eb5c19187 Merge branch 'developer' into ghdf5 2022-02-14 08:26:36 +01:00
a1888bf7c9 Merge branch 'developer' into g225gui 2022-02-11 16:24:36 +01:00
fa929b138e Merge branch 'developer' into localhost 2022-02-11 16:24:11 +01:00
7eb9cb1840 Merge pull request #376 from slsdetectorgroup/datastreamFix
eiger datastream fix for 1g
2022-02-11 16:23:09 +01:00
7e5e9faf1c Merge branch 'developer' into datastreamFix 2022-02-11 16:22:55 +01:00
9a9a8ae836 Merge pull request #379 from slsdetectorgroup/m3gaincaps
M3gaincaps
2022-02-11 16:22:13 +01:00
f2cca765be wip 2022-02-10 16:59:25 +01:00
e8ededc1d1 fixes 2022-02-09 11:56:57 +01:00
dc1fbb8ce4 updated release notes 2022-02-09 11:54:29 +01:00
2cf539c16e reg for m3 is reserved only for gaincaps and not settings. Fixed in set threshold and setall threshold 2022-02-09 11:53:18 +01:00
bcca99e38c clearer exceptions to help user fix the issue of localhost not getting mac address 2022-02-09 11:30:34 +01:00
abfa627246 binary in 2022-02-07 17:13:45 +01:00
f9a88b0f79 datastream is only for 10g for eiger atm (as mentioned in comments) and is handled accordingly in the receiver. This is the better solution in case it was disabled in 10g, since it is not possible to set enable in 1g mode, which is what is given to the receiver 2022-02-07 17:10:31 +01:00
5b4cc53f8c Merge branch 'developer' into ghdf5 2022-02-07 15:11:01 +01:00
251f07a9ae Merge branch 'developer' into g225gui 2022-02-07 15:10:42 +01:00
75f98b27a3 Merge pull request #375 from slsdetectorgroup/gettid
gettid
2022-02-07 15:09:07 +01:00
c9fbd7afdf fix for gettid() only after glibc 2.30 2022-02-07 15:05:57 +01:00
f228fde6f7 including stride and block for selecting hyperslab 2022-02-07 14:01:19 +01:00
20f3fb19af g25 option passed to hdf5 2022-02-04 15:09:18 +01:00
cfe627d348 gotthard25um image reconstruction in gui 2022-02-04 14:47:52 +01:00
753387c34c gotthard type can only have max 2 modules 2022-02-04 13:29:42 +01:00
83e0599a37 Merge pull request #362 from slsdetectorgroup/eiger_datastream
Eiger datastream
2022-02-04 11:52:34 +01:00
b8de1955e3 binaries in 2022-02-04 10:50:44 +01:00
ac5d60155d swapping left and right for transmission delay to correspond to left and right of top 2022-02-04 10:49:28 +01:00
7535decd7f merge fix 2022-02-04 10:42:48 +01:00
8f2bacfd53 Merge branch 'arping' into eiger_datastream 2022-02-04 10:41:58 +01:00
771b1e7877 rx_arping for 10g mode (#359)
* test for rx_arping

* arping ip and interface from client interface

* apring thread added to thread ids

* clean code for thread for arping

* removing the assumption that udpip1 will be updated along with udpip2

* review, replacing syscall(sys_gettid) with gettid()
2022-02-04 10:12:57 +01:00
e8cf366616 review, replacing syscall(sys_gettid) with gettid() 2022-02-04 09:51:49 +01:00
3350e3997e fixes from review 2022-02-04 08:19:46 +01:00
dae77a50e6 fixed moench copy for conda 2022-02-03 18:42:06 +01:00
bb5782eb92 Merge branch 'arping' into eiger_datastream 2022-02-03 15:26:15 +01:00
26faaa307b Merge branch 'developer' into arping 2022-02-03 15:26:00 +01:00
510d8717b5 interpolate vtrim for eiger 2022-02-03 15:21:18 +01:00
bc4cf95d0e merge fix 2022-02-03 13:03:40 +01:00
97417737b6 removing the assumption that udpip1 will be updated along with udpip2 2022-02-03 12:53:07 +01:00
47c6954044 clean code for thread for arping 2022-02-03 12:14:29 +01:00
7af5d991d9 using ThreadObject, waiting for 1 sec 2022-02-03 10:59:52 +01:00
cace18e535 merge fix 2022-02-03 09:43:06 +01:00
778fe17f2b udpated release notes 2022-02-03 09:39:08 +01:00
6aa8eff6ea Merge pull request #368 from slsdetectorgroup/m3vthreshold
fix for m3 crash
2022-02-03 09:35:56 +01:00
d92d696c2b fix for m3 crash: dac_names for vthreshold 2022-02-03 09:22:04 +01:00
abdc755dc2 fix for m3 crash 2022-02-03 09:18:47 +01:00
2c842afbf4 removed M3 ops and fixed comment 2022-02-02 19:27:44 +01:00
6c662b1370 bitwise operators for gain caps 2022-02-02 18:35:44 +01:00
1a1533cad8 binaries in 2022-02-02 16:35:41 +01:00
59085f7dc3 allow to get datastream and enable data stream for 1g 2022-02-02 16:32:35 +01:00
8f30394f63 datastream enabled allowed for 1g mode 2022-02-02 16:18:19 +01:00
f49e45ca6c Merge branch 'eiger_datastream' of github.com:slsdetectorgroup/slsDetectorPackage into eiger_datastream 2022-02-02 16:16:21 +01:00
ef1e41fc12 typo fix 2022-02-02 16:16:14 +01:00
d0f761b2ad binaries in 2022-02-02 15:30:32 +01:00
5825428779 receiver summary printout clearer when it is deactivated, and exception when trying to disable data stream in 10g mode 2022-02-02 15:28:12 +01:00
0ed7d1e9b1 receiver summary printout clearer when it is deactivated, and exception when trying to disable data stream in 10g mode 2022-02-02 15:27:37 +01:00
9168bc3ec9 eiger server: fix for datastream enabling wrong ports for bottom half module 2022-02-02 15:12:35 +01:00
158719e325 eiger and 2 udpinterfaces 2022-02-01 15:38:15 +01:00
2a63548f40 apring thread added to thread ids 2022-02-01 15:28:32 +01:00
bf83c9b3e2 arping ip and interface from client interface 2022-02-01 14:03:48 +01:00
c236cbf17b one thread is enough 2022-02-01 12:23:57 +01:00
6793f5e530 fixed virtual function problem in slsDetectorCalibration 2022-02-01 12:01:53 +01:00
63ebc03df0 Merge pull request #361 from slsdetectorgroup/200percent
fix 200% in acquire
2022-02-01 09:44:53 +01:00
df6b9e192b fixes 200% in acquire: using lastframeindex instead of currentframeindex in listener (+1 to what it expects) 2022-02-01 09:43:34 +01:00
a1b2bed3aa fix mythen3 acq error (bad packet loss) as server in 10g and receiver in 1g settings 2022-02-01 09:18:18 +01:00
ca8a1c046a wip, thread to start arping 2022-01-31 17:12:32 +01:00
a4cd4fd14a test for rx_arping 2022-01-31 12:02:09 +01:00
328279d315 Moenchstuff (#358)
* using standard header in moench03T1Receiver
* restructure slsDetectorData added override to moench03T1ReceiverDataNew
* removed unused function
2022-01-31 11:55:12 +01:00
92c767859f fixed tests for next frame number for ctb and moench 2022-01-31 11:24:48 +01:00
3eafcd69a7 Merge pull request #355 from slsdetectorgroup/rxframeindex
Rxframeindex
2022-01-31 10:44:45 +01:00
aa992840b6 Merge branch 'developer' into rxframeindex 2022-01-31 10:17:25 +01:00
b17a49c06b Merge branch 'developer' into rxframeindex 2022-01-31 10:16:34 +01:00
1f308a5730 Merge pull request #357 from slsdetectorgroup/fnumfixconst
Fnumfixconst
2022-01-31 09:54:29 +01:00
8e98bba9c5 binaries in 2022-01-31 09:52:45 +01:00
963e9ee501 reusing char pointer bug for removing CET from kernel string 2022-01-31 09:51:49 +01:00
8d7a55c2df const changed to fix kernel parsing for CET 2022-01-31 09:22:02 +01:00
5a3caf22d4 Merge pull request #349 from slsdetectorgroup/moenchfnum
Moenchfnum
2022-01-31 08:37:35 +01:00
4cd027507d updated release notes 2022-01-31 08:37:05 +01:00
a771c56f6f fix progress 2022-01-28 16:35:51 +01:00
80c1c6428a fix 2022-01-28 16:16:18 +01:00
b5070f384a Merge branch 'developer' into rxframeindex 2022-01-28 16:09:48 +01:00
bae41e74eb binaries in after merge 2022-01-28 16:09:12 +01:00
76901d4135 Merge pull request #354 from slsdetectorgroup/mythenkernelfix
Mythenkernelfix
2022-01-28 15:36:14 +01:00
5e817d68da release notes 2022-01-28 15:34:43 +01:00
e5b42d411e binaries in 2022-01-28 15:33:56 +01:00
3c0b57eaa1 fix to convert string to time for cet timezone as well 2022-01-28 15:26:12 +01:00
5d4c35ddbe binaries in 2022-01-28 11:53:11 +01:00
6a02264af6 fix for 1g udp interface framenumber being set 2022-01-28 11:52:07 +01:00
6be983ca1c binaries in 2022-01-28 11:48:37 +01:00
ab28615999 minor 2022-01-28 11:47:31 +01:00
8995d3db8d setting next frame number also for udp 1g interface 2022-01-28 11:44:12 +01:00
7b37489cdc missed in last commit 2022-01-28 11:34:01 +01:00
5a69c60205 added nextframenumber for moench, ctb (also for virtual servers) 2022-01-28 11:32:27 +01:00
f6e76145c1 Make a library for writing and reading tiff, added tests (#347)
* removed Makefile for moench and integrated the build in CMake
* broke out tiff reading and writing to its own library
* moved tiff includes to include/sls
* moved tiffio source to src
* removed incorrectly used bps
* cleanup and tests for tiffio
* removed using namespace std from header
* some fixing for moench04
* Program for offline processing renamed

Co-authored-by: Anna Bergamaschi <anna.bergamaschi@psi.ch>
2022-01-27 10:24:02 +01:00
28fe6eadb5 progress also depends on listener index now 2022-01-26 17:35:31 +01:00
0d0429983b Merge branch 'developer' into rxframeindex 2022-01-26 14:42:55 +01:00
379d08dd03 Merge pull request #340 from slsdetectorgroup/eiger2udp
Eiger2udp
2022-01-24 16:40:52 +01:00
bf43b003b6 using xy instead of portGeometry 2022-01-24 15:42:25 +01:00
dbb6accfbe Merge branch 'eiger2udp' into rxframeindex 2022-01-24 12:54:09 +01:00
a2f46aa2dd binaries in 2022-01-24 12:46:16 +01:00
eb715e82cb Merge pull request #344 from slsdetectorgroup/patsetbit
Patsetbit and vref
2022-01-24 12:43:30 +01:00
8a64d055b3 Merge branch 'eiger2udp' of github.com:slsdetectorgroup/slsDetectorPackage into eiger2udp 2022-01-24 11:39:22 +01:00
6c0356ff90 merge with developer 2022-01-24 11:39:13 +01:00
7cc7d979e4 merge with developer 2022-01-24 11:36:46 +01:00
a44284834a binaries in 2022-01-24 11:29:02 +01:00
27e48851e6 vref voltage of ad9257 changed from 1.33V to default 2V (for moench) 2022-01-24 11:28:36 +01:00
c554bbb2d3 formatted slsDetectorCalibration 2022-01-24 11:26:56 +01:00
c45f2a282c updated release 2022-01-24 11:25:07 +01:00
623b1de8a0 added f suffix 2022-01-24 11:19:38 +01:00
9328aadfa3 Merge branch 'developer' into patsetbit 2022-01-24 11:13:32 +01:00
4a15b31b04 updated help in Detector API and python 2022-01-24 11:09:36 +01:00
01e8414162 binarie sin 2022-01-24 11:00:37 +01:00
3b11000532 fixed getsettings (moench) for patsetbit and patmask 2022-01-24 11:00:16 +01:00
f38c1c8714 binaries in 2022-01-24 09:40:24 +01:00
664c992028 exchanging patsetbit and patsetmask functionalities 2022-01-24 09:39:19 +01:00
02174f79d1 changed deprecated uint32 declaration to uint32_t 2022-01-24 09:01:44 +01:00
5f40e32924 rxr: frame number should be forwarded to caught frame number for discard partial frames or discardemptyframe mode, currentframeindex command should point to listener current frame index and not dataprocessors index 2022-01-21 14:46:38 +01:00
ef8de7b2be updated release notes 2022-01-21 12:06:50 +01:00
7c740445dc Merge branch 'developer' into eiger2udp 2022-01-21 11:51:42 +01:00
f95a15c841 update release 2022-01-21 11:51:32 +01:00
193ab75ebe Merge branch 'developer' into eiger2udp 2022-01-20 18:02:19 +01:00
062ab63289 Merge pull request #343 from slsdetectorgroup/ctbslowadc
fix for slow adcs and any other adcs that went to stop server, but ne…
2022-01-20 17:23:55 +01:00
50c1056eb2 minor 2022-01-20 17:23:41 +01:00
14d10d2f8f fix for slow adcs and any other adcs that went to stop server, but needed control server for configuration of the adc 2022-01-20 17:21:37 +01:00
5403656e79 Fixed some stuff for the Zmq process 2022-01-19 14:56:44 +01:00
26aa129004 merged with developer to fix conflicts 2022-01-19 10:40:20 +01:00
945adebe2e binaries in 2022-01-19 08:38:27 +01:00
753cbbd18c gui doesn't need to multiply to get port geometry for number of interfaces; previously it worked because a bool was used instead of an int for numInterfaces in DetectorImpl::readframefromreceiver 2022-01-18 17:10:12 +01:00
bfccc004e8 updated versions 2022-01-18 12:22:36 +01:00
987404a14e binaries in 2022-01-17 16:46:56 +01:00
a41b6f73d4 merge from developer 2022-01-17 11:58:22 +01:00
7871a78c8f updates to work with ctb and moench04 2022-01-14 13:39:40 +01:00
e5be13f064 Merge branch 'eiger2udp' of github.com:slsdetectorgroup/slsDetectorPackage into eiger2udp 2022-01-13 16:39:27 +01:00
c15131f8f6 fixed: eiger bottom port now shows, because the number of udp interfaces for eiger was set by default to get the zmq port and has to be calculated again now 2022-01-13 16:39:17 +01:00
15c887f413 server binary 2022-01-13 15:19:43 +01:00
01d7831abf module id in udp header for virtual servers for debugging, formatting 2022-01-12 12:21:17 +01:00
182e5fdadb missing one line 2022-01-07 12:55:40 +01:00
fd655437ab gui doesn't work, but ports fixed 2022-01-07 12:51:49 +01:00
88869c1746 some more fixes for eiger 2 udp interface 2022-01-07 12:44:32 +01:00
c92d0e5ee2 minor 2022-01-07 08:57:31 +01:00
549fef5680 fixed warnings 2022-01-07 08:54:20 +01:00
79affe1ea4 updated client and rxr, not tested 2022-01-06 18:46:14 +01:00
2aaf59adb3 added libtiff as dependency for Moench 2022-01-06 16:44:19 +01:00
f9eed62a45 Merge pull request #339 from slsdetectorgroup/fixes
Fixes
2022-01-06 15:51:33 +01:00
f818ac46b8 int64_t in receiver missing packets 2022-01-06 15:15:42 +01:00
1e309b67ef server side done 2022-01-06 09:20:29 +01:00
22c820771a update release notes 2022-01-05 15:23:56 +01:00
ae9691e848 removing warnings, hdfmutexlib moved from class member in dataprocessor to just arguments when required in dataprocessor (setupfilewriter and createvirtualfile) 2022-01-05 15:20:06 +01:00
4fce0dcd9c temp workaround, but must fix in cmdproxy the possibility of getting extra packets in rx_missingpackets 2022-01-05 14:23:53 +01:00
77fb8280f1 warning in using abs for unsigned (missing packets) in rxr, but also trying to print to signed in command line (so as not to change api atm) 2022-01-05 12:55:21 +01:00
2dd98c6054 numImages not used in Listener anymore 2022-01-05 11:38:18 +01:00
9101200283 updated project version 2022-01-04 13:22:20 +01:00
6057de2a6d This commit fixes the issue #336
A delay of 100ms has been added between the generation of the stop pulse and the resetCore function call. This should give the detector enough time to read out and stream out the ongoing frame before the internal logic is reset (even after the transmission is delayed with txndelay_frame).
2021-12-14 17:28:56 +01:00
f476266e5e back to developer versions 2021-11-26 10:42:57 +01:00
066706872d updated version and added python 3.10 build 2021-11-26 09:25:41 +01:00
f98c403f06 Merge branch 'main' into developer 2021-11-25 13:03:03 +01:00
5dcc2ab35c fixed server names versioning 2021-11-25 13:01:23 +01:00
d80c5e1c02 fixed versions in servers 2021-11-25 12:59:44 +01:00
7acc201797 fix release version 2021-11-25 12:56:21 +01:00
57a52ba2dc update release version 2021-11-25 12:55:47 +01:00
c011129c43 Merge branch 'main' into 6.0.1.rc1 2021-11-25 12:40:24 +01:00
2ed8b85143 update release notes 2021-11-25 12:30:59 +01:00
5ac2fc33ff release notes 2021-11-25 12:29:45 +01:00
c169e6b896 fixed tests 2021-11-25 12:26:48 +01:00
497d29db39 fix for eiger overwriting of server nchip 2021-11-25 12:21:44 +01:00
2fbf0d6996 to fix test changing order 2021-11-25 12:15:20 +01:00
16246407c5 Merge branch '6.0.1.rc1' of github.com:slsdetectorgroup/slsDetectorPackage into 6.0.1.rc1 2021-11-25 12:09:16 +01:00
4a89bef87b no need to update server nchan/nchip/ndac values from client 2021-11-25 12:09:07 +01:00
a146257b13 testing 2021-11-25 11:30:14 +01:00
340b18ca83 Merge branch '6.0.1.rc1' of github.com:slsdetectorgroup/slsDetectorPackage into 6.0.1.rc1 2021-11-25 11:25:24 +01:00
271f6da92e tests fix 2021-11-25 11:25:18 +01:00
9d20bf25c6 fix tests 2021-11-25 10:51:59 +01:00
6668fef61a fix test 2021-11-25 10:42:13 +01:00
a455a95aab fix tests 2021-11-25 10:31:19 +01:00
7f7a691b25 fix test 2021-11-25 10:19:48 +01:00
7536971b34 binaries in 2021-11-25 10:16:37 +01:00
8ca11ec705 Merge branch '6.0.1.rc1' of github.com:slsdetectorgroup/slsDetectorPackage into 6.0.1.rc1 2021-11-25 10:15:02 +01:00
00d63e48bb fix tests 2021-11-25 10:14:49 +01:00
146da0f20f binaries in 2021-11-25 10:02:14 +01:00
993ba5926e update test fix 2021-11-25 10:00:40 +01:00
44424bcbe3 fix 2021-11-25 09:36:50 +01:00
3570795469 fix test 2021-11-25 09:34:00 +01:00
a9d61526ef flip rows only for hw2.0 for jungfrau 2021-11-25 09:30:04 +01:00
2a5116f49a rename binary 2021-11-25 09:18:23 +01:00
5e0408474d binaries in 2021-11-25 09:17:01 +01:00
e7b11f3eb1 fix tests 2021-11-25 08:54:01 +01:00
c1e374ed51 Merge branch '6.0.1.rc1' of github.com:slsdetectorgroup/slsDetectorPackage into 6.0.1.rc1 2021-11-25 08:42:46 +01:00
eb690437c9 fix tests: get filtercells only for chipv1.1 2021-11-25 08:42:39 +01:00
f076c1cbb7 missing ( 2021-11-24 18:03:26 +01:00
de9854e773 fix message for lto and get/put target 2021-11-24 17:59:22 +01:00
150d8f5fda Merge pull request #335 from slsdetectorgroup/ltofix
Adding LTO to tests and disable them for Debug builds
2021-11-24 17:53:09 +01:00
b0a5a76065 adding LTO to tests 2021-11-24 17:40:14 +01:00
77e610f5a5 fixing tests to run on rhel7 2021-11-24 17:14:20 +01:00
0689c82e98 troubleshooting docs 2021-11-24 17:11:17 +01:00
ce94364c73 binary 2021-11-24 17:00:08 +01:00
1ed10acc01 g2 speed also requires dbit pipeline to be set 2021-11-24 16:59:20 +01:00
2c57d5f72d release update 2021-11-24 14:53:52 +01:00
0d867c91d9 release doc 2021-11-24 14:50:17 +01:00
845920f8cc renaming 2021-11-24 14:49:14 +01:00
eff4ba01b9 jungfrau: flip rows and partial readout only available for hw2.0 2021-11-24 14:44:42 +01:00
bcf0922b8d changed server names 2021-11-24 11:40:20 +01:00
d689c415e4 binaries in 2021-11-24 11:34:35 +01:00
e9caa53af0 minor text fix + macro define in the right place 2021-11-24 11:32:19 +01:00
81eb0217ad fixing the error message 2021-11-24 10:19:35 +01:00
0e23665de5 binaries in 2021-11-24 09:27:53 +01:00
044843c8b7 check not required when writing to fpga flash dir 2021-11-24 09:27:18 +01:00
9d63875802 binaries in 2021-11-23 16:27:29 +01:00
9aea183f5c typo 2021-11-23 16:26:56 +01:00
f39f93b2c8 adding the check for copydetector server and updatemode (also for any kind of updatedetectorserver, programfpga and updatekernel) 2021-11-23 16:25:28 +01:00
ed2e6e4e28 print error 2021-11-23 15:45:01 +01:00
74348afcf6 fix 2021-11-23 15:30:08 +01:00
a101e18d60 fix to ensure updatekernel does not work with Amd blackfin flash and a kernel older than the current one 2021-11-23 15:23:16 +01:00
d9686e0b6a minor 2021-11-22 16:30:33 +01:00
5f38165b07 binaries in 2021-11-22 15:15:00 +01:00
9d859cb4c2 fixed warnings in server for format 2021-11-22 15:14:26 +01:00
7772eb153d using const for getupdatemode 2021-11-22 15:08:58 +01:00
e37725ac12 release notes and some fixes 2021-11-22 14:02:54 +01:00
f3c95148a7 LGTM. updated versioning and server versioning 2021-11-22 13:06:59 +01:00
b9b3055984 updated release notes 2021-11-19 18:52:53 +01:00
464ebe70f1 bug fix: servername interchanged for firmware name 2021-11-19 08:53:07 +01:00
4f76219456 Merge pull request #334 from slsdetectorgroup/4updatemode
4updatemode
2021-11-18 16:42:53 +01:00
18b0e84fbf eiger cant reboot 2021-11-18 15:55:12 +01:00
1c6e33064b minor typo 2021-11-18 15:39:34 +01:00
a74f71be0e binaries in 2021-11-18 15:39:05 +01:00
848884f6cf temp server binary in tmp folder already has full path 2021-11-18 15:33:47 +01:00
4690bd0b19 binaries in 2021-11-17 10:14:35 +01:00
f7286d29fa merge fix 2021-11-17 10:13:44 +01:00
eb666d8b05 Merge pull request #333 from slsdetectorgroup/3kernelupdate
3kernelupdate
2021-11-17 10:10:19 +01:00
148c979727 binaries in 2021-11-17 09:26:56 +01:00
00775b543e fixed warnings 2021-11-17 09:25:36 +01:00
904af4de06 fix to allowing update mode functions in update mode and removing exception about set_position for hostname in update mode 2021-11-16 09:55:29 +01:00
eb69d7cb69 update mode added. need to fix why udpatemode get and set not in allowed functions 2021-11-12 17:18:26 +01:00
0ffd30e147 works virtually for virtual servers 2021-11-12 15:18:42 +01:00
eda66e63a5 allowed functions in update mode 2021-11-11 19:08:05 +01:00
6e276770eb binaries in 2021-11-11 11:05:59 +01:00
4fb19ceaa5 minor 2021-11-11 11:01:19 +01:00
1840ad218a complete path for eiger 2021-11-11 11:00:31 +01:00
93a191f122 complete path for eiger 2021-11-11 10:56:09 +01:00
c532ecc2e8 moved movefile and writefile to common and avoiding need to send different named files for nios 2021-11-11 10:43:17 +01:00
25eecf7039 disable warning to truncate and compile fix 2021-11-11 10:12:08 +01:00
85d350b48b blackfin server is not in memory 2021-11-11 10:06:47 +01:00
ec1ee635d5 works and allowed reboot 2021-11-11 09:38:01 +01:00
18bbce70b1 fwrite bug 2021-11-11 09:36:15 +01:00
fa822634aa copying binary not done properly 2021-11-11 09:30:53 +01:00
93a86324fb typo bug fix 2021-11-11 09:25:28 +01:00
9d21062f5a remove reboot for checks 2021-11-11 09:24:48 +01:00
a099637b7e checksum also for nios 2021-11-11 09:23:24 +01:00
4dfdd6f10b minor 2021-11-11 09:22:15 +01:00
4f6640a6f1 minor 2021-11-11 09:21:21 +01:00
65a2a9eb06 checksum of server binary file 2021-11-11 09:12:14 +01:00
169361d459 blackfin requires a few writes 2021-11-10 18:54:02 +01:00
0144eff60b binaries in 2021-11-10 17:54:34 +01:00
4f5f8408cf more error detail 2021-11-10 17:53:40 +01:00
32d664a77d actually writing the server binary from memory to file before linking, syncing, permissions etc 2021-11-10 17:48:18 +01:00
4a8c365447 typo 2021-11-10 17:25:37 +01:00
4b46091be2 python fix, server copy wrong filename 2021-11-10 16:40:14 +01:00
5190e2ab30 refactoring 2021-11-10 15:56:01 +01:00
adc6cf214a fixed runtime error with module::sendprogram default servername value 2021-11-10 14:54:42 +01:00
fb7daf426f binaries in 2021-11-10 14:04:16 +01:00
0e9c88dfa2 programfpga already does reboot 2021-11-10 11:49:32 +01:00
233d374a4d server works 2021-11-10 11:47:26 +01:00
15aa42d328 wip 2021-11-10 10:58:29 +01:00
14ee2087dc rebootcontroller after updating kernel 2021-11-09 16:19:53 +01:00
1e134276ca typo 2021-11-09 16:06:37 +01:00
d9168803ae nios kernel update takes time, simulating 2021-11-09 16:05:47 +01:00
0090c183bf more print and fclose 2021-11-09 16:00:01 +01:00
f5d62b50ce allowing kernel update for nios 2021-11-09 15:53:31 +01:00
098601e717 need to find a better way to show unrecognized functions 2021-11-09 15:52:37 +01:00
898ee9b7b7 arguments stays for unknown enum 2021-11-09 15:41:59 +01:00
1fd15fadf8 Merge branch '3kernelupdate' of github.com:slsdetectorgroup/slsDetectorPackage into 3kernelupdate 2021-11-09 15:37:20 +01:00
db88f67cda unknown function enum error proper print 2021-11-09 15:37:12 +01:00
0f08ddd454 binaries in 2021-11-09 14:41:00 +01:00
f8e2522a11 minor 2021-11-09 14:40:36 +01:00
5143295711 binaries in 2021-11-09 14:36:24 +01:00
a59537088b fix 2021-11-09 14:35:49 +01:00
dde98fc8b6 fix 2021-11-09 14:33:24 +01:00
c0e3bbbc61 Merge branch '3kernelupdate' of github.com:slsdetectorgroup/slsDetectorPackage into 3kernelupdate 2021-11-09 14:10:24 +01:00
bf778b5336 too much read from kernel flash for checksum validation 2021-11-09 14:10:15 +01:00
99c44b2592 binaries in 2021-11-09 11:48:56 +01:00
6569e4a8bf kernel update fix 2021-11-09 11:46:27 +01:00
3ebfbb123d binaries in 2021-11-09 11:44:38 +01:00
8d2bb3d678 Merge branch '3kernelupdate' of github.com:slsdetectorgroup/slsDetectorPackage into 3kernelupdate 2021-11-09 11:42:35 +01:00
e81e06696a kernel update works, but without flash checksum 2021-11-09 11:42:26 +01:00
99ad1d9228 binaries in 2021-11-09 10:35:42 +01:00
33c86db019 gpio defined checks 2021-11-09 10:35:18 +01:00
f085b4ca1e binaries in 2021-11-09 10:22:44 +01:00
7558c43b8c reverting last change 2021-11-09 10:22:24 +01:00
e332439020 binaries in 2021-11-09 10:20:20 +01:00
ea44151cb1 Merge branch '3kernelupdate' of github.com:slsdetectorgroup/slsDetectorPackage into 3kernelupdate 2021-11-09 10:19:53 +01:00
3167aade45 more print for error in basictests 2021-11-09 10:19:43 +01:00
44709b1384 binaries in 2021-11-09 10:09:19 +01:00
717d68c217 bug fix, not returning 2021-11-09 10:08:57 +01:00
c218d7dc00 bug fix, kernel index 2021-11-09 10:02:47 +01:00
30d38ecae9 print error even if in execute 2021-11-09 09:56:13 +01:00
254b918408 binaries in 2021-11-09 09:51:02 +01:00
1506c70329 bugfix 2021-11-09 09:50:40 +01:00
729441dcc6 Merge branch '3kernelupdate' of github.com:slsdetectorgroup/slsDetectorPackage into 3kernelupdate 2021-11-09 09:49:30 +01:00
90b9b57865 bugfix 2021-11-09 09:49:21 +01:00
29a41c6b19 binaries in 2021-11-09 09:23:04 +01:00
64a25a242b server side fixed 2021-11-08 17:24:51 +01:00
7b4f8c118b client done 2021-11-08 14:26:53 +01:00
54ee4ec653 Merge branch 'developer' into 3kernelupdate 2021-11-08 09:40:07 +01:00
59bcf6a0d0 Merge pull request #331 from slsdetectorgroup/2kernelversion
2kernelversion
2021-11-08 09:39:24 +01:00
6462a7162e wip 2021-11-05 17:01:45 +01:00
e15028e94c Merge branch 'developer' into 3kernelupdate 2021-11-05 12:31:36 +01:00
81e1221e0d Merge branch 'developer' into 2kernelversion 2021-11-05 12:28:37 +01:00
953fc9bb48 Merge pull request #332 from slsdetectorgroup/pyfix
removed c++14 only overload_cast from pybind enum interface
2021-11-05 12:28:10 +01:00
91cf18c5d1 removed comment 2021-11-05 12:03:01 +01:00
642989cab2 removed c++14 only overload_cast from pybind enum interface 2021-11-05 11:56:57 +01:00
d438b53c68 wip 2021-11-04 19:18:10 +01:00
6e49b77b08 updating kernel like program fpga, execute command to print which module failed, unlinking temporary file while programming bug fix 2021-11-03 17:17:24 +01:00
5f94ca30f1 removed deprecated root include 2021-11-03 16:26:59 +01:00
98cf908918 Merge branch 'developer' into 2kernelversion 2021-11-03 14:44:49 +01:00
cb39a59508 fixes for kernelversion 2021-11-03 14:42:14 +01:00
b68ef6cbd0 binaries in 2021-11-03 11:47:14 +01:00
eff64f99f2 added kernel version 2021-11-03 11:46:46 +01:00
2b1028d636 Merge pull request #330 from slsdetectorgroup/programfix
Program firmware for new kernel
2021-11-03 11:43:37 +01:00
1da2761654 bug fix 2021-11-03 10:49:59 +01:00
7c2e64d9fe release notes 2021-11-02 17:21:27 +01:00
340d708b12 updated m3 kernel version 2021-11-02 17:07:55 +01:00
3f517420af updated kernel date for gotthard2, checking kernel code similar for blackfin and nios, need to add date for m3 2021-11-02 16:59:54 +01:00
05c9fcfe19 gpio3 only when new kernel 2021-11-02 14:23:14 +01:00
dcae1b7a2b binaries in 2021-10-29 17:16:22 +02:00
d8570bc9a9 updated date kernel 2021-10-29 17:14:26 +02:00
e3bfdaf38e binaries in 2021-10-29 16:45:18 +02:00
5188e600a2 specific kernel version name 2021-10-29 16:45:05 +02:00
45b3514118 moved verifykernelversion to common 2021-10-29 16:43:48 +02:00
2813cd5ac2 remove CEST as strptime doesn't work on bfin with timezone 2021-10-29 16:28:58 +02:00
2d2287e189 check kernel version before enabling the gpio 3 chipenable pins 2021-10-29 12:25:30 +02:00
c3eff0246a programming problem fixed 2021-10-28 15:51:29 +02:00
c911fe4c85 bash script for cmk.sh in ubuntu 2021-10-28 14:18:58 +02:00
87a515a549 Merge pull request #329 from slsdetectorgroup/rxhostnamenone
Bug: rxhostname none
2021-10-27 14:36:16 +02:00
0b2d294a19 updated release notes 2021-10-27 14:27:34 +02:00
b62a6eff64 updated release notes 2021-10-27 14:27:08 +02:00
dde62f13d5 fixed bug. setting rx_hostname to none should not throw 2021-10-27 11:29:01 +02:00
146b012eff minor changes 2021-10-27 11:14:29 +02:00
95897085ec missed serverBin 2021-10-27 11:02:28 +02:00
e53a71f88f updated to developer versioning 2021-10-27 11:00:01 +02:00
274ec27934 updated licensing info in release notes 2021-10-21 16:44:48 +02:00
1454cc8434 binaries updated 2021-10-21 16:27:20 +02:00
76c86cb5ac binaries in after minor 2021-10-21 16:26:18 +02:00
ec4aca0dd4 minor 2021-10-21 16:25:05 +02:00
9f27478f95 serverBin binaries updated 2021-10-21 16:18:36 +02:00
ec1bdffa1a binaries renamed 2021-10-21 16:13:55 +02:00
258a0f794c binaries in 2021-10-21 16:12:52 +02:00
6caafaea00 binaries in 2021-10-21 15:54:39 +02:00
727e52b9e8 Merge pull request #327 from slsdetectorgroup/jfres
Jfres
2021-10-21 15:37:51 +02:00
fdd3ab2a60 binaries in 2021-10-21 15:36:35 +02:00
a84bd1f881 jungfrau filter resistor: higher value in fpga is smaller resistance, and needs toggling 2021-10-21 15:35:57 +02:00
1f8823a3b7 Merge pull request #326 from slsdetectorgroup/jlowcurrentfix
binaries in
2021-10-21 15:24:57 +02:00
1f4d94b3cc merge conflict fix 2021-10-21 15:24:46 +02:00
effbc6f571 binaries in 2021-10-21 15:22:36 +02:00
8c8aa175a6 Merge pull request #325 from slsdetectorgroup/jlowcurrentfix
Jlowcurrentfix
2021-10-21 15:05:59 +02:00
42b1f9a623 jungfrau filter resistor high bit for higher values change, also no toggling for status 2021-10-21 15:04:53 +02:00
9e23648801 binaries in 2021-10-21 14:56:13 +02:00
6dc4634495 typo 2021-10-21 14:55:58 +02:00
4b7d73a4ee jungfrau normal/low is not toggled like the others in register 2021-10-21 14:54:57 +02:00
0358749b3b updating versions 2021-10-21 13:15:39 +02:00
333a23c7e2 docs, auto_comp_disable->autocompdisable, comp_disable_time->compdisabletime (removing _ in commands) 2021-10-21 13:11:43 +02:00
802bd27e50 python extrastoragecells, documentation 2021-10-21 12:34:49 +02:00
0909eabfaf minor 2021-10-21 12:20:26 +02:00
2f7a0898d6 Merge pull request #324 from slsdetectorgroup/jungfraufix
Jungfraufix
2021-10-21 12:07:36 +02:00
e89dd393e2 binaries in 2021-10-21 12:00:21 +02:00
76dc6db5c0 jungfrau: api changed from set/getFilterCell to set/getNumberOfFilterCells, storagecells command line changed to extrastoragecells, fixed wrong number of arguments parsing message 2021-10-21 11:59:10 +02:00
9b321d2ee1 jungfrau: new default to asic reg for chipv1.1, filtercells name change, wrong number of parameters message change 2021-10-21 11:27:31 +02:00
f7a6160e67 docs 2021-10-21 09:56:51 +02:00
156ce6a2e5 docs 2021-10-21 09:40:12 +02:00
9dc217aaa3 updated calibration settings for jungfrau, default special dac values for high gain 0, temporary fix for firmwarebug (config_V11_status has to be flipped to be read) 2021-10-20 17:20:09 +02:00
ae736cd0e5 docs 2021-10-20 16:51:07 +02:00
c5962f40eb enums added in python docs 2021-10-20 11:38:05 +02:00
aab5418166 eiger copy detector server command should not reboot for eiger (feature does not exist) 2021-10-19 16:21:37 +02:00
f61d14a2f1 binaries in 2021-10-19 14:52:49 +02:00
836e4c51f3 remove license checks 2021-10-19 14:50:15 +02:00
b39c64032d clang format 2021-10-19 14:49:43 +02:00
3726ae3fd1 Merge pull request #323 from slsdetectorgroup/jungfraucalibfix
Jungfraucalibfix
2021-10-19 11:45:43 +02:00
98c2d52200 binaries in 2021-10-19 11:43:33 +02:00
54097ba21c bug fix for inverted select for chipv1.1 2021-10-19 11:40:50 +02:00
b8b7966d79 jungfrau server: reversing bits of chipv1.1 select 2021-10-19 11:04:18 +02:00
a1c9947821 Merge pull request #322 from slsdetectorgroup/copyservererror
print module number and hostname when tftp error
2021-10-19 10:32:43 +02:00
469d4e5c7c print module number and hostname when tftp error 2021-10-19 10:29:25 +02:00
bd0eb22392 Merge pull request #320 from slsdetectorgroup/copyserver
Copyserver
2021-10-19 10:21:26 +02:00
3a543daf55 comments 2021-10-19 10:21:04 +02:00
c061baaaee comment 2021-10-19 10:18:19 +02:00
b9fab9bc1f binaries in 2021-10-19 10:16:57 +02:00
6cf5072293 snprintf and linked server to be respawned, not copied one 2021-10-19 10:16:31 +02:00
8db1dfb2ce binaries in 2021-10-19 10:06:38 +02:00
a54a570a78 merge fix and release update 2021-10-19 10:06:04 +02:00
3cfdc063fc Merge pull request #321 from esrf-bliss/slsdetectorserver-max-udp-destination
slsDetectorServer: fix checks on UDP destination entry range
2021-10-19 10:01:28 +02:00
9b521ade27 help fixed 2021-10-19 09:59:49 +02:00
313216443a slsDetectorServer: fix checks on UDP destination entry range 2021-10-19 09:41:10 +02:00
318a5fd9d5 binaries in 2021-10-19 08:02:09 +02:00
dd2e9ff7f3 Merge branch 'copyserver' of github.com:slsdetectorgroup/slsDetectorPackage into copyserver 2021-10-19 07:59:06 +02:00
27c4d8652e added sync and not executing set detector position in update mode 2021-10-19 07:58:54 +02:00
927f30e55e binaries in 2021-10-19 07:52:25 +02:00
51c2e78a31 added sync 2021-10-19 07:51:39 +02:00
a0004dc775 updated release to include lib versioning info 2021-10-18 17:57:32 +02:00
db4f345b47 binaries in 2021-10-18 17:53:24 +02:00
195d28d091 typo 2021-10-18 17:52:52 +02:00
d7bbcb24c9 fixes for warnings 2021-10-18 17:51:31 +02:00
6b94f266bf execute command used properly 2021-10-18 17:39:16 +02:00
203d6465a1 clang and redoing copy detector server to have a soft link and put that in respawning for blackfin servers 2021-10-18 17:17:56 +02:00
43bbf66e85 Merge branch 'developer' into copyserver 2021-10-18 16:14:12 +02:00
b665ed87b4 Merge pull request #319 from slsdetectorgroup/filtercellhotfix
Filtercellhotfix
2021-10-18 16:11:04 +02:00
29fbef7ced binaries in 2021-10-18 16:10:01 +02:00
f3ca25d104 filter cell logic fixed 2021-10-18 16:08:21 +02:00
e4b141dda5 minor 2021-10-18 15:51:46 +02:00
843a35d2f9 Merge pull request #318 from slsdetectorgroup/selecthotfix
updated naming warning in release.txt
2021-10-18 15:40:41 +02:00
1e2395bd44 updated naming warning in release.txt 2021-10-18 15:37:28 +02:00
ed81ce2877 Merge pull request #317 from slsdetectorgroup/selecthotfix
Selecthotfix
2021-10-18 15:28:17 +02:00
12945916b7 updated binary for current source comments 2021-10-18 15:25:55 +02:00
4367a39b98 fix for current source 64 bit select mask for chipv1.1 2021-10-18 15:22:18 +02:00
519b09fcad permission 2021-10-18 15:01:10 +02:00
84f56ff314 updated permissions 2021-10-18 14:54:25 +02:00
3e70f0cbfb Merge pull request #316 from slsdetectorgroup/licensing
Licensing
2021-10-18 14:45:34 +02:00
96f7bf05c8 changes in gui notice and apache 2.0 changes 2021-10-18 14:45:19 +02:00
7eb05a3637 make files 2021-10-18 11:52:23 +02:00
ca08cd9ec1 updated cmakelists.txt for license 2021-10-18 11:44:47 +02:00
a0ecf056d8 updated apache2 notice 2021-10-18 11:43:11 +02:00
479fa23acb added .sh licenses 2021-10-15 16:02:42 +02:00
fada085f0e added .py licenses 2021-10-15 15:54:58 +02:00
b913c0059a added .c licenses 2021-10-15 15:52:40 +02:00
dac60ad76d added .cpp licenses 2021-10-15 15:47:04 +02:00
4de7bb51ed updated all .h files with license notice and copyright notice 2021-10-14 18:10:56 +02:00
0801957203 updating license script 2021-10-14 18:03:19 +02:00
2d7ffdd603 before adding license notice 2021-10-14 17:51:45 +02:00
576157351e minor 2021-10-14 17:42:40 +02:00
86a9aa9e38 license notice and copyright notice amended 2021-10-14 17:39:37 +02:00
928ed201f6 Merge branch 'developer' into licensing 2021-10-14 17:37:13 +02:00
ae4473d631 some changes 2021-10-14 17:36:47 +02:00
cbe2956ee4 Merge pull request #315 from slsdetectorgroup/libversioning
lib versioning
2021-10-14 17:34:54 +02:00
aff3a6ed20 added folder with licenses used in source code 2021-10-14 16:43:37 +02:00
e6b18f6a95 updated versioning for shared libraries 2021-10-14 15:11:47 +02:00
93550ebed7 fix 2021-10-14 12:48:13 +02:00
8fb4393981 python fix 2021-10-14 12:38:03 +02:00
2bfe0a939d Merge branch 'developer' of github.com:slsdetectorgroup/slsDetectorPackage into developer 2021-10-13 16:29:29 +02:00
cfbc4c699f typo 2021-10-13 16:27:43 +02:00
c0edbc8631 Merge pull request #314 from slsdetectorgroup/eigerhotfix
Eigerhotfix
2021-10-13 15:56:10 +02:00
34bc596ea6 fixed eiger tengiga hotfix 2021-10-13 13:04:34 +02:00
e172156829 typo 2021-10-13 13:02:02 +02:00
1c13dd95a0 eiger server fix: 10genable stop server does not have send_data struct initialized, not configuring mac or setup header for stop server 2021-10-13 12:56:03 +02:00
c836371b7c removed readme for python folder 2021-10-13 11:22:31 +02:00
7426110e8a Warnings (#313) 2021-10-12 11:42:02 +02:00
e84f5bec0b disable Wstringop-truncation for servers 2021-10-11 19:56:39 +02:00
809 changed files with 111366 additions and 37071 deletions


@ -19,6 +19,7 @@ Checks: '*,
-google-readability-braces-around-statements,
-modernize-use-trailing-return-type,
-readability-isolate-declaration,
-readability-implicit-bool-conversion,
-llvmlibc-*'
HeaderFilterRegex: \.h

3
.gitmodules vendored

@ -1,3 +0,0 @@
[submodule "python/pybind11"]
path = libs/pybind11
url = https://github.com/pybind/pybind11.git

229
CMakeLists.txt Executable file → Normal file

@ -1,13 +1,15 @@
# SPDX-License-Identifier: LGPL-3.0-or-other
# Copyright (C) 2021 Contributors to the SLS Detector Package
cmake_minimum_required(VERSION 3.12)
project(slsDetectorPackage)
set(PROJECT_VERSION 5.1.0)
include(CheckIPOSupported)
set(PROJECT_VERSION 6.1.1)
set(CMAKE_CXX_FLAGS_RELEASE "-O3 -DNDEBUG")
cmake_policy(SET CMP0074 NEW)
include(cmake/project_version.cmake)
# Include additional modules that are used unconditionally
include(cmake/SlsAddFlag.cmake)
include(cmake/SlsFindZeroMQ.cmake)
include(GNUInstallDirs)
# If conda build, always set lib dir to 'lib'
@ -21,7 +23,7 @@ string(TOLOWER "${PROJECT_NAME}" PROJECT_NAME_LOWER)
# Set targets export name (used by slsDetectorPackage and dependencies)
set(TARGETS_EXPORT_NAME "${PROJECT_NAME_LOWER}-targets")
#set(namespace "${PROJECT_NAME}::")
set(namespace "sls::")
set(CMAKE_MODULE_PATH "${CMAKE_CURRENT_SOURCE_DIR}/cmake" ${CMAKE_MODULE_PATH})
@ -32,6 +34,8 @@ if (CMAKE_CURRENT_SOURCE_DIR STREQUAL CMAKE_SOURCE_DIR)
set(SLS_MASTER_PROJECT ON)
endif()
option(SLS_USE_HDF5 "HDF5 File format" OFF)
option(SLS_BUILD_SHARED_LIBRARIES "Build shared libaries" ON)
option(SLS_USE_TEXTCLIENT "Text Client" ON)
@ -44,6 +48,7 @@ option(SLS_USE_TESTS "TESTS" OFF)
option(SLS_USE_INTEGRATION_TESTS "Integration Tests" OFF)
option(SLS_USE_SANITIZER "Sanitizers for debugging" OFF)
option(SLS_USE_PYTHON "Python bindings" OFF)
option(SLS_INSTALL_PYTHONEXT "Install the python extension in the install tree under CMAKE_INSTALL_PREFIX/python/" OFF)
option(SLS_USE_CTBGUI "ctb GUI" OFF)
option(SLS_BUILD_DOCS "docs" OFF)
option(SLS_BUILD_EXAMPLES "examples" OFF)
@ -51,7 +56,35 @@ option(SLS_TUNE_LOCAL "tune to local machine" OFF)
option(SLS_DEVEL_HEADERS "install headers for devel" OFF)
option(SLS_USE_MOENCH "compile zmq and post processing for Moench" OFF)
# set(ClangFormat_BIN_NAME clang-format)
#Convenience option to switch off defaults when building Moench binaries only
option(SLS_BUILD_ONLY_MOENCH "compile only Moench" OFF)
if(SLS_BUILD_ONLY_MOENCH)
message(STATUS "Build MOENCH binaries only!")
set(SLS_BUILD_SHARED_LIBRARIES OFF CACHE BOOL "Disabled for MOENCH_ONLY" FORCE)
set(SLS_USE_TEXTCLIENT OFF CACHE BOOL "Disabled for MOENCH_ONLY" FORCE)
set(SLS_USE_DETECTOR OFF CACHE BOOL "Disabled for MOENCH_ONLY" FORCE)
set(SLS_USE_RECEIVER OFF CACHE BOOL "Disabled for MOENCH_ONLY" FORCE)
set(SLS_USE_RECEIVER_BINARIES OFF CACHE BOOL "Disabled for MOENCH_ONLY" FORCE)
set(SLS_USE_MOENCH ON CACHE BOOL "Enable" FORCE)
endif()
option(SLS_EXT_BUILD "external build of part of the project" OFF)
if(SLS_EXT_BUILD)
message(STATUS "External build using already installed libraries")
set(SLS_BUILD_SHARED_LIBRARIES OFF CACHE BOOL "Should already exist" FORCE)
set(SLS_USE_TEXTCLIENT OFF CACHE BOOL "Should already exist" FORCE)
set(SLS_USE_DETECTOR OFF CACHE BOOL "Should already exist" FORCE)
set(SLS_USE_RECEIVER OFF CACHE BOOL "Should already exist" FORCE)
set(SLS_USE_RECEIVER_BINARIES OFF CACHE BOOL "Should already exist" FORCE)
set(SLS_MASTER_PROJECT OFF CACHE BOOL "No master proj in case of extbuild" FORCE)
endif()
#Maybe have an option guarding this?
set(SLS_INTERNAL_RAPIDJSON_DIR ${CMAKE_CURRENT_SOURCE_DIR}/libs/rapidjson)
set(SLS_INTERNAL_QWT_DIR ${CMAKE_CURRENT_SOURCE_DIR}/libs/qwt-6.1.5)
set(ClangFormat_EXCLUDE_PATTERNS "build/"
"libs/"
"slsDetectorCalibration/"
@ -62,11 +95,6 @@ set(ClangFormat_EXCLUDE_PATTERNS "build/"
${CMAKE_BINARY_DIR})
find_package(ClangFormat)
#Enable LTO if available
check_ipo_supported(RESULT SLS_LTO_AVAILABLE)
message(STATUS "SLS_LTO_AVAILABLE:" ${SLS_LTO_AVAILABLE})
set(CMAKE_EXPORT_COMPILE_COMMANDS ON)
if (NOT CMAKE_BUILD_TYPE AND NOT CMAKE_CONFIGURATION_TYPES)
@ -75,70 +103,77 @@ if (NOT CMAKE_BUILD_TYPE AND NOT CMAKE_CONFIGURATION_TYPES)
endif()
#Add two fake libraries to manage options
add_library(slsProjectOptions INTERFACE)
add_library(slsProjectWarnings INTERFACE)
target_compile_features(slsProjectOptions INTERFACE cxx_std_11)
target_compile_options(slsProjectWarnings INTERFACE
-Wall
-Wextra
-Wno-unused-parameter #Needs to be slowly mitigated
# -Wold-style-cast
-Wnon-virtual-dtor
-Woverloaded-virtual
-Wdouble-promotion
-Wformat=2
-Wredundant-decls
# -Wconversion
-Wvla
-Wdouble-promotion
-Werror=return-type
)
#Settings for C code
add_library(slsProjectCSettings INTERFACE)
target_compile_features(slsProjectCSettings INTERFACE c_std_99)
target_compile_options(slsProjectCSettings INTERFACE
-Wall
-Wextra
-Wno-unused-parameter
-Wdouble-promotion
-Wformat=2
-Wredundant-decls
-Wdouble-promotion
-Werror=return-type
)
#Enable LTO if available
include(CheckIPOSupported)
check_ipo_supported(RESULT SLS_LTO_AVAILABLE)
if((CMAKE_BUILD_TYPE STREQUAL "Release") AND SLS_LTO_AVAILABLE)
message(STATUS "Building with link time optimization")
else()
message(STATUS "Building without link time optimization")
endif()
#Testing for minimum version for compilers
if ("${CMAKE_CXX_COMPILER_ID}" STREQUAL "Clang")
if (CMAKE_CXX_COMPILER_VERSION VERSION_LESS 3.2)
message(FATAL_ERROR "Clang version must be at least 3.2!")
endif()
target_compile_options(slsProjectWarnings INTERFACE -Wshadow) #Clag does not warn on constructor
elseif ("${CMAKE_CXX_COMPILER_ID}" STREQUAL "GNU")
if (CMAKE_CXX_COMPILER_VERSION VERSION_LESS 4.8)
message(FATAL_ERROR "GCC version must be at least 4.8!")
endif()
if(SLS_EXT_BUILD)
# Find ourself in case of external build
find_package(slsDetectorPackage ${PROJECT_VERSION} REQUIRED)
endif()
if (CMAKE_CXX_COMPILER_VERSION VERSION_LESS 5)
# slsProjectOptions and slsProjectWarnings are used
# to control options for the libraries
if(NOT TARGET slsProjectOptions)
add_library(slsProjectOptions INTERFACE)
target_compile_features(slsProjectOptions INTERFACE cxx_std_11)
endif()
if (NOT TARGET slsProjectWarnings)
add_library(slsProjectWarnings INTERFACE)
target_compile_options(slsProjectWarnings INTERFACE
-Wall
-Wextra
-Wno-unused-parameter
# -Wold-style-cast
-Wnon-virtual-dtor
-Woverloaded-virtual
-Wdouble-promotion
-Wformat=2
-Wredundant-decls
# -Wconversion
-Wvla
-Wdouble-promotion
-Werror=return-type
)
# Add or disable warnings depending on if the compiler supports them
# The function checks internally and sets HAS_warning-name
sls_enable_cxx_warning("-Wnull-dereference")
sls_enable_cxx_warning("-Wduplicated-cond")
sls_disable_cxx_warning("-Wclass-memaccess")
if (CMAKE_CXX_COMPILER_VERSION VERSION_LESS 5 AND "${CMAKE_CXX_COMPILER_ID}" STREQUAL "GNU")
target_compile_options(slsProjectWarnings INTERFACE
-Wno-missing-field-initializers)
-Wno-missing-field-initializers)
endif()
if (CMAKE_CXX_COMPILER_VERSION VERSION_GREATER 6.0)
target_compile_options(slsProjectWarnings INTERFACE
-Wno-misleading-indentation # mostly in rapidjson remove using clang format
-Wduplicated-cond
-Wnull-dereference )
endif()
if (CMAKE_CXX_COMPILER_VERSION VERSION_GREATER 8.0)
target_compile_options(slsProjectWarnings INTERFACE
-Wno-class-memaccess )
endif()
endif()
if (NOT TARGET slsProjectCSettings)
#Settings for C code
add_library(slsProjectCSettings INTERFACE)
target_compile_options(slsProjectCSettings INTERFACE
-std=gnu99 #fixed
-Wall
-Wextra
-Wno-unused-parameter
-Wdouble-promotion
-Wformat=2
-Wredundant-decls
-Wdouble-promotion
-Werror=return-type
)
sls_disable_c_warning("-Wstringop-truncation")
endif()
@ -154,68 +189,32 @@ if(SLS_TUNE_LOCAL)
endif()
#rapidjson
add_library(rapidjson INTERFACE)
target_include_directories(rapidjson INTERFACE
$<BUILD_INTERFACE:${CMAKE_CURRENT_SOURCE_DIR}/libs/rapidjson>
)
# Install fake the libraries
install(TARGETS slsProjectOptions slsProjectWarnings rapidjson
if(SLS_MASTER_PROJECT)
install(TARGETS slsProjectOptions slsProjectWarnings
EXPORT "${TARGETS_EXPORT_NAME}"
LIBRARY DESTINATION ${CMAKE_INSTALL_LIBDIR}
ARCHIVE DESTINATION ${CMAKE_INSTALL_LIBDIR}
PUBLIC_HEADER DESTINATION ${CMAKE_INSTALL_INCLUDEDIR}
)
endif()
set(CMAKE_POSITION_INDEPENDENT_CODE ON)
set(CMAKE_INSTALL_RPATH $ORIGIN)
# set(CMAKE_BUILD_WITH_INSTALL_RPATH TRUE)
set(CMAKE_BUILD_WITH_INSTALL_RPATH FALSE)
set(ZeroMQ_HINT "" CACHE STRING "Hint where ZeroMQ could be found")
#Adapted from: https://github.com/zeromq/cppzmq/
if (NOT TARGET libzmq)
if(ZeroMQ_HINT)
message(STATUS "Looking for ZeroMQ in: ${ZeroMQ_HINT}")
find_package(ZeroMQ 4
NO_DEFAULT_PATH
HINTS ${ZeroMQ_DIR}
)
else()
find_package(ZeroMQ 4 QUIET)
endif()
custom_find_zmq()
# libzmq autotools install: fallback to pkg-config
if(NOT ZeroMQ_FOUND)
message(STATUS "CMake libzmq package not found, trying again with pkg-config (normal install of zeromq)")
list (APPEND CMAKE_MODULE_PATH ${CMAKE_CURRENT_LIST_DIR}/libzmq-pkg-config)
find_package(ZeroMQ 4 REQUIRED)
endif()
# TODO "REQUIRED" above should already cause a fatal failure if not found, but this doesn't seem to work
if(NOT ZeroMQ_FOUND)
message(FATAL_ERROR "ZeroMQ was not found, neither as a CMake package nor via pkg-config")
endif()
if (ZeroMQ_FOUND AND NOT TARGET libzmq)
message(FATAL_ERROR "ZeroMQ version not supported!")
endif()
endif()
if (SLS_USE_TESTS)
enable_testing()
add_subdirectory(tests)
endif(SLS_USE_TESTS)
# Common functionallity to detector and receiver
add_subdirectory(slsSupportLib)
if(NOT SLS_EXT_BUILD)
add_subdirectory(slsSupportLib)
endif()
if (SLS_USE_DETECTOR OR SLS_USE_TEXTCLIENT)
add_subdirectory(slsDetectorSoftware)
@ -226,6 +225,7 @@ if (SLS_USE_RECEIVER)
endif (SLS_USE_RECEIVER)
if (SLS_USE_GUI)
add_subdirectory(libs/qwt)
add_subdirectory(slsDetectorGui)
endif (SLS_USE_GUI)
@ -239,7 +239,7 @@ endif (SLS_USE_INTEGRATION_TESTS)
if (SLS_USE_PYTHON)
find_package (Python 3.6 COMPONENTS Interpreter Development)
add_subdirectory(libs/pybind11)
add_subdirectory(libs/pybind ${CMAKE_BINARY_DIR}/bin/)
add_subdirectory(python)
endif(SLS_USE_PYTHON)
@ -259,16 +259,13 @@ if(SLS_BUILD_DOCS)
add_subdirectory(docs)
endif(SLS_BUILD_DOCS)
if(SLS_USE_MOENCH)
add_subdirectory(slsDetectorCalibration/tiffio)
add_subdirectory(slsDetectorCalibration/moenchExecutables)
endif(SLS_USE_MOENCH)
if(SLS_MASTER_PROJECT)
# Set install dir CMake packages
set(CMAKE_INSTALL_DIR "share/cmake/${PROJECT_NAME}")
# Set the list of exported targets
set(PROJECT_LIBRARIES slsSupportShared slsDetectorShared slsReceiverShared)
# Generate and install package config file and version
include(cmake/package_config.cmake)
endif()

17
COPYING Normal file

@ -0,0 +1,17 @@
The SLS Detector Package is provided under:
SPDX-License-Identifier: LGPL-3.0-or-later
Being under the terms of the GNU Lesser General Public License version 3 or later,
according with:
LICENSES/LGPL-3.0
Source code under the Apache 2.0 License have the SPDX Identifier and are
according with:
LICENSES/ThirdParty/Apache-2.0
All contributions to the SLS Detector Package are subject to this COPYING file.


@ -1,3 +1,17 @@
Valid-License-Identifier: GPL-3.0
Valid-License-Identifier: GPL-3.0+
SPDX-URL: https://spdx.org/licenses/GPL-3.0-or-later.html
Usage-Guide:
To use this license in source code, put one of the following SPDX
tag/value pairs into a comment according to the placement
guidelines in the licensing rules documentation.
For 'GNU Library General Public License (LGPL) version 3.0 only' use:
SPDX-License-Identifier: GPL-3.0
For 'GNU Library General Public License (LGPL) version 3.0 or any later
version' use:
SPDX-License-Identifier: GPL-3.0-or-later
License-Text:
GNU GENERAL PUBLIC LICENSE
Version 3, 29 June 2007


@ -1,3 +1,17 @@
Valid-License-Identifier: LGPL-3.0
Valid-License-Identifier: LGPL-3.0+
SPDX-URL: https://spdx.org/licenses/LGPL-3.0-or-later.html
Usage-Guide:
To use this license in source code, put one of the following SPDX
tag/value pairs into a comment according to the placement
guidelines in the licensing rules documentation.
For 'GNU Library General Public License (LGPL) version 3.0 only' use:
SPDX-License-Identifier: LGPL-3.0
For 'GNU Library General Public License (LGPL) version 3.0 or any later
version' use:
SPDX-License-Identifier: LGPL-3.0-or-later
License-Text:
GNU LESSER GENERAL PUBLIC LICENSE
Version 3, 29 June 2007


@ -0,0 +1,210 @@
Valid-License-Identifier: Apache-2.0
SPDX-URL: https://spdx.org/licenses/Apache-2.0.html
Usage-Guide:
To use this license in source code, put one of the following SPDX
tag/value pairs into a comment according to the placement
guidelines in the licensing rules documentation.
SPDX-License-Identifier: Apache-2.0
License-Text:
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

383
RELEASE.txt Executable file → Normal file

@ -1,7 +1,7 @@
SLS Detector Package 6.0.0-rc1 released on 08.10.2021 (Release Candidate 1)
===========================================================================
SLS Detector Package Minor Release 7.0.0 released on 25.11.2021
===============================================================
This document describes the differences between v6.0.0-rc1 and v5.1.0.
This document describes the differences between v7.0.0 and v6.x.x
@ -9,8 +9,8 @@ This document describes the differences between v6.0.0-rc1 and v5.1.0.
--------
1. New or Changed Features
2. Resolved Issues
3. Known Issues
4. Firmware Requirements
3. Firmware Requirements
4. Kernel Requirements
5. Download, Documentation & Support
@ -19,235 +19,118 @@ This document describes the differences between v6.0.0-rc1 and v5.1.0.
1. New or Changed Features
==========================
Client
------
1. [Jungfrau] Chip version
Features for chipv1.1 incorporated
Command line: chipversion, API: getchipVersion
gets chip version (1.0 or 1.1)
chipv1.1 requires config_jungfrau.txt on detector server.
2. [Jungfrau] Chip configuration (only chipv1.1)
powering on the chip and changing settings will configure the chip.
Hence, required before acquisition.
3. [Jungfrau] Settings and Gain mode
Settings can be gain0 and highgain0. Gain mode can be dynamicgain,
forceswitchg1, forceswitchg2, fixg1, fixg2, fixg0. fixg0 must be
used with EXTRA caution as you can damage the detector.
Changing settings also changes dac values of 3 dacs () and reconfigures
chip (only v1.1)
4. [Jungfrau] Storage cells (only chipv1.1)
Additional number of storage cells not applicable for chipv1.1.
Storage cell start is only allowed from 0 - 3 for chipv1.1
(0 - 15 for chipv1.0).
5. [Gotthard2][Jungfrau] Filter Resistor
Command line: filterresistor, API: getFilterResistor/ setFilterResistor
Previous Command: filter, setFilter/ getFilter
Set Filter resistor. Increasing values for increasing resistance.
[Jungfrau] only for chipv1.1. Options: [0|1]. Default is 1.
[Gotthard2] Options: [0|1|2|3]. Default is 0.
6. [Jungfrau] Filter cell (only chipv1.1)
Command line: filtercell, API: getFilterCell/ setFilterCell
Set filter cell. Options: [0-12]. Advanced user command.
7. [Jungfrau] Comparator disable time (only chipv1.1)
Command line: comp_disable_time, API: getComparatorDisableTime/
setComparatorDisableTime
One can customize the period to disable comparator.
8. [Eiger][Jungfrau] Flip rows
Command line: fliprows, API: getFlipRows/ setFlipRows
Previous command: flippeddatax, setBottom/ getBottom
[Jungfrau] Flips rows in detector only for HW v2.0.
slsReceiver and slsDetectorGui will not flip them again.
[Eiger] same as before.
9. [Eiger][Jungfrau] Read n rows
Command line: readnrows, API: getReadNRows/ setReadNRows
Previous Command: readnlines, getPartialReadout/ setPartialReadout
[Eiger] same as before
[Jungfrau] Options: 8 - 512, multiples of 8. Default is 512. (See the
client API sketch after this list.)
10. [Gotthard2][Jungfrau] Current source
Command line: currentsource, API: getCurrentSource, setCurrentSource
Enable or disable current source. Default is disabled.
[Gotthard2] Can only enable or disable.
[Jungfrau] Can choose to fix, select source and choose normal or low
current. Normal/ low only for chipv1.1.
Select source is 0-63 for chipv1.0 and a 64 bit mask for chipv1.1.
11. Default dac
Command line: defaultdac, API: getDefaultDac/ setDefaultDac
change default value of a dac
[Jungfrau][Mythen3] Also change default value of dac for particular
setting.
12. Reset dacs
Command line: resetdacs, API: resetToDefaultDacs
Previous command: defaultdacs
Resets dacs to their default values or hard coded values.
13. [Mythen3] Gain Capacitance
Command line: gaincaps, API: getGainCaps/ setGainCaps
Set various gain capacitances.
14. [Gotthard2] Veto Streaming from chip
Command line: veto, API: getVeto/ setVeto
This command used to mean veto streaming from detector. Now, it means
veto streaming from chip (New feature). Default is disabled.
15. [Gotthard2] Veto streaming from detector
Command line: vetostream, API: getVetoStream, setVetoStream
Options: None, local link interface, 10GbE, Both
Default: None
10GbE (as before) will enable 2 udp interfaces in receiver.
16. [Gotthard2] Veto algorithm
Command line: vetoalg, API: getVetoAlgorithm/ setVetoAlgorithm
Set veto algorithm for each interface.
Options: hits, raw
17. [Eiger][Gotthard2][Mythen3] Module ID
Command line: moduleid, API: getModuleId
Previous command (Eiger only): serialnumber, getSerialNumber
Unique id read from txt file on detector and streamed out in udp header.
18. [Gotthard2]
Command line: dbitpipeline, API: getDBITPipeline/ setDBITPipeline
Set pipeline to latch digital bits. Options: 0-7
19. [Eiger][Jungfrau] Round Robin commands
Command line, udp_dstlist, API: getDestinationUDPList/
setDestinationUDPList
Round robin commands at the moment do not configure the receiver.
Set multiple udp destinations in the detector to stream udp data packets
to. Up to 32 destinations. Refer to the documentation for details.
Command line, udp_numdst, API: getNumberofUDPDestinations
[Jungfrau] Command line, udp_firstdst, API: getFirstUDPDestination/
setFirstUDPDestination
20. Command Line Parsing
Parsing of detector index and module index has been modified to
integrate round robin index.
[detector index]-[module index]:[round robin index] [command]
It is backwards compatible.
For ease, one can also execute
sls_detector_put [module index] [command]
21. Clear Udp Destination
Command line, udp_cleardst, API: clearUDPDestinations
This is useful when changing receivers for a detector or for round robin
system.
22. Shared Memory Naming
Shared memory name has been changed to reflect a more appropriate naming
scheme.
23. [Eiger][Mythen3] Blocking trigger
Command line: blockingtrigger, API: sendSoftwareTrigger
Sends software trigger signal to detector and blocks till frames are
sent out for that trigger.
24. [Eiger] Data stream enable for ports
Command line: datastream, API: getDataStream/ setDataStream
Enable or disable each port. Default: enabled
25. Changing TCP ports
This will only affect shared memory and will not try to change the
current tcp port of the control/stop server in detector.
26. [Eiger][Jungfrau][Gotthard2] Speed
Command line: readoutspeed, readoutspeedlist, API: getReadoutSpeed/ setReadoutSpeed/
getReadoutSpeedList
Previous command: speed, setSpeed/ getSpeed
[Eiger][Jungfrau] same as before.
[Gotthard2] New command to set readout speed. Options: 108, 144 (in MHz)
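As a rough illustration of the renamed client API calls listed above, a minimal
C++ sketch follows. It is not an excerpt from the package: it assumes the public
sls::Detector class from the sls/Detector.h header and single-value setter
overloads as named in items 8, 9 and 17.

    #include "sls/Detector.h"
    #include <iostream>

    int main() {
        sls::Detector det;        // attach to the existing shared memory
        det.setReadNRows(256);    // readnrows, previously setPartialReadout
        det.setFlipRows(true);    // fliprows, previously setBottom
        // moduleid: unique id streamed out in the udp header, one per module
        for (const auto &id : det.getModuleId())
            std::cout << id << '\n';
        return 0;
    }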
Detector servers
----------------
1. [Gotthard2] Bad Channels moved to a new register, default settings
including clock frequency changed
2. [Gotthard2] Updated config file in detector server
Virtual servers
----------------
1. Artificial pixel values increasing by every packet, instead of every pixel.
2. All possible features updated.
Receiver
--------
1. Frames caught in metadata
Frames caught by the master receiver are added to the master file metadata.
Hdf5 and Binary version numbers changed to 6.3
2. Removed Padding option for Deactivated half modules.
3. Changing Receiver TCP ports
This will only affect shared memory and will not try to change the
current tcp port of the receiver.
Gui
----
1. [Mythen3] counters added in settings tab
- Fixed minor warnings (will fix commandline print of excess packets for missing packets)
- ctb slow adcs and any other adcs (other than temp) go to the control server
- number of udp interfaces is 2 for Eiger (CHANGE IN API??)
- added module id for virtual servers into the udp header
- refactoring (rxr)
- fixed patsetbit and patsetmask for moench
- changed default vref of adc9257 to 2V for moench (from 1.33V)
- moench and ctb - can set the starting frame number of next acquisition
- mythen server kernel check incompatible (cet timezone)
- rx_arping
- rx_threadsids max is now 9 (breaking api)
- fixed datastream disabling for eiger. Its only available in 10g mode.
- m3 server crash (vthreshold dac names were not provided)
- allow vtrim to be interpolated for Eiger settings
- m3 setThresholdEnergy and setAllThresholdEnergy were overwriting gaincaps with the settings enum
- can set localhost with virtual server with minimum configuration: (hostname localhost, rx_hostname localhost, udp_dstip auto)
- increases the progress according to listened index. (not processed index)
- current frame index points to listened frame index (not processed index)
- when in discard partial frames or empty mode, the frame number doesn't increase by 1, it increases to that number (so it's faster)
- file write disabled by default
- eiger 12 bit mode
- start non blocking acquisition at modular level
- connect master commands to api (allow set master for eiger)
- --ignore-config command line
- command line argument 'master' mainly for virtual servers (also master/top for real eiger), only one virtual server for eiger, use command lines for master/top
- stop servers also check for errors at startup (in case it was running with an older version)
- hostname cmd failed when connecting to servers in update mode (ctb, moench, jungfrau, eiger)
- missingpackets signed (negative => extra packets)
- framescaught and frameindex now returns a vector for each port
- progress looks at activated or enabled ports, so progress does not stagnate
- (eiger) disable datastreaming also for virtual servers only for 10g
- missing packets also takes care of disabled ports
- added geometry to metadata
- 10g eiger nextframenumber get fixed.
- stop, able to set nextframenumber to a consistent (max + 1) for all modules if different (eiger/ctb/jungfrau/moench)
- ctb: can set names for all the dacs
- fpga/kernel programming, checks if drive is a special file and not a normal file
- gotthard 25 um image reconstructed in gui and virtual hdf5 (firmware updated for slave to reverse channels)
- master binary file in json format now
- fixed bug introduced in 6.0.0: hdf5 files created 1 file per frame after the initial file which had maxframesperfile
- rx_roi
- m3 polarity, interpolation (enables all counters when enabled), pump probe, analog pulsing, digital pulsing
- updatedetectorserver - removes the old server binary that the current link points to, for blackfin
- removing copydetectorserver using tftp
- registerCallBackRawDataReady and registerCallBackRawDataModifyReady now give a sls_receiver_header* instead of a char*, and the data size as size_t instead of uint32_t (see the callback sketch at the end of these notes)
- registerCallBackStartAcquisition gave incorrect imagesize (+120 bytes). corrected.
- registerCallBackStartAcquisition parameter is a const string reference
- m3 (running config a second time with tengiga 0, dr != 32, counters != 0x7) calculated incorrect expected image size
- fixed row column indexing (mainly for multi module Jungfrau 2 interfaces )
- eiger gui row indices not flipped anymore (fix in config)
- m3 (settings dac check disabled temporarily?)
- m3 virtual server sends the right packets now
- gap pixels in gui enabled by default
- rxr src files and classes (detectordata, ZmqSocket, helpDacs) added to sls namespace, and macros (namely from logger (logINFO etc)), slsDetectorGui (make_unique in implementation requires sls namespace (points to std otherwise) but not detectorImpl.cpp)
- blackfin programming made seamless (nCE fixed which helps)
- save settings file for m3 and eiger
- m3 threshold changes
- g2 and m3 clkdiv 2 (system clock) change should affect time settings (g2: exptime, period, delayaftertrigger, burstperiod, m3: exptime, gatedelay, gateperiod, period, delayaftertrigger)
- g2 system frequency is the same irrespective of timing source
- (apparently) rxr doesn't get stuck anymore from 6.1.1
- rxr mem size changed (fifo header size from 8 to 16) due to sls rxr header = 112; 112 + 16 = 128 (reduces packet loss especially for g2)
- udp_srcip and udp_srcip2: can be set to auto (for virtual or 1g data networks)
- set dataset name for all hdf5 files to "data" only
- number of storage cells was not updated in the receiver. Done. Also allowed it to be modified in running status
- refactored memory structure in receiver and listener code (maybe resolves stuck issue, need to check)
- callback modified to have rx header and not rx header pointer
- adapted for g2 hdi v2.0. able to set master from server command line, server config file, and client.
- rx udp socket refactored (maybe resolves getting stuck?); removed check for eiger header and instead checks for malformed packets for every detector
- jungfrau sw trigger, blocking trigger
- help should not create a new object
- jungfrau master
- g2 parallel command
- jungfrau sync
- m3 bad channels (badchannel file also for g2 extended to include commas and colons, remove duplicates)
- m3 fix for gain caps to invert where needed when loading from trimbit file (fix for feature might have been added only in developer branch)
- pat loop and wait address default
- ctb and moench fw fixed (to work with pattern command, address length)
- setting rx_hostname (or udp_dstip with rx_hostname not none) will always set udp_dstmac. Solves the problem of changing udp_dstip while udp_dstmac stays the same
- jungfrau reset core and usleep removed (fix for 6.1.1 is now fixed in firmware)
- m3 clock update, m3 clk 4 and 5 cannot be set
- g2 change clkdivs 2 3 4 to defaults for burst and cw mode.
- ctb and moench: allowing 1g non blocking acquire to send data
- m3 and g2 rr
- m3 and g2 temp
- gain plot zooming fixed (disabled, acc. to main plot)
- ctb, moench, jungfrau (pll reset at start fixed, before no defines)
- pybind built into package, no need to update submodule when previous release had different pybind version
- adcvpp moved from dac.. and api added (ctb, moench)
- qt4->qt5
- in built qt5 6.1.5 because rhel7 is not upto date with qt5, removed findqwt.cmake
- made a fix in qwt lib (qwt_plot_layout.h) to work with 5.15 and lower versions
- qt5 forms fixed, qt4 many hard coding forms switched to forms including qtabwidget, scrolls etc, fonts moved to forms
- docking option enabled by default, removed option to disable docking feature from "Mode"
- added qVersionResolve utility functions to handle compatibility before and after qt5.12
- qtplots (ian's code) takes in gain mode enable to set some settings within the class, with proper gain plot ticks
- ensure gain plots have no zooming of z axis in 2d and y axis in 1d
- fixed some error messages in server side that were empty for fail in funcs (mostly minor as if this error, major issues)
- eiger: removed feb reset in stop acquisition, as it caused the processing bit to randomly not go high (leading to an infinite loop waiting for it to go high). This is anyway done at prepare acquisition and set trimbits.
- left AND right registers monitored for processing bit done
- febProcessinginprogress returns STATUS_IDLE and not IDLE
- In feb stop acquisition, if processing bit is running forever, checks for 1 s, then if acq done bit is high, returns ok, else throws
- feb stop acquisition returns 1 on success and the function calling it compares properly instead of against STATUS_IDLE (no effect, but incorrect logic)
- chipsignals to trimquad should only monitor right fpga (not both as it will throw)
- fixed error messages of readregister inconsistent values
- setmodule and read frame were returning fail without setting error messages (leading to a broken tcp connection due to no error message)
- gui nios temperature added
- detector header change (bunchid, reserved, debug, roundRnumber) ->detSpec1 - 4
- ctb and moench (allowing all clkdivs (totaldiv was a float instead of int))
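As an illustration of the minimum localhost configuration mentioned in this list, a sketch of a config file, assuming a virtual detector server and an slsReceiver are already running on the same machine (any non-default ports are omitted here):

hostname localhost
rx_hostname localhost
udp_dstip auto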
2. Resolved Issues
==================
Detector Servers
----------------
1. [Gotthard2] Tolerance in time parameters.
E.g. 220 ns was being set to 215 ns, instead of 222 ns.
2. [Jungfrau] Stopping in trigger mode and then switching to auto timing mode
blocks data streaming from detector. Workaround fix made in
detector server to reset core until next firmware release.
3. [Jungfrau][CTB][Moench][Gotthard][Gotthard2][Mythen3] Firmware Programming
Firmware programming incorporates more validations such as checksum of
program. Always ensure client and server are of same release before
programming firmware.
4. [Eiger] Stop sends last frame
Stop acquisition will now also send out all complete frames in fifo.
5. [Eiger] Bottom not rotated in quad mode. Fixed.
6. [Mythen3] counter mask effect on vthreshold
Setting the counter mask changes the vth dac values (i.e. disabling a counter
sets its vth dac to 2800). vthreshold only changes the dacs of enabled
counters, setting an individual vth overwrites the dac even if its counter is
disabled, and when counters are re-enabled the set values are remembered.
See the example after this list.
7. [Eiger] fast quad fix for loading trimbits
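For illustration of item 6, a hedged sketch (the dac value is a placeholder and the vth1/vth2/vth3 to counter 0/1/2 mapping is an assumption, not taken from this note):

sls_detector_put counters 0 2      # disable counter 1: its vth dac goes to 2800
sls_detector_put vthreshold 800    # applied only to the enabled counters 0 and 2
sls_detector_put counters 0 1 2    # re-enable counter 1: previously set values are remembered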
Receiver
--------
1. Disabled port or deactivated (half) modules will not create files.
- better control of what is built (PR)?
- cmake package has hardcoded path to zeromq library
- Reading back sub-microsecond exposure times from the Python API.
3. Firmware Requirements
========================
@ -268,11 +151,11 @@ This document describes the differences between v6.0.0-rc1 and v5.1.0.
Mythen3
=======
Compatible version : 10.09.2021 (development)
Compatible version : 10.09.2021 (v1.1)
Gotthard2
=========
Compatible version : 27.05.2021 (v1.0)
Compatible version : 27.05.2021 (v0.1)
Moench
======
@ -284,7 +167,6 @@ This document describes the differences between v6.0.0-rc1 and v5.1.0.
Detector Upgrade
================
The following can be upgraded remotely:
Eiger via bit files
Jungfrau via command <.pof>
@ -303,25 +185,34 @@ This document describes the differences between v6.0.0-rc1 and v5.1.0.
4. Kernel Requirements
======================

Blackfin
========
Latest version: Fri Oct 29 00:00:00 2021
Older ones will work, but might have issues with programming firmware via
the package.

Nios
====
Compatible version: Mon May 10 18:00:21 CEST 2021

Kernel Upgrade
==============
Eiger via bit files
Others via command
Commands: updatekernel, kernelversion
Instructions available at
https://slsdetectorgroup.github.io/devdoc/commandline.html
https://slsdetectorgroup.github.io/devdoc/detector.html
https://slsdetectorgroup.github.io/devdoc/pydetector.html

5. Known Issues
===============

Receiver
--------
1. It does not handle readnrows or partial readout. Only the summary
is adjusted to print in red. However, it will still write complete
images with missing data padded. Roi will be implemented in the future,
which can be complemented with this feature to remove the additional
data in files.
2. Round robin is not implemented on the receiver side, i.e. one cannot
configure more than 1 receiver at a time. This will/might be done in the future.

6. Download, Documentation & Support
====================================
Download


@ -1,118 +0,0 @@
# Qt Widgets for Technical Applications
# available at http://www.http://qwt.sourceforge.net/
#
# The module defines the following variables:
# QWT_FOUND - the system has Qwt
# QWT_INCLUDE_DIR - where to find qwt_plot.h
# QWT_INCLUDE_DIRS - qwt includes
# QWT_LIBRARY - where to find the Qwt library
# QWT_LIBRARIES - aditional libraries
# QWT_MAJOR_VERSION - major version
# QWT_MINOR_VERSION - minor version
# QWT_PATCH_VERSION - patch version
# QWT_VERSION_STRING - version (ex. 5.2.1)
# QWT_ROOT_DIR - root dir (ex. /usr/local)
#=============================================================================
# Copyright 2010-2013, Julien Schueller
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are met:
#
# 1. Redistributions of source code must retain the above copyright notice, this
# list of conditions and the following disclaimer.
# 2. Redistributions in binary form must reproduce the above copyright notice,
# this list of conditions and the following disclaimer in the documentation
# and/or other materials provided with the distribution.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
# WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
# DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR
# ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
# (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
# ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
# SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
# The views and conclusions contained in the software and documentation are those
# of the authors and should not be interpreted as representing official policies,
# either expressed or implied, of the FreeBSD Project.
#=============================================================================
find_path ( QWT_INCLUDE_DIR
NAMES qwt_plot.h
HINTS $ENV{QWTDIR} $ENV{QWTDIR}/src ${QT_INCLUDE_DIR}
PATH_SUFFIXES qwt qwt-qt3 qwt-qt4 qwt-qt5
)
set ( QWT_INCLUDE_DIRS ${QWT_INCLUDE_DIR} )
# version
set ( _VERSION_FILE ${QWT_INCLUDE_DIR}/qwt_global.h )
if ( EXISTS ${_VERSION_FILE} )
file ( STRINGS ${_VERSION_FILE} _VERSION_LINE REGEX "define[ ]+QWT_VERSION_STR" )
if ( _VERSION_LINE )
string ( REGEX REPLACE ".*define[ ]+QWT_VERSION_STR[ ]+\"(.*)\".*" "\\1" QWT_VERSION_STRING "${_VERSION_LINE}" )
string ( REGEX REPLACE "([0-9]+)\\.([0-9]+)\\.([0-9]+)" "\\1" QWT_MAJOR_VERSION "${QWT_VERSION_STRING}" )
string ( REGEX REPLACE "([0-9]+)\\.([0-9]+)\\.([0-9]+)" "\\2" QWT_MINOR_VERSION "${QWT_VERSION_STRING}" )
string ( REGEX REPLACE "([0-9]+)\\.([0-9]+)\\.([0-9]+)" "\\3" QWT_PATCH_VERSION "${QWT_VERSION_STRING}" )
endif ()
endif ()
# check version
set ( _QWT_VERSION_MATCH TRUE )
if ( Qwt_FIND_VERSION AND QWT_VERSION_STRING )
if ( Qwt_FIND_VERSION_EXACT )
if ( NOT Qwt_FIND_VERSION VERSION_EQUAL QWT_VERSION_STRING )
set ( _QWT_VERSION_MATCH FALSE )
endif ()
else ()
if ( QWT_VERSION_STRING VERSION_LESS Qwt_FIND_VERSION )
set ( _QWT_VERSION_MATCH FALSE )
endif ()
endif ()
endif ()
find_library ( QWT_LIBRARY
NAMES qwt qwt-qt3 qwt-qt4 qwt-qt5
HINTS $ENV{QWTDIR}/lib ${QT_LIBRARY_DIR}
)
set ( QWT_LIBRARIES ${QWT_LIBRARY} )
# try to guess root dir from include dir
if ( QWT_INCLUDE_DIR )
string ( REGEX REPLACE "(.*)/include.*" "\\1" QWT_ROOT_DIR ${QWT_INCLUDE_DIR} )
# try to guess root dir from library dir
elseif ( QWT_LIBRARY )
string ( REGEX REPLACE "(.*)/lib[/|32|64].*" "\\1" QWT_ROOT_DIR ${QWT_LIBRARY} )
endif ()
# handle the QUIETLY and REQUIRED arguments
include ( FindPackageHandleStandardArgs )
if ( CMAKE_VERSION LESS 2.8.3 )
find_package_handle_standard_args( Qwt DEFAULT_MSG QWT_LIBRARY QWT_INCLUDE_DIR _QWT_VERSION_MATCH )
else ()
find_package_handle_standard_args( Qwt REQUIRED_VARS QWT_LIBRARY QWT_INCLUDE_DIR _QWT_VERSION_MATCH VERSION_VAR QWT_VERSION_STRING )
endif ()
mark_as_advanced (
QWT_LIBRARY
QWT_LIBRARIES
QWT_INCLUDE_DIR
QWT_INCLUDE_DIRS
QWT_MAJOR_VERSION
QWT_MINOR_VERSION
QWT_PATCH_VERSION
QWT_VERSION_STRING
QWT_ROOT_DIR
)

64
cmake/SlsAddFlag.cmake Normal file

@ -0,0 +1,64 @@
include(CheckCXXCompilerFlag)
include(CheckCCompilerFlag)
function(enable_cxx_warning flag target)
string(REPLACE "-W" "HAS_" flag_name ${flag})
check_cxx_compiler_flag(${flag} ${flag_name})
if(${flag_name})
target_compile_options(${target} INTERFACE ${flag})
message(STATUS "Adding: ${flag} to ${target}")
else()
message(STATUS "Flag: ${flag} not supported")
endif()
endfunction()
function(enable_c_warning flag target)
string(REPLACE "-W" "HAS_" flag_name ${flag})
check_c_compiler_flag(${flag} ${flag_name})
if(${flag_name})
target_compile_options(${target} INTERFACE ${flag})
message(STATUS "Adding: ${flag} to ${target}")
else()
message(STATUS "Flag: ${flag} not supported")
endif()
endfunction()
function(disable_cxx_warning flag target)
string(REPLACE "-W" "HAS_" flag_name ${flag})
check_cxx_compiler_flag(${flag} ${flag_name})
if(${flag_name})
string(REPLACE "-W" "-Wno-" neg_flag ${flag})
message(STATUS "Adding: ${neg_flag} to ${target}")
target_compile_options(${target} INTERFACE ${neg_flag})
else()
message(STATUS "Warning: ${flag} not supported no need to disable")
endif()
endfunction()
function(disable_c_warning flag target)
string(REPLACE "-W" "HAS_" flag_name ${flag})
check_c_compiler_flag(${flag} ${flag_name})
if(${flag_name})
string(REPLACE "-W" "-Wno-" neg_flag ${flag})
message(STATUS "Adding: ${neg_flag} to ${target}")
target_compile_options(${target} INTERFACE ${neg_flag})
else()
message(STATUS "Warning: ${flag} not supported no need to disable")
endif()
endfunction()
function(sls_disable_c_warning flag)
disable_c_warning(${flag} slsProjectCSettings)
endfunction()
function(sls_enable_cxx_warning flag)
enable_cxx_warning(${flag} slsProjectWarnings)
endfunction()
function(sls_disable_cxx_warning flag)
disable_cxx_warning(${flag} slsProjectWarnings)
endfunction()

38
cmake/SlsFindZeroMQ.cmake Normal file

@ -0,0 +1,38 @@
function(custom_find_zmq)
set(ZeroMQ_HINT "" CACHE STRING "Hint where ZeroMQ could be found")
#Adapted from: https://github.com/zeromq/cppzmq/
if (NOT TARGET libzmq)
if(ZeroMQ_HINT)
message(STATUS "Looking for ZeroMQ in: ${ZeroMQ_HINT}")
find_package(ZeroMQ 4
NO_DEFAULT_PATH
HINTS ${ZeroMQ_HINT}
)
else()
find_package(ZeroMQ 4 QUIET)
endif()
# libzmq autotools install: fallback to pkg-config
if(ZeroMQ_FOUND)
message(STATUS "Found libzmq using find_package")
else()
message(STATUS "CMake libzmq package not found, trying again with pkg-config (normal install of zeromq)")
list (APPEND CMAKE_MODULE_PATH ${CMAKE_CURRENT_LIST_DIR}/cmake/libzmq-pkg-config)
find_package(ZeroMQ 4 REQUIRED)
endif()
# TODO "REQUIRED" above should already cause a fatal failure if not found, but this doesn't seem to work
if(NOT ZeroMQ_FOUND)
message(FATAL_ERROR "ZeroMQ was not found, neither as a CMake package nor via pkg-config")
endif()
if (ZeroMQ_FOUND AND NOT TARGET libzmq)
message(FATAL_ERROR "ZeroMQ version not supported!")
endif()
endif()
get_target_property(VAR libzmq IMPORTED_LOCATION)
message(STATUS "Using libzmq: ${VAR}")
endfunction()


@ -0,0 +1,36 @@
#From: https://github.com/zeromq/cppzmq/
set(PKG_CONFIG_USE_CMAKE_PREFIX_PATH ON)
find_package(PkgConfig)
pkg_check_modules(PC_LIBZMQ QUIET libzmq)
set(ZeroMQ_VERSION ${PC_LIBZMQ_VERSION})
find_path(ZeroMQ_INCLUDE_DIR zmq.h
PATHS ${ZeroMQ_DIR}/include
${PC_LIBZMQ_INCLUDE_DIRS}
)
find_library(ZeroMQ_LIBRARY
NAMES zmq
PATHS ${ZeroMQ_DIR}/lib
${PC_LIBZMQ_LIBDIR}
${PC_LIBZMQ_LIBRARY_DIRS}
)
if(ZeroMQ_LIBRARY OR ZeroMQ_STATIC_LIBRARY)
set(ZeroMQ_FOUND ON)
message(STATUS "Found libzmq using PkgConfig")
endif()
set ( ZeroMQ_LIBRARIES ${ZeroMQ_LIBRARY} )
set ( ZeroMQ_INCLUDE_DIRS ${ZeroMQ_INCLUDE_DIR} )
if (NOT TARGET libzmq)
add_library(libzmq UNKNOWN IMPORTED)
set_target_properties(libzmq PROPERTIES
IMPORTED_LOCATION ${ZeroMQ_LIBRARIES}
INTERFACE_INCLUDE_DIRECTORIES ${ZeroMQ_INCLUDE_DIRS})
endif()
include ( FindPackageHandleStandardArgs )
find_package_handle_standard_args ( ZeroMQ DEFAULT_MSG ZeroMQ_LIBRARIES ZeroMQ_INCLUDE_DIRS )


@ -26,7 +26,7 @@ install(FILES
)
install(FILES
"${CMAKE_SOURCE_DIR}/libzmq-pkg-config/FindZeroMQ.cmake"
"${CMAKE_SOURCE_DIR}/cmake/libzmq-pkg-config/FindZeroMQ.cmake"
COMPONENT devel
DESTINATION ${CMAKE_INSTALL_DIR}/libzmq-pkg-config
)

132
cmk.sh

@ -1,4 +1,6 @@
#!/bin/bash
# SPDX-License-Identifier: LGPL-3.0-or-other
# Copyright (C) 2021 Contributors to the SLS Detector Package
CMAKE="cmake3"
BUILDDIR="build"
INSTALLDIR=""
@ -16,6 +18,7 @@ CTBGUI=0
MANUALS=0
MANUALS_ONLY_RST=0
MOENCHZMQ=0
ZMQ_HINT_DIR=""
CLEAN=0
@ -24,25 +27,26 @@ CMAKE_PRE=""
CMAKE_POST=""
usage() { echo -e "
Usage: $0 [-c] [-b] [-p] [e] [t] [r] [g] [s] [u] [i] [m] [n] [-h] [z] [-d <HDF5 directory>] [-l Install directory] [-k <CMake command>] [-j <Number of threads>]
Usage: $0 [-b] [-c] [-d <HDF5 directory>] [e] [g] [-h] [i] [-j <Number of threads>] [-k <CMake command>] [-l <Install directory>] [m] [n] [-p] [-q <Zmq hint directory>] [r] [s] [t] [u] [z]
-[no option]: only make
-c: Clean
-b: Builds/Rebuilds CMake files normal mode
-p: Builds/Rebuilds Python API
-h: Builds/Rebuilds Cmake files with HDF5 package
-c: Clean
-d: HDF5 Custom Directory
-e: Debug mode
-g: Build/Rebuilds only gui
-h: Builds/Rebuilds Cmake files with HDF5 package
-i: Builds tests
-j: Number of threads to compile through
-k: CMake command
-l: Install directory
-t: Build/Rebuilds only text client
-r: Build/Rebuilds only receiver
-g: Build/Rebuilds only gui
-s: Simulator
-u: Chip Test Gui
-j: Number of threads to compile through
-e: Debug mode
-i: Builds tests
-m: Manuals
-n: Manuals without compiling doxygen (only rst)
-p: Builds/Rebuilds Python API
-q: Zmq hint directory
-r: Build/Rebuilds only receiver
-s: Simulator
-t: Build/Rebuilds only text client
-u: Chip Test Gui
-z: Moench zmq processor
Rebuild when you switch to a new build and compile in parallel:
@ -79,69 +83,50 @@ For rebuilding only certain sections
" ; exit 1; }
while getopts ":bpchd:k:l:j:trgeisumnz" opt ; do
while getopts ":bcd:eghij:k:l:mnpq:rstuz" opt ; do
case $opt in
b)
echo "Building of CMake files Required"
REBUILD=1
;;
p)
echo "Compiling Options: Python"
PYTHON=1
REBUILD=1
;;
c)
echo "Clean Required"
CLEAN=1
;;
h)
echo "Building of CMake files with HDF5 option Required"
HDF5=1
REBUILD=1
;;
d)
echo "New HDF5 directory: $OPTARG"
HDF5DIR=$OPTARG
;;
l)
echo "CMake install directory: $OPTARG"
INSTALLDIR="$OPTARG"
e)
echo "Compiling Options: Debug"
DEBUG=1
;;
g)
echo "Compiling Options: GUI"
GUI=1
REBUILD=1
;;
h)
echo "Building of CMake files with HDF5 option Required"
HDF5=1
REBUILD=1
;;
i)
echo "Compiling Options: Tests"
TESTS=1
;;
j)
echo "Number of compiler threads: $OPTARG"
COMPILERTHREADS=$OPTARG
;;
k)
echo "CMake command: $OPTARG"
CMAKE="$OPTARG"
;;
j)
echo "Number of compiler threads: $OPTARG"
COMPILERTHREADS=$OPTARG
l)
echo "CMake install directory: $OPTARG"
INSTALLDIR="$OPTARG"
;;
t)
echo "Compiling Options: Text Client"
TEXTCLIENT=1
REBUILD=1
;;
r)
echo "Compiling Options: Receiver"
RECEIVER=1
REBUILD=1
;;
g)
echo "Compiling Options: GUI"
GUI=1
REBUILD=1
;;
e)
echo "Compiling Options: Debug"
DEBUG=1
;;
i)
echo "Compiling Options: Tests"
TESTS=1
;;
s)
echo "Compiling Options: Simulator"
SIMULATOR=1
;;
m)
echo "Compiling Manuals"
MANUALS=1
@ -150,14 +135,37 @@ while getopts ":bpchd:k:l:j:trgeisumnz" opt ; do
echo "Compiling Manuals (Only RST)"
MANUALS_ONLY_RST=1
;;
z)
echo "Compiling Moench Zmq Processor"
MOENCHZMQ=1
p)
echo "Compiling Options: Python"
PYTHON=1
REBUILD=1
;;
q)
echo "Zmq hint directory: $OPTARG"
ZMQ_HINT_DIR=$OPTARG
;;
r)
echo "Compiling Options: Receiver"
RECEIVER=1
REBUILD=1
;;
s)
echo "Compiling Options: Simulator"
SIMULATOR=1
;;
t)
echo "Compiling Options: Text Client"
TEXTCLIENT=1
REBUILD=1
;;
u)
echo "Compiling Options: Chip Test Gui"
CTBGUI=1
;;
z)
echo "Compiling Moench Zmq Processor"
MOENCHZMQ=1
;;
\?)
echo "Invalid option: -$OPTARG"
usage
@ -252,6 +260,12 @@ if [ $TESTS -eq 1 ]; then
echo "Tests Option enabled"
fi
#zmq hint dir
if [ -n "$ZMQ_HINT_DIR" ]; then
CMAKE_POST+=" -DZeroMQ_HINT="$ZMQ_HINT_DIR
CMAKE_POST+=" -DZeroMQ_DIR="
# echo "Enabling Zmq Hint Directory: $ZMQ_HINT_DIR"
fi
#hdf5 rebuild
if [ $HDF5 -eq 1 ]; then


@ -1,3 +1,5 @@
# SPDX-License-Identifier: LGPL-3.0-or-other
# Copyright (C) 2021 Contributors to the SLS Detector Package
mkdir build
mkdir install


@ -1,3 +1,5 @@
# SPDX-License-Identifier: LGPL-3.0-or-other
# Copyright (C) 2021 Contributors to the SLS Detector Package
echo "|<-------- starting python build"
cd python


@ -3,6 +3,7 @@ python:
- 3.7
- 3.8
- 3.9
- 3.10
numpy:
- 1.17
- 1.17


@ -1,3 +1,5 @@
# SPDX-License-Identifier: LGPL-3.0-or-other
# Copyright (C) 2021 Contributors to the SLS Detector Package
mkdir $PREFIX/lib
mkdir $PREFIX/bin
mkdir $PREFIX/include


@ -1,3 +1,5 @@
# SPDX-License-Identifier: LGPL-3.0-or-other
# Copyright (C) 2021 Contributors to the SLS Detector Package
#Copy the GUI
mkdir -p $PREFIX/bin
cp build/install/bin/slsDetectorGui $PREFIX/bin/.


@ -1,3 +1,5 @@
# SPDX-License-Identifier: LGPL-3.0-or-other
# Copyright (C) 2021 Contributors to the SLS Detector Package
mkdir -p $PREFIX/lib
mkdir -p $PREFIX/bin
@ -17,4 +19,4 @@ cp build/install/bin/slsMultiReceiver $PREFIX/bin/.
cp build/install/include/sls/* $PREFIX/include/sls
cp -r build/install/share/ $PREFIX/share
cp -rv build/install/share $PREFIX


@ -1,4 +1,6 @@
# SPDX-License-Identifier: LGPL-3.0-or-other
# Copyright (C) 2021 Contributors to the SLS Detector Package
#Copy the Moench executables
mkdir -p $PREFIX/bin
cp build/install/bin/moench04ZmqProcess $PREFIX/bin/.
cp build/install/bin/moenchZmqProcess $PREFIX/bin/.
cp build/install/bin/moench* $PREFIX/bin/.


@ -1 +1,3 @@
# SPDX-License-Identifier: LGPL-3.0-or-other
# Copyright (C) 2021 Contributors to the SLS Detector Package
ctest -j2


@ -1,3 +1,5 @@
# SPDX-License-Identifier: LGPL-3.0-or-other
# Copyright (C) 2021 Contributors to the SLS Detector Package
find_package(ROOT CONFIG REQUIRED COMPONENTS Core Gui)
@ -32,7 +34,7 @@ add_executable(ctbGui
ctbAdcs.cpp
ctbPattern.cpp
ctbAcquisition.cpp
${CMAKE_SOURCE_DIR}/slsDetectorCalibration/tiffIO.cpp
${CMAKE_SOURCE_DIR}/slsDetectorCalibration/tiffio/src/tiffIO.cpp
)
@ -41,6 +43,7 @@ target_include_directories(ctbGui PRIVATE
${CMAKE_SOURCE_DIR}/slsDetectorCalibration/dataStructures
${CMAKE_SOURCE_DIR}/slsDetectorCalibration/interpolations
${CMAKE_SOURCE_DIR}/slsDetectorCalibration/
${CMAKE_SOURCE_DIR}/slsDetectorCalibration/tiffio/include/
)
# Headders needed for ROOT dictionary generation
@ -59,7 +62,6 @@ set( HEADERS
#set(ROOT_INCLUDE_PATH ${CMAKE_CURRENT_SOURCE_DIR})
# ROOT dictionary generation
include("${ROOT_DIR}/RootMacros.cmake")
root_generate_dictionary(ctbDict ${HEADERS} LINKDEF ctbLinkDef.h)
add_library(ctbRootLib SHARED ctbDict.cxx)
target_include_directories(ctbRootLib PUBLIC ${CMAKE_CURRENT_SOURCE_DIR})
@ -84,4 +86,5 @@ target_link_libraries(ctbGui PUBLIC
set_target_properties(ctbGui PROPERTIES
RUNTIME_OUTPUT_DIRECTORY ${CMAKE_BINARY_DIR}/bin
)
)

View File

@ -1,3 +1,5 @@
# SPDX-License-Identifier: LGPL-3.0-or-other
# Copyright (C) 2021 Contributors to the SLS Detector Package
INCS=ctbMain.h ctbDacs.h ctbPattern.h ctbSignals.h ctbAdcs.h ctbAcquisition.h ctbPowers.h ctbSlowAdcs.h


@ -1,3 +1,5 @@
# SPDX-License-Identifier: LGPL-3.0-or-other
# Copyright (C) 2021 Contributors to the SLS Detector Package
INCS=ctbMain.h ctbDacs.h ctbPattern.h ctbSignals.h ctbAdcs.h ctbAcquisition.h ctbPowers.h ctbSlowAdcs.h

8
ctbGui/ctbAcquisition.cpp Executable file → Normal file

@ -1,3 +1,5 @@
// SPDX-License-Identifier: LGPL-3.0-or-other
// Copyright (C) 2021 Contributors to the SLS Detector Package
//#define TESTADC
@ -826,14 +828,14 @@ void ctbAcquisition::setCanvas(TCanvas* c) {
myCanvas->AddExec("dynamic",Form("((ctbAcquisition*)%p)->canvasClicked()",this));
// myCanvas->AddExec("ex","canvasClicked()");
}
void ctbAcquisition::dataCallback(detectorData *data, long unsigned int index, unsigned int dum, void* pArgs) {
void ctbAcquisition::dataCallback(sls::detectorData *data, long unsigned int index, unsigned int dum, void* pArgs) {
// return
((ctbAcquisition*)pArgs)->plotData(data,index);
}
int ctbAcquisition::plotData(detectorData *data, int index) {
int ctbAcquisition::plotData(sls::detectorData *data, int index) {
/*
******************************************************************
@ -986,7 +988,7 @@ sample1 (dbit0 + dbit1 +...)if (cmd == "rx_dbitlist") {
ped=0;
aval=dataStructure->getValue(data->data,x,y);
//aval=dataStructure->getChannel(data->data,x,y);
cout << x << " " <<y << " "<< aval << endl;
// cout << x << " " <<y << " "<< aval << endl;
if (cbGetPedestal->IsOn()) {
if (photonFinder) {
photonFinder->addToPedestal(aval,x,y);

8
ctbGui/ctbAcquisition.h Executable file → Normal file

@ -1,3 +1,5 @@
// SPDX-License-Identifier: LGPL-3.0-or-other
// Copyright (C) 2021 Contributors to the SLS Detector Package
#ifndef CTBACQUISITION_H
#define CTBACQUISITION_H
#include <TGFrame.h>
@ -26,8 +28,8 @@ class TGTextButton;
namespace sls
{
class Detector;
class detectorData;
};
class detectorData;
template <class dataType> class slsDetectorData;
@ -199,10 +201,10 @@ class ctbAcquisition : public TGGroupFrame {
void setBitGraph (int i ,int en, Pixel_t col);
void startAcquisition();
static void progressCallback(double,void*);
static void dataCallback(detectorData*, long unsigned int, unsigned int, void*);
static void dataCallback(sls::detectorData*, long unsigned int, unsigned int, void*);
int StopFlag;
int plotData(detectorData*, int);
int plotData(sls::detectorData*, int);
void setPatternFile(const char* t);

2
ctbGui/ctbAdcs.cpp Executable file → Normal file

@ -1,3 +1,5 @@
// SPDX-License-Identifier: LGPL-3.0-or-other
// Copyright (C) 2021 Contributors to the SLS Detector Package
#include <TApplication.h>
#include <TGClient.h>
#include <TCanvas.h>

2
ctbGui/ctbAdcs.h Executable file → Normal file

@ -1,3 +1,5 @@
// SPDX-License-Identifier: LGPL-3.0-or-other
// Copyright (C) 2021 Contributors to the SLS Detector Package

2
ctbGui/ctbDacs.cpp Executable file → Normal file

@ -1,3 +1,5 @@
// SPDX-License-Identifier: LGPL-3.0-or-other
// Copyright (C) 2021 Contributors to the SLS Detector Package
#include <stdio.h>
#include <iostream>

2
ctbGui/ctbDacs.h Executable file → Normal file

@ -1,3 +1,5 @@
// SPDX-License-Identifier: LGPL-3.0-or-other
// Copyright (C) 2021 Contributors to the SLS Detector Package
#ifndef CTBDACS_H

2
ctbGui/ctbDefs.h Executable file → Normal file

@ -1,3 +1,5 @@
// SPDX-License-Identifier: LGPL-3.0-or-other
// Copyright (C) 2021 Contributors to the SLS Detector Package
#pragma once
#include <string>

2
ctbGui/ctbGui.cpp Executable file → Normal file

@ -1,3 +1,5 @@
// SPDX-License-Identifier: LGPL-3.0-or-other
// Copyright (C) 2021 Contributors to the SLS Detector Package
#include <TApplication.h>
#include <TColor.h>

2
ctbGui/ctbLinkDef.h Executable file → Normal file

@ -1,3 +1,5 @@
// SPDX-License-Identifier: LGPL-3.0-or-other
// Copyright (C) 2021 Contributors to the SLS Detector Package
#pragma link C++ class ctbMain;
#pragma link C++ class ctbDacs;
#pragma link C++ class ctbDac;

2
ctbGui/ctbMain.cpp Executable file → Normal file

@ -1,3 +1,5 @@
// SPDX-License-Identifier: LGPL-3.0-or-other
// Copyright (C) 2021 Contributors to the SLS Detector Package
#include <TApplication.h>
#include <TGClient.h>
#include <TCanvas.h>

2
ctbGui/ctbMain.h Executable file → Normal file

@ -1,3 +1,5 @@
// SPDX-License-Identifier: LGPL-3.0-or-other
// Copyright (C) 2021 Contributors to the SLS Detector Package
#ifndef CTBMAIN_H
#define CTBMAIN_H
#include <TGFrame.h>

2
ctbGui/ctbPattern.cpp Executable file → Normal file

@ -1,3 +1,5 @@
// SPDX-License-Identifier: LGPL-3.0-or-other
// Copyright (C) 2021 Contributors to the SLS Detector Package
#include <TApplication.h>
#include <TGClient.h>
#include <TCanvas.h>

2
ctbGui/ctbPattern.h Executable file → Normal file

@ -1,3 +1,5 @@
// SPDX-License-Identifier: LGPL-3.0-or-other
// Copyright (C) 2021 Contributors to the SLS Detector Package
#ifndef CTBPATTERN_H
#define CTBPATTERN_H
#include <TGFrame.h>


@ -1,3 +1,5 @@
// SPDX-License-Identifier: LGPL-3.0-or-other
// Copyright (C) 2021 Contributors to the SLS Detector Package
#include <TGFrame.h>


@ -1,3 +1,5 @@
// SPDX-License-Identifier: LGPL-3.0-or-other
// Copyright (C) 2021 Contributors to the SLS Detector Package
#ifndef CTBPOWERS_H
#define CTBPOWERS_H

2
ctbGui/ctbSignals.cpp Executable file → Normal file

@ -1,3 +1,5 @@
// SPDX-License-Identifier: LGPL-3.0-or-other
// Copyright (C) 2021 Contributors to the SLS Detector Package
#include <TApplication.h>
#include <TGClient.h>
#include <TCanvas.h>

2
ctbGui/ctbSignals.h Executable file → Normal file

@ -1,3 +1,5 @@
// SPDX-License-Identifier: LGPL-3.0-or-other
// Copyright (C) 2021 Contributors to the SLS Detector Package
#ifndef CTBSIGNALS_H
#define CTBSIGNALS_H
#include <TGFrame.h>


@ -1,3 +1,5 @@
// SPDX-License-Identifier: LGPL-3.0-or-other
// Copyright (C) 2021 Contributors to the SLS Detector Package
#include <stdio.h>
#include <iostream>


@ -1,3 +1,5 @@
// SPDX-License-Identifier: LGPL-3.0-or-other
// Copyright (C) 2021 Contributors to the SLS Detector Package
#ifndef CTBSLOWADCS_H


@ -1,110 +0,0 @@
#include <stdlib.h>
#include <stdint.h>
#include <string.h>
#include <sys/utsname.h>
#include <sys/types.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <errno.h>
#include <math.h>
#include <fcntl.h>
#include <stdarg.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
int main(int argc, char *argv[]) {
int iarg;
char fname[10000];
uint64_t word;
int val[64];
int bit[64];
FILE *fdin;
int nb=2;
int off=0;
int ioff=0;
int dr=24;
int idr=0;
int ib=0;
int iw=0;
bit[0]=19;
bit[1]=8;
// for (iarg=0; iarg<argc; iarg++) printf("%d %s\n",iarg, argv[iarg]);
if (argc<2) printf("Error: usage is %s fname [dr off b0 b1 bn]\n");
if (argc>2) dr=atoi(argv[2]);
if (argc>3) off=atoi(argv[3]);
if (argc>4) {
for (ib=0; ib<64; ib++) {
if (argc>4+ib) {
bit[ib]=atoi(argv[4+ib]);
nb++;
}
}
}
idr=0;
for (ib=0; ib<nb; ib++) {
val[ib]=0;
}
fdin=fopen(argv[1],"rb");
if (fdin==NULL) {
printf("Cannot open input file %s for reading\n",argv[1]);
return 200;
}
while (fread((void*)&word, 8, 1, fdin)) {
// printf("%llx\n",word);
if (ioff<off) ioff++;
else {
for (ib=0; ib<nb; ib++) {
if (word&(1<<bit[ib])) val[ib]|=(1<<idr);
}
idr++;
if (idr==dr) {
idr=0;
fprintf(stdout,"%d\t",iw++);
for (ib=0; ib<nb; ib++) {
#ifdef HEX
fprintf(stdout,"%08llx\t",val[ib]);
#else
fprintf(stdout,"%lld\t",val[ib]);
#endif
val[ib]=0;
}
fprintf(stdout,"\n");
}
}
}
if (idr!=0) {
fprintf(stdout,"%d\t",iw++);
for (ib=0; ib<nb; ib++) {
#ifdef HEX
fprintf(stdout,"%08llx\t",val[ib]);
#else
fprintf(stdout,"%lld\t",val[ib]);
#endif
val[ib]=0;
}
fprintf(stdout,"\n");
}
fclose(fdin);
return 0;
}


@ -1,177 +0,0 @@
/****************************************************************************
usage to generate a patter test.pat from test.p
gcc -DINFILE="\"test.p\"" -DOUTFILE="\"test.pat\"" -o test.exe generator.c ; ./test.exe ; rm test.exe
*************************************************************************/
#include <stdlib.h>
#include <stdint.h>
#include <string.h>
#include <sys/utsname.h>
#include <sys/types.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <errno.h>
#include <math.h>
#include <fcntl.h>
#include <stdarg.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#define MAXLOOPS 3
#define MAXTIMERS 3
#define MAXWORDS 1024
uint64_t pat=0;
uint64_t iopat=0;
uint64_t clkpat=0;
int iaddr=0;
int waitaddr[3]={MAXWORDS,MAXWORDS,MAXWORDS};
int startloopaddr[3]={MAXWORDS,MAXWORDS,MAXWORDS};
int stoploopaddr[3]={MAXWORDS,MAXWORDS,MAXWORDS};
int start=0, stop=0;
uint64_t waittime[3]={0,0,0};
int nloop[3]={0,0,0};
char infile[10000], outfile[10000];
FILE *fd, *fd1;
uint64_t PAT[MAXWORDS];
int i,ii,iii,j,jj,jjj,pixx,pixy,memx,memy,muxout,memclk,colclk,rowclk,muxclk,memcol,memrow,loopcounter;
void setstart() {
start=iaddr;
}
void setstop() {
stop=iaddr;
}
void setinput(int bit) {
uint64_t mask=1;
mask=mask<<bit;
iopat &= ~mask;
}
void setoutput(int bit) {
uint64_t mask=1;
mask=mask<<bit;
iopat |= mask;
}
void setclk(int bit) {
uint64_t mask=1;
mask=mask<<bit;
iopat |= mask;
clkpat |= mask;
}
void clearbit(int bit){
uint64_t mask=1;
mask=mask<<bit;
pat &= ~mask;
}
void setbit(int bit){
uint64_t mask=1;
mask=mask<<bit;
pat |= mask;
}
int checkbit(int bit) {
uint64_t mask=1;
mask=mask<<bit;
return (pat & mask ) >>bit;
}
void setstartloop(int iloop) {
if (iloop>=0 && iloop<MAXLOOPS)
startloopaddr[iloop]=iaddr;
}
void setstoploop(int iloop) {
if (iloop>=0 && iloop<MAXLOOPS)
stoploopaddr[iloop]=iaddr;
}
void setnloop(int iloop, int n) {
if (iloop>=0 && iloop<MAXLOOPS)
nloop[iloop]=n;
}
void setwaitpoint(int iloop) {
if (iloop>=0 && iloop<MAXTIMERS)
waitaddr[iloop]=iaddr;
}
void setwaittime(int iloop, uint64_t t) {
if (iloop>=0 && iloop<MAXTIMERS)
waittime[iloop]=t;
}
void pw(){
if (iaddr<MAXWORDS)
PAT[iaddr]= pat;
fprintf(fd,"patword 0x%04x 0x%016llx\n",iaddr, pat);
iaddr++;
if (iaddr>=MAXWORDS) printf("ERROR: too many word in the pattern (%d instead of %d)!",iaddr, MAXWORDS);
}
int parseCommand(int clk, int cmdbit, int cmd, int length) {
int ibit;
clearbit(clk);
for (ibit=0; ibit<length; ibit++) {
if (cmd&(1>>ibit))
setbit(cmdbit);
else
clearbit(cmdbit);
pw();
/******/
setbit(clk);
pw();
/******/
}
};
main(void) {
int iloop=0;
fd=fopen(OUTFILE,"w");
#include INFILE
fprintf(fd,"patioctrl 0x%016llx\n",iopat);
fprintf(fd,"patclkctrl 0x%016llx\n",clkpat);
fprintf(fd,"patlimits 0x%04x 0x%04x\n",start, stop);
for (iloop=0; iloop<MAXLOOPS; iloop++) {
fprintf(fd,"patloop%d 0x%04x 0x%04x\n",iloop, startloopaddr[iloop], stoploopaddr[iloop]);
if ( startloopaddr[iloop]<0 || stoploopaddr[iloop]<= startloopaddr[iloop]) nloop[iloop]=0;
fprintf(fd,"patnloop%d %d\n",iloop, nloop[iloop]);
}
for (iloop=0; iloop<MAXTIMERS; iloop++) {
fprintf(fd,"patwait%d 0x%04x\n",iloop, waitaddr[iloop]);
if (waitaddr[iloop]<0) waittime[iloop]=0;
fprintf(fd,"patwaittime%d %lld\n",iloop, waittime[iloop]);
}
close((int)fd);
fd1=fopen(OUTFILEBIN,"w");
fwrite(PAT,sizeof(uint64_t),iaddr, fd1);
close((int)fd1);
}


@ -1,3 +1,5 @@
# SPDX-License-Identifier: LGPL-3.0-or-other
# Copyright (C) 2021 Contributors to the SLS Detector Package
find_package(Doxygen REQUIRED)
find_package(Sphinx REQUIRED)
@ -53,6 +55,9 @@ set(SPHINX_SOURCE_FILES
src/troubleshooting.rst
src/receivers.rst
src/slsreceiver.rst
src/udpheader.rst
src/udpconfig.rst
src/udpdetspec.rst
)
foreach(filename ${SPHINX_SOURCE_FILES})


@ -34,6 +34,12 @@ Python bindings
* pybind11 (packaged in libs/)
-----------------------
Moench executables
-----------------------
* libtiff
-----------------------
Documentation
-----------------------


@ -1,8 +1,6 @@
Firmware Upgrade
=================
Eiger
-------------
@ -18,30 +16,9 @@ Upgrade
^^^^^^^^
#. Tftp must be already installed on your pc to use the bcp script.
#. Kill the on-board servers and copy new servers to the board.
#. Copy new servers to the board. See :ref:`how to upgrade detector servers<Detector Server Upgrade>` for more details. A reboot should have started the new linked servers automatically. For Eiger, do not reboot yet as we need to program the firmware via bit files.
.. code-block:: bash
# Option 1: from detector console
# kill old server
ssh root@bebxxx
killall eigerDetectorServer
# copy new server
cd executables
scp user@pc:/path/eigerDetectorServerxxx .
chmod 777 eigerDetectorServerxxx
ln -sf eigerDetectorServerxxx eigerDetectorServer
sync
# Options 2: from client console for multiple modules
for i in bebxxx bebyyy;
do ssh root@$i killall eigerDetectorServer;
scp eigerDetectorServerxxx root@$i:~/executables/eigerDetectorServer;
ssh root@$i sync; done
* This is crucial when registers between firmwares change. Failure to do so will result in linux on boards to crash and boards can't be pinged anymore.
* This step is crucial when registers between firmwares change. Failure to do so will result in the Linux on the boards crashing, and the boards can't be pinged anymore.
#. Bring the board into programmable mode using either of the 2 ways. Both methods result in only the central LED blinking.
@ -50,8 +27,13 @@ Upgrade
Do a hard reset for each half module on back panel boards, between the LEDs, closer to each of the 1G ethernet connectors. Push until all LEDs start to blink.
* Software:
.. code-block:: bash
# Option 1: if the old server is still running:
sls_detector_put execcommand "./boot_recovery"
# Option 2:
ssh root@bebxxx
cd executables
./boot_recovery
@ -79,11 +61,24 @@ Upgrade
#update front right fpga
bcp download.bit bebxxx:/febr
#update kernel (only if required by the SLS Detector Group)
#update kernel (only if required by us)
bcp download.bit bebxxx:/kernel
#. Reboot the detector.
.. code-block:: bash
# In the first terminal where we saw "Success"
# reconfig febX is necessary only if you have flashed a new feb firmware
reconfig febl
reconfig febr
# will reboot controller
reconfig fw0
.. note ::
If the detector servers did not start up automatically after reboot, you need to add scripts to do that. See :ref:`Automatic start<Automatic start servers>` for more details.
Jungfrau
-------------
@ -94,75 +89,26 @@ Download
- `pof files <https://github.com/slsdetectorgroup/slsDetectorFirmware>`__
Upgrade (from v4.x.x)
^^^^^^^^^^^^^^^^^^^^^^
Upgrade
^^^^^^^^
Check :ref:`firmware troubleshooting <blackfin firmware troubleshooting>` if you run into issues while programming firmware.
.. note ::
#. Tftp must be installed on pc.
These instructions are for upgrades from v5.0.0. For earlier versions, contact us.
#. Update client package to the latest (5.x.x).
#. Disable server respawning or kill old server
.. code-block:: bash
# Option 1: if respawning enabled
telnet bchipxxx
# edit /etc/inittab
# comment out line #ttyS0::respawn:/jungfrauDetectorServervxxx
reboot
# ensure servers did not start up after reboot
telnet bchipxxx
ps
# Option 2: if respawning already disabled
telnet bchipxxx
killall jungfrauDetectorServerv*
#. Copy new server and start in update mode
.. code-block:: bash
tftp pcxxx -r jungfrauDetectorServervxxx -g
chmod 777 jungfrauDetectorServervxxx
./jungfrauDetectorServervxxx -u
#. Program fpga from the client console
.. code-block:: bash
sls_detector_get free
# Crucial that the next command executes without any errors
sls_detector_put hostname bchipxxx
sls_detector_put programfpga xxx.pof
#. After programming, kill 'update server' using Ctrl + C in server console.
#. Enable server respawning if needed
.. code-block:: bash
telnet bchipxxx
# edit /etc/inittab
# uncomment out line #ttyS0::respawn:/jungfrauDetectorServervxxx
# ensure the line has the new server name
reboot
# ensure both servers are running using ps
jungfrauDetectorServervxxx
jungfrauDetectorServervxxx --stop-server 1953
Upgrade (from v5.0.0)
^^^^^^^^^^^^^^^^^^^^^^^^^^
Check :ref:`firmware troubleshooting <blackfin firmware troubleshooting>` if you run into issues while programming firmware.
Always ensure that the client and server software are of the same release.
#. Program from console
Program from console
.. code-block:: bash
# copies server from tftp folder of pc, programs fpga,
# removes old server from respawn, sets up new server to respawn
# and reboots
# copies server from tftp folder of pc, links new server to jungfrauDetectorServer,
# removes old server from respawn, sets up new linked server to respawn
# programs fpga,
# reboots
sls_detector_put update jungfrauDetectorServervxxx pcxxx xx.pof
# Or only program firmware
@ -170,8 +116,8 @@ Always ensure that the client and server software are of the same release.
Gotthard
---------
Gotthard I
-----------
Download
^^^^^^^^^^^^^
@ -186,7 +132,7 @@ Upgrade
^^^^^^^^
.. warning ::
| Gotthard firmware cannot be upgraded remotely and requires the use of USB-Blaster.
| It is generally updated by the SLS Detector group.
| It is generally updated by us.
#. Download `Altera Quartus software or Quartus programmer <https://fpgasoftware.intel.com/20.1/?edition=standard&platform=linux&product=qprogrammer#tabs-4>`__.
@ -197,7 +143,7 @@ Upgrade
#. Plug the end of your USB-Blaster with the adaptor provided to the connector 'AS config' on the Gotthard board.
#. Click on 'Add file'. Select programming (pof) file provided by the SLS Detector group.
#. Click on 'Add file'. Select programming (pof) file provided by us.
#. Check "Program/Configure" and "Verify". Push the start button. Wait until the programming process is finished.
@ -206,68 +152,69 @@ Upgrade
#. Reboot the detector.
Mythen3
-------
Mythen III
-----------
.. note ::
As it is still in developement, the rbf files must be picked up from the SLS Detector Group.
As it is still in development, the rbf files must be picked up from us.
Download
^^^^^^^^^^^^^
- detector server corresponding to package in slsDetectorPackage/serverBin
- rbf files (in developement)
- `rbf files <https://github.com/slsdetectorgroup/slsDetectorFirmware>`__
Upgrade (from v5.0.0)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
Upgrade
^^^^^^^^
Always ensure that the client and server software are of the same release.
#. Program from console
Program from console
.. code-block:: bash
# copies server from tftp folder of pc, programs fpga,
# and reboots (new server not respawned currently)
# copies server from tftp folder of pc, links new server to mythen3DetectorServer,
# programs fpga,
# reboots
sls_detector_put update mythen3DetectorServervxxx pcxxx xxx.rbf
# Or only program firmware
sls_detector_put programfpga xxx.rbf
Gotthard2
-------------
.. note ::
As it is still in developement, the rbf files must be picked up from the SLS Detector Group.
If the detector servers did not start up automatically after reboot, you need to add scripts to do that. See :ref:`Automatic start<Automatic start servers>` for more details.
Gotthard II
-------------
Download
^^^^^^^^^^^^^
- detector server corresponding to package in slsDetectorPackage/serverBin
- rbf files (in development)
- `rbf files <https://github.com/slsdetectorgroup/slsDetectorFirmware>`__
Upgrade (from v5.0.0)
^^^^^^^^^^^^^^^^^^^^^^^^^^
Upgrade
^^^^^^^^
Always ensure that the client and server software are of the same release.
#. Program from console
Program from console
.. code-block:: bash
# copies server from tftp folder of pc, programs fpga,
# and reboots (new server not respawned currently)
# copies server from tftp folder of pc, links new server to gotthard2DetectorServer,
# programs fpga,
# reboots
sls_detector_put update gotthard2DetectorServervxxx pcxxx xxx.rbf
# Or only program firmware
sls_detector_put programfpga xxx.rbf
.. note ::
If the detector servers did not start up automatically after reboot, you need to add scripts to do that. See :ref:`Automatic start<Automatic start servers>` for more details.
Moench
-------
@ -279,19 +226,21 @@ Download
- `pof files <https://github.com/slsdetectorgroup/slsDetectorFirmware>`__
Upgrade (from v5.0.0)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
Upgrade
^^^^^^^^
Check :ref:`firmware troubleshooting <blackfin firmware troubleshooting>` if you run into issues while programming firmware.
Always ensure that the client and server software are of the same release.
#. Program from console
Program from console
.. code-block:: bash
# copies server from tftp folder of pc, programs fpga,
# removes old server from respawn, sets up new server to respawn
# and reboots
# copies server from tftp folder of pc, links new server to moenchDetectorServer,
# removes old server from respawn, sets up new linked server to respawn
# programs fpga,
# reboots
sls_detector_put update moenchDetectorServervxxx pcxxx xx.pof
# Or only program firmware
@ -307,19 +256,21 @@ Download
- `pof files <https://github.com/slsdetectorgroup/slsDetectorFirmware>`__
Upgrade (from v5.0.0)
^^^^^^^^^^^^^^^^^^^^^^^^^^
Upgrade
^^^^^^^^
Check :ref:`firmware troubleshooting <blackfin firmware troubleshooting>` if you run into issues while programming firmware.
Always ensure that the client and server software are of the same release.
#. Program from console
Program from console
.. code-block:: bash
# copies server from tftp folder of pc, programs fpga,
# removes old server from respawn, sets up new server to respawn
# and reboots
# copies server from tftp folder of pc, links new server to ctbDetectorServer,
# removes old server from respawn, sets up new linked server to respawn
# programs fpga,
# reboots
sls_detector_put update ctbDetectorServervxxx pcxxx xx.pof
# Or only program firmware
@ -354,43 +305,53 @@ Firmware Troubleshooting with blackfin
5. If one can't list it, read the next section to try to get the blackfin to list it.
How to get back mtd3 drive remotely (copying new kernel)
How to get back mtd3 drive remotely (updating kernel)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
You have 2 alternatives to update the kernel.
1. Commands via software (>= v6.0.0)

.. code-block:: bash

sls_detector_put updatekernel /home/...path-to-kernel-image
2. or command line
.. code-block:: bash
# step 1: get the kernel image (uImage.lzma) from slsdetectorgroup
# and copy it to pc's tftp folder
# step 2: connect to the board
telnet bchipxxx
#step 3: go to directory for space
cd /var/tmp/
# step 3: copy kernel to board
tftp pcxxx -r uImage.lzma -g
# step 4: verify kernel copied properly
ls -lrt
# step 5: erase flash
flash_eraseall /dev/mtd1
# step 6: copy new image to kernel drive
cat uImage.lzma > /dev/mtd1
# step 7:
sync
# step 8:
reboot
# step 9: verification
telnet bchipxxx
uname -a # verify kernel date
more /proc/mtd # verify mtd3 is listed
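Whichever alternative is used, the result can also be checked from the client afterwards (a suggested verification, not part of the original steps; available from v6.0.0):

.. code-block:: bash

sls_detector_get kernelversion   # should now report the new kernel version string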
Last Resort using USB Blaster
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^


@ -1,3 +1,5 @@
// SPDX-License-Identifier: LGPL-3.0-or-other
// Copyright (C) 2021 Contributors to the SLS Detector Package
/**
* Utility program to generate input files for the command line
* documentation. Uses the string returned from sls_detector_help cmd
@ -43,6 +45,7 @@ int main() {
for (const auto &cmd : commands) {
std::ostringstream os;
std::cout << cmd << '\n';
proxy.Call(cmd, {}, -1, slsDetectorDefs::HELP_ACTION, os);
auto tmp = os.str().erase(0, cmd.size());


@ -66,6 +66,15 @@ Welcome to slsDetectorPackage's documentation!
virtualserver
serverdefaults
.. toctree::
:caption: Detector UDP Header
:maxdepth: 2
udpheader
udpconfig
udpdetspec
.. toctree::
:caption: Receiver
:maxdepth: 2


@ -27,13 +27,18 @@ Build from source using CMake
---------------------------------
Note that on some systems, for example RH7, cmake v3+ is available under the cmake3 alias.
It is also required to clone with the option --recursive to get the git submodules used
in the package.
It is also required to clone with the option --recursive to get the pybind11 submodules used
in the package. (Only needed for older versions than v7.0.0)
.. code-block:: bash
git clone --recursive https://github.com/slsdetectorgroup/slsDetectorPackage.git
# if older than v7.0.0 and using python, update pybind11 submodules
cd slsDetectorPackage
git submodule update --init
mkdir build && cd build
cmake ../slsDetectorPackage -DCMAKE_INSTALL_PREFIX=/your/install/path
make -j12 #or whatever number of cores you are using to build
@ -55,26 +60,28 @@ These are mainly aimed at those not familiar with using ccmake and cmake.
The binaries are generated in slsDetectorPackage/build/bin directory.
Usage: $0 [-c] [-b] [-p] [e] [t] [r] [g] [s] [u] [i] [m] [n] [-h] [z] [-d <HDF5 directory>] [-l Install directory] [-k <CMake command>] [-j <Number of threads>]
Usage: ./cmk.sh [-b] [-c] [-d <HDF5 directory>] [e] [g] [-h] [i] [-j <Number of threads>] [-k <CMake command>] [-l <Install directory>] [m] [n] [-p] [-q <Zmq hint directory>] [r] [s] [t] [u] [z]
-[no option]: only make
-c: Clean
-b: Builds/Rebuilds CMake files normal mode
-p: Builds/Rebuilds Python API
-h: Builds/Rebuilds Cmake files with HDF5 package
-c: Clean
-d: HDF5 Custom Directory
-e: Debug mode
-g: Build/Rebuilds only gui
-h: Builds/Rebuilds Cmake files with HDF5 package
-i: Builds tests
-j: Number of threads to compile through
-k: CMake command
-l: Install directory
-t: Build/Rebuilds only text client
-r: Build/Rebuilds only receiver
-g: Build/Rebuilds only gui
-s: Simulator
-u: Chip Test Gui
-j: Number of threads to compile through
-e: Debug mode
-i: Builds tests
-m: Manuals
-n: Manuals without compiling doxygen (only rst)
-p: Builds/Rebuilds Python API
-q: Zmq hint directory
-r: Build/Rebuilds only receiver
-s: Simulator
-t: Build/Rebuilds only text client
-u: Chip Test Gui
-z: Moench zmq processor
# get all options
./cmk.sh -?
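# example (assumed usage): rebuild and point CMake at a non-default ZeroMQ
# installation via the new -q option (the path below is a placeholder)
./cmk.sh -b -q /path/to/zeromq/install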


@ -71,4 +71,19 @@ exposed to Python through pybind11.
:undoc-members:
.. autoclass:: timingSourceType
:undoc-members:
.. autoclass:: M3_GainCaps
:undoc-members:
.. autoclass:: portPosition
:undoc-members:
.. autoclass:: streamingInterface
:undoc-members:
.. autoclass:: vetoAlgorithm
:undoc-members:
.. autoclass:: gainMode
:undoc-members:


@ -17,6 +17,22 @@ environments.
.. warning ::
If you use conda avoid also installing packages with pip.
---------------------
PYBIND11
---------------------
**v7.0.0 of slsDetectorPackage:**
#. It is packaged into libs (pybind)
#. No longer a submodule of the slsDetectorPackage
**Older than v7.0.0:**
#. Submodule in libs (pybind11)
#. Switching between versions will require an update of the submodule as well using:
.. code-block:: bash
git submodule update --init #from the main slsDetectorPackage folder
---------------------
PYTHONPATH
@ -136,7 +152,7 @@ can use dir()
'__str__', '__subclasshook__', '_adc_register', '_frozen',
'_register', 'acquire', 'adcclk', 'adcphase', 'adcpipeline',
'adcreg', 'asamples', 'auto_comp_disable', 'clearAcquiringFlag',
'clearBit', 'clearROI', 'client_version', 'config', 'copyDetectorServer',
'clearBit', 'clearROI', 'client_version', 'config',
'counters', 'daclist', 'dacvalues', 'dbitclk', 'dbitphase' ...
Since the list for Detector is rather long it's a good idea to filter it.


@ -4,41 +4,8 @@ Receivers
Receiver processes can be run on same or different machines as the client, receives the data from the detector (via UDP packets).
When using the slsReceiver/ slsMultiReceiver, they can be further configured by the client control software (via TCP/IP) to set file name, file path, progress of acquisition etc.
Detector UDP Header
---------------------
| The UDP data format for the packets consist of a common header for all detectors, followed by the data for that one packet.
**The SLS Detector Header**
.. table:: <-------------------------------- 8 bytes -------------------------------->
:align: center
:widths: 30,30,30,30
+--------------------------------------------------------------------+
|frameNumber |
+---------------------------------+----------------------------------+
|expLength |packetNumber |
+---------------------------------+----------------------------------+
|bunchId |
+--------------------------------------------------------------------+
|timestamp |
+----------------+----------------+----------------+-----------------+
|modId |row |column |reserved |
+----------------+----------------+----------------+--------+--------+
|debug |roundRNumber |detType |version |
+---------------------------------+----------------+--------+--------+
UDP configuration in Config file
----------------------------------
#. UDP source port is hardcoded in detector server, starting at 32410.
#. **udp_dstport** : UDP destination port number. Port in receiver pc to listen to packets from the detector.
#. **udp_dstip** : IP address of UDP destination interface. IP address of interface in receiver pc to listen to packets from detector. If **auto** is used (only when using slsReceiver/ slsMultiReceiver), the IP of **rx_hostname** is picked up.
#. **udp_dstmac** : Mac address of UDP destination interface. MAC address of interface in receiver pc to listen to packets from detector. Only required when using custom receiver, else slsReceiver/slsMultiReceiver picks it up from **udp_dstip**.
#. **udp_srcip** : IP address of UDP source interface. IP address of detector UDP interface to send packets from. Do not use for Eiger 1Gb interface (uses its hardware IP). For others, must be in the same subnet as **udp_dstip**.
#. **udp_srcmac** : MAC address of UDP source interface. MAC address of detector UDP interface to send packets from. Do not use for Eiger (uses hardware mac). For others, it is not necessary, but can help for switch and debugging to put unique values for each module.
To know more about detector receiver configuration, please check out :ref:`detector udp header and udp commands in the config file <detector udp header>`
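As a sketch, the UDP block of a single-module config file might look like the following (all values are placeholders, not defaults):

.. code-block:: bash

udp_dstport 50012
udp_dstip 10.1.2.100
udp_srcip 10.1.2.10
# udp_dstmac/udp_srcmac are usually only needed for a custom receiver or for debugging
#udp_dstmac 3c:fd:fe:xx:xx:xx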
Custom Receiver
----------------


@ -1,5 +1,14 @@
Detector Servers
=================
Getting Started
===============
Detector Servers include:
* Control server [default port: 1952]
* Almost all client communication.
* Stop server [default port: 1953]
* Client requests for detector status, stop acquisition, temperature, advanced read/write registers.
When using a blocking acquire command (sls_detector_acquire or Detector::acquire), the control server is blocked until the end of the acquisition. However, stop server commands can be used in parallel, as sketched below.
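A minimal sketch, assuming sls_detector_get status is served by the stop server as described above:

.. code-block:: bash

# terminal 1: blocking acquisition (control server busy until it returns)
sls_detector_acquire
# terminal 2: handled by the stop server while the acquisition is running
sls_detector_get status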
Location
---------
@ -24,18 +33,8 @@ Arguments
-s, --stopserver : Stop server. Do not use as it is created by control server
Basics
------------
Detector Servers include:
* Control server [default port: 1952]
* Almost all client communication.
* Stop server [default port: 1953]
* Client requests for detector status, stop acquisition, temperature, advanced read/write registers.
When using a blocking acquire command (sls_detector_acquire or Detector::acquire), the control server is blocked until end of acquisition. However, stop server commands could be used in parallel.
.. _Automatic start servers:
Automatic start
------------------


@ -1,114 +1,66 @@
Detector Server Upgrade
=======================
Eiger
-------------
.. _Detector Server Upgrade:
Upgrade
========
**Location:** slsDetectorPackage/serverBin/ folder for every release.
.. note ::
For Mythen3, Gotthard2 and Eiger, you need to add scripts to automatically start detector server upon power on. See :ref:`Automatic start<Automatic start servers>` for more details.
.. note ::
Eiger requires a manual reboot. Or killall the servers and restart the new linked one. If you are in the process of updating firmware, then don't reboot yet.
6.1.1+ (no tftp required)
---------------------------------------
#. Program from console
#. Kill old server and copy new server
.. code-block:: bash
# Option 1: from detector console
# kill old server
ssh root@bebxxx
killall eigerDetectorServer
# the following command copies new server, creates a soft link to xxxDetectorServerxxx
# [Jungfrau][CTB][Moench] also deletes the old server binary and edits initttab to respawn server on reboot
# Then, the detector controller will reboot (except Eiger)
sls_detector_put updatedetectorserver /complete-path-to-binary/xxxDetectorServerxxx
# copy new server
cd executables
scp user@pc:/path/eigerDetectorServerxxx .
chmod 777 eigerDetectorServerxxx
ln -sf eigerDetectorServerxxx eigerDetectorServer
sync
#. Copy the detector server specific config files or any others required to the detector:
# Options 2: from client console for multiple modules
for i in bebxxx bebyyy;
do ssh root@$i killall eigerDetectorServer;
scp eigerDetectorServerxxx root@$i:~/executables/eigerDetectorServer;
ssh root@$i sync; done
.. code-block:: bash
sls_detector_put execcommand "tftp pcxxx -r configxxx -g"
#. Reboot the detector.
Jungfrau
-------------
**Location:** slsDetectorPackage/serverBin/ folder for every release.
5.0.0 - 6.1.1
--------------
#. Install tftp and copy detector server binary to tftp folder
#. Program from console (only from 5.0.0-rcx)
#. Program from console
.. code-block:: bash
# copies new server from pc tftp folder, respawns and reboots
sls_detector_put copydetectorserver jungfrauDetectorServerxxx pcxxx
# the following command copies new server from pc tftp folder, creates a soft link to xxxDetectorServerxxx
# [Jungfrau][CTB][Moench] also edits inittab to respawn the server on reboot
# Then, the detector controller will reboot (except Eiger)
sls_detector_put copydetectorserver xxxDetectorServerxxx pcxxx
#. Copy the detector server specific config files or any others required to the detector:
.. code-block:: bash
sls_detector_put execcommand "tftp pcxxx -r configxxx -g"
Gotthard
---------
Troubleshooting with tftp
^^^^^^^^^^^^^^^^^^^^^^^^^
**Location:** slsDetectorPackage/serverBin/ folder for every release.
#. tftp write error: There is no space left. Please delete some old binaries and try again.
#. Install tftp and copy detector server binary to tftp folder
#. Program from console (only from 5.0.0-rcx)
.. code-block:: bash
# copies new server from pc tftp folder, respawns and reboots
sls_detector_put copydetectorserver gotthardDetectorServerxxx pcxxx
#. text file busy: You are trying to copy the same server.
< 5.0.0
--------
Mythen3
-------
**Location:** slsDetectorPackage/serverBin/ folder for every release.
#. Install tftp and copy detector server binary to tftp folder
#. Program from console (only from 5.0.0-rcx)
.. code-block:: bash
# copies new server from pc tftp folder and reboots (does not respawn)
sls_detector_put copydetectorserver mythen3DetectorServerxxx pcxxx
Gotthard2
----------
**Location:** slsDetectorPackage/serverBin/ folder for every release.
#. Install tftp and copy detector server binary to tftp folder
#. Program from console (only from 5.0.0-rcx)
.. code-block:: bash
# copies new server from pc tftp folder and reboots (does not respawn)
sls_detector_put copydetectorserver gotthard2DetectorServerxxx pcxxx
Moench
------
**Location:** slsDetectorPackage/serverBin/ folder for every release.
#. Install tftp and copy detector server binary to tftp folder
#. Program from console (only from 5.0.0-rcx)
.. code-block:: bash
# copies new server from pc tftp folder, respawns and reboots
sls_detector_put copydetectorserver moenchDetectorServerxxx pcxxx
Ctb
---
**Location:** slsDetectorPackage/serverBin/ folder for every release.
#. Install tftp and copy detector server binary to tftp folder
#. Program from console (only from 5.0.0-rcx)
.. code-block:: bash
# copies new server from pc tftp folder, respawns and reboots
sls_detector_put copydetectorserver ctbDetectorServerxxx pcxxx
Please contact us.


@ -144,6 +144,30 @@ Receiver PC Tuning Options
| xth1 is example interface name.
| These settings are lost at pc reboot.
#. Disable CPU frequency scaling and set system to performance
* Check current policy (default might be powersave or schedutil)
.. code-block:: bash
# check current active governor and range of cpu freq policy
cpupower frequency-info --policy
# list all available governors for this kernel
cpupower frequency-info --governors
* Temporarily (until shut down)
.. code-block:: bash
# set to performance
sudo cpupower frequency-set -g performance
* Permanently
.. code-block:: bash
# edit /etc/sysconfig/cpupower to set the preferred governor
# enable or disable permanently
sudo systemctl enable cpupower
#. Give the user specific scheduling privileges.
.. code-block:: bash
@ -247,6 +271,19 @@ Possible causes could be the following:
* For Jungfrau, refer to :ref:`Jungfrau Power Supply Troubleshooting<Jungfrau Troubleshooting Power Supply>`.
Cannot ping module (Nios)
^^^^^^^^^^^^^^^^^^^^^^^^^
If you executed the "reboot" command on the board, you cannot ping it anymore until you power cycle it. To reboot the controller, use the software command ("rebootcontroller") instead, which talks to the microcontroller.
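For example (a sketch from the client console; the module must already be in the client configuration):
.. code-block:: bash
# reboots the controller through the microcontroller, so the board comes back up reachable
sls_detector_put rebootcontroller
# do not run "reboot" on the board itself; after that, only a power cycle recovers it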
Gotthard2
---------
Cannot get data without a module attached
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
You cannot get data without a module attached because a specific pin is left floating. Attach a module to get data.
Gotthard
----------

docs/src/udpconfig.rst Normal file

@ -0,0 +1,33 @@
.. _detector udp header:
Config file
============
Commands to configure the UDP in the config file:
Source Port
-----------
Hardcoded in detector server, starting at 32410.
udp_srcip - Source IP
---------------------
IP address of detector UDP interface to send packets from. Do not use for Eiger 1Gb interface (uses its hardware IP). For others, must be in the same subnet as **udp_dstip**.
udp_srcmac - Source MAC
-----------------------
MAC address of detector UDP interface to send packets from. Do not use for Eiger (uses hardware mac). For others, it is not necessary, but can help for switch and debugging to put unique values for each module.
udp_dstport - Destination Port
-------------------------------
Port in receiver pc to listen to packets from the detector.
udp_dstip - Destination IP
--------------------------
IP address of interface in receiver pc to listen to packets from detector. If **auto** is used (only when using slsReceiver/ slsMultiReceiver), the IP of **rx_hostname** is picked up.
udp_dstmac - Destination MAC
----------------------------
MAC address of interface in receiver pc to listen to packets from the detector. Only required when using a custom receiver, else slsReceiver/slsMultiReceiver picks it up from **udp_dstip**.
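Putting these commands together, the UDP section of a config file for a single module could look like the sketch below. All addresses and the port number are placeholders; as noted above, the source settings are not used for Eiger and **udp_dstmac** is only needed for a custom receiver.
.. code-block:: bash
# destination: interface of the receiver pc
udp_dstip 192.168.1.100
udp_dstport 50012
# destination mac, only for a custom receiver
# udp_dstmac 3c:ec:ef:00:00:01
# source: detector UDP interface, same subnet as udp_dstip
udp_srcip 192.168.1.200
# optional, a unique value per module helps switch configuration and debugging
udp_srcmac aa:bb:cc:dd:ee:01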

docs/src/udpdetspec.rst Normal file

@ -0,0 +1,19 @@
Detector Specific Fields
========================
Eiger
-----
.. table::
+----------+------------------------------+
| detSpec1 | 0x0 |
+----------+------------------------------+
| detSpec2 | 0x0 |
+----------+------------------------------+
| detSpec3 | e14a |
+----------+------------------------------+
| detSpec4 | Round Robin Interface Number |
+----------+------------------------------+

docs/src/udpheader.rst Normal file

@ -0,0 +1,75 @@
.. _detector udp header:
Format
=======
The UDP data format for the packets consists of a common header for all detectors, followed by the data for that one packet.
Current Version
---------------------------
**v3.0 (slsDetectorPackage v7.0.0+)**
.. table:: <---------------------------------------------------- 8 bytes ---------------------------------------------------->
:align: center
:widths: 30,30,30,15,15
+---------------------------------------------------------------+
| frameNumber |
+-------------------------------+-------------------------------+
| expLength | packetNumber |
+-------------------------------+-------------------------------+
| **detSpec1** |
+---------------------------------------------------------------+
| timestamp |
+---------------+---------------+---------------+---------------+
| modId | row | column | **detSpec2** |
+---------------+---------------+---------------+-------+-------+
| **detSpec3** | **detSpec4** |detType|version|
+-------------------------------+---------------+-------+-------+
Previous Versions
-----------------
**v2.0 (Package v4.0.0 - 6.x.x)**
.. table:: <---------------------------------------------------- 8 bytes ---------------------------------------------------->
:align: center
:widths: 30,30,30,15,15
+---------------------------------------------------------------+
| frameNumber |
+-------------------------------+-------------------------------+
| expLength | packetNumber |
+-------------------------------+-------------------------------+
| bunchid |
+---------------------------------------------------------------+
| timestamp |
+---------------+---------------+---------------+---------------+
| modId | **row** | **column** | **reserved** |
+---------------+---------------+---------------+-------+-------+
| debug | roundRNumber |detType|version|
+-------------------------------+---------------+-------+-------+
**v1.0 (Package v3.0.0 - 3.1.5)**
.. table:: <---------------------------------------------------- 8 bytes ---------------------------------------------------->
:align: center
:widths: 30,30,30,15,15
+---------------------------------------------------------------+
| frameNumber |
+-------------------------------+-------------------------------+
| expLength | packetNumber |
+-------------------------------+-------------------------------+
| bunchid |
+---------------------------------------------------------------+
| timestamp |
+---------------+---------------+---------------+---------------+
| modId | xCoord | yCoord | zCoord |
+---------------+---------------+---------------+-------+-------+
| debug | roundRNumber |detType|version|
+-------------------------------+---------------+-------+-------+
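To verify what a module actually sends, the common header (the first 48 bytes of the packet payload, i.e. the six 8-byte rows above) can be inspected with a packet capture on the receiving interface. A hedged sketch; the interface name and port are placeholders and should match your receiver interface and **udp_dstport**:
.. code-block:: bash
# capture one packet sent to the configured udp_dstport and print it in hex
sudo tcpdump -i xth1 -c 1 -X udp dst port 50012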


@ -1,6 +1,6 @@
.. _Virtual Detector Servers:
Detector Simulators
===================
Simulators
===========
Compilation
-----------


@ -1,4 +1,4 @@
/* override table no-wrap */
.wy-table-responsive table td, .wy-table-responsive table th {
white-space: normal;
}
}


@ -1,22 +0,0 @@
GITREPO1='git remote -v'
GITREPO2=" | grep \"fetch\" | cut -d' ' -f1"
BRANCH1='git branch -v'
BRANCH2=" | grep '*' | cut -d' ' -f2"
REPUID1='git log --pretty=format:"%H" -1'
AUTH1_1='git log --pretty=format:"%cn" -1'
AUTH1_2=" | cut -d' ' -f1"
AUTH2_1='git log --pretty=format:"%cn" -1'
AUTH2_2=" | cut -d' ' -f2"
FOLDERREV1='git log --oneline . ' #used for all the individual server folders
FOLDERREV2=" | wc -l" #used for all the individual server folders
REV1='git log --oneline '
REV2=" | wc -l"
GITREPO=`eval $GITREPO1 $GITREPO2`
BRANCH=`eval $BRANCH1 $BRANCH2`
REPUID=`eval $REPUID1`
AUTH1=`eval $AUTH1_1 $AUTH1_2`
AUTH2=`eval $AUTH2_1 $AUTH2_2`
REV=`eval $REV1 $REV2`
FOLDERREV=`eval $FOLDERREV1 $FOLDERREV2`


@ -409,18 +409,18 @@ patword 018d 0008599f0008503a
patioctrl 8f0effff6dbffdbf
patclkctrl 0000000000000000
patlimits 0000 018c
patloop0 013a 016b
patnloop0 199
patloop1 0400 0400
patnloop1 0
patloop2 0400 0400
patnloop2 0
patwait0 00aa
patwaittime0 10000
patwait1 0400
patwaittime1 0
patwait2 0400
patwaittime2 0
patloop 0 013a 016b
patnloop 0 199
patloop 1 0400 0400
patnloop 1 0
patloop 2 0400 0400
patnloop 2 0
patwait 0 00aa
patwaittime 0 10000
patwait 1 0400
patwaittime 1 0
patwait 2 0400
patwaittime 2 0
#############################################
### edit with hostname or 1Gbs IP address of your server


@ -3,7 +3,7 @@
### edit with hostname or IP address of your detector
############################################
#hostname bchip181+
hostname bchip181+
hostname bchip135
#############################################
### edit with hostname or 1Gbs IP address of your server
@ -28,7 +28,7 @@ rx_zmqport 50003
#############################################
### edit with 1 Gbs IP of PC where you will run the GUI
############################################
zmqip 129.129.202.98
zmqip 129.129.202.57
zmqport 50001


@ -427,18 +427,18 @@ patword 0x018c 0x0008599f0008503a
patword 0x018d 0x0008599f0008503a
patioctrl 0x8f0effff6dbffdbf
patlimits 0x0000 0x018c
patloop0 0x013a 0x016b
patnloop0 0x199
patloop1 0x0400 0x0400
patnloop1 0
patloop2 0x0400 0x0400
patnloop2 0
patwait0 0x00aa
patwaittime0 10000
patwait1 0x0400
patwaittime1 0
patwait2 0x0400
patwaittime2 0
patloop 0 0x013a 0x016b
patnloop 0 0x199
patloop 1 0x0400 0x0400
patnloop 1 0
patloop 2 0x0400 0x0400
patnloop 2 0
patwait 0 0x00aa
patwaittime 0 10000
patwait 1 0x0400
patwaittime 1 0
patwait 2 0x0400
patwaittime 2 0
# dacs
dac 6 800


@ -1,25 +0,0 @@
#####! /bin/awk -f
if [ $# -lt 3 ]
then
echo "wrong usage"
exit -1
fi
fin=$1
ftmp=$2
fout=$3
#dat=echo "date '+%Y%m%d'"
echo "Updating $fout"
#echo "in: $fin tmp: $ftmp out: $fout"
#awk 'NR==FNR {if ($3=="Date:") {l[FNR]=$4; gsub("-","",l[FNR]);} else { if (match($0,"Rev")) {l[FNR]=$(NF);} else {l[FNR]="\""$(NF)"\"";};};next} {$0=$1" "$2" "l[FNR]}1' $fin $ftmp > $fout
awk 'BEGIN {l[0]=0; "date +%Y%m%d" | getline l[1]; l[2]="\"/\""; l[3]="\"nobody\""; l[3]="\"nobody\""; l[4]="\"0000-0000-0000\"";} \
NR==FNR {if (match($0,"Rev")) {l[0]="0x"$(NF);} else if (match($0,"Date")) {l[1]="0x"$4; gsub("-","",l[1]);} else if (match($0,"URL")) {l[2]="\""$(NF)"\"";} else if (match($0,"Author")) {l[3]="\""$(NF)"\"";} else if (match($0,"UUID")) {l[4]="\""$(NF)"\"";} else if (match($0,"Branch")) {l[5]="\""$(NF)"\"";};next;}
{if (match($2,"REV")) {$0=$1" "$2" "l[0];} else if (match($2,"DATE")) {$0=$1" "$2" "l[1];} else if (match($2,"URL")) {$0=$1" "$2" "l[2];} else if (match($2,"AUTH")) {$0=$1" "$2" "l[3];} else if (match($2,"UUID")) {$0=$1" "$2" "l[4];} else if (match($2,"BRANCH")) {$0=$1" "$2" "l[5];}}1' $fin $ftmp > $fout


@ -1,3 +1,5 @@
# SPDX-License-Identifier: LGPL-3.0-or-other
# Copyright (C) 2021 Contributors to the SLS Detector Package
# MESSAGE( STATUS "CMAKE_CURRENT_SOURCE_DIR: " ${CMAKE_CURRENT_SOURCE_DIR} )
# MESSAGE( STATUS "PROJECT_SOURCE_DIR: " ${PROJECT_SOURCE_DIR} )


@ -1,9 +1,13 @@
// SPDX-License-Identifier: LGPL-3.0-or-other
// Copyright (C) 2021 Contributors to the SLS Detector Package
#include "DetectorImpl.h"
#include "catch.hpp"
#include "sls/string_utils.h"
#include "tests/globals.h"
#include <iostream>
namespace sls {
class MultiDetectorFixture {
protected:
DetectorImpl d;
@ -134,7 +138,7 @@ TEST_CASE_METHOD(MultiDetectorFixture, "Get ID", "[.eigerintegration][cli]") {
std::string hn = test::hostname;
hn.erase(std::remove(begin(hn), end(hn), 'b'), end(hn));
hn.erase(std::remove(begin(hn), end(hn), 'e'), end(hn));
auto hostnames = sls::split(hn, '+');
auto hostnames = split(hn, '+');
CHECK(hostnames.size() == d.getNumberOfDetectors());
for (int i = 0; i != d.getNumberOfDetectors(); ++i) {
CHECK(d.getId(defs::DETECTOR_SERIAL_NUMBER, 0) ==
@ -196,3 +200,5 @@ TEST_CASE_METHOD(MultiDetectorFixture, "rate correction",
d.setRateCorrection(200);
CHECK(d.getRateCorrection() == 200);
}
} // namespace sls


@ -1,3 +1,5 @@
// SPDX-License-Identifier: LGPL-3.0-or-other
// Copyright (C) 2021 Contributors to the SLS Detector Package
#include "catch.hpp"
@ -22,6 +24,8 @@
// extern std::string detector_type;
// extern dt type;
namespace sls {
TEST_CASE("Single detector no receiver", "[.integration][.single]") {
auto t = Module::getTypeFromDetector(test::hostname);
CHECK(t == test::type);
@ -46,8 +50,8 @@ TEST_CASE("Set control port then create a new object with this control port",
Is this the best way to initialize the detectors
Using braces to make the object go out of scope
*/
int old_cport = DEFAULT_PORTNO;
int old_sport = DEFAULT_PORTNO + 1;
int old_cport = DEFAULT_TCP_CNTRL_PORTNO;
int old_sport = DEFAULT_TCP_STOP_PORTNO;
int new_cport = 1993;
int new_sport = 2000;
{
@ -75,7 +79,7 @@ TEST_CASE("Set control port then create a new object with this control port",
Module d(test::type);
d.setHostname(test::hostname);
CHECK(d.getStopPort() == DEFAULT_PORTNO + 1);
CHECK(d.getStopPort() == DEFAULT_TCP_STOP_PORTNO);
d.freeSharedMemory();
}
@ -281,14 +285,14 @@ TEST_CASE(
CHECK(m.getRateCorrection() == ratecorr);
// ratecorr fail with dr 4 or 8
CHECK_THROWS_AS(m.setDynamicRange(8), sls::RuntimeError);
CHECK_THROWS_AS(m.setDynamicRange(8), RuntimeError);
CHECK(m.getRateCorrection() == 0);
m.setDynamicRange(16);
m.setDynamicRange(16);
m.setRateCorrection(ratecorr);
m.setDynamicRange(16);
m.setRateCorrection(ratecorr);
CHECK_THROWS_AS(m.setDynamicRange(4), sls::RuntimeError);
CHECK_THROWS_AS(m.setDynamicRange(4), RuntimeError);
CHECK(m.getRateCorrection() == 0);
}
@ -327,11 +331,11 @@ TEST_CASE("Chiptestboard Loading Patterns", "[.ctbintegration]") {
m.setPatternWord(addr, word);
CHECK(m.setPatternWord(addr, -1) == word);
addr = MAX_ADDR;
CHECK_THROWS_AS(m.setPatternWord(addr, word), sls::RuntimeError);
CHECK_THROWS_AS(m.setPatternWord(addr, word), RuntimeError);
CHECK_THROWS_WITH(m.setPatternWord(addr, word),
Catch::Matchers::Contains("be between 0 and"));
addr = -1;
CHECK_THROWS_AS(m.setPatternWord(addr, word), sls::RuntimeError);
CHECK_THROWS_AS(m.setPatternWord(addr, word), RuntimeError);
CHECK_THROWS_WITH(m.setPatternWord(addr, word),
Catch::Matchers::Contains("be between 0 and"));
@ -406,7 +410,7 @@ TEST_CASE("Chiptestboard Dbit offset, list, sampling, advinvert",
CHECK(m.getReceiverDbitList().size() == 10);
list.push_back(64);
CHECK_THROWS_AS(m.setReceiverDbitList(list), sls::RuntimeError);
CHECK_THROWS_AS(m.setReceiverDbitList(list), RuntimeError);
CHECK_THROWS_WITH(m.setReceiverDbitList(list),
Catch::Matchers::Contains("be between 0 and 63"));
@ -474,7 +478,7 @@ TEST_CASE("Eiger or Jungfrau nextframenumber",
CHECK(m.acquire() == slsDetectorDefs::OK);
CHECK(m.getReceiverCurrentFrameIndex() == val);
CHECK_THROWS_AS(m.setNextFrameNumber(0), sls::RuntimeError);
CHECK_THROWS_AS(m.setNextFrameNumber(0), RuntimeError);
if (m.getDetectorTypeAsString() == "Eiger") {
val = 281474976710655;
@ -509,8 +513,10 @@ TEST_CASE("Eiger partialread", "[.eigerintegration][partialread]") {
m.setDynamicRange(8);
m.setPartialReadout(256);
CHECK(m.getPartialReadout() == 256);
CHECK_THROWS_AS(m.setPartialReadout(1), sls::RuntimeError);
CHECK_THROWS_AS(m.setPartialReadout(1), RuntimeError);
CHECK(m.getPartialReadout() == 256);
CHECK_THROWS_AS(m.setPartialReadout(0), sls::RuntimeError);
CHECK_THROWS_AS(m.setPartialReadout(0), RuntimeError);
m.setPartialReadout(256);
}
}
} // namespace sls


@ -1,13 +1,17 @@
// SPDX-License-Identifier: LGPL-3.0-or-other
// Copyright (C) 2021 Contributors to the SLS Detector Package
#include "DetectorImpl.h"
#include "catch.hpp"
#include "sls/string_utils.h"
#include "tests/globals.h"
#include <iostream>
namespace sls {
using namespace Catch::literals;
TEST_CASE("Initialize a multi detector", "[.integration][.multi]") {
auto hostnames = sls::split(test::hostname, '+');
auto hostnames = split(test::hostname, '+');
DetectorImpl d(0, true, true);
d.setHostname(test::hostname.c_str());
@ -100,3 +104,5 @@ TEST_CASE("Set and read timers", "[.integration][.multi]") {
d.freeSharedMemory();
}
} // namespace sls


@ -1,3 +1,5 @@
// SPDX-License-Identifier: LGPL-3.0-or-other
// Copyright (C) 2021 Contributors to the SLS Detector Package
// tests-main.cpp
#define CATCH_CONFIG_MAIN
#include "catch.hpp"

File diff suppressed because it is too large

libs/pybind/CMakeLists.txt Normal file

@ -0,0 +1,299 @@
# CMakeLists.txt -- Build system for the pybind11 modules
#
# Copyright (c) 2015 Wenzel Jakob <wenzel@inf.ethz.ch>
#
# All rights reserved. Use of this source code is governed by a
# BSD-style license that can be found in the LICENSE file.
cmake_minimum_required(VERSION 3.4)
# The `cmake_minimum_required(VERSION 3.4...3.22)` syntax does not work with
# some versions of VS that have a patched CMake 3.11. This forces us to emulate
# the behavior using the following workaround:
if(${CMAKE_VERSION} VERSION_LESS 3.22)
cmake_policy(VERSION ${CMAKE_MAJOR_VERSION}.${CMAKE_MINOR_VERSION})
else()
cmake_policy(VERSION 3.22)
endif()
# Avoid infinite recursion if tests include this as a subdirectory
if(DEFINED PYBIND11_MASTER_PROJECT)
return()
endif()
# Extract project version from source
file(STRINGS "${CMAKE_CURRENT_SOURCE_DIR}/include/pybind11/detail/common.h"
pybind11_version_defines REGEX "#define PYBIND11_VERSION_(MAJOR|MINOR|PATCH) ")
foreach(ver ${pybind11_version_defines})
if(ver MATCHES [[#define PYBIND11_VERSION_(MAJOR|MINOR|PATCH) +([^ ]+)$]])
set(PYBIND11_VERSION_${CMAKE_MATCH_1} "${CMAKE_MATCH_2}")
endif()
endforeach()
if(PYBIND11_VERSION_PATCH MATCHES [[\.([a-zA-Z0-9]+)$]])
set(pybind11_VERSION_TYPE "${CMAKE_MATCH_1}")
endif()
string(REGEX MATCH "^[0-9]+" PYBIND11_VERSION_PATCH "${PYBIND11_VERSION_PATCH}")
project(
pybind11
LANGUAGES CXX
VERSION "${PYBIND11_VERSION_MAJOR}.${PYBIND11_VERSION_MINOR}.${PYBIND11_VERSION_PATCH}")
# Standard includes
include(GNUInstallDirs)
include(CMakePackageConfigHelpers)
include(CMakeDependentOption)
if(NOT pybind11_FIND_QUIETLY)
message(STATUS "pybind11 v${pybind11_VERSION} ${pybind11_VERSION_TYPE}")
endif()
# Check if pybind11 is being used directly or via add_subdirectory
if(CMAKE_SOURCE_DIR STREQUAL PROJECT_SOURCE_DIR)
### Warn if not an out-of-source builds
if(CMAKE_CURRENT_SOURCE_DIR STREQUAL CMAKE_CURRENT_BINARY_DIR)
set(lines
"You are building in-place. If that is not what you intended to "
"do, you can clean the source directory with:\n"
"rm -r CMakeCache.txt CMakeFiles/ cmake_uninstall.cmake pybind11Config.cmake "
"pybind11ConfigVersion.cmake tests/CMakeFiles/\n")
message(AUTHOR_WARNING ${lines})
endif()
set(PYBIND11_MASTER_PROJECT ON)
if(OSX AND CMAKE_VERSION VERSION_LESS 3.7)
# Bug in macOS CMake < 3.7 is unable to download catch
message(WARNING "CMAKE 3.7+ needed on macOS to download catch, and newer HIGHLY recommended")
elseif(WINDOWS AND CMAKE_VERSION VERSION_LESS 3.8)
# Only tested with 3.8+ in CI.
message(WARNING "CMAKE 3.8+ tested on Windows, previous versions untested")
endif()
message(STATUS "CMake ${CMAKE_VERSION}")
if(CMAKE_CXX_STANDARD)
set(CMAKE_CXX_EXTENSIONS OFF)
set(CMAKE_CXX_STANDARD_REQUIRED ON)
endif()
set(pybind11_system "")
set_property(GLOBAL PROPERTY USE_FOLDERS ON)
else()
set(PYBIND11_MASTER_PROJECT OFF)
set(pybind11_system SYSTEM)
endif()
# Options
option(PYBIND11_INSTALL "Install pybind11 header files?" ${PYBIND11_MASTER_PROJECT})
option(PYBIND11_TEST "Build pybind11 test suite?" ${PYBIND11_MASTER_PROJECT})
option(PYBIND11_NOPYTHON "Disable search for Python" OFF)
set(PYBIND11_INTERNALS_VERSION
""
CACHE STRING "Override the ABI version, may be used to enable the unstable ABI.")
cmake_dependent_option(
USE_PYTHON_INCLUDE_DIR
"Install pybind11 headers in Python include directory instead of default installation prefix"
OFF "PYBIND11_INSTALL" OFF)
cmake_dependent_option(PYBIND11_FINDPYTHON "Force new FindPython" OFF
"NOT CMAKE_VERSION VERSION_LESS 3.12" OFF)
# NB: when adding a header don't forget to also add it to setup.py
set(PYBIND11_HEADERS
include/pybind11/detail/class.h
include/pybind11/detail/common.h
include/pybind11/detail/descr.h
include/pybind11/detail/init.h
include/pybind11/detail/internals.h
include/pybind11/detail/type_caster_base.h
include/pybind11/detail/typeid.h
include/pybind11/attr.h
include/pybind11/buffer_info.h
include/pybind11/cast.h
include/pybind11/chrono.h
include/pybind11/common.h
include/pybind11/complex.h
include/pybind11/options.h
include/pybind11/eigen.h
include/pybind11/embed.h
include/pybind11/eval.h
include/pybind11/gil.h
include/pybind11/iostream.h
include/pybind11/functional.h
include/pybind11/numpy.h
include/pybind11/operators.h
include/pybind11/pybind11.h
include/pybind11/pytypes.h
include/pybind11/stl.h
include/pybind11/stl_bind.h
include/pybind11/stl/filesystem.h)
# Compare with grep and warn if mismatched
if(PYBIND11_MASTER_PROJECT AND NOT CMAKE_VERSION VERSION_LESS 3.12)
file(
GLOB_RECURSE _pybind11_header_check
LIST_DIRECTORIES false
RELATIVE "${CMAKE_CURRENT_SOURCE_DIR}"
CONFIGURE_DEPENDS "include/pybind11/*.h")
set(_pybind11_here_only ${PYBIND11_HEADERS})
set(_pybind11_disk_only ${_pybind11_header_check})
list(REMOVE_ITEM _pybind11_here_only ${_pybind11_header_check})
list(REMOVE_ITEM _pybind11_disk_only ${PYBIND11_HEADERS})
if(_pybind11_here_only)
message(AUTHOR_WARNING "PYBIND11_HEADERS has extra files:" ${_pybind11_here_only})
endif()
if(_pybind11_disk_only)
message(AUTHOR_WARNING "PYBIND11_HEADERS is missing files:" ${_pybind11_disk_only})
endif()
endif()
# CMake 3.12 added list(TRANSFORM <list> PREPEND
# But we can't use it yet
string(REPLACE "include/" "${CMAKE_CURRENT_SOURCE_DIR}/include/" PYBIND11_HEADERS
"${PYBIND11_HEADERS}")
# Cache variable so this can be used in parent projects
set(pybind11_INCLUDE_DIR
"${CMAKE_CURRENT_LIST_DIR}/include"
CACHE INTERNAL "Directory where pybind11 headers are located")
# Backward compatible variable for add_subdirectory mode
if(NOT PYBIND11_MASTER_PROJECT)
set(PYBIND11_INCLUDE_DIR
"${pybind11_INCLUDE_DIR}"
CACHE INTERNAL "")
endif()
# Note: when creating targets, you cannot use if statements at configure time -
# you need generator expressions, because those will be placed in the target file.
# You can also place ifs *in* the Config.in, but not here.
# This section builds targets, but does *not* touch Python
# Non-IMPORT targets cannot be defined twice
if(NOT TARGET pybind11_headers)
# Build the headers-only target (no Python included):
# (long name used here to keep this from clashing in subdirectory mode)
add_library(pybind11_headers INTERFACE)
add_library(pybind11::pybind11_headers ALIAS pybind11_headers) # to match exported target
add_library(pybind11::headers ALIAS pybind11_headers) # easier to use/remember
target_include_directories(
pybind11_headers ${pybind11_system} INTERFACE $<BUILD_INTERFACE:${pybind11_INCLUDE_DIR}>
$<INSTALL_INTERFACE:${CMAKE_INSTALL_INCLUDEDIR}>)
target_compile_features(pybind11_headers INTERFACE cxx_inheriting_constructors cxx_user_literals
cxx_right_angle_brackets)
if(NOT "${PYBIND11_INTERNALS_VERSION}" STREQUAL "")
target_compile_definitions(
pybind11_headers INTERFACE "PYBIND11_INTERNALS_VERSION=${PYBIND11_INTERNALS_VERSION}")
endif()
else()
# It is invalid to install a target twice, too.
set(PYBIND11_INSTALL OFF)
endif()
include("${CMAKE_CURRENT_SOURCE_DIR}/tools/pybind11Common.cmake")
# Relative directory setting
if(USE_PYTHON_INCLUDE_DIR AND DEFINED Python_INCLUDE_DIRS)
file(RELATIVE_PATH CMAKE_INSTALL_INCLUDEDIR ${CMAKE_INSTALL_PREFIX} ${Python_INCLUDE_DIRS})
elseif(USE_PYTHON_INCLUDE_DIR AND DEFINED PYTHON_INCLUDE_DIR)
file(RELATIVE_PATH CMAKE_INSTALL_INCLUDEDIR ${CMAKE_INSTALL_PREFIX} ${PYTHON_INCLUDE_DIRS})
endif()
if(PYBIND11_INSTALL)
install(DIRECTORY ${pybind11_INCLUDE_DIR}/pybind11 DESTINATION ${CMAKE_INSTALL_INCLUDEDIR})
set(PYBIND11_CMAKECONFIG_INSTALL_DIR
"${CMAKE_INSTALL_DATAROOTDIR}/cmake/${PROJECT_NAME}"
CACHE STRING "install path for pybind11Config.cmake")
if(IS_ABSOLUTE "${CMAKE_INSTALL_INCLUDEDIR}")
set(pybind11_INCLUDEDIR "${CMAKE_INSTALL_FULL_INCLUDEDIR}")
else()
set(pybind11_INCLUDEDIR "\$\{PACKAGE_PREFIX_DIR\}/${CMAKE_INSTALL_INCLUDEDIR}")
endif()
configure_package_config_file(
tools/${PROJECT_NAME}Config.cmake.in "${CMAKE_CURRENT_BINARY_DIR}/${PROJECT_NAME}Config.cmake"
INSTALL_DESTINATION ${PYBIND11_CMAKECONFIG_INSTALL_DIR})
if(CMAKE_VERSION VERSION_LESS 3.14)
# Remove CMAKE_SIZEOF_VOID_P from ConfigVersion.cmake since the library does
# not depend on architecture specific settings or libraries.
set(_PYBIND11_CMAKE_SIZEOF_VOID_P ${CMAKE_SIZEOF_VOID_P})
unset(CMAKE_SIZEOF_VOID_P)
write_basic_package_version_file(
${CMAKE_CURRENT_BINARY_DIR}/${PROJECT_NAME}ConfigVersion.cmake
VERSION ${PROJECT_VERSION}
COMPATIBILITY AnyNewerVersion)
set(CMAKE_SIZEOF_VOID_P ${_PYBIND11_CMAKE_SIZEOF_VOID_P})
else()
# CMake 3.14+ natively supports header-only libraries
write_basic_package_version_file(
${CMAKE_CURRENT_BINARY_DIR}/${PROJECT_NAME}ConfigVersion.cmake
VERSION ${PROJECT_VERSION}
COMPATIBILITY AnyNewerVersion ARCH_INDEPENDENT)
endif()
install(
FILES ${CMAKE_CURRENT_BINARY_DIR}/${PROJECT_NAME}Config.cmake
${CMAKE_CURRENT_BINARY_DIR}/${PROJECT_NAME}ConfigVersion.cmake
tools/FindPythonLibsNew.cmake
tools/pybind11Common.cmake
tools/pybind11Tools.cmake
tools/pybind11NewTools.cmake
DESTINATION ${PYBIND11_CMAKECONFIG_INSTALL_DIR})
if(NOT PYBIND11_EXPORT_NAME)
set(PYBIND11_EXPORT_NAME "${PROJECT_NAME}Targets")
endif()
install(TARGETS pybind11_headers EXPORT "${PYBIND11_EXPORT_NAME}")
install(
EXPORT "${PYBIND11_EXPORT_NAME}"
NAMESPACE "pybind11::"
DESTINATION ${PYBIND11_CMAKECONFIG_INSTALL_DIR})
# Uninstall target
if(PYBIND11_MASTER_PROJECT)
configure_file("${CMAKE_CURRENT_SOURCE_DIR}/tools/cmake_uninstall.cmake.in"
"${CMAKE_CURRENT_BINARY_DIR}/cmake_uninstall.cmake" IMMEDIATE @ONLY)
add_custom_target(uninstall COMMAND ${CMAKE_COMMAND} -P
${CMAKE_CURRENT_BINARY_DIR}/cmake_uninstall.cmake)
endif()
endif()
# BUILD_TESTING takes priority, but only if this is the master project
if(PYBIND11_MASTER_PROJECT AND DEFINED BUILD_TESTING)
if(BUILD_TESTING)
if(_pybind11_nopython)
message(FATAL_ERROR "Cannot activate tests in NOPYTHON mode")
else()
add_subdirectory(tests)
endif()
endif()
else()
if(PYBIND11_TEST)
if(_pybind11_nopython)
message(FATAL_ERROR "Cannot activate tests in NOPYTHON mode")
else()
add_subdirectory(tests)
endif()
endif()
endif()
# Better symmetry with find_package(pybind11 CONFIG) mode.
if(NOT PYBIND11_MASTER_PROJECT)
set(pybind11_FOUND
TRUE
CACHE INTERNAL "True if pybind11 and all required components found on the system")
endif()

libs/pybind/LICENSE Normal file

@ -0,0 +1,29 @@
Copyright (c) 2016 Wenzel Jakob <wenzel.jakob@epfl.ch>, All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:
1. Redistributions of source code must retain the above copyright notice, this
list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright notice,
this list of conditions and the following disclaimer in the documentation
and/or other materials provided with the distribution.
3. Neither the name of the copyright holder nor the names of its contributors
may be used to endorse or promote products derived from this software
without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
Please also refer to the file .github/CONTRIBUTING.md, which clarifies licensing of
external contributions to this project including patches, pull requests, etc.

libs/pybind/MANIFEST.in Normal file

@ -0,0 +1,5 @@
recursive-include pybind11/include/pybind11 *.h
recursive-include pybind11 *.py
recursive-include pybind11 py.typed
include pybind11/share/cmake/pybind11/*.cmake
include LICENSE README.rst pyproject.toml setup.py setup.cfg

libs/pybind/README.rst Normal file

@ -0,0 +1,180 @@
.. figure:: https://github.com/pybind/pybind11/raw/master/docs/pybind11-logo.png
:alt: pybind11 logo
**pybind11 — Seamless operability between C++11 and Python**
|Latest Documentation Status| |Stable Documentation Status| |Gitter chat| |GitHub Discussions| |CI| |Build status|
|Repology| |PyPI package| |Conda-forge| |Python Versions|
`Setuptools example <https://github.com/pybind/python_example>`_
`Scikit-build example <https://github.com/pybind/scikit_build_example>`_
`CMake example <https://github.com/pybind/cmake_example>`_
.. start
**pybind11** is a lightweight header-only library that exposes C++ types
in Python and vice versa, mainly to create Python bindings of existing
C++ code. Its goals and syntax are similar to the excellent
`Boost.Python <http://www.boost.org/doc/libs/1_58_0/libs/python/doc/>`_
library by David Abrahams: to minimize boilerplate code in traditional
extension modules by inferring type information using compile-time
introspection.
The main issue with Boost.Python—and the reason for creating such a
similar project—is Boost. Boost is an enormously large and complex suite
of utility libraries that works with almost every C++ compiler in
existence. This compatibility has its cost: arcane template tricks and
workarounds are necessary to support the oldest and buggiest of compiler
specimens. Now that C++11-compatible compilers are widely available,
this heavy machinery has become an excessively large and unnecessary
dependency.
Think of this library as a tiny self-contained version of Boost.Python
with everything stripped away that isn't relevant for binding
generation. Without comments, the core header files only require ~4K
lines of code and depend on Python (3.6+, or PyPy) and the C++
standard library. This compact implementation was possible thanks to
some of the new C++11 language features (specifically: tuples, lambda
functions and variadic templates). Since its creation, this library has
grown beyond Boost.Python in many ways, leading to dramatically simpler
binding code in many common situations.
Tutorial and reference documentation is provided at
`pybind11.readthedocs.io <https://pybind11.readthedocs.io/en/latest>`_.
A PDF version of the manual is available
`here <https://pybind11.readthedocs.io/_/downloads/en/latest/pdf/>`_.
And the source code is always available at
`github.com/pybind/pybind11 <https://github.com/pybind/pybind11>`_.
Core features
-------------
pybind11 can map the following core C++ features to Python:
- Functions accepting and returning custom data structures per value,
reference, or pointer
- Instance methods and static methods
- Overloaded functions
- Instance attributes and static attributes
- Arbitrary exception types
- Enumerations
- Callbacks
- Iterators and ranges
- Custom operators
- Single and multiple inheritance
- STL data structures
- Smart pointers with reference counting like ``std::shared_ptr``
- Internal references with correct reference counting
- C++ classes with virtual (and pure virtual) methods can be extended
in Python
Goodies
-------
In addition to the core functionality, pybind11 provides some extra
goodies:
- Python 3.6+, and PyPy3 7.3 are supported with an implementation-agnostic
interface (pybind11 2.9 was the last version to support Python 2 and 3.5).
- It is possible to bind C++11 lambda functions with captured
variables. The lambda capture data is stored inside the resulting
Python function object.
- pybind11 uses C++11 move constructors and move assignment operators
whenever possible to efficiently transfer custom data types.
- It's easy to expose the internal storage of custom data types through
Python's buffer protocols. This is handy e.g. for fast conversion
between C++ matrix classes like Eigen and NumPy without expensive
copy operations.
- pybind11 can automatically vectorize functions so that they are
transparently applied to all entries of one or more NumPy array
arguments.
- Python's slice-based access and assignment operations can be
supported with just a few lines of code.
- Everything is contained in just a few header files; there is no need
to link against any additional libraries.
- Binaries are generally smaller by a factor of at least 2 compared to
equivalent bindings generated by Boost.Python. A recent pybind11
conversion of PyRosetta, an enormous Boost.Python binding project,
`reported <https://graylab.jhu.edu/Sergey/2016.RosettaCon/PyRosetta-4.pdf>`_
a binary size reduction of **5.4x** and compile time reduction by
**5.8x**.
- Function signatures are precomputed at compile time (using
``constexpr``), leading to smaller binaries.
- With little extra effort, C++ types can be pickled and unpickled
similar to regular Python objects.
Supported compilers
-------------------
1. Clang/LLVM 3.3 or newer (for Apple Xcode's clang, this is 5.0.0 or
newer)
2. GCC 4.8 or newer
3. Microsoft Visual Studio 2017 or newer
4. Intel classic C++ compiler 18 or newer (ICC 20.2 tested in CI)
5. Cygwin/GCC (previously tested on 2.5.1)
6. NVCC (CUDA 11.0 tested in CI)
7. NVIDIA PGI (20.9 tested in CI)
About
-----
This project was created by `Wenzel
Jakob <http://rgl.epfl.ch/people/wjakob>`_. Significant features and/or
improvements to the code were contributed by Jonas Adler, Lori A. Burns,
Sylvain Corlay, Eric Cousineau, Aaron Gokaslan, Ralf Grosse-Kunstleve, Trent Houliston, Axel
Huebl, @hulucc, Yannick Jadoul, Sergey Lyskov Johan Mabille, Tomasz Miąsko,
Dean Moldovan, Ben Pritchard, Jason Rhinelander, Boris Schäling, Pim
Schellart, Henry Schreiner, Ivan Smirnov, Boris Staletic, and Patrick Stewart.
We thank Google for a generous financial contribution to the continuous
integration infrastructure used by this project.
Contributing
~~~~~~~~~~~~
See the `contributing
guide <https://github.com/pybind/pybind11/blob/master/.github/CONTRIBUTING.md>`_
for information on building and contributing to pybind11.
License
~~~~~~~
pybind11 is provided under a BSD-style license that can be found in the
`LICENSE <https://github.com/pybind/pybind11/blob/master/LICENSE>`_
file. By using, distributing, or contributing to this project, you agree
to the terms and conditions of this license.
.. |Latest Documentation Status| image:: https://readthedocs.org/projects/pybind11/badge?version=latest
:target: http://pybind11.readthedocs.org/en/latest
.. |Stable Documentation Status| image:: https://img.shields.io/badge/docs-stable-blue.svg
:target: http://pybind11.readthedocs.org/en/stable
.. |Gitter chat| image:: https://img.shields.io/gitter/room/gitterHQ/gitter.svg
:target: https://gitter.im/pybind/Lobby
.. |CI| image:: https://github.com/pybind/pybind11/workflows/CI/badge.svg
:target: https://github.com/pybind/pybind11/actions
.. |Build status| image:: https://ci.appveyor.com/api/projects/status/riaj54pn4h08xy40?svg=true
:target: https://ci.appveyor.com/project/wjakob/pybind11
.. |PyPI package| image:: https://img.shields.io/pypi/v/pybind11.svg
:target: https://pypi.org/project/pybind11/
.. |Conda-forge| image:: https://img.shields.io/conda/vn/conda-forge/pybind11.svg
:target: https://github.com/conda-forge/pybind11-feedstock
.. |Repology| image:: https://repology.org/badge/latest-versions/python:pybind11.svg
:target: https://repology.org/project/python:pybind11/versions
.. |Python Versions| image:: https://img.shields.io/pypi/pyversions/pybind11.svg
:target: https://pypi.org/project/pybind11/
.. |GitHub Discussions| image:: https://img.shields.io/static/v1?label=Discussions&message=Ask&color=blue&logo=github
:target: https://github.com/pybind/pybind11/discussions

libs/pybind/docs/Doxyfile Normal file

@ -0,0 +1,21 @@
PROJECT_NAME = pybind11
INPUT = ../include/pybind11/
RECURSIVE = YES
GENERATE_HTML = NO
GENERATE_LATEX = NO
GENERATE_XML = YES
XML_OUTPUT = .build/doxygenxml
XML_PROGRAMLISTING = YES
MACRO_EXPANSION = YES
EXPAND_ONLY_PREDEF = YES
EXPAND_AS_DEFINED = PYBIND11_RUNTIME_EXCEPTION
ALIASES = "rst=\verbatim embed:rst"
ALIASES += "endrst=\endverbatim"
QUIET = YES
WARNINGS = YES
WARN_IF_UNDOCUMENTED = NO
PREDEFINED = PYBIND11_NOINLINE

libs/pybind/docs/Makefile Normal file

@ -0,0 +1,192 @@
# Makefile for Sphinx documentation
#
# You can set these variables from the command line.
SPHINXOPTS =
SPHINXBUILD = sphinx-build
PAPER =
BUILDDIR = .build
# User-friendly check for sphinx-build
ifeq ($(shell which $(SPHINXBUILD) >/dev/null 2>&1; echo $$?), 1)
$(error The '$(SPHINXBUILD)' command was not found. Make sure you have Sphinx installed, then set the SPHINXBUILD environment variable to point to the full path of the '$(SPHINXBUILD)' executable. Alternatively you can add the directory with the executable to your PATH. If you don't have Sphinx installed, grab it from http://sphinx-doc.org/)
endif
# Internal variables.
PAPEROPT_a4 = -D latex_paper_size=a4
PAPEROPT_letter = -D latex_paper_size=letter
ALLSPHINXOPTS = -d $(BUILDDIR)/doctrees $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) .
# the i18n builder cannot share the environment and doctrees with the others
I18NSPHINXOPTS = $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) .
.PHONY: help clean html dirhtml singlehtml pickle json htmlhelp qthelp devhelp epub latex latexpdf text man changes linkcheck doctest coverage gettext
help:
@echo "Please use \`make <target>' where <target> is one of"
@echo " html to make standalone HTML files"
@echo " dirhtml to make HTML files named index.html in directories"
@echo " singlehtml to make a single large HTML file"
@echo " pickle to make pickle files"
@echo " json to make JSON files"
@echo " htmlhelp to make HTML files and a HTML help project"
@echo " qthelp to make HTML files and a qthelp project"
@echo " applehelp to make an Apple Help Book"
@echo " devhelp to make HTML files and a Devhelp project"
@echo " epub to make an epub"
@echo " latex to make LaTeX files, you can set PAPER=a4 or PAPER=letter"
@echo " latexpdf to make LaTeX files and run them through pdflatex"
@echo " latexpdfja to make LaTeX files and run them through platex/dvipdfmx"
@echo " text to make text files"
@echo " man to make manual pages"
@echo " texinfo to make Texinfo files"
@echo " info to make Texinfo files and run them through makeinfo"
@echo " gettext to make PO message catalogs"
@echo " changes to make an overview of all changed/added/deprecated items"
@echo " xml to make Docutils-native XML files"
@echo " pseudoxml to make pseudoxml-XML files for display purposes"
@echo " linkcheck to check all external links for integrity"
@echo " doctest to run all doctests embedded in the documentation (if enabled)"
@echo " coverage to run coverage check of the documentation (if enabled)"
clean:
rm -rf $(BUILDDIR)/*
html:
$(SPHINXBUILD) -b html $(ALLSPHINXOPTS) $(BUILDDIR)/html
@echo
@echo "Build finished. The HTML pages are in $(BUILDDIR)/html."
dirhtml:
$(SPHINXBUILD) -b dirhtml $(ALLSPHINXOPTS) $(BUILDDIR)/dirhtml
@echo
@echo "Build finished. The HTML pages are in $(BUILDDIR)/dirhtml."
singlehtml:
$(SPHINXBUILD) -b singlehtml $(ALLSPHINXOPTS) $(BUILDDIR)/singlehtml
@echo
@echo "Build finished. The HTML page is in $(BUILDDIR)/singlehtml."
pickle:
$(SPHINXBUILD) -b pickle $(ALLSPHINXOPTS) $(BUILDDIR)/pickle
@echo
@echo "Build finished; now you can process the pickle files."
json:
$(SPHINXBUILD) -b json $(ALLSPHINXOPTS) $(BUILDDIR)/json
@echo
@echo "Build finished; now you can process the JSON files."
htmlhelp:
$(SPHINXBUILD) -b htmlhelp $(ALLSPHINXOPTS) $(BUILDDIR)/htmlhelp
@echo
@echo "Build finished; now you can run HTML Help Workshop with the" \
".hhp project file in $(BUILDDIR)/htmlhelp."
qthelp:
$(SPHINXBUILD) -b qthelp $(ALLSPHINXOPTS) $(BUILDDIR)/qthelp
@echo
@echo "Build finished; now you can run "qcollectiongenerator" with the" \
".qhcp project file in $(BUILDDIR)/qthelp, like this:"
@echo "# qcollectiongenerator $(BUILDDIR)/qthelp/pybind11.qhcp"
@echo "To view the help file:"
@echo "# assistant -collectionFile $(BUILDDIR)/qthelp/pybind11.qhc"
applehelp:
$(SPHINXBUILD) -b applehelp $(ALLSPHINXOPTS) $(BUILDDIR)/applehelp
@echo
@echo "Build finished. The help book is in $(BUILDDIR)/applehelp."
@echo "N.B. You won't be able to view it unless you put it in" \
"~/Library/Documentation/Help or install it in your application" \
"bundle."
devhelp:
$(SPHINXBUILD) -b devhelp $(ALLSPHINXOPTS) $(BUILDDIR)/devhelp
@echo
@echo "Build finished."
@echo "To view the help file:"
@echo "# mkdir -p $$HOME/.local/share/devhelp/pybind11"
@echo "# ln -s $(BUILDDIR)/devhelp $$HOME/.local/share/devhelp/pybind11"
@echo "# devhelp"
epub:
$(SPHINXBUILD) -b epub $(ALLSPHINXOPTS) $(BUILDDIR)/epub
@echo
@echo "Build finished. The epub file is in $(BUILDDIR)/epub."
latex:
$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
@echo
@echo "Build finished; the LaTeX files are in $(BUILDDIR)/latex."
@echo "Run \`make' in that directory to run these through (pdf)latex" \
"(use \`make latexpdf' here to do that automatically)."
latexpdf:
$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
@echo "Running LaTeX files through pdflatex..."
$(MAKE) -C $(BUILDDIR)/latex all-pdf
@echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex."
latexpdfja:
$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
@echo "Running LaTeX files through platex and dvipdfmx..."
$(MAKE) -C $(BUILDDIR)/latex all-pdf-ja
@echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex."
text:
$(SPHINXBUILD) -b text $(ALLSPHINXOPTS) $(BUILDDIR)/text
@echo
@echo "Build finished. The text files are in $(BUILDDIR)/text."
man:
$(SPHINXBUILD) -b man $(ALLSPHINXOPTS) $(BUILDDIR)/man
@echo
@echo "Build finished. The manual pages are in $(BUILDDIR)/man."
texinfo:
$(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo
@echo
@echo "Build finished. The Texinfo files are in $(BUILDDIR)/texinfo."
@echo "Run \`make' in that directory to run these through makeinfo" \
"(use \`make info' here to do that automatically)."
info:
$(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo
@echo "Running Texinfo files through makeinfo..."
make -C $(BUILDDIR)/texinfo info
@echo "makeinfo finished; the Info files are in $(BUILDDIR)/texinfo."
gettext:
$(SPHINXBUILD) -b gettext $(I18NSPHINXOPTS) $(BUILDDIR)/locale
@echo
@echo "Build finished. The message catalogs are in $(BUILDDIR)/locale."
changes:
$(SPHINXBUILD) -b changes $(ALLSPHINXOPTS) $(BUILDDIR)/changes
@echo
@echo "The overview file is in $(BUILDDIR)/changes."
linkcheck:
$(SPHINXBUILD) -b linkcheck $(ALLSPHINXOPTS) $(BUILDDIR)/linkcheck
@echo
@echo "Link check complete; look for any errors in the above output " \
"or in $(BUILDDIR)/linkcheck/output.txt."
doctest:
$(SPHINXBUILD) -b doctest $(ALLSPHINXOPTS) $(BUILDDIR)/doctest
@echo "Testing of doctests in the sources finished, look at the " \
"results in $(BUILDDIR)/doctest/output.txt."
coverage:
$(SPHINXBUILD) -b coverage $(ALLSPHINXOPTS) $(BUILDDIR)/coverage
@echo "Testing of coverage in the sources finished, look at the " \
"results in $(BUILDDIR)/coverage/python.txt."
xml:
$(SPHINXBUILD) -b xml $(ALLSPHINXOPTS) $(BUILDDIR)/xml
@echo
@echo "Build finished. The XML files are in $(BUILDDIR)/xml."
pseudoxml:
$(SPHINXBUILD) -b pseudoxml $(ALLSPHINXOPTS) $(BUILDDIR)/pseudoxml
@echo
@echo "Build finished. The pseudo-XML files are in $(BUILDDIR)/pseudoxml."


@ -0,0 +1,3 @@
.highlight .go {
color: #707070;
}


@ -0,0 +1,81 @@
Chrono
======
When including the additional header file :file:`pybind11/chrono.h` conversions
from C++11 chrono datatypes to python datetime objects are automatically enabled.
This header also enables conversions of python floats (often from sources such
as ``time.monotonic()``, ``time.perf_counter()`` and ``time.process_time()``)
into durations.
An overview of clocks in C++11
------------------------------
A point of confusion when using these conversions is the differences between
clocks provided in C++11. There are three clock types defined by the C++11
standard and users can define their own if needed. Each of these clocks have
different properties and when converting to and from python will give different
results.
The first clock defined by the standard is ``std::chrono::system_clock``. This
clock measures the current date and time. However, this clock changes with
updates to the operating system time. For example, if your time is synchronised
with a time server this clock will change. This makes this clock a poor choice
for timing purposes but good for measuring the wall time.
The second clock defined in the standard is ``std::chrono::steady_clock``.
This clock ticks at a steady rate and is never adjusted. This makes it excellent
for timing purposes, however the value in this clock does not correspond to the
current date and time. Often this clock will be the amount of time your system
has been on, although it does not have to be. This clock will never be the same
clock as the system clock as the system clock can change but steady clocks
cannot.
The third clock defined in the standard is ``std::chrono::high_resolution_clock``.
This clock is the clock that has the highest resolution out of the clocks in the
system. It is normally a typedef to either the system clock or the steady clock
but can be its own independent clock. This is important as when using these
conversions as the types you get in python for this clock might be different
depending on the system.
If it is a typedef of the system clock, python will get datetime objects, but if
it is a different clock they will be timedelta objects.
Provided conversions
--------------------
.. rubric:: C++ to Python
- ``std::chrono::system_clock::time_point`` → ``datetime.datetime``
System clock times are converted to python datetime instances. They are
in the local timezone, but do not have any timezone information attached
to them (they are naive datetime objects).
- ``std::chrono::duration`` → ``datetime.timedelta``
Durations are converted to timedeltas, any precision in the duration
greater than microseconds is lost by rounding towards zero.
- ``std::chrono::[other_clocks]::time_point`` → ``datetime.timedelta``
Any clock time that is not the system clock is converted to a time delta.
This timedelta measures the time from the clocks epoch to now.
.. rubric:: Python to C++
- ``datetime.datetime`` or ``datetime.date`` or ``datetime.time`` → ``std::chrono::system_clock::time_point``
Date/time objects are converted into system clock timepoints. Any
timezone information is ignored and the type is treated as a naive
object.
- ``datetime.timedelta`` → ``std::chrono::duration``
Time delta are converted into durations with microsecond precision.
- ``datetime.timedelta`` → ``std::chrono::[other_clocks]::time_point``
Time deltas that are converted into clock timepoints are treated as
the amount of time from the start of the clocks epoch.
- ``float`` → ``std::chrono::duration``
Floats that are passed to C++ as durations will be interpreted as a number of
seconds. These will be converted to the duration using ``duration_cast``
from the float.
- ``float`` → ``std::chrono::[other_clocks]::time_point``
Floats that are passed to C++ as time points will be interpreted as the
number of seconds from the start of the clocks epoch.


@ -0,0 +1,93 @@
Custom type casters
===================
In very rare cases, applications may require custom type casters that cannot be
expressed using the abstractions provided by pybind11, thus requiring raw
Python C API calls. This is fairly advanced usage and should only be pursued by
experts who are familiar with the intricacies of Python reference counting.
The following snippets demonstrate how this works for a very simple ``inty``
type that should be convertible from Python types that provide a
``__int__(self)`` method.
.. code-block:: cpp
struct inty { long long_value; };
void print(inty s) {
std::cout << s.long_value << std::endl;
}
The following Python snippet demonstrates the intended usage from the Python side:
.. code-block:: python
class A:
def __int__(self):
return 123
from example import print
print(A())
To register the necessary conversion routines, it is necessary to add an
instantiation of the ``pybind11::detail::type_caster<T>`` template.
Although this is an implementation detail, adding an instantiation of this
type is explicitly allowed.
.. code-block:: cpp
namespace pybind11 { namespace detail {
template <> struct type_caster<inty> {
public:
/**
* This macro establishes the name 'inty' in
* function signatures and declares a local variable
* 'value' of type inty
*/
PYBIND11_TYPE_CASTER(inty, const_name("inty"));
/**
* Conversion part 1 (Python->C++): convert a PyObject into a inty
* instance or return false upon failure. The second argument
* indicates whether implicit conversions should be applied.
*/
bool load(handle src, bool) {
/* Extract PyObject from handle */
PyObject *source = src.ptr();
/* Try converting into a Python integer value */
PyObject *tmp = PyNumber_Long(source);
if (!tmp)
return false;
/* Now try to convert into a C++ int */
value.long_value = PyLong_AsLong(tmp);
Py_DECREF(tmp);
/* Ensure return code was OK (to avoid out-of-range errors etc) */
return !(value.long_value == -1 && !PyErr_Occurred());
}
/**
* Conversion part 2 (C++ -> Python): convert an inty instance into
* a Python object. The second and third arguments are used to
* indicate the return value policy and parent object (for
* ``return_value_policy::reference_internal``) and are generally
* ignored by implicit casters.
*/
static handle cast(inty src, return_value_policy /* policy */, handle /* parent */) {
return PyLong_FromLong(src.long_value);
}
};
}} // namespace pybind11::detail
.. note::
A ``type_caster<T>`` defined with ``PYBIND11_TYPE_CASTER(T, ...)`` requires
that ``T`` is default-constructible (``value`` is first default constructed
and then ``load()`` assigns to it).
.. warning::
When using custom type casters, it's important to declare them consistently
in every compilation unit of the Python extension module. Otherwise,
undefined behavior can ensue.


@ -0,0 +1,310 @@
Eigen
#####
`Eigen <http://eigen.tuxfamily.org>`_ is a C++ header-based library for dense and
sparse linear algebra. Due to its popularity and widespread adoption, pybind11
provides transparent conversion and limited mapping support between Eigen and
Scientific Python linear algebra data types.
To enable the built-in Eigen support you must include the optional header file
:file:`pybind11/eigen.h`.
Pass-by-value
=============
When binding a function with ordinary Eigen dense object arguments (for
example, ``Eigen::MatrixXd``), pybind11 will accept any input value that is
already (or convertible to) a ``numpy.ndarray`` with dimensions compatible with
the Eigen type, copy its values into a temporary Eigen variable of the
appropriate type, then call the function with this temporary variable.
Sparse matrices are similarly copied to or from
``scipy.sparse.csr_matrix``/``scipy.sparse.csc_matrix`` objects.
Pass-by-reference
=================
One major limitation of the above is that every data conversion implicitly
involves a copy, which can be both expensive (for large matrices) and disallows
binding functions that change their (Matrix) arguments. Pybind11 allows you to
work around this by using Eigen's ``Eigen::Ref<MatrixType>`` class much as you
would when writing a function taking a generic type in Eigen itself (subject to
some limitations discussed below).
When calling a bound function accepting a ``Eigen::Ref<const MatrixType>``
type, pybind11 will attempt to avoid copying by using an ``Eigen::Map`` object
that maps into the source ``numpy.ndarray`` data: this requires both that the
data types are the same (e.g. ``dtype='float64'`` and ``MatrixType::Scalar`` is
``double``); and that the storage is layout compatible. The latter limitation
is discussed in detail in the section below, and requires careful
consideration: by default, numpy matrices and Eigen matrices are *not* storage
compatible.
If the numpy matrix cannot be used as is (either because its types differ, e.g.
passing an array of integers to an Eigen parameter requiring doubles, or
because the storage is incompatible), pybind11 makes a temporary copy and
passes the copy instead.
When a bound function parameter is instead ``Eigen::Ref<MatrixType>`` (note the
lack of ``const``), pybind11 will only allow the function to be called if it
can be mapped *and* if the numpy array is writeable (that is
``a.flags.writeable`` is true). Any access (including modification) made to
the passed variable will be transparently carried out directly on the
``numpy.ndarray``.
This means you can write code such as the following and have it work as
expected:
.. code-block:: cpp
void scale_by_2(Eigen::Ref<Eigen::VectorXd> v) {
v *= 2;
}
Note, however, that you will likely run into limitations due to numpy and
Eigen's different default storage order for data; see the below section on
:ref:`storage_orders` for details on how to bind code that won't run into such
limitations.
.. note::
Passing by reference is not supported for sparse types.
Returning values to Python
==========================
When returning an ordinary dense Eigen matrix type to numpy (e.g.
``Eigen::MatrixXd`` or ``Eigen::RowVectorXf``) pybind11 keeps the matrix and
returns a numpy array that directly references the Eigen matrix: no copy of the
data is performed. The numpy array will have ``array.flags.owndata`` set to
``False`` to indicate that it does not own the data, and the lifetime of the
stored Eigen matrix will be tied to the returned ``array``.
If you bind a function with a non-reference, ``const`` return type (e.g.
``const Eigen::MatrixXd``), the same thing happens except that pybind11 also
sets the numpy array's ``writeable`` flag to false.
If you return an lvalue reference or pointer, the usual pybind11 rules apply,
as dictated by the binding function's return value policy (see the
documentation on :ref:`return_value_policies` for full details). That means,
without an explicit return value policy, lvalue references will be copied and
pointers will be managed by pybind11. In order to avoid copying, you should
explicitly specify an appropriate return value policy, as in the following
example:
.. code-block:: cpp
class MyClass {
Eigen::MatrixXd big_mat = Eigen::MatrixXd::Zero(10000, 10000);
public:
Eigen::MatrixXd &getMatrix() { return big_mat; }
const Eigen::MatrixXd &viewMatrix() { return big_mat; }
};
// Later, in binding code:
py::class_<MyClass>(m, "MyClass")
.def(py::init<>())
.def("copy_matrix", &MyClass::getMatrix) // Makes a copy!
.def("get_matrix", &MyClass::getMatrix, py::return_value_policy::reference_internal)
.def("view_matrix", &MyClass::viewMatrix, py::return_value_policy::reference_internal)
;
.. code-block:: python
a = MyClass()
m = a.get_matrix() # flags.writeable = True, flags.owndata = False
v = a.view_matrix() # flags.writeable = False, flags.owndata = False
c = a.copy_matrix() # flags.writeable = True, flags.owndata = True
# m[5,6] and v[5,6] refer to the same element, c[5,6] does not.
Note in this example that ``py::return_value_policy::reference_internal`` is
used to tie the life of the MyClass object to the life of the returned arrays.
You may also return an ``Eigen::Ref``, ``Eigen::Map`` or other map-like Eigen
object (for example, the return value of ``matrix.block()`` and related
methods) that map into a dense Eigen type. When doing so, the default
behaviour of pybind11 is to simply reference the returned data: you must take
care to ensure that this data remains valid! You may ask pybind11 to
explicitly *copy* such a return value by using the
``py::return_value_policy::copy`` policy when binding the function. You may
also use ``py::return_value_policy::reference_internal`` or a
``py::keep_alive`` to ensure the data stays valid as long as the returned numpy
array does.
When returning such a reference or map, pybind11 additionally respects the
readonly-status of the returned value, marking the numpy array as non-writeable
if the reference or map was itself read-only.
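As a hedged sketch continuing the ``MyClass`` example above (the method name
``top_left`` is illustrative), a block view into ``big_mat`` could be returned
like this, relying on the block-return support described in this section:

.. code-block:: cpp

    py::class_<MyClass>(m, "MyClass")
        .def(py::init<>())
        // block() returns a map into big_mat; reference_internal ties the
        // lifetime of the returned numpy view to the MyClass instance.
        .def("top_left",
             [](MyClass &self) { return self.getMatrix().block(0, 0, 2, 2); },
             py::return_value_policy::reference_internal);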
.. note::
Sparse types are always copied when returned.
.. _storage_orders:
Storage orders
==============
Passing arguments via ``Eigen::Ref`` has some limitations that you must be
aware of in order to effectively pass matrices by reference. First and
foremost is that the default ``Eigen::Ref<MatrixType>`` class requires
contiguous storage along columns (for column-major types, the default in Eigen)
or rows if ``MatrixType`` is specifically an ``Eigen::RowMajor`` storage type.
The former, Eigen's default, is incompatible with ``numpy``'s default row-major
storage, and so you will not be able to pass numpy arrays to Eigen by reference
without making one of two changes.
(Note that this does not apply to vectors (or column or row matrices): for such
types the "row-major" and "column-major" distinction is meaningless).
The first approach is to change the use of ``Eigen::Ref<MatrixType>`` to the
more general ``Eigen::Ref<MatrixType, 0, Eigen::Stride<Eigen::Dynamic,
Eigen::Dynamic>>`` (or similar type with a fully dynamic stride type in the
third template argument). Since this is a rather cumbersome type, pybind11
provides a ``py::EigenDRef<MatrixType>`` type alias for your convenience (along
with EigenDMap for the equivalent Map, and EigenDStride for just the stride
type).
This type allows Eigen to map into any arbitrary storage order. This is not
the default in Eigen for performance reasons: contiguous storage allows
vectorization that cannot be done when storage is not known to be contiguous at
compile time. The default ``Eigen::Ref`` stride type allows non-contiguous
storage along the outer dimension (that is, the rows of a column-major matrix
or columns of a row-major matrix), but not along the inner dimension.
This type, however, has the added benefit of also being able to map numpy array
slices. For example, the following (contrived) example uses Eigen with a numpy
slice to multiply by 2 all coefficients that are both on even rows (0, 2, 4,
...) and in columns 2, 5, or 8:
.. code-block:: cpp
m.def("scale", [](py::EigenDRef<Eigen::MatrixXd> m, double c) { m *= c; });
.. code-block:: python
# myarray = np.array(...)
scale(myarray[0::2, 2:9:3], 2.0)
The second approach to avoid copying is more intrusive: rearranging the
underlying data types to not run into the non-contiguous storage problem in the
first place. In particular, that means using matrices with ``Eigen::RowMajor``
storage, where appropriate, such as:
.. code-block:: cpp
using RowMatrixXd = Eigen::Matrix<double, Eigen::Dynamic, Eigen::Dynamic, Eigen::RowMajor>;
// Use RowMatrixXd instead of MatrixXd
Now bound functions accepting ``Eigen::Ref<RowMatrixXd>`` arguments will be
callable with numpy's (default) arrays without involving a copy.
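For example, a minimal sketch (the function name ``set_diagonal`` is
illustrative) that uses the ``RowMatrixXd`` alias defined above to modify a
default, C-ordered ``float64`` numpy array in place without copying:

.. code-block:: cpp

    // Row-major Ref: maps numpy's default storage order directly.
    m.def("set_diagonal", [](Eigen::Ref<RowMatrixXd> mat, double value) {
        mat.diagonal().setConstant(value);
    });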
You can, alternatively, change the storage order that numpy arrays use by
adding the ``order='F'`` option when creating an array:
.. code-block:: python
myarray = np.array(source, order="F")
Such an object will be passable to a bound function accepting an
``Eigen::Ref<MatrixXd>`` (or similar column-major Eigen type).
One major caveat with this approach, however, is that it is not entirely as
easy as simply flipping all Eigen or numpy usage from one to the other: some
operations may alter the storage order of a numpy array. For example, ``a2 =
array.transpose()`` results in ``a2`` being a view of ``array`` that references
the same data, but in the opposite storage order!
While this approach allows fully optimized vectorized calculations in Eigen, it
cannot be used with array slices, unlike the first approach.
When *returning* a matrix to Python (either a regular matrix, a reference via
``Eigen::Ref<>``, or a map/block into a matrix), no special storage
consideration is required: the created numpy array will have the required
stride that allows numpy to properly interpret the array, whatever its storage
order.
Failing rather than copying
===========================
The default behaviour when binding ``Eigen::Ref<const MatrixType>`` Eigen
references is to copy matrix values when passed a numpy array that does not
conform to the element type of ``MatrixType`` or does not have a compatible
stride layout. If you want to explicitly avoid copying in such a case, you
should bind arguments using the ``py::arg().noconvert()`` annotation (as
described in the :ref:`nonconverting_arguments` documentation).
The following example shows arguments that don't allow data copying to take
place:
.. code-block:: cpp
// The method and function to be bound:
class MyClass {
// ...
double some_method(const Eigen::Ref<const MatrixXd> &matrix) { /* ... */ }
};
float some_function(const Eigen::Ref<const MatrixXf> &big,
const Eigen::Ref<const MatrixXf> &small) {
// ...
}
// The associated binding code:
using namespace pybind11::literals; // for "arg"_a
py::class_<MyClass>(m, "MyClass")
// ... other class definitions
.def("some_method", &MyClass::some_method, py::arg().noconvert());
m.def("some_function", &some_function,
"big"_a.noconvert(), // <- Don't allow copying for this arg
"small"_a // <- This one can be copied if needed
);
With the above binding code, attempting to call the ``some_method(m)``
method on a ``MyClass`` object, or attempting to call ``some_function(m, m2)``
will raise a ``RuntimeError`` rather than making a temporary copy of the array.
It will, however, allow the ``m2`` argument to be copied into a temporary if
necessary.
Note that explicitly specifying ``.noconvert()`` is not required for *mutable*
Eigen references (e.g. ``Eigen::Ref<MatrixXd>`` without ``const`` on the
``MatrixXd``): mutable references will never be called with a temporary copy.
Vectors versus column/row matrices
==================================
Eigen and numpy have fundamentally different notions of a vector. In Eigen, a
vector is simply a matrix with the number of columns or rows set to 1 at
compile time (for a column vector or row vector, respectively). NumPy, in
contrast, has comparable 2-dimensional 1xN and Nx1 arrays, but *also* has
1-dimensional arrays of size N.
When passing a 2-dimensional 1xN or Nx1 array to Eigen, the Eigen type must
have matching dimensions: That is, you cannot pass a 2-dimensional Nx1 numpy
array to an Eigen value expecting a row vector, or a 1xN numpy array as a
column vector argument.
On the other hand, pybind11 allows you to pass 1-dimensional arrays of length N
as Eigen parameters. If the Eigen type can hold a column vector of length N it
will be passed as such a column vector. If not, but the Eigen type constraints
will accept a row vector, it will be passed as a row vector. (The column
vector takes precedence when both are supported, for example, when passing a
1D numpy array to a MatrixXd argument). Note that the type need not be
explicitly a vector: it is permitted to pass a 1D numpy array of size 5 to an
Eigen ``Matrix<double, Dynamic, 5>``: you would end up with a 1x5 Eigen matrix.
Passing the same to an ``Eigen::MatrixXd`` would result in a 5x1 Eigen matrix.
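As a small hedged sketch (the name ``norm`` is illustrative), a bound function
taking an ``Eigen::VectorXd`` accepts a 1-dimensional numpy array directly:

.. code-block:: cpp

    // Callable as norm(np.array([1.0, 2.0, 2.0])), which returns 3.0:
    // the 1-D array of length 3 is passed as a 3x1 column vector.
    m.def("norm", [](const Eigen::VectorXd &v) { return v.norm(); });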
When returning an Eigen vector to numpy, the conversion is ambiguous: a row
vector of length 4 could be returned as either a 1D array of length 4, or as a
2D array of size 1x4. When encountering such a situation, pybind11 compromises
by considering the returned Eigen type: if it is a compile-time vector--that
is, the type has either the number of rows or columns set to 1 at compile
time--pybind11 converts to a 1D numpy array when returning the value. For
instances that are a vector only at run-time (e.g. ``MatrixXd``,
``Matrix<float, Dynamic, 4>``), pybind11 returns the vector as a 2D array to
numpy. If this isn't what you want, you can use ``array.reshape(...)`` to get
a view of the same data in the desired dimensions.
.. seealso::
The file :file:`tests/test_eigen.cpp` contains a complete example that
shows how to pass Eigen sparse and dense data types in more detail.


@ -0,0 +1,109 @@
Functional
##########
The following features must be enabled by including :file:`pybind11/functional.h`.
Callbacks and passing anonymous functions
=========================================
The C++11 standard brought lambda functions and the generic polymorphic
function wrapper ``std::function<>`` to the C++ programming language, which
enable powerful new ways of working with functions. Lambda functions come in
two flavors: stateless lambda functions resemble classic function pointers that
link to an anonymous piece of code, while stateful lambda functions
additionally depend on captured variables that are stored in an anonymous
*lambda closure object*.
Here is a simple example of a C++ function that takes an arbitrary function
(stateful or stateless) with signature ``int -> int`` as an argument and runs
it with the value 10.
.. code-block:: cpp
int func_arg(const std::function<int(int)> &f) {
return f(10);
}
The example below is more involved: it takes a function of signature ``int -> int``
and returns another function of the same kind. The return value is a stateful
lambda function, which stores the value ``f`` in the capture object and adds 1 to
its return value upon execution.
.. code-block:: cpp
std::function<int(int)> func_ret(const std::function<int(int)> &f) {
return [f](int i) {
return f(i) + 1;
};
}
This example demonstrates using Python named parameters in C++ callbacks, which
requires using ``py::cpp_function`` as a wrapper. Usage is similar to defining
methods of classes:
.. code-block:: cpp
py::cpp_function func_cpp() {
return py::cpp_function([](int i) { return i+1; },
py::arg("number"));
}
After including the extra header file :file:`pybind11/functional.h`, it is almost
trivial to generate binding code for all of these functions.
.. code-block:: cpp
#include <pybind11/functional.h>
PYBIND11_MODULE(example, m) {
m.def("func_arg", &func_arg);
m.def("func_ret", &func_ret);
m.def("func_cpp", &func_cpp);
}
The following interactive session shows how to call them from Python.
.. code-block:: pycon
$ python
>>> import example
>>> def square(i):
... return i * i
...
>>> example.func_arg(square)
100
>>> square_plus_1 = example.func_ret(square)
>>> square_plus_1(4)
17
>>> plus_1 = example.func_cpp()
>>> plus_1(number=43)
44
.. warning::
Keep in mind that passing a function from C++ to Python (or vice versa)
will instantiate a piece of wrapper code that translates function
invocations between the two languages. Naturally, this translation
increases the computational cost of each function call somewhat. A
problematic situation can arise when a function is copied back and forth
between Python and C++ many times in a row, in which case the underlying
wrappers will accumulate correspondingly. The resulting long sequence of
C++ -> Python -> C++ -> ... roundtrips can significantly decrease
performance.
There is one exception: pybind11 detects the case where a stateless function
(i.e. a function pointer or a lambda function without captured variables)
is passed as an argument to another C++ function exposed in Python. In this
case, there is no overhead. Pybind11 will extract the underlying C++
function pointer from the wrapped function to sidestep a potential C++ ->
Python -> C++ roundtrip. This is demonstrated in :file:`tests/test_callbacks.cpp`.
.. note::
This functionality is very useful when generating bindings for callbacks in
C++ libraries (e.g. GUI libraries, asynchronous networking libraries, etc.).
The file :file:`tests/test_callbacks.cpp` contains a complete example
that demonstrates how to work with callbacks and anonymous functions in
more detail.


@ -0,0 +1,43 @@
.. _type-conversions:
Type conversions
################
Apart from enabling cross-language function calls, a fundamental problem
that a binding tool like pybind11 must address is to provide access to
native Python types in C++ and vice versa. There are three fundamentally
different ways to do this—which approach is preferable for a particular type
depends on the situation at hand.
1. Use a native C++ type everywhere. In this case, the type must be wrapped
using pybind11-generated bindings so that Python can interact with it.
2. Use a native Python type everywhere. It will need to be wrapped so that
C++ functions can interact with it.
3. Use a native C++ type on the C++ side and a native Python type on the
Python side. pybind11 refers to this as a *type conversion*.
Type conversions are the most "natural" option in the sense that native
(non-wrapped) types are used everywhere. The main downside is that a copy
of the data must be made on every Python ↔ C++ transition: this is
needed since the C++ and Python versions of the same type generally won't
have the same memory layout.
pybind11 can perform many kinds of conversions automatically. An overview
is provided in the table ":ref:`conversion_table`".
The following subsections discuss the differences between these options in more
detail. The main focus in this section is on type conversions, which represent
the last case of the above list.
.. toctree::
:maxdepth: 1
overview
strings
stl
functional
chrono
eigen
custom


@ -0,0 +1,170 @@
Overview
########
.. rubric:: 1. Native type in C++, wrapper in Python
Exposing a custom C++ type using :class:`py::class_` was covered in detail
in the :doc:`/classes` section. There, the underlying data structure is
always the original C++ class while the :class:`py::class_` wrapper provides
a Python interface. Internally, when an object like this is sent from C++ to
Python, pybind11 will just add the outer wrapper layer over the native C++
object. Getting it back from Python is just a matter of peeling off the
wrapper.
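As a hedged, self-contained sketch (the ``Point`` type and the module name
``example`` are illustrative), this is what such a wrapped native C++ type
looks like:

.. code-block:: cpp

    #include <pybind11/pybind11.h>
    namespace py = pybind11;

    // Native C++ type; Python only sees the py::class_ wrapper around it.
    struct Point {
        Point(double x, double y) : x(x), y(y) {}
        double x, y;
    };

    PYBIND11_MODULE(example, m) {
        py::class_<Point>(m, "Point")
            .def(py::init<double, double>())
            .def_readwrite("x", &Point::x)
            .def_readwrite("y", &Point::y);
    }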
.. rubric:: 2. Wrapper in C++, native type in Python
This is the exact opposite situation. Now, we have a type which is native to
Python, like a ``tuple`` or a ``list``. One way to get this data into C++ is
with the :class:`py::object` family of wrappers. These are explained in more
detail in the :doc:`/advanced/pycpp/object` section. We'll just give a quick
example here:
.. code-block:: cpp
void print_list(py::list my_list) {
for (auto item : my_list)
std::cout << item << " ";
}
.. code-block:: pycon
>>> print_list([1, 2, 3])
1 2 3
The Python ``list`` is not converted in any way -- it's just wrapped in a C++
:class:`py::list` class. At its core it's still a Python object. Copying a
:class:`py::list` will do the usual reference-counting like in Python.
Returning the object to Python will just remove the thin wrapper.
.. rubric:: 3. Converting between native C++ and Python types
In the previous two cases we had a native type in one language and a wrapper in
the other. Now, we have native types on both sides and we convert between them.
.. code-block:: cpp
void print_vector(const std::vector<int> &v) {
for (auto item : v)
std::cout << item << "\n";
}
.. code-block:: pycon
>>> print_vector([1, 2, 3])
1 2 3
In this case, pybind11 will construct a new ``std::vector<int>`` and copy each
element from the Python ``list``. The newly constructed object will be passed
to ``print_vector``. The same thing happens in the other direction: a new
``list`` is made to match the value returned from C++.
Lots of these conversions are supported out of the box, as shown in the table
below. They are very convenient, but keep in mind that these conversions are
fundamentally based on copying data. This is perfectly fine for small immutable
types but it may become quite expensive for large data structures. This can be
avoided by overriding the automatic conversion with a custom wrapper (i.e. the
above-mentioned approach 1). This requires some manual effort and more details
are available in the :ref:`opaque` section.
.. _conversion_table:
List of all builtin conversions
-------------------------------
The following basic data types are supported out of the box (some may require
an additional extension header to be included). To pass other data structures
as arguments and return values, refer to the section on binding :ref:`classes`.
+------------------------------------+---------------------------+-----------------------------------+
| Data type | Description | Header file |
+====================================+===========================+===================================+
| ``int8_t``, ``uint8_t`` | 8-bit integers | :file:`pybind11/pybind11.h` |
+------------------------------------+---------------------------+-----------------------------------+
| ``int16_t``, ``uint16_t`` | 16-bit integers | :file:`pybind11/pybind11.h` |
+------------------------------------+---------------------------+-----------------------------------+
| ``int32_t``, ``uint32_t`` | 32-bit integers | :file:`pybind11/pybind11.h` |
+------------------------------------+---------------------------+-----------------------------------+
| ``int64_t``, ``uint64_t`` | 64-bit integers | :file:`pybind11/pybind11.h` |
+------------------------------------+---------------------------+-----------------------------------+
| ``ssize_t``, ``size_t`` | Platform-dependent size | :file:`pybind11/pybind11.h` |
+------------------------------------+---------------------------+-----------------------------------+
| ``float``, ``double`` | Floating point types | :file:`pybind11/pybind11.h` |
+------------------------------------+---------------------------+-----------------------------------+
| ``bool`` | Two-state Boolean type | :file:`pybind11/pybind11.h` |
+------------------------------------+---------------------------+-----------------------------------+
| ``char`` | Character literal | :file:`pybind11/pybind11.h` |
+------------------------------------+---------------------------+-----------------------------------+
| ``char16_t`` | UTF-16 character literal | :file:`pybind11/pybind11.h` |
+------------------------------------+---------------------------+-----------------------------------+
| ``char32_t`` | UTF-32 character literal | :file:`pybind11/pybind11.h` |
+------------------------------------+---------------------------+-----------------------------------+
| ``wchar_t`` | Wide character literal | :file:`pybind11/pybind11.h` |
+------------------------------------+---------------------------+-----------------------------------+
| ``const char *`` | UTF-8 string literal | :file:`pybind11/pybind11.h` |
+------------------------------------+---------------------------+-----------------------------------+
| ``const char16_t *`` | UTF-16 string literal | :file:`pybind11/pybind11.h` |
+------------------------------------+---------------------------+-----------------------------------+
| ``const char32_t *`` | UTF-32 string literal | :file:`pybind11/pybind11.h` |
+------------------------------------+---------------------------+-----------------------------------+
| ``const wchar_t *`` | Wide string literal | :file:`pybind11/pybind11.h` |
+------------------------------------+---------------------------+-----------------------------------+
| ``std::string`` | STL dynamic UTF-8 string | :file:`pybind11/pybind11.h` |
+------------------------------------+---------------------------+-----------------------------------+
| ``std::u16string`` | STL dynamic UTF-16 string | :file:`pybind11/pybind11.h` |
+------------------------------------+---------------------------+-----------------------------------+
| ``std::u32string`` | STL dynamic UTF-32 string | :file:`pybind11/pybind11.h` |
+------------------------------------+---------------------------+-----------------------------------+
| ``std::wstring`` | STL dynamic wide string | :file:`pybind11/pybind11.h` |
+------------------------------------+---------------------------+-----------------------------------+
| ``std::string_view``, | STL C++17 string views | :file:`pybind11/pybind11.h` |
| ``std::u16string_view``, etc. | | |
+------------------------------------+---------------------------+-----------------------------------+
| ``std::pair<T1, T2>`` | Pair of two custom types | :file:`pybind11/pybind11.h` |
+------------------------------------+---------------------------+-----------------------------------+
| ``std::tuple<...>`` | Arbitrary tuple of types | :file:`pybind11/pybind11.h` |
+------------------------------------+---------------------------+-----------------------------------+
| ``std::reference_wrapper<...>`` | Reference type wrapper | :file:`pybind11/pybind11.h` |
+------------------------------------+---------------------------+-----------------------------------+
| ``std::complex<T>`` | Complex numbers | :file:`pybind11/complex.h` |
+------------------------------------+---------------------------+-----------------------------------+
| ``std::array<T, Size>`` | STL static array | :file:`pybind11/stl.h` |
+------------------------------------+---------------------------+-----------------------------------+
| ``std::vector<T>`` | STL dynamic array | :file:`pybind11/stl.h` |
+------------------------------------+---------------------------+-----------------------------------+
| ``std::deque<T>`` | STL double-ended queue | :file:`pybind11/stl.h` |
+------------------------------------+---------------------------+-----------------------------------+
| ``std::valarray<T>`` | STL value array | :file:`pybind11/stl.h` |
+------------------------------------+---------------------------+-----------------------------------+
| ``std::list<T>`` | STL linked list | :file:`pybind11/stl.h` |
+------------------------------------+---------------------------+-----------------------------------+
| ``std::map<T1, T2>`` | STL ordered map | :file:`pybind11/stl.h` |
+------------------------------------+---------------------------+-----------------------------------+
| ``std::unordered_map<T1, T2>`` | STL unordered map | :file:`pybind11/stl.h` |
+------------------------------------+---------------------------+-----------------------------------+
| ``std::set<T>`` | STL ordered set | :file:`pybind11/stl.h` |
+------------------------------------+---------------------------+-----------------------------------+
| ``std::unordered_set<T>`` | STL unordered set | :file:`pybind11/stl.h` |
+------------------------------------+---------------------------+-----------------------------------+
| ``std::optional<T>`` | STL optional type (C++17) | :file:`pybind11/stl.h` |
+------------------------------------+---------------------------+-----------------------------------+
| ``std::experimental::optional<T>`` | STL optional type (exp.) | :file:`pybind11/stl.h` |
+------------------------------------+---------------------------+-----------------------------------+
| ``std::variant<...>`` | Type-safe union (C++17) | :file:`pybind11/stl.h` |
+------------------------------------+---------------------------+-----------------------------------+
| ``std::filesystem::path`` | STL path (C++17) [#]_ | :file:`pybind11/stl/filesystem.h` |
+------------------------------------+---------------------------+-----------------------------------+
| ``std::function<...>`` | STL polymorphic function | :file:`pybind11/functional.h` |
+------------------------------------+---------------------------+-----------------------------------+
| ``std::chrono::duration<...>`` | STL time duration | :file:`pybind11/chrono.h` |
+------------------------------------+---------------------------+-----------------------------------+
| ``std::chrono::time_point<...>`` | STL date/time | :file:`pybind11/chrono.h` |
+------------------------------------+---------------------------+-----------------------------------+
| ``Eigen::Matrix<...>`` | Eigen: dense matrix | :file:`pybind11/eigen.h` |
+------------------------------------+---------------------------+-----------------------------------+
| ``Eigen::Map<...>`` | Eigen: mapped memory | :file:`pybind11/eigen.h` |
+------------------------------------+---------------------------+-----------------------------------+
| ``Eigen::SparseMatrix<...>`` | Eigen: sparse matrix | :file:`pybind11/eigen.h` |
+------------------------------------+---------------------------+-----------------------------------+
.. [#] ``std::filesystem::path`` is converted to ``pathlib.Path`` and
``os.PathLike`` is converted to ``std::filesystem::path``.


@ -0,0 +1,249 @@
STL containers
##############
Automatic conversion
====================
When including the additional header file :file:`pybind11/stl.h`, conversions
between ``std::vector<>``/``std::deque<>``/``std::list<>``/``std::array<>``/``std::valarray<>``,
``std::set<>``/``std::unordered_set<>``, and
``std::map<>``/``std::unordered_map<>`` and the Python ``list``, ``set`` and
``dict`` data structures are automatically enabled. The types ``std::pair<>``
and ``std::tuple<>`` are already supported out of the box with just the core
:file:`pybind11/pybind11.h` header.
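A minimal hedged sketch (the function name ``inventory`` is illustrative): with
:file:`pybind11/stl.h` included, the returned ``std::map`` is converted into a
Python ``dict``.

.. code-block:: cpp

    #include <pybind11/stl.h>  // enables the automatic container conversions
    #include <map>
    #include <string>

    // The returned std::map is copied into a Python dict such as
    // {"apples": 3, "pears": 5}.
    m.def("inventory", []() {
        return std::map<std::string, int>{{"apples", 3}, {"pears", 5}};
    });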
The major downside of these implicit conversions is that containers must be
converted (i.e. copied) on every Python->C++ and C++->Python transition, which
can have implications on the program semantics and performance. Please read the
next sections for more details and alternative approaches that avoid this.
.. note::
Arbitrary nesting of any of these types is possible.
.. seealso::
The file :file:`tests/test_stl.cpp` contains a complete
example that demonstrates how to pass STL data types in more detail.
.. _cpp17_container_casters:
C++17 library containers
========================
The :file:`pybind11/stl.h` header also includes support for ``std::optional<>``
and ``std::variant<>``. These require a C++17 compiler and standard library.
In C++14 mode, ``std::experimental::optional<>`` is supported if available.
Various versions of these containers also exist for C++11 (e.g. in Boost).
pybind11 provides an easy way to specialize the ``type_caster`` for such
types:
.. code-block:: cpp
// `boost::optional` as an example -- can be any `std::optional`-like container
namespace pybind11 { namespace detail {
template <typename T>
struct type_caster<boost::optional<T>> : optional_caster<boost::optional<T>> {};
}}
The above should be placed in a header file and included in all translation units
where automatic conversion is needed. Similarly, a specialization can be provided
for custom variant types:
.. code-block:: cpp
// `boost::variant` as an example -- can be any `std::variant`-like container
namespace pybind11 { namespace detail {
template <typename... Ts>
struct type_caster<boost::variant<Ts...>> : variant_caster<boost::variant<Ts...>> {};
// Specifies the function used to visit the variant -- `apply_visitor` instead of `visit`
template <>
struct visit_helper<boost::variant> {
template <typename... Args>
static auto call(Args &&...args) -> decltype(boost::apply_visitor(args...)) {
return boost::apply_visitor(args...);
}
};
}} // namespace pybind11::detail
The ``visit_helper`` specialization is not required if your ``name::variant`` provides
a ``name::visit()`` function. For any other function name, the specialization must be
included to tell pybind11 how to visit the variant.
.. warning::
When converting a ``variant`` type, pybind11 follows the same rules as when
determining which function overload to call (:ref:`overload_resolution`), and
so the same caveats hold. In particular, the order in which the ``variant``'s
alternatives are listed is important, since pybind11 will try conversions in
this order. This means that, for example, when converting ``variant<int, bool>``,
the ``bool`` variant will never be selected, as any Python ``bool`` is already
an ``int`` and is convertible to a C++ ``int``. Changing the order of alternatives
(and using ``variant<bool, int>``, in this example) provides a solution.
.. note::
pybind11 only supports the modern implementation of ``boost::variant``
which makes use of variadic templates. This requires Boost 1.56 or newer.
.. _opaque:
Making opaque types
===================
pybind11 heavily relies on a template matching mechanism to convert parameters
and return values that are constructed from STL data types such as vectors,
linked lists, hash tables, etc. This even works in a recursive manner, for
instance to deal with lists of hash maps of pairs of elementary and custom
types, etc.
However, a fundamental limitation of this approach is that internal conversions
between Python and C++ types involve a copy operation that prevents
pass-by-reference semantics. What does this mean?
Suppose we bind the following function
.. code-block:: cpp
void append_1(std::vector<int> &v) {
v.push_back(1);
}
and call it from Python, the following happens:
.. code-block:: pycon
>>> v = [5, 6]
>>> append_1(v)
>>> print(v)
[5, 6]
As you can see, when passing STL data structures by reference, modifications
are not propagated back to the Python side. A similar situation arises when
exposing STL data structures using the ``def_readwrite`` or ``def_readonly``
functions:
.. code-block:: cpp
/* ... definition ... */
class MyClass {
std::vector<int> contents;
};
/* ... binding code ... */
py::class_<MyClass>(m, "MyClass")
.def(py::init<>())
.def_readwrite("contents", &MyClass::contents);
In this case, properties can be read and written in their entirety. However, an
``append`` operation involving such a list type has no effect:
.. code-block:: pycon
>>> m = MyClass()
>>> m.contents = [5, 6]
>>> print(m.contents)
[5, 6]
>>> m.contents.append(7)
>>> print(m.contents)
[5, 6]
Finally, the involved copy operations can be costly when dealing with very
large lists. To deal with all of the above situations, pybind11 provides a
macro named ``PYBIND11_MAKE_OPAQUE(T)`` that disables the template-based
conversion machinery of types, thus rendering them *opaque*. The contents of
opaque objects are never inspected or extracted, hence they *can* be passed by
reference. For instance, to turn ``std::vector<int>`` into an opaque type, add
the declaration
.. code-block:: cpp
PYBIND11_MAKE_OPAQUE(std::vector<int>);
before any binding code (e.g. invocations to ``class_::def()``, etc.). This
macro must be specified at the top level (and outside of any namespaces), since
it adds a template instantiation of ``type_caster``. If your binding code consists of
multiple compilation units, it must be present in every file (typically via a
common header) preceding any usage of ``std::vector<int>``. Opaque types must
also have a corresponding ``class_`` declaration to associate them with a name
in Python, and to define a set of available operations, e.g.:
.. code-block:: cpp
py::class_<std::vector<int>>(m, "IntVector")
.def(py::init<>())
.def("clear", &std::vector<int>::clear)
.def("pop_back", &std::vector<int>::pop_back)
.def("__len__", [](const std::vector<int> &v) { return v.size(); })
.def("__iter__", [](std::vector<int> &v) {
return py::make_iterator(v.begin(), v.end());
}, py::keep_alive<0, 1>()) /* Keep vector alive while iterator is used */
// ....
.. seealso::
The file :file:`tests/test_opaque_types.cpp` contains a complete
example that demonstrates how to create and expose opaque types using
pybind11 in more detail.
.. _stl_bind:
Binding STL containers
======================
The ability to expose STL containers as native Python objects is a fairly
common request, hence pybind11 also provides an optional header file named
:file:`pybind11/stl_bind.h` that does exactly this. The mapped containers try
to match the behavior of their native Python counterparts as much as possible.
The following example showcases usage of :file:`pybind11/stl_bind.h`:
.. code-block:: cpp
// Don't forget this
#include <pybind11/stl_bind.h>
PYBIND11_MAKE_OPAQUE(std::vector<int>);
PYBIND11_MAKE_OPAQUE(std::map<std::string, double>);
// ...
// later in binding code:
py::bind_vector<std::vector<int>>(m, "VectorInt");
py::bind_map<std::map<std::string, double>>(m, "MapStringDouble");
When binding STL containers pybind11 considers the types of the container's
elements to decide whether the container should be confined to the local module
(via the :ref:`module_local` feature). If the container element types are
anything other than already-bound custom types bound without
``py::module_local()`` the container binding will have ``py::module_local()``
applied. This includes converting types such as numeric types, strings, Eigen
types; and types that have not yet been bound at the time of the stl container
binding. This module-local binding is designed to avoid potential conflicts
between module bindings (for example, from two separate modules each attempting
to bind ``std::vector<int>`` as a python type).
It is possible to override this behavior to force a definition to be either
module-local or global. To do so, you can pass the attributes
``py::module_local()`` (to make the binding module-local) or
``py::module_local(false)`` (to make the binding global) into the
``py::bind_vector`` or ``py::bind_map`` arguments:
.. code-block:: cpp
py::bind_vector<std::vector<int>>(m, "VectorInt", py::module_local(false));
Note, however, that such a global binding would make it impossible to load this
module at the same time as any other pybind module that also attempts to bind
the same container type (``std::vector<int>`` in the above example).
See :ref:`module_local` for more details on module-local bindings.
.. seealso::
The file :file:`tests/test_stl_binders.cpp` shows how to use the
convenience STL container wrappers.


@ -0,0 +1,292 @@
Strings, bytes and Unicode conversions
######################################
Passing Python strings to C++
=============================
When a Python ``str`` is passed from Python to a C++ function that accepts
``std::string`` or ``char *`` as arguments, pybind11 will encode the Python
string to UTF-8. All Python ``str`` can be encoded in UTF-8, so this operation
does not fail.
The C++ language is encoding agnostic. It is the responsibility of the
programmer to track encodings. It's often easiest to simply `use UTF-8
everywhere <http://utf8everywhere.org/>`_.
.. code-block:: c++
m.def("utf8_test",
[](const std::string &s) {
cout << "utf-8 is icing on the cake.\n";
cout << s;
}
);
m.def("utf8_charptr",
[](const char *s) {
cout << "My favorite food is\n";
cout << s;
}
);
.. code-block:: pycon
>>> utf8_test("🎂")
utf-8 is icing on the cake.
🎂
>>> utf8_charptr("🍕")
My favorite food is
🍕
.. note::
Some terminal emulators do not support UTF-8 or emoji fonts and may not
display the example above correctly.
The results are the same whether the C++ function accepts arguments by value or
reference, and whether or not ``const`` is used.
Passing bytes to C++
--------------------
A Python ``bytes`` object will be passed to C++ functions that accept
``std::string`` or ``char*`` *without* conversion. In order to make a function
*only* accept ``bytes`` (and not ``str``), declare it as taking a ``py::bytes``
argument.
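A minimal hedged sketch (the name ``byte_length`` is illustrative): because the
parameter type is ``py::bytes``, calling the function with a ``str`` raises a
``TypeError``.

.. code-block:: cpp

    // Accepts only Python bytes; no encoding or decoding takes place.
    m.def("byte_length", [](const py::bytes &b) { return py::len(b); });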
Returning C++ strings to Python
===============================
When a C++ function returns a ``std::string`` or ``char*`` to a Python caller,
**pybind11 will assume that the string is valid UTF-8** and will decode it to a
native Python ``str``, using the same API as Python uses to perform
``bytes.decode('utf-8')``. If this implicit conversion fails, pybind11 will
raise a ``UnicodeDecodeError``.
.. code-block:: c++
m.def("std_string_return",
[]() {
return std::string("This string needs to be UTF-8 encoded");
}
);
.. code-block:: pycon
>>> isinstance(example.std_string_return(), str)
True
Because UTF-8 is inclusive of pure ASCII, there is never any issue with
returning a pure ASCII string to Python. If there is any possibility that the
string is not pure ASCII, it is necessary to ensure the encoding is valid
UTF-8.
.. warning::
Implicit conversion assumes that a returned ``char *`` is null-terminated.
If there is no null terminator a buffer overrun will occur.
Explicit conversions
--------------------
If some C++ code constructs a ``std::string`` that is not a UTF-8 string, one
can perform an explicit conversion and return a ``py::str`` object. Explicit
conversion has the same overhead as implicit conversion.
.. code-block:: c++
// This uses the Python C API to convert Latin-1 to Unicode
m.def("str_output",
[]() {
std::string s = "Send your r\xe9sum\xe9 to Alice in HR"; // Latin-1
py::str py_s = PyUnicode_DecodeLatin1(s.data(), s.length());
return py_s;
}
);
.. code-block:: pycon
>>> str_output()
'Send your résumé to Alice in HR'
The `Python C API
<https://docs.python.org/3/c-api/unicode.html#built-in-codecs>`_ provides
several built-in codecs.
One could also use a third party encoding library such as libiconv to transcode
to UTF-8.
Return C++ strings without conversion
-------------------------------------
If the data in a C++ ``std::string`` does not represent text and should be
returned to Python as ``bytes``, then one can return the data as a
``py::bytes`` object.
.. code-block:: c++
m.def("return_bytes",
[]() {
std::string s("\xba\xd0\xba\xd0"); // Not valid UTF-8
return py::bytes(s); // Return the data without transcoding
}
);
.. code-block:: pycon
>>> example.return_bytes()
b'\xba\xd0\xba\xd0'
Note the asymmetry: pybind11 will convert ``bytes`` to ``std::string`` without
encoding, but cannot convert ``std::string`` back to ``bytes`` implicitly.
.. code-block:: c++
m.def("asymmetry",
[](std::string s) { // Accepts str or bytes from Python
return s; // Looks harmless, but implicitly converts to str
}
);
.. code-block:: pycon
>>> isinstance(example.asymmetry(b"have some bytes"), str)
True
>>> example.asymmetry(b"\xba\xd0\xba\xd0") # invalid utf-8 as bytes
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xba in position 0: invalid start byte
Wide character strings
======================
When a Python ``str`` is passed to a C++ function expecting ``std::wstring``,
``wchar_t*``, ``std::u16string`` or ``std::u32string``, the ``str`` will be
encoded to UTF-16 or UTF-32 depending on how the C++ compiler implements each
type, in the platform's native endianness. When strings of these types are
returned, they are assumed to contain valid UTF-16 or UTF-32, and will be
decoded to Python ``str``.
.. code-block:: c++
#define UNICODE
#include <windows.h>
m.def("set_window_text",
[](HWND hwnd, std::wstring s) {
// Call SetWindowText with null-terminated UTF-16 string
::SetWindowText(hwnd, s.c_str());
}
);
m.def("get_window_text",
[](HWND hwnd) {
const int buffer_size = ::GetWindowTextLength(hwnd) + 1;
auto buffer = std::make_unique< wchar_t[] >(buffer_size);
::GetWindowText(hwnd, buffer.get(), buffer_size);
std::wstring text(buffer.get());
// wstring will be converted to Python str
return text;
}
);
Strings in multibyte encodings such as Shift-JIS must be transcoded to
UTF-8/16/32 before being returned to Python.
Character literals
==================
C++ functions that accept character literals as input will receive the first
character of a Python ``str`` as their input. If the string is longer than one
Unicode character, trailing characters will be ignored.
When a character literal is returned from C++ (such as a ``char`` or a
``wchar_t``), it will be converted to a ``str`` that represents the single
character.
.. code-block:: c++
m.def("pass_char", [](char c) { return c; });
m.def("pass_wchar", [](wchar_t w) { return w; });
.. code-block:: pycon
>>> example.pass_char("A")
'A'
While C++ will cast integers to character types (``char c = 0x65;``), pybind11
does not convert Python integers to characters implicitly. The Python function
``chr()`` can be used to convert integers to characters.
.. code-block:: pycon
>>> example.pass_char(0x65)
TypeError
>>> example.pass_char(chr(0x65))
'A'
If the desire is to work with an 8-bit integer, use ``int8_t`` or ``uint8_t``
as the argument type.
Grapheme clusters
-----------------
A single grapheme may be represented by two or more Unicode characters. For
example 'é' is usually represented as U+00E9 but can also be expressed as the
combining character sequence U+0065 U+0301 (that is, the letter 'e' followed by
a combining acute accent). The combining character will be lost if the
two-character sequence is passed as an argument, even though it renders as a
single grapheme.
.. code-block:: pycon
>>> example.pass_wchar("é")
'é'
>>> combining_e_acute = "e" + "\u0301"
>>> combining_e_acute
'é'
>>> combining_e_acute == "é"
False
>>> example.pass_wchar(combining_e_acute)
'e'
Normalizing combining characters before passing the character literal to C++
may resolve *some* of these issues:
.. code-block:: pycon
>>> example.pass_wchar(unicodedata.normalize("NFC", combining_e_acute))
'é'
In some languages (Thai for example), there are `graphemes that cannot be
expressed as a single Unicode code point
<http://unicode.org/reports/tr29/#Grapheme_Cluster_Boundaries>`_, so there is
no way to capture them in a C++ character type.
C++17 string views
==================
C++17 string views are automatically supported when compiling in C++17 mode.
They follow the same rules for encoding and decoding as the corresponding STL
string type (for example, a ``std::u16string_view`` argument will be passed
UTF-16-encoded data, and a returned ``std::string_view`` will be decoded as
UTF-8).
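A small hedged sketch (the module and function names are illustrative; the
extension must be compiled in C++17 mode):

.. code-block:: cpp

    #include <pybind11/pybind11.h>
    #include <string_view>
    namespace py = pybind11;

    PYBIND11_MODULE(example, m) {
        // The returned string_view is decoded as UTF-8 into a Python str;
        // the string literal outlives the conversion.
        m.def("greeting", []() { return std::string_view("hello, world"); });
    }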
References
==========
* `The Absolute Minimum Every Software Developer Absolutely, Positively Must Know About Unicode and Character Sets (No Excuses!) <https://www.joelonsoftware.com/2003/10/08/the-absolute-minimum-every-software-developer-absolutely-positively-must-know-about-unicode-and-character-sets-no-excuses/>`_
* `C++ - Using STL Strings at Win32 API Boundaries <https://msdn.microsoft.com/en-ca/magazine/mt238407.aspx>`_

File diff suppressed because it is too large


@ -0,0 +1,262 @@
.. _embedding:
Embedding the interpreter
#########################
While pybind11 is mainly focused on extending Python using C++, it's also
possible to do the reverse: embed the Python interpreter into a C++ program.
All of the other documentation pages still apply here, so refer to them for
general pybind11 usage. This section will cover a few extra things required
for embedding.
Getting started
===============
A basic executable with an embedded interpreter can be created with just a few
lines of CMake and the ``pybind11::embed`` target, as shown below. For more
information, see :doc:`/compiling`.
.. code-block:: cmake
cmake_minimum_required(VERSION 3.4)
project(example)
find_package(pybind11 REQUIRED) # or `add_subdirectory(pybind11)`
add_executable(example main.cpp)
target_link_libraries(example PRIVATE pybind11::embed)
The essential structure of the ``main.cpp`` file looks like this:
.. code-block:: cpp
#include <pybind11/embed.h> // everything needed for embedding
namespace py = pybind11;
int main() {
py::scoped_interpreter guard{}; // start the interpreter and keep it alive
py::print("Hello, World!"); // use the Python API
}
The interpreter must be initialized before using any Python API, which includes
all the functions and classes in pybind11. The RAII guard class ``scoped_interpreter``
takes care of the interpreter lifetime. After the guard is destroyed, the interpreter
shuts down and clears its memory. No Python functions can be called after this.
Executing Python code
=====================
There are a few different ways to run Python code. One option is to use ``eval``,
``exec`` or ``eval_file``, as explained in :ref:`eval`. Here is a quick example in
the context of an executable with an embedded interpreter:
.. code-block:: cpp
#include <pybind11/embed.h>
namespace py = pybind11;
int main() {
py::scoped_interpreter guard{};
py::exec(R"(
kwargs = dict(name="World", number=42)
message = "Hello, {name}! The answer is {number}".format(**kwargs)
print(message)
)");
}
Alternatively, similar results can be achieved using pybind11's API (see
:doc:`/advanced/pycpp/index` for more details).
.. code-block:: cpp
#include <pybind11/embed.h>
namespace py = pybind11;
using namespace py::literals;
int main() {
py::scoped_interpreter guard{};
auto kwargs = py::dict("name"_a="World", "number"_a=42);
auto message = "Hello, {name}! The answer is {number}"_s.format(**kwargs);
py::print(message);
}
The two approaches can also be combined:
.. code-block:: cpp
#include <pybind11/embed.h>
#include <iostream>
namespace py = pybind11;
using namespace py::literals;
int main() {
py::scoped_interpreter guard{};
auto locals = py::dict("name"_a="World", "number"_a=42);
py::exec(R"(
message = "Hello, {name}! The answer is {number}".format(**locals())
)", py::globals(), locals);
auto message = locals["message"].cast<std::string>();
std::cout << message;
}
Importing modules
=================
Python modules can be imported using ``module_::import()``:
.. code-block:: cpp
py::module_ sys = py::module_::import("sys");
py::print(sys.attr("path"));
For convenience, the current working directory is included in ``sys.path`` when
embedding the interpreter. This makes it easy to import local Python files:
.. code-block:: python
"""calc.py located in the working directory"""
def add(i, j):
return i + j
.. code-block:: cpp
py::module_ calc = py::module_::import("calc");
py::object result = calc.attr("add")(1, 2);
int n = result.cast<int>();
assert(n == 3);
Modules can be reloaded using ``module_::reload()`` if the source is modified e.g.
by an external process. This can be useful in scenarios where the application
imports a user defined data processing script which needs to be updated after
changes by the user. Note that this function does not reload modules recursively.
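A minimal hedged sketch, continuing the ``calc`` example above:

.. code-block:: cpp

    py::module_ calc = py::module_::import("calc");
    // ... calc.py is edited on disk by the user or another process ...
    calc.reload();  // re-executes the module; the handle now sees the new code
    int n = calc.attr("add")(1, 2).cast<int>();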
.. _embedding_modules:
Adding embedded modules
=======================
Embedded binary modules can be added using the ``PYBIND11_EMBEDDED_MODULE`` macro.
Note that the definition must be placed at global scope. They can be imported
like any other module.
.. code-block:: cpp
#include <pybind11/embed.h>
namespace py = pybind11;
PYBIND11_EMBEDDED_MODULE(fast_calc, m) {
// `m` is a `py::module_` which is used to bind functions and classes
m.def("add", [](int i, int j) {
return i + j;
});
}
int main() {
py::scoped_interpreter guard{};
auto fast_calc = py::module_::import("fast_calc");
auto result = fast_calc.attr("add")(1, 2).cast<int>();
assert(result == 3);
}
Unlike extension modules where only a single binary module can be created, on
the embedded side an unlimited number of modules can be added using multiple
``PYBIND11_EMBEDDED_MODULE`` definitions (as long as they have unique names).
These modules are added to Python's list of builtins, so they can also be
imported in pure Python files loaded by the interpreter. Everything interacts
naturally:
.. code-block:: python
"""py_module.py located in the working directory"""
import cpp_module
a = cpp_module.a
b = a + 1
.. code-block:: cpp
#include <pybind11/embed.h>
namespace py = pybind11;
using namespace py::literals;  // needed for the "fmt"_a literal below
PYBIND11_EMBEDDED_MODULE(cpp_module, m) {
m.attr("a") = 1;
}
int main() {
py::scoped_interpreter guard{};
auto py_module = py::module_::import("py_module");
auto locals = py::dict("fmt"_a="{} + {} = {}", **py_module.attr("__dict__"));
assert(locals["a"].cast<int>() == 1);
assert(locals["b"].cast<int>() == 2);
py::exec(R"(
c = a + b
message = fmt.format(a, b, c)
)", py::globals(), locals);
assert(locals["c"].cast<int>() == 3);
assert(locals["message"].cast<std::string>() == "1 + 2 = 3");
}
Interpreter lifetime
====================
The Python interpreter shuts down when ``scoped_interpreter`` is destroyed. After
this, creating a new instance will restart the interpreter. Alternatively, the
``initialize_interpreter`` / ``finalize_interpreter`` pair of functions can be used
to directly set the state at any time.
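A minimal hedged sketch of the function-based variant:

.. code-block:: cpp

    #include <pybind11/embed.h>
    namespace py = pybind11;

    int main() {
        py::initialize_interpreter();   // same effect as constructing scoped_interpreter
        py::print("interpreter is running");
        py::finalize_interpreter();     // no Python API calls are allowed after this
    }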
Modules created with pybind11 can be safely re-initialized after the interpreter
has been restarted. However, this may not apply to third-party extension modules.
The issue is that Python itself cannot completely unload extension modules and
there are several caveats with regard to interpreter restarting. In short, not
all memory may be freed, either due to Python reference cycles or user-created
global data. All the details can be found in the CPython documentation.
.. warning::
Creating two concurrent ``scoped_interpreter`` guards is a fatal error. So is
calling ``initialize_interpreter`` for a second time after the interpreter
has already been initialized.
Do not use the raw CPython API functions ``Py_Initialize`` and
``Py_Finalize`` as these do not properly handle the lifetime of
pybind11's internal data.
Sub-interpreter support
=======================
Creating multiple copies of ``scoped_interpreter`` is not possible because it
represents the main Python interpreter. Sub-interpreters are something different
and they do permit the existence of multiple interpreters. This is an advanced
feature of the CPython API and should be handled with care. pybind11 does not
currently offer a C++ interface for sub-interpreters, so refer to the CPython
documentation for all the details regarding this feature.
We'll just mention a couple of caveats of the sub-interpreter support in pybind11:
1. Sub-interpreters will not receive independent copies of embedded modules.
Instead, these are shared and modifications in one interpreter may be
reflected in another.
2. Managing multiple threads, multiple interpreters and the GIL can be
challenging and there are several caveats here, even within the pure
CPython API (please refer to the Python docs for details). As for
pybind11, keep in mind that ``gil_scoped_release`` and ``gil_scoped_acquire``
do not take sub-interpreters into account.


@ -0,0 +1,398 @@
Exceptions
##########
Built-in C++ to Python exception translation
============================================
When Python calls C++ code through pybind11, pybind11 provides a C++ exception handler
that will trap C++ exceptions, translate them to the corresponding Python exception,
and raise them so that Python code can handle them.
pybind11 defines translations for ``std::exception`` and its standard
subclasses, and several special exception classes that translate to specific
Python exceptions. Note that these are not actually Python exceptions, so they
cannot be examined using the Python C API. Instead, they are pure C++ objects
that pybind11 will translate into the corresponding Python exception when they
arrive at its exception handler.
.. tabularcolumns:: |p{0.5\textwidth}|p{0.45\textwidth}|
+--------------------------------------+--------------------------------------+
| Exception thrown by C++ | Translated to Python exception type |
+======================================+======================================+
| :class:`std::exception` | ``RuntimeError`` |
+--------------------------------------+--------------------------------------+
| :class:`std::bad_alloc` | ``MemoryError`` |
+--------------------------------------+--------------------------------------+
| :class:`std::domain_error` | ``ValueError`` |
+--------------------------------------+--------------------------------------+
| :class:`std::invalid_argument` | ``ValueError`` |
+--------------------------------------+--------------------------------------+
| :class:`std::length_error` | ``ValueError`` |
+--------------------------------------+--------------------------------------+
| :class:`std::out_of_range` | ``IndexError`` |
+--------------------------------------+--------------------------------------+
| :class:`std::range_error` | ``ValueError`` |
+--------------------------------------+--------------------------------------+
| :class:`std::overflow_error` | ``OverflowError`` |
+--------------------------------------+--------------------------------------+
| :class:`pybind11::stop_iteration` | ``StopIteration`` (used to implement |
| | custom iterators) |
+--------------------------------------+--------------------------------------+
| :class:`pybind11::index_error` | ``IndexError`` (used to indicate out |
| | of bounds access in ``__getitem__``, |
| | ``__setitem__``, etc.) |
+--------------------------------------+--------------------------------------+
| :class:`pybind11::key_error` | ``KeyError`` (used to indicate out |
| | of bounds access in ``__getitem__``, |
| | ``__setitem__`` in dict-like |
| | objects, etc.) |
+--------------------------------------+--------------------------------------+
| :class:`pybind11::value_error` | ``ValueError`` (used to indicate |
| | wrong value passed in |
| | ``container.remove(...)``) |
+--------------------------------------+--------------------------------------+
| :class:`pybind11::type_error` | ``TypeError`` |
+--------------------------------------+--------------------------------------+
| :class:`pybind11::buffer_error` | ``BufferError`` |
+--------------------------------------+--------------------------------------+
| :class:`pybind11::import_error` | ``ImportError`` |
+--------------------------------------+--------------------------------------+
| :class:`pybind11::attribute_error` | ``AttributeError`` |
+--------------------------------------+--------------------------------------+
| Any other exception | ``RuntimeError`` |
+--------------------------------------+--------------------------------------+
Exception translation is not bidirectional. That is, *catching* the C++
exceptions defined above will not trap exceptions that originate from
Python. For that, catch :class:`pybind11::error_already_set`. See :ref:`below
<handling_python_exceptions_cpp>` for further details.
There is also a special exception :class:`cast_error` that is thrown by
:func:`handle::call` when the input arguments cannot be converted to Python
objects.
Registering custom translators
==============================
If the default exception conversion policy described above is insufficient,
pybind11 also provides support for registering custom exception translators.
Similar to pybind11 classes, exception translators can be local to the module
they are defined in or global to the entire Python session. To register a simple
exception conversion that translates a C++ exception into a new Python exception
using the C++ exception's ``what()`` method, a helper function is available:
.. code-block:: cpp
py::register_exception<CppExp>(module, "PyExp");
This call creates a Python exception class with the name ``PyExp`` in the given
module and automatically converts any encountered exceptions of type ``CppExp``
into Python exceptions of type ``PyExp``.
A matching function is available for registering a local exception translator:
.. code-block:: cpp
py::register_local_exception<CppExp>(module, "PyExp");
It is possible to specify a base class for the exception using the third
parameter, a ``handle``:
.. code-block:: cpp
py::register_exception<CppExp>(module, "PyExp", PyExc_RuntimeError);
py::register_local_exception<CppExp>(module, "PyExp", PyExc_RuntimeError);
Then ``PyExp`` can be caught both as ``PyExp`` and ``RuntimeError``.
The class objects of the built-in Python exceptions are listed in the Python
documentation on `Standard Exceptions <https://docs.python.org/3/c-api/exceptions.html#standard-exceptions>`_.
The default base class is ``PyExc_Exception``.
When more advanced exception translation is needed, the functions
``py::register_exception_translator(translator)`` and
``py::register_local_exception_translator(translator)`` can be used to register
functions that can translate arbitrary exception types (and which may include
additional logic to do so). The functions take a stateless callable (e.g. a
function pointer or a lambda function without captured variables) with the call
signature ``void(std::exception_ptr)``.
When a C++ exception is thrown, the registered exception translators are tried
in reverse order of registration (i.e. the last registered translator gets the
first shot at handling the exception). All local translators will be tried
before a global translator is tried.
Inside the translator, ``std::rethrow_exception`` should be used within
a try block to re-throw the exception. One or more catch clauses should then
catch the appropriate exceptions, with each clause using ``PyErr_SetString``
to set a Python exception, or ``ex(string)`` to set the Python exception to a
custom exception type (see below).
To declare a custom Python exception type, declare a ``py::exception`` variable
and use it in the associated exception translator (note: it is often useful
to make this a static declaration so that it can be used inside a lambda
expression without requiring capture).
The following example demonstrates this for two hypothetical exception classes,
``MyCustomException`` and ``OtherException``: the first is translated to a
custom Python exception ``MyCustomError``, while the second is translated to a
standard Python ``RuntimeError``:
.. code-block:: cpp
static py::exception<MyCustomException> exc(m, "MyCustomError");
py::register_exception_translator([](std::exception_ptr p) {
try {
if (p) std::rethrow_exception(p);
} catch (const MyCustomException &e) {
exc(e.what());
} catch (const OtherException &e) {
PyErr_SetString(PyExc_RuntimeError, e.what());
}
});
Multiple exceptions can be handled by a single translator, as shown in the
example above. If the exception is not caught by the current translator, the
previously registered one gets a chance.
If none of the registered exception translators is able to handle the
exception, it is handled by the default converter as described in the previous
section.
.. seealso::
The file :file:`tests/test_exceptions.cpp` contains examples
of various custom exception translators and custom exception types.
.. note::
Call either ``PyErr_SetString`` or a custom exception's call
operator (``exc(string)``) for every exception caught in a custom exception
translator. Failure to do so will cause Python to crash with ``SystemError:
error return without exception set``.
Exceptions that you do not plan to handle should simply not be caught, or
may be explicitly (re-)thrown to delegate it to the other,
previously-declared existing exception translators.
Note that ``libc++`` and ``libstdc++`` `behave differently <https://stackoverflow.com/questions/19496643/using-clang-fvisibility-hidden-and-typeinfo-and-type-erasure/28827430>`_
with ``-fvisibility=hidden``. Therefore exceptions that are used across ABI boundaries need to be explicitly exported, as exercised in ``tests/test_exceptions.h``.
See also: "Problems with C++ exceptions" under `GCC Wiki <https://gcc.gnu.org/wiki/Visibility>`_.
Local vs Global Exception Translators
=====================================
When a global exception translator is registered, it will be applied across all
modules in the reverse order of registration. This can create behavior where the
order of module import influences how exceptions are translated.
If module1 has the following translator:
.. code-block:: cpp
py::register_exception_translator([](std::exception_ptr p) {
try {
if (p) std::rethrow_exception(p);
} catch (const std::invalid_argument &e) {
PyErr_SetString(PyExc_ValueError, "module1 handled this");
}
});
and module2 has the following similar translator:
.. code-block:: cpp
py::register_exception_translator([](std::exception_ptr p) {
try {
if (p) std::rethrow_exception(p);
} catch (const std::invalid_argument &e) {
PyErr_SetString(PyExc_ValueError, "module2 handled this");
}
});
then which translator handles the ``invalid_argument`` exception will be determined
by the order in which module1 and module2 are imported. Since exception translators
are applied in the reverse order of registration, whichever module was imported
last will "win" and its translator will be applied.
If there are multiple pybind11 modules that share exception types (either
standard built-in or custom) loaded into a single python instance and
consistent error handling behavior is needed, then local translators should be
used.
Changing the previous example to use ``register_local_exception_translator``
would mean that when invalid_argument is thrown in the module2 code, the
module2 translator will always handle it, while in module1, the module1
translator will do the same.
.. _handling_python_exceptions_cpp:
Handling exceptions from Python in C++
======================================
When C++ calls Python functions, such as in a callback function or when
manipulating Python objects, and Python raises an ``Exception``, pybind11
converts the Python exception into a C++ exception of type
:class:`pybind11::error_already_set` whose payload contains a C++ string textual
summary and the actual Python exception. ``error_already_set`` is used to
propagate Python exceptions back to Python (or, possibly, to handle them in C++).
.. tabularcolumns:: |p{0.5\textwidth}|p{0.45\textwidth}|
+--------------------------------------+--------------------------------------+
| Exception raised in Python | Thrown as C++ exception type |
+======================================+======================================+
| Any Python ``Exception`` | :class:`pybind11::error_already_set` |
+--------------------------------------+--------------------------------------+
For example:
.. code-block:: cpp
try {
// open("missing.txt", "r")
auto file = py::module_::import("io").attr("open")("missing.txt", "r");
auto text = file.attr("read")();
file.attr("close")();
} catch (py::error_already_set &e) {
if (e.matches(PyExc_FileNotFoundError)) {
py::print("missing.txt not found");
} else if (e.matches(PyExc_PermissionError)) {
py::print("missing.txt found but not accessible");
} else {
throw;
}
}
Note that C++ to Python exception translation does not apply here, since that is
a method for translating C++ exceptions to Python, not vice versa. The error raised
from Python is always ``error_already_set``.
This example illustrates this behavior:
.. code-block:: cpp
try {
py::eval("raise ValueError('The Ring')");
} catch (py::value_error &boromir) {
// Boromir never gets the ring
assert(false);
} catch (py::error_already_set &frodo) {
// Frodo gets the ring
py::print("I will take the ring");
}
try {
// py::value_error is a request for pybind11 to raise a Python exception
throw py::value_error("The ball");
} catch (py::error_already_set &cat) {
// cat won't catch the ball since
// py::value_error is not a Python exception
assert(false);
} catch (py::value_error &dog) {
// dog will catch the ball
py::print("Run Spot run");
throw; // Throw it again (pybind11 will raise ValueError)
}
Handling errors from the Python C API
=====================================
Where possible, use :ref:`pybind11 wrappers <wrappers>` instead of calling
the Python C API directly. When calling the Python C API directly, in
addition to manually managing reference counts, one must follow the pybind11
error protocol, which is outlined here.
After calling the Python C API, if Python returns an error,
``throw py::error_already_set();``, which allows pybind11 to deal with the
exception and pass it back to the Python interpreter. This includes calls to
the error setting functions such as ``PyErr_SetString``.
.. code-block:: cpp
PyErr_SetString(PyExc_TypeError, "C API type error demo");
throw py::error_already_set();
// But it would be easier to simply...
throw py::type_error("pybind11 wrapper type error");
Alternately, to ignore the error, call `PyErr_Clear
<https://docs.python.org/3/c-api/exceptions.html#c.PyErr_Clear>`_.
Any Python error must be thrown or cleared, or Python/pybind11 will be left in
an invalid state.
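As a hedged sketch of the "clear" path (``obj`` is assumed to be a ``py::object``
already in scope, and the attribute name is made up):
.. code-block:: cpp
// Probe an attribute via the C API; on failure, deliberately ignore the error.
PyObject *attr = PyObject_GetAttrString(obj.ptr(), "maybe_missing");
if (!attr) {
    PyErr_Clear();   // swallow the AttributeError so no exception is left pending
} else {
    Py_DECREF(attr); // balance the new reference returned on success
}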
Chaining exceptions ('raise from')
==================================
Python has a mechanism for indicating that exceptions were caused by other
exceptions:
.. code-block:: py
try:
print(1 / 0)
except Exception as exc:
raise RuntimeError("could not divide by zero") from exc
To do a similar thing in pybind11, you can use the ``py::raise_from`` function. It
sets the current python error indicator, so to continue propagating the exception
you should ``throw py::error_already_set()``.
.. code-block:: cpp
try {
py::eval("print(1 / 0)");
} catch (py::error_already_set &e) {
py::raise_from(e, PyExc_RuntimeError, "could not divide by zero");
throw py::error_already_set();
}
.. versionadded:: 2.8
.. _unraisable_exceptions:
Handling unraisable exceptions
==============================
If a Python function invoked from a C++ destructor or any function marked
``noexcept(true)`` (collectively, "noexcept functions") throws an exception, there
is no way to propagate the exception, as such functions may not throw.
Should they throw or fail to catch any exceptions in their call graph,
the C++ runtime calls ``std::terminate()`` to abort immediately.
Similarly, Python exceptions raised in a class's ``__del__`` method do not
propagate, but are logged by Python as an unraisable error. In Python 3.8+, a
`system hook is triggered
<https://docs.python.org/3/library/sys.html#sys.unraisablehook>`_
and an auditing event is logged.
Any noexcept function should have a try-catch block that traps
:class:`error_already_set` (or any other exception that can occur). Note that
pybind11 wrappers around Python exceptions such as
:class:`pybind11::value_error` are *not* Python exceptions; they are C++
exceptions that pybind11 catches and converts to Python exceptions. Noexcept
functions cannot propagate these exceptions either. A useful approach is to
convert them to Python exceptions and then ``discard_as_unraisable`` as shown
below.
.. code-block:: cpp
void nonthrowing_func() noexcept(true) {
try {
// ...
} catch (py::error_already_set &eas) {
// Discard the Python error using Python APIs, using the C++ magic
// variable __func__. Python already knows the type and value of the
// exception object.
eas.discard_as_unraisable(__func__);
} catch (const std::exception &e) {
// Log and discard C++ exceptions.
third_party::log(e);
}
}
.. versionadded:: 2.6


@ -0,0 +1,614 @@
Functions
#########
Before proceeding with this section, make sure that you are already familiar
with the basics of binding functions and classes, as explained in :doc:`/basics`
and :doc:`/classes`. The following guide is applicable to both free and member
functions, i.e. *methods* in Python.
.. _return_value_policies:
Return value policies
=====================
Python and C++ use fundamentally different ways of managing the memory and
lifetime of objects managed by them. This can lead to issues when creating
bindings for functions that return a non-trivial type. Just by looking at the
type information, it is not clear whether Python should take charge of the
returned value and eventually free its resources, or if this is handled on the
C++ side. For this reason, pybind11 provides several *return value policy*
annotations that can be passed to the :func:`module_::def` and
:func:`class_::def` functions. The default policy is
:enum:`return_value_policy::automatic`.
Return value policies are tricky, and it's very important to get them right.
Just to illustrate what can go wrong, consider the following simple example:
.. code-block:: cpp
/* Function declaration */
Data *get_data() { return _data; /* (pointer to a static data structure) */ }
...
/* Binding code */
m.def("get_data", &get_data); // <-- KABOOM, will cause crash when called from Python
What's going on here? When ``get_data()`` is called from Python, the return
value (a native C++ type) must be wrapped to turn it into a usable Python type.
In this case, the default return value policy (:enum:`return_value_policy::automatic`)
causes pybind11 to assume ownership of the static ``_data`` instance.
When Python's garbage collector eventually deletes the Python
wrapper, pybind11 will also attempt to delete the C++ instance (via ``operator
delete()``) due to the implied ownership. At this point, the entire application
will come crashing down, though errors could also be more subtle and involve
silent data corruption.
In the above example, the policy :enum:`return_value_policy::reference` should have
been specified so that the global data instance is only *referenced* without any
implied transfer of ownership, i.e.:
.. code-block:: cpp
m.def("get_data", &get_data, py::return_value_policy::reference);
On the other hand, this is not the right policy for many other situations,
where ignoring ownership could lead to resource leaks.
As a developer using pybind11, it's important to be familiar with the different
return value policies, including which situation calls for which one of them.
The following table provides an overview of available policies:
.. tabularcolumns:: |p{0.5\textwidth}|p{0.45\textwidth}|
+--------------------------------------------------+----------------------------------------------------------------------------+
| Return value policy | Description |
+==================================================+============================================================================+
| :enum:`return_value_policy::take_ownership` | Reference an existing object (i.e. do not create a new copy) and take |
| | ownership. Python will call the destructor and delete operator when the |
| | object's reference count reaches zero. Undefined behavior ensues when the |
| | C++ side does the same, or when the data was not dynamically allocated. |
+--------------------------------------------------+----------------------------------------------------------------------------+
| :enum:`return_value_policy::copy` | Create a new copy of the returned object, which will be owned by Python. |
| | This policy is comparably safe because the lifetimes of the two instances |
| | are decoupled. |
+--------------------------------------------------+----------------------------------------------------------------------------+
| :enum:`return_value_policy::move` | Use ``std::move`` to move the return value contents into a new instance |
| | that will be owned by Python. This policy is comparably safe because the |
| | lifetimes of the two instances (move source and destination) are decoupled.|
+--------------------------------------------------+----------------------------------------------------------------------------+
| :enum:`return_value_policy::reference` | Reference an existing object, but do not take ownership. The C++ side is |
| | responsible for managing the object's lifetime and deallocating it when |
| | it is no longer used. Warning: undefined behavior will ensue when the C++ |
| | side deletes an object that is still referenced and used by Python. |
+--------------------------------------------------+----------------------------------------------------------------------------+
| :enum:`return_value_policy::reference_internal` | Indicates that the lifetime of the return value is tied to the lifetime |
| | of a parent object, namely the implicit ``this``, or ``self`` argument of |
| | the called method or property. Internally, this policy works just like |
| | :enum:`return_value_policy::reference` but additionally applies a |
| | ``keep_alive<0, 1>`` *call policy* (described in the next section) that |
| | prevents the parent object from being garbage collected as long as the |
| | return value is referenced by Python. This is the default policy for |
| | property getters created via ``def_property``, ``def_readwrite``, etc. |
+--------------------------------------------------+----------------------------------------------------------------------------+
| :enum:`return_value_policy::automatic` | This policy falls back to the policy |
| | :enum:`return_value_policy::take_ownership` when the return value is a |
| | pointer. Otherwise, it uses :enum:`return_value_policy::move` or |
| | :enum:`return_value_policy::copy` for rvalue and lvalue references, |
| | respectively. See above for a description of what all of these different |
| | policies do. This is the default policy for ``py::class_``-wrapped types. |
+--------------------------------------------------+----------------------------------------------------------------------------+
| :enum:`return_value_policy::automatic_reference` | As above, but use policy :enum:`return_value_policy::reference` when the |
| | return value is a pointer. This is the default conversion policy for |
| | function arguments when calling Python functions manually from C++ code |
| | (i.e. via ``handle::operator()``) and the casters in ``pybind11/stl.h``. |
| | You probably won't need to use this explicitly. |
+--------------------------------------------------+----------------------------------------------------------------------------+
Return value policies can also be applied to properties:
.. code-block:: cpp
class_<MyClass>(m, "MyClass")
.def_property("data", &MyClass::getData, &MyClass::setData,
py::return_value_policy::copy);
Technically, the code above applies the policy to both the getter and the
setter function; however, the setter doesn't really care about *return*
value policies, which makes this a convenient terse syntax. Alternatively,
targeted arguments can be passed through the :class:`cpp_function` constructor:
.. code-block:: cpp
class_<MyClass>(m, "MyClass")
.def_property("data",
py::cpp_function(&MyClass::getData, py::return_value_policy::copy),
py::cpp_function(&MyClass::setData)
);
.. warning::
Code with invalid return value policies might access uninitialized memory or
free data structures multiple times, which can lead to hard-to-debug
non-determinism and segmentation faults, hence it is worth spending the
time to understand all the different options in the table above.
.. note::
One important aspect of the above policies is that they only apply to
instances which pybind11 has *not* seen before, in which case the policy
clarifies essential questions about the return value's lifetime and
ownership. When pybind11 knows the instance already (as identified by its
type and address in memory), it will return the existing Python object
wrapper rather than creating a new copy.
.. note::
The next section on :ref:`call_policies` discusses *call policies* that can be
specified *in addition* to a return value policy from the list above. Call
policies indicate reference relationships that can involve both return values
and parameters of functions.
.. note::
As an alternative to elaborate call policies and lifetime management logic,
consider using smart pointers (see the section on :ref:`smart_pointers` for
details). Smart pointers can tell whether an object is still referenced from
C++ or Python, which generally eliminates the kinds of inconsistencies that
can lead to crashes or undefined behavior. For functions returning smart
pointers, it is not necessary to specify a return value policy.
.. _call_policies:
Additional call policies
========================
In addition to the above return value policies, further *call policies* can be
specified to indicate dependencies between parameters or ensure a certain state
for the function call.
Keep alive
----------
In general, this policy is required when the C++ object is any kind of container
and another object is being added to the container. ``keep_alive<Nurse, Patient>``
indicates that the argument with index ``Patient`` should be kept alive at least
until the argument with index ``Nurse`` is freed by the garbage collector. Argument
indices start at one, while zero refers to the return value. For methods, index
``1`` refers to the implicit ``this`` pointer, while regular arguments begin at
index ``2``. Arbitrarily many call policies can be specified. When a ``Nurse``
with value ``None`` is detected at runtime, the call policy does nothing.
When the nurse is not a pybind11-registered type, the implementation internally
relies on the ability to create a *weak reference* to the nurse object. When
the nurse object is not a pybind11-registered type and does not support weak
references, an exception will be thrown.
If you use an incorrect argument index, you will get a ``RuntimeError`` saying
``Could not activate keep_alive!``. You should review the indices you're using.
Consider the following example: here, the binding code for a list append
operation ties the lifetime of the newly added element to the underlying
container:
.. code-block:: cpp
py::class_<List>(m, "List")
.def("append", &List::append, py::keep_alive<1, 2>());
For consistency, the argument indexing is identical for constructors. Index
``1`` still refers to the implicit ``this`` pointer, i.e. the object which is
being constructed. Index ``0`` refers to the return type which is presumed to
be ``void`` when a constructor is viewed like a function. The following example
ties the lifetime of the constructor element to the constructed object:
.. code-block:: cpp
py::class_<Nurse>(m, "Nurse")
.def(py::init<Patient &>(), py::keep_alive<1, 2>());
.. note::
``keep_alive`` is analogous to the ``with_custodian_and_ward`` (if Nurse,
Patient != 0) and ``with_custodian_and_ward_postcall`` (if Nurse/Patient ==
0) policies from Boost.Python.
Call guard
----------
The ``call_guard<T>`` policy allows any scope guard type ``T`` to be placed
around the function call. For example, this definition:
.. code-block:: cpp
m.def("foo", foo, py::call_guard<T>());
is equivalent to the following pseudocode:
.. code-block:: cpp
m.def("foo", [](args...) {
T scope_guard;
return foo(args...); // forwarded arguments
});
The only requirement is that ``T`` is default-constructible, but otherwise any
scope guard will work. This is very useful in combination with ``gil_scoped_release``.
See :ref:`gil`.
Multiple guards can also be specified as ``py::call_guard<T1, T2, T3...>``. The
constructor order is left to right and destruction happens in reverse.
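As a hedged sketch of stacking guards (``Timer`` is a made-up RAII type and
``heavy_work`` a hypothetical function): the GIL is released first, then the timer
starts, and destruction happens in the reverse order.
.. code-block:: cpp
#include <chrono>
#include <iostream>
// Made-up RAII timer, used purely for illustration.
struct Timer {
    std::chrono::steady_clock::time_point start = std::chrono::steady_clock::now();
    ~Timer() {
        auto ms = std::chrono::duration_cast<std::chrono::milliseconds>(
                      std::chrono::steady_clock::now() - start).count();
        std::cout << "call took " << ms << " ms\n";
    }
};
// Inside the module definition:
m.def("heavy_work", &heavy_work,
      py::call_guard<py::gil_scoped_release, Timer>());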
.. seealso::
The file :file:`tests/test_call_policies.cpp` contains a complete example
that demonstrates using `keep_alive` and `call_guard` in more detail.
.. _python_objects_as_args:
Python objects as arguments
===========================
pybind11 exposes all major Python types using thin C++ wrapper classes. These
wrapper classes can also be used as parameters of functions in bindings, which
makes it possible to directly work with native Python types on the C++ side.
For instance, the following statement iterates over a Python ``dict``:
.. code-block:: cpp
void print_dict(const py::dict& dict) {
/* Easily interact with Python types */
for (auto item : dict)
std::cout << "key=" << std::string(py::str(item.first)) << ", "
<< "value=" << std::string(py::str(item.second)) << std::endl;
}
It can be exported:
.. code-block:: cpp
m.def("print_dict", &print_dict);
And used in Python as usual:
.. code-block:: pycon
>>> print_dict({"foo": 123, "bar": "hello"})
key=foo, value=123
key=bar, value=hello
For more information on using Python objects in C++, see :doc:`/advanced/pycpp/index`.
Accepting \*args and \*\*kwargs
===============================
Python provides a useful mechanism to define functions that accept arbitrary
numbers of arguments and keyword arguments:
.. code-block:: python
def generic(*args, **kwargs):
... # do something with args and kwargs
Such functions can also be created using pybind11:
.. code-block:: cpp
void generic(py::args args, const py::kwargs& kwargs) {
/// .. do something with args
if (kwargs)
/// .. do something with kwargs
}
/// Binding code
m.def("generic", &generic);
The class ``py::args`` derives from ``py::tuple`` and ``py::kwargs`` derives
from ``py::dict``.
You may also use just one or the other, and may combine these with other
arguments. Note, however, that ``py::kwargs`` must always be the last argument
of the function, and ``py::args`` implies that any further arguments are
keyword-only (see :ref:`keyword_only_arguments`).
Please refer to the other examples for details on how to iterate over these,
and on how to cast their entries into C++ objects. A demonstration is also
available in ``tests/test_kwargs_and_defaults.cpp``.
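As a hedged sketch of iterating and casting (the binding name ``describe`` is made up):
.. code-block:: cpp
m.def("describe", [](py::args args, const py::kwargs &kwargs) {
    for (auto item : args)
        py::print("positional:", item);
    for (auto item : kwargs)
        py::print("keyword:", item.first, "=", item.second);
    // Cast the first positional argument (if any) to a concrete C++ type.
    return args.size() > 0 ? args[0].cast<int>() : 0;
});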
.. note::
When combining \*args or \*\*kwargs with :ref:`keyword_args` you should
*not* include ``py::arg`` tags for the ``py::args`` and ``py::kwargs``
arguments.
Default arguments revisited
===========================
The section on :ref:`default_args` previously discussed basic usage of default
arguments using pybind11. One noteworthy aspect of their implementation is that
default arguments are converted to Python objects right at declaration time.
Consider the following example:
.. code-block:: cpp
py::class_<MyClass>("MyClass")
.def("myFunction", py::arg("arg") = SomeType(123));
In this case, pybind11 must already be set up to deal with values of the type
``SomeType`` (via a prior instantiation of ``py::class_<SomeType>``), or an
exception will be thrown.
Another aspect worth highlighting is that the "preview" of the default argument
in the function signature is generated using the object's ``__repr__`` method.
If not available, the signature may not be very helpful, e.g.:
.. code-block:: pycon
FUNCTIONS
...
| myFunction(...)
| Signature : (MyClass, arg : SomeType = <SomeType object at 0x101b7b080>) -> NoneType
...
The first way of addressing this is by defining ``SomeType.__repr__``.
Alternatively, it is possible to specify the human-readable preview of the
default argument manually using the ``arg_v`` notation:
.. code-block:: cpp
py::class_<MyClass>("MyClass")
.def("myFunction", py::arg_v("arg", SomeType(123), "SomeType(123)"));
Sometimes it may be necessary to pass a null pointer value as a default
argument. In this case, remember to cast it to the underlying type in question,
like so:
.. code-block:: cpp
py::class_<MyClass>("MyClass")
.def("myFunction", py::arg("arg") = static_cast<SomeType *>(nullptr));
.. _keyword_only_arguments:
Keyword-only arguments
======================
Python implements keyword-only arguments by specifying an unnamed ``*``
argument in a function definition:
.. code-block:: python
def f(a, *, b): # a can be positional or via keyword; b must be via keyword
pass
f(a=1, b=2) # good
f(b=2, a=1) # good
f(1, b=2) # good
f(1, 2) # TypeError: f() takes 1 positional argument but 2 were given
Pybind11 provides a ``py::kw_only`` object that allows you to implement
the same behaviour by specifying the object between positional and keyword-only
argument annotations when registering the function:
.. code-block:: cpp
m.def("f", [](int a, int b) { /* ... */ },
py::arg("a"), py::kw_only(), py::arg("b"));
.. versionadded:: 2.6
A ``py::args`` argument implies that any following arguments are keyword-only,
as if ``py::kw_only()`` had been specified in the same relative location of the
argument list as the ``py::args`` argument. The ``py::kw_only()`` may be
included to be explicit about this, but is not required.
.. versionchanged:: 2.9
This can now be combined with ``py::args``. Before, ``py::args`` could only
occur at the end of the argument list, or immediately before a ``py::kwargs``
argument at the end.
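For example, a hedged sketch (requires pybind11 2.9+; the names are made up) in
which ``b`` is keyword-only because it follows the ``py::args`` parameter:
.. code-block:: cpp
m.def("tail", [](int a, py::args rest, int b) { return a + b + (int) rest.size(); },
      py::arg("a"), py::arg("b"));   // no py::arg tag for the py::args parameter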
Positional-only arguments
=========================
Python 3.8 introduced a new positional-only argument syntax, using ``/`` in the
function definition (note that this has been a convention for CPython
positional arguments, such as in ``pow()``, since Python 2). You can
do the same thing in any version of Python using ``py::pos_only()``:
.. code-block:: cpp
m.def("f", [](int a, int b) { /* ... */ },
py::arg("a"), py::pos_only(), py::arg("b"));
You now cannot give argument ``a`` by keyword. This can be combined with
keyword-only arguments, as well.
.. versionadded:: 2.6
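A hedged sketch of combining the two (the binding name ``g`` is made up): ``a`` is
positional-only, ``b`` may be passed either way, and ``c`` is keyword-only.
.. code-block:: cpp
m.def("g", [](int a, int b, int c) { return a + b + c; },
      py::arg("a"), py::pos_only(), py::arg("b"), py::kw_only(), py::arg("c"));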
.. _nonconverting_arguments:
Non-converting arguments
========================
Certain argument types may support conversion from one type to another. Some
examples of conversions are:
* :ref:`implicit_conversions` declared using ``py::implicitly_convertible<A,B>()``
* Calling a method accepting a double with an integer argument
* Calling a ``std::complex<float>`` argument with a non-complex python type
(for example, with a float). (Requires the optional ``pybind11/complex.h``
header).
* Calling a function taking an Eigen matrix reference with a numpy array of the
wrong type or of an incompatible data layout. (Requires the optional
``pybind11/eigen.h`` header).
This behaviour is sometimes undesirable: the binding code may prefer to raise
an error rather than convert the argument. This behaviour can be obtained
through ``py::arg`` by calling the ``.noconvert()`` method of the ``py::arg``
object, such as:
.. code-block:: cpp
m.def("floats_only", [](double f) { return 0.5 * f; }, py::arg("f").noconvert());
m.def("floats_preferred", [](double f) { return 0.5 * f; }, py::arg("f"));
Attempting to call the second function (the one without ``.noconvert()``) with
an integer will succeed, but attempting to call the ``.noconvert()`` version
will fail with a ``TypeError``:
.. code-block:: pycon
>>> floats_preferred(4)
2.0
>>> floats_only(4)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: floats_only(): incompatible function arguments. The following argument types are supported:
1. (f: float) -> float
Invoked with: 4
You may, of course, combine this with the :var:`_a` shorthand notation (see
:ref:`keyword_args`) and/or :ref:`default_args`. It is also permitted to omit
the argument name by using the ``py::arg()`` constructor without an argument
name, i.e. by specifying ``py::arg().noconvert()``.
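A hedged sketch (the binding name ``scale`` is made up): only ``f`` rejects
conversion, while ``n`` still accepts convertible input; note that a ``py::arg``
annotation is supplied for every argument.
.. code-block:: cpp
m.def("scale", [](double f, int n) { return f * n; },
      py::arg("f").noconvert(), py::arg("n"));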
.. note::
When specifying ``py::arg`` options it is necessary to provide the same
number of options as the bound function has arguments. Thus if you want to
enable no-convert behaviour for just one of several arguments, you will
need to specify a ``py::arg()`` annotation for each argument with the
no-convert argument modified to ``py::arg().noconvert()``.
.. _none_arguments:
Allowing/Prohibiting None arguments
===================================
When a C++ type registered with :class:`py::class_` is passed as an argument to
a function taking the instance as pointer or shared holder (e.g. ``shared_ptr``
or a custom, copyable holder as described in :ref:`smart_pointers`), pybind
allows ``None`` to be passed from Python which results in calling the C++
function with ``nullptr`` (or an empty holder) for the argument.
To explicitly enable or disable this behaviour, use the
``.none`` method of the :class:`py::arg` object:
.. code-block:: cpp
py::class_<Dog>(m, "Dog").def(py::init<>());
py::class_<Cat>(m, "Cat").def(py::init<>());
m.def("bark", [](Dog *dog) -> std::string {
if (dog) return "woof!"; /* Called with a Dog instance */
else return "(no dog)"; /* Called with None, dog == nullptr */
}, py::arg("dog").none(true));
m.def("meow", [](Cat *cat) -> std::string {
// Can't be called with None argument
return "meow";
}, py::arg("cat").none(false));
With the above, the Python call ``bark(None)`` will return the string ``"(no
dog)"``, while attempting to call ``meow(None)`` will raise a ``TypeError``:
.. code-block:: pycon
>>> from animals import Dog, Cat, bark, meow
>>> bark(Dog())
'woof!'
>>> meow(Cat())
'meow'
>>> bark(None)
'(no dog)'
>>> meow(None)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: meow(): incompatible function arguments. The following argument types are supported:
1. (cat: animals.Cat) -> str
Invoked with: None
The default behaviour when the tag is unspecified is to allow ``None``.
.. note::
Even when ``.none(true)`` is specified for an argument, ``None`` will be converted to a
``nullptr`` *only* for custom and :ref:`opaque <opaque>` types. Pointers to built-in types
(``double *``, ``int *``, ...) and STL types (``std::vector<T> *``, ...; if ``pybind11/stl.h``
is included) are copied when converted to C++ (see :doc:`/advanced/cast/overview`) and will
not allow ``None`` as argument. To pass an optional argument of these copied types,
consider using ``std::optional<T>`` (a minimal sketch follows this note).
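A hedged sketch of that alternative (the binding name ``maybe_scale`` is made up;
the ``std::optional`` caster requires the ``pybind11/stl.h`` header and C++17):
.. code-block:: cpp
#include <optional>
#include <pybind11/stl.h>   // provides the std::optional type caster
m.def("maybe_scale", [](std::optional<double> factor) {
    // Passing None from Python yields an empty optional here.
    return factor.value_or(1.0);
});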
.. _overload_resolution:
Overload resolution order
=========================
When a function or method with multiple overloads is called from Python,
pybind11 determines which overload to call in two passes. The first pass
attempts to call each overload without allowing argument conversion (as if
every argument had been specified as ``py::arg().noconvert()`` as described
above).
If no overload succeeds in the no-conversion first pass, a second pass is
attempted in which argument conversion is allowed (except where prohibited via
an explicit ``py::arg().noconvert()`` attribute in the function definition).
If the second pass also fails a ``TypeError`` is raised.
Within each pass, overloads are tried in the order they were registered with
pybind11. If the ``py::prepend()`` tag is added to the definition, a function
can be placed at the beginning of the overload sequence instead, allowing user
overloads to precede built-in functions.
What this means in practice is that pybind11 will prefer any overload that does
not require conversion of arguments to an overload that does, but otherwise
prefers earlier-defined overloads to later-defined ones.
.. note::
pybind11 does *not* further prioritize based on the number/pattern of
overloaded arguments. That is, pybind11 does not prioritize a function
requiring one conversion over one requiring three, but only prioritizes
overloads requiring no conversion at all to overloads that require
conversion of at least one argument.
.. versionadded:: 2.6
The ``py::prepend()`` tag.
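A hedged sketch (the binding name ``parse`` and its overloads are made up): the
second definition is tried first because of ``py::prepend()``.
.. code-block:: cpp
m.def("parse", [](const std::string &s) { return 1; });                  // registered first
m.def("parse", [](const py::bytes &b) { return 2; }, py::prepend());     // but tried first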
Binding functions with template parameters
==========================================
You can bind functions that have template parameters. Here's a function:
.. code-block:: cpp
template <typename T>
void set(T t);
C++ templates cannot be instantiated at runtime, so you cannot bind the
non-instantiated function:
.. code-block:: cpp
// BROKEN (this will not compile)
m.def("set", &set);
You must bind each instantiated function template separately. You may bind
each instantiation with the same name, which will be treated the same as
an overloaded function:
.. code-block:: cpp
m.def("set", &set<int>);
m.def("set", &set<std::string>);
Sometimes it is clearer to bind them with separate names, which is also
an option:
.. code-block:: cpp
m.def("setInt", &set<int>);
m.def("setString", &set<std::string>);


@ -0,0 +1,337 @@
Miscellaneous
#############
.. _macro_notes:
General notes regarding convenience macros
==========================================
pybind11 provides a few convenience macros such as
:func:`PYBIND11_DECLARE_HOLDER_TYPE` and ``PYBIND11_OVERRIDE_*``. Since these
are "just" macros that are evaluated in the preprocessor (which has no concept
of types), they *will* get confused by commas in a template argument; for
example, consider:
.. code-block:: cpp
PYBIND11_OVERRIDE(MyReturnType<T1, T2>, Class<T3, T4>, func)
Because of this limitation, the C preprocessor interprets this as five arguments
(a new argument begins after each comma) rather than three. To get around this,
there are two alternatives: you can use a type alias, or you can wrap the type
using the ``PYBIND11_TYPE`` macro:
.. code-block:: cpp
// Version 1: using a type alias
using ReturnType = MyReturnType<T1, T2>;
using ClassType = Class<T3, T4>;
PYBIND11_OVERRIDE(ReturnType, ClassType, func);
// Version 2: using the PYBIND11_TYPE macro:
PYBIND11_OVERRIDE(PYBIND11_TYPE(MyReturnType<T1, T2>),
PYBIND11_TYPE(Class<T3, T4>), func)
The ``PYBIND11_MAKE_OPAQUE`` macro does *not* require the above workarounds.
.. _gil:
Global Interpreter Lock (GIL)
=============================
When calling a C++ function from Python, the GIL is always held.
The classes :class:`gil_scoped_release` and :class:`gil_scoped_acquire` can be
used to acquire and release the global interpreter lock in the body of a C++
function call. In this way, long-running C++ code can be parallelized using
multiple Python threads. Taking :ref:`overriding_virtuals` as an example, this
could be realized as follows (important changes highlighted):
.. code-block:: cpp
:emphasize-lines: 8,9,31,32
class PyAnimal : public Animal {
public:
/* Inherit the constructors */
using Animal::Animal;
/* Trampoline (need one for each virtual function) */
std::string go(int n_times) {
/* Acquire GIL before calling Python code */
py::gil_scoped_acquire acquire;
PYBIND11_OVERRIDE_PURE(
std::string, /* Return type */
Animal, /* Parent class */
go, /* Name of function */
n_times /* Argument(s) */
);
}
};
PYBIND11_MODULE(example, m) {
py::class_<Animal, PyAnimal> animal(m, "Animal");
animal
.def(py::init<>())
.def("go", &Animal::go);
py::class_<Dog>(m, "Dog", animal)
.def(py::init<>());
m.def("call_go", [](Animal *animal) -> std::string {
/* Release GIL before calling into (potentially long-running) C++ code */
py::gil_scoped_release release;
return call_go(animal);
});
}
The ``call_go`` wrapper can also be simplified using the ``call_guard`` policy
(see :ref:`call_policies`) which yields the same result:
.. code-block:: cpp
m.def("call_go", &call_go, py::call_guard<py::gil_scoped_release>());
Binding sequence data types, iterators, the slicing protocol, etc.
==================================================================
Please refer to the supplemental example for details.
.. seealso::
The file :file:`tests/test_sequences_and_iterators.cpp` contains a
complete example that shows how to bind a sequence data type, including
length queries (``__len__``), iterators (``__iter__``), the slicing
protocol and other kinds of useful operations.
Partitioning code over multiple extension modules
=================================================
It's straightforward to split binding code over multiple extension modules,
while referencing types that are declared elsewhere. Everything "just" works
without any special precautions. One exception to this rule occurs when
extending a type declared in another extension module. Recall the basic example
from Section :ref:`inheritance`.
.. code-block:: cpp
py::class_<Pet> pet(m, "Pet");
pet.def(py::init<const std::string &>())
.def_readwrite("name", &Pet::name);
py::class_<Dog>(m, "Dog", pet /* <- specify parent */)
.def(py::init<const std::string &>())
.def("bark", &Dog::bark);
Suppose now that ``Pet`` bindings are defined in a module named ``basic``,
whereas the ``Dog`` bindings are defined somewhere else. The challenge is of
course that the variable ``pet`` is not available anymore though it is needed
to indicate the inheritance relationship to the constructor of ``class_<Dog>``.
However, it can be acquired as follows:
.. code-block:: cpp
py::object pet = (py::object) py::module_::import("basic").attr("Pet");
py::class_<Dog>(m, "Dog", pet)
.def(py::init<const std::string &>())
.def("bark", &Dog::bark);
Alternatively, you can specify the base class as a template parameter option to
``class_``, which performs an automated lookup of the corresponding Python
type. Like the above code, however, this also requires invoking the ``import``
function once to ensure that the pybind11 binding code of the module ``basic``
has been executed:
.. code-block:: cpp
py::module_::import("basic");
py::class_<Dog, Pet>(m, "Dog")
.def(py::init<const std::string &>())
.def("bark", &Dog::bark);
Naturally, both methods will fail when there are cyclic dependencies.
Note that pybind11 code compiled with hidden-by-default symbol visibility (e.g.
via the command line flag ``-fvisibility=hidden`` on GCC/Clang), which is
required for proper pybind11 functionality, can interfere with the ability to
access types defined in another extension module. Working around this requires
manually exporting types that are accessed by multiple extension modules;
pybind11 provides a macro to do just this:
.. code-block:: cpp
class PYBIND11_EXPORT Dog : public Animal {
...
};
Note also that it is possible (although it would rarely be required) to share arbitrary
C++ objects between extension modules at runtime. Internal library data is shared
between modules using capsule machinery [#f6]_ which can be also utilized for
storing, modifying and accessing user-defined data. Note that an extension module
will "see" other extensions' data if and only if they were built with the same
pybind11 version. Consider the following example:
.. code-block:: cpp
auto data = reinterpret_cast<MyData *>(py::get_shared_data("mydata"));
if (!data)
data = static_cast<MyData *>(py::set_shared_data("mydata", new MyData(42)));
If the above snippet was used in several separately compiled extension modules,
the first one to be imported would create a ``MyData`` instance and associate
a ``"mydata"`` key with a pointer to it. Extensions that are imported later
would be then able to access the data behind the same pointer.
.. [#f6] https://docs.python.org/3/extending/extending.html#using-capsules
Module Destructors
==================
pybind11 does not provide an explicit mechanism to invoke cleanup code at
module destruction time. In rare cases where such functionality is required, it
is possible to emulate it using Python capsules or weak references with a
destruction callback.
.. code-block:: cpp
auto cleanup_callback = []() {
// perform cleanup here -- this function is called with the GIL held
};
m.add_object("_cleanup", py::capsule(cleanup_callback));
This approach has the potential downside that instances of classes exposed
within the module may still be alive when the cleanup callback is invoked
(whether this is acceptable will generally depend on the application).
Alternatively, the capsule may also be stashed within a type object, which
ensures that it is not called before all instances of that type have been
collected:
.. code-block:: cpp
auto cleanup_callback = []() { /* ... */ };
m.attr("BaseClass").attr("_cleanup") = py::capsule(cleanup_callback);
Both approaches also expose a potentially dangerous ``_cleanup`` attribute in
Python, which may be undesirable from an API standpoint (a premature explicit
call from Python might lead to undefined behavior). Yet another approach that
avoids this issue involves a weak reference with a cleanup callback:
.. code-block:: cpp
// Register a callback function that is invoked when the BaseClass object is collected
py::cpp_function cleanup_callback(
[](py::handle weakref) {
// perform cleanup here -- this function is called with the GIL held
weakref.dec_ref(); // release weak reference
}
);
// Create a weak reference with a cleanup callback and initially leak it
(void) py::weakref(m.attr("BaseClass"), cleanup_callback).release();
.. note::
PyPy does not garbage collect objects when the interpreter exits. An alternative
approach (which also works on CPython) is to use the :py:mod:`atexit` module [#f7]_,
for example:
.. code-block:: cpp
auto atexit = py::module_::import("atexit");
atexit.attr("register")(py::cpp_function([]() {
// perform cleanup here -- this function is called with the GIL held
}));
.. [#f7] https://docs.python.org/3/library/atexit.html
Generating documentation using Sphinx
=====================================
Sphinx [#f4]_ has the ability to inspect the signatures and documentation
strings in pybind11-based extension modules to automatically generate beautiful
documentation in a variety of formats. The python_example repository [#f5]_ contains a
simple example which uses this approach.
There are two potential gotchas when using this approach: first, make sure that
the resulting strings do not contain any :kbd:`TAB` characters, which break the
docstring parsing routines. You may want to use C++11 raw string literals,
which are convenient for multi-line comments. Conveniently, any excess
indentation will automatically be removed by Sphinx. However, for this to
work, it is important that all lines are indented consistently, i.e.:
.. code-block:: cpp
// ok
m.def("foo", &foo, R"mydelimiter(
The foo function
Parameters
----------
)mydelimiter");
// *not ok*
m.def("foo", &foo, R"mydelimiter(The foo function
Parameters
----------
)mydelimiter");
By default, pybind11 automatically generates and prepends a signature to the docstring of a function
registered with ``module_::def()`` and ``class_::def()``. Sometimes this
behavior is not desirable, because you want to provide your own signature or remove
the docstring completely to exclude the function from the Sphinx documentation.
The class ``options`` allows you to selectively suppress auto-generated signatures:
.. code-block:: cpp
PYBIND11_MODULE(example, m) {
py::options options;
options.disable_function_signatures();
m.def("add", [](int a, int b) { return a + b; }, "A function which adds two numbers");
}
Note that changes to the settings affect only function bindings created during the
lifetime of the ``options`` instance. When it goes out of scope at the end of the module's init function,
the default settings are restored to prevent unwanted side effects.
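A hedged sketch of that scoping behaviour (the binding names are made up): the
signature suppression applies only inside the braced block.
.. code-block:: cpp
PYBIND11_MODULE(example, m) {
    {
        py::options options;
        options.disable_function_signatures();
        m.def("undocumented_add", [](int a, int b) { return a + b; },
              "Adds two numbers (signature suppressed).");
    }   // `options` goes out of scope here; the default settings are restored
    m.def("documented_add", [](int a, int b) { return a + b; },
          "Adds two numbers (signature included).");
}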
.. [#f4] http://www.sphinx-doc.org
.. [#f5] http://github.com/pybind/python_example
.. _avoiding-cpp-types-in-docstrings:
Avoiding C++ types in docstrings
================================
Docstrings are generated at the time of the declaration, e.g. when ``.def(...)`` is called.
At this point parameter and return types should be known to pybind11.
If a custom type is not exposed yet through a ``py::class_`` constructor or a custom type caster,
its C++ type name will be used instead to generate the signature in the docstring:
.. code-block:: text
| __init__(...)
| __init__(self: example.Foo, arg0: ns::Bar) -> None
^^^^^^^
This limitation can be circumvented by ensuring that C++ classes are registered with pybind11
before they are used as a parameter or return type of a function:
.. code-block:: cpp
PYBIND11_MODULE(example, m) {
auto pyFoo = py::class_<ns::Foo>(m, "Foo");
auto pyBar = py::class_<ns::Bar>(m, "Bar");
pyFoo.def(py::init<const ns::Bar&>());
pyBar.def(py::init<const ns::Foo&>());
}


@ -0,0 +1,13 @@
Python C++ interface
####################
pybind11 exposes Python types and functions using thin C++ wrappers, which
makes it possible to conveniently call Python code from C++ without resorting
to Python's C API.
.. toctree::
:maxdepth: 2
object
numpy
utilities


@ -0,0 +1,455 @@
.. _numpy:
NumPy
#####
Buffer protocol
===============
Python supports an extremely general and convenient approach for exchanging
data between plugin libraries. Types can expose a buffer view [#f2]_, which
provides fast direct access to the raw internal data representation. Suppose we
want to bind the following simplistic Matrix class:
.. code-block:: cpp
class Matrix {
public:
Matrix(size_t rows, size_t cols) : m_rows(rows), m_cols(cols) {
m_data = new float[rows*cols];
}
float *data() { return m_data; }
size_t rows() const { return m_rows; }
size_t cols() const { return m_cols; }
private:
size_t m_rows, m_cols;
float *m_data;
};
The following binding code exposes the ``Matrix`` contents as a buffer object,
making it possible to cast Matrices into NumPy arrays. It is even possible to
completely avoid copy operations with Python expressions like
``np.array(matrix_instance, copy = False)``.
.. code-block:: cpp
py::class_<Matrix>(m, "Matrix", py::buffer_protocol())
.def_buffer([](Matrix &m) -> py::buffer_info {
return py::buffer_info(
m.data(), /* Pointer to buffer */
sizeof(float), /* Size of one scalar */
py::format_descriptor<float>::format(), /* Python struct-style format descriptor */
2, /* Number of dimensions */
{ m.rows(), m.cols() }, /* Buffer dimensions */
{ sizeof(float) * m.cols(), /* Strides (in bytes) for each index */
sizeof(float) }
);
});
Supporting the buffer protocol in a new type involves specifying the special
``py::buffer_protocol()`` tag in the ``py::class_`` constructor and calling the
``def_buffer()`` method with a lambda function that creates a
``py::buffer_info`` description record on demand describing a given matrix
instance. The contents of ``py::buffer_info`` mirror the Python buffer protocol
specification.
.. code-block:: cpp
struct buffer_info {
void *ptr;
py::ssize_t itemsize;
std::string format;
py::ssize_t ndim;
std::vector<py::ssize_t> shape;
std::vector<py::ssize_t> strides;
};
To create a C++ function that can take a Python buffer object as an argument,
simply use the type ``py::buffer`` as one of its arguments. Buffers can exist
in a great variety of configurations, hence some safety checks are usually
necessary in the function body. Below, you can see a basic example on how to
define a custom constructor for the Eigen double precision matrix
(``Eigen::MatrixXd``) type, which supports initialization from compatible
buffer objects (e.g. a NumPy matrix).
.. code-block:: cpp
/* Bind MatrixXd (or some other Eigen type) to Python */
typedef Eigen::MatrixXd Matrix;
typedef Matrix::Scalar Scalar;
constexpr bool rowMajor = Matrix::Flags & Eigen::RowMajorBit;
py::class_<Matrix>(m, "Matrix", py::buffer_protocol())
.def(py::init([](py::buffer b) {
typedef Eigen::Stride<Eigen::Dynamic, Eigen::Dynamic> Strides;
/* Request a buffer descriptor from Python */
py::buffer_info info = b.request();
/* Some basic validation checks ... */
if (info.format != py::format_descriptor<Scalar>::format())
throw std::runtime_error("Incompatible format: expected a double array!");
if (info.ndim != 2)
throw std::runtime_error("Incompatible buffer dimension!");
auto strides = Strides(
info.strides[rowMajor ? 0 : 1] / (py::ssize_t)sizeof(Scalar),
info.strides[rowMajor ? 1 : 0] / (py::ssize_t)sizeof(Scalar));
auto map = Eigen::Map<Matrix, 0, Strides>(
static_cast<Scalar *>(info.ptr), info.shape[0], info.shape[1], strides);
return Matrix(map);
}));
For reference, the ``def_buffer()`` call for this Eigen data type should look
as follows:
.. code-block:: cpp
.def_buffer([](Matrix &m) -> py::buffer_info {
return py::buffer_info(
m.data(), /* Pointer to buffer */
sizeof(Scalar), /* Size of one scalar */
py::format_descriptor<Scalar>::format(), /* Python struct-style format descriptor */
2, /* Number of dimensions */
{ m.rows(), m.cols() }, /* Buffer dimensions */
{ sizeof(Scalar) * (rowMajor ? m.cols() : 1),
sizeof(Scalar) * (rowMajor ? 1 : m.rows()) }
/* Strides (in bytes) for each index */
);
})
For a much easier approach of binding Eigen types (although with some
limitations), refer to the section on :doc:`/advanced/cast/eigen`.
.. seealso::
The file :file:`tests/test_buffers.cpp` contains a complete example
that demonstrates using the buffer protocol with pybind11 in more detail.
.. [#f2] http://docs.python.org/3/c-api/buffer.html
Arrays
======
By exchanging ``py::buffer`` with ``py::array`` in the above snippet, we can
restrict the function so that it only accepts NumPy arrays (rather than any
type of Python object satisfying the buffer protocol).
In many situations, we want to define a function which only accepts a NumPy
array of a certain data type. This is possible via the ``py::array_t<T>``
template. For instance, the following function requires the argument to be a
NumPy array containing double precision values.
.. code-block:: cpp
void f(py::array_t<double> array);
When it is invoked with a different type (e.g. an integer or a list of
integers), the binding code will attempt to cast the input into a NumPy array
of the requested type. This feature requires the :file:`pybind11/numpy.h`
header to be included. Note that :file:`pybind11/numpy.h` does not depend on
the NumPy headers, and thus can be used without declaring a build-time
dependency on NumPy; NumPy>=1.7.0 is a runtime dependency.
Data in NumPy arrays is not guaranteed to be packed in a dense manner;
furthermore, entries can be separated by arbitrary column and row strides.
Sometimes, it can be useful to require a function to only accept dense arrays
using either the C (row-major) or Fortran (column-major) ordering. This can be
accomplished via a second template argument with values ``py::array::c_style``
or ``py::array::f_style``.
.. code-block:: cpp
void f(py::array_t<double, py::array::c_style | py::array::forcecast> array);
The ``py::array::forcecast`` argument is the default value of the second
template parameter, and it ensures that non-conforming arguments are converted
into an array satisfying the specified requirements instead of trying the next
function overload.
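A hedged sketch of that fall-through behaviour (the binding name ``process`` is
made up): without ``py::array::forcecast``, a non-conforming argument skips the
first overload and reaches the generic one.
.. code-block:: cpp
m.def("process", [](py::array_t<double, py::array::c_style> a) { return "dense, C-ordered double array"; });
m.def("process", [](py::object o) { return "anything else"; });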
There are several methods on arrays; the reference-returning methods listed further
below under *Direct access* work, as well as the following functions based on the
NumPy API (see the sketch after this list):
- ``.dtype()`` returns the type of the contained values.
- ``.strides()`` returns a pointer to the strides of the array (optionally pass
an integer axis to get a number).
- ``.flags()`` returns the flag settings. ``.writable()`` and ``.owndata()``
are directly available.
- ``.offset_at()`` returns the offset (optionally pass indices).
- ``.squeeze()`` returns a view with length-1 axes removed.
- ``.view(dtype)`` returns a view of the array with a different dtype.
- ``.reshape({i, j, ...})`` returns a view of the array with a different shape.
``.resize({...})`` is also available.
- ``.index_at(i, j, ...)`` gets the count from the beginning to a given index.
There are also several methods for getting references (described below).
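A hedged sketch exercising a few of these methods (the binding name
``describe_array`` is made up):
.. code-block:: cpp
m.def("describe_array", [](py::array a) {
    py::print("dtype:", a.dtype());
    py::print("ndim:", a.ndim(), "size:", a.size(), "nbytes:", a.nbytes());
    py::print("writeable:", a.writeable(), "owndata:", a.owndata());
    return a.squeeze();   // view with any length-1 axes removed
});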
Structured types
================
In order for ``py::array_t`` to work with structured (record) types, we first
need to register the memory layout of the type. This can be done via the
``PYBIND11_NUMPY_DTYPE`` macro, called in the plugin definition code, which
expects the type followed by field names:
.. code-block:: cpp
struct A {
int x;
double y;
};
struct B {
int z;
A a;
};
// ...
PYBIND11_MODULE(test, m) {
// ...
PYBIND11_NUMPY_DTYPE(A, x, y);
PYBIND11_NUMPY_DTYPE(B, z, a);
/* now both A and B can be used as template arguments to py::array_t */
}
The structure should consist of fundamental arithmetic types, ``std::complex``,
previously registered substructures, and arrays of any of the above. Both C++
arrays and ``std::array`` are supported. While there is a static assertion to
prevent many types of unsupported structures, it is still the user's
responsibility to use only "plain" structures that can be safely manipulated as
raw memory without violating invariants.
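A hedged sketch using the registered record type ``A`` from above (the binding
name ``sum_x`` is made up):
.. code-block:: cpp
m.def("sum_x", [](py::array_t<A> arr) {
    auto r = arr.unchecked<1>();   // 1-D array of A records
    int total = 0;
    for (py::ssize_t i = 0; i < r.shape(0); i++)
        total += r(i).x;
    return total;
});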
Vectorizing functions
=====================
Suppose we want to bind a function with the following signature to Python so
that it can process arbitrary NumPy array arguments (vectors, matrices, general
N-D arrays) in addition to its normal arguments:
.. code-block:: cpp
double my_func(int x, float y, double z);
After including the ``pybind11/numpy.h`` header, this is extremely simple:
.. code-block:: cpp
m.def("vectorized_func", py::vectorize(my_func));
Invoking the function as shown below causes four calls to be made to ``my_func``,
one for each of the array elements. The significant advantage of this compared to
solutions like ``numpy.vectorize()`` is that the loop over the elements runs
entirely on the C++ side and can be crunched down into a tight, optimized loop
by the compiler. The result is returned as a NumPy array of type
``numpy.dtype.float64``.
.. code-block:: pycon
>>> x = np.array([[1, 3], [5, 7]])
>>> y = np.array([[2, 4], [6, 8]])
>>> z = 3
>>> result = vectorized_func(x, y, z)
The scalar argument ``z`` is transparently replicated 4 times. The input
arrays ``x`` and ``y`` are automatically converted into the right types (they
are of type ``numpy.dtype.int64`` but need to be ``numpy.dtype.int32`` and
``numpy.dtype.float32``, respectively).
.. note::
Only arithmetic, complex, and POD types passed by value or by ``const &``
reference are vectorized; all other arguments are passed through as-is.
Functions taking rvalue reference arguments cannot be vectorized.
In cases where the computation is too complicated to be reduced to
``vectorize``, it will be necessary to create and access the buffer contents
manually. The following snippet contains a complete example that shows how this
works (the code is somewhat contrived, since it could have been done more
simply using ``vectorize``).
.. code-block:: cpp
#include <pybind11/pybind11.h>
#include <pybind11/numpy.h>
namespace py = pybind11;
py::array_t<double> add_arrays(py::array_t<double> input1, py::array_t<double> input2) {
py::buffer_info buf1 = input1.request(), buf2 = input2.request();
if (buf1.ndim != 1 || buf2.ndim != 1)
throw std::runtime_error("Number of dimensions must be one");
if (buf1.size != buf2.size)
throw std::runtime_error("Input shapes must match");
/* No pointer is passed, so NumPy will allocate the buffer */
auto result = py::array_t<double>(buf1.size);
py::buffer_info buf3 = result.request();
double *ptr1 = static_cast<double *>(buf1.ptr);
double *ptr2 = static_cast<double *>(buf2.ptr);
double *ptr3 = static_cast<double *>(buf3.ptr);
for (size_t idx = 0; idx < buf1.shape[0]; idx++)
ptr3[idx] = ptr1[idx] + ptr2[idx];
return result;
}
PYBIND11_MODULE(test, m) {
m.def("add_arrays", &add_arrays, "Add two NumPy arrays");
}
.. seealso::
The file :file:`tests/test_numpy_vectorize.cpp` contains a complete
example that demonstrates using :func:`vectorize` in more detail.
Direct access
=============
For performance reasons, particularly when dealing with very large arrays, it
is often desirable to directly access array elements without internal checking
of dimensions and bounds on every access when indices are known to be already
valid. To avoid such checks, the ``array`` class and ``array_t<T>`` template
class offer an unchecked proxy object that can be used for this unchecked
access through the ``unchecked<N>`` and ``mutable_unchecked<N>`` methods,
where ``N`` gives the required dimensionality of the array:
.. code-block:: cpp
m.def("sum_3d", [](py::array_t<double> x) {
auto r = x.unchecked<3>(); // x must have ndim = 3; can be non-writeable
double sum = 0;
for (py::ssize_t i = 0; i < r.shape(0); i++)
for (py::ssize_t j = 0; j < r.shape(1); j++)
for (py::ssize_t k = 0; k < r.shape(2); k++)
sum += r(i, j, k);
return sum;
});
m.def("increment_3d", [](py::array_t<double> x) {
auto r = x.mutable_unchecked<3>(); // Will throw if ndim != 3 or flags.writeable is false
for (py::ssize_t i = 0; i < r.shape(0); i++)
for (py::ssize_t j = 0; j < r.shape(1); j++)
for (py::ssize_t k = 0; k < r.shape(2); k++)
r(i, j, k) += 1.0;
}, py::arg().noconvert());
To obtain the proxy from an ``array`` object, you must specify both the data
type and number of dimensions as template arguments, such as ``auto r =
myarray.mutable_unchecked<float, 2>()``.
If the number of dimensions is not known at compile time, you can omit the
dimensions template parameter (i.e. calling ``arr_t.unchecked()`` or
``arr.unchecked<T>()``). This will give you a proxy object that works in the
same way, but results in less optimizable code and thus a small efficiency
loss in tight loops.
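For illustration, here is a sketch using the runtime-dimension proxy (the function name ``sum_or_trace`` is made up for this example):
.. code-block:: cpp

    m.def("sum_or_trace", [](py::array_t<double> x) {
        auto r = x.unchecked();                       // dimension count known only at runtime
        double acc = 0;
        if (r.ndim() == 1) {
            for (py::ssize_t i = 0; i < r.shape(0); i++)
                acc += r(i);                          // sum of a vector
        } else if (r.ndim() == 2) {
            for (py::ssize_t i = 0; i < r.shape(0) && i < r.shape(1); i++)
                acc += r(i, i);                       // trace of a matrix
        } else {
            throw std::runtime_error("Expected a 1-D or 2-D array");
        }
        return acc;
    });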
Note that the returned proxy object directly references the array's data, and
only reads its shape, strides, and writeable flag when constructed. You must
take care to ensure that the referenced array is not destroyed or reshaped for
the duration of the returned object, typically by limiting the scope of the
returned instance.
The returned proxy object supports some of the same methods as ``py::array`` so
that it can be used as a drop-in replacement for some existing, index-checked
uses of ``py::array``:
- ``.ndim()`` returns the number of dimensions.
- ``.data(1, 2, ...)`` and ``.mutable_data(1, 2, ...)`` return a pointer to
the ``const T`` or ``T`` data, respectively, at the given indices. The
latter is only available to proxies obtained via ``a.mutable_unchecked()``.
- ``.itemsize()`` returns the size of an item in bytes, i.e. ``sizeof(T)``.
- ``.shape(n)`` returns the size of dimension ``n``
- ``.size()`` returns the total number of elements (i.e. the product of the shapes).
- ``.nbytes()`` returns the number of bytes used by the referenced elements
(i.e. ``itemsize()`` times ``size()``).
.. seealso::
The file :file:`tests/test_numpy_array.cpp` contains additional examples
demonstrating the use of this feature.
Ellipsis
========
Python provides a convenient ``...`` ellipsis notation that is often used to
slice multidimensional arrays. For instance, the following snippet extracts the
middle dimensions of a tensor with the first and last index set to zero.
.. code-block:: python
a = ... # a NumPy array
b = a[0, ..., 0]
The ``py::ellipsis()`` function can be used to perform the same
operation on the C++ side:
.. code-block:: cpp
py::array a = /* A NumPy array */;
py::array b = a[py::make_tuple(0, py::ellipsis(), 0)];
Memory view
===========
When we simply want to provide direct access to a C/C++ buffer without a
concrete class object, we can return a ``memoryview`` object. Suppose we wish to
expose a ``memoryview`` for a 2x4 ``uint8_t`` array; we can do the following:
.. code-block:: cpp
const uint8_t buffer[] = {
0, 1, 2, 3,
4, 5, 6, 7
};
m.def("get_memoryview2d", []() {
return py::memoryview::from_buffer(
buffer, // buffer pointer
{ 2, 4 }, // shape (rows, cols)
{ sizeof(uint8_t) * 4, sizeof(uint8_t) } // strides in bytes
);
});
This approach is meant for providing a ``memoryview`` for a C/C++ buffer not
managed by Python. The user is responsible for managing the lifetime of the
buffer. Using a ``memoryview`` created in this way after deleting the buffer on
the C++ side results in undefined behavior.
We can also use ``memoryview::from_memory`` for a simple 1D contiguous buffer:
.. code-block:: cpp
m.def("get_memoryview1d", []() {
return py::memoryview::from_memory(
buffer, // buffer pointer
sizeof(uint8_t) * 8 // buffer size
);
});
.. versionchanged:: 2.6
``memoryview::from_memory`` added.


@ -0,0 +1,286 @@
Python types
############
.. _wrappers:
Available wrappers
==================
All major Python types are available as thin C++ wrapper classes. These
can also be used as function parameters -- see :ref:`python_objects_as_args`.
Available types include :class:`handle`, :class:`object`, :class:`bool_`,
:class:`int_`, :class:`float_`, :class:`str`, :class:`bytes`, :class:`tuple`,
:class:`list`, :class:`dict`, :class:`slice`, :class:`none`, :class:`capsule`,
:class:`iterable`, :class:`iterator`, :class:`function`, :class:`buffer`,
:class:`array`, and :class:`array_t`.
.. warning::
Be sure to review the :ref:`pytypes_gotchas` before using this heavily in
your C++ API.
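As a brief, illustrative sketch of using one of these wrappers as a bound-function parameter (the function name ``print_keys`` is made up):
.. code-block:: cpp

    // A py::dict parameter; pybind11 checks the argument type at call time
    m.def("print_keys", [](const py::dict &d) {
        for (auto item : d)
            py::print(item.first);   // each item is a (key, value) pair of handles
    });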
.. _instantiating_compound_types:
Instantiating compound Python types from C++
============================================
Dictionaries can be initialized in the :class:`dict` constructor:
.. code-block:: cpp
using namespace pybind11::literals; // to bring in the `_a` literal
py::dict d("spam"_a=py::none(), "eggs"_a=42);
A tuple of python objects can be instantiated using :func:`py::make_tuple`:
.. code-block:: cpp
py::tuple tup = py::make_tuple(42, py::none(), "spam");
Each element is converted to a supported Python type.
A `simple namespace`_ can be instantiated using
.. code-block:: cpp
using namespace pybind11::literals; // to bring in the `_a` literal
py::object SimpleNamespace = py::module_::import("types").attr("SimpleNamespace");
py::object ns = SimpleNamespace("spam"_a=py::none(), "eggs"_a=42);
Attributes on a namespace can be modified with the :func:`py::delattr`,
:func:`py::getattr`, and :func:`py::setattr` functions. Simple namespaces can
be useful as lightweight stand-ins for class instances.
.. _simple namespace: https://docs.python.org/3/library/types.html#types.SimpleNamespace
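For example, the attributes of the namespace created above could be adjusted as follows (a minimal sketch):
.. code-block:: cpp

    py::setattr(ns, "spam", py::str("ham"));    // ns.spam = "ham"
    py::object eggs = py::getattr(ns, "eggs");  // read ns.eggs
    py::delattr(ns, "eggs");                    // del ns.eggs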
.. _casting_back_and_forth:
Casting back and forth
======================
In this kind of mixed code, it is often necessary to convert arbitrary C++
types to Python, which can be done using :func:`py::cast`:
.. code-block:: cpp
MyClass *cls = ...;
py::object obj = py::cast(cls);
The reverse direction uses the following syntax:
.. code-block:: cpp
py::object obj = ...;
MyClass *cls = obj.cast<MyClass *>();
When conversion fails, both directions throw the exception :class:`cast_error`.
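A failed conversion can therefore be guarded with an ordinary try/catch block, as in this sketch (``MyClass`` and ``obj`` as in the snippets above):
.. code-block:: cpp

    try {
        MyClass *cls = obj.cast<MyClass *>();
        // ... use cls ...
    } catch (const py::cast_error &) {
        // obj does not wrap a MyClass instance
    }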
.. _python_libs:
Accessing Python libraries from C++
===================================
It is also possible to import objects defined in the Python standard
library or available in the current Python environment (``sys.path``) and work
with these in C++.
This example obtains a reference to the Python ``Decimal`` class.
.. code-block:: cpp
// Equivalent to "from decimal import Decimal"
py::object Decimal = py::module_::import("decimal").attr("Decimal");
.. code-block:: cpp
// Try to import scipy
py::object scipy = py::module_::import("scipy");
return scipy.attr("__version__");
.. _calling_python_functions:
Calling Python functions
========================
It is also possible to call Python classes, functions and methods
via ``operator()``.
.. code-block:: cpp
// Construct a Python object of class Decimal
py::object pi = Decimal("3.14159");
.. code-block:: cpp
// Use Python to make our directories
py::object os = py::module_::import("os");
py::object makedirs = os.attr("makedirs");
makedirs("/tmp/path/to/somewhere");
One can convert the result obtained from Python to a pure C++ version
if a ``py::class_`` or type conversion is defined.
.. code-block:: cpp
py::function f = <...>;
py::object result_py = f(1234, "hello", some_instance);
MyClass &result = result_py.cast<MyClass>();
.. _calling_python_methods:
Calling Python methods
========================
To call an object's method, one can again use ``.attr`` to obtain access to the
Python method.
.. code-block:: cpp
// Calculate e^π in decimal
py::object exp_pi = pi.attr("exp")();
py::print(py::str(exp_pi));
In the example above ``pi.attr("exp")`` is a *bound method*: it will always call
the method for that same instance of the class. Alternately one can create an
*unbound method* via the Python class (instead of instance) and pass the ``self``
object explicitly, followed by other arguments.
.. code-block:: cpp
py::object decimal_exp = Decimal.attr("exp");
// Compute the e^n for n=0..4
for (int n = 0; n < 5; n++) {
py::print(decimal_exp(Decimal(n)));
}
Keyword arguments
=================
Keyword arguments are also supported. In Python, there is the usual call syntax:
.. code-block:: python
def f(number, say, to):
... # function code
f(1234, say="hello", to=some_instance) # keyword call in Python
In C++, the same call can be made using:
.. code-block:: cpp
using namespace pybind11::literals; // to bring in the `_a` literal
f(1234, "say"_a="hello", "to"_a=some_instance); // keyword call in C++
Unpacking arguments
===================
Unpacking of ``*args`` and ``**kwargs`` is also possible and can be mixed with
other arguments:
.. code-block:: cpp
// * unpacking
py::tuple args = py::make_tuple(1234, "hello", some_instance);
f(*args);
// ** unpacking
py::dict kwargs = py::dict("number"_a=1234, "say"_a="hello", "to"_a=some_instance);
f(**kwargs);
// mixed keywords, * and ** unpacking
py::tuple args = py::make_tuple(1234);
py::dict kwargs = py::dict("to"_a=some_instance);
f(*args, "say"_a="hello", **kwargs);
Generalized unpacking according to PEP448_ is also supported:
.. code-block:: cpp
py::dict kwargs1 = py::dict("number"_a=1234);
py::dict kwargs2 = py::dict("to"_a=some_instance);
f(**kwargs1, "say"_a="hello", **kwargs2);
.. seealso::
The file :file:`tests/test_pytypes.cpp` contains a complete
example that demonstrates passing native Python types in more detail. The
file :file:`tests/test_callbacks.cpp` presents a few examples of calling
Python functions from C++, including keyword arguments and unpacking.
.. _PEP448: https://www.python.org/dev/peps/pep-0448/
.. _implicit_casting:
Implicit casting
================
When using the C++ interface for Python types, or calling Python functions,
objects of type :class:`object` are returned. It is possible to invoke implicit
conversions to subclasses like :class:`dict`. The same holds for the proxy objects
returned by ``operator[]`` or ``obj.attr()``.
Casting to subtypes improves code readability and allows values to be passed to
C++ functions that require a specific subtype rather than a generic :class:`object`.
.. code-block:: cpp
#include <pybind11/numpy.h>
using namespace pybind11::literals;
py::module_ os = py::module_::import("os");
py::module_ path = py::module_::import("os.path"); // like 'import os.path as path'
py::module_ np = py::module_::import("numpy"); // like 'import numpy as np'
py::str curdir_abs = path.attr("abspath")(path.attr("curdir"));
py::print(py::str("Current directory: ") + curdir_abs);
py::dict environ = os.attr("environ");
py::print(environ["HOME"]);
py::array_t<float> arr = np.attr("ones")(3, "dtype"_a="float32");
py::print(py::repr(arr + py::int_(1)));
These implicit conversions are available for subclasses of :class:`object`; there
is no need to call ``obj.cast()`` explicitly as for custom classes, see
:ref:`casting_back_and_forth`.
.. note::
If a trivial conversion via move constructor is not possible, both implicit and
explicit casting (calling ``obj.cast()``) will attempt a "rich" conversion.
For instance, ``py::list env = os.attr("environ");`` will succeed and is
equivalent to the Python code ``env = list(os.environ)`` that produces a
list of the dict keys.
.. TODO: Adapt text once PR #2349 has landed
Handling exceptions
===================
Python exceptions from wrapper classes will be thrown as a ``py::error_already_set``.
See :ref:`Handling exceptions from Python in C++
<handling_python_exceptions_cpp>` for more information on handling exceptions
raised when calling C++ wrapper classes.
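As a sketch, an exception raised on the Python side of a wrapper call can be caught and inspected like this (the environment variable name is made up):
.. code-block:: cpp

    try {
        py::dict environ = py::module_::import("os").attr("environ");
        py::print(environ["SOME_UNSET_VARIABLE"]);   // raises KeyError in Python
    } catch (py::error_already_set &e) {
        if (!e.matches(PyExc_KeyError))
            throw;
        // handle the missing key here
    }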
.. _pytypes_gotchas:
Gotchas
=======
Default-Constructed Wrappers
----------------------------
When a wrapper type is default-constructed, it is **not** a valid Python object
(i.e. it is not ``py::none()``); it is simply a null ``PyObject*`` pointer. To check
whether a wrapper holds a valid object, use ``static_cast<bool>(my_wrapper)``.
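A short sketch of the check (``<cassert>`` is assumed for the asserts):
.. code-block:: cpp

    py::object obj;                    // default-constructed: wraps a null PyObject*
    assert(!static_cast<bool>(obj));   // not a valid Python object (and not None)
    obj = py::none();
    assert(static_cast<bool>(obj));    // now a valid reference to None
    assert(obj.is_none());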
Assigning py::none() to wrappers
--------------------------------
You may be tempted to use types like ``py::str`` and ``py::dict`` in C++
signatures (either pure C++, or in bound signatures), and assign them default
values of ``py::none()``. However, in the best case this will fail fast because
``None`` is not convertible to that type (e.g. ``py::dict``); in the worst case it
will silently work but corrupt the types you want to work with (e.g.
``py::str(py::none())`` yields the string ``"None"`` in Python).
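One possible workaround (a sketch only; ``process`` and ``maybe_dict`` are made-up names) is to accept a plain ``py::object`` defaulted to ``py::none()`` and convert inside the function:
.. code-block:: cpp

    m.def("process", [](const py::object &maybe_dict) {
        py::dict d = maybe_dict.is_none() ? py::dict() : maybe_dict.cast<py::dict>();
        // ... work with d ...
    }, py::arg("maybe_dict") = py::none());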


@ -0,0 +1,155 @@
Utilities
#########
Using Python's print function in C++
====================================
The usual way to write output in C++ is using ``std::cout`` while in Python one
would use ``print``. Since these methods use different buffers, mixing them can
lead to output order issues. To resolve this, pybind11 modules can use the
:func:`py::print` function which writes to Python's ``sys.stdout`` for consistency.
Python's ``print`` function is replicated in the C++ API including optional
keyword arguments ``sep``, ``end``, ``file``, ``flush``. Everything works as
expected in Python:
.. code-block:: cpp
py::print(1, 2.0, "three"); // 1 2.0 three
py::print(1, 2.0, "three", "sep"_a="-"); // 1-2.0-three
auto args = py::make_tuple("unpacked", true);
py::print("->", *args, "end"_a="<-"); // -> unpacked True <-
.. _ostream_redirect:
Capturing standard output from ostream
======================================
Often, a library will use the streams ``std::cout`` and ``std::cerr`` to print,
but this does not play well with Python's standard ``sys.stdout`` and ``sys.stderr``
redirection. Replacing a library's printing with ``py::print <print>`` may not
be feasible. This can be fixed using a guard around the library function that
redirects output to the corresponding Python streams:
.. code-block:: cpp
#include <pybind11/iostream.h>
...
// Add a scoped redirect for your noisy code
m.def("noisy_func", []() {
py::scoped_ostream_redirect stream(
std::cout, // std::ostream&
py::module_::import("sys").attr("stdout") // Python output
);
call_noisy_func();
});
.. warning::
The implementation in ``pybind11/iostream.h`` is NOT thread safe. Multiple
threads writing to a redirected ostream concurrently cause data races
and potentially buffer overflows. Therefore it is currently a requirement
that all (possibly) concurrent redirected ostream writes are protected by
a mutex. #HelpAppreciated: Work on iostream.h thread safety. For more
background see the discussions under
`PR #2982 <https://github.com/pybind/pybind11/pull/2982>`_ and
`PR #2995 <https://github.com/pybind/pybind11/pull/2995>`_.
This method respects flushes on the output streams and will flush if needed
when the scoped guard is destroyed. This allows the output to be redirected in
real time, such as to a Jupyter notebook. The two arguments, the C++ stream and
the Python output, are optional, and default to standard output if not given. An
extra type, ``py::scoped_estream_redirect <scoped_estream_redirect>``, is identical
except for defaulting to ``std::cerr`` and ``sys.stderr``; this can be useful with
``py::call_guard``, which allows multiple items, but uses the default constructor:
.. code-block:: cpp
// Alternative: Call single function using call guard
m.def("noisy_func", &call_noisy_function,
py::call_guard<py::scoped_ostream_redirect,
py::scoped_estream_redirect>());
The redirection can also be done in Python with the addition of a context
manager, using the ``py::add_ostream_redirect() <add_ostream_redirect>`` function:
.. code-block:: cpp
py::add_ostream_redirect(m, "ostream_redirect");
The name in Python defaults to ``ostream_redirect`` if no name is passed. This
creates the following context manager in Python:
.. code-block:: python
with ostream_redirect(stdout=True, stderr=True):
noisy_function()
It defaults to redirecting both streams, though you can use the keyword
arguments to disable one of the streams if needed.
.. note::
The above methods will not redirect C-level output to file descriptors, such
as ``fprintf``. For those cases, you'll need to redirect the file
descriptors either directly in C or with Python's ``os.dup2`` function
in an operating-system dependent way.
.. _eval:
Evaluating Python expressions from strings and files
====================================================
pybind11 provides the ``eval``, ``exec`` and ``eval_file`` functions to evaluate
Python expressions and statements. The following example illustrates how they
can be used.
.. code-block:: cpp
// At beginning of file
#include <pybind11/eval.h>
...
// Evaluate in scope of main module
py::object scope = py::module_::import("__main__").attr("__dict__");
// Evaluate an isolated expression
int result = py::eval("my_variable + 10", scope).cast<int>();
// Evaluate a sequence of statements
py::exec(
"print('Hello')\n"
"print('world!');",
scope);
// Evaluate the statements in a separate Python file on disk
py::eval_file("script.py", scope);
C++11 raw string literals are also supported and quite handy for this purpose.
The only requirement is that the first statement must be on a new line following
the raw string delimiter ``R"(``, ensuring all lines have common leading indent:
.. code-block:: cpp
py::exec(R"(
x = get_answer()
if x == 42:
print('Hello World!')
else:
print('Bye!')
)", scope
);
.. note::
`eval` and `eval_file` accept a template parameter that describes how the
string/file should be interpreted. Possible choices include ``eval_expr``
(isolated expression), ``eval_single_statement`` (a single statement, return
value is always ``none``), and ``eval_statements`` (sequence of statements,
return value is always ``none``). `eval` defaults to ``eval_expr``,
`eval_file` defaults to ``eval_statements`` and `exec` is just a shortcut
for ``eval<eval_statements>``.
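For illustration, a sketch selecting the mode explicitly (``scope`` as defined above):
.. code-block:: cpp

    py::eval<py::eval_single_statement>("y = 3", scope);  // returns none
    int y = py::eval("y", scope).cast<int>();             // eval defaults to eval_expr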


@ -0,0 +1,174 @@
Smart pointers
##############
std::unique_ptr
===============
Given a class ``Example`` with Python bindings, it's possible to return
instances wrapped in C++11 unique pointers, like so
.. code-block:: cpp
std::unique_ptr<Example> create_example() { return std::unique_ptr<Example>(new Example()); }
.. code-block:: cpp
m.def("create_example", &create_example);
In other words, there is nothing special that needs to be done. While returning
unique pointers in this way is allowed, it is *illegal* to use them as function
arguments. For instance, the following function signature cannot be processed
by pybind11.
.. code-block:: cpp
void do_something_with_example(std::unique_ptr<Example> ex) { ... }
The above signature would imply that Python needs to give up ownership of an
object that is passed to this function, which is generally not possible (for
instance, the object might be referenced elsewhere).
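As a sketch, signatures along these lines can be processed instead (the function names are made up; the ``std::shared_ptr`` variant additionally requires ``Example`` to be bound with a ``std::shared_ptr`` holder, as described in the next section):
.. code-block:: cpp

    void borrow_example(Example &ex) { /* Python retains ownership */ }
    void share_example(std::shared_ptr<Example> ex) { /* shared ownership */ }

    m.def("borrow_example", &borrow_example);
    m.def("share_example", &share_example);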
std::shared_ptr
===============
The binding generator for classes, :class:`class_`, can be passed a template
type that denotes a special *holder* type that is used to manage references to
the object. If no such holder type template argument is given, the default for
a type named ``Type`` is ``std::unique_ptr<Type>``, which means that the object
is deallocated when Python's reference count goes to zero.
It is possible to switch to other types of reference counting wrappers or smart
pointers, which is useful in codebases that rely on them. For instance, the
following snippet causes ``std::shared_ptr`` to be used instead.
.. code-block:: cpp
py::class_<Example, std::shared_ptr<Example> /* <- holder type */> obj(m, "Example");
Note that any particular class can only be associated with a single holder type.
One potential stumbling block when using holder types is that they need to be
applied consistently. Can you guess what's broken about the following binding
code?
.. code-block:: cpp
class Child { };
class Parent {
public:
Parent() : child(std::make_shared<Child>()) { }
Child *get_child() { return child.get(); } /* Hint: ** DON'T DO THIS ** */
private:
std::shared_ptr<Child> child;
};
PYBIND11_MODULE(example, m) {
py::class_<Child, std::shared_ptr<Child>>(m, "Child");
py::class_<Parent, std::shared_ptr<Parent>>(m, "Parent")
.def(py::init<>())
.def("get_child", &Parent::get_child);
}
The following Python code will cause undefined behavior (and likely a
segmentation fault).
.. code-block:: python
from example import Parent
print(Parent().get_child())
The problem is that ``Parent::get_child()`` returns a pointer to an instance of
``Child``, but the fact that this instance is already managed by
``std::shared_ptr<...>`` is lost when passing raw pointers. In this case,
pybind11 will create a second independent ``std::shared_ptr<...>`` that also
claims ownership of the pointer. In the end, the object will be freed **twice**
since these shared pointers have no way of knowing about each other.
There are two ways to resolve this issue:
1. For types that are managed by a smart pointer class, never use raw pointers
in function arguments or return values. In other words: always consistently
wrap pointers into their designated holder types (such as
``std::shared_ptr<...>``). In this case, the signature of ``get_child()``
should be modified as follows:
.. code-block:: cpp
std::shared_ptr<Child> get_child() { return child; }
2. Adjust the definition of ``Child`` by specifying
``std::enable_shared_from_this<T>`` (see cppreference_ for details) as a
base class. This adds a small bit of information to ``Child`` that allows
pybind11 to realize that there is already an existing
``std::shared_ptr<...>`` and communicate with it. In this case, the
declaration of ``Child`` should look as follows:
.. _cppreference: http://en.cppreference.com/w/cpp/memory/enable_shared_from_this
.. code-block:: cpp
class Child : public std::enable_shared_from_this<Child> { };
.. _smart_pointers:
Custom smart pointers
=====================
pybind11 supports ``std::unique_ptr`` and ``std::shared_ptr`` right out of the
box. For any other custom smart pointer, transparent conversions can be enabled
using a macro invocation similar to the following. It must be declared at the
top namespace level before any binding code:
.. code-block:: cpp
PYBIND11_DECLARE_HOLDER_TYPE(T, SmartPtr<T>);
The first argument of :func:`PYBIND11_DECLARE_HOLDER_TYPE` should be a
placeholder name that is used as a template parameter of the second argument.
Thus, feel free to use any identifier, but use it consistently on both sides;
also, don't use the name of a type that already exists in your codebase.
The macro also accepts a third optional boolean parameter that is set to false
by default. Specify
.. code-block:: cpp
PYBIND11_DECLARE_HOLDER_TYPE(T, SmartPtr<T>, true);
if ``SmartPtr<T>`` can always be initialized from a ``T*`` pointer without the
risk of inconsistencies (such as multiple independent ``SmartPtr`` instances
believing that they are the sole owner of the ``T*`` pointer). A common
situation where ``true`` should be passed is when the ``T`` instances use
*intrusive* reference counting.
Please take a look at the :ref:`macro_notes` before using this feature.
By default, pybind11 assumes that your custom smart pointer has a standard
interface, i.e. provides a ``.get()`` member function to access the underlying
raw pointer. If this is not the case, pybind11's ``holder_helper`` must be
specialized:
.. code-block:: cpp
// Always needed for custom holder types
PYBIND11_DECLARE_HOLDER_TYPE(T, SmartPtr<T>);
// Only needed if the type's `.get()` goes by another name
namespace pybind11 { namespace detail {
template <typename T>
struct holder_helper<SmartPtr<T>> { // <-- specialization
static const T *get(const SmartPtr<T> &p) { return p.getPointer(); }
};
}}
The above specialization informs pybind11 that the custom ``SmartPtr`` class
provides ``.get()`` functionality via ``.getPointer()``.
.. seealso::
The file :file:`tests/test_smart_ptr.cpp` contains a complete example
that demonstrates how to work with custom reference-counting holder types
in more detail.

libs/pybind/docs/basics.rst

@ -0,0 +1,307 @@
.. _basics:
First steps
###########
This section demonstrates the basic features of pybind11. Before getting
started, make sure that a development environment is set up to compile the
included set of test cases.
Compiling the test cases
========================
Linux/macOS
-----------
On Linux you'll need to install the **python-dev** or **python3-dev** packages as
well as **cmake**. On macOS, the included Python version works out of the box,
but **cmake** must still be installed.
After installing the prerequisites, run
.. code-block:: bash
mkdir build
cd build
cmake ..
make check -j 4
The last line will both compile and run the tests.
Windows
-------
On Windows, only **Visual Studio 2017** and newer are supported.
.. Note::
To use C++17 in Visual Studio 2017 (MSVC 14.1), pybind11 requires the flag
``/permissive-`` to be passed to the compiler `to enforce standard conformance`_. When
building with Visual Studio 2019, this is not strictly necessary, but still advised.
.. _`to enforce standard conformance`: https://docs.microsoft.com/en-us/cpp/build/reference/permissive-standards-conformance?view=vs-2017
To compile and run the tests:
.. code-block:: batch
mkdir build
cd build
cmake ..
cmake --build . --config Release --target check
This will create a Visual Studio project, compile and run the target, all from the
command line.
.. Note::
If all tests fail, make sure that the Python binary and the testcases are compiled
for the same processor type and bitness (i.e. either **i386** or **x86_64**). You
can specify **x86_64** as the target architecture for the generated Visual Studio
project using ``cmake -A x64 ..``.
.. seealso::
Advanced users who are already familiar with Boost.Python may want to skip
the tutorial and look at the test cases in the :file:`tests` directory,
which exercise all features of pybind11.
Header and namespace conventions
================================
For brevity, all code examples assume that the following two lines are present:
.. code-block:: cpp
#include <pybind11/pybind11.h>
namespace py = pybind11;
Some features may require additional headers, but those will be specified as needed.
.. _simple_example:
Creating bindings for a simple function
=======================================
Let's start by creating Python bindings for an extremely simple function, which
adds two numbers and returns their result:
.. code-block:: cpp
int add(int i, int j) {
return i + j;
}
For simplicity [#f1]_, we'll put both this function and the binding code into
a file named :file:`example.cpp` with the following contents:
.. code-block:: cpp
#include <pybind11/pybind11.h>
int add(int i, int j) {
return i + j;
}
PYBIND11_MODULE(example, m) {
m.doc() = "pybind11 example plugin"; // optional module docstring
m.def("add", &add, "A function that adds two numbers");
}
.. [#f1] In practice, implementation and binding code will generally be located
in separate files.
The :func:`PYBIND11_MODULE` macro creates a function that will be called when an
``import`` statement is issued from within Python. The module name (``example``)
is given as the first macro argument (it should not be in quotes). The second
argument (``m``) defines a variable of type :class:`py::module_ <module>` which
is the main interface for creating bindings. The method :func:`module_::def`
generates binding code that exposes the ``add()`` function to Python.
.. note::
Notice how little code was needed to expose our function to Python: all
details regarding the function's parameters and return value were
automatically inferred using template metaprogramming. This overall
approach and the used syntax are borrowed from Boost.Python, though the
underlying implementation is very different.
pybind11 is a header-only library, hence it is not necessary to link against
any special libraries and there are no intermediate (magic) translation steps.
On Linux, the above example can be compiled using the following command:
.. code-block:: bash
$ c++ -O3 -Wall -shared -std=c++11 -fPIC $(python3 -m pybind11 --includes) example.cpp -o example$(python3-config --extension-suffix)
.. note::
If you used :ref:`include_as_a_submodule` to get the pybind11 source, then
use ``$(python3-config --includes) -Iextern/pybind11/include`` instead of
``$(python3 -m pybind11 --includes)`` in the above compilation, as
explained in :ref:`building_manually`.
For more details on the required compiler flags on Linux and macOS, see
:ref:`building_manually`. For complete cross-platform compilation instructions,
refer to the :ref:`compiling` page.
The `python_example`_ and `cmake_example`_ repositories are also a good place
to start. They are both complete project examples with cross-platform build
systems. The only difference between the two is that `python_example`_ uses
Python's ``setuptools`` to build the module, while `cmake_example`_ uses CMake
(which may be preferable for existing C++ projects).
.. _python_example: https://github.com/pybind/python_example
.. _cmake_example: https://github.com/pybind/cmake_example
Building the above C++ code will produce a binary module file that can be
imported to Python. Assuming that the compiled module is located in the
current directory, the following interactive Python session shows how to
load and execute the example:
.. code-block:: pycon
$ python
Python 3.9.10 (main, Jan 15 2022, 11:48:04)
[Clang 13.0.0 (clang-1300.0.29.3)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import example
>>> example.add(1, 2)
3
>>>
.. _keyword_args:
Keyword arguments
=================
With a simple code modification, it is possible to inform Python about the
names of the arguments ("i" and "j" in this case).
.. code-block:: cpp
m.def("add", &add, "A function which adds two numbers",
py::arg("i"), py::arg("j"));
:class:`arg` is one of several special tag classes which can be used to pass
metadata into :func:`module_::def`. With this modified binding code, we can now
call the function using keyword arguments, which is a more readable alternative
particularly for functions taking many parameters:
.. code-block:: pycon
>>> import example
>>> example.add(i=1, j=2)
3
The keyword names also appear in the function signatures within the documentation.
.. code-block:: pycon
>>> help(example)
....
FUNCTIONS
add(...)
Signature : (i: int, j: int) -> int
A function which adds two numbers
A shorter notation for named arguments is also available:
.. code-block:: cpp
// regular notation
m.def("add1", &add, py::arg("i"), py::arg("j"));
// shorthand
using namespace pybind11::literals;
m.def("add2", &add, "i"_a, "j"_a);
The :var:`_a` suffix forms a C++11 literal which is equivalent to :class:`arg`.
Note that the literal operator must first be made visible with the directive
``using namespace pybind11::literals``. This does not bring in anything else
from the ``pybind11`` namespace except for literals.
.. _default_args:
Default arguments
=================
Suppose now that the function to be bound has default arguments, e.g.:
.. code-block:: cpp
int add(int i = 1, int j = 2) {
return i + j;
}
Unfortunately, pybind11 cannot automatically extract these parameters, since they
are not part of the function's type information. However, they are simple to specify
using an extension of :class:`arg`:
.. code-block:: cpp
m.def("add", &add, "A function which adds two numbers",
py::arg("i") = 1, py::arg("j") = 2);
The default values also appear within the documentation.
.. code-block:: pycon
>>> help(example)
....
FUNCTIONS
add(...)
Signature : (i: int = 1, j: int = 2) -> int
A function which adds two numbers
The shorthand notation is also available for default arguments:
.. code-block:: cpp
// regular notation
m.def("add1", &add, py::arg("i") = 1, py::arg("j") = 2);
// shorthand
m.def("add2", &add, "i"_a=1, "j"_a=2);
Exporting variables
===================
To expose a value from C++, use the ``attr`` function to register it in a
module as shown below. Built-in types and general objects (more on that later)
are automatically converted when assigned as attributes, and can be explicitly
converted using the function ``py::cast``.
.. code-block:: cpp
PYBIND11_MODULE(example, m) {
m.attr("the_answer") = 42;
py::object world = py::cast("World");
m.attr("what") = world;
}
These are then accessible from Python:
.. code-block:: pycon
>>> import example
>>> example.the_answer
42
>>> example.what
'World'
.. _supported_types:
Supported data types
====================
A large number of data types are supported out of the box and can be used
seamlessly as function arguments, return values, or with ``py::cast`` in general.
For a full overview, see the :doc:`advanced/cast/index` section.
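As a small sketch (assuming the optional ``pybind11/stl.h`` header is included to enable automatic ``std::vector`` conversion, and that the code runs with the interpreter active):
.. code-block:: cpp

    #include <pybind11/stl.h>
    #include <vector>

    std::vector<int> v = {1, 2, 3};
    py::object obj = py::cast(v);                // C++ -> Python list
    auto back = obj.cast<std::vector<int>>();    // Python -> C++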

Some files were not shown because too many files have changed in this diff.