139 Commits

Author SHA1 Message Date
10e9398f29 Remove outdated windows_build and replace with a single version (checks failed: Release / build-ubuntu-latest failing after 2s; build-windows cancelled; release cancelled) 2025-10-06 18:15:07 +02:00
2d2f0ec5e4 Add plot command line option and method for projections 2025-10-06 17:59:09 +02:00
3b3f0a1823 ignore that cache decorator does not exist in python 3.8 2025-10-06 17:00:43 +02:00
20122eb32e fix attribute error for python <3.10 2025-10-06 16:57:38 +02:00
c68aa29141 deal with backport StrEnum not working with python <3.10 and update requirements.txt for backports 2025-10-06 16:53:52 +02:00
b638237f66 add git lfs to unit_test action 2025-10-06 16:46:33 +02:00
9ce902d82c Allow StrEnum from backports for python <3.11 2025-10-06 16:36:26 +02:00
a3dccb8429 Merge branch 'refactor' 2025-10-06 16:17:33 +02:00
b80a01325b Update version information for commit before merge of version 3 (checks failed: Release / build-ubuntu-latest failing after 2s; build-windows cancelled; release cancelled) 2025-10-06 16:12:47 +02:00
51cb3087cb fix 1e9 factor in time slicing 2025-10-06 16:10:14 +02:00
e39d8b108a update version info data 2025-10-06 15:46:27 +02:00
2985d10883 Fix q-calculation for incidentAngle flag 2025-10-06 15:45:35 +02:00
cec4fc2965 Modifications from discussion, some reduction issues remain (time slicing, q-calculation) 2025-10-06 15:34:21 +02:00
2467ba38b8 Move execution script from eos.py to package and rename package to eos, add entry point to setup 2025-10-06 11:38:52 +02:00
3a21a8505a Performance evaluation add TODOs 2025-10-06 10:10:41 +02:00
e9b860f7cd Fix numba dtypes to avoid unneeded conversions 2025-10-06 09:57:21 +02:00
8f2695f8a7 Reorganize event actions to separate numba functions 2025-10-06 09:54:17 +02:00
f6bda1160e Finalize implementation of new file_reader interface into reduction and change tests to run from base path 2025-10-06 09:46:03 +02:00
2a10597dc3 Update test case to new data, fix sample model from config parameter, add git attributes for lfs of test files 2025-10-06 09:17:41 +02:00
9f0de521ee Add sample model parsing from hdf 2025-10-06 08:19:49 +02:00
650730ce12 Move calculation codes out of reducer and add projection classes that handle histogramming 2025-10-03 18:31:36 +02:00
93405c880d Ensure normalization file load from file is valid by encoding all actions performed on file with input parameters 2025-10-03 11:40:56 +02:00
68f671f447 Perform reduction action for normalization with less steps directly in reducer class, prepare same for datasets 2025-10-02 18:26:05 +02:00
cb4415ad3d Separate PathResolver and Normalisation, prepare different event treatment for normalization and datafiles 2025-10-02 18:03:19 +02:00
2a89e58698 Dynamically extend event recarray and only filter events by setting mask values. Masks are now bitmasks to track which filter applied where. 2025-10-02 16:41:09 +02:00
5ecdecfe24 Extract some actions from file reader to event actions as they depend on series time or parameter overwrites 2025-10-02 12:23:23 +02:00
ae6aa9d24d make refactored reader run without time-slicing 2025-10-02 08:25:11 +02:00
39b267e96c Start including refactored data reader 2025-10-01 18:23:12 +02:00
fe2975c71d Fix some bugs from new options configuration and make test run again 2025-10-01 14:11:16 +02:00
1e78325663 Include old file_reader updates from reorganisation branch 2025-10-01 13:09:22 +02:00
1d74d947de Remove old datasets, start update full analysis test 2025-10-01 13:02:26 +02:00
c540fa16fe Move event data type specification in separate method and decouple event handling from file reader by using a Protocol 2025-10-01 13:02:25 +02:00
7cf7ff8e61 All actions within old file_reader now implemented as new style event transformations 2025-10-01 13:02:25 +02:00
753b32203a Start separating code into different files 2025-10-01 13:02:09 +02:00
d936f7ae7d Add option to read dataset partially by limiting maximum events to be read 2025-10-01 13:01:21 +02:00
429bd47c59 Move event data to record arrays to keep events/packets/pulses connected 2025-10-01 13:01:21 +02:00
1a3c637b39 Begin refactoring file_reader to separate data readout, corrections and other actions to decouple AmorData class 2025-10-01 13:01:21 +02:00
f46eefb0cb Start cleaning up with separating numba check from function usage 2025-10-01 13:01:21 +02:00
0697e5ce80 typo fix 2025-09-30 13:02:00 +02:00
f88cea417c orso model can now be a dict and read from yaml file 2025-09-30 10:21:01 +02:00
a22732c612 fixed the wrong proton-current matching 2025-09-16 11:08:52 +02:00
ae5b3e789e added TODO for polarization 2025-09-15 17:30:59 +02:00
082d96510b reorganised file expansion 2025-09-08 16:53:52 +02:00
e24ce01419 Put output format evaluation in config and use StrEnum for it 2025-09-08 16:09:00 +02:00
68e592cfe4 commented out "autoscale" - has to be fixed 2025-09-05 14:45:54 +02:00
03e2f461cb completed help and group entries for command line options 2025-09-05 14:39:46 +02:00
03539eaea6 division by zero in ts mode fixed 2025-09-05 10:42:31 +02:00
f5b4f8bba2 changed code formatting, fixed the pc normalisation 2025-09-05 10:22:45 +02:00
319cbb4c2c Start implementing new way to build command line arguments and defaults based on options classes directly 2025-09-03 12:47:39 +02:00
d1e075cca4 adapted options to new command-line grammar 2025-09-02 19:18:41 +02:00
7f01f89f2b Start implementing new way to build command line arguments and defaults based on options classes directly 2025-08-27 17:19:40 +02:00
b6e3cb2eef fixed sign in chopper phase calculation 2025-08-27 08:24:10 +02:00
443cb30b79 fixes in the polarisation handling 2025-06-30 15:41:20 +02:00
057742cabd taking chopper info from event stream, new debug output, fix for chopper offset 2025-06-24 11:06:07 +02:00
2689a969f1 fallback in case no chopper trigger signal is available, typo fixes 2025-06-13 12:38:26 +02:00
aff4c5b603 added TODO 2025-06-10 09:55:25 +02:00
887ec6fa23 added absolute path to clas 2025-06-10 08:54:58 +02:00
6102ed7123 chopper phases defaults, e2h_2025 2025-06-08 15:50:08 +02:00
bbac3bc943 Merge branch 'main' of https://github.com/jochenstahn/amor 2025-06-08 15:33:51 +02:00
bacbd18854 uploading the correct e2h now... 2025-06-08 15:31:51 +02:00
Jochen Stahn: aa7919b83d file_reader: cache lookup extended 2025-06-08 12:06:43 +02:00
f65651d0e5 events2histogram version for new hdf architecture 2025-06-07 17:19:40 +02:00
1b0df0ec45 some polishing 2025-05-30 11:18:59 +02:00
5f94cc7b92 adapted parameter paths for hdf file, introduced chopper phases from file 2025-05-29 18:11:51 +02:00
a947a70d2d included polarisation info, using chopper trigger signal for normalisation 2025-05-29 11:26:37 +02:00
729332f49d Bump version number 2025-01-30 17:04:22 +01:00
500d53a9b2 Fetch commits and tags for release action 2025-01-30 16:30:35 +01:00
dac76efdd1 set the tag for github release to the last git tag 2025-01-30 15:18:52 +01:00
e78200a39d add workflow dispatch action for updating release 2025-01-30 14:57:06 +01:00
c6bde8cd85 add permission id-token to action 2025-01-30 14:53:24 +01:00
8e9c03a4fb try github action with trusted publishing (no user/token) 2025-01-30 14:48:30 +01:00
155a167e26 update version date (checks failed: Release / build-ubuntu-latest failing after 2s; build-windows cancelled; release cancelled) 2025-01-30 13:38:45 +01:00
fcc1fd7d84 delete old manual and life_histogram, update events2histogram 2025-01-30 13:37:27 +01:00
ee34cc96d4 direct beam reference fixed 2025-01-29 13:54:56 +01:00
Jochen Stahn: 64edd8e8d7 Merge pull request #6 from jochenstahn/faster_pulsetimes (Faster pulsetimes) 2025-01-29 08:21:14 +01:00
96681c8f81 fix case where there is no pulse missing 2025-01-28 16:17:48 +01:00
87a246196f apply scaling factor when time slicing 2025-01-28 16:02:54 +01:00
3607de9864 replace pulseTimeS generation in for loop by filling of pulse gaps 2025-01-28 16:00:52 +01:00
23e310ba6a removed fix coded proton time offset 2024-12-16 13:50:46 +01:00
9f2dfbcbe1 corrected output about automatic scaling 2024-12-16 08:15:11 +01:00
25f8708b4c re-bump version, make release dependent on windows build 2024-12-13 15:57:12 +01:00
2e565f55be Include timezone data in windows distribution 2024-12-13 15:36:22 +01:00
b48bea7726 increment version 2024-12-13 14:46:23 +01:00
55e9407b67 update revision info (checks failed: Release / build-ubuntu-latest failing after 2s; release skipped; build-windows cancelled) 2024-12-13 14:37:50 +01:00
5fc512632f Merge pull request #5 from jochenstahn/time (Handle timezone correctly) 2024-12-13 14:35:16 +01:00
55a7b7270c minor cleanup 2024-12-13 14:33:04 +01:00
f81582554e remove unnecessary requirement for pytz and insert timezone into datetime instead of recreation 2024-12-13 14:17:51 +01:00
3e22a94ff6 install zoneinfo backports for python 3.8 2024-12-13 14:07:12 +01:00
Jochen Stahn: c1dc194548 different method to add timezone 2024-12-10 14:39:59 +01:00
Jochen Stahn: c5f5602a89 added pytz 2024-12-09 10:19:22 +01:00
cf9c0018a5 tried to handle summer time correctly.... 2024-12-09 10:03:56 +01:00
03ac04ab05 removed control output 2024-12-06 14:59:18 +01:00
b2df980a21 fix of handling of pulses for more than one file 2024-12-06 10:36:25 +01:00
eb54e52c66 changed format for a logging output about low monitor signal 2024-12-04 10:51:05 +01:00
9b4c384425 release to pypi after storing archive to artifact 2024-12-04 08:48:35 +01:00
ce44e04014 add windows build to release action (checks failed: Release / build-ubuntu-latest failing after 2s; release skipped; build-windows cancelled) 2024-12-03 16:11:04 +01:00
ce90fc005a remove upterm from tests 2024-12-03 15:17:12 +01:00
29edbce5bc change iso time offset to include minutes for compatibility with older python 2024-12-03 15:02:25 +01:00
a0b0ff436c replace calculated offset by Iso string 2024-12-03 14:51:09 +01:00
2cae45659c correct for local timezone issues 2024-12-03 14:12:37 +01:00
9693951233 explicitly define utc as timezone for file 2024-12-03 13:52:29 +01:00
e2970a8a5f add upterm action to workflow for debugging 2024-12-03 13:13:28 +01:00
7dcede44b6 removed obsolete control output 2024-11-28 09:03:57 +01:00
f6f853e6e4 commented VERY time consuming control output: events per pulse 2024-11-27 08:29:33 +01:00
ad8245a9a0 relative theta range over absolute over default 2024-11-26 15:27:07 +01:00
33b8829a09 check for empty files introduced 2024-11-22 11:24:17 +01:00
f645082c20 Merge branch 'main' of https://github.com/jochenstahn/amor 2024-11-08 09:01:37 +01:00
1bd1100035 changes in the "pulse generation" - not yet finished 2024-11-08 08:51:56 +01:00
Artur Glavic: b94b6714a3 Update setup.cfg 2024-11-05 13:15:20 +01:00
d4462f450e change to rst README 2024-10-30 15:36:43 +01:00
80d1014750 update readme with installation instructions 2024-10-30 15:05:33 +01:00
a3eb6a173e bump version 2024-10-30 14:32:04 +01:00
bada209c1e minor fix of release workflow and add github release (checks failed: Release / build-ubuntu-latest failing after 2s; release skipped) 2024-10-30 14:31:06 +01:00
d6f699e10c minor fix of release workflow 2024-10-30 14:28:34 +01:00
8fa11fa05a add configuration for pypi release 2024-10-30 14:07:48 +01:00
c7b7f38a6d make sure the right pytest is used 2024-10-30 13:34:45 +01:00
5d50e92521 try to use older Ubuntu for failing tests, could not reproduce locally 2024-10-30 13:26:03 +01:00
aa57d75751 don't fail action when just one python version fails 2024-10-30 13:08:50 +01:00
409e62b63f set branch to run tests to main 2024-10-30 13:02:34 +01:00
1c459ae5d3 install correct requirements file in workflow 2024-10-30 13:01:56 +01:00
12d0370807 add test data and pytest action to repository 2024-10-30 12:59:00 +01:00
b1e7b68a21 update tests to new configuration parameters 2024-10-30 12:32:24 +01:00
4134e106ff improved algorithm for pulse generation 2024-10-25 15:35:47 +02:00
5cce1bedc9 fixed a bug in the time-monitor calculation and added units 2024-10-24 08:54:43 +02:00
52bd731a3c set startTime to hdf entry start_time 2024-10-23 15:32:46 +02:00
fbb44d435b changed keys for monitorType 2024-10-22 17:46:31 +02:00
182cc5ab3d added TODO for the resolution calculation 2024-10-22 17:05:09 +02:00
9f2bc034e7 debugging output for neutron pulse times and proton current 2024-10-22 15:47:25 +02:00
7f2d5cf765 added monitorType 2024-10-22 09:43:44 +02:00
772d921f03 Merge branch 'main' of https://github.com/jochenstahn/amor 2024-10-10 14:09:51 +02:00
f5ce5e9d73 format of info output for proton current adapted 2024-10-10 08:06:07 +02:00
7fc64a58f0 sorted outputPath to output 2024-10-05 13:57:39 +02:00
b848d66290 changed path variable names + new option lowCurrentThreshold 2024-10-04 15:10:11 +02:00
1f394b0e86 replaced scipy interpolation by dedicated routine (by Artur) 2024-09-27 16:27:50 +02:00
69620d0f16 removed double normalisation 2024-09-26 11:04:51 +02:00
0e727ce591 fixed monitor for normalisation measurement 2024-09-26 10:24:37 +02:00
f2ebe5ce0c new method for time / protonCharge normalisation - compatible with old data files 2024-09-26 10:09:15 +02:00
acc508e9ea timestamps are now all of type np.int64 with unit ns 2024-09-25 16:35:31 +02:00
e6578d5bf2 fix merge error missing method calls 2024-09-25 14:06:55 +02:00
46 changed files with 2951 additions and 3524 deletions

.gitattributes (new file, +1)

@@ -0,0 +1 @@
*.hdf filter=lfs diff=lfs merge=lfs -text

.github/workflows/release.yml (new file, +105)

@@ -0,0 +1,105 @@
name: Release
# Controls when the action will run.
on:
# Triggers the workflow on pushed tags
push:
tags:
- "*"
# Allows you to run this workflow manually from the Actions tab
workflow_dispatch:
inputs:
build-items:
description: 'Items to be built'
required: true
default: 'all'
type: choice
options:
- all
- windows
- linux
- all_incl_release
jobs:
build-ubuntu-latest:
runs-on: ubuntu-latest
if: ${{ (github.event_name != 'workflow_dispatch') || (contains(fromJson('["all", "linux", "all_incl_release"]'), github.event.inputs.build-items)) }}
permissions:
id-token: write
steps:
- uses: actions/checkout@v4
- uses: actions/setup-python@v5
with:
python-version: '3.11'
- name: Install dependencies
run: |
python -m pip install --upgrade pip
pip install build
pip install -r requirements.txt
- name: Build PyPI package
run: |
python3 -m build
- name: Archive distribution
uses: actions/upload-artifact@v4
with:
name: linux-dist
path: |
dist/*.tar.gz
- name: Upload to PyPI
#if: github.event_name != 'workflow_dispatch'
uses: pypa/gh-action-pypi-publish@release/v1
with:
# user: __token__
# password: ${{ secrets.PYPI_TOKEN }}
skip-existing: true
build-windows:
runs-on: windows-latest
if: ${{ (github.event_name != 'workflow_dispatch') || (contains(fromJson('["all", "windows", "all_incl_release"]'), github.event.inputs.build-items)) }}
steps:
- uses: actions/checkout@v4
- name: Set up Python
uses: actions/setup-python@v5
with:
python-version: 3.12
- name: Install dependencies
run: |
C:\Miniconda\condabin\conda.bat env update --file conda_windows.yml --name base
C:\Miniconda\condabin\conda.bat init powershell
- name: Build with pyinstaller
run: |
pyinstaller windows_build.spec
cd dist\eos
Compress-Archive -Path .\* -Destination ..\..\eos.zip
- name: Archive distribution
uses: actions/upload-artifact@v4
with:
name: windows-dist
path: |
eos.zip
release:
if: ${{ (github.event_name != 'workflow_dispatch') || (contains(fromJson('["all_incl_release"]'), github.event.inputs.build-items)) }}
runs-on: ubuntu-latest
needs: [build-ubuntu-latest, build-windows]
steps:
- uses: actions/checkout@v4
with:
fetch-depth: 0
fetch-tags: true
- uses: actions/download-artifact@v4
with:
name: linux-dist
- uses: actions/download-artifact@v4
with:
name: windows-dist
- name: get latest version tag
run: echo "RELEASE_TAG=$(git describe --abbrev=0 --tags)" >> $GITHUB_ENV
- uses: ncipollo/release-action@v1
with:
artifacts: "amor*.tar.gz,*.zip"
token: ${{ secrets.GITHUB_TOKEN }}
allowUpdates: true
tag: ${{ env.RELEASE_TAG }}
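Each job in the workflow above is gated by an `if:` expression combining `github.event_name` with the chosen `build-items` input. The boolean logic can be mirrored in plain Python as a sanity check (the function name and call values are illustrative, not part of the repository):

```python
def should_run(event_name: str, build_items: str, allowed: list) -> bool:
    """Mirror of the workflow gate: run on any non-dispatch event (e.g. a tag
    push), or on a manual dispatch whose build-items choice is in `allowed`."""
    return event_name != 'workflow_dispatch' or build_items in allowed

# the linux build job allows these dispatch choices
LINUX_ALLOWED = ['all', 'linux', 'all_incl_release']

runs_on_tag_push = should_run('push', 'all', LINUX_ALLOWED)
runs_on_windows_only_dispatch = should_run('workflow_dispatch', 'windows', LINUX_ALLOWED)
```

With this gating, a tag push always builds everything, while a manual dispatch can restrict the build to a single platform.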

.github/workflows/unit_tests.yml (new file, +38)

@@ -0,0 +1,38 @@
name: Unit Testing
on:
push:
branches:
- main
pull_request:
# Allows you to run this workflow manually from the Actions tab
workflow_dispatch:
jobs:
build:
runs-on: ubuntu-22.04
strategy:
matrix:
python-version: ['3.8', '3.9', '3.10', '3.11', '3.12']
fail-fast: false
steps:
- uses: actions/checkout@v4
with:
lfs: 'true'
- name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python@v5
with:
python-version: ${{ matrix.python-version }}
- name: Install dependencies
run: |
python -m pip install --upgrade pip
pip install pytest
pip install -r requirements.txt
- name: Test with pytest
run: |
python -m pytest tests

.gitignore (4 changes)

@@ -5,6 +5,8 @@
__pycache__
raw
.idea
test_data
test_results
build
dist
profile_test.prof
amor_eos.log


@@ -1,12 +0,0 @@
Software repository for the neutron reflectometer Amor at the Paul Scherrer Institut, Switzerland
reduction of the raw files (.hdf) to reflectivity files in one of the representations of the **ORSO reflectivity file format**:
> `eos.py`
> `neos.py` : version for playing and testing
visualisation of the content of a raw file (.hdf):
> `events2histogram.py`
#TODO: real readme for final system needed.

README.rst (new file, +27)

@@ -0,0 +1,27 @@
EOS - The AMOR focusing reflectometry data reduction software
-------------------------------------------------------------
.. image:: https://img.shields.io/pypi/v/amor-eos.svg
:target: https://pypi.python.org/pypi/amor-eos/
Software repository for the neutron reflectometer Amor at the Paul Scherrer Institut, Switzerland
Reduction of the raw files (.hdf) to reflectivity files in one of the representations of the **ORSO reflectivity file format**:
eos --help
visualisation of the content of a raw file (.hdf):
events2histogram.py
:TODO: real readme for final system needed.
Installation
------------
Create a virtual python environment (>3.8) and install the PyPI package:
pip install amor-eos
On Windows you can also use the binary eos.exe that you find in the
`GitHub Releases <https://github.com/jochenstahn/amor/releases/latest>`_ section

File diff suppressed because it is too large

conda_windows.yml (new file, +44)

@@ -0,0 +1,44 @@
name: eos_build
channels:
- defaults
dependencies:
- altgraph=0.17.3=py312haa95532_0
- blas=1.0=mkl
- bzip2=1.0.8=h2bbff1b_6
- ca-certificates=2024.11.26=haa95532_0
- expat=2.6.3=h5da7b33_0
- h5py=3.12.1=py312h3b2c811_0
- hdf5=1.12.1=h51c971a_3
- icc_rt=2022.1.0=h6049295_2
- intel-openmp=2023.1.0=h59b6b97_46320
- libffi=3.4.4=hd77b12b_1
- llvmlite=0.43.0=py312hf2fb9eb_0
- mkl=2023.1.0=h6b88ed4_46358
- mkl-service=2.4.0=py312h2bbff1b_1
- mkl_fft=1.3.11=py312h827c3e9_0
- mkl_random=1.2.8=py312h0158946_0
- numba=0.60.0=py312h0158946_0
- numpy=1.26.4=py312hfd52020_0
- numpy-base=1.26.4=py312h4dde369_0
- openssl=3.0.15=h827c3e9_0
- packaging=24.1=py312haa95532_0
- pefile=2023.2.7=py312haa95532_0
- pip=24.2=py312haa95532_0
- pyinstaller=6.9.0=py312h0416ee5_0
- pyinstaller-hooks-contrib=2024.7=py312haa95532_0
- python=3.12.7=h14ffc60_0
- pywin32-ctypes=0.2.2=py312haa95532_0
- setuptools=75.1.0=py312haa95532_0
- sqlite=3.45.3=h2bbff1b_0
- tbb=2021.8.0=h59b6b97_0
- tk=8.6.14=h0416ee5_0
- tzdata=2024b=h04d1e81_0
- vc=14.40=h2eaa2aa_1
- vs2015_runtime=14.40.33807=h98bb1dd_1
- wheel=0.44.0=py312haa95532_0
- xz=5.4.6=h8cc25b3_1
- zlib=1.2.13=h8cc25b3_1
- pip:
- orsopy==1.2.1
- pyyaml==6.0.2
- tzdata

eos.py (deleted, -46)

@@ -1,46 +0,0 @@
#!/usr/bin/env python
"""
eos reduces measurements performed on Amor@SINQ, PSI
Author: Jochen Stahn (algorithms, python draft),
Artur Glavic (structuring and optimisation of code)
conventions (not strictly followed, yet):
- array names end with the suffix '_x[y]' with the meaning
_e = events
_tof
_l = lambda
_t = theta
_z = detector z
_lz = (lambda, detector z)
_q = q_z
"""
import logging
from libeos.command_line import command_line_options
from libeos.logconfig import setup_logging
from libeos.reduction import AmorReduction
#=====================================================================================================
# TODO:
# - calculate resolution using the chopperPhase
# - deal with background correction
# - format of 'call' + add '-Y' if not supplied
#=====================================================================================================
def main():
setup_logging()
logging.warning('######## eos - data reduction for Amor ########')
# read command line arguments and generate classes holding configuration parameters
config = command_line_options()
# Create reducer with these arguments
reducer = AmorReduction(config)
# Perform actual reduction
reducer.reduce()
logging.info('######## eos - finished ########')
if __name__ == '__main__':
main()


@@ -1,6 +1,6 @@
"""
Package to handle data reduction at AMOR instrument to be used by eos.py script.
Package to handle data reduction at AMOR instrument to be used by __main__.py script.
"""
__version__ = '2.1.0'
__date__ = '2024-08-25'
__version__ = '3.0.0'
__date__ = '2025-10-06'

eos/__main__.py (new file, +41)

@@ -0,0 +1,41 @@
"""
eos reduces measurements performed on Amor@SINQ, PSI
Author: Jochen Stahn (algorithms, python draft),
Artur Glavic (structuring and optimisation of code)
"""
import logging
# need to do absolute import here as pyinstaller requires it
from eos.options import EOSConfig, ReaderConfig, ExperimentConfig, ReductionConfig, OutputConfig
from eos.command_line import commandLineArgs
from eos.logconfig import setup_logging, update_loglevel
def main():
setup_logging()
# read command line arguments and generate classes holding configuration parameters
clas = commandLineArgs([ReaderConfig, ExperimentConfig, ReductionConfig, OutputConfig],
'eos')
update_loglevel(clas.verbose)
reader_config = ReaderConfig.from_args(clas)
experiment_config = ExperimentConfig.from_args(clas)
reduction_config = ReductionConfig.from_args(clas)
output_config = OutputConfig.from_args(clas)
config = EOSConfig(reader_config, experiment_config, reduction_config, output_config)
logging.warning('######## eos - data reduction for Amor ########')
# only import heavy module if sufficient command line parameters were provided
from eos.reduction import AmorReduction
# Create reducer with these arguments
reducer = AmorReduction(config)
# Perform actual reduction
reducer.reduce()
logging.info('######## eos - finished ########')
if __name__ == '__main__':
main()

eos/command_line.py (new file, +39)

@@ -0,0 +1,39 @@
import argparse
from typing import List
from .options import ArgParsable
def commandLineArgs(config_items: List[ArgParsable], program_name=None):
"""
Process command line arguments.
The type of the default values is used for conversion and validation.
"""
msg = "eos reads data from (one or several) raw file(s) of the .hdf format, \
performs various corrections, conversions and projections and exports\
the resulting reflectivity in an orso-compatible format."
clas = argparse.ArgumentParser(prog=program_name,
description = msg, formatter_class=argparse.ArgumentDefaultsHelpFormatter)
clas.add_argument('-v', '--verbose', action='count', default=0)
clas_groups = {}
all_arguments = []
for cls in config_items:
all_arguments += cls.get_commandline_parameters()
all_arguments.sort() # parameters are sorted alphabetically, unless they have higher priority
for cpc in all_arguments:
if not cpc.group in clas_groups:
clas_groups[cpc.group] = clas.add_argument_group(cpc.group)
if cpc.short_form:
clas_groups[cpc.group].add_argument(
f'-{cpc.short_form}', f'--{cpc.argument}', **cpc.add_argument_args
)
else:
clas_groups[cpc.group].add_argument(
f'--{cpc.argument}', **cpc.add_argument_args
)
return clas.parse_args()


@@ -4,3 +4,4 @@ Constants used in data reduction.
hdm = 6.626176e-34/1.674928e-27 # h / m
lamdaCut = 2.5 # Aa
lamdaMax = 15.0 # Aa

eos/event_analysis.py (new file, +121)

@@ -0,0 +1,121 @@
"""
Define an event data format that performs reduction actions like wavelength calculation on a per-event basis.
With a large number of events these actions can be time consuming, so they use numba-based functions.
"""
import numpy as np
import logging
from typing import Tuple
from . import const
from .event_data_types import EventDataAction, EventDatasetProtocol, append_fields, EVENT_BITMASKS
from .helpers import filter_project_x, merge_frames, extract_walltime
from .instrument import Detector
from .options import IncidentAngle
from .header import Header
class ExtractWalltime(EventDataAction):
def perform_action(self, dataset: EventDatasetProtocol) ->None:
wallTime = extract_walltime(dataset.data.events.tof,
dataset.data.packets.start_index,
dataset.data.packets.Time)
logging.debug(f' extending event stream by wallTime')
new_events = append_fields(dataset.data.events, [('wallTime', wallTime.dtype)])
new_events.wallTime = wallTime
dataset.data.events = new_events
class MergeFrames(EventDataAction):
def perform_action(self, dataset: EventDatasetProtocol)->None:
tofCut = const.lamdaCut*dataset.geometry.chopperDetectorDistance/const.hdm*1e-13
total_offset = (tofCut +
dataset.timing.tau * (dataset.timing.ch1TriggerPhase + dataset.timing.chopperPhase/2)/180)
dataset.data.events.tof = merge_frames(dataset.data.events.tof, tofCut, dataset.timing.tau, total_offset)
class AnalyzePixelIDs(EventDataAction):
def __init__(self, yRange: Tuple[int, int]):
self.yRange = yRange
def perform_action(self, dataset: EventDatasetProtocol) ->None:
d = dataset.data
(detZ, detXdist, delta, mask) = filter_project_x(
Detector.pixelLookUp, d.events.pixelID, self.yRange[0], self.yRange[1]
)
ana_events = append_fields(d.events, [
('detZ', detZ.dtype), ('detXdist', detXdist.dtype), ('delta', delta.dtype)])
# add analysis per event
ana_events.detZ = detZ
ana_events.detXdist = detXdist
ana_events.delta = delta
ana_events.mask += np.logical_not(mask)*EVENT_BITMASKS['yRange']
d.events = ana_events
class CalculateWavelength(EventDataAction):
def __init__(self, lambdaRange: Tuple[float, float]):
self.lambdaRange = lambdaRange
def perform_action(self, dataset: EventDatasetProtocol) ->None:
d = dataset.data
if not 'detXdist' in dataset.data.events.dtype.names:
raise ValueError("CalculateWavelength requires dataset with analyzed pixels, perform AnalyzePixelIDs first")
#lamdaMax = const.lamdaCut+1.e13*dataset.timing.tau*const.hdm/(dataset.geometry.chopperDetectorDistance+124.)
# lambda
# TODO: one of the most time consuming actions, could be implemented in numba, instead?
lamda = (1.e13*const.hdm)*d.events.tof/(dataset.geometry.chopperDetectorDistance+d.events.detXdist)
final_events = append_fields(d.events, [('lamda', np.float64)])
# add analysis per event
final_events.lamda = lamda
final_events.mask += EVENT_BITMASKS["LamdaRange"]*(
(self.lambdaRange[0]>lamda) | (lamda>self.lambdaRange[1]))
d.events = final_events
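The factor `1.e13*const.hdm` in CalculateWavelength implements the time-of-flight relation lambda = (h/m)*t/L. A standalone numeric sketch, under my own unit assumptions (tof in seconds, flight path in millimetres, wavelength in Ångström; the ~4 m flight path is only an example value):

```python
# h/m_n exactly as defined in eos/const.py
hdm = 6.626176e-34 / 1.674928e-27   # ~3.956e-7 m^2/s

def wavelength_angstrom(tof_s: float, flight_path_mm: float) -> float:
    # lambda[Aa] = 1e13 * (h/m) * tof / L, the same factor CalculateWavelength applies
    return 1.0e13 * hdm * tof_s / flight_path_mm

# a 10 ms time of flight over an assumed ~4 m chopper-detector distance
lam = wavelength_angstrom(0.010, 4000.0)   # close to 10 Aa
```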
class CalculateQ(EventDataAction):
def __init__(self, incidentAngle: IncidentAngle):
self.incidentAngle = incidentAngle
def perform_action(self, dataset: EventDatasetProtocol) ->None:
d = dataset.data
if not 'lamda' in dataset.data.events.dtype.names:
raise ValueError("CalculateQ requires dataset with analyzed wavelength, perform CalculateWavelength first")
lamda = d.events.lamda
final_events = append_fields(d.events, [('qz', np.float64)])
# alpha_f
# q_z
if self.incidentAngle == IncidentAngle.alphaF:
alphaF_e = dataset.geometry.nu - dataset.geometry.mu + d.events.delta
final_events.qz = 4*np.pi*(np.sin(np.deg2rad(alphaF_e))/lamda)
elif self.incidentAngle == IncidentAngle.nu:
alphaF_e = (dataset.geometry.nu + d.events.delta + dataset.geometry.kap + dataset.geometry.kad) / 2.
final_events.qz = 4*np.pi*(np.sin(np.deg2rad(alphaF_e))/lamda)
else:
alphaF_e = dataset.geometry.nu - dataset.geometry.mu + d.events.delta
alphaI = dataset.geometry.kap + dataset.geometry.kad + dataset.geometry.mu
final_events.qz = 2*np.pi * ((np.sin(np.deg2rad(alphaF_e)) + np.sin(np.deg2rad(alphaI)))/lamda)
final_events = append_fields(final_events, [('qx', np.float64)])
final_events.qx = 2*np.pi * ((np.cos(np.deg2rad(alphaF_e)) - np.cos(np.deg2rad(alphaI)))/lamda)
dataset.data.events = final_events
def update_header(self, header: Header):
if self.incidentAngle == IncidentAngle.alphaF:
header.measurement_scheme = 'angle- and energy-dispersive'
else:
header.measurement_scheme = 'energy-dispersive'
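For the `alphaF` branch of CalculateQ, the momentum transfer reduces to q_z = 4*pi*sin(alpha_f)/lambda with alpha_f = nu - mu + delta. A minimal sketch of just that branch (the angle and wavelength values are made up for illustration):

```python
import numpy as np

def qz_alpha_f(nu: float, mu: float, delta: float, lamda: float) -> float:
    # specular momentum transfer: q_z = 4*pi*sin(alpha_f)/lambda
    # angles in degrees, wavelength in Aa -> q_z in 1/Aa
    alpha_f = nu - mu + delta
    return 4.0*np.pi*np.sin(np.deg2rad(alpha_f))/lamda

q = qz_alpha_f(nu=1.2, mu=0.6, delta=0.0, lamda=5.0)  # roughly 0.026 1/Aa
```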
class FilterQzRange(EventDataAction):
def __init__(self, qzRange: Tuple[float, float]):
self.qzRange = qzRange
def perform_action(self, dataset: EventDatasetProtocol) ->None:
d = dataset.data
if not 'qz' in dataset.data.events.dtype.names:
raise ValueError("FilterQzRange requires dataset with qz values per event, perform CalculateQ first")
if self.qzRange[1]<0.5:
d.events.mask += EVENT_BITMASKS["qRange"]*((self.qzRange[0]>d.events.qz) | (d.events.qz>self.qzRange[1]))

eos/event_data_types.py (new file, +144)

@@ -0,0 +1,144 @@
"""
Specify the data type and protocol used for event datasets.
"""
from typing import List, Optional, Protocol, Tuple
from dataclasses import dataclass
from .header import Header
from abc import ABC, abstractmethod
from hashlib import sha256
import numpy as np
import logging
@dataclass
class AmorGeometry:
mu:float
nu:float
kap:float
kad:float
div:float
chopperSeparation: float
detectorDistance: float
chopperDetectorDistance: float
@dataclass
class AmorTiming:
ch1TriggerPhase: float
ch2TriggerPhase: float
chopperSpeed: float
chopperPhase: float
tau: float
# Structured datatypes used for event streams
EVENT_TYPE = np.dtype([('tof', np.float64), ('pixelID', np.uint32), ('mask', np.int32)])
PACKET_TYPE = np.dtype([('start_index', np.uint32), ('Time', np.int64)])
PULSE_TYPE = np.dtype([('time', np.int64), ('monitor', np.float32)])
PC_TYPE = np.dtype([('current', np.float32), ('time', np.int64)])
# define the bitmask for individual event filters
EVENT_BITMASKS = {
'MonitorThreshold': 1,
'StrangeTimes': 2,
'yRange': 4,
'LamdaRange': 8,
'qRange': 16,
}
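Since each filter owns a distinct power of two, a single integer per event encodes every reason it was rejected; events are masked rather than deleted, so the filters can later be inspected individually. A toy demonstration of the scheme (the four-event array is made up):

```python
import numpy as np

BITMASKS = {'MonitorThreshold': 1, 'StrangeTimes': 2, 'yRange': 4, 'LamdaRange': 8, 'qRange': 16}

mask = np.zeros(4, dtype=np.int32)
mask[1] += BITMASKS['yRange']       # event 1 fell outside the y range...
mask[1] += BITMASKS['LamdaRange']   # ...and outside the wavelength band
mask[3] += BITMASKS['qRange']       # event 3 failed only the q filter

good = mask == 0                            # events that passed every filter
hit_y = (mask & BITMASKS['yRange']) != 0    # which events the y filter rejected
```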
def append_fields(input: np.recarray, new_fields: List[Tuple[str, np.dtype]]):
# add one or more fields to a recarray; the numpy helper functions seem to fail
flds = [(name, dtypei[0]) for name, dtypei in input.dtype.fields.items()]
flds += new_fields
output = np.recarray(len(input), dtype=flds)
for field in input.dtype.fields.keys():
output[field] = input[field]
return output
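Per the comment, the stock numpy helpers apparently fall short here, so the function rebuilds the recarray with a widened dtype and copies the existing columns across. The same helper exercised on a toy event array (field names and values are made up):

```python
import numpy as np

def append_fields(events, new_fields):
    # widen the dtype with the new columns and copy the old data across
    flds = [(name, dt[0]) for name, dt in events.dtype.fields.items()]
    flds += new_fields
    output = np.recarray(len(events), dtype=flds)
    for field in events.dtype.fields.keys():
        output[field] = events[field]
    return output

events = np.recarray(3, dtype=[('tof', np.float64), ('pixelID', np.uint32)])
events.tof = [1.0, 2.0, 3.0]
events.pixelID = [10, 20, 30]

wide = append_fields(events, [('lamda', np.float64)])
wide.lamda = 2.5 * wide.tof   # the new column is writable like any other attribute
```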
@dataclass
class AmorEventStream:
events: np.recarray # EVENT_TYPE
packets: np.recarray # PACKET_TYPE
pulses: np.recarray # PULSE_TYPE
proton_current: np.recarray # PC_TYPE
class EventDatasetProtocol(Protocol):
"""
Minimal attributes a dataset needs to provide to work with EventDataAction
"""
geometry: AmorGeometry
timing: AmorTiming
data: AmorEventStream
def append(self, other):
# Should define a way to add events from other to own
...
def update_header(self, header:Header):
# update a header with the information read from file
...
class EventDataAction(ABC):
"""
Abstract base class used for actions applied to an EventDatasetProtocol based objects.
Each action can optionally modify the header information.
Actions can be combined using the pipe operator | (OR).
"""
def __call__(self, dataset: EventDatasetProtocol)->None:
logging.debug(f" Enter action {self.__class__.__name__} on {dataset!r}")
self.perform_action(dataset)
@abstractmethod
def perform_action(self, dataset: EventDatasetProtocol)->None: ...
def update_header(self, header:Header)->None:
if hasattr(self, 'action_name'):
header.reduction.corrections.append(getattr(self, 'action_name'))
def __or__(self, other:'EventDataAction')->'CombinedAction':
return CombinedAction([self, other])
def __repr__(self):
output = self.__class__.__name__+'('
for key,value in self.__dict__.items():
output += f'{key}={value}, '
return output.rstrip(', ')+')'
def action_hash(self)->str:
# generate a unique hash that encodes this action with its configuration parameters
mh = sha256()
mh.update(self.__class__.__name__.encode())
for key,value in sorted(self.__dict__.items()):
mh.update(repr(value).encode())
return mh.hexdigest()
class CombinedAction(EventDataAction):
"""
Used to perform multiple actions in one call. Stores a sequence of actions
that are then performed individually one after the other.
"""
def __init__(self, actions: List[EventDataAction]) -> None:
self._actions = actions
def perform_action(self, dataset: EventDatasetProtocol)->None:
for action in self._actions:
action(dataset)
def update_header(self, header:Header)->None:
for action in self._actions:
action.update_header(header)
def __or__(self, other:'EventDataAction')->'CombinedAction':
return CombinedAction(self._actions+[other])
def __repr__(self):
output = repr(self._actions[0])
for ai in self._actions[1:]:
output += ' | '+repr(ai)
return output
def action_hash(self)->str:
mh = sha256()
for action in self._actions:
mh.update(action.action_hash().encode())
return mh.hexdigest()
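The pipe-combination pattern above can be sketched in isolation; the concrete action classes below are illustrative stand-ins, not part of eos:

```python
from abc import ABC, abstractmethod
from hashlib import sha256

class Action(ABC):
    @abstractmethod
    def perform_action(self, dataset): ...
    def __call__(self, dataset):
        self.perform_action(dataset)
    def __or__(self, other):
        # combining with | builds a flat sequence of actions
        return Combined([self, other])
    def action_hash(self):
        # hash of class name plus sorted configuration parameters
        mh = sha256()
        mh.update(self.__class__.__name__.encode())
        for key, value in sorted(self.__dict__.items()):
            mh.update(repr(value).encode())
        return mh.hexdigest()

class Combined(Action):
    def __init__(self, actions):
        self._actions = actions
    def perform_action(self, dataset):
        for a in self._actions:
            a(dataset)
    def __or__(self, other):
        return Combined(self._actions + [other])

class AddOne(Action):
    def perform_action(self, dataset):
        dataset['value'] += 1

class Double(Action):
    def perform_action(self, dataset):
        dataset['value'] *= 2

pipeline = AddOne() | Double() | AddOne()
data = {'value': 1}
pipeline(data)
print(data['value'])  # (1+1)*2+1 = 5
```

Since `Combined.__or__` extends the existing list instead of nesting, long pipelines stay flat and are executed in left-to-right order.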

eos/event_handling.py Normal file
@@ -0,0 +1,179 @@
"""
Calculations performed on AmorEventData.
This module contains actions that do not need the numba-based helper functions. Other actions are in event_analysis.
"""
import logging
import os
import numpy as np
from .header import Header
from .options import ExperimentConfig, MonitorType
from .event_data_types import EventDatasetProtocol, EventDataAction, EVENT_BITMASKS
class ApplyPhaseOffset(EventDataAction):
def __init__(self, chopperPhaseOffset: float):
self.chopperPhaseOffset=chopperPhaseOffset
def perform_action(self, dataset: EventDatasetProtocol) ->None:
logging.debug(
f' replaced ch1TriggerPhase = {dataset.timing.ch1TriggerPhase} '
f'with {self.chopperPhaseOffset}')
dataset.timing.ch1TriggerPhase = self.chopperPhaseOffset
class ApplyParameterOverwrites(EventDataAction):
def __init__(self, config: ExperimentConfig):
self.config=config
def perform_action(self, dataset: EventDatasetProtocol) ->None:
if self.config.muOffset:
logging.debug(f' set muOffset = {self.config.muOffset}')
dataset.geometry.mu += self.config.muOffset
if self.config.mu:
logging.debug(f' replaced mu = {dataset.geometry.mu} with {self.config.mu}')
dataset.geometry.mu = self.config.mu
if self.config.nu:
logging.debug(f' replaced nu = {dataset.geometry.nu} with {self.config.nu}')
dataset.geometry.nu = self.config.nu
logging.info(f' mu = {dataset.geometry.mu:6.3f}, '
f'nu = {dataset.geometry.nu:6.3f}, '
f'kap = {dataset.geometry.kap:6.3f}, '
f'kad = {dataset.geometry.kad:6.3f}')
def update_header(self, header:Header) ->None:
if self.config.sampleModel:
import yaml
from orsopy.fileio.model_language import SampleModel
if self.config.sampleModel.endswith('.yml') or self.config.sampleModel.endswith('.yaml'):
if os.path.isfile(self.config.sampleModel):
with open(self.config.sampleModel, 'r') as model_yml:
model = yaml.safe_load(model_yml)
else:
logging.warning(f' ! the file {self.config.sampleModel}.yml does not exist. Ignored!')
return
else:
model = dict(stack=self.config.sampleModel)
logging.debug(f' set sample.model = {self.config.sampleModel}')
header.sample.model = SampleModel.from_dict(model)
class CorrectChopperPhase(EventDataAction):
def perform_action(self, dataset: EventDatasetProtocol) ->None:
dataset.data.events.tof += dataset.timing.tau*(dataset.timing.ch1TriggerPhase-dataset.timing.chopperPhase/2)/180
class CorrectSeriesTime(EventDataAction):
def __init__(self, seriesStartTime):
self.seriesStartTime = np.int64(seriesStartTime)
def perform_action(self, dataset: EventDatasetProtocol)->None:
if not 'wallTime' in dataset.data.events.dtype.names:
raise ValueError("CorrectSeriesTime requires wallTime to be extracted, please run ExtractWalltime first")
dataset.data.pulses.time -= self.seriesStartTime
dataset.data.events.wallTime -= self.seriesStartTime
dataset.data.proton_current.time -= self.seriesStartTime
start, stop = dataset.data.events.wallTime[0], dataset.data.events.wallTime[-1]
logging.debug(f' wall time from {start/1e9:6.1f} s to {stop/1e9:6.1f} s, '
f'series time = {self.seriesStartTime/1e9:6.1f}')
class AssociatePulseWithMonitor(EventDataAction):
def __init__(self, monitorType:MonitorType, lowCurrentThreshold:float):
self.monitorType = monitorType
self.lowCurrentThreshold = lowCurrentThreshold
def perform_action(self, dataset: EventDatasetProtocol)->None:
logging.debug(f' using monitor type {self.monitorType}')
if self.monitorType in (MonitorType.proton_charge, MonitorType.debug):
if not 'wallTime' in dataset.data.events.dtype.names:
raise ValueError(
"AssociatePulseWithMonitor requires wallTime to be extracted, please run ExtractWalltime first")
monitorPerPulse = self.get_current_per_pulse(dataset.data.pulses.time,
dataset.data.proton_current.time,
dataset.data.proton_current.current)\
* 2*dataset.timing.tau * 1e-3
# filter low-current pulses
dataset.data.pulses.monitor = np.where(
monitorPerPulse > 2*dataset.timing.tau * self.lowCurrentThreshold * 1e-3,
monitorPerPulse, 0)
elif self.monitorType==MonitorType.time:
dataset.data.pulses.monitor = 2*dataset.timing.tau
else: # pulses
dataset.data.pulses.monitor = 1
if self.monitorType == MonitorType.debug:
cpp, t_bins = np.histogram(dataset.data.events.wallTime, dataset.data.pulses.time)
np.savetxt('tme.hst', np.vstack((dataset.data.pulses.time[:-1], cpp, dataset.data.pulses.monitor[:-1])).T)
if self.monitorType in (MonitorType.proton_charge, MonitorType.debug):
self.monitor_threshold(dataset)
def monitor_threshold(self, dataset):
goodTimeS = dataset.data.pulses.time[dataset.data.pulses.monitor!=0]
filter_e = np.logical_not(np.isin(dataset.data.events.wallTime, goodTimeS))
dataset.data.events.mask += EVENT_BITMASKS['MonitorThreshold']*filter_e
logging.info(f' low-beam (<{self.lowCurrentThreshold} mC) rejected pulses: '
f'{dataset.data.pulses.monitor.shape[0]-goodTimeS.shape[0]} '
f'out of {dataset.data.pulses.monitor.shape[0]}')
logging.info(f' with {filter_e.sum()} events')
if goodTimeS.shape[0]:
logging.info(f' average counts per pulse = {dataset.data.events.shape[0]/goodTimeS.shape[0]:7.1f}')
else:
logging.info(' average counts per pulse = undefined')
@staticmethod
def get_current_per_pulse(pulseTimeS, currentTimeS, currents):
# add currents for early pulses and current time value after last pulse (j+1)
currentTimeS = np.hstack([[0], currentTimeS, [pulseTimeS[-1]+1]])
currents = np.hstack([[0], currents])
pulseCurrentS = np.zeros(pulseTimeS.shape[0], dtype=float)
j = 0
for i, ti in enumerate(pulseTimeS):
# find the last current item that was before this pulse
while ti >= currentTimeS[j+1]:
j += 1
pulseCurrentS[i] = currents[j]
return pulseCurrentS
class FilterStrangeTimes(EventDataAction):
def perform_action(self, dataset: EventDatasetProtocol)->None:
filter_e = np.logical_not(dataset.data.events.tof<=2*dataset.timing.tau)
dataset.data.events.mask += EVENT_BITMASKS['StrangeTimes']*filter_e
if filter_e.any():
logging.warning(f' strange times: {filter_e.sum()}')
class TofTimeCorrection(EventDataAction):
def __init__(self, correct_chopper_opening: bool = True):
self.correct_chopper_opening = correct_chopper_opening
def perform_action(self, dataset: EventDatasetProtocol) ->None:
d = dataset.data
if self.correct_chopper_opening:
d.events.tof -= ( d.events.delta / 180. ) * dataset.timing.tau
else:
d.events.tof -= ( dataset.geometry.kad / 180. ) * dataset.timing.tau
class ApplyMask(EventDataAction):
def __init__(self, bitmask_filter=None):
self.bitmask_filter = bitmask_filter
def perform_action(self, dataset: EventDatasetProtocol) ->None:
# TODO: why is this action time consuming?
d = dataset.data
pre_filter = d.events.shape[0]
if logging.getLogger().level == logging.DEBUG:
# only run this calculation if debug level is actually active
filtered_by_mask = {}
for key, value in EVENT_BITMASKS.items():
filtered_by_mask[key] = ((d.events.mask & value)!=0).sum()
logging.debug(f" Removed by filters: {filtered_by_mask}")
if self.bitmask_filter is None:
d.events = d.events[d.events.mask==0]
else:
# remove the provided bitmask_filter bits from the events
# this means that all bits that are set in bitmask_filter will NOT be used to filter events
fltr = (d.events.mask & (~self.bitmask_filter)) == 0
d.events = d.events[fltr]
post_filter = d.events.shape[0]
logging.info(f' number of events: total = {pre_filter:7d}, filtered = {post_filter:7d}')
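The bitmask logic in ApplyMask can be illustrated in isolation: clearing the ignored bits before testing for zero keeps events whose only rejection reasons are in `bitmask_filter`. A minimal numpy sketch using the same EVENT_BITMASKS convention (values abbreviated, mask data made up):

```python
import numpy as np

EVENT_BITMASKS = {'MonitorThreshold': 1, 'StrangeTimes': 2, 'yRange': 4}

mask = np.array([0, 1, 2, 3, 4, 6])  # per-event filter bits

# default behaviour: keep only completely unflagged events
keep_default = mask == 0

# relaxed: ignore MonitorThreshold rejections by clearing that bit first
ignore = EVENT_BITMASKS['MonitorThreshold']
keep_relaxed = (mask & ~ignore) == 0

print(keep_default.tolist())  # [True, False, False, False, False, False]
print(keep_relaxed.tolist())  # [True, True, False, False, False, False]
```

An event flagged by both MonitorThreshold and StrangeTimes (mask 3) is still rejected, because only the MonitorThreshold bit was cleared.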

eos/file_reader.py Normal file
@@ -0,0 +1,345 @@
"""
Reading of Amor NeXus data files to extract metadata and event stream.
"""
from typing import BinaryIO, List, Union
import h5py
import numpy as np
import platform
import logging
import subprocess
from datetime import datetime
from orsopy import fileio
from orsopy.fileio.model_language import SampleModel
from . import const
from .header import Header
from .event_data_types import AmorGeometry, AmorTiming, AmorEventStream, PACKET_TYPE, EVENT_TYPE, PULSE_TYPE, PC_TYPE
try:
import zoneinfo
except ImportError:
# for python versions < 3.9 try to use the backports version
from backports import zoneinfo
# Time zone used to interpret time strings
AMOR_LOCAL_TIMEZONE = zoneinfo.ZoneInfo(key='Europe/Zurich')
if platform.node().startswith('amor'):
NICOS_CACHE_DIR = '/home/amor/nicosdata/amor/cache/'
GREP = '/usr/bin/grep "%s"'
else:
NICOS_CACHE_DIR = None
class AmorEventData:
"""
Read one amor NeXus datafile and extract relevant header information.
Implements EventDatasetProtocol
"""
file_list: List[str]
first_index: int
last_index: int = -1
EOF: bool = False
max_events: int
owner: fileio.Person
experiment: fileio.Experiment
sample: fileio.Sample
instrument_settings: fileio.InstrumentSettings
geometry: AmorGeometry
timing: AmorTiming
data: AmorEventStream
eventStartTime: np.int64
def __init__(self, fileName:Union[str, h5py.File, BinaryIO], first_index:int=0, max_events:int=100_000_000):
if type(fileName) is str:
self.file_list = [fileName]
self.hdf = h5py.File(fileName, 'r', swmr=True)
elif type(fileName) is h5py.File:
self.file_list = [fileName.filename]
self.hdf = fileName
else:
self.file_list = [repr(fileName)]
self.hdf = h5py.File(fileName, 'r')
self.first_index = first_index
self.max_events = max_events
self.read_header_info()
self.read_instrument_configuration()
self.read_event_stream()
# actions applied to any dataset
self.read_chopper_trigger_stream()
self.read_proton_current_stream()
if type(fileName) is str:
# close the input file to free memory, only if the file was opened in this object
self.hdf.close()
del(self.hdf)
def _replace_if_missing(self, key, nicos_key, dtype=float):
try:
return dtype(self.hdf[f'/entry1/Amor/{key}'][0])
except(KeyError, IndexError):
if NICOS_CACHE_DIR:
try:
logging.warning(f" using parameter {nicos_key} from nicos cache")
year_date = self.fileDate.strftime('%Y')
value = str(subprocess.getoutput(f'{GREP} {NICOS_CACHE_DIR}nicos-{nicos_key}/{year_date}')).split('\t')[-1]
return dtype(value)
except Exception:
logging.error("Couldn't get value from nicos cache", exc_info=True)
return dtype(0)
else:
logging.warning(f" parameter {key} not found, replaced by zero")
return dtype(0)
def read_header_info(self):
# read general information and first data set
title = self.hdf['entry1/title'][0].decode('utf-8')
proposal_id = self.hdf['entry1/proposal_id'][0].decode('utf-8')
user_name = self.hdf['entry1/user/name'][0].decode('utf-8')
user_affiliation = 'unknown'
user_email = self.hdf['entry1/user/email'][0].decode('utf-8')
user_orcid = None
sampleName = self.hdf['entry1/sample/name'][0].decode('utf-8')
model = self.hdf['entry1/sample/model'][0].decode('utf-8')
if 'stack:' in model:
import yaml
model = yaml.safe_load(model)
else:
model = dict(stack=model)
instrumentName = 'Amor'
source = self.hdf['entry1/Amor/source/name'][0].decode('utf-8')
sourceProbe = 'neutron'
start_time = self.hdf['entry1/start_time'][0].decode('utf-8')
# interpret the start-time string in the instrument's local time zone (Europe/Zurich)
start_date = datetime.fromisoformat(start_time)
self.fileDate = start_date.replace(tzinfo=AMOR_LOCAL_TIMEZONE)
self.owner = fileio.Person(
name=user_name,
affiliation=user_affiliation,
contact=user_email,
)
if user_orcid:
self.owner.orcid = user_orcid
self.experiment = fileio.Experiment(
title=title,
instrument=instrumentName,
start_date=start_date,
probe=sourceProbe,
facility=source,
proposalID=proposal_id
)
self.sample = fileio.Sample(
name=sampleName,
model=SampleModel.from_dict(model),
sample_parameters=None,
)
def read_instrument_configuration(self):
chopperSeparation = float(np.take(self.hdf['entry1/Amor/chopper/pair_separation'], 0))
detectorDistance = float(np.take(self.hdf['entry1/Amor/detector/transformation/distance'], 0))
chopperDetectorDistance = detectorDistance-float(np.take(self.hdf['entry1/Amor/chopper/distance'], 0))
polarizationConfigs = ['unpolarized', 'unpolarized', 'po', 'mo', 'op', 'pp', 'mp', 'om', 'pm', 'mm']
mu = self._replace_if_missing('instrument_control_parameters/mu', 'mu', float)
nu = self._replace_if_missing('instrument_control_parameters/nu', 'nu', float)
kap = self._replace_if_missing('instrument_control_parameters/kappa', 'kappa', float)
kad = self._replace_if_missing('instrument_control_parameters/kappa_offset', 'kad', float)
div = self._replace_if_missing('instrument_control_parameters/div', 'div', float)
ch1TriggerPhase = self._replace_if_missing('chopper/ch1_trigger_phase', 'ch1_trigger_phase', float)
ch2TriggerPhase = self._replace_if_missing('chopper/ch2_trigger_phase', 'ch2_trigger_phase', float)
try:
chopperTriggerTime = (float(self.hdf['entry1/Amor/chopper/ch2_trigger/event_time_zero'][7]) \
-float(self.hdf['entry1/Amor/chopper/ch2_trigger/event_time_zero'][0])) \
/7
chopperTriggerTimeDiff = float(self.hdf['entry1/Amor/chopper/ch2_trigger/event_time_offset'][2])
except (KeyError, IndexError):
logging.debug(' chopper speed and phase taken from .hdf file')
chopperSpeed = self._replace_if_missing('chopper/rotation_speed', 'chopper_phase', float)
chopperPhase = self._replace_if_missing('chopper/phase', 'chopper_phase', float)
tau = 30/chopperSpeed
else:
tau = int(1e-6*chopperTriggerTime/2+0.5)*(1e-3)
chopperTriggerPhase = 180e-9*chopperTriggerTimeDiff/tau
chopperSpeed = 30/tau
chopperPhase = chopperTriggerPhase+ch1TriggerPhase-ch2TriggerPhase
self.geometry = AmorGeometry(mu, nu, kap, kad, div,
chopperSeparation, detectorDistance, chopperDetectorDistance)
self.timing = AmorTiming(ch1TriggerPhase, ch2TriggerPhase, chopperSpeed, chopperPhase, tau)
polarizationConfigLabel = self._replace_if_missing('polarization/configuration/average_value', 'polarizer_config_label', int)
polarizationConfig = fileio.Polarization(polarizationConfigs[polarizationConfigLabel])
logging.debug(f' polarization configuration: {polarizationConfig} (index {polarizationConfigLabel})')
self.instrument_settings = fileio.InstrumentSettings(
incident_angle = fileio.ValueRange(round(mu+kap+kad-0.5*div, 3),
round(mu+kap+kad+0.5*div, 3),
'deg'),
wavelength = fileio.ValueRange(const.lamdaCut, const.lamdaMax, 'angstrom'),
#polarization = fileio.Polarization.unpolarized,
polarization = fileio.Polarization(polarizationConfig)
)
self.instrument_settings.mu = fileio.Value(
round(mu, 3),
'deg',
comment='sample angle to horizon')
self.instrument_settings.nu = fileio.Value(
round(nu, 3),
'deg',
comment='detector angle to horizon')
self.instrument_settings.div = fileio.Value(
round(div, 3),
'deg',
comment='incoming beam divergence')
self.instrument_settings.kap = fileio.Value(
round(kap, 3),
'deg',
comment='incoming beam inclination')
if abs(kad)>0.02:
self.instrument_settings.kad = fileio.Value(
round(kad, 3),
'deg',
comment='incoming beam angular offset')
def update_header(self, header:Header):
"""
Add dataset information into an existing header.
"""
logging.info(f' meta data from: {self.file_list[0]}')
header.owner = self.owner
header.experiment = self.experiment
header.sample = self.sample
header.measurement_instrument_settings = self.instrument_settings
def read_event_stream(self):
"""
Read the actual event data from file. If the file is too large, find the event index from the
packets that allows splitting the file into chunks smaller than self.max_events.
"""
packets = np.recarray(self.hdf['/entry1/Amor/detector/data/event_index'].shape, dtype=PACKET_TYPE)
packets.start_index = self.hdf['/entry1/Amor/detector/data/event_index'][:]
packets.Time = self.hdf['/entry1/Amor/detector/data/event_time_zero'][:]
try:
# packet index that matches first event index
start_packet = int(np.where(packets.start_index==self.first_index)[0][0])
except IndexError:
raise IndexError(f'No event packet found starting at event #{self.first_index}')
packets = packets[start_packet:]
nevts = self.hdf['/entry1/Amor/detector/data/event_time_offset'].shape[0]
if (nevts-self.first_index)>self.max_events:
end_packet = np.where(packets.start_index<=(self.first_index+self.max_events))[0][-1]
self.last_index = packets.start_index[end_packet]-1
packets = packets[:end_packet]
else:
self.last_index = nevts-1
self.EOF = True
nevts = self.last_index+1-self.first_index
# adapt the packet-to-event index relation
packets.start_index -= self.first_index
events = np.recarray(nevts, dtype=EVENT_TYPE)
events.tof = np.array(self.hdf['/entry1/Amor/detector/data/event_time_offset'][self.first_index:self.last_index+1])/1.e9
events.pixelID = self.hdf['/entry1/Amor/detector/data/event_id'][self.first_index:self.last_index+1]
events.mask = 0
pulses = self.read_chopper_trigger_stream()
current = self.read_proton_current_stream()
self.data = AmorEventStream(events, packets, pulses, current)
if self.first_index>0 and not self.EOF:
# label the file name if not all events were used
self.file_list[0] += f'[{self.first_index}:{self.last_index}]'
def read_chopper_trigger_stream(self):
chopper1TriggerTime = np.array(self.hdf['entry1/Amor/chopper/ch2_trigger/event_time_zero'][:-2], dtype=np.int64)
#self.chopper2TriggerTime = self.chopper1TriggerTime + np.array(self.hdf['entry1/Amor/chopper/ch2_trigger/event_time'][:-2], dtype=np.int64)
# + np.array(self.hdf['entry1/Amor/chopper/ch2_trigger/event_time_offset'][:], dtype=np.int64)
if np.shape(chopper1TriggerTime)[0] > 2:
startTime = chopper1TriggerTime[0]
pulseTimeS = chopper1TriggerTime
else:
logging.warning(' no chopper trigger data available, using event stream instead')
startTime = np.array(self.hdf['/entry1/Amor/detector/data/event_time_zero'][0], dtype=np.int64)
stopTime = np.array(self.hdf['/entry1/Amor/detector/data/event_time_zero'][-2], dtype=np.int64)
pulseTimeS = np.arange(startTime, stopTime, self.timing.tau*1e9, dtype=np.int64)
pulses = np.recarray(pulseTimeS.shape, dtype=PULSE_TYPE)
pulses.time = pulseTimeS
pulses.monitor = 1. # default is monitor pulses as it requires no calculation
# apply filter in case the events were filtered
if self.first_index>0 or not self.EOF:
pulses = pulses[(pulses.time>=self.data.packets.Time[0])&(pulses.time<=self.data.packets.Time[-1])]
self.eventStartTime = startTime
return pulses
def read_proton_current_stream(self):
proton_current = np.recarray(self.hdf['entry1/Amor/detector/proton_current/time'].shape, dtype=PC_TYPE)
proton_current.time = self.hdf['entry1/Amor/detector/proton_current/time'][:]
proton_current.current = self.hdf['entry1/Amor/detector/proton_current/value'][:,0]
if self.first_index>0 or not self.EOF:
proton_current = proton_current[(proton_current.time>=self.data.packets.Time[0])&
(proton_current.time<=self.data.packets.Time[-1])]
return proton_current
def info(self):
output = ""
for key in ['owner', 'experiment', 'sample', 'instrument_settings']:
value = repr(getattr(self, key)).replace("\n","\n ")
output += f'\n{key}={value},'
output += '\n'
return output
def append(self, other):
"""
Append event streams from another file to this one. Adjusts the event indices in the
packets to stay valid.
"""
new_events = np.concatenate([self.data.events, other.data.events]).view(np.recarray)
new_pulses = np.concatenate([self.data.pulses, other.data.pulses]).view(np.recarray)
new_proton_current = np.concatenate([self.data.proton_current, other.data.proton_current]).view(np.recarray)
new_packets = np.concatenate([self.data.packets, other.data.packets]).view(np.recarray)
new_packets.start_index[self.data.packets.shape[0]:] += self.data.events.shape[0]
self.data = AmorEventStream(new_events, new_packets, new_pulses, new_proton_current)
# Indicate that this is a modified dataset; effectively counts the number of appends as negative indices
self.last_index = min(self.last_index-1, -1)
self.file_list += other.file_list
def __repr__(self):
output = (f"AmorEventData({self.file_list!r}) # {self.data.events.shape[0]} events, "
f"{self.data.pulses.shape[0]} pulses")
return output
def get_timeslice(self, start, end)->'AmorEventData':
# return a new dataset with just the events that occurred in the given time slice
if not 'wallTime' in self.data.events.dtype.names:
raise ValueError("This dataset is missing a wallTime that is required for time slicing")
# convert from seconds to epoch integer values
start, end = start*1e9, end*1e9
event_filter = self.data.events.wallTime>=start
event_filter &= self.data.events.wallTime<end
pulse_filter = self.data.pulses.time>=start
pulse_filter &= self.data.pulses.time<end
output = super().__new__(AmorEventData)
for key, value in self.__dict__.items():
if key == 'data':
continue
else:
setattr(output, key, value)
# TODO: this is not strictly correct, as the packet/event relationship is lost
output.data = AmorEventStream(self.data.events[event_filter], self.data.packets,
self.data.pulses[pulse_filter], self.data.proton_current)
return output
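The index shift in append() can be demonstrated with plain recarrays (shapes and values below are made up): after concatenation, the second file's packet start indices must be offset by the number of events already present so they still point at the right events.

```python
import numpy as np

ev_a = np.zeros(5, dtype=[('tof', np.float64)]).view(np.recarray)
ev_b = np.zeros(3, dtype=[('tof', np.float64)]).view(np.recarray)
pk_a = np.array([(0,), (2,)], dtype=[('start_index', np.uint32)]).view(np.recarray)
pk_b = np.array([(0,), (1,)], dtype=[('start_index', np.uint32)]).view(np.recarray)

events = np.concatenate([ev_a, ev_b]).view(np.recarray)
packets = np.concatenate([pk_a, pk_b]).view(np.recarray)
# the appended packets now refer to positions after ev_a's events
packets.start_index[pk_a.shape[0]:] += ev_a.shape[0]
print(packets.start_index.tolist())  # [0, 2, 5, 6]
```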

eos/helpers.py Normal file
@@ -0,0 +1,10 @@
"""
Helper functions used during calculations. Uses numba enhanced functions if available, otherwise numpy based
fallback is imported.
"""
try:
from .helpers_numba import merge_frames, extract_walltime, filter_project_x, calculate_derived_properties_focussing
except ImportError:
from .helpers_fallback import merge_frames, extract_walltime, filter_project_x, calculate_derived_properties_focussing

eos/helpers_fallback.py Normal file
@@ -0,0 +1,26 @@
"""
Equivalent functions to those in helpers_numba.py, but using only numpy functionality.
"""
import numpy as np
def merge_frames(tof_e, tofCut, tau, total_offset):
# tof shifted to 1 frame
return np.remainder(tof_e-(tofCut-tau), tau)+total_offset
def extract_walltime(tof_e, dataPacket_p, dataPacketTime_p):
output = np.empty(np.shape(tof_e)[0], dtype=np.int64)
for i in range(len(dataPacket_p)-1):
output[dataPacket_p[i]:dataPacket_p[i+1]] = dataPacketTime_p[i]
output[dataPacket_p[-1]:] = dataPacketTime_p[-1]
return output
def filter_project_x(pixelLookUp, pixelID_e, ymin, ymax):
(detY_e, detZ_e, detXdist_e, delta_e) = pixelLookUp[np.int_(pixelID_e)-1, :].T
# define mask and filter y range
mask_e = (ymin<=detY_e) & (detY_e<=ymax)
return (detZ_e, detXdist_e, delta_e, mask_e)
def calculate_derived_properties_focussing(tof_e, detXdist_e, delta_e, mask_e,
lmin, lmax, nu, mu, chopperDetectorDistance, hdm):
raise NotImplementedError("Only exists in numba implementation so far.")
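The two implemented fallbacks can be sanity-checked with a tiny synthetic event stream (all values below are made up):

```python
import numpy as np

def merge_frames(tof_e, tofCut, tau, total_offset):
    # fold every time-of-flight into one chopper frame of length tau
    return np.remainder(tof_e - (tofCut - tau), tau) + total_offset

def extract_walltime(tof_e, dataPacket_p, dataPacketTime_p):
    # assign every event the absolute time of the packet it arrived in
    output = np.empty(np.shape(tof_e)[0], dtype=np.int64)
    for i in range(len(dataPacket_p) - 1):
        output[dataPacket_p[i]:dataPacket_p[i + 1]] = dataPacketTime_p[i]
    output[dataPacket_p[-1]:] = dataPacketTime_p[-1]
    return output

tof = np.array([0.002, 0.012, 0.025])  # s; second and third event spill into later frames
folded = merge_frames(tof, tofCut=0.0, tau=0.01, total_offset=0.0)
wt = extract_walltime(tof, np.array([0, 2]), np.array([100, 200], dtype=np.int64))
```

With tofCut=0 and tau=10 ms, 12 ms folds back to ~2 ms and 25 ms to ~5 ms; the first two events get the first packet's wall time, the last event the second packet's.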

@@ -11,12 +11,11 @@ def merge_frames(tof_e, tofCut, tau, total_offset):
tof_e_out[ti] = ((tof_e[ti]-dt)%tau)+total_offset # tof shifted to 1 frame
return tof_e_out
@nb.jit(nb.int64[:](nb.float64[:], nb.uint32[:], nb.int64[:]),
nopython=True, parallel=True, cache=True)
def extract_walltime(tof_e, dataPacket_p, dataPacketTime_p):
# assigning every event the wall time of the event packet (absolute time of pulse ?start?)
totalNumber = np.shape(tof_e)[0]
#wallTime_e = np.empty(totalNumber, dtype=np.float64)
wallTime_e = np.empty(totalNumber, dtype=np.int64)
for i in nb.prange(len(dataPacket_p)-1):
for j in range(dataPacket_p[i], dataPacket_p[i+1]):
@@ -26,7 +25,7 @@ def extract_walltime(tof_e, dataPacket_p, dataPacketTime_p):
return wallTime_e
@nb.jit(nb.types.Tuple((nb.int64[:], nb.float64[:], nb.float64[:], nb.boolean[:]))
(nb.float64[:, :], nb.uint32[:], nb.int64, nb.int64),
nopython=True, parallel=True, cache=True)
def filter_project_x(pixelLookUp, pixelID_e, ymin, ymax):
# project events on z-axis and create filter for events outside of y-range


@@ -7,6 +7,13 @@ import numpy as np
from . import const
try:
from functools import cache
except ImportError:
# python <3.9
def cache(func): return func
class Detector:
nBlades = 14 # number of active blades in the detector
nWires = 32 # number of wires per blade
@@ -18,14 +25,57 @@ class Detector:
zero = 0.5*nBlades*bladeZ # mm vertical center of the detector
distance = 4000. # mm distance from focal point to leading blade edge
delta_z: np.ndarray
pixelLookUp: np.ndarray
@staticmethod
def resolve_pixels():
"""
Determine spatial coordinates and angles from pixel number;
this only has to be computed once for the detector.
"""
if hasattr(Detector, 'pixelLookUp'):
return
nPixel = Detector.nWires * Detector.nStripes * Detector.nBlades
pixelID = np.arange(nPixel)
(bladeNr, bPixel) = np.divmod(pixelID, Detector.nWires * Detector.nStripes)
(bZi, detYi) = np.divmod(bPixel, Detector.nStripes) # z index on blade, y index on detector
detZi = bladeNr * Detector.nWires + bZi # z index on detector
detX = bZi * Detector.dX # x position in detector
# detZ = Detector.zero - bladeNr * Detector.bladeZ - bZi * Detector.dZ # z position on detector
bladeAngle = np.rad2deg( 2. * np.arcsin(0.5*Detector.bladeZ / Detector.distance) )
delta = (Detector.nBlades/2. - bladeNr) * bladeAngle \
- np.rad2deg( np.arctan(bZi*Detector.dZ / ( Detector.distance + bZi * Detector.dX) ) )
delta_z = delta[detYi==1]
pixel_lookup=np.vstack((detYi.T, detZi.T, detX.T, delta.T)).T
Detector.delta_z = delta_z
Detector.pixelLookUp = pixel_lookup
# guarantee that pixelLookUp has been computed
Detector.resolve_pixels()
class LZGrid:
dldl = 0.005 # Delta lambda / lambda
# results are cached, so make sure the object is not modified
@property
def qResolution(self):
return self._qResolution
@property
def qzRange(self):
return self._qzRange
def __init__(self, qResolution, qzRange):
self.lamdaCut = const.lamdaCut
self.dldl = 0.005 # Delta lambda / lambda
self._qResolution = qResolution
self._qzRange = qzRange
@property
@cache
def shape(self):
# gives the shape of the grid, not of the bin-edges
return (self.lamda().shape[0]-1, self.z().shape[0]-1)
@cache
def q(self):
resolutions = [0.005, 0.01, 0.02, 0.025, 0.04, 0.05, 0.1, 1]
a, b = np.histogram([self.qResolution], bins = resolutions)
@@ -40,18 +90,22 @@ class Grid:
q_grid = q_grid[q_grid>=self.qzRange[0]]
return q_grid
@cache
def lamda(self):
lamdaMax = 16
lamdaMin = const.lamdaCut
lamda_grid = lamdaMin*(1+self.dldl)**np.arange(int(np.log(lamdaMax/lamdaMin)/np.log(1+self.dldl)+1))
return lamda_grid
@cache
def z(self):
return np.arange(Detector.nBlades*Detector.nWires+1)
@cache
def lz(self):
return np.ones(( self.lamda().shape[0]-1, self.z().shape[0]-1))
@cache
def delta(self, detectorDistance):
# unused for now
bladeAngle = np.rad2deg( 2. * np.arcsin(0.5*Detector.bladeZ / detectorDistance) )
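The lamda() grid uses bins of constant relative width dldl, i.e. a geometric progression. A standalone sketch of the same formula (lamdaMin stands in for const.lamdaCut, values assumed):

```python
import numpy as np

lamdaMin, lamdaMax, dldl = 2.5, 16.0, 0.005

# geometric bin edges: each edge is (1+dldl) times the previous one
n = int(np.log(lamdaMax / lamdaMin) / np.log(1 + dldl) + 1)
lamda_grid = lamdaMin * (1 + dldl) ** np.arange(n)

# every bin has the same relative width Delta(lambda)/lambda
rel = np.diff(lamda_grid) / lamda_grid[:-1]
print(rel.min(), rel.max())  # both ~0.005
```

Constant relative bin width matches the wavelength resolution of the chopper setup, so no bin is over- or under-resolved.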


@@ -33,10 +33,10 @@ def setup_logging():
logfile.setLevel(logging.DEBUG)
logger.addHandler(logfile)
def update_loglevel(verbose=0):
if verbose==1:
logging.getLogger().handlers[0].setLevel(logging.INFO)
if verbose>1:
console = logging.getLogger().handlers[0]
console.setLevel(logging.DEBUG)
formatter = logging.Formatter('%(levelname).1s %(message)s')

eos/normalization.py Normal file
@@ -0,0 +1,78 @@
"""
Defines how to normalize a focusing reflectometry dataset by a reference measurement.
"""
import logging
import os
import numpy as np
from typing import List, Optional
from .event_data_types import EventDatasetProtocol
from .header import Header
from .options import NormalisationMethod
from .instrument import Detector, LZGrid
class LZNormalisation:
file_list: List[str]
angle: float
monitor: float
norm: np.ndarray
def __init__(self, reference:EventDatasetProtocol, normalisationMethod: NormalisationMethod, grid: LZGrid):
self.angle = reference.geometry.nu-reference.geometry.mu
lamda_e = reference.data.events.lamda
detZ_e = reference.data.events.detZ
self.monitor = np.sum(reference.data.pulses.monitor)
norm_lz, _, _ = np.histogram2d(lamda_e, detZ_e, bins=(grid.lamda(), grid.z()))
norm_lz = np.where(norm_lz>2, norm_lz, np.nan)
if normalisationMethod==NormalisationMethod.direct_beam:
self.norm = np.flip(norm_lz, 1)
else:
# correct for reference sm reflectivity
lamda_l = grid.lamda()
theta_z = self.angle+Detector.delta_z
lamda_lz = (grid.lz().T*lamda_l[:-1]).T
theta_lz = grid.lz()*theta_z
qz_lz = 4.0*np.pi*np.sin(np.deg2rad(theta_lz))/lamda_lz
# TODO: introduce variable for `m` and probably for the slope
# Correct reflectivity of m=5 supermirror
Rsm_lz = np.ones(np.shape(qz_lz))
Rsm_lz = np.where(qz_lz>0.0217, 1-(qz_lz-0.0217)*(0.0625/0.0217), Rsm_lz)
Rsm_lz = np.where(qz_lz>0.0217*5, np.nan, Rsm_lz)
self.norm = norm_lz/Rsm_lz
self.file_list = [os.path.basename(entry) for entry in reference.file_list]
@classmethod
def from_file(cls, filename, check_hash=None) -> Optional['LZNormalisation']:
self = super().__new__(cls)
with open(filename, 'rb') as fh:
hash = str(np.load(fh, allow_pickle=True))
self.file_list = np.load(fh, allow_pickle=True)
self.angle = np.load(fh, allow_pickle=True)
self.norm = np.load(fh, allow_pickle=True)
self.monitor = np.load(fh, allow_pickle=True)
if check_hash is not None and hash != check_hash:
logging.info(' file hash does not match this reduction configuration')
raise ValueError('file hash does not match this reduction configuration')
return self
@classmethod
def unity(cls, grid:LZGrid) -> 'LZNormalisation':
logging.warning('normalisation is unity')
self = super().__new__(cls)
self.norm = grid.lz()
self.file_list = []
self.angle = 1.
self.monitor = 1.
return self
def safe(self, filename, hash):
with open(filename, 'wb') as fh:
np.save(fh, hash, allow_pickle=False)
np.save(fh, np.array(self.file_list), allow_pickle=False)
np.save(fh, np.array(self.angle), allow_pickle=False)
np.save(fh, self.norm, allow_pickle=False)
np.save(fh, self.monitor, allow_pickle=False)
def update_header(self, header:Header):
header.measurement_additional_files = self.file_list

eos/options.py Normal file
@@ -0,0 +1,611 @@
"""
Classes for storing various configurations needed for reduction.
"""
import argparse
from dataclasses import dataclass, field, Field, fields, MISSING
from typing import get_args, get_origin, List, Optional, Tuple, Union
from datetime import datetime
from os import path
import numpy as np
import logging
try:
from enum import StrEnum
except ImportError:
try:
# python <3.11 try to use backports
from backports.strenum import StrEnum
except ImportError:
# python <3.10 use Enum instead
from enum import Enum as StrEnum
@dataclass
class CommandlineParameterConfig:
argument: str # parameter name for the command line, resulting in "--argument"
add_argument_args: dict # all arguments that will be passed to add_argument method
short_form: Optional[str] = None
group: str = 'misc'
priority: int = 0
def __gt__(self, other):
"""
Sort required arguments first, then use priority, then name
"""
return (not self.add_argument_args.get('required', False), -self.priority, self.argument)>(
not other.add_argument_args.get('required', False), -other.priority, other.argument)
class ArgParsable:
def __init_subclass__(cls):
# create a nice documentation string that takes help into account
cls.__doc__ = cls.__name__ + " Parameters:\n"
for key, typ in cls.__annotations__.items():
if get_origin(typ) is Union and type(None) in get_args(typ):
optional = True
typ = get_args(typ)[0]
else:
optional = False
value = getattr(cls, key, None)
try:
cls.__doc__ += f" {key} ({typ.__name__})"
except AttributeError:
cls.__doc__ += f" {key}"
if isinstance(value, Field):
if value.default is not MISSING:
cls.__doc__ += f" = {value.default}"
if 'help' in value.metadata:
cls.__doc__ += f" - {value.metadata['help']}"
elif value is not None:
cls.__doc__ += f" = {value}"
if optional:
cls.__doc__ += " [Optional]"
cls.__doc__ += "\n"
return cls
@classmethod
def get_commandline_parameters(cls) -> List[CommandlineParameterConfig]:
"""
Return a list of arguments used in building the command line parameters.
Union types besides Optional are not supported.
"""
output = []
for field in fields(cls):
args={}
if field.default is not MISSING:
args['default'] = field.default
args['required'] = False
elif field.default_factory is not MISSING:
args['default'] = field.default_factory()
args['required'] = False
else:
args['required'] = True
if get_origin(field.type) is Union and type(None) in get_args(field.type):
# optional argument
typ = get_args(field.type)[0]
del args['default']
else:
typ = field.type
if get_origin(typ) is list:
args['nargs'] = '+'
typ = get_args(typ)[0]
elif get_origin(typ) is tuple:
args['nargs'] = len(get_args(typ))
typ = get_args(typ)[0]
if issubclass(typ, StrEnum):
args['choices'] = [ci.value for ci in typ]
if field.default is not MISSING:
args['default'] = field.default.value
typ = str
if typ is bool:
args['action'] = 'store_false' if field.default else 'store_true'
else:
args['type'] = typ
if 'help' in field.metadata:
args['help'] = field.metadata['help']
output.append(CommandlineParameterConfig(
field.name,
add_argument_args=args,
group=field.metadata.get('group', 'misc'),
short_form=field.metadata.get('short', None),
priority=field.metadata.get('priority', 0),
))
return output
@classmethod
def get_default(cls, key):
"""
Return the default argument for an attribute, None if it doesn't exist.
"""
for field in fields(cls):
if field.name != key:
continue
if field.default is not MISSING:
return field.default
elif field.default_factory is not MISSING:
return field.default_factory()
return None
def is_default(self, key):
value = getattr(self, key)
return value == self.get_default(key)
@classmethod
def from_args(cls, args: argparse.Namespace):
"""
Create the child class from the command line argument Namespace object.
All attributes that are not needed for this class are ignored.
"""
inpargs = {}
for field in fields(cls):
value = getattr(args, field.name)
typ = field.type
if get_origin(field.type) is Union and type(None) in get_args(field.type):
# optional argument
typ = get_args(field.type)[0]
if isinstance(typ, type) and issubclass(typ, StrEnum):
# convert str to enum
try:
value = typ(value)
except ValueError:
choices = [ci.value for ci in typ]
raise ValueError(f"Parameter --{field.name} has to be one of {choices}")
inpargs[field.name] = value
return cls(**inpargs)
# definition of command line arguments
@dataclass
class ReaderConfig(ArgParsable):
year: int = field(
default=datetime.now().year,
metadata={
'short': 'Y',
'group': 'input data',
'help': 'year the measurement was performed',
},
)
rawPath: List[str] = field(
default_factory=lambda: ['.', path.join('.','raw'), path.join('..','raw'), path.join('..','..','raw')],
metadata={
'short': 'rp',
'group': 'input data',
'help': 'search paths for hdf files',
},
)
startTime: Optional[float] = field(
default = None,
metadata={
'short': 'st',
'group': 'data manicure',
'help': 'set time zero other than the start of the data acquisition',
},
)
class IncidentAngle(StrEnum):
alphaF = 'alphaF'
mu = 'mu'
nu = 'nu'
class MonitorType(StrEnum):
auto = 'a'
proton_charge = 'p'
time = 't'
neutron_monitor = 'n'
debug = 'x'
@dataclass
class ExperimentConfig(ArgParsable):
chopperPhase: float = field(
default=0,
metadata={
'short': 'cp',
'group': 'instrument settings',
'help': 'phase between opening of chopper 1 and closing of chopper 2 window',
},
)
chopperPhaseOffset: float = field(
default=-5,
metadata={
'short': 'co',
'group': 'instrument settings',
'help': 'phase between chopper 1 index pulse and closing edge',
},
)
chopperSpeed: float = field(
default=500,
metadata={
'short': 'cs',
'group': 'instrument settings',
'help': 'rotation speed of the chopper disks in rpm',
},
)
yRange: Tuple[int, int] = field(
default=(18, 48),
metadata={
'short': 'y',
'group': 'region of interest',
'help': 'horizontal pixel range on the detector to be used',
},
)
lambdaRange: Tuple[float, float] = field(
default_factory=lambda: [3, 12.5],
metadata={
'short': 'l',
'group': 'region of interest',
'help': 'wavelength range to be used (in angstrom)',
},
)
lowCurrentThreshold: float = field(
default=50,
metadata={
'short': 'pt',
'group': 'instrument settings',
'help': 'proton current below which the events are ignored (per chopper pulse)',
},
)
incidentAngle: IncidentAngle = field(
default=IncidentAngle.alphaF,
metadata={
'short': 'ai',
'group': 'instrument settings',
'help': 'calculate alphaI = [alphaF], [mu]+kappa+delta_kappa or ([nu]+kappa+delta_kappa)/2',
},
)
sampleModel: Optional[str] = field(
default=None,
metadata={
'short': 'sm',
'group': 'sample',
'help': 'orso type string to describe the sample in one line',
},
)
mu: Optional[float] = field(
default=None,
metadata={
'short': 'mu',
'group': 'sample',
'help': 'inclination of the sample surface w.r.t. the instrument horizon',
},
)
nu: Optional[float] = field(
default=None,
metadata={
'short': 'nu',
'group': 'sample',
'help': 'inclination of the detector w.r.t. the instrument horizon',
},
)
muOffset: Optional[float] = field(
default=0,
metadata={
'short': 'm',
'group': 'sample',
'help': 'correction offset for mu misalignment (mu_real = mu_file + mu_offset)',
},
)
monitorType: MonitorType = field(
default=MonitorType.proton_charge,
metadata={
'short': 'mt',
'group': 'instrument settings',
'help': 'one of [a]uto, [p]rotonCurrent, [t]ime or [n]eutronMonitor',
},
)
class NormalisationMethod(StrEnum):
direct_beam = 'd'
over_illuminated = 'o'
under_illuminated = 'u'
@dataclass
class ReductionConfig(ArgParsable):
fileIdentifier: List[str] = field(
metadata={
'short': 'f',
'priority': 100,
'group': 'input data',
'help': 'file number(s) or offset (if < 1)',
},
)
qResolution: float = field(
default=0.01,
metadata={
'short': 'r',
'group': 'data manicure',
'help': 'output resolution of q-scale Delta q / q',
},
)
qzRange: Tuple[float, float] = field(
default_factory=lambda: [0.005, 0.51],
metadata={
'short': 'q',
'group': 'region of interest',
'help': 'q_z range of interest (in 1/angstrom)',
},
)
thetaRange: Tuple[float, float] = field(
default_factory=lambda: [-12., 12.],
metadata={
'short': 't',
'group': 'region of interest',
'help': 'absolute theta region of interest',
},
)
thetaRangeR: Tuple[float, float] = field(
default_factory=lambda: [-0.75, 0.75],
metadata={
'short': 'T',
'group': 'region of interest',
'help': 'theta region of interest w.r.t. beam center',
},
)
normalisationMethod: NormalisationMethod = field(
default=NormalisationMethod.over_illuminated,
metadata={
'short': 'nm',
'priority': 90,
'group': 'input data',
'help': 'normalisation method: [o]verillumination, [u]nderillumination, [d]irect_beam'})
scale: List[float] = field(
default_factory=lambda: [1.],
metadata={
'short': 's',
'group': 'data manicure',
'help': '(list of) scaling factors, if less elements than files use the last one',
},
)
autoscale: Optional[Tuple[float, float]] = field(
default=None,
metadata={
'short': 'S',
'group': 'data manicure',
'help': 'q_z range used to determine the scaling factor of the first dataset',
},
)
subtract: Optional[str] = field(
default=None,
metadata={
'short': 'sub',
'group': 'input data',
'help': 'File with R(q_z) curve to be subtracted (in .Rqz.ort format)'})
normalisationFileIdentifier: Optional[List[str]] = field(
default_factory=lambda: [None],
metadata={
'short': 'n',
'priority': 90,
'group': 'input data',
'help': 'file number(s) of normalisation measurement'})
timeSlize: Optional[List[float]] = field(
default=None,
metadata={
'short': 'ts',
'group': 'region of interest',
'help': 'time slicing <interval>[, <start>[, <stop>]]',
},
)
def _expand_file_list(self, short_notation:str):
"""Evaluate string entry for file number lists"""
file_list=[]
for i in short_notation.split(','):
if '-' in i:
if ':' in i:
step = i.split(':', 1)[1]
file_list += range(int(i.split('-', 1)[0]),
int((i.rsplit('-', 1)[1]).split(':', 1)[0])+1,
int(step))
else:
step = 1
file_list += range(int(i.split('-', 1)[0]),
int(i.split('-', 1)[1])+1,
int(step))
else:
file_list += [int(i)]
file_list.sort()
return file_list
def data_files(self):
# get input files from expanding fileIdentifier
return list(map(self._expand_file_list, self.fileIdentifier))
def normal_files(self):
return list(map(self._expand_file_list, self.normalisationFileIdentifier))
class OutputFomatOption(StrEnum):
Rqz_ort = "Rqz.ort"
Rqz_orb = "Rqz.orb"
Rlt_ort = "Rlt.ort"
Rlt_orb = "Rlt.orb"
ort = "ort"
orb = "orb"
Rqz = "Rqz"
Rlt = "Rlt"
class PlotColormaps(StrEnum):
gist_ncar = "gist_ncar"
viridis = "viridis"
inferno = "inferno"
gist_rainbow = "gist_rainbow"
nipy_spectral = "nipy_spectral"
@dataclass
class OutputConfig(ArgParsable):
outputFormats: List[OutputFomatOption] = field(
default_factory=lambda: ['Rqz.ort'],
metadata={
'short': 'of',
'group': 'output',
'help': 'one of "Rqz[.ort]", "Rlt[.ort]" or both with "ort"',
},
)
outputName: str = field(
default='fromEOS',
metadata={
'short': 'o',
'group': 'output',
'help': 'stem of the output file name(s)',
},
)
outputPath: str = field(
default='.',
metadata={
'short': 'op',
'group': 'output',
'help': 'directory where the output files are written',
},
)
plot: bool = field(
default=False,
metadata={
'group': 'output',
'help': 'show matplotlib graphs of results',
},
)
plot_colormap: PlotColormaps = field(
default=PlotColormaps.gist_ncar,
metadata={
'short': 'pcmap',
'group': 'output',
'help': 'matplotlib colormap used in lambda-theta graphs when plotting',
},
)
def _output_format_list(self, outputFormat):
format_list = []
if OutputFomatOption.ort in outputFormat\
or OutputFomatOption.Rqz_ort in outputFormat\
or OutputFomatOption.Rqz in outputFormat:
format_list.append(OutputFomatOption.Rqz_ort)
if OutputFomatOption.ort in outputFormat\
or OutputFomatOption.Rlt_ort in outputFormat\
or OutputFomatOption.Rlt in outputFormat:
format_list.append(OutputFomatOption.Rlt_ort)
if OutputFomatOption.orb in outputFormat\
or OutputFomatOption.Rqz_orb in outputFormat\
or OutputFomatOption.Rqz in outputFormat:
format_list.append(OutputFomatOption.Rqz_orb)
if OutputFomatOption.orb in outputFormat\
or OutputFomatOption.Rlt_orb in outputFormat\
or OutputFomatOption.Rlt in outputFormat:
format_list.append(OutputFomatOption.Rlt_orb)
return sorted(format_list, reverse=True)
def __post_init__(self):
self.outputFormats = self._output_format_list(self.outputFormats)
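The expansion performed by `_output_format_list` can be checked in isolation; this is a plain-string sketch of the same logic (the function name `expand` is hypothetical):

```python
def expand(formats):
    # mirror of OutputConfig._output_format_list using plain strings:
    # "ort"/"orb" select an encoding for both data types, "Rqz"/"Rlt"
    # select a data type in both encodings
    out = []
    if 'ort' in formats or 'Rqz.ort' in formats or 'Rqz' in formats:
        out.append('Rqz.ort')
    if 'ort' in formats or 'Rlt.ort' in formats or 'Rlt' in formats:
        out.append('Rlt.ort')
    if 'orb' in formats or 'Rqz.orb' in formats or 'Rqz' in formats:
        out.append('Rqz.orb')
    if 'orb' in formats or 'Rlt.orb' in formats or 'Rlt' in formats:
        out.append('Rlt.orb')
    return sorted(out, reverse=True)

assert expand(['ort']) == ['Rqz.ort', 'Rlt.ort']
assert expand(['Rqz']) == ['Rqz.ort', 'Rqz.orb']
```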
# ===================================
@dataclass
class EOSConfig:
reader: ReaderConfig
experiment: ExperimentConfig
reduction: ReductionConfig
output: OutputConfig
_call_string_overwrite=None
#@property
#def call_string(self)->str:
# if self._call_string_overwrite:
# return self._call_string_overwrite
# else:
# return self.calculate_call_string()
def call_string(self):
base = 'eos'
inpt = ''
if self.reader.year:
inpt += f' -Y {self.reader.year}'
else:
inpt += f' -Y {datetime.now().year}'
if np.shape(self.reader.rawPath)[0] == 1:
inpt += f' --rawPath {self.reader.rawPath}'
if self.reduction.subtract:
inpt += f' --subtract {self.reduction.subtract}'
if self.reduction.normalisationFileIdentifier:
inpt += f' -n {" ".join(self.reduction.normalisationFileIdentifier)}'
if self.reduction.fileIdentifier:
inpt += f' -f {" ".join(self.reduction.fileIdentifier)}'
otpt = ''
if self.reduction.qResolution:
otpt += f' -r {self.reduction.qResolution}'
if self.output.outputPath != '.':
otpt += f' --outputPath {self.output.outputPath}'
if self.output.outputName:
otpt += f' -o {self.output.outputName}'
if self.output.outputFormats != ['Rqz.ort']:
otpt += f' -of {" ".join(self.output.outputFormats)}'
mask = ''
mask += f' -y {" ".join(str(ii) for ii in self.experiment.yRange)}'
mask += f' -l {" ".join(str(ff) for ff in self.experiment.lambdaRange)}'
mask += f' -t {" ".join(str(ff) for ff in self.reduction.thetaRange)}'
mask += f' -T {" ".join(str(ff) for ff in self.reduction.thetaRangeR)}'
mask += f' -q {" ".join(str(ff) for ff in self.reduction.qzRange)}'
para = ''
# TODO: Check if we want these parameters for defaults
para += f' --chopperPhase {self.experiment.chopperPhase}'
para += f' --chopperPhaseOffset {self.experiment.chopperPhaseOffset}'
if self.experiment.mu:
para += f' --mu {self.experiment.mu}'
elif self.experiment.muOffset:
para += f' --muOffset {self.experiment.muOffset}'
if self.experiment.nu:
para += f' --nu {self.experiment.nu}'
modl = ''
if self.experiment.sampleModel:
modl += f" --sampleModel '{self.experiment.sampleModel}'"
acts = ''
if self.reduction.autoscale:
acts += f' --autoscale {" ".join(str(ff) for ff in self.reduction.autoscale)}'
# TODO: Check if should be shown if not default
acts += f' --scale {self.reduction.scale}'
if self.reduction.timeSlize:
acts += f' --timeSlize {" ".join(str(ff) for ff in self.reduction.timeSlize)}'
mlst = base + inpt + otpt
if mask:
mlst += mask
if para:
mlst += para
if acts:
mlst += acts
if modl:
mlst += modl
if len(mlst) > 70:
mlst = base + ' ' + inpt + ' ' + otpt
if mask:
mlst += ' ' + mask
if para:
mlst += ' ' + para
if acts:
mlst += ' ' + acts
if modl:
mlst += ' ' + modl
logging.debug(f'Argument list built in EOSConfig.call_string: {mlst}')
return mlst

eos/path_handling.py Normal file

@@ -0,0 +1,49 @@
"""
Defines how file paths are resolved from short_notation, year and number to filename.
"""
import os
from typing import List
class PathResolver:
def __init__(self, year, rawPath):
self.year = year
self.rawPath = rawPath
def resolve(self, short_notation):
return list(map(self.get_path, self.expand_file_list(short_notation)))
@staticmethod
def expand_file_list(short_notation)->List[int]:
"""Evaluate string entry for file number lists"""
file_list = []
for i in short_notation.split(','):
if '-' in i:
if ':' in i:
step = i.split(':', 1)[1]
file_list += range(int(i.split('-', 1)[0]),
int((i.rsplit('-', 1)[1]).split(':', 1)[0])+1,
int(step))
else:
step = 1
file_list += range(int(i.split('-', 1)[0]),
int(i.split('-', 1)[1])+1,
int(step))
else:
file_list += [int(i)]
return list(sorted(file_list))
def get_path(self, number):
fileName = f'amor{self.year}n{number:06d}.hdf'
path = ''
for rawd in self.rawPath:
if os.path.exists(os.path.join(rawd, fileName)):
path = rawd
break
if not path:
if os.path.exists(
f'/afs/psi.ch/project/sinqdata/{self.year}/amor/{int(number/1000)}/{fileName}'):
path = f'/afs/psi.ch/project/sinqdata/{self.year}/amor/{int(number/1000)}'
else:
raise FileNotFoundError(f'# ERROR: the file {fileName} can not be found in {self.rawPath}')
return os.path.join(path, fileName)
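The short notation handled by `expand_file_list` can be exercised standalone; a minimal sketch of the same parsing ("," separates entries, "a-b" is an inclusive range, ":s" an optional step):

```python
def expand_file_list(short_notation):
    # standalone sketch of PathResolver.expand_file_list
    file_list = []
    for i in short_notation.split(','):
        if '-' in i:
            start = int(i.split('-', 1)[0])
            rest = i.rsplit('-', 1)[1]
            if ':' in rest:
                stop, step = rest.split(':', 1)
            else:
                stop, step = rest, '1'
            # ranges are inclusive of the stop value
            file_list += range(start, int(stop) + 1, int(step))
        else:
            file_list.append(int(i))
    return sorted(file_list)

assert expand_file_list('608,605-607') == [605, 606, 607, 608]
assert expand_file_list('100-110:5') == [100, 105, 110]
```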

eos/projection.py Normal file

@@ -0,0 +1,317 @@
"""
Classes used to calculate projections/binnings from event data onto given grids.
"""
import logging
import numpy as np
from dataclasses import dataclass
from .event_data_types import EventDatasetProtocol
from .instrument import Detector, LZGrid
from .normalization import LZNormalisation
@dataclass
class ProjectedReflectivity:
R: np.ndarray
dR: np.ndarray
Q: np.ndarray
dQ: np.ndarray
@property
def data(self):
"""
Return combined data compatible with storing as columns in orso file.
Q, R, dR, dQ
"""
return np.array([self.Q, self.R, self.dR, self.dQ]).T
def data_for_time(self, time):
tme = np.ones(np.shape(self.Q))*time
return np.array([self.Q, self.R, self.dR, self.dQ, tme]).T
def scale(self, factor):
self.R *= factor
self.dR *= factor
def autoscale(self, range):
filter_q = (range[0]<=self.Q) & (self.Q<=range[1])
filter_q &= self.dR>0
if filter_q.sum()>0:
scale = (self.R[filter_q]/self.dR[filter_q]).sum()/(self.R[filter_q]**2/self.dR[filter_q]).sum()
self.scale(scale)
logging.info(f' scaling factor = {scale}')
return scale
else:
logging.warning(' automatic scaling not possible')
return 1.0
def stitch(self, other: 'ProjectedReflectivity'):
# find scaling factor between two reflectivities at points both are not zero
filter_q = np.logical_not(np.isnan(other.R*self.R))
filter_q &= self.R>0
filter_q &= other.R>0
R1 = self.R[filter_q]
dR1 = self.dR[filter_q]
R2 = other.R[filter_q]
dR2 = other.dR[filter_q]
if len(R1)>0:
scale = (R1**2*R2**2/(dR1**2*dR2**2)).sum() / (R1**3*R2/(dR1**2*dR2**2)).sum()
self.scale(scale)
logging.info(f' scaling factor = {scale}')
return scale
else:
logging.warning(' automatic scaling not possible')
return 1.0
def subtract(self, R, dR):
# subtract another dataset with same q-points
self.R -= R
self.dR = np.sqrt(self.dR**2+dR**2)
def plot(self, **kwargs):
from matplotlib import pyplot as plt
plt.errorbar(self.Q, self.R, xerr=self.dQ, yerr=self.dR, **kwargs)
plt.yscale('log')
plt.xlabel('Q / $\\AA^{-1}$')
plt.ylabel('R')
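A quick numeric check of the stitching factor used in `stitch` above: if the overlapping points differ by a constant factor c, the formula yields 1/c, so scaling maps this dataset onto the other (synthetic data, variable names hypothetical):

```python
import numpy as np

rng = np.random.default_rng(1)
R2 = rng.uniform(1e-4, 1e-1, 50)       # reference reflectivity
dR1 = dR2 = np.full_like(R2, 1e-4)     # equal uncertainties
c = 3.0
R1 = c * R2                            # dataset off by a constant factor
# same weighted sums as in ProjectedReflectivity.stitch
w = 1.0 / (dR1**2 * dR2**2)
scale = (R1**2 * R2**2 * w).sum() / (R1**3 * R2 * w).sum()
assert np.isclose(scale, 1.0 / c)
```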
class LZProjection:
grid: LZGrid
lamda: np.ndarray
alphaF: np.ndarray
is_normalized: bool
data: np.recarray
def __init__(self, tthh: float, grid: LZGrid):
self.grid = grid
self.is_normalized = False
alphaF_z = tthh + Detector.delta_z
lamda_l = self.grid.lamda()
lamda_c = (lamda_l[:-1]+lamda_l[1:])/2
lz_shape = self.grid.lz()
self.lamda = lz_shape*lamda_c[:, np.newaxis]
self.alphaF = lz_shape*alphaF_z[np.newaxis, :]
self.data = np.zeros(self.alphaF.shape, dtype=[
('I', np.float64),
('mask', bool),
('ref', np.float64),
('err', np.float64),
('res', np.float64),
('qz', np.float64),
('qx', np.float64),
('norm', np.float64),
]).view(np.recarray)
self.data.mask = True
self.monitor = 0.
@classmethod
def from_dataset(cls, dataset: EventDatasetProtocol, grid: LZGrid, has_offspecular=False):
tthh = dataset.geometry.nu - dataset.geometry.mu
output = cls(tthh, grid)
output.correct_gravity(dataset.geometry.detectorDistance)
if has_offspecular:
alphaI_lz = grid.lz()*(dataset.geometry.mu+dataset.geometry.kap+dataset.geometry.kad)
output.calculate_q(alphaI_lz)
else:
output.calculate_q()
return output
def correct_gravity(self, detector_distance):
self.alphaF += np.rad2deg( np.arctan( 3.07e-10 * detector_distance * self.lamda**2 ) )
def calculate_q(self, alphaI=None):
if alphaI is None:
self.data.qz = 4.0*np.pi*np.sin(np.deg2rad(self.alphaF))/self.lamda
self.data.qx = 0.*self.data.qz
else:
self.data.qz = 2.0*np.pi*(np.sin(np.deg2rad(self.alphaF))+np.sin(np.deg2rad(alphaI)))/self.lamda
self.data.qx = 2.0*np.pi*(np.cos(np.deg2rad(self.alphaF))-np.cos(np.deg2rad(alphaI)))/self.lamda
if self.data.qz[0,self.data.qz.shape[1]//2] < 0:
# assuming a 'measurement from below' when center of detector at negative qz
self.data.qz *= -1
self.calculate_q_resolution()
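As a consistency check of the formulas in `calculate_q`: for alphaI equal to alphaF the off-specular expressions reduce to the specular one and qx vanishes (example values only):

```python
import numpy as np

alphaF, lamda = 1.2, 5.0  # deg, angstrom (arbitrary example values)
aF = np.deg2rad(alphaF)
# specular branch: qz = 4 pi sin(alphaF) / lambda
qz_specular = 4.0 * np.pi * np.sin(aF) / lamda
# off-specular branch with alphaI == alphaF
qz_offspec = 2.0 * np.pi * (np.sin(aF) + np.sin(aF)) / lamda
qx_offspec = 2.0 * np.pi * (np.cos(aF) - np.cos(aF)) / lamda
assert np.isclose(qz_specular, qz_offspec)
assert qx_offspec == 0.0
```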
def calculate_q_resolution(self):
res_lz = self.grid.lz() * 0.022**2
res_lz = res_lz + (0.008/self.alphaF)**2
self.data.res = self.data.qz * np.sqrt(res_lz)
def apply_theta_filter(self, theta_range):
# Filters points within theta range
self.data.mask &= (self.alphaF<theta_range[0])|(self.alphaF>theta_range[1])
def apply_theta_mask(self, theta_range):
# Mask points outside theta range
self.data.mask &= self.alphaF>=theta_range[0]
self.data.mask &= self.alphaF<=theta_range[1]
def apply_lamda_mask(self, lamda_range):
# Mask points outside lambda range
self.data.mask &= self.lamda>=lamda_range[0]
self.data.mask &= self.lamda<=lamda_range[1]
def apply_norm_mask(self, norm: LZNormalisation):
# Mask points where the normalisation is nan
self.data.mask &= np.logical_not(np.isnan(norm.norm))
def project(self, dataset: EventDatasetProtocol, monitor: float):
"""
Project dataset on grid and add to intensity.
Can be called multiple times to sequentially add events.
"""
e = dataset.data.events
int_lz, *_ = np.histogram2d(e.lamda, e.detZ, bins = (self.grid.lamda(), self.grid.z()))
self.data.I += int_lz
self.monitor += monitor
# in case the intensity changed one needs to normalize again
self.is_normalized = False
@property
def I(self):
output = self.data.I[:]
output[np.logical_not(self.data.mask)] = np.nan
return output / self.monitor
def calc_error(self):
# calculate error bars for resulting intensity after normalization
self.data.err = self.data.ref * np.sqrt( 1./(self.data.I+.1) + 1./self.data.norm )
def normalize_over_illuminated(self, norm: LZNormalisation):
"""
Normalize the dataset and take into account a difference in
detector angle between measurement and reference.
"""
norm_lz = norm.norm
thetaN_z = Detector.delta_z+norm.angle
thetaN_lz = np.ones_like(norm_lz)*thetaN_z
thetaN_lz = np.where(np.absolute(thetaN_lz)>5e-3, thetaN_lz, np.nan)
self.data.mask &= (np.absolute(thetaN_lz)>5e-3)
ref_lz = (self.data.I*np.absolute(thetaN_lz))/(norm_lz*np.absolute(self.alphaF))
ref_lz *= norm.monitor/self.monitor
ref_lz[np.logical_not(self.data.mask)] = np.nan
self.data.norm = norm_lz
self.data.ref = ref_lz
self.calc_error()
self.is_normalized = True
def normalize_no_footprint(self, norm: LZNormalisation):
norm_lz = norm.norm
ref_lz = (self.data.I/norm_lz)
ref_lz *= norm.monitor/self.monitor
ref_lz[np.logical_not(self.data.mask)] = np.nan
self.data.norm = norm_lz
self.data.ref = ref_lz
self.calc_error()
self.is_normalized = True
def scale(self, factor: float):
if not self.is_normalized:
raise ValueError("Dataset needs to be normalized, first")
self.data.ref *= factor
self.data.err *= factor
def project_on_qz(self):
if not self.is_normalized:
raise ValueError("Dataset needs to be normalized, first")
q_q = self.grid.q()
weights_lzf = self.data.norm[self.data.mask]
q_lzf = self.data.qz[self.data.mask]
R_lzf = self.data.ref[self.data.mask]
dR_lzf = self.data.err[self.data.mask]
dq_lzf = self.data.res[self.data.mask]
N_q = np.histogram(q_lzf, bins = q_q, weights = weights_lzf )[0]
N_q = np.where(N_q > 0, N_q, np.nan)
R_q = np.histogram(q_lzf, bins = q_q, weights = weights_lzf * R_lzf )[0]
R_q = R_q / N_q
dR_q = np.histogram(q_lzf, bins = q_q, weights = (weights_lzf * dR_lzf)**2 )[0]
dR_q = np.sqrt( dR_q ) / N_q
# TODO: different error propagations for dR and dq!
# this is what should work:
#dq_q = np.histogram(q_lzf, bins = q_q, weights = (weights_lzf * dq_lzf)**2 )[0]
#dq_q = np.sqrt( dq_q ) / N_q
# and this actually works:
N_q = np.histogram(q_lzf, bins = q_q, weights = weights_lzf**2 )[0]
N_q = np.where(N_q > 0, N_q, np.nan)
dq_q = np.histogram(q_lzf, bins = q_q, weights = (weights_lzf * dq_lzf)**2 )[0]
dq_q = np.sqrt( dq_q / N_q )
return ProjectedReflectivity(R_q, dR_q, (q_q[1:]+q_q[:-1])/2., dq_q)
############## potential speedup not used right now, needs to be tested ####################
@classmethod
def histogram2d_lz(cls, lamda_e, detZ_e, bins):
"""
Perform a binning operation equivalent to numpy histogram2d for the special
case of the second dimension using integer positions (pre-defined pixels).
Based on the devide_bin algorithm below.
"""
dimension = bins[1].shape[0]-1
if not (np.array(bins[1])==np.arange(dimension+1)).all():
raise ValueError("histogram2d_lz requires second bin dimension to be a contiguous integer range")
binning = cls.devide_bin(lamda_e, detZ_e.astype(np.int64), bins[0], dimension)
return np.array(binning), bins[0], bins[1]
@classmethod
def devide_bin(cls, lambda_e, position_e, lamda_edges, dimension):
'''
Use a divide and conquer strategy to bin the data. For the actual binning the
numpy bincount function is used, as it is much faster than histogram for
counting of integer values.
:param lambda_e: Array of wavelength for each event
:param position_e: Array of positional indices for each event
:param lamda_edges: The edges of bins to be used for the histogram
:param dimension: number of position buckets in the output array
:return: 2D list of dimensions (lambda, x) of counts
'''
if len(lambda_e)==0:
# no more events in range, return empty bins
return [np.zeros(dimension, dtype=np.int64).tolist()]*(len(lamda_edges)-1)
if len(lamda_edges)==2:
# deepest recursion reached, all items should be within the two ToF edges
return [np.bincount(position_e, minlength=dimension).tolist()]
# split all events into two time of flight regions
split_idx = len(lamda_edges)//2
left_region = lambda_e<lamda_edges[split_idx]
left_list = cls.devide_bin(lambda_e[left_region], position_e[left_region],
lamda_edges[:split_idx+1], dimension)
right_region = np.logical_not(left_region)
right_list = cls.devide_bin(lambda_e[right_region], position_e[right_region],
lamda_edges[split_idx:], dimension)
return left_list+right_list
def plot(self, **kwargs):
from matplotlib import pyplot as plt
from matplotlib.colors import LogNorm
if 'colorbar' in kwargs:
cmap=True
del(kwargs['colorbar'])
else:
cmap=False
if self.is_normalized:
if not 'norm' in kwargs:
kwargs['norm'] = LogNorm(2e-3, 2.0)
plt.pcolormesh(self.lamda, self.alphaF, self.data.ref, **kwargs)
if cmap:
plt.colorbar(label='R')
else:
if not 'norm' in kwargs:
kwargs['norm'] = LogNorm()
plt.pcolormesh(self.lamda, self.alphaF, self.data.I, **kwargs)
if cmap:
plt.colorbar(label='I / cpm')
plt.xlabel('$\\lambda$ / $\\AA$')
plt.ylabel('$\\Theta$ / °')
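The equivalence claimed in `histogram2d_lz` can be verified directly; a standalone copy of `devide_bin` checked against `numpy.histogram2d` on synthetic events:

```python
import numpy as np

def devide_bin(lambda_e, position_e, lamda_edges, dimension):
    # copy of LZProjection.devide_bin for a standalone comparison
    if len(lambda_e) == 0:
        return [np.zeros(dimension, dtype=np.int64).tolist()] * (len(lamda_edges) - 1)
    if len(lamda_edges) == 2:
        return [np.bincount(position_e, minlength=dimension).tolist()]
    split_idx = len(lamda_edges) // 2
    left = lambda_e < lamda_edges[split_idx]
    return (devide_bin(lambda_e[left], position_e[left], lamda_edges[:split_idx + 1], dimension)
            + devide_bin(lambda_e[~left], position_e[~left], lamda_edges[split_idx:], dimension))

rng = np.random.default_rng(0)
lamda = rng.uniform(3.0, 12.5, 10_000)   # synthetic wavelengths
detz = rng.integers(0, 64, 10_000)       # synthetic pixel indices
edges = np.linspace(3.0, 12.5, 33)
reference, *_ = np.histogram2d(lamda, detz, bins=(edges, np.arange(65)))
fast = np.array(devide_bin(lamda, detz, edges, 64))
assert (reference == fast).all()
```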

eos/reduction.py Normal file

@@ -0,0 +1,414 @@
import logging
import os
import sys
import numpy as np
from orsopy import fileio
from .file_reader import AmorEventData
from .header import Header
from .path_handling import PathResolver
from .options import EOSConfig, IncidentAngle, MonitorType, NormalisationMethod
from .instrument import Detector, LZGrid
from .normalization import LZNormalisation
from . import event_handling as eh, event_analysis as ea
from .projection import LZProjection
MONITOR_UNITS = {
MonitorType.neutron_monitor: 'cnts',
MonitorType.proton_charge: 'mC',
MonitorType.time: 's',
MonitorType.auto: 'various',
MonitorType.debug: 'mC',
}
class AmorReduction:
def __init__(self, config: EOSConfig):
self.experiment_config = config.experiment
self.reader_config = config.reader
self.reduction_config = config.reduction
self.output_config = config.output
self.header = Header()
self.header.reduction.call = config.call_string()
self.prepare_actions()
def prepare_actions(self):
"""
Does not do any actual reduction.
"""
self.path_resolver = PathResolver(self.reader_config.year, self.reader_config.rawPath)
# setup all actions performed on event datasets before projection on the grid
# The order of these corrections matter as some rely on parameters modified before
if self.reduction_config.normalisationFileIdentifier:
# explicit steps performed on AmorEventDataset for normalization files
self.normevent_actions = eh.ApplyPhaseOffset(self.experiment_config.chopperPhaseOffset)
self.normevent_actions |= eh.CorrectChopperPhase()
if self.experiment_config.monitorType in [MonitorType.proton_charge, MonitorType.debug]:
self.normevent_actions |= ea.ExtractWalltime()
self.normevent_actions |= eh.AssociatePulseWithMonitor(self.experiment_config.monitorType,
self.experiment_config.lowCurrentThreshold)
self.normevent_actions |= eh.FilterStrangeTimes()
self.normevent_actions |= ea.MergeFrames()
self.normevent_actions |= ea.AnalyzePixelIDs(self.experiment_config.yRange)
self.normevent_actions |= eh.TofTimeCorrection(self.experiment_config.incidentAngle==IncidentAngle.alphaF)
self.normevent_actions |= ea.CalculateWavelength(self.experiment_config.lambdaRange)
self.normevent_actions |= eh.ApplyMask()
# Actions on datasets not used for normalization
self.dataevent_actions = eh.ApplyPhaseOffset(self.experiment_config.chopperPhaseOffset)
self.dataevent_actions |= eh.ApplyParameterOverwrites(self.experiment_config) # some actions use instrument parameters, change before that
self.dataevent_actions |= eh.CorrectChopperPhase()
self.dataevent_actions |= ea.ExtractWalltime()
self.dataevent_time_correction = eh.CorrectSeriesTime(0) # will be set from first dataset
self.dataevent_actions |= self.dataevent_time_correction
self.dataevent_actions |= eh.AssociatePulseWithMonitor(self.experiment_config.monitorType,
self.experiment_config.lowCurrentThreshold)
self.dataevent_actions |= eh.FilterStrangeTimes()
self.dataevent_actions |= ea.MergeFrames()
self.dataevent_actions |= ea.AnalyzePixelIDs(self.experiment_config.yRange)
self.dataevent_actions |= eh.TofTimeCorrection(self.experiment_config.incidentAngle==IncidentAngle.alphaF)
self.dataevent_actions |= ea.CalculateWavelength(self.experiment_config.lambdaRange)
self.dataevent_actions |= ea.CalculateQ(self.experiment_config.incidentAngle)
self.dataevent_actions |= ea.FilterQzRange(self.reduction_config.qzRange)
self.dataevent_actions |= eh.ApplyMask()
self.grid = LZGrid(self.reduction_config.qResolution, self.reduction_config.qzRange)
def reduce(self):
if not os.path.exists(f'{self.output_config.outputPath}'):
logging.debug(f'Creating destination path {self.output_config.outputPath}')
os.makedirs(self.output_config.outputPath)
# load or create normalisation matrix
if self.reduction_config.normalisationFileIdentifier:
# TODO: change option definition to single normalization short_code
self.create_normalisation_map(self.reduction_config.normalisationFileIdentifier[0])
else:
self.norm = LZNormalisation.unity(self.grid)
# load R(q_z) curve to be subtracted:
if self.reduction_config.subtract:
self.sq_q, self.sR_q, self.sdR_q, self.sFileName = self.loadRqz(self.reduction_config.subtract)
logging.warning(f'loaded background file: {self.sFileName}')
self.header.reduction.corrections.append(f'background from \'{self.sFileName}\' subtracted')
self.subtract = True
else:
self.subtract = False
# load measurement data and do the reduction
self.datasetsRqz = []
self.datasetsRlt = []
for i, short_notation in enumerate(self.reduction_config.fileIdentifier):
self.read_file_block(i, short_notation)
# output
logging.warning('output:')
if 'Rqz.ort' in self.output_config.outputFormats:
self.save_Rqz()
if 'Rlt.ort' in self.output_config.outputFormats:
self.save_Rtl()
if self.output_config.plot:
import matplotlib.pyplot as plt
if 'Rqz.ort' in self.output_config.outputFormats:
plt.figure(num=99)
plt.legend()
plt.show()
def read_file_block(self, i, short_notation):
logging.warning('reading input:')
file_list = self.path_resolver.resolve(short_notation)
self.header.measurement_data_files = []
self.dataset = AmorEventData(file_list[0])
if self.experiment_config.monitorType==MonitorType.auto:
if self.dataset.data.proton_current.current.sum()>1:
self.experiment_config.monitorType = MonitorType.proton_charge
logging.debug(' monitor type set to "proton current"')
else:
self.experiment_config.monitorType = MonitorType.time
logging.debug(' monitor type set to "time"')
# update actions to use the selected monitor
self.prepare_actions()
# reload normalization to make sure the monitor matches
if self.reduction_config.normalisationFileIdentifier:
self.create_normalisation_map(self.reduction_config.normalisationFileIdentifier[0])
self.dataevent_time_correction.seriesStartTime = self.dataset.eventStartTime
self.dataevent_actions(self.dataset)
self.dataset.update_header(self.header)
self.dataevent_actions.update_header(self.header)
for fi in file_list[1:]:
di = AmorEventData(fi)
self.dataevent_actions(di)
self.dataset.append(di)
for fileName in file_list:
self.header.measurement_data_files.append(fileio.File( file=fileName.split('/')[-1],
timestamp=self.dataset.fileDate))
if self.reduction_config.timeSlize:
if i>0:
logging.warning(" time slicing should only be used for one set of data files, check parameters")
self.analyze_timeslices(i)
else:
self.analyze_unsliced(i)
def analyze_unsliced(self, i):
self.monitor = np.sum(self.dataset.data.pulses.monitor)
logging.warning(f' monitor = {self.monitor:8.2f} {MONITOR_UNITS[self.experiment_config.monitorType]}')
proj:LZProjection = self.project_on_lz()
try:
scale = self.reduction_config.scale[i]
except IndexError:
scale = self.reduction_config.scale[-1]
proj.scale(scale)
if 'Rqz.ort' in self.output_config.outputFormats:
headerRqz = self.header.orso_header()
headerRqz.data_set = f'Nr {i} : mu = {self.dataset.geometry.mu:6.3f} deg'
# projection on qz-grid
result = proj.project_on_qz()
if self.reduction_config.autoscale:
if i==0:
result.autoscale(self.reduction_config.autoscale)
else:
result.stitch(self.last_result)
if self.subtract:
if len(result.Q)==len(self.sq_q):
result.subtract(self.sR_q, self.sdR_q)
else:
logging.warning(
f'background file {self.sFileName} not compatible with q_z scale ({len(self.sq_q)} vs. {len(result.Q)})')
orso_data = fileio.OrsoDataset(headerRqz, result.data)
self.last_result = result
self.datasetsRqz.append(orso_data)
if self.output_config.plot:
import matplotlib.pyplot as plt
# plot all reflectivity results in same graph
plt.figure(num=99)
result.plot(label=f'{self.reduction_config.fileIdentifier[i]}')
if 'Rlt.ort' in self.output_config.outputFormats:
columns = [
fileio.Column('Qz', '1/angstrom', 'normal momentum transfer'),
fileio.Column('R', '', 'specular reflectivity'),
fileio.ErrorColumn(error_of='R', error_type='uncertainty', value_is='sigma'),
fileio.ErrorColumn(error_of='Qz', error_type='resolution', value_is='sigma'),
fileio.Column('lambda', 'angstrom', 'wavelength'),
fileio.Column('alpha_f', 'deg', 'final angle'),
fileio.Column('l', '', 'index of lambda-bin'),
fileio.Column('t', '', 'index of theta bin'),
fileio.Column('intensity', '', 'filtered neutron events per pixel'),
fileio.Column('norm', '', 'normalisation matrix'),
fileio.Column('mask', '', 'pixels used for calculating R(q_z)'),
fileio.Column('Qx', '1/angstrom', 'parallel momentum transfer'),
]
ts, zs = proj.data.shape
lindex_lz = np.tile(np.arange(1, ts+1), (zs, 1)).T
tindex_lz = np.tile(np.arange(1, zs+1), (ts, 1))
j = 0
for item in zip(
proj.data.qz.T,
proj.data.ref.T,
proj.data.err.T,
proj.data.res.T,
proj.lamda.T,
proj.alphaF.T,
lindex_lz.T,
tindex_lz.T,
proj.data.I.T,
proj.data.norm.T,
proj.data.mask.T,
proj.data.qx.T,
):
data = np.array(list(item)).T
headerRlt = self.header.orso_header(columns=columns)
headerRlt.data_set = f'dataset_{i}_{j+1} : alpha_f = {proj.alphaF[0, j]:6.3f} deg'
orso_data = fileio.OrsoDataset(headerRlt, data)
self.datasetsRlt.append(orso_data)
j += 1
if self.output_config.plot:
import matplotlib.pyplot as plt
plt.figure()
proj.plot(colorbar=True, cmap=str(self.output_config.plot_colormap))
plt.title(f'{self.reduction_config.fileIdentifier[i]}')
def analyze_timeslices(self, i):
wallTime_e = np.float64(self.dataset.data.events.wallTime)/1e9
pulseTimeS = np.float64(self.dataset.data.pulses.time)/1e9
interval = self.reduction_config.timeSlize[0]
try:
start = self.reduction_config.timeSlize[1]
except IndexError:
start = 0
try:
stop = self.reduction_config.timeSlize[2]
except IndexError:
stop = wallTime_e[-1]
# make overwriting log lines possible by removing newline at the end
#logging.StreamHandler.terminator = "\r"
logging.warning(' time slicing')
logging.info(' slice time monitor')
for ti, time in enumerate(np.arange(start, stop, interval)):
slice = self.dataset.get_timeslice(time, time+interval)
self.monitor = np.sum(slice.data.pulses.monitor)
logging.info(f' {ti:<4d} {time:6.0f} {self.monitor:7.2f} {MONITOR_UNITS[self.experiment_config.monitorType]}')
proj: LZProjection = self.project_on_lz(slice)
try:
scale = self.reduction_config.scale[i]
except IndexError:
scale = self.reduction_config.scale[-1]
proj.scale(scale)
# projection on qz-grid
result = proj.project_on_qz()
if self.reduction_config.autoscale:
# scale every slice the same
if ti==0:
if i==0:
atscale = result.autoscale(self.reduction_config.autoscale)
else:
atscale = result.stitch(self.last_result)
else:
result.scale(atscale)
if self.subtract:
if len(result.Q)==len(self.sq_q):
result.subtract(self.sR_q, self.sdR_q)
else:
logging.warning(
f'background file {self.sFileName} not compatible with q_z scale ({len(self.sq_q)} vs. {len(result.Q)})')
headerRqz = self.header.orso_header(
extra_columns=[fileio.Column('time', 's', 'time relative to start of measurement series')])
headerRqz.data_set = f'{i}_{ti}: time = {time:8.1f} s to {time+interval:8.1f} s'
orso_data = fileio.OrsoDataset(headerRqz, result.data_for_time(time))
self.datasetsRqz.append(orso_data)
if self.output_config.plot:
import matplotlib.pyplot as plt
# plot all reflectivity results in same graph
plt.figure(num=99)
result.plot(label=f'{self.reduction_config.fileIdentifier[i]} @ {time:.1f}s')
self.last_result = result
# reset normal logging behavior
#logging.StreamHandler.terminator = "\n"
logging.info(f' done {min(time+interval, pulseTimeS[-1]):5.0f}')
def save_Rqz(self):
fname = os.path.join(self.output_config.outputPath, f'{self.output_config.outputName}.Rqz.ort')
logging.warning(f' {fname}')
theSecondLine = f' {self.header.experiment.title} | {self.header.experiment.start_date} | sample {self.header.sample.name} | R(q_z)'
fileio.save_orso(self.datasetsRqz, fname, data_separator='\n', comment=theSecondLine)
def save_Rtl(self):
fname = os.path.join(self.output_config.outputPath, f'{self.output_config.outputName}.Rlt.ort')
logging.warning(f' {fname}')
theSecondLine = f' {self.header.experiment.title} | {self.header.experiment.start_date} | sample {self.header.sample.name} | R(lambda, theta)'
fileio.save_orso(self.datasetsRlt, fname, data_separator='\n', comment=theSecondLine)
def loadRqz(self, name):
fname = os.path.join(self.output_config.outputPath, name)
if os.path.exists(fname):
fileName = fname
elif os.path.exists(f'{fname}.Rqz.ort'):
fileName = f'{fname}.Rqz.ort'
else:
sys.exit(f'### the background file \'{fname}\' does not exist! => stopping')
q_q, Sq_q, dS_q = np.loadtxt(fileName, usecols=(0, 1, 2), comments='#', unpack=True)
return q_q, Sq_q, dS_q, fileName
def create_normalisation_map(self, short_notation):
outputPath = self.output_config.outputPath
normalisation_list = self.path_resolver.expand_file_list(short_notation)
name = '_'.join(map(str, normalisation_list))
n_path = os.path.join(outputPath, f'{name}.norm')
self.norm = None
if os.path.exists(n_path):
logging.debug(f'trying to load matrix from file {n_path}')
try:
self.norm = LZNormalisation.from_file(n_path, self.normevent_actions.action_hash())
except (ValueError, EOFError):
self.norm = None
else:
logging.warning(f'normalisation matrix: found and using {n_path}')
if self.norm is None:
# in case file does not exist or the action hash doesn't match, create new normalization
logging.warning(f'normalisation matrix: using the files {normalisation_list}')
normalization_files = list(map(self.path_resolver.get_path, normalisation_list))
reference = AmorEventData(normalization_files[0])
self.normevent_actions(reference)
for nfi in normalization_files[1:]:
toadd = AmorEventData(nfi)
self.normevent_actions(toadd)
reference.append(toadd)
self.norm = LZNormalisation(reference, self.reduction_config.normalisationMethod, self.grid)
if reference.data.events.shape[0] > 1e6:
self.norm.safe(n_path, self.normevent_actions.action_hash())
self.header.measurement_additional_files = self.norm.file_list
self.header.reduction.corrections.append('normalisation with \'additional files\'')
def project_on_lz(self, dataset=None):
if dataset is None:
dataset=self.dataset
proj = LZProjection.from_dataset(dataset, self.grid,
has_offspecular=(self.experiment_config.incidentAngle!=IncidentAngle.alphaF))
if not self.reduction_config.is_default('thetaRangeR'):
t0 = dataset.geometry.nu - dataset.geometry.mu
# adjust range based on detector center
thetaRange = [ti+t0 for ti in self.reduction_config.thetaRangeR]
proj.apply_theta_mask(thetaRange)
elif not self.reduction_config.is_default('thetaRange'):
proj.apply_theta_mask(self.reduction_config.thetaRange)
else:
thetaRange = [dataset.geometry.nu - dataset.geometry.mu - dataset.geometry.div/2,
dataset.geometry.nu - dataset.geometry.mu + dataset.geometry.div/2]
proj.apply_theta_mask(thetaRange)
proj.apply_lamda_mask(self.experiment_config.lambdaRange)
proj.apply_norm_mask(self.norm)
proj.project(dataset, self.monitor)
if self.reduction_config.normalisationMethod == NormalisationMethod.over_illuminated:
logging.debug(' assuming an overilluminated sample and correcting for the angle of incidence')
proj.normalize_over_illuminated(self.norm)
elif self.reduction_config.normalisationMethod==NormalisationMethod.under_illuminated:
logging.debug(' assuming an underilluminated sample and ignoring the angle of incidence')
proj.normalize_no_footprint(self.norm)
elif self.reduction_config.normalisationMethod==NormalisationMethod.direct_beam:
logging.debug(' assuming direct beam for normalisation and ignoring the angle of incidence')
proj.normalize_no_footprint(self.norm)
else:
logging.error('unknown normalisation method! Use [u]nder, [o]ver or [d]irect illumination')
proj.normalize_no_footprint(self.norm)
if self.monitor<=1e-6:
logging.info(' low monitor -> nan output')
proj.data.ref *= np.nan
return proj
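The time-slicing loop in `analyze_timeslices` above builds its windows with `np.arange(start, stop, interval)` and reduces events falling in each `[t, t + interval)` range. A minimal sketch of the resulting slice boundaries (values are illustrative, not taken from a real measurement); note that the last window may extend past the requested stop time:

```python
import numpy as np

# Sketch of the slice boundaries generated by analyze_timeslices:
# events are grouped into [t, t + interval) windows from `start` up to `stop`.
interval, start, stop = 60.0, 0.0, 200.0
edges = np.arange(start, stop, interval)
windows = [(t, t + interval) for t in edges]
print(windows)  # [(0.0, 60.0), (60.0, 120.0), (120.0, 180.0), (180.0, 240.0)]
```

Because `np.arange` includes every edge strictly below `stop`, the final slice here reaches 240 s even though 200 s was requested; the reduction clamps only the logged end time (`min(time+interval, pulseTimeS[-1])`), not the slice itself.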


@@ -1,11 +1,10 @@
__version__ = '2024-03-15'
__version__ = '2025-06-07'
# essential changes with regard to 2022 version
# - improved orso header
# - nexus compatible
# - new theta grid
# - new content in data_e (angles rather than distances)
# - bug fixes: tof correction within detector
# essential changes with regard to 2024 version
# - accepts new hdf structure
# TODO:
# - output path
# - solve the confusion for negative file numbers and the connecting '-'
import os
import sys
@@ -88,7 +87,7 @@ def analyse_ev(event_e, tof_e, yMin, yMax, thetaMin, thetaMax):
# correct tof for beam size effect at chopper
# t_cor = (delta / 180 deg) * tau
data_e[:,0] -= ( data_e[:,5] / 180. ) * tau
data_e[:,0] -= ( data_e[:,5] / 180. ) * tau
# effective flight path length
#data_e[:,6] = chopperDetectorDistance + data_e[:,6]
@@ -145,22 +144,23 @@ class Meta:
ka0 = 0.245 # given inclination of the beam after the Selene guide
year_date = str(datetime.today()).split(' ')[0].replace("-", "/", 1)
# decide from where to take the control parameters
try:
self.mu = float(np.take(fh['/entry1/Amor/master_parameters/mu/value'], 0))
self.nu = float(np.take(fh['/entry1/Amor/master_parameters/nu/value'], 0))
self.kap = float(np.take(fh['/entry1/Amor/master_parameters/kap/value'], 0))
self.kad = float(np.take(fh['/entry1/Amor/master_parameters/kad/value'], 0))
self.div = float(np.take(fh['/entry1/Amor/master_parameters/div/value'], 0))
chSp = float(np.take(fh['/entry1/Amor/chopper/rotation_speed/value'], 0))
self.chPh = float(np.take(fh['/entry1/Amor/chopper/phase/value'], 0))
try:
self.mu = float(np.take(fh['/entry1/Amor/instrument_control_parameters/mu'], 0))
self.nu = float(np.take(fh['/entry1/Amor/instrument_control_parameters/nu'], 0))
self.kap = float(np.take(fh['/entry1/Amor/instrument_control_parameters/kappa'], 0))
self.kad = float(np.take(fh['/entry1/Amor/instrument_control_parameters/kappa_offset'], 0))
self.div = float(np.take(fh['/entry1/Amor/instrument_control_parameters/div'], 0))
chopperSpeed = float(np.take(fh['/entry1/Amor/chopper/rotation_speed'], 0))
chopperPhase = float(np.take(fh['/entry1/Amor/chopper/phase'], 0))
ch1TriggerPhase = float(np.take(fh['/entry1/Amor/chopper/ch1_trigger_phase'], 0))
polarizationConfigLabel = int(np.take(fh['/entry1/Amor/polarization/configuration/value'], 0))
#polarizationConfigLabel = 1
except (KeyError, IndexError):
logging.warning(f" using parameters from nicos cache")
#cachePath = '/home/amor/nicosdata/amor/cache/'
#cachePath = '/home/nicos/amorcache/'
cachePath = '/home/amor/cache/'
cachePath = '/home/amor/nicosdata/amor/cache/'
year_date = str(datetime.today()).split(' ')[0].replace("-", "/", 1)
value = str(subprocess.getoutput('/usr/bin/grep "value" '+cachePath+'nicos-mu/'+year_date)).split('\t')[-1]
self.mu = float(value)
value = str(subprocess.getoutput('/usr/bin/grep "value" '+cachePath+'nicos-nu/'+year_date)).split('\t')[-1]
@@ -172,31 +172,23 @@ class Meta:
value = str(subprocess.getoutput('/usr/bin/grep "value" '+cachePath+'nicos-div/'+year_date)).split('\t')[-1]
self.div = float(value)
value = str(subprocess.getoutput('/usr/bin/grep "value" '+cachePath+'nicos-ch1_speed/'+year_date)).split('\t')[-1]
chSp = float(value)
self.chPh = np.nan
chopperSpeed = float(value)
#value = str(subprocess.getoutput('/usr/bin/grep "value" '+cachePath+'nicos-chopper_speed'+year_date)).split('\t')[-1]
value = str(subprocess.getoutput('/usr/bin/grep "value" '+cachePath+'nicos-chopper_phase/'+year_date)).split('\t')[-1]
chopperPhase = float(value)
value = str(subprocess.getoutput('/usr/bin/grep "value" '+cachePath+'nicos-ch1_trigger_phase/'+year_date)).split('\t')[-1]
ch1TriggerPhase = float(value)
value = str(subprocess.getoutput('/usr/bin/grep "value" '+cachePath+'nicos-polarization_config_label/'+year_date)).split('\t')[-1]
polarizationConfigLabel = int(value)
if chSp:
self.tau = 30. / chSp
else:
self.tau = 0
self.tau = 30. / chopperSpeed
try: # not yet correctly implemented in nexus template
spin = str(fh['/entry1/polarizer/spin_flipper/spin'][0].decode('utf-8'))
if spin == "b'p'":
self.spin = 'p'
elif spin == "b'm'":
self.spin = 'm'
elif spin == "b'up'":
self.spin = 'p'
elif spin == "b'down'":
self.spin = 'm'
elif spin == '?':
self.spin = '?'
else:
self.spin = 'n'
except (KeyError, IndexError):
self.spin = '?'
self.chopperTriggerPhase = ch1TriggerPhase + chopperPhase/2
print(f'# chopper trigger phase = {ch1TriggerPhase}')
polarizationConfigs = ['undefined', 'oo', 'po', 'mo', 'op', 'pp', 'mp', 'om', 'pm', 'mm']
self.polarizationConfig = polarizationConfigs[int(polarizationConfigLabel)]
#self.polarizationConfig = 'po'
# for .ort header
@@ -391,19 +383,13 @@ class PlotSelection:
# header / meta data
def header(self, filename, mu, nu, totalCounts, countingTime, spin):
def header(self, filename, mu, nu, totalCounts, countingTime, polarizationConfig):
number = filename.split('n')[1].split('.')[0].lstrip('0')
if spin == 'p':
spin = ' <+|'
elif spin == 'm':
spin = ' <-|'
else:
spin = ''
header = "#{} \u03bc={:>1.2f} \u03bd={:>1.2f} {} {:>10} cts {:>8.1f} s".format(number, mu+5e-3, nu+5e-3, spin, totalCounts, countingTime)
header = "#{} \u03bc={:>1.2f} \u03bd={:>1.2f} {} {:>10} cts {:>8.1f} s".format(number, mu+5e-3, nu+5e-3, polarizationConfig, totalCounts, countingTime)
return header
def headline(self, numberString, totalCounts):
headLine = "#{} \u03bc={:>1.2f} \u03bd={:>1.2f} {:>12,} cts {:>8.1f} s".format(numberString, mu+5e-3, nu+5e-3, totalCounts, countingTime)
def headline(self, numberString, totalCounts, polarizationConfig):
headLine = "#{} \u03bc={:>1.2f} \u03bd={:>1.2f} p={} {:>12,} cts {:>8.1f} s".format(numberString, mu+5e-3, nu+5e-3, polarizationConfig, totalCounts, countingTime)
return headLine
# grids
@@ -446,7 +432,7 @@ class PlotSelection:
# create PNG with several plots
def all(self, numberString, arg, data_e):
def all(self, numberString, arg, data_e, polarizationConfig):
#cmap='gist_earth'
cmap = mpl.cm.gnuplot(np.arange(256))
cmap[:1, :] = np.array([256/256, 255/256, 236/256, 1])
@@ -501,7 +487,7 @@ class PlotSelection:
lw = I_q[ ((q_lim[0] < bins_q[:-1]) & (bins_q[:-1] < q_lim[1])) ].min()
plt.ylim(max(-0.1, np.log(lw+.1)/np.log(10)-0.1), )
#
headline = self.headline(numberString, np.shape(data_e)[0])
headline = self.headline(numberString, np.shape(data_e)[0], polarizationConfig)
plt.title(headline, loc='left', y=2.8, c='r')
fig.colorbar(cb, ax=mlt)
plt.subplots_adjust(hspace=0.6, wspace=0.1)
@@ -510,7 +496,7 @@ class PlotSelection:
# create PNG with one plot
def Iyz(self, numberString, arg, data_e):
def Iyz(self, numberString, arg, data_e, polarizationConfig):
det = Detector()
cmap = mpl.cm.gnuplot(np.arange(256))
cmap[:1, :] = np.array([256/256, 255/256, 236/256, 1])
@@ -533,13 +519,13 @@ class PlotSelection:
plt.pcolormesh(bins_y[:],bins_z[:],I_yz.T, cmap=cmap)
plt.xlabel('$y ~/~ \\mathrm{bins}$')
plt.ylabel('$z ~/~ \\mathrm{bins}$')
headline = self.headline(numberString, np.shape(data_e)[0])
headline = self.headline(numberString, np.shape(data_e)[0], polarizationConfig)
plt.title(headline, loc='left', y=1.0, c='r')
plt.colorbar()
plt.savefig(output, format='png', dpi=150)
#plt.close()
def Ilt(self, numberString, arg, data_e) :
def Ilt(self, numberString, arg, data_e, polarizationConfig) :
cmap = mpl.cm.gnuplot(np.arange(256))
cmap[:1, :] = np.array([256/256, 255/256, 236/256, 1])
cmap = mpl.colors.ListedColormap(cmap, name='myColorMap', N=cmap.shape[0])
@@ -568,13 +554,13 @@ class PlotSelection:
plt.ylim(top=np.max(bins_t))
plt.xlabel('$\\lambda ~/~ \\mathrm{\\AA}$')
plt.ylabel('$\\theta ~/~ \\mathrm{deg}$')
headline = self.headline(numberString, np.shape(data_e)[0])
headline = self.headline(numberString, np.shape(data_e)[0], polarizationConfig)
plt.title(headline, loc='left', y=1.0, c='r')
plt.colorbar()
plt.savefig(output, format='png', dpi=150)
#plt.close()
def Itz(self, numberString, arg, data_e):
def Itz(self, numberString, arg, data_e, polarizationConfig):
det = Detector()
cmap = mpl.cm.gnuplot(np.arange(256))
cmap[:1, :] = np.array([256/256, 255/256, 236/256, 1])
@@ -604,13 +590,13 @@ class PlotSelection:
plt.ylim(0,)
plt.xlabel('$t ~/~ \\mathrm{s}$')
plt.ylabel('$z$ pixel row')
headline = self.headline(numberString, np.shape(data_e)[0])
headline = self.headline(numberString, np.shape(data_e)[0], polarizationConfig)
plt.title(headline, loc='left', y=1.0, c='r')
plt.colorbar()
plt.savefig(output, format='png', dpi=150)
#plt.close()
def Iq(self, numberString, arg, data_e):
def Iq(self, numberString, arg, data_e, polarizationConfig):
I_q, bins_q = np.histogram(data_e[:,9], bins = self.q_grid())
err_q = np.sqrt(I_q+1)
#q_lim = 4*np.pi*np.array([ max( np.sin(self.theta_grid()[0]*np.pi/180.)/self.lamda_grid()[-1] , 1e-4 ),
@@ -634,12 +620,12 @@ class PlotSelection:
plt.ylabel('$\\log_{10}(\\mathrm{cnts})$')
plt.xlabel('$q_z ~/~ \\mathrm{\\AA}^{-1}$')
plt.xlim(q_lim)
headline = self.headline(numberString, np.shape(data_e)[0])
headline = self.headline(numberString, np.shape(data_e)[0], polarizationConfig)
plt.title(headline, loc='left', y=1.0, c='r')
plt.savefig(output, format='png', dpi=150)
#plt.close()
def Il(self, numberString, arg, data_e):
def Il(self, numberString, arg, data_e, polarizationConfig):
I_l, bins_l = np.histogram(data_e[:,7], bins = self.lamda_grid())
if arg == 'file':
header = '# lambda counts'
@@ -653,12 +639,12 @@ class PlotSelection:
plt.plot(bins_l[:-1], np.log(I_l+5.e-1)/np.log(10.))
plt.ylabel('$\\log_{10} I ~/~ \\mathrm{cnts}$')
plt.xlabel('$\\lambda ~/~ \\mathrm{\\AA}$')
headline = self.headline(numberString, np.shape(data_e)[0])
headline = self.headline(numberString, np.shape(data_e)[0], polarizationConfig)
plt.title(headline, loc='left', y=1.0, c='r')
plt.savefig(output, format='png', dpi=150)
#plt.close()
def It(self, numberString, arg, data_e):
def It(self, numberString, arg, data_e, polarizationConfig):
I_t, bins_t = np.histogram(data_e[:,8], bins = self.theta_grid())
if arg == 'file':
header = '# 2theta counts'
@@ -669,12 +655,12 @@ class PlotSelection:
plt.plot( I_t, bins_t[:-1])
plt.xlabel('$\\mathrm{cnts}$')
plt.ylabel('$\\theta ~/~ \\mathrm{deg}$')
headline = self.headline(numberString, np.shape(data_e)[0])
headline = self.headline(numberString, np.shape(data_e)[0], polarizationConfig)
plt.title(headline, loc='left', y=1.0, c='r')
plt.savefig(output, format='png', dpi=150)
#plt.close()
def tof(self, numberString, arg, data_e):
def tof(self, numberString, arg, data_e, polarizationConfig):
if foldback:
time_grid = np.arange(0, 1.3*tau, 0.0005)
else:
@@ -696,7 +682,7 @@ class PlotSelection:
plt.plot(bins_t[:-1]+2*tau, lI_t)
plt.ylabel('log(counts)')
plt.xlabel('time / s')
headline = self.headline(numberString, np.shape(data_e)[0])
headline = self.headline(numberString, np.shape(data_e)[0], polarizationConfig)
plt.title(headline, loc='left', y=1.0, c='r')
plt.savefig(output, format='png')
@@ -774,7 +760,7 @@ def process(dataPath, ident, clas):
timeMin = 0
timeMax = 0
chopperPhase = clas.chopperPhase
chopperTriggerPhase = clas.chopperTriggerPhase
tofOffset = clas.TOFOffset
thetaMin = clas.thetaRange[0]
thetaMax = clas.thetaRange[1]
@@ -840,29 +826,27 @@ def process(dataPath, ident, clas):
tau = meta.tau
try:
chPh
chopperTriggerPhase
except NameError:
chPh = meta.chPh
spin = meta.spin
chopperTriggerPhase = meta.chopperTriggerPhase
global countingTime, detectorDistance, chopperDetectorDistance
global countingTime, detectorDistance, chopperDetectorDistance, polarizationConfig
polarizationConfig = meta.polarizationConfig
detectorDistance = meta.detectorDistance
chopperDetectorDistance = meta.chopperDetectorDistance
countingTime = meta.countingTime
if verbous:
logging.info(" mu = {:>4.2f} deg, nu = {:>4.2f} deg".format(mu, nu))
if spin == 'u':
logging.info(' spin <+|')
elif spin == 'd':
logging.info(' spin <-|')
logging.info(f' polarization config: {polarizationConfig}')
try: lamdaMax
except NameError: lamdaMax = lamdaMin + tau * hdm/chopperDetectorDistance * 1e13
tofOffset = tau * chopperPhase / 180. # mismatch of chopper pulse and time-zero
tofCut = lamdaCut * chopperDetectorDistance / hdm * 1.e-13 # tof of frame start
tofOffset = chopperTriggerPhase * tau / 180 # mismatch of chopper pulse and time-zero
tof_e = np.array(ev['/entry1/Amor/detector/data/event_time_offset'][:], dtype=np.uint64)/1.e9 + tofOffset # tof
detPixelID_e = np.array(ev['/entry1/Amor/detector/data/event_id'][:], dtype=np.uint64) # pixel index
@@ -896,8 +880,8 @@ def process(dataPath, ident, clas):
if clas.spy:
number = filename.split('n')[1].split('.')[0].lstrip('0')
logging.info('chopper speed={:>4.0f} rpm, phase={:>5.3f} deg, tau={} s'.format(30/tau, chPh, tau))
logging.info('nr={}, spn={}, cnt={}, tme={}'.format(number, spin, np.shape(data_eSum)[0], sumTime))
logging.info('chopper speed={:>4.0f} rpm, phase={:>5.3f} deg, tau={} s'.format(30/tau, chopperTriggerPhase, tau))
logging.info('nr={}, spn={}, cnt={}, tme={}'.format(number, polarizationConfig, np.shape(data_eSum)[0], sumTime))
logging.info('mu={:>1.2f}, nu={:>1.2f}, kap={:>1.2f}, kad={:>1.2f}, div={:>1.2f}'.format(mu, nu, kap, kad, div))
sys.exit()
@@ -912,7 +896,7 @@ def process(dataPath, ident, clas):
#string = f"plott.{plotType} (numberString, '{arg}', data_e)"
try:
plotFunction = getattr(plott, plotType)
plotFunction(numberString, arg, data_e)
plotFunction(numberString, arg, data_e, polarizationConfig)
plt.close()
except Exception as e:
logging.error(f"ERROR: '{plotType}' is no known output format!")
@@ -932,6 +916,8 @@ def commandLineArgs():
help ="chopper speed in rpm")
clas.add_argument("-d", "--dataPath",
help ="relative path to directory with .hdf files")
clas.add_argument("-D", "--absDataPath",
help ="absolute path to directory with .hdf files")
clas.add_argument("-f", "--fileIdent",
default='0',
help ="file number or offset (if negative)")
@@ -959,8 +945,8 @@ def commandLineArgs():
default=99.,
type=float,
help ="value of nu")
clas.add_argument("-P", "--chopperPhase",
default=7.5,
clas.add_argument("-P", "--chopperTriggerPhase",
default=-7.5,
type=float,
help ="chopper phase offset")
clas.add_argument("-p", "--plot",
@@ -1007,12 +993,14 @@ def get_dataPath(clas):
dataPath = clas.dataPath + '/'
if not os.path.exists(dataPath):
sys.exit('# *** the directory "'+dataPath+'" does not exist ***')
if clas.absDataPath:
dataPath = clas.absDataPath + '/'
elif os.path.exists('./raw'):
dataPath = "./raw/"
elif os.path.exists('../raw'):
dataPath = "../raw/"
else:
sys.exit('# *** please provide the path to the .hdf data files (-d <path>, default is "./raw")')
sys.exit('# *** please provide the path to the .hdf data files (-d <rel path> | -D <abs path>, default is "./raw")')
return dataPath


@@ -1,208 +0,0 @@
import argparse
from .logconfig import update_loglevel
from .options import ReaderConfig, EOSConfig, ExperimentConfig, OutputConfig, ReductionConfig, Defaults
def commandLineArgs():
"""
Process command line argument.
The type of the default values is used for conversion and validation.
"""
msg = "eos reads data from (one or several) raw file(s) of the .hdf format, \
performs various corrections, conversions and projections and exports \
the resulting reflectivity in an orso-compatible format."
clas = argparse.ArgumentParser(description = msg)
input_data = clas.add_argument_group('input data')
input_data.add_argument("-f", "--fileIdentifier",
required = True,
nargs = '+',
help = "file number(s) or offset (if < 1)")
input_data.add_argument("-n", "--normalisationFileIdentifier",
default = Defaults.normalisationFileIdentifier,
nargs = '+',
help = "file number(s) of normalisation measurement")
input_data.add_argument("-nm", "--normalisationMethod",
default = Defaults.normalisationMethod,
help = "normalisation method: [o]verillumination, [u]nderillumination, [d]irect_beam")
input_data.add_argument("--raw",
type = str,
default = Defaults.raw,
help = "relative path to directory with .hdf files")
input_data.add_argument("-d", "--dataPath",
type = str,
default = Defaults.dataPath,
help = "relative path for output")
input_data.add_argument("-Y", "--year",
default = Defaults.year,
type = int,
help = "year the measurement was performed")
input_data.add_argument("-sub", "--subtract",
help = "R(q_z) curve to be subtracted (in .Rqz.ort format)")
output = clas.add_argument_group('output')
output.add_argument("-o", "--outputName",
default = Defaults.outputName,
help = "output file name (without suffix)")
output.add_argument("-of", "--outputFormat",
nargs = '+',
default = Defaults.outputFormat,
help = "one of [Rqz.ort, Rlt.ort]")
output.add_argument("-ai", "--incidentAngle",
type = str,
default = Defaults.incidentAngle,
help = "calculate alpha_i from [alphaF, mu, nu]",
)
output.add_argument("-r", "--qResolution",
default = Defaults.qResolution,
type = float,
help = "q_z resolution")
output.add_argument("-ts", "--timeSlize",
nargs = '+',
type = float,
help = "time slicing <interval>[, <start>[, <stop>]]")
output.add_argument("-s", "--scale",
nargs = '+',
default = Defaults.scale,
type = float,
help = "scaling factor for R(q_z)")
output.add_argument("-S", "--autoscale",
nargs = 2,
type = float,
help = "scale to 1 in the given q_z range")
masks = clas.add_argument_group('masks')
masks.add_argument("-l", "--lambdaRange",
default = Defaults.lambdaRange,
nargs = 2,
type = float,
help = "wavelength range")
masks.add_argument("-t", "--thetaRange",
default = Defaults.thetaRange,
nargs = 2,
type = float,
help = "absolute theta range")
masks.add_argument("-T", "--thetaRangeR",
default = Defaults.thetaRangeR,
nargs = 2,
type = float,
help = "relative theta range")
masks.add_argument("-y", "--yRange",
default = Defaults.yRange,
nargs = 2,
type = int,
help = "detector y range")
masks.add_argument("-q", "--qzRange",
default = Defaults.qzRange,
nargs = 2,
type = float,
help = "q_z range")
overwrite = clas.add_argument_group('overwrite')
overwrite.add_argument("-cs", "--chopperSpeed",
default = Defaults.chopperSpeed,
type = float,
help = "chopper speed in rpm")
overwrite.add_argument("-cp", "--chopperPhase",
default = Defaults.chopperPhase,
type = float,
help = "chopper phase")
overwrite.add_argument("-co", "--chopperPhaseOffset",
default = Defaults.chopperPhaseOffset,
type = float,
help = "phase offset between chopper opening and trigger pulse")
overwrite.add_argument("-m", "--muOffset",
default = Defaults.muOffset,
type = float,
help = "mu offset")
overwrite.add_argument("-mu", "--mu",
default = Defaults.mu,
type = float,
help ="value of mu")
overwrite.add_argument("-nu", "--nu",
default = Defaults.nu,
type = float,
help = "value of nu")
overwrite.add_argument("-sm", "--sampleModel",
default = Defaults.sampleModel,
type = str,
help = "1-line orso sample model description")
misc = clas.add_argument_group('misc')
misc.add_argument('-v', '--verbose', action='store_true')
misc.add_argument('-vv', '--debug', action='store_true')
return clas.parse_args()
def expand_file_list(short_notation):
"""Evaluate string entry for file number lists"""
#log().debug('Executing get_flist')
file_list=[]
for i in short_notation.split(','):
if '-' in i:
if ':' in i:
step = i.split(':', 1)[1]
file_list += range(int(i.split('-', 1)[0]), int((i.rsplit('-', 1)[1]).split(':', 1)[0])+1, int(step))
else:
step = 1
file_list += range(int(i.split('-', 1)[0]), int(i.split('-', 1)[1])+1, int(step))
else:
file_list += [int(i)]
return sorted(file_list)
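The short notation accepted by `expand_file_list` combines comma-separated numbers, dash ranges, and an optional `:step` suffix. A standalone sketch of the same parser (mirroring the function above, slightly tidied; the example inputs are made up):

```python
# Standalone sketch of the short-notation parser used by eos:
# '608,612-615' -> individual numbers plus an inclusive range,
# '186-190:2'   -> a range with step 2.
def expand_file_list(short_notation):
    """Expand a short file-number notation into a sorted list of ints."""
    file_list = []
    for item in short_notation.split(','):
        if '-' in item:
            step = int(item.split(':', 1)[1]) if ':' in item else 1
            start = int(item.split('-', 1)[0])
            stop = int(item.rsplit('-', 1)[1].split(':', 1)[0])
            file_list += range(start, stop + 1, step)
        else:
            file_list.append(int(item))
    return sorted(file_list)

print(expand_file_list('608,612-615'))    # [608, 612, 613, 614, 615]
print(expand_file_list('186-190:2,193'))  # [186, 188, 190, 193]
```

As the TODO at the top of the module notes, this grammar is ambiguous for negative file numbers, since the leading `-` collides with the range separator.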
def output_format_list(outputFormat):
format_list = []
if 'ort' in outputFormat or 'Rqz.ort' in outputFormat or 'Rqz' in outputFormat:
format_list.append('Rqz.ort')
if 'ort' in outputFormat or 'Rlt.ort' in outputFormat or 'Rlt' in outputFormat:
format_list.append('Rlt.ort')
if 'orb' in outputFormat or 'Rqz.orb' in outputFormat or 'Rqz' in outputFormat:
format_list.append('Rqz.orb')
if 'orb' in outputFormat or 'Rlt.orb' in outputFormat or 'Rlt' in outputFormat:
format_list.append('Rlt.orb')
return sorted(format_list, reverse=True)
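The alias expansion in `output_format_list` can be exercised on its own; this is a copy of the function above (behaviour unchanged) with two illustrative inputs:

```python
# Copy of output_format_list above: expands format aliases such as
# 'ort' (both .ort datasets) or 'Rqz' (both Rqz encodings).
def output_format_list(outputFormat):
    format_list = []
    if 'ort' in outputFormat or 'Rqz.ort' in outputFormat or 'Rqz' in outputFormat:
        format_list.append('Rqz.ort')
    if 'ort' in outputFormat or 'Rlt.ort' in outputFormat or 'Rlt' in outputFormat:
        format_list.append('Rlt.ort')
    if 'orb' in outputFormat or 'Rqz.orb' in outputFormat or 'Rqz' in outputFormat:
        format_list.append('Rqz.orb')
    if 'orb' in outputFormat or 'Rlt.orb' in outputFormat or 'Rlt' in outputFormat:
        format_list.append('Rlt.orb')
    return sorted(format_list, reverse=True)

print(output_format_list(['ort']))  # ['Rqz.ort', 'Rlt.ort']
print(output_format_list(['Rqz']))  # ['Rqz.ort', 'Rqz.orb']
```

The reverse sort simply guarantees a stable order (Rqz before Rlt, .ort before .orb) regardless of the order of the command-line arguments.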
def command_line_options():
clas = commandLineArgs()
update_loglevel(clas.verbose, clas.debug)
reader_config = ReaderConfig(
year = clas.year,
raw = clas.raw,
dataPath = clas.dataPath
)
experiment_config = ExperimentConfig(
sampleModel = clas.sampleModel,
chopperPhase = clas.chopperPhase,
chopperPhaseOffset = clas.chopperPhaseOffset,
yRange = clas.yRange,
lambdaRange = clas.lambdaRange,
qzRange = clas.qzRange,
incidentAngle = clas.incidentAngle,
mu = clas.mu,
nu = clas.nu,
muOffset = clas.muOffset
)
reduction_config = ReductionConfig(
qResolution = clas.qResolution,
qzRange = clas.qzRange,
autoscale = clas.autoscale,
thetaRange = clas.thetaRange,
thetaRangeR = clas.thetaRangeR,
fileIdentifier = clas.fileIdentifier,
scale = clas.scale,
subtract = clas.subtract,
normalisationFileIdentifier = clas.normalisationFileIdentifier,
normalisationMethod = clas.normalisationMethod,
timeSlize = clas.timeSlize,
)
output_config = OutputConfig(
outputFormats = output_format_list(clas.outputFormat),
outputName = clas.outputName
)
return EOSConfig(reader_config, experiment_config, reduction_config, output_config)


@@ -1,450 +0,0 @@
import logging
import os
import subprocess
import sys
from datetime import datetime
from typing import List
import h5py
import numpy as np
import scipy as sp
from orsopy import fileio
from orsopy.fileio.model_language import SampleModel
from . import const
from .header import Header
from .instrument import Detector
from .options import ExperimentConfig, ReaderConfig
try:
from . import nb_helpers
except Exception:
nb_helpers = None
class AmorData:
"""read meta-data and event streams from .hdf file(s), apply filters and conversions"""
chopperDetectorDistance: float
chopperDistance: float
chopperPhase: float
chopperSpeed: float
div: float
data_file_numbers: List[int]
delta_z: np.ndarray
detZ_e: np.ndarray
lamda_e: np.ndarray
wallTime_e: np.ndarray
kad: float
kap: float
lambdaMax: float
lambda_e: np.ndarray
monitor: float
mu: float
nu: float
tau: float
tofCut: float
start_date: str
seriesStartTime = None
#-------------------------------------------------------------------------------------------------
def __init__(self, header: Header, reader_config: ReaderConfig, config: ExperimentConfig,
short_notation:str, norm=False):
self.startTime = reader_config.startTime
self.header = header
self.config = config
self.reader_config = reader_config
self.expand_file_list(short_notation)
self.read_data(norm=norm)
#-------------------------------------------------------------------------------------------------
def read_data(self, norm=False):
self.file_list = []
for number in self.data_file_numbers:
self.file_list.append(self.path_generator(number))
## read specific meta data and measurement from first file
if norm:
self.readHeaderInfo = False
else:
self.readHeaderInfo = True
_detZ_e = []
_lamda_e = []
_wallTime_e = []
_monitor = 0
#_current = []
for file in self.file_list:
self.read_individual_data(file, norm)
_detZ_e = np.append(_detZ_e, self.detZ_e)
_lamda_e = np.append(_lamda_e, self.lamda_e)
_wallTime_e = np.append(_wallTime_e, self.wallTime_e)
_monitor += self.monitor
self.detZ_e = _detZ_e
self.lamda_e = _lamda_e
self.wallTime_e = _wallTime_e
self.monitor = _monitor
logging.warning(f' {self.monitorType} monitor = {self.monitor:9.3f}')
#-------------------------------------------------------------------------------------------------
def path_generator(self, number):
fileName = f'amor{self.reader_config.year}n{number:06d}.hdf'
path = ''
for rawd in self.reader_config.raw:
if os.path.exists(os.path.join(rawd,fileName)):
path = rawd
break
if not path:
if os.path.exists(f'/afs/psi.ch/project/sinqdata/{self.reader_config.year}/amor/{int(number/1000)}/{fileName}'):
path = f'/afs/psi.ch/project/sinqdata/{self.reader_config.year}/amor/{int(number/1000)}'
else:
sys.exit(f'# ERROR: the file {fileName} cannot be found in {self.reader_config.raw}!')
return os.path.join(path, fileName)
#-------------------------------------------------------------------------------------------------
def expand_file_list(self, short_notation):
"""Evaluate string entry for file number lists"""
file_list=[]
for i in short_notation.split(','):
if '-' in i:
if ':' in i:
step = i.split(':', 1)[1]
file_list += range(int(i.split('-', 1)[0]), int((i.rsplit('-', 1)[1]).split(':', 1)[0])+1, int(step))
else:
step = 1
file_list += range(int(i.split('-', 1)[0]), int(i.split('-', 1)[1])+1, int(step))
else:
file_list += [int(i)]
self.data_file_numbers=sorted(file_list)
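The accepted short notation is comma-separated entries, dash ranges (`608-612`), and stepped ranges (`700-720:10`). The same parsing logic as a standalone sketch (hypothetical helper name):

```python
def expand_short_notation(short_notation: str) -> list:
    """Expand e.g. '608,610-612,700-720:10' into a sorted list of file numbers."""
    numbers = []
    for item in short_notation.split(','):
        if '-' in item:
            start = int(item.split('-', 1)[0])
            tail = item.rsplit('-', 1)[1]
            if ':' in tail:
                stop, step = (int(x) for x in tail.split(':', 1))
            else:
                stop, step = int(tail), 1
            numbers += range(start, stop + 1, step)
        else:
            numbers.append(int(item))
    return sorted(numbers)
```

Note that, like the method above, this does not handle negative file numbers in the notation.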
#-------------------------------------------------------------------------------------------------
def resolve_pixels(self):
"""determine spatial coordinats and angles from pixel number"""
nPixel = Detector.nWires * Detector.nStripes * Detector.nBlades
pixelID = np.arange(nPixel)
(bladeNr, bPixel) = np.divmod(pixelID, Detector.nWires * Detector.nStripes)
(bZi, detYi) = np.divmod(bPixel, Detector.nStripes) # z index on blade, y index on detector
detZi = bladeNr * Detector.nWires + bZi # z index on detector
detX = bZi * Detector.dX # x position in detector
# detZ = Detector.zero - bladeNr * Detector.bladeZ - bZi * Detector.dZ # z position on detector
bladeAngle = np.rad2deg( 2. * np.arcsin(0.5*Detector.bladeZ / Detector.distance) )
delta = (Detector.nBlades/2. - bladeNr) * bladeAngle \
- np.rad2deg( np.arctan(bZi*Detector.dZ / ( Detector.distance + bZi * Detector.dX) ) )
self.delta_z = delta[detYi==1]
return np.vstack((detYi.T, detZi.T, detX.T, delta.T)).T
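The decomposition above is pure integer arithmetic: `divmod` splits the flat pixel ID into a blade number, a wire index on the blade, and a stripe (y) index. A self-contained sketch with illustrative constants (these are assumptions, not the real `Detector` values):

```python
import numpy as np

# illustrative geometry, not the actual Amor Detector constants
nBlades, nWires, nStripes = 14, 32, 64

pixelID = np.arange(nBlades * nWires * nStripes)
bladeNr, bPixel = np.divmod(pixelID, nWires * nStripes)  # blade number
bZi, detYi = np.divmod(bPixel, nStripes)                 # wire on blade, y stripe
detZi = bladeNr * nWires + bZi                           # z index on the full detector
```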
#-------------------------------------------------------------------------------------------------
def read_individual_data(self, fileName, norm=False):
self.hdf = h5py.File(fileName, 'r', swmr=True)
if self.readHeaderInfo:
self.read_header_info()
logging.warning(f' data from file: {fileName}')
self.read_individual_header()
# add header content
if self.readHeaderInfo:
self.readHeaderInfo = False
self.header.measurement_instrument_settings = fileio.InstrumentSettings(
incident_angle = fileio.ValueRange(round(self.mu+self.kap+self.kad-0.5*self.div, 3),
round(self.mu+self.kap+self.kad+0.5*self.div, 3),
'deg'),
wavelength = fileio.ValueRange(const.lamdaCut, self.config.lambdaRange[1], 'angstrom'),
polarization = fileio.Polarization.unpolarized,
)
self.header.measurement_instrument_settings.mu = fileio.Value(round(self.mu, 3), 'deg', comment='sample angle to horizon')
self.header.measurement_instrument_settings.nu = fileio.Value(round(self.nu, 3), 'deg', comment='detector angle to horizon')
self.header.measurement_instrument_settings.div = fileio.Value(round(self.div, 3), 'deg', comment='incoming beam divergence')
self.header.measurement_instrument_settings.kap = fileio.Value(round(self.kap, 3), 'deg', comment='incoming beam inclination')
if abs(self.kad)>0.02:
self.header.measurement_instrument_settings.kad = fileio.Value(round(self.kad, 3), 'deg', comment='incoming beam angular offset')
if norm:
self.header.measurement_additional_files.append(fileio.File(file=fileName.split('/')[-1], timestamp=self.fileDate))
else:
self.header.measurement_data_files.append(fileio.File(file=fileName.split('/')[-1], timestamp=self.fileDate))
logging.info(f' mu = {self.mu:6.3f}, nu = {self.nu:6.3f}, kap = {self.kap:6.3f}, kad = {self.kad:6.3f}')
self.read_event_stream()
totalNumber = np.shape(self.tof_e)[0]
self.sort_pulses()
self.define_monitor()
# sort the events into the related pulses
self.extract_walltime(norm)
self.filter_strange_times()
self.merge_frames()
self.filter_project_x()
self.correct_for_chopper_opening()
self.calculate_derived_properties()
self.filter_qz_range(norm)
logging.info(f' number of events: total = {totalNumber:7d}, filtered = {np.shape(self.lamda_e)[0]:7d}')
def sort_pulses(self):
chopperPeriod = int(2*self.tau*1e9)
pulseTime = np.sort(self.dataPacketTime_p)
pulseTime = pulseTime[np.abs(pulseTime[:]-np.roll(pulseTime, 1)[:])>5]
if self.seriesStartTime is None:
self.seriesStartTime = float(pulseTime[0])
pulseTime -= self.seriesStartTime
self.stopTime = float(pulseTime[-1])
# fill in missing pulse times
# TODO: check for real end time
self.pulseTimeS = np.array([pulseTime[0]])
for tt in pulseTime[1:]:
nxt = self.pulseTimeS[-1] + chopperPeriod
while abs(tt - nxt) > self.tau*1e9:
self.pulseTimeS = np.append(self.pulseTimeS, nxt)
nxt += chopperPeriod
self.pulseTimeS = np.append(self.pulseTimeS, tt)
# remove 'partially filled' pulses
self.pulseTimeS = self.pulseTimeS[1:-1]
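The loop above reconstructs missing pulses at multiples of the chopper period between recorded pulse times. The same idea in isolation (times, period and tolerance in arbitrary units):

```python
import numpy as np

def fill_missing_pulses(pulse_times, period, tol):
    """Insert synthetic pulses at `period` spacing where recorded pulses are missing."""
    filled = [pulse_times[0]]
    for t in pulse_times[1:]:
        nxt = filled[-1] + period
        while abs(t - nxt) > tol:
            filled.append(nxt)
            nxt += period
        filled.append(t)
    return np.array(filled)
```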
def associate_pulse_with_current(self):
if self.monitorType == 'protonCharge':
lowCurrentThreshold = 0.05 # mA
self.currentTime -= self.seriesStartTime
currentInterpolator = sp.interpolate.interp1d(self.currentTime, self.current, kind='previous', bounds_error=False, fill_value=0)
self.charge = np.array(currentInterpolator(self.pulseTimeS) * 2*self.tau *1e-3, dtype=float)
# filter low-current pulses
self.charge = np.where(self.charge > 2*self.tau *lowCurrentThreshold, self.charge, 0)
# remove 'partially filled' pulses
self.charge[0] = 0
self.charge[-1] = 0
def define_monitor(self):
if self.monitorType == 'protonCharge':
chargeSum = np.sum(self.charge)
logging.warning(f' proton charge = {chargeSum:9.3f} mC')
self.monitor = chargeSum
elif self.monitorType == 'countingTime':
self.monitor = self.stopTime - self.seriesStartTime
else:
self.monitor = 1.
def extract_walltime(self, norm):
#self.dataPacketTime_p = np.array(self.dataPacketTime_p, dtype=float) / 1e9
if nb_helpers:
self.wallTime_e = nb_helpers.extract_walltime(self.tof_e, self.dataPacket_p, self.dataPacketTime_p)
else:
self.wallTime_e = np.empty(np.shape(self.tof_e)[0], dtype=np.int64)
for i in range(len(self.dataPacket_p)-1):
self.wallTime_e[self.dataPacket_p[i]:self.dataPacket_p[i+1]] = self.dataPacketTime_p[i]
self.wallTime_e[self.dataPacket_p[-1]:] = self.dataPacketTime_p[-1]
#if not self.startTime and not norm:
# self.startTime = self.wallTime_e[0]
self.wallTime_e -= np.int64(self.seriesStartTime)
logging.debug(f' wall time from {self.wallTime_e[0]/1e9} to {self.wallTime_e[-1]/1e9}')
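`dataPacket_p` holds the index of the first event of each packet, so every event inherits the wall time of its packet. The fallback loop above is equivalent to an `np.repeat` over packet sizes, sketched here with made-up numbers:

```python
import numpy as np

packet_start = np.array([0, 3, 5])       # first event index of each packet
packet_time = np.array([100, 200, 300])  # wall time of each packet
n_events = 7                             # total number of events

events_per_packet = np.diff(np.append(packet_start, n_events))
wall_time_e = np.repeat(packet_time, events_per_packet)
```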
def monitor_threshold(self):
goodTimeS = self.pulseTimeS[self.charge!=0]
filter_e = np.where(np.isin(self.wallTime_e, goodTimeS), True, False)
self.tof_e = self.tof_e[filter_e]
self.pixelID_e = self.pixelID_e[filter_e]
self.wallTime_e = self.wallTime_e[filter_e]
logging.warning(f' rejected {np.shape(self.charge)[0]-np.shape(goodTimeS)[0]} pulses due to low beam current')
logging.warning(f' rejected {np.shape(filter_e)[0]-np.shape(self.tof_e)[0]} events due to low beam current')
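`np.isin` acts as a per-event membership test against the set of accepted pulse times; the resulting boolean mask then filters all per-event arrays. In isolation:

```python
import numpy as np

wallTime_e = np.array([10, 20, 30, 20, 40])  # event wall times (made-up values)
goodTimeS = np.array([20, 40])               # pulses that passed the current threshold

filter_e = np.isin(wallTime_e, goodTimeS)
kept = wallTime_e[filter_e]
```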
def filter_qz_range(self, norm):
if self.config.qzRange[1]<0.3 and not norm:
self.mask_e = np.logical_and(self.mask_e,
(self.config.qzRange[0]<=self.qz_e) & (self.qz_e<=self.config.qzRange[1]))
self.detZ_e = self.detZ_e[self.mask_e]
self.lamda_e = self.lamda_e[self.mask_e]
self.wallTime_e = self.wallTime_e[self.mask_e]
def calculate_derived_properties(self):
self.lamdaMax = const.lamdaCut+1.e13*self.tau*const.hdm/(self.chopperDetectorDistance+124.)
if nb_helpers:
self.lamda_e, self.qz_e, self.mask_e = nb_helpers.calculate_derived_properties_focussing(
self.tof_e, self.detXdist_e, self.delta_e, self.mask_e,
self.config.lambdaRange[0], self.config.lambdaRange[1], self.nu, self.mu,
self.chopperDetectorDistance, const.hdm
)
return
# lambda
self.lamda_e = (1.e13*const.hdm)*self.tof_e/(self.chopperDetectorDistance+self.detXdist_e)
self.mask_e = np.logical_and(self.mask_e, (self.config.lambdaRange[0]<=self.lamda_e) & (
self.lamda_e<=self.config.lambdaRange[1]))
# alpha_f
# q_z
if self.config.incidentAngle == 'alphaF':
alphaF_e = self.nu - self.mu + self.delta_e
self.qz_e = 4*np.pi*(np.sin(np.deg2rad(alphaF_e))/self.lamda_e)
# qx_e = 0.
self.header.measurement_scheme = 'angle- and energy-dispersive'
elif self.config.incidentAngle == 'nu':
alphaF_e = (self.nu + self.delta_e + self.kap + self.kad) / 2.
self.qz_e = 4*np.pi*(np.sin(np.deg2rad(alphaF_e))/self.lamda_e)
# qx_e = 0.
self.header.measurement_scheme = 'energy-dispersive'
else:
alphaF_e = self.nu - self.mu + self.delta_e
alphaI = self.kap + self.kad + self.mu
self.qz_e = 2*np.pi * ((np.sin(np.deg2rad(alphaF_e)) + np.sin(np.deg2rad(alphaI)))/self.lamda_e)
self.qx_e = 2*np.pi * ((np.cos(np.deg2rad(alphaF_e)) - np.cos(np.deg2rad(alphaI)))/self.lamda_e)
self.header.measurement_scheme = 'energy-dispersive'
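The relations used above are lambda proportional to tof over flight path and, in the angle-dispersive case, q_z = 4 pi sin(alpha_f) / lambda. A numeric sketch with made-up angles and wavelengths:

```python
import numpy as np

alphaF = np.array([0.5, 1.0, 2.0])  # deg, illustrative final angles
lamda = np.array([4.0, 6.0, 8.0])   # angstrom, illustrative wavelengths

qz = 4.0 * np.pi * np.sin(np.deg2rad(alphaF)) / lamda  # 1/angstrom
```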
def correct_for_chopper_opening(self):
# correct tof for beam size effect at chopper: t_cor = (delta / 180 deg) * tau
if self.config.incidentAngle == 'alphaF':
self.tof_e -= ( self.delta_e / 180. ) * self.tau
else:
# TODO: check sign of correction
self.tof_e -= ( self.kad / 180. ) * self.tau
def filter_project_x(self):
pixelLookUp = self.resolve_pixels()
if nb_helpers:
(self.detZ_e, self.detXdist_e, self.delta_e, self.mask_e) = nb_helpers.filter_project_x(
pixelLookUp, self.pixelID_e.astype(np.int64), self.config.yRange[0], self.config.yRange[1]
)
else:
# resolve pixel ID into y and z indices, x position and angle
(detY_e, self.detZ_e, self.detXdist_e, self.delta_e) = pixelLookUp[np.int_(self.pixelID_e)-1, :].T
# define mask and filter y range
self.mask_e = (self.config.yRange[0]<=detY_e) & (detY_e<=self.config.yRange[1])
def merge_frames(self):
total_offset = self.tofCut+self.tau*self.config.chopperPhaseOffset/180.
if nb_helpers:
self.tof_e = nb_helpers.merge_frames(self.tof_e, self.tofCut, self.tau, total_offset)
else:
self.tof_e = np.remainder(self.tof_e-(self.tofCut-self.tau), self.tau)+total_offset # tof shifted to 1 frame
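Folding all events into a single chopper frame is one modulo operation plus an offset shift; the numpy fallback above is equivalent to this sketch (numbers are illustrative):

```python
import numpy as np

def fold_into_frame(tof, tof_cut, tau, total_offset):
    """Fold times of flight into one chopper frame of length tau."""
    return np.remainder(tof - (tof_cut - tau), tau) + total_offset
```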
def filter_strange_times(self):
# 'strange' tof times are those with t > 2 tau (originating from the efu)
filter_e = (self.tof_e<=2*self.tau)
self.tof_e = self.tof_e[filter_e]
self.pixelID_e = self.pixelID_e[filter_e]
self.wallTime_e = self.wallTime_e[filter_e]
if np.shape(filter_e)[0]-np.shape(self.tof_e)[0] > 0:
logging.warning(f'# strange times: {np.shape(filter_e)[0]-np.shape(self.tof_e)[0]}')
def read_event_stream(self):
self.tof_e = np.array(self.hdf['/entry1/Amor/detector/data/event_time_offset'][:])/1.e9
self.pixelID_e = np.array(self.hdf['/entry1/Amor/detector/data/event_id'][:], dtype=np.int64)
self.dataPacket_p = np.array(self.hdf['/entry1/Amor/detector/data/event_index'][:], dtype=np.uint64)
#self.dataPacketTime_p = np.array(self.hdf['/entry1/Amor/detector/data/event_time_zero'][:], dtype=np.uint64)/1e9
self.dataPacketTime_p = np.array(self.hdf['/entry1/Amor/detector/data/event_time_zero'][:], dtype=float)
try:
self.currentTime = np.array(self.hdf['entry1/Amor/detector/proton_current/time'][:], dtype=np.int64)
self.current = np.array(self.hdf['entry1/Amor/detector/proton_current/value'][:,0], dtype=float)
if len(self.current)>0:
self.monitorType = 'protonCharge'
else:
self.monitorType = 'countingTime'
except(KeyError, IndexError):
self.monitorType = 'countingTime'
def read_individual_header(self):
self.chopperDistance = float(np.take(self.hdf['entry1/Amor/chopper/pair_separation'], 0))
self.detectorDistance = float(np.take(self.hdf['entry1/Amor/detector/transformation/distance'], 0))
self.chopperDetectorDistance = self.detectorDistance-float(np.take(self.hdf['entry1/Amor/chopper/distance'], 0))
self.tofCut = const.lamdaCut*self.chopperDetectorDistance/const.hdm*1.e-13
try:
self.mu = float(np.take(self.hdf['/entry1/Amor/master_parameters/mu/value'], 0))
self.nu = float(np.take(self.hdf['/entry1/Amor/master_parameters/nu/value'], 0))
self.kap = float(np.take(self.hdf['/entry1/Amor/master_parameters/kap/value'], 0))
self.kad = float(np.take(self.hdf['/entry1/Amor/master_parameters/kad/value'], 0))
self.div = float(np.take(self.hdf['/entry1/Amor/master_parameters/div/value'], 0))
self.chopperSpeed = float(np.take(self.hdf['/entry1/Amor/chopper/rotation_speed/value'], 0))
self.chopperPhase = float(np.take(self.hdf['/entry1/Amor/chopper/phase/value'], 0))
except(KeyError, IndexError):
logging.warning(" using parameters from nicos cache")
year_date = str(self.start_date).replace('-', '/', 1)
#cachePath = '/home/amor/nicosdata/amor/cache/'
#cachePath = '/home/nicos/amorcache/'
cachePath = '/home/amor/cache/'
value = str(subprocess.getoutput(f'/usr/bin/grep "value" {cachePath}nicos-mu/{year_date}')).split('\t')[-1]
self.mu = float(value)
value = str(subprocess.getoutput(f'/usr/bin/grep "value" {cachePath}nicos-nu/{year_date}')).split('\t')[-1]
self.nu = float(value)
value = str(subprocess.getoutput(f'/usr/bin/grep "value" {cachePath}nicos-kap/{year_date}')).split('\t')[-1]
self.kap = float(value)
value = str(subprocess.getoutput(f'/usr/bin/grep "value" {cachePath}nicos-kad/{year_date}')).split('\t')[-1]
self.kad = float(value)
value = str(subprocess.getoutput(f'/usr/bin/grep "value" {cachePath}nicos-div/{year_date}')).split('\t')[-1]
self.div = float(value)
value = str(subprocess.getoutput(f'/usr/bin/grep "value" {cachePath}nicos-ch1_speed/{year_date}')).split('\t')[-1]
self.chopperSpeed = float(value)
self.chopperPhase = self.config.chopperPhase
self.tau = 30. / self.chopperSpeed
if self.config.muOffset:
self.mu += self.config.muOffset
if self.config.mu:
self.mu = self.config.mu
if self.config.nu:
self.nu = self.config.nu
self.fileDate = datetime.fromisoformat( self.hdf['/entry1/start_time'][0].decode('utf-8') )
def read_header_info(self):
# read general information and first data set
logging.info(f' meta data from: {self.file_list[0]}')
self.hdf = h5py.File(self.file_list[0], 'r', swmr=True)
title = self.hdf['entry1/title'][0].decode('utf-8')
proposal_id = self.hdf['entry1/proposal_id'][0].decode('utf-8')
user_name = self.hdf['entry1/user/name'][0].decode('utf-8')
user_affiliation = 'unknown'
user_email = self.hdf['entry1/user/email'][0].decode('utf-8')
user_orcid = None
sampleName = self.hdf['entry1/sample/name'][0].decode('utf-8')
model = self.hdf['entry1/sample/model'][0].decode('utf-8')
instrumentName = 'Amor'
source = self.hdf['entry1/Amor/source/name'][0].decode('utf-8')
sourceProbe = 'neutron'
start_time = self.hdf['entry1/start_time'][0].decode('utf-8')
self.start_date = start_time.split(' ')[0]
if self.config.sampleModel:
model = self.config.sampleModel
# assembling orso header information
self.header.owner = fileio.Person(
name=user_name,
affiliation=user_affiliation,
contact=user_email,
)
if user_orcid:
self.header.owner.orcid = user_orcid
self.header.experiment = fileio.Experiment(
title=title,
instrument=instrumentName,
start_date=self.start_date,
probe=sourceProbe,
facility=source,
proposalID=proposal_id
)
self.header.sample = fileio.Sample(
name=sampleName,
model=SampleModel(stack=model),
sample_parameters=None,
)
self.header.measurement_scheme = 'angle- and energy-dispersive'

"""
Classes for storing various configurations needed for reduction.
"""
from dataclasses import dataclass, field
from typing import Optional, Tuple
from datetime import datetime
from os import path
import logging
class Defaults:
# fileIdentifier
dataPath = '.'
raw = ['.', path.join('.','raw'), path.join('..','raw'), path.join('..','..','raw')]
year = datetime.now().year
normalisationFileIdentifier = []
normalisationMethod = 'o'
# subtract
outputName = "fromEOS"
outputFormat = ['Rqz.ort']
incidentAngle = 'alphaF'
qResolution = 0.01
#timeSlize
scale = [1]
# autoscale
lambdaRange = [2., 15.]
thetaRange = [-12., 12.]
thetaRangeR = [-0.75, 0.75]
yRange = [11, 41]
qzRange = [0.005, 0.30]
chopperSpeed = 500
chopperPhase = -13.5
chopperPhaseOffset = 7
muOffset = 0
mu = 0
nu = 0
sampleModel = None
#
@dataclass
class ReaderConfig:
year: int
dataPath: str
raw: Tuple[str, ...]
startTime: Optional[float] = 0
@dataclass
class ExperimentConfig:
incidentAngle: str
chopperPhase: float
yRange: Tuple[float, float]
lambdaRange: Tuple[float, float]
qzRange: Tuple[float, float]
sampleModel: Optional[str] = None
chopperPhaseOffset: float = 0
mu: Optional[float] = None
nu: Optional[float] = None
muOffset: Optional[float] = None
@dataclass
class ReductionConfig:
normalisationMethod: str
qResolution: float
qzRange: Tuple[float, float]
thetaRange: Tuple[float, float]
thetaRangeR: Tuple[float, float]
fileIdentifier: list = field(default_factory=lambda: ["0"])
scale: list = field(default_factory=lambda: [1]) # per-file scaling; if there are fewer entries than files, the last one is reused
autoscale: Optional[Tuple[bool, bool]] = None
subtract: Optional[str] = None
normalisationFileIdentifier: Optional[list] = None
timeSlize: Optional[list] = None
@dataclass
class OutputConfig:
outputFormats: list
outputName: str
@dataclass
class EOSConfig:
reader: ReaderConfig
experiment: ExperimentConfig
reduction: ReductionConfig
output: OutputConfig
_call_string_overwrite=None
#@property
#def call_string(self)->str:
# if self._call_string_overwrite:
# return self._call_string_overwrite
# else:
# return self.calculate_call_string()
def call_string(self):
base = 'python eos.py'
inpt = ''
if self.reader.year:
inpt += f' -Y {self.reader.year}'
else:
inpt += f' -Y {datetime.now().year}'
if self.reader.dataPath != '.':
inpt += f' --dataPath {self.reader.dataPath}'
#if self.reader.raw != '.':
# inpt = f' --rawd {self.reader.raw}'
if self.reduction.subtract:
inpt += f' -subtract {self.reduction.subtract}'
if self.reduction.normalisationFileIdentifier:
inpt += f' -n {" ".join(self.reduction.normalisationFileIdentifier)}'
if self.reduction.fileIdentifier:
inpt += f' -f {" ".join(self.reduction.fileIdentifier)}'
otpt = ''
if self.reduction.qResolution:
otpt += f' -r {self.reduction.qResolution}'
if self.output.outputName:
otpt += f' -o {self.output.outputName}'
if self.output.outputFormats != ['Rqz.ort']:
otpt += f' -of {" ".join(self.output.outputFormats)}'
mask = ''
if self.experiment.yRange != Defaults.yRange:
mask += f' -y {" ".join(str(ii) for ii in self.experiment.yRange)}'
if self.experiment.lambdaRange!= Defaults.lambdaRange:
mask += f' -l {" ".join(str(ff) for ff in self.experiment.lambdaRange)}'
if self.reduction.thetaRange != Defaults.thetaRange:
mask += f' -T {" ".join(str(ff) for ff in self.reduction.thetaRange)}'
elif self.reduction.thetaRangeR != Defaults.thetaRangeR:
mask += f' -t {" ".join(str(ff) for ff in self.reduction.thetaRangeR)}'
if self.experiment.qzRange!= Defaults.qzRange:
mask += f' -q {" ".join(str(ff) for ff in self.experiment.qzRange)}'
para = ''
if self.experiment.chopperPhase != Defaults.chopperPhase:
para += f' --chopperPhase {self.experiment.chopperPhase}'
if self.experiment.chopperPhaseOffset != Defaults.chopperPhaseOffset:
para += f' --chopperPhaseOffset {self.experiment.chopperPhaseOffset}'
if self.experiment.mu:
para += f' --mu {self.experiment.mu}'
elif self.experiment.muOffset:
para += f' --muOffset {self.experiment.muOffset}'
if self.experiment.nu:
para += f' --nu {self.experiment.nu}'
modl = ''
if self.experiment.sampleModel:
modl += f" --sampleModel '{self.experiment.sampleModel}'"
acts = ''
if self.reduction.autoscale:
acts += f' --autoscale {" ".join(str(ff) for ff in self.reduction.autoscale)}'
if self.reduction.scale != Defaults.scale:
acts += f' --scale {self.reduction.scale}'
if self.reduction.timeSlize:
acts += f' --timeSlize {" ".join(str(ff) for ff in self.reduction.timeSlize)}'
mlst = base + inpt + otpt
if mask:
mlst += mask
if para:
mlst += para
if acts:
mlst += acts
if modl:
mlst += modl
if len(mlst) > 70:
mlst = base + ' ' + inpt + ' ' + otpt
if mask:
mlst += ' ' + mask
if para:
mlst += ' ' + para
if acts:
mlst += ' ' + acts
if modl:
mlst += ' ' + modl
logging.debug(f'Argument list build in EOSConfig.call_string: {mlst}')
return mlst

import logging
import os
import sys
import numpy as np
from orsopy import fileio
from .command_line import expand_file_list
from .file_reader import AmorData
from .header import Header
from .options import EOSConfig
from .instrument import Grid
class AmorReduction:
def __init__(self, config: EOSConfig):
self.experiment_config = config.experiment
self.reader_config = config.reader
self.reduction_config = config.reduction
self.output_config = config.output
self.grid = Grid(config.reduction.qResolution, config.experiment.qzRange)
self.header = Header()
self.header.reduction.call = config.call_string()
def reduce(self):
if not os.path.exists(self.reader_config.dataPath):
logging.debug(f'Creating destination path {self.reader_config.dataPath}')
os.makedirs(self.reader_config.dataPath, exist_ok=True)
# load or create normalisation matrix
if self.reduction_config.normalisationFileIdentifier:
self.create_normalisation_map(self.reduction_config.normalisationFileIdentifier[0])
else:
self.norm_lz = self.grid.lz()
self.normAngle = 1.
self.normMonitor = 1.
logging.warning('normalisation matrix: none requested')
# load R(q_z) curve to be subtracted:
if self.reduction_config.subtract:
self.sq_q, self.sR_q, self.sdR_q, self.sFileName = self.loadRqz(self.reduction_config.subtract)
logging.warning(f'loaded background file: {self.sFileName}')
self.header.reduction.corrections.append(f'background from \'{self.sFileName}\' subtracted')
self.subtract = True
else:
self.subtract = False
# load measurement data and do the reduction
self.datasetsRqz = []
self.datasetsRlt = []
for i, short_notation in enumerate(self.reduction_config.fileIdentifier):
self.read_file_block(i, short_notation)
# output
logging.warning('output:')
if 'Rqz.ort' in self.output_config.outputFormats:
self.save_Rqz()
if 'Rlt.ort' in self.output_config.outputFormats:
self.save_Rtl()
def read_file_block(self, i, short_notation):
logging.warning('reading input:')
self.header.measurement_data_files = []
self.file_reader = AmorData(header=self.header,
reader_config=self.reader_config,
config=self.experiment_config,
short_notation=short_notation)
if self.reduction_config.timeSlize:
self.read_timeslices(i)
else:
self.read_unsliced(i)
def read_unsliced(self, i):
lamda_e = self.file_reader.lamda_e
detZ_e = self.file_reader.detZ_e
qz_lz, qx_lz, ref_lz, err_lz, res_lz, lamda_lz, theta_lz, int_lz, self.mask_lz = self.project_on_lz(
self.file_reader, self.norm_lz, self.normAngle, lamda_e, detZ_e)
monitor = self.file_reader.monitor
if monitor > 1:
ref_lz /= monitor
err_lz /= monitor
try:
ref_lz *= self.reduction_config.scale[i]
err_lz *= self.reduction_config.scale[i]
except IndexError:
ref_lz *= self.reduction_config.scale[-1]
err_lz *= self.reduction_config.scale[-1]
if 'Rqz.ort' in self.output_config.outputFormats:
headerRqz = self.header.orso_header()
headerRqz.data_set = f'Nr {i} : mu = {self.file_reader.mu:6.3f} deg'
if qz_lz[0,int(np.shape(qz_lz)[1]/2)] < 0:
# assuming a 'measurement from below' when center of detector at negative qz
qz_lz *= -1
# projection on qz-grid
q_q, R_q, dR_q, dq_q = self.project_on_qz(qz_lz, ref_lz, err_lz, res_lz, self.norm_lz, self.mask_lz)
# The filtering is now done by restricting the qz-grid
#filter_q = np.where((self.experiment_config.qzRange[0]>q_q) & (q_q>self.experiment_config.qzRange[1]),
# False, True)
#q_q = q_q[filter_q]
#R_q = R_q[filter_q]
#dR_q = dR_q[filter_q]
#dq_q = dq_q[filter_q]
if self.reduction_config.autoscale:
if i==0:
R_q, dR_q = self.autoscale(q_q, R_q, dR_q)
else:
pRq_z = self.datasetsRqz[i-1].data[:, 1]
pdRq_z = self.datasetsRqz[i-1].data[:, 2]
R_q, dR_q = self.autoscale(q_q, R_q, dR_q, pRq_z, pdRq_z)
if self.subtract:
if len(q_q)==len(self.sq_q):
R_q -= self.sR_q
dR_q = np.sqrt(dR_q**2+self.sdR_q**2)
else:
logging.warning(
f'background file {self.sFileName} not compatible with q_z scale ({len(self.sq_q)} vs. {len(q_q)})')
data = np.array([q_q, R_q, dR_q, dq_q]).T
orso_data = fileio.OrsoDataset(headerRqz, data)
self.datasetsRqz.append(orso_data)
if 'Rlt.ort' in self.output_config.outputFormats:
columns = [
fileio.Column('Qz', '1/angstrom', 'normal momentum transfer'),
fileio.Column('R', '', 'specular reflectivity'),
fileio.ErrorColumn(error_of='R', error_type='uncertainty', value_is='sigma'),
fileio.ErrorColumn(error_of='Qz', error_type='resolution', value_is='sigma'),
fileio.Column('lambda', 'angstrom', 'wavelength'),
fileio.Column('alpha_f', 'deg', 'final angle'),
fileio.Column('l', '', 'index of lambda-bin'),
fileio.Column('t', '', 'index of theta bin'),
fileio.Column('intensity', '', 'filtered neutron events per pixel'),
fileio.Column('norm', '', 'normalisation matrix'),
fileio.Column('mask', '', 'pixels used for calculating R(q_z)'),
fileio.Column('Qx', '1/angstrom', 'parallel momentum transfer'),
]
# data_source = file_reader.data_source
ts, zs = ref_lz.shape
lindex_lz = np.tile(np.arange(1, ts+1), (zs, 1)).T
tindex_lz = np.tile(np.arange(1, zs+1), (ts, 1))
j = 0
for item in zip(
qz_lz.T,
ref_lz.T,
err_lz.T,
res_lz.T,
lamda_lz.T,
theta_lz.T,
lindex_lz.T,
tindex_lz.T,
int_lz.T,
self.norm_lz.T,
np.where(self.mask_lz, 1, 0).T,
qx_lz.T,
):
data = np.array(list(item)).T
headerRlt = self.header.orso_header(columns=columns)
headerRlt.data_set = f'dataset_{i}_{j+1} : alpha_f = {theta_lz[0, j]:6.3f} deg'
orso_data = fileio.OrsoDataset(headerRlt, data)
self.datasetsRlt.append(orso_data)
j += 1
def read_timeslices(self, i):
wallTime_e = self.file_reader.wallTime_e
interval = self.reduction_config.timeSlize[0]
try:
start = self.reduction_config.timeSlize[1]
except IndexError:
start = 0
try:
stop = self.reduction_config.timeSlize[2]
except IndexError:
stop = wallTime_e[-1]
# make overwriting log lines possible by removing newline at the end
logging.StreamHandler.terminator = "\r"
for ti, time in enumerate(np.arange(start, stop, interval)):
logging.warning(f' time slize {ti:4d}')
filter_e = np.where((time<wallTime_e) & (wallTime_e<time+interval), True, False)
lamda_e = self.file_reader.lamda_e[filter_e]
detZ_e = self.file_reader.detZ_e[filter_e]
qz_lz, qx_lz, ref_lz, err_lz, res_lz, lamda_lz, theta_lz, int_lz, mask_lz = self.project_on_lz(
self.file_reader, self.norm_lz, self.normAngle, lamda_e, detZ_e)
q_q, R_q, dR_q, dq_q = self.project_on_qz(qz_lz, ref_lz, err_lz, res_lz, self.norm_lz, mask_lz)
filter_q = np.where((self.experiment_config.qzRange[0]<q_q) & (q_q<self.experiment_config.qzRange[1]),
True, False)
q_q = q_q[filter_q]
R_q = R_q[filter_q]
dR_q = dR_q[filter_q]
dq_q = dq_q[filter_q]
if self.reduction_config.autoscale:
R_q, dR_q = self.autoscale(q_q, R_q, dR_q)
if self.subtract:
if len(q_q)==len(self.sq_q):
R_q -= self.sR_q
dR_q = np.sqrt(dR_q**2+self.sdR_q**2)
else:
self.subtract = False
logging.warning(
f'background file {self.sFileName} not compatible with q_z scale ({len(self.sq_q)} vs. {len(q_q)})')
tme_q = np.ones(np.shape(q_q))*time
data = np.array([q_q, R_q, dR_q, dq_q, tme_q]).T
headerRqz = self.header.orso_header(
extra_columns=[fileio.Column('time', 's', 'time relative to start of measurement series')])
headerRqz.data_set = f'{i}_{ti}: time = {time:8.1f} s to {time+interval:8.1f} s'
orso_data = fileio.OrsoDataset(headerRqz, data)
self.datasetsRqz.append(orso_data)
# reset normal logging behavior
logging.StreamHandler.terminator = "\n"
logging.warning(' time slizing done')
def save_Rqz(self):
fname = os.path.join(self.reader_config.dataPath, f'{self.output_config.outputName}.Rqz.ort')
logging.warning(f' {fname}')
theSecondLine = f' {self.header.experiment.title} | {self.header.experiment.start_date} | sample {self.header.sample.name} | R(q_z)'
fileio.save_orso(self.datasetsRqz, fname, data_separator='\n', comment=theSecondLine)
def save_Rtl(self):
fname = os.path.join(self.reader_config.dataPath, f'{self.output_config.outputName}.Rlt.ort')
logging.warning(f' {fname}')
theSecondLine = f' {self.header.experiment.title} | {self.header.experiment.start_date} | sample {self.header.sample.name} | R(lambda, theta)'
fileio.save_orso(self.datasetsRlt, fname, data_separator='\n', comment=theSecondLine)
def autoscale(self, q_q, R_q, dR_q, pR_q=[], pdR_q=[]):
autoscale = self.reduction_config.autoscale
if len(pR_q) == 0:
filter_q = np.where((autoscale[0]<=q_q)&(q_q<=autoscale[1]), True, False)
filter_q = np.where(dR_q>0, filter_q, False)
if len(filter_q[filter_q]) > 0:
scale = np.sum(R_q[filter_q]**2/dR_q[filter_q]) / np.sum(R_q[filter_q]/dR_q[filter_q])
else:
logging.warning(' automatic scaling not possible')
scale = 1.
else:
filter_q = np.where(np.isnan(pR_q*R_q), False, True)
filter_q = np.where(R_q>0, filter_q, False)
filter_q = np.where(pR_q>0, filter_q, False)
if len(filter_q[filter_q]) > 0:
scale = np.sum(R_q[filter_q]**3 * pR_q[filter_q] / (dR_q[filter_q]**2 * pdR_q[filter_q]**2)) \
/ np.sum(R_q[filter_q]**2 * pR_q[filter_q]**2 / (dR_q[filter_q]**2 * pdR_q[filter_q]**2))
else:
logging.warning(' automatic scaling not possible')
scale = 1.
R_q /= scale
dR_q /= scale
logging.debug(f' scaling factor = {scale}')
return R_q, dR_q
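Matching one curve onto another by a single factor is a one-parameter weighted least-squares problem. A generic sketch of the idea (simplified weights, not the exact weighting used in the second branch above):

```python
import numpy as np

def overlap_scale(R, pR, w):
    """Factor s minimising sum(w * (R/s - pR)**2), i.e. scaling R onto pR."""
    return np.sum(w * R**2) / np.sum(w * R * pR)
```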
def project_on_qz(self, q_lz, R_lz, dR_lz, dq_lz, norm_lz, mask_lz):
q_q = self.grid.q()
mask_lzf = mask_lz.flatten()
q_lzf = q_lz.flatten()[mask_lzf]
R_lzf = R_lz.flatten()[mask_lzf]
dR_lzf = dR_lz.flatten()[mask_lzf]
dq_lzf = dq_lz.flatten()[mask_lzf]
norm_lzf = norm_lz.flatten()[mask_lzf]
weights_lzf = norm_lzf
#weights_lzf = np.sqrt(norm_lzf)
#weights_lzf = 1 / dR_lzf
N_q = np.histogram(q_lzf, bins = q_q, weights = weights_lzf )[0]
N_q = np.where(N_q > 0, N_q, np.nan)
R_q = np.histogram(q_lzf, bins = q_q, weights = weights_lzf * R_lzf )[0]
R_q = R_q / N_q
dR_q = np.histogram(q_lzf, bins = q_q, weights = (weights_lzf * dR_lzf)**2 )[0]
dR_q = np.sqrt( dR_q ) / N_q
# TODO: different error propagations for dR and dq!
# this is what should work:
#dq_q = np.histogram(q_lzf, bins = q_q, weights = (weights_lzf * dq_lzf)**2 )[0]
#dq_q = np.sqrt( dq_q ) / N_q
# and this actually works:
N_q = np.histogram(q_lzf, bins = q_q, weights = weights_lzf**2 )[0]
N_q = np.where(N_q > 0, N_q, np.nan)
dq_q = np.histogram(q_lzf, bins = q_q, weights = (weights_lzf * dq_lzf)**2 )[0]
dq_q = np.sqrt( dq_q / N_q )
q_q = 0.5 * (q_q + np.roll(q_q, 1))
return q_q[1:], R_q, dR_q, dq_q
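Stripped of the error propagation, the per-bin weighted mean above is the ratio of two weighted histograms. A minimal example with made-up numbers:

```python
import numpy as np

q = np.array([0.10, 0.15, 0.35, 0.40])
R = np.array([1.0, 3.0, 2.0, 4.0])
w = np.array([1.0, 1.0, 1.0, 3.0])
edges = np.array([0.0, 0.25, 0.5])

N_q = np.histogram(q, bins=edges, weights=w)[0]
N_q = np.where(N_q > 0, N_q, np.nan)                       # avoid division by zero
R_q = np.histogram(q, bins=edges, weights=w * R)[0] / N_q  # weighted mean per bin
```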
def loadRqz(self, name):
fname = os.path.join(self.reader_config.dataPath, name)
if os.path.exists(fname):
fileName = fname
elif os.path.exists(f'{fname}.Rqz.ort'):
fileName = f'{fname}.Rqz.ort'
else:
sys.exit(f'### the background file \'{fname}\' does not exist! => stopping')
q_q, Sq_q, dS_q = np.loadtxt(fileName, usecols=(0, 1, 2), comments='#', unpack=True)
return q_q, Sq_q, dS_q, fileName
def create_normalisation_map(self, short_notation):
dataPath = self.reader_config.dataPath
normalisation_list = expand_file_list(short_notation)
name = str(normalisation_list[0])
for i in range(1, len(normalisation_list), 1):
name = f'{name}_{normalisation_list[i]}'
n_path = os.path.join(dataPath, f'{name}.norm')
if os.path.exists(n_path):
logging.warning(f'normalisation matrix: found and using {n_path}')
with open(n_path, 'rb') as fh:
self.normFileList = np.load(fh, allow_pickle=True)
self.normAngle = np.load(fh, allow_pickle=True)
self.norm_lz = np.load(fh, allow_pickle=True)
self.normMonitor = np.load(fh, allow_pickle=True)
for i, entry in enumerate(self.normFileList):
self.normFileList[i] = entry.split('/')[-1]
self.header.measurement_additional_files = self.normFileList
else:
logging.warning(f'normalisation matrix: using the files {normalisation_list}')
fromHDF = AmorData(header=self.header,
reader_config=self.reader_config,
config=self.experiment_config,
short_notation=short_notation, norm=True)
self.normAngle = fromHDF.nu - fromHDF.mu
lamda_e = fromHDF.lamda_e
detZ_e = fromHDF.detZ_e
self.normMonitor = fromHDF.monitor
self.norm_lz, bins_l, bins_z = np.histogram2d(lamda_e, detZ_e, bins = (self.grid.lamda(), self.grid.z()))
self.norm_lz = np.where(self.norm_lz>2, self.norm_lz, np.nan)
# correct for the SM reflectivity
lamda_l = self.grid.lamda()
theta_z = self.normAngle + fromHDF.delta_z
lamda_lz = (self.grid.lz().T*lamda_l[:-1]).T
theta_lz = self.grid.lz()*theta_z
qz_lz = 4.0*np.pi * np.sin(np.deg2rad(theta_lz)) / lamda_lz
Rsm_lz = np.ones(np.shape(qz_lz))
Rsm_lz = np.where(qz_lz>0.0217, 1-(qz_lz-0.0217)*(0.0625/0.0217), Rsm_lz)
# TODO: introduce variable for `m` and probably for the decay
Rsm_lz = np.where(qz_lz>0.0217*5, np.nan, Rsm_lz)
self.norm_lz = self.norm_lz / Rsm_lz
if len(lamda_e) > 1e6:
with open(n_path, 'wb') as fh:
np.save(fh, np.array(fromHDF.file_list), allow_pickle=False)
np.save(fh, np.array(self.normAngle), allow_pickle=False)
np.save(fh, self.norm_lz, allow_pickle=False)
np.save(fh, self.normMonitor, allow_pickle=False)
self.normFileList = fromHDF.file_list
self.header.reduction.corrections.append('normalisation with \'additional files\'')
def project_on_lz(self, fromHDF, norm_lz, normAngle, lamda_e, detZ_e):
# projection on lambda-z-grid
lamda_l = self.grid.lamda()
alphaF_z = fromHDF.nu - fromHDF.mu + fromHDF.delta_z
#if self.experiment_config.incidentAngle == 'alphaF':
# alphaF_z = fromHDF.nu - fromHDF.mu + fromHDF.delta_z
#elif self.experiment_config.incidentAngle == 'nu':
# alphaF_z = (fromHDF.nu + fromHDF.delta_z + fromHDF.kap + fromHDF.kad) / 2.
#else:
# pass
lamda_lz = (self.grid.lz().T*lamda_l[:-1]).T
alphaF_lz = self.grid.lz()*alphaF_z
thetaN_z = fromHDF.delta_z + normAngle
thetaN_lz = np.ones(np.shape(norm_lz))*thetaN_z
thetaN_lz = np.where(np.absolute(thetaN_lz)>5e-3, thetaN_lz, np.nan)
mask_lz = np.where(np.isnan(norm_lz), False, True)
mask_lz = np.logical_and(mask_lz, np.where(np.absolute(thetaN_lz)>5e-3, True, False))
mask_lz = np.logical_and(mask_lz, np.where(np.absolute(alphaF_lz)>5e-3, True, False))
if self.reduction_config.thetaRange[1]<12:
mask_lz = np.logical_and(mask_lz, np.where(alphaF_lz >= self.reduction_config.thetaRange[0], True, False))
mask_lz = np.logical_and(mask_lz, np.where(alphaF_lz <= self.reduction_config.thetaRange[1], True, False))
else:
self.reduction_config.thetaRange = [fromHDF.nu - fromHDF.mu - fromHDF.div/2,
fromHDF.nu - fromHDF.mu + fromHDF.div/2]
mask_lz = np.logical_and(mask_lz, np.where(alphaF_lz >= self.reduction_config.thetaRange[0], True, False))
mask_lz = np.logical_and(mask_lz, np.where(alphaF_lz <= self.reduction_config.thetaRange[1], True, False))
if self.reduction_config.thetaRangeR[1]<12:
t0 = fromHDF.nu - fromHDF.mu
mask_lz = np.logical_and(mask_lz, np.where(alphaF_lz-t0 >= self.reduction_config.thetaRangeR[0], True, False))
mask_lz = np.logical_and(mask_lz, np.where(alphaF_lz-t0 <= self.reduction_config.thetaRangeR[1], True, False))
if self.experiment_config.lambdaRange[1]<15:
mask_lz = np.logical_and(mask_lz, np.where(lamda_lz >= self.experiment_config.lambdaRange[0], True, False))
mask_lz = np.logical_and(mask_lz, np.where(lamda_lz <= self.experiment_config.lambdaRange[1], True, False))
# gravity correction
#alphaF_lz += np.rad2deg( np.arctan( 3.07e-10 * (fromHDF.detectorDistance + detXdist_e) * lamda_lz**2 ) )
alphaF_lz += np.rad2deg( np.arctan( 3.07e-10 * fromHDF.detectorDistance * lamda_lz**2 ) )
if self.experiment_config.incidentAngle == 'alphaF':
#alphaI_lz = alphaF_lz
qz_lz = 4.0*np.pi * np.sin(np.deg2rad(alphaF_lz)) / lamda_lz
qx_lz = self.grid.lz() * 0.
else:
alphaI_lz = self.grid.lz()*(fromHDF.mu + fromHDF.kap + fromHDF.kad)
qz_lz = 2.0*np.pi * (np.sin(np.deg2rad(alphaF_lz)) + np.sin(np.deg2rad(alphaI_lz))) / lamda_lz
qx_lz = 2.0*np.pi * (np.cos(np.deg2rad(alphaF_lz)) - np.cos(np.deg2rad(alphaI_lz))) / lamda_lz
int_lz, bins_l, bins_z = np.histogram2d(lamda_e, detZ_e, bins = (lamda_l, self.grid.z()))
# cut normalisation sample horizon
int_lz = np.where(mask_lz, int_lz, np.nan)
thetaF_lz = np.where(mask_lz, alphaF_lz, np.nan)
if self.reduction_config.normalisationMethod == 'o':
logging.debug(' assuming an overilluminated sample and correcting for the angle of incidence')
ref_lz = (int_lz * np.absolute(thetaN_lz)) / (norm_lz * np.absolute(thetaF_lz)) * self.normMonitor
elif self.reduction_config.normalisationMethod == 'u':
logging.debug(' assuming an underilluminated sample and ignoring the angle of incidence')
ref_lz = (int_lz / norm_lz) * self.normMonitor
elif self.reduction_config.normalisationMethod == 'd':
logging.debug(' assuming direct beam for normalisation and ignoring the angle of incidence')
norm_lz = np.flip(norm_lz,1)
ref_lz = (int_lz / norm_lz) * self.normMonitor
else:
logging.error('unknown normalisation method! Use [u], [o] or [d]')
ref_lz = (int_lz * np.absolute(thetaN_lz)) / (norm_lz * np.absolute(thetaF_lz)) * self.normMonitor
err_lz = ref_lz * np.sqrt( 1/(int_lz+.1) + 1/norm_lz )
res_lz = np.ones((np.shape(lamda_l[:-1])[0], np.shape(alphaF_z)[0])) * 0.022**2
res_lz = res_lz + (0.008/alphaF_lz)**2
res_lz = qz_lz * np.sqrt(res_lz)
return qz_lz, qx_lz, ref_lz, err_lz, res_lz, lamda_lz, alphaF_lz, int_lz, mask_lz
@staticmethod
def histogram2d_lz(lamda_e, detZ_e, bins):
"""
Perform a binning operation equivalent to numpy histogram2d for the special case
where the second dimension uses integer positions (pre-defined pixels).
Based on the devide_bin algorithm below.
"""
dimension = bins[1].shape[0]-1
if not (np.array(bins[1])==np.arange(dimension+1)).all():
raise ValueError("histogram2d_lz requires the second bin dimension to be a contiguous integer range")
binning = AmorReduction.devide_bin(lamda_e, detZ_e.astype(np.int64), bins[0], dimension)
return np.array(binning), bins[0], bins[1]
@staticmethod
def devide_bin(lambda_e, position_e, lamda_edges, dimension):
'''
Use a divide-and-conquer strategy to bin the data. For the actual binning the
numpy bincount function is used, as it is much faster than histogram for
counting integer values.
:param lambda_e: array of wavelengths, one entry per event
:param position_e: array of positional indices, one entry per event
:param lamda_edges: bin edges to be used for the wavelength histogram
:param dimension: number of position buckets in the output array
:return: 2D list of counts with dimensions (lambda, position)
'''
if len(lambda_e)==0:
# no more events in range, return empty bins
return [np.zeros(dimension, dtype=np.int64).tolist()]*(len(lamda_edges)-1)
if len(lamda_edges)==2:
# deepest recursion reached, all items should be within the two ToF edges
return [np.bincount(position_e, minlength=dimension).tolist()]
# split all events into two time of flight regions
split_idx = len(lamda_edges)//2
left_region = lambda_e<lamda_edges[split_idx]
left_list = AmorReduction.devide_bin(lambda_e[left_region], position_e[left_region],
lamda_edges[:split_idx+1], dimension)
right_region = np.logical_not(left_region)
right_list = AmorReduction.devide_bin(lambda_e[right_region], position_e[right_region],
lamda_edges[split_idx:], dimension)
return left_list+right_list
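The divide-and-conquer binning above can be exercised standalone. The sketch below reproduces the core of `devide_bin` (copied from the listing, numpy assumed available) and checks that it matches `np.histogram2d` on random event data:

```python
import numpy as np

def devide_bin(lambda_e, position_e, lamda_edges, dimension):
    # split events at the middle wavelength edge and recurse;
    # at recursion depth one, count pixel indices with np.bincount
    if len(lambda_e) == 0:
        return [np.zeros(dimension, dtype=np.int64).tolist()] * (len(lamda_edges) - 1)
    if len(lamda_edges) == 2:
        return [np.bincount(position_e, minlength=dimension).tolist()]
    split_idx = len(lamda_edges) // 2
    left = lambda_e < lamda_edges[split_idx]
    return (devide_bin(lambda_e[left], position_e[left], lamda_edges[:split_idx + 1], dimension)
            + devide_bin(lambda_e[~left], position_e[~left], lamda_edges[split_idx:], dimension))

rng = np.random.default_rng(1)
lam = rng.uniform(2.0, 12.0, 1000)          # event wavelengths / Aa
pix = rng.integers(0, 32, 1000)             # event pixel indices
edges = np.linspace(2.0, 12.0, 21)          # 20 wavelength bins
binned = np.array(devide_bin(lam, pix, edges, 32))
ref, _, _ = np.histogram2d(lam, pix, bins=(edges, np.arange(33)))
assert (binned == ref).all()
```

The half-open bin convention (`lambda_e < lamda_edges[split_idx]` sends edge values to the right) matches numpy's, which is why the two results agree bin by bin.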


@@ -1,716 +0,0 @@
__version__ = '2024-03-30'
import os
import sys
import subprocess
import h5py
import glob
import numpy as np
import argparse
import matplotlib.pyplot as plt
import matplotlib as mpl
import time
import logging
from datetime import datetime
#==============================================================================
#==============================================================================
class Detector:
def __init__(self):
self.nBlades = 14 # number of active blades in the detector
angle = np.deg2rad( 5.1 ) # deg angle of incidence of the beam on the blades (def: 5.1)
self.dZ = 4.0 * np.sin(angle) # mm height-distance of neighboring pixels on one blade
self.dX = 4.0 * np.cos(angle) # mm depth-distance of neighboring pixels on one blade
self.bladeZ = 10.7 # mm distance between detector blades (consistent with nu!)
self.zero = 0.5 * self.nBlades * self.bladeZ # mm vertical center of the detector
#==============================================================================
def pixel2quantity():
det = Detector()
nPixel = 64 * 32 * det.nBlades
pixelID = np.arange(nPixel)
(bladeNr, bPixel) = np.divmod(pixelID, 64*32)
(bZ, bY) = np.divmod(bPixel, 64)
z = det.zero - bladeNr * det.bladeZ - bZ * det.dZ
x = (31 - bZ) * det.dX
bladeAngle = np.rad2deg( 2. * np.arcsin(0.5*det.bladeZ / detectorDistance) )
delta = (det.nBlades/2. - bladeNr) * bladeAngle - np.rad2deg( np.arctan(bZ*det.dZ / ( detectorDistance + bZ * det.dX) ) )
dZ = bladeNr * 32 + bZ
quantity = np.vstack((dZ.T, bY.T, delta.T, x.T)).T
return quantity
#==============================================================================
def analyse_ev(event_e, tof_e, yMin, yMax, thetaMin, thetaMax):
data_e = np.zeros((len(event_e), 9), dtype=float)
# data_e column description:
# 0: wall time / s
# 1: pixelID
# 2: z on detector
# 3: y on detector
# 4: delta / deg = angle on detector
# 5: path within detector / mm
# 6: lambda / angstrom
# 7: theta / deg
# 8: q_z / angstrom^-1
data_e[:,0] = tof_e[:]
data_e[:,1] = event_e[:]
# filter 'strange' tof times > 2 tau
if True:
filter_e = (data_e[:,0] <= 2*tau)
#print(event_e[~filter_e])
#print(data_e[~filter_e,0])
data_e = data_e[filter_e,:]
if np.shape(filter_e)[0]-np.shape(data_e)[0] > 0.5 and verbous:
logging.warning(f'## strange times: {np.shape(filter_e)[0]-np.shape(data_e)[0]}')
pixelLookUp = pixel2quantity()
data_e[:,2:6] = pixelLookUp[np.int_(data_e[:,1])-1,:]
#================================
# filter y range
filter_e = (yMin <= data_e[:,3]) & (data_e[:,3] <= yMax)
data_e = data_e[filter_e,:]
# correct tof for beam size effect at chopper
data_e[:,0] -= ( data_e[:,4] / 180. ) * tau
# effective flight path length
#data_e[:,5] = chopperDetectorDistance + data_e[:,5]
# calculate lambda
hdm = 6.626176e-34/1.674928e-27 # h / m
data_e[:,6] = 1.e13 * data_e[:,0] * hdm / ( chopperDetectorDistance + data_e[:,5] )
# theta
data_e[:,7] = nu - mu + data_e[:,4]
# gravity compensation
data_e[:,7] += np.rad2deg( np.arctan( 3.07e-10 * ( detectorDistance + data_e[:,5]) * data_e[:,6] * data_e[:,6] ) )
# filter theta range
filter_l = (thetaMin <= data_e[:,7]) & (data_e[:,7] <= thetaMax)
data_e = data_e[filter_l,:]
# q_z
data_e[:,8] = 4*np.pi * np.sin( np.deg2rad( data_e[:,7] ) ) / data_e[:,6]
# filter q_z range
#filter_e = (qMin < data_e[:,6]) & (data_e[:,6] < qMax)
#data_e = data_e[filter_e,:]
return data_e
#==============================================================================
class Meta:
# AMOR hdf dataset with associated properties from metadata
def __init__(self, fileName):
self.fileName = fileName
fh = h5py.File(fileName, 'r', swmr=True)
# for processing
self.chopperDistance = float(np.take(fh['/entry1/Amor/chopper/pair_separation'], 0)) # mm
# the following is the distance from the sample to the detector entry window, not to the center of rotation
self.detectorDistance = float(np.take(fh['/entry1/Amor/detector/transformation/distance'], 0)) # mm
self.chopperDetectorDistance = self.detectorDistance - float(np.take(fh['entry1/Amor/chopper/distance'], 0)) # mm
self.lamdaCut = 2.5 # Aa
startDate = str(fh['/entry1/start_time'][0].decode('utf-8'))
self.startDate = datetime.strptime(startDate, '%Y-%m-%d %H:%M:%S')
startDate = datetime.timestamp(self.startDate)
self.countingTime = float(np.take(fh['/entry1/Amor/detector/data/event_time_zero'], -1))/1e9 - startDate
# not exact for low rates
ka0 = 0.245 # given inclination of the beam after the Selene guide
year_date = str(datetime.today()).split(' ')[0].replace("-", "/", 1)
# decide from where to take the control parameters
try:
self.mu = float(np.take(fh['/entry1/Amor/master_parameters/mu/value'], 0))
self.nu = float(np.take(fh['/entry1/Amor/master_parameters/nu/value'], 0))
self.kap = float(np.take(fh['/entry1/Amor/master_parameters/kap/value'], 0))
self.kad = float(np.take(fh['/entry1/Amor/master_parameters/kad/value'], 0))
self.div = float(np.take(fh['/entry1/Amor/master_parameters/div/value'], 0))
chSp = float(np.take(fh['/entry1/Amor/chopper/rotation_speed/value'], 0))
self.chPh = float(np.take(fh['/entry1/Amor/chopper/phase/value'], 0))
except (KeyError, IndexError):
logging.warning(" using parameters from nicos cache")
#cachePath = '/home/amor/nicosdata/amor/cache/'
cachePath = '/home/nicos/amorcache/'
value = str(subprocess.getoutput('/usr/bin/grep "value" '+cachePath+'nicos-mu/'+year_date)).split('\t')[-1]
self.mu = float(value)
value = str(subprocess.getoutput('/usr/bin/grep "value" '+cachePath+'nicos-nu/'+year_date)).split('\t')[-1]
self.nu = float(value)
value = str(subprocess.getoutput('/usr/bin/grep "value" '+cachePath+'nicos-kap/'+year_date)).split('\t')[-1]
self.kap = float(value)
value = str(subprocess.getoutput('/usr/bin/grep "value" '+cachePath+'nicos-kad/'+year_date)).split('\t')[-1]
self.kad = float(value)
value = str(subprocess.getoutput('/usr/bin/grep "value" '+cachePath+'nicos-div/'+year_date)).split('\t')[-1]
self.div = float(value)
value = str(subprocess.getoutput('/usr/bin/grep "value" '+cachePath+'nicos-ch1_speed/'+year_date)).split('\t')[-1]
chSp = float(value)
self.chPh = np.nan
if chSp:
self.tau = 30. / chSp
else:
self.tau = 0
try: # not yet correctly implemented in nexus template
spin = str(fh['/entry1/polarizer/spin_flipper/spin'][0].decode('utf-8'))
if spin == "b'p'":
self.spin = 'p'
elif spin == "b'm'":
self.spin = 'm'
elif spin == "b'up'":
self.spin = 'p'
elif spin == "b'down'":
self.spin = 'm'
elif spin == '?':
self.spin = '?'
else:
self.spin = 'n'
except (KeyError, IndexError):
self.spin = '?'
fh.close()
#==============================================================================
def resolveNumber(dataPath, ident):
if ident == '0' or '-' in ident[0]:
try:
nnr = int(ident)
except:
logging.error("ERROR: '{}' is no valid file identifier!".format(ident))
fileNames = glob.glob(dataPath+'/*.hdf')
fileNames.sort()
fileName = fileNames[nnr-1]
fileName = fileName.split('/')[-1]
fileNumber = fileName.split('n')[1].split('.')[0].lstrip('0')
else:
fileNumber = ident
return fileNumber
#==============================================================================
def fileNameCreator(dataPath, ident):
clas = commandLineArgs()
ident=str(ident)
try:
nnr = int(ident)
except:
logging.error("ERROR: '{}' is no valid file identifier!".format(ident))
if nnr <= 0 :
fileName = glob.glob(dataPath+'/*.hdf')[nnr-1]
fileName = fileName.split('/')[-1]
else:
fileName = f'amor{clas.year}n{ident:>06s}'
fileName = fileName.split('.')[0]
fileName = fileName+'.hdf'
fileName = dataPath+fileName
fileNumber = fileName.split('n')[-1].split('.')[0].lstrip('0')
return fileName, fileNumber
#==============================================================================
class PlotSelection:
def headline(self, fileNumber, totalCounts):
headLine = "#{} \u03bc={:>1.2f} \u03bd={:>1.2f} {:>12,} cts {:>8.1f} s".format(fileNumber, mu+5e-3, nu+5e-3, totalCounts, countingTime)
return headLine
# grids
def y_grid(self):
y_grid = np.arange(yMin, yMax+1, 1)
return y_grid
def lamda_grid(self):
dldl = 0.005 # Delta lambda / lambda
lMin = max(2, lamdaMin)
lamda_grid = lMin*(1+dldl)**np.arange(int(np.log(lamdaMax/lMin)/np.log(1+dldl)+1))
return lamda_grid
def theta_grid(self):
det = Detector()
bladeAngle = np.rad2deg( 2. * np.arcsin(0.5*det.bladeZ / detectorDistance) )
blade_grid = np.arctan( np.arange(33) * det.dZ / ( detectorDistance + np.arange(33) * det.dX) )
blade_grid = np.rad2deg(blade_grid)
stepWidth = blade_grid[1] - blade_grid[0]
blade_grid = blade_grid - 0.2 * stepWidth
delta_grid = []
for b in np.arange(det.nBlades-1):
delta_grid = np.concatenate((delta_grid, blade_grid), axis=None)
blade_grid = blade_grid + bladeAngle
delta_grid = delta_grid[delta_grid<blade_grid[0]-0.5*stepWidth]
delta_grid = np.concatenate((delta_grid, blade_grid), axis=None)
theta_grid = nu - mu - np.flip(delta_grid) + 0.5*det.nBlades * bladeAngle
theta_grid = theta_grid[theta_grid>=thetaMin]
theta_grid = theta_grid[theta_grid<=thetaMax]
return theta_grid
def q_grid(self):
dqdq = 0.010 # Delta q_z / q_z
q_grid = qMin*(1.+dqdq)**np.arange(int(np.log(qMax/qMin)/np.log(1+dqdq)))
return q_grid
# create PNG with several plots
def all(self, fileNumber, arg, data_e):
#cmap='gist_earth'
cmap = mpl.cm.gnuplot(np.arange(256))
cmap[:1, :] = np.array([256/256, 255/256, 236/256, 1])
cmap = mpl.colors.ListedColormap(cmap, name='myColorMap', N=cmap.shape[0])
I_yt, bins_y, bins_t = np.histogram2d(data_e[:,3], data_e[:,7], bins = (self.y_grid(), self.theta_grid()))
I_lt, bins_l, bins_t = np.histogram2d(data_e[:,6], data_e[:,7], bins = (self.lamda_grid(), self.theta_grid()))
I_q, bins_q = np.histogram(data_e[:,8], bins = self.q_grid())
q_lim = 4*np.pi*np.array([ max( np.sin(self.theta_grid()[0]*np.pi/180.)/self.lamda_grid()[-1] , 1e-4 ),
min( np.sin(self.theta_grid()[-1]*np.pi/180.)/self.lamda_grid()[0] , 0.03 )])
if arg == 'lin':
#vmin = min(np.min(I_lt), np.min(I_yt))
vmin = 0
vmax = max(5, np.max(I_lt), np.max(I_yt))
else:
vmin = 0
vmax = max(1, np.log(np.max(I_lt)+.1)/np.log(10)*1.05, np.log(np.max(I_yt)+.1)/np.log(10)*1.05)
# I(y, theta)
fig = plt.figure()
axs = fig.add_gridspec(2,3)
myt = fig.add_subplot(axs[0,0])
myt.set_title('detector area')
myt.set_xlabel('$y ~/~ \\mathrm{bins}$')
myt.set_ylabel('$\\theta ~/~ \\mathrm{deg}$')
if arg == 'lin':
myt.pcolormesh(bins_y, bins_t, I_yt.T, cmap=cmap, vmin=vmin, vmax=vmax)
else:
myt.pcolormesh(bins_y, bins_t, (np.log(I_yt + 5.e-1) / np.log(10.)).T, cmap=cmap, vmin=vmin, vmax=vmax)
# I(lambda, theta)
mlt = fig.add_subplot(axs[0,1:])
mlt.set_title('angle- and energy disperse')
mlt.set_xlabel('$\\lambda ~/~ \\mathrm{\\AA}$')
mlt.axes.get_yaxis().set_visible(False)
if arg == 'lin':
cb = mlt.pcolormesh(bins_l, bins_t, I_lt.T, cmap=cmap, vmin=vmin, vmax=vmax)
else:
cb = mlt.pcolormesh(bins_l, bins_t, (np.log(I_lt + 5.e-1) / np.log(10.)).T, cmap=cmap, vmin=vmin, vmax=vmax)
# I(q_z)
lqz = fig.add_subplot(axs[1,:])
lqz.set_title('$I(q_z)$')
lqz.set_ylabel('$\\log_{10}(\\mathrm{cnts})$')
lqz.set_xlabel('$q_z~/~\\mathrm{\\AA}^{-1}$')
lqz.set_xlim(q_lim)
if arg == 'lin':
plt.plot(bins_q[:-1], I_q, color='blue', linewidth=0.5)
else:
err_q = np.sqrt(I_q+1)
low_q = np.where(I_q-err_q>0, I_q-err_q, 0.1)
plt.fill_between(bins_q[:-1], np.log(low_q)/np.log(10), np.log(I_q+err_q/2)/np.log(10), color='lightgrey')
plt.plot(bins_q[:-1], np.log(I_q+5e-1)/np.log(10), color='blue', linewidth=0.5)
lw = I_q[ ((q_lim[0] < bins_q[:-1]) & (bins_q[:-1] < q_lim[1])) ].min()
plt.ylim(max(-0.1, np.log(lw+.1)/np.log(10)-0.1), )
#
headline = self.headline(fileNumber, np.shape(data_e)[0])
plt.title(headline, loc='left', y=2.8, c='r')
fig.colorbar(cb, ax=mlt)
plt.subplots_adjust(hspace=0.6, wspace=0.1)
plt.savefig(output, format='png', dpi=150)
# create PNG with one plot
def Iyz(self, fileNumber, arg, data_e):
det = Detector()
cmap = mpl.cm.gnuplot(np.arange(256))
cmap[:1, :] = np.array([256/256, 255/256, 236/256, 1])
cmap = mpl.colors.ListedColormap(cmap, name='myColorMap', N=cmap.shape[0])
z_grid = np.arange(det.nBlades*32)
I_yz, bins_y, bins_z = np.histogram2d(data_e[:,3], data_e[:,2], bins = (self.y_grid(), z_grid))
if arg == 'log':
vmin = 0
vmax = max(1, np.log(np.max(I_yt)+.1)/np.log(10)*1.05)
plt.pcolormesh(bins_y[:],bins_z[:],(np.log(I_yz+6e-1)/np.log(10)).T, cmap=cmap, vmin=vmin, vmax=vmax)
else:
plt.pcolormesh(bins_y[:],bins_z[:],I_yz.T, cmap=cmap)
plt.xlabel('$y ~/~ \\mathrm{bins}$')
plt.ylabel('$z ~/~ \\mathrm{bins}$')
headline = self.headline(fileNumber, np.shape(data_e)[0])
plt.title(headline, loc='left', y=1.0, c='r')
plt.colorbar()
plt.savefig(output, format='png', dpi=150)
def Ilt(self, fileNumber, arg, data_e) :
cmap = mpl.cm.gnuplot(np.arange(256))
cmap[:1, :] = np.array([256/256, 255/256, 236/256, 1])
cmap = mpl.colors.ListedColormap(cmap, name='myColorMap', N=cmap.shape[0])
I_lt, bins_l, bins_t = np.histogram2d(data_e[:,6], data_e[:,7], bins = (self.lamda_grid(), self.theta_grid()))
if arg == 'log':
vmax = max(1, np.log(np.max(I_lt)+.1)/np.log(10)*1.05 )
plt.pcolormesh(bins_l, bins_t, (np.log(I_lt+I_lt[I_lt>0].min()/2)/np.log(10.)).T, cmap=cmap, vmin=0, vmax=vmax)
else :
vmax = max(np.max(I_lt), 5)
plt.pcolormesh(bins_l, bins_t, I_lt.T, cmap=cmap, vmin=0, vmax=vmax)
plt.xlim(0,)
#if np.min(bins_t) > 0.01 :
# plt.ylim(bottom=0)
#else:
# plt.ylim(bottom=np.min(bins_t))
#if np.max(bins_t) < -0.01:
# plt.ylim(top=0)
#else:
# plt.ylim(top=np.max(bins_t))
plt.xlim(lamdaMin, lamdaMax)
plt.ylim(thetaMin, thetaMax)
plt.xlabel('$\\lambda ~/~ \\mathrm{\\AA}$')
plt.ylabel('$\\theta ~/~ \\mathrm{deg}$')
headline = self.headline(fileNumber, np.shape(data_e)[0])
plt.title(headline, loc='left', y=1.0, c='r')
plt.colorbar()
plt.savefig(output, format='png', dpi=150)
def Itz(self, fileNumber, arg, data_e):
det = Detector()
cmap = mpl.cm.gnuplot(np.arange(256))
cmap[:1, :] = np.array([256/256, 255/256, 236/256, 1])
cmap = mpl.colors.ListedColormap(cmap, name='myColorMap', N=cmap.shape[0])
time_grid = np.arange(0, tau, 0.0005)
z_grid = np.arange(det.nBlades*32+1)
I_tz, bins_t, bins_z = np.histogram2d(data_e[:,0], data_e[:,2], bins = (time_grid, z_grid))
if arg == 'log':
vmax = max(2., np.log(np.max(I_tz)+.1)/np.log(10)*1.05 )
plt.pcolormesh(bins_t, bins_z, (np.log(I_tz+5.e-1)/np.log(10.)).T, cmap=cmap, vmin=0, vmax=vmax)
else :
vmax = max(np.max(I_tz), 5)
plt.pcolormesh(bins_t, bins_z, I_tz.T, cmap=cmap, vmin=0, vmax=vmax)
if True:
plt.xlim(0,)
plt.ylim(0,)
plt.xlabel('$t ~/~ \\mathrm{s}$')
plt.ylabel('$z$ pixel row')
headline = self.headline(fileNumber, np.shape(data_e)[0])
plt.title(headline, loc='left', y=1.0, c='r')
plt.colorbar()
plt.savefig(output, format='png', dpi=150)
def Iq(self, fileNumber, arg, data_e):
I_q, bins_q = np.histogram(data_e[:,8], bins = self.q_grid())
err_q = np.sqrt(I_q+1)
q_lim = 4*np.pi*np.array([ max( np.sin(self.theta_grid()[0]*np.pi/180.)/self.lamda_grid()[-1] , 1e-4 ),
min( np.sin(self.theta_grid()[-1]*np.pi/180.)/self.lamda_grid()[0] , 0.03 )])
if arg == 'log':
low_q = np.where(I_q-err_q>0, I_q-err_q, 0.1)
plt.fill_between(bins_q[:-1], np.log(low_q)/np.log(10), np.log(I_q+err_q/2)/np.log(10), color='lightgrey')
plt.plot(bins_q[:-1], np.log(I_q+5e-1)/np.log(10), color='blue', linewidth=0.5)
lw = I_q[ ((q_lim[0] < bins_q[:-1]) & (bins_q[:-1] < q_lim[1])) ].min()
plt.ylim(max(-0.1, np.log(lw+.1)/np.log(10)-0.1), )
else:
plt.plot(bins_q[:-1], I_q, color='blue', linewidth=0.5)
plt.ylabel('$\\log_{10}(\\mathrm{cnts})$')
plt.xlabel('$q_z ~/~ \\mathrm{\\AA}^{-1}$')
plt.xlim(q_lim)
headline = self.headline(fileNumber, np.shape(data_e)[0])
plt.title(headline, loc='left', y=1.0, c='r')
plt.savefig(output, format='png', dpi=150)
def Il(self, fileNumber, arg, data_e):
I_l, bins_l = np.histogram(data_e[:,6], bins = self.lamda_grid())
if arg == 'lin':
plt.plot(bins_l[:-1], I_l)
plt.ylabel('$I ~/~ \\mathrm{cnts}$')
else:
plt.plot(bins_l[:-1], np.log(I_l+5.e-1)/np.log(10.))
plt.ylabel('$\\log_{10} I ~/~ \\mathrm{cnts}$')
plt.xlabel('$\\lambda ~/~ \\mathrm{\\AA}$')
headline = self.headline(fileNumber, np.shape(data_e)[0])
plt.title(headline, loc='left', y=1.0, c='r')
plt.savefig(output, format='png', dpi=150)
def It(self, fileNumber, arg, data_e):
I_t, bins_t = np.histogram(data_e[:,7], bins = self.theta_grid())
plt.plot( I_t, bins_t[:-1])
plt.xlabel('$\\mathrm{cnts}$')
plt.ylabel('$\\theta ~/~ \\mathrm{deg}$')
headline = self.headline(fileNumber, np.shape(data_e)[0])
plt.title(headline, loc='left', y=1.0, c='r')
plt.savefig(output, format='png', dpi=150)
def tof(self, fileNumber, arg, data_e):
time_grid = np.arange(0, 1.3*tau, 0.0005)
I_t, bins_t = np.histogram(data_e[:,0], bins = time_grid)
if arg == 'lin':
plt.plot(bins_t[:-1]+tau, I_t)
plt.plot(bins_t[:-1], I_t)
plt.plot(bins_t[:-1]+2*tau, I_t)
else:
lI_t = np.log(I_t+5.e-1)/np.log(10.)
plt.plot(bins_t[:-1]+tau, lI_t)
plt.plot(bins_t[:-1], lI_t)
plt.plot(bins_t[:-1]+2*tau, lI_t)
plt.ylabel('log(counts)')
plt.xlabel('time / s')
headline = self.headline(fileNumber, np.shape(data_e)[0])
plt.title(headline, loc='left', y=1.0, c='r')
plt.savefig(output, format='png')
#==============================================================================
def process(dataPath, ident, clas):
#================================
# constants
hdm = 6.626176e-34/1.674928e-27 # h / m
#================================
# instrument specific parameters
#================================
global lamdaMin, lamdaMax, qMin, qMax, thetaMin, thetaMax, yMin, yMax
# defaults
lamdaCut = 2.5 # Aa used to reshuffle tof
# data filtering and folding
#================================
if clas.lambdaRange:
lamdaMin = clas.lambdaRange[0]
lamdaMax = clas.lambdaRange[1]
else:
lamdaMin = lamdaCut
chopperPhase = clas.chopperPhase
tofOffset = clas.TOFOffset
thetaMin = clas.thetaRange[0]
thetaMax = clas.thetaRange[1]
yMin = clas.yRange[0]
yMax = clas.yRange[1]
qMin = clas.qRange[0]
qMax = clas.qRange[1]
#================================
# find and open input file
global ev
data_eSum = np.array([[0, 0, 0, 0, 0, 0, 0, 0, 0]])
sumTime = 0
number = resolveNumber(dataPath, ident)
fileName, fileNumber = fileNameCreator(dataPath, str(number))
if verbous:
logging.info('life_histogrammer processing file ->\033[1m {} \033[0m<-'.format(fileNumber))
for i in range(6):
ev = h5py.File(fileName, 'r', swmr=True)
try:
ev['/entry1/Amor/detector/data/event_time_zero'][-1]
break
except (KeyError, IndexError):
ev.close()
if i < 5:
if verbous:
print("no data yet, retrying ({}) ".format(10-2*i), end='\r')
time.sleep(2)
continue
else:
if verbous:
print("# time-out: no longer waiting for data!\a")
return
# get and process data
meta = Meta(fileName)
global mu, nu, tau
if clas.mu < 98.:
mu = clas.mu
else:
mu = meta.mu + clas.muOffset
if clas.nu < 98.:
nu = clas.nu
else:
nu = meta.nu
if clas.chopperSpeed:
tau = 30./ clas.chopperSpeed
else:
tau = meta.tau
try:
chPh
except NameError:
chPh = meta.chPh
spin = meta.spin
global countingTime, detectorDistance, chopperDetectorDistance
detectorDistance = meta.detectorDistance
chopperDetectorDistance = meta.chopperDetectorDistance
countingTime = meta.countingTime
if verbous:
logging.info(" mu = {:>4.2f} deg, nu = {:>4.2f} deg".format(mu, nu))
if spin == 'u':
logging.info(' spin <+|')
elif spin == 'd':
logging.info(' spin <-|')
try: lamdaMax
except NameError: lamdaMax = lamdaMin + tau * hdm/chopperDetectorDistance * 1e13
tofOffset = tau * chopperPhase / 180. # mismatch of chopper pulse and time-zero
tofCut = lamdaCut * chopperDetectorDistance / hdm * 1.e-13 # tof of frame start
tof_e = np.array(ev['/entry1/Amor/detector/data/event_time_offset'][:], dtype=np.uint64)/1.e9 + tofOffset # tof
detPixelID_e = np.array(ev['/entry1/Amor/detector/data/event_id'][:], dtype=np.uint64) # pixel index
dataPacket_p = np.array(ev['/entry1/Amor/detector/data/event_index'][:], dtype=np.uint64) # data packet index
tof_e = np.where(tof_e<tofCut, tof_e+2.*tau, tof_e)
tof_e = np.where(tof_e>tau+tofCut, tof_e-tau, tof_e)
data_e = analyse_ev(detPixelID_e, tof_e, yMin, yMax, thetaMin, thetaMax)
ev.close()
data_eSum = np.append(data_eSum, data_e, axis=0)
sumTime += countingTime
if verbous:
logging.info(" total counts = {} in {:6.1f} s".format(np.shape(data_e)[0], sumTime))
#================================
# plotting data
plotType = clas.plot[0]
try:
arg = clas.plot[1]
except IndexError:
arg = 'def'
plott = PlotSelection()
try:
plotFunction = getattr(plott, plotType)
plotFunction(fileNumber, arg, data_e)
plt.close()
except Exception as e:
logging.error(f"ERROR: '{plotType}' is no known output format!")
logging.error(f" original error: {e}")
#==============================================================================
def commandLineArgs():
msg = "events2histogram reads the eventstream from an hdf raw file and \
creates various histogrammed outputs or plots."
clas = argparse.ArgumentParser(description = msg)
clas.add_argument("-c", "--chopperSpeed",
type=float,
help ="chopper speed in rpm")
clas.add_argument("-d", "--dataPath",
help ="relative path to directory with .hdf files")
clas.add_argument("-f", "--fileIdent",
default='0',
help ="file number or offset (if negative)")
clas.add_argument("-l", "--lambdaRange",
nargs=2,
type=float,
help ="wavelength range to be used")
clas.add_argument("-M", "--muOffset",
default=0.,
type=float,
help ="mu offset")
clas.add_argument("-m", "--mu",
default=99.,
type=float,
help ="value of mu")
clas.add_argument("-n", "--nu",
default=99.,
type=float,
help ="value of nu")
clas.add_argument("-P", "--chopperPhase",
default=-5.,
type=float,
help ="chopper phase offset")
clas.add_argument("-p", "--plot",
default=['all', 'def'],
nargs='+',
help ="select what to plot or write")
clas.add_argument("-q", "--qRange",
default=[0.005, 0.30],
nargs=2,
type=float,
help ="q_z range")
clas.add_argument("-T", "--TOFOffset",
default=0.0,
type=float,
help ="TOF zero offset")
clas.add_argument("-t", "--thetaRange",
default=[-12., 12.],
nargs=2,
type=float,
help ="theta range to be used")
clas.add_argument("-Y", "--year",
default = str(datetime.today()).split('-')[0],
help = "year, the measurement was performed")
clas.add_argument("-y", "--yRange",
default=[0, 63],
nargs=2,
type=int,
help ="detector y range to be used")
return clas.parse_args()
#==============================================================================
def get_dataPath(clas):
if clas.dataPath:
dataPath = clas.dataPath + '/'
if not os.path.exists(dataPath):
sys.exit('# *** the directory "'+dataPath+'" does not exist ***')
elif os.path.exists('./raw'):
dataPath = "./raw/"
elif os.path.exists('../raw'):
dataPath = "../raw/"
else:
sys.exit('# *** please provide the path to the .hdf data files (-d <path>, default is "./raw")')
return dataPath
#==============================================================================
def get_directDataPath(clas):
#dataPath = clas.dataPath + '/'
year = str(datetime.today()).split('-')[0]
year_date = str(datetime.today()).split(' ')[0].replace("-", "/", 1)
pNr = str(subprocess.getoutput('/usr/bin/grep "proposal\t" /home/amor/nicosdata/amor/cache/nicos-exp/'+year_date)[-1]).split('\'')[1]
dataPath = '/home/amor/nicosdata/amor/data/'+year+'/'+pNr+'/'
if not os.path.exists(dataPath):
sys.exit('# *** the directory "'+dataPath+'" does not exist ***')
return dataPath
#==============================================================================
def main():
global verbous, output, dataPath
clas = commandLineArgs()
dataPath = get_dataPath(clas)
logging.basicConfig(level=logging.INFO, format='# %(message)s')
verbous = True
output = 'life_plot.png'
process(dataPath, clas.fileIdent, clas)
#==============================================================================
#==============================================================================
if __name__ == "__main__":
main()
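The wavelength conversion used in the script above (`1.e13 * tof * hdm / flight_path`) follows from lambda = h/(m_n v) with v = L/t: the factor 1e13 combines m to Angstrom (1e10) with mm to m (1e3), since the flight path is given in mm. A minimal standalone check (the 10 ms / 9.5 m numbers below are illustrative, not instrument calibration):

```python
import numpy as np

H_OVER_M = 6.626176e-34 / 1.674928e-27  # h / m_neutron in m^2/s, as in the script

def tof_to_lambda(tof_s, flight_path_mm):
    # lambda[Angstrom] = 1e13 * t[s] * (h/m) / L[mm]
    # (1e10 converts m -> Angstrom, 1e3 converts mm -> m)
    return 1.0e13 * np.asarray(tof_s) * H_OVER_M / flight_path_mm

lam = tof_to_lambda(0.01, 9500.0)  # 10 ms time of flight over 9.5 m
```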


@@ -1 +0,0 @@
eos.py

pyproject.toml (new file)

@@ -0,0 +1,6 @@
[build-system]
requires = [
"setuptools>=42",
"wheel"
]
build-backend = "setuptools.build_meta"

requirements.txt

@@ -2,3 +2,5 @@ numpy
h5py
orsopy
numba
backports.strenum; python_version<"3.11"
backports.zoneinfo; python_version<"3.9"

setup.cfg (new file)

@@ -0,0 +1,36 @@
[bdist_wheel]
universal = 1
[metadata]
name = amor_eos
version = attr: eos.__version__
author = Jochen Stahn - Paul Scherrer Institut
author_email = jochen.stahn@psi.ch
description = EOS reflectometry reduction for AMOR instrument
long_description = Reduces data obtained by focusing time of flight neutron reflectivity to full reflectivity curve.
license = MIT
classifiers =
Programming Language :: Python :: 3
License :: OSI Approved :: MIT License
Operating System :: OS Independent
Topic :: Scientific/Engineering
Development Status :: 5 - Production/Stable
[options]
python_requires = >=3.8
packages =
    eos
install_requires =
    numpy
    h5py
    orsopy
    numba
    backports.strenum; python_version<"3.11"
    backports.zoneinfo; python_version<"3.9"
[options.entry_points]
console_scripts =
    eos = eos.__main__:main
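The console_scripts entry maps the `eos` command onto `eos.__main__:main`. A minimal sketch of the `__main__.py` layout this assumes (the argument handling here is a placeholder, not the real eos option parser):

```python
import sys

def main(argv=None):
    # the real entry point would parse reduction options and dispatch here
    argv = sys.argv[1:] if argv is None else argv
    if "--help" in argv:
        print("usage: eos [options]")
    return 0

if __name__ == "__main__":
    sys.exit(main())
```

Keeping the body in a `main()` function lets both the installed console script and `python -m eos` share one code path.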

test_data/amor2025n005952.hdf LFS Normal file

Binary file not shown.

test_data/amor2025n006003.hdf LFS Normal file

Binary file not shown.

test_data/amor2025n006004.hdf LFS Normal file

Binary file not shown.

test_data/amor2025n006005.hdf LFS Normal file

Binary file not shown.

tests/analyze_hdf.py Normal file

@@ -0,0 +1,31 @@
"""
Small helper to find information about hdf datafiles for debugging
"""
import h5py
def rec_tree(group, min_size=128):
    if hasattr(group, 'keys'):
        output = {}
        total_size = 0
        for key in group.keys():
            subkeys, size = rec_tree(group[key], min_size)
            total_size += size
            if size > min_size:
                if subkeys:
                    output[key] = subkeys
                else:
                    output[key] = size
        return output, total_size
    elif hasattr(group, 'size'):
        return None, group.size
    else:
        return None, 0


if __name__ == "__main__":
    import sys
    for fi in sys.argv[1:]:
        print(fi)
        with h5py.File(fi, 'r') as f:
            print(rec_tree(f))
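The recursive walk in `rec_tree` only relies on `.keys()` and `.size`, so it can be exercised without an HDF5 file: nested dicts of numpy arrays expose the same duck-typed interface. A standalone sketch (function and data names are illustrative):

```python
import numpy as np

def tree_sizes(group, min_size=128):
    """Duck-typed variant of rec_tree: works on h5py groups or nested dicts."""
    if hasattr(group, 'keys'):
        output = {}
        total_size = 0
        for key in group.keys():
            subkeys, size = tree_sizes(group[key], min_size)
            total_size += size
            if size > min_size:
                # report the subtree if there is one, else just the element count
                output[key] = subkeys if subkeys else size
        return output, total_size
    elif hasattr(group, 'size'):
        return None, group.size
    return None, 0

mock = {
    'entry1': {
        'small': np.zeros(10),       # below min_size, dropped from the tree
        'events': np.zeros(100000),  # large dataset, reported with its size
    }
}
tree, total = tree_sizes(mock)
# tree -> {'entry1': {'events': 100000}}, total -> 100010
```

This makes the size-threshold behaviour easy to unit-test before pointing the helper at a multi-gigabyte event file.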

tests/mock_data.py Normal file

@@ -0,0 +1,64 @@
"""
Generate a mock dataset in memory for running unit tests.
"""
import h5py
import numpy as np
MOCK_METADATA = {
    'title': 'Testdata',
    'proposal_id': 'none',
    'user/name': 'test user',
    'user/email': 'test@user.de',
    'sample/name': 'test sample',
    'sample/model': 'air | Fe 12 | Si',
    'Amor/source/name': 'SINQ',
    'start_time': '2025-01-01 00:00:01',
}

MOCK_META_TYPED = {
    'Amor/chopper/pair_separation': (1000.0, np.float32),
    'Amor/detector/transformation/distance': (4000.0, np.float64),
    'Amor/instrument_control_parameters/kappa': (1000.0, np.float64),
    'Amor/instrument_control_parameters/kappa_offset': (1000.0, np.float64),
    'Amor/instrument_control_parameters/div': (1.6, np.float64),
    'Amor/chopper/ch1_trigger_phase': (-9.1, np.float64),
    'Amor/chopper/ch2_trigger_phase': (6.75, np.float64),
    'Amor/chopper/ch2_trigger/event_time_zero': ([0.0]*10, np.uint64),
    'Amor/chopper/ch2_trigger/event_time_offset': ([0.0]*10, np.uint32),
    'Amor/chopper/rotation_speed': (500.0, np.float64),
    'Amor/chopper/phase': (0.0, np.float64),
    'Amor/polarization/configuration/value': (0.0, np.float64),
}
def mock_data(mu=1.0, nu=2.0):
    hdf = h5py.File.in_memory()  # requires h5py >=3.13
    ds = hdf.create_group('entry1')
    for key, value in MOCK_METADATA.items():
        ds.create_dataset(key, data=np.array([value.encode('utf-8')]))
    for key, (value, dtype) in MOCK_META_TYPED.items():
        if type(value) is list:
            ds.create_dataset(key, data=np.array(value), dtype=dtype)
        else:
            ds.create_dataset(key, data=np.array([value]), dtype=dtype)
    ds.create_dataset('Amor/instrument_control_parameters/mu', data=np.array([mu]), dtype=np.float64)
    ds.create_dataset('Amor/instrument_control_parameters/nu', data=np.array([nu]), dtype=np.float64)
    return hdf
def compare_with_real_data(fname):
    hdf = h5py.File(fname, 'r')
    ds = hdf['entry1']
    for key, value in MOCK_METADATA.items():
        try:
            ds[key][0].decode('utf-8')
        except KeyError:
            print(f'/entry1/{key} does not exist in file')
    for key, (value, dtype) in MOCK_META_TYPED.items():
        try:
            item = ds[key]
        except KeyError:
            print(f'/entry1/{key} does not exist in file')
        else:
            if item.dtype != dtype:
                print(f'/entry1/{key} does not match {dtype}, dataset is {item.dtype}')
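`MOCK_META_TYPED` pairs each HDF5 path with a value and an explicit dtype; the wrapping logic used when creating the datasets can be sketched standalone with numpy (a reduced copy of the table, keys taken from the original):

```python
import numpy as np

# reduced copy of the MOCK_META_TYPED table
META_TYPED = {
    'Amor/chopper/rotation_speed': (500.0, np.float64),
    'Amor/chopper/ch2_trigger/event_time_zero': ([0.0] * 10, np.uint64),
}

arrays = {}
for key, (value, dtype) in META_TYPED.items():
    # scalars are wrapped into length-1 arrays, lists are kept as-is
    data = value if isinstance(value, list) else [value]
    arrays[key] = np.array(data, dtype=dtype)
```

Pinning the dtype at creation time is what lets `compare_with_real_data` flag a mismatch between the mock and a real AMOR file.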


@@ -1,16 +1,27 @@
import os
import cProfile
from unittest import TestCase
from libeos import options, reduction, logconfig
from dataclasses import fields, MISSING

from eos import options, reduction, logconfig

logconfig.setup_logging()
logconfig.update_loglevel(True, False)
logconfig.update_loglevel(1)

# TODO: add test for new features like proton charge normalization
# TODO: add unit tests for individual parts of reduction

class FullAmorTest(TestCase):
    @classmethod
    def setUpClass(cls):
        # generate map for option defaults
        cls._field_defaults = {}
        for opt in [options.ExperimentConfig, options.ReductionConfig, options.OutputConfig]:
            defaults = {}
            for field in fields(opt):
                if field.default not in [None, MISSING]:
                    defaults[field.name] = field.default
                elif field.default_factory not in [None, MISSING]:
                    defaults[field.name] = field.default_factory()
            cls._field_defaults[opt.__name__] = defaults
        cls.pr = cProfile.Profile()

    @classmethod
@@ -20,47 +31,47 @@ class FullAmorTest(TestCase):
    def setUp(self):
        self.pr.enable()
        self.reader_config = options.ReaderConfig(
            year=2023,
            dataPath=os.path.join('..', "test_data"),
            raw=(os.path.join('..', "test_data"),)
            year=2025,
            rawPath=["test_data"],
        )

    def tearDown(self):
        self.pr.disable()
        for fi in ['test.Rqz.ort', '614.norm']:
        for fi in ['test_results/test.Rqz.ort', 'test_results/5952.norm']:
            try:
                os.unlink(os.path.join(self.reader_config.dataPath, fi))
                os.unlink(fi)
            except FileNotFoundError:
                pass
    def test_time_slicing(self):
        experiment_config = options.ExperimentConfig(
            chopperSpeed=self._field_defaults['ExperimentConfig']['chopperSpeed'],
            chopperPhase=-13.5,
            chopperPhaseOffset=-5,
            yRange=(11., 41.),
            lambdaRange=(2., 15.),
            qzRange=(0.005, 0.30),
            incidentAngle=options.Defaults.incidentAngle,
            monitorType=self._field_defaults['ExperimentConfig']['monitorType'],
            lowCurrentThreshold=self._field_defaults['ExperimentConfig']['lowCurrentThreshold'],
            yRange=(18, 48),
            lambdaRange=(3., 11.5),
            incidentAngle=self._field_defaults['ExperimentConfig']['incidentAngle'],
            mu=0,
            nu=0,
            muOffset=0.0,
            sampleModel='air | 10 H2O | D2O'
        )
        reduction_config = options.ReductionConfig(
            normalisationMethod=options.Defaults.normalisationMethod,
            normalisationMethod=self._field_defaults['ReductionConfig']['normalisationMethod'],
            qResolution=0.01,
            qzRange=options.Defaults.qzRange,
            thetaRange=(-12., 12.),
            thetaRangeR=(-12., 12.),
            fileIdentifier=["610"],
            qzRange=self._field_defaults['ReductionConfig']['qzRange'],
            thetaRange=(-0.75, 0.75),
            fileIdentifier=["6003-6005"],
            scale=[1],
            normalisationFileIdentifier=[],
            timeSlize=[300.0]
        )
        output_config = options.OutputConfig(
            outputFormats=["Rqz.ort"],
            outputName='test'
            outputFormats=[options.OutputFomatOption.Rqz_ort],
            outputName='test',
            outputPath='test_results',
        )
        config=options.EOSConfig(self.reader_config, experiment_config, reduction_config, output_config)
        # run three times to get similar timing to noslicing runs
@@ -73,30 +84,32 @@ class FullAmorTest(TestCase):
    def test_noslicing(self):
        experiment_config = options.ExperimentConfig(
            chopperSpeed=self._field_defaults['ExperimentConfig']['chopperSpeed'],
            chopperPhase=-13.5,
            chopperPhaseOffset=-5,
            yRange=(11., 41.),
            lambdaRange=(2., 15.),
            qzRange=(0.005, 0.30),
            incidentAngle=options.Defaults.incidentAngle,
            monitorType=self._field_defaults['ExperimentConfig']['monitorType'],
            lowCurrentThreshold=self._field_defaults['ExperimentConfig']['lowCurrentThreshold'],
            yRange=(18, 48),
            lambdaRange=(3., 11.5),
            incidentAngle=self._field_defaults['ExperimentConfig']['incidentAngle'],
            mu=0,
            nu=0,
            muOffset=0.0
            muOffset=0.0,
        )
        reduction_config = options.ReductionConfig(
            normalisationMethod=self._field_defaults['ReductionConfig']['normalisationMethod'],
            qResolution=0.01,
            qzRange=options.Defaults.qzRange,
            normalisationMethod=options.Defaults.normalisationMethod,
            thetaRange=(-12., 12.),
            thetaRangeR=(-12., 12.),
            fileIdentifier=["610", "611", "608,612-613", "609"],
            qzRange=self._field_defaults['ReductionConfig']['qzRange'],
            thetaRange=(-0.75, 0.75),
            fileIdentifier=["6003", "6004", "6005"],
            scale=[1],
            normalisationFileIdentifier=["614"],
            autoscale=(True, True)
            normalisationFileIdentifier=["5952"],
            autoscale=(0.0, 0.05),
        )
        output_config = options.OutputConfig(
            outputFormats=["Rqz.ort"],
            outputName='test'
            outputFormats=[options.OutputFomatOption.Rqz_ort],
            outputName='test',
            outputPath='test_results',
        )
        config=options.EOSConfig(self.reader_config, experiment_config, reduction_config, output_config)
        reducer = reduction.AmorReduction(config)
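The `timeSlize=[300.0]` option exercised above cuts an event stream into 300 s slices. Since the raw `event_time_zero` stamps in the mock metadata are nanosecond timestamps, the binning involves a 1e9 conversion factor; a minimal sketch (the function name is illustrative, not the eos implementation):

```python
import numpy as np

def slice_indices(event_time_zero_ns, slice_length_s=300.0):
    """Assign each event to a time slice; timestamps are in nanoseconds."""
    t_s = (event_time_zero_ns - event_time_zero_ns[0]) / 1e9  # ns -> s
    return (t_s // slice_length_s).astype(np.int64)

stamps = np.array([0, 1, 299, 301, 650], dtype=np.uint64) * np.uint64(10**9)
slices = slice_indices(stamps)
# slices -> [0, 0, 0, 1, 2]
```

Dropping or misplacing the 1e9 factor would put every event into slice 0 (or into astronomically large slice numbers), which is the kind of error the time-slicing fix in this history addresses.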


@@ -1,8 +1,15 @@
# -*- mode: python ; coding: utf-8 -*-
from PyInstaller.utils.hooks import collect_all
datas = []
binaries = []
hiddenimports = []
tmp_ret = collect_all('tzdata')
datas += tmp_ret[0]; binaries += tmp_ret[1]; hiddenimports += tmp_ret[2]
a = Analysis(
    ['eos.py'],
    ['eos/__main__.py'],
    pathex=[],
    binaries=[],
    datas=[],
@@ -12,23 +19,20 @@ a = Analysis(
    runtime_hooks=[],
    excludes=[],
    noarchive=False,
    optimize=0,
    optimize=1,
)
pyz = PYZ(a.pure)
exe = EXE(
    pyz,
    a.scripts,
    a.binaries,
    a.datas,
    [],
    exclude_binaries=True,
    name='eos',
    debug=False,
    bootloader_ignore_signals=False,
    strip=False,
    upx=True,
    upx_exclude=[],
    runtime_tmpdir=None,
    console=True,
    disable_windowed_traceback=False,
    argv_emulation=False,
@@ -36,3 +40,12 @@ exe = EXE(
    codesign_identity=None,
    entitlements_file=None,
)
coll = COLLECT(
    exe,
    a.binaries,
    a.datas,
    strip=False,
    upx=True,
    upx_exclude=[],
    name='eos',
)