24 Commits

Author SHA1 Message Date
5e96a20f23 Evaluating ToF to extract sub-frames and assign to the correct neutron pulse, not the chopper pulse (#4)
All checks were successful
Interpolate neutron times between chopper pulses and assign by ToF. If the ToF is too short (long wavelengths from the previous pulse), assign the events to the previous sub-pulse.

Reviewed with Jochen

Reviewed-on: #4
Co-authored-by: Artur Glavic <artur.glavic@psi.ch>
Co-committed-by: Artur Glavic <artur.glavic@psi.ch>
2026-03-10 15:00:10 +01:00
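
As a reading aid, a minimal numpy sketch of the sub-pulse interleaving this commit describes; tau_ns, the wall times and the sub-frame indices are hypothetical values, the actual implementation is the MergeFrames change in the diff further below.

import numpy as np

tau_ns = 50_000_000                          # hypothetical neutron sub-pulse spacing in ns
wall = np.array([0, 0, 1, 1]) * 2 * tau_ns   # chopper-pulse wall time of each event
sub = np.array([0, 1, -1, 0])                # sub-frame index from the ToF evaluation

utimes, uidx = np.unique(wall, return_inverse=True)
inter = np.empty(2 * utimes.size + 1, dtype=np.int64)
inter[0] = utimes[0] - tau_ns                # sub-pulse before the first chopper pulse
inter[1::2] = utimes                         # first neutron pulse of each chopper pulse
inter[2::2] = utimes + tau_ns                # second neutron pulse, tau later
wall_new = inter[2 * uidx + sub + 1]         # index -1 falls back to the previous sub-pulse
print(wall_new)                              # [0 50000000 50000000 100000000]
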
3eed3e1058 Update revision for release
All checks were successful
2026-03-10 14:38:51 +01:00
b1188fcaab Merge pull request 'Deactivate automatic logfile writing' (#6) from logFile into main
All checks were successful
Reviewed-on: #6
2026-03-10 14:37:38 +01:00
3cd2d1a0a9 Set up logfile in eos only when the command-line parameter is set
All checks were successful
2026-03-10 14:34:17 +01:00
927a18e64b logfile now optional 2026-03-10 13:42:59 +01:00
ffd296ef99 eos/options.py updated 2026-03-10 09:21:39 +01:00
b79f79b524 Manually increase version for release
All checks were successful
2026-03-02 12:15:42 +01:00
4f590f7761 Merge pull request 'Auto-update minor revision and date on pull request merge' (#3) from new_praction into main
All checks were successful
Reviewed-on: #3
2026-03-02 12:13:59 +01:00
21a1fd1be3 auto-update minor revision and date on pull request merge
All checks were successful
2026-03-02 12:11:29 +01:00
93baf06655 Fixing release and test actions
All checks were successful
Reviewed-on: #2
Co-authored-by: Artur Glavic <artur.glavic@psi.ch>
Co-committed-by: Artur Glavic <artur.glavic@psi.ch>
2026-03-02 12:00:36 +01:00
29d406a290 Add unit tests for filtering capability and automatic spin state splitting
Some checks failed
Unit Testing / test (3.12) (push) Successful in 46s
Unit Testing / test (3.10) (push) Successful in 48s
Unit Testing / test (3.9) (push) Successful in 45s
Unit Testing / test (3.8) (push) Successful in 48s
Release / test (3.12) (push) Failing after 10s
Release / build-ubuntu-latest (push) Has been cancelled
Release / build-windows (push) Has been cancelled
Release / release (push) Has been cancelled
Release / test (3.8) (push) Has been cancelled
Release / test (3.10) (push) Has been cancelled
Release / test (3.9) (push) Has been cancelled
2026-02-27 11:44:56 +01:00
5d9c0879b4 Implement automatic splitting of polarization states 2026-02-27 10:59:59 +01:00
7f0e6f1026 Allow option to filter pulses where a switch occurred, implement updating of header information from filtered log values for temperature, field and polarization, don't report empty sample environment values
All checks were successful
2026-02-27 10:08:49 +01:00
30574cdf7e Try to fix failures on Python <3.11 where Z-format ISO time was not supported
All checks were successful
2026-02-27 08:54:26 +01:00
6298487bf3 Fix output name having a colon by default, add 2026 dataset and test for logfilter with polarization
Some checks failed
Unit Testing / test (3.10) (push) Failing after 29s
Unit Testing / test (3.8) (push) Failing after 27s
Unit Testing / test (3.9) (push) Failing after 27s
Unit Testing / test (3.12) (push) Successful in 35s
2026-02-27 08:35:56 +01:00
3a7f3cde53 fix sqrt invalid value warning in footprint correction
All checks were successful
2026-02-27 08:06:16 +01:00
dafff07e68 Update version for PyPI release
All checks were successful
2026-02-26 17:31:12 +01:00
9afd15bcb4 Deactivate creating artifact, as broken on Gitea right now
All checks were successful
2026-02-26 17:13:42 +01:00
6a4f1c6205 Change artifact action version to work on Gitea
All checks were successful
2026-02-26 17:08:15 +01:00
6aacbd5f22 Try to fix PyPI release
All checks were successful
2026-02-26 16:58:57 +01:00
dc7dd2a6f2 Remove unnecessary config class
All checks were successful
2026-02-26 16:44:53 +01:00
a22c23658f Add eosls command to list files with some header info
Some checks failed
Unit Testing / test (3.8) (push) Failing after 10s
Unit Testing / test (3.9) (push) Failing after 10s
Unit Testing / test (3.12) (push) Successful in 27s
Unit Testing / test (3.10) (push) Successful in 29s
2026-02-26 16:41:54 +01:00
246c179481 Change runner for actions to Gitea
All checks were successful
Co-authored-by: bruhn_b <basil.bruhn@psi.ch>
Reviewed-on: #1
Co-authored-by: Artur Glavic <artur.glavic@psi.ch>
Co-committed-by: Artur Glavic <artur.glavic@psi.ch>
2026-02-26 14:34:25 +01:00
30f616d3ab Add option to append Rqz datasets
Some checks failed
Unit Testing / test (3.10) (push) Has been cancelled
Unit Testing / test (3.11) (push) Has been cancelled
Unit Testing / test (3.12) (push) Has been cancelled
Unit Testing / test (3.8) (push) Has been cancelled
Unit Testing / test (3.9) (push) Has been cancelled
2026-02-26 12:54:33 +01:00
27 changed files with 665 additions and 235 deletions

.github/workflows/merge_PR.yml vendored Normal file

@@ -0,0 +1,17 @@
name: Pull Request Merged
on:
  pull_request:
    types: [closed]
jobs:
  autoupdate_minor:
    if: github.event.pull_request.merged == true
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: '3.13'
      - name: Perform PR auto-action
        run: |
          python PR_merged.py


@@ -24,16 +24,15 @@ on:
jobs:
test:
runs-on: ubuntu-22.04
runs-on: ubuntu-latest
strategy:
matrix:
python-version: ['3.8', '3.9', '3.10', '3.11', '3.12', '3.13']
fail-fast: false
python-version: ['3.8', '3.9', '3.10', '3.11', '3.12']
fail-fast: true
steps:
- uses: actions/checkout@v4
with:
lfs: 'true'
- name: Checkout LFS objects
run: git clone https://${{secrets.GITHUB_TOKEN}}@gitea.psi.ch/${{ github.repository }}.git .
- name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python@v5
with:
@@ -47,8 +46,32 @@ jobs:
- name: Test with pytest
run: |
export PYTEST_DISABLE_PLUGIN_AUTOLOAD=1
python -m pytest tests
release:
if: ${{ (github.event_name != 'workflow_dispatch') || (contains(fromJson('["all_incl_release"]'), github.event.inputs.build-items)) }}
runs-on: ubuntu-latest
needs: [test]
steps:
- uses: actions/checkout@v4
- name: get latest version tag and release note
run: |
git fetch --depth=500
this_tag=$(python -c "import eos;print('v'+eos.__version__)")
prev_tag=$(git describe --abbrev=0 --tags `git rev-list --tags --skip=1 --max-count=1`)
echo "Relese tag $this_tag with changes since $prev_tag"
echo "Changes:
" > git_changes.log
git log $prev_tag..HEAD --pretty=format:"* %s" >> git_changes.log
cat git_changes.log
echo "RELEASE_TAG=$this_tag" >> $GITHUB_ENV
- name: Create Release
uses: actions/gitea-release-action@v1
with:
name: "Amor-Eos ${{ env.RELEASE_TAG }}"
tag_name: "${{ env.RELEASE_TAG }}"
body_path: git_changes.log
build-ubuntu-latest:
needs: [test]
@@ -60,76 +83,55 @@ jobs:
- uses: actions/checkout@v4
- uses: actions/setup-python@v5
with:
python-version: '3.11'
python-version: '3.12'
- name: Install dependencies
run: |
python -m pip install --upgrade pip
pip install build
pip install -r requirements.txt
pip install wheel build twine
- name: Build PyPI package
run: |
python3 -m build
- name: Archive distribution
uses: actions/upload-artifact@v4
with:
name: linux-dist
path: |
dist/*.tar.gz
- name: Upload to PyPI
#if: github.event_name != 'workflow_dispatch'
uses: pypa/gh-action-pypi-publish@release/v1
with:
# user: __token__
# password: ${{ secrets.PYPI_TOKEN }}
skip-existing: true
build-windows:
needs: [test]
runs-on: windows-latest
if: ${{ (github.event_name != 'workflow_dispatch') || (contains(fromJson('["all", "windows", "all_incl_release"]'), github.event.inputs.build-items)) }}
steps:
- uses: actions/checkout@v4
- name: Set up Python
uses: actions/setup-python@v5
with:
python-version: 3.12
- name: Install dependencies
run: |
C:\Miniconda\condabin\conda.bat env update --file conda_windows.yml --name base
C:\Miniconda\condabin\conda.bat init powershell
- name: Build with pyinstaller
run: |
pyinstaller windows_build.spec
cd dist\eos
Compress-Archive -Path .\* -Destination ..\..\eos.zip
- name: Archive distribution
uses: actions/upload-artifact@v4
twine upload dist/* -u __token__ -p ${{ secrets.PYPI_TOKEN }} --skip-existing
this_tag=$(python -c "import eos;print('v'+eos.__version__)")
echo "RELEASE_TAG=$this_tag" >> $GITHUB_ENV
- name: Update Release
if: ${{ (github.event_name != 'workflow_dispatch') || (contains(fromJson('["all_incl_release"]'), github.event.inputs.build-items)) }}
uses: actions/gitea-release-action@v1
with:
name: windows-dist
path: |
eos.zip
name: "Amor-Eos ${{ env.RELEASE_TAG }}"
tag_name: "${{ env.RELEASE_TAG }}"
files: |-
dist/amor*.tar.gz
release:
if: ${{ (github.event_name != 'workflow_dispatch') || (contains(fromJson('["all_incl_release"]'), github.event.inputs.build-items)) }}
runs-on: ubuntu-latest
needs: [build-ubuntu-latest, build-windows]
steps:
- uses: actions/checkout@v4
with:
fetch-depth: 0
fetch-tags: true
- uses: actions/download-artifact@v4
with:
name: linux-dist
- uses: actions/download-artifact@v4
with:
name: windows-dist
- name: get latest version tag
run: echo "RELEASE_TAG=$(git describe --abbrev=0 --tags)" >> $GITHUB_ENV
- uses: ncipollo/release-action@v1
with:
artifacts: "amor*.tar.gz,*.zip"
token: ${{ secrets.GITHUB_TOKEN }}
allowUpdates: true
tag: ${{ env.RELEASE_TAG }}
# build-windows:
# needs: [test]
# runs-on: windows-latest
# if: ${{ (github.event_name != 'workflow_dispatch') || (contains(fromJson('["all", "windows", "all_incl_release"]'), github.event.inputs.build-items)) }}
#
# steps:
# - uses: actions/checkout@v4
# - name: Set up Python
# uses: actions/setup-python@v5
# with:
# python-version: 3.12
# - name: Install dependencies
# run: |
# C:\Miniconda\condabin\conda.bat env update --file conda_windows.yml --name base
# C:\Miniconda\condabin\conda.bat init powershell
# - name: Build with pyinstaller
# run: |
# pyinstaller windows_build.spec
# cd dist\eos
# Compress-Archive -Path .\* -Destination ..\..\eos.zip
# - name: Update Release
# if: ${{ (github.event_name != 'workflow_dispatch') || (contains(fromJson('["all_incl_release"]'), github.event.inputs.build-items)) }}
# uses: actions/gitea-release-action@v1
# with:
# name: "Amor-Eos ${{ env.RELEASE_TAG }}"
# tag_name: "${{ env.RELEASE_TAG }}"
# files: |-
# eos.zip


@@ -1,38 +1,39 @@
name: Unit Testing
on:
  push:
    branches:
      - main
  pull_request:
  # Allows you to run this workflow manually from the Actions tab
  workflow_dispatch:
jobs:
  test:
    runs-on: ubuntu-22.04
    strategy:
      matrix:
        python-version: ['3.8', '3.9', '3.10', '3.11', '3.12']
      fail-fast: false
    steps:
      - uses: actions/checkout@v4
        with:
          lfs: 'true'
      - name: Set up Python ${{ matrix.python-version }}
        uses: actions/setup-python@v5
        with:
          python-version: ${{ matrix.python-version }}
      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          pip install pytest
          pip install -r requirements.txt
      - name: Test with pytest
        run: |
          python -m pytest tests
name: Unit Testing
on:
  push:
    branches:
      - main
  pull_request:
  # Allows you to run this workflow manually from the Actions tab
  workflow_dispatch:
jobs:
  test:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        python-version: ['3.8', '3.9', '3.10', '3.11', '3.12']
      fail-fast: false
    steps:
      - name: Checkout LFS objects
        run: git clone https://${{secrets.GITHUB_TOKEN}}@gitea.psi.ch/${{ github.repository }}.git .
      - name: Set up Python ${{ matrix.python-version }}
        uses: actions/setup-python@v5
        with:
          python-version: ${{ matrix.python-version }}
      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          pip install pytest
          pip install -r requirements.txt
      - name: Test with pytest
        run: |
          export PYTEST_DISABLE_PLUGIN_AUTOLOAD=1
          python -m pytest tests

PR_merged.py Normal file

@@ -0,0 +1,30 @@
"""
Actions performed automatically on pull request merged. Update changed date and minor revision number.
"""
import os
from eos import __version__
from datetime import datetime
INIT_HEADER = '''"""
Package to handle data redction at AMOR instrument to be used by __main__.py script.
"""
'''
def main():
print("Running PR_merged.py")
print(f" Previous version: {__version__}")
version = __version__.split('.')
version[2] = str(int(version[2])+1)
version = '.'.join(version)
print(f" New version: {version}")
now=datetime.now()
with open(os.path.join('eos','__init__.py'), 'w') as f:
f.write(INIT_HEADER)
f.write(f"__version__ = '{version}'\n")
f.write(f"__date__ = '{now.strftime('%Y-%m-%d')}'\n")
if __name__ == '__main__':
main()


@@ -1,6 +1,6 @@
"""
Package to handle data reduction at AMOR instrument to be used by __main__.py script.
"""
__version__ = '3.2.0'
__date__ = '2026-02-26'
"""
Package to handle data reduction at AMOR instrument to be used by __main__.py script.
"""
__version__ = '3.2.6'
__date__ = '2026-03-10'


@@ -10,7 +10,7 @@ import logging
# need to do absolute import here as pyinstaller requires it
from eos.options import ReflectivityConfig, ReaderConfig, ExperimentConfig, ReflectivityReductionConfig, ReflectivityOutputConfig
from eos.command_line import commandLineArgs
from eos.logconfig import setup_logging, update_loglevel
from eos.logconfig import setup_logging, update_loglevel, setup_logfile
def main():
@@ -27,6 +27,8 @@ def main():
output_config = ReflectivityOutputConfig.from_args(clas)
config = ReflectivityConfig(reader_config, experiment_config, reduction_config, output_config)
if output_config.logFile:
setup_logfile()
logging.warning('######## eos - data reduction for Amor ########')
# only import heavy module if sufficient command line parameters were provided


@@ -4,7 +4,7 @@ from typing import List, Type
from .options import ArgParsable
def commandLineArgs(config_items: List[Type[ArgParsable]], program_name=None):
def commandLineArgs(config_items: List[Type[ArgParsable]], program_name=None, extra_args=[]):
"""
Process command line argument.
The type of the default values is used for conversion and validation.
@@ -36,4 +36,7 @@ def commandLineArgs(config_items: List[Type[ArgParsable]], program_name=None):
f'--{cpc.argument}', **cpc.add_argument_args
)
for ma in extra_args:
clas.add_argument(**ma)
return clas.parse_args()


@@ -1,7 +1,11 @@
"""
Constants used in data reduction.
"""
hdm = 6.626176e-34/1.674928e-27 # h / m
lamdaCut = 2.5 # Aa
lamdaMax = 15.0 # Aa
"""
Constants used in data reduction.
"""
hdm = 6.626176e-34/1.674928e-27 # h / m
lamdaCut = 2.5 # Aa
lamdaMax = 15.0 # Aa
polarizationConfigs = ['unpolarized', 'unpolarized', 'po', 'mo', 'op', 'pp', 'mp', 'om', 'pm', 'mm']
polarizationLabels = ['undetermined', 'unpolarized', 'spin-up', 'spin-down', 'op',
'up-up', 'down-up', 'om', 'up-down', 'down-down']


@@ -9,7 +9,7 @@ from typing import Tuple
from . import const
from .event_data_types import EventDataAction, EventDatasetProtocol, append_fields, EVENT_BITMASKS
from .helpers import filter_project_x, merge_frames, extract_walltime
from .helpers import filter_project_x, merge_frames, extract_walltime, add_log_to_pulses, merge_frames_w_index
from .instrument import Detector
from .options import IncidentAngle
from .header import Header
@@ -25,8 +25,9 @@ class ExtractWalltime(EventDataAction):
dataset.data.events = new_events
class MergeFrames(EventDataAction):
def __init__(self, lamdaCut=None):
def __init__(self, lamdaCut=None, extractNeutronPulses=False):
self.lamdaCut=lamdaCut
self.extractNeutronPulses=extractNeutronPulses
def perform_action(self, dataset: EventDatasetProtocol)->None:
if self.lamdaCut is None:
@@ -36,7 +37,34 @@ class MergeFrames(EventDataAction):
tofCut = lamdaCut*dataset.geometry.chopperDetectorDistance/const.hdm*1e-13
total_offset = (tofCut +
dataset.timing.tau * (dataset.timing.ch1TriggerPhase + dataset.timing.chopperPhase/2)/180)
dataset.data.events.tof = merge_frames(dataset.data.events.tof, tofCut, dataset.timing.tau, total_offset)
if self.extractNeutronPulses and 'wallTime' in dataset.data.events.dtype.names:
d = dataset.data
# put events into precise sub-frame
d.events.tof, subframes = merge_frames_w_index(d.events.tof, tofCut, dataset.timing.tau, total_offset)
subframes = subframes.astype(int)
# add a sub-pulse time 1-tau before and after each existing time
utimes, uidxs = np.unique(d.events.wallTime, return_inverse=True)
inter_times = np.empty(2*utimes.shape[0]+1, dtype=d.events.wallTime.dtype)
tauns = dataset.timing.tau*1e9
inter_times[0] = utimes[0]-tauns
inter_times[1::2] = utimes
inter_times[2::2] = utimes+tauns
# use subframe indices to sort existing events into new times
d.events.wallTime = inter_times[2*uidxs+subframes+1]
# expand pulses array with additional sub-frames
new_pulses = np.recarray(2*d.pulses.shape[0]+1, dtype=d.pulses.dtype)
new_pulses[0] = d.pulses[0]
new_pulses[1::2] = d.pulses
new_pulses[2::2] = d.pulses
new_pulses.time[0] = d.pulses.time[0]-tauns
new_pulses.time[2::2] = d.pulses.time+tauns
new_pulses.monitor /= 2.0 # ~preserve total monitor counts
d.pulses = new_pulses
else:
if self.extractNeutronPulses:
logging.error(" Trying to separate neutron pulses while wallTime has not been extracted yet!")
dataset.data.events.tof = merge_frames(dataset.data.events.tof, tofCut, dataset.timing.tau, total_offset)
class AnalyzePixelIDs(EventDataAction):
@@ -126,11 +154,14 @@ class FilterQzRange(EventDataAction):
d.events.mask += EVENT_BITMASKS["qRange"]*((self.qzRange[0]>d.events.qz) | (d.events.qz>self.qzRange[1]))
class FilterByLog(EventDataAction):
def __init__(self, filter_string):
def __init__(self, filter_string, remove_switchpulse=False):
if filter_string.startswith('!'):
filter_string = filter_string[1:]
remove_switchpulse = True
self.filter_string = filter_string
self.remove_switchpulse = remove_switchpulse
def perform_action(self, dataset: EventDatasetProtocol) -> None:
filter_variable = None
@@ -150,34 +181,11 @@ class FilterByLog(EventDataAction):
EVENT_BITMASKS[filter_variable] = max(EVENT_BITMASKS.values())*2
if not filter_variable in dataset.data.pulses.dtype.names:
# interpolate the parameter values for all existing pulses
self.add_log_to_pulses(filter_variable, dataset)
add_log_to_pulses(filter_variable, dataset)
fltr_pulses = eval(self.filter_string, {filter_variable: dataset.data.pulses[filter_variable]})
if self.remove_switchpulse:
switched = fltr_pulses[:-1] & ~fltr_pulses[1:]
fltr_pulses[:-1] &= ~switched
goodTimeS = dataset.data.pulses.time[fltr_pulses]
filter_e = np.logical_not(np.isin(dataset.data.events.wallTime, goodTimeS))
dataset.data.events.mask += EVENT_BITMASKS[filter_variable]*filter_e
def add_log_to_pulses(self, key, dataset: EventDatasetProtocol):
"""
Add a log value for each pulse to the pulses array.
"""
# TODO: perform some interpolation for the pulse where a change occurred
pulses = dataset.data.pulses
log_data = dataset.data.device_logs[key]
if log_data.time[0]>0:
logTimeS = np.hstack([[0], log_data.time, [pulses.time[-1]+1]])
logValues = np.hstack([[log_data.value[0]], log_data.value])
else:
logTimeS = np.hstack([log_data.time, [pulses.time[-1]+1]])
logValues = log_data.value
pulseLogS = np.zeros(pulses.time.shape[0], dtype=log_data.value.dtype)
j = 0
for i, ti in enumerate(pulses.time):
# find the last current item that was before this pulse
while ti>=logTimeS[j+1]:
j += 1
pulseLogS[i] = logValues[j]
pulses = append_fields(pulses, [(key, pulseLogS.dtype)])
pulses[key] = pulseLogS
dataset.data.pulses = pulses
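
For reference, a minimal standalone demonstration of the switch-pulse removal used in the FilterByLog hunk above: the last pulse that still passes the filter before a switch is dropped as well, since the log value may have changed during it (the filter values are hypothetical).

import numpy as np

fltr_pulses = np.array([True, True, True, False, False, True])
switched = fltr_pulses[:-1] & ~fltr_pulses[1:]   # True at the last good pulse before a switch
fltr_pulses[:-1] &= ~switched
print(fltr_pulses)                               # [ True  True False False False  True]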


@@ -46,6 +46,8 @@ EVENT_BITMASKS = {
}
def append_fields(input: np.recarray, new_fields: List[Tuple[str, np.dtype]]):
# TODO: This action is used often and time consuming as it runs len(flds) times over all indices.
# Could only be faster if array is allocated in the beginning with all fields, less flexible.
# add one or more fields to a recarray, numpy functions seem to fail
flds = [(name, dtypei[0]) for name, dtypei in input.dtype.fields.items()]
flds += new_fields


@@ -48,7 +48,7 @@ class ApplyParameterOverwrites(EventDataAction):
with open(self.config.sampleModel, 'r') as model_yml:
model = yaml.safe_load(model_yml)
else:
logging.warning(f' ! the file {self.config.sampleModel}.yml does not exist. Ignored!')
logging.warning(f' ! the file {self.config.sampleModel} does not exist. Ignored!')
return
else:
model = dict(stack=self.config.sampleModel)
@@ -165,10 +165,12 @@ class ApplyMask(EventDataAction):
self.bitmask_filter = bitmask_filter
def perform_action(self, dataset: EventDatasetProtocol) ->None:
# TODO: why is this action time consuming?
# TODO: Most time in test examples is spent here.
# While the actions here are very simple, they act on a large array,
# so even just comparison and indexing become time consuming.
d = dataset.data
pre_filter = d.events.shape[0]
if logging.getLogger().level == logging.DEBUG:
if logging.getLogger().level <= logging.DEBUG:
# only run this calculation if debug level is actually active
filtered_by_mask = {}
for key, value in EVENT_BITMASKS.items():
@@ -183,3 +185,20 @@ class ApplyMask(EventDataAction):
d.events = d.events[fltr]
post_filter = d.events.shape[0]
logging.info(f' number of events: total = {pre_filter:7d}, filtered = {post_filter:7d}')
if d.device_logs == {} or not hasattr(dataset, 'update_info_from_logs'):
return
# filter pulses and logs to allow update of header information
from .helpers import add_log_to_pulses
times = np.unique(d.events.wallTime)
# make sure all log variables are associated with pulses
for key, log in d.device_logs.items():
if not key in d.pulses.dtype.names:
# interpolate the parameter values for all existing pulses
add_log_to_pulses(key, dataset)
# remove all pulses that have no more events
d.pulses = d.pulses[np.isin(d.pulses.time, times)]
for key, log in d.device_logs.items():
d.device_logs[key] = np.recarray(d.pulses.shape, dtype = log.dtype)
d.device_logs[key].time = d.pulses.time
d.device_logs[key].value = d.pulses[key]
dataset.update_info_from_logs()


@@ -3,6 +3,7 @@ Reading of Amor NeXus data files to extract metadata and event stream.
"""
from typing import BinaryIO, List, Union
import sys
import h5py
import numpy as np
import logging
@@ -27,6 +28,7 @@ except ImportError:
# Time zone used to interpret time strings
AMOR_LOCAL_TIMEZONE = zoneinfo.ZoneInfo(key='Europe/Zurich')
UTC = zoneinfo.ZoneInfo(key='UTC')
class AmorHeader:
"""
@@ -47,8 +49,8 @@ class AmorHeader:
chopper_separation=('entry1/Amor/chopper/pair_separation', float),
detector_distance=('entry1/Amor/detector/transformation/distance', float),
chopper_distance=('entry1/Amor/chopper/distance', float),
sample_temperature=('entry1/sample/temperature', float, 'ignore'),
sample_magnetic_field=('entry1/sample/magnetic_field', float, 'ignore'),
sample_temperature=('entry1/sample/temperature', float),
sample_magnetic_field=('entry1/sample/magnetic_field', float),
mu=('entry1/Amor/instrument_control_parameters/mu', float, 'mu'),
nu=('entry1/Amor/instrument_control_parameters/nu', float, 'nu'),
@@ -94,7 +96,6 @@ class AmorHeader:
try:
hdfgrp = self.hdf[hdf_path]
if hdfgrp.attrs.get('NX_class', None) == 'NXlog':
self._log_keys.append(key)
# use the last value that was recorded before the count started
time_column = hdfgrp['time'][:]
try:
@@ -102,9 +103,12 @@ class AmorHeader:
except IndexError:
start_index = 0
if hdfgrp['value'].ndim==1:
return dtype(hdfgrp['value'][start_index])
output = dtype(hdfgrp['value'][start_index])
else:
return dtype(hdfgrp['value'][start_index, 0])
output = dtype(hdfgrp['value'][start_index, 0])
# make sure key is only appended if no exception was raised
self._log_keys.append(key)
return output
elif dtype is str:
return self.read_string(hdf_path)
else:
@@ -136,8 +140,14 @@ class AmorHeader:
start_time = self.rv('start_time_fallback')
# extract start time as unix time, adding UTC offset of 1h to time string
if start_time.endswith('Z') and sys.version_info.minor<11:
# older python versions did not support Z format
start_time = start_time[:-1]
TZ = UTC
else:
TZ = AMOR_LOCAL_TIMEZONE
start_date = datetime.fromisoformat(start_time)
self.fileDate = start_date.replace(tzinfo=AMOR_LOCAL_TIMEZONE)
self.fileDate = start_date.replace(tzinfo=TZ)
self._start_time_ns = np.uint64(self.fileDate.timestamp()*1e9)
# read general information and first data set
@@ -185,11 +195,17 @@ class AmorHeader:
)
# while event times are not evaluated, use average_value reported in file for SEE
if self.hdf['entry1/sample'].get('temperature', None) is not None:
sample_temperature = self.rv('sample_temperature')
self.sample.sample_parameters['temperature'] = fileio.Value(sample_temperature, unit='K')
try:
sample_temperature = self.rv('sample_temperature')
except IndexError: pass
else:
self.sample.sample_parameters['temperature'] = fileio.Value(sample_temperature, unit='K')
if self.hdf['entry1/sample'].get('magnetic_field', None) is not None:
sample_magnetic_field = self.rv('sample_magnetic_field')
self.sample.sample_parameters['magnetic_field'] = fileio.Value(sample_magnetic_field, unit='T')
try:
sample_magnetic_field = self.rv('sample_magnetic_field')
except IndexError: pass
else:
self.sample.sample_parameters['magnetic_field'] = fileio.Value(sample_magnetic_field, unit='T')
def read_instrument_configuration(self):
chopperSeparation = self.rv('chopper_separation')
@@ -197,8 +213,6 @@ class AmorHeader:
chopperDistance = self.rv('chopper_distance')
chopperDetectorDistance = detectorDistance - chopperDistance
polarizationConfigs = ['unpolarized', 'unpolarized', 'po', 'mo', 'op', 'pp', 'mp', 'om', 'pm', 'mm']
mu = self.rv('mu')
nu = self.rv('nu')
kap = self.rv('kap')
@@ -227,7 +241,7 @@ class AmorHeader:
self.timing = AmorTiming(ch1TriggerPhase, ch2TriggerPhase, chopperSpeed, chopperPhase, tau)
polarizationConfigLabel = self.rv('polarization_config_label')
polarizationConfig = fileio.Polarization(polarizationConfigs[polarizationConfigLabel])
polarizationConfig = fileio.Polarization(const.polarizationConfigs[polarizationConfigLabel])
logging.debug(f' polarization configuration: {polarizationConfig} (index {polarizationConfigLabel})')
@@ -387,7 +401,7 @@ class AmorEventData(AmorHeader):
hdf_path, dtype, *_ = self.hdf_paths[key]
hdfgroup = self.hdf[hdf_path]
shape = hdfgroup['time'].shape
data = np.recarray(shape, dtype=LOG_TYPE)
data = np.recarray(shape, dtype=np.dtype([('value', self.hdf_paths[key][1]), ('time', np.int64)]))
data.time = hdfgroup['time'][:]
if len(hdfgroup['value'].shape)==1:
data.value = hdfgroup['value'][:]
@@ -395,6 +409,29 @@ class AmorEventData(AmorHeader):
data.value = hdfgroup['value'][:, 0]
self.data.device_logs[key] = data
def update_info_from_logs(self):
RELEVANT_ITEMS = ['sample_temperature', 'sample_magnetic_field', 'polarization_config_label']
for key, log in self.data.device_logs.items():
if key not in RELEVANT_ITEMS:
continue
if log.value.dtype in [np.int8, np.int16, np.int32, np.int64]:
# for integer items (flags) report the most common one
value = np.bincount(log.value).argmax()
if logging.getLogger().getEffectiveLevel() <= logging.DEBUG \
and np.unique(log.value).shape[0]>1:
logging.debug(f' filtered values for {key} not unique, '
f'has {np.unique(log.value).shape[0]} values')
else:
value = log.value.mean()
if key == 'polarization_config_label':
self.instrument_settings.polarization = fileio.Polarization(const.polarizationConfigs[value])
elif key == 'sample_temperature':
self.sample.sample_parameters['temperature'].magnitude = value
elif key == 'sample_magnetic_field':
self.sample.sample_parameters['magnetic_field'].magnitude = value
def read_chopper_trigger_stream(self, packets):
chopper1TriggerTime = np.array(self.hdf['entry1/Amor/chopper/ch2_trigger/event_time_zero'][:-2], dtype=np.int64)
#self.chopper2TriggerTime = self.chopper1TriggerTime + np.array(self.hdf['entry1/Amor/chopper/ch2_trigger/event_time'][:-2], dtype=np.int64)
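
A minimal sketch of the update_info_from_logs logic added above: integer logs (flags) report their most common value, floating-point logs their mean (the log values here are hypothetical).

import numpy as np

flags = np.array([2, 2, 3, 2], dtype=np.int64)   # e.g. polarization_config_label
print(np.bincount(flags).argmax())               # 2, the most common flag
temps = np.array([9.8, 10.1, 10.0])              # e.g. sample_temperature
print(temps.mean())                              # 9.9666..., the reported value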


@@ -1,10 +1,39 @@
"""
Helper functions used during calculations. Uses numba enhanced functions if available, otherwise numpy based
fallback is imported.
"""
try:
    from .helpers_numba import merge_frames, extract_walltime, filter_project_x, calculate_derived_properties_focussing
except ImportError:
    from .helpers_fallback import merge_frames, extract_walltime, filter_project_x, calculate_derived_properties_focussing
"""
Helper functions used during calculations. Uses numba enhanced functions if available, otherwise numpy based
fallback is imported.
"""
import numpy as np

from .event_data_types import EventDatasetProtocol, append_fields

try:
    from .helpers_numba import merge_frames, extract_walltime, filter_project_x, \
        calculate_derived_properties_focussing, merge_frames_w_index
except ImportError:
    import logging
    logging.warning('Cannot import numba enhanced functions, is it installed?')
    from .helpers_fallback import merge_frames, extract_walltime, filter_project_x, \
        calculate_derived_properties_focussing, merge_frames_w_index

def add_log_to_pulses(key, dataset: EventDatasetProtocol):
    """
    Add a log value for each pulse to the pulses array.
    """
    pulses = dataset.data.pulses
    log_data = dataset.data.device_logs[key]
    if log_data.time[0]>0:
        logTimeS = np.hstack([[0], log_data.time, [pulses.time[-1]+1]])
        logValues = np.hstack([[log_data.value[0]], log_data.value])
    else:
        logTimeS = np.hstack([log_data.time, [pulses.time[-1]+1]])
        logValues = log_data.value
    pulseLogS = np.zeros(pulses.time.shape[0], dtype=log_data.value.dtype)
    j = 0
    for i, ti in enumerate(pulses.time):
        # find the last current item that was before this pulse
        while ti>=logTimeS[j+1]:
            j += 1
        pulseLogS[i] = logValues[j]
    pulses = append_fields(pulses, [(key, pulseLogS.dtype)])
    pulses[key] = pulseLogS
    dataset.data.pulses = pulses


@@ -8,6 +8,17 @@ def merge_frames(tof_e, tofCut, tau, total_offset):
    # tof shifted to 1 frame
    return np.remainder(tof_e-(tofCut-tau), tau)+total_offset

def merge_frames_w_index(tof_e, tofCut, tau, total_offset):
    """
    Version of merge frames that also returns a frame index for each event:
     0 - belongs to the frame it was measured in
    -1 - arrived in this frame but belongs to the previous neutron pulse
     1 - belongs to the second neutron pulse of the original frame
    """
    new_tof = merge_frames(tof_e, tofCut, tau, total_offset)
    frame_idx = np.floor_divide(tof_e-tofCut, tau)
    return new_tof, frame_idx

def extract_walltime(tof_e, dataPacket_p, dataPacketTime_p):
    output = np.empty(np.shape(tof_e)[0], dtype=np.int64)
    for i in range(len(dataPacket_p)-1):
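
A tiny numeric check of merge_frames_w_index above (tau, tofCut and the events are hypothetical): an event within its frame gets index 0, one below tofCut belongs to the previous neutron pulse (-1), and one beyond tau to the second pulse of the original frame (1).

import numpy as np

tau, tofCut, total_offset = 100.0, 20.0, 0.0
tof_e = np.array([30.0, 10.0, 130.0])
new_tof = np.remainder(tof_e - (tofCut - tau), tau) + total_offset
frame_idx = np.floor_divide(tof_e - tofCut, tau)
print(new_tof)     # [10. 90. 10.]
print(frame_idx)   # [ 0. -1.  1.]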


@@ -11,6 +11,22 @@ def merge_frames(tof_e, tofCut, tau, total_offset):
        tof_e_out[ti] = ((tof_e[ti]-dt)%tau)+total_offset # tof shifted to 1 frame
    return tof_e_out

@nb.jit(nb.float64[:,:](nb.float64[:], nb.float64, nb.float64, nb.float64),
        nopython=True, parallel=True, cache=True)
def merge_frames_w_index(tof_e, tofCut, tau, total_offset):
    """
    Version of merge frames that also returns a frame index for each event:
     0 - belongs to the frame it was measured in
    -1 - arrived in this frame but belongs to the previous neutron pulse
     1 - belongs to the second neutron pulse of the original frame
    """
    tof_idx_out = np.empty((2, tof_e.shape[0]), dtype=np.float64)
    dt = (tofCut-tau)
    for ti in nb.prange(tof_e.shape[0]):
        tof_idx_out[0, ti] = ((tof_e[ti]-dt)%tau)+total_offset # tof shifted to 1 frame
        tof_idx_out[1, ti] = ((tof_e[ti]-tofCut) // tau) # frame index of the event
    return tof_idx_out

@nb.jit(nb.int64[:](nb.float64[:], nb.uint32[:], nb.int64[:]),
        nopython=True, parallel=True, cache=True)
def extract_walltime(tof_e, dataPacket_p, dataPacketTime_p):


@@ -6,7 +6,7 @@ import logging
import logging.handlers
def setup_logging():
logger = logging.getLogger() # logging.getLogger('quicknxs')
logger = logging.getLogger()
logger.setLevel(logging.DEBUG)
# rename levels to make clear a warning can be a normal message
logging.addLevelName(logging.INFO, 'VERB')
@@ -19,18 +19,22 @@ def setup_logging():
console.setLevel(logging.WARNING)
logger.addHandler(console)
# if os.path.exists('amor_eos.log'):
# rollover = True
# else:
# rollover = False
logfile = logging.handlers.RotatingFileHandler('amor_eos.log', encoding='utf8', mode='w',
maxBytes=200*1024**2, backupCount=20)
# if rollover: logfile.doRollover()
def setup_logfile():
logfile = logging.handlers.RotatingFileHandler(
'amor_eos.log',
encoding='utf8',
mode='w',
maxBytes=200*1024**2,
backupCount=20,
)
formatter = logging.Formatter(
'[%(levelname).4s] - %(asctime)s - %(filename)s:%(lineno)i:%(funcName)s %(message)s',
'')
'[%(levelname).4s] - %(asctime)s - %(filename)s:%(lineno)i:%(funcName)s %(message)s',
'',
)
logfile.setFormatter(formatter)
logfile.setLevel(logging.DEBUG)
logger = logging.getLogger()
logger.addHandler(logfile)
def update_loglevel(verbose=0):

eos/ls.py Normal file

@@ -0,0 +1,54 @@
"""
eosls executable script to list available datafiles in current folder with some metadata information.
Author: Jochen Stahn (algorithms, python draft),
Artur Glavic (structuring and optimisation of code)
"""
import os
import logging
from eos.command_line import commandLineArgs
def main():
logging.getLogger().setLevel(logging.CRITICAL)
clas = commandLineArgs([], 'eosls', extra_args=[
dict(dest='path', nargs='*', default=['.'], help='paths to list file in')])
from glob import glob
import tabulate
from eos.file_reader import AmorHeader
files = []
for path in clas.path:
files+=glob(os.path.join(path, 'amor*.hdf'))
files.sort()
data = {
'File name': [],
'Start Time': [],
'mu': [],
'nu': [],
'div': [],
'Sample': [],
'T [K]': [],
'H [T]': [],
}
for fi in files:
data['File name'].append(os.path.basename(fi))
ah = AmorHeader(fi)
data['Sample'].append(ah.sample.name)
data['Start Time'].append(ah.fileDate.strftime('%y %m-%d %H:%M:%S'))
data['mu'].append('%.3f' % ah.geometry.mu)
data['nu'].append('%.3f' % ah.geometry.nu)
data['div'].append('%.3f' % ah.geometry.div)
T = ah.sample.sample_parameters.get('temperature', None)
data['T [K]'].append(T.magnitude if T is not None else '-')
H = ah.sample.sample_parameters.get('magnetic_field', None)
data['H [T]'].append(H.magnitude if H is not None else '-')
print(tabulate.tabulate(data, headers="keys"))
if __name__ == '__main__':
main()


@@ -1,5 +1,5 @@
"""
events2histogram visualising data from Amor@SINQ, PSI
amor-nicos visualising data from Amor@SINQ, PSI
Author: Jochen Stahn (algorithms, python draft),
Artur Glavic (structuring and optimisation of code)


@@ -480,6 +480,16 @@ class ReflectivityReductionConfig(ArgParsable):
},
)
extractNeutronPulses: bool = field(
default=False,
metadata={
'short': 'np',
'group': 'data manicure',
'help': 're-assign events to actual neutron pulses => '
'2x better filter resolution but extra computation (experimental)',
},
)
class OutputFomatOption(StrEnum):
Rqz_ort = "Rqz.ort"
@@ -526,6 +536,13 @@ class ReflectivityOutputConfig(ArgParsable):
'help': '?',
},
)
logFile: bool = field(
default = False,
metadata = {
'group': 'output',
'help': 'write log file "amor_eos.log"',
}
)
plot: bool = field(
default=False,
metadata={
@@ -543,6 +560,14 @@ class ReflectivityOutputConfig(ArgParsable):
},
)
append: bool = field(
default=False,
metadata={
'group': 'output',
'help': 'if file already exists, append result as additional ORSO dataset (only Rqz.ort)',
},
)
def _output_format_list(self, outputFormat):
format_list = []
if OutputFomatOption.ort in outputFormat\


@@ -224,6 +224,7 @@ class LZProjection(ProjectionInterface):
# do not perform gravity correction for footprint, would require norm detector distance that is unknown here
fp_corr_lz = np.where(np.absolute(delta_lz+norm.angle)>5e-3,
(delta_lz+self.angle)/(delta_lz+norm.angle), np.nan)
fp_corr_lz[fp_corr_lz<0] = np.nan
self.data.mask &= np.logical_not(np.isnan(fp_corr_lz))
self.data.norm = norm_lz*fp_corr_lz
self.norm_monitor = norm.monitor
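
A minimal illustration of the guard added above: negative footprint-correction factors become NaN and are then masked out together with the other NaN values (the correction values here are hypothetical).

import numpy as np

fp_corr = np.array([1.2, -0.3, 0.9])
fp_corr[fp_corr < 0] = np.nan
mask = np.logical_not(np.isnan(fp_corr))
print(fp_corr, mask)   # [1.2 nan 0.9] [ True False  True]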


@@ -5,6 +5,8 @@ import sys
import numpy as np
from orsopy import fileio
from .event_analysis import FilterByLog
from .event_handling import ApplyMask
from .file_reader import AmorEventData
from .header import Header
from .path_handling import PathResolver
@@ -63,7 +65,7 @@ class ReflectivityReduction:
# the filtering only makes sense if using actual monitor data, not time
self.dataevent_actions |= eh.FilterMonitorThreshold(self.config.experiment.lowCurrentThreshold)
self.dataevent_actions |= eh.FilterStrangeTimes()
self.dataevent_actions |= ea.MergeFrames()
self.dataevent_actions |= ea.MergeFrames(extractNeutronPulses=self.config.reduction.extractNeutronPulses)
self.dataevent_actions |= ea.AnalyzePixelIDs(self.config.experiment.yRange)
self.dataevent_actions |= eh.TofTimeCorrection(self.config.experiment.incidentAngle==IncidentAngle.alphaF)
self.dataevent_actions |= ea.CalculateWavelength(self.config.experiment.lambdaRange)
@@ -109,7 +111,7 @@ class ReflectivityReduction:
# output
if self.config.output.is_default('outputName'):
import datetime
_date = datetime.datetime.now().replace(microsecond=0).isoformat()
_date = datetime.datetime.now().replace(microsecond=0).isoformat().replace(':', '-')
if self.header.sample.name:
_sampleName = self.header.sample.name.replace(' ', '_')
else:
@@ -165,7 +167,31 @@ class ReflectivityReduction:
self.header.measurement_data_files.append(fileio.File( file=os.path.basename(fileName),
timestamp=self.dataset.fileDate))
if 'polarization_config_label' in self.dataset.data.device_logs:
pols = np.unique(self.dataset.data.device_logs['polarization_config_label'].value)
pols = pols[pols>0]
if len(pols)>1:
logging.warning(f' found {len(pols)} polarization configurations, splitting dataset accordingly')
from copy import deepcopy
from . import const
full_ds = deepcopy(self.dataset)
for pi in pols:
plabel = const.polarizationLabels[pi]
pol_filter = FilterByLog(f'polarization_config_label=={pi}',
remove_switchpulse=True) | ApplyMask()
logging.info(f' filter {plabel} using polarization_config_label=={pi}')
pol_filter(self.dataset)
self.dataset.update_header(self.header)
pol_filter.update_header(self.header)
if self.config.reduction.timeSlize:
if i>0:
logging.warning(
" time slizing should only be used for one set of datafiles, check parameters")
self.analyze_timeslices(i, polstr=f' : polarization = {plabel}')
else:
self.analyze_unsliced(i, polstr=f' : polarization = {plabel}')
self.dataset = deepcopy(full_ds)
return
if self.config.reduction.timeSlize:
if i>0:
logging.warning(" time slizing should only be used for one set of datafiles, check parameters")
@@ -173,7 +199,7 @@ class ReflectivityReduction:
else:
self.analyze_unsliced(i)
def analyze_unsliced(self, i):
def analyze_unsliced(self, i, polstr=''):
self.monitor = self.dataset.data.pulses.monitor.sum()
logging.info(f' monitor = {self.monitor:8.2f} {MONITOR_UNITS[self.config.experiment.monitorType]}')
@@ -186,7 +212,7 @@ class ReflectivityReduction:
if 'Rqz.ort' in self.config.output.outputFormats:
headerRqz = self.header.orso_header()
headerRqz.data_set = f'Nr {i} : mu = {self.dataset.geometry.mu:6.3f} deg'
headerRqz.data_set = f'Nr {i} : mu = {self.dataset.geometry.mu:6.3f} deg{polstr}'
# projection on qz-grid
result = proj.project_on_qz()
@@ -261,7 +287,7 @@ class ReflectivityReduction:
proj.plot(colorbar=True, cmap=str(self.config.output.plot_colormap))
plt.title(f'{self.config.reduction.fileIdentifier[i]}')
def analyze_timeslices(self, i):
def analyze_timeslices(self, i, polstr=''):
wallTime_e = np.float64(self.dataset.data.events.wallTime)/1e9
pulseTimeS = np.float64(self.dataset.data.pulses.time)/1e9
interval = self.config.reduction.timeSlize[0]
@@ -311,7 +337,7 @@ class ReflectivityReduction:
headerRqz = self.header.orso_header(
extra_columns=[fileio.Column('time', 's', 'time relative to start of measurement series')])
headerRqz.data_set = f'{i}_{ti}: time = {time:8.1f} s to {time+interval:8.1f} s'
headerRqz.data_set = f'{i}_{ti}: time = {time:8.1f} s to {time+interval:8.1f} s{polstr}'
orso_data = fileio.OrsoDataset(headerRqz, result.data_for_time(time))
self.datasetsRqz.append(orso_data)
@@ -329,8 +355,23 @@ class ReflectivityReduction:
def save_Rqz(self):
fname = os.path.join(self.config.output.outputPath, f'{self.config.output.outputName}.Rqz.ort')
logging.warning(f' {fname}')
theSecondLine = f' {self.header.experiment.title} | {self.header.experiment.start_date} | sample {self.header.sample.name} | R(q_z)'
fileio.save_orso(self.datasetsRqz, fname, data_separator='\n', comment=theSecondLine)
if os.path.exists(fname) and self.config.output.append:
logging.info(' file already exists, append as new dataset')
with open(fname, 'r') as f:
f.readline()
theSecondLine = f.readline()[3:]
prev_data = fileio.load_orso(fname)
prev_names = [di.info.data_set for di in prev_data]
for i, di in enumerate(self.datasetsRqz):
while di.info.data_set in prev_names:
if di.info.data_set.startswith('Nr '):
di.info.data_set = f'Nr {i+len(prev_data)} :'+di.info.data_set.split(':', 1)[1]
break
di.info.data_set = di.info.data_set+'_'
fileio.save_orso(prev_data+self.datasetsRqz, fname, data_separator='\n', comment=theSecondLine)
else:
theSecondLine = f' {self.header.experiment.title} | {self.header.experiment.start_date} | sample {self.header.sample.name} | R(q_z)'
fileio.save_orso(self.datasetsRqz, fname, data_separator='\n', comment=theSecondLine)
def save_Rtl(self):
fname = os.path.join(self.config.output.outputPath, f'{self.config.output.outputName}.Rlt.ort')


@@ -3,5 +3,6 @@ h5py
orsopy
numba
matplotlib
tabulate
backports.strenum; python_version<"3.11"
backports.zoneinfo; python_version<"3.9"


@@ -34,5 +34,6 @@ Homepage = "https://github.com/jochenstahn/amor"
[options.entry_points]
console_scripts =
eos = eos.__main__:main
eosls = eos.ls:main
events2histogram = eos.e2h:main
amor-nicos = eos.nicos:main

test_data/amor2026n000826.hdf LFS Normal file

Binary file not shown.


@@ -1,5 +1,6 @@
import os
import numpy as np
import logging
from unittest import TestCase
from datetime import datetime
@@ -14,7 +15,7 @@ from eos.event_data_types import EVENT_BITMASKS, AmorGeometry, AmorTiming, AmorE
from eos.event_handling import ApplyPhaseOffset, ApplyParameterOverwrites, CorrectChopperPhase, CorrectSeriesTime, \
AssociatePulseWithMonitor, FilterMonitorThreshold, FilterStrangeTimes, TofTimeCorrection, ApplyMask
from eos.event_analysis import ExtractWalltime, MergeFrames, AnalyzePixelIDs, CalculateWavelength, CalculateQ, \
FilterQzRange
FilterQzRange, FilterByLog
from eos.options import MonitorType, IncidentAngle, ExperimentConfig
@@ -45,7 +46,7 @@ class MockEventData:
# list of data packets containing previous events
packets = np.recarray((1000,), dtype=PACKET_TYPE)
packets.start_index = np.linspace(0, events.shape[0]-1, packets.shape[0], dtype=np.uint32)
packets.time = np.linspace(1700000000000000000, 1700000000000000000+3_600_000,
packets.time = np.linspace(1700000000000000000, 1700000000000000000+3_600_000_000_000,
packets.shape[0], dtype=np.int64)
# chopper pulses within the measurement time
@@ -57,7 +58,7 @@ class MockEventData:
proton_current = np.recarray((50,), dtype=PC_TYPE)
proton_current.current = 1500.0
proton_current[np.random.randint(0, proton_current.shape[0]-1, 10)] = 0. # random time with no current
proton_current.time = np.linspace(1700000000000000300, 1700000000000000000+3_600_000,
proton_current.time = np.linspace(1700000000000000300, 1700000000000000000+3_600_000_000_000,
proton_current.shape[0], dtype=np.int64)
self.data = AmorEventStream(events, packets, pulses, proton_current)
@@ -77,6 +78,28 @@ class MockEventData:
wavelength = ValueRange(3.0, 12.5, 'angstrom'),
polarization = Polarization.unpolarized)
def update_info_from_logs(self):
RELEVANT_ITEMS = ['sample_temperature', 'sample_magnetic_field', 'polarization_config_label']
for key, log in self.data.device_logs.items():
if key not in RELEVANT_ITEMS:
continue
if log.value.dtype in [np.int8, np.int16, np.int32, np.int64]:
# for integer items (flags) report the most common one
value = np.bincount(log.value).argmax()
if logging.getLogger().getEffectiveLevel() <= logging.DEBUG \
and np.unique(log.value).shape[0]>1:
logging.debug(f' filtered values for {key} not unique, '
f'has {np.unique(log.value).shape[0]} values')
else:
value = log.value.mean()
if key == 'polarization_config_label':
self.instrument_settings.polarization = Polarization(const.polarizationConfigs[value])
elif key == 'sample_temperature':
self.sample.sample_parameters['temperature'].magnitude = value
elif key == 'sample_magnetic_field':
self.sample.sample_parameters['magnetic_field'].magnitude = value
class TestActionClass(TestCase):
@classmethod
def setUpClass(cls):
@@ -397,20 +420,30 @@ class TestSimpleActions(TestCase):
dtype=np.int32))
def test_merge_frames(self):
action = MergeFrames(lamdaCut=0.0)
action = MergeFrames(lamdaCut=0.0, extractNeutronPulses=False)
action.perform_action(self.d)
self.assertEqual(self.d.data.events.tof.shape, self.d.orig_data.events.tof.shape)
np.testing.assert_array_compare(lambda x,y: x<=y, self.d.data.events.tof, self.d.orig_data.events.tof)
self.assertTrue((-self.d.timing.tau<=self.d.data.events.tof).all())
np.testing.assert_array_less(self.d.data.events.tof, self.d.timing.tau)
action = MergeFrames(lamdaCut=2.0)
action = MergeFrames(lamdaCut=2.0, extractNeutronPulses=False)
self.d.data.events.tof = self.d.orig_data.events.tof[:]
action.perform_action(self.d)
tofCut = 2.0*self.d.geometry.chopperDetectorDistance/const.hdm*1e-13
self.assertTrue((tofCut-self.d.timing.tau<=self.d.data.events.tof).all())
self.assertTrue((self.d.data.events.tof<=tofCut+self.d.timing.tau).all())
def test_merge_frames_splitting(self):
action = MergeFrames(lamdaCut=0.0, extractNeutronPulses=True)
self._extract_walltime()
action.perform_action(self.d)
self.assertEqual(self.d.data.events.tof.shape, self.d.orig_data.events.tof.shape)
np.testing.assert_array_compare(lambda x,y: x<=y, self.d.data.events.tof, self.d.orig_data.events.tof)
self.assertEqual(self.d.data.pulses.shape[0], self.d.orig_data.pulses.shape[0]*2+1)
np.testing.assert_array_less(self.d.orig_data.pulses.time[:-1], self.d.orig_data.pulses.time[1:])
np.testing.assert_array_less(self.d.data.pulses.time[:-1], self.d.data.pulses.time[1:])
def test_analyze_pixel_ids(self):
action = AnalyzePixelIDs((1000, 1001))
action.perform_action(self.d)
@@ -496,3 +529,44 @@ class TestSimpleActions(TestCase):
self.d.data.events.mask,
np.array([1, 0, 0, 0, 1], dtype=np.int32) * EVENT_BITMASKS['qRange']
)
def test_filter_by_log(self):
action = FilterByLog("test_log==0") | ApplyMask()
class LogWarnError(Exception):
...
def warn_raise(*args, **kwargs):
raise LogWarnError()
_orig_warn = logging.warning
try:
logging.warning = warn_raise
with self.assertRaises(LogWarnError):
action.perform_action(self.d)
finally:
logging.warning = _orig_warn
self._extract_walltime()
test_log = np.recarray(shape=(2,), dtype=np.dtype([('value', np.int32),
('time', np.int64)]))
test_log.time = [-5, self.d.data.pulses.time[100]+123]
test_log.value = [0, 1]
self.d.data.device_logs['test_log'] = test_log
action.perform_action(self.d)
self.assertEqual(self.d.data.pulses.shape[0], 101)
def test_filter_by_log_switchpulse(self):
action = FilterByLog("!test_log==0") | ApplyMask()
self._extract_walltime()
test_log = np.recarray(shape=(2,), dtype=np.dtype([('value', np.int32),
('time', np.int64)]))
test_log.time = [-5, self.d.data.pulses.time[100]+123]
test_log.value = [0, 1]
self.d.data.device_logs['test_log'] = test_log
self.d.data.device_logs['check_log'] = test_log.copy()
action.perform_action(self.d)
self.assertEqual(self.d.data.pulses.shape[0], 100)
np.testing.assert_array_equal(
self.d.data.device_logs['test_log'],
self.d.data.device_logs['check_log'],
)


@@ -1,8 +1,10 @@
import os
import cProfile
import numpy as np
from unittest import TestCase
from dataclasses import fields, MISSING
from eos import options, reduction_reflectivity, logconfig
from orsopy import fileio
logconfig.setup_logging()
logconfig.update_loglevel(1)
@@ -38,28 +40,23 @@ class FullAmorTest(TestCase):
def tearDown(self):
self.pr.disable()
for fi in ['test_results/test.Rqz.ort', 'test_results/5952.norm']:
try:
os.unlink(fi)
except FileNotFoundError:
pass
try:
os.unlink(fi)
except FileNotFoundError:
pass
def test_time_slicing(self):
experiment_config = options.ExperimentConfig(
chopperSpeed=self._field_defaults['ExperimentConfig']['chopperSpeed'],
chopperPhase=-13.5,
chopperPhaseOffset=-5,
monitorType=self._field_defaults['ExperimentConfig']['monitorType'],
lowCurrentThreshold=self._field_defaults['ExperimentConfig']['lowCurrentThreshold'],
yRange=(18, 48),
lambdaRange=(3., 11.5),
incidentAngle=self._field_defaults['ExperimentConfig']['incidentAngle'],
mu=0,
nu=0,
muOffset=0.0,
sampleModel='air | 10 H2O | D2O'
)
reduction_config = options.ReflectivityReductionConfig(
normalisationMethod=self._field_defaults['ReflectivityReductionConfig']['normalisationMethod'],
qResolution=0.01,
qzRange=(0.01, 0.15),
thetaRange=(-0.75, 0.75),
@@ -84,22 +81,16 @@ class FullAmorTest(TestCase):
def test_noslicing(self):
experiment_config = options.ExperimentConfig(
chopperSpeed=self._field_defaults['ExperimentConfig']['chopperSpeed'],
chopperPhase=-13.5,
chopperPhaseOffset=-5,
monitorType=self._field_defaults['ExperimentConfig']['monitorType'],
lowCurrentThreshold=self._field_defaults['ExperimentConfig']['lowCurrentThreshold'],
yRange=(18, 48),
lambdaRange=(3., 11.5),
incidentAngle=self._field_defaults['ExperimentConfig']['incidentAngle'],
mu=0,
nu=0,
muOffset=0.0,
)
reduction_config = options.ReflectivityReductionConfig(
normalisationMethod=self._field_defaults['ReflectivityReductionConfig']['normalisationMethod'],
qResolution=0.01,
qzRange=self._field_defaults['ReflectivityReductionConfig']['qzRange'],
thetaRange=(-0.75, 0.75),
fileIdentifier=["6003", "6004", "6005"],
scale=[1],
@@ -117,3 +108,57 @@ class FullAmorTest(TestCase):
# run second time to reuse norm file
reducer = reduction_reflectivity.ReflectivityReduction(config)
reducer.reduce()
def test_eventfilter(self):
self.reader_config.year = 2026
experiment_config = options.ExperimentConfig()
reduction_config = options.ReflectivityReductionConfig(fileIdentifier=["826"],
logfilter=['polarization_config_label==2'])
output_config = options.ReflectivityOutputConfig(
outputFormats=[options.OutputFomatOption.Rqz_ort],
outputName='test',
outputPath='test_results',
)
config=options.ReflectivityConfig(self.reader_config, experiment_config, reduction_config, output_config)
reducer = reduction_reflectivity.ReflectivityReduction(config)
reducer.reduce()
espin_up = reducer.dataset.data.events.shape[0]
reduction_config.logfilter = ['polarization_config_label==3']
output_config.append = True
reducer = reduction_reflectivity.ReflectivityReduction(config)
reducer.reduce()
espin_down = reducer.dataset.data.events.shape[0]
# measurement should have about 2x as many counts in spin_down
self.assertAlmostEqual(espin_down/espin_up, 2., 2)
# perform the same filter but remove pulses during which the switch occurred
reduction_config.logfilter = ['!polarization_config_label==3']
output_config.append = True
reducer = reduction_reflectivity.ReflectivityReduction(config)
reducer.reduce()
espin_down2 = reducer.dataset.data.events.shape[0]
# removing the switch pulses should slightly reduce the spin_down counts
self.assertLess(espin_down2, espin_down)
def test_polsplitting(self):
self.reader_config.year = 2026
experiment_config = options.ExperimentConfig()
reduction_config = options.ReflectivityReductionConfig(fileIdentifier=["826"])
output_config = options.ReflectivityOutputConfig(
outputFormats=[options.OutputFomatOption.Rqz_ort],
outputName='test',
outputPath='test_results',
)
config=options.ReflectivityConfig(self.reader_config, experiment_config, reduction_config, output_config)
reducer = reduction_reflectivity.ReflectivityReduction(config)
reducer.reduce()
results = fileio.load_orso(os.path.join(output_config.outputPath, output_config.outputName+'.Rqz.ort'))
self.assertEqual(len(results), 2)
self.assertEqual(results[0].info.data_source.measurement.instrument_settings.polarization, 'po')
self.assertEqual(results[1].info.data_source.measurement.instrument_settings.polarization, 'mo')
espin_up = np.nansum(results[0].data[:,1])
espin_down = np.nansum(results[1].data[:,1])
# the total intensity should be around equal as events are doubled and monitor counts are doubled
self.assertAlmostEqual(espin_down/espin_up, 1., 2)


@@ -1,16 +1,16 @@
Make new release
================
- Update revision in `eos/__init__.py`
- Commit changes `git commit -a -m "your message here"`
- Tag version `git tag v3.x.y`
- Push changes `git push` and `git push --tags`
- This should trigger the **Release** action on GitHub that builds a new version and uploads it to PyPI.
Update on AMOR
==============
- Login via SSH using the **amor** user.
- Activate eos virtual environment `source /home/software/virtualenv/eosenv/bin/activate`
- Update eos package `pip install --upgrade amor-eos`
Make new release
================
- Update revision in `eos/__init__.py`
- Commit changes `git commit -a -m "your message here"`
- Push changes `git push`
- Use the workflow dispatch for the [**Release**](https://gitea.psi.ch/sinq-reflectometry/eos/actions?workflow=release.yml&actor=0&status=0) action
on Gitea with option "all_incl_release", which builds a new version, releases it and uploads it to PyPI.
Update on AMOR-DR
=================
- Login via SSH using the **amor-dr** user.
- Activate eos virtual environment `source /home/software/virtualenv/eosenv/bin/activate`
- Update eos package `pip install --upgrade amor-eos`