41 Commits

Author SHA1 Message Date
155a167e26 update version date (checks failed: build-ubuntu-latest failing after 2s, build-windows and release cancelled) 2025-01-30 13:38:45 +01:00
fcc1fd7d84 delete old manual and life_histogram, update events2histogram 2025-01-30 13:37:27 +01:00
ee34cc96d4 direct beam reference fixed 2025-01-29 13:54:56 +01:00
Jochen Stahn 64edd8e8d7 Merge pull request #6 from jochenstahn/faster_pulsetimes (Faster pulsetimes) 2025-01-29 08:21:14 +01:00
96681c8f81 fix case where there is no pulse missing 2025-01-28 16:17:48 +01:00
87a246196f apply scaling factor when time slicing 2025-01-28 16:02:54 +01:00
3607de9864 replace pulseTimeS generation in for loop by filling of pulse gaps 2025-01-28 16:00:52 +01:00
23e310ba6a removed fix coded proton time offset 2024-12-16 13:50:46 +01:00
9f2dfbcbe1 corrected output about automatic scaling 2024-12-16 08:15:11 +01:00
25f8708b4c re-bump version, make release dependent on windows build 2024-12-13 15:57:12 +01:00
2e565f55be Include timezone data in windows distribution 2024-12-13 15:36:22 +01:00
b48bea7726 increment version 2024-12-13 14:46:23 +01:00
55e9407b67 update revision info (checks failed: build-ubuntu-latest failing after 2s, release skipped, build-windows cancelled) 2024-12-13 14:37:50 +01:00
5fc512632f Merge pull request #5 from jochenstahn/time (Handle timezone correctly) 2024-12-13 14:35:16 +01:00
55a7b7270c minor cleanup 2024-12-13 14:33:04 +01:00
f81582554e remove unnecessary requirement for pytz and insert timezone into datetime instead of recreation 2024-12-13 14:17:51 +01:00
3e22a94ff6 install zoneinfo backports for python 3.8 2024-12-13 14:07:12 +01:00
Jochen Stahn c1dc194548 different method to add timezone 2024-12-10 14:39:59 +01:00
Jochen Stahn c5f5602a89 added pytz 2024-12-09 10:19:22 +01:00
cf9c0018a5 tried to handle summer time correctly.... 2024-12-09 10:03:56 +01:00
03ac04ab05 removed control output 2024-12-06 14:59:18 +01:00
b2df980a21 fix of handling of pulses for more than one file 2024-12-06 10:36:25 +01:00
eb54e52c66 changed format for a logging output about low monitor signal 2024-12-04 10:51:05 +01:00
9b4c384425 release to pypi after storing archive to artifact 2024-12-04 08:48:35 +01:00
ce44e04014 add windows build to release action (checks failed: build-ubuntu-latest failing after 2s, release skipped, build-windows cancelled) 2024-12-03 16:11:04 +01:00
ce90fc005a remove upterm from tests 2024-12-03 15:17:12 +01:00
29edbce5bc change iso time offset to include minutes for compatibility with older python 2024-12-03 15:02:25 +01:00
a0b0ff436c replace calculated offset by Iso string 2024-12-03 14:51:09 +01:00
2cae45659c correct for local timezone issues 2024-12-03 14:12:37 +01:00
9693951233 explicitly define utc as timezone for file 2024-12-03 13:52:29 +01:00
e2970a8a5f add upterm action to workflow for debugging 2024-12-03 13:13:28 +01:00
7dcede44b6 removed obsolete control output 2024-11-28 09:03:57 +01:00
f6f853e6e4 commented VERY time consuming control output: events per pulse 2024-11-27 08:29:33 +01:00
ad8245a9a0 relative theta range over absolute over default 2024-11-26 15:27:07 +01:00
33b8829a09 check for empty files introduced 2024-11-22 11:24:17 +01:00
f645082c20 Merge branch 'main' of https://github.com/jochenstahn/amor 2024-11-08 09:01:37 +01:00
1bd1100035 changes in the "pulse generation" - not yet finished 2024-11-08 08:51:56 +01:00
Artur Glavic b94b6714a3 Update setup.cfg 2024-11-05 13:15:20 +01:00
d4462f450e change to rst README 2024-10-30 15:36:43 +01:00
80d1014750 update readme with installation instructions 2024-10-30 15:05:33 +01:00
a3eb6a173e bump version 2024-10-30 14:32:04 +01:00
14 changed files with 276 additions and 2083 deletions


@@ -38,6 +38,12 @@ jobs:
       - name: Build PyPI package
         run: |
           python3 -m build
+      - name: Archive distribution
+        uses: actions/upload-artifact@v4
+        with:
+          name: linux-dist
+          path: |
+            dist/*.tar.gz
       - name: Upload to PyPI
         if: github.event_name != 'workflow_dispatch'
         uses: pypa/gh-action-pypi-publish@release/v1
@@ -45,24 +51,47 @@ jobs:
           user: __token__
           password: ${{ secrets.PYPI_TOKEN }}
           skip-existing: true
+  build-windows:
+    runs-on: windows-latest
+    if: ${{ (github.event_name != 'workflow_dispatch') || (contains(fromJson('["all", "windows"]'), github.event.inputs.build-items)) }}
+    steps:
+      - uses: actions/checkout@v4
+      - name: Set up Python
+        uses: actions/setup-python@v5
+        with:
+          python-version: 3.12
+      - name: Install dependencies
+        run: |
+          C:\Miniconda\condabin\conda.bat env update --file conda_windows.yml --name base
+          C:\Miniconda\condabin\conda.bat init powershell
+      - name: Build with pyinstaller
+        run: |
+          pyinstaller windows_folder.spec
+          cd dist\eos
+          Compress-Archive -Path .\* -Destination ..\..\eos.zip
       - name: Archive distribution
         uses: actions/upload-artifact@v4
         with:
-          name: linux-dist
+          name: windows-dist
           path: |
-            dist/*.tar.gz
+            eos.zip
   release:
     if: github.event_name != 'workflow_dispatch'
     runs-on: ubuntu-latest
-    needs: [build-ubuntu-latest]
+    needs: [build-ubuntu-latest, build-windows]
     steps:
       - uses: actions/checkout@v4
       - uses: actions/download-artifact@v4
         with:
           name: linux-dist
+      - uses: actions/download-artifact@v4
+        with:
+          name: windows-dist
       - uses: ncipollo/release-action@v1
         with:
-          artifacts: "amor*.tar.gz"
+          artifacts: "amor*.tar.gz,*.zip"
           token: ${{ secrets.GITHUB_TOKEN }}
           allowUpdates: true
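The build-windows gate combines two cases: any ordinary push, or a manual workflow_dispatch whose build-items input selects the windows build. A Python paraphrase of that expression (the function name is ours, the logic mirrors the workflow):

def should_build_windows(event_name: str, build_items: str) -> bool:
    # push, tag etc. -> always build; manual dispatch -> only on request
    return event_name != 'workflow_dispatch' or build_items in ('all', 'windows')

assert should_build_windows('push', '')
assert should_build_windows('workflow_dispatch', 'windows')
assert not should_build_windows('workflow_dispatch', 'linux')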


@@ -15,7 +15,7 @@ jobs:
     runs-on: ubuntu-22.04
     strategy:
       matrix:
-        python-version: [3.8, 3.9, '3.10', '3.11', '3.12']
+        python-version: ['3.8', '3.9', '3.10', '3.11', '3.12']
       fail-fast: false
     steps:
@@ -31,6 +31,11 @@
         pip install pytest
         pip install -r requirements.txt
+    - name: Backport to 3.8
+      if: matrix.python-version == '3.8'
+      run: |
+        pip install backports.zoneinfo
     - name: Test with pytest
       run: |
         cd tests


@@ -1,12 +0,0 @@
-Software repository for the neutron reflectometer Amor at the Paul Scherrer Institut, Switzerland
-
-reduction of the raw files (.hdf) to reflectivity files in one of the representations of the **ORSO reflectivity file format**:
-> `eos.py`
-> `neos.py` : version for playing and testing
-
-visualisation of the content of a raw file (.hdf):
-> `events2histogram.py`
-
-#TODO: real readme for final system needed.

README.rst Normal file (27 lines)

@@ -0,0 +1,27 @@
EOS - The AMOR focusing reflectometry data reduction software
-------------------------------------------------------------

.. image:: https://img.shields.io/pypi/v/amor-eos.svg
   :target: https://pypi.python.org/pypi/amor-eos/

Software repository for the neutron reflectometer Amor at the Paul Scherrer Institut, Switzerland

Reduction of the raw files (.hdf) to reflectivity files in one of the representations of the **ORSO reflectivity file format**:

    eos.py --help

visualisation of the content of a raw file (.hdf):

    events2histogram.py

:TODO: real readme for final system needed.

Installation
------------

Create a virtual python environment (>3.8) and install the PyPI package:

    pip install amor-eos

On Windows you can also use the binary eos.exe that you find in the
[GitHub Releases](https://github.com/jochenstahn/amor/releases/latest) section

File diff suppressed because it is too large

conda_windows.yml Normal file (44 lines)

@@ -0,0 +1,44 @@
name: eos_build
channels:
  - defaults
dependencies:
  - altgraph=0.17.3=py312haa95532_0
  - blas=1.0=mkl
  - bzip2=1.0.8=h2bbff1b_6
  - ca-certificates=2024.11.26=haa95532_0
  - expat=2.6.3=h5da7b33_0
  - h5py=3.12.1=py312h3b2c811_0
  - hdf5=1.12.1=h51c971a_3
  - icc_rt=2022.1.0=h6049295_2
  - intel-openmp=2023.1.0=h59b6b97_46320
  - libffi=3.4.4=hd77b12b_1
  - llvmlite=0.43.0=py312hf2fb9eb_0
  - mkl=2023.1.0=h6b88ed4_46358
  - mkl-service=2.4.0=py312h2bbff1b_1
  - mkl_fft=1.3.11=py312h827c3e9_0
  - mkl_random=1.2.8=py312h0158946_0
  - numba=0.60.0=py312h0158946_0
  - numpy=1.26.4=py312hfd52020_0
  - numpy-base=1.26.4=py312h4dde369_0
  - openssl=3.0.15=h827c3e9_0
  - packaging=24.1=py312haa95532_0
  - pefile=2023.2.7=py312haa95532_0
  - pip=24.2=py312haa95532_0
  - pyinstaller=6.9.0=py312h0416ee5_0
  - pyinstaller-hooks-contrib=2024.7=py312haa95532_0
  - python=3.12.7=h14ffc60_0
  - pywin32-ctypes=0.2.2=py312haa95532_0
  - setuptools=75.1.0=py312haa95532_0
  - sqlite=3.45.3=h2bbff1b_0
  - tbb=2021.8.0=h59b6b97_0
  - tk=8.6.14=h0416ee5_0
  - tzdata=2024b=h04d1e81_0
  - vc=14.40=h2eaa2aa_1
  - vs2015_runtime=14.40.33807=h98bb1dd_1
  - wheel=0.44.0=py312haa95532_0
  - xz=5.4.6=h8cc25b3_1
  - zlib=1.2.13=h8cc25b3_1
  - pip:
      - orsopy==1.2.1
      - pyyaml==6.0.2
      - tzdata


@@ -860,7 +860,7 @@ def process(dataPath, ident, clas):
     try: lamdaMax
     except NameError: lamdaMax = lamdaMin + tau * hdm/chopperDetectorDistance * 1e13
-    tofOffset = tau * chopperPhase / 180.                       # mismatch of chopper pulse and time-zero
+    tofOffset = -tau * chopperPhase / 180.                      # mismatch of chopper pulse and time-zero
     tofCut = lamdaCut * chopperDetectorDistance / hdm * 1.e-13  # tof of frame start
     tof_e = np.array(ev['/entry1/Amor/detector/data/event_time_offset'][:], dtype=np.uint64)/1.e9 + tofOffset # tof
@@ -960,7 +960,7 @@ def commandLineArgs():
                       type=float,
                       help ="value of nu")
     clas.add_argument("-P", "--chopperPhase",
-                      default=7.5,
+                      default=-7.5,
                       type=float,
                       help ="chopper phase offset")
     clas.add_argument("-p", "--plot",


@@ -2,5 +2,6 @@
 """
 Package to handle data redction at AMOR instrument to be used by eos.py script.
 """
-__version__ = '2.1.1'
-__date__ = '2024-10-30'
+__version__ = '2.1.4'
+__date__ = '2025-01-30'


@@ -2,7 +2,12 @@ import logging
 import os
 import subprocess
 import sys
-from datetime import datetime
+from datetime import datetime, timezone
+try:
+    import zoneinfo
+except ImportError:
+    # for python versions < 3.9 try to use the backports version
+    from backports import zoneinfo
 from typing import List

 import h5py
@@ -20,18 +25,8 @@ try:
 except Exception:
     nb_helpers = None

-def get_current_per_pulse(pulseTimeS, currentTimeS, currents):
-    # add currents for early pulses and current time value after last pulse (j+1)
-    currentTimeS = np.hstack([[0], currentTimeS, [pulseTimeS[-1]+1]])
-    currents = np.hstack([[0], currents])
-    pulseCurrentS = np.zeros(pulseTimeS.shape[0], dtype=float)
-    j = 0
-    for i, ti in enumerate(pulseTimeS):
-        if ti >= currentTimeS[j+1]:
-            j += 1
-        pulseCurrentS[i] = currents[j]
-        #print(f' {i} {pulseTimeS[i]} {pulseCurrentS[i]}')
-    return pulseCurrentS
+# Time zone used to interpret time strings
+AMOR_LOCAL_TIMEZONE = zoneinfo.ZoneInfo(key='Europe/Zurich')

 class AmorData:
     """read meta-data and event streams from .hdf file(s), apply filters and conversions"""
@@ -196,6 +191,10 @@
         self.read_event_stream()
         totalNumber = np.shape(self.tof_e)[0]
+        # check for empty event stream
+        if totalNumber == 0:
+            logging.error('empty event stream: can not determine end time')
+            sys.exit()
         self.sort_pulses()
@@ -204,11 +203,11 @@
         self.extract_walltime(norm)
         # following lines: debugging output to trace the time-offset of proton current and neutron pulses
-        if self.config.monitorType == 'p':
+        if self.config.monitorType == 'x':
             cpp, t_bins = np.histogram(self.wallTime_e, self.pulseTimeS)
             np.savetxt('tme.hst', np.vstack((self.pulseTimeS[:-1], cpp, self.monitorPerPulse[:-1])).T)
-        self.average_events_per_pulse()
+        #self.average_events_per_pulse() # for debugging only. VERY time consuming!!!
         self.monitor_threshold()
@@ -233,25 +232,58 @@
         pulseTime -= np.int64(self.seriesStartTime)
         self.stopTime = pulseTime[-1]
+        pulseTime = pulseTime[pulseTime>=0]
         # fill in missing pulse times
         # TODO: check for real end time
-        pulseTime = pulseTime[pulseTime>=0]
-        firstPulse = pulseTime[0] % np.int64(self.tau*2e9)
-        self.pulseTimeS = np.array([], dtype=np.int64)
-        nxt = firstPulse
-        for tt in pulseTime:
-            while tt - nxt > self.tau*1e9:
-                self.pulseTimeS = np.append(self.pulseTimeS, nxt)
-                nxt += chopperPeriod
-            self.pulseTimeS = np.append(self.pulseTimeS, tt)
-            nxt = self.pulseTimeS[-1] + chopperPeriod
+        try:
+            # further files
+            # TODO: use the first pulse of the respective measurement
+            #nextPulseTime = startTime % np.int64(self.tau*2e9)
+            #nextPulseTime = self.pulseTimeS[-1] + chopperPeriod
+            nextPulseTime = pulseTime[0]
+        except AttributeError:
+            # first file
+            nextPulseTime = pulseTime[0] % np.int64(self.tau*2e9)
+        # calculate where time tiefference between pulses exceeds its time by more than 1/2
+        # this yields the number of missing pulses
+        pulseLengths = pulseTime[1:]-pulseTime[:-1]
+        pulseExtra = (pulseLengths-np.int64(self.tau*1e9))//np.int64(self.tau*2e9)
+        gap_indices = np.where(pulseExtra>0)[0]
+        if len(gap_indices)==0:
+            # no missing pulses, just use given array
+            self.pulseTimeS = np.array(pulseTime, dtype=np.int64)
+            return
+        self.pulseTimeS = np.array(pulseTime[:gap_indices[0]+1], dtype=np.int64)
+        last_index = gap_indices[0]
+        for gapi in gap_indices[1:]:
+            # insert missing pulses into each gap
+            gap_pulses = pulseTime[last_index]+np.arange(1, pulseExtra[last_index]+1)*chopperPeriod
+            self.pulseTimeS = np.append(self.pulseTimeS, gap_pulses)
+            self.pulseTimeS = np.append(self.pulseTimeS, pulseTime[last_index+1:gapi+1])
+            last_index = gapi
+        if last_index<len(pulseTime):
+            self.pulseTimeS = np.append(self.pulseTimeS, pulseTime[last_index:-1])
+
+    def get_current_per_pulse(self, pulseTimeS, currentTimeS, currents):
+        # add currents for early pulses and current time value after last pulse (j+1)
+        currentTimeS = np.hstack([[0], currentTimeS, [pulseTimeS[-1]+1]])
+        currents = np.hstack([[0], currents])
+        pulseCurrentS = np.zeros(pulseTimeS.shape[0], dtype=float)
+        j = 0
+        for i, ti in enumerate(pulseTimeS):
+            if ti >= currentTimeS[j+1]:
+                j += 1
+            pulseCurrentS[i] = currents[j]
+            #print(f' {i} {pulseTimeS[i]} {pulseCurrentS[i]}')
+        return pulseCurrentS

     def associate_pulse_with_monitor(self):
         if self.config.monitorType == 'p': # protonCharge
             self.currentTime -= np.int64(self.seriesStartTime)
-            self.currentTime -= np.int64(16e9) # time offset of proton current signal
-            self.monitorPerPulse = get_current_per_pulse(self.pulseTimeS, self.currentTime, self.current) * 2*self.tau * 1e-3
+            self.monitorPerPulse = self.get_current_per_pulse(self.pulseTimeS, self.currentTime, self.current) * 2*self.tau * 1e-3
             # filter low-current pulses
             self.monitorPerPulse = np.where(self.monitorPerPulse > 2*self.tau * self.config.lowCurrentThreshold * 1e-3, self.monitorPerPulse, 0)
         elif self.config.monitorType == 't': # countingTime
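The gap-filling block above (commit 3607de9864) replaces the per-pulse Python loop by a vectorized search for gaps between recorded pulse times. A toy illustration of the idea, using hypothetical pulse times in seconds instead of the nanosecond integers the code works with:

import numpy as np

# toy data: chopper half-period tau = 0.5 s, pulse period 2*tau = 1 s;
# the pulses at t = 3 s and t = 4 s were not recorded
tau = 0.5
chopperPeriod = 2 * tau
pulseTime = np.array([0., 1., 2., 5., 6.])

pulseLengths = pulseTime[1:] - pulseTime[:-1]                  # spacing of recorded pulses
pulseExtra = ((pulseLengths - tau) // chopperPeriod).astype(int)
gap_indices = np.where(pulseExtra > 0)[0]
print(gap_indices, pulseExtra[gap_indices])                    # [2] [2] -> two pulses missing after index 2

# fill the gap with equally spaced pulse times, as the loop above does
gap_pulses = pulseTime[2] + np.arange(1, pulseExtra[2] + 1) * chopperPeriod
filled = np.concatenate([pulseTime[:3], gap_pulses, pulseTime[3:]])
print(filled)                                                  # [0. 1. 2. 3. 4. 5. 6.]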
@@ -274,7 +306,7 @@
         if self.config.monitorType == 'p':
             for i, time in enumerate(self.pulseTimeS):
                 events = np.shape(self.wallTime_e[self.wallTime_e == time])[0]
-                #print(f' {i:6.0f} {events:6.0f} {self.monitorPerPulse[i]:6.2f}')
+                logging.info(f'pulse: {i:6.0f}, events: {events:6.0f}, monitor: {self.monitorPerPulse[i]:6.2f}')

     def monitor_threshold(self):
         if self.config.monitorType == 'p': # fix to check for file compatibility
@@ -361,7 +393,7 @@
             self.pixelID_e = self.pixelID_e[filter_e]
             self.wallTime_e = self.wallTime_e[filter_e]
             if np.shape(filter_e)[0]-np.shape(self.tof_e)[0]>0.5:
-                logging.warning(f'# strange times: {np.shape(filter_e)[0]-np.shape(self.tof_e)[0]}')
+                logging.warning(f'  strange times: {np.shape(filter_e)[0]-np.shape(self.tof_e)[0]}')

     def read_event_stream(self):
         self.tof_e = np.array(self.hdf['/entry1/Amor/detector/data/event_time_offset'][:])/1.e9
@@ -424,8 +456,10 @@
         if self.config.nu:
             self.nu = self.config.nu
-        self.fileDate = datetime.fromisoformat( self.hdf['/entry1/start_time'][0].decode('utf-8') )
-        self.startTime = np.int64( self.fileDate.timestamp() * 1e9 )
+        # extract start time as unix time, adding UTC offset of 1h to time string
+        dz = datetime.fromisoformat(self.hdf['/entry1/start_time'][0].decode('utf-8'))
+        self.fileDate = dz.replace(tzinfo=AMOR_LOCAL_TIMEZONE)
+        self.startTime = np.int64( (self.fileDate.timestamp() ) * 1e9 )
         if self.seriesStartTime is None:
             self.seriesStartTime = self.startTime
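For reference, a minimal, self-contained sketch of the time-zone pattern this diff introduces (the time string is a made-up example; the instrument writes naive local-time strings):

from datetime import datetime
try:
    import zoneinfo
except ImportError:
    from backports import zoneinfo   # python < 3.9

AMOR_LOCAL_TIMEZONE = zoneinfo.ZoneInfo(key='Europe/Zurich')

# attach the instrument's time zone to the naive time stamp, then convert;
# DST is handled by the tz database (+01:00 in winter, +02:00 in summer)
dz = datetime.fromisoformat('2024-12-13 14:35:16')
fileDate = dz.replace(tzinfo=AMOR_LOCAL_TIMEZONE)
startTime = int(fileDate.timestamp() * 1e9)        # ns since the unix epoch
print(fileDate.isoformat())                        # 2024-12-13T14:35:16+01:00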


@@ -178,11 +178,11 @@ class AmorReduction:
             interval = self.reduction_config.timeSlize[0]
             try:
                 start = self.reduction_config.timeSlize[1]
-            except:
+            except IndexError:
                 start = 0
             try:
                 stop = self.reduction_config.timeSlize[2]
-            except:
+            except IndexError:
                 stop = wallTime_e[-1]
             # make overwriting log lines possible by removing newline at the end
             #logging.StreamHandler.terminator = "\r"
@@ -195,10 +195,16 @@
                 detZ_e = self.file_reader.detZ_e[filter_e]
                 filter_m = np.where((time<pulseTimeS) & (pulseTimeS<time+interval), True, False)
                 self.monitor = np.sum(self.file_reader.monitorPerPulse[filter_m])
-                logging.info(f' {ti:<4d} {time:5.0f} {self.monitor:7.2f} {self.monitorUnit[self.experiment_config.monitorType]}')
+                logging.info(f' {ti:<4d} {time:6.0f} {self.monitor:7.2f} {self.monitorUnit[self.experiment_config.monitorType]}')
                 qz_lz, qx_lz, ref_lz, err_lz, res_lz, lamda_lz, theta_lz, int_lz, mask_lz = self.project_on_lz(
                         self.file_reader, self.norm_lz, self.normAngle, lamda_e, detZ_e)
+                try:
+                    ref_lz *= self.reduction_config.scale[i]
+                    err_lz *= self.reduction_config.scale[i]
+                except IndexError:
+                    ref_lz *= self.reduction_config.scale[-1]
+                    err_lz *= self.reduction_config.scale[-1]
                 q_q, R_q, dR_q, dq_q = self.project_on_qz(qz_lz, ref_lz, err_lz, res_lz, self.norm_lz, mask_lz)
                 filter_q = np.where((self.experiment_config.qzRange[0]<q_q) & (q_q<self.experiment_config.qzRange[1]),
@@ -265,7 +271,7 @@
             scale = 1.
         R_q /= scale
         dR_q /= scale
-        logging.debug(f' scaling factor = {scale}')
+        logging.info(f' scaling factor = {1/scale}')
         return R_q, dR_q
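The try/except IndexError added in the time-slicing loop lets a short list of scale factors cover many slices by reusing the last entry. A hypothetical illustration (the values are made up; the option that fills reduction_config.scale is not shown in this diff):

scale = [1.0, 0.5]            # e.g. two user-supplied factors for four time slices
for i in range(4):
    try:
        s = scale[i]
    except IndexError:
        s = scale[-1]         # reuse the last factor for the remaining slices
    print(i, s)               # 0 1.0 / 1 0.5 / 2 0.5 / 3 0.5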
@@ -345,19 +351,23 @@
             lamda_e = fromHDF.lamda_e
             detZ_e = fromHDF.detZ_e
             self.normMonitor = np.sum(fromHDF.monitorPerPulse)
-            self.norm_lz, bins_l, bins_z = np.histogram2d(lamda_e, detZ_e, bins = (self.grid.lamda(), self.grid.z()))
-            self.norm_lz = np.where(self.norm_lz>2, self.norm_lz, np.nan)
-            # correct for the SM reflectivity
-            lamda_l = self.grid.lamda()
-            theta_z = self.normAngle + fromHDF.delta_z
-            lamda_lz = (self.grid.lz().T*lamda_l[:-1]).T
-            theta_lz = self.grid.lz()*theta_z
-            qz_lz = 4.0*np.pi * np.sin(np.deg2rad(theta_lz)) / lamda_lz
-            Rsm_lz = np.ones(np.shape(qz_lz))
-            Rsm_lz = np.where(qz_lz>0.0217, 1-(qz_lz-0.0217)*(0.0625/0.0217), Rsm_lz)
-            # TODO: introduce variable for `m` and propably for the decay
-            Rsm_lz = np.where(qz_lz>0.0217*5, np.nan, Rsm_lz)
-            self.norm_lz = self.norm_lz / Rsm_lz
+            norm_lz, bins_l, bins_z = np.histogram2d(lamda_e, detZ_e, bins = (self.grid.lamda(), self.grid.z()))
+            norm_lz = np.where(norm_lz>2, norm_lz, np.nan)
+            if self.reduction_config.normalisationMethod == 'd':
+                # direct reference => invert map vertically
+                self.norm_lz = np.flip(norm_lz, 1)
+            else:
+                # correct for reference sm reflectivity
+                lamda_l = self.grid.lamda()
+                theta_z = self.normAngle + fromHDF.delta_z
+                lamda_lz = (self.grid.lz().T*lamda_l[:-1]).T
+                theta_lz = self.grid.lz()*theta_z
+                qz_lz = 4.0*np.pi * np.sin(np.deg2rad(theta_lz)) / lamda_lz
+                # TODO: introduce variable for `m` and propably for the slope
+                Rsm_lz = np.ones(np.shape(qz_lz))
+                Rsm_lz = np.where(qz_lz>0.0217, 1-(qz_lz-0.0217)*(0.0625/0.0217), Rsm_lz)
+                Rsm_lz = np.where(qz_lz>0.0217*5, np.nan, Rsm_lz)
+                self.norm_lz = norm_lz / Rsm_lz

             if len(lamda_e) > 1e6:
                 with open(n_path, 'wb') as fh:
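In the else branch above, the reference measurement is divided by a model of the supermirror reflectivity: unity up to the critical edge q_c = 0.0217 1/Aa, a linear decay above it, and NaN beyond m*q_c with m = 5 (the code's TODO notes that m and the slope should become variables). A small numeric sketch of that model:

import numpy as np

qc = 0.0217                       # critical edge / 1/Aa
slope = 0.0625 / 0.0217           # linear decay of the supermirror reflectivity
qz = np.array([0.01, 0.05, 0.10, 0.12])

Rsm = np.ones_like(qz)
Rsm = np.where(qz > qc, 1 - (qz - qc) * slope, Rsm)   # decay above the critical edge
Rsm = np.where(qz > qc * 5, np.nan, Rsm)              # beyond the m = 5 coating range
print(Rsm)                                            # [1.     0.9185 0.7745    nan]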
@@ -372,23 +382,26 @@
         # projection on lambda-z-grid
         lamda_l = self.grid.lamda()
         alphaF_z = fromHDF.nu - fromHDF.mu + fromHDF.delta_z
+        # TODO: implement various methods to obtain alpha_i.
         #if self.experiment_config.incidentAngle == 'alphaF':
+        #    # for specular reflectometry with a highly divergent beam
         #    alphaF_z = fromHDF.nu - fromHDF.mu + fromHDF.delta_z
         #elif self.experiment_config.incidentAngle == 'nu':
+        #    # for specular reflectometry, using kappa nad nu but ignoring mu
         #    alphaF_z = (fromHDF.nu + fromHDF.delta_z + fromHDF.kap + fromHDF.kad) / 2.
         #else:
+        #    # using kappa, for a collimated incoming beam
         #    pass
         lamda_lz = (self.grid.lz().T*lamda_l[:-1]).T
         alphaF_lz = self.grid.lz()*alphaF_z
-        thetaN_z = fromHDF.delta_z + normAngle
-        thetaN_lz = np.ones(np.shape(norm_lz))*thetaN_z
-        thetaN_lz = np.where(np.absolute(thetaN_lz)>5e-3, thetaN_lz, np.nan)
         mask_lz = np.where(np.isnan(norm_lz), False, True)
-        mask_lz = np.logical_and(mask_lz, np.where(np.absolute(thetaN_lz)>5e-3, True, False))
         mask_lz = np.logical_and(mask_lz, np.where(np.absolute(alphaF_lz)>5e-3, True, False))
-        if self.reduction_config.thetaRange[1]<12:
+        if self.reduction_config.thetaRangeR[1]<12:
+            t0 = fromHDF.nu - fromHDF.mu
+            mask_lz = np.logical_and(mask_lz, np.where(alphaF_lz-t0 >= self.reduction_config.thetaRangeR[0], True, False))
+            mask_lz = np.logical_and(mask_lz, np.where(alphaF_lz-t0 <= self.reduction_config.thetaRangeR[1], True, False))
+        elif self.reduction_config.thetaRange[1]<12:
             mask_lz = np.logical_and(mask_lz, np.where(alphaF_lz >= self.reduction_config.thetaRange[0], True, False))
             mask_lz = np.logical_and(mask_lz, np.where(alphaF_lz <= self.reduction_config.thetaRange[1], True, False))
         else:
@@ -396,10 +409,6 @@
                                fromHDF.nu - fromHDF.mu + fromHDF.div/2]
             mask_lz = np.logical_and(mask_lz, np.where(alphaF_lz >= self.reduction_config.thetaRange[0], True, False))
             mask_lz = np.logical_and(mask_lz, np.where(alphaF_lz <= self.reduction_config.thetaRange[1], True, False))
-        if self.reduction_config.thetaRangeR[1]<12:
-            t0 = fromHDF.nu - fromHDF.mu
-            mask_lz = np.logical_and(mask_lz, np.where(alphaF_lz-t0 >= self.reduction_config.thetaRangeR[0], True, False))
-            mask_lz = np.logical_and(mask_lz, np.where(alphaF_lz-t0 <= self.reduction_config.thetaRangeR[1], True, False))
         if self.experiment_config.lambdaRange[1]<15:
             mask_lz = np.logical_and(mask_lz, np.where(lamda_lz >= self.experiment_config.lambdaRange[0], True, False))
             mask_lz = np.logical_and(mask_lz, np.where(lamda_lz <= self.experiment_config.lambdaRange[1], True, False))
@@ -424,21 +433,24 @@
         if self.reduction_config.normalisationMethod == 'o':
             logging.debug(' assuming an overilluminated sample and correcting for the angle of incidence')
+            thetaN_z = fromHDF.delta_z + normAngle
+            thetaN_lz = np.ones(np.shape(norm_lz))*thetaN_z
+            thetaN_lz = np.where(np.absolute(thetaN_lz)>5e-3, thetaN_lz, np.nan)
+            mask_lz = np.logical_and(mask_lz, np.where(np.absolute(thetaN_lz)>5e-3, True, False))
             ref_lz = (int_lz * np.absolute(thetaN_lz)) / (norm_lz * np.absolute(thetaF_lz))
         elif self.reduction_config.normalisationMethod == 'u':
             logging.debug(' assuming an underilluminated sample and ignoring the angle of incidence')
             ref_lz = (int_lz / norm_lz)
         elif self.reduction_config.normalisationMethod == 'd':
             logging.debug(' assuming direct beam for normalisation and ignoring the angle of incidence')
-            norm_lz = np.flip(norm_lz,1)
             ref_lz = (int_lz / norm_lz)
         else:
-            logging.error('unknown normalisation method! Use [u], [o] or [d]')
-            ref_lz = (int_lz * np.absolute(thetaN_lz)) / (norm_lz * np.absolute(thetaF_lz))
+            logging.error('unknown normalisation method! Use [u]nder, [o]ver or [d]irect illumination')
+            ref_lz = (int_lz / norm_lz)
         if self.monitor > 1e-6 :
             ref_lz *= self.normMonitor / self.monitor
         else:
-            logging.warning(' too small monitor value for normalisation -> ignoring monitors')
+            logging.info(' too small monitor value for normalisation -> ignoring monitors')
         err_lz = ref_lz * np.sqrt( 1/(int_lz+.1) + 1/norm_lz )
         # TODO: allow for non-ideal Delta lambda / lambda (rather than 2.2%)


@@ -1,716 +0,0 @@
__version__ = '2024-03-30'
import os
import sys
import subprocess
import h5py
import glob
import numpy as np
import argparse
import matplotlib.pyplot as plt
import matplotlib as mpl
import time
import logging
from datetime import datetime
#==============================================================================
#==============================================================================
class Detector:
def __init__(self):
self.nBlades = 14 # number of active blades in the detector
angle = np.deg2rad( 5.1 ) # deg angle of incidence of the beam on the blades (def: 5.1)
self.dZ = 4.0 * np.sin(angle) # mm height-distance of neighboring pixels on one blade
self.dX = 4.0 * np.cos(angle) # mm depth-distance of neighboring pixels on one blace
self.bladeZ = 10.7 # mm distance between detector blades (consistent with nu!)
self.zero = 0.5 * self.nBlades * self.bladeZ # mm vertical center of the detector
#==============================================================================
def pixel2quantity():
det = Detector()
nPixel = 64 * 32 * det.nBlades
pixelID = np.arange(nPixel)
(bladeNr, bPixel) = np.divmod(pixelID, 64*32)
(bZ, bY) = np.divmod(bPixel, 64)
z = det.zero - bladeNr * det.bladeZ - bZ * det.dZ
x = (31 - bZ) * det.dX
bladeAngle = np.rad2deg( 2. * np.arcsin(0.5*det.bladeZ / detectorDistance) )
delta = (det.nBlades/2. - bladeNr) * bladeAngle - np.rad2deg( np.arctan(bZ*det.dZ / ( detectorDistance + bZ * det.dX) ) )
dZ = bladeNr * 32 + bZ
quantity = np.vstack((dZ.T, bY.T, delta.T, x.T)).T
return quantity
#==============================================================================
def analyse_ev(event_e, tof_e, yMin, yMax, thetaMin, thetaMax):
data_e = np.zeros((len(event_e), 9), dtype=float)
# data_e column description:
# 0: wall time / s
# 1: pixelID
# 2: z on detector
# 3: y on detector
# 4: delta / deg = angle on detector
# 5: path within detector / mm
# 6: lambda / angstrom
# 7: theta / deg
# 8: q_z / angstrom^-1
data_e[:,0] = tof_e[:]
data_e[:,1] = event_e[:]
# filter 'strange' tof times > 2 tau
if True:
filter_e = (data_e[:,0] <= 2*tau)
#print(event_e[~filter_e])
#print(data_e[~filter_e,0])
data_e = data_e[filter_e,:]
if np.shape(filter_e)[0]-np.shape(data_e)[0] > 0.5 and verbous:
logging.warning(f'## strange times: {np.shape(filter_e)[0]-np.shape(data_e)[0]}')
pixelLookUp = pixel2quantity()
data_e[:,2:6] = pixelLookUp[np.int_(data_e[:,1])-1,:]
#================================
# filter y range
filter_e = (yMin <= data_e[:,3]) & (data_e[:,3] <= yMax)
data_e = data_e[filter_e,:]
# correct tof for beam size effect at chopper
data_e[:,0] -= ( data_e[:,4] / 180. ) * tau
# effective flight path length
#data_e[:,5] = chopperDetectorDistance + data_e[:,5]
# calculate lambda
hdm = 6.626176e-34/1.674928e-27 # h / m
data_e[:,6] = 1.e13 * data_e[:,0] * hdm / ( chopperDetectorDistance + data_e[:,5] )
# theta
data_e[:,7] = nu - mu + data_e[:,4]
# gravity compensation
data_e[:,7] += np.rad2deg( np.arctan( 3.07e-10 * ( detectorDistance + data_e[:,5]) * data_e[:,6] * data_e[:,6] ) )
# filter theta range
filter_l = (thetaMin <= data_e[:,7]) & (data_e[:,7] <= thetaMax)
data_e = data_e[filter_l,:]
# q_z
data_e[:,8] = 4*np.pi * np.sin( np.deg2rad( data_e[:,7] ) ) / data_e[:,6]
# filter q_z range
#filter_e = (qMin < data_e[:,6]) & (data_e[:,6] < qMax)
#data_e = data_e[filter_e,:]
return data_e
#==============================================================================
class Meta:
# AMOR hdf dataset with associated properties from metadata
def __init__(self, fileName):
self.fileName = fileName
fh = h5py.File(fileName, 'r', swmr=True)
# for processing
self.chopperDistance = float(np.take(fh['/entry1/Amor/chopper/pair_separation'], 0)) # mm
# the following is the distance from the sample to the detector entry window, not to the center of rotation
self.detectorDistance = float(np.take(fh['/entry1/Amor/detector/transformation/distance'], 0)) # mm
self.chopperDetectorDistance = self.detectorDistance - float(np.take(fh['entry1/Amor/chopper/distance'], 0)) # mm
self.lamdaCut = 2.5 # Aa
startDate = str(fh['/entry1/start_time'][0].decode('utf-8'))
self.startDate = datetime.strptime(startDate, '%Y-%m-%d %H:%M:%S')
startDate = datetime.timestamp(self.startDate)
self.countingTime = float(np.take(fh['/entry1/Amor/detector/data/event_time_zero'], -1))/1e9 - startDate
# not exact for low rates
ka0 = 0.245 # given inclination of the beam after the Selene guide
year_date = str(datetime.today()).split(' ')[0].replace("-", "/", 1)
# deside from where to take the control paralemters
try:
self.mu = float(np.take(fh['/entry1/Amor/master_parameters/mu/value'], 0))
self.nu = float(np.take(fh['/entry1/Amor/master_parameters/nu/value'], 0))
self.kap = float(np.take(fh['/entry1/Amor/master_parameters/kap/value'], 0))
self.kad = float(np.take(fh['/entry1/Amor/master_parameters/kad/value'], 0))
self.div = float(np.take(fh['/entry1/Amor/master_parameters/div/value'], 0))
chSp = float(np.take(fh['/entry1/Amor/chopper/rotation_speed/value'], 0))
self.chPh = float(np.take(fh['/entry1/Amor/chopper/phase/value'], 0))
except (KeyError, IndexError):
logging.warning(f" using parameters from nicos cache")
#cachePath = '/home/amor/nicosdata/amor/cache/'
cachePath = '/home/nicos/amorcache/'
value = str(subprocess.getoutput('/usr/bin/grep "value" '+cachePath+'nicos-mu/'+year_date)).split('\t')[-1]
self.mu = float(value)
value = str(subprocess.getoutput('/usr/bin/grep "value" '+cachePath+'nicos-nu/'+year_date)).split('\t')[-1]
self.nu = float(value)
value = str(subprocess.getoutput('/usr/bin/grep "value" '+cachePath+'nicos-kap/'+year_date)).split('\t')[-1]
self.kap = float(value)
value = str(subprocess.getoutput('/usr/bin/grep "value" '+cachePath+'nicos-kad/'+year_date)).split('\t')[-1]
self.kad = float(value)
value = str(subprocess.getoutput('/usr/bin/grep "value" '+cachePath+'nicos-div/'+year_date)).split('\t')[-1]
self.div = float(value)
value = str(subprocess.getoutput('/usr/bin/grep "value" '+cachePath+'nicos-ch1_speed/'+year_date)).split('\t')[-1]
chSp = float(value)
self.chPh = np.nan
if chSp:
self.tau = 30. / chSp
else:
self.tau = 0
try: # not yet correctly implemented in nexus template
spin = str(fh['/entry1/polarizer/spin_flipper/spin'][0].decode('utf-8'))
if spin == "b'p'":
self.spin = 'p'
elif spin == "b'm'":
self.spin = 'm'
elif spin == "b'up'":
self.spin = 'p'
elif spin == "b'down'":
self.spin = 'm'
elif spin == '?':
self.spin = '?'
else:
self.spin = 'n'
except (KeyError, IndexError):
self.spin = '?'
fh.close()
#==============================================================================
def resolveNumber(dataPath, ident):
if ident == '0' or '-' in ident[0]:
try:
nnr = int(ident)
except:
logging.error("ERROR: '{}' is no valid file identifier!".format(ident))
fileNames = glob.glob(dataPath+'/*.hdf')
fileNames.sort()
fileName = fileNames[nnr-1]
fileName = fileName.split('/')[-1]
fileNumber = fileName.split('n')[1].split('.')[0].lstrip('0')
else:
fileNumber = ident
return fileNumber
#==============================================================================
def fileNameCreator(dataPath, ident):
clas = commandLineArgs()
ident=str(ident)
try:
nnr = int(ident)
except:
logging.error("ERROR: '{}' is no valid file identifier!".format(ident))
if nnr <= 0 :
fileName = glob.glob(dataPath+'/*.hdf')[nnr-1]
fileName = fileName.split('/')[-1]
else:
fileName = f'amor{clas.year}n{ident:>06s}'
fileName = fileName.split('.')[0]
fileName = fileName+'.hdf'
fileName = dataPath+fileName
fileNumber = fileName.split('n')[-1].split('.')[0].lstrip('0')
return fileName, fileNumber
#==============================================================================
class PlotSelection:
def headline(self, fileNumber, totalCounts):
headLine = "#{} \u03bc={:>1.2f} \u03bd={:>1.2f} {:>12,} cts {:>8.1f} s".format(fileNumber, mu+5e-3, nu+5e-3, totalCounts, countingTime)
return headLine
# grids
def y_grid(self):
y_grid = np.arange(yMin, yMax+1, 1)
return y_grid
def lamda_grid(self):
dldl = 0.005 # Delta lambda / lambda
lMin = max(2, lamdaMin)
lamda_grid = lMin*(1+dldl)**np.arange(int(np.log(lamdaMax/lMin)/np.log(1+dldl)+1))
return lamda_grid
def theta_grid(self):
det = Detector()
bladeAngle = np.rad2deg( 2. * np.arcsin(0.5*det.bladeZ / detectorDistance) )
blade_grid = np.arctan( np.arange(33) * det.dZ / ( detectorDistance + np.arange(33) * det.dX) )
blade_grid = np.rad2deg(blade_grid)
stepWidth = blade_grid[1] - blade_grid[0]
blade_grid = blade_grid - 0.2 * stepWidth
delta_grid = []
for b in np.arange(det.nBlades-1):
delta_grid = np.concatenate((delta_grid, blade_grid), axis=None)
blade_grid = blade_grid + bladeAngle
delta_grid = delta_grid[delta_grid<blade_grid[0]-0.5*stepWidth]
delta_grid = np.concatenate((delta_grid, blade_grid), axis=None)
theta_grid = nu - mu - np.flip(delta_grid) + 0.5*det.nBlades * bladeAngle
theta_grid = theta_grid[theta_grid>=thetaMin]
theta_grid = theta_grid[theta_grid<=thetaMax]
return theta_grid
def q_grid(self):
dqdq = 0.010 # Delta q_z / q_z
q_grid = qMin*(1.+dqdq)**np.arange(int(np.log(qMax/qMin)/np.log(1+dqdq)))
return q_grid
# create PNG with several plots
def all(self, fileNumber, arg, data_e):
#cmap='gist_earth'
cmap = mpl.cm.gnuplot(np.arange(256))
cmap[:1, :] = np.array([256/256, 255/256, 236/256, 1])
cmap = mpl.colors.ListedColormap(cmap, name='myColorMap', N=cmap.shape[0])
I_yt, bins_y, bins_t = np.histogram2d(data_e[:,3], data_e[:,7], bins = (self.y_grid(), self.theta_grid()))
I_lt, bins_l, bins_t = np.histogram2d(data_e[:,6], data_e[:,7], bins = (self.lamda_grid(), self.theta_grid()))
I_q, bins_q = np.histogram(data_e[:,8], bins = self.q_grid())
q_lim = 4*np.pi*np.array([ max( np.sin(self.theta_grid()[0]*np.pi/180.)/self.lamda_grid()[-1] , 1e-4 ),
min( np.sin(self.theta_grid()[-1]*np.pi/180.)/self.lamda_grid()[0] , 0.03 )])
if arg == 'lin':
#vmin = min(np.min(I_lt), np.min(I_yt))
vmin = 0
vmax = max(5, np.max(I_lt), np.max(I_yt))
else:
vmin = 0
vmax = max(1, np.log(np.max(I_lt)+.1)/np.log(10)*1.05, np.log(np.max(I_yt)+.1)/np.log(10)*1.05)
# I(y, theta)
fig = plt.figure()
axs = fig.add_gridspec(2,3)
myt = fig.add_subplot(axs[0,0])
myt.set_title('detector area')
myt.set_xlabel('$y ~/~ \\mathrm{bins}$')
myt.set_ylabel('$\\theta ~/~ \\mathrm{deg}$')
if arg == 'lin':
myt.pcolormesh(bins_y, bins_t, I_yt.T, cmap=cmap, vmin=vmin, vmax=vmax)
else:
myt.pcolormesh(bins_y, bins_t, (np.log(I_yt + 5.e-1) / np.log(10.)).T, cmap=cmap, vmin=vmin, vmax=vmax)
# I(lambda, theta)
mlt = fig.add_subplot(axs[0,1:])
mlt.set_title('angle- and energy disperse')
mlt.set_xlabel('$\\lambda ~/~ \\mathrm{\\AA}$')
mlt.axes.get_yaxis().set_visible(False)
if arg == 'lin':
cb = mlt.pcolormesh(bins_l, bins_t, I_lt.T, cmap=cmap, vmin=vmin, vmax=vmax)
else:
cb = mlt.pcolormesh(bins_l, bins_t, (np.log(I_lt + 5.e-1) / np.log(10.)).T, cmap=cmap, vmin=vmin, vmax=vmax)
# I(q_z)
lqz = fig.add_subplot(axs[1,:])
lqz.set_title('$I(q_z)$')
lqz.set_ylabel('$\\log_{10}(\\mathrm{cnts})$')
lqz.set_xlabel('$q_z~/~\\mathrm{\\AA}^{-1}$')
lqz.set_xlim(q_lim)
if arg == 'lin':
plt.plot(bins_q[:-1], I_q, color='blue', linewidth=0.5)
else:
err_q = np.sqrt(I_q+1)
low_q = np.where(I_q-err_q>0, I_q-err_q, 0.1)
plt.fill_between(bins_q[:-1], np.log(low_q)/np.log(10), np.log(I_q+err_q/2)/np.log(10), color='lightgrey')
plt.plot(bins_q[:-1], np.log(I_q+5e-1)/np.log(10), color='blue', linewidth=0.5)
lw = I_q[ ((q_lim[0] < bins_q[:-1]) & (bins_q[:-1] < q_lim[1])) ].min()
plt.ylim(max(-0.1, np.log(lw+.1)/np.log(10)-0.1), )
#
headline = self.headline(fileNumber, np.shape(data_e)[0])
plt.title(headline, loc='left', y=2.8, c='r')
fig.colorbar(cb, ax=mlt)
plt.subplots_adjust(hspace=0.6, wspace=0.1)
plt.savefig(output, format='png', dpi=150)
# create PNG with one plot
def Iyz(self, fileNumber, arg, data_e):
det = Detector()
cmap = mpl.cm.gnuplot(np.arange(256))
cmap[:1, :] = np.array([256/256, 255/256, 236/256, 1])
cmap = mpl.colors.ListedColormap(cmap, name='myColorMap', N=cmap.shape[0])
z_grid = np.arange(det.nBlades*32)
I_yz, bins_y, bins_z = np.histogram2d(data_e[:,3], data_e[:,2], bins = (self.y_grid(), z_grid))
if arg == 'log':
vmin = 0
vmax = max(1, np.log(np.max(I_yt)+.1)/np.log(10)*1.05)
plt.pcolormesh(bins_y[:],bins_z[:],(np.log(I_yz+6e-1)/np.log(10)).T, cmap=cmap, vmin=vmin, vmax=vmax)
else:
plt.pcolormesh(bins_y[:],bins_z[:],I_yz.T, cmap=cmap)
plt.xlabel('$y ~/~ \\mathrm{bins}$')
plt.ylabel('$z ~/~ \\mathrm{bins}$')
headline = self.headline(fileNumber, np.shape(data_e)[0])
plt.title(headline, loc='left', y=1.0, c='r')
plt.colorbar()
plt.savefig(output, format='png', dpi=150)
def Ilt(self, fileNumber, arg, data_e) :
cmap = mpl.cm.gnuplot(np.arange(256))
cmap[:1, :] = np.array([256/256, 255/256, 236/256, 1])
cmap = mpl.colors.ListedColormap(cmap, name='myColorMap', N=cmap.shape[0])
I_lt, bins_l, bins_t = np.histogram2d(data_e[:,6], data_e[:,7], bins = (self.lamda_grid(), self.theta_grid()))
if arg == 'log':
vmax = max(1, np.log(np.max(I_lt)+.1)/np.log(10)*1.05 )
plt.pcolormesh(bins_l, bins_t, (np.log(I_lt+I_lt[I_lt>0].min()/2)/np.log(10.)).T, cmap=cmap, vmin=0, vmax=vmax)
else :
vmax = max(np.max(I_lt), 5)
plt.pcolormesh(bins_l, bins_t, I_lt.T, cmap=cmap, vmin=0, vmax=vmax)
plt.xlim(0,)
#if np.min(bins_t) > 0.01 :
# plt.ylim(bottom=0)
#else:
# plt.ylim(bottom=np.min(bins_t))
#if np.max(bins_t) < -0.01:
# plt.ylim(top=0)
#else:
# plt.ylim(top=np.max(bins_t))
plt.xlim(lamdaMin, lamdaMax)
plt.ylim(thetaMin, thetaMax)
plt.xlabel('$\\lambda ~/~ \\mathrm{\\AA}$')
plt.ylabel('$\\theta ~/~ \\mathrm{deg}$')
headline = self.headline(fileNumber, np.shape(data_e)[0])
plt.title(headline, loc='left', y=1.0, c='r')
plt.colorbar()
plt.savefig(output, format='png', dpi=150)
def Itz(self, fileNumber, arg, data_e):
det = Detector()
cmap = mpl.cm.gnuplot(np.arange(256))
cmap[:1, :] = np.array([256/256, 255/256, 236/256, 1])
cmap = mpl.colors.ListedColormap(cmap, name='myColorMap', N=cmap.shape[0])
time_grid = np.arange(0, tau, 0.0005)
z_grid = np.arange(det.nBlades*32+1)
I_tz, bins_t, bins_z = np.histogram2d(data_e[:,0], data_e[:,2], bins = (time_grid, z_grid))
if arg == 'log':
vmax = max(2., np.log(np.max(I_tz)+.1)/np.log(10)*1.05 )
plt.pcolormesh(bins_t, bins_z, (np.log(I_tz+5.e-1)/np.log(10.)).T, cmap=cmap, vmin=0, vmax=vmax)
else :
vmax = max(np.max(I_tz), 5)
plt.pcolormesh(bins_t, bins_z, I_tz.T, cmap=cmap, vmin=0, vmax=vmax)
if True:
plt.xlim(0,)
plt.ylim(0,)
plt.xlabel('$t ~/~ \\mathrm{s}$')
plt.ylabel('$z$ pixel row')
headline = self.headline(fileNumber, np.shape(data_e)[0])
plt.title(headline, loc='left', y=1.0, c='r')
plt.colorbar()
plt.savefig(output, format='png', dpi=150)
def Iq(self, fileNumber, arg, data_e):
I_q, bins_q = np.histogram(data_e[:,8], bins = self.q_grid())
err_q = np.sqrt(I_q+1)
q_lim = 4*np.pi*np.array([ max( np.sin(self.theta_grid()[0]*np.pi/180.)/self.lamda_grid()[-1] , 1e-4 ),
min( np.sin(self.theta_grid()[-1]*np.pi/180.)/self.lamda_grid()[0] , 0.03 )])
if arg == 'log':
low_q = np.where(I_q-err_q>0, I_q-err_q, 0.1)
plt.fill_between(bins_q[:-1], np.log(low_q)/np.log(10), np.log(I_q+err_q/2)/np.log(10), color='lightgrey')
plt.plot(bins_q[:-1], np.log(I_q+5e-1)/np.log(10), color='blue', linewidth=0.5)
lw = I_q[ ((q_lim[0] < bins_q[:-1]) & (bins_q[:-1] < q_lim[1])) ].min()
plt.ylim(max(-0.1, np.log(lw+.1)/np.log(10)-0.1), )
else:
plt.plot(bins_q[:-1], I_q, color='blue', linewidth=0.5)
plt.ylabel('$\\log_{10}(\\mathrm{cnts})$')
plt.xlabel('$q_z ~/~ \\mathrm{\\AA}^{-1}$')
plt.xlim(q_lim)
headline = self.headline(fileNumber, np.shape(data_e)[0])
plt.title(headline, loc='left', y=1.0, c='r')
plt.savefig(output, format='png', dpi=150)
def Il(self, fileNumber, arg, data_e):
I_l, bins_l = np.histogram(data_e[:,6], bins = self.lamda_grid())
if arg == 'lin':
plt.plot(bins_l[:-1], I_l)
plt.ylabel('$I ~/~ \\mathrm{cnts}$')
else:
plt.plot(bins_l[:-1], np.log(I_l+5.e-1)/np.log(10.))
plt.ylabel('$\\log_{10} I ~/~ \\mathrm{cnts}$')
plt.xlabel('$\\lambda ~/~ \\mathrm{\\AA}$')
headline = self.headline(fileNumber, np.shape(data_e)[0])
plt.title(headline, loc='left', y=1.0, c='r')
plt.savefig(output, format='png', dpi=150)
def It(self, fileNumber, arg, data_e):
I_t, bins_t = np.histogram(data_e[:,7], bins = self.theta_grid())
plt.plot( I_t, bins_t[:-1])
plt.xlabel('$\\mathrm{cnts}$')
plt.ylabel('$\\theta ~/~ \\mathrm{deg}$')
headline = self.headline(fileNumber, np.shape(data_e)[0])
plt.title(headline, loc='left', y=1.0, c='r')
plt.savefig(output, format='png', dpi=150)
def tof(self, fileNumber, arg, data_e):
time_grid = np.arange(0, 1.3*tau, 0.0005)
I_t, bins_t = np.histogram(data_e[:,0], bins = time_grid)
if arg == 'lin':
plt.plot(bins_t[:-1]+tau, I_t)
plt.plot(bins_t[:-1], I_t)
plt.plot(bins_t[:-1]+2*tau, I_t)
else:
lI_t = np.log(I_t+5.e-1)/np.log(10.)
plt.plot(bins_t[:-1]+tau, lI_t)
plt.plot(bins_t[:-1], lI_t)
plt.plot(bins_t[:-1]+2*tau, lI_t)
plt.ylabel('log(counts)')
plt.xlabel('time / s')
headline = self.headline(fileNumber, np.shape(data_e)[0])
plt.title(headline, loc='left', y=1.0, c='r')
plt.savefig(output, format='png')
#==============================================================================
def process(dataPath, ident, clas):
#================================
# constants
hdm = 6.626176e-34/1.674928e-27 # h / m
#================================
# instrument specific parameters
#================================
global lamdaMin, lamdaMax, qMin, qMax, thetaMin, thetaMax, yMin, yMax
# defaults
lamdaCut = 2.5 # Aa used to reshuffle tof
# data filtering and folding
#================================
if clas.lambdaRange:
lamdaMin = clas.lambdaRange[0]
lamdaMax = clas.lambdaRange[1]
else:
lamdaMin = lamdaCut
chopperPhase = clas.chopperPhase
tofOffset = clas.TOFOffset
thetaMin = clas.thetaRange[0]
thetaMax = clas.thetaRange[1]
yMin = clas.yRange[0]
yMax = clas.yRange[1]
qMin = clas.qRange[0]
qMax = clas.qRange[1]
#================================
# find and open input file
global ev
data_eSum = np.array([[0, 0, 0, 0, 0, 0, 0, 0, 0]])
sumTime = 0
number = resolveNumber(dataPath, ident)
fileName, fileNumber = fileNameCreator(dataPath, str(number))
if verbous:
logging.info('life_histogrammer processing file ->\033[1m {} \033[0m<-'.format(fileNumber))
for i in range(6):
ev = h5py.File(fileName, 'r', swmr=True)
try:
ev['/entry1/Amor/detector/data/event_time_zero'][-1]
break
except (KeyError, IndexError):
ev.close()
if i < 5:
if verbous:
print("no data yet, retrying ({}) ".format(10-2*i), end='\r')
time.sleep(2)
continue
else:
if verbous:
print("# time-out: no longer waiting for data!\a")
return
# get and process data
meta = Meta(fileName)
global mu, nu, tau
if clas.mu < 98.:
mu = clas.mu
else:
mu = meta.mu + clas.muOffset
if clas.nu < 98.:
nu = clas.nu
else:
nu = meta.nu
if clas.chopperSpeed:
tau = 30./ clas.chopperSpeed
else:
tau = meta.tau
try:
chPh
except NameError:
chPh = meta.chPh
spin = meta.spin
global countingTime, detectorDistance, chopperDetectorDistance
detectorDistance = meta.detectorDistance
chopperDetectorDistance = meta.chopperDetectorDistance
countingTime = meta.countingTime
if verbous:
logging.info(" mu = {:>4.2f} deg, nu = {:>4.2f} deg".format(mu, nu))
if spin == 'u':
logging.info(' spin <+|')
elif spin == 'd':
logging.info(' spin <-|')
try: lamdaMax
except NameError: lamdaMax = lamdaMin + tau * hdm/chopperDetectorDistance * 1e13
tofOffset = tau * chopperPhase / 180. # mismatch of chopper pulse and time-zero
tofCut = lamdaCut * chopperDetectorDistance / hdm * 1.e-13 # tof of frame start
tof_e = np.array(ev['/entry1/Amor/detector/data/event_time_offset'][:], dtype=np.uint64)/1.e9 + tofOffset # tof
detPixelID_e = np.array(ev['/entry1/Amor/detector/data/event_id'][:], dtype=np.uint64) # pixel index
dataPacket_p = np.array(ev['/entry1/Amor/detector/data/event_index'][:], dtype=np.uint64) # data packet index
tof_e = np.where(tof_e<tofCut, tof_e+2.*tau, tof_e)
tof_e = np.where(tof_e>tau+tofCut, tof_e-tau, tof_e)
data_e = analyse_ev(detPixelID_e, tof_e, yMin, yMax, thetaMin, thetaMax)
ev.close()
data_eSum = np.append(data_eSum, data_e, axis=0)
sumTime += countingTime
if verbous:
logging.info(" total counts = {} in {:6.1f} s".format(np.shape(data_e)[0], sumTime))
#================================
# plotting data
plotType = clas.plot[0]
try:
arg = clas.plot[1]
except IndexError:
arg = 'def'
plott = PlotSelection()
try:
plotFunction = getattr(plott, plotType)
plotFunction(fileNumber, arg, data_e)
plt.close()
except Exception as e:
logging.error(f"ERROR: '{plotType}' is no known output format!")
logging.error(f" original error: {e}")
#==============================================================================
def commandLineArgs():
msg = "events2histogram reads the eventstream from an hdf raw file and \
creates various histogrammed outputs or plots."
clas = argparse.ArgumentParser(description = msg)
clas.add_argument("-c", "--chopperSpeed",
type=float,
help ="chopper speed in rpm")
clas.add_argument("-d", "--dataPath",
help ="relative path to directory with .hdf files")
clas.add_argument("-f", "--fileIdent",
default='0',
help ="file number or offset (if negative)")
clas.add_argument("-l", "--lambdaRange",
nargs=2,
type=float,
help ="wavelength range to be used")
clas.add_argument("-M", "--muOffset",
default=0.,
type=float,
help ="mu offset")
clas.add_argument("-m", "--mu",
default=99.,
type=float,
help ="value of mu")
clas.add_argument("-n", "--nu",
default=99.,
type=float,
help ="value of nu")
clas.add_argument("-P", "--chopperPhase",
default=-5.,
type=float,
help ="chopper phase offset")
clas.add_argument("-p", "--plot",
default=['all', 'def'],
nargs='+',
help ="select what to plot or write")
clas.add_argument("-q", "--qRange",
default=[0.005, 0.30],
nargs=2,
type=float,
help ="q_z range")
clas.add_argument("-T", "--TOFOffset",
default=0.0,
type=float,
help ="TOF zero offset")
clas.add_argument("-t", "--thetaRange",
default=[-12., 12.],
nargs=2,
type=float,
help ="theta range to be used")
clas.add_argument("-Y", "--year",
default = str(datetime.today()).split('-')[0],
help = "year, the measurement was performed")
clas.add_argument("-y", "--yRange",
default=[0, 63],
nargs=2,
type=int,
help ="detector y range to be used")
return clas.parse_args()
#==============================================================================
def get_dataPath(clas):
if clas.dataPath:
dataPath = clas.dataPath + '/'
if not os.path.exists(dataPath):
sys.exit('# *** the directory "'+dataPath+'" does not exist ***')
elif os.path.exists('./raw'):
dataPath = "./raw/"
elif os.path.exists('../raw'):
dataPath = "../raw/"
else:
sys.exit('# *** please provide the path to the .hdf data files (-d <path>, default is "./raw")')
return dataPath
#==============================================================================
def get_directDataPath(clas):
#dataPath = clas.dataPath + '/'
year = str(datetime.today()).split('-')[0]
year_date = str(datetime.today()).split(' ')[0].replace("-", "/", 1)
pNr = str(subprocess.getoutput('/usr/bin/grep "proposal\t" /home/amor/nicosdata/amor/cache/nicos-exp/'+year_date)[-1]).split('\'')[1]
dataPath = '/home/amor/nicosdata/amor/data/'+year+'/'+pNr+'/'
if not os.path.exists(dataPath):
sys.exit('# *** the directory "'+dataPath+'" does not exist ***')
return dataPath
#==============================================================================
def main():
global verbous, output, dataPath
clas = commandLineArgs()
dataPath = get_dataPath(clas)
logging.basicConfig(level=logging.INFO, format='# %(message)s')
verbous = True
output = 'life_plot.png'
process(dataPath, clas.fileIdent, clas)
#==============================================================================
#==============================================================================
if __name__ == "__main__":
main()


@@ -7,6 +7,7 @@ version = attr: libeos.__version__
 author = Jochen Stahn - Paul Scherrer Institut
 author_email = jochen.stahn@psi.ch
 description = EOS reflectometry reduction for AMOR instrument
+long_description = Reduces data obtained by focusing time of flight neutron reflectivity to full reflectivity curve.
 license = MIT
 classifiers =
     Programming Language :: Python :: 3


@@ -12,7 +12,7 @@ a = Analysis(
     runtime_hooks=[],
     excludes=[],
     noarchive=False,
-    optimize=0,
+    optimize=1,
 )

 pyz = PYZ(a.pure)

windows_folder.spec Normal file (51 lines)

@@ -0,0 +1,51 @@
# -*- mode: python ; coding: utf-8 -*-
from PyInstaller.utils.hooks import collect_all

datas = []
binaries = []
hiddenimports = []
tmp_ret = collect_all('tzdata')
datas += tmp_ret[0]; binaries += tmp_ret[1]; hiddenimports += tmp_ret[2]

a = Analysis(
    ['eos.py'],
    pathex=[],
    binaries=[],
    datas=[],
    hiddenimports=[],
    hookspath=[],
    hooksconfig={},
    runtime_hooks=[],
    excludes=[],
    noarchive=False,
    optimize=1,
)
pyz = PYZ(a.pure)

exe = EXE(
    pyz,
    a.scripts,
    [],
    exclude_binaries=True,
    name='eos',
    debug=False,
    bootloader_ignore_signals=False,
    strip=False,
    upx=True,
    console=True,
    disable_windowed_traceback=False,
    argv_emulation=False,
    target_arch=None,
    codesign_identity=None,
    entitlements_file=None,
)
coll = COLLECT(
    exe,
    a.binaries,
    a.datas,
    strip=False,
    upx=True,
    upx_exclude=[],
    name='eos',
)
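collect_all('tzdata') exists to ship the IANA time-zone database with the frozen app, since Windows has no system tz database for zoneinfo to fall back on (see commit 2e565f55be). A quick sanity check one could run from the packaged eos folder, a sketch rather than part of the repository:

import zoneinfo

# fails with ZoneInfoNotFoundError if tzdata was not bundled
print(zoneinfo.ZoneInfo('Europe/Zurich'))   # zoneinfo.ZoneInfo(key='Europe/Zurich')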