45 Commits
0.2...master

Author SHA1 Message Date
992f5a8f0d pgroup add to a get_scan.. 2026-02-03 13:25:28 +01:00
9f6b0f8290 Channel names feature 2026-02-03 11:26:17 +01:00
d2f6e7285a Channel names update 2026-02-03 11:20:06 +01:00
9beff10fe6 Removing cluster tools (now with own repo at /sf/cristallina/applications/it/cluster_tools) 2026-02-03 11:19:14 +01:00
82b6b681d0 No real changes, just file permissions it seems 2026-02-03 11:17:18 +01:00
25ee07977b Update tests (no obvious changes, just ownership it seems) 2026-02-03 11:08:36 +01:00
9f01f5ad03 Docs cleanup 2026-02-03 11:06:33 +01:00
9e54ae4bda added number of retries 2025-12-05 14:53:24 +01:00
5112e050d3 more cleanups 2025-11-24 23:37:55 +01:00
a9bfc61a25 cleanup in analysis, comments, black, and upper threshold. 2025-11-24 15:23:37 +01:00
69e311cb85 Added RSM, in a first iteration 2025-11-24 15:05:43 +01:00
e95b5a1670 added code for p22760 experiment 2025-11-14 22:25:10 +01:00
6406fea7ed added custom value-to-color mapper 2025-11-14 22:23:59 +01:00
aa51b3fc41 finalized JF_stacks and unfinished scan in scan_info 2025-11-12 16:53:59 +01:00
ed645874dc added JF_stack calculator 2025-11-11 15:52:03 +01:00
a70ef5caa2 Unfinished scan including tests 2025-11-11 15:09:54 +01:00
d397a9668b changes before migration 2025-09-03 18:17:39 +02:00
ed090be840 added expansion functionality to ROI 2025-07-13 15:27:53 +02:00
b671e2d8dc updated channels definitions 2025-07-12 23:12:11 +02:00
81e8c2b7c5 added shift function to ROI 2025-07-12 23:11:19 +02:00
fbb2cac698 added a pgroup module 2025-05-14 16:30:28 +02:00
63f810f42e made imports more robust 2025-05-14 16:09:15 +02:00
0f23e462a8 Upper cut threshold added to perform image calculations 2025-04-02 04:18:31 +02:00
Jakub Vonka 3ad42521f9 Line plot with colorbar added 2025-04-02 04:15:31 +02:00
Karina Kazarian 259212282b edit histogram code 2025-03-29 17:06:02 +01:00
34ddfb376f added partialjson to requirements 2025-02-04 11:29:10 +01:00
637d7ac955 added scikit-image to the requirements 2025-02-04 11:15:20 +01:00
ebdb161726 Update .gitlab-ci.yml file 2025-02-04 11:11:25 +01:00
dd5b119c55 Update .gitlab-ci.yml file 2025-02-04 11:08:39 +01:00
2321dc0487 trying with micromamba 2025-02-03 17:46:50 +01:00
6758f3cc6d Thresholding and sum part 2 2025-01-18 20:26:35 +01:00
f8eaf1ad2c added thresholding to sum and new detectors 2025-01-18 20:25:52 +01:00
f9f062d219 Adding cluster tools to this repo 2024-11-26 16:36:26 +01:00
7649164f88 Beamtime p22199-Puma commisioning Nov2024 2024-11-26 16:36:25 +01:00
16b06a7445 Update .gitlab-ci.yml file 2024-09-26 17:55:02 +02:00
43c01d021a Update .gitlab-ci.yml file 2024-07-01 16:33:15 +02:00
aea3b1b306 unfinished scan and update for newer jungfrau_utils 2024-06-27 15:21:22 +02:00
e01c8050ee adapted jupyter_helper ttl 2024-06-24 17:30:21 +02:00
29a122209a added Unfinished Scan 2024-06-24 17:29:31 +02:00
688c6ed4c5 Updated channels and fixed typo in utils 2024-06-21 14:19:55 +02:00
02b9279da7 special case for RA cluster 2024-06-21 14:19:55 +02:00
7ce5015780 disabled deploy 2024-06-17 17:15:52 +02:00
2319fa9cbf collaborative helper extended 2024-06-17 17:03:17 +02:00
25b17435f8 small examples added 2024-04-23 18:08:31 +02:00
1d33e86f76 added JF frame checker, small cleanup 2024-04-23 18:05:52 +02:00
122 changed files with 4136 additions and 186 deletions

0  .coveragerc  Normal file → Executable file

1  .gitignore (vendored)  Normal file → Executable file

@@ -37,6 +37,7 @@ htmlcov/*
 junit*.xml
 coverage.xml
 .pytest_cache/
+tests/cache_test/
 # Build and docs folder/files
 build/*

8  .gitlab-ci.yml  Normal file → Executable file

@@ -9,8 +9,10 @@ build-job:
 run_tests:
   stage: test
   before_script:
+    - echo "Running as $(whoami)"
     - echo "$CI_BUILDS_DIR"
-    - source /home/gitlab-runner/miniconda3/bin/activate cristallina
+    - eval "$(/home/gitlab-runner/bin/micromamba shell hook --shell=bash)"
+    - micromamba activate cristallina
     - echo `pwd`
     - pip install -e .
   script:
@@ -23,7 +25,8 @@ pages:
   stage: deploy
   before_script:
     - echo "$CI_BUILDS_DIR"
-    - source /home/gitlab-runner/miniconda3/bin/activate cristallina
+    - eval "$(/home/gitlab-runner/bin/micromamba shell hook --shell=bash)"
+    - micromamba activate cristallina
   script:
     - tox -e docs
     - mv docs/_build/html/ public/
@@ -33,3 +36,4 @@ pages:
   rules:
     - if: $CI_COMMIT_REF_NAME == $CI_DEFAULT_BRANCH

0  .readthedocs.yml  Normal file → Executable file

1  AUTHORS.rst  Normal file → Executable file
@@ -3,3 +3,4 @@ Contributors
 ============
 * Alexander Steppke <alexander.steppke@psi.ch>
+* Jakub Vonka <jakub.vonka@psi.ch>

0  CHANGELOG.rst  Normal file → Executable file

0  CONTRIBUTING.rst  Normal file → Executable file

0  LICENSE.txt  Normal file → Executable file

0  README.rst  Normal file → Executable file

0  docs/Makefile  Normal file → Executable file

0  docs/_build/doctrees/api/cristallina.doctree (vendored)  Normal file → Executable file

0  docs/_build/doctrees/api/modules.doctree (vendored)  Normal file → Executable file

0  docs/_build/doctrees/authors.doctree (vendored)  Normal file → Executable file

0  docs/_build/doctrees/changelog.doctree (vendored)  Normal file → Executable file

0  docs/_build/doctrees/contributing.doctree (vendored)  Normal file → Executable file

0  docs/_build/doctrees/environment.pickle (vendored)  Normal file → Executable file

0  docs/_build/doctrees/index.doctree (vendored)  Normal file → Executable file

0  docs/_build/doctrees/license.doctree (vendored)  Normal file → Executable file

0  docs/_build/doctrees/readme.doctree (vendored)  Normal file → Executable file

0  docs/_build/html/.buildinfo (vendored)  Normal file → Executable file

0  docs/_build/html/_modules/cristallina/SEA_GraphClient.html (vendored)  Normal file → Executable file

0  docs/_build/html/_modules/cristallina/analysis.html (vendored)  Normal file → Executable file

0  docs/_build/html/_modules/cristallina/plot.html (vendored)  Normal file → Executable file

0  docs/_build/html/_modules/cristallina/skeleton.html (vendored)  Normal file → Executable file

0  docs/_build/html/_modules/cristallina/utils.html (vendored)  Normal file → Executable file

0  docs/_build/html/_modules/index.html (vendored)  Normal file → Executable file

0  docs/_build/html/_sources/api/cristallina.rst.txt (vendored)  Normal file → Executable file

0  docs/_build/html/_sources/api/modules.rst.txt (vendored)  Normal file → Executable file

0  docs/_build/html/_sources/authors.rst.txt (vendored)  Normal file → Executable file

0  docs/_build/html/_sources/changelog.rst.txt (vendored)  Normal file → Executable file

0  docs/_build/html/_sources/contributing.rst.txt (vendored)  Normal file → Executable file

0  docs/_build/html/_sources/index.rst.txt (vendored)  Normal file → Executable file

0  docs/_build/html/_sources/license.rst.txt (vendored)  Normal file → Executable file

0  docs/_build/html/_sources/readme.rst.txt (vendored)  Normal file → Executable file

0  docs/_build/html/_static/alabaster.css (vendored)  Normal file → Executable file

0  docs/_build/html/_static/basic.css (vendored)  Normal file → Executable file

0  docs/_build/html/_static/custom.css (vendored)  Normal file → Executable file

0  docs/_build/html/_static/doctools.js (vendored)  Normal file → Executable file

0  docs/_build/html/_static/documentation_options.js (vendored)  Normal file → Executable file

0  docs/_build/html/_static/file.png (vendored)  Normal file → Executable file (binary, 286 B, unchanged)

0  docs/_build/html/_static/language_data.js (vendored)  Normal file → Executable file

0  docs/_build/html/_static/minus.png (vendored)  Normal file → Executable file (binary, 90 B, unchanged)

0  docs/_build/html/_static/plus.png (vendored)  Normal file → Executable file (binary, 90 B, unchanged)

0  docs/_build/html/_static/pygments.css (vendored)  Normal file → Executable file

0  docs/_build/html/_static/searchtools.js (vendored)  Normal file → Executable file

0  docs/_build/html/_static/sphinx_highlight.js (vendored)  Normal file → Executable file

0  docs/_build/html/api/cristallina.html (vendored)  Normal file → Executable file

0  docs/_build/html/api/modules.html (vendored)  Normal file → Executable file

0  docs/_build/html/authors.html (vendored)  Normal file → Executable file

0  docs/_build/html/changelog.html (vendored)  Normal file → Executable file

0  docs/_build/html/contributing.html (vendored)  Normal file → Executable file

0  docs/_build/html/genindex.html (vendored)  Normal file → Executable file

0  docs/_build/html/index.html (vendored)  Normal file → Executable file

0  docs/_build/html/license.html (vendored)  Normal file → Executable file

0  docs/_build/html/objects.inv (vendored)  Normal file → Executable file

0  docs/_build/html/py-modindex.html (vendored)  Normal file → Executable file

0  docs/_build/html/readme.html (vendored)  Normal file → Executable file

0  docs/_build/html/search.html (vendored)  Normal file → Executable file

0  docs/_build/html/searchindex.js (vendored)  Normal file → Executable file

0  docs/_static/.gitignore (vendored)  Normal file → Executable file

0  docs/api/cristallina.rst  Normal file → Executable file

0  docs/api/modules.rst  Normal file → Executable file

0  docs/authors.rst  Normal file → Executable file

0  docs/changelog.rst  Normal file → Executable file

0  docs/conf.py  Normal file → Executable file

0  docs/contributing.rst  Normal file → Executable file

0  docs/index.rst  Normal file → Executable file

0  docs/license.rst  Normal file → Executable file

0  docs/readme.rst  Normal file → Executable file

0  docs/requirements.txt  Normal file → Executable file

0  examples/2d_gaussian_fitting_example.ipynb  Normal file → Executable file

763  examples/SmallExample.ipynb  Executable file

File diff suppressed because one or more lines are too long

212  examples/data_examples.ipynb  Executable file

File diff suppressed because one or more lines are too long

0  pyproject.toml  Normal file → Executable file

0  pytest.ini  Normal file → Executable file

14  requirements.txt  Executable file

@@ -0,0 +1,14 @@
#
importlib_metadata
joblib>=1.2.0
jungfrau_utils>=3.12.1
lmfit>=1.1.0
matplotlib
scikit-image # for the image histogram equalization
PyYAML
scipy
setuptools
Sphinx
tqdm
xraydb
partialjson # for parsing scan.json that are in progress

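The partialjson entry exists so that scan.json files of runs still in progress can be read (see the uscan/UnfinishedScan additions below). A minimal sketch of that use, assuming partialjson's JSONParser API; the truncated string is illustrative:

# Sketch: best-effort parse of a scan.json that is still being written.
# Assumes partialjson's JSONParser API; the truncated content is illustrative.
from partialjson.json_parser import JSONParser

parser = JSONParser()
truncated = '{"scan_parameters": {"name": "mono_energy"}, "scan_files": [["acq0001.JF16T03V02.h5"'
scan_info = parser.parse(truncated)  # completes the unclosed JSON as far as possible
print(scan_info["scan_parameters"]["name"])  # -> "mono_energy"
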
0  setup.cfg  Normal file → Executable file

0  setup.py  Normal file → Executable file

58  (deleted file: SSH tunnel helper script for RA/jupytera; file name not shown in this view)

@@ -1,58 +0,0 @@
#!/bin/bash
### Small script to create an SSH tunnel from outside of the PSI network to access RA.
### Change to your PSI account username and run. Once running, going to https://localhost:5000 leads to jupytera.psi.ch without needing a VPN.

def_user=$USER
def_addr="jupytera"
def_port="5000"
def_debug=false
def_node=false

function show_usage {
    echo "usage: $0 [-u USER] [-p PORT] [-s]"
    echo "  -u, --user USER     set user (default: ${def_user})"
    echo "  -p, --port PORT     set local port (default: ${def_port})"
    echo "  -s, --staging       tunnel to jupytera staging (default: jupytera production)"
    echo "  -n, --node          node at which jupyter is running"
    echo "  -d, --debug         turn on debug mode (default: off)"
    echo "  -h, --help, -?      show this help"
}

user=${def_user}
addr=${def_addr}
port=${def_port}
debug=${def_debug}
node=${def_node}

while [[ "$#" -gt 0 ]]; do
    case $1 in
        -u|--user) user="$2" ; shift ;;
        -p|--port) port="$2" ; shift ;;
        -n|--node) node="$2" ; shift ;;
        -s|--staging) addr="jupytera-staging" ;;
        -d|--debug) debug=true ;;
        -h|--help|-\?) show_usage ; exit ;;
        *) echo "unknown argument: $1" ; show_usage ; exit 1 ;;
    esac
    shift
done

echo "Creating tunnel to ${addr}.psi.ch on https://localhost:${port}/ ..."
echo
echo "Username: ${user}"

# If no node is given, link to a generic jupyter from the spawner. If a node is given and a notebook is already started there, link there.
if [ "${node}" = false ]; then
    cmd="ssh -J ${user}@hop.psi.ch ${user}@ra.psi.ch -L ${port}:${addr}.psi.ch:443"
else
    echo "Establishing tunnel to ${node}"
    cmd="ssh -J ${user}@hop.psi.ch ${user}@ra.psi.ch -L ${port}:${node}.psi.ch:8888"
fi

if ${debug} ; then
    echo "DEBUG:" $cmd
else
    $cmd
fi

0  src/cristallina/SEA_GraphClient.py  Normal file → Executable file

17  src/cristallina/__init__.py  Normal file → Executable file

@@ -15,9 +15,16 @@ except PackageNotFoundError: # pragma: no cover
 finally:
     del version, PackageNotFoundError

 try:
-    from . import utils
-    from . import plot
-    from . import analysis
-    from . import image
-    from . import channels
+    from . import utils
+    from . import plot
+    from . import analysis
+    from . import image
+    from . import channels
+    from . import uscan
+except (ImportError, ModuleNotFoundError) as e:
+    # Handle the case where the package is not installed
+    # or if there are issues with the imports
+    print(f"Error importing modules: {e}")

280  src/cristallina/analysis.py  Normal file → Executable file

@@ -1,7 +1,8 @@
import re
from collections import defaultdict, deque
from collections import defaultdict
from pathlib import Path
from typing import Optional
import logging
import numpy as np
from matplotlib import pyplot as plt
@@ -10,12 +11,15 @@ import lmfit
from sfdata import SFDataFiles, sfdatafile, SFScanInfo
import joblib
from joblib import Memory
from . import utils
from . import channels
from .utils import ROI
logger = logging.getLogger(__name__)
def setup_cachedirs(pgroup=None, cachedir=None):
"""
Sets the path to a persistent cache directory either from the given p-group (e.g. "p20841")
@@ -36,26 +40,30 @@ def setup_cachedirs(pgroup=None, cachedir=None):
parts = re.split(r"(\d.*)", pgroup) # ['p', '2343', '']
pgroup_no = parts[-2]
candidates = [f"/sf/cristallina/data/p{pgroup_no}/work",
f"/sf/cristallina/data/p{pgroup_no}/res",]
candidates = [
f"/sf/cristallina/data/p{pgroup_no}/work",
f"/sf/cristallina/data/p{pgroup_no}/res",
]
for cache_parent_dir in candidates:
if Path(cache_parent_dir).exists():
cachedir = Path(cache_parent_dir) / "cachedir"
break
except KeyError as e:
print(e)
logger.warning(f"Could not determine p-group due to {e}. Using default cachedir.")
cachedir = "/das/work/units/cristallina/p19739/cachedir"
try:
memory = Memory(cachedir, verbose=0, compress=2)
except PermissionError as e:
logger.warning(f"Could not use cachedir {cachedir} due to {e}. Using /tmp instead.")
cachedir = "/tmp"
memory = Memory(cachedir, verbose=0, compress=2)
return memory
memory = setup_cachedirs()
@@ -68,6 +76,8 @@ def perform_image_calculations(
roi: Optional[ROI] = None,
preview=False,
operations=["sum"],
lower_cutoff_threshold=None,
upper_cutoff_threshold=None,
):
"""
Performs one or more calculations ("sum", "mean" or "std") for a given region of interest (roi)
@@ -112,6 +122,12 @@ def perform_image_calculations(
else:
im_ROI = im[:, roi.rows, roi.cols]
# use cutoff to set values to 0 below
if lower_cutoff_threshold is not None:
im_ROI = np.where(im_ROI < lower_cutoff_threshold, 0, im_ROI)
if upper_cutoff_threshold is not None:
im_ROI = np.where(im_ROI > upper_cutoff_threshold, 0, im_ROI)
# iterate over all operations
for op in operations:
label, func = possible_operations[op]
@@ -134,6 +150,7 @@ def sum_images(
batch_size=10,
roi: Optional[ROI] = None,
preview=False,
lower_cutoff_threshold=None,
):
"""
Sums a given region of interest (roi) for an image channel from a
@@ -157,29 +174,7 @@ def sum_images(
roi=roi,
preview=preview,
operations=["sum"],
)
def get_contrast_images(
filesets,
channel="JF16T03V01",
alignment_channels=None,
batch_size=10,
roi: Optional[ROI] = None,
preview=False,
):
"""
See perform_image_calculations. Here calculates mean and standard deviation for a given set of images.
"""
return perform_image_calculations(
filesets,
channel=channel,
alignment_channels=alignment_channels,
batch_size=batch_size,
roi=roi,
preview=preview,
operations=["mean", "std"],
lower_cutoff_threshold=lower_cutoff_threshold,
)
@@ -191,12 +186,12 @@ def perform_image_stack_sum(
batch_size=10,
roi: Optional[ROI] = None,
preview=False,
# operations=["sum"],
lower_cutoff_threshold=None,
lower_cutoff_threshold=None, # in keV
upper_cutoff_threshold=None, # in keV
):
"""
Performs one or more calculations ("sum", "mean" or "std") for a given region of interest (roi)
for an image channel from a fileset (e.g. "run0352/data/acq0001.*.h5" or step.fnames from a SFScanInfo object).
Performs summation along the pulse dimension for a given region of interest (roi) for an image channel
from a fileset (e.g. "run0352/data/acq0001.*.h5" or step.fnames from a SFScanInfo object).
Allows alignment, i.e. reducing only to a common subset with other channels.
@@ -204,16 +199,9 @@ def perform_image_stack_sum(
Preview only applies calculation to first batch and returns.
Returns a dictionary ({"JF16T03V01_intensity":[11, 18, 21, 55, ...]})
with the given channel values for each pulse and corresponding pulse id.
Returns: A 2D array with the summed image over all pulses without missing data.
"""
possible_operations = {
"sum": ["intensity", np.sum],
"mean": ["mean", np.mean],
"std": ["mean", np.std],
}
with SFDataFiles(*filesets) as data:
if alignment_channels is not None:
channels = [channel] + [ch for ch in alignment_channels]
@@ -233,10 +221,8 @@ def perform_image_stack_sum(
im_ROI = im[roi.rows, roi.cols]
summed = np.zeros(im_ROI[0].shape)
for image_slice in Images.in_batches(batch_size):
index_slice, im = image_slice
for index_slice, im in Images.in_batches(batch_size):
if roi is None:
im_ROI = im
@@ -245,6 +231,8 @@ def perform_image_stack_sum(
if lower_cutoff_threshold is not None:
im_ROI = np.where(im_ROI < lower_cutoff_threshold, 0, im_ROI)
if upper_cutoff_threshold is not None:
im_ROI = np.where(im_ROI > upper_cutoff_threshold, 0, im_ROI)
summed = summed + np.sum(im_ROI, axis=(0))
@@ -267,10 +255,10 @@ def perform_image_roi_crop(
upper_cutoff=np.inf,
):
"""
Cuts out a region of interest (ROI) for an image channel
Cuts out a region of interest (ROI) for an image channel
from a fileset (e.g. "run0352/data/acq0001.*.h5" or step.fnames from a SFScanInfo object).
Drops missing data from output and allows alignment,
Drops missing data from output and allows alignment,
i.e. reducing only to a common subset with other channels.
Calculations are performed in batches to reduce maximum memory requirements.
@@ -297,10 +285,10 @@ def perform_image_roi_crop(
Images = subset[channel]
rois_within_batch = list()
rois_within_batch = []
for image_slice in Images.in_batches(batch_size):
index_slice, im = image_slice
if roi is None:
@@ -316,10 +304,164 @@ def perform_image_roi_crop(
# only return first batch
if preview:
break
return np.array(rois_within_batch)
@memory.cache()
def calculate_JF_stacks(
scan: SFScanInfo,
lower_cutoff_threshold=7.6, # in keV
upper_cutoff_threshold=None, # in keV
exclude_steps: list[int] | None = None,
recompute: bool = False,
detector: str = "JF16T03V02",
):
"""Calculate image stacks for JF detectors in a scan.
Args:
scan (SFScanInfo): The scan object containing scan steps.
lower_cutoff_threshold: Threshold to apply to pixel values before summation in keV.
upper_cutoff_threshold: Upper threshold to apply to pixel values before summation in keV.
exclude_steps: List of step indices to exclude from processing. Defaults to None.
recompute: If True, forces recomputation even if cached results are available. Defaults to False.
detector: The detector channel to process. Defaults to "JF16T03V02" (JF 1.5M).
Returns:
stacks: List of summed image stacks for each step.
I0_norms: List of I0 normalizations for each step.
"""
stacks = []
I0_norms = []
for i, step in enumerate(scan):
if exclude_steps is not None and i in exclude_steps:
logger.info(f"skipping step {i}")
continue
JF_channels = [channels.JF, channels.JF_8M, channels.JF_I0, channels.GASMONITOR]
available_channels = [ch.name for ch in step.channels]
selected_channels = [ch for ch in JF_channels if ch in available_channels]
subset = step[*selected_channels]
pids_before = subset.pids.copy()
subset.drop_missing()
pids_after = subset.pids.copy()
if set(pids_before) != set(pids_after):
logger.warning(
f"Step {i}: dropped {set(pids_before) - set(pids_after)} pulse IDs due to missing data."
)
# we only consider the JF files here
files = [f.fname for f in step.files if "JF" in f.fname]
stack = perform_image_stack_sum(
files,
channel=detector,
lower_cutoff_threshold=lower_cutoff_threshold,
upper_cutoff_threshold=upper_cutoff_threshold,
)
stacks.append(stack)
# TODO: define roi for I0 detector
stack_I0 = perform_image_roi_crop(
files,
channel=channels.JF_I0,
lower_cutoff=2,
)
I0_norm = np.sum(stack_I0, axis=(0, 1, 2))
I0_norms.append(I0_norm)
return np.array(stacks), np.array(I0_norms)
@memory.cache(ignore=["batch_size"]) # we ignore batch_size for caching purposes
def calculate_image_histograms(
filesets,
channel="JF16T03V01",
alignment_channels=None,
batch_size=10,
roi: Optional[ROI] = None,
preview=False,
lower_cutoff_threshold=None,
bins=None,
):
"""
Calculates a histogram for a given region of interest (roi)
for an image channel from a fileset (e.g. "run0352/data/acq0001.*.h5" or step.fnames from a SFScanInfo object).
Allows alignment, i.e. reducing only to a common subset with other channels.
Calculations are performed in batches to reduce maximum memory requirements.
Preview only applies calculation to first batch and returns.
Returns:
(histogram, bins)
"""
with SFDataFiles(*filesets) as data:
if alignment_channels is not None:
channels = [channel] + [ch for ch in alignment_channels]
else:
channels = [channel]
subset = data[channels]
subset.drop_missing()
Images = subset[channel]
# create empty array for stack sum with right shape
im = Images[0]
if roi is None:
im_ROI = im[:]
else:
im_ROI = im[roi.rows, roi.cols]
summed = np.zeros(im_ROI[0].shape)
if bins is None:
for image_slice in Images.in_batches(batch_size):
index_slice, im = image_slice
if roi is None:
im_ROI = im
else:
im_ROI = im[:, roi.rows, roi.cols]
if lower_cutoff_threshold is not None:
im_ROI = np.where(im_ROI < lower_cutoff_threshold, 0, im_ROI)
bins = np.histogram_bin_edges(im.flatten(), bins="auto")
# only return first batch to calculate bins
break
all_hist = np.zeros(len(bins) - 1)
for image_slice in Images.in_batches(batch_size):
index_slice, im = image_slice
if roi is None:
im_ROI = im
else:
im_ROI = im[:, roi.rows, roi.cols]
if lower_cutoff_threshold is not None:
im_ROI = np.where(im_ROI < lower_cutoff_threshold, 0, im_ROI)
if bins is None:
bins = np.histogram_bin_edges(im.flatten(), bins="auto")
summed = summed + np.sum(im_ROI, axis=(0))
hist, _ = np.histogram(im.flatten(), bins=bins)
all_hist += hist
# only return first batch
if preview:
break
return all_hist, bins
def fit_2d_gaussian(image, roi: Optional[ROI] = None, plot=False):
"""
2D Gaussian fit using LMFit for a given image and an optional region of interest.
@@ -347,7 +489,7 @@ def fit_2d_gaussian(image, roi: Optional[ROI] = None, plot=False):
z = im.ravel() # and this also as a 1D
model = lmfit.models.Gaussian2dModel()
params = model.guess(z.astype('float'), x.astype('float'), y.astype('float'))
params = model.guess(z.astype("float"), x.astype("float"), y.astype("float"))
result = model.fit(
z,
x=x,
@@ -378,7 +520,7 @@ def _plot_2d_gaussian_fit(im, z, model, result):
"""Plot helper function to use the current image data, model and fit result and
plots them together.
"""
from matplotlib import pyplot as plt
from scipy.interpolate import griddata
len_y, len_x = im.shape
@@ -407,9 +549,7 @@ def _plot_2d_gaussian_fit(im, z, model, result):
ax = axs[1, 0]
fit = model.func(X, Y, **result.best_values)
art = ax.pcolormesh(
X, Y, Z - fit, vmin=-0.05 * vmax, vmax=0.05 * vmax, cmap="gray", shading="auto"
)
art = ax.pcolormesh(X, Y, Z - fit, vmin=-0.05 * vmax, vmax=0.05 * vmax, cmap="gray", shading="auto")
fig.colorbar(art, ax=ax, label="z", shrink=0.5)
ax.set_title("Data - Fit")
@@ -544,7 +684,7 @@ def fit_2d_gaussian_rotated(
center_x = result.params["centerx"].value
center_y = result.params["centery"].value
if plot == True:
if plot:
_plot_2d_gaussian_fit(im, z, mod, result)
return center_x, center_y, result
@@ -552,7 +692,7 @@ def fit_2d_gaussian_rotated(
def fit_1d_gaussian(x, y, use_offset=True, ax=None, print_results=False):
"""
1D-Gaussian fit with optional constant offset using LMFIT.
1D-Gaussian fit with optional constant offset using LMFIT.
Uses a heuristic guess for initial parameters.
Returns: lmfit.model.ModelResult
@@ -564,19 +704,23 @@ def fit_1d_gaussian(x, y, use_offset=True, ax=None, print_results=False):
model = peak + offset
if use_offset:
pars = offset.make_params(c = np.median(y))
pars = offset.make_params(c=np.median(y))
else:
pars = offset.make_params(c=0)
pars['c'].vary = False
pars += peak.guess(y, x, amplitude=(np.max(y)-np.min(y))/2)
result = model.fit(y, pars, x=x,)
pars["c"].vary = False
pars += peak.guess(y, x, amplitude=(np.max(y) - np.min(y)) / 2)
result = model.fit(
y,
pars,
x=x,
)
if print_results:
print(result.fit_report())
if ax is not None:
ax.plot(x, result.best_fit, label='fit')
ax.plot(x, result.best_fit, label="fit")
return result
return result

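A short usage sketch for the new calculate_JF_stacks entry point defined above (the p-group and run paths are hypothetical; thresholds are in keV as documented):

# Sketch: per-step JF image stacks with an I0 normalization.
# The scan.json path is a hypothetical example.
from sfdata import SFScanInfo
from cristallina import analysis

scan = SFScanInfo("/sf/cristallina/data/p12345/raw/run0042/meta/scan.json")
stacks, I0_norms = analysis.calculate_JF_stacks(
    scan,
    lower_cutoff_threshold=7.6,   # keV, suppress charge-shared pixels
    upper_cutoff_threshold=None,  # keV, optional upper cut
    detector="JF16T03V02",
)
normalized = stacks / I0_norms[:, None, None]  # one summed image per scan step
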
25  src/cristallina/channels.py  Normal file → Executable file

@@ -2,14 +2,35 @@
GASMONITOR = 'SARFE10-PBIG050-EVR0:CALCI'
PULSE_E_AVG = 'SARFE10-PBPG050:PHOTON-ENERGY-PER-PULSE-AVG'
# Event receiver
# Event receiver BS channel
EVR = 'SAR-CVME-TIFALL6:EvtSet'
# Jungfrau 1.5M detector
JF = 'JF16T03V01'
JF = 'JF16T03V02'
# Jungfrau 8M detector
JF_8M = "JF17T16V01"
# JF I0 monitor and stripsel detector
JF_I0 = 'JF20T01V01'
JF_strip = 'JF05T01V01'
# PSSS
PSSS_X = 'SARFE10-PSSS059:SPECTRUM_X'
PSSS_Y = 'SARFE10-PSSS059:SPECTRUM_Y'
PSSS_SUM = 'SARFE10-PSSS059:SPECTRUM_Y_SUM'
PSSS_COM = 'SARFE10-PSSS059:SPECT-COM'
# Beam position monitors
PBPS149_intensity = "SAROP31-PBPS149:INTENSITY_UJ"
PBPS113_intensity = "SAROP31-PBPS113:INTENSITY_UJ"
PBPS053_intensity = "SARFE10-PBPS053:INTENSITY"
# X-ray eye
XEYE = "SARES30-CAMS156-XE:FPICTURE"
# Detector Up/Down stage
TDY = "SARES30-MOBI2:MOT_Z.RBV"
# Monochromator energy
MONO_ENERGY = "SAROP31-ODCC110:MOT_ENY.RBV"

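The constants above are meant to be used directly when selecting channels from BS data files; a minimal sketch mirroring how analysis.py builds its subsets (the file glob is a hypothetical example):

# Sketch: select the JF detector plus an alignment channel and drop missing pulses.
from sfdata import SFDataFiles
from cristallina import channels

with SFDataFiles("/sf/cristallina/data/p12345/raw/run0042/data/acq0001.*.h5") as data:
    subset = data[[channels.JF, channels.GASMONITOR]]
    subset.drop_missing()
    first_image = subset[channels.JF][0]
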
0  src/cristallina/config.py  Normal file → Executable file

0  src/cristallina/cristallina_style.mplstyle  Normal file → Executable file

0  src/cristallina/image.py  Normal file → Executable file

39  src/cristallina/jupyter_helper.py  Normal file → Executable file

@@ -13,19 +13,48 @@ This should work on the local consoles and also on RA (yet untested).
from packaging.version import Version
import random
import os
import jupyter_collaboration
if Version(jupyter_collaboration.__version__) < Version("1.9"):
from ypy_websocket.ystore import SQLiteYStore
else:
# approach for >=2.0
elif Version(jupyter_collaboration.__version__) >= Version("2.0") and Version(jupyter_collaboration.__version__) < Version("3.0a1"):
# approach for >=2.0 but smaller than 3.0
from jupyter_collaboration.stores import SQLiteYStore
elif Version(jupyter_collaboration.__version__) >= Version("3.0a1"):
# approach for >=3.0
from jupyter_server_ydoc.stores import SQLiteYStore
class MySQLiteYStore(SQLiteYStore):
"""
Custom SQL location for jupyter collaboration to avoid NFS locking issues.
"""
suffix = random.randint(0, 1E9)
db_path = f"/tmp/ystore_{suffix}.db"
suffix = random.randint(int(1E9), int(9E9))
if os.path.isdir('/scratch'):
# We are on RA and should use the scratch space here
db_path = f"/scratch/ystore_{suffix}.db"
else:
db_path = f"/tmp/ystore_{suffix}.db"
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self.db_path = MySQLiteYStore.db_path
print(f"Using custom SQLiteYStore at {self.db_path}")
self.log.info(f"Using custom SQLiteYStore with path {self.db_path}")
if self.document_ttl is not None:
print(f"Using database_ttl: {self.document_ttl}")
self.log.info(f"Using database_ttl: {self.document_ttl}")
self.document_ttl = int(self.document_ttl)
def __del__(self):
# Delete the temporary database file upon shutdown
try:
if os.path.exists(self.db_path):
os.remove(self.db_path)
except OSError as e:
self.log.warning(f"Error when removing temporary database: {e.strerror}")

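To take effect, jupyter-collaboration has to be pointed at MySQLiteYStore; a hedged sketch of a jupyter_server_config.py entry (the YDocExtension.ystore_class traitlet name is an assumption, and its home differs between jupyter-collaboration 1.x/2.x and 3.x):

# Sketch of a jupyter_server_config.py line; the traitlet name is an assumption.
c.YDocExtension.ystore_class = "cristallina.jupyter_helper.MySQLiteYStore"
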
18  src/cristallina/names.py  Executable file

@@ -0,0 +1,18 @@
### Channels for slic
GAS_MONITOR = 'SARFE10-PBIG050-EVR0:CALCI'
PDIM113_INT = 'SAROP31-PBPS113:Lnk9Ch0-PP_VAL_PD4'
PSCD153_INT = 'SAROP31-PBPS149:Lnk9Ch0-PP_VAL_PD4'
PBPS053_INT = 'SARFE10-PBPS053:INTENSITY'
PBPS053_XPOS = 'SARFE10-PBPS053:XPOS'
PBPS053_YPOS = 'SARFE10-PBPS053:YPOS'
PBPS113_INT = 'SAROP31-PBPS113:INTENSITY'
PBPS113_XPOS = 'SAROP31-PBPS113:XPOS'
PBPS113_YPOS = 'SAROP31-PBPS113:YPOS'
PBPS149_INT = 'SAROP31-PBPS149:INTENSITY'
PBPS149_XPOS = 'SAROP31-PBPS149:XPOS'
PBPS149_YPOS = 'SAROP31-PBPS149:YPOS'

36  (new file: beamtime helper module for p-group p22478; file name not shown in this view)

@@ -0,0 +1,36 @@
import os
import json
from pathlib import Path
from collections import defaultdict, deque
from copy import copy
import time
import scipy
import numpy as np
import pandas as pd
from tqdm.notebook import tqdm
import re
from pathlib import Path
import warnings
from matplotlib import pyplot as plt
import matplotlib as mpl
import cristallina.utils as cu
import cristallina as cr
from loguru import logger
from cristallina.uscan import UnfinishedScan
from joblib import Parallel, delayed, Memory
import sfdata
from sfdata import SFProcFile, SFDataFile, SFDataFiles, SFScanInfo
memory = Memory(location="/sf/cristallina/data/p22478/work/joblib", compress=2, verbose=2)
pgroup = "p22478"
print(f"p22478 module imported.")

69  (new file: beamtime helper module for p-group p22760; file name not shown in this view)

@@ -0,0 +1,69 @@
import os
import json
from pathlib import Path
from collections import defaultdict, deque
from copy import copy
import time
import scipy
import numpy as np
import pandas as pd
from tqdm.notebook import tqdm
import re
from pathlib import Path
import warnings
from matplotlib import pyplot as plt
import matplotlib as mpl
import cristallina.utils as cu
import cristallina as cr
from loguru import logger
from cristallina.uscan import UnfinishedScan
from joblib import Parallel, delayed, Memory
import sfdata
from sfdata import SFProcFile, SFDataFile, SFDataFiles, SFScanInfo
memory = Memory(location="/sf/cristallina/data/p22760/work/joblib", compress=2, verbose=2)
pgroup = "p22760"
print(f"p22760 module imported.")
def get_theta_prediction(B_field):
"""Get predicted theta angle from B field using linear interpolation.
Args:
B_field (float): Magnetic field value.
"""
B = np.array([0. , 0. , 0.05 , 0.1 , 0.15 , 0.2 , 0.25 , 0.3 ,
0.35 , 0.4 , 0.5 , 0.525 , 0.53 , 0.55 , 0.555 , 0.56 ,
0.565 , 0.5675, 0.57 , 0.5725, 0.575 , 0.6 ])
left = np.array([-16.72402197, -16.72592595, -16.72266943, -16.71780642,
-16.70710748, -16.68879129, -16.67129814, -16.64430405,
-16.61258093, -16.57162955, -16.43958512, -16.36767873,
-16.35235521, -16.20000072, -16.14585335, -16.14986136,
-16.14953497, -16.15033841, -16.14907281, -16.14883662,
-16.14936827, -16.14544489])
right = np.array([-15.82641889, -15.82810495, -15.82810142, -15.83362565,
-15.84191596, -15.85030174, -15.86875095, -15.88716295,
-15.91257856, -15.94358668, -16.0436225 , -16.07455302,
-16.08374928, -16.13199655, -16.14585335, -16.14986136,
-16.14953497, -16.15033841, -16.14907281, -16.14883662,
-16.14936827, -16.14544489])
interpolation_right = scipy.interpolate.interp1d(B, right, kind='linear')
interpolation_left = scipy.interpolate.interp1d(B, left, kind='linear')
theta_right = interpolation_right(B_field)
theta_left = interpolation_left(B_field)
return theta_left, theta_right

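For example, the predicted left/right theta at an intermediate field follows directly from the interpolation (field value illustrative):

# Sketch: interpolate the calibration table at B = 0.3 T.
theta_left, theta_right = get_theta_prediction(0.3)
print(f"left: {theta_left:.3f} deg, right: {theta_right:.3f} deg")
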
94  src/cristallina/plot.py  Normal file → Executable file

@@ -1,7 +1,7 @@
import re
from collections import defaultdict
import matplotlib
import matplotlib as mpl
from matplotlib import pyplot as plt
import warnings
@@ -12,7 +12,6 @@ warnings.simplefilter("ignore", DeprecationWarning)
import numpy as np
from tqdm import tqdm
from matplotlib import patches
from pathlib import Path
from sfdata import SFDataFiles, sfdatafile, SFScanInfo
@@ -24,24 +23,29 @@ from .utils import ROI
# setup style sheet
plt.style.use("cristallina.cristallina_style")
def ju_patch_less_verbose(ju_module):
"""Quick monkey patch to suppress verbose messages from gain & pedestal file searcher."""
ju_module.swissfel_helpers._locate_gain_file = ju_module.swissfel_helpers.locate_gain_file
ju_module.swissfel_helpers._locate_pedestal_file = ju_module.swissfel_helpers.locate_pedestal_file
"""Quick monkey patch to suppress verbose messages from gain & pedestal file searcher.
No longer required for newer versions of ju."""
def less_verbose_gain(*args, **kwargs):
kwargs["verbose"] = False
return ju_module.swissfel_helpers._locate_gain_file(*args, **kwargs)
if hasattr(ju_module, "swissfel_helpers"):
def less_verbose_pedestal(*args, **kwargs):
kwargs["verbose"] = False
return ju_module.swissfel_helpers._locate_pedestal_file(*args, **kwargs)
ju_module.swissfel_helpers._locate_gain_file = ju_module.swissfel_helpers.locate_gain_file
ju_module.swissfel_helpers._locate_pedestal_file = ju_module.swissfel_helpers.locate_pedestal_file
# ju_module.swissfel_helpers.locate_gain_file = less_verbose_gain
# ju_module.swissfel_helpers.locate_pedestal_file = less_verbose_pedestal
def less_verbose_gain(*args, **kwargs):
kwargs["verbose"] = False
return ju_module.swissfel_helpers._locate_gain_file(*args, **kwargs)
ju_module.file_adapter.locate_gain_file = less_verbose_gain
ju_module.file_adapter.locate_pedestal_file = less_verbose_pedestal
def less_verbose_pedestal(*args, **kwargs):
kwargs["verbose"] = False
return ju_module.swissfel_helpers._locate_pedestal_file(*args, **kwargs)
# ju_module.swissfel_helpers.locate_gain_file = less_verbose_gain
# ju_module.swissfel_helpers.locate_pedestal_file = less_verbose_pedestal
ju_module.file_adapter.locate_gain_file = less_verbose_gain
ju_module.file_adapter.locate_pedestal_file = less_verbose_pedestal
ju_patch_less_verbose(ju)
@@ -157,7 +161,18 @@ def plot_2d_channel(data: SFDataFiles, channel_name, ax=None):
axis_styling(ax, channel_name, description)
def plot_detector_image(image_data, title=None, comment=None, ax=None, rois=None, norms=None, log_colorscale=False, show_legend=True, **fig_kw):
def plot_detector_image(
image_data,
title=None,
comment=None,
ax=None,
rois=None,
norms=None,
log_colorscale=False,
show_legend=True,
ax_colormap=matplotlib.colormaps["viridis"],
**fig_kw,
):
"""
Plots channel data for a channel that contains an image (2d array) of numeric values per pulse.
Optional:
@@ -166,6 +181,7 @@ def plot_detector_image(image_data, title=None, comment=None, ax=None, rois=None
- log_colorscale: True for a logarithmic colormap
- title: Title of the plot
- show_legend: True if the legend box should be drawn
- ax_colormap: a matplotlib colormap (viridis by default)
"""
im = image_data
@@ -187,7 +203,8 @@ def plot_detector_image(image_data, title=None, comment=None, ax=None, rois=None
else:
norm = matplotlib.colors.Normalize(vmin=norms[0], vmax=norms[1])
ax.imshow(im, norm=norm)
ax.imshow(im, norm=norm, cmap=ax_colormap)
ax.invert_yaxis()
if rois is not None:
@@ -211,10 +228,9 @@ def plot_detector_image(image_data, title=None, comment=None, ax=None, rois=None
description = f"mean: {mean:.2e},\nstd: {std:.2e}"
if not show_legend:
description=""
axis_styling(ax, title, description)
description = ""
axis_styling(ax, title, description)
def plot_image_channel(data: SFDataFiles, channel_name, pulse=0, ax=None, rois=None, norms=None, log_colorscale=False):
@@ -228,9 +244,7 @@ def plot_image_channel(data: SFDataFiles, channel_name, pulse=0, ax=None, rois=N
image_data = data[channel_name][pulse]
plot_detector_image(
image_data, title=channel_name, ax=ax, rois=rois, norms=norms, log_colorscale=log_colorscale
)
plot_detector_image(image_data, title=channel_name, ax=ax, rois=rois, norms=norms, log_colorscale=log_colorscale)
def plot_spectrum_channel(data: SFDataFiles, channel_name_x, channel_name_y, average=True, pulse=0, ax=None):
@@ -257,3 +271,37 @@ def plot_spectrum_channel(data: SFDataFiles, channel_name_x, channel_name_y, ave
description = None # f"mean: {mean:.2e},\nstd: {std:.2e}"
ax.set_xlabel(channel_name_x)
axis_styling(ax, channel_name_y, description)
def line_plot_with_colorbar(xs,ys,colors, cmap=plt.cm.viridis,
markers='o',markersize=6,alpha=1,
title=None,xlabel=None,ylabel=None,cbar_label=None,
**fig_kw):
'''Plot lines with colorbar.
xs, ys -> array of arrays
colors -> array
'''
fig,ax = plt.subplots(1,1,constrained_layout=True,**fig_kw)
# normalise to [0..1]
norm = mpl.colors.Normalize(vmin=np.min(colors),vmax=np.max(colors))
# create a ScalarMappable and initialize a data structure
s_m = mpl.cm.ScalarMappable(cmap=cmap, norm=norm)
s_m.set_array([])
for x,y,col in zip(xs,ys,colors):
ax.plot(x,y,color=s_m.to_rgba(col),marker=markers,markersize=markersize,alpha=alpha)
if title:
plt.suptitle(title)
if xlabel:
ax.set_xlabel(xlabel)
if ylabel:
ax.set_ylabel(ylabel)
# add colorbar
fig.colorbar(s_m,ax=ax,ticks=colors,label=cbar_label,alpha=alpha)
def get_normalized_colormap(vmin, vmax, cmap=mpl.cm.viridis):
norm = mpl.colors.Normalize(vmin=vmin, vmax=vmax)
return lambda x: cmap(norm(x))

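A minimal usage sketch for the new line_plot_with_colorbar helper, with synthetic data (axis labels are illustrative):

# Sketch: several curves colored by a scalar parameter, synthetic data.
import numpy as np
from matplotlib import pyplot as plt
from cristallina.plot import line_plot_with_colorbar

x = np.linspace(0, 10, 200)
temperatures = [10, 150, 300]
xs = [x] * len(temperatures)
ys = [np.sin(x) * (1 + T / 300) for T in temperatures]
line_plot_with_colorbar(xs, ys, temperatures,
                        xlabel="delay (ps)", ylabel="intensity (arb. u.)",
                        cbar_label="temperature (K)")
plt.show()
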
901  src/cristallina/rsm.py  Executable file

@@ -0,0 +1,901 @@
import re
from copy import copy
from collections import defaultdict, deque
from pathlib import Path
from typing import Optional
import logging
import time
import numpy as np
from matplotlib import pyplot as plt
from sfdata import SFDataFiles, sfdatafile, SFScanInfo
from joblib import Memory
from . import utils
from . import channels, analysis
from .utils import ROI
logger = logging.getLogger(__name__)
def setup_cachedirs(pgroup=None, cachedir=None):
"""
Sets the path to a persistent cache directory either from the given p-group (e.g. "p20841")
or an explicitly given directory.
If heuristics fail we use "/tmp" as a non-persistent alternative.
"""
if cachedir is not None:
# explicit directory given, use this choice
memory = Memory(cachedir, verbose=0, compress=2)
return memory
try:
if pgroup is None:
pgroup_no = utils.heuristic_extract_pgroup()
else:
parts = re.split(r"(\d.*)", pgroup) # ['p', '2343', '']
pgroup_no = parts[-2]
candidates = [f"/sf/cristallina/data/p{pgroup_no}/work",
f"/sf/cristallina/data/p{pgroup_no}/res",]
for cache_parent_dir in candidates:
if Path(cache_parent_dir).exists():
cachedir = Path(cache_parent_dir) / "cachedir"
break
except KeyError as e:
print(e)
cachedir = "/das/work/units/cristallina/p19739/cachedir"
try:
memory = Memory(cachedir, verbose=0, compress=2)
except PermissionError as e:
cachedir = "/tmp"
memory = Memory(cachedir, verbose=0, compress=2)
return memory
memory = setup_cachedirs()
@memory.cache()
def RSM(run_number, sam, det, roi, offset=0, lower_cutoff_threshold=10, detector=channels.JF):
det = copy(det)
sam=copy(sam)
roi=copy(roi)
scan = utils.scan_info(run_number, use_uscan=True)
for i in range(2):
try:
channel_Exray='SAROP31-ODCC110:MOT_ENY.RBV'
energy=np.array(scan[0][channel_Exray])[1][0]
# detector 2 theta
channel_detangle='SARES31-GPS:ROT2THETA-BS'
tth=np.mean(np.array(scan[0][channel_detangle])[1])
break
except KeyError:
print("Waiting")
time.sleep(60)
readbacks=scan.readbacks
stacks, IO_norms = analysis.calculate_JF_stacks(scan,
lower_cutoff_threshold=lower_cutoff_threshold,
exclude_steps=None,
recompute=False,
detector=detector)
roi.values = np.array(stacks)[:, roi.rows, roi.cols]
roi.values = np.where(roi.values < lower_cutoff_threshold, 0, roi.values)
roi.values = roi.values/IO_norms[:,None, None]
roi.intensities = np.sum(roi.values[:], axis=(1, 2))
roi.intensities_normalized = roi.intensities/(roi.width*roi.height)
roi.intensities_I0 = roi.intensities/IO_norms
roicoord=[roi.left, roi.right, roi.bottom, roi.top]
coors_h_all=[]
coors_k_all=[]
coors_l_all=[]
inten_all=[]
det.move_angles(gamma=tth)
for i in range(len(readbacks)):
#
sam.rotate_sample(alpha=readbacks[i]+offset)
exp = Experiment(sam, det, energy)
# change the coordinates into h,k,l
hkl_temp=exp.ROI_hkl(roicoord)
if len(inten_all)==0:
coors_h_all=np.reshape(hkl_temp[0],-1)
coors_k_all=np.reshape(hkl_temp[1],-1)
coors_l_all=np.reshape(hkl_temp[2],-1)
inten_all=np.reshape(roi.values[i],-1)
else:
coors_h_all=np.concatenate((coors_h_all,np.reshape(hkl_temp[0],-1)))
coors_k_all=np.concatenate((coors_k_all,np.reshape(hkl_temp[1],-1)))
coors_l_all=np.concatenate((coors_l_all,np.reshape(hkl_temp[2],-1)))
inten_all=np.concatenate((inten_all,np.reshape(roi.values[i],-1)))
return coors_h_all, coors_k_all, coors_l_all, inten_all
@memory.cache()
def RSM_interpolate(coors_h_all, coors_k_all, coors_l_all, inten_all, hbound, kbound, gridsize):
lmin, lmax = coors_l_all.min(),coors_l_all.max()
l_grid=np.arange(lmin,lmax+gridsize,gridsize)
k_grid=np.arange(kbound[0], kbound[1]+gridsize, gridsize)
h_grid=np.arange(hbound[0], hbound[1]+gridsize, gridsize)
[H,K,L]=np.meshgrid(h_grid,k_grid,l_grid)
# interpolation
grided_all=griddata(np.transpose(np.array([coors_h_all,coors_k_all,coors_l_all])),inten_all,np.transpose(np.array([H,K,L])),method='nearest')
return grided_all
@memory.cache()
def RSM_full(run_number, sam, det, roi, hbound, kbound, gridsize, offset=0, lower_cutoff_threshold=10, detector=channels.JF):
det = copy(det)
sam=copy(sam)
roi=copy(roi)
coors_h_all, coors_k_all, coors_l_all, inten_all = RSM(run_number, sam, det, roi, offset=offset, lower_cutoff_threshold=lower_cutoff_threshold, detector=detector)
return RSM_interpolate(coors_h_all, coors_k_all, coors_l_all, inten_all, hbound, kbound, gridsize)
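# Usage sketch (not part of the committed file): map a rocking scan into (h, k, l).
# Sample parameters are the 1T-TaS2 example from the Sample docstring below;
# detector geometry, ROI and run number are hypothetical.
#
#   sam = Sample({"a": 0.336, "b": 0.336, "c": 0.5853, "alp": 90, "bet": 90, "gam": 120}, "1T-TaS2")
#   det = Detector(R_m=0.7, px_size_um=75, center_pixel=(515, 257))
#   h, k, l, inten = RSM(run_number=352, sam=sam, det=det, roi=my_roi)
#   grid = RSM_interpolate(h, k, l, inten, hbound=(-0.1, 0.1), kbound=(-0.1, 0.1), gridsize=0.005)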
# based on Henry's code
#numpy imports
import numpy as np
import uncertainties.unumpy as unumpy
#visualization
import matplotlib.pyplot as plt
from matplotlib import animation, rc, gridspec
#scipy imports
from scipy.spatial.transform import Rotation as Rot
from scipy.optimize import brentq, curve_fit, minimize
from scipy.interpolate import griddata
from tqdm import tqdm
# data handling
import re
import concurrent.futures
import sys
import os
#constants
hbar_eV = 6.582119569e-16
c_mps = 2.99792458e8
#for ease of use later
xhat = np.array((1,0,0))
yhat = np.array((0,1,0))
zhat = np.array((0,0,1))
#Helper functions
def real_space_vectors(sample):
#this function takes alpha, beta, gamma and unit cell sizes and returns unit cell vectors a,b,c
#equations were copied from the internet.
#It works correctly for alp=bet=90deg (as in TaS2). I didn't check the other angles.
a=sample["a"]*xhat
b=sample["b"]*np.array([np.cos(sample["gam"]*np.pi/180),np.sin(sample["gam"]*np.pi/180),0])
c1=np.cos(sample["bet"]*np.pi/180)
c2=(np.cos(sample["alp"]*np.pi/180)-np.cos(sample["bet"]*np.pi/180)*np.cos(sample["gam"]*np.pi/180))/np.sin(sample["gam"]*np.pi/180)
c3=np.sqrt(1-(np.cos(sample["bet"]*np.pi/180))**2-c2**2)
c=sample["c"]*np.array([c1,c2,c3])
return a, b, c
def reciprocal_space_vectors(a, b, c):
#Takes real space basis, a, b and c and returns reciprocal space basis arec, brec, crec.
denom = a @ np.cross(b, c)
arec = 2*np.pi*np.cross(b, c)/denom
brec = 2*np.pi*np.cross(c, a)/denom
crec = 2*np.pi*np.cross(a, b)/denom
return arec, brec, crec
def rotate(vector, axis, angle, degrees = True):
#rotates vector around axis by angle degrees.
axis = axis/np.sqrt(axis @ axis) #normalize just in case
if degrees:
p = np.pi/180 #conversion constant
else:
p = 1 #no conversion needed
r = Rot.from_rotvec(axis*angle*p)
return r.apply(vector)
def E_to_k(energy):
"""
Converts energy to wavevector magnitude
Input:
energy : float, energy of incoming radiation in eV
Output:
wavevector : wavevector magnitude in 1/nm
"""
return 1e-9*energy/(hbar_eV*c_mps)
class Detector():
'''
Description:
Holds Information about the detector, specifically for a Bernina Experiment. Detector position and tilt can be given as well as pixel size and offset between
detector panels. Contains useful functions that return the real space position of each pixel on the detector and visualization tools that show the detector in
real space.
Inputs:
Location:
gamma : float, Angle in degrees
delta : float, Angle in degrees
R_m : float, Distance from sample in meters
Pixel geometry:
center_pixel : tuple, Center pixel indices
px_size_um : float, Side length of a pixel in um
Orientation:
psi : float, Angle representing the rotation of detector around detector normal
beta : float, Angle representing rotation around x-pixel axis
xi : float, Angle representing rotation around y-pixel axis
'''
### Setup ###
def __init__(self, R_m, px_size_um, center_pixel, detector_size_px = (1030,514), gamma = 0, delta = 0, psi = 0, beta = 0, xi = 0):
self.px_size_um = px_size_um
self.center_pixel = center_pixel
self.detector_size_px = detector_size_px
self.phi = psi
self.beta = beta
self.xi = xi
self.R_m = R_m
self.pixel_positions_real = np.array(()) #for plotting later
self.move_angles(gamma, delta)
self.get_cartesian_coordinates()
def move_angles(self, gamma = None, delta = None, phi = None, beta = None, xi = None):
'''
Rotates the detector to gamma and delta. Redefines r_m and delta_hat and gamma_hat
Input:
Location:
gamma : float, Rotation angle in azimuthal direction
delta : float, Rotation angle in polar direction
Orientation:
phi : float, Angle representing the rotation of detector around detector normal
beta : float, Angle representing rotation around x-pixel axis
xi : float, Angle representing rotation around y-pixel axis
'''
if gamma != None:
self.gamma = gamma
if delta != None:
self.delta = delta
if phi != None:
self.phi = phi
if beta != None:
self.beta = beta
if xi != None:
self.xi = xi
self.get_cartesian_coordinates()
gamma_rad = np.pi*self.gamma/180
delta_rad = np.pi*self.delta/180
#change delta_hat, gamma_hat perpendicular to R
delta_hat_perp = np.array((np.sin(delta_rad)*np.cos(gamma_rad), np.sin(delta_rad)*np.sin(gamma_rad),np.cos(delta_rad)))
gamma_hat_perp = np.array((np.sin(gamma_rad), -np.cos(gamma_rad),0))
#Rotate delta_hat, gamma_hat by phi
delta_hat_p = rotate(delta_hat_perp, self.cart_vec, self.phi, degrees = True)
gamma_hat_p = rotate(gamma_hat_perp, self.cart_vec, self.phi, degrees = True)
#rotate gamma hat prime by beta around delta hat
self.gamma_hat = rotate(gamma_hat_p, delta_hat_p, self.beta, degrees = True)
#rotate delta hat prime by beta around gamma hat
self.delta_hat = rotate(delta_hat_p, self.gamma_hat, self.xi, degrees = True)
def move_to_cartesian(self,vec):
'''
Moves the detector to be in the path of a cartesian vector.
Note: off angles don't work here.
Input:
vec : 1D np.ndarray of size 3, location to move the detector in cartesian coordinates
'''
projxy = (vec @ xhat)*xhat + (vec @ yhat)*yhat #project k_out onto xy plane for gamma
projxy = projxy/np.sqrt(projxy@projxy) #normalize
projz = (vec@zhat)*zhat
projz = projz/np.sqrt(projz@projz)
vec_normalize = vec/np.sqrt(vec@vec)
self.delta = np.sign(zhat @ vec) * np.arcsin(projz@vec_normalize) * 180/np.pi
self.gamma = -np.sign(yhat @ vec) * np.arccos(-projxy@xhat) * 180/np.pi
self.move_angles(self.gamma, self.delta)
### Computational Tools ###
def get_cartesian_coordinates(self):
"""
Returns cartesian coordinates of the detector as a vector.
"""
gamma_rad = np.pi*self.gamma/180
delta_rad = np.pi*self.delta/180
self.cart_vec = self.R_m*(-np.cos(gamma_rad)*np.cos(delta_rad)*xhat-np.sin(gamma_rad)*np.cos(delta_rad)*yhat+np.sin(delta_rad)*zhat)
def pixel_cartesian_coordinate(self, i_y, i_x):
"""
Find the cartesian coordinate of a pixel with index i_y and i_x
Inputs:
i_y : int, defined above
i_x : int, defined above
"""
i_x = self.center_pixel[0] - i_x
i_y = self.center_pixel[1] - i_y
pixel_position_relative = 1e-6*self.px_size_um*(np.array((i_x*self.gamma_hat[0], i_x*self.gamma_hat[1], i_x*self.gamma_hat[2])) + np.array((i_y*self.delta_hat[0], i_y*self.delta_hat[1], i_y*self.delta_hat[2])))
pixel_position_real = self.cart_vec + pixel_position_relative
return pixel_position_real
def whole_detector_cartesian_coordinates(self):
"""
Calculates the real space coordinates for each pixel of the detector in its defined position as a numpy array.
"""
i_x = self.center_pixel[0] - np.arange(self.detector_size_px[0])
i_y = self.center_pixel[1] - np.arange(self.detector_size_px[1])
ii_x, ii_y = np.meshgrid(i_x, i_y)
#probably a better way to do this part
sum_one = 1e-6*self.px_size_um*np.array((ii_x*self.gamma_hat[0], ii_x*self.gamma_hat[1], ii_x*self.gamma_hat[2]))
sum_two = 1e-6*self.px_size_um*np.array((ii_y*self.delta_hat[0], ii_y*self.delta_hat[1], ii_y*self.delta_hat[2]))
pixel_positions_relative = sum_one + sum_two
pixel_positions_real = self.cart_vec + np.transpose(pixel_positions_relative, [1,2,0])
self.pixel_positions_real = np.transpose(pixel_positions_real,[2,0,1])
def ROI_cartesian_coordinates(self, ROI):
"""
Calculates the real space coordinates for each pixel of an ROI of a detector in its defined position as a numpy array.
Input:
ROI : tuple of size 4, in [xmin, xmax, ymin, ymax]
Output:
pixel_positions_ROI : 3D np.ndarray of shape (3x(2*y_half)x(2*x_half)) holding the real space position of each pixel in ROI.
"""
self.whole_detector_cartesian_coordinates()
x_min, x_max, y_min, y_max = ROI
pixel_positions_ROI = self.pixel_positions_real[:,y_min:y_max, x_min:x_max]
return pixel_positions_ROI
class Sample():
'''
Description:
Stores information about sample geometry and orientation and contains useful computational tools as well as visualization of real/reciprocal space lattice
basis vectors.
Inputs:
Required:
sample_dict : dictionary, containing lattice constants and angles.
Example, {"a":0.336, "b":0.336, "c": 0.5853, "alp":90, "bet":90, "gam":120}
sample_name : string, Name of sample
Example, "1T-TaS2"
sample_normal : 1D np.ndarray of size 3, vector defining the sample normal in real space vector basis.
Not Required:
alpha : float, Angle in degrees, angle according to experiment, roughly correct.
ov : float Angle in degrees, angle according to experiment, relatively correct between runs.
ov0: float, Angle in degrees, how much the sample is offset to the experimental angles in ov
chi0: float, Angle in degrees, how much the sample is offset to the experimental angles in chi
alpha0: float, Angle in degrees, how much the sample is offset to the experimental angles in alpha
'''
### SETUP ###
def __init__(self, sample_dict, sample_name, surface_normal = np.array((0,0,1)), ov0 = 0, chi0 = 0, alpha0 = 0, alpha = 0, ov = 0, chi = 0):
self.sample_dict = sample_dict
self.sample_name = sample_name
self.surface_normal = surface_normal
self.alpha0 = alpha0
self.ov0 = ov0
self.chi0 = chi0
self.define_off_angles(alpha0, ov0, chi0)
self.initialize_lattice_vectors()
self.alpha = alpha
self.ov = ov
self.chi = chi
self.alpha_axis = np.array([0,0,1])
self.ov_axis = rotate(yhat, self.alpha_axis, self.alpha, degrees=True)
self.rotate_sample(alpha, ov)
def define_off_angles(self, alpha0 = 0, ov0 = 0, chi0 = 0):
"""
Initialize sample with off angles
"""
self.alpha0 = alpha0
self.ov0 = ov0
self.chi0 = chi0
self.alpha0_axis = zhat
self.ov0_axis = rotate(yhat, self.alpha0_axis, self.alpha0, degrees=True)
chi0_axis = rotate(xhat, self.alpha0_axis, self.alpha0, degrees=True)
self.chi0_axis = rotate(chi0_axis, self.ov0_axis, self.ov0, degrees=True)
self.initialize_lattice_vectors()
def change_lattice_constants(self, a = None, b = None, c = None):
if a != None:
self.sample_dict["a"] = a
if b != None:
self.sample_dict["b"] = b
if c != None:
self.sample_dict["c"] = c
self.initialize_lattice_vectors()
self.rotate_sample(self.alpha, self.ov)
def initialize_lattice_vectors(self):
"""
Set up the real space vectors and reciprocal space vectors
"""
self.a_unrotated, self.b_unrotated, self.c_unrotated = real_space_vectors(self.sample_dict) #basis vectors unrotated
self.arec0_unrotated, self.brec0_unrotated, self.crec0_unrotated = reciprocal_space_vectors(self.a_unrotated, self.b_unrotated, self.c_unrotated)
self.surface_normal_init = self.a_unrotated * self.surface_normal[0] + self.b_unrotated * self.surface_normal[1] + self.c_unrotated * self.surface_normal[2]
self.surface_normal_init = self.surface_normal_init/np.linalg.norm(self.surface_normal_init)
#get rotated basis vectors
self.a0 = self.vector_setup(self.a_unrotated)
self.b0 = self.vector_setup(self.b_unrotated)
self.c0 = self.vector_setup(self.c_unrotated)
self.surface_normal_init = self.vector_setup(self.surface_normal_init)
#compute reciprocal space vectors
self.arec0, self.brec0, self.crec0 = reciprocal_space_vectors(self.a0, self.b0, self.c0)
def vector_setup(self, vector):
"""
Input
vector: np.ndarray of size 3
Output
vector: np.ndarray of size 3
Helper function for initialize_lattice_vectors, rotates each basis vector to the correct position.
"""
#first deal with surface normal
surface_normal_vec = self.a_unrotated * self.surface_normal[0] + self.b_unrotated * self.surface_normal[1] + self.c_unrotated * self.surface_normal[2]
surface_normal_vec = surface_normal_vec/np.linalg.norm(surface_normal_vec)
angle = np.arccos(surface_normal_vec @ zhat)*180/np.pi
axis = np.cross(surface_normal_vec, zhat)/np.linalg.norm(np.cross(surface_normal_vec, zhat))
vector = rotate(vector, axis, angle, degrees = True)
#rotate to beamline coordinates (horizontal)
vector = rotate(vector, xhat, 90, degrees = True)
vector = rotate(vector, yhat, 90, degrees = True)
#rotate to account for misalignment
vector = rotate(vector, self.alpha0_axis, self.alpha0, degrees = True)
vector = rotate(vector, self.ov0_axis, self.ov0, degrees = True)
vector = rotate(vector, self.chi0_axis, self.chi0, degrees = True)
return vector
### ROTATE SAMPLE ###
def rotate_sample_vector(self, vector):
"""
Input
vector: np.ndarray of size 3
Output
vector: np.ndarray of size 3
Description
Rotates a sample vector by self.alpha and self.ov
"""
vector = rotate(vector, self.alpha_axis, self.alpha)
vector = rotate(vector, self.ov_axis, self.ov)
vector = rotate(vector, self.chi_axis, self.chi)
return vector
def rotate_sample(self, alpha = None, ov = None, chi = None):
'''
Rotates the sample by alpha and then by ov.
'''
if alpha != None:
self.alpha = alpha
if ov != None:
self.ov = ov
if chi!= None:
self.chi = chi
self.ov_axis = rotate(yhat, self.alpha_axis, self.alpha, degrees=True) #update ovrotate axis for rotation
chi_axis = rotate(xhat, self.alpha_axis, self.alpha, degrees=True)
self.chi_axis = rotate(chi_axis, self.ov_axis, self.ov, degrees = True)
self.arec = self.rotate_sample_vector(self.arec0)
self.brec = self.rotate_sample_vector(self.brec0)
self.crec = self.rotate_sample_vector(self.crec0)
self.a = self.rotate_sample_vector(self.a0)
self.b = self.rotate_sample_vector(self.b0)
self.c = self.rotate_sample_vector(self.c0)
self.surface_normal_updated = self.rotate_sample_vector(self.surface_normal_init)
### Computational Tools ###
def peak_q(self, peak_indices):
'''
Calculates the coordinates for a specified peak (h, k, l) in reciprocal space.
Inputs:
peak_indices : tuple/array of size 3, (h, k, l) indices.
Example: (0, 1, 4)
Outputs:
q : 1D np.ndarray of size 3, cartesian momentum space vector of given peak indices
'''
q = peak_indices[0]*self.arec + peak_indices[1]*self.brec + peak_indices[2]*self.crec
return q
def get_basis_change_matrix(self, return_inverse = False, form = "rotated"):
"""
Calculates matrix to go from cartesian basis to hkl basis
Inputs:
return_inverse : boolean, whether to return inverse of default matrix.
Options:
form : string, whether to return rotated basis or unrotated basis. Either "rotated" or "unrotated"
"""
if form == "rotated":
A = np.transpose(np.array((self.arec, self.brec, self.crec)))
if form == "unrotated":
a = self.arec0_unrotated
b = self.brec0_unrotated
c = self.crec0_unrotated
A = np.transpose(np.array((a, b, c)))
if return_inverse:
Ainv = np.linalg.inv(A)
return Ainv
else:
return A
def convert_hkl_to_Q(self, hkl):
return hkl[0]*self.arec0_unrotated+hkl[1]*self.brec0_unrotated+hkl[2]*self.crec0_unrotated
### Visualization Tools ###
def quiver_plot(self, space = "Reciprocal", figax = None):
"""
Plots real and reciprocal space crystal basis vectors in 3D. Good for debugging.
Inputs:
Not Required:
space : string, either "Reciprocal" or "Real". By default, space = "Reciprocal"
Returns:
fig : matplotlib figure
ax : matplotlib axis
"""
if figax == None:
fig = plt.figure(figsize = (8,8))
ax = fig.add_subplot(projection='3d')
else:
fig = figax[0]
ax = figax[1]
if space in ["Real", "All"]:
ax.quiver(0, 0, 0, *self.a,color = "red", length = 0.05, normalize = True, label = "Real Space Basis")
ax.quiver(0, 0, 0, *self.b,color = "red", length = 0.05, normalize = True)
ax.quiver(0, 0, 0, *self.c,color = "red", length = 0.05, normalize = True)
# ax.quiver(0, 0, 0, *self.surface_normal_updated,color = "green", length = 0.05, normalize = True, label ="Surface Normal")
if space in ['Reciprocal', "All"]:
ax.quiver(0, 0, 0, *self.arec,color = "blue", length = np.linalg.norm(self.arec), normalize = True, label = "Reciprocal Space Basis")
ax.quiver(0, 0, 0, *self.brec,color = "blue", length = np.linalg.norm(self.brec), normalize = True)
ax.quiver(0, 0, 0, *self.crec,color = "blue", length = np.linalg.norm(self.crec), normalize = True)
ax.quiver(0, 0, 0, *self.surface_normal_updated,color = "green", length = 35, normalize = True, label ="Surface Normal")
ax.set_xlim((-50, 50))
ax.set_ylim((-50, 50))
ax.set_zlim((-50, 50))
ax.set_aspect('auto')
ax.set_xlabel("x")
ax.set_ylabel("y")
ax.set_zlabel("z")
ax.legend()
return fig, ax
class Experiment():
'''
Description:
Given a sample, a detector, and the energy of an incoming beam, this class contains important tools to calculate scattering conditions to
find a specific peak or functions to calculate the momentum transfer of each pixel for use in the Reconstruction class.
Input:
Sample : Sample object for the experiment
Detector : Detector object for the experiment
energy : float, Energy of the incoming beam
'''
### Setup ###
def __init__(self, Sample, Detector, energy):
self.energy = energy
self.Sample = Sample
self.Detector = Detector
self.set_energy(energy) #incoming beam
self.pixel_hkl = np.array(())
### Changes in Experimental Setup ###
def make_angle_dict(self):
"""
Creates a dictionary containing alpha, ov, delta, and gamma for the experiment in its current state.
Output:
angle_dict : dictionary, holds angle info of experiment
"""
angle_dict = {}
angle_dict["alpha"] = self.Sample.alpha
angle_dict["ov"] = self.Sample.ov
angle_dict["delta"] = self.Detector.delta
angle_dict["gamma"] = self.Detector.gamma
return angle_dict
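    # Illustrative round trip (hedged; `exp` is a hypothetical Experiment):
    #     snapshot = exp.make_angle_dict()   # save alpha, ov, delta, gamma
    #     ...                                # move sample/detector during a scan
    #     exp.move_from_dict(snapshot)       # restore the saved geometry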
    def move_from_string(self, string, value):
        """
        Sets the experiment quantity named by `string` ("energy", "alpha", "ov"/"phi", "delta", or "gamma") to `value`.
        """
if string in ["energy", "energies"]:
self.set_energy(value)
elif string in ["alp", "alphas", "alpha"]:
self.Sample.rotate_stample(alpha = value)
elif string in ["ov", "phi"]:
self.Sample.rotate_stample(ov = value)
elif string in ["delta", "delt", "deltas"]:
self.Detector.move_angles(delta = value)
elif string in ["gamma", "gam", "gammas"]:
self.Detector.move_angles(gamma = value)
else:
print("String not recognized, choose from ['gamma', 'delta', 'energy', 'alpha', 'phi']")
def move_from_dict(self, angle_dict):
"""
Takes in a dictionary containing angles. Moves the sample and detector accordingly
Input:
angle_dict : dictionary, holds angle info of experiment
"""
alpha = angle_dict["alpha"]
ov = angle_dict["ov"]
self.Sample.rotate_sample(ov = ov, alpha = alpha)
delta = angle_dict["delta"]
gamma = angle_dict["gamma"]
self.Detector.move_angles(delta = delta, gamma = gamma)
def set_energy(self, energy):
self.energy = energy
self.k_in = E_to_k(self.energy)*np.array([-1,0,0]) #incoming beam
### Computational Tools ###
def whole_detector_hkl(self):
"""
Calculate momentum transfer in HKL basis for the entire detector.
Output:
pixel_hkl : 3d np.ndarray of shape (3,y_size,x_size)
"""
#real space coordinates
self.Detector.whole_detector_cartesian_coordinates()
#calculate momentum transfer
pixel_k_out = np.linalg.norm(self.k_in)*self.Detector.pixel_positions_real/np.linalg.norm(self.Detector.pixel_positions_real, axis = 0)
self.pixel_Q = -np.transpose(self.k_in - np.transpose(pixel_k_out,[1,2,0]),[2,0,1])
#solve for hkl of each pixel
Ainv = self.Sample.get_basis_change_matrix(return_inverse = True)
self.pixel_hkl = np.tensordot(Ainv, self.pixel_Q, axes = ([1],[0]))
return self.pixel_hkl
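    # Illustrative usage (hedged; `exp` is a hypothetical Experiment):
    #     hkl_map = exp.whole_detector_hkl()   # shape (3, y_size, x_size)
    #     h_map, k_map, l_map = hkl_map        # per-pixel Miller indices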
def ROI_hkl(self, ROI):
"""
Calculate momentum transfer in HKL basis for ROI of the detector.
Input:
ROI : tuple of size 4, in [xmin, xmax, ymin, ymax]
Output:
pixel_hkl : 3d np.ndarray of shape (3, 2*y_half, 2*x_half)
"""
real_space_coords = self.Detector.ROI_cartesian_coordinates(ROI)
pixel_k_out = np.linalg.norm(self.k_in)*real_space_coords/np.linalg.norm(real_space_coords, axis = 0)
pixel_Q = -np.transpose(self.k_in - np.transpose(pixel_k_out,[1,2,0]),[2,0,1])
#solve for hkl of each pixel
Ainv = self.Sample.get_basis_change_matrix(return_inverse = True)
pixel_hkl = np.tensordot(Ainv, pixel_Q, axes = ([1],[0]))
return pixel_hkl
def ROI_Q(self, ROI):
"""
Calculate momentum transfer for ROI of the detector
Input:
ROI : tuple of size 4, in [xmin, xmax, ymin, ymax]
Output:
            pixel_Q : 3d np.ndarray of shape (3, 2*y_half, 2*x_half)
"""
real_space_coords = self.Detector.ROI_cartesian_coordinates(ROI)
pixel_k_out = np.linalg.norm(self.k_in)*real_space_coords/np.linalg.norm(real_space_coords, axis = 0)
pixel_Q = -np.transpose(self.k_in - np.transpose(pixel_k_out,[1,2,0]),[2,0,1])
return pixel_Q
def ROI_Q_angle(self, ROI, theta=0):
"""
        Calculate momentum transfer for ROI of the detector, in a coordinate system that rotates with theta.
        Input:
            ROI : tuple of size 4, in [xmin, xmax, ymin, ymax]
            theta : float, sample rotation angle in degrees
        Output:
            pixel_Q_afterrotate : 3d np.ndarray of shape (3, 2*y_half, 2*x_half)
"""
real_space_coords = self.Detector.ROI_cartesian_coordinates(ROI)
pixel_k_out = np.linalg.norm(self.k_in)*real_space_coords/np.linalg.norm(real_space_coords, axis = 0)
pixel_Q = -np.transpose(self.k_in - np.transpose(pixel_k_out,[1,2,0]),[2,0,1])
        theta_rad = np.deg2rad(theta)
        pixel_Q_afterrotate = np.array(pixel_Q)
        pixel_Q_afterrotate[0] = pixel_Q[0]*np.cos(theta_rad) + pixel_Q[1]*np.sin(theta_rad)
        pixel_Q_afterrotate[1] = -pixel_Q[0]*np.sin(theta_rad) + pixel_Q[1]*np.cos(theta_rad)
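        # The component update above is the in-plane rotation of Q about the z axis:
        #     [Qx']   [ cos(theta)  sin(theta)] [Qx]
        #     [Qy'] = [-sin(theta)  cos(theta)] [Qy]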
return pixel_Q_afterrotate
def point_to_hkl(self, ind_y, ind_x):
"""
        Calculate momentum transfer in HKL basis for a single pixel of the detector.
        Inputs:
            ind_y : int, pixel index along y
            ind_x : int, pixel index along x
"""
# Maps a single pixel to hkl coordinate.
pixel_coords_real = self.Detector.pixel_cartesian_coordinate(ind_y, ind_x)
k_out = np.linalg.norm(self.k_in)*pixel_coords_real/np.linalg.norm(pixel_coords_real)
Q = -(self.k_in - k_out)
Ainv = self.Sample.get_basis_change_matrix(return_inverse = True)
return Ainv @ Q
def visualize_scattering_xyz(self):
"""
Makes a plot showing the scattering vectors in reciprocal space in the xyz basis.
"""
fig, ax = self.Sample.quiver_plot(space = "Reciprocal")
try:
detector_position = self.Detector.cart_vec
        except AttributeError:
detector_position = self.Detector.center_px_position
ax.quiver(np.linalg.norm(self.k_in),0,0,*(self.k_in), length = np.linalg.norm(self.k_in), normalize = True, color = "purple", label = "Incoming beam (K_in)")
k_out_plotting = np.linalg.norm(self.k_in)*detector_position/np.linalg.norm(detector_position)
ax.quiver(0,0,0,*(k_out_plotting), length = np.linalg.norm(k_out_plotting), normalize = True, color = "black", label = "Detector Location (K_out)")
Q_plotting = self.k_in - k_out_plotting
ax.quiver(0,0,0,*(-Q_plotting),length = np.linalg.norm(Q_plotting), normalize = True, color = "orange", label = "Momentum Transfer (Q)")
self.whole_detector_hkl()
Q_pix = self.pixel_Q[:,::50,::50].reshape(3,-1)
ax.scatter(Q_pix[0,:], Q_pix[1,:], Q_pix[2,:], marker = ".", color = "brown")
center = self.pixel_Q[:,self.Detector.center_pixel[1], self.Detector.center_pixel[0]]
ax.scatter(center[0], center[1], center[2], color = "red")
ax.legend()
return fig, ax

0
src/cristallina/skeleton.py Normal file → Executable file
View File

135
src/cristallina/uscan.py Executable file
View File

@@ -0,0 +1,135 @@
import json
import time
from pathlib import Path
from sfdata.utils import print_skip_warning, json_load, adjust_shape
from sfdata.ign import remove_ignored_filetypes_scan
from sfdata import SFDataFiles, SFScanInfo
from partialjson.json_parser import JSONParser
from loguru import logger
MAX_STEP_WAIT = 300 # Maximum wait time in seconds for files to appear
class UnfinishedScanInfo(SFScanInfo):
"""
    Represents an unfinished scan from SwissFEL data, mimicking a finished SFScanInfo. It allows iterating
    over the already available scan steps and waits for new data if the scan is still ongoing.
If the scan is already finished, it behaves like a regular SFScanInfo.
Args:
fname (str): The filepath of the JSON file to be processed for scan information (e.g. scan.json).
refresh_interval (int): Time in seconds to wait before checking for new data again. Default is 10 seconds.
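    Example (an illustrative sketch; the path is hypothetical):
        scan = UnfinishedScanInfo("/sf/cristallina/data/pXXXXX/raw/run0001/meta/scan.json")
        for step in scan:
            ...  # each step is an SFDataFiles object, yielded as data becomes available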
"""
def __init__(self, fname, refresh_interval=10):
self.fname = fname
self.finished = False
self.refresh_interval = refresh_interval
if is_finished_scan(fname):
# simple case, it is a finished scan and our parent can handle it
super().__init__(fname)
self.finished = True
else:
# try to parse it as a partial JSON file
self._parse_partial_json(fname)
def _parse_partial_json(self, fname):
with open(fname) as f:
content = f.read()
parser = JSONParser()
self.info = info = parser.parse(content)
# we try to extract as much information as possible,
# leaving missing parts out if not available
fnames = info.get("scan_files", [])
self.files = remove_ignored_filetypes_scan(fnames)
self.parameters = info.get("scan_parameters", [])
values = info.get("scan_values", [])
readbacks = info.get("scan_readbacks", [])
# filter for empty values which can occur in the partial json parsing
values = [vals for vals in values if vals]
readbacks = [rb for rb in readbacks if rb]
self.values = adjust_shape(values)
self.readbacks = adjust_shape(readbacks)
def __iter__(self):
if self.finished:
return super().__iter__()
return self._generate_data()
def _generate_data(self):
"""Generator that yields scan data as it becomes available during the scan."""
yielded_count = 0
while True:
retries = 0
self._parse_partial_json(self.fname)
# Check if we have new files to yield
while self.files and len(self.files) > yielded_count:
fns = self.files[yielded_count]
if not files_available_on_disk(fns):
time.sleep(self.refresh_interval)
retries += 1
if (retries * self.refresh_interval) < MAX_STEP_WAIT: # Wait up to 5 minutes for files to appear
continue # Wait and recheck
else:
logger.error(f"Timeout waiting for files to become available for step {yielded_count} {fns}")
# we still yield the remaining files to avoid infinite loop, but log an error and leave it to the caller to handle missing data
yielded_count += 1
retries = 0
try:
with SFDataFiles(*fns) as data:
yield data
except Exception as exc:
# TODO: Think about what could go wrong here and deal with it more specifically
sn = f"step {yielded_count - 1} {fns}"
print_skip_warning(exc, sn)
continue # Try next file
if is_finished_scan(self.fname) and (yielded_count >= len(self.files)):
return # Scan is finished, and we yielded all available files, stop iteration
# Wait before checking again
time.sleep(self.refresh_interval)
def is_finished_scan(fname):
""" If the scan.json file is complete, valid and the files are on the disk the scan is finished.
This does not cover the case where a scan is interrupted and continued later.
It also relies on the behavior of RA that the scan.json is only fully written at the end of the scan.
TODO: A clear criterion for a finished scan would be that another later scan exists."""
try:
content = json_load(fname)
files = [file for step in content['scan_files'] for file in step]
# Check if all files are available on disk
if not files_available_on_disk(set(files)):
return False
except json.JSONDecodeError:
return False
return True
def files_available_on_disk(fnames):
"""Check if all files for this step are available on disk and contain some data."""
# fnames = [fn for fn in fnames if not "PVDATA" in fn] # PVDATA files are not written to disk at the moment!
# logger.debug(f"Skipping PVDATA files for availability check as a workaround!")
if all(Path(fn).exists() for fn in fnames):
return all(Path(fn).stat().st_size > 0 for fn in fnames)
return False

119
src/cristallina/utils.py Normal file → Executable file
View File

@@ -3,11 +3,14 @@ import os
import json
import re
from pathlib import Path
from collections import defaultdict
import requests
import subprocess
from collections import defaultdict
import time
import datetime
from json import JSONDecodeError
import requests
import logging
@@ -26,6 +29,7 @@ import matplotlib.pyplot as plt
from scipy.optimize import curve_fit
from . import channels
from .uscan import UnfinishedScanInfo
def collect_runs_metadata(pgroup_data_dir: str | os.PathLike):
@@ -100,13 +104,19 @@ def collect_runs_metadata(pgroup_data_dir: str | os.PathLike):
return df
def scan_info(run_number, base_path=None, small_data=True):
def scan_info(run_number, base_path=None, small_data=True, pgroup=None, use_uscan=False):
"""Returns SFScanInfo object for a given run number.
    If there are small data channels, they will be added (small_data=False to suppress their loading).
use_uscan=True will use UnfinishedScanInfo to load scans that are still running.
"""
    if base_path is None:
base_path = heuristic_extract_base_path()
base_path = heuristic_extract_base_path(pgroup)
if use_uscan:
scan = UnfinishedScanInfo(f"{base_path}run{run_number:04}/meta/scan.json")
return scan
scan = SFScanInfo(f"{base_path}run{run_number:04}/meta/scan.json")
try:
@@ -127,12 +137,12 @@ def scan_info(run_number, base_path=None, small_data=True):
return scan
def get_scan_from_run_number_or_scan(run_number_or_scan, small_data=True):
def get_scan_from_run_number_or_scan(run_number_or_scan, small_data=True, pgroup=None):
"""Returns SFScanInfo object from run number or SFScanInfo object (then just passes that to output)"""
    if isinstance(run_number_or_scan, SFScanInfo):
scan = run_number_or_scan
else:
scan = scan_info(run_number_or_scan, small_data=small_data)
scan = scan_info(run_number_or_scan, small_data=small_data, pgroup=pgroup)
return scan
@@ -272,6 +282,11 @@ def process_run(
det_pids[only_shots],
data[detector][only_shots, bottom:top, left:right].std(axis=(1, 2)),
)
if "max" in calculate:
sd[roi.name + "_max"] = (
det_pids[only_shots],
data[detector][only_shots, bottom:top, left:right].max(axis=(1, 2)),
)
if "img" in calculate:
sd[f"{roi.name}_img"] = (
det_pids[only_shots],
@@ -298,6 +313,8 @@ def process_run(
Parallel(n_jobs=n_jobs, verbose=10)(delayed(process_step)(i) for i in range(len(scan)))
def is_processed_JF_file(filepath, detector="JF16T03V01"):
"""Checks if a given .h5 file from a Jungfrau detector has been processed or only
contains raw adc values.
@@ -311,6 +328,26 @@ def is_processed_JF_file(filepath, detector="JF16T03V01"):
raise ValueError(f"{filepath} does not seem to be an Jungfrau file from the detector {detector}.")
return "meta" in f["data"][detector].keys()
def check_JF_frames(JF_file_path):
""" Simple check if all frames in a given Jungfrau .h5 file are OK or if
there are missing or invalid frames.
Raises an error if invalid frames are found, else returns True.
"""
pattern = r"\.(JF[0-9].*)\.h5"
m = re.search(pattern, str(JF_file_path))
if m is None:
raise LookupError("Cannot match Jungfrau detector name from file path.")
JF_name = m.groups()[0]
with SFDataFiles(JF_file_path) as data:
JF = data[JF_name]
if JF.nvalid != JF.ntotal:
raise ValueError("Jungfrau frames invalid, this should not happen.")
else:
return True
def get_step_time(step: SFDataFiles):
"""Returns the start and end time as a unix timestamp (in seconds)
@@ -326,26 +363,23 @@ def get_step_time(step: SFDataFiles):
) / np.timedelta64(1, "s")
# TODO: Clean this up
import time
import subprocess
from pathlib import Path
def wait_for_run(run_number, total_length=15):
base_path = cr.utils.heuristic_extract_base_path()
def wait_for_run(run_number, total_num_steps, snd_file_path="/tmp/CantinaBand3.wav"):
""" Busy wait loop until run has completed all steps. Plays sound
when all files are written to disk.
"""
base_path = heuristic_extract_base_path()
data_path = Path(base_path) / f"run{run_number:0>4}/data"
while True:
pvfiles = data_path.glob("acq*.PVDATA.h5")
length = len(list(pvfiles))
if length == total_length:
if length == total_num_steps:
break
else:
print(f"Waiting for run {run_number} to complete, only {length} steps so far ...")
time.sleep(5)
subprocess.run(["paplay", "/tmp/CantinaBand3.wav"])
subprocess.run(["paplay", snd_file_path])
class ROI:
@@ -353,7 +387,8 @@ class ROI:
Example: ROI(left=10, right=20, bottom=100, top=200).
Directions assume that lower left corner of image is at (x=0, y=0).
Directions assume that lower left corner of image is at (x=0, y=0)
and y is pointing upwards, x pointing to the right.
"""
def __init__(
@@ -425,6 +460,45 @@ class ROI:
@property
def height(self):
return self.top - self.bottom
def shift(self, x: float = 0, y: float = 0, inplace=False):
""" Returns a new ROI that is shifted
    by x pixels to the right and by y pixels up.
    If `inplace=True`, shifts the ROI itself instead.
"""
if not inplace:
return ROI(left=self.left+x,
right=self.right+x,
bottom = self.bottom+y,
top=self.top+y,
name=self.name)
else:
self.left = self.left + x
self.right = self.right + x
self.bottom = self.bottom + y
self.top = self.top + y
def expand(self, x: float = 0, y: float = 0, inplace=False):
""" Returns a new ROI that is expanded in x
to the left and right, and in y to the
top and bottom.
    If `inplace=True`, expands the ROI itself instead.
"""
if not inplace:
return ROI(left=self.left - x,
right=self.right + x,
bottom = self.bottom - y,
top=self.top + y,
name=self.name)
else:
self.left = self.left - x
self.right = self.right + x
self.bottom = self.bottom - y
self.top = self.top + y
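    # Illustrative usage (hedged; `roi` is a hypothetical ROI instance):
    #     wider = roi.expand(x=5)            # new ROI, 5 px wider on both sides
    #     roi.shift(y=-10, inplace=True)     # move this ROI 10 px downwards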
def __repr__(self):
if hasattr(self, "name"):
@@ -517,9 +591,10 @@ def heuristic_extract_pgroup(path=None):
raise KeyError("Automatic p-group extraction from the current working directory didn't work.")
def heuristic_extract_base_path():
def heuristic_extract_base_path(p_number=None):
"""The function tries to guess the full path where the raw data is saved."""
p_number = heuristic_extract_pgroup()
if p_number is None:
p_number = heuristic_extract_pgroup()
base_path = f"/sf/cristallina/data/p{p_number}/raw/"
return base_path
@@ -562,7 +637,7 @@ def find_nearest(array, value):
def find_two_nearest(time_array, percentage):
"""Finds indeces of the two values that are the nearest to the given value in an array"""
"""Finds indices of the two values that are the nearest to the given value in an array"""
array = np.asarray(time_array)
value = (np.max(array) - np.min(array)) * percentage + np.min(array)
idx = (np.abs(array - value)).argmin()
@@ -616,12 +691,10 @@ def gauss_fit(x, y, fit_details=None, plot=None):
def xray_transmission(energy, thickness, material="Si", density=[]):
"""Calculate x-ray tranmission for given energy, thickness and material. Default material is Si. Add material=element as a string for another material. Density as optional parameter"""
mu = material_mu(material, energy)
if density == []:
mu_array = material_mu(material, energy)
else:
mu_array = material_mu(formula, energy, density=density)
mu_array = material_mu(material, energy, density=density)
trans = np.exp(
-0.1 * (thickness * 1000) * mu_array

0
tests/conftest.py Normal file → Executable file
View File

0
tests/data/JF16T03V01.res.h5 Normal file → Executable file
View File

0
tests/data/gains.h5 Normal file → Executable file
View File

0
tests/data/p20841/raw/run0185/data/acq0001.BSDATA.h5 Normal file → Executable file
View File

Some files were not shown because too many files have changed in this diff.