Compare commits


59 commits: 0.7.3 ... main

Author SHA1 Message Date
108c1aae2f Updating for version 0.7.11
All checks were successful
pyzebra CI/CD pipeline / prepare (push) Successful in 1s
pyzebra CI/CD pipeline / prod-env (push) Successful in 1m30s
pyzebra CI/CD pipeline / test-env (push) Has been skipped
pyzebra CI/CD pipeline / cleanup (push) Successful in 1s
2025-04-08 18:19:58 +02:00
b82184b9e7 Switch from miniconda to miniforge
All checks were successful
2025-04-08 17:57:53 +02:00
b6a43c3f3b Remove scripts
All checks were successful
This is an outdated way of pyzebra deployment
2025-04-08 17:33:37 +02:00
18b692a62e Replace gitlab with gitea workflow
All checks were successful
2025-04-08 17:03:01 +02:00
c2d6f6b259 Revert "Install via a direct package path"
This reverts commit dfa6bfe926ef99f9650b7d1d3e99c8373d9c9415.
2025-02-12 11:43:15 +01:00
60b90ec9e5 Always run cleanup job
[skip ci]
2025-02-11 18:12:27 +01:00
1fc30ae3e1 Build conda packages in CI_BUILDS_DIR 2025-02-11 18:10:13 +01:00
dfa6bfe926 Install via a direct package path 2025-02-11 18:07:57 +01:00
ed3f58436b Push release commit and tag
[skip ci]
2024-11-19 16:25:10 +01:00
68f7b429f7 Extract default branch name 2024-11-19 15:29:51 +01:00
e5030902c7 Infer path of file with version from path of release script 2024-11-19 15:26:56 +01:00
60f01d9dd8 Revert "Utility style fix"
This reverts commit dc1f2a92cc454aa045da38210c2b26382dc80264.
2024-09-09 00:06:28 +02:00
bd429393a5 Clean build folder
[skip ci]
2024-09-08 23:50:46 +02:00
9e3ffd6230 Updating for version 0.7.10 2024-09-08 23:40:15 +02:00
bdc71f15c1 Build in separate folders for prod and test envs 2024-09-08 23:39:08 +02:00
c3398ef4e5 Fix anaconda upload 2024-09-08 22:32:51 +02:00
9b33f1152b Fix tagged prod deployments 2024-09-08 22:32:51 +02:00
dc1f2a92cc Utility style fix 2024-09-05 17:26:47 +02:00
e9ae52bb60 Use locally built package in deployment
[skip ci]
2024-09-05 15:00:04 +02:00
982887ab85 Split build-and-publish job
[skip ci]
2024-09-05 14:58:01 +02:00
19e934e873 Run pipeline only on default branch changes 2024-09-05 13:21:11 +02:00
8604d695c6 Extract conda activation into before_script
[skip ci]
2024-09-05 11:11:40 +02:00
a55295829f Add cleanup stage
[skip ci]
2024-09-05 11:05:13 +02:00
4181d597a8 Fix home link
[skip ci]
2024-07-12 11:34:53 +02:00
c4869fb0cd Delay deploy-prod job by 1 min
conda doesn't show a newly uploaded package after only 5 sec
2024-07-12 11:04:54 +02:00
80e75d9ef9 Updating for version 0.7.9 2024-07-12 10:52:07 +02:00
eb2177215b Fix make_release script 2024-07-12 10:52:01 +02:00
89fb4f054f Translate metadata entry zebramode -> zebra_mode 2024-07-11 16:39:37 +02:00
019a36bbb7 Split motors on comma with any whitespace around 2024-06-07 15:57:27 +02:00
58a704764a Fix KeyError if 'mf' or 'temp' not in data 2024-06-04 14:17:34 +02:00
144b37ba09 Use rhel8 builds of anatric and Sxtal_Refgen 2024-05-22 16:34:56 +02:00
48114a0dd9 Rename master -> main in .gitlab-ci.yml 2024-05-22 16:26:55 +02:00
0fee06f2d6 Update .gitlab-ci.yml 2024-05-21 13:34:20 +02:00
31b4b0bb5f Remove github workflow 2024-03-01 10:26:25 +01:00
a6611976e1 Fix missing log args 2023-11-29 14:10:35 +01:00
9b48fb7a24 Isolate loggers per document 2023-11-21 18:54:59 +01:00
14d122b947 Simplify path assembly to app folder 2023-11-21 15:09:51 +01:00
bff44a7461 Don't show server output 2023-11-21 15:09:39 +01:00
eae8a1bde4 Handle NaNs in magnetic_field/temp for hdf data
Fix #58
2023-09-29 17:17:14 +02:00
OZaharko a1c1de4adf Change Titles 2023-08-16 17:35:58 +02:00
9e6fc04d63 Updating for version 0.7.8 2023-08-16 17:15:16 +02:00
779426f4bb add bokeh word to server output 2023-08-16 16:59:44 +02:00
1a5d61a9f7 Update .gitlab-ci.yml 2023-08-16 14:18:59 +02:00
b41ab102b1 Updating for version 0.7.7 2023-08-03 14:24:39 +02:00
bc791b1028 fix y axis label in projection plot of hdf_param_study
similar to 6164be16f0f3356d22d6195b39a59b08282c36f0
2023-08-03 14:24:06 +02:00
3ab4420912 Update .gitlab-ci.yml 2023-08-02 16:04:53 +02:00
90552cee2c Updating for version 0.7.6 2023-08-02 15:17:06 +02:00
6164be16f0 fix y axis label in projection plot 2023-08-02 14:41:49 +02:00
07f03a2a04 Add .gitlab-ci.yml 2023-07-27 15:17:31 +02:00
f89267b5ec Updating for version 0.7.5 2023-07-02 22:59:36 +02:00
8e6cef32b5 Use libmamba solver 2023-07-02 22:59:11 +02:00
e318055304 Replace deprecated dtype aliases
For numpy>=1.20
2023-07-02 22:32:43 +02:00
015eb095a4 Prepare for transition to bokeh/3
* Rename plot_height -> height
* Rename plot_width -> width
* Replace on_click callbacks of RadioGroup and CheckboxGroup
2023-06-20 15:54:47 +02:00
d145f9107d Drop python/3.7 support 2023-06-20 14:52:26 +02:00
d1a0ba6fec Fix for gamma with the new data format
Fix #57
2023-06-06 13:53:10 +02:00
2e2677d856 Set chi=180, phi=0 for nb geometry 2023-06-01 16:18:50 +02:00
a165167902 Updating for version 0.7.4 2023-05-25 13:51:29 +02:00
0b88ab0c7f Use scan["nu"] as a scalar
Fix #53
2023-05-23 08:56:18 +02:00
59d392c9ec Update to a new data format in ccl/dat files 2023-05-16 11:26:08 +02:00
29 changed files with 354 additions and 280 deletions

View File

@ -0,0 +1,53 @@
name: pyzebra CI/CD pipeline
on:
push:
branches:
- main
tags:
- '*'
env:
CONDA: /opt/miniforge3
jobs:
prepare:
runs-on: pyzebra
steps:
- run: $CONDA/bin/conda config --add channels conda-forge
- run: $CONDA/bin/conda config --set solver libmamba
test-env:
runs-on: pyzebra
needs: prepare
if: github.ref == 'refs/heads/main'
env:
BUILD_DIR: ${{ runner.temp }}/conda_build
steps:
- name: Checkout repository
uses: actions/checkout@v4
- run: $CONDA/bin/conda build --no-anaconda-upload --output-folder $BUILD_DIR ./conda-recipe
- run: $CONDA/bin/conda remove --name test --all --keep-env -y
- run: $CONDA/bin/conda install --name test --channel $BUILD_DIR python=3.8 pyzebra -y
- run: sudo systemctl restart pyzebra-test.service
prod-env:
runs-on: pyzebra
needs: prepare
if: startsWith(github.ref, 'refs/tags/')
env:
BUILD_DIR: ${{ runner.temp }}/conda_build
steps:
- name: Checkout repository
uses: actions/checkout@v4
- run: $CONDA/bin/conda build --token ${{ secrets.ANACONDA_TOKEN }} --output-folder $BUILD_DIR ./conda-recipe
- run: $CONDA/bin/conda remove --name prod --all --keep-env -y
- run: $CONDA/bin/conda install --name prod --channel $BUILD_DIR python=3.8 pyzebra -y
- run: sudo systemctl restart pyzebra-prod.service
cleanup:
runs-on: pyzebra
needs: [test-env, prod-env]
if: always()
steps:
- run: $CONDA/bin/conda build purge-all
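The workflow above gates `test-env` on pushes to `main`, `prod-env` on tag pushes, and runs `cleanup` unconditionally (`if: always()`). That gating can be summarized in a short sketch — plain Python with a hypothetical helper name, not Actions syntax:

```python
def jobs_to_run(ref):
    """Sketch of which jobs the pipeline runs for a given pushed ref."""
    jobs = ["prepare"]                    # always runs first
    if ref == "refs/heads/main":
        jobs.append("test-env")           # if: github.ref == 'refs/heads/main'
    if ref.startswith("refs/tags/"):
        jobs.append("prod-env")           # if: startsWith(github.ref, 'refs/tags/')
    jobs.append("cleanup")                # if: always()
    return jobs
```

This matches the CI statuses in the commit list: branch pushes show `prod-env ... Has been skipped`, while the tagged release commit shows `test-env ... Has been skipped`.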

View File

@ -1,25 +0,0 @@
name: Deployment
on:
push:
tags:
- '*'
jobs:
publish-conda-package:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
- name: Prepare
run: |
$CONDA/bin/conda install --quiet --yes conda-build anaconda-client
$CONDA/bin/conda config --append channels conda-forge
$CONDA/bin/conda config --set anaconda_upload yes
- name: Build and upload
env:
ANACONDA_TOKEN: ${{ secrets.ANACONDA_TOKEN }}
run: |
$CONDA/bin/conda build --token $ANACONDA_TOKEN conda-recipe

View File

@ -15,10 +15,10 @@ build:
requirements:
build:
- python >=3.7
- python >=3.8
- setuptools
run:
- python >=3.7
- python >=3.8
- numpy
- scipy
- h5py
@ -28,7 +28,7 @@ requirements:
about:
home: https://github.com/paulscherrerinstitute/pyzebra
home: https://gitlab.psi.ch/zebra/pyzebra
summary: {{ data['description'] }}
license: GNU GPLv3
license_file: LICENSE

View File

@ -7,18 +7,19 @@ import subprocess
def main():
default_branch = "main"
branch = subprocess.check_output("git rev-parse --abbrev-ref HEAD", shell=True).decode().strip()
if branch != "master":
print("Aborting, not on 'master' branch.")
if branch != default_branch:
print(f"Aborting, not on '{default_branch}' branch.")
return
filepath = "pyzebra/__init__.py"
version_filepath = os.path.join(os.path.basename(os.path.dirname(__file__)), "__init__.py")
parser = argparse.ArgumentParser()
parser.add_argument("level", type=str, choices=["patch", "minor", "major"])
args = parser.parse_args()
with open(filepath) as f:
with open(version_filepath) as f:
file_content = f.read()
version = re.search(r'__version__ = "(.*?)"', file_content).group(1)
@ -36,11 +37,12 @@ def main():
new_version = f"{major}.{minor}.{patch}"
with open(filepath, "w") as f:
with open(version_filepath, "w") as f:
f.write(re.sub(r'__version__ = "(.*?)"', f'__version__ = "{new_version}"', file_content))
os.system(f"git commit {filepath} -m 'Updating for version {new_version}'")
os.system(f"git commit {version_filepath} -m 'Updating for version {new_version}'")
os.system(f"git tag -a {new_version} -m 'Release {new_version}'")
os.system("git push --follow-tags")
if __name__ == "__main__":
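The core of the release script's version bump (seen in the `re.search`/`re.sub` lines above) is small enough to sketch in isolation; `bump_patch` is a hypothetical helper name for illustration:

```python
import re

def bump_patch(file_content):
    # Read __version__ from the file text, increment the patch level,
    # and substitute the new version string back in.
    version = re.search(r'__version__ = "(.*?)"', file_content).group(1)
    major, minor, patch = (int(v) for v in version.split("."))
    new_version = f"{major}.{minor}.{patch + 1}"
    return re.sub(r'__version__ = "(.*?)"', f'__version__ = "{new_version}"', file_content)
```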

View File

@ -6,4 +6,4 @@ from pyzebra.sxtal_refgen import *
from pyzebra.utils import *
from pyzebra.xtal import *
__version__ = "0.7.3"
__version__ = "0.7.11"

View File

@ -1,6 +1,9 @@
import logging
import subprocess
import xml.etree.ElementTree as ET
logger = logging.getLogger(__name__)
DATA_FACTORY_IMPLEMENTATION = ["trics", "morph", "d10"]
REFLECTION_PRINTER_FORMATS = [
@ -16,11 +19,11 @@ REFLECTION_PRINTER_FORMATS = [
"oksana",
]
ANATRIC_PATH = "/afs/psi.ch/project/sinq/rhel7/bin/anatric"
ANATRIC_PATH = "/afs/psi.ch/project/sinq/rhel8/bin/anatric"
ALGORITHMS = ["adaptivemaxcog", "adaptivedynamic"]
def anatric(config_file, anatric_path=ANATRIC_PATH, cwd=None):
def anatric(config_file, anatric_path=ANATRIC_PATH, cwd=None, log=logger):
comp_proc = subprocess.run(
[anatric_path, config_file],
stdout=subprocess.PIPE,
@ -29,8 +32,8 @@ def anatric(config_file, anatric_path=ANATRIC_PATH, cwd=None):
check=True,
text=True,
)
print(" ".join(comp_proc.args))
print(comp_proc.stdout)
log.info(" ".join(comp_proc.args))
log.info(comp_proc.stdout)
class AnatricConfig:
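The `log=logger` keyword added to `anatric()` above follows a common pattern: the module-level logger is the default, and callers (here, the Bokeh panels) can pass a per-document logger instead. A minimal sketch of the same pattern, with a hypothetical `run_tool` standing in for `anatric`:

```python
import logging
import subprocess

logger = logging.getLogger(__name__)  # module-level fallback

def run_tool(cmd, log=logger):
    # Run an external tool and send its command line and output to the
    # given logger instead of print()-ing to the server's stdout.
    proc = subprocess.run(
        cmd, stdout=subprocess.PIPE, stderr=subprocess.STDOUT, check=True, text=True
    )
    log.info(" ".join(proc.args))
    log.info(proc.stdout)
    return proc
```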

View File

@ -1,17 +0,0 @@
import logging
import sys
from io import StringIO
def on_server_loaded(_server_context):
formatter = logging.Formatter(
fmt="%(asctime)s %(levelname)s: %(message)s", datefmt="%Y-%m-%d %H:%M:%S"
)
sys.stdout = StringIO()
bokeh_handler = logging.StreamHandler(StringIO())
bokeh_handler.setFormatter(formatter)
bokeh_logger = logging.getLogger("bokeh")
bokeh_logger.setLevel(logging.WARNING)
bokeh_logger.addHandler(bokeh_handler)

View File

@ -4,7 +4,7 @@ import sys
def main():
app_path = os.path.join(os.path.dirname(os.path.abspath(__file__)))
app_path = os.path.dirname(os.path.abspath(__file__))
subprocess.run(["bokeh", "serve", app_path, *sys.argv[1:]], check=True)

View File

@ -1,5 +1,6 @@
import types
from bokeh.io import curdoc
from bokeh.models import (
Button,
CellEditor,
@ -51,6 +52,7 @@ def _params_factory(function):
class FitControls:
def __init__(self):
self.log = curdoc().logger
self.params = {}
def add_function_button_callback(click):
@ -145,7 +147,11 @@ class FitControls:
def _process_scan(self, scan):
pyzebra.fit_scan(
scan, self.params, fit_from=self.from_spinner.value, fit_to=self.to_spinner.value
scan,
self.params,
fit_from=self.from_spinner.value,
fit_to=self.to_spinner.value,
log=self.log,
)
pyzebra.get_area(
scan,

View File

@ -11,6 +11,7 @@ import pyzebra
class InputControls:
def __init__(self, dataset, dlfiles, on_file_open=lambda: None, on_monitor_change=lambda: None):
doc = curdoc()
log = doc.logger
def filelist_select_update_for_proposal():
proposal_path = proposal_textinput.name
@ -45,19 +46,19 @@ class InputControls:
f_name = os.path.basename(f_path)
base, ext = os.path.splitext(f_name)
try:
file_data = pyzebra.parse_1D(file, ext)
except:
print(f"Error loading {f_name}")
file_data = pyzebra.parse_1D(file, ext, log=log)
except Exception as e:
log.exception(e)
continue
pyzebra.normalize_dataset(file_data, monitor_spinner.value)
if not new_data: # first file
new_data = file_data
pyzebra.merge_duplicates(new_data)
pyzebra.merge_duplicates(new_data, log=log)
dlfiles.set_names([base] * dlfiles.n_files)
else:
pyzebra.merge_datasets(new_data, file_data)
pyzebra.merge_datasets(new_data, file_data, log=log)
if new_data:
dataset.clear()
@ -76,13 +77,13 @@ class InputControls:
f_name = os.path.basename(f_path)
_, ext = os.path.splitext(f_name)
try:
file_data = pyzebra.parse_1D(file, ext)
except:
print(f"Error loading {f_name}")
file_data = pyzebra.parse_1D(file, ext, log=log)
except Exception as e:
log.exception(e)
continue
pyzebra.normalize_dataset(file_data, monitor_spinner.value)
pyzebra.merge_datasets(dataset, file_data)
pyzebra.merge_datasets(dataset, file_data, log=log)
if file_data:
on_file_open()
@ -97,19 +98,19 @@ class InputControls:
with io.StringIO(base64.b64decode(f_str).decode()) as file:
base, ext = os.path.splitext(f_name)
try:
file_data = pyzebra.parse_1D(file, ext)
except:
print(f"Error loading {f_name}")
file_data = pyzebra.parse_1D(file, ext, log=log)
except Exception as e:
log.exception(e)
continue
pyzebra.normalize_dataset(file_data, monitor_spinner.value)
if not new_data: # first file
new_data = file_data
pyzebra.merge_duplicates(new_data)
pyzebra.merge_duplicates(new_data, log=log)
dlfiles.set_names([base] * dlfiles.n_files)
else:
pyzebra.merge_datasets(new_data, file_data)
pyzebra.merge_datasets(new_data, file_data, log=log)
if new_data:
dataset.clear()
@ -129,13 +130,13 @@ class InputControls:
with io.StringIO(base64.b64decode(f_str).decode()) as file:
_, ext = os.path.splitext(f_name)
try:
file_data = pyzebra.parse_1D(file, ext)
except:
print(f"Error loading {f_name}")
file_data = pyzebra.parse_1D(file, ext, log=log)
except Exception as e:
log.exception(e)
continue
pyzebra.normalize_dataset(file_data, monitor_spinner.value)
pyzebra.merge_datasets(dataset, file_data)
pyzebra.merge_datasets(dataset, file_data, log=log)
if file_data:
on_file_open()
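The recurring change in these callbacks replaces a bare `except:` plus `print(...)` with `except Exception as e: log.exception(e)` and a `continue`, so one unreadable file is logged with its traceback (into the per-document log pane) without aborting the rest of the batch. A self-contained sketch of that control flow, using `float` as a stand-in parser and a local logger in place of `doc.logger`:

```python
import logging
from io import StringIO

stream = StringIO()
log = logging.getLogger("panel-demo")  # stand-in for doc.logger
log.setLevel(logging.INFO)
log.addHandler(logging.StreamHandler(stream))

def load_files(contents, parse=float):
    # A failing file is logged with its traceback and skipped;
    # the remaining files still load.
    data = []
    for text in contents:
        try:
            data.append(parse(text))
        except Exception as e:
            log.exception(e)
            continue
    return data
```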

View File

@ -1,6 +1,6 @@
import argparse
import logging
import sys
from io import StringIO
from bokeh.io import curdoc
from bokeh.layouts import column, row
@ -43,11 +43,17 @@ doc.anatric_path = args.anatric_path
doc.spind_path = args.spind_path
doc.sxtal_refgen_path = args.sxtal_refgen_path
# In app_hooks.py a StreamHandler was added to "bokeh" logger
bokeh_stream = logging.getLogger("bokeh").handlers[0].stream
stream = StringIO()
handler = logging.StreamHandler(stream)
handler.setFormatter(
logging.Formatter(fmt="%(asctime)s %(levelname)s: %(message)s", datefmt="%Y-%m-%d %H:%M:%S")
)
logger = logging.getLogger(str(id(doc)))
logger.setLevel(logging.INFO)
logger.addHandler(handler)
doc.logger = logger
log_textareainput = TextAreaInput(title="logging output:")
bokeh_log_textareainput = TextAreaInput(title="server output:")
log_textareainput = TextAreaInput(title="Logging output:")
def proposal_textinput_callback(_attr, _old, _new):
@ -65,7 +71,7 @@ def apply_button_callback():
try:
proposal_path = pyzebra.find_proposal_path(proposal)
except ValueError as e:
print(e)
logger.exception(e)
return
apply_button.disabled = True
else:
@ -94,14 +100,13 @@ doc.add_root(
panel_spind.create(),
]
),
row(log_textareainput, bokeh_log_textareainput, sizing_mode="scale_both"),
row(log_textareainput, sizing_mode="scale_both"),
)
)
def update_stdout():
log_textareainput.value = sys.stdout.getvalue()
bokeh_log_textareainput.value = bokeh_stream.getvalue()
log_textareainput.value = stream.getvalue()
doc.add_periodic_callback(update_stdout, 1000)
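This diff is the heart of the "Isolate loggers per document" commit: instead of one shared `bokeh` logger stream, each Bokeh document gets its own logger (named by `id(doc)`) writing into its own `StringIO`, which a periodic callback flushes into the log text area. A minimal sketch of the isolation, with a hypothetical factory name:

```python
import logging
from io import StringIO

def make_doc_logger(doc_id):
    # One logger per document, keyed by the document's id, so concurrent
    # sessions don't see each other's log output.
    stream = StringIO()
    handler = logging.StreamHandler(stream)
    handler.setFormatter(logging.Formatter(
        fmt="%(asctime)s %(levelname)s: %(message)s", datefmt="%Y-%m-%d %H:%M:%S"
    ))
    logger = logging.getLogger(str(doc_id))
    logger.setLevel(logging.INFO)
    logger.addHandler(handler)
    return logger, stream
```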

View File

@ -33,6 +33,7 @@ from pyzebra import EXPORT_TARGETS, app
def create():
doc = curdoc()
log = doc.logger
dataset1 = []
dataset2 = []
app_dlfiles = app.DownloadFiles(n_files=2)
@ -94,7 +95,7 @@ def create():
def file_open_button_callback():
if len(file_select.value) != 2:
print("WARNING: Select exactly 2 .ccl files.")
log.warning("Select exactly 2 .ccl files.")
return
new_data1 = []
@ -104,13 +105,13 @@ def create():
f_name = os.path.basename(f_path)
base, ext = os.path.splitext(f_name)
try:
file_data = pyzebra.parse_1D(file, ext)
except:
print(f"Error loading {f_name}")
file_data = pyzebra.parse_1D(file, ext, log=log)
except Exception as e:
log.exception(e)
return
pyzebra.normalize_dataset(file_data, monitor_spinner.value)
pyzebra.merge_duplicates(file_data)
pyzebra.merge_duplicates(file_data, log=log)
if ind == 0:
app_dlfiles.set_names([base, base])
@ -133,7 +134,7 @@ def create():
def upload_button_callback(_attr, _old, _new):
if len(upload_button.filename) != 2:
print("WARNING: Upload exactly 2 .ccl files.")
log.warning("Upload exactly 2 .ccl files.")
return
new_data1 = []
@ -142,13 +143,13 @@ def create():
with io.StringIO(base64.b64decode(f_str).decode()) as file:
base, ext = os.path.splitext(f_name)
try:
file_data = pyzebra.parse_1D(file, ext)
except:
print(f"Error loading {f_name}")
file_data = pyzebra.parse_1D(file, ext, log=log)
except Exception as e:
log.exception(e)
return
pyzebra.normalize_dataset(file_data, monitor_spinner.value)
pyzebra.merge_duplicates(file_data)
pyzebra.merge_duplicates(file_data, log=log)
if ind == 0:
app_dlfiles.set_names([base, base])
@ -243,8 +244,8 @@ def create():
plot = figure(
x_axis_label="Scan motor",
y_axis_label="Counts",
plot_height=470,
plot_width=700,
height=470,
width=700,
tools="pan,wheel_zoom,reset",
)
@ -377,11 +378,11 @@ def create():
scan_from2 = dataset2[int(merge_from_select.value)]
if scan_into1 is scan_from1:
print("WARNING: Selected scans for merging are identical")
log.warning("Selected scans for merging are identical")
return
pyzebra.merge_scans(scan_into1, scan_from1)
pyzebra.merge_scans(scan_into2, scan_from2)
pyzebra.merge_scans(scan_into1, scan_from1, log=log)
pyzebra.merge_scans(scan_into2, scan_from2, log=log)
_update_table()
_update_plot()

View File

@ -2,6 +2,7 @@ import os
import tempfile
import numpy as np
from bokeh.io import curdoc
from bokeh.layouts import column, row
from bokeh.models import (
Button,
@ -25,6 +26,8 @@ from pyzebra import EXPORT_TARGETS, app
def create():
doc = curdoc()
log = doc.logger
dataset = []
app_dlfiles = app.DownloadFiles(n_files=2)
@ -112,8 +115,8 @@ def create():
plot = figure(
x_axis_label="Scan motor",
y_axis_label="Counts",
plot_height=470,
plot_width=700,
height=470,
width=700,
tools="pan,wheel_zoom,reset",
)
@ -214,10 +217,10 @@ def create():
scan_from = dataset[int(merge_from_select.value)]
if scan_into is scan_from:
print("WARNING: Selected scans for merging are identical")
log.warning("Selected scans for merging are identical")
return
pyzebra.merge_scans(scan_into, scan_from)
pyzebra.merge_scans(scan_into, scan_from, log=log)
_update_table()
_update_plot()

View File

@ -5,6 +5,7 @@ import subprocess
import tempfile
import numpy as np
from bokeh.io import curdoc
from bokeh.layouts import column, row
from bokeh.models import (
Arrow,
@ -39,6 +40,8 @@ SORT_OPT_NB = ["gamma", "nu", "omega"]
def create():
doc = curdoc()
log = doc.logger
ang_lims = {}
cif_data = {}
params = {}
@ -132,7 +135,11 @@ def create():
params = dict()
params["SPGR"] = cryst_space_group.value
params["CELL"] = cryst_cell.value
ub = pyzebra.calc_ub_matrix(params)
try:
ub = pyzebra.calc_ub_matrix(params, log=log)
except Exception as e:
log.exception(e)
return
ub_matrix.value = " ".join(ub)
ub_matrix_calc = Button(label="UB matrix:", button_type="primary", width=100)
@ -221,9 +228,9 @@ def create():
geom_template = None
pyzebra.export_geom_file(geom_path, ang_lims, geom_template)
print(f"Content of {geom_path}:")
log.info(f"Content of {geom_path}:")
with open(geom_path) as f:
print(f.read())
log.info(f.read())
priority = [sorting_0.value, sorting_1.value, sorting_2.value]
chunks = [sorting_0_dt.value, sorting_1_dt.value, sorting_2_dt.value]
@ -248,9 +255,9 @@ def create():
cfl_template = None
pyzebra.export_cfl_file(cfl_path, params, cfl_template)
print(f"Content of {cfl_path}:")
log.info(f"Content of {cfl_path}:")
with open(cfl_path) as f:
print(f.read())
log.info(f.read())
comp_proc = subprocess.run(
[pyzebra.SXTAL_REFGEN_PATH, cfl_path],
@ -260,8 +267,8 @@ def create():
stderr=subprocess.STDOUT,
text=True,
)
print(" ".join(comp_proc.args))
print(comp_proc.stdout)
log.info(" ".join(comp_proc.args))
log.info(comp_proc.stdout)
if i == 1: # all hkl files are identical, so keep only one
hkl_fname = base_fname + ".hkl"
@ -591,8 +598,8 @@ def create():
_, ext = os.path.splitext(fname)
try:
file_data = pyzebra.parse_hkl(file, ext)
except:
print(f"Error loading {fname}")
except Exception as e:
log.exception(e)
return
fnames.append(fname)
@ -604,7 +611,7 @@ def create():
plot_file = Button(label="Plot selected file(s)", button_type="primary", width=200)
plot_file.on_click(plot_file_callback)
plot = figure(plot_height=550, plot_width=550 + 32, tools="pan,wheel_zoom,reset")
plot = figure(height=550, width=550 + 32, tools="pan,wheel_zoom,reset")
plot.toolbar.logo = None
plot.xaxis.visible = False

View File

@ -24,6 +24,7 @@ from pyzebra import DATA_FACTORY_IMPLEMENTATION, REFLECTION_PRINTER_FORMATS
def create():
doc = curdoc()
log = doc.logger
config = pyzebra.AnatricConfig()
def _load_config_file(file):
@ -347,7 +348,11 @@ def create():
with tempfile.TemporaryDirectory() as temp_dir:
temp_file = temp_dir + "/config.xml"
config.save_as(temp_file)
pyzebra.anatric(temp_file, anatric_path=doc.anatric_path, cwd=temp_dir)
try:
pyzebra.anatric(temp_file, anatric_path=doc.anatric_path, cwd=temp_dir, log=log)
except Exception as e:
log.exception(e)
return
with open(os.path.join(temp_dir, config.logfile)) as f_log:
output_log.value = f_log.read()

View File

@ -36,6 +36,7 @@ IMAGE_PLOT_H = int(IMAGE_H * 2.4) + 27
def create():
doc = curdoc()
log = doc.logger
dataset = []
cami_meta = {}
@ -133,8 +134,8 @@ def create():
for f_name in file_select.value:
try:
new_data.append(pyzebra.read_detector_data(f_name))
except KeyError:
print("Could not read data from the file.")
except KeyError as e:
log.exception(e)
return
dataset.extend(new_data)
@ -275,7 +276,7 @@ def create():
frame_range.bounds = (0, n_im)
scan_motor = scan["scan_motor"]
proj_y_plot.axis[1].axis_label = f"Scanning motor, {scan_motor}"
proj_y_plot.yaxis.axis_label = f"Scanning motor, {scan_motor}"
var = scan[scan_motor]
var_start = var[0]
@ -301,8 +302,8 @@ def create():
x_range=det_x_range,
y_range=frame_range,
extra_y_ranges={"scanning_motor": scanning_motor_range},
plot_height=540,
plot_width=IMAGE_PLOT_W - 3,
height=540,
width=IMAGE_PLOT_W - 3,
tools="pan,box_zoom,wheel_zoom,reset",
active_scroll="wheel_zoom",
)
@ -325,8 +326,8 @@ def create():
x_range=det_y_range,
y_range=frame_range,
extra_y_ranges={"scanning_motor": scanning_motor_range},
plot_height=540,
plot_width=IMAGE_PLOT_H + 22,
height=540,
width=IMAGE_PLOT_H + 22,
tools="pan,box_zoom,wheel_zoom,reset",
active_scroll="wheel_zoom",
)
@ -352,8 +353,8 @@ def create():
colormap_select.on_change("value", colormap_select_callback)
colormap_select.value = "Plasma256"
def proj_auto_checkbox_callback(state):
if state:
def proj_auto_checkbox_callback(_attr, _old, new):
if 0 in new:
proj_display_min_spinner.disabled = True
proj_display_max_spinner.disabled = True
else:
@ -365,7 +366,7 @@ def create():
proj_auto_checkbox = CheckboxGroup(
labels=["Projections Intensity Range"], active=[0], width=145, margin=[10, 5, 0, 5]
)
proj_auto_checkbox.on_click(proj_auto_checkbox_callback)
proj_auto_checkbox.on_change("active", proj_auto_checkbox_callback)
def proj_display_max_spinner_callback(_attr, _old, new):
color_mapper_proj.high = new
@ -411,8 +412,8 @@ def create():
param_plot = figure(
x_axis_label="Parameter",
y_axis_label="Fit parameter",
plot_height=400,
plot_width=700,
height=400,
width=700,
tools="pan,wheel_zoom,reset",
)
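The checkbox change above reflects the bokeh/3 callback migration: `CheckboxGroup.on_click(cb)` passed the active list directly, while `on_change("active", cb)` passes `(attr, old, new)` and activity is tested with `0 in new`. The new signature can be exercised without bokeh at all — a sketch returning the "auto range" decision instead of toggling spinners:

```python
def proj_auto_checkbox_callback(_attr, _old, new):
    # bokeh/3-style on_change callback: `new` is the list of active
    # checkbox indices; checkbox 0 toggles the auto intensity range.
    return 0 in new  # True -> disable the manual min/max spinners
```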

View File

@ -43,6 +43,7 @@ IMAGE_PLOT_H = int(IMAGE_H * 2.4) + 27
def create():
doc = curdoc()
log = doc.logger
dataset = []
cami_meta = {}
@ -102,8 +103,8 @@ def create():
nonlocal dataset
try:
scan = pyzebra.read_detector_data(io.BytesIO(base64.b64decode(new)), None)
except KeyError:
print("Could not read data from the file.")
except Exception as e:
log.exception(e)
return
dataset = [scan]
@ -137,8 +138,8 @@ def create():
f_name = os.path.basename(f_path)
try:
file_data = [pyzebra.read_detector_data(f_path, cm)]
except:
print(f"Error loading {f_name}")
except Exception as e:
log.exception(e)
continue
pyzebra.normalize_dataset(file_data, monitor_spinner.value)
@ -146,7 +147,7 @@ def create():
if not new_data: # first file
new_data = file_data
else:
pyzebra.merge_datasets(new_data, file_data)
pyzebra.merge_datasets(new_data, file_data, log=log)
if new_data:
dataset = new_data
@ -161,12 +162,12 @@ def create():
f_name = os.path.basename(f_path)
try:
file_data = [pyzebra.read_detector_data(f_path, None)]
except:
print(f"Error loading {f_name}")
except Exception as e:
log.exception(e)
continue
pyzebra.normalize_dataset(file_data, monitor_spinner.value)
pyzebra.merge_datasets(dataset, file_data)
pyzebra.merge_datasets(dataset, file_data, log=log)
if file_data:
_init_datatable()
@ -292,10 +293,10 @@ def create():
scan_from = dataset[int(merge_from_select.value)]
if scan_into is scan_from:
print("WARNING: Selected scans for merging are identical")
log.warning("Selected scans for merging are identical")
return
pyzebra.merge_h5_scans(scan_into, scan_from)
pyzebra.merge_h5_scans(scan_into, scan_from, log=log)
_update_table()
_update_image()
_update_proj_plots()
@ -356,8 +357,8 @@ def create():
gamma_c = gamma[det_c_y, det_c_x]
nu_c = nu[det_c_y, det_c_x]
omega_c = omega[det_c_y, det_c_x]
chi_c = None
phi_c = None
chi_c = scan["chi"][index]
phi_c = scan["phi"][index]
else: # zebra_mode == "bi"
wave = scan["wave"]
@ -406,7 +407,7 @@ def create():
frame_range.bounds = (0, n_im)
scan_motor = scan["scan_motor"]
proj_y_plot.axis[1].axis_label = f"Scanning motor, {scan_motor}"
proj_y_plot.yaxis.axis_label = f"Scanning motor, {scan_motor}"
var = scan[scan_motor]
var_start = var[0]
@ -458,8 +459,8 @@ def create():
y_range=Range1d(0, IMAGE_H, bounds=(0, IMAGE_H)),
x_axis_location="above",
y_axis_location="right",
plot_height=IMAGE_PLOT_H,
plot_width=IMAGE_PLOT_W,
height=IMAGE_PLOT_H,
width=IMAGE_PLOT_W,
toolbar_location="left",
tools="pan,box_zoom,wheel_zoom,reset",
active_scroll="wheel_zoom",
@ -509,8 +510,8 @@ def create():
proj_v = figure(
x_range=plot.x_range,
y_axis_location="right",
plot_height=150,
plot_width=IMAGE_PLOT_W,
height=150,
width=IMAGE_PLOT_W,
tools="",
toolbar_location=None,
)
@ -524,8 +525,8 @@ def create():
proj_h = figure(
x_axis_location="above",
y_range=plot.y_range,
plot_height=IMAGE_PLOT_H,
plot_width=150,
height=IMAGE_PLOT_H,
width=150,
tools="",
toolbar_location=None,
)
@ -589,8 +590,8 @@ def create():
y_range=frame_range,
extra_x_ranges={"gamma": gamma_range},
extra_y_ranges={"scanning_motor": scanning_motor_range},
plot_height=540,
plot_width=IMAGE_PLOT_W - 3,
height=540,
width=IMAGE_PLOT_W - 3,
tools="pan,box_zoom,wheel_zoom,reset",
active_scroll="wheel_zoom",
)
@ -617,8 +618,8 @@ def create():
y_range=frame_range,
extra_x_ranges={"nu": nu_range},
extra_y_ranges={"scanning_motor": scanning_motor_range},
plot_height=540,
plot_width=IMAGE_PLOT_H + 22,
height=540,
width=IMAGE_PLOT_H + 22,
tools="pan,box_zoom,wheel_zoom,reset",
active_scroll="wheel_zoom",
)
@ -636,7 +637,7 @@ def create():
proj_y_image = proj_y_plot.image(source=proj_y_image_source, color_mapper=lin_color_mapper_proj)
# ROI slice plot
roi_avg_plot = figure(plot_height=150, plot_width=IMAGE_PLOT_W, tools="", toolbar_location=None)
roi_avg_plot = figure(height=150, width=IMAGE_PLOT_W, tools="", toolbar_location=None)
roi_avg_plot_line_source = ColumnDataSource(dict(x=[], y=[]))
roi_avg_plot.line(source=roi_avg_plot_line_source, line_color="steelblue")
@ -655,8 +656,8 @@ def create():
colormap_select.on_change("value", colormap_select_callback)
colormap_select.value = "Plasma256"
def colormap_scale_rg_callback(selection):
if selection == 0: # Linear
def colormap_scale_rg_callback(_attr, _old, new):
if new == 0: # Linear
plot_image.glyph.color_mapper = lin_color_mapper
proj_x_image.glyph.color_mapper = lin_color_mapper_proj
proj_y_image.glyph.color_mapper = lin_color_mapper_proj
@ -675,10 +676,10 @@ def create():
colormap_scale_rg.active = 0
colormap_scale_rg = RadioGroup(labels=["Linear", "Logarithmic"], active=0, width=100)
colormap_scale_rg.on_click(colormap_scale_rg_callback)
colormap_scale_rg.on_change("active", colormap_scale_rg_callback)
def main_auto_checkbox_callback(state):
if state:
def main_auto_checkbox_callback(_attr, _old, new):
if 0 in new:
display_min_spinner.disabled = True
display_max_spinner.disabled = True
else:
@ -690,7 +691,7 @@ def create():
main_auto_checkbox = CheckboxGroup(
labels=["Frame Intensity Range"], active=[0], width=145, margin=[10, 5, 0, 5]
)
main_auto_checkbox.on_click(main_auto_checkbox_callback)
main_auto_checkbox.on_change("active", main_auto_checkbox_callback)
def display_max_spinner_callback(_attr, _old, new):
lin_color_mapper.high = new
@ -709,8 +710,8 @@ def create():
display_min_spinner = Spinner(value=0, disabled=bool(main_auto_checkbox.active), width=100)
display_min_spinner.on_change("value", display_min_spinner_callback)
def proj_auto_checkbox_callback(state):
if state:
def proj_auto_checkbox_callback(_attr, _old, new):
if 0 in new:
proj_display_min_spinner.disabled = True
proj_display_max_spinner.disabled = True
else:
@ -722,7 +723,7 @@ def create():
proj_auto_checkbox = CheckboxGroup(
labels=["Projections Intensity Range"], active=[0], width=145, margin=[10, 5, 0, 5]
)
proj_auto_checkbox.on_click(proj_auto_checkbox_callback)
proj_auto_checkbox.on_change("active", proj_auto_checkbox_callback)
def proj_display_max_spinner_callback(_attr, _old, new):
lin_color_mapper_proj.high = new
@ -811,7 +812,7 @@ def create():
gamma = scan["gamma"][0]
omega = scan["omega"][0]
nu = scan["nu"][0]
nu = scan["nu"]
chi = scan["chi"][0]
phi = scan["phi"][0]
@ -842,10 +843,6 @@ def create():
x_pos = scan["fit"]["x_pos"]
y_pos = scan["fit"]["y_pos"]
if scan["zebra_mode"] == "nb":
chi = None
phi = None
events_data["wave"].append(wave)
events_data["ddist"].append(ddist)
events_data["cell"].append(cell)

View File

@ -3,6 +3,7 @@ import os
import tempfile
import numpy as np
from bokeh.io import curdoc
from bokeh.layouts import column, row
from bokeh.models import (
Button,
@ -38,6 +39,8 @@ def color_palette(n_colors):
def create():
doc = curdoc()
log = doc.logger
dataset = []
app_dlfiles = app.DownloadFiles(n_files=1)
@ -209,8 +212,8 @@ def create():
plot = figure(
x_axis_label="Scan motor",
y_axis_label="Counts",
plot_height=450,
plot_width=700,
height=450,
width=700,
tools="pan,wheel_zoom,reset",
)
@ -243,8 +246,8 @@ def create():
ov_plot = figure(
x_axis_label="Scan motor",
y_axis_label="Counts",
plot_height=450,
plot_width=700,
height=450,
width=700,
tools="pan,wheel_zoom,reset",
)
@ -261,8 +264,8 @@ def create():
y_axis_label="Param",
x_range=Range1d(),
y_range=Range1d(),
plot_height=450,
plot_width=700,
height=450,
width=700,
tools="pan,wheel_zoom,reset",
)
@ -279,8 +282,8 @@ def create():
param_plot = figure(
x_axis_label="Parameter",
y_axis_label="Fit parameter",
plot_height=400,
plot_width=700,
height=400,
width=700,
tools="pan,wheel_zoom,reset",
)
@ -361,10 +364,10 @@ def create():
scan_from = dataset[int(merge_from_select.value)]
if scan_into is scan_from:
print("WARNING: Selected scans for merging are identical")
log.warning("Selected scans for merging are identical")
return
pyzebra.merge_scans(scan_into, scan_from)
pyzebra.merge_scans(scan_into, scan_from, log=log)
_update_table()
_update_single_scan_plot()
_update_overview()

View File

@ -3,6 +3,7 @@ import io
import os
import numpy as np
from bokeh.io import curdoc
from bokeh.layouts import column, row
from bokeh.models import (
Button,
@@ -31,6 +32,8 @@ from pyzebra.app.panel_hdf_viewer import calculate_hkl
def create():
doc = curdoc()
log = doc.logger
_update_slice = None
measured_data_div = Div(text="Measured <b>HDF</b> data:")
measured_data = FileInput(accept=".hdf", multiple=True, width=200)
@@ -59,8 +62,8 @@ def create():
# Read data
try:
det_data = pyzebra.read_detector_data(io.BytesIO(base64.b64decode(fdata)))
except:
print(f"Error loading {fname}")
except Exception as e:
log.exception(e)
return None
if ind == 0:
@@ -179,8 +182,8 @@ def create():
_, ext = os.path.splitext(fname)
try:
fdata = pyzebra.parse_hkl(file, ext)
except:
print(f"Error loading {fname}")
except Exception as e:
log.exception(e)
return
for ind in range(len(fdata["counts"])):
@@ -290,8 +293,8 @@ def create():
plot = figure(
x_range=DataRange1d(),
y_range=DataRange1d(),
plot_height=550 + 27,
plot_width=550 + 117,
height=550 + 27,
width=550 + 117,
tools="pan,wheel_zoom,reset",
)
plot.toolbar.logo = None
@@ -324,7 +327,7 @@ def create():
hkl_in_plane_y = TextInput(title="in-plane Y", value="", width=100, disabled=True)
def redef_lattice_cb_callback(_attr, _old, new):
if new:
if 0 in new:
redef_lattice_ti.disabled = False
else:
redef_lattice_ti.disabled = True
@@ -334,7 +337,7 @@ def create():
redef_lattice_ti = TextInput(width=490, disabled=True)
def redef_ub_cb_callback(_attr, _old, new):
if new:
if 0 in new:
redef_ub_ti.disabled = False
else:
redef_ub_ti.disabled = True
@@ -369,8 +372,8 @@ def create():
display_max_ni = NumericInput(title="max:", value=1, mode="float", width=70)
display_max_ni.on_change("value", display_max_ni_callback)
def colormap_scale_rg_callback(selection):
if selection == 0: # Linear
def colormap_scale_rg_callback(_attr, _old, new):
if new == 0: # Linear
plot_image.glyph.color_mapper = lin_color_mapper
lin_color_bar.visible = True
log_color_bar.visible = False
@@ -384,7 +387,7 @@ def create():
colormap_scale_rg.active = 0
colormap_scale_rg = RadioGroup(labels=["Linear", "Logarithmic"], active=0, width=100)
colormap_scale_rg.on_click(colormap_scale_rg_callback)
colormap_scale_rg.on_change("active", colormap_scale_rg_callback)
xrange_min_ni = NumericInput(title="x range min:", value=0, mode="float", width=70)
xrange_max_ni = NumericInput(title="max:", value=1, mode="float", width=70)
@@ -395,7 +398,7 @@ def create():
yrange_step_ni = NumericInput(title="y mesh:", value=0.01, mode="float", width=70)
def auto_range_cb_callback(_attr, _old, new):
if new:
if 0 in new:
xrange_min_ni.disabled = True
xrange_max_ni.disabled = True
yrange_min_ni.disabled = True


@@ -21,6 +21,7 @@ import pyzebra
def create():
doc = curdoc()
log = doc.logger
events_data = doc.events_data
npeaks_spinner = Spinner(title="Number of peaks from hdf_view panel:", disabled=True)
@@ -63,8 +64,8 @@ def create():
stderr=subprocess.STDOUT,
text=True,
)
print(" ".join(comp_proc.args))
print(comp_proc.stdout)
log.info(" ".join(comp_proc.args))
log.info(comp_proc.stdout)
# prepare an event file
diff_vec = []
@@ -94,9 +95,9 @@ def create():
f"{x_pos} {y_pos} {intensity} {snr_cnts} {dv1} {dv2} {dv3} {d_spacing}\n"
)
print(f"Content of {temp_event_file}:")
log.info(f"Content of {temp_event_file}:")
with open(temp_event_file) as f:
print(f.read())
log.info(f.read())
comp_proc = subprocess.run(
[
@@ -123,8 +124,8 @@ def create():
stderr=subprocess.STDOUT,
text=True,
)
print(" ".join(comp_proc.args))
print(comp_proc.stdout)
log.info(" ".join(comp_proc.args))
log.info(comp_proc.stdout)
spind_out_file = os.path.join(temp_dir, "spind.txt")
spind_res = dict(
@@ -146,12 +147,12 @@ def create():
ub_matrices.append(ub_matrix_spind)
spind_res["ub_matrix"].append(str(ub_matrix_spind * 1e-10))
print(f"Content of {spind_out_file}:")
log.info(f"Content of {spind_out_file}:")
with open(spind_out_file) as f:
print(f.read())
log.info(f.read())
except FileNotFoundError:
print("No results from spind")
log.warning("No results from spind")
results_table_source.data.update(spind_res)
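The spind panel now routes subprocess output through the logger instead of `print()`; a minimal stdlib sketch of that capture-and-log pattern (the helper name `run_and_log` is illustrative):

```python
import logging
import subprocess
import sys

logger = logging.getLogger(__name__)

def run_and_log(args, log=logger):
    # Capture stdout and stderr together and forward both the command line
    # and its output to the logger, as the hunks above do for spind.
    proc = subprocess.run(
        args, stdout=subprocess.PIPE, stderr=subprocess.STDOUT, text=True
    )
    log.info(" ".join(proc.args))
    log.info(proc.stdout)
    return proc

proc = run_and_log([sys.executable, "-c", "print('ok')"])
```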


@@ -3,6 +3,7 @@ import io
import os
import numpy as np
from bokeh.io import curdoc
from bokeh.layouts import column, row
from bokeh.models import (
Arrow,
@@ -30,6 +31,9 @@ import pyzebra
class PlotHKL:
def __init__(self):
doc = curdoc()
log = doc.logger
_update_slice = None
measured_data_div = Div(text="Measured <b>CCL</b> data:")
measured_data = FileInput(accept=".ccl", multiple=True, width=200)
@@ -62,9 +66,9 @@ class PlotHKL:
with io.StringIO(base64.b64decode(md_fdata[0]).decode()) as file:
_, ext = os.path.splitext(md_fnames[0])
try:
file_data = pyzebra.parse_1D(file, ext)
except:
print(f"Error loading {md_fnames[0]}")
file_data = pyzebra.parse_1D(file, ext, log=log)
except Exception as e:
log.exception(e)
return None
alpha = file_data[0]["alpha_cell"] * np.pi / 180.0
@@ -144,9 +148,9 @@ class PlotHKL:
with io.StringIO(base64.b64decode(md_fdata[j]).decode()) as file:
_, ext = os.path.splitext(md_fname)
try:
file_data = pyzebra.parse_1D(file, ext)
except:
print(f"Error loading {md_fname}")
file_data = pyzebra.parse_1D(file, ext, log=log)
except Exception as e:
log.exception(e)
return None
pyzebra.normalize_dataset(file_data)
@@ -291,8 +295,8 @@ class PlotHKL:
_, ext = os.path.splitext(fname)
try:
fdata = pyzebra.parse_hkl(file, ext)
except:
print(f"Error loading {fname}")
except Exception as e:
log.exception(e)
return
for ind in range(len(fdata["counts"])):
@@ -441,7 +445,7 @@ class PlotHKL:
plot_file = Button(label="Plot selected file(s)", button_type="primary", width=200)
plot_file.on_click(plot_file_callback)
plot = figure(plot_height=550, plot_width=550 + 32, tools="pan,wheel_zoom,reset")
plot = figure(height=550, width=550 + 32, tools="pan,wheel_zoom,reset")
plot.toolbar.logo = None
plot.xaxis.visible = False
@@ -517,7 +521,7 @@ class PlotHKL:
tol_k_ni = NumericInput(title="k tolerance:", value=0.01, mode="float", width=100)
def show_legend_cb_callback(_attr, _old, new):
plot.legend.visible = bool(new)
plot.legend.visible = 0 in new
show_legend_cb = CheckboxGroup(labels=["Show legend"], active=[0])
show_legend_cb.on_change("active", show_legend_cb_callback)


@@ -1,3 +1,4 @@
import logging
import os
import re
from ast import literal_eval
@@ -5,44 +6,51 @@ from collections import defaultdict
import numpy as np
logger = logging.getLogger(__name__)
META_VARS_STR = (
"instrument",
"title",
"sample",
"comment",
"user",
"ProposalID",
"proposal_id",
"original_filename",
"date",
"zebra_mode",
"proposal",
"proposal_user",
"proposal_title",
"proposal_email",
"detectorDistance",
"zebramode",
"sample_name",
)
META_VARS_FLOAT = (
"omega",
"mf",
"2-theta",
"chi",
"phi",
"nu",
"temp",
"wavelength",
"a",
"b",
"c",
"alpha",
"beta",
"gamma",
"omega",
"chi",
"phi",
"temp",
"mf",
"temperature",
"magnetic_field",
"cex1",
"cex2",
"wavelength",
"mexz",
"moml",
"mcvl",
"momu",
"mcvu",
"2-theta",
"twotheta",
"nu",
"gamma_angle",
"polar_angle",
"tilt_angle",
"distance",
"distance_an",
"snv",
"snh",
"snvm",
@@ -55,6 +63,13 @@ META_VARS_FLOAT = (
"s2vb",
"s2hr",
"s2hl",
"a5",
"a6",
"a4t",
"s2ant",
"s2anb",
"s2anl",
"s2anr",
)
META_UB_MATRIX = ("ub1j", "ub2j", "ub3j", "UB")
@@ -99,7 +114,7 @@ def load_1D(filepath):
return dataset
def parse_1D(fileobj, data_type):
def parse_1D(fileobj, data_type, log=logger):
metadata = {"data_type": data_type}
# read metadata
@@ -116,13 +131,23 @@ def parse_1D(fileobj, data_type):
var_name = var_name.strip()
value = value.strip()
if value == "UNKNOWN":
metadata[var_name] = None
continue
try:
if var_name in META_VARS_STR:
if var_name == "zebramode":
var_name = "zebra_mode"
metadata[var_name] = value
elif var_name in META_VARS_FLOAT:
if var_name == "2-theta": # fix that angle name not to be an expression
var_name = "twotheta"
if var_name == "temperature":
var_name = "temp"
if var_name == "magnetic_field":
var_name = "mf"
if var_name in ("a", "b", "c", "alpha", "beta", "gamma"):
var_name += "_cell"
metadata[var_name] = float(value)
@@ -137,7 +162,7 @@ def parse_1D(fileobj, data_type):
metadata["ub"][row, :] = list(map(float, value.split()))
except Exception:
print(f"Error reading {var_name} with value '{value}'")
log.error(f"Error reading {var_name} with value '{value}'")
metadata[var_name] = 0
# handle older files that don't contain "zebra_mode" metadata
@@ -202,15 +227,18 @@ def parse_1D(fileobj, data_type):
dataset.append({**metadata, **scan})
elif data_type == ".dat":
# TODO: this might need to be adapted in the future, when "gamma" will be added to dat files
if metadata["zebra_mode"] == "nb":
metadata["gamma"] = metadata["twotheta"]
if "gamma_angle" in metadata:
# support for the new format
metadata["gamma"] = metadata["gamma_angle"]
else:
metadata["gamma"] = metadata["twotheta"]
scan = defaultdict(list)
scan["export"] = True
match = re.search("Scanning Variables: (.*), Steps: (.*)", next(fileobj))
motors = [motor.lower() for motor in match.group(1).split(", ")]
motors = [motor.strip().lower() for motor in match.group(1).split(",")]
# Steps can be separated by " " or ", "
steps = [float(step.strip(",")) for step in match.group(2).split()]
@@ -272,7 +300,7 @@ def parse_1D(fileobj, data_type):
dataset.append({**metadata, **scan})
else:
print("Unknown file extention")
log.error("Unknown file extention")
return dataset
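The `parse_1D` changes above normalize legacy and new-format metadata names onto one canonical set before values are stored. A small sketch of that mapping (the `canonical_name` helper is illustrative, not pyzebra API):

```python
# Map new-format names onto the canonical ones used downstream, then
# suffix the lattice parameters so they cannot clash with the motor
# angles of the same name.
ALIASES = {
    "zebramode": "zebra_mode",
    "2-theta": "twotheta",  # "2-theta" would read as an expression
    "temperature": "temp",
    "magnetic_field": "mf",
}
CELL_PARAMS = ("a", "b", "c", "alpha", "beta", "gamma")

def canonical_name(var_name):
    var_name = ALIASES.get(var_name, var_name)
    if var_name in CELL_PARAMS:
        var_name += "_cell"
    return var_name
```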


@@ -1,3 +1,4 @@
import logging
import os
import numpy as np
@@ -6,6 +7,8 @@ from scipy.integrate import simpson, trapezoid
from pyzebra import CCL_ANGLES
logger = logging.getLogger(__name__)
PARAM_PRECISIONS = {
"twotheta": 0.1,
"chi": 0.1,
@@ -33,12 +36,12 @@ def normalize_dataset(dataset, monitor=100_000):
scan["monitor"] = monitor
def merge_duplicates(dataset):
merged = np.zeros(len(dataset), dtype=np.bool)
def merge_duplicates(dataset, log=logger):
merged = np.zeros(len(dataset), dtype=bool)
for ind_into, scan_into in enumerate(dataset):
for ind_from, scan_from in enumerate(dataset[ind_into + 1 :], start=ind_into + 1):
if _parameters_match(scan_into, scan_from) and not merged[ind_from]:
merge_scans(scan_into, scan_from)
merge_scans(scan_into, scan_from, log=log)
merged[ind_from] = True
@@ -75,28 +78,30 @@ def _parameters_match(scan1, scan2):
return True
def merge_datasets(dataset_into, dataset_from):
def merge_datasets(dataset_into, dataset_from, log=logger):
scan_motors_into = dataset_into[0]["scan_motors"]
scan_motors_from = dataset_from[0]["scan_motors"]
if scan_motors_into != scan_motors_from:
print(f"Scan motors mismatch between datasets: {scan_motors_into} vs {scan_motors_from}")
log.warning(
f"Scan motors mismatch between datasets: {scan_motors_into} vs {scan_motors_from}"
)
return
merged = np.zeros(len(dataset_from), dtype=np.bool)
merged = np.zeros(len(dataset_from), dtype=bool)
for scan_into in dataset_into:
for ind, scan_from in enumerate(dataset_from):
if _parameters_match(scan_into, scan_from) and not merged[ind]:
if scan_into["counts"].ndim == 3:
merge_h5_scans(scan_into, scan_from)
merge_h5_scans(scan_into, scan_from, log=log)
else: # scan_into["counts"].ndim == 1
merge_scans(scan_into, scan_from)
merge_scans(scan_into, scan_from, log=log)
merged[ind] = True
for scan_from in dataset_from:
dataset_into.append(scan_from)
def merge_scans(scan_into, scan_from):
def merge_scans(scan_into, scan_from, log=logger):
if "init_scan" not in scan_into:
scan_into["init_scan"] = scan_into.copy()
@@ -148,10 +153,10 @@ def merge_scans(scan_into, scan_from):
fname1 = os.path.basename(scan_into["original_filename"])
fname2 = os.path.basename(scan_from["original_filename"])
print(f'Merging scans: {scan_into["idx"]} ({fname1}) <-- {scan_from["idx"]} ({fname2})')
log.info(f'Merging scans: {scan_into["idx"]} ({fname1}) <-- {scan_from["idx"]} ({fname2})')
def merge_h5_scans(scan_into, scan_from):
def merge_h5_scans(scan_into, scan_from, log=logger):
if "init_scan" not in scan_into:
scan_into["init_scan"] = scan_into.copy()
@@ -160,7 +165,7 @@ def merge_h5_scans(scan_into, scan_from):
for scan in scan_into["merged_scans"]:
if scan_from is scan:
print("Already merged scan")
log.warning("Already merged scan")
return
scan_into["merged_scans"].append(scan_from)
@@ -212,7 +217,7 @@ def merge_h5_scans(scan_into, scan_from):
fname1 = os.path.basename(scan_into["original_filename"])
fname2 = os.path.basename(scan_from["original_filename"])
print(f'Merging scans: {scan_into["idx"]} ({fname1}) <-- {scan_from["idx"]} ({fname2})')
log.info(f'Merging scans: {scan_into["idx"]} ({fname1}) <-- {scan_from["idx"]} ({fname2})')
def restore_scan(scan):
@@ -230,7 +235,7 @@ def restore_scan(scan):
scan["export"] = True
def fit_scan(scan, model_dict, fit_from=None, fit_to=None):
def fit_scan(scan, model_dict, fit_from=None, fit_to=None, log=logger):
if fit_from is None:
fit_from = -np.inf
if fit_to is None:
@@ -243,7 +248,7 @@ def fit_scan(scan, model_dict, fit_from=None, fit_to=None):
# apply fitting range
fit_ind = (fit_from <= x_fit) & (x_fit <= fit_to)
if not np.any(fit_ind):
print(f"No data in fit range for scan {scan['idx']}")
log.warning(f"No data in fit range for scan {scan['idx']}")
return
y_fit = y_fit[fit_ind]
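The `np.bool` → `bool` replacements above track NumPy's removal of its deprecated scalar aliases (deprecated in 1.20, removed in 1.24); the builtins are drop-in replacements as dtype arguments:

```python
import numpy as np

# np.bool / np.float were plain aliases for the builtins and are gone in
# NumPy >= 1.24; passing the builtin types yields the same dtypes as before.
merged = np.zeros(4, dtype=bool)
counts = np.array([1, 2, 3]).astype(float)
```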


@@ -47,9 +47,9 @@ def parse_h5meta(file):
if variable in META_STR:
pass
elif variable in META_CELL:
value = np.array(value.split(",")[:6], dtype=np.float)
value = np.array(value.split(",")[:6], dtype=float)
elif variable in META_MATRIX:
value = np.array(value.split(",")[:9], dtype=np.float).reshape(3, 3)
value = np.array(value.split(",")[:9], dtype=float).reshape(3, 3)
else: # default is a single float number
value = float(value)
content[section][variable] = value
@@ -69,7 +69,7 @@ def read_detector_data(filepath, cami_meta=None):
ndarray: A 3D array of data, omega, gamma, nu.
"""
with h5py.File(filepath, "r") as h5f:
counts = h5f["/entry1/area_detector2/data"][:].astype(np.float64)
counts = h5f["/entry1/area_detector2/data"][:].astype(float)
n, cols, rows = counts.shape
if "/entry1/experiment_identifier" in h5f: # old format
@@ -114,10 +114,14 @@ def read_detector_data(filepath, cami_meta=None):
scan["nu"] = h5f["/entry1/ZEBRA/area_detector2/tilt_angle"][0]
scan["ddist"] = h5f["/entry1/ZEBRA/area_detector2/distance"][0]
scan["wave"] = h5f["/entry1/ZEBRA/monochromator/wavelength"][0]
scan["chi"] = h5f["/entry1/sample/chi"][:]
if scan["zebra_mode"] == "nb":
scan["chi"] = np.array([180])
scan["phi"] = np.array([0])
elif scan["zebra_mode"] == "bi":
scan["chi"] = h5f["/entry1/sample/chi"][:]
scan["phi"] = h5f["/entry1/sample/phi"][:]
if len(scan["chi"]) == 1:
scan["chi"] = np.ones(n) * scan["chi"]
scan["phi"] = h5f["/entry1/sample/phi"][:]
if len(scan["phi"]) == 1:
scan["phi"] = np.ones(n) * scan["phi"]
if h5f["/entry1/sample/UB"].size == 0:
@@ -144,11 +148,21 @@ def read_detector_data(filepath, cami_meta=None):
if "/entry1/sample/magnetic_field" in h5f:
scan["mf"] = h5f["/entry1/sample/magnetic_field"][:]
if "mf" in scan:
# TODO: NaNs are not JSON compliant, so replace them with None
# this is not a great solution, but makes it safe to use the array in bokeh
scan["mf"] = np.where(np.isnan(scan["mf"]), None, scan["mf"])
if "/entry1/sample/temperature" in h5f:
scan["temp"] = h5f["/entry1/sample/temperature"][:]
elif "/entry1/sample/Ts/value" in h5f:
scan["temp"] = h5f["/entry1/sample/Ts/value"][:]
if "temp" in scan:
# TODO: NaNs are not JSON compliant, so replace them with None
# this is not a great solution, but makes it safe to use the array in bokeh
scan["temp"] = np.where(np.isnan(scan["temp"]), None, scan["temp"])
# overwrite metadata from .cami
if cami_meta is not None:
if "crystal" in cami_meta:
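The NaN handling above exists because NaN is not valid JSON; swapping it for `None` makes the arrays safe to ship to the browser through Bokeh, at the cost of an object-dtype array. A minimal sketch:

```python
import numpy as np

# NaN is not JSON-compliant, so it is replaced with None before the array
# reaches Bokeh; note the result becomes an object array of floats and None.
mf = np.array([1.5, np.nan, 2.5])
mf = np.where(np.isnan(mf), None, mf)
```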


@@ -1,4 +1,5 @@
import io
import logging
import os
import subprocess
import tempfile
@@ -6,7 +7,9 @@ from math import ceil, floor
import numpy as np
SXTAL_REFGEN_PATH = "/afs/psi.ch/project/sinq/rhel7/bin/Sxtal_Refgen"
logger = logging.getLogger(__name__)
SXTAL_REFGEN_PATH = "/afs/psi.ch/project/sinq/rhel8/bin/Sxtal_Refgen"
_zebraBI_default_geom = """GEOM 2 Bissecting - HiCHI
BLFR z-up
@@ -144,7 +147,7 @@ def export_geom_file(path, ang_lims, template=None):
out_file.write(f"{'':<8}{ang:<10}{vals[0]:<10}{vals[1]:<10}{vals[2]:<10}\n")
def calc_ub_matrix(params):
def calc_ub_matrix(params, log=logger):
with tempfile.TemporaryDirectory() as temp_dir:
cfl_file = os.path.join(temp_dir, "ub_matrix.cfl")
@@ -160,8 +163,8 @@ def calc_ub_matrix(params):
stderr=subprocess.STDOUT,
text=True,
)
print(" ".join(comp_proc.args))
print(comp_proc.stdout)
log.info(" ".join(comp_proc.args))
log.info(comp_proc.stdout)
sfa_file = os.path.join(temp_dir, "ub_matrix.sfa")
ub_matrix = []
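`calc_ub_matrix` drives the external Sxtal_Refgen tool inside a throwaway workspace; a stdlib sketch of that temp-directory pattern (file names illustrative):

```python
import os
import tempfile

# Write the tool's input file into a scratch directory that is deleted
# automatically when the with-block exits.
with tempfile.TemporaryDirectory() as temp_dir:
    cfl_file = os.path.join(temp_dir, "ub_matrix.cfl")
    with open(cfl_file, "w") as f:
        f.write("TITLE ub_matrix\n")
    existed_inside = os.path.exists(cfl_file)
```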


@@ -1,11 +0,0 @@
[Unit]
Description=pyzebra-test web server
[Service]
Type=simple
User=pyzebra
ExecStart=/bin/bash /usr/local/sbin/pyzebra-test.sh
Restart=always
[Install]
WantedBy=multi-user.target


@@ -1,4 +0,0 @@
source /opt/miniconda3/etc/profile.d/conda.sh
conda activate test
python /opt/pyzebra/pyzebra/app/cli.py --port=5010 --allow-websocket-origin=pyzebra.psi.ch:5010 --args --spind-path=/opt/spind


@@ -1,10 +0,0 @@
[Unit]
Description=pyzebra web server
[Service]
Type=simple
ExecStart=/bin/bash /usr/local/sbin/pyzebra.sh
Restart=always
[Install]
WantedBy=multi-user.target


@@ -1,4 +0,0 @@
source /opt/miniconda3/etc/profile.d/conda.sh
conda activate prod
pyzebra --port=80 --allow-websocket-origin=pyzebra.psi.ch:80 --args --spind-path=/opt/spind