Update README.md

ozerov_d
2023-09-07 10:44:44 +02:00
parent 054cb6e3be
commit e4cd3d7087


@@ -6,7 +6,6 @@ Runs on files produced by [sf-daq](https://github.com/paulscherrerinstitute/sf_d
# Table of Contents
* [Installation](#installation)
-* [Google Authentication](#google-api)
* [Usage](#usage)
* [Before beamtime](#usage1)
* [During beamtime](#usage2)
@@ -16,6 +15,7 @@ Runs on files produced by [sf-daq](https://github.com/paulscherrerinstitute/sf_d
* [pausing indexing](#usage2_pause)
* [After beamtime](#usage3)
* [Configuration files](#config)
+* [Google Authentication](#google-api)
## Description
The Automatic Processing tool checks for new files/runs produced by sf-daq, automatically runs the workload (currently indexing with CrystFEL) and fills the logbook (a Google spreadsheet) with some daq parameters from sf-daq and with information from the processing.
@@ -42,7 +42,7 @@ Needed conda environment can be sourced/used from the common place. In case new
gspread numpy matplotlib
```
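If a new environment has to be created locally, it could be done along these lines; a minimal sketch, assuming the environment name `ap` and the conda-forge channel (both are illustrative choices, not part of the official instructions):

```bash
# Illustrative sketch only: create and activate a local conda environment
# containing the packages listed above. The name "ap" is arbitrary.
conda create --name ap -c conda-forge gspread numpy matplotlib
conda activate ap
```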
In case of installation from source, i.e. a different location of the code or conda environment, change the corresponding lines in the [env_setup.sh](#config_env_setup) file
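As a purely hypothetical illustration of the kind of lines that may need adjusting (the actual variable names and paths in env_setup.sh are specific to this repository and will differ):

```bash
# Hypothetical example only - the real env_setup.sh uses its own variable names and paths.
# Point these at your own checkout of the code and your own conda installation/environment:
export AP_DIR=/path/to/your/ap/checkout        # location of the ap source code
source /path/to/your/miniconda3/etc/profile.d/conda.sh
conda activate ap                               # the environment created/used above
```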
## Google Authentication<a name="google-api"></a>
@@ -208,6 +208,11 @@ this file contains indexing parameters used by crystfel.
**HINT** - in case several proteins are used during the experiment, it is possible to define different indexing parameters for each of them: if a run_index.<cell_name>.sh file is present, the indexing parameters from that file will be used to process the <cell_name> protein sample; if it is not present (default), the run_index.sh parameters are used
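For illustration only, such a per-protein file might carry protein-specific CrystFEL options; the file name, cell file and option values below are hypothetical, and the actual structure of run_index.sh is defined by the repository/beamtime configuration:

```bash
# run_index.lysozyme.sh - hypothetical per-protein variant of run_index.sh.
# Picked up automatically for runs whose cell name is "lysozyme";
# all other samples fall back to the default run_index.sh.
# Example CrystFEL call with protein-specific options (illustrative values):
indexamajig -i files.lst -o lysozyme.stream \
            -g detector.geom -p lysozyme.cell \
            --peaks=peakfinder8 --threshold=10 \
            --indexing=xgandalf
```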
+## Google Authentication<a name="google-api"></a>
+ap can automatically fill a Google spreadsheet with different information. This is done using the Google API, and one needs to have API keys created and allowed for the corresponding spreadsheet (logbook). To create the keys, a few steps need to be done first (see also the sketch after this list):
+- [enable API access for a project](https://docs.gspread.org/en/v5.10.0/oauth2.html#enable-api-access-for-a-project)
+- [create service accounts (*hint* - do several for the same project)](https://docs.gspread.org/en/v5.10.0/oauth2.html#for-bots-using-service-account) (steps 1-4)
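After step 4 you should have downloaded a JSON key file for the service account. A minimal sanity check follows; the file name `my-service-account.json` is a placeholder, and this snippet is not part of ap itself:

```bash
# Placeholder file name - use whatever name your downloaded key file has.
# Print the service-account e-mail contained in the key:
grep '"client_email"' my-service-account.json
# Share the logbook spreadsheet with this e-mail address (editor access),
# otherwise the service account will not be allowed to fill the logbook.
```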
## Roadmap
@@ -218,4 +223,6 @@ this file contains indexing parameters used by crystfel.
## Authors and acknowledgment
Automatic Processing tool was made in 2018 by Karol Nass and Dmitry Ozerov.
+The automatic processing pipeline is described in "*Nass, K. et al. Pink-beam serial femtosecond crystallography for accurate structure-factor determination at an X-ray free-electron laser. IUCrJ 8, 905-920 (2021).*" This paper can be referenced in case usage of the automatic pipeline was helpful for a beamtime.