## DIMA: Data Integration and Metadata Annotation

## Description

**DIMA** (Data Integration and Metadata Annotation) is a Python package developed to support the findable, accessible, interoperable, and reusable (FAIR) data transformation of multi-instrument data at the **Laboratory of Atmospheric Chemistry** as part of the project **IVDAV**: *Instant and Versatile Data Visualization During the Current Dark Period of the Life Cycle of FAIR Research*, funded by the [ETH-Domain ORD Program Measure 1](https://ethrat.ch/en/measure-1-calls-for-field-specific-actions/).

The **FAIR** data transformation involves cycles of data harmonization and metadata review. DIMA facilitates these processes by enabling the integration and annotation of multi-instrument data in HDF5 format. This data may originate from diverse experimental campaigns, including **beamtimes**, **kinetic flowtube studies**, **smog chamber experiments**, and **field campaigns**.

## Key features

DIMA provides reusable operations for data integration, manipulation, and extraction using HDF5 files. These serve as the foundation for the following higher-level operations:

1. **Data integration pipeline**: Searches for, retrieves, and integrates multi-instrument data sources in HDF5 format using a human-readable campaign descriptor YAML file that points to the data sources on a network drive (a hypothetical descriptor is sketched after this list).

2. **Metadata revision pipeline**: Enables updates, deletions, and additions of metadata in an HDF5 file. It operates on the target HDF5 file and a YAML file specifying the required changes. A suitable YAML file specification can be generated by serializing the current metadata of the target HDF5 file. This supports alignment with conventions and the development of campaign-centric vocabularies.

3. **Visualization pipeline**: Generates a treemap visualization of an HDF5 file, highlighting its structure and key metadata elements.

4. **Jupyter notebooks**: Demonstrate DIMA’s core functionalities, such as data integration, HDF5 file creation, visualization, and metadata annotation. Key notebooks include examples for data sharing, OpenBis ETL, and workflow demos.
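> To give a sense of the campaign descriptor used by the data integration pipeline, here is a hypothetical minimal example. The keys, instrument names, and paths below are illustrative assumptions rather than the exact schema; refer to the descriptor files in `input_files/` and to `notebooks/demo_data_integration.ipynb` for working examples.

```yaml
# Hypothetical campaign descriptor (keys and paths are illustrative only)
campaign_name: smog_chamber_study
project: IVDAV
contact: Jane Doe
instrument_folders:            # data sources on the network drive
  - /mnt/network-drive/smog_chamber/gas_analyzer
  - /mnt/network-drive/smog_chamber/acsm
```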
## Requirements

For **Windows** users, the following are required:

1. **Git Bash**: Install [Git Bash](https://git-scm.com/downloads) to run shell scripts (`.sh` files).

2. **Conda**: Install [Anaconda](https://www.anaconda.com/products/individual) or [Miniconda](https://docs.conda.io/en/latest/miniconda.html).

3. **PSI Network Access**: Ensure you have access to the PSI internal network and the necessary permissions to access the source directories. See [notebooks/demo_data_integration.ipynb](notebooks/demo_data_integration.ipynb) for details on how to set up data integration from network drives.

:bulb: **Tip**: Editing your system’s PATH variable ensures both Conda and Git are available in the terminal environment used by Git Bash.
## Getting Started

### Download DIMA

Open a **Git Bash** terminal.

Navigate to your `Gitea` folder, clone the repository, and navigate to the `dima` folder as follows:

```bash
cd path/to/Gitea
git clone --recurse-submodules https://gitea.psi.ch/5505-public/dima.git
cd dima
```
### Install Python Interpreter

Open a **Git Bash** terminal.

**Option 1**: Install a suitable conda environment `multiphase_chemistry_env` inside the repository `dima` as follows:

```bash
cd path/to/Gitea/dima
bash setup_env.sh
```

Alternatively, open an **Anaconda Prompt** or a terminal with access to conda.

**Option 2**: Install the conda environment from the YAML file as follows:

```bash
cd path/to/Gitea/dima
conda env create --file environment.yml
```
<details>
<summary> <b> Working with Jupyter Notebooks </b> </summary>

We now make the previously installed Python environment `multiphase_chemistry_env` selectable as a kernel in Jupyter's interface.

1. Open an Anaconda Prompt, check if the environment exists, and activate it:
   ```
   conda env list
   conda activate multiphase_chemistry_env
   ```
2. Register the environment in Jupyter:
   ```
   python -m ipykernel install --user --name multiphase_chemistry_env --display-name "Python (multiphase_chemistry_env)"
   ```
3. Start a Jupyter Notebook by running the command:
   ```
   jupyter notebook
   ```
   and select the `multiphase_chemistry_env` environment from the kernel options.

</details>

## Repository Structure and Software Architecture

**Directories**

- `input_files/` stores some example raw input data or campaign descriptor YAML files.
- `output_files/` stores generated outputs for local processing.
- `instruments/` contains instrument-specific dictionaries and file readers.
- `src/` contains the main source code, including the HDF5 Writer and the Data Operations Manager.
- `utils/` contains generic data conversion operations, supporting the source code.
- `notebooks/` contains a collection of Jupyter notebooks, demonstrating DIMA's main functionalities.
- `pipelines/` contains source code for the data integration pipeline and metadata revision workflow.
- `visualization/` contains primarily functions for visualization of HDF5 files as treemaps.

---

**Software architecture**

<p align="center">
  <img src="docs/software_arquitecture_diagram.svg" alt="DIMA software architecture diagram">
</p>

## Contributing

We welcome contributions to DIMA! The easiest way to contribute is by expanding our file reader registry. This allows DIMA to support new instrument file formats.

### Adding a New Instrument Reader

To integrate a new instrument, add the following files:

- **YAML File** (Instrument-specific metadata terms)
  - **Location**: `instruments/dictionaries/`
  - **Example**: `ACSM_TOFWARE_flags.yaml`
- **Python File** (File reader for the instrument’s data files)
  - **Location**: `instruments/readers/`
  - **Example**: `flag_reader.py` (reads `flag.json` files)
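> For the Python file reader above, the sketch below shows roughly what such a function can look like. The function and file names mirror the example entry, but the exact return structure DIMA expects is an assumption here; align it with the existing readers in `instruments/readers/` before submitting.

```python
# instruments/readers/flag_reader.py -- illustrative sketch only
import json

def read_jsonflag_as_dict(path_to_file: str) -> dict:
    """Read a flag.json file and return its contents as a dictionary."""
    with open(path_to_file, 'r') as f:
        flags = json.load(f)
    # Return the parsed data together with its origin; the exact keys
    # expected by DIMA are an assumption, so check existing readers.
    return {'data': flags, 'source_file': path_to_file}
```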
### Registering the New Instrument Reader

Once the files are added, register the new reader in **one of the following ways**:

1. **Modify the Python registry**
   - Open: `instruments/readers/filereader_registry.py`
   - Add an entry for the new instrument reader

   **Example:**
   ```python
   # Import the new reader (the flag reader is used as an example here)
   from instruments.readers.flag_reader import read_jsonflag_as_dict

   # Register the file extension handled by the new reader (replace '.ext' with the actual extension)
   file_extensions.append('.ext')

   # Register the reader under an instrument-specific key
   # (replace the key and the reader function with those of the new instrument)
   file_readers.update({'<newInstrument>_ext': lambda x: read_jsonflag_as_dict(x)})
   ```
2. **Modify the YAML registry**
   - Open: `instruments/readers/registry.yaml`
   - Add an entry for the new instrument reader
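> As a sketch of what such an entry could look like; the field names below are assumptions modeled on the Python registry, so check `registry.yaml` for the actual schema.

```yaml
# Hypothetical entry (field names are assumptions; check registry.yaml for the actual schema)
<newInstrument>:
  file_extension: .ext
  reader: flag_reader.read_jsonflag_as_dict
```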
### Notes

We are in the process of implementing validation mechanisms and clear guidelines for file readers. More detailed contribution instructions will be provided soon.

If you would like to contribute, please follow best practices, ensure code quality, and run tests before submitting changes. Additional documentation on setting up the development environment and running tests will be added soon.

Thank you for your contributions! 🚀

## Acknowledgment

We gratefully acknowledge the support of the **Laboratory of Atmospheric Chemistry** and funding from the [ETH-Domain ORD Program Measure 1](https://ethrat.ch/en/measure-1-calls-for-field-specific-actions/) through the **IVDAV** project. Special thanks to all contributors and the open-source community for their valuable insights.

## License

This section is work in progress!
## How-to tutorials

<details>
<summary> <b> Data integration workflow </b> </summary>

This section is in progress!

</details>

<details>
<summary> <b> Metadata review workflow </b> </summary>

- review through branches
- updating files with metadata in Openbis

#### Metadata
| **Attribute**      | **CF Equivalent**                                                      | **Definition** |
|--------------------|------------------------------------------------------------------------|----------------|
| campaign_name      | -                                                                      | Denotes a range of possible campaigns, including laboratory and field experiments, beamtime, smog chamber studies, etc., related to atmospheric chemistry research. |
| project            | -                                                                      | Denotes a valid name of the project under which the data was collected or produced. |
| contact            | contact (specifically E-mail address)                                  | Denotes the name of the data producer who conducted the experiment or carried out the project that produced the raw dataset (or an aggregated dataset with multiple owners). |
| description        | title (only info about content), comment (too broad in scope), source | Provides a short description of methods and processing steps used to arrive at the current version of the dataset. |
| experiment         | -                                                                      | Denotes a valid name of the specific experiment or study that generated the data. |
| actris_level       | -                                                                      | Indicates the processing level of the data within the ACTRIS (Aerosol, Clouds and Trace Gases Research Infrastructure) framework. |
| dataset_startdate  | -                                                                      | Denotes the start datetime of the dataset collection. |
| dataset_enddate    | -                                                                      | Denotes the end datetime of the dataset collection. |
| processing_script  | -                                                                      | Denotes the name of the file used to process an initial version (e.g., the original version) of the dataset into a processed dataset. |
| processing_date    | -                                                                      | The date when the data processing was completed. |
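> Purely as an illustration of how these attributes appear in practice, a serialized metadata YAML for an HDF5 file could contain root-level entries along the following lines; the values are made up, and the exact serialization format is defined by the metadata revision pipeline.

```yaml
# Illustrative root-level attributes (values are made up)
campaign_name: smog_chamber_study
project: IVDAV
contact: Jane Doe
description: 'Processed aerosol composition data, averaged to 1-minute time resolution.'
actris_level: 1
dataset_startdate: '2024-03-01T00:00:00'
dataset_enddate: '2024-03-15T23:59:59'
```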
## Adaptability to Experimental Campaign Needs

The `instruments/` module is designed to be highly adaptable, accommodating new instrument types or file reading capabilities with minimal code refactoring. The module is complemented by instrument-specific dictionaries of terms in YAML format, which facilitate automated annotation of observed variables with:

- `standard_name`
- `units`
- `description`

as suggested by [CF metadata conventions](http://cfconventions.org/).
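> For example, a dictionary entry for an observed variable might look like the following; the variable name and values are illustrative and not taken from an actual dictionary in `instruments/dictionaries/`.

```yaml
# Hypothetical entry in an instrument-specific dictionary (illustrative values)
RH:
  standard_name: relative_humidity
  units: percentage
  description: Relative humidity of the sampled air, as reported by the instrument.
```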
### Versioning and Community Collaboration

The instrument-specific dictionaries in YAML format provide a human-readable interface for community-based development of instrument vocabularies. These descriptions can potentially be enhanced with semantic annotations for interoperability across research domains.

### Specifying a compound attribute in YAML

Consider the compound attribute *relative_humidity*, which has subattributes *value*, *units*, *range*, and *definition*. The YAML description of such an attribute is as follows:

```yaml
relative_humidity:
  value: 65
  units: percentage
  range: '[0,100]'
  definition: 'Relative humidity represents the amount of water vapor present in the air relative to the maximum amount of water vapor the air can hold at a given temperature.'
```
### Deleting or renaming a compound attribute in YAML

- Assume the attribute *relative_humidity* already exists. In the serialized YAML it then appears with the subattribute *rename_as*, which defaults to the current name; setting *rename_as* to a different value suggests renaming the attribute.
- To suggest deleting an attribute, add the subattribute *delete* with the value *true*. In the example below, the attribute *relative_humidity* is suggested for deletion. If *delete* is set to *false*, it has no effect.

```yaml
relative_humidity:
  delete: true # we added this line in the review process
  rename_as: relative_humidity
  value: 65
  units: percentage
  range: '[0,100]'
  definition: 'Relative humidity represents the amount of water vapor present in the air relative to the maximum amount of water vapor the air can hold at a given temperature.'
```
</details>