
DIMA: Data Integration and Metadata Annotation

Description

DIMA (Data Integration and Metadata Annotation) is a Python package for data curation and HDF5 conversion of multi-instrument scientific data. It was developed to support the Findable, Accessible, Interoperable, and Reusable (FAIR) data transformation efforts at the Laboratory of Atmospheric Chemistry at the PSI Center for Energy and Environmental Sciences.

The FAIR data transformation involves cycles of data harmonization and metadata review. DIMA facilitates these processes by enabling the integration and annotation of multi-instrument data into the HDF5 format. This data may originate from diverse experimental campaigns, including beamtimes, kinetic flow tube studies, smog chamber experiments, and field campaigns.

Key features

DIMA provides reusable operations for data integration, manipulation, and extraction using HDF5 files. These serve as the foundation for the following higher-level operations:

  1. Data integration pipeline: Searches for, retrieves, and integrates multi-instrument data sources in HDF5 format using a human-readable campaign descriptor YAML file that points to the data sources on a network drive.

  2. Metadata revision pipeline: Enables updates, deletions, and additions of metadata in an HDF5 file. It operates on the target HDF5 file and a YAML file specifying the required changes. A suitable YAML file specification can be generated by serializing the current metadata of the target HDF5 file. This supports alignment with conventions and the development of campaign-centric vocabularies.

  3. Visualization pipeline: Generates a treemap visualization of an HDF5 file, highlighting its structure and key metadata elements.

  4. Jupyter notebooks: Demonstrate DIMA's core functionalities, such as data integration, HDF5 file creation, visualization, and metadata annotation. Key notebooks include examples for data sharing, OpenBis ETL, and workflow demos.
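The data integration pipeline above is driven by a campaign descriptor YAML file that points to data sources. The exact descriptor schema is project-specific; as a minimal sketch of the lookup step, assuming a (hypothetical) descriptor that maps instrument folders to filename patterns, the search might look like:

```python
from pathlib import Path

def find_campaign_files(descriptor: dict, root: str) -> dict:
    """Resolve the data sources named in a parsed campaign descriptor.

    `descriptor` stands in for a parsed YAML file; the key used here
    (instrument_datafolder mapping folders to glob patterns) is
    illustrative, not DIMA's actual schema.
    """
    found = {}
    for folder, pattern in descriptor.get("instrument_datafolder", {}).items():
        # Collect matching files under <root>/<folder>, sorted for stable output
        found[folder] = sorted(str(p) for p in Path(root, folder).glob(pattern))
    return found
```

In DIMA itself, the resolved files are then handed to the instrument-specific readers and written into a single HDF5 file.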

Requirements

For Windows users, the following are required:

  1. Git Bash: Install Git Bash to run shell scripts (.sh files).

  2. Miniforge: Install Miniforge.

  3. PSI Network Access

    Ensure you have access to the PSI internal network and the necessary permissions to access the source directories. See notebooks/demo_data_integration.ipynb for details on how to set up data integration from network drives.

💡 Tip: Editing your system's PATH variable ensures both Conda and Git are available in the terminal environment used by Git Bash.

Getting Started

Download DIMA

Open a Git Bash terminal (or a terminal of your choice).

Navigate to your Gitea folder, clone the repository, and move into the dima directory:

cd path/to/Gitea
git clone --recurse-submodules https://gitea.psi.ch/5505-public/dima.git
cd dima

Install Python Environment Using Miniforge and conda-forge

We recommend using Miniforge to manage your conda environments. Miniforge ensures compatibility with packages from the conda-forge channel.

  1. Make sure you have installed Miniforge.

  2. Open Miniforge Prompt

    ⚠️ Ensure your Conda base environment is from Miniforge (not Anaconda). Run conda info and check for miniforge in the base path and conda-forge as the default channel.

  3. Create the Environment from environment.yml. Inside the Miniforge Prompt, or a terminal with access to conda, run:

    cd path/to/Gitea/dima
    conda env create --file environment.yml
    
  4. Activate the Environment

    conda activate dima_env
    
  5. Remove the defaults channel (if present):

    conda config --remove channels defaults
    
  6. Add conda-forge as the highest-priority channel:

    conda config --add channels conda-forge
    conda config --set channel_priority strict

Working with Jupyter Notebooks

We now make the previously installed Python environment dima_env selectable as a kernel in Jupyter's interface.

  1. Open a Miniforge Prompt, check that the environment exists, and activate it:
    conda env list
    conda activate dima_env
    
  2. Register the environment in Jupyter:
    python -m ipykernel install --user --name dima_env --display-name "Python (dima_env)"  
    
  3. Start a Jupyter Notebook by running the command:
    jupyter notebook
    

and select "Python (dima_env)" from the kernel options.

Repository Structure and Software Architecture

Directories

  • input_files/ stores some example raw input data or campaign descriptor YAML files.

  • output_files/ stores generated outputs for local processing.

  • instruments/ contains instrument-specific dictionaries and file readers.

  • src/ contains the main source code: the HDF5 writer and the data operations manager.

  • utils/ contains generic data conversion operations, supporting the source code.

  • notebooks/ contains a collection of Jupyter notebooks, demonstrating DIMA's main functionalities.

  • pipelines/ contains source code for the data integration pipeline and metadata revision workflow.

  • visualization/ contains primarily functions for visualization of HDF5 files as treemaps.


Software Architecture

(Software architecture diagram)

Contributing

We welcome contributions to DIMA! The easiest way to contribute is by expanding our file reader registry. This allows DIMA to support new instrument file formats.

Adding a New Instrument Reader

To integrate a new instrument, add the following files:

  • YAML File (Instrument-specific metadata terms)

    • Location: instruments/dictionaries/
    • Example: ACSM_TOFWARE_flags.yaml
  • Python File (File reader for the instrument's data files)

    • Location: instruments/readers/
    • Example: flag_reader.py (reads flag.json files)
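A reader is, at its core, a function that parses one instrument file into a dictionary. A minimal sketch in the spirit of flag_reader.py is shown below; the actual signature and return structure in DIMA may be richer (datasets, attributes, provenance), so treat this as illustrative only:

```python
import json
from pathlib import Path

def read_jsonflag_as_dict(path):
    """Read a flag .json file into a dict.

    Illustrative sketch: DIMA's real reader may return additional
    structure used by the HDF5 writer.
    """
    with open(path, "r", encoding="utf-8") as f:
        flags = json.load(f)
    return {
        "name": Path(path).name,  # source filename, kept for provenance
        "data": flags,
    }
```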

Registering the New Instrument Reader

Once the files are added, register the new reader in one of the following ways:

  1. Modify the Python registry

    • Open: instruments/readers/filereader_registry.py
    • Add an entry for the new instrument reader

    Example:

    # Import the new reader
    from instruments.readers.flag_reader import read_jsonflag_as_dict
    # Register the new instrument in the registry
    file_extensions.append('.ext')  # the new instrument's file extension
    file_readers.update({'<newInstrument>_ext': lambda x: read_jsonflag_as_dict(x)})
    
  2. Modify the YAML registry

    • Open: instruments/readers/registry.yaml
    • Add an entry for the new instrument reader
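The schema of registry.yaml is not documented here; as a purely hypothetical sketch, an entry mirroring the Python registration above might look like the following (check the existing entries in instruments/readers/registry.yaml for the actual key names):

```yaml
# Hypothetical entry - field names are illustrative
instruments:
  - instrument_id: newInstrument
    file_extension: .ext
    filereader: flag_reader.py
```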

Notes

We are in the process of implementing validation mechanisms and clear guidelines for file readers. More detailed contribution instructions will be provided soon.

If you would like to contribute, please follow best practices, ensure code quality, and run tests before submitting changes. Additional documentation on setting up the development environment and running tests will be added soon.

Thank you for your contributions! 🚀

Acknowledgment

We gratefully acknowledge the support of the Laboratory of Atmospheric Chemistry and funding from the ETH-Domain ORD Program Measure 1 through the IVDAV project. Special thanks to all contributors and the open-source community for their valuable insights.

License

This section is a work in progress!


How-to tutorials

Data integration workflow

This section is in progress!

Metadata review workflow
  • review through branches
  • updating files with metadata in Openbis

Metadata

| Attribute | CF Equivalent | Definition |
| --- | --- | --- |
| campaign_name | - | Denotes a range of possible campaigns, including laboratory and field experiments, beamtime, smog chamber studies, etc., related to atmospheric chemistry research. |
| project | - | Denotes a valid name of the project under which the data was collected or produced. |
| contact | contact (specifically e-mail address) | Denotes the name of the data producer who conducted the experiment or carried out the project that produced the raw dataset (or an aggregated dataset with multiple owners). |
| description | title (only info about content), comment (too broad in scope), source | Provides a short description of the methods and processing steps used to arrive at the current version of the dataset. |
| experiment | - | Denotes a valid name of the specific experiment or study that generated the data. |
| actris_level | - | Indicates the processing level of the data within the ACTRIS (Aerosol, Clouds and Trace Gases Research Infrastructure) framework. |
| dataset_startdate | - | Denotes the start datetime of the dataset collection. |
| dataset_enddate | - | Denotes the end datetime of the dataset collection. |
| processing_script | - | Denotes the name of the file used to process an initial version (e.g., the original version) of the dataset into a processed dataset. |
| processing_date | - | The date when the data processing was completed. |
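As an illustration, these attributes could be filled in as follows (all values are hypothetical, not taken from a real campaign):

```yaml
campaign_name: smog_chamber_study_2024   # hypothetical values throughout
project: IVDAV
contact: jane.doe@psi.ch
description: Level-1 data, averaged to 10-minute resolution.
experiment: oxidation_flow_reactor_run_03
actris_level: 1
dataset_startdate: 2024-03-01T00:00:00
dataset_enddate: 2024-03-14T23:59:59
processing_script: process_acsm.py
processing_date: 2024-04-02
```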

Adaptability to Experimental Campaign Needs

The instruments/ module is designed to be highly adaptable, accommodating new instrument types or file reading capabilities with minimal code refactoring. The module is complemented by instrument-specific dictionaries of terms in YAML format, which facilitate automated annotation of observed variables with:

  • standard_name
  • units
  • description

as suggested by CF metadata conventions.
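For example, a dictionary entry for an observed variable might look like this (the key names follow CF conventions as described above, but the exact layout is illustrative; see the files in instruments/dictionaries/ for the project's actual conventions):

```yaml
# Hypothetical dictionary entry for an observed variable
RH:
  standard_name: relative_humidity
  units: percent
  description: Relative humidity of sampled air at the instrument inlet.
```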

Versioning and Community Collaboration

The instrument-specific dictionaries in YAML format provide a human-readable interface for community-based development of instrument vocabularies. These descriptions can potentially be enhanced with semantic annotations for interoperability across research domains.

Specifying a compound attribute in YAML

Consider the compound attribute relative_humidity, which has the subattributes value, units, range, and definition. The YAML description of such an attribute is as follows:

relative_humidity:
  value: 65
  units: percentage
  range: '[0,100]'
  definition: 'Relative humidity represents the amount of water vapor present in the air relative to the maximum amount of water vapor the air can hold at a given temperature.'  

Deleting or renaming a compound attribute in YAML

  • To suggest renaming an attribute, add the subattribute rename_as. If its value differs from the attribute's current name, this is interpreted as a renaming suggestion; if it equals the current name (as below), no rename is proposed.
  • To suggest deleting an attribute, add the subattribute delete with the value true. In the example below, the attribute relative_humidity is marked for deletion. If delete is set to false, it has no effect.
relative_humidity:
  delete: true # we added this line in the review process
  rename_as: relative_humidity
  value: 65
  units: percentage
  range: '[0,100]'
  definition: 'Relative humidity represents the amount of water vapor present in the air relative to the maximum amount of water vapor the air can hold at a given temperature.'
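The delete/rename semantics above can be sketched as a plain-Python transformation on a parsed metadata dictionary. This is a simplification: the real metadata revision pipeline applies such changes to attributes stored in an HDF5 file, not to an in-memory dict.

```python
def apply_review(metadata: dict) -> dict:
    """Apply reviewer annotations (delete / rename_as) to compound attributes.

    Simplified sketch of the revision semantics described above.
    """
    revised = {}
    for name, attr in metadata.items():
        if isinstance(attr, dict):
            if attr.get("delete") is True:
                continue  # reviewer asked for deletion; drop the attribute
            # rename_as differing from the current name suggests a rename
            new_name = attr.get("rename_as", name)
            # Strip the review-control keys from the surviving attribute
            revised[new_name] = {k: v for k, v in attr.items()
                                 if k not in ("delete", "rename_as")}
        else:
            revised[name] = attr  # plain (non-compound) attributes pass through
    return revised
```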

Authors

This toolkit was developed by:

  • Juan F. Flórez-Ospina
  • Lucia Iezzi
  • Natasha Garner
  • Thorsten Bartels-Rausch

All authors are affiliated with the PSI Center for Energy and Environmental Sciences, 5232 Villigen PSI, Switzerland.


Funding

This work was funded by the ETH-Domain Open Research Data (ORD) Program Measure 1.

It is part of the project IVDAV: Instant and Versatile Data Visualization During the Current Dark Period of the Life Cycle of FAIR Research, funded by the ETH-Domain ORD Program Measure 1, which is described in more detail at the ORD Program project portal.

