Update ACSM data chain workflow with markdown descriptions
@@ -1,5 +1,23 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# ACSM Data Chain Workflow\n",
"\n",
"In this notebook, we will go through our **ACSM Data Chain**. This involves the following steps:\n",
"\n",
"1. Run the data integration pipeline to retrieve ACSM input data and prepare it for processing. \n",
"2. Perform QC/QA analysis. \n",
"3. (Optional) Conduct visual analysis for flag validation. \n",
"4. Prepare input data and QC/QA analysis results for submission to the EBAS database. \n",
"\n",
"## Import Libraries and Data Chain Steps\n",
"\n",
"* Execute (or Run) the cell below."
]
},
{
"cell_type": "code",
"execution_count": null,
@@ -31,14 +49,22 @@
"for item in sys.path:\n",
"    print(item)\n",
"\n",
"CAMPAIGN_DATA_FILE = \"../data/collection_JFJ_2024_2025-03-14_2025-03-14.h5\"\n",
"APPEND_DATA_DIR = \"../data/collection_JFJ_2024_2025-03-14_2025-03-14\""
"from dima.pipelines.data_integration import run_pipeline as get_campaign_data\n",
"from pipelines.steps.apply_calibration_factors import main as apply_calibration_factors\n",
"from pipelines.steps.generate_flags import main as generate_flags\n",
"from pipelines.steps.prepare_ebas_submission import main as prepare_ebas_submission "
]
},
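The cell above prints `sys.path` before importing `dima` and `pipelines`. A minimal sketch, assuming the notebook runs from a folder one level below the repository root (as relative paths such as `../pipelines/params/...` suggest), of how the root can be put on `sys.path` so that those packages resolve; this is an illustration, not part of the committed cell:

```python
# Sketch: make the repository root importable before the imports above.
# Assumes the notebook's working directory sits one level below the root.
import os
import sys

repo_root = os.path.abspath(os.path.join(os.getcwd(), ".."))
if repo_root not in sys.path:
    sys.path.insert(0, repo_root)

for item in sys.path:
    print(item)
```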
{
"cell_type": "markdown",
"metadata": {},
"source": []
"source": [
"## Step 1: Retrieve Input Data from a Network Drive\n",
"\n",
"* Create a configuration file (i.e., a `.yaml` file) following the example provided in the input folder.\n",
"* Set up the input and output directory paths.\n",
"* Execute the cell."
]
},
{
"cell_type": "code",
@@ -46,8 +72,30 @@
"metadata": {},
"outputs": [],
"source": [
"from pipelines.steps.apply_calibration_factors import main as run_apply_calibration_factors\n",
"path_to_config_file = '../campaignDescriptor.yaml'\n",
"paths_to_hdf5_files = get_campaign_data(path_to_config_file)\n",
"\n",
"# Select campaign data file and append directory\n",
"CAMPAIGN_DATA_FILE = paths_to_hdf5_files[0]\n",
"APPEND_DATA_DIR = os.path.splitext(CAMPAIGN_DATA_FILE)[0]"
]
},
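Before calibrating, it can help to confirm what Step 1 produced. A minimal sketch, assuming `CAMPAIGN_DATA_FILE` points at the HDF5 file returned above; it only checks that the file exists and lists its top-level groups:

```python
# Sketch: quick look at the campaign HDF5 file produced by Step 1.
import os
import h5py

print(os.path.exists(CAMPAIGN_DATA_FILE), CAMPAIGN_DATA_FILE)

with h5py.File(CAMPAIGN_DATA_FILE, "r") as f:
    # top-level groups, e.g. an instrument folder such as ACSM_TOFWARE
    for name in f.keys():
        print(name)
```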
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Step 2: Calibrate Input Campaign Data and Save Data Products\n",
"\n",
"* Set up the input and output directory paths.\n",
"* Execute the cell."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"path_to_data_file = CAMPAIGN_DATA_FILE\n",
"path_to_calibration_file = '../pipelines/params/calibration_factors.yaml'\n",
"dataset_name = 'ACSM_TOFWARE/2024/ACSM_JFJ_2024_timeseries.txt/data_table'\n",
@@ -55,70 +103,103 @@
"#status = subprocess.run(command, capture_output=True, check=True)\n",
"#print(status.stdout.decode())\n",
"\n",
"run_apply_calibration_factors(path_to_data_file,path_to_calibration_file)\n"
"apply_calibration_factors(path_to_data_file,path_to_calibration_file)\n"
]
},
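A minimal sketch for inspecting the calibration parameters before applying them; the internal structure of `calibration_factors.yaml` is not shown in this commit, so the sketch only loads and prints the file:

```python
# Sketch: load and print the calibration factors (structure not documented here).
import yaml

with open('../pipelines/params/calibration_factors.yaml') as f:
    calibration = yaml.safe_load(f)

print(type(calibration).__name__)
print(calibration)
```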
{
"cell_type": "markdown",
"metadata": {},
"source": []
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from pipelines.steps.generate_flags import main as run_generate_flags\n",
"path_to_data_file = CAMPAIGN_DATA_FILE\n",
"dataset_name = 'ACSM_TOFWARE/2024/ACSM_JFJ_2024_meta.txt/data_table'\n",
"path_to_config_file = 'pipelines/params/validity_thresholds.yaml'\n",
"#command = ['python', 'pipelines/steps/compute_automated_flags.py', path_to_data_file, dataset_name, path_to_config_file]\n",
"#status = subprocess.run(command, capture_output=True, check=True)\n",
"#print(status.stdout.decode())\n",
"run_generate_flags(path_to_data_file, 'diagnostics')\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": []
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from pipelines.steps.generate_flags import main as run_generate_flags\n",
"path_to_data_file = CAMPAIGN_DATA_FILE\n",
"dataset_name = 'ACSM_TOFWARE/2024/ACSM_JFJ_2024_meta.txt/data_table'\n",
"path_to_config_file = 'pipelines/params/validity_thresholds.yaml'\n",
"#command = ['python', 'pipelines/steps/compute_automated_flags.py', path_to_data_file, dataset_name, path_to_config_file]\n",
"#status = subprocess.run(command, capture_output=True, check=True)\n",
"#print(status.stdout.decode())\n",
"run_generate_flags(path_to_data_file, 'species')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": []
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from pipelines.steps.prepare_ebas_submission import main as run_prepare_ebas_submission \n",
"## Step 3: Perform QC/QA Analysis\n",
"\n",
"* Generate automated flags based on validity thresholds for diagnostic channels.\n",
"* (Optional) Generate manual flags using the **Data Flagging App**, accessible at: \n",
"  [http://localhost:8050/](http://localhost:8050/)\n",
"* Execute the cell."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"dataset_name = 'ACSM_TOFWARE/2024/ACSM_JFJ_2024_meta.txt/data_table'\n",
"path_to_config_file = 'pipelines/params/validity_thresholds.yaml'\n",
"#command = ['python', 'pipelines/steps/compute_automated_flags.py', path_to_data_file, dataset_name, path_to_config_file]\n",
"#status = subprocess.run(command, capture_output=True, check=True)\n",
"#print(status.stdout.decode())\n",
"generate_flags(path_to_data_file, 'diagnostics')\n"
]
},
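For readers unfamiliar with threshold-based flagging, here is a conceptual illustration of the idea behind the diagnostics flags, not the actual `generate_flags` implementation; the column name and limits are hypothetical:

```python
# Conceptual illustration only: a validity window turns a diagnostic channel
# into a boolean flag. Column name and thresholds are made up for the example.
import pandas as pd

diagnostics = pd.DataFrame({"VaporizerTemp_C": [598.0, 601.2, 540.5, 600.3]})
lower, upper = 590.0, 610.0  # hypothetical validity thresholds

diagnostics["flag_vaporizer"] = ~diagnostics["VaporizerTemp_C"].between(lower, upper)
print(diagnostics)
```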
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## (Optional) Step 3.1: Inspect Previously Generated Flags for Correctness\n",
"\n",
"* Perform flag validation using the Jupyter Notebook workflow available at: \n",
"  [../notebooks/demo_visualize_diagnostic_flags_from_hdf5_file.ipynb](demo_visualize_diagnostic_flags_from_hdf5_file.ipynb)\n",
"* Follow the notebook steps to visually inspect previously generated flags."
]
},
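The linked notebook is not part of this diff; as a rough idea of the kind of visual check it performs, the sketch below plots a diagnostic series and highlights the samples marked as flagged (all names and values are hypothetical):

```python
# Sketch: highlight flagged samples in a diagnostic series for visual review.
import matplotlib.pyplot as plt
import pandas as pd

# hypothetical diagnostic series with a pre-computed boolean flag column
diagnostics = pd.DataFrame({
    "VaporizerTemp_C": [598.0, 601.2, 540.5, 600.3],
    "flag_vaporizer": [False, False, True, False],
})

fig, ax = plt.subplots(figsize=(8, 3))
ax.plot(diagnostics.index, diagnostics["VaporizerTemp_C"], label="VaporizerTemp_C")
bad = diagnostics["flag_vaporizer"]
ax.scatter(diagnostics.index[bad], diagnostics.loc[bad, "VaporizerTemp_C"],
           color="red", label="flagged")
ax.legend()
plt.show()
```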
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Step 4: Apply Diagnostic and Manual Flags to Variables of Interest\n",
"\n",
"* Generate flags for species based on previously collected QC/QA flags.\n",
"* Execute the cell."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"path_to_data_file = CAMPAIGN_DATA_FILE\n",
"dataset_name = 'ACSM_TOFWARE/2024/ACSM_JFJ_2024_meta.txt/data_table'\n",
"path_to_config_file = 'pipelines/params/validity_thresholds.yaml'\n",
"#command = ['python', 'pipelines/steps/compute_automated_flags.py', path_to_data_file, dataset_name, path_to_config_file]\n",
"#status = subprocess.run(command, capture_output=True, check=True)\n",
"#print(status.stdout.decode())\n",
"generate_flags(path_to_data_file, 'species')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Step 5: Generate Campaign Data in EBAS Format\n",
"\n",
"* Gather and set paths to the required data products produced in the previous steps.\n",
"* Execute the cell."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"PATH1=\"../data/collection_JFJ_2024_2025-03-14_2025-03-14/ACSM_TOFWARE_processed/2024/ACSM_JFJ_2024_timeseries_calibrated.csv\"\n",
"PATH2=\"../data/collection_JFJ_2024_2025-03-14_2025-03-14/ACSM_TOFWARE_processed/2024/ACSM_JFJ_2024_timeseries_calibrated_err.csv\"\n",
"PATH3=\"../data/collection_JFJ_2024_2025-03-14_2025-03-14/ACSM_TOFWARE_processed/2024/ACSM_JFJ_2024_timeseries_calibration_factors.csv\"\n",
"PATH4=\"../data/collection_JFJ_2024_2025-03-14_2025-03-14/ACSM_TOFWARE_flags/2024/ACSM_JFJ_2024_timeseries_flags.csv\"\n",
"month = 4\n",
"run_prepare_ebas_submission([PATH1,PATH2,PATH3], PATH4, month)"
"prepare_ebas_submission([PATH1,PATH2,PATH3], PATH4, month)"
]
},
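A minimal sanity check one might run before generating the EBAS file, assuming the four CSV products above exist; it is not part of `prepare_ebas_submission`:

```python
# Sketch: confirm the calibrated series, uncertainties, calibration factors
# and flags cover the same number of rows before building the EBAS submission.
import pandas as pd

for path in [PATH1, PATH2, PATH3, PATH4]:
    df = pd.read_csv(path)
    print(f"{len(df):7d} rows  {path}")
```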
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Step 6: Save Data Products to an HDF5 File\n",
"\n",
"* Gather and set paths to the required data products produced in the previous steps.\n",
"* Execute the cell.\n"
]
},
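The code cell for Step 6 is truncated in this diff. As a rough idea of what saving a data product into the campaign HDF5 file could look like, here is a sketch using `h5py`; the group layout and the choice of `PATH1` are assumptions, not the committed implementation:

```python
# Sketch only: append one processed CSV product to the campaign HDF5 file.
# The target group name mirrors the folder layout above but is an assumption.
import h5py
import pandas as pd

df = pd.read_csv(PATH1)  # calibrated time series from Step 5

with h5py.File(CAMPAIGN_DATA_FILE, "a") as f:
    grp = f.require_group("ACSM_TOFWARE_processed/2024")
    for col in df.columns:
        data = df[col].to_numpy()
        if data.dtype == object:          # HDF5 needs fixed-width bytes here
            data = data.astype("S")
        grp.create_dataset(col, data=data)
```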
{