Refactor step 1 in notebook to facilitate usage of campaign descriptors
@@ -1,182 +1,181 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Data integration workflow of experimental campaign\n",
"\n",
"In this notebook, we will go through a our data integration workflow. This involves the following steps:\n",
|
||||
"\n",
|
||||
"1. Specify data integration file through YAML configuration file.\n",
|
||||
"2. Create an integrated HDF5 file of experimental campaign from configuration file.\n",
|
||||
"3. Display the created HDF5 file using a treemap\n",
|
||||
"\n",
|
||||
"## Import libraries and modules\n",
|
||||
"\n",
|
||||
"* Excecute (or Run) the Cell below"
|
||||
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from nbutils import add_project_path_to_sys_path\n",
"\n",
"# Add project root to sys.path\n",
"add_project_path_to_sys_path()\n",
"\n",
"try:\n",
"    import visualization.hdf5_vis as hdf5_vis\n",
"    import pipelines.data_integration as data_integration\n",
"    print(\"Imports successful!\")\n",
"except ImportError as e:\n",
"    print(f\"Import error: {e}\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Step 1: Specify data integration task through YAML configuration file\n",
|
||||
"\n",
|
||||
"* Create your configuration file (i.e., *.yaml file) adhering to the example yaml file in the input folder.\n",
|
||||
"* Set up input directory and output directory paths and Excecute Cell.\n",
|
||||
"\n"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [],
"source": [
"#output_filename_path = 'output_files/unified_file_smog_chamber_2024-04-07_UTC-OFST_+0200_NG.h5'\n",
"yaml_config_file_path = '../input_files/data_integr_config_file_TBR.yaml'\n",
"\n",
"#path_to_input_directory = 'output_files/kinetic_flowtube_study_2022-01-31_LuciaI'\n",
"#path_to_hdf5_file = hdf5_lib.create_hdf5_file_from_filesystem_path(path_to_input_directory)\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Step 2: Create an integrated HDF5 file of experimental campaign.\n",
|
||||
"\n",
|
||||
"* Excecute Cell. Here we run the function `integrate_data_sources` with input argument as the previously specified YAML config file."
|
||||
]
|
||||
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"hdf5_file_path = data_integration.run_pipeline(yaml_config_file_path)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"hdf5_file_path"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Step 3: Display the integrated HDF5 file using a treemap\n",
"\n",
"* Execute the Cell. A visual representation of the integrated file in HTML format should be displayed and stored in the output directory."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"if isinstance(hdf5_file_path, list):\n",
"    for path_item in hdf5_file_path:\n",
"        hdf5_vis.display_group_hierarchy_on_a_treemap(path_item)\n",
"else:\n",
"    hdf5_vis.display_group_hierarchy_on_a_treemap(hdf5_file_path)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import src.hdf5_ops as h5de\n",
"h5de.serialize_metadata(hdf5_file_path[0], folder_depth=3, output_format='yaml')"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import src.hdf5_ops as h5de\n",
"print(hdf5_file_path)\n",
"DataOpsAPI = h5de.HDF5DataOpsManager(hdf5_file_path[0])\n",
"\n",
"DataOpsAPI.load_file_obj()\n",
"\n",
"#DataOpsAPI.reformat_datetime_column('ICAD/HONO/2022_11_22_Channel1_Data.dat/data_table',\n",
"#                                    'Start Date/Time (UTC)',\n",
"#                                    '%Y-%m-%d %H:%M:%S.%f', '%Y-%m-%d %H:%M:%S')\n",
"DataOpsAPI.extract_and_load_dataset_metadata()\n",
"df = DataOpsAPI.dataset_metadata_df\n",
"print(df.head())\n",
"\n",
"DataOpsAPI.unload_file_obj()\n"
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [],
"source": [
"DataOpsAPI.load_file_obj()\n",
"\n",
"DataOpsAPI.append_metadata('/', {'test_attr': 'this is a test value'})\n",
"\n",
"DataOpsAPI.unload_file_obj()"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "multiphase_chemistry_env",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.9"
}
},
"nbformat": 4,
"nbformat_minor": 4
}
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Data integration workflow of experimental campaign\n",
"\n",
"In this notebook, we will go through a our data integration workflow. This involves the following steps:\n",
|
||||
"\n",
|
||||
"1. Specify data integration file through YAML configuration file.\n",
|
||||
"2. Create an integrated HDF5 file of experimental campaign from configuration file.\n",
|
||||
"3. Display the created HDF5 file using a treemap\n",
|
||||
"\n",
|
||||
"## Import libraries and modules\n",
|
||||
"\n",
|
||||
"* Excecute (or Run) the Cell below"
|
||||
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from nbutils import add_project_path_to_sys_path\n",
"\n",
"# Add project root to sys.path\n",
"add_project_path_to_sys_path()\n",
"\n",
"try:\n",
"    import visualization.hdf5_vis as hdf5_vis\n",
"    import pipelines.data_integration as data_integration\n",
"    print(\"Imports successful!\")\n",
"except ImportError as e:\n",
"    print(f\"Import error: {e}\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Step 1: Specify data integration task through YAML configuration file\n",
|
||||
"\n",
|
||||
"* Create your configuration file (i.e., *.yaml file) adhering to the example yaml file in the input folder.\n",
|
||||
"* Set up input directory and output directory paths and Excecute Cell.\n",
|
||||
"\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"number, initials = 1, 'LI'  # Set as either 2, 'TBR' or 3, 'NG'\n",
"campaign_descriptor_path = f'../input_files/campaignDescriptor{number}_{initials}.yaml'\n",
"\n",
"print(campaign_descriptor_path)\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Step 2: Create an integrated HDF5 file of experimental campaign.\n",
|
||||
"\n",
|
||||
"* Excecute Cell. Here we run the function `integrate_data_sources` with input argument as the previously specified YAML config file."
|
||||
]
|
||||
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"hdf5_file_path = data_integration.run_pipeline(campaign_descriptor_path)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"hdf5_file_path"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Step 3: Display the integrated HDF5 file using a treemap\n",
"\n",
"* Execute the Cell. A visual representation of the integrated file in HTML format should be displayed and stored in the output directory."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"if isinstance(hdf5_file_path, list):\n",
"    for path_item in hdf5_file_path:\n",
"        hdf5_vis.display_group_hierarchy_on_a_treemap(path_item)\n",
"else:\n",
"    hdf5_vis.display_group_hierarchy_on_a_treemap(hdf5_file_path)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import src.hdf5_ops as h5de\n",
"h5de.serialize_metadata(hdf5_file_path[0], folder_depth=3, output_format='yaml')"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import src.hdf5_ops as h5de\n",
"print(hdf5_file_path)\n",
"DataOpsAPI = h5de.HDF5DataOpsManager(hdf5_file_path[0])\n",
"\n",
"DataOpsAPI.load_file_obj()\n",
"\n",
"#DataOpsAPI.reformat_datetime_column('ICAD/HONO/2022_11_22_Channel1_Data.dat/data_table',\n",
"#                                    'Start Date/Time (UTC)',\n",
"#                                    '%Y-%m-%d %H:%M:%S.%f', '%Y-%m-%d %H:%M:%S')\n",
"DataOpsAPI.extract_and_load_dataset_metadata()\n",
"df = DataOpsAPI.dataset_metadata_df\n",
"print(df.head())\n",
"\n",
"DataOpsAPI.unload_file_obj()\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"DataOpsAPI.load_file_obj()\n",
"\n",
"DataOpsAPI.append_metadata('/', {'test_attr': 'this is a test value'})\n",
"\n",
"DataOpsAPI.unload_file_obj()"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.10"
}
},
"nbformat": 4,
"nbformat_minor": 4
}